customizing-the-dialog_Overview.md
# Overview — Omniverse Kit 1.1.12 documentation

## Overview

The file_importer extension provides a standardized dialog for importing files. It is a wrapper around the `FilePickerDialog`, but with reasonable defaults for common settings, so it's a higher-level entry point to that interface. Users still have the ability to customize some parts, but we've boiled them down to just the essential ones.

Why you should use this extension:

- Present a consistent file import experience across the app.
- Customize only the essential parts while inheriting sensible defaults elsewhere.
- Reduce boilerplate code.
- Inherit future improvements.
- Checkpoints are fully supported if available on the server.

## Quickstart

You can pop up a dialog in just two steps. First, retrieve the extension.

```python
# Get the singleton extension.
file_importer = get_file_importer()
if not file_importer:
    return
```

Then, invoke its show_window method.

```python
file_importer.show_window(
    title="Import File",
    import_handler=self.import_handler,
    # filename_url="omniverse://ov-rc/NVIDIA/Samples/Marbles/Marbles_Assets.usd",
)
```

Note that the extension is a singleton, meaning there's only one instance of it throughout the app. Basically, we assume that you'd never open more than one instance of the dialog at any one time. The advantage is that we can channel any development through this single extension and all users will inherit the same changes.

## Customizing the Dialog

You can customize these parts of the dialog:

- Title - The title of the dialog.
- Collections - Which of the collections ["bookmarks", "omniverse", "my-computer"] to display.
- Filename Url - Url of the file to import.
- Postfix options - Show only files of these content types.
- Extension options - Show only files with these filename extensions.
- Import label - Label for the import button.
- Import handler - User-provided callback to handle the import process.

Note that these settings are applied when you show the window. Therefore, each time it's displayed, the dialog can be tailored to the use case.

## Filter files by type

The user has the option to filter which files get shown in the list view. One challenge of working in Omniverse is that everything is a USD file. An expected use case is to show only files of a particular content type. To facilitate this workflow, we suggest adding a postfix to the filename, e.g. "file.animation.usd". The file bar contains a dropdown that lists the default postfix labels, so you can filter by these. You have the option to override this list.

You can also filter by filename extension. By default, we provide the option to show only USD files.

If you override either of the lists above, then you'll also need to provide a filter handler. The handler is called to decide whether or not to display a given file. The default handler is shown below as an example.

```python
def default_filter_handler(filename: str, filter_postfix: str, filter_ext: str) -> bool:
    """
    Show only files whose names end with: *<postfix>.<ext>.

    Args:
        filename (str): The item's file name.
        filter_postfix (str): The postfix that the file name should match.
        filter_ext (str): The extension that the file name should match.

    Returns:
        True if the file should be shown in the dialog, False otherwise.
    """
    if not filename:
        return True
    # Show only files whose names end with: *<postfix>.<ext>
    if filter_ext:
        # Split a comma-separated string into a list:
        filter_exts = filter_ext.split(",") if isinstance(filter_ext, str) else filter_ext
        filter_exts = [x.replace(" ", "") for x in filter_exts]
        filter_exts = [x for x in filter_exts if x]
        # Check whether the file extension matches anything in the list:
        if not (
            "*.*" in filter_exts
            or any(filename.endswith(f.replace("*", "")) for f in filter_exts)
        ):
            # Match failed:
            return False
    if filter_postfix:
        # Strip the extension and check the postfix:
        filename = os.path.splitext(filename)[0]
        return filename.endswith(filter_postfix)
    return True
```

## Import options

A common need is to provide user options for the import process. You create the widget for accepting those inputs, then add it to the details pane of the dialog. Do this by subclassing `ImportOptionsDelegate` and overriding the methods `ImportOptionsDelegate._build_ui_impl()` and (optionally) `ImportOptionsDelegate._destroy_impl()`.

```python
class MyImportOptionsDelegate(ImportOptionsDelegate):
    def __init__(self):
        super().__init__(build_fn=self._build_ui_impl, destroy_fn=self._destroy_impl)
        self._widget = None

    def _build_ui_impl(self):
        self._widget = ui.Frame()
        with self._widget:
            with ui.VStack():
                with ui.HStack(height=24, spacing=2, style={"background_color": 0xFF23211F}):
                    ui.Label("Prim Path", width=0)
                    ui.StringField().model = ui.SimpleStringModel()
                ui.Spacer(height=8)

    def _destroy_impl(self, _):
        if self._widget:
            self._widget.destroy()
        self._widget = None
```

Then provide the delegate to the file picker for display.

```python
self._import_options = MyImportOptionsDelegate()
file_importer.add_import_options_frame("Import Options", self._import_options)
```

## Import handler

Provide a handler for when the Import button is clicked. The handler should expect a list of `selections` made from the UI.

```python
def import_handler(self, filename: str, dirname: str, selections: List[str] = []):
    # NOTE: Get user inputs from self._import_options, if needed.
    print(f"> Import '{filename}' from '{dirname}' or selected files '{selections}'")
```

## Demo app

A complete demo that includes the code snippets above is included with this extension.
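Outside of Kit, the filter logic from `default_filter_handler` above can be checked in plain Python. The following self-contained sketch restates the same logic (simplified, with the assumed default postfix/extension semantics) and exercises it on a few file names:

```python
import os


def filter_handler(filename: str, filter_postfix: str, filter_ext: str) -> bool:
    """Show only files whose names end with *<postfix>.<ext> (same logic as above)."""
    if not filename:
        return True
    if filter_ext:
        # Accept a comma-separated string of extension patterns, e.g. "*.usd, *.usda".
        exts = [e.strip() for e in filter_ext.split(",") if e.strip()]
        if "*.*" not in exts and not any(
            filename.endswith(e.replace("*", "")) for e in exts
        ):
            return False
    if filter_postfix:
        # Strip the extension, then require the remaining stem to end with the postfix.
        stem = os.path.splitext(filename)[0]
        return stem.endswith(filter_postfix)
    return True


# "file.animation.usd" passes an ".animation" postfix filter combined with "*.usd":
print(filter_handler("file.animation.usd", ".animation", "*.usd"))  # True
print(filter_handler("file.geometry.usd", ".animation", "*.usd"))   # False
print(filter_handler("notes.txt", "", "*.usd"))                     # False
```

Note how an empty filename is allowed through (so directory entries are not hidden), matching the behavior of the default handler.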
customizing-the-prompt_Overview.md
# Overview

The widget extension provides a simple prompt dialog. Users have the ability to customize its buttons.

## Quickstart

```python
prompt = Prompt("title", "message to user", on_closed_fn=lambda: print("prompt close"))
prompt.show()
```

## Customizing the Prompt

You can customize these parts of the Prompt:

- title: Text appearing in the titlebar of the window.
- text: Text of the question being posed to the user.
- ok_button_text: Text for the first button.
- cancel_button_text: Text for the last button.
- middle_button_text: Text for the middle button.
- middle_2_button_text: Text for the second middle button.
- ok_button_fn: Function executed when the first button is pressed.
- cancel_button_fn: Function executed when the last button is pressed.
- middle_button_fn: Function executed when the middle button is pressed.
- middle_2_button_fn: Function executed when the second middle button is pressed.
- modal: True if the window is modal, blocking other UI until an answer is received.
- on_closed_fn: Function executed when the window is closed without hitting a button.
- shortcut_keys: Whether the prompt can be confirmed or hidden with shortcut keys such as Enter or ESC.
- width: The specified width.
- height: The specified height.

## Example

```python
from omni.kit.widget.prompt import Prompt

folder_exist_popup = None

def on_confirm():
    print("overwrite the file")

def on_cancel():
    global folder_exist_popup
    folder_exist_popup.hide()
    folder_exist_popup = None

folder_exist_popup = Prompt(
    title="Overwrite",
    text="The file already exists, are you sure you want to overwrite it?",
    ok_button_text="Overwrite",
    cancel_button_text="Cancel",
    ok_button_fn=on_confirm,
    cancel_button_fn=on_cancel,
)
folder_exist_popup.show()
```
damage-application_structcarb_1_1blast_1_1_blast.md
# carb::blast::Blast

Defined in [Blast.h](#file-blast-h)

## struct carb::blast::Blast

Plugin interface for the omni.blast extension.

### Destructible Authoring Commands

**combinePrims**

```cpp
const char* (*combinePrims)(const char **paths, size_t numPaths,
                            float defaultContactThreshold,
                            const DamageParameters *damageParameters,
                            float defaultMaxContactImpulse);
```

Main entry point to combine existing prims into a single destructible.

- **Param paths** [in] Full USD paths to prims that should be combined.
- **Param numPaths** [in] How many prims are in the paths array.
- **Param defaultContactThreshold** [in] How hard the prim needs to be hit to register damage during simulation.
- **Param damageParameters** [in] See DamageParameters description.
- **Param defaultMaxContactImpulse** [in] How much force can be used to push other prims away during a collision. For kinematic prims only; used to allow heavy objects to continue moving through brittle destructible prims.
- **Return** true iff the prims were combined successfully.

**fracturePrims**

```cpp
const char* (*fracturePrims)(const char **paths, size_t numPaths,
                             const char *defaultInteriorMaterial,
                             uint32_t numVoronoiSites,
                             float defaultContactThreshold,
                             DamageParameters *damageParameters,
                             float defaultMaxContactImpulse,
                             float interiorUvScale);
```

Main entry point to fracture an existing prim.

- **Param paths** [in] Full USD path(s) to prim(s) that should be fractured. They must all be part of the same destructible if there is more than one.
- **Param numPaths** [in] How many prims are in the paths array.
- **Param defaultInteriorMaterial** [in] Material to set on newly created interior faces. (Ignored when re-fracturing and an existing interior material is found.)
- **Param numVoronoiSites** [in] How many pieces to split the prim into.
- **Param defaultContactThreshold** [in] How hard the prim needs to be hit to register damage during simulation.
- **Param damageParameters** [in] See [DamageParameters](structcarb_1_1blast_1_1_damage_parameters.html#structcarb_1_1blast_1_1_damage_parameters) description.
- **Param defaultMaxContactImpulse** [in] How much force can be used to push other prims away during a collision. For kinematic prims only; used to allow heavy objects to continue moving through brittle destructible prims.
- **Param interiorUvScale** [in] Scale to apply to the UV frame when mapping to interior face vertices.
- **Return** path to the new prim if the source prim was fractured successfully, nullptr otherwise.

**Set random seed**

Set the random number generator seed for fracture operations.

- **Param seed** [in] The new seed.

**Reset Blast data**

Reset the [Blast](#structcarb_1_1blast_1_1_blast) data (partial or full hierarchy) starting at the given path. The destructible will be rebuilt with only appropriate data remaining.

- **Param path** [in] The path to a chunk, instance, or base destructible prim.
- **Return** true iff the operation could be performed on the prim at the given path.

### Function: createExternalAttachment

Modify a blast asset stored in the destructible at the given path so that support chunks which touch static geometry are bound to the world. All previous world bonds will be removed.

Returns true if the destructible's NvBlastAsset was modified, but note this is not "if and only if": if world bonds are removed and replaced with the exact same world bonds (e.g. the blast mesh was not moved since the last time this function was called), this function will still return true. Note also that if path == NULL, this function always returns true.

- **Param path** [in] The USD path of the blast container.
- **Param defaultMaxContactImpulse** [in] Controls how much force physics can use to stop bodies from penetrating.
- **Param relativePadding** [in] A relative amount to grow chunk bounds when calculating world attachment.
- **Return** true if the destructible's NvBlastAsset was modified (or if path == NULL).

### Function: removeExternalAttachment

Remove all external bonds from the given blast asset.

- **Param path** [in] The USD path of the blast container.
- **Return** true if the destructible's NvBlastAsset was modified (or if path == NULL).

### Function: recalculateBondAreas

Recalculates the areas of bonds. This may be used when a destructible is scaled.

- **Param path** [in] Path to the chunk, instance, or base destructible prim.
- **Return** true iff the operation was successful.

### Function: selectChildren

Finds all children of the chunks in the given paths, and sets kit's selection set to the paths of those children.

- **Param paths** [in] Full USD path(s) to chunks.
- **Param numPaths** [in] How many paths are in the paths array.
- **Return** true iff the operation was successful.

### Function: selectParent

Finds all parents of the chunks in the given paths, and sets kit's selection set to the paths of those parents.

- **Param paths** [in] Full USD path(s) to chunks.
- **Param numPaths** [in] How many paths are in the paths array.
- **Return** true iff the operation was successful.

### Function: selectSource

Finds all source meshes for chunks in the given paths, and sets kit's selection set to the paths of those meshes.

- **Param paths** [in] Full USD path(s) to chunks.
- **Param numPaths** [in] How many paths are in the paths array.
- **Return** true iff the operation was successful.

### Function: setInteriorMaterial

Sets the material for the interior facets of the chunks at the given paths.

- **Param paths** [in] Full USD path(s) to chunks.
- **Param numPaths** [in] How many paths are in the paths array.
- **Param interiorMaterial** [in] The material to set for the interior facets.

**Interior material query**

- **Param paths** [in] Full USD path(s) to chunks.
- **Param numPaths** [in] How many paths are in the paths array.
- **Return** the material path if all meshes found at the given paths have the same interior material. If more than one interior material is found among the meshes, the empty string ("") is returned. If no interior material is found, nullptr is returned.

**Recalculate interior UVs**

Recalculates UV coordinates for the interior facets of chunk meshes based upon the UV scale factor given. If the path given is a chunk, UVs will be recalculated for the chunk's meshes. If the path is an instance or base prim, all chunk meshes will have their interior facets' UVs recalculated.

- **Param path** [in] Path to the chunk, instance, or base destructible prim.
- **Param interiorUvScale** [in] The scale to use to calculate UV coordinates. A value of 1 will cause the texture to map to a region in space roughly the size of the whole destructible's largest width.
- **Return** true iff the operation was successful.

### Function: createDestructibleInstance

```cpp
void createDestructibleInstance(const char *path, const DamageParameters *damageParameters,
                                float defaultContactThreshold, float defaultMaxContactImpulse);
```

Creates a destructible instance with default values from the given destructible base.

- **Param path** [in] Path to the destructible base to instance.
- **Param damageParameters** [in] The damage characteristics to assign to the instance (see DamageParameters).
- **Param defaultContactThreshold** [in] Rigid body parameter to apply to actors generated by the instance. The minimum impulse required for a rigid body to generate a contact event, needed for impact damage.
- **Param defaultMaxContactImpulse** [in] Rigid body parameter to apply to actors generated by the instance. The maximum impulse that a contact constraint on a kinematic rigid body can impart on a colliding body.

### Function: setSimulationParams

```cpp
void setSimulationParams(int32_t maxNewActorsPerFrame);
```

Sets the maximum number of actors which will be generated by destruction each simulation frame.

- **Param maxNewActorsPerFrame** [in] The maximum number of actors generated per frame.

### Function: createDamageEvent

```cpp
void createDamageEvent(const char *hitPrimPath, DamageEvent *damageEvents, size_t numDamageEvents);
```

Create a destruction event during simulation.

- **Param hitPrimPath** [in] The full path to the prim to be damaged (may be a blast actor prim or its collision shape).
- **Param damageEvents** [in] An array of `DamageEvent` structs describing the damage to be applied.
- **Param numDamageEvents** [in] The size of the damageEvents array.

### Function: setExplodeViewRadius

```cpp
void setExplodeViewRadius(const char *path, float radius);
```

Set the cached explode view radius for the destructible prim associated with the given path. The prim must have DestructionSchemaDestructibleInstAPI applied. The instance will be rendered with its chunks pushed apart by the radius value.

- **Param path** [in] Full USD path to a destructible instance.
- **Param radius** [in] The distance to move apart the instance's rendered chunks.

**Explode view radius query**

Gives the cached explode view radius for the destructible instances associated with the given paths, if the cached value for all instances is the same.

- **Param paths** [in] Array of USD paths to destructible instances.
- **Param numPaths** [in] The length of the paths array.
- **Return** the cached explode view radius for all valid destructible instances at the given paths, if that value is the same for all instances. If more than one radius is found, this function returns -1.0f. If no valid instances are found, this function returns 0.0f.

**Maximum chunk depth**

Calculate the maximum depth for all chunks in the destructible prims associated with the given paths.

- **Param paths** [in] Array of USD paths to destructible prims.
- **Param numPaths** [in] The length of the paths array.
- **Return** the maximum chunk depth for all destructibles associated with the given paths. Returns 0 if no destructibles are found.

### Function: getViewDepth

Calculates what the view depth should be, factoring in the internal override if set.

- **Param paths** [in] Array of USD paths to destructible prims.
- **Param numPaths** [in] The length of the paths array.
- **Return** what the view depth should be.

### Function: setViewDepth

Set the view depth for explode view functionality.

- **Param paths** [in] Array of USD paths to destructible prims.
- **Param numPaths** [in] The length of the paths array.
- **Param depth** [in] Either a string representation of the numerical depth value, or "Leaves" to view leaf chunks.

### Function: setDebugVisualizationInfo

Set the debug visualization mode and type. If mode != debugVisNone, an anonymous USD layer is created which overrides the render meshes for blast objects which are being visualized.

- **Param mode** [in] Supported modes: "debugVisNone", "debugVisSelected", "debugVisAll".
- **Param type** [in] Supported types: "debugVisSupportGraph", "debugVisMaxStressGraph", "debugVisCompressionGraph", "debugVisTensionGraph", "debugVisShearGraph", "debugVisBondPatches".
- **Return** true iff a valid mode is selected.

### Debug Damage Functions

**Set debug damage parameters**

Set parameters for the debug damage tool in kit. This is applied using Shift + B + (Left Mouse). A ray is cast from the camera position through the screen point of the mouse cursor and intersected with scene geometry. The intersection point is used to find nearby destructibles to damage.

- **Param amount** [in] The base damage to be applied to each destructible, as an acceleration in m/s^2.
- **Param impulse** [in] An impulse to apply to rigid bodies within the given radius, in kg*m/s. (This applies to non-destructible rigid bodies too.)
- **Param radius** [in] The distance in meters from the ray hit point to search for rigid bodies to affect with this function.

**Apply debug damage**

Apply debug damage at the position given, in the direction given. The damage parameters set by setDebugDamageParams will be used.

- **Param worldPosition** [in] The world position at which to apply debug damage.
- **Param worldDirection** [in] The world direction of the applied damage.

### Notice Handler Functions

These can be called at any time to enable or disable notice handler monitoring. When enabled, use BlastUsdMonitorNoticeEvents to catch unbuffered Usd/Sdf commands. It will be automatically cleaned up on system shutdown if enabled.

- blastUsdEnableNoticeHandlerMonitor()
- blastUsdDisableNoticeHandlerMonitor()

### Destructible Path Utilities

These functions find destructible base or instance prims from any associated prim path.

**getDestructibleBasePath**

```cpp
const char* getDestructibleBasePath(const char* path);
```

- **Param path** [in] Any path associated with a destructible base prim.
- **Return** the destructible prim's path if found, or nullptr otherwise.

**getDestructibleInstancePath**

```cpp
const char* getDestructibleInstancePath(const char* path);
```

- **Param path** [in] Any path associated with a destructible instance prim.
- **Return** the destructible prim's path if found, or nullptr otherwise.

### Blast SDK Cache

```cpp
void blastCachePushBinaryDataToUSD();
```

This function pushes the Blast SDK data that is used during simulation back to USD so it can be saved and then later restored in the same state. This is also the state that will be restored to when the simulation stops.

### Blast Stress

```cpp
bool blastStressUpdateSettings(const char* path, bool gravityEnabled, bool rotationEnabled,
                               float residualForceMultiplier, const StressSolverSettings& settings);
```

This function modifies settings used to drive stress calculations during simulation.

- **Param path** [in] Any path associated with a destructible instance prim.
- **Param gravityEnabled** [in] Controls whether gravity should be applied to the stress simulation of the destructible instance.
- **Param rotationEnabled** [in] Controls whether rotational acceleration should be applied to the stress simulation of the destructible instance.
- **Param residualForceMultiplier** [in] Multiplies the residual forces on bodies after connecting bonds break.
- **Param settings** [in] Values used to control the stress solver.
- **Return** true if stress settings were updated, false otherwise.
data-collection-faq.md
# Omniverse Data Collection & Use FAQ

NVIDIA Omniverse Enterprise is a simple-to-deploy, end-to-end collaboration and true-to-reality simulation platform that fundamentally transforms complex design workflows for organizations of any scale. In order to improve the product, Omniverse software collects usage and performance data. When an enterprise manages its Omniverse deployment via the IT-managed launcher, the IT admin is responsible for configuring the data collection setting. If consent is provided, data is collected in an aggregate manner at the enterprise account level. Individual user data is completely anonymized.

## Frequently Asked Questions

**Q: What data is being collected and how is it used?**

A: Omniverse collects usage data when you install and start interacting with our platform technologies. The data we collect, and how we use it, are as follows:

- Installation and configuration details, such as the version of the operating system and the applications installed. This information allows us to recognize usage trends and patterns.
- Identifiers, such as your unique NVIDIA Enterprise Account ID (org-name) and Session ID, which allow us to recognize software usage trends and patterns.
- Hardware details, such as CPU, GPU, and monitor information. This information allows us to optimize settings in order to provide the best performance.
- Product session and feature usage. This information allows us to understand the user journey and product interaction to further enhance workflows.
- Error and crash logs. This information allows us to improve performance and stability for troubleshooting and diagnostic purposes.

**Q: Does NVIDIA collect personal information such as email ID, name, etc.?**

A: When an enterprise manages its Omniverse deployment via the IT-managed launcher, the IT admin is responsible for configuring the data collection setting. If consent is provided, data is collected in an aggregate manner at the enterprise account level. Individual user data is completely anonymized.
Q: How can I change my data collection setting - opt-in to data collection?

A: NVIDIA provides full flexibility for an enterprise to opt-in to data collection. In the .config folder there is a privacy.toml file in which the data collection setting can be set to "true". For detailed instructions, review the appropriate installation guide:

- Installation Guide for Windows
- Installation Guide for Linux

Q: How can I change my data collection setting - opt-out of data collection?

A: NVIDIA provides full flexibility for an enterprise to opt-out of data collection. In the .config folder there is a privacy.toml file in which the data collection setting can be set to "false". For detailed instructions, review the appropriate installation guide:

- Installation Guide for Windows
- Installation Guide for Linux

Q: How can I request the data Omniverse Enterprise has collected?

A: If you are an Enterprise customer, please file a support ticket on the NVIDIA Enterprise Portal. If any data was collected, NVIDIA will provide all data collected for your organization within 30 days.

Q: How will Omniverse collect data in a scenario where my enterprise is firewalled with no Internet access?

A: No data will be collected in a firewalled scenario.
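The opt-in and opt-out answers above both refer to the privacy.toml file. As an illustrative sketch only — the exact key names are assumptions and may differ between Omniverse versions, so consult the installation guides above for the authoritative format — such a file might look like:

```toml
# .config/privacy.toml — illustrative only; key names are assumptions.
[privacy]
performance = true      # set to "false" to opt out of data collection
personalization = true
usage = true
```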
data-files_Overview.md
# Overview

## Overview

In order to effectively test OmniGraph, access is needed to nodes and scripts that are not necessary for the graph to operate correctly. To keep those unnecessary files out of the runtime extensions, yet still have nodes and files explicitly for testing, all of that functionality has been broken out into this extension.

## Dependencies

As the purpose of this extension is to provide testing facilities for all of OmniGraph, it will have load dependencies on all of the `omni.graph.*` extensions. If any new ones are added they should be added to the dependencies in the file `config/extension.toml`.

## Data Files

Three types of data files are accessed in this extension:

1. Generic data files, created in other extensions for use by the user (e.g. compound node definitions)
2. Example files, created to illustrate how to use certain nodes but not intended for general use
3. Test files, used only for the purpose of loading to test certain features

The `data/` subdirectories in this extension contain the last of those three. The other files live in the lowest level extension in which they are legal (e.g. if they contain a node from `omni.graph.nodes` then they will live in that extension). As this extension has dependencies on all of the OmniGraph extensions it will have access to all of their data files as well.

## Node Files

Most nodes will come from other extensions. Some nodes are created explicitly for testing purposes. These will appear in this extension and should not be used for any other purpose.

### Import Example

This simple example shows how the test files from the `omni.graph.examples.python` extension were imported and enabled in this extension.
The first step was to move the required files into the directory tree:

```
omni.graph.test/
└── python/
    └── tests/
        ├── test_omnigraph_simple.py
        └── data/
            ├── TestEventTrigger.usda
            └── TestExecutionConnections.usda
```

**Note:** The two .usda files contain only nodes from the `omni.graph.examples.python` extension and are solely used for test purposes. That is why they could be moved into the extension’s test directory.

Next, the standard automatic test detection file was added to `omni.graph.test/python/tests/__init__.py`:

```python
"""There is no public API to this module."""

__all__ = []

scan_for_test_modules = True
"""The presence of this object causes the test runner to automatically scan the directory for unit test cases"""
```

Finally, `config/extension.toml` had additions made to inform it of the dependency on the new extension:

```toml
[package]
version = "0.79.1"
title = "OmniGraph Regression Testing"
category = "Graph"
readme = "docs/README.md"
changelog = "docs/CHANGELOG.md"
description = "Contains test scripts and files used to test the OmniGraph extensions where the tests cannot live in a single extension."
keywords = ["kit", "omnigraph", "tests"]
python.import_mode = "ParallelThread"
preview_image = "data/preview.png"
icon = "data/icon.svg"
writeTarget.kit = true
support_level = "Enterprise"

# Main module for the Python interface
[[python.module]]
name = "omni.graph.test"

[[native.plugin]]
path = "bin/*.plugin"
recursive = false

# Watch the .ogn files for hot reloading (only works for Python files)
[fswatcher.patterns]
include = ["*.ogn", "*.py"]
exclude = ["Ogn*Database.py"]

# The bare minimum of dependencies required for bringing up the extension
[dependencies]
"omni.graph.core" = { version = "2.177.1" }
"omni.graph" = { version = "1.139.0" }
"omni.graph.tools" = { version = "1.77.0" }

[[test]]
timeout = 1800

# Other extensions that need to load in order for this one to work.
# This list deliberately omits omni.graph and omni.graph.tools to ensure that extensions that rely on recursive
# dependencies on OmniGraph work properly.
dependencies = [
    "omni.kit.pipapi",
    "omni.kit.ui_test",
    "omni.kit.usd.layers",
    "omni.graph.examples.cpp",
    "omni.graph.examples.python",
    "omni.graph.nodes",
    "omni.graph.tutorials",
    "omni.graph.action",
    "omni.graph.scriptnode",
    "omni.inspect",
    "omni.usd",
    "omni.kit.stage_template.core",
    "omni.kit.primitive.mesh",
]

stdoutFailPatterns.exclude = [
    # Exclude carb.events leak that only shows up locally
    "*[Error] [carb.events.plugin]*PooledAllocator*",
    # Exclude messages which say they should be ignored
    "*Ignore this error/warning*",
    "*Types: unknown and rel*", # OM-86183
]

pythonTests.unreliable = [
    "*test_change_pipeline_stage*", # OM-66115
    "*test_read_prim_attribute_nodes_in_non_instanced_lazy_graphs*", # OM-120024
    "*test_action_compounds*", # OM-120545
    "*test_recursive_graph_execution*", # OM-120609
    "*test_read_prim_attribute_nodes_in_instanced_lazy_graphs*", # OM-120675
    "*test_dirty_push_time_change*", # OM-120536
    "*test_read_time_nodes_in_non_instanced_lazy_graphs*", # OM-120536
    "*test_read_time_nodes_in_instanced_lazy_graphs*", # OM-120536
    "*test_evaluator_type_changed_from_usd*", # OM-120536
]

args = [
    "--no-window"
]

[documentation]
pages = [
    "docs/Overview.md",
    "docs/CHANGELOG.md",
]
```
data-types.md
# Data Types — Omniverse Kit 1.140.0 documentation

## Data Types

The Python module `omni.graph.core.types` contains definitions for Python type annotations that correspond to all of the data types used by OmniGraph. The annotations can be used to check that data extracted from the OmniGraph Python APIs for retrieving attribute values has the correct types.

This table shows the relationships between the attribute type as you might see it in a .ogn file, the corresponding Python type annotation to use in function and variable declarations, and the underlying data type that is returned from Python APIs that retrieve values from attributes with those corresponding OGN data types.

| .ogn Type Definition | Type Annotation | Python Data Type |
|----------------------|-----------------|------------------|
| any | omni.graph.core.types.any | any |
| bool | omni.graph.core.types.bool | bool |
| bool[] | omni.graph.core.types.boolarray | numpy.ndarray(shape=(N,), dtype=numpy.bool) |
| bundle | omni.graph.core.types.bundle | omni.graph.core.BundleContents |
| colord[3] | omni.graph.core.types.color3d | numpy.ndarray(shape=(3,), dtype=numpy.float64) |
| colord[3][] | omni.graph.core.types.color3darray | numpy.ndarray(shape=(N,3), dtype=numpy.float64) |
| colord[4] | omni.graph.core.types.color4d | numpy.ndarray(shape=(4,), dtype=numpy.float64) |
| colord[4][] | omni.graph.core.types.color4darray | numpy.ndarray(shape=(N,4), dtype=numpy.float64) |
| colorf[3] | omni.graph.core.types.color3f | numpy.ndarray(shape=(3,), dtype=numpy.float32) |
| colorf[3][] | omni.graph.core.types.color3farray | numpy.ndarray(shape=(N,3), dtype=numpy.float32) |
| colorf[4] | omni.graph.core.types.color4f | numpy.ndarray(shape=(4,), dtype=numpy.float32) |
| colorf[4][] | omni.graph.core.types.color4farray | numpy.ndarray(shape=(N,4), dtype=numpy.float32) |
| colorh[3] | omni.graph.core.types.color3h | numpy.ndarray(shape=(3,), dtype=numpy.float16) |
| colorh[3][] | omni.graph.core.types.color3harray | numpy.ndarray(shape=(N,3), dtype=numpy.float16) |
| colorh[4] | omni.graph.core.types.color4h | numpy.ndarray(shape=(4,), dtype=numpy.float16) |
| colorh[4][] | omni.graph.core.types.color4harray | numpy.ndarray(shape=(N,4), dtype=numpy.float16) |
| double | omni.graph.core.types.double | float |
| double[] | omni.graph.core.types.doublearray | numpy.ndarray(shape=(N,), dtype=numpy.float64) |
| double[2] | omni.graph.core.types.double2 | numpy.ndarray(shape=(2,), dtype=numpy.float64) |
| double[2][] | omni.graph.core.types.double2array | numpy.ndarray(shape=(N,2), dtype=numpy.float64) |
| double[3] | omni.graph.core.types.double3 | numpy.ndarray(shape=(3,), dtype=numpy.float64) |
| double[3][] | omni.graph.core.types.double3array | numpy.ndarray(shape=(N,3), dtype=numpy.float64) |
| double[4] | omni.graph.core.types.double4 | numpy.ndarray(shape=(4,), dtype=numpy.float64) |
| double[4][] | omni.graph.core.types.double4array | numpy.ndarray(shape=(N,4), dtype=numpy.float64) |
| execution | omni.graph.core.types.execution | int |
| float | omni.graph.core.types.float | float |
| float[] | omni.graph.core.types.floatarray | numpy.ndarray(shape=(N,), dtype=numpy.float32) |
| float[2] | omni.graph.core.types.float2 | numpy.ndarray(shape=(2,), dtype=numpy.float32) |
| float[2][] | omni.graph.core.types.float2array | numpy.ndarray(shape=(N,2), dtype=numpy.float32) |
| float[3] | omni.graph.core.types.float3 | numpy.ndarray(shape=(3,), dtype=numpy.float32) |
| float[3][] | omni.graph.core.types.float3array | numpy.ndarray(shape=(N,3), dtype=numpy.float32) |
| float[4] | omni.graph.core.types.float4 | numpy.ndarray(shape=(4,), dtype=numpy.float32) |
| float[4][] | omni.graph.core.types.float4array | numpy.ndarray(shape=(N,4), dtype=numpy.float32) |
| frame[4] | omni.graph.core.types.frame4d | numpy.ndarray(shape=(4,4), dtype=numpy.float64) |
| frame[4][] | omni.graph.core.types.frame4darray | numpy.ndarray(shape=(N,4,4), dtype=numpy.float64) |
| half | omni.graph.core.types.half | float |
| half[] | omni.graph.core.types.halfarray | numpy.ndarray(shape=(N,), dtype=numpy.float16) |
| half[2] | omni.graph.core.types.half2 | numpy.ndarray(shape=(2,), dtype=numpy.float16) |
| half[2][] | omni.graph.core.types.half2array | numpy.ndarray(shape=(N,2), dtype=numpy.float16) |
| half[3] | omni.graph.core.types.half3 | numpy.ndarray(shape=(3,), dtype=numpy.float16) |
| half[3][] | omni.graph.core.types.half3array | numpy.ndarray(shape=(N,3), dtype=numpy.float16) |
| half[4] | omni.graph.core.types.half4 | numpy.ndarray(shape=(4,), dtype=numpy.float16) |
| half[4][] | omni.graph.core.types.half4array | numpy.ndarray(shape=(N,4), dtype=numpy.float16) |
| int | omni.graph.core.types.int | int |
| int[] | omni.graph.core.types.intarray | numpy.ndarray(shape=(N,), dtype=numpy.int32) |
| int[2] | omni.graph.core.types.int2 | numpy.ndarray(shape=(2,), dtype=numpy.int32) |
| int[2][] | omni.graph.core.types.int2array | numpy.ndarray(shape=(N,2), dtype=numpy.int32) |
| int[3] | omni.graph.core.types.int3 | numpy.ndarray(shape=(3,), dtype=numpy.int32) |
| int[3][] | omni.graph.core.types.int3array | numpy.ndarray(shape=(N,3), dtype=numpy.int32) |
| int[4] | omni.graph.core.types.int4 | numpy.ndarray(shape=(4,), dtype=numpy.int32) |
| int[4][] | omni.graph.core.types.int4array | numpy.ndarray(shape=(N,4), dtype=numpy.int32) |
| int64 | omni.graph.core.types.int64 | int |
| int64[] | omni.graph.core.types.int64array | numpy.ndarray(shape=(N,), dtype=numpy.int64) |
| matrixd[2] | omni.graph.core.types.matrix2d | numpy.ndarray(shape=(2,2), dtype=numpy.float64) |
| matrixd[2][] | omni.graph.core.types.matrix2darray | numpy.ndarray(shape=(N,2,2), dtype=numpy.float64) |
| matrixd[3] | omni.graph.core.types.matrix3d | numpy.ndarray(shape=(3,3), dtype=numpy.float64) |
| matrixd[3][] | omni.graph.core.types.matrix3darray | numpy.ndarray(shape=(N,3,3), dtype=numpy.float64) |
| matrixd[4] | omni.graph.core.types.matrix4d | numpy.ndarray(shape=(4,4), dtype=numpy.float64) |
| matrixd[4][] | omni.graph.core.types.matrix4darray | numpy.ndarray(shape=(N,4,4), dtype=numpy.float64) |
| normald[3] | omni.graph.core.types.normal3d | numpy.ndarray(shape=(3,), dtype=numpy.float64) |
| normald[3][] | omni.graph.core.types.normal3darray | numpy.ndarray(shape=(N,3), dtype=numpy.float64) |
| normalf[3] | omni.graph.core.types.normal3f | numpy.ndarray(shape=(3,), dtype=numpy.float32) |
| normalf[3][] | omni.graph.core.types.normal3farray | numpy.ndarray(shape=(N,3), dtype=numpy.float32) |
| normalh[3] | omni.graph.core.types.normal3h | numpy.ndarray(shape=(3,), dtype=numpy.float16) |
| normalh[3][] | omni.graph.core.types.normal3harray | numpy.ndarray(shape=(N,3), dtype=numpy.float16) |
| objectId | omni.graph.core.types.objectid | int |
| objectId[] | omni.graph.core.types.objectidarray | numpy.ndarray(shape=(N,), dtype=numpy.uint64) |
| path | omni.graph.core.types.path | list[usdrt::SdfPath] |
| pointd[3] | omni.graph.core.types.point3d | numpy.ndarray(shape=(3,), dtype=numpy.float64) |
| pointd[3][] | omni.graph.core.types.point3darray | numpy.ndarray(shape=(N,3), dtype=numpy.float64) |
| pointf[3] | omni.graph.core.types.point3f | numpy.ndarray(shape=(3,), dtype=numpy.float32) |
| pointf[3][] | omni.graph.core.types.point3farray | numpy.ndarray(shape=(N,3), dtype=numpy.float32) |
| pointh[3] | omni.graph.core.types.point3h | numpy.ndarray(shape=(3,), dtype=numpy.float16) |
| pointh[3][] | omni.graph.core.types.point3harray | numpy.ndarray(shape=(N,3), dtype=numpy.float16) |
| quatd[4] | omni.graph.core.types.quatd | numpy.ndarray(shape=(4,), dtype=numpy.float64) |
| quatd[4][] | omni.graph.core.types.quatdarray | numpy.ndarray(shape=(N,4), dtype=numpy.float64) |
| quatf[4] | omni.graph.core.types.quatf | numpy.ndarray(shape=(4,), dtype=numpy.float32) |
| quatf[4][] | omni.graph.core.types.quatfarray | numpy.ndarray(shape=(N,4), dtype=numpy.float32) |
| quath[4] | omni.graph.core.types.quath | numpy.ndarray(shape=(4,), dtype=numpy.float16) |
| quath[4][] | omni.graph.core.types.quatharray | numpy.ndarray(shape=(N,4), dtype=numpy.float16) |
| string | omni.graph.core.types.string | str |
| target | omni.graph.core.types.target | list[usdrt::SdfPath] |
| texcoordd[2] | omni.graph.core.types.texcoord2d | numpy.ndarray(shape=(2,), dtype=numpy.float64) |
| texcoordd[2][] | omni.graph.core.types.texcoord2darray | numpy.ndarray(shape=(N,2), dtype=numpy.float64) |
| texcoordd[3] | omni.graph.core.types.texcoord3d | numpy.ndarray(shape=(3,), dtype=numpy.float64) |
| texcoordd[3][] | omni.graph.core.types.texcoord3darray | numpy.ndarray(shape=(N,3), dtype=numpy.float64) |
| texcoordf[2] | omni.graph.core.types.texcoord2f | numpy.ndarray(shape=(2,), dtype=numpy.float32) |
| texcoordf[2][] | omni.graph.core.types.texcoord2farray | numpy.ndarray(shape=(N,2), dtype=numpy.float32) |
| texcoordf[3] | omni.graph.core.types.texcoord3f | numpy.ndarray(shape=(3,), dtype=numpy.float32) |
| texcoordf[3][] | omni.graph.core.types.texcoord3farray | numpy.ndarray(shape=(N,3), dtype=numpy.float32) |
| texcoordh[2] | omni.graph.core.types.texcoord2h | numpy.ndarray(shape=(2,), dtype=numpy.float16) |
| texcoordh[2][] | omni.graph.core.types.texcoord2harray | numpy.ndarray(shape=(N,2), dtype=numpy.float16) |
| texcoordh[3] | omni.graph.core.types.texcoord3h | numpy.ndarray(shape=(3,), dtype=numpy.float16) |
| texcoordh[3][] | omni.graph.core.types.texcoord3harray | numpy.ndarray(shape=(N,3), dtype=numpy.float16) |
| timecode | omni.graph.core.types.timecode | float |
| timecode[] | omni.graph.core.types.timecodearray | numpy.ndarray(shape=(N,), dtype=numpy.float64) |
| token | omni.graph.core.types.token | str |
| token[] | omni.graph.core.types.tokenarray | numpy.ndarray(shape=(N,), dtype=numpy.str) |
| uchar | omni.graph.core.types.uchar | int |
| uchar[] | omni.graph.core.types.uchararray | numpy.ndarray(shape=(N,), dtype=numpy.uint8) |
| uint | omni.graph.core.types.uint | int |
| uint[] | omni.graph.core.types.uintarray | numpy.ndarray(shape=(N,), dtype=numpy.uint32) |
| uint64 | omni.graph.core.types.uint64 | int |
| uint64[] | omni.graph.core.types.uint64array | numpy.ndarray(shape=(N,), dtype=numpy.uint64) |
| vectord[3] | omni.graph.core.types.vector3d | numpy.ndarray(shape=(3,), dtype=numpy.float64) |
| vectord[3][] | omni.graph.core.types.vector3darray | numpy.ndarray(shape=(N,3), dtype=numpy.float64) |
| vectorf[3] | omni.graph.core.types.vector3f | numpy.ndarray(shape=(3,), dtype=numpy.float32) |
| vectorf[3][] | omni.graph.core.types.vector3farray | numpy.ndarray(shape=(N,3), dtype=numpy.float32) |
| vectorh[3] | omni.graph.core.types.vector3h | numpy.ndarray(shape=(3,), dtype=numpy.float16) |
| vectorh[3][] | omni.graph.core.types.vector3harray | numpy.ndarray(shape=(N,3), dtype=numpy.float16) |

Note: The above table represents the mapping between .ogn type definitions, their corresponding Python type annotations, and the actual Python data types used. This is crucial for ensuring the correct handling and interpretation of data within the OmniGraph system.
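The array shapes and dtypes listed in the table can be reproduced with plain numpy, with no omni.graph import required (N below stands for the element count of an array attribute):

```python
import numpy as np

# Pure-numpy illustration of the value shapes from the table above.
N = 2
float3_array = np.zeros((N, 3), dtype=np.float32)  # float[3][] with N elements
matrix4d = np.eye(4, dtype=np.float64)             # matrixd[4] / frame[4]
quath = np.zeros(4, dtype=np.float16)              # quath[4]

print(float3_array.shape, float3_array.dtype)  # (2, 3) float32
```

Values retrieved from attributes of the corresponding OGN types are expected to have exactly these shapes and dtypes.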
debug.md
# Debug a Build Recognizing the critical role of debugging in development, Omniverse offers tools and automation to streamline and simplify debugging workflows. In combination with third-party tools, Omniverse accelerates bug and anomaly detection, aiming for steady increases in project stability throughout the development process. Omniverse provides utilities for debugging via extensions both for use within a given Application or in conjunction with third-party tools such as VSCode. - **Console Extension**: Allows the user to see log output and input commands directly from the Application interface. - **Visual Studio Code Link Extension**: Enables the connection of an Omniverse Application to VS Code’s python debugger. **Additional Learning:** - Video Tutorial - How to Debug Your Kit Extension with Omniverse Code App. - Advanced Project Template Tutorial - Step-by-step instructions for debugging within the context of an Application development tutorial.
Debugging.md
# Debugging

When things are not behaving as expected, it is good to start by understanding the topology of the execution graph. As described in the [Graph Concepts](GraphConcepts.html#ef-graph-concepts) section, the execution graph is built of many nested graphs. The framework allows you to visualize a flattened version of this graph.

```c++
// Write the entire execution graph topology to a stream in GraphViz format.
std::ostringstream stream;
writeFlattenedAsGraphviz(test.g, stream);
```

The graph utilities will traverse the entire topology of the graph and write it out to the given stream in GraphViz format. Below is an interactive example of an execution graph. The svg file was generated using an online editor.

The output is flattened, which means that all instantiated NodeGraphDef are expanded in place. We use a small number of colors to help visually distinguish nodes that are in the same topology. It also helps identify when two expanded node graph definitions are references of the same definition in memory.
declarative-syntax_Overview.md
# Overview

## Extension : omni.ui.scene-1.10.3

## Documentation Generated : May 08, 2024

### Overview

SceneUI helps build great-looking 3d manipulators and 3d helpers with as little code as possible. It provides shapes and controls for declaring the UI in 3D space.

### Declarative syntax

SceneUI uses declarative syntax, so it’s possible to state what the manipulator should do. For example, you can write that you want an item list consisting of an image and lines. The code is simpler and easier to read than ever before.

```python
# Conventional imports for SceneUI and its color utilities.
from omni.ui import scene as sc
from omni.ui import color as cl

scene_view = sc.SceneView(
    aspect_ratio_policy=sc.AspectRatioPolicy.PRESERVE_ASPECT_FIT,
    height=200
)
with scene_view.scene:
    sc.Line([-0.5, -0.5, 0], [-0.5, 0.5, 0], color=cl.red)
    sc.Line([-0.5, -0.5, 0], [0.5, -0.5, 0], color=cl.green)
    sc.Arc(0.5, color=cl.documentation_nvidia)
```

This declarative style applies to complex concepts like interaction with the mouse pointer. A gesture can be easily added to almost any item with a few lines of code. The system handles all of the steps needed to compute the intersection with the mouse pointer and depth sorting if you click many items at runtime. With this easy input handling, your manipulator is ready very quickly.
default-prim-only-mode_Overview.md
# Overview

**Extension** : omni.kit.usd.collect-2.2.21

**Documentation Generated** : May 08, 2024

## Overview

`omni.kit.usd.collect` provides the core API for collecting a USD file with all of its dependencies that are scattered around different locations.

```python
from omni.kit.usd.collect import Collector

...

collector = Collector(usd_path, target_folder)
success, target_root_usd = await collector.collect()
```

Here it instantiates an `omni.kit.usd.collect.Collector` to collect the USD file from `usd_path` to the target location `target_folder` with default parameters. You can check `omni.kit.usd.collect.Collector.__init__()` for more customizations to instantiate a Collector.

## Differences between Flat Collection and Non-Flat Collection

The Collector supports organizing the final collection in two different folder structures: flat or non-flat. By default, the Collector collects all assets with the non-flat structure, in which collected files are organized in the same folder structure as the source files. In flat mode, the folder structure will not be kept and all dependencies will be put into specified folders. You can also specify the policy for how textures are grouped (see `omni.kit.usd.collect.FlatCollectionTextureOptions` for more details). Currently there are 3 available options:

| Options | Description |
|---------|-------------|
| Group by MDL | Textures will be grouped by their parent MDL file name. |
| Group by USD | Textures will be grouped by their parent USD file name. |
| Flat | All textures will be collected under the same hierarchy under the “textures” folder. Note that there is a potential danger of textures overwriting each other if they have the same names but belong to different assets/MDLs. |

## Default Prim Only Mode

The user can also specify the option to enable “Default Prim Only” mode. In this mode, the Collector will prune USD files according to the given policy (see the `Keyword Args` section of `omni.kit.usd.collect.Collector.__init__()`).
All prims except the default prim will be removed to speed up collection. If a USD file has no default prim set, the file is left unchanged.

REMINDER: This is an advanced mode that may remove valid data from your stage. If you have references with an explicit prim specified, and that prim is not the default prim of the referenced file, applying this mode to all USD layers may create stale references, since non-default prims will be removed.

## Limitations

There is no USDZ support currently until Kit resolves the MDL loading issue inside USDZ files.
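To make the three texture-grouping options concrete, here is a small pure-Python sketch of where a texture might land under each policy. This is not the Collector implementation — the function name and policy strings below are hypothetical and only mirror the options table; see `FlatCollectionTextureOptions` for the real enumeration.

```python
from pathlib import PurePosixPath

def flat_texture_target(texture_path: str, parent_path: str, policy: str) -> str:
    """Illustrative only: compute a destination path for a collected texture.

    policy is one of "group_by_mdl", "group_by_usd", or "flat" (hypothetical
    names chosen for this sketch).
    """
    name = PurePosixPath(texture_path).name
    if policy in ("group_by_mdl", "group_by_usd"):
        # Group under a folder named after the parent MDL/USD file.
        return f"textures/{PurePosixPath(parent_path).stem}/{name}"
    if policy == "flat":
        # All textures share one folder; name collisions can overwrite files.
        return f"textures/{name}"
    raise ValueError(f"unknown policy: {policy}")

print(flat_texture_target("assets/wood/diffuse.png", "materials/Wood.mdl", "group_by_mdl"))
# textures/Wood/diffuse.png
```

The flat branch makes the overwrite hazard from the table obvious: two textures named `diffuse.png` from different assets map to the same destination.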
defining-commands_Overview.md
# Overview

An example C++ extension that can be used as a reference/template for creating new extensions. Demonstrates how to create commands in C++ that can then be executed from either C++ or Python.

See the omni.kit.commands extension for extensive documentation about commands themselves.

# C++ Usage Examples

## Defining Commands

```c++
using namespace omni::kit::commands;

class ExampleCppCommand : public Command
{
public:
    static carb::ObjectPtr<ICommand> create(const char* extensionId,
                                            const char* commandName,
                                            const carb::dictionary::Item* kwargs)
    {
        return carb::stealObject<ICommand>(new ExampleCppCommand(extensionId, commandName, kwargs));
    }

    static void populateKeywordArgs(carb::dictionary::Item* defaultKwargs,
                                    carb::dictionary::Item* optionalKwargs,
                                    carb::dictionary::Item* requiredKwargs)
    {
        if (carb::dictionary::IDictionary* iDictionary = carb::getCachedInterface<carb::dictionary::IDictionary>())
        {
            iDictionary->makeAtPath(defaultKwargs, "x", 9);
            iDictionary->makeAtPath(defaultKwargs, "y", -1);
        }
    }

    ExampleCppCommand(const char* extensionId, const char* commandName, const carb::dictionary::Item* kwargs)
        : Command(extensionId, commandName)
    {
        if (carb::dictionary::IDictionary* iDictionary = carb::getCachedInterface<carb::dictionary::IDictionary>())
        {
            m_x = iDictionary->get<int32_t>(kwargs, "x");
            m_y = iDictionary->get<int32_t>(kwargs, "y");
        }
    }

    void doCommand() override
    {
        printf("Executing command '%s' with params 'x=%d' and 'y=%d'.\n", getName(), m_x, m_y);
    }

    void undoCommand() override
    {
        printf("Undoing command '%s' with params 'x=%d' and 'y=%d'.\n", getName(), m_x, m_y);
    }

private:
    int32_t m_x = 0;
    int32_t m_y = 0;
};
```

## Registering Commands

```c++
auto commandBridge = carb::getCachedInterface<omni::kit::commands::ICommandBridge>();
commandBridge->registerCommand(
    "omni.example.cpp.commands", "ExampleCppCommand",
    ExampleCppCommand::create, ExampleCppCommand::populateKeywordArgs);
// Note that the command name (in this case "ExampleCppCommand") is arbitrary
// and does not need to match the C++ class.
```

## Executing Commands

```c++
auto commandBridge = carb::getCachedInterface<omni::kit::commands::ICommandBridge>();

// Create the kwargs dictionary.
auto iDictionary = carb::getCachedInterface<carb::dictionary::IDictionary>();
carb::dictionary::Item* kwargs = iDictionary->createItem(nullptr, "", carb::dictionary::ItemType::eDictionary);
iDictionary->makeIntAtPath(kwargs, "x", 7);
iDictionary->makeIntAtPath(kwargs, "y", 9);

// Execute the command using its name...
commandBridge->executeCommand("ExampleCppCommand", kwargs);

// or without the 'Command' postfix just like Python commands...
commandBridge->executeCommand("ExampleCpp", kwargs);

// or fully qualified if needed to disambiguate (works with or without the 'Command' postfix).
commandBridge->executeCommand("omni.example.cpp.commands", "ExampleCppCommand", kwargs);

// The C++ command can be executed from Python exactly like any Python command,
// and we can also execute Python commands from C++ in the same ways as above:
commandBridge->executeCommand("SomePythonCommand", kwargs); // etc.

// Destroy the kwargs dictionary once we are done with it.
iDictionary->destroyItem(kwargs);
```

## Undo/Redo/Repeat Commands

```c++
auto commandBridge = carb::getCachedInterface<omni::kit::commands::ICommandBridge>();

// It doesn't matter whether the command stack contains Python commands, C++ commands,
// or a mix of both, and the same stands for when undoing/redoing commands from Python.
commandBridge->undoCommand();
commandBridge->redoCommand();
commandBridge->repeatCommand();
```

## Deregistering Commands

```c++
auto commandBridge = carb::getCachedInterface<omni::kit::commands::ICommandBridge>();
commandBridge->deregisterCommand("omni.example.cpp.commands", "ExampleCppCommand");
```
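The do/undo/redo semantics shown above follow the classic command pattern. As a conceptual illustration only — plain Python, not the Kit API — the bridge's undo/redo bookkeeping behaves roughly like this:

```python
class Command:
    """Minimal stand-in for a do/undo command (illustrative, not Kit's API)."""
    def do(self): raise NotImplementedError
    def undo(self): raise NotImplementedError

class AddToList(Command):
    """Example command: append a value to a list, undo by removing it."""
    def __init__(self, target, value):
        self.target, self.value = target, value
    def do(self):
        self.target.append(self.value)
    def undo(self):
        self.target.pop()

class CommandStack:
    """Sketch of undo/redo bookkeeping similar in spirit to ICommandBridge."""
    def __init__(self):
        self._undo, self._redo = [], []
    def execute(self, cmd):
        cmd.do()
        self._undo.append(cmd)
        self._redo.clear()  # new work invalidates the redo history
    def undo(self):
        if self._undo:
            cmd = self._undo.pop()
            cmd.undo()
            self._redo.append(cmd)
    def redo(self):
        if self._redo:
            cmd = self._redo.pop()
            cmd.do()
            self._undo.append(cmd)

items = []
stack = CommandStack()
stack.execute(AddToList(items, 1))
stack.execute(AddToList(items, 2))
stack.undo()
print(items)  # [1]
stack.redo()
print(items)  # [1, 2]
```

Because C++ and Python commands share one stack in Kit, undo/redo works transparently across both languages, just as the mixed-stack comment above notes.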
defining-custom-actions_Overview.md
# Overview — Kit Extension Template C++ 1.0.1 documentation

## Overview

An example C++ extension that can be used as a reference/template for creating new extensions. Demonstrates how to create actions in C++ that can then be executed from either C++ or Python.

See the omni.kit.actions.core extension for extensive documentation about actions themselves.

## C++ Usage Examples

### Defining Custom Actions

```c++
using namespace omni::kit::actions::core;

class ExampleCustomAction : public Action
{
public:
    static carb::ObjectPtr<IAction> create(const char* extensionId, const char* actionId, const MetaData* metaData)
    {
        return carb::stealObject<IAction>(new ExampleCustomAction(extensionId, actionId, metaData));
    }

    ExampleCustomAction(const char* extensionId, const char* actionId, const MetaData* metaData)
        : Action(extensionId, actionId, metaData), m_executionCount(0)
    {
    }

    carb::variant::Variant execute(const carb::variant::Variant& args = {},
                                   const carb::dictionary::Item* kwargs = nullptr) override
    {
        return carb::variant::Variant();
    }

private:
    uint32_t m_executionCount;
};
```

### Registering Actions

```c++
// Register an action object that was created earlier.
carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>()->registerAction(exampleLambdaAction);

// Example of creating and registering (at the same time) a lambda action from C++.
carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>()->registerAction(
    "omni.example.cpp.actions", "example_lambda_action_id",
    [](const carb::variant::Variant& args = {}, const carb::dictionary::Item* kwargs = nullptr) {
        printf("Executing example_lambda_action_id.\n");
        return carb::variant::Variant();
    },
    "Example Lambda Action Display Name", "Example Lambda Action Description.");
```

## Discovering Actions

```c++
auto registry = carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>();

// Retrieve an action that has been registered using the registering extension id and the action id.
carb::ObjectPtr<IAction> action = registry->getAction("omni.example.cpp.actions", "example_custom_action_id");

// Retrieve all actions that have been registered by a specific extension id.
std::vector<carb::ObjectPtr<IAction>> actions = registry->getAllActionsForExtension("example");

// Retrieve all actions that have been registered by any extension.
std::vector<carb::ObjectPtr<IAction>> allActions = registry->getAllActions();
```

## Deregistering Actions

```c++
auto actionRegistry = carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>();

// Deregister an action directly...
actionRegistry->deregisterAction(exampleCustomAction);

// or using the registering extension id and the action id...
actionRegistry->deregisterAction("omni.example.cpp.actions", "example_custom_action_id");

// or deregister all actions that were registered by an extension.
actionRegistry->deregisterAllActionsForExtension("omni.example.cpp.actions");
```

## Executing Actions

```c++
auto actionRegistry = carb::getCachedInterface<omni::kit::actions::core::IActionRegistry>();

// Execute an action after retrieving it from the action registry.
auto action = actionRegistry->getAction("omni.example.cpp.actions", "example_custom_action_id");
action->execute();

// Execute an action indirectly (retrieves it internally).
actionRegistry->executeAction("omni.example.cpp.actions", "example_custom_action_id");

// Execute an action that was stored previously.
exampleCustomAction->execute();
```

Note: All of the above will find any actions that have been registered from either Python or C++, and you can interact with them without needing to know anything about where they were registered.
defining-pybind-module_Overview.md
# Overview

An example C++ extension that can be used as a reference/template for creating new extensions. Demonstrates how to reflect C++ code using pybind11 so that it can be called from Python code.

The IExampleBoundInterface located in `include/omni/example/cpp/pybind/IExampleBoundInterface.h` is:

- Implemented in `plugins/omni.example.cpp.pybind/ExamplePybindExtension.cpp`.
- Reflected in `bindings/python/omni.example.cpp.pybind/ExamplePybindBindings.cpp`.
- Accessed from Python in `python/tests/test_pybind_example.py` via `python/impl/example_pybind_extension.py`.

# C++ Usage Examples

## Defining Pybind Module

```c++
PYBIND11_MODULE(_example_pybind_bindings, m)
{
    using namespace omni::example::cpp::pybind;

    m.doc() = "pybind11 omni.example.cpp.pybind bindings";

    carb::defineInterfaceClass<IExampleBoundInterface>(
        m, "IExampleBoundInterface", "acquire_bound_interface", "release_bound_interface")
        .def("register_bound_object", &IExampleBoundInterface::registerBoundObject,
             R"(
             Register a bound object.

             Args:
                 object: The bound object to register.
             )",
             py::arg("object"))
        .def("deregister_bound_object", &IExampleBoundInterface::deregisterBoundObject,
             R"(
             Deregister a bound object.

             Args:
                 object: The bound object to deregister.
             )",
             py::arg("object"))
        ;
}
```

```python
def find_bound_object(id: str) -> IExampleBoundObject:
    """
    Find a bound object.

    Args:
        id: Id of the bound object.

    Return:
        The bound object if it exists, an empty object otherwise.
    """
```

```python
class IExampleBoundObject:  # held as carb::ObjectPtr<IExampleBoundObject> on the C++ side
    @property
    def id(self) -> str:
        """
        Get the id of this bound object.

        Return:
            The id of this bound object.
        """
```

```python
class PythonBoundObject(IExampleBoundObject):  # held as carb::ObjectPtr<PythonBoundObject> on the C++ side
    def __init__(self, id: str):
        """
        Create a bound object.

        Args:
            id: Id of the bound object.

        Return:
            The bound object that was created.
""" self.m_memberInt = 0 self.m_memberBool = False @property def property_int(self) -> int: """ Int property bound directly. """ @property def property_bool(self) -> bool: """ Bool property bound directly. """ @property def property_string(self) -> str: """ String property bound using accessors. """ def multiply_int_property(self, value_to_multiply: int): """ Bound function that accepts an argument. Args: value_to_multiply: The value to multiply by. """ def toggle_bool_property(self) -> bool: """ Bound function that returns a value. Return: The toggled bool value. """ def append_string_property(self, value_to_append: str): """ Bound function that appends to a string property. Args: value_to_append: The value to append to the string property. """ ``` Bound function that accepts an argument and returns a value. Args: value_to_append: The value to append. Return: The new string value.
DefinitionCreation.md
# Definition Creation

This is a practitioner’s guide to using the Execution Framework. Before continuing, it is recommended you first review the Execution Framework Overview along with basic topics such as Graph Concepts, Pass Concepts, and Execution Concepts.

Definitions in the Execution Framework define the work each node represents. Definitions come in two forms: opaque definitions (implemented by NodeDef) and definitions described by a graph (i.e. NodeGraphDef). Each is critical to EF’s operation. This article covers how to create both.

## Customizing NodeDef

NodeDef encapsulates opaque user code that the Execution Framework cannot examine or optimize. Probably the best example of how we can customize NodeDef is by looking at how NodeDefLambda is implemented. The implementation is simple: at creation, the object is given a function pointer, which it stores. When INodeDef::execute() is called, the stored function is invoked.

### Implementation of NodeDefLambda

```cpp
class NodeDefLambda : public NodeDef
{
public:
    //! Templated constructor for wrapper class
    //!
    //! The given definition name must not be @c nullptr.
    //!
    //! The given invokable object must not be @c nullptr.
    //!
    //! The returned object will not be @c nullptr.
    //!
    //! @tparam Fn Invokable type (e.g. function, functor, lambda, etc) with the signature `Status(ExecutionTask&)`.
    //!
    //! @param definitionName Definition name is considered as a token that transformation passes can register against.
    //!
    //! @param fn Execute function body. Signature should be `Status(ExecutionTask&)`.
    //!
    //! @param schedInfo Fixed at runtime scheduling constraint.
    template <typename Fn>
    static omni::core::ObjectPtr<NodeDefLambda> create(const carb::cpp::string_view& definitionName,
                                                       Fn&& fn,
                                                       SchedulingInfo schedInfo) noexcept
    {
        OMNI_GRAPH_EXEC_ASSERT(definitionName.data());
        return omni::core::steal(new NodeDefLambda(definitionName, std::forward<Fn>(fn), schedInfo));
    }

protected:
    //! Templated and protected constructor for wrapper class.
    //!
    //! Use the `create` factory method to construct objects of this class.
    template <typename Fn>
    NodeDefLambda(const carb::cpp::string_view& definitionName, Fn&& fn, SchedulingInfo schedInfo) noexcept
        : NodeDef(definitionName), m_fn(std::move(fn)), m_schedulingInfo(schedInfo)
    {
    }

    //! @copydoc omni::graph::exec::unstable::IDef::execute_abi
    Status execute_abi(ExecutionTask* info) noexcept override
    {
        OMNI_GRAPH_EXEC_ASSERT(info);
        return m_fn(*info);
    }

    //! @copydoc omni::graph::exec::unstable::IDef::getSchedulingInfo_abi
    SchedulingInfo getSchedulingInfo_abi(const ExecutionTask* info) noexcept override
    {
        return m_schedulingInfo;
    }

private:
    std::function<Status(ExecutionTask&)> m_fn; //!< Execute function body
    SchedulingInfo m_schedulingInfo;            //!< Scheduling constraint
};
```

Definition of a behavior tree:

```cpp
//                              ┌─────────────────┐
//                              │                 │
//                              │    SEQUENCE     │
//                              │                 │
//                              └────────┬────────┘
//                                       │
//              ┌────────────────────────┴──────────────┬─────────────────────────┐
//              │                                       │                         │
//     ┌────────▼────────┐                    ┌─────────▼────────┐       ┌────────▼────────┐
//     │                 │                    │ ┌──────────────┐ │       │                 │
//     │    SELECTOR     │                    │ │BtRunAndWinDef│ │       │    CELEBRATE    │
//     │                 │                    │ └──────────────┘ │       │                 │
//     └────────┬────────┘                    └──────────────────┘       └─────────────────┘
//              │
//     ┌────────┴─────────────────────┐
//     │                              │
// ┌───▼─────────────┐       ┌────────▼────────┐
// │                 │       │                 │
// │ READY FOR RACE  │       │  TRAIN TO RUN   │
// │                 │       │                 │
// └─────────────────┘       └─────────────────┘

//! Nested behavior tree leveraging composability of EF to add training behavior to BtRunAndWinDef definition.
//!
//! We added a @p CELEBRATE node which together with the behavior @p SEQUENCE will require proper state propagation
//! from nested @p BtRunAndWinDef definition.
class BtTrainRunAndWinDef : public NodeGraphDef
{
public:
    //! Factory method
    static omni::core::ObjectPtr<BtTrainRunAndWinDef> create(IGraphBuilder* builder)
    {
        auto def = omni::core::steal(new BtTrainRunAndWinDef(builder->getGraph(), "tests.def.BtTrainRunAndWinDef"));
        def->build(builder);
        return def;
    }

    // The definition owns its nodes
    using NodePtr = omni::core::ObjectPtr<Node>;
    NodePtr sequenceNode;
    NodePtr selectorNode;
    NodePtr readyNode;
    NodePtr trainNode;
    NodePtr runAndWinNode;
    NodePtr celebrateNode;

protected:
    //! Constructor
    BtTrainRunAndWinDef(IGraph* graph, const carb::cpp::string_view& definitionName) noexcept;

private:
    //! Connect the topology of already allocated nodes and populate definition of @p runAndWinNode node
    void build(IGraphBuilder* parentBuilder) noexcept
    {
        // Create the graph seen above using the builder. Only builder objects can modify the topology.
        auto builder{ GraphBuilder::create(parentBuilder, this) };

        builder->connect(getRoot(), sequenceNode);
        builder->connect(sequenceNode, selectorNode);
        builder->connect(sequenceNode, runAndWinNode);
        builder->connect(sequenceNode, celebrateNode);
        builder->connect(selectorNode, readyNode);
        builder->connect(selectorNode, trainNode);

        builder->setNodeGraphDef(runAndWinNode, BtRunAndWinDef::create(builder.get()));
    }
};
```

## Customizing NodeGraphDef

When we do not know the nodes at compile time, we are still responsible for maintaining the nodes’ lifetime. We are also encouraged to reuse nodes between topology changes. In the example below, we create a definition that builds a graph where each node represents a runner. The number of runners is not known at compile time and is specified at runtime as an argument to the `build()` method.
During `build()`, each node is stored in a `std::vector` and a definition is attached to the node to define each runner’s behavior.

```cpp
//          ┌────────────┐
//          │            │
//   ┌─────►│  Runner_1  │
//   │      │            │
//   │      └────────────┘
//   │      ┌────────────┐
//   │      │            │
//   ├─────►│    ...     │
//   │      │            │
//   │      └────────────┘
//   │      ┌────────────┐
//   │      │            │
//   └─────►│  Runner_N  │
//          │            │
//          └────────────┘

//! Definition for instantiating a given number of runners.
//!
//! Each runner shares the same @p NodeGraphDef provided as a template parameter RunnerDef.
//! Definition can be repopulated with reuse of nodes and definitions.
template <typename RunnerDef>
class BtRunnersDef : public NodeGraphDef
{
    using ThisClass = BtRunnersDef<RunnerDef>;

public:
    //! Factory method
    static omni::core::ObjectPtr<ThisClass> create(IGraph* graph)
    {
        return omni::core::steal(new ThisClass(graph, "tests.def.BtRunnersDef"));
    }

    //! Construct the graph topology by reusing as much as possible already allocated runners.
    //! All runners will share the same behavior tree instance.
    void build(IGraphBuilder* builder, uint32_t runnersCount)
    {
        if (runnersCount < m_all.size())
        {
            m_all.resize(runnersCount);
        }
        else if (runnersCount > m_all.size())
        {
            m_all.reserve(runnersCount);

            NodeGraphDefPtr def;
            if (m_all.empty())
            {
                def = RunnerDef::create(builder);
            }
            else
            {
                def = omni::core::borrow(m_all.front()->getNodeGraphDef());
            }

            for (uint64_t i = m_all.size(); i < runnersCount; i++)
            {
                std::string newNodeName = carb::fmt::format("Runner_{}", i);
                auto newNode = Node::create(getTopology(), def, newNodeName);
                m_all.emplace_back(newNode);
            }
        }

        INode* rootNode = getRoot();
        for (uint64_t i = 0; i < m_all.size(); i++)
        {
            builder->connect(rootNode, m_all[i].get());
        }
    }

    //! Acquire runner state in given execution context at given index. If it doesn't exist, a default one will be allocated.
    BtActorState* getRunnerState(IExecutionContext* context, uint32_t index);

protected:
    //! Initialize each runner state when topology changes. Make goals for each runner different.
    void initializeState_abi(ExecutionTask* rootTask) noexcept override;

    //! Constructor
    BtRunnersDef(IGraph* graph, const carb::cpp::string_view& definitionName) noexcept
        : NodeGraphDef(graph, BtRunnersExecutor::create, definitionName)
    {
    }

private:
    using NodePtr = omni::core::ObjectPtr<Node>;

    std::vector<NodePtr> m_all; //!< Holds all runners used in the current topology.
};
```

## Next Steps

Readers are encouraged to examine `kit/source/extensions/omni.graph.exec/tests.cpp/graphs/TestBehaviorTree.cpp` to see the full implementation of behavior trees using EF.

Now that you have seen how to create definitions, make sure to consult the [Pass Creation](#ef-pass-creation) guide. If you haven’t yet created a module for extending EF, consult the [Plugin Creation](#ef-plugin-creation) guide.
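The core idea of this article — opaque `NodeDef`s whose work the framework cannot inspect, composed inside `NodeGraphDef`s that may nest other graph definitions — can be illustrated with a small stand-alone sketch. This is plain Python with invented names, not the `omni.graph.exec` API; it only models the execution order:

```python
# Stand-alone illustration of opaque definitions vs. graph definitions.
# All names are invented for this sketch; this is not the EF API.

class NodeDef:
    """Opaque definition: the framework cannot look inside execute()."""
    def __init__(self, fn):
        self._fn = fn

    def execute(self, log):
        self._fn(log)


class _Node:
    """A node owned by a graph definition; its work is its definition."""
    def __init__(self, definition):
        self.definition = definition


class NodeGraphDef:
    """Definition described by a graph: children execute after their parent."""
    def __init__(self):
        self._root_children = []
        self._children = {}

    def add_node(self, definition, parent=None):
        node = _Node(definition)
        self._children[node] = []
        (self._root_children if parent is None else self._children[parent]).append(node)
        return node

    def execute(self, log):
        # Depth-first traversal from the (implicit) root; a node's definition
        # may itself be a NodeGraphDef, which recurses transparently.
        stack = list(reversed(self._root_children))
        while stack:
            node = stack.pop()
            node.definition.execute(log)
            stack.extend(reversed(self._children[node]))


log = []
inner = NodeGraphDef()
inner.add_node(NodeDef(lambda l: l.append("train")))

outer = NodeGraphDef()
root = outer.add_node(NodeDef(lambda l: l.append("ready")))
outer.add_node(inner, parent=root)  # nested graph definition, as with BtRunAndWinDef above
outer.execute(log)
```

The nesting mirrors `setNodeGraphDef(runAndWinNode, BtRunAndWinDef::create(...))`: from the outer graph's perspective, the nested definition is just another node to execute.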
definitions.md
# Definitions - **exact coverage**: the condition that a walk from any leaf chunk to its ancestor root chunk will always encounter exactly one support chunk - **family**: the memory allocated when an asset is instanced into its initial set of actors, and all descendant actors formed from fracturing the initial set, recursively - **root chunk**: a chunk with no parent - **leaf chunk**: a chunk with no children - **lower-support chunk**: a chunk that is either a support or subsupport chunk - **subsupport chunk**: a chunk that is descended from a support chunk - **supersupport chunk**: a chunk that is the ancestor of a support chunk - **support chunk**: a chunk that is represented in the support graph - **upper-support chunk**: a chunk that is either a support or supersupport chunk
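The *exact coverage* condition above can be checked mechanically: walk from each leaf chunk up to its root chunk and count the support chunks encountered along the way. A small illustrative sketch, using an invented chunk representation (a parent map plus a support set), not the actual SDK types:

```python
# Check the "exact coverage" condition: every walk from a leaf chunk to its
# ancestor root chunk must encounter exactly one support chunk.

def has_exact_coverage(parents, support):
    """parents: dict mapping chunk -> parent chunk (root chunks map to None).
    support: set of chunks represented in the support graph."""
    non_leaves = {p for p in parents.values() if p is not None}
    leaves = [c for c in parents if c not in non_leaves]  # chunks with no children
    for leaf in leaves:
        count, chunk = 0, leaf
        while chunk is not None:  # walk leaf -> root
            if chunk in support:
                count += 1
            chunk = parents[chunk]
        if count != 1:
            return False
    return True


# A tiny hierarchy: "root" has children "a" and "b"; "a" has children "a1", "a2".
parents = {"root": None, "a": "root", "b": "root", "a1": "a", "a2": "a"}

ok = has_exact_coverage(parents, support={"a", "b"})        # every leaf walk hits one support chunk
bad = has_exact_coverage(parents, support={"root", "a"})    # a1's walk hits two support chunks
```

In the first case `a` covers the leaves `a1`/`a2` and `b` covers itself; in the second, the walk from `a1` passes through both `a` and `root`, violating exact coverage.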
demo-app_Overview.md
# Overview — Omniverse Kit 2.0.24 documentation

## Overview

A set of simple Popup Dialogs for passing user inputs. All of these dialogs subclass from the base PopupDialog, which provides OK and Cancel buttons. The user is able to re-label these buttons as well as associate callbacks that execute upon being clicked.

Why you should use the dialogs in this extension:

- Avoid duplicating UI code that you then have to maintain.
- Re-use dialogs that have a standard look and feel to keep a consistent experience across the app.
- Inherit future improvements.

### Form Dialog

A form dialog can display a mixed set of input types.

Code for above:

```python
field_defs = [
    FormDialog.FieldDef("string", "String: ", ui.StringField, "default"),
    FormDialog.FieldDef("int", "Integer: ", ui.IntField, 1),
    FormDialog.FieldDef("float", "Float: ", ui.FloatField, 2.0),
    FormDialog.FieldDef(
        "tuple", "Tuple: ", lambda **kwargs: ui.MultiFloatField(column_count=3, h_spacing=2, **kwargs), None
    ),
    FormDialog.FieldDef("slider", "Slider: ", lambda **kwargs: ui.FloatSlider(min=0, max=10, **kwargs), 3.5),
    FormDialog.FieldDef("bool", "Boolean: ", ui.CheckBox, True),
]
dialog = FormDialog(
    title="Form Dialog",
    message="Please enter values for the following fields:",
    field_defs=field_defs,
    ok_handler=lambda dialog: print(f"Form accepted: '{dialog.get_values()}'"),
)
```

### Input Dialog

An input dialog allows one input field.

Code for above:

```python
dialog = InputDialog(
    title="String Input",
    message="Please enter a string value:",
    pre_label="LDAP Name: ",
    post_label="@nvidia.com",
    ok_handler=lambda dialog: print(f"Input accepted: '{dialog.get_value()}'"),
)
```

### Message Dialog

A message dialog is the simplest of all popup dialogs; it displays a confirmation message before executing some action.

Code for above:

```python
message = "Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua."
dialog = MessageDialog(
    title="Message",
    message=message,
    ok_handler=lambda dialog: print("Message acknowledged"),
)
```

### Options Dialog

An options dialog displays a set of checkboxes; the choices optionally belong to a radio group, meaning only one choice is active at a given time.

Code for above:

```python
field_defs = [
    OptionsDialog.FieldDef("hard", "Hard place", False),
    OptionsDialog.FieldDef("harder", "Harder place", True),
    OptionsDialog.FieldDef("hardest", "Hardest place", False),
]
dialog = OptionsDialog(
    title="Options Dialog",
    message="Please make your choice:",
    field_defs=field_defs,
    width=300,
    radio_group=True,
    ok_handler=lambda dialog: print(f"Choice: '{dialog.get_choice()}'"),
)
```

### Options Menu

Similar to the options dialog, but displayed in menu form.

Code for above:

```python
field_defs = [
    OptionsMenu.FieldDef("audio", "Audio", None, False),
    OptionsMenu.FieldDef("materials", "Materials", None, True),
    OptionsMenu.FieldDef("scripts", "Scripts", None, False),
    OptionsMenu.FieldDef("textures", "Textures", None, False),
    OptionsMenu.FieldDef("usd", "USD", None, True),
]
menu = OptionsMenu(
    title="Options Menu",
    field_defs=field_defs,
    width=150,
    value_changed_fn=lambda dialog, name: print(f"Value for '{name}' changed to {dialog.get_value(name)}"),
)
```

A complete demo that includes the code snippets above is included with this extension at “scripts/demo_popup_dialog.py”.
Deploying.md
# Deploying a Carbonite Application

Applications developed with the Carbonite SDK will need to redistribute some of its components to function correctly. The Carbonite package for applications is `carb_sdk+plugins.${platform}` and is distributed via Packman. You can use the Packman search tool to find package versions.

Generally speaking, there is little harm in redistributing *too many* files. If in doubt, redistribute it.

## Redistributable

The package contains a `_build/{platform}/{config}` directory where binary artifacts that must be redistributed are placed. Not all of these files will need to be redistributed. The following sections describe the requirements in more detail.

## Debug vs Release

The package contains both *debug* and *release* builds of binaries. If debugging Carbonite itself is not desired, your application can use the *release* binaries, even if the application itself is built as *debug*. The *release* binaries also tend to be faster at runtime, since the *debug* binaries are unoptimized.

On Windows, the *debug* binaries may require debug runtime libraries. Carbonite is not licensed to distribute the Microsoft debug runtime files, so these files must be sourced elsewhere. A possible means of acquiring the debug Microsoft libraries is to install a version of Microsoft Visual Studio.

## Core Library

If you are using the [Carbonite Framework](carb/Framework.html#carb-framework) with plugins, or [Omniverse Native Interfaces](OmniverseNativeInterfaces.html), or the Carbonite memory management functions (i.e. `carb::allocate()`), you will need to package the core library along with your application. This is `carb.dll` (Windows), `libcarb.so` (Linux) or `libcarb.dylib` (Mac).

## Plugins

Only the plugins that your application uses (and their recursive dependencies) must be redistributed. For instance, few applications use `carb.simplegui.plugin` though it is among the largest Carbonite plugins.
It need not be redistributed with your application if it is not being used. However, keep in mind that there may be dependencies between plugins. For instance, `carb.settings.plugin` requires `carb.dictionary.plugin`. The provided `plugin.inspector` tool application can be used to examine these dependencies.

## Python

Carbonite provides a means of embedding Python through `carb.scripting-python.plugin`. Python 3.7 and 3.10 are both offered. These are meant to be singular: that is, only one version of Python may be loaded into an application. The contents of the `scripting-python-${version}` directory must be redistributed along with your application if you use embedded Python.

## Python Bindings

If you are using the Carbonite plugins through embedded Python, or are running as a Python application (i.e. started from Python), then you likely also want to include the relevant portions of the *bindings-python* directory.

Python uses dot-notation with the *import* directive to load python code from directories and packages. *.py* files may exist loose in a directory to comprise a package, or be compiled into a library file (a *.pyd* file on Windows or a *.so* file on Linux/Mac). Carbonite Python Bindings are generally compiled into a library file.

In some cases, Carbonite Python Bindings are prefixed with an underscore (i.e. *_carb*) and a wrapper *__init__.py* file is used to import and augment the library package contents. In these cases, both the library with the underscore prefix and the *__init__.py* file must be redistributed in the same directory structure layout.

Since Python interprets directories as package names, the directory structure is important. Therefore, it is important that the directory structure under *bindings-python* is replicated in your application distribution, and the root of the bindings is added to the *PYTHONPATH* environment variable.
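When setting *PYTHONPATH* ahead of time is not convenient, the bindings root can equally be added to the module search path programmatically, before the first binding import. A minimal sketch — the relative directory name is an assumption about your distribution's layout:

```python
# Add the redistributed bindings root to the module search path before
# importing any Carbonite bindings. "bindings-python" mirrors the package
# layout described above; adjust the path to your distribution's layout.
import os
import sys

bindings_root = os.path.abspath("bindings-python")
if bindings_root not in sys.path:
    sys.path.insert(0, bindings_root)

# After this, e.g. `import carb` resolves against the redistributed bindings,
# because Python maps the directory structure under the root to package names.
```

This is equivalent to exporting *PYTHONPATH* before launch; either way, the directory structure under the root must match the package names being imported.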
Bindings are available for the following versions of Python, identified by the *cpXXX* number in their filename: 3.7 (*cp37*), 3.8 (*cp38*), 3.9 (*cp39*), 3.10 (*cp310*). Only the version that is in use must be redistributed.

### Core library bindings

The Core library bindings are prefixed with *_carb* and located in the *bindings-python/carb* directory. The *__init__.py* is also required to be redistributed along with the Core library bindings. The Core library bindings are required if you are redistributing any of the other bindings (or have your own bindings for additional plugins).

### Plugin bindings

Plugins that do not have an associated *__init__.py* file are located in either the *carb* or *omni* subdirectories of *bindings-python* and do not have an underscore prefix. Plugins that have an associated *__init__.py* are located in an additional subdirectory. The subdirectory must be redistributed along with the binding library for the desired version of Python, as well as the *__init__.py* file.

## Platform Specific

### Windows

Some plugins may require the Visual Studio 2019 Runtime Redistributable. The files therein are not distributed as part of the *carb_sdk+plugins* package and must be sourced separately. Typically the files required would be located in the */X64/Microsoft.VC142.CRT* directory: *vcruntime140.dll* and *msvcp140.dll*. In some cases the Windows SDK runtime is required as well: *x64/ucrtbase.dll*.

If *carb.profiler-nvtx.plugin* is redistributed, the *nvToolsExt64_1.dll* file must also be present in the same directory.

### Linux

If *carb.profiler-nvtx.plugin* is redistributed, the *libnvToolsExt.so* file must also be present in the same directory.

## Telemetry Transmitter

If an application makes use of *omni.structuredlog.plugin* to gather telemetry data, the *omni.telemetry.transmitter* application can be used to send the gathered information to a server that collects this data.
The *omni.telemetry.transmitter* application changes less frequently and is distributed via Packman in a separate package: *telemetry_transmitter.${platform}*. The entire contents of *_build/${platform}/release* from within that package should be redistributed along with your application in a separate directory. > **Warning** > The *telemetry_transmitter* package contains copies of various plugins required by the transmitter. It is generally assumed that these are older versions than the plugins from the *carb_sdk+plugins* package, and should be located in a separate directory and loaded only by the transmitter. ## Symbols Symbols are not distributed along with any of the Carbonite packages. Instead they are stored at build time using the repo_symbolstore utility.
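The central redistribution rule in this document — ship each plugin your application loads plus its recursive dependencies — amounts to a transitive-closure computation over dependency information such as the `plugin.inspector` tool reports. A minimal sketch; the dependency table here is hypothetical and would in practice be derived from the inspector's output:

```python
# Compute the transitive closure of plugin dependencies to decide which
# plugin binaries must ship. The `dependencies` table is hypothetical
# example data, not real inspector output.

def plugins_to_redistribute(used, dependencies):
    """used: iterable of plugin names the application loads directly.
    dependencies: dict mapping plugin name -> list of required plugins."""
    needed, stack = set(), list(used)
    while stack:
        plugin = stack.pop()
        if plugin in needed:
            continue
        needed.add(plugin)
        stack.extend(dependencies.get(plugin, []))  # recurse into dependencies
    return needed


# The settings -> dictionary dependency is the example given above.
deps = {
    "carb.settings.plugin": ["carb.dictionary.plugin"],
    "carb.dictionary.plugin": [],
    "carb.simplegui.plugin": [],  # large, but only shipped if actually used
}
ship = plugins_to_redistribute(["carb.settings.plugin"], deps)
```

An application loading only `carb.settings.plugin` still needs `carb.dictionary.plugin` on disk, while `carb.simplegui.plugin` stays out of the distribution.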
deprecated_CHANGELOG.md
# Changelog

This document records all notable changes to the `omni.graph.io` extension.

This project adheres to [Semantic Versioning](https://semver.org/).

## [1.8.1] - 2024-04-17
### Added
- Add a ‘support_level’ entry to the configuration file of the extensions

## [1.8.0] - 2024-02-09
### Changed
- Updated version number to work after the Kit branch was renamed from 105.2 to 106.

## [1.7.0] - 2024-01-16
### Changed
- All nodes have been moved to the omni.graph.nodes extension
- Extension dependency changed to omni.graph.nodes
### Removed
- All nodes, tests and code files have been removed from this extension

## [1.6.0] - 2023-08-19
### Removed
- Dependency on the downstream extension omni.graph.ui

## [1.5.1] - 2023-06-27
### Fixed
- Refactored OmniGraph documentation to point to locally generated files

## [1.5.0] - 2023-06-20
### Changed
- Use Change Tracking for bundles from omni.graph.core

## [1.4.5] - 2023-05-31
### Fixed
- Adjusted the CRLF settings for the generated .md node table of content files

## [1.4.4] - 2023-04-11
### Added
- Table of documentation links for nodes in the extension

## [1.4.3] - 2023-03-16
### Added
- “threadsafe” scheduling hint to OgnBundleToUSDA node.

## [1.4.2] - 2023-03-07
### Added
- “usd-write” scheduling hint to OgnExportUSDPrim

## [1.4.1] - 2023-02-25
### Changed
- Modified format of Overview to be consistent with the rest of Kit

## [1.4.0] - 2023-02-23
### Removed
- omni.graph.io.python module

## [1.3.0] - 2023-02-17
### Changed
- Soft deprecate INode::(de)registerPathChangedCallback

## [1.2.9] - 2023-01-30
### Changed
- Removed the kit-sdk landing page
- Moved all of the documentation into the new omni.graph.docs extension

## [1.2.8] - 2023-01-13
### Changed
- Import Type is always on to match Read Prims behavior.
## [1.2.7] - 2022-12-21
### Changed
- Refactored CUDA build to consolidate build functions and remove unnecessary rebuilds

## [1.2.6] - 2022-12-15
### Fixed
- Fixed a bug in OgnImportUSDPrim to output deformed points and normals

## [1.2.5] - 2022-11-18
### Changed
- Allow to be used in headless mode

## [1.2.4] - 2022-08-22
### Changed
- Allow OgnImportUSDPrim to have targets that don’t exist

## [1.2.3] - 2022-08-09
### Fixed
- Applied formatting to all of the Python files

## [1.2.2] - 2022-08-09
### Fixed
- Fixed backward compatibility break for the deprecated `IDirtyID` interface.

## [1.2.1] - 2022-08-03
### Fixed
- Compilation errors related to deprecation of methods in ogn bundle.

## [1.2.0] - 2022-07-28
### Deprecated
- Deprecated `IDirtyID` interface

## [1.1.0] - 2022-07-07
### Added
- Test for public API consistency
- Added build handling for tests module

## [1.0.10] - 2021-11-19
- Added option on ImportUsdPrim to include local bounding box of imported prims as a common attribute.
- Fixed case where Import is not re-computed when a transform is needed and an ancestor prim’s transform has changed.
## [1.0.9] - 2021-11-10
- Added option on ExportUSDPrim node type to remove, from output prims, any authored attributes that aren’t being exported

## [1.0.8] - 2021-10-22
- Should be identical to 1.0.7, but incrementing the version number just in case, for logistical reasons

## [1.0.7] - 2021-10-14
- Added option to export time sampled data to specified time in ExportUSDPrim node type

## [1.0.6] - 2021-10-05
- Fixed re-importing of transforming attributes in ImportUSDPrim node type when transforms change

## [1.0.5] - 2021-09-24
- Added attribute-level change tracking to ImportUSDPrim node type

## [1.0.4] - 2021-09-16
- Added “Attributes to Import” and “Attributes to Export” to corresponding nodes to reduce confusion about how to import/export a subset of attributes
- Added support for importing/exporting “widths” interpolation from USD

## [1.0.3] - 2021-08-18
- Updated for an ABI break in Kit

## [1.0.2] - 2021-08-17
- Fixed crash related to ImportUSDPrim node type and breaking change in Kit from eTransform being deprecated in favour of eMatrix

## [1.0.1] - 2021-08-13
- Fixed crash related to ImportUSDPrim node type

## [1.0.0] - 2021-07-27
### Added
- Initial version. Added ImportUSDPrim, ExportUSDPrim, TransformBundle, and BundleToUSDA node types.
depth-compositing_Overview.md
# Overview

## Introduction

The `omni.kit.scene_view.opengl` module provides an OpenGL drawing backend for `omni.ui.scene`. The usage is the same as `omni.ui.scene` for creating items; the only difference is in how the top-level `SceneView` object is created.

## How to create a simple OpenGLSceneView

### Python

1. Import the required packages:

```python
import omni.ui as ui
from omni.ui_scene import scene as sc
from omni.kit.scene_view.opengl import OpenGLSceneView
```

2. Create a simple model to deliver view and projection to the `omni.ui.scene.SceneView`:

```python
class SimpleModel(sc.AbstractManipulatorModel):
    def __init__(self, view=None, projection=None):
        super().__init__()
        self.__view = view or [0.7071067811865476, -0.4082482839677536, 0.5773502737830688, 0,
                               2.7755575615628914e-17, 0.8164965874238355, 0.5773502600027396, 0,
                               -0.7071067811865477, -0.40824828396775353, 0.5773502737830687, 0,
                               5.246555321336316e-14, -0.0000097441642310514, -866.0254037844385, 1]
        self.__projection = projection or [2.911189413558437, 0, 0, 0,
                                           0, 2.911189413558437, 0, 0,
                                           0, 0, -1.00000020000002, -1,
                                           0, 0, -2.00000020000002, 0]

    def get_as_floats(self, item):
        """Called by SceneView to get projection and view matrices"""
        if item == self.get_item("projection"):
            return self.__projection
        if item == self.get_item("view"):
            return self.__view
```

3. Create an `omni.ui.Window` and an OpenGLSceneView in it.

```python
window = ui.Window("OpenGLSceneView", width=512, height=512)
with window.frame:
    gl_sceneview = OpenGLSceneView(SimpleModel())
    with gl_sceneview.scene:
        sc.Arc(250, axis=0, tesselation=64, color=[1, 0, 0, 1])
        sc.Arc(250, axis=1, tesselation=64, color=[0, 1, 0, 1])
        sc.Arc(250, axis=2, tesselation=64, color=[0, 0, 1, 1])
```

## Depth Compositing

Because the drawing is done with OpenGL, it is also possible to do drawing in the Viewport that is depth-composited.
This can be accomplished with the `ViewportOpenGLSceneView` class, which also handles setting up the Viewport to output a depth channel to clip against.

### Python

```python
from omni.ui_scene import scene as sc
from omni.kit.scene_view.opengl import ViewportOpenGLSceneView
from omni.kit.viewport.utility import get_active_viewport_window

def build_fn(gl_scene_view):
    with gl_scene_view.scene:
        sc.Arc(45, axis=1, tesselation=64, color=[1, 0.9, 0.4, 1], wireframe=False, thickness=50)

# Use a static helper method to set everything up.
# Just provide a ViewportWindow, a unique identifier, and a callable function to build the scene.
ui_frame, gl_scene_view = ViewportOpenGLSceneView.create_in_viewport_window(
    get_active_viewport_window(), "demo.unique.identifier", build_fn
)
```

## Further reading

* [Omni UI Scene](https://docs.omniverse.nvidia.com/kit/docs/omni.ui.scene/latest)
design_manual.md
# WRAPP CLI usage

WRAPP provides a command line tool that helps with asset packaging and publishing operations for assets stored in Nucleus servers or file systems. It encourages a structured workflow for defining the content of an asset package, and methods to publish and consume those packages in a version-safe manner.

## Design

The WRAPP command line tool is a pure Nucleus client utilizing only publicly available APIs; it lives completely in the Nucleus user space. Thus, all operations performed are limited by the permissions granted to the user executing the script.

The tool itself offers a variety of commands that document themselves via the `--help` command line flag. To get a list of all commands, run:

```
wrapp --help
```

To get the help for a single command, run e.g.:

```
wrapp create --help
```

The commands are displayed in alphabetical order, but it is important to understand that the design is based on three layers of increasing abstraction over a pure file-system based workflow. Those layers are:

1. Files & Folders
2. Packages
3. Stages

We will present the commands in the order of the lowest abstraction to the highest abstraction because it makes it easier to understand how the later commands function, but in day to day usage mostly layers 2 and 3 will be used.

## Supported URLs

Wrapp in general accesses data through the Omniverse Client-Library and therefore supports URLs to Nucleus servers, S3 buckets, Azure containers/blobs and to the local file system:

- **Nucleus Servers**: Data on Nucleus servers can be accessed using “omniverse://…” URLs. Authentication will by default occur interactively; for more details please refer to the Nucleus documentation.
- **Azure**: Data on Azure can be accessed using “https://…..blob.core.windows.net” URLs. For more details on authentication and requirements on the Azure containers/blobs, please refer to the client-library documentation.
- **S3**: Data on S3 can be accessed using “http(s)://…cloudfront.net” or “http(s)://…amazonaws.com” URLs. For more details on authentication and requirements on the S3 buckets, please refer to the client-library documentation.
- **Local file system**: Data on the local file system can be accessed using “file://localhost/….” or “file:///…” URLs. Any URL or path that has no scheme is interpreted as a file path, so you can specify `file:local_folder` or `local_folder` to address a local directory.

Not all commands support all URL types for all parameters.

## Generic parameters

Most if not all commands support the following parameters:

- `--verbose`: Specify this to have more visibility on what is currently being processed.
- `--time`: Measure the wall clock time the command took to execute.
- `--stats`: Produce some stats about the estimated number of roundtrips and file counts encountered. Note that many of these roundtrips may be cached and not actually executed, so this is more informative in nature than a benchmark.

## Options

- `--jobs`: Specify the maximum number of parallel jobs executed. The default is 100. This can be useful to throttle load on the server while running bulk operations. Note that downloads are always capped at 10 (or use the OMNI_CONN_TRANSFER_MAX_CONN_COUNT setting to set this specifically).
- `--tagging-jobs`: Specify the maximum number of parallel jobs run on the tagging service. The default is 50.
- `--log-file`: Specify the name of the log file. The default name is `wrapp.log`.
- `--debug`: Turn on debug level logging for the client library.
- `--json-logging`: Use this to produce a JSON structured log instead of a human readable log.

## Authentication

By default, wrapp uses interactive authentication appropriate for the server and server version you are contacting. It might open a browser window to allow for single sign-on workflows. Successful connections will be cached and no further authentication will be required for running commands.
If this is not desired, or not possible as in headless programs, the `--auth` parameter is used to supply credentials. The credentials need to be in the form of a comma-separated triplet, consisting of: 1. The server URL. This needs to start with `omniverse://` and must match the server name as used in the URLs that target the server. 2. The username. This can be a regular username, or the special name `$omni-api-token` when the third item is an API token and not a password. 3. The password for that user, or the API token generated for a single sign-on user. As an example, this is how to specify a wrapp command authenticating against a localhost workstation with the default username and password: ``` wrapp list-repo omniverse://localhost --auth omniverse://localhost,omniverse,omniverse ``` and this is how you would use an API token stored in an environment variable on Windows: ``` wrapp list-repo omniverse://staging.nvidia.com/staging_remote/beta_packages --auth omniverse://staging.nvidia.com,$omni-api-token,%STAGING_TOKEN% ``` On Linux, use `$STAGING_TOKEN` instead, and don’t forget to escape (or single-quote) the `$` in `$omni-api-token` so the shell does not expand it. ## Running wrapp commands concurrently If several wrapp commands are executed and awaited concurrently, it is strongly recommended to run them in one context created with the `CommandContext.run_scheduler` method. ## Layer 1 commands - Files & Folders and their metadata ### Catalog The catalog command can be used to create a list of files and folders in a specified subtree and store the result, together with explicit version information, in a catalog (aka manifest) file. To catalog the content of a specific subtree on your localhost Nucleus with the assets being at the path `NVIDIA/Assets/Skies/Cloudy/`, just run: ``` wrapp catalog omniverse://localhost/NVIDIA/Assets/Skies/Cloudy/ cloudy_skies_catalog.json --local-hash ``` Of course, replace localhost with the server name if the data is somewhere else. 
The `--local-hash` flag is required here because the data in the example is stored on a mount; the same applies whenever the data is not checkpointed. With `--local-hash`, the hashes are calculated on the fly, but note that the data needs to be downloaded to your local machine for this! The JSON file produced archives the files and their versions at the very moment the command was run. Because the server is live, running the command again might produce a different catalog when files have been added, deleted, or updated in the meantime. To determine whether the version that was cataloged is still the same, we can use the `diff` command to compare two catalogs made at different points in time, or even at different copy locations of the same asset. ### Ignore rules, e.g. for thumbnails The command supports ignore rules that are by default read from a file called `.wrappignore` in the current working directory. The name of the ignore file can also be specified explicitly, e.g. `--ignore-file myignorefile.txt`. For example, to ignore all thumbnail directories during the cataloging operation so they are not included in the package, create a file called `.wrappignore` in your current directory containing the line ``` .thumbs ``` If tags need to be cataloged, copied, and diffed as well, specify the `--tags` parameter. This will do a second pass using the Omniverse tagging service and will archive the current state of tags, their namespaces, and values in the catalog file: ``` wrapp catalog omniverse://example.nvidia.com/lib/props/vegetation vegetation_tagged.json --tags ``` ### Creating a catalog from a file list Should your asset be structured differently from a simple folder tree that is traversed recursively by the catalog operation, you can create and specify a file list in the form of a tab-separated URL list split into the base and the relative path. 
As an example, this can be used to create a catalog of an asset structured differently: ``` omniverse://localhost/NVIDIA/Assets/Skies/Clear/\tevening_road_01_4k.hdr omniverse://localhost/NVIDIA/Assets/Skies/Dynamic/\tCirrus.usd ``` If this is stored in a file called input_files.tsv (with a proper ASCII tab character instead of the `\t` placeholder), you can create the catalog of this asset with the `--file-list` parameter like this: ``` wrapp catalog input_files.tsv evening_road.json --local-hash --file-list ``` Both files will now be in the root directory of the package to be created, as only the relative part of the path is kept. ### Diff The diff command compares two catalogs, and can be used to find out what has changed, or what the differences are between two copies of the same subtree. Assuming we have two catalogs of the same package from the same location at two different dates, we can just run ``` wrapp diff vegetation_catalog_20230505.json vegetation_catalog_20230512.json --show ``` The `--show` option not only reports whether there is a diff (the exit code will be 1 if a diff is detected, 0 otherwise) but also prints the items that are only in catalog 1 but not 2, the items that are only in 2 but not 1, and the files that differ in their content. ### Get Sometimes, it can be handy to have a quick way of retrieving a single file or folder with a command line tool. This is what the get command was made for. To retrieve a single file onto your local disk, just do ``` wrapp get omniverse://localhost/NVIDIA/Assets/Isaac/2022.1/Isaac/Materials/Isaac/nv_green.mdl ``` and the tool will download the file. 
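In a script, this pairs naturally with ordinary file checks. A minimal hedged sketch (the URL is the one from the example above; it assumes access to that server, and that `get` writes the file under its original name into the current directory):

```shell
# Fetch a single file, then fail loudly if it did not arrive.
wrapp get omniverse://localhost/NVIDIA/Assets/Isaac/2022.1/Isaac/Materials/Isaac/nv_green.mdl
if [ ! -f nv_green.mdl ]; then
    echo "download failed" >&2
    exit 1
fi
```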
### Cat For viewing the content of a single text file, you can issue the cat command, and wrapp will download the content and print it to stdout: ``` wrapp cat omniverse://localhost/NVIDIA/Assets/Isaac/2022.1/Isaac/Materials/Isaac/nv_green.mdl ``` ### Freeze The freeze command is used to freeze or archive a specific version into a new location. This is used to make sure a specific version can be reproducibly addressed at that location, e.g. to run a CI job on a specific version, or to create a reproducible version for QA testing and subsequent release. The freeze command has two modes. The first mode takes a source subtree URL and creates a copy of the head version of the files at the source position. If both source and destination are on the same Nucleus server, the operation is efficient as no data has to be transferred; the files and folders at the new destination are effectively hard links to the same content, causing no data duplication. Note that the history is not copied and the checkpoint numbers will not be the same as in the source. Here is a command to freeze the example vegetation package at the current head version into a new subtree on the same server: ``` wrapp freeze omniverse://example.nvidia.com/lib/props/vegetation omniverse://example.nvidia.com/archive/props/vegetation_20230505 ``` The second mode of the command takes a catalog file as input and again a destination path as the second parameter, but needs the flag `--catalog`: ``` wrapp freeze vegetation_catalog_20230505.json omniverse://example.nvidia.com/archive/props/vegetation_20230505 --catalog ``` Note that while this allows you to defer the copy to a later point, cataloging the files as a first step, there is no guarantee that the freeze operation will still be able to find all files listed in the catalog - they might have been moved away or obliterated. 
So while creating the catalog first and freezing later is an optimization, be aware that the content referenced by the catalog file is not securely stored. One useful operation is to specify a local file URL as the destination; this allows you to copy a specific cataloged version out to local disk, e.g. to run a CI job on it: ``` wrapp freeze vegetation_catalog_20230505.json file:/c:/build_jobs/20230505 --catalog ``` Like catalog, freeze also supports the `.wrappignore` file as well as the `--ignore-file` parameter. So even if files are part of the catalog, they can be ignored at the freeze stage by providing an ignore file. To have tags respected during the freeze operation and make sure they are copied as well, specify the flag `--copy-tags`. Note this has no effect when doing a copy within the same Nucleus server, as tags are always copied in that case anyhow. ### create-patch and apply-patch The create-patch command uses a three-way comparison to produce a patch file that will merge one file tree into another, given their common ancestor. For example, suppose we have a file tree at time point 1 and have created a catalog file for it called `catalog_1.json`. We make a copy of this state to a new location and use it from there. Now work continues in the original location, and we create a new catalog at time point 2 called `catalog_2.json`. If we now want to update the copy of the file tree at our use location, and want to know if it is safe to overwrite the tree or if there are local modifications we want to be alerted about, we use the following steps: 1. First, also catalog the target location; let’s call this `catalog_target.json`. 2. Then, run the following command to produce the patch or delta file which contains the operations needed for the update: ``` wrapp create-patch catalog_target.json catalog_2.json catalog_1.json --patch update_target.json ``` 
If the command produces the patch file successfully, this indicates that no local changes have been made and there are no conflicts. Then run the following command to apply the changes in update_target.json to the target: ``` wrapp apply-patch update_target.json ``` After this command, the file tree at the target location matches the file tree at the source at time point 2. This is the operation that is performed by the higher-level `update` command. In case there are local changes to the target, two options are offered: 1. To ignore local changes and keep them, only adding new files and new versions where the target is unmodified, specify the `--ignore` parameter to the merge command. 2. To roll back and lose local changes in the target, specify the `--force` parameter to the merge command; this will produce a larger patch file that also contains the rollback commands. ## Layer 2 commands - Packages So far we have only worked with subtrees, as in a versioned file system. This is very powerful and can be used for many use cases, but to have an easier workflow with less complex URLs and fewer possibilities for mistakes, we introduce a few conventions and new commands. The concept of a `repository` is known from distributed versioning systems like git. We use the term repository to point at a directory on a Nucleus server which is used as an intermediate safe storage for the frozen/archived versions; consumers of these files use it as a copy source. The package directory is called `.packages`. Each folder in there represents a named package, and has sub-folders for named versions of that package. No prescriptions are made for how packages or versions have to be named; they just have to be valid file and folder names. 
An example package cache could look like this: - /.packages - /.packages/vegetation_pack - /.packages/vegetation_pack/20230505 - /.packages/vegetation_pack/20230512 - /.packages/vegetation_pack/20230519 - /.packages/rocks_pack - /.packages/rocks_pack/v1.0.0 - /.packages/rocks_pack/v1.1.0 Concretely, we introduce the new commands `new`, `create`, `install`, and `list-repo`. We allow both named and unnamed packages to be used. Unnamed packages are top-level directories that are just consumers of packages produced elsewhere and have no package description file of their own. Named packages are all packages that have a file `.<package name>.wrapp`. You can create a named package by using the new command, or create a named package from an unnamed package during the create operation (which will leave the unnamed source package unnamed - but you can run new for a directory that already contains files!). ### New The new command does not operate on any files or packages; rather, it is a shortcut to create a suitable `.<package>.wrapp` file to be used by subsequent install commands. For instance, when creating a new scenario and wanting to capture the asset packages used, it is useful to have a package file (e.g. wrapp.toml - any name is fine) that will record the dependencies installed. As an example, just run ``` wrapp new san_diego_scenario 1.0.0 omniverse://localhost/scenarios/san_diego ``` This will create a single file `.san_diego_scenario.wrapp` in the given location. You can display the contents with ``` wrapp cat omniverse://localhost/scenarios/san_diego/.san_diego_scenario.wrapp ``` and it will look similar to this: ```json { "format_version": "1", "name": "san_diego_scenario", "version": "1.0.0", "catalog": null, "remote": null, "source_url": "omniverse://localhost/scenarios/san_diego", "dependencies": null } ``` ### Create The create command is a shorter form of freeze. 
The destination directory for the freeze operation is always a package cache directory, which by default is on the same Nucleus server as the source data. To create a versioned package for reuse from our previous example, run: ``` wrapp create --package omniverse://localhost/scenarios/san_diego/.san_diego_scenario.wrapp ``` When you want to later create a new version of this package, just additionally specify the new version. Alternatively, if you have not run new and there is no .wrapp file in the package directory, you can just specify the name and version directly. This will create a .wrapp file only in the package cache, not in the source of the package: ```shell wrapp create vegetation_pack 20230505 omniverse://localhost/lib/props/vegetation ``` This will create a copy of the vegetation library in the default package cache at omniverse://localhost/.packages/vegetation_pack/20230505. You can use the `--repo` option to specify a different downstream Nucleus server to receive the data, but note that this will first download the data and then upload it to the other server. For example, to create the package on a different Nucleus server that is used for staging tests, we could run: ```shell wrapp create vegetation_pack 20230505 omniverse://localhost/lib/props/vegetation --repo omniverse://staging.nvidia.com ``` This will create a copy of the vegetation library in omniverse://staging.nvidia.com/.packages/vegetation_pack/20230505. Additionally, this will create a wrapp file recording the package name, the version, and the source from which it was created. The name will be `.{package_name}.wrapp`. Running the new command to prepare a .wrapp file is optional; create will generate the file in case there is none yet. Alternatively, packages can be created from previously generated catalogs as well. 
For this, specify the filename of the catalog file instead of a source URL and add the `--catalog` option: ```shell wrapp create vegetation_pack 20230505 --catalog vegetation.json --repo omniverse://staging.nvidia.com ``` ### List-repo With the concept of package repositories, you can also list the packages available on any of them. Running ```shell wrapp list-repo omniverse://localhost ``` would give you the list of known packages together with the versions present; for example, the output could be ```shell > wrapp list-repo omniverse://localhost vegetation_pack: 20230505, 20230401 ``` showing that one package is available, in two different versions. ### Install These are still pure file-based operations, and when we copy a version of the asset library into a folder with a version name in it, all references to these files would obviously need to be renamed, making it harder to update to a new version of that asset library from within USD. The idea here is to not reference the package archive directly from within the USD files and materials, but rather to create yet another copy as a subfolder of the scenario or stage - a subfolder that has no version in its path. This is most easily achieved via the `install` command. Assume the author of a `SanDiego` scenario stored at omniverse://localhost/scenarios/SanDiego wants to use the vegetation asset pack in a specific version. This can be done with the following command line: ```shell wrapp install vegetation_pack 20230505 omniverse://localhost/scenarios/SanDiego/asset_packs ``` This will look for the package version in the server’s .packages directory, and make a hard-linked copy in the specified subdirectory `asset_packs/`, from where the assets can be imported and used in the scenario scene. The install command can also be used to update a package at the same location to a different version (it actually also allows downgrades). For that, just specify a different version number. 
This command will check that the installed package is unmodified; otherwise it will fail with conflicts (to override, just delete the package at the install location and run install again). To update the previously installed vegetation_pack to a newer version, just run ```shell wrapp install vegetation_pack 20230523 omniverse://staging.nvidia.com/scenarios/SanDiego/asset_packs ``` If you use more than one package, it can quickly become complicated to remember which package was installed from where. To help with this, wrapp introduces the concept of package files with dependencies. To create/update a dependency file, specify an additional parameter to the install command like this: ```shell wrapp install vegetation_pack 20230523 omniverse://staging.nvidia.com/scenarios/SanDiego/asset_packs --package omniverse://staging.nvidia.com/scenarios/SanDiego/.sandiego.wrapp ``` This will create a file `.sandiego.wrapp` at the specified location. If any of the files the install command needs to modify have been manually changed in the installation folder, the installation will fail with an appropriate error message, indicating that the file in the installation folder cannot be updated to match the file in the package folder. This is called a “conflict”. The following examples constitute conflicts: - The same file has been changed in both the installation folder and the package, but with different content. - A new file has been added to both the installation folder and the package, but with different content. - A file has been deleted from the package, but modified in the installation folder. This conflict mechanism protects the user from losing any data or modifications in the installation folder. To update the installation folder in such a situation, the patch/apply mechanism can be used. In order to record the conflicts into a patch file, the failed installation can be rerun with an additional parameter specifying the name of the patch file to create. 
This will apply all non-conflicting changes and record all conflicts in the patch file: ```shell wrapp install vegetation_pack 20230925 omniverse://staging.nvidia.com/scenarios/SanDiego/asset_packs --patch install_conflicts.patch ``` The `install_conflicts.patch` file is a JSON file with the operations that would resolve/override the conflicts. Inspect this, edit or remove operations that are not desired, and apply with ```shell wrapp apply install_conflicts.patch ``` ### Uninstall Any package that has been installed can be uninstalled again. There are two modes of uninstallation: via the directory in which the package has been installed, or via pointing to the dependency file which had been used to record the install operation. In the latter case, uninstall will also remove the dependency information recorded in that file. Uninstall via directory: ```bash wrapp uninstall vegetation_pack omniverse://staging.nvidia.com/scenarios/SanDiego/asset_packs ``` or via package file, with no need to specify the installation directory: ```bash wrapp uninstall vegetation_pack --package omniverse://staging.nvidia.com/scenarios/SanDiego/dependencies.toml ``` ### Mirror When working with multiple servers, it might make sense to transfer created packages (or rather, specific versions of these) into the .packages folder on another server, so install operations on that server are fast and don’t need to specify the source server as a repository. This is what the mirror operation is built for - it will copy a package version from one server’s .packages directory into another server’s .packages directory. The simple form of the command is ```bash wrapp mirror vegetation_pack 20230523 --source-repo omniverse://dev.nvidia.com --destination-repo omniverse://staging.nvidia.com ``` It is also possible to resume an aborted transfer. This is implemented by cataloging the destination directory first and then calculating and applying a delta patch. Activate this behavior with the `--resume` parameter. 
If the destination directory does not exist, this parameter does nothing and is ignored: ```bash wrapp mirror vegetation_pack 20230523 --source-repo omniverse://dev.nvidia.com --destination-repo omniverse://staging.nvidia.com --resume ``` To accelerate the upload of subsequent versions, we can force a differential upload versus an arbitrary version that has already been mirrored; just specify the template version as an additional parameter: ```bash wrapp mirror vegetation_pack 20230623 --source-repo omniverse://dev.nvidia.com --destination-repo omniverse://staging.nvidia.com --template-version 20230523 ``` This will first copy, on the target server, the version specified as template version into the target folder. Then, it will calculate a differential update and only upload and delete files that have changed. This can be a big time saver when many files stayed the same between versions, but it will slow things down if the difference is actually large, because it has to do the additional copy on the destination server and catalog the result of that copy in the destination directory. (Optimization possible: we could rewrite the source catalog so the subsequent catalog is not required.) ### Export Instead of directly copying a package from server to server using the mirror command, you can also have wrapp create a tar file with all contents of a package for a subsequent import operation. To export, just run ```bash wrapp export vegetation_pack 20230623 --repo omniverse://dev.nvidia.com ``` This will download everything to your computer and produce an uncompressed tar file called `vegetation_pack.20230623.tar`. You can specify an alternative output file name or path with the `--output` option. You can also specify a catalog to export using `export --catalog`, e.g. ```bash wrapp export vegetation_pack 20230505 --catalog vegetation.json ``` This allows creating tar files and packages from arbitrary sources, e.g. data hosted on S3 or Azure. 
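Since the export produces a plain, uncompressed tar archive, it can be inspected or unpacked with standard tools. A self-contained sketch (it uses a tiny stand-in archive, since the real package archive only exists after running an actual export):

```shell
# Build a tiny stand-in for an exported package archive.
mkdir -p demo_pkg
echo '#usda 1.0' > demo_pkg/scene.usda
tar -cf vegetation_pack.demo.tar demo_pkg

# A wrapp export archive can be listed (or unpacked) the same way:
tar -tf vegetation_pack.demo.tar
```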
If you plan on importing the data later using the `wrapp import` command, consider using the `--dedup` switch to avoid downloading and storing the same content several times in the tar file. ### Import As you might have guessed, an exported package can also be imported again. To do that, run ```bash wrapp import vegetation_pack.20230623.tar --repo omniverse://staging.nvidia.com ``` to import the package into the .packages folder on the specified receiving repository.
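Putting the package commands together, a typical publish-and-consume round trip might look like the following sketch (the server, package, and scenario names are the illustrative ones used throughout this page):

```shell
# 1. Package the asset library into the local server's package cache.
wrapp create vegetation_pack 20230623 omniverse://localhost/lib/props/vegetation

# 2. Transfer that package version into the staging server's .packages folder.
wrapp mirror vegetation_pack 20230623 --source-repo omniverse://localhost --destination-repo omniverse://staging.nvidia.com

# 3. Install it into a scenario under an unversioned subfolder, recording the
#    dependency in the scenario's package file.
wrapp install vegetation_pack 20230623 omniverse://staging.nvidia.com/scenarios/SanDiego/asset_packs --package omniverse://staging.nvidia.com/scenarios/SanDiego/.sandiego.wrapp
```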
# carb::blast::Blast Defined in [Blast.h](file_Blast.h) ## Destructible Authoring Commands ```cpp bool combinePrims(const char** paths, size_t numPaths, float defaultContactThreshold, const carb::blast::DamageParameters* damageParameters, float defaultMaxContactImpulse); ``` **Main entry point to combine existing prims into a single destructible.** **Param paths** - **[in]** Full USD paths to prims that should be combined. **Param numPaths** - **[in]** How many prims are in the paths array. **Param defaultContactThreshold** - **[in]** How hard the prim needs to be hit to register damage during simulation. **Param damageParameters** - **[in]** See DamageParameters description. **Param defaultMaxContactImpulse** - **[in]** How much force can be used to push other prims away during a collision. For kinematic prims only, used to allow heavy objects to continue moving through brittle destructible prims. **Return** - true iff the prims were combined successfully. ### fracturePrims ```cpp const char* fracturePrims(const char** paths, size_t numPaths, const char* defaultInteriorMaterial, uint32_t numVoronoiSites, float defaultContactThreshold, const DamageParameters* damageParameters, float defaultMaxContactImpulse, float interiorUvScale); ``` Main entry point to fracture an existing prim. **Param paths** [in] Full USD path(s) to prim(s) that should be fractured. They need to all be part of the same destructible if there are more than one. **Param numPaths** [in] How many prims are in the paths array. **Param defaultInteriorMaterial** [in] Material to set on newly created interior faces. (Ignored when re-fracturing and an existing interior material is found.) **Param numVoronoiSites** [in] How many pieces to split the prim into. **Param defaultContactThreshold** [in] How hard the prim needs to be hit to register damage during simulation. **Param damageParameters** [in] See DamageParameters description. **Param defaultMaxContactImpulse** [in] How much force can be used to push other prims away during a collision. 
For kinematic prims only, used to allow heavy objects to continue moving through brittle destructible prims. **Param interiorUvScale** [in] Scale to apply to the UV frame when mapping to interior face vertices. **Return** - path to the new prim if the source prim was fractured successfully, nullptr otherwise. --- Set the random number generator seed for fracture operations. **Param seed** - **[in]** The new seed. --- Reset the Blast data (partial or full hierarchy) starting at the given path. The destructible will be rebuilt with only appropriate data remaining. **Param path** - **[in]** The path to reset. --- - **Param path** - **[in]** The path to a chunk, instance, or base destructible prim. - **Return** - true iff the operation could be performed on the prim at the given path. --- - **Param path** - **[in]** The USD path of the blast container. - **Param defaultMaxContactImpulse** - **[in]** Controls how much force physics can use to stop bodies from penetrating. - **Param relativePadding** - **[in]** A relative amount to grow chunk bounds by when calculating world attachment. - **Return** - true if the destructible’s NvBlastAsset was modified (or if path == NULL). --- - **Param path** - **[in]** The USD path of the blast container. - **Return** - true if the destructible’s NvBlastAsset was modified (or if path == NULL). --- Recalculates the areas of bonds. This may be used when a destructible is scaled. **Param path** - **[in]** Path to the chunk, instance, or base destructible prim. **Return** - true iff the operation was successful. --- Finds all children of the chunks in the given paths, and sets kit’s selection set to the paths of those children. **Param paths** - **[in]** Full USD path(s) to chunks. **Param numPaths** - **[in]** How many paths are in the paths array. **Return** - true iff the operation was successful. ### Function: selectParent Finds all parents of the chunks in the given paths, and sets kit’s selection set to the paths of those parents. 
**Parameters:** - **paths [in]** - Full USD path(s) to chunks. - **numPaths [in]** - How many paths are in the paths array. **Return:** - true iff the operation was successful. ### Function: selectSource Finds all source meshes for chunks in the given paths, and sets kit’s selection set to the paths of those meshes. **Parameters:** - **paths [in]** - Full USD path(s) to chunks. - **numPaths [in]** - How many paths are in the paths array. **Return:** - true iff the operation was successful. ### Function: setInteriorMaterial Sets the material for the interior facets of the chunks at the given paths. **Parameters:** - **paths [in]** - Full USD path(s) to chunks. - **numPaths [in]** - How many paths are in the paths array. - **interiorMaterial [in]** - Path to the prim holding the material to use for the interior facets. **Return:** - true iff the operation was successful. ### Get interior material Gets the material for the interior facets of the chunks at the given paths. **Parameters:** - **paths [in]** - Full USD path(s) to chunks. - **numPaths [in]** - How many paths are in the paths array. **Return:** - the material path if all meshes found at the given paths have the same interior material. If more than one interior material is found among the meshes, the empty string (“”) is returned. If no interior material is found, nullptr is returned. ### Recalculate interior UVs Recalculates UV coordinates for the interior facets of chunk meshes based upon the UV scale factor given. If the path given is a chunk, UVs will be recalculated for the chunk’s meshes. If the path is an instance or base prim, all chunk meshes will have their interior facets’ UVs recalculated. **Parameters:** - **path [in]** - Path to the chunk, instance, or base destructible prim. 
- **interiorUvScale [in]** - The scale to use to calculate UV coordinates. A value of 1 will cause the texture to map to a region in space roughly the size of the whole destructible’s largest width. **Return:** - true iff the operation was successful. ```cpp void createDestructibleInstance( const char *path, const DamageParameters *damageParameters, float defaultContactThreshold, float defaultMaxContactImpulse ) ``` Creates a destructible instance with default values from the given destructible base. **Parameters:** - **path [in]** - Path to the destructible base to instance. - **damageParameters [in]** - The damage characteristics to assign to the instance (see DamageParameters). - **defaultContactThreshold [in]** - Rigid body parameter to apply to actors generated by the instance. The minimum impulse required for a rigid body to generate a contact event, needed for impact damage. - **defaultMaxContactImpulse [in]** - Rigid body parameter to apply to actors generated by the instance. The maximum impulse that a contact constraint on a kinematic rigid body can impart on a colliding body. --- ```cpp void setSimulationParams( int32_t maxNewActorsPerFrame ) ``` Sets the maximum number of actors which will be generated by destruction each simulation frame. **Parameters:** - **maxNewActorsPerFrame [in]** - The maximum number of actors generated per frame. --- ```cpp void createDamageEvent(const char *hitPrimPath, DamageEvent *damageEvents, size_t numDamageEvents) ``` Creates a destruction event during simulation. **Parameters:** - **hitPrimPath [in]** - The full path to the prim to be damaged (may be a blast actor prim or its collision shape). - **damageEvents [in]** - An array of DamageEvent structs describing the damage to be applied. - **numDamageEvents [in]** - The size of the damageEvents array. --- ```cpp void setExplodeViewRadius(const char *path, float radius) ``` Sets the cached explode view radius for the destructible prim associated with the given path. 
**Parameters:**
- **path [in]** - Full USD path to a destructible instance.
- **radius [in]** - The distance to move apart the instance’s rendered chunks.

---

Gives the cached explode view radius for the destructible instances associated with the given paths, if the cached value for all instances is the same.

- **Param paths [in]** - Array of USD paths to destructible instances.
- **Param numPaths [in]** - The length of the paths array.
- **Return** - The cached explode view radius for all valid destructible instances at the given paths, if that value is the same for all instances. If there is more than one radius found, this function returns -1.0f. If no valid instances are found, this function returns 0.0f.

---

Calculate the maximum depth for all chunks in the destructible prims associated with the given paths.

- **Param paths [in]** - Array of USD paths to destructible prims.
- **Param numPaths [in]** - The length of the paths array.
- **Return** - The maximum chunk depth for all destructibles associated with the given paths. Returns 0 if no destructibles are found.

### Calculates what the view depth should be, factoring in internal override if set.

- **Param paths [in]** - Array of USD paths to destructible prims.
- **Param numPaths [in]** - The length of the paths array.

### Set the view depth for explode view functionality.

- **Param paths [in]** - Array of USD paths to destructible prims.
- **Param numPaths [in]** - The length of the paths array.
- **Param depth [in]** - Either a string representation of the numerical depth value, or “Leaves” to view leaf chunks.

### Set debug visualization info.

- **Param mode [in]** - The debug visualization mode.
- **Param value [in]** - The value associated with the debug visualization mode.

### Set the debug visualization mode & type.

If mode != debugVisNone, an anonymous USD layer is created which overrides the render meshes for blast objects which are being visualized.

- **Param mode [in]** - Supported modes: “debugVisNone”, “debugVisSelected”, “debugVisAll”
- **Param type [in]** - Supported types: “debugVisSupportGraph”, “debugVisMaxStressGraph”, “debugVisCompressionGraph”, “debugVisTensionGraph”, “debugVisShearGraph”, “debugVisBondPatches”
- **Return** - true iff a valid mode is selected.

### Debug Damage Functions

```cpp
void setDebugDamageParams(float amount, float impulse, float radius)
```

Set parameters for the debug damage tool in kit.

This is applied using Shift + B + (Left Mouse). A ray is cast from the camera position through the screen point of the mouse cursor, and intersected with scene geometry. The intersection point is used to find nearby destructibles to damage.

**Parameters:**
- **amount [in]** - The base damage to be applied to each destructible, as an acceleration in m/s^2.
- **impulse [in]** - An impulse to apply to rigid bodies within the given radius, in kg*m/s. (This applies to non-destructible rigid bodies too.)
- **radius [in]** - The distance in meters from the ray hit point to search for rigid bodies to affect with this function.

---

```cpp
void applyDebugDamage(const carb::Float3 *worldPosition, const carb::Float3 *worldDirection)
```

Apply debug damage at the position given, in the direction given. The damage parameters set by setDebugDamageParams will be used.

**Parameters:**
- **worldPosition [in]** - The world position at which to apply debug damage.
- **worldDirection [in]** - The world direction of the applied damage.

### Notice Handler Functions

These can be called at any time to enable or disable notice handler monitoring.
When enabled, use BlastUsdMonitorNoticeEvents to catch unbuffered Usd/Sdf commands. It will be automatically cleaned up on system shutdown if enabled.

#### Functions

- **blastUsdEnableNoticeHandlerMonitor()**
- **blastUsdDisableNoticeHandlerMonitor()**

### Destructible Path Utilities

These functions find destructible base or instance prims from any associated prim path.

#### Functions

- **getDestructibleBasePath(const char* path)**
  - **Param path [in]** - Any path associated with a destructible base prim.
  - **Return** - the destructible prim’s path if found, or nullptr otherwise.

## getDestructibleInstancePath

```cpp
const char *getDestructibleInstancePath(const char *path)
```

- **Param path [in]** - Any path associated with a destructible instance prim.
- **Return** - the destructible prim’s path if found, or nullptr otherwise.

## Blast SDK Cache

This function pushes the Blast SDK data that is used during simulation back to USD so it can be saved and then later restored in the same state. This is also the state that will be restored to when sim stops.

```cpp
void blastCachePushBinaryDataToUSD()
```

## Blast Stress

This function modifies settings used to drive stress calculations during simulation.

```cpp
bool blastStressUpdateSettings(
    const char *path,
    bool gravityEnabled,
    bool rotationEnabled,
    float residualForceMultiplier,
    const StressSolverSettings &settings
)
```

- **Param path [in]** - Any path associated with a destructible instance prim.
- **Param gravityEnabled [in]** - Controls if gravity should be applied to stress simulation of the destructible instance.
- **Param rotationEnabled [in]** - Controls if rotational acceleration should be applied to stress simulation of the destructible instance.
- **Param residualForceMultiplier [in]** - Multiplies the residual forces on bodies after connecting bonds break.
- **Param settings [in]** - Values used to control the stress solver.
- **Return** - true if stress settings were updated, false otherwise.
dev-1-2023-12-01_CHANGELOG.md
# [201.1.0-dev.8] - 2024-05-06
## Updated
- OM-123014 - Converters now take absolute usd file output path as input.

# [201.1.0-dev.7] - 2024-04-30
## Updated
- OM-122942 - Refactored to share code in omni.kit.converter.common

# [201.1.0-dev.6] - 2024-04-30
## Updated
- OM-124420 - Renamed cad_core to hoops_core

# [201.1.0-dev.5] - 2024-04-24
## Updated
- OM-116478 - Updated set_app_data to include client name / version

# [201.1.0-dev.4] - 2024-04-22
## Updated
- OM-123014 - Run scene optimizer as post-conversion task

# [201.1.0-dev.3] - 2024-04-19
## Updated
- OM-123008 - Remove converter name from method signature

# [201.1.0-dev.2] - 2024-04-16
## Updated
- OM-123571 - Update extension.toml to lock extension to Kit SDK version being used

# [201.1.0-dev.1] - 2024-04-09
## Updated
- OM-121673 - Update to 201.1.0, move connect-sdk and scene optimizer to omni.kit.converter.common

# [201.0.0-dev.10] - 2024-03-04
## Updated
- OM-121276 - Update to kit-kernel 106.0

# [201.0.0-dev.9] - 2024-02-23
## Updated
- OM-121276 - Update to kit-kernel 106.0

# [201.0.0-dev.8] - 2024-02-12
## Fixed
- **OM-109219** - Fix USD output path of DGN converter

# [201.0.0-dev.7] - 2024-02-09
## Fixed
- **OM-118646** - Use same kit-kernel version as Connect SDK

# [201.0.0-dev.6] - 2024-01-31
## Updated
- **OM-118567** - Updated keywords for improving searchability for CAD Converters

# [201.0.0-dev.5] - 2024-02-08
## Updated
- **OM-118646** - Update to DGN converter that uses Connect SDK

# [201.0.0-dev.4] - 2024-02-06
## Updated
- **OM-109082** - Added error when no USD file was created

# [201.0.0-dev.3] - 2024-01-12
## Updated
- **OMFP-118316** - Update Connect SDK to release 0.6.0

# [201.0.0-dev.2] - 2023-12-12
## Fixed
- **OMFP-116513** - fix etm - use explicit pre-release

# [201.0.0-dev.1] - 2023-12-01
## Fixed
- **OM-115742** - etm-failure-fix and merge release to master

# [200.1.1-rc.6] - 2023-12-11
## Updated
- **OM-116923**: Documentation for using the DGN Converter through the service extension.

# [200.1.1-rc.5] - 2023-11-28
## Fixed
- Hardcode `--ext-folder`.

# [200.1.1-rc.4] - 2023-11-28
## Fixed
- Update search path for `--ext-folder`.
- Add `--allow-root`.

# [200.1.1-rc.3] - 2023-11-28
## Fixed
- Typo in progress facility

# [200.1.1-rc.2] - 2023-11-22
## Changed
- **OMFP-3960** - Update dependency version in extension.toml

# [200.1.1-rc.1] - 2023-11-10
## Changed
- **OM-114631** - Setup DGN converter to run as a subprocess

# [200.1.1-rc.0] - 2023-11-13
## Changed
- OM-114367 - Updated CAD Converter deps. Bump all extension versions

## [0.1.9-rc.2] - 2023-09-06
### Changed
- Updated tests with dgn_core service
- Added default json file

## [0.1.7] - 2023-04-02
### Changed
- Update omni.kit.converter.cad_core
- Added response model

## [0.1.6] - 2023-02-24
### Changed
- Update omni.kit.converter.cad_core deps for flag (retry) + Bump version

## [0.1.5] - 2023-02-23
### Changed
- Update omni.kit.converter.cad_core deps for flag + Bump version

## [0.1.4] - 2023-02-18
### Changed
- set exact version to true

## [0.1.3] - 2023-02-18
### Changed
- Added version lock to omni.kit.converter.cad_core v0.1.0-alpha (headless)

## [0.1.2] - 2023-02-18
### Changed
- Added version lock to omni.kit.converter.cad_core v0.1.1-alpha (headless)

## [0.1.1] - 2023-02-18
### Changed
- Added version lock to omni.kit.converter.cad_core v0.1.0-alpha (headless)

## [0.1.0] - 2023-02-14
### Added
- Added initial version of the Extension.
develop.md
# Develop a Project After creating a new Project, the development phase begins. In this phase, you configure and use an assortment of tools and extensions, along with automated documentation features to fit the needs of your project. ## Sidebar As a reminder, you can find additional documentation in the left-hand menu, such as: > - [Kit Manual](http://docs.omniverse.nvidia.com/kit/docs/kit-manual/latest/guide/kit_overview.html) for extensive information about programming using the Kit SDK. > - [Extensions](../../../../extensions/latest/index.html) for an extensive list of extensions you can include as dependencies in your project. Having followed the methods outlined in the [Create](../create/create.html) section, you’ve produced configuration files and established a folder setup. Now you will transform this set of default files to enable new functionality. This stage of Omniverse Project Development is undeniably the most in-depth, offering numerous paths to achieve desired outcomes as a developer. In this section, we’ll discuss tools and resources for project development, be it crafting an [Extension](../../common/glossary-of-terms.html#term-Extension), [Application](../../common/glossary-of-terms.html#term-Application), [Service](../../common/glossary-of-terms.html#term-Service), or [Connector](../../common/glossary-of-terms.html#term-Connector). ## Configure TOML Files Both Omniverse Applications and Extensions fundamentally rely on a configuration file in [TOML](../../common/glossary-of-terms.html#term-TOML) format. This file dictates dependencies and settings that the Kit SDK loads and executes. Through this mechanism, Applications can include Extensions, which may further depend on other Extensions, forming a dependency tree. For details on constructing this tree and the corresponding settings for each Extension, it’s essential to understand the specific configuration files. Applications utilize the .kit file, while Extensions are defined using .toml files. 
For more on each type of configuration file, please refer to the tabs above. ### Extension (extension.toml) Requirements: - Understanding [TOML](../../common/glossary-of-terms.html#term-TOML) file format. - Text Editor ([VS Code](../../common/glossary-of-terms.html#term-VS-Code) recommended) Extensions can contain many types of assets, such as images, python files, data files, C++ code/header files, documentation, and more. However, one thing all Extensions have in common is the **extension.toml** file. Extension.toml should be located in the `./config` folder of your project so that it can be found by various script tools. Here is an example extension.toml file that can be found in the Advanced Template Repository: ```toml [package] version = "1.0.0" title = "Simple UI Extension Template" description = "The simplest python extension example. Use it as a starting point for your extensions." # One of categories for UI. category = "Example" # Keywords for the extension keywords = ["kit", "example"] # Path (relative to the root) or content of readme markdown file for UI. readme = "docs/README.md" # Path (relative to the root) of changelog changelog = "docs/CHANGELOG.md" # URL of the extension source repository. repository = "https://github.com/NVIDIA-Omniverse/kit-project-template" # Icon to show in the extension manager icon = "data/icon.png" # Preview to show in the extension manager preview_image = "data/preview.png" [dependencies] "omni.kit.uiapp" = {} [[python.module]] name = "my.hello.world" ``` Here we will break this down… ```toml [package] version = "1.0.0" ``` This sets the version of your extension. It is critical that this version is set any time you produce a new release of your extension, as this version is most often used to differentiate releases of extensions in registries and databases. As a best practice, it is useful to maintain semantic versioning. It is also best practice to ensure that you document changes you have made to your code. 
See the Documentation section for more information.

```toml
title = "Simple UI Extension Template"
description = "The simplest python extension example. Use it as a starting point for your extensions."
category = "Example"
keywords = ["kit", "example"]
```

The `title` and `description` can be used in registries and publishing destinations to give users more information on what your extension is used for. The `category` sets an overall filter for where this extension should appear in various UIs. The `keywords` property lists an array of searchable, filterable attributes for this extension.

```toml
[dependencies]
"omni.kit.uiapp" = {}
```

This section is critical to the development of all aspects of your project. The dependencies section in your toml files specifies which extensions are required. As a best practice, you should ensure that you use the smallest list of dependencies that still accomplishes your goals. When setting dependencies for an extension, make sure its dependency list contains only the extensions that it actually requires.

The brackets `{}` in the dependency line allow for parameters such as the following:

- `order=[ordernum]` allows you to define by signed integer which order the dependencies are loaded. Lower integers are loaded first. (e.g. `order=5000`)
- `version=["version ID"]` lets you specify which version of an extension is loaded. (e.g. `version="1.0.1"`)
- `exact=true` (default is false) - if set to true, the parser will use only an exact match for the version, not just a partial match.

The `[[python.module]]` section should contain one or more named Python modules that are used by the extension. The name is expected to also match a folder structure within the extension path; in this example, the extension named `my.hello.world` would have the following path: `my/hello/world`.

These are the minimum required settings for extensions and apps.
We will discuss more settings later in the Dev Guide, and you can find plenty of examples of these configuration files in the Developer Reference sections of the menu.

### Application (.kit)

Requirements:

- Understanding TOML file format.
- Text Editor (VS Code recommended)

Applications are not much different than extensions. It is assumed that an application is the “root” of a dependency tree, and it often has settings related to the behavior of a particular workflow. Regardless, an App has the same TOML file configuration as extensions, but an App’s TOML file is called a `.kit` file.

`.kit` files should be located in the `./source/apps` folder of your project so that they can be found by various script tools.

Here is an example kit file that provides some of the minimum settings you’ll need. Additional settings and options can be found later:

```toml
[package]
version = "1.0.0"
title = "My Minimum App"
description = "A very simple app."

# One of categories for UI.
category = "Example"

# Keywords for the extension
keywords = ["kit", "example"]

# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"

# Path (relative to the root) of changelog
changelog = "docs/CHANGELOG.md"

# URL of the extension source repository.
repository = "https://github.com/NVIDIA-Omniverse/kit-project-template"

# Icon to show in the extension manager
icon = "data/icon.png"

# Preview to show in the extension manager
preview_image = "data/preview.png"

[dependencies]
"omni.kit.uiapp" = {}
```

Here we will break this down…

```toml
[package]
version = "1.0.0"
```

This sets the version of your extension or app. It is critical that this version is set any time you produce a new release, as this version is most often used to differentiate releases of extensions/apps in registries and databases. As a best practice, it is useful to maintain semantic versioning.

It is also best practice to ensure that your docs document the changes you have made for each version you’ve released.

The `title` and `description` can be used in registries and publishing destinations to give users more information on what your app or extension is used for. The `category` sets an overall filter for where this project should appear in various UIs. The `keywords` property lists an array of searchable, filterable attributes for this extension.

```toml
[dependencies]
"omni.kit.uiapp" = {}
```

### Dependencies

This section is critical to the development of all aspects of your project. The dependencies section in your toml files specifies which extensions are to be used by the app. As a best practice, you should ensure that you use the smallest list of dependencies that still accomplishes your goals. And, in extensions especially, you should only add dependencies which THAT extension requires.

The brackets `{}` in the dependency line allow for parameters such as the following:

- `order=[ordernum]` allows you to define by signed integer which order the dependencies are loaded. Lower integers are loaded first. (e.g. `order=5000`)
- `version=["version ID"]` lets you specify which version of an extension is loaded. (e.g. `version="1.0.1"`)
- `exact=true` (default is false) - if set to true, the parser will use only an exact match for the version, not just a partial match.

These are the minimum required settings for Apps. We will discuss more settings later in the Dev Guide, and you can find plenty of examples of these configuration files in the Developer Reference sections of the menu.

## Available Extensions

Virtually all user-facing elements in an Omniverse Application, such as Omniverse USD Composer or Omniverse USD Presenter, are created using Extensions. The very same extensions used in Omniverse Applications are also available to you for your own development.
The number of extensions provided by both the Community and NVIDIA is continually growing to support new features and use cases. However, a core set of extensions is provided alongside the Omniverse Kit SDK. These ensure basic functionality for your Extensions and Applications, including:

- Omniverse UI Framework: A UI toolkit for creating beautiful and flexible graphical user interfaces within extensions.
- Omni Kit Actions Core: A framework for creating, registering, and discovering programmable Actions in Omniverse.
- Omni Scene UI: Provides tooling to create great-looking 3d manipulators and 3d helpers with as little code as possible.
- And more.

A list of available Extensions can be found via API Search.

## Documentation

If you are developing your project using Repo Tools, you also have the ability to create documentation from source files to be included in your build. This powerful feature helps automate html-based documentation from human-readable .md files. You can refer to the `repo docs -h` command to see more information on the docs tool and its parameters.

By running

```
repo docs
```

you will generate, in the `_build/docs/[project_name]/latest/` folder, a set of files which represent the html version of your source documentation. The “home page” for your documentation will be the `index.html` file in that folder.

You can find the latest information by reading the Omniverse Documentation System.

> **Note**
> You may find that when running `repo docs` you receive an error message instead of the build proceeding. If this is the case, it is likely that you are either using a project that does not contain the “docs” tool OR that your `repo.toml` file is not set up correctly. Please refer to the repo tools documentation linked to above for more information.

## Additional Documentation

- Script Editor
- Code Samples
- Repo Tools
DeveloperReference.md
# Developer Reference

OmniGraph development can be done by users with a wide variety of programming proficiency. A basic familiarity with the Python scripting language is enough to get you started. If you know how to create optimized CUDA code for high-throughput machine learning data analysis, we’ve got you covered there too.

You can start off with some basic [Naming Conventions](Conventions.html#omnigraph-naming-conventions) that let you easily recognize the various pieces of OmniGraph. While you are free to set up your extension in any way you wish, if you follow the [Directory Structure](DirectoryStructure.html#omnigraph-directory-structure) then some LUA utilities will help keep your `premake5.lua` file small.

## Working In Python

OmniGraph supports development of nodes implemented in Python, Commands that modify the graph in an undoable way, Python bindings to our C++ ABI, and a general Python scripting API. See the details in the [Python Nodes and Scripts](PythonScripting.html#omnigraph-python-scripting) page.

See also the [OGN Code Samples - Python](ogn/ogn_code_samples_python.html#ogn-code-samples-py) for examples of how to access different types of data within a node.

## Working In C++

OmniGraph supports development of nodes implemented in C++, as well as an extensive ABI for accessing data at the low level.

See also the [OGN Code Samples - C++](ogn/ogn_code_samples_cpp.html#ogn-code-samples-cpp) for examples of how to access different types of data within a node.

## Implementation Details

The architecture and some of the basic components of OmniGraph can be seen in the [OmniGraph Architecture](Architecture.html#omnigraph-architecture) description.

OmniGraph uses USD as its persistent storage for compute parameters and results. The details of how this USD data corresponds to OmniGraph data can be seen in the [OmniGraph and USD](Usd.html#omnigraph-and-usd) page.
All of the details regarding the .ogn format can be found in the [Node Generator](ogn/Overview.html#omnigraph-ogn) page.

## Action Graph

Action Graph is a type of OmniGraph with unique features that can be used in custom nodes. See Action Code Samples - C++ and Action Graph Code Samples - Python for code examples.

## Compound Nodes

See Compound Nodes for details about compound nodes; specifically how they are represented in USD, and how to work with them using python.
dgn-converter-config-file-inputs_Overview.md
# Overview — omni.kit.converter.dgn_core 201.1.0-dev.8 documentation

## Overview

`omni.kit.converter.dgn_core` uses the ODA Kernel and Drawings SDKs to convert the DGN data format to USD. When this extension loads, it will register itself with the CAD Converter service (`omni.services.convert.cad`) if it is available.

The resulting USD file from the DGN Converter prepends names of DGN levels to prims. This allows for quick search by users to find geometry belonging to desired converted levels.

## DGN CONVERTER CONFIG FILE INPUTS:

Conversion options are configured by supplying a JSON file. Below are the available configuration options.

### JSON Converter Settings:

**Format**: “setting name” : default value

```json
"sConfigFilePath" : "C:/test/sample_config.json"
```

Configuration file path.

```json
"dSurfaceTolerance" : 0.2
```

Sets the maximum distance (surface tolerance) between the tessellated mesh and the source surface. Limits of value are [0,1].

```json
"iTessLOD" : 2
```

Preset level of detail (LOD) values to provide to ODA API for converting solids and surfaces into tessellated meshes.

- `0` = ExtraLow, SurfaceTolerance = 1.0
- `1` = Low, SurfaceTolerance = 0.1
- `2` = Medium, SurfaceTolerance = 0.01
- `3` = High, SurfaceTolerance = 0.001
- `4` = ExtraHigh, SurfaceTolerance = 0.0001

```json
"bOptimize" : true
```

Flag to invoke USD scene optimization.

```json
"bConvertHidden" : true
```

If true, convert hidden DGN elements but set them to invisible; else, skip hidden elements.

```json
"bHideLevelsByList" : true
```

Flag to hide DGN levels by name.

```json
"hiddenLevels" : ["default", "customLayer1"]
```

Array of level names that contain the name of the custom DGN level and a flag to hide (if true) the levels after conversion.

```json
"bImportAttributesByList" : true
```

Flag to export DGN custom properties and convert to DGN attributes.

```json
"attributes" : [
    { "name" : "myAttribute1", "converted_name" : "myAttribute1_foobar" },
    { "name" : "myAttribute2", "converted_name" : "myAttribute2_foobar" }
],
```

Array of attribute objects that contain the name of the custom DGN property and the desired name for the converted USD attribute.

### Full `sample_config.json`:

```json
{
    "bOptimize" : true,
    "iTessLOD" : 2,
    "dSurfaceTolerance" : 0.2,
    "bConvertHidden" : true,
    "bHideLevelsByList" : true,
    "bImportAttributesByList" : true,
    "attributes" : [
        { "name" : "myAttribute1", "converted_name" : "myAttribute1_foobar" },
        { "name" : "myAttribute2", "converted_name" : "myAttribute2_foobar" }
    ],
    "hiddenLevels" : ["default", "customLayer1"]
}
```
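Before handing such a file to the converter, it can be loaded and sanity-checked with plain Python. The validation function below is an illustrative sketch (it is not part of the extension); the bounds check mirrors the documented [0,1] limit on `dSurfaceTolerance` and the 0-4 range of `iTessLOD`:

```python
import json

SAMPLE_CONFIG = """
{
    "bOptimize" : true,
    "iTessLOD" : 2,
    "dSurfaceTolerance" : 0.2,
    "bConvertHidden" : true,
    "bHideLevelsByList" : true,
    "bImportAttributesByList" : true,
    "attributes" : [
        { "name" : "myAttribute1", "converted_name" : "myAttribute1_foobar" },
        { "name" : "myAttribute2", "converted_name" : "myAttribute2_foobar" }
    ],
    "hiddenLevels" : ["default", "customLayer1"]
}
"""

def validate_dgn_config(config: dict) -> list:
    """Return a list of problems found in a DGN converter config (illustrative only)."""
    problems = []
    tol = config.get("dSurfaceTolerance")
    if tol is not None and not (0.0 <= tol <= 1.0):
        problems.append("dSurfaceTolerance must be in [0, 1]")
    lod = config.get("iTessLOD")
    if lod is not None and lod not in (0, 1, 2, 3, 4):
        problems.append("iTessLOD must be one of 0-4")
    if config.get("bHideLevelsByList") and not config.get("hiddenLevels"):
        problems.append("bHideLevelsByList is set but hiddenLevels is empty")
    return problems

config = json.loads(SAMPLE_CONFIG)
print(validate_dgn_config(config))  # [] -> the sample config is self-consistent
```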
dgn-converter_Overview.md
# DGN Converter

## Overview

The DGN Converter extension enables conversion of the DGN file format to USD. USD Explorer includes the DGN Converter extension enabled by default.

## Supported CAD file formats

The following file formats are supported by DGN Converter:

- DGN (`*.DGN`)

> Note: The file formats *.fbx, *.obj, *.gltf, *.glb, *.lxo, *.md5, *.e57 and *.pts are supported by Asset Converter and also available by default.

> Note: If expert tools such as Creo, Revit or Alias are installed, we recommend using the corresponding connectors. These provide more extensive options for conversion.

> Note: CAD Assemblies may not work when converting files from Nucleus. When converting assemblies with external references we recommend either working with local files or using Omniverse Drive.

## Converter Options

This section covers options for configuring the conversion of DGN files to USD.

### Surface Tolerance

This is the maximum distance between the tessellated mesh and the original solid/surface. Please refer to Open Design Alliance’s webpage here. The more precise the value (e.g., 0.00001), the more triangles are generated for the mesh. A field is provided when selecting a DGN file. The minimum and maximum values are 0 and 1. If a value of 0 is provided, then the surface tolerance of an object is calculated as the diagonal of its extents multiplied by 0.025.

## Related Extensions

These related extensions make up the DGN Converter. This extension provides import tasks to the extensions through their interfaces. The DGN Core extension is launched and provided configuration options through a subprocess to avoid library conflicts with those loaded by the other converters.

### Core Converter

- DGN Core: `omni.kit.converter.dgn_core:Overview`

### Services

- CAD Converter Service: `omni.services.convert.cad:Overview`

### Utils

- Converter Common: `omni.kit.converter.common:Overview`
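The zero-value fallback described under Surface Tolerance above is simple arithmetic: when the user passes 0, the tolerance becomes the diagonal of the object’s extents times 0.025. A minimal sketch of that rule (the function name and the extents tuple are hypothetical, not part of the converter’s API):

```python
import math

def effective_surface_tolerance(requested, extents):
    """Illustrative sketch of the documented zero-value fallback, not converter code."""
    if requested == 0.0:
        # Diagonal of the object's extents, scaled by 0.025.
        return math.sqrt(sum(e * e for e in extents)) * 0.025
    return requested

# A 3 x 4 x 12 box has a diagonal of 13, so the fallback tolerance is ~0.325.
print(effective_surface_tolerance(0.0, (3.0, 4.0, 12.0)))
```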
dictionary_settings.md
# Dictionaries and Settings Settings is a generalized subsystem designed to provide a simple to use interface to Kit’s various subsystems, which can be automated, enumerated, serialized and so on. It is accessible from both C++ and scripting bindings such as Python bindings. `carb.settings` is a Python namespace (and, coincidentally, a C++ plugin name) for the Settings subsystem. Settings uses `carb.dictionary` under the hood, and is effectively a singleton dictionary with a specialized API to streamline access. `carb.dictionary` is a Dictionary subsystem, which provides functionality to work with the data structure type known as dictionary, associative array, map, and so on. ## Dictionaries For the low-level description of the design and general principles, please refer to the Carbonite documentation for the `carb.dictionary` interfaces. ## Settings As mentioned above, the settings subsystem is using `carb.dictionary` under the hood, and to learn more about the low-level description of the design and general principles, please refer to the Carbonite documentation. On a higher level, there are several important principles and guidelines for using settings infrastructure, and best practices for using settings within Omniverse Kit. ### Default values Default values need to be set for settings at the initialization stage of the plugin, and in the extension configuration file. A rule of thumb is that no setting should be read when there is no value for it. As always, there are exceptions to this rule, but in the vast majority of cases, settings should be read after the setting owner sets a default value for this particular setting. ### Notifications To ensure optimal performance, it is recommended to use notifications instead of directly polling for settings, to avoid the costs of accessing the settings backend when the value didn’t change. 
**DON’T**: This is an example of polling in a tight loop, and it is **not recommended** to do things this way:

```c++
while (m_settings->get<bool>("/snippet/app/isRunning"))
{
    doStuff();
    // Stop the loop via settings change
    m_settings->set("/snippet/app/isRunning", false);
}
```

**DO**: Instead, use the notification APIs, and available helpers that simplify the notification subscription code, to reduce the overhead significantly:

```c++
carb::settings::ThreadSafeLocalCache<bool> valueTracker;
valueTracker.startTracking("/snippet/app/isRunning");

while (valueTracker.get())
{
    doStuff();
    // Stop the loop via settings change
    m_settings->set("/snippet/app/isRunning", false);
}

valueTracker.stopTracking();
```

With the bool value, getting and setting the value is cheap, but in cases of more complicated types, e.g. string, marking and clearing dirty marks could be used in the helper. In case a helper is not sufficient for the task at hand, it is always possible to use the settings API directly, via functions such as `subscribeToNodeChangeEvents` / `subscribeToTreeChangeEvents` and `unsubscribeToChangeEvents`, to achieve what’s needed with greater flexibility.

## Settings structure

Settings are intended to be easily tweakable, serializable and human readable. One of the use-cases is automatic UI creation from the settings snapshot to help users view and tweak settings at run time.

**DO**: Simple and readable settings like `/app/rendering/enabled`

**DON’T**: Internal settings that don’t make sense to anyone outside the core developer group, things like:

```c++
/component/modelArray/0=23463214
/component/modelArray/1=54636715
/component/modelArray/2=23543205
...
/component/modelArray/100=66587434
```

## Reacting to and consuming settings

Ideally settings should be monitored for changes and plugin/extensions should be reacting to the changes accordingly.
But exceptions are possible, and in those cases the settings changes should still be monitored, and the user should be given a warning that the change to the setting is not going to affect the behavior of that particular system.

## Combining API and settings

Often there are at least two ways to modify behavior: via a designated API function call, or via changing the corresponding setting. The question is how to reconcile these two approaches. One way to address this problem is for API functions to only change settings, while the core logic tracks settings changes and reacts to them. Never change the core logic value directly when a corresponding setting value is present. By adding a small detour into the settings subsystem from API calls, you can make sure that the value stored in the core logic and the corresponding setting value are never out of sync.
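The "API functions only change settings, core logic reacts to changes" pattern can be sketched in plain Python. This is a minimal stand-in for illustration only, not the actual `carb.settings` API: the `Settings` class, its method names, and the `Renderer` class are all invented here (only the `/app/rendering/enabled` path comes from the guidelines above).

```python
# Minimal stand-in illustrating "API writes settings, core logic reacts".
# NOT the carb.settings API; it only mimics the shape of the approach.

class Settings:
    """A tiny settings store with change notification."""

    def __init__(self):
        self._values = {}
        self._subscribers = {}  # path -> list of callbacks

    def set_default(self, path, value):
        # Owners set defaults first; reads should only happen after this.
        self._values.setdefault(path, value)

    def get(self, path):
        return self._values[path]

    def set(self, path, value):
        changed = self._values.get(path) != value
        self._values[path] = value
        if changed:
            for callback in self._subscribers.get(path, []):
                callback(value)

    def subscribe(self, path, callback):
        self._subscribers.setdefault(path, []).append(callback)


class Renderer:
    """Core logic that reacts to setting changes instead of being mutated directly."""

    PATH = "/app/rendering/enabled"

    def __init__(self, settings):
        self.enabled = False
        settings.set_default(self.PATH, False)   # default before any read
        settings.subscribe(self.PATH, self._on_changed)
        self._settings = settings

    def _on_changed(self, value):
        self.enabled = value  # the single point where core state is updated

    def set_enabled(self, value):
        # The API call only writes the setting; the subscription updates core
        # state, so setting and core value can never go out of sync.
        self._settings.set(self.PATH, value)


settings = Settings()
renderer = Renderer(settings)
renderer.set_enabled(True)           # API route
settings.set(Renderer.PATH, False)   # settings route - both stay in sync
```

Either route ends up going through the settings store, so observers, serialization, and UI all see a consistent value.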
directories.md
# Directories

- [8fa04669143f4cb0](#dir-0612118555fae677dced868d63781571)
- [8fa04669143f4cb0/_build](#dir-8a2f7be843a233509bb1bc1ed4f4bc15)
- [8fa04669143f4cb0/_build/target-deps](#dir-deb636f69a2bc2a7bceab692952225ef)
- [8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release](#dir-1eb887b6b7b0977ac20ac43bf8332669)
- [8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release/hoops_exchange_cad_converter](#dir-ec03f3037ed32d3c17f34b738179950d)
- [8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release/hoops_exchange_cad_converter/include](#dir-36e3145d041d1326ad902f318aa968b8)
- [8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release/hoops_exchange_cad_converter/include/hoops_reader](#dir-94d92ab70a1a4d71afe8628ac725ddc2)
DirectoryStructure.md
# Directory Structure

It is advantageous to consider nodes as a separate type of thing and structure your directories to make them easier to find. While it's not required in order to make the build work, it's recommended in order to keep the location of files consistent. The standard Kit extension layout has these directories by default:

```
omni.my.feature/
    bindings/       Files related to Python bindings of your C++
    config/         extension.toml configuration file
    docs/           index.rst explaining your extension
    plugins/        C++ code used by your extension
    python/
        __init__.py
        extension.py = Imports of your bindings and commands, and an omni.ext.IExt object for startup/shutdown
    scripts/        Python code used by your extension
```

The contents of your `__init__.py` file should expose the parts of your Python code that you wish to make public, including some boilerplate to register your extension and its nodes. For example, if you have two scripts for general use in a `utility.py` file then your `__init__.py` file might look like this:

```python
"""Public interface for my.extension"""
from . import extension
from . import ogn
from .scripts.utility import my_first_useful_script
from .scripts.utility import my_second_useful_script
```

The C++ node files (OgnSomeNode.ogn and OgnSomeNode.cpp) will live in a top-level `nodes/` directory and the Python ones (OgnSomePythonNode.ogn and OgnSomePythonNode.py) go into a `python/nodes/` subdirectory:

```
omni.my.feature/
    bindings/
    config/
    docs/
    nodes/
        OgnSomeNode.ogn
        OgnSomeNode.cpp
    plugins/
    python/
        nodes/
            OgnSomePythonNode.ogn
            OgnSomePythonNode.py
```

If your extension has a large number of nodes you might also consider adding extra subdirectories to keep them together:

```
omni.my.feature/
    ...
    nodes/
        math/
            OgnMathSomeNode.ogn
            OgnMathSomeNode.cpp
        physics/
            OgnPhysicsSomeNode.ogn
            OgnPhysicsSomeNode.cpp
        utility/
            OgnUtilitySomeNode.ogn
            OgnUtilitySomeNode.cpp
    ...
```

**Tip**
Although any directory structure can be used, using this particular structure lets you take advantage of the predefined build project settings for OmniGraph nodes, and makes it easier to find files in both familiar and unfamiliar extensions.
dir_8fa04669143f4cb0.md
# 8fa04669143f4cb0

## Directories

- _build
dir_8fa04669143f4cb0__build.md
# 8fa04669143f4cb0/_build

## Directories

- [target-deps](dir_8fa04669143f4cb0__build_target-deps.html#dir-deb636f69a2bc2a7bceab692952225ef)
dir_8fa04669143f4cb0__build_target-deps.md
# 8fa04669143f4cb0/_build/target-deps

## Directories

- [hoops_exchange_cad_converter_release](dir_8fa04669143f4cb0__build_target-deps_hoops_exchange_cad_converter_release.html#dir-1eb887b6b7b0977ac20ac43bf8332669)
dir_8fa04669143f4cb0__build_target-deps_hoops_exchange_cad_converter_release.md
# 8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release

## Directories

- [hoops_exchange_cad_converter](dir_8fa04669143f4cb0__build_target-deps_hoops_exchange_cad_converter_release_hoops_exchange_cad_converter.html#dir-ec03f3037ed32d3c17f34b738179950d)
dir_8fa04669143f4cb0__build_target-deps_hoops_exchange_cad_converter_release_hoops_exchange_cad_converter.md
# 8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release/hoops_exchange_cad_converter

## Directories

- **include**
dir_8fa04669143f4cb0__build_target-deps_hoops_exchange_cad_converter_release_hoops_exchange_cad_converter_include.md
# 8fa04669143f4cb0/_build/target-deps/hoops_exchange_cad_converter_release/hoops_exchange_cad_converter/include

## Directories

- [hoops_reader](#)
dir_omni.md
# omni

## Directories

- avreality
dir_omni_avreality.md
# omni/avreality

## Directories

- rain
dir_omni_avreality_rain.md
# omni/avreality/rain

## Files

- **IPuddleBaker.h**
- **IWetnessController.h**
- **PuddleBaker.h**
- **WetnessController.h**
documentation_index.md
# Omniverse USD Resolver

This is a USD plugin that allows for working with files in Omniverse.

## Documentation

The latest documentation can be found at

## Getting

You can get the latest build from Packman. There are separate packages for each usd flavor, python version, and platform. They are all named:

omni_usd_resolver.{usd_flavor}.{python_flavor}.{platform}

usd_flavor is one of:

- nv-20_08
- nv-21_11
- nv-22_05
- nv-22_11
- pxr-20_08
- pxr-21_08
- pxr-21_11
- 3dsmax-21_11
- 3dsmax-22_11
- 3dsmax-23_11
- maya-21_11
- maya-22_11
- maya-23_11
- (see generate_redist_deps.py for the full list)

python_flavor is one of:

- nopy
- py37
- py38
- py39
- py310

platform is one of:

- windows-x86_64
- linux-x86_64
- linux-aarch64

All packages use the same versioning scheme:

```
{major}.{minor}.{patch}
```

## USD & Client Library

The package includes `redist.packman.xml`, which points to the versions of USD and the Omniverse Client Library that this plugin was built against. You can include it in your own packman.xml file like this:

```xml
<project toolsVersion="5.0">
  <import path="../_build/target-deps/omni_usd_resolver/deps/redist.packman.xml" />
  <dependency name="usd_debug" linkPath="../_build/target-deps/usd/debug" />
  <dependency name="usd_release" linkPath="../_build/target-deps/usd/release" />
  <dependency name="omni_client_library" linkPath="../_build/target-deps/omni_client_library" />
</project>
```

# Initializing

You must either copy the omni_usd_resolver plugin to the default USD plugin location, or register the plugin location at application startup using `PXR_NS::PlugRegistry::GetInstance().RegisterPlugins`. Be sure to package both the library (.dll or .so) and the "plugInfo.json" file, and be sure to keep the folder structure the same for the "plugInfo.json" file. It should look like this:

- omni_usd_resolver.dll or omni_usd_resolver.so
- usd/omniverse/resources/plugInfo.json

If you use `RegisterPlugins`, provide it the path to the "resources" folder.
Otherwise, you can copy the entire 'debug' or 'release' folder into the standard USD folder structure.

# Live Mode

In order to send/receive updates you must:

1. `#include <OmniClient.h>` (from the client library)
2. Create or open a ".live" file on an Omniverse server
3. Call `omniClientLiveProcess();` periodically

For "frame based" applications, you can safely just call `omniClientLiveProcess` inside your main loop. For event based applications, you can register a callback function using `omniClientLiveSetQueuedCallback` to receive a notification that an update is queued and ready to be processed. In either case, make sure that nothing (i.e., no other thread) is using the USD library when you call `omniClientLiveProcess`, because it will modify the layers, and that is not thread safe.

# Contents

- [C API](_build/docs/usd_resolver/latest/usd_resolver_api.html)
- [Python API](docs/python.html)
- [Changes](docs/changes.html)

## Technical

- [Technical Overview](docs/technical-overview.html)
- [OmniUsdResolver Overview](docs/resolver.html)
- [OmniUsdResolver Details](docs/resolver-details.html)
- [OmniUsdWrapperFileFormat Overview](docs/wrapper-file-format.html)
- [OmniUsdLiveFileFormat Overview](docs/live-layers.html)
- [OmniUsdLiveFileFormat (Multi-threaded) Overview](docs/live-layers-multithread.html)
- [Live Layer Details](docs/live-layers-details.html)
- [Live Layer Wire Format](docs/live-layers-wire-format.html)
- [Live Layer Data](docs/live-layers-data.html)
- [Client Library Live Functions](docs/omni-client-live.html)
documentation_Overview.md
# Overview

This extension is the gold standard for an extension that contains only OmniGraph Python nodes, without a build process to create the generated OmniGraph files. They will be generated at run-time when the extension is enabled.

## The Files

To use this template, first copy the entire directory into a location that is visible to the extension manager, such as `Documents/Kit/shared/exts`. You will end up with this directory structure. The highlighted lines should be renamed to match your extension, or removed if you do not want to use them.

```text
omni.graph.template.no_build/
    config/
        extension.toml
    data/
        icon.svg
        preview.png
    docs/
        CHANGELOG.md
        Overview.md
        README.md
        directory.txt
    ogn/
        nodes.json
    omni/
        graph/
            template/
                no_build/
                    __init__.py
                    _impl/
                        __init__.py
                        extension.py
                    nodes/
                        OgnTemplateNodeNoBuildPy.ogn
                        OgnTemplateNodeNoBuildPy.py
                    tests/
                        __init__.py
                        test_api.py
                        test_omni_graph_template_no_build.py
```

By convention the Python files are structured in a directory tree that matches a namespace corresponding to the extension name, in this case `omni/graph/template/no_build/`, which corresponds to the extension name *omni.graph.template.no_build*. You'll want to modify this to match your own extension's name.

The file `ogn/nodes.json` is usually a byproduct of the build process, but here it was written by hand. It contains a JSON list of all nodes implemented in this extension, with the description, version, extension owner, and implementation language for each node. It is used in the extension window as a preview of the nodes in the extension, so it is a good idea to provide this file with your extension, though it is not mandatory.

The convention of having implementation details of a module in the `_impl/` subdirectory is to make it clear to the user that they should not directly access anything in that directory, only what is exposed in the `__init__.py`.
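For illustration, a hand-written `ogn/nodes.json` of the kind described above might look roughly like this. The node name, description, and field values here are invented, and the exact schema is normally produced by the build process, so consult a build-generated `nodes.json` for the authoritative layout:

```json
{
    "nodes": {
        "omni.graph.template.no_build.ExampleNode": {
            "description": "Example node that echoes its input",
            "version": 1,
            "extension": "omni.graph.template.no_build",
            "language": "Python"
        }
    }
}
```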
## The Configuration

Every extension requires a `config/extension.toml` file with metadata describing the extension to the extension management system. Below is an annotated version of this file; the highlighted lines are the ones you should change to match your own extension.

```toml
# Main extension description values
[package]
# The current extension version number - uses Semantic Versioning (https://semver.org/spec/v2.0.0.html)
version = "2.3.1"

# The title of the extension that will appear in the extension window
# Longer description of the extension
# Authors/owners of the extension - usually an email by convention
# Category under which the extension will be organized
# Location of the main README file describing the extension for extension developers
# Location of the main CHANGELOG file describing the modifications made to the extension during development
# Location of the repository in which the extension's source can be found
# Keywords to help identify the extension when searching
# Image that shows up in the preview pane of the extension window
# Image that shows up in the navigation pane of the extension window - can be a .png, .jpg, or .svg
# Specifying this ensures that the extension is always published for the matching version of the Kit SDK
# Specify the minimum level for support

# Main module for the Python interface. This is how the module will be imported.
[[python.module]]
name = "omni.graph.template.no_build"

# Watch the .ogn files for hot reloading. Only useful during development as after delivery files cannot be changed.
[fswatcher.patterns]
include = ["*.ogn", "*.py"]
exclude = ["Ogn*Database.py"]

# Other extensions that need to load in order for this one to work
[dependencies]
"omni.graph" = {}        # For basic functionality and node registration
"omni.graph.tools" = {}  # For node type code generation

# Main pages published as part of documentation. (Only if you build and publish your documentation.)
[documentation]
pages = [
    "docs/Overview.md",
    "docs/CHANGELOG.md",
]

# Some extensions are only needed when writing tests, including those automatically generated from a .ogn file.
# Having special test-only dependencies lets you avoid introducing a dependency on the test environment when only
# using the functionality.
[[test]]
dependencies = [
    "omni.kit.test"  # Brings in the Kit testing framework
]
```

Everything in the `docs/` subdirectory is considered documentation for the extension.

- **README.md** The contents of this file appear in the extension manager window, so you will want to customize it. The location of this file is configured in the `extension.toml` file as the **readme** value.
- **CHANGELOG.md** It is good practice to keep track of changes to your extension so that users know what is available. The location of this file is configured in the `extension.toml` file as the **changelog** value.
- **Overview.md** This file is only needed when a documentation build process is run; when not running a build process it can be deleted.
- **directory.txt** This file can be deleted as it is specific to these instructions.

## The Node Type Definitions

You define a new node type using two files, examples of which are in the `nodes/` subdirectory. Tailor the definition of your node types for your computations. Start with the OmniGraph User Guide for information on how to configure your own definitions.

## Tests

While completely optional, it's always a good idea to add a few tests for your node to ensure that it works as you intend and continues to work when you make changes to it. The sample tests in the `tests/` subdirectory show you how you can integrate with the Kit testing framework to easily run tests on nodes built from your node type definition.

That's all there is to creating a simple node type!
You can now open your app, enable the new extension, and your sample node type will be available to use within OmniGraph. > **Note** > Although development is faster without a build process you are sacrificing discoverability of your node type. There will be no automated test or documentation generation, and your node types will not be visible in the extension manager. They will, however, still be visible in the OmniGraph editor windows. There will also be a small one-time performance price as the node type definitions will be generated the first time your extension is enabled.
DocumentingPython.md
# Documenting

This guide is for developers who write API documentation. To build documentation run:

```bash
repo docs
```

in the repo, and you will find the output under `_build/docs/carbonite/latest/`.

## Documenting Python API

The best way to document our Python API is to do so directly in the code. That way it's always extracted from a location where it's closest to the actual code and most likely to be correct. We have two scenarios to consider:

- Python code
- C++ code that is exposed to Python

For both of these cases we need to write our documentation in the Python Docstring format (see [PEP 257](https://www.python.org/dev/peps/pep-0257/) for background). In a perfect world we would be able to use exactly the same approach, regardless of whether the Python API was written in Python or coming from C++ code that is exposing Python bindings via pybind11. Our world is unfortunately not perfect here, but it's quite close; most of the approach is the same - we will highlight when a different approach is required for the two cases of Python code and C++ code exposed to Python.

Instead of using the older and more cumbersome reStructuredText Docstring specification, we have adopted the more streamlined [Google Python Style Docstring](http://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings) format. This is how you would document an API function in Python:

```python
from typing import Optional

def answer_question(question: str) -> Optional[str]:
    """This function can answer some questions.

    It currently only answers a limited set of questions so don't expect it to know everything.

    Args:
        question: The question passed to the function, trailing question mark is not necessary and
            casing is not important.

    Returns:
        The answer to the question or ``None`` if it doesn't know the answer.
""" if question.lower().startswith("what is the answer to life, universe, and everything"): return str(42) else: return None ``` After running the documentation generation system we will get this as the output (assuming the above was in a module named carb). There are a few things you will notice: 1. We use the [Python type hints](https://docs.python.org/3/library/typing.html) (introduced in Python 3.5) in the function signature so we don’t need to write any of that information in the docstring. An additional benefit of this approach is that many Python IDEs can utilize this information and perform type checking when programming against the API. Notice that we always do `from typing import ...` so we never have to prefix with `typing` namespace when referring to `List`, `Union`, etc. - **Using Docstrings**: Docstrings are essentially comments that describe what a function, class, or method does. They are written in triple quotes (`'''` or `"""`) and are placed at the beginning of the code block. For example, in Python, you might see something like this: ```python def function_name(arg1, arg2): ''' This is a docstring. It explains what the function does. ''' # function body ``` Docstrings can be accessed using the `__doc__` attribute of the function, class, or method. - **Using `reStructuredText`**: `reStructuredText` (reST) is a lightweight markup language used for documentation in the Python community. It is used to write the documentation for Python libraries and is also used in Sphinx, a documentation generator. Here's an example of how you might use reST in a docstring: ```python def function_name(arg1, arg2): ''' :param arg1: This is the first argument. :type arg1: int :param arg2: This is the second argument. :type arg2: str :returns: This function returns a tuple. :rtype: tuple ''' # function body ``` This format allows for detailed documentation of function parameters and return values. 
- **Using `Google Style`**: Google style docstrings are a specific format for writing docstrings that is popular in the Python community. They are similar to reST but have a more structured format. Here's an example:

```python
def function_name(arg1, arg2):
    '''
    This is a function that does something.

    Args:
        arg1 (int): The first argument.
        arg2 (str): The second argument.

    Returns:
        tuple: A tuple of results.
    '''
    # function body
```

Google style docstrings are often used with the `Sphinx` documentation generator and the `Napoleon` extension, which converts them into reST.

- **Using `numpy Style`**: Numpy style docstrings are another popular format for writing docstrings in the Python community. They are similar to Google style but have a different structure. Here's an example:

```python
def function_name(arg1, arg2):
    '''
    This is a function that does something.

    Parameters
    ----------
    arg1 : int
        The first argument.
    arg2 : str
        The second argument.

    Returns
    -------
    tuple
        A tuple of results.
    '''
    # function body
```

Numpy style docstrings are often used with the `Sphinx` documentation generator and the `Numpydoc` extension, which converts them into reST.

As before, we use type hints from `typing` but we don't use the typing syntax to attach them. We write:

```python
"""...

Attributes:
    module_variable (Optional[str]): This is important ...
"""
```

or

```python
module_variable = None
"""Optional[str]: This is important ..."""
```

But we **don't** write:

```python
from typing import Optional

module_variable: Optional[str] = None
"""This is important ..."""
```

This is because the last form (which was introduced in Python 3.6) is still poorly supported by tools - including our documentation system. It also doesn't work with Python bindings generated from C++ code using pybind11.

For instructions on how to document classes, exceptions, etc please consult the Sphinx Napoleon Extension Guide.
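Whichever style you adopt, the documentation generators all consume the same `__doc__` attribute. A small, self-contained example (standard library only; the function is invented for illustration) shows how the cleaned-up docstring can be inspected, using the same indentation normalization that PEP 257 describes:

```python
import inspect


def answer_question(question: str):
    """This function can answer some questions.

    Args:
        question: The question passed to the function.

    Returns:
        The answer, or ``None`` if it doesn't know.
    """
    return str(42) if "life" in question.lower() else None


# inspect.getdoc() strips the uniform indentation from the docstring,
# which is the same normalization documentation generators rely on.
doc = inspect.getdoc(answer_question)
print(doc.splitlines()[0])  # -> This function can answer some questions.
```

This is a handy way to preview exactly what Sphinx will see for a given function before running a full docs build.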
documenting_exts.md
# Documenting Extensions

This guide is for developers who write API documentation. To build the documentation, run:

```shell
repo.{sh|bat} docs
```

Add the `-o` flag to automatically open the resulting docs in the browser. If multiple projects of documentation are generated, each one will be opened.

Add the `--project` flag to specify a project and generate only those docs. Documentation generation can be slow for some modules, so this may be important to reduce iteration time when testing your docs. e.g.:

```shell
repo.bat docs --project kit-sdk
repo.bat docs --project omni.ui
```

Add the `-v` / `-vv` flags to repo docs invocations for additional debug information, particularly for low-level Sphinx events.

### Note

You must have successfully completed a debug build of the repo before you can build the docs for **Python**. This is due to the documentation being extracted from the `.pyd` and `.py` files in the `_build` folder. Run `build --debug-only` from the root of the repo if you haven't done this already.

After running `repo docs` in the repo, you will find the project-specific output under `_build/docs/{project}/latest`. The generated `index.html` is what the `-o` flag will launch in the browser if specified.

### Warning

Sphinx warnings will result in a non-zero exit code for repo docs, and will therefore fail a CI build. This means that it is important to maintain docstrings with the correct syntax (as described below) over the lifetime of a project.

## Documenting Python API

The best way to document our Python API is to do so directly in the code. That way it's always extracted from a location where it's closest to the actual code and most likely to be correct. We have two scenarios to consider:

- Python code
- C++ code that is exposed to Python

For both of these cases we need to write our documentation in the Python Docstring format (see [PEP 257](https://peps.python.org/pep-0257/) for background).
Our world is unfortunately not perfect here but it’s quite close; most of the approach is the same - we will highlight when a different approach is required for the two cases of Python code and C++ code exposed to Python. Instead of using the older and more cumbersome restructuredText Docstring specification, we have adopted the more streamlined Google Python Style Docstring format. This is how you would document an API function in Python: ```python from typing import Optional def answer_question(question: str) -> Optional[str]: """This function can answer some questions. It currently only answers a limited set of questions so don't expect it to know everything. Args: question: The question passed to the function, trailing question mark is not necessary and casing is not important. Returns: The answer to the question or ``None`` if it doesn't know the answer. """ if question.lower().startswith("what is the answer to life, universe, and everything"): return str(42) else: return None ``` After running the documentation generation system we will get this as the output (assuming the above was in a module named carb): There are a few things you will notice: 1. We use the Python type hints (introduced in Python 3.5) in the function signature so we don’t need to write any of that information in the docstring. An additional benefit of this approach is that many Python IDEs can utilize this information and perform type checking when programming against the API. Notice that we always do `from typing import ...` so that we never have to prefix with the `typing` namespace when referring to `List`, `Union`, `Dict`, and friends. This is the common approach in the Python community. 2. The high-level structure is essentially in four parts: - A one-liner describing the function (without details or corner cases), referred to by Sphinx as the “brief summary”. - A paragraph that gives more detail on the function behavior (if necessary). 
- An `Args:` section (if the function takes arguments; note that `self` is not considered an argument).
- A `Returns:` section (if the function returns something other than `None`).

Before we discuss the other bits to document (modules and module attributes), let's examine how we would document the very same function if it was written in C++ and exposed to Python using pybind11.

```cpp
m.def("answer_question", &answerQuestion, py::arg("question"),
      R"(
    This function can answer some questions.

    It currently only answers a limited set of questions so don't expect it to know everything.

    Args:
        question: The question passed to the function, trailing question mark is not necessary and
            casing is not important.

    Returns:
        The answer to the question or empty string if it doesn't know the answer.
)");
```

The outcome is identical to what we saw from the Python source code, except that we cannot optionally return a string in C++. The same docstring syntax rules must be obeyed because they will be propagated through the bindings. We want to draw your attention to the following:

1. pybind11 generates the type information for you, based on the C++ types. The `py::arg` object must be used to get properly named arguments into the function signature (see the pybind11 documentation) - otherwise you just get arg0 and so forth in the documentation.
2. Indentation and whitespace are key when writing docstrings. The documentation system is clever enough to remove uniform indentation. That is, as long as all the lines have the same amount of padding, that padding will be ignored and not passed on to the reStructuredText processor. Fortunately clang-format leaves this funky formatting alone - respecting the raw string qualifier. Sphinx warnings caused by non-uniform whitespace can be opaque (such as referring to nested blocks being ended without newlines, etc.)

Let's now turn our attention to how we document modules and their attributes.
We should of course only document modules that are part of our API (not internal helper modules) and only public attributes. Below is a detailed example: ```python """Example of Google style docstrings for module. This module demonstrates documentation as specified by the `Google Python Style Guide`. Docstrings may extend over multiple lines. Sections are created with a section header and a colon followed by a block of indented text. Example: Examples can be given using either the ``Example`` or ``Examples`` sections. Sections support any reStructuredText formatting, including literal blocks:: $ python example.py Section breaks are created by resuming unindented text. Section breaks are also implicitly created anytime a new section starts. Attributes: module_level_variable1 (int): Module level variables may be documented in either the ``Attributes`` section of the module docstring, or in an inline docstring immediately following the variable. Either form is acceptable, but the two should not be mixed. Choose one convention to document module level variables and be consistent with it. module_level_variable2 (Optional[str]): Use objects from typing, such as Optional, to annotate the type properly. module_level_variable4 (Optional[File]): We can resolve type references to other objects that are built as part of the documentation. This will link to `carb.filesystem.File`. Todo: * For module TODOs if you want them * These can be useful if you want to communicate any shortcomings in the module we plan to address .. _Google Python Style Guide: http://google.github.io/styleguide/pyguide.html """ module_level_variable1 = 12345 module_level_variable3 = 98765 """int: Module level variable documented inline. The type hint should be specified on the first line, separated by a colon from the text. This approach may be preferable since it keeps the documentation closer to the code and the default assignment is shown. 
A downside is that the variable will get alphabetically sorted among functions in the module so won't have
the same cohesion as the approach above."""

module_level_variable2 = None

module_level_variable4 = None
```

This is what the documentation would look like:

As we have mentioned, we should not mix the `Attributes:` style of documentation with inline documentation of attributes. Notice how `module_level_variable3` appears in a separate block from all the other attributes that were documented. It is even after the Todo section. Choose one approach for your module and stick to it. There are valid reasons to pick one style above the other, but don't cross the streams!

As before, we use type hints from `typing` but we don't use the typing syntax to attach them. We write:

```python
"""...

Attributes:
    module_variable (Optional[str]): This is important ...
"""
```

or

```python
module_variable = None
"""Optional[str]: This is important ..."""
```

But we **don't** write:

```python
from typing import Optional

module_variable: Optional[str] = None
"""This is important ..."""
```

This is because the last form (which was introduced in Python 3.6) is still poorly supported by tools - including our documentation system. It also doesn't work with Python bindings generated from C++ code using pybind11.

For instructions on how to document classes, exceptions, etc please consult the Sphinx Napoleon Extension Guide.

## Adding Extensions to the automatic-introspection documentation system

It used to be necessary to maintain a `./docs/index.rst` to write out automodule/autoclass/etc. directives, as well as to include hand-written documentation about your extensions. In order to facilitate rapid deployment of high-quality documentation out-of-the-box, a new system has been implemented.

> **Warning**
> If your extension's modules cannot be imported at documentation-generation time, they cannot be documented correctly by this system. Check the logs for warnings/errors about any failures to import, and any errors propagated.

In the Kit `repo.toml`, the `[repo_docs.projects."kit-sdk"]` section is responsible for targeting the old system, and the `[repo_docs.kit]` section is responsible for targeting the new one.

Opt your extension in to the new system by:

1. Adding the extension to the list of extensions.
2. In `./source/extensions/{ext_name}/docs/`, adding or writing an `Overview.md` if none exists. Users will land here first.
3. In `./source/extensions/{ext_name}/config/extension.toml`, adding all markdown files - except `README.md` - to an entry per the example below.

# Documentation Configuration

To configure the documentation, you need to add any extension dependencies that your documentation depends on, such as links or Sphinx ref-targets. This syntax follows the repo_docs tool's intersphinx syntax. The `deps` are a list of lists, where the inner list contains the name of the target intersphinx project, followed by the path to the folder containing that project's `objects.inv` file. HTTP links to websites that host their `objects.inv` file online, like Python's, will work as well, if discoverable at docs build time. Apart from web paths, this will only work for projects inside of the kit repo for now.

```toml
[documentation]
deps = [
    ["kit-sdk", "_build/docs/kit-sdk/latest"],
]
pages = [
    "docs/Overview.md",
    "docs/CHANGELOG.md",
]
```

The first item in the list will be treated as the "main page" for the documentation, and a user will land there first. Changelogs are automatically bumped to the last entry regardless of their position in the list.

# Dealing with Sphinx Warnings

The introspection system ends up introducing many more objects to Sphinx than previously, and in a much more structured way.
It is therefore extremely common to come across many as-yet-undiscovered Sphinx warnings when migrating to this new system. Here are some strategies for dealing with them.

## MyST-parser warnings

These are common as we migrate away from the RecommonMark/m2r2 markdown Sphinx extensions and towards MyST-parser, which is more extensible and more stringent. Common issues include:

1. Header-level warnings. MyST does not tolerate jumping from h1 directly to h3 without first passing through h2, for example.
2. Links which fail to match a reference. MyST will flag these to be fixed (consider it a QC check that your links are not broken).
3. Code block syntax. If the language of a code block cannot be automatically determined, a highlighting-failure warning may be emitted. Specify the language directly after the first backticks.
4. General markdown syntax. RecommonMark/m2r2 were more forgiving of syntax failures. MyST can raise warnings where they would not previously.

## Docstring syntax warnings

The biggest issue with the Sphinx `autodoc` extension's module introspection is that it is difficult to control which members to inspect, and doubly so when recursing or when imported members are being inspected. Therefore, it is **strongly advised** that your Python modules define `__all__`, which controls which objects are imported when `from module import *` syntax is used. This advice also applies to Python modules acting as bindings for C++ modules. `__all__` is respected by multiple stages of the documentation generation process (introspection, autosummary stub generation, etc.). This has several notable effects:

1. Items that your module imports will not be considered when determining the items to be documented. This speeds up documentation generation.
2. Prevents unnecessary or unwanted autosummary stubs from being generated and included in your docs.
3. Optimizes the import time of your module when star-imports are used in other modules.
4. Unclutters imported namespaces for easier debugging.
5. Reduces "duplicate object" Sphinx warnings, because the number of imported targets with the same name is reduced to one.

Other common sources of docstring syntax warnings:

1. Indentation/whitespace mismatches in docstrings.
2. Improper usage, or lack, of newlines where required, e.g. for an indented block.

## C++ docstring issues

As a boon to users of the new system, and because default bindings-generated initialization docstrings typically make heavy use of asterisks and backticks, these are automatically escaped at docstring-parse time.

Please note that the `pybind11_builtins.pybind11_object` base class is automatically hidden from class pages.
ecs-entity-component-system-framework-implementation_overview.md
# omni.ecs

## ECS (Entity-Component-System) framework implementation
EditHotkey.md
# Edit Hotkey

## Edit Hotkey

Selecting or hovering the mouse over a row shows the edit control.

- Click to show the edit bar.
- Press a key combination to change the key binding.
- Click to change the trigger option: On Press or On Release.
- Click to save changes, including the key binding and trigger option, and exit.
- Click to exit without changes.
ef-definition_GraphConcepts.md
# Graph Concepts

This article covers core graph concepts found in EF. Readers are encouraged to review the [Execution Framework Overview](Overview.html#ef-framework) before diving into this article.

![The Execution Framework pipeline. This article covers concepts found in the Execution Graph (IR).](_images/ef-graph-concepts.png)

The core data structure Execution Framework (EF) uses to describe execution is a *graph of graphs*. Each *graph* contains a [root node](#ef-root-node). The root node can connect to zero or many downstream [nodes](#ef-nodes) via directed [edges](#ef-edges). Nodes represent work to be executed. Edges represent ordering dependencies between nodes.

![A simple graph.](_images/ef-simple.svg)

The work each node represents is encapsulated in a [definition](#ef-definition). Each node in the graph may have a pointer to a definition. There are two categories of definitions: [opaque](#ef-opaque-definition) and [graph](#ef-graph-definition). An *opaque definition* is a work implementation hidden from the framework. An example would be a function pointer. The second type of definition is another *graph*. Allowing a node's work definition to be yet another graph is why we say EF's core execution data structure is a *graph of graphs*. The top-level container of the *graph of graphs* is called the [execution graph](#ef-execution-graph). The graphs to which individual nodes point are called [graph definitions](#ef-graph-definition) or simply [graphs](#ef-graph-definition).

The following sections dive into each of the topics above with the goal of providing the reader with a general understanding of each of the core concepts in EF's *graph of graphs*.

## Nodes

Nodes in a [graph](#ef-graph-definition) represent work to be executed. The actual work to be performed is stored in a [definition](#ef-definition), to which a node points. Nodes can have both parent and child nodes. This relationship between parent and child defines an ordering dependency.
The interface for interacting with nodes is [INode](api/classomni_1_1graph_1_1exec_1_1unstable_1_1INode.html#_CPPv4N4omni5graph4exec8unstable5INodeE). EF contains the [NodeT](api/classomni_1_1graph_1_1exec_1_1unstable_1_1NodeT.html#_CPPv4IDpEN4omni5graph4exec8unstable5NodeTE)/`Node` implementation of `INode` for instantiation when constructing graph definitions. Each node is logically contained within a single graph definition (i.e. `INodeGraphDef`).

## Edges

Edges represent ordering between nodes in a graph definition. Edges are represented in EF with simple raw pointers between nodes. These pointers can be accessed with `INode::getParents()`, to list the nodes that come before a node, and `INode::getChildren()`, to list the nodes that come after it.

## Definitions

Definitions define the work each node represents. Definitions can be opaque, meaning EF has no visibility into the actual work being performed. Opaque definitions implement the `INodeDef` interface. Helper classes, like `NodeDefLambda`, exist to easily wrap chunks of code into an opaque definition.

Definitions can also be defined with a graph, making the definition transparent. The transparency of graph definitions enables EF to perform many optimizations, such as:

- Execute nodes in the graph in parallel
- Optimize the graph for the current hardware environment
- Reorder/defer execution of nodes to minimize lock contention

Many of these optimizations are enabled by writing custom passes and executors. See Pass Creation and Executor Creation for details.

Graph definitions are defined by the `INodeGraphDef` interface. During graph construction, it is common for `IPass` authors to instantiate custom graph definitions to bridge EF with the authoring layer. The `NodeGraphDef` class is designed to help implement these custom definitions.

Definition instances are not unique to each node. Definitions are designed to be shared between multiple nodes.
This means two different INode instances are free to point to the same definition instance. This not only saves space, it also decreases graph construction time. Above we see the graph from Figure 8, but now with pointers to definitions (dashed lines). Notice how definitions are shared between nodes. Furthermore, notice that nodes in graph definitions can point to other graph definitions. Both INodeDef and NodeGraphDef are designed to help implement these custom definitions. INodeDef (i.e. opaque definitions) and INodeGraphDef (i.e. graph definitions) inherit from the IDef interface. All user definitions must implement either INodeDef or INodeGraphDef. Definitions are attached to nodes and can be accessed with INode::getDef(). Note, a node is not required to have a definition. In fact, each graph’s root node will not have an attached definition. ### Execution Graph The top-level container for execution is the *execution graph*. The execution graph is special. It is the only entity, other than a node, that can contain a definition. In particular, the execution graph always contains a single graph definition. It is this graph definition that is the actual *graph of graphs*. The execution graph does not contain nodes, rather, it is the execution graph’s definition that contains nodes. In addition to containing the top-level graph definition, the execution graph’s other jobs are to track: - If the graph is currently being constructed - Gross changes to the topologies in the execution graph. See invalidation for details. The execution graph is defined by the IGraph interface. EF contains the Graph implementation of IGraph for applications to instantiate. ### Topology Each graph definition owns a *topology* object. Each topology object is owned by a single graph definition. 
The topology object has several tasks:

- Owns and provides access to the root node
- Assigns each node in the graph definition a unique index
- Handles and tracks invalidation of the topology (via stamps)

Topology is defined by the ITopology interface and accessed via INodeGraphDef::getTopology().

### Root Nodes

Each graph definition contains a topology which owns a *root node*. The root node is where traversal in a graph definition starts. Only descendants of the root node will be traversed. The root node is accessed with `INodeGraphDef::getRoot()`.

Root nodes are special in that they do not have an attached definition, though a graph definition's executor may assign special meaning to the root node. Root nodes are defined by the `INode` interface, just like any other node.

Each graph definition (technically, the graph definition's topology) has a root node. This means there are many root nodes in EF (i.e. EF is a graph of graphs).

## Next Steps

In this article, an overview of graph concepts was provided. To learn how these concepts are utilized during graph construction, move on to Pass Concepts.
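Before moving on, the graph-of-graphs and shared-definition concepts above can be recapped with a tiny standalone sketch. This is plain Python for illustration only, not EF's actual API; the class and node names are hypothetical:

```python
class NodeDef:
    """An opaque definition: the framework cannot see inside the callable."""
    def __init__(self, fn):
        self.fn = fn

class NodeGraphDef:
    """A definition that is itself a graph of nodes."""
    def __init__(self, nodes):
        self.nodes = nodes

class Node:
    """Structural element: a name plus a pointer to a definition."""
    def __init__(self, name, definition=None):
        self.name = name
        self.definition = definition  # None is allowed (e.g. root nodes)

# one graph definition containing a node with opaque work...
shared = NodeGraphDef(nodes=[Node("k", NodeDef(lambda: "work"))])

# ...pointed to by two different nodes: definitions are shared, not copied
e = Node("e", shared)
p = Node("p", shared)

assert e.definition is p.definition  # same instance: saves space and build time
```

Here the inner node `k` is reachable through both `e` and `p`, which is precisely why nodes must later be identified by execution path rather than by pointer value.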
ef-execution-concepts_ExecutionConcepts.md
# Execution Concepts

This article covers core execution concepts. Readers are encouraged to review the [Execution Framework Overview](#ef-framework), [Graph Concepts](#ef-graph-concepts), and [Pass Concepts](#ef-pass-concepts) before diving into this article.

The Execution Framework (i.e. EF) contains many classes with an `execute()` method. `IExecutionContext`, `IExecutor`, `ExecutionTask`, `INodeDef`, and `INodeGraphDef` are a subset of the classes with said method. With so many classes, understanding how execution works can be daunting. The purpose of this article is to step through how execution works in EF and illustrate some of its abilities. We start by introducing the concepts involved in execution. Once complete, we'll dive into the details of how they are used together to perform execution.

## Nodes

`INode` is the main structural component used to build the graph's topology. `INode` stores edges to parents (i.e. predecessors) and children (i.e. successors). These edges set an ordering between nodes. In addition to defining the execution graph's topology, `INode` also defines the execution logic of the graph. Each `INode` has an `execute()` method that is called during the execution of the graph.

`INode` stores one of two definitions: `INodeDef` or `INodeGraphDef`. These definitions define the actual computation to be performed when the node is executed. See Graph Concepts for more details on nodes and how they fit into the EF picture.

## Opaque Definitions

`INodeDef` is one of the two definition classes that can be attached to an `INode` (note the difference in spelling between `INodeDef` and `INode`). Definitions contain the logic of the computation to be performed when the `INode` is executed. `INodeDef` defines an *opaque* computation: logic contained within the definition that EF is unable to examine and optimize.

## Graph Definitions

`INodeGraphDef` is one of the two definition classes that can be attached to an `INode`.
`INodeGraphDef` should not be confused with `IGraph`, which is the top-level container that stores the entire structure of the graph (i.e. the execution graph). Definitions contain the logic of the computation to be performed when the `INode` is executed. Unlike `INodeDef`, which defines opaque computational logic that EF cannot examine (and thereby optimize), `INodeGraphDef` defines its computation by embedding a subgraph. This subgraph contains `INode` objects to define the subgraph's structure (like any other EF graph). Each of these nodes can point to either an `INodeDef` or yet another `INodeGraphDef` (again, like any other EF graph).

The ability to define an `INodeGraphDef` which contains nodes that point to additional `INodeGraphDef` objects is where EF gets its **composability** power. This is why it is said that EF is a "graph of graphs". Adding new implementations of `INodeGraphDef` is common when extending EF with new graph types. See Definition Creation for details.

## Executors and Schedulers

Executors traverse a graph definition, generating tasks for each node *visited*. One of the core concepts of EF is that each graph definition can specify the executor that should be used to execute the subgraph it defines. This allows each graph definition to control a host of strategies for how its subgraph is executed:

- If a node should be scheduled
- How a node should be scheduled (e.g. in parallel, deferred, serially, in isolation, etc.)
- Where nodes are scheduled (e.g. GPU, CPU core, machine)
- The amount of work to be scheduled (i.e. how many tasks should be generated)

Executors and schedulers work together to produce, schedule, and execute tasks on behalf of the node. Executors determine which nodes should be visited and generate appropriate work (i.e. tasks). Schedulers collect tasks, possibly concurrently from many executor objects, and map the tasks to hardware resources for execution.

Executors are described by the `IExecutor` interface.
Most users defining their own executor will inherit from the `Executor` template, which is an implementation of `IExecutor`. `Executor` is a powerful template allowing users to easily control the strategies above. See `Executor`'s documentation for a more in-depth explanation of what's possible with EF's executors.

## ExecutionPaths

The `ExecutionPath` class is an efficient utility class used to store the *execution path* of an `INode`. Since a graph definition may be pointed to/shared by multiple nodes, nodes within a graph definition can be at multiple "paths". Consider node *k* below:

Figure 16: A flattened execution graph. Graph definitions can be shared amongst multiple nodes (e.g. *X*). As a result, nodes must be identified with a path rather than their pointer value. Execution paths provide context as to which "instance" of a node is being executed. Above, the yellow arrow is pointing to */f/p/k*. However, since *X* is a shared definition, another valid path for *k* is */e/k*.

Above, the graph definition *X* is shared by nodes *e* and *p*. The execution path for *k* is either */f/p/k* (the yellow arrow) or */e/k*. Figure 16 demonstrates that when associating data with a node, you should not use the node's pointer value; rather, use an `ExecutionPath`. The same holds true for definitions.

## Execution Contexts / Execution State

`INodeDef` and `INodeGraphDef` are stateless entities in EF. Likewise, other than connectivity information, `INode` is also stateless. That begs the question: "If my computation needs state, where is it stored?" The answer is in the `IExecutionContext`.
`IExecutionContext` is a limited key/value store where each key is an `ExecutionPath` and the value is an application defined subclass of the `IExecutionStateInfo` interface. `IExecutionContext` allows the graph structure to be decoupled from the computational state. As a consequence, the execution graph can be executed in parallel, each execution with its own `IExecutionContext`. In fact, `ExecutionContext::execute()` is the launching point of all computation (more on this below). `IExecutionContext` is meant to store data that lives across multiple executions of the execution graph. This is in contrast to the state data traversals and executors store, which are transient in nature. `IExecutionContext` is implemented by EF’s `ExecutionContext` template. `IExecutionContext` is an important entity during execution, as it serves as the data store for EF’s stateless graph of graphs. This article only touches on execution contexts. Readers should consult `IExecutionContext`’s documentation for a better understanding on how to use `IExecutionContext`. ## Execution Tasks ExecutionTask is a utility class that describes a task to be potentially executed on behalf of a INode in a given IExecutionContext. ExecutionTask stores three key pieces of information: the node to be executed, the path to the node, and the execution context. ## Execution in Practice With the overview of the different pieces in EF execution out of the way, we can now focus on how the pieces fit together. As mentioned above, EF utilizes a *graph of graphs* to define computation and execution order. The structure of these graphs is constructed with INode objects while the computational logic each INode encapsulates is delegated to either INodeDef or INodeGraphDef. The top-level structure that contains the entire graph is the IGraph object (e.g. execution graph). The IGraph object simply contains a single INodeGraphDef object. It is this top-level INodeGraphDef that defines the *graph of graphs*. 
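The path-keyed state store described above can be sketched in plain Python. This is an illustration of the idea only, not EF's actual API: state lives in the context, keyed by execution path, so one stateless graph can be executed with several independent contexts.

```python
class ExecutionContext:
    """Maps execution paths (tuples of node names) to per-path state."""
    def __init__(self):
        self._state = {}

    def state_for(self, path):
        # create state lazily the first time a path is seen
        return self._state.setdefault(path, {"run_count": 0})

def execute_node(context, path):
    # the node itself is stateless; all state lives in the context
    state = context.state_for(path)
    state["run_count"] += 1
    return state["run_count"]

ctx_a, ctx_b = ExecutionContext(), ExecutionContext()

# the same shared definition at two different paths keeps two independent states...
execute_node(ctx_a, ("f", "p", "k"))
execute_node(ctx_a, ("e", "k"))
execute_node(ctx_a, ("e", "k"))

# ...and each context keeps its own state across executions
assert ctx_a.state_for(("e", "k"))["run_count"] == 2
assert execute_node(ctx_b, ("e", "k")) == 1
```

Because the key is the path rather than a node pointer, the two "instances" of *k* from Figure 16 (*/f/p/k* and */e/k*) accumulate state independently.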
After a concrete implementation of `IGraph` has been constructed and populated, computation starts by constructing a concrete subclass of `IExecutionContext` and calling `IExecutionContext::execute()`:

#### Listing 1: Pattern seen in most uses of EF to execute the execution graph. Create the graph, populate the graph, execute the graph with a context.

```cpp
auto graph{ Graph::create("myGraph") };

// populate graph <not shown>

MyExecutionState state;
auto context{ MyExecutionContext::create(graph, state) };

Status result = context->execute();
```

`IExecutionContext::execute()` will initialize the context (if needed) and then pass itself and the `IGraph` to `IExecutionCurrentThread::executeGraph()`, which is in charge of creating an `ExecutionTask` to execute the `IGraph`'s top-level definition. `IExecutionCurrentThread` additionally keeps track of which `ExecutionTask`/`IGraph`/`INode`/`IExecutionContext`/`IExecutor` is running on the current thread (see `getCurrentTask()` and `getCurrentExecutor()`).

`IExecutionCurrentThread::executeGraph()` is special in that it accounts for the odd nature of the top-level `INodeGraphDef`. The top-level `INodeGraphDef` is the only such `INodeGraphDef` that isn't pointed to by a node, and as such special logic must be written to handle this edge case. For all other definitions (and what the remainder of this article covers), execution starts with `ExecutionTask::execute(IExecutor&)`, which calls `IExecutionCurrentThread::execute()`:

#### Listing 2: Signature of the method used for initiating node execution.

```cpp
Status ExecutionCurrentThread::execute_abi(ExecutionTask* task, IExecutor* executor) noexcept
```

Here, the given `task`'s `ExecutionTask::getNode()` points to the node whose definition we wish to execute. The given `executor` is the executor of the `INodeGraphDef` that owns the node we wish to execute and that created the `ExecutionTask` (i.e. `task`) to execute the node.

There are three cases `IExecutionCurrentThread::execute()` must handle:

1.
If the node points to an **opaque definition**
2. If the node does not point to a definition
3. If the node points to a **graph definition**

### Executing an Opaque Definition

The first case, an opaque definition, is handled as follows:

#### Listing 3: How nodes with an opaque definition are executed.

```cpp
auto node = task->getNode();
auto nodeDef = node->getNodeDef();
if (nodeDef)
{
    ScopedExecutionTask setCurrentExecution(task, executor);

    // important to update task status before calling into continueExecute since it may look at it
    task->setExecutionStatus(nodeDef->execute(*task));

    // the task has had a chance to execute. it may have succeeded, failed, been deferred, etc. it's up to the
    // user defined IExecutor::continueExecute to determine the status of the task and react appropriately.
    return executor->continueExecute(*task);
}
```

The listing above is straightforward: call `INodeDef::execute()` followed by `IExecutor::continueExecute()`.

### Executing an Empty Definition

The second case is also straightforward:

#### Listing 4: How nodes without a definition are executed.

```cpp
// empty node...we didn't fail, so just continue execution
ScopedExecutionTask setCurrentExecution(task, executor);

// important to update task status before calling into continueExecute since it may look at it
task->setExecutionStatus(Status::eSuccess);

// the task has had a chance to execute. it may have succeeded, failed, been deferred, etc. it's up to the
// user defined IExecutor::continueExecute to determine the status of the task and react appropriately.
return executor->continueExecute(*task);
```

### Executing a Graph Definition

The third case, a graph definition, is a bit more complex:

#### Listing 5: How nodes with a graph definition are executed.
```cpp
exec::unstable::ExecutionPath pathToInstancingNode{ task->getUpstreamPath(), task->getNode() };
ExecutionTask rootTask{ task->getContext(), nodeGraphDef->getRoot(), pathToInstancingNode };

ScopedExecutionTask setCurrentExecution(&rootTask, executor);

auto status = nodeGraphDef->preExecute(*task);
if (status == Status::eSuccess)
{
    status = nodeGraphDef->execute(*task);
    if (status == Status::eSuccess)
    {
        status = nodeGraphDef->postExecute(*task);
    }
}

if (status == Status::eSkip)
{
    // we skipped execution, so record this as success
    status = Status::eSuccess;
}

// important to update task status before calling into continueExecute since it may look at it
task->setExecutionStatus(status);

// the task has had a chance to execute. it may have succeeded, failed, been deferred, etc. it's up to the
// user defined IExecutor::continueExecute to determine the status of the task and react appropriately.
return executor->continueExecute(*task);
```

To execute the node's graph definition, we start by creating a new task that will execute the graph definition's root node (i.e. `rootTask`). This task is given to the graph definition's `INodeGraphDef::preExecute(ExecutionTask*)`, `INodeGraphDef::execute(ExecutionTask*)`, and `INodeGraphDef::postExecute(ExecutionTask*)` methods. The meanings of pre- and post-execute are up to the user.

## Creating the Graph Definition's Executor

`INodeGraphDef::execute(ExecutionTask*)`'s job is clear: *execute the node*. `INodeGraphDef` implementations based on EF's `NodeGraphDef` class handle execution by instantiating the graph definition's executor and telling it to execute the given node (i.e. `info->getNode()` below):

#### Listing 6: `INodeGraphDef::execute(ExecutionTask*)`'s implementation instantiates the graph definition's preferred executor and executes the given node.
```cpp
omni::core::ObjectPtr<IExecutor> executor;
if (m_executorFactory)
{
    executor = m_executorFactory(m_topology, *info);
}
else
{
    executor = ExecutorFallback::create(m_topology, *info);
}

return executor->execute(); // execute the node specified by info->getNode()
```

## Starting Execution

In [Listing 5](#ef-listing-execution-current-thread-nodegraphdef), we saw that the node to execute is the graph definition's root. The root node does not have an associated definition, though some executors may assign special meaning when executing it.

How `IExecutor::execute()` performs execution is up to the executor. As an example of what's possible, let's look at the `Executor` template's execute method:

#### Listing 7: The `Executor` template's execute method.

```cpp
//! Main execution method. Called once by each node instantiating same graph definition.
Status execute_abi() noexcept override
{
    // We can bypass all subsequent processing if the node associated with the task starting
    // this execution has no children. Note that we return an eSuccess status because nothing
    // invalid has occurred (e.g., we tried to execute an empty NodeGraphDef); we were asked to
    // compute nothing, and so we computed nothing successfully (no-op)!
    if (m_task.getNode()->getChildren().empty())
    {
        return Status::eSuccess | m_task.getExecutionStatus();
    }

    (void)continueExecute_abi(&m_task);

    // Give a chance for the scheduler to complete the execution of potentially parallel work which should complete
    // within current execution. All background tasks will continue pass this point.
    // Scheduler is responsible for collecting the execution status for everything that this executor generated.
    return m_scheduler.getStatus() | m_schedulerBypass;
}
```

The `Executor` template ignores the root node and calls `IExecutor::continueExecute()`. `IExecutor::continueExecute()`'s job is to continue execution. What it means to "continue execution" is up to the executor.
After the call to `Executor::continueExecute(const ExecutionTask&)`, the scheduler's `getStatus()` is called. This is a blocking call that will wait for any work generated during `Executor::continueExecute(const ExecutionTask&)` to report a status (e.g. `Status::eSuccess`, `Status::eDeferred`, etc.).

## Visiting Nodes and Generating Work

Let us assume we're using the `ExecutorFallback` executor. In Figure 16, if node */f/n* is the node that just executed, calling `IExecutor::continueExecute()` will visit */f/p* (via `ExecutionVisit`), notice that */f/p*'s parents have all executed, create a task to execute */f/p*, and give the task to the scheduler. This behavior of `ExecutorFallback` can be seen in the following listing:

#### Listing 8: The `ExecutorFallback`'s strategy for visiting nodes in `IExecutor::continueExecute()`.

```cpp
//! Graph traversal visit strategy.
//!
//! Will generate a new task when all upstream nodes have been executed.
struct ExecutionVisit
{
    //! Called when the traversal wants to visit a node. This method determines what to do with the node (e.g. schedule
    //! it, defer it, etc).
    template <typename ExecutorInfo>
    static Status tryVisit(ExecutorInfo info) noexcept
    {
        auto& nodeData = info.getNodeData();
        if (info.currentTask.getExecutionStatus() == Status::eDeferred)
        {
            nodeData.hasDeferredUpstream = true; // we only set to true...doesn't matter which thread does it first
        }

        std::size_t requiredCount = info.nextNode->getParents().size() - info.nextNode->getCycleParentCount();
        if ((requiredCount == 0) || (++nodeData.visitCount == requiredCount))
        {
            if (!nodeData.hasDeferredUpstream)
            {
                // spawning a task within executor doesn't change the upstream path. just reference the same one.
                ExecutionTask newTask(info.getContext(), info.nextNode, info.getUpstreamPath());
                return info.schedule(std::move(newTask));
            }
            else
                return Status::eDeferred;
        }
        return Status::eUnknown;
    }
};
```

The scheduler uses the `SchedulingStrategy` given to the executor to determine how to schedule the task. The strategy may decide to skip scheduling and execute the task immediately. Likewise, the strategy may tell the scheduler to run the task in parallel with other tasks (see [SchedulingInfo](api/enum_namespaceomni_1_1graph_1_1exec_1_1unstable_1a36b9c08e72889b8029dd280279104760.html#_CPPv4N4omni5graph4exec8unstable14SchedulingInfoE) for details). We can see an example of this decision making in the listing below:

```cpp
Status ret = Status::eUnknown;

SchedulingInfo schedInfo = getSchedulingInfo(newTask);
if (schedInfo != SchedulingInfo::eSchedulerBypass)
{
    // this task will finish before we exit executor...just capture as reference to avoid unnecessary cost
    ret = m_scheduler.schedule(
        [executor = this, task = std::move(newTask)]() mutable -> Status { return task.execute(executor); },
        schedInfo);
}
else // bypass the scheduler...no need for extra scheduling overhead
{
    m_schedulerBypass |= newTask.execute(this);
}

return ret;
```

Regardless of the scheduling strategy for the task, [ExecutionTask::execute(IExecutor&)](api/classomni_1_1graph_1_1exec_1_1unstable_1_1ExecutionTask.html#_CPPv4N4omni5graph4exec8unstable13ExecutionTask7executeEN4omni4core11ObjectParamI9IExecutorEE) is called.

## Ending Execution

In Listing 3, Listing 4, and Listing 5, we see that they all end the same way: once the node has been executed, tell the executor to continue execution of the current graph definition by calling `IExecutor::continueExecute()`. As covered above, what "continue execution" means is defined by the executor, but a common approach is to visit the children of the node that was just executed. Once there are no more children to visit, the stack starts to unwind and the task is complete.
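The continue-execution pattern above (execute a node, then visit its children and schedule any child whose parents have all completed) can be sketched in plain Python. This is a standalone illustration of the traversal idea, not EF's API, and it runs everything serially on one thread:

```python
class Node:
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = list(parents)
        self.children = []
        for parent in self.parents:
            parent.children.append(self)

def execute_graph(roots):
    """Execute nodes in dependency order, mimicking continueExecute()."""
    executed = []
    visit_count = {}

    def continue_execute(node):
        # visit children; run a child only once all of its parents have run
        for child in node.children:
            visit_count[child] = visit_count.get(child, 0) + 1
            if visit_count[child] == len(child.parents):
                execute(child)

    def execute(node):
        executed.append(node.name)  # "run" the node's definition
        continue_execute(node)      # then keep traversing, as in Listings 3-5

    for root in roots:
        execute(root)
    return executed

# diamond: a -> b, a -> c, (b, c) -> d ; d runs only after both parents
a = Node("a")
b = Node("b", [a])
c = Node("c", [a])
d = Node("d", [b, c])

print(execute_graph([a]))  # ['a', 'b', 'c', 'd']
```

Note how `d` is visited twice (once per parent) but only executed on the second visit, which is the same parent-counting idea `ExecutionVisit` implements with `visitCount` and `requiredCount`.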
## Generating Dynamic Work

Above, we saw how `ExecutorFallback` traverses from parent to child, generating one task per node once the node's parents have executed. That doesn't have to be the case, though. An executor is free to generate many tasks per node. In fact, an executor can generate a task, and that task can generate additional tasks using `IExecutor::schedule(ScheduleFunction&&, SchedulingInfo)`.

## Deferred Execution

In Listing 8, you'll find references to "deferred" status (e.g. `Status::eDeferred`). Deferred execution refers to tasks that have been designated to finish outside of the current execution frame (i.e. after the call to `IExecutor::execute()` returns).

## Next Steps

In this article, an overview of graph execution was provided. For an in-depth guide to building your own executors, consult the Executor Creation guide.

This article concludes the EF concepts journey. Further your EF education by consulting one of the tutorials in the *Guides* section of the manual, or explore more in-depth topics in the *Advanced* section.
ef-framework_Overview.md
# Execution Framework Overview

The Omniverse ecosystem enjoys a bevy of software components (e.g. PhysX, RTX, USD, OmniGraph, etc.). These software components can be assembled together to form domain-specific applications and services. One of the powerful concepts of the Omniverse ecosystem is that the assembly of these components is not limited to compile time. Rather, users are able to assemble these components on-the-fly to create tailor-made tools, services, and experiences.

With this great power come challenges. In particular, many of these software components are siloed and monolithic. Left on their own, they can starve other components of hardware resources and introduce non-deterministic behavior into the system. Often, the only way to integrate these components was with a "don't call me, I'll call you" model. For such a dynamic environment to be viable, an intermediary must be present to guide these different components in a composable way. The **Execution Framework** is this intermediary.

The Omniverse Execution Framework's job is to orchestrate, at runtime, computation across different software components and logical application stages by decoupling the description of the compute from its execution.

## Architecture Pillars

The Execution Framework (i.e. EF) has three main architecture pillars.

### Decoupled architecture

The first pillar is decoupling the authoring format from the computation back end. Multiple authoring front ends are able to populate EF's intermediate representation (IR). EF calls this intermediate representation the execution graph. Once populated by the front end, the execution graph is transformed and refined, taking into account the available hardware resources. By decoupling the authoring front end from the computation back end, developers are able to assemble software components without worrying about multiple hardware configurations.
Furthermore, the decoupling allows EF to optimize the computation for the current execution environment (e.g. HyperScale).

### Extendable architecture

The second pillar is extensibility. Extensibility allows developers to augment and extend EF's capabilities without changes to the core library. Graph transformations, traversals, execution behavior, computation logic, and scheduling are examples of EF features that can be extended by developers.

### Composable architecture

The third pillar of EF is **composability**. Composability is the principle of constructing novel building blocks out of existing smaller building blocks. Once constructed, these novel building blocks can be used to build yet other, larger building blocks. In EF, these building blocks are nodes (i.e. `Node`). Nodes store two important pieces of information. The first is connectivity information to other nodes (i.e. topology edges). The second is the **computation definition**. Computation definitions in EF are defined by the `NodeDef` and `NodeGraphDef` classes. `NodeDef` defines opaque computation, while `NodeGraphDef` contains an entirely new graph. It is via `NodeGraphDef` that EF derives its composability power.

The big picture of what EF is trying to do is simple: take all of the software components that wish to run, generate nodes/graphs for the computation each component wants to perform, add edges between the different software components' nodes/graphs to define execution order, and then optimize the graph for the current execution environment. Once the **execution graph** is constructed, an **executor** traverses the graph (in parallel when possible), making sure each software component gets its chance to compute.

## Practical Examples

Let's take a look at how Omniverse USD Composer, built with Omniverse Kit, handles the update of the USD stage. Kit maintains a list of extensions (i.e.
software components) that either the developer or user has requested to be loaded. These extensions register callbacks into Kit to be executed at fixed points in Kit's update loop. Using an empty scene and USD Composer's default extensions, the populated execution graph looks like this:

*USD Composer's execution graph used to update the USD stage.*

Notice in the picture above that each node in the graph is represented as an opaque node, except for the OmniGraph (OG) front end. The OmniGraph node further refines the compute definition by expressing its update pipeline with *pre-simulation*, *simulation*, and *post-simulation* nodes. This would not be possible without EF's **composable architecture**.

Below, we illustrate an example of a graph authored in OG that runs during the simulation stage of the OG pipeline. This example runs as part of Omniverse Kit, with a limited number of extensions loaded to increase the readability of the graph and to illustrate the dynamic aspect of the execution graph population.

*An example of the OmniGraph definition.*

Generating more fine-grained execution definitions allows OG to scale performance with available CPU resources. Leveraging **extensibility**, OmniGraph (OG) is able to implement executors for different graph types outside of the core OG library. This, combined with **composability**, creates a foundation for executing compound graphs.

The final example in this overview focuses on execution pipelines in Omniverse Kit. Leveraging all of the architecture pillars, we can start customizing execution pipelines per application (and/or per scene). There is no longer a need to base execution ordering only on a magic number, or to keep runtime components siloed. In the picture below, as a proof of concept, we define at runtime a new custom execution pipeline. This new pipeline runs before the "legacy" one (which is ordered by a magic number) and introduces fixed and variable update times. By extending OG's ability to choose the pipeline stage in which it runs, we are able to place it anywhere in this new custom pipeline. Any other runtime component can do the same thing and leverage the EF architecture to orchestrate executions in their application.
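The ordering idea behind such a custom pipeline can be sketched outside of EF entirely: instead of magic numbers, each stage declares which stages it must run after, and a topological sort yields the execution order. The sketch below is illustrative only; `orderPipeline` and the stage names used with it are invented for this example and are not part of the EF API.

```cpp
#include <cassert>
#include <set>
#include <string>
#include <utility>
#include <vector>

// Order pipeline stages by explicit dependencies rather than magic numbers.
// Each entry is (stage name, list of stages that must run before it), given
// in registration order. Returns the stages in a valid execution order; the
// result is incomplete if the dependencies contain a cycle.
inline std::vector<std::string> orderPipeline(
    const std::vector<std::pair<std::string, std::vector<std::string>>>& stages)
{
    std::vector<std::string> order;
    std::set<std::string> done;
    bool progress = true;
    while (order.size() < stages.size() && progress)
    {
        progress = false;
        for (const auto& [name, deps] : stages)
        {
            if (done.count(name))
                continue; // already placed
            bool ready = true;
            for (const auto& d : deps)
                if (!done.count(d))
                    ready = false;
            if (ready)
            {
                order.push_back(name);
                done.insert(name);
                progress = true;
            }
        }
    }
    return order;
}
```

For example, registering a hypothetical `legacy` stage after `og`, and `og` after `custom`, yields the order `custom`, `og`, `legacy` regardless of registration order.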
*The customizable execution pipeline in Kit - POC.*

## Next Steps

Above we provided a brief overview of EF's philosophy and capabilities. Readers are encouraged to continue learning about EF by first reviewing Graph Concepts.
ef-graph-traversal-guide_GraphTraversalGuide.md
# Graph Traversal Guide

This is a practitioner's guide to using the Execution Framework. Before continuing, it is recommended you first review the [Execution Framework Overview](#ef-framework) along with basic topics such as [Graph Concepts](#ef-graph-concepts), [Pass Concepts](#ef-pass-concepts), and [Execution Concepts](#ef-execution-concepts).

**Graph traversal** – the systematic visitation of nodes within the IR – is an integral part of EF. EF contains several built-in traversal functions:

- `traverseDepthFirst()` traverses a graph in depth-first order.
- `traverseBreadthFirst()` traverses a graph in breadth-first order.
- `traverseDepthFirstAsync()` traverses a graph in depth-first order, potentially initiating asynchronous work before visiting the next node.
- `traverseBreadthFirstAsync()` traverses a graph in breadth-first order, potentially initiating asynchronous work before visiting the next node.

The following sections examine a few code examples demonstrating how one can explore EF graphs in a customized manner using the available APIs.

## Getting Started with Writing Graph Traversals

In order to further elucidate the concepts embedded in these examples, some of the traversals will be applied to the following sample IR graph \(G_1\) to see what the corresponding output would look like for a concrete case:

Figure 19: An example IR graph \(G_1\).

Note that each node's downstream edges are ordered alphabetically with respect to their connected child nodes, e.g. for node \(a\), its first, second, and third edges are \(\{a,b\}\), \(\{a,c\}\), and \(\{a,d\}\), respectively. Also note that the below examples are all assumed to reside within the `omni::graph::exec::unstable` namespace.

### Print all Node Names

Listing 26 shows how one can print out all top-level node names present in a given IR graph in serial DFS ordering using the `VisitFirst` strategy.
Here the term top-level refers to nodes that lie directly in the top-level execution graph definition; any nodes not contained in the execution graph's `NodeGraphDef` (implying that they are contained within other nodes' `NodeGraphDef`s) will not have their names printed with the below code block.

Listing 26: Serial DFS using the `VisitFirst` strategy to print all top-level visited node names.

```cpp
std::vector<INode*> nodes;
traverseDepthFirst<VisitFirst>(
    myGraph->getRoot(),
    [&nodes](auto info, INode* prev, INode* curr)
    {
        std::cout << curr->getName() << std::endl;
        nodes.emplace_back(curr);
        info.continueVisit(curr);
    });
```

If we applied the above code block to \(G_1\), we would get the following ordered list of visited node names:

\[b \rightarrow e \rightarrow g \rightarrow c \rightarrow f \rightarrow d\]

Note that the root node \(a\) does not appear: since we started our visitation at \(a\), it only ever appears as `prev` (during the very first traversal step), and because we print only `curr`, \(a\) does not show up in the output.

### Print all Node Traversal Paths **Recursively**

Listing 27 shows how one can recursively print the traversal paths (the list of upstream nodes that were visited prior to reaching the current node) of all nodes present in a given IR graph in serial DFS ordering using the `VisitFirst` strategy; this will include all nodes that lie within other non-execution-graph definitions (i.e. inside other nodes' `NodeGraphDef`s that are nested inside the execution graph definition), hence the need for recursion. The resultant list of nodes can be referred to as the member nodes of the flattened IR.
Listing 27: Serial DFS using the `VisitFirst` strategy to recursively print all node traversal paths.

```cpp
auto traversalFn = [](INodeGraphDef* nodeGraphDef,
                      INode* topLevelGraphRoot,
                      std::vector<INode*>& currentTraversalPath,
                      std::vector<std::pair<INode*, std::vector<INode*>>>& nodeTraversalPaths,
                      auto& recursionFn) -> void
{
    traverseDepthFirst<VisitFirst>(
        nodeGraphDef->getRoot(),
        [nodeGraphDef, topLevelGraphRoot, &currentTraversalPath, &nodeTraversalPaths, &recursionFn](
            auto info, INode* prev, INode* curr)
        {
            // Remove node elements from the current path until we get back to a common
            // branching point for the current node.
            if (prev == topLevelGraphRoot)
            {
                currentTraversalPath.clear();
            }
            else if (!prev->isRoot())
            {
                while (!currentTraversalPath.empty() &&
                       currentTraversalPath.back()->getName() != prev->getName())
                {
                    currentTraversalPath.pop_back();
                }
            }

            // Add the node to the current traversal path. If the previous node was also a
            // graph root node, add it as well.
            if (prev->isRoot())
            {
                currentTraversalPath.emplace_back(prev);
            }
            currentTraversalPath.emplace_back(curr);

            // Store the current node's corresponding traversal path.
            nodeTraversalPaths.emplace_back(
                std::piecewise_construct,
                std::forward_as_tuple(curr),
                std::forward_as_tuple(currentTraversalPath));

            // Continue the traversal, recursing into the node's graph definition if one exists.
            INodeGraphDef* currNodeGraphDef = curr->getNodeGraphDef();
            if (currNodeGraphDef)
            {
                recursionFn(currNodeGraphDef, topLevelGraphRoot, currentTraversalPath,
                            nodeTraversalPaths, recursionFn);
            }
            info.continueVisit(curr);
        });
};

std::vector<INode*> currentTraversalPath;
std::vector<std::pair<INode*, std::vector<INode*>>> nodeTraversalPaths;

traversalFn(myGraph->getNodeGraphDef(), myGraph->getNodeGraphDef()->getRoot(),
            currentTraversalPath, nodeTraversalPaths, traversalFn);
```

We can then print the results; note that `nodeTraversalPaths` will be ordered in a serial DFS, `VisitFirst` manner:
```cpp
for (const std::pair<INode*, std::vector<INode*>>& namePathPair : nodeTraversalPaths)
{
    // Print the node's name.
    std::cout << namePathPair.first->getName() << ": ";

    // Print the node's traversal path.
    for (INode* const pathElement : namePathPair.second)
    {
        std::cout << pathElement->getName() << "/";
    }
    std::cout << std::endl;
}
```

Applying this logic to \(G_1\), the list of node traversal paths (paired with their names for further clarity, and ordered based on when each node was visited) would look something like this:

1. \(b: a/b\)
2. \(e: a/b/e\)
3. \(i: a/b/e/h/i\)
4. \(j: a/b/e/h/i/j\)
5. \(g: a/b/e/g\)
6. \(c: a/c\)
7. \(f: a/c/f\)
8. \(l: a/c/f/k/l\)
9. \(m: a/c/f/k/l/m\)
10. \(i: a/c/f/k/l/m/h/i\)
11. \(j: a/c/f/k/l/m/h/i/j\)
12. \(d: a/c/f/d\)

> **Note:** EF typically uses a more space-efficient path representation, the `ExecutionPath`, when discussing nodal paths; the above example prints the explicit traversal path to highlight how the graph is crawled through.

### Print all Edges **Recursively**

Listing 28 uses the `VisitAll` strategy to *recursively* store and print out all edges in an IR graph in serial BFS order. The choice of BFS is arbitrary (other search algorithms could have been chosen and would still print all edges, albeit in a different order); only the selection of `VisitAll` matters, since it enables us to actually explore all of the edges. Also note that traversal continues along the first discovered edge (similar to the `VisitFirst` strategy).

Listing 28: Serial BFS using the `VisitAll` strategy to recursively print all edges in the inputted graph.
```cpp
std::vector<std::pair<INode*, INode*>> edges;
auto traversalFn = [&edges](INodeGraphDef* nodeGraphDef, auto& recursionFn) -> void
{
    traverseBreadthFirst<VisitAll>(
        nodeGraphDef->getRoot(),
        [&edges, nodeGraphDef, &recursionFn](auto info, INode* prev, INode* curr)
        {
            std::cout << "{" << prev->getName() << ", " << curr->getName() << "}" << std::endl;
            edges.emplace_back(prev, curr);
            if (info.isFirstVisit())
            {
                // Recurse into the node's graph definition, if one exists.
                INodeGraphDef* currNodeGraphDef = curr->getNodeGraphDef();
                if (currNodeGraphDef)
                {
                    recursionFn(currNodeGraphDef, recursionFn);
                }
                info.continueVisit(curr);
            }
        });
};
traversalFn(myGraph->getNodeGraphDef(), traversalFn);
```

Running this traversal on \(G_1\) would produce the following list of edges (in the order that they are visited):

\[
\begin{split}
&\{a,b\} \rightarrow \{a,c\} \rightarrow \{a,d\} \rightarrow \{b,e\} \rightarrow \{a/b/e/h,\ a/b/e/h/i\} \rightarrow \{a/b/e/h/i,\ a/b/e/h/i/j\} \rightarrow \{a/b/e/h/i/j,\ a/b/e/h/i\} \rightarrow \{c,f\} \rightarrow \{k,l\} \\
&\rightarrow \{l,m\} \rightarrow \{a/c/f/k/l/m/h,\ a/c/f/k/l/m/h/i\} \rightarrow \{a/c/f/k/l/m/h/i,\ a/c/f/k/l/m/h/i/j\} \rightarrow \{a/c/f/k/l/m/h/i/j,\ a/c/f/k/l/m/h/i\} \\
&\rightarrow \{d,f\} \rightarrow \{e,g\} \rightarrow \{f,d\} \rightarrow \{f,g\}
\end{split}
\]

Note that for node instances which share the same definition (e.g. \(i\), \(j\), etc.), we've used their full traversal path for clarity's sake.

### Print all Node Names **Recursively** in **Topological Order**

Listing 29 highlights how one can *recursively* print out all node names in *topological order* using the `VisitLast` strategy, meaning that no node will be visited until all of its parents have been visited. Note that any traversal, whether it be a serial DFS, serial BFS, parallel DFS, parallel BFS, or something else entirely, can be considered topological as long as it employs the `VisitLast` strategy; this example has opted to utilize a serial DFS approach.
Listing 29: Serial DFS using the `VisitLast` strategy to recursively print all visited node names in topological order.

```cpp
std::vector<INode*> nodes;
auto traversalFn = [&nodes](INodeGraphDef* nodeGraphDef, auto& recursionFn) -> void
{
    traverseDepthFirst<VisitLast>(
        nodeGraphDef->getRoot(),
        [&nodes, nodeGraphDef, &recursionFn](auto info, INode* prev, INode* curr)
        {
            std::cout << curr->getName() << std::endl;
            nodes.emplace_back(curr);

            // Recurse into the node's graph definition, if one exists.
            INodeGraphDef* currNodeGraphDef = curr->getNodeGraphDef();
            if (currNodeGraphDef)
            {
                recursionFn(currNodeGraphDef, recursionFn);
            }
            info.continueVisit(curr);
        });
};
traversalFn(myGraph->getNodeGraphDef(), traversalFn);
```

In the case of \(G_1\), we would obtain the following ordered node name list:

\[b \rightarrow e \rightarrow a/b/e/h/i \rightarrow a/b/e/h/i/j \rightarrow c \rightarrow f \rightarrow l \rightarrow m \rightarrow a/c/f/k/l/m/h/i \rightarrow a/c/f/k/l/m/h/i/j \rightarrow d \rightarrow g\]

### Using Custom `NodeUserData`

Listing 30 showcases how one can pass custom node data into the traversal methods to tackle problems that would otherwise be much more inconvenient (or downright impossible) to solve if the API were missing that flexibility. In this case we are using the `SCC_NodeData` struct to store per-node information that is necessary for implementing Tarjan's algorithm for strongly connected components; this is what ultimately allows us to create the global graph transformation pass responsible for detecting cycles in the graph.

Listing 30: A global pass that uses custom per-node user data to detect cycles via Tarjan's strongly-connected-components algorithm.

```cpp
class PassStronglyConnectedComponents : public Implements<IGlobalPass>
{
public:
    static omni::core::ObjectPtr<PassStronglyConnectedComponents> create(
        omni::core::ObjectParam<exec::unstable::IGraphBuilder> builder)
    {
        return omni::core::steal(new PassStronglyConnectedComponents(builder.get()));
    }

protected:
    PassStronglyConnectedComponents(IGraphBuilder*)
    {
    }

    void run_abi(IGraphBuilder* builder) noexcept override
    {
        _detectCycles(builder, builder->getTopology());
    }

private:
    void _detectCycles(IGraphBuilder* builder, ITopology* topology);
};
```
The `_detectCycles` helper implements Tarjan's algorithm, carrying the per-node bookkeeping in the `SCC_NodeData` user data:

```cpp
void PassStronglyConnectedComponents::_detectCycles(IGraphBuilder* builder, ITopology* topology)
{
    struct SCC_NodeData
    {
        size_t index{0};
        size_t lowLink{0};
        uint32_t cycleParentCount{0};
        bool onStack{false};
    };

    size_t globalIndex = 0;
    std::stack<INode*> globalStack;

    traverseDepthFirst<VisitAll, SCC_NodeData>(
        topology->getRoot(),
        [this, builder, &globalIndex, &globalStack](auto info, INode* prev, INode* curr)
        {
            auto pushStack = [&globalStack](INode* node, SCC_NodeData& data)
            {
                data.onStack = true;
                globalStack.push(node);
            };
            auto popStack = [builder, &info, &globalStack]()
            {
                auto* top = globalStack.top();
                globalStack.pop();

                auto& userData = info.userData(top);
                userData.onStack = false;

                auto node = exec::unstable::cast<exec::unstable::IGraphBuilderNode>(top);
                node->setCycleParentCount(userData.cycleParentCount);
                return top;
            };

            auto& userData = info.userData(curr);
            auto& userDataPrev = info.userData(prev);
            if (info.isFirstVisit())
            {
                // First visit: assign the node's Tarjan index/low-link, push it on the
                // stack, and explore its children before propagating the low-link upward.
                userData.index = userData.lowLink = globalIndex++;
                pushStack(curr, userData);
                info.continueVisit(curr);
                userDataPrev.lowLink = std::min(userDataPrev.lowLink, userData.lowLink);

                // If this node is the root of a strongly connected component, pop the
                // entire component off the stack.
                if (userData.lowLink == userData.index)
                {
                    auto* top = popStack();
                    while (top != curr)
                    {
                        top = popStack();
                    }
                }

                // Recurse into the node's graph definition, if one exists.
                auto nodeGraph = curr->getNodeGraphDef();
                if (nodeGraph)
                {
                    this->_detectCycles(builder, nodeGraph->getTopology());
                }
            }
            else if (userData.onStack)
            {
                // A back edge to a node still on the stack: we are inside a cycle.
                userDataPrev.lowLink = std::min(userDataPrev.lowLink, userData.index);
                userData.cycleParentCount++;
            }
        });
}
```

### Next Steps

To learn more about graph traversals in the context of EF, see [Graph Traversal In-Depth](#ef-graph-traversal-advanced).
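As an appendix, the three visit strategies used throughout this guide can be reproduced in isolation on a small diamond graph. The sketch below is a stand-alone illustration of the semantics only; `ToyGraph`, `visit`, and `Strategy` are invented names for this example, not EF types.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <string>
#include <vector>

// A toy illustration of the three visit strategies on the diamond graph
//   a -> b, a -> c, b -> d, c -> d
// "First" fires a node's callback on the first incoming edge only, "All"
// fires it on every incoming edge, and "Last" fires it only once all incoming
// edges have been seen (i.e. in topological order). Traversal continues past a
// node whenever its callback fires.
struct ToyGraph
{
    std::map<std::string, std::vector<std::string>> children;
    std::map<std::string, int> parentCount; // number of incoming edges per node
};

enum class Strategy { First, All, Last };

inline std::vector<std::string> visit(const ToyGraph& g, const std::string& root, Strategy s)
{
    std::vector<std::string> out;
    std::map<std::string, int> seen; // incoming edges seen so far
    std::function<void(const std::string&)> dfs = [&](const std::string& node) {
        auto it = g.children.find(node);
        if (it == g.children.end())
            return;
        for (const auto& child : it->second)
        {
            int count = ++seen[child];
            bool fire = (s == Strategy::All) ||
                        (s == Strategy::First && count == 1) ||
                        (s == Strategy::Last && count == g.parentCount.at(child));
            if (fire)
            {
                out.push_back(child);
                dfs(child); // continue traversal past this node
            }
        }
    };
    dfs(root);
    return out;
}
```

On the diamond, `First` yields b, d, c (d fires on its first incoming edge), `Last` yields b, c, d (a topological order), and `All` fires d twice.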
ef-pass-concepts_PassConcepts.md
# Pass Concepts

This article covers core concepts found in EF's passes/graph transformations. Readers are encouraged to review both the [Execution Framework Overview](Overview.html#ef-framework) and [Graph Concepts](GraphConcepts.html#ef-graph-concepts) before diving into this article.

Now that we understand the underlying structure of an execution graph, let's dive into graph transformations to see how the population and partitioning of `NodeGraphDef` is done to achieve the final topology.

## Pass Pipeline

`PassPipeline` is the main orchestrator of graph construction. It composes the final topology of the graph, leveraging passes from `PassRegistry`. It is possible to register different passes, some only known to the pass pipeline. To build the graph, the pipeline instantiates a `GraphBuilder` for each visited definition and gives the builder to each of the passes selected to run on the definition. Pass instances are not reused, i.e. each time a pass is selected to run, it is allocated, run, and immediately destroyed. Definitions can be shared by multiple nodes, and care is taken by `PassPipeline` to process a definition only once per topology.

Passes are grouped by `PassType`, with each type having specific responsibilities and permissions. To learn more about pass types, consult the documentation for `IPopulatePass` and `IPartitionPass`.

## Populate and Partitioning Passes

Graph construction typically starts by running populate passes (i.e. `IPopulatePass`) over each node in the graph. If the topology was altered during this step, the pipeline runs partitioning passes (i.e. `IPartitionPass`) on the graph. If partitioning generated a new `Node` or `NodeGraphDef`, the pipeline again runs the population passes on the new entities. Partitioning runs only once on the graph, which means there won't be a second partitioning pass over the topology if the second run of the population passes altered it.
This is because population only alters definitions one level deeper than the currently processed topology.

## Global Passes

Once the entire topology of the execution graph has been processed by population and partitioning passes (potentially in a threaded manner), the pipeline gives global passes (i.e. `IGlobalPass`) a chance to run. Because global passes have such a broad impact on both the graph and transformation performance, their use is discouraged.

## Graph Builders

When passes create or alter the topology of a graph, they rely on `GraphBuilder` to perform the topology modification. Under the hood, the builder implementation leverages a private `IGraphBuilderNode` interface. Relying directly on the `IGraphBuilderNode` interface is strongly discouraged.

## Transformation Algorithm

The following pseudo-code represents the overall graph transformation procedure. For simplicity, it illustrates serial execution, but in Omniverse Kit the pipeline processes nodes concurrently.

```text
PROC PopulatePass(context, nodeGraphDef)
    graphBuilder <- create new instance for given nodeGraphDef
    FOR node IN nodes in topology in DFS order from root
        CALL PopulateNode(node)
    IF graphBuilder recorded modifications to the construction stamp
        CALL PartitionPass()

PROC PartitionPass(context, nodeGraphDef)
    graphBuilder <- create new instance for given nodeGraphDef
    partitionPassInstances <- allocate and store pass instances that successfully initialize for nodeGraphDef
    FOR node IN nodes in topology in DFS order from root
        FOR initializedPass IN partitionPassInstances
            initializedPass.run(node)
    FOR initializedPass IN partitionPassInstances
        initializedPass.commit(graphBuilder)
    FOR newNodes IN graphBuilder
        CALL PopulateNode(newNodes)

PROC GlobalPass(context, nodeGraphDef)
    FOR global pass from registry
        passInstance <- allocate new instance
        CALL passInstance.run()

PROC PopulateNode(node)
    IF node has registered populate pass
        populatePassInstance <- allocate new instance
        populatePassInstance.run()
    ELSE IF node has NodeGraphDef definition and populate pass exists for it
        populatePassInstance <- allocate new instance
        populatePassInstance.run()
    IF node has NodeGraphDef
        CALL PopulatePass(context, node.getNodeGraphDef())

PROC GraphTransformations(context, nodeGraphDef)
    IF nodeGraphDef needs construction
        CALL PopulatePass
        CALL GlobalPass
```

*An example of a constructed execution graph.*

Graph transformation starts with a basic pipeline defined at the top-level `NodeGraphDef`.

Figure 11: Basic execution pipeline with custom and legacy pipeline stages.

While traversing the top-level definition, `StageUpdateDef` is created by a populate pass registered for the `kit.legacyPipeline` node.

Figure 12: Legacy pipeline with loaded nodes from StageUpdate.

The `PopulatePass` procedure from our pseudo-code is now recursively called to expand the definition of any node represented as part of `kit.def.legacyPipeline`. In the example we are exploring, we have several OmniGraph population passes registered. The first one created the execution pipeline for OmniGraph.

Figure 13: Expanded OmniGraph definition containing nodes representing its pipeline stages: Pre-Simulation -> Simulation -> Post-Simulation.

OmniGraph registers populate passes for each pipeline stage it created. These passes populate each pipeline stage's node with a generic graph definition if the pipeline stage contains nodes in OG. In this example, an action graph is in the simulation pipeline stage. Both the pre-simulation and post-simulation stages are empty.

Figure 14: OG's populate passes create EF nodes for each OG graph in each OG pipeline stage.
Here we see that the Simulation stage contains an Action Graph. Finally, population runs on `og.def.graph_execution` and expands the `NodeGraphDef` to a custom one with an `Executor` responsible for both generating and scheduling work.

## Graph Transformations

*Fully populated execution graph after all graph transformations. Here we see the /World/ActionGraph node has been populated with a definition that describes the OmniGraph Action Graph.*

## Next Steps

In this article, an overview of graph transformations/graph construction was provided. For an in-depth guide to building your own passes, consult the Pass Creation guide. To continue learning about EF's core concepts, move on to Execution Concepts.
ef-plugin-creation_PluginCreation.md
# Plugin Creation

This is a practitioner's guide to using the Execution Framework. Before continuing, it is recommended you first review the Execution Framework Overview along with basic topics such as Graph Concepts, Pass Concepts, and Execution Concepts.

The Execution Framework is a graph of graphs. EF allows users, with their own code, to:

- Build the graph
- Optimize the graph
- Define how/when nodes in the graph are executed
- Provide chunks of code to execute in the graph
- Customize how graph data is stored
- Define custom schedulers to dispatch the graph's tasks

The primary method used to extend EF's functionality is to subclass EF's implementations of its core interfaces: `Node`, `NodeDef`, `NodeGraphDef`, `ExecutionContext`, `Executor`, `PopulatePass`, `PartitionPass`, etc. A reasonable question is, "How are these custom user implementations instantiated by EF?" In short:

- `ExecutionContext` objects are usually instantiated by the application.
- `Node` objects are usually instantiated by implementations of `NodeGraphDef`.
- `Executor` objects are instantiated by implementations of `NodeGraphDef`.
- `NodeGraphDef` objects are usually instantiated by passes (e.g. `PopulatePass`).
- `NodeDef` objects are usually instantiated by passes (e.g. `PopulatePass`).
- Passes are instantiated by `PassPipeline`, which uses a global registry of available passes.
- `PassPipeline` is usually instantiated by the application.

Visually:

Above, we see there are two objects the application will instantiate: `PassPipeline` and `ExecutionContext`. The implementations instantiated here will be application specific. The creation of all other entities can be tied back to passes. As mentioned above, passes are instantiated by the application's `PassPipeline`, which accesses a global registry of available passes. This global registry, available via the global `getPassRegistry()` function, can be populated by user plugins.
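The registry-and-pipeline relationship described above can be sketched in miniature. The following is a conceptual illustration only; `ToyPass`, `toyPassRegistry`, `runToyPipeline`, and `GreetPass` are invented names for this sketch (the real entities are EF's passes, `getPassRegistry()`, and `PassPipeline`). It also mirrors the "pass instances are not reused" behavior described in Pass Concepts: each pass is created for a single run and then destroyed.

```cpp
#include <cassert>
#include <functional>
#include <map>
#include <memory>
#include <string>
#include <vector>

// A minimal pass interface and a global registry of pass factories keyed by
// node name; plugins would add entries to this registry at load time.
struct ToyPass
{
    virtual ~ToyPass() = default;
    virtual std::string run(const std::string& nodeName) = 0;
};

using ToyPassFactory = std::function<std::unique_ptr<ToyPass>()>;

inline std::map<std::string, ToyPassFactory>& toyPassRegistry()
{
    static std::map<std::string, ToyPassFactory> registry; // global registry
    return registry;
}

// A pipeline run: for each node, look up a matching factory, instantiate the
// pass, run it once, and let it be destroyed. Pass instances are not reused.
inline std::vector<std::string> runToyPipeline(const std::vector<std::string>& nodeNames)
{
    std::vector<std::string> log;
    for (const auto& name : nodeNames)
    {
        auto it = toyPassRegistry().find(name);
        if (it != toyPassRegistry().end())
            log.push_back(it->second()->run(name));
    }
    return log;
}

// An example pass a plugin might provide...
struct GreetPass : ToyPass
{
    std::string run(const std::string& n) override { return "greeted " + n; }
};

// ...and what that plugin's registration could conceptually expand to.
inline bool registerGreetPass()
{
    toyPassRegistry()["ef.example.greet"] = []() -> std::unique_ptr<ToyPass> {
        return std::make_unique<GreetPass>();
    };
    return true;
}
```

After `registerGreetPass()`, a pipeline run over nodes named `ef.example.greet` invokes a fresh `GreetPass` for each matching node, while unmatched nodes are skipped.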
In this article, we do not cover application-level customization, such as `PassPipeline` and `ExecutionContext`, since such customizations are rare when using the Kit SDK (Kit already does this for you). Instead, we will cover how users can create their own plugins to define their own passes, and thereby their own nodes, definitions, and executors. Omniverse has two methods to define plugins: *Carbonite Plugins* and *Omniverse Modules*.

## Creating an Omniverse Module

The minimum needed to implement an Omniverse module can be found in the *omni.kit.exec.example-omni* extension.

Listing 10: Example of defining an Omniverse Module using the Kit SDK.

```c++
#include "OmniExamplePass.h"

#include <omni/core/Omni.h>
#include <omni/core/ModuleInfo.h>
#include <omni/graph/exec/unstable/PassRegistry.h>
#include <omni/kit/exec/core/unstable/Module.h>

// we need the name in a couple of places so we define it once here
#define MODULE_NAME "omni.kit.exec.example-omni.plugin"

// this is required by omniverse modules
OMNI_MODULE_GLOBALS(
    MODULE_NAME,                         // name of the module
    "Example Execution Framework Module" // description of the module
);

// this registers the OmniExamplePass population pass. any time a node named "ef.example.greet" is seen, this pass will
// attach a definition to the node that will print out "hi".
//
// this macro can be called from any .cpp file in the DLL, but must be called at global scope.
OMNI_GRAPH_EXEC_REGISTER_POPULATE_PASS(OmniExamplePass, "ef.example.greet");

namespace
{

omni::core::Result onLoad(const omni::core::InterfaceImplementation** out, uint32_t* outCount)
{
    // this method can be used to register default implementations for objects. for example, omni.kit.exec.core uses
    // this method to register its singletons: IExecutionControllerFactory, IExecutionGraphSettings, ITbbSchedulerState,
    // etc.
    //
    // this function is not used in this example.
    return omni::core::kResultSuccess;
}

// called once the DLL is loaded
void onStarted()
{
    // this macro must be called by any DLL providing EF functionality (e.g. passes). it will register any passes found
    // in the module with EF.
    OMNI_KIT_EXEC_CORE_ON_MODULE_STARTED(
        MODULE_NAME,
        []()
        {
            // this optional function is called when any EF module is unloaded. the purpose of this function is to
            // remove references to any objects that may potentially be unloaded.
        });
}

// tells the framework that this module can be unloaded
bool onCanUnload()
{
    return true;
}

// called when the DLL is about to be unloaded
void onUnload()
{
    // if OMNI_KIT_EXEC_CORE_ON_MODULE_STARTED() is called, this macro must also be called. it will inform EF that the
    // DLL is about to be unloaded. additionally, this macro will unregister any passes registered by the DLL.
    OMNI_KIT_EXEC_CORE_ON_MODULE_UNLOAD();
}

} // end of anonymous namespace

// main entry point called by the carbonite framework.
OMNI_MODULE_API omni::core::Result omniModuleGetExports(omni::core::ModuleExports* exports)
{
    OMNI_MODULE_SET_EXPORTS(exports);
    OMNI_MODULE_ON_MODULE_LOAD(exports, onLoad);
    OMNI_MODULE_ON_MODULE_STARTED(exports, onStarted);
    OMNI_MODULE_ON_MODULE_CAN_UNLOAD(exports, onCanUnload);
    OMNI_MODULE_ON_MODULE_UNLOAD(exports, onUnload);
    return omni::core::kResultSuccess;
}
```

Listing 11: Building an Omniverse Module using the Kit SDK.

Building the DLL is build-system dependent, but when using the Kit SDK, the snippet in `source/extensions/omni.kit.exec.example-omni/premake5.lua` should do the job.

The **omni.kit.exec.example-omni** extension is a fully functioning extension found at `source/extensions/omni.kit.exec.example-omni/`. It includes much more than what is presented above, for example, how to create tests for your EF extension. It is a suitable starting point for your own EF extension.
## Creating a Carbonite Plugin The minimum needed to implement a Carbonite plugin can be found in the **omni.kit.exec.example-carb** extension: ### Listing 12 Example of defining a Carbonite plugin using the Kit SDK. ```c++ #define CARB_EXPORTS // must be defined (folks often forget this) #include "CarbExamplePass.h" #include <carb/PluginUtils.h> #include <omni/graph/exec/unstable/PassRegistry.h> #include <omni/kit/exec/core/unstable/Module.h> // we need the name in a couple of places so we define it once here #define MODULE_NAME "omni.kit.exec.example-carb.plugin" // CARB_PLUGIN_IMPL must be called with an interface. this is an example interface. // // if your plugin does not publish any interfaces, consider using Omniverse Modules rather than a Carbonite Plugin. struct IExampleInterface { CARB_PLUGIN_INTERFACE("omni::graph::exec::example::IExampleInterface", 1, 0) }; void fillInterface(IExampleInterface& iface) { // used to populate your interface } // required. describes the plugin to the carbonite framework. const struct carb::PluginImplDesc kPluginImpl = { MODULE_NAME, "Example Execution Framework Plugin", "NVIDIA", carb::PluginHotReload::eDisabled, "dev" }; // call CARB_PLUGIN_IMPL_DEPS if your plugin has static dependencies. this plugin does not. CARB_PLUGIN_IMPL_NO_DEPS(); // required. describes the carbonite interfaces this plugin provides CARB_PLUGIN_IMPL( kPluginImpl, IExampleInterface // add any carbonite interfaces here ) // this registers the CarbExamplePass population pass. any time a node named "ef.example.greet" is seen, this pass will // attach a definition to the node that will print out "hi". // // this macro can be called from any .cpp file in the DLL, but must be called at global scope. 
OMNI_GRAPH_EXEC_REGISTER_POPULATE_PASS(CarbExamplePass, "ef.example.greet"); // called once the DLL is loaded CARB_EXPORT bool carbOnPluginStartupEx() { // this macro must be called by any DLL providing EF functionality (e.g. passes). it will register any passes found // in the module with EF. OMNI_KIT_EXEC_CORE_ON_MODULE_STARTED( MODULE_NAME, []() { // this optional function is called when any EF module is unloaded. the purpose of this function is to // remove references to any objects that may potentially be unloaded. }); return true; } // called right before the DLL will be unloaded CARB_EXPORT void carbOnPluginShutdown() { // if OMNI_KIT_EXEC_CORE_ON_MODULE_STARTED() is called, this macro must also be called. it will inform EF that the // DLL is about to be unloaded. additionally this macro will unregister any passes registered by the DLL. OMNI_KIT_EXEC_CORE_ON_MODULE_UNLOAD(); } ``` Building the DLL is build system dependent, but when using the Kit SDK, the following snippet from `source/extensions/omni.kit.exec.example-carb/premake5.lua` should do the job: ```lua -- start the omnigraph/omni.kit.exec.example-carb project.. project_ext(ext, { generate_ext_project=false }) -- target: omnigraph/omni.kit.exec.example-carb/omni.kit.exec.example-carb.plugin -- -- builds the c++ code project_ext_plugin(ext, ext.id..".plugin") add_files("/impl", "plugin") -- add plugins directory to files to be built exceptionhandling "On" -- api layer is allowed to throw exceptions (abi is not) rtti "Off" -- shouldn't be needed since we're using oni ``` ## Deciding on Which Approach to Take When implementing new EF functionality, it is recommended to use Omniverse modules. Omniverse modules work well with EF’s ONI based interfaces. 
Additionally, if you plan on providing your own ONI interfaces that encapsulate global state that needs to be accessed across many DLLs, Omniverse modules allow you to register interfaces via `omni::core::ITypeFactory`. See *omni.kit.exec.core* for an example. If you are extending an existing Carbonite plugin with EF functionality (e.g. *omni.graph.core*), using the existing Carbonite plugin is the path of least resistance. By taking this approach, your new EF implementation will be able to access implementation details of existing functionality located in the same plugin. ## Avoiding Crashes at Exit > Note > This section covers a crash on exit problem often seen when using the Kit SDK. The solution provided is not implemented in the core EF library; rather, it is implemented in the *omni.kit.exec.core* extension, which bridges EF with Kit. Both the problem and solution are presented here, in the core EF docs, to help users of EF outside of the Kit SDK understand potential edge cases with EF integration. Applications based on the Kit SDK will shut down each extension/plugin/module before exit. This can lead to unexpected crashes when DLLs depend upon each other. This coupling of functionality between DLLs is often the case in EF. As an example, consider the *omni.graph.action* extension, which provides definitions and passes to implement OmniGraph’s Action Graph extension. The *omni.graph.action* extension depends upon *omni.graph.core*, which in turn depends upon *omni.kit.exec.core*, which depends upon the core EF extension (*omni.graph.exec*). When the application starts, this dependency information is used to load *omni.graph.exec* first, followed by *omni.kit.exec.core* second, then *omni.graph.core*, and finally *omni.graph.action*. During shutdown, the extensions are unloaded in reverse order. 
```mermaid
flowchart LR
oge[omni.graph.exec] -- Provides PassRegistry To --> okec[omni.kit.exec.core]
okec -- Provides ExecutionController To --> ogc[omni.graph.core]
ogc -- Provides OG To --> oga[omni.graph.action]
oga -. Stores Data In .--> ogc
ogc -. Stores Data In .--> okec
```
Safely unloading extensions is no easy task. Explicit extension dependencies are depicted with solid lines. Implicit reference counting dependencies are depicted with dotted lines. During shutdown, *omni.graph.action* will unload without issue. However, when unloading *omni.graph.core* you’re likely to see a crash when OmniGraph destructs its internal objects. This is because OmniGraph stores an `ObjectPtr` to each EF definition it creates. This isn’t a bug, as it allows OmniGraph to quickly and precisely invalidate parts of EF’s execution graph. However, during shutdown, definitions provided by *omni.graph.action* will crash, because attempting to invoke their destructors will call into unloaded code. EF’s solution to this problem is `OMNI_KIT_EXEC_CORE_ON_MODULE_STARTED()`. This macro’s second argument is a callback invoked when any EF module is about to be unloaded, giving each module a chance to drop references to objects whose implementation may soon be unloaded. ## Next Steps Above, we covered the creation of plugins to extend EF’s functionality. Readers are encouraged to move on to either - Definition Creation - Pass Creation - Executor Creation to begin implementing new graphs.
embedded_kit_python.md
# Embedded Python ## Hello Python Run `> kit.exe --exec your_script.py` to run your script using **Kit** Python. ## Using system Python When the Python interpreter is initialized, system-defined environment variables (like `PYTHONHOME`, `PYTHONPATH`) are ignored. Instead, the following setting is used for python home: - `/plugins/carb.scripting-python.plugin/pythonHome` instead of [PYTHONHOME](https://docs.python.org/3.7/using/cmdline.html?highlight=pythonhome#envvar-PYTHONHOME) > **Note** > You can find default values for this setting in the `kit-core.json` file. To use a system-level Python installation, override the `pythonHome` setting, e.g.: `--/plugins/carb.scripting-python.plugin/pythonHome="C:\Users\bob\AppData\Local\Programs\Python\Python310"`. Changing `pythonHome` won’t change the loaded Python library. This is platform specific, but for instance on Windows, **Kit** is linked with `python.dll` and loads the one that is in the package using standard dll search rules. However, the standard library, `site-packages`, and everything else will be used from the specified python path. ## Add extra search paths To add search paths (to `sys.path`), the `/app/python/extraPaths` setting can be used. For example: ``` > kit.exe --/app/python/extraPaths/0="C:/temp" ``` or in a kit file: ```toml [settings] app.python.extraPaths = ["C:/temp"] ``` To summarize, these are all the methods to extend `sys.path`: - Create a new extension with `[python.module]` definitions (recommended). - Explicitly in python code: `sys.path.append(...)` - The `/app/python/extraPaths` setting. 
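All three methods ultimately extend the interpreter's module search path. The effect can be demonstrated with plain Python, outside of **Kit**; `my_tool` below is a hypothetical module created on the fly to stand in for a locally developed package:

```python
import importlib.util
import sys
import tempfile
from pathlib import Path

# A throwaway directory containing a module, standing in for "C:/temp".
extra_dir = Path(tempfile.mkdtemp())
(extra_dir / "my_tool.py").write_text("ANSWER = 42\n")

# Before the path is added, the import system cannot see the module.
assert importlib.util.find_spec("my_tool") is None

# This is the effect /app/python/extraPaths has at startup: extending sys.path.
sys.path.append(str(extra_dir))

import my_tool
print(my_tool.ANSWER)  # -> 42
```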
## Other Python Configuration Tweaks Most python configuration variables can be changed using the following settings:

| config variable | python flag documentation |
| --- | --- |
| `/plugins/carb.scripting-python.plugin/Py_VerboseFlag` | Py_VerboseFlag |
| `/plugins/carb.scripting-python.plugin/Py_QuietFlag` | Py_QuietFlag |
| `/plugins/carb.scripting-python.plugin/Py_NoSiteFlag` | Py_NoSiteFlag |
| `/plugins/carb.scripting-python.plugin/Py_IgnoreEnvironmentFlag` | Py_IgnoreEnvironmentFlag |
| `/plugins/carb.scripting-python.plugin/Py_NoUserSiteDirectory` | Py_NoUserSiteDirectory |
| `/plugins/carb.scripting-python.plugin/Py_UnbufferedStdioFlag` | Py_UnbufferedStdioFlag |
| `/plugins/carb.scripting-python.plugin/Py_IsolatedFlag` | Py_IsolatedFlag |

## Using `numpy`, `Pillow` etc. **Kit** comes with the `omni.kit.pip_archive` extension, which has a few popular Python modules bundled into it. Have a look inside it on the filesystem. After this extension is started you can freely do `import numpy`. Declare a dependency on this extension in your extension, or enable it by any other means, to use any of them. E.g.: run `> kit.exe --enable omni.kit.pip_archive --exec use_numpy.py` to run your script that can import and use `numpy`. ## Using Python from Anaconda As a starting point, change the `pythonHome` setting described above to point to an Anaconda environment: `--/plugins/carb.scripting-python.plugin/pythonHome="C:/Users/bob/anaconda3/envs/py37"`. It is known to work for some packages and fail for others, on a case by case basis. ## Using other packages from pip For most Python packages (installed with any package manager or locally developed) it is enough to add them to the search path (`sys.path`). That makes them discoverable by the python import system. Any of the methods described above can be used for that. 
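When a package may or may not be on the search path, a common pattern is to probe for it before importing rather than catching `ImportError` deep inside application code. A minimal stdlib-only sketch (not Kit-specific; `not_a_real_pkg` is a deliberately missing, hypothetical name):

```python
import importlib
import importlib.util

def try_import(name):
    """Return the imported module if it is discoverable on sys.path, else None."""
    if importlib.util.find_spec(name) is None:
        return None
    return importlib.import_module(name)

json_mod = try_import("json")           # stdlib module: always discoverable
missing = try_import("not_a_real_pkg")  # not on sys.path: returns None
print(json_mod is not None, missing is None)  # -> True True
```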
Alternatively, **Kit** has the `omni.kit.pipapi` extension to install modules from the `pip` package manager at runtime. It will check whether the package is available and, if not, try to pip install it and cache it. Example of usage: `omni.kit.pipapi.install("some_package")`. After that call, import the installed package. Enabling the `omni.kit.pipapi` extension will allow specification of pip dependencies by extensions loaded after it. Refer to the `omni.kit.pipapi` docs. At build-time, any Python module can be packaged into any extension, including packages from pip. That can be done using other Python installations or Kit Python. This is the recommended way, so that when an extension is downloaded and installed, it is ready to use. There is also no requirement for connectivity to public registries, and no runtime cost during installation. ## Why do some native Python modules not work in **Kit**? It is common for something that works out of the box as-installed from *pip* or *Anaconda* not to work in **Kit**. Or vice versa, the **Kit** Python module doesn’t load outside of **Kit**. For pure Python modules (only `*.py` files), finding the root cause might be a matter of following import errors. However, when it involves loading native Python modules (`*.pyd` files on Windows and `*.so` files on Linux), errors are often not really helpful. Native Python modules are just regular OS shared libraries, with a special **C API** that Python looks for. They also are often implicitly linked with other libraries. When loaded, they might not be able to find other libraries, or be in conflict with already loaded libraries. Those issues can be debugged as any other library loading issue, specific to the OS. Some examples are: - Exploring `PATH` / `LD_LIBRARY_PATH` env vars. - Exploring libraries that are already loaded by the process. 
- Using tools like Dependency Walker. - Trying to isolate the issue, by loading in a simpler or more similar environment. **Kit** doesn’t do anything special in this regard, and can be treated as just another instance of Python, with a potentially different set of loaded modules. ## Running **Kit** from Python Normally the `kit.exe` process starts and loads an embedded Python library. **Kit** provides Python bindings to its core runtime components. This allows you to start Python, and then start **Kit** from that Python. It is an experimental feature, and not used often. An example can be found within the **Kit** package: `example.pythonapp.bat`. Differences from running normally: - A different Python library file is used (different `python.dll`). - There may be some GIL implications, because the call stack is different. - Allows explicit control over the update loop.
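When isolating the native-module loading issues described above, it can help to see which already-imported modules are native extensions and where each was loaded from. A small stdlib-only sketch:

```python
import sys

def native_modules():
    """Map module name -> file path for loaded native extension modules (.pyd/.so)."""
    result = {}
    for name, mod in list(sys.modules.items()):
        path = getattr(mod, "__file__", None)
        if path and path.endswith((".pyd", ".so")):
            result[name] = path
    return result

# e.g. after `import numpy`, entries such as numpy's C extension modules would appear
for name, path in sorted(native_modules().items()):
    print(name, "->", path)
```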
enabling-the-extension_overview.md
# Overview ## Overview Viewport Next is a preview of the next generation of Kit’s Viewport. It was designed to be as light as possible, providing a way to isolate features and compose them as needed to create unique experiences. This documentation will walk through a few simple examples using this technology, as well as how it can be used in tandem with the `omni.ui.scene` framework. ## What is a Viewport Exactly what a viewport is can be a bit ill-defined and dependent on what you are trying to accomplish, so it’s best to define some terms up front and explain what this documentation is targeting. At a very high level, a Viewport is a way for a user to visualize (and often interact with) a Renderer’s output of a scene. When you create a “Viewport Next” instance via Kit’s Window menu, you are actually creating a hierarchy of objects. The three objects of interest in this hierarchy are: 1. The `ViewportWindow`, which we will be re-implementing as `StagePreviewWindow`. 2. The `ViewportWidget`, one of which we will be instantiating. 3. The `ViewportTexture`, which is created and owned by the `ViewportWidget`. While we will be using (or re-implementing) all three of those objects, this documentation is primarily targeted towards understanding the `ViewportWidget` and its usage in the `omni.kit.viewport.stage_preview`. After creating a Window and an instance of a `ViewportWidget`, we will finally add a camera manipulator built with `omni.ui.scene` to interact with the `Usd.Stage`, as well as control aspects of the Renderer’s output to the underlying `ViewportTexture`. Even though the `ViewportWidget` is our main focus, it is good to understand that the backing `ViewportTexture` is independent of the `ViewportWidget`, and that a texture’s resolution may not necessarily match the size of the `ViewportWidget` it is contained in. This is particularly important for world-space queries or other advanced usage. 
## Enabling the Extension To enable the extension and open a “Viewport Next” window, go to the “Extensions” tab and enable the “Viewport Window” extension (`omni.kit.viewport.window`). ## Simplest example The `omni.kit.viewport.stage_preview` adds additional features that may make a first read of the code a bit harder. So before stepping through that example, let's take a moment to reduce it to an even simpler case where we create a single Window and add only a Viewport which is tied to the default `UsdContext` and `Usd.Stage`. We won’t be able to interact with the Viewport other than through Python, but because we are associated with the default `UsdContext`, any changes in the `Usd.Stage` (from navigation or editing in another Viewport, or adding a `Usd.Prim` from the Create menu) will be reflected in our new view.

```python
import omni.ui
from omni.kit.widget.viewport import ViewportWidget

viewport_window = omni.ui.Window('SimpleViewport', width=1280, height=720+20) # Add 20 for the title-bar
with viewport_window.frame:
    viewport_widget = ViewportWidget(resolution = (1280, 720))

# Control of the ViewportTexture happens through the object held in the viewport_api property
viewport_api = viewport_widget.viewport_api

# We can reduce the resolution of the render easily
viewport_api.resolution = (640, 480)

# We can also switch to a different camera if we know the path to one that exists
viewport_api.camera_path = '/World/Camera'

# And inspect
print(viewport_api.projection)
print(viewport_api.transform)

# Don't forget to destroy the objects when done with them
# viewport_widget.destroy()
# viewport_window.destroy()
# viewport_window, viewport_widget = None, None
```
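As the overview notes, the texture's resolution need not match the widget's size. Converting a widget-local coordinate into a texture pixel is then a simple normalization; `widget_to_texture` is a hypothetical helper for illustration, not part of the Viewport API:

```python
def widget_to_texture(pos, widget_size, texture_resolution):
    """Map a point in widget-local pixels to a pixel in the backing texture."""
    u = pos[0] / widget_size[0]  # normalize into 0..1 across the widget
    v = pos[1] / widget_size[1]
    return (int(u * texture_resolution[0]), int(v * texture_resolution[1]))

# A 1280x720 widget backed by a reduced 640x480 render target:
print(widget_to_texture((640, 360), (1280, 720), (640, 480)))  # -> (320, 240)
```

This kind of mapping is what makes world-space queries work correctly when the render resolution has been reduced, as in the example above.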
enterprise-install.md
# Enterprise Install Guide ## Licensing Need walkthrough steps on setting up your Omniverse Enterprise account and getting your licenses in order? Review the Omniverse Enterprise Licensing Quick Start Guide for more information. ## Enterprise Nucleus Server The following documentation is available to help you properly plan, deploy, and configure an Enterprise Nucleus Server: - Hardware Sizing Guide - Information on server sizing for your environment - Planning Your Installation - Best practices, requirements, and prerequisites - Installing an Enterprise Nucleus Server - An easy step-by-step guide for successful installation ## Launcher Deployment Options The Omniverse Launcher is available in two versions: the Workstation Launcher and the IT Managed Launcher. Omniverse Enterprise customers may choose either version depending on their deployment preference. - The Workstation Launcher offers a complete experience and does not require IT management for application installation or updates. The Omniverse Workstation Launcher requires network connectivity and an NVIDIA account. - The IT Managed Launcher is designed to be used in an air-gapped or tightly controlled environment, and does not require network connectivity or an NVIDIA account. Installation and updates of Omniverse applications are managed by the IT administrator for end users. Both the Workstation Launcher and the IT Managed Launcher are available from the NVIDIA Licensing Portal. ## Virtual Workstation Deployments Kit based apps (including USD Composer, USD Presenter, etc.) can be run in a virtualized environment using NVIDIA’s vGPU products. The Virtual Deployment Guide provides an overview of how to set up a vGPU environment capable of hosting Omniverse. Additionally, Omniverse Virtual Workstations can be run in a Cloud Service Provider (CSP) using the how-to guides here.
ErrorHandling.md
# Error Handling This document outlines how the Execution Framework (i.e. EF) handles errors. EF errors fit into one of the following broad categories: - Memory allocation errors. - Invalid pointers passed to the API. - Unmet API preconditions. - Failure to build the execution graph. - Failure to execute. - Failure to retrieve a node’s data. Most APIs in EF are expected to never fail and as such do not return a result indicating success or failure. The general approach taken by EF is to terminate the program when unrecoverable errors or programmer errors are detected. For errors generated by plugins (i.e. developer authored executors and passes), it is up to the developer to report errors via either the integration layer (e.g. `omni.kit.exec.core`) or authoring layer (e.g. `omni.graph.core`). The following sections explore the topics above in-depth. ## Memory Allocation Errors EF allocates memory on the heap during both graph construction and execution. The size of each allocation is generally small (less than 1KB). Because of the small size of each allocation, if an allocation fails, EF considers the system’s memory to be exhausted and no reasonable action can be taken to free memory. The system is in a bad state, and as such, EF terminates the application. This termination happens in two ways: - When allocations via `new` fail, an exception is thrown. Since functions in EF are marked `noexcept`, an uncaught exception triggers `std::unexpected()`, which by default calls `std::terminate()`. - When allocations via `std::malloc()` or `carb::allocate()` fail, the bad allocation is detected and the application terminated via `OMNI_GRAPH_EXEC_FATAL_UNLESS()`. ## Invalid Pointers EF is a low-level API designed with speed in mind. As such, EF spends little time validating and reporting bad input to its API. The expectation is that the developer is providing valid input. When invalid input is provided, EF immediately terminates the application. 
While seemingly harsh, this “fail-fast” approach has several benefits: - Developers often neglect to handle errors returned from APIs. This neglect can lead to the application being in an unexpected state and generate hard to find bugs. - By failing fast and terminating the application, API misuse is captured by Omniverse’s Carbonite Crash Reporter. During local development, the crash reporter immediately reports the stack trace of any API misuse. During testing, the reporter logs the API misuse and generates telemetry. This telemetry can be aggregated and examined to find API misuse across Omniverse’s suite of products before said products ship to customers. To implement this fail-fast strategy, EF primarily uses two macros: `OMNI_GRAPH_EXEC_ASSERT()` and `OMNI_GRAPH_EXEC_FATAL_UNLESS_ARG()`. `OMNI_GRAPH_EXEC_ASSERT()` is used to validate that a supplied pointer is not `nullptr`. Its use is preferred when the pointer will be dereferenced by the function before it returns. The reason for this is two-fold: 1. `OMNI_GRAPH_EXEC_ASSERT()` checks the given pointer only in debug builds. This means there is no performance penalty in release builds. 2. Since the pointer will be used by the function performing the check, in release builds a crash will be generated (and reported) due to dereferencing the null pointer. The latter point suggests `OMNI_GRAPH_EXEC_ASSERT()` is not strictly needed. While true, `OMNI_GRAPH_EXEC_ASSERT()` serves as “code as documentation” and provides a helpful message when the check fails. Below is an example of when it is appropriate to use `OMNI_GRAPH_EXEC_ASSERT()`: ```c++ void printName(INode* node) noexcept { OMNI_GRAPH_EXEC_ASSERT(node); // prints a useful message in debug builds if node is nullptr // if node is nullptr, a crash will be triggered and reported in the release build. 
// // prefer using OMNI_GRAPH_EXEC_ASSERT() to check if an input parameter is nullptr when // the pointer is immediately used by the function. this means you'll get a helpful message in // debug builds and an easy to debug crash in release builds. std::cout << node->getName() << std::endl; } ``` The next macro used is `OMNI_GRAPH_EXEC_FATAL_UNLESS_ARG()`. EF prefers using this macro when the input pointer is not immediately used, but rather stored for later use. `OMNI_GRAPH_EXEC_FATAL_UNLESS_ARG()` has the benefit of performing the `nullptr` check in both debug and release builds. By checking the pointer in both build flavors, we avoid hard to debug situations where the stored pointer is later used and unexpectedly `nullptr`. When encountering such a situation, questions such as “Was the pointer passed `nullptr`?” or “Was the stored pointer corrupted due to an overrun?” are reasonable. Checking for `nullptr` when the pointer is stored helps answer questions like these much more easily. Below, you can see an example use of `OMNI_GRAPH_EXEC_FATAL_UNLESS_ARG()`:

```c++
void MyObject::setDef(IDef* def) noexcept
{
    // prints a useful message in both release and debug builds if def is nullptr
    OMNI_GRAPH_EXEC_FATAL_UNLESS_ARG(def);

    // here we store def for later use. by checking if def is nullptr above, we can quickly
    // debug why m_def is nullptr when later used.
    m_def = def;
}
```

## Unmet Preconditions To avoid the generation of hard to investigate bugs, EF lists expected preconditions for each part of its API and terminates the program if any of these preconditions are not met. Preconditions that are not `nullptr` checks are usually checked with the `OMNI_GRAPH_EXEC_FATAL_UNLESS()` macro. This macro performs the precondition check in both release and debug builds. 
An example of one of these checks follows: ```cpp PassTypeRegistryEntry getPassAt_abi(uint64_t index) noexcept override { OMNI_GRAPH_EXEC_FATAL_UNLESS(index < passes.size()); return { passes[index].id, passes[index].name.c_str(), passes[index].factory.get(), &(passes[index].nameToMatch), passes[index].priority }; } ``` For hot code paths, `OMNI_GRAPH_EXEC_ASSERT()` can be used to eliminate the performance cost of these checks in release builds. ## Failure to Build the Execution Graph Graph construction is handled by user plugins via passes. The main method in these passes is the `run()` method (e.g. `IPopulatePass::run()`). `run()` does not report errors. It is up to the implementor of `run()` to handle and report errors. How a developer handles errors is their choice. They may choose to flag to the integration layer that the graph should not be executed. They may choose to populate the graph with “pass-through” nodes. They may choose to report the error via an authoring level API or an integration layer API. The main message here is that EF assumes graph construction will succeed and, if it does not, it’s up to the developer to handle and report the failure during construction and ensure the program is in a defined state. ## Failure During Graph Execution Failures are expected during graph execution. For example, it is reasonable to assume that a node that makes an I/O request may periodically fail. EF’s execution APIs are designed to flag that a task failed, but that’s it. EF does not contain APIs to describe the failure or even associate a failure with nodes or definitions. EF’s execution APIs generally return a `Status` object, which is a bit-field of possible execution outcomes. When using the default `ExecutorFallback`, nodes downstream of a failing node are still executed and their resulting `Status` values or’d together. The end result is a single `Status` object. 
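The bit-field aggregation described above can be sketched in plain Python (an analogy only; EF's actual `Status` is a C++ type with its own flag values):

```python
from enum import IntFlag

class Status(IntFlag):
    """Hypothetical stand-in for EF's Status bit-field of execution outcomes."""
    SUCCESS = 0
    FAILURE = 1
    DEFERRED = 2

def execute_all(node_statuses):
    """Or together the Status of every executed node, mirroring how
    ExecutorFallback-style execution continues downstream of a failure."""
    combined = Status.SUCCESS
    for status in node_statuses:
        combined |= status
    return combined

result = execute_all([Status.SUCCESS, Status.FAILURE, Status.SUCCESS])
print(Status.FAILURE in result)  # -> True: one failing node taints the combined result
```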
## Failure to Retrieve Node Data

Node data needed by the graph during construction and execution is stored in `IExecutionContext`. This context allows each instance of a node to store arbitrary data based on the node's `path` and a user defined key. The data is accessed with the `IExecutionContext::getNodeData()` method, which returns a pointer to the data.

The pointer returned by this method may be `nullptr`. Here we run into a design decision. Does `nullptr` mean the data was never set, or does it mean the data was set, but set to `nullptr`? EF is designed to allow for the latter scenario. A returned `nullptr` means the data was explicitly set to `nullptr`.

In order to handle the case where the data was never set, `IExecutionContext::getNodeData()` returns an `omni::expected`. `omni::expected` contains either the "expected" value or an "unexpected" value. For `IExecutionContext::getNodeData()`, it contains either the value of the pointer set by the user or an `omni::core::Result` with a value of `omni::core::kResultNotFound`.

An example of valid usage of this API is as follows:

```c++
auto data = OMNI_GRAPH_EXEC_GET_NODE_DATA_AS(
    task->getContext(),        // pointer to either IExecutionContext or IExecutionStateInfo
    GraphContextCacheOverride, // the type of the data to retrieve
    task->getUpstreamPath(),   // node path
    tokens::kInstanceContext   // key to use as a lookup in the node's key/value datastore
    );

if (data)
{
    GraphContextCacheOverride* item = data.value();
    // ...
}
else
{
    omni::core::Result badResult = data.error(); // e.g. kResultNotFound (see docs)
    // ...
}
```

An alternative usage of the API can be seen here:

```c++
auto data = OMNI_GRAPH_EXEC_GET_NODE_DATA_AS(
    task->getContext(),        // pointer to either IExecutionContext or IExecutionStateInfo
    GraphContextCacheOverride, // the type of the data to retrieve
    task->getUpstreamPath(),   // node path
    tokens::kInstanceContext   // key to use as a lookup in the node's key/value datastore
    ).data();                  // will throw an exception if the result is unexpected
```

Above, by not checking if the `omni::expected` has an unexpected value, `omni::expected` will throw an exception. This exception can be caught by the developer. If the exception is not caught, it will eventually reach an ABI boundary, call `std::unexpected()`, and terminate the program. Such a strategy is useful when the missing node data represents an unexpected state in the program.

## Exceptions

EF does not use exceptions to report errors; rather, it uses the error reporting strategies outlined above. This fact introduces two questions developers may ask:

- Can I use exceptions in my EF plugin?
- What happens if I throw an exception and don't catch it?

Developers are free to use exceptions in their plugins. However, if an exception crosses an ABI boundary (i.e., escapes a function postfixed with `_abi`), the following will happen:

- The C++ runtime will invoke `std::unexpected()`, which by default calls `std::terminate()`.
- In Omniverse applications, `std::terminate()` has been set to be handled by Omniverse's Carbonite Crash Reporter. The reporter will generate a `.dmp` file for later inspection, print out a stack trace, upload the `.dmp` to Omniverse's crash aggregation system, and produce telemetry describing the context of the crash.

In short, developers should feel free to use exceptions. If an exception can be handled, it should be caught and appropriate cleanup actions performed.
If an exception represents an undefined state, it can be ignored so that it is reported by the crash reporting system, which will terminate the ill-defined application.
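The guidance above — catch what you can handle, and let truly undefined states terminate — can be pictured with a small language-neutral sketch (Python here; the function names are illustrative and not EF API):

```python
def risky_io():
    # stand-in for a node task that makes an I/O request which may fail
    raise TimeoutError("upstream asset server did not respond")


def node_execute_boundary():
    # Stand-in for a function at an ABI-style boundary. Recoverable errors are
    # caught inside the boundary and converted to a failure status instead of
    # being allowed to escape.
    try:
        risky_io()
        return "success"
    except TimeoutError:
        # we know how to handle this: report failure, leave state well-defined
        return "failure"
    # any other exception type propagates out of this function, mirroring an
    # exception escaping an _abi function and terminating via the crash reporter


result = node_execute_boundary()
```

The key point is that the boundary function converts *expected* failures into a status value, while letting *unexpected* ones escape so the crash reporting machinery can capture the ill-defined state.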
event_streams.md
# Event streams

## API/design overview

The singleton `IEvents` interface is used to create `IEventStream` objects. Whenever an event is pushed into an event stream, the **immediate** callback is triggered, and the event stream stores the event in the internal event queue. Events can then be popped from the queue one by one, or all at once (also called a pump), at which point **deferred** callbacks are triggered. The event stream owner typically controls where this pumping happens.

Event consumers can subscribe to both immediate (push) and deferred (pop) callbacks. Subscription functions return an `ISubscription` object, which usually unsubscribes automatically upon destruction. Callbacks are wrapped in an `IEventListener` class that allows context to be bound to the subscription; when triggered, the callback receives an `IEvent` parameter describing the event that triggered it. `IEvent` contains the event type, sender id, and a custom payload, which is stored as a `carb.dictionary` item.

## Recommended usage

The events subsystem is flexible, and the following recommendations are intended to help with the most frequent use-cases, as well as provide clarifications on specific parts of the events logic.

### Deferred callbacks

As opposed to immediate callback invocation, the recommended way of using event streams is through the deferred callback mechanism, unless immediate callbacks are absolutely necessary. When an event is pushed into an event stream, it is fairly common that the subsequent immediate callback is not a safe place to modify, or even read, related data outside the event payload. To avoid corruption, it is recommended to use deferred callbacks, which will be triggered at a place the event stream owner deemed safe.
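The push/pop behavior described above can be modeled with a minimal stand-in. This is pure Python for illustration only — it is not the real `carb.events` API, just a toy showing when immediate versus deferred callbacks fire:

```python
class MiniEventStream:
    # Toy model of an event stream: push() triggers immediate callbacks and
    # queues the event; pump() pops queued events and triggers deferred callbacks.
    def __init__(self):
        self._queue = []
        self._push_subs = []  # immediate subscribers
        self._pop_subs = []   # deferred subscribers

    def subscribe_to_push(self, cb):
        self._push_subs.append(cb)

    def subscribe_to_pop(self, cb):
        self._pop_subs.append(cb)

    def push(self, event):
        for cb in self._push_subs:
            cb(event)          # immediate: runs at push time
        self._queue.append(event)

    def pump(self):
        while self._queue:
            event = self._queue.pop(0)
            for cb in self._pop_subs:
                cb(event)      # deferred: runs wherever the owner pumps


stream = MiniEventStream()
order = []
stream.subscribe_to_push(lambda e: order.append(("immediate", e)))
stream.subscribe_to_pop(lambda e: order.append(("deferred", e)))
stream.push("evt")  # immediate callback fires here; the event is queued
stream.pump()       # deferred callback fires here, at a point the owner chose
```

The design point is the gap between `push()` and `pump()`: immediate callbacks run in whatever state the pusher happens to be in, while deferred callbacks run at a point the stream owner has deemed safe.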
### Event types

Each event contains an event type, which is set upon pushing the event into the stream, and can be specified when a consumer subscribes to an event stream. This can be used to narrow down the number of callback invocations, which is especially important when listening to the event stream from the scripting language.

It is recommended to use string hashes as event types, as this helps avoid managing event type allocation when multiple sources can push events into the same stream. In C++, use `CARB_EVENTS_TYPE_FROM_STR`, which provides a 64-bit FNV-1a hash computed at compile time, or its run-time counterpart, `carb::events::typeFromString`. In Python, `carb.events.type_from_string` can be used.

An important event stream design choice: either create multiple event streams, each serving a fairly limited number of event types, or create one single event stream serving many different event types. The latter approach is more akin to an event bus with many producers and consumers. Event buses are useful when designing a system that is easily extendable.

### Transient subscriptions

If you want to implement a deferred action triggered by some event, instead of subscribing to the event on startup and then checking the action queue on each callback trigger, consider using a transient subscription. This approach involves subscribing to the event stream only after you have a specific instance of an action you want to execute in a deferred manner. When the event callback subscription is triggered, you execute the action and immediately unsubscribe, so you don't leave an empty callback ticking unconditionally each time the event happens. A transient subscription can also include a simple counter, so that you execute your code only on the Nth event, not necessarily on the next one.
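A transient subscription with a counter might be sketched as follows. A tiny stand-in stream is used here so the example is self-contained; the real pattern would use `create_subscription_to_pop` and keep/drop the returned subscription object:

```python
class StandInStream:
    # Minimal stand-in for an event stream supporting subscribe/unsubscribe.
    def __init__(self):
        self._subs = []

    def subscribe(self, cb):
        self._subs.append(cb)
        # return an "ISubscription"-like handle that unsubscribes when called
        return lambda: self._subs.remove(cb)

    def pump(self, event):
        for cb in list(self._subs):
            cb(event)


def do_on_nth_event(stream, n, action):
    # Transient subscription: subscribe only once a concrete deferred action
    # exists, run it on the Nth event, then immediately unsubscribe.
    count = 0

    def on_event(event):
        nonlocal count
        count += 1
        if count == n:
            action(event)
            unsubscribe()  # nothing keeps ticking after the action runs

    unsubscribe = stream.subscribe(on_event)


stream = StandInStream()
fired = []
do_on_nth_event(stream, 2, lambda e: fired.append(e))
stream.pump("a")  # not yet: first event
stream.pump("b")  # action fires, then the subscription removes itself
stream.pump("c")  # no callback runs anymore
```

After the Nth event, the stream has no subscribers left, which is exactly the property the section recommends: no empty callback invoked unconditionally on every subsequent event.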
# Code examples

## Subscribe to Shutdown Events

```python
# App/Subscribe to Shutdown Events
import carb.events
import omni.kit.app

# Stream where app sends shutdown events
shutdown_stream = omni.kit.app.get_app().get_shutdown_event_stream()


def on_event(e: carb.events.IEvent):
    if e.type == omni.kit.app.POST_QUIT_EVENT_TYPE:
        print("We are about to shutdown")


sub = shutdown_stream.create_subscription_to_pop(on_event, name="name of the subscriber for debugging", order=0)
```

## Subscribe to Update Events

```python
# App/Subscribe to Update Events
import carb.events
import omni.kit.app

update_stream = omni.kit.app.get_app().get_update_event_stream()


def on_update(e: carb.events.IEvent):
    print(f"Update: {e.payload['dt']}")


sub = update_stream.create_subscription_to_pop(on_update, name="My Subscription Name")
```

## Create custom event

```python
# App/Create Custom Event
import carb.events
import omni.kit.app

# An event type is a unique integer id. Create it from a string by hashing, using a helper function.
# [ext name].[event name] is a recommended naming convention:
MY_CUSTOM_EVENT = carb.events.type_from_string("omni.my.extension.MY_CUSTOM_EVENT")

# App provides a common event bus. It is an event queue which is popped every update (frame).
bus = omni.kit.app.get_app().get_message_bus_event_stream()


def on_event(e):
    print(e.type, e.type == MY_CUSTOM_EVENT, e.payload)


# Subscribe to the bus. Keep subscription objects (sub1, sub2) alive for the subscriptions to work.

# The push subscription is called immediately when the event is pushed
sub1 = bus.create_subscription_to_push_by_type(MY_CUSTOM_EVENT, on_event)

# The pop subscription is called on the next update
sub2 = bus.create_subscription_to_pop_by_type(MY_CUSTOM_EVENT, on_event)

# Push an event to the bus with a custom payload
bus.push(MY_CUSTOM_EVENT, payload={"data": 2, "x": "y"})
```
example-multiple-projects-in-a-repo_index.md
# Example: Multiple Projects in a Repo This is an example of a nested documentation project. This project was defined as follows in `repo.toml`: ```toml [repo_docs.projects.nested-project] # example-begin version_selector_enabled version_selector_enabled = false # example-end version_selector_enabled name_in_nav_bar_enabled = true enhanced_search_enabled = false # example-begin solr-search # enable the use of solr search solr_search_enabled = true solr_search_site = "https://docs.nvidia.com" solr_search_path = "/cuda" # example-end solr-search # example-begin temporary-links temporary_links = [ { source = "../repo_docs-link-example", link_path = "tmp" } ] # example-end temporary-links # docs_root should be redefined per-project docs_root = "examples/nested-project" # most keys can be redefined. if a key is not redefined, it inherits the key's value # from the root [repo_docs] table. name = "Example: Nested Project" # we want to link back to repo_docs from this build so we add it as a dependency deps = [ [ "repo_docs", "_build/docs/repo_docs/latest" ], ] ``` See [Defining Multiple Projects](../../repo_docs/0.51.4/docs/Projects.html#multiple-projects-overview) for more information on defining, building, and publishing sub-projects.
example-project-with-extra-builds_index.md
# Example: Project with Extra Builds This is an example of a project (i.e. “project-with-extra-builds” in `repo.toml`) that defines multiple builds. The project defines two builds: - public - internal ```toml # this defines the "public" build [repo_docs.projects.project-with-extra-builds] docs_root = "examples/project-with-extra-builds" name = "Example: Project with Extra Builds" # we don't want "internal-only.rst" in the public build sphinx_exclude_patterns = [ "internal-only.rst", "tools" ] # we want to link back to repo_docs from this build so we add it as a dependency deps = [ ["repo_docs", "_build/docs/repo_docs/latest"], ] # this defines the "internal" build [repo_docs.projects.project-with-extra-builds.builds.internal] # settings are inherited from the "public" build, but can be redefined as we # do with 'name' here: name = "Example: Project with Extra Builds (Internal)" # reset the exclude patterns so that "internal-only.rst" isn't excluded sphinx_exclude_patterns = [ "tools" ] ``` Above, “public” does not need to be specified because it is considered the default build. Snippets of documentation can be conditionally included based on the build. Consider the following example: ```rst .. ifconfig:: build_name in ('internal') .. note:: This text will only appear in the "internal" build of the documentation. .. ifconfig:: build_name in ('public') .. note:: This text will only appear in the "public" build of the documentation. ``` The snippet above produces the following note in this build of the documentation: > **Note** > This text will only appear in the “public” build of the documentation. For more information on defining multiple builds, see [Multiple Builds](#).
Example.md
# Examples

## Simplified submenu creation with build_submenu_dict

This creates a dictionary of lists from the `name` paths in MenuItemDescription, expanding the path and creating (multiple, if required) sub_menu lists. The last item on the path is assumed not to be a sub_menu item.

```python
menu_dict = omni.kit.menu.utils.build_submenu_dict([
    MenuItemDescription(name="File/Open"),
    MenuItemDescription(name="Edit/Select/Select by kind/Group"),
    MenuItemDescription(name="Window/Viewport/Viewport 1"),
    MenuItemDescription(name="Help/About"),
])
```

## Using add_menu_items

```python
for group in menu_dict:
    omni.kit.menu.utils.add_menu_items(menu_dict[group], group)
```

## Using remove_menu_items

```python
for group in menu_dict:
    omni.kit.menu.utils.remove_menu_items(menu_dict[group], group)
```

## Another example: adding a menu with a submenu for your extension

```python
from omni.kit.menu.utils import MenuItemDescription
import carb.input

def on_startup(self, ext_id):
    self._file_menu_list = [
        MenuItemDescription(
            name="Sub Menu Example",
        )
    ]
```

```python
import carb
import asyncio
import omni.ext
import omni.ui as ui
import omni.kit.menu.utils
from omni.kit.menu.utils import MenuItemDescription
from .window import ExampleWindow


class TestMenu(omni.ext.IExt):
    """The entry point for Example Extension"""

    WINDOW_NAME = "Example"
    MENU_DESCRIPTION = "Example Window"
    MENU_GROUP = "TEST"

    def on_startup(self):
        print(f"[{self.__class__.__name__}] on_startup")
        ui.Workspace.set_show_window_fn(TestMenu.WINDOW_NAME, lambda v: self.show_window(None, v))
        self._menu_entry = [MenuItemDescription(
            name=TestMenu.MENU_DESCRIPTION,
            ticked=True,  # menu item is ticked
            ticked_fn=self._is_visible,  # gets called when the menu needs to get the state of the ticked menu
            onclick_fn=self._toggle_window
        )]
        omni.kit.menu.utils.add_menu_items(self._menu_entry, name=TestMenu.MENU_GROUP)
        ui.Workspace.show_window(TestMenu.WINDOW_NAME)

    def on_shutdown(self):
        print(f"[{self.__class__.__name__}] on_shutdown")
        omni.kit.menu.utils.remove_menu_items(self._menu_entry, name=TestMenu.MENU_GROUP)
        self._menu_entry = None
        ui.Workspace.set_show_window_fn(TestMenu.WINDOW_NAME, None)
        if self._window:
            self._window.destroy()
            self._window = None

    async def _destroy_window_async(self):
        print(f"[{self.__class__.__name__}] _destroy_window_async")
        # wait one frame, this is due to the one frame defer
        # in Window::_moveToMainOSWindow()
        await omni.kit.app.get_app().next_update_async()
        if self._window:
            self._window.destroy()
            self._window = None

    def _is_visible(self) -> bool:
        print(f"[{self.__class__.__name__}] _is_visible returning {False if self._window is None else self._window.visible}")
        return False if self._window is None else self._window.visible

    def _show(self):
        print(f"[{self.__class__.__name__}] _show")
        if self._window is None:
            self.show_window(None, True)
        if self._window and not self._window.visible:
            self.show_window(None, True)

    def _hide(self):
        print(f"[{self.__class__.__name__}] _hide")
        if self._window is not None:
            self.show_window(None, False)

    def _toggle_window(self):
        print(f"[{self.__class__.__name__}] _toggle_window")
        if self._is_visible():
            self._hide()
        else:
            self._show()

    def _visiblity_changed_fn(self, visible):
        print(f"[{self.__class__.__name__}] _visiblity_changed_fn")
        if not visible:
            # Destroy the window, since we are creating new window
            # in show_window
            asyncio.ensure_future(self._destroy_window_async())
        # this only tags test menu to update when menu is opening, so it
        # doesn't matter that it is called before window has been destroyed
        omni.kit.menu.utils.refresh_menu_items(TestMenu.MENU_GROUP)

    def show_window(self, menu, value):
        print(f"[{self.__class__.__name__}] show_window menu:{menu} value:{value}")
        if value:
            self._window = ExampleWindow()
            self._window.set_visibility_changed_listener(self._visiblity_changed_fn)
        elif self._window:
            self._window.visible = False
```

## Window class

```python
import omni.ui as ui


class ExampleWindow(ui.Window):
    """The Example window"""

    def __init__(self, usd_context_name: str = ""):
        print(f"[{self.__class__.__name__}] __init__")
        super().__init__("Example Window", width=300, height=300)
        self._visiblity_changed_listener = None
        self.set_visibility_changed_fn(self._visibility_changed_fn)

    def destroy(self):
        """
        Called by extension before destroying this object. It doesn't happen automatically.
        Without this hot reloading doesn't work.
        """
        print(f"[{self.__class__.__name__}] destroy")
        self._visiblity_changed_listener = None
        super().destroy()

    def _visibility_changed_fn(self, visible):
        print(f"[{self.__class__.__name__}] _visibility_changed_fn visible:{visible}")
        if self._visiblity_changed_listener:
            self._visiblity_changed_listener(visible)

    def set_visibility_changed_listener(self, listener):
        print(f"[{self.__class__.__name__}] set_visibility_changed_listener listener:{listener}")
        self._visiblity_changed_listener = listener
```

---
example.python_ext.Classes.md
# example.python_ext Classes ## Classes Summary - [HelloPythonExtension](./example.python_ext/example.python_ext.HelloPythonExtension.html)
example.python_ext.Functions.md
# example.python_ext Functions ## Functions Summary: - [some_public_function](./example.python_ext.Functions.html)
example.python_ext.HelloPythonExtension.md
# HelloPythonExtension

## HelloPythonExtension

```python
class example.python_ext.HelloPythonExtension
```

Bases: `omni.ext._extensions.IExt`

### Methods

| Method | Description |
| ------ | ----------- |
| `on_shutdown()` | |
| `on_startup(ext_id)` | |

```python
def __init__(self: omni.ext._extensions.IExt) -> None
```
example.python_ext.md
# example.python_ext ## Submodules Summary: | Module | Description | |--------|-------------| | example.python_ext.python_ext | No submodule docstring provided | ## Classes Summary: | Class | Description | |-------|-------------| | HelloPythonExtension | | ## Functions Summary: | Function | Description | |----------|-------------| | some_public_function | |
example.python_ext.python_ext.Functions.md
# example.python_ext.python_ext Functions ## Functions Summary | Function Name | |---------------| | [some_public_function](example.python_ext.python_ext/example.python_ext.python_ext.some_public_function.html) |
example.python_ext.python_ext.HelloPythonExtension.md
# HelloPythonExtension

## HelloPythonExtension

Bases: `omni.ext._extensions.IExt`

### Methods

| Method | Description |
|--------|-------------|
| `on_shutdown()` | |
| `on_startup(ext_id)` | |

```python
def __init__(self: omni.ext._extensions.IExt) -> None:
    pass
```
example.python_ext.python_ext.md
# example.python_ext.python_ext

## Classes Summary

- HelloPythonExtension

## Functions Summary

- some_public_function
example.python_ext.python_ext.some_public_function.md
# some_public_function ## some_public_function
example.python_ext.some_public_function.md
# some_public_function

## some_public_function

```python
example.python_ext.some_public_function(x: int)
```
example.python_ext.Submodules.md
# example.python_ext Submodules ## example.python_ext.python_ext No submodule docstring provided
ExampleBakery.md
# Integrating an Authoring Layer

In this article, a toy example using the Execution Framework is used to describe an online bakery. While the simplistic subject matter of the example is contrived, the concepts demonstrated in the example have real-world applications. The article is structured such that the example starts simple, and new concepts are introduced piecemeal.

## The Authoring Layer

The Execution Framework, in particular the execution graph, is a common language to describe execution across disparate software components. It is the job of each component (or an intermediary) to populate the execution graph based on some internal description. We call this per-component, internal description the authoring layer. It is common to have multiple different authoring layers contribute to a single execution graph.

This example demonstrates a single authoring layer that describes several online bakeries. The data structures used by this authoring layer are as follows:

```c++
struct BakedGood
{
    unsigned int bakeMinutes;
    std::string name;
};

struct Order
{
    std::string customer;
    std::vector<BakedGood> bakedGoods;
};

struct Bakery
{
    std::string name;
    std::vector<Order> orders;
};
```

The example starts by describing two bakeries at the authoring layer:

```c++
std::vector<Bakery> bakeries
{
    Bakery
    {
        "The Pie Hut", // bakery name
        {
            Order
            {
                "Tracy", // customer
                {
                    BakedGood{ 20, "applePie" },
                    BakedGood{ 30, "chickenPotPie" }
                }
            },
            Order
            {
                "Kai", // customer
                {
                    BakedGood{ 22, "peachPie" },
                }
            }
        }
    },
    Bakery
    {
        "Sam's Bakery", // bakery name
        {
            Order
            {
                "Alex", // customer
                {
                    BakedGood{ 20, "blueberryPie" },
                }
            }
        }
    }
};
```

## Setting Up the Execution Graph

With the authoring layer defined, the following code is then used to populate the execution graph based on the authoring layer description:

```c++
// this example manually creates an execution graph. in most applications (e.g.
// kit-based applications) this will already be created for you
GraphPtr graph = Graph::create("exec.graph");

// as mentioned above, the builder context, pass pipeline, and builder will likely already be created for you in
// real-world scenarios.
GraphBuilderContextPtr builderContext{ GraphBuilderContext::create(graph, PassPipeline::create()) };
GraphBuilderPtr builder{ GraphBuilder::create(builderContext) };

// ef relies on the user to maintain a reference (i.e. a call to omni::core::Object::acquire()) on each node in a
// graph definition. this can be done by simply holding an array of NodePtr objects in your definition. in this
// case, since we're populating the top-level graph definition, we simply store the NodePtrs here.
std::vector<NodePtr> nodes;

for (auto& bakery : bakeries) // for each bakery
{
    auto node = Node::create(
        graph, // this makes the node a part of the execution graph's top-level graph definition
        BakeryGraphDef::create(builder, bakery), // bakery's definition (i.e. work description)
        carb::fmt::format("node.bakery.{}", bakery.name)
    );

    // connect the bakery to the root of the execution graph's definition so that it will be executed. only nodes
    // in a graph definition that can reach the definition's root node will be executed.
    builder->connect(graph->getRoot(), node);

    nodes.emplace_back(std::move(node));
}
```

The execution graph can be visualized as follows:

*(The original page renders a large auto-generated Mermaid flowchart here. It shows the top-level graph definition with two nodes, `node.bakery.The Pie Hut` and `node.bakery.Sam's Bakery`, each pointing to its own `def.bakery` graph definition containing pre-heat-oven, per-baked-good prepare (gather then assemble), bake, ship, and turn-off-oven nodes.)*

## Figure 20

The execution graph showing both bakeries. Arrows with solid lines represent orchestration ordering while arrows with dotted lines represent the definition a node is using.

You can see the execution graph has several types of entities:

- **Nodes** are represented by rounded boxes. Their name starts with "node.".
- **Opaque Definitions** are represented by angled boxes. Their name starts with "def.".
- **Graph Definitions** are represented by shaded boxes. Their name, at the top of the box, starts with "def.".
- **Root Nodes** are represented by circles. Their name is not shown.
- **Edges**, represented by an arrow with a solid line, show the orchestration ordering between nodes.
- Each node points to a definition, either an opaque definition or a graph definition. This relationship is represented by an arrow with a dotted line.

Note, definitions can be pointed to by multiple nodes, though this example does not utilize the definition sharing feature of EF.

To simplify the example, this article focuses on a single bakery. Below, you can see a visualization of only The Pie Hut's graph definition:

*(The original page renders another auto-generated Mermaid flowchart here, showing only The Pie Hut's `def.bakery` graph definition: the root fans out to the pre-heat-oven node and a `def.bakedGood.prepare` subgraph (gather then assemble) per baked good; the pre-heat and prepare nodes feed the per-pie bake nodes, which in turn feed the ship nodes for Tracy and Kai and the final turn-off-oven node.)*

## Building a Graph Definition

When creating the execution graph, most of the work is done in `BakeryGraphDef`, which is defined as follows:

```cpp
class BakeryGraphDef : public NodeGraphDef // NodeGraphDef is an ef provided implementation of INodeGraphDef
{
public:
    // implementations of ef interfaces are encouraged to define a static create() method. this method returns an
    // ObjectPtr which correctly manages the reference count of the returned object.
    //
    // when defining api methods like create(), the use of ObjectParam<>& to accept ONI objects is encouraged. below,
    // ObjectParam<IGraphBuilder> is a light-weight object that will accept either a raw IGraphBuilder* or a
    // GraphBuilderPtr.
    static omni::core::ObjectPtr<BakeryGraphDef> create(
        omni::core::ObjectParam<IGraphBuilder> builder,
        const Bakery& bakery) noexcept
    {
        // the pattern below of creating a graph definition followed by calling build() is common. in libraries like
        // OmniGraph, all definitions are subclassed from a public interface that specifies a build_abi() method. since
        // the build method is virtual (in OG, not here), calling build_abi() during the constructor would likely lead
        // to incorrect behavior (i.e. calling a virtual method on an object that isn't fully instantiated is an
        // anti-pattern). by waiting to call build() after the object is fully instantiated, as below, the proper
        // build_abi() will be invoked.
        auto def = omni::core::steal(new BakeryGraphDef(builder->getGraph(), bakery));
        def->build(builder);
        return def;
    }

    // build() (usually build_abi()) is a method often seen in ef definitions. it usually serves two purposes:
    //
    // - build the graph definition's graph
    //
    // - update the graph definition's graph when something has changed in the authoring layer
    //
    // note, this example doesn't consider updates to the authoring layer.
    void build(omni::core::ObjectParam<IGraphBuilder> parentBuilder) noexcept
    {
        // when building a graph definition, a *dedicated* builder must be created to handle connecting nodes and
        // setting the node's definitions.
        //
        // below, notice the use of 'auto'. use of auto is highly encouraged in ef code. in many ef methods, it is
        // unclear if the return type is either a raw pointer or a smart ObjectPtr. by using auto, the caller doesn't
        // need to care and "The Right Thing (TM)" will happen.
        auto builder{ GraphBuilder::create(parentBuilder, this) };

        // when using the build method to respond to updates in the authoring layer, we clear out the old nodes (if
        // any). a more advanced implementation may choose to reuse nodes to avoid memory thrashing.
        m_nodes.clear();

        if (m_bakery.orders.empty())
        {
            // no orders to bake. don't turn on the oven.
            return; // LCOV_EXCL_LINE
        }

        m_preheatOven = Node::create(
            getTopology(), // each node must be a part of a single topology. getTopology() returns this def's topology.
            PreHeatOvenNodeDef::create(m_bakery),
            carb::fmt::format("node.bakery.{}.preHeatOven", m_bakery.name)
        );

        // connecting nodes in a graph must go through the GraphBuilder object created to construct this graph
        // definition
        builder->connect(getRoot(), m_preheatOven);

        m_turnOffOven = Node::create(
            getTopology(),
            TurnOffOvenNodeDef::create(m_bakery),
            carb::fmt::format("node.bakery.{}.turnOffOven", m_bakery.name)
        );

        for (auto& order : m_bakery.orders) // for each order
        {
            if (!order.bakedGoods.empty()) // make sure the order isn't empty
            {
                auto ship = Node::create(
                    getTopology(),
                    ShipOrderNodeDef::create(order),
                    carb::fmt::format("node.bakedGood.ship.{}", order.customer)
                );

                for (const BakedGood& bakedGood : order.bakedGoods) // for each item in the order
                {
                    auto prepare = Node::create(
                        getTopology(),
                        PrepareBakedGoodGraphDef::create(builder, bakedGood),
                        carb::fmt::format("node.bakedGood.prepare.{}", bakedGood.name)
                    );

                    auto bake = Node::create(
                        getTopology(),
                        NodeDefLambda::create( // NodeDefLambda is an opaque def which uses a lambda to perform work
                            "def.bakedGood.bake",
                            [&bakedGood = bakedGood](ExecutionTask& info)
```

## Node Definitions

A `NodeDef` (e.g. `PreHeatOvenNodeDef`) is useful when:

- The developer wishes to provide additional methods on the definition.
- The opaque definition needs to store authoring data whose ownership and lifetime can’t be adequately captured in the lambda provided to `NodeDefLambda`.

## Graph Definitions

The second type of definition seen in Figure 21 is the graph definition. Graph definitions are represented by shaded boxes. Each graph definition has a root node, represented by a circle. In Figure 21, there is one type of graph definition: `PrepareBakedGoodGraphDef`.
Here, you can see the code behind `PrepareBakedGoodGraphDef`:

### Listing 43: An example of a graph definition used to prepare a baked good.

```cpp
class PrepareBakedGoodGraphDef : public NodeGraphDefT<INodeGraphDef, INodeGraphDefDebug, IPrivateBakedGoodGetter>
{
public:
    static omni::core::ObjectPtr<PrepareBakedGoodGraphDef> create(
        omni::core::ObjectParam<IGraphBuilder> builder,
        const BakedGood& bakedGood) noexcept
    {
        auto def = omni::core::steal(new PrepareBakedGoodGraphDef(builder->getGraph(), bakedGood));
        def->build(builder);
        return def;
    }

    void build(omni::core::ObjectParam<IGraphBuilder> parentBuilder) noexcept
    {
        auto builder = GraphBuilder::create(parentBuilder, this);

        m_nodes.clear();

        auto gather = Node::create(
            getTopology(),
            NodeDefLambda::create("def.bakedGood.gatherIngredients",
                [this](ExecutionTask& info)
                {
                    log("gather ingredients for {}", m_bakedGood.name);
                    return Status::eSuccess;
                },
                SchedulingInfo::eParallel
            ),
            carb::fmt::format("node.bakedGood.gather.{}", m_bakedGood.name)
        );

        auto assemble = Node::create(
```

`PrepareBakedGoodGraphDef` creates a simple graph with two opaque nodes, one which gathers all of the ingredients of the baked good and another which assembles the baked good.

## Population Passes

A powerful feature of EF is [passes](PassConcepts.html#ef-pass-concepts). Passes are user created chunks of code that transform the graph during [graph construction](PassConcepts.html#ef-pass-concepts). As an example, an oft performed transformation is one in which a generic graph definition is replaced with an optimized user defined graph definition. [Figure 21](#ef-figure-bakery-simplified) shows `node.bakedGood.prepare.chickenPotPie` has a fairly generic graph definition. An opportunity exists to replace this definition with one which can prepare ingredients in parallel.
To do this, a [population pass](api/classomni_1_1graph_1_1exec_1_1unstable_1_1IPopulatePassE.html#_CPPv4N4omni5graph4exec8unstable13IPopulatePassE) is used:

```cpp
class PopulateChickenPotPiePass : public omni::graph::exec::unstable::Implements<IPopulatePass>
```

### Listing 45: The `IPrivateBakedGoodGetter` interface allows the bakery library to safely access private implementation details via EF.

```cpp
class IPrivateBakedGoodGetter : public omni::core::Inherits<IBase, OMNI_TYPE_ID("example.IPrivateBakedGoodGetter")>
{
public:
    virtual const BakedGood& getBakedGood() const noexcept = 0;
};
```

`IPrivateBakedGoodGetter` is an example of a *private interface*. Private interfaces are often used in graph construction and graph execution to safely access non-public implementation details.

To understand the need for private interfaces, consider what the `run_abi()` method is doing in Listing 44. The purpose of the method is to replace the generic “prepare” graph definition with a higher-fidelity graph definition specific to the preparation of chicken pot pie. In order to build that new definition, parameters in the chicken pot pie’s `BakedGood` object are needed.

Therein lies the problem: EF has no concept of a “baked good”. The population pass is only given a pointer to an `INode` EF interface. With that pointer, the pass is able to get yet another EF interface, `IDef`. Neither of these interfaces has a clue what a `BakedGood` is.

So, how does one go about getting a `BakedGood` from an `INode`? The answer lies in the type casting mechanism Omniverse Native Interfaces provides. The idea is simple: when creating a definition (e.g. `PrepareBakedGoodGraphDef` or `ChickenPotPieGraphDef`), do the following:

1. Store a reference to the baked good on the definition.
2. In addition to implementing the `INodeGraphDef` interface, also implement the `IPrivateBakedGoodGetter` private interface.
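Stripped of all EF and ONI machinery, the two steps above amount to the following framework-free sketch. `ChickenPotPieGraphDefSketch` is an illustrative name invented for this example, not EF's actual class; a real definition would also inherit from `NodeGraphDefT` and implement the full interface:

```cpp
#include <cassert>
#include <string>

// stand-in for the library's authoring data (the real BakedGood also holds
// ingredients, bake times, etc.)
struct BakedGood
{
    std::string name;
};

// the private getter interface (step 2); mirrors Listing 45 minus the ONI machinery
struct IPrivateBakedGoodGetter
{
    virtual const BakedGood& getBakedGood() const noexcept = 0;
    virtual ~IPrivateBakedGoodGetter() = default;
};

// a definition-like class that stores a reference to the baked good (step 1)
// and implements the private getter (step 2)
struct ChickenPotPieGraphDefSketch : IPrivateBakedGoodGetter
{
    explicit ChickenPotPieGraphDefSketch(const BakedGood& bakedGood) : m_bakedGood(bakedGood)
    {
    }

    const BakedGood& getBakedGood() const noexcept override
    {
        return m_bakedGood;
    }

private:
    const BakedGood& m_bakedGood; // the definition does not own the authoring data
};
```

With this in place, code that only holds a pointer to the base definition type can recover the `BakedGood` by casting to `IPrivateBakedGoodGetter` — which is exactly what EF's `cast()` does for real definitions.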
To demonstrate the latter point, consider the code used to define the `ChickenPotPieGraphDef` class used in the population pass:

```cpp
class ChickenPotPieGraphDef : public NodeGraphDefT<INodeGraphDef, INodeGraphDefDebug, IPrivateBakedGoodGetter>
```

`NodeGraphDefT` is an implementation of `INodeGraphDef` and `INodeGraphDefDebug`. `NodeGraphDefT`’s template arguments allow the developer to specify all of the interfaces the subclass will implement, including interfaces EF has no notion of (e.g. `IPrivateBakedGoodGetter`). Recall, much like `ChickenPotPieGraphDef`, `PrepareBakedGoodGraphDef`’s implementation also inherits from `NodeGraphDefT` and specifies `IPrivateBakedGoodGetter` as a template argument.

To access the `BakedGood` from the given `INode`, the population pass calls `omni::graph::exec::unstable::cast()` on the node’s definition. If the definition implements `IPrivateBakedGoodGetter`, a valid pointer is returned, on which `getBakedGood()` can be called. With the `BakedGood` in hand, it can be used to create the new `ChickenPotPieGraphDef` graph definition, which the builder attaches to the node, replacing the generic graph definition.

### Why Not Use dynamic_cast?

A valid question that may arise from the code above is, “Why not use `dynamic_cast`?” There are two things to note about `dynamic_cast`:

1. `dynamic_cast` is not ABI-safe. Said differently, different compilers may choose to implement its ABI differently.
2. `dynamic_cast` relies on runtime type information (RTTI). When compiling C++, RTTI is an optional feature that can be disabled.

In Listing 44, `run_abi()`’s `INode` pointer (and its attached `IDef`) may point to an object implemented by a DLL different than the DLL that implements the population pass. That means if `dynamic_cast` is called on the pointer, the compiler will assume the pointer utilizes its `dynamic_cast` ABI contract.
However, since the pointer is from another DLL, possibly compiled with a different ABI and compiler settings, that assumption may be bad, leading to undefined behavior.

In contrast to `dynamic_cast`, `omni::graph::exec::unstable::cast()` is ABI-safe due to its use of Omniverse Native Interfaces.

## ONI (Omniverse Native Interfaces)

ONI defines iron-clad, ABI-safe contracts that work across different compiler tool chains. Calling `omni::graph::exec::unstable::cast()` on an ONI object (e.g. `IDef`) has predictable behavior regardless of the compiler and compiler settings used to compile the object.

## Private vs. Public Interfaces

ONI objects are designed to be ABI-safe. However, it is clear that `IPrivateBakedGoodGetter` is not ABI-safe. `getBakedGood()` returns a reference, which is not allowed by ONI. Furthermore, the returned object, `BakedGood`, is also not ABI-safe due to its use of complex C++ objects like `std::string` and `std::vector`.

Surprisingly, since `IPrivateBakedGoodGetter` is a private interface that is defined, implemented, and only used within a single DLL, the interface can violate ONI’s ABI rules, because the ABI will be consistent once `omni::graph::exec::unstable::cast()` returns a valid `IPrivateBakedGoodGetter`. If `IPrivateBakedGoodGetter` could be implemented by other DLLs, this scheme would not work due to the ambiguities of the C++ ABI across DLL borders.

Private interfaces allow developers to safely access private implementation details (e.g. `BakedGood`) as long as the interfaces are truly private.

The example above illustrates a common EF pattern: a library embeds implementation-specific data in either nodes or definitions, and then defines passes which cast the generic EF `INode` and `IDef` pointers to a private interface to access the private data.

But what if the developer does not want to be limited to accessing this data in a single DLL?
What if the developer wants to allow other developers, authoring their own DLLs, to access this data? In the bakery example, such a system would allow anyone to create DLLs that implement passes which can optimize the production of any baked good.

The answer to these questions is public interfaces. Unlike private interfaces, public interfaces are designed to be implemented by many DLLs. As such, public interfaces must abide by ONI’s strict ABI rules. In the context of the example above, the following public interface can be defined to allow external developers to access baked good information:

### Listing 47: An example of replacing the private `IPrivateBakedGoodGetter` with a public interface. Such an interface allows external developers to access baked good information in novel passes to optimize the bakery.

```cpp
class IBakedGood_abi : public omni::core::Inherits<omni::core::IObject, OMNI_TYPE_ID("example.IBakedGood")>
{
protected:
    virtual uint32_t getBakeMinutes_abi() noexcept = 0;
    virtual const char* getName_abi() noexcept = 0;
};
```

To utilize the public interface, definitions simply need to inherit from it and implement its methods. For example:

### Listing 48: This version of `PrepareBakedGoodGraphDef` is similar to the previous one, but now inherits and implements (not shown) the public interface `IBakedGood` rather than the private `IPrivateBakedGoodGetter`. `IBakedGood` is an API class generated by the *omni.bind* tool which wraps the raw `IBakedGood_abi` into a more friendly C++ class.

```cpp
class PrepareBakedGoodGraphDef : public NodeGraphDefT<INodeGraphDef, INodeGraphDefDebug, IBakedGood>
```

Using public interfaces is often more work, but unlocks the ability for external developers to improve and extend a library’s execution graph. When deciding whether to use a public or private interface, consult the following flowchart.
```mermaid
flowchart TD
S[Start]
external{{Will external devs require data stored in your library to extend and improve your part of the execution graph?}}
data{{Do you need private data for graph construction or execution?}}
public[Create a public interface.]
private[Create a private interface.]
none[No interface is needed.]
S --> external
external -- Yes --> public
external -- No --> data
data -- Yes --> private
data -- No --> none
```

**Figure 22:** Flowchart of when to use public or private interfaces.

## Back to the Population Pass

After [PopulateChickenPotPiePass](#ef-listing-bakery-populatechickenpotpiepass) runs and replaces node *node.bakedGood.prepare.chickenPotPie*’s generic graph definition with a new `ChickenPotPieGraphDef`, the bakery’s definition is as follows:

```mermaid
flowchart LR
00000261A3F90D80(( ))
00000261A3F90D80-->00000261A3F90880
00000261A3F90D80-->00000261A3F91320
00000261A3F90D80-->00000261A3F913C0
00000261A3F90D80-->00000261A3F90600
00000261A3F90880(node.bakery.The Pie Hut.preHeatOven)
00000261A3F90880-.->00000261A3DC4290
00000261A3F90880-->00000261A3F90E20
00000261A3F90880-->00000261A3F90F60
00000261A3F90880-->00000261A3F90740
00000261A3F91320(node.bakedGood.prepare.applePie)
00000261A3F91320-.->00000261A3D767A0
00000261A3F91320-->00000261A3F90E20
00000261A3F913C0(node.bakedGood.prepare.chickenPotPie)
00000261A3F913C0-.->00000261A3D76CE0
00000261A3F913C0-->00000261A3F90F60
00000261A3F90600(node.bakedGood.prepare.peachPie)
00000261A3F90600-.->00000261A3D77160
00000261A3F90600-->00000261A3F90740
00000261A3F90E20(node.bakedGood.bake.applePie)
00000261A3F90E20-.->00000261A3CA6FE0
00000261A3F90E20-->00000261A3F906A0
00000261A3F90E20-->00000261A3F90CE0
00000261A3F90F60(node.bakedGood.bake.chickenPotPie)
00000261A3F90F60-.->00000261A3CA7100
00000261A3F90F60-->00000261A3F906A0
00000261A3F90F60-->00000261A3F90CE0
00000261A3F90740(node.bakedGood.bake.peachPie)
00000261A3F90740-.->00000261A3CA5570
00000261A3F90740-->00000261A3F91000
00000261A3F90740-->00000261A3F90CE0
00000261A3F906A0(node.bakedGood.ship.Tracy)
00000261A3F906A0-.->00000261A3DC44C0
00000261A3F90CE0(node.bakery.The Pie Hut.turnOffOven)
00000261A3F90CE0-.->00000261A3DC3610
00000261A3F91000(node.bakedGood.ship.Kai)
00000261A3F91000-.->00000261A3DC34D0
00000261A3CA5570{{def.bakedGood.bake}}
00000261A3CA6FE0{{def.bakedGood.bake}}
00000261A3CA7100{{def.bakedGood.bake}}
subgraph 00000261A3D767A0[def.bakedGood.prepare]
direction LR
style 00000261A3D767A0 fill:#FAFAFA,stroke:#777777
00000261A3F909C0(( ))
00000261A3F909C0-->00000261A3F90A60
00000261A3F90A60(node.bakedGood.gather.applePie)
00000261A3F90A60-.->00000261A3CA73D0
00000261A3F90A60-->00000261A3F91140
00000261A3F91140(node.bakedGood.assemble.applePie)
00000261A3F91140-.->00000261A3CA6260
end
00000261A3CA6260{{def.bakedGood.assemble}}
00000261A3CA73D0{{def.bakedGood.gatherIngredients}}
subgraph 00000261A3D76CE0[def.bakedGood.pie]
direction LR
style 00000261A3D76CE0 fill:#FAFAFA,stroke:#777777
00000261A3FB08A0(( ))
00000261A3FB08A0-->00000261A3FB0080
00000261A3FB08A0-->00000261A3FAF220
00000261A3FB08A0-->00000261A3FAFCC0
00000261A3FB0080(node.pie.chop.carrots)
00000261A3FB0080-.->00000261A3CA5CC0
00000261A3FB0080-->00000261A3FAFC20
00000261A3FAF220(node.pie.makeCrust.chickenPotPie)
00000261A3FAF220-.->00000261A3CA5F00
00000261A3FAF220-->00000261A3FAFC20
00000261A3FAFCC0(node.pie.cook.chicken)
00000261A3FAFCC0-.->00000261A3CA6020
00000261A3FAFCC0-->00000261A3FAFC20
00000261A3FAFC20(node.pie.assemble.chickenPotPie)
00000261A3FAFC20-.->00000261A1508650
end
00000261A1508650{{def.bakedGood.assemble}}
00000261A3CA5CC0{{def.bakedGood.chop}}
00000261A3CA5F00{{def.bakedGood.makeCrust}}
00000261A3CA6020{{def.bakedGood.cook}}
subgraph 00000261A3D77160[def.bakedGood.prepare]
direction LR
style 00000261A3D77160 fill:#FAFAFA,stroke:#777777
00000261A3F911E0(( ))
00000261A3F911E0-->00000261A3F904C0
00000261A3F904C0(node.bakedGood.gather.peachPie)
00000261A3F904C0-.->00000261A3CA6BF0
00000261A3F904C0-->00000261A3F90560
00000261A3F90560(node.bakedGood.assemble.peachPie)
00000261A3F90560-.->00000261A3CA53C0
end
00000261A3CA53C0{{def.bakedGood.assemble}}
00000261A3CA6BF0{{def.bakedGood.gatherIngredients}}
00000261A3DC34D0{{def.order.ship}}
00000261A3DC3610{{def.oven.turnOff}}
00000261A3DC4290{{def.oven.preHeat}}
00000261A3DC44C0{{def.order.ship}}
```

**Figure 23:** The execution graph after the `PopulateChickenPotPiePass` runs.

Above, you can see *node.bakedGood.prepare.chickenPotPie* now points to a new graph definition which performs tasks such as preparing the crust and cooking the chicken in parallel.

## Populating Graph Definitions

As mentioned earlier, population passes run on either matching node names or matching graph definition names. You are encouraged to inspect the names used in Figure 23. There, node names are fairly specific. For example, *node.bakedGood.prepare.chickenPotPie* rather than *node.bakedGood.prepare*. Definitions, on the other hand, are generic. For example, *def.bakedGood.prepare* instead of *def.bakedGood.prepare.applePie*.

This naming scheme allows for a clever use of the rules for population passes. Earlier, you saw a population pass that optimized chicken pot pie orders by matching *node* names.
Here, a new pass is created: *PopulatePiePass*:

```cpp
class PopulatePiePass : public omni::graph::exec::unstable::Implements<IPopulatePass>
{
public:
    static omni::core::ObjectPtr<PopulatePiePass> create(omni::core::ObjectParam<IGraphBuilder> builder) noexcept
    {
        return omni::core::steal(new PopulatePiePass(builder.get()));
    }

protected:
    PopulatePiePass(IGraphBuilder*) noexcept
    {
    }

    void run_abi(IGraphBuilder* builder, INode* node) noexcept override
    {
        auto bakedGoodGetter = omni::graph::exec::unstable::cast<IPrivateBakedGoodGetter>(node->getDef());
        if (!bakedGoodGetter)
        {
            // either the node or def matches the name we're looking for, but the def doesn't implement our private
            // interface to access the baked good, so this isn't a def we can populate. bail.
            return; // LCOV_EXCL_LINE
        }

        const BakedGood& bakedGood = bakedGoodGetter->getBakedGood();

        // if the baked good ends with "Pie" attach a custom def that knows how to bake pies
        if (!omni::extras::endsWith(bakedGood.name, "Pie"))
        {
            // this baked good isn't a pie. do nothing.
            return; // LCOV_EXCL_LINE
        }

        builder->setNodeGraphDef(
            node,
            PieGraphDef::create(builder, bakedGood)
        );
    }
};
```

`PopulatePiePass`’s purpose is to better define the process of baking pies. This is achieved by registering `PopulatePiePass` with a matching name of “def.bakedGood.prepare”. Any graph definition matching “def.bakedGood.prepare” will be given to `PopulatePiePass`.

Above, `PopulatePiePass`’s `run_abi()` method first checks if the currently attached definition can provide the associated `BakedGood`. If so, the name of the baked good is checked. If the name of the baked good ends with “Pie”, the node’s definition is replaced with a new `PieGraphDef`, which is a graph definition that better describes the preparation of pies.
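The name check at the heart of the pass is a plain suffix test. A framework-free equivalent of `omni::extras::endsWith` (a stand-in sketch for illustration, not the library's actual implementation) looks like this:

```cpp
#include <cassert>
#include <string>

// returns true when `value` ends with `suffix` -- the same test run_abi() uses to
// decide whether a baked good is a pie
inline bool endsWith(const std::string& value, const std::string& suffix)
{
    return value.size() >= suffix.size() &&
           value.compare(value.size() - suffix.size(), suffix.size(), suffix) == 0;
}
```

Matching on a suffix of the *data* (the baked good's name) rather than on node names is what lets one pass cover apple, peach, and chicken pot pies alike.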
The resulting bakery graph definition is as follows:

```mermaid
flowchart LR
00000261A3F91320(( ))
00000261A3F91320-->00000261A3F91140
00000261A3F91320-->00000261A3F90560
00000261A3F91320-->00000261A3F91000
00000261A3F91320-->00000261A3F909C0
00000261A3F91140(node.bakery.The Pie Hut.preHeatOven)
00000261A3F91140-.->00000261A3DC37F0
00000261A3F91140-->00000261A3F907E0
00000261A3F91140-->00000261A3F911E0
00000261A3F91140-->00000261A3F90B00
00000261A3F90560(node.bakedGood.prepare.applePie)
00000261A3F90560-.->00000261A3D76E60
00000261A3F90560-->00000261A3F907E0
00000261A3F91000(node.bakedGood.prepare.chickenPotPie)
00000261A3F91000-.->00000261A3D767A0
00000261A3F91000-->00000261A3F911E0
00000261A3F909C0(node.bakedGood.prepare.peachPie)
00000261A3F909C0-.->00000261A3D76B60
00000261A3F909C0-->00000261A3F90B00
00000261A3F907E0(node.bakedGood.bake.applePie)
00000261A3F907E0-.->00000261A3CA6B60
00000261A3F907E0-->00000261A3F90CE0
00000261A3F907E0-->00000261A3F913C0
00000261A3F911E0(node.bakedGood.bake.chickenPotPie)
00000261A3F911E0-.->00000261A3CA5330
00000261A3F911E0-->00000261A3F90CE0
00000261A3F911E0-->00000261A3F913C0
00000261A3F90B00(node.bakedGood.bake.peachPie)
00000261A3F90B00-.->00000261A3CA5600
00000261A3F90B00-->00000261A3F90600
00000261A3F90B00-->00000261A3F913C0
00000261A3F90CE0(node.bakedGood.ship.Tracy)
00000261A3F90CE0-.->00000261A3DC3980
00000261A3F913C0(node.bakery.The Pie Hut.turnOffOven)
00000261A3F913C0-.->00000261A3DC3930
00000261A3F90600(node.bakedGood.ship.Kai)
00000261A3F90600-.->00000261A3DC3D40
00000261A3CA5330{{def.bakedGood.bake}}
00000261A3CA5600{{def.bakedGood.bake}}
00000261A3CA6B60{{def.bakedGood.bake}}
subgraph 00000261A3D767A0[def.bakedGood.pie]
direction LR
style 00000261A3D767A0 fill:#FAFAFA,stroke:#777777
00000261A3FB0A80(( ))
00000261A3FB0A80-->00000261A3FB04E0
00000261A3FB0A80-->00000261A3FB0580
00000261A3FB0A80-->00000261A3FB0C60
00000261A3FB04E0(node.pie.chop.carrots)
00000261A3FB04E0-.->00000261A3CA6A40
00000261A3FB04E0-->00000261A3FB0EE0
00000261A3FB0580(node.pie.makeCrust.chickenPotPie)
00000261A3FB0580-.->00000261A3CA7100
00000261A3FB0580-->00000261A3FB0EE0
00000261A3FB0C60(node.pie.cook.chicken)
00000261A3FB0C60-.->00000261A1508650
00000261A3FB0C60-->00000261A3FB0EE0
00000261A3FB0EE0(node.pie.assemble.chickenPotPie)
00000261A3FB0EE0-.->00000261A0496AA0
end
00000261A0496AA0{{def.bakedGood.assemble}}
00000261A1508650{{def.bakedGood.cook}}
00000261A3CA6A40{{def.bakedGood.chop}}
00000261A3CA7100{{def.bakedGood.makeCrust}}
subgraph 00000261A3D76B60[def.bakedGood.pie]
direction LR
style 00000261A3D76B60 fill:#FAFAFA,stroke:#777777
00000261A3FB0300(( ))
00000261A3FB0300-->00000261A3FAFB80
00000261A3FB0300-->00000261A3FB0620
00000261A3FAFB80(node.pie.chop.peachPie)
00000261A3FAFB80-.->00000261A3CA52A0
00000261A3FAFB80-->00000261A3FB09E0
00000261A3FB0620(node.pie.makeCrust.peachPie)
00000261A3FB0620-.->00000261A3CA57B0
00000261A3FB0620-->00000261A3FB09E0
00000261A3FB09E0(node.pie.assemble.peachPie)
00000261A3FB09E0-.->00000261A3FB40D0
end
00000261A3CA52A0{{def.bakedGood.chop}}
00000261A3CA57B0{{def.bakedGood.makeCrust}}
00000261A3FB40D0{{def.bakedGood.assemble}}
subgraph 00000261A3D76E60[def.bakedGood.pie]
direction LR
style 00000261A3D76E60 fill:#FAFAFA,stroke:#777777
00000261A3FAF9A0(( ))
00000261A3FAF9A0-->00000261A3FAF4A0
00000261A3FAF9A0-->00000261A3FAFAE0
00000261A3FAF4A0(node.pie.chop.applePie)
00000261A3FAF4A0-.->00000261A3CA5F00
00000261A3FAF4A0-->00000261A3FAF2C0
00000261A3FAFAE0(node.pie.makeCrust.applePie)
00000261A3FAFAE0-.->00000261A3CA60B0
00000261A3FAFAE0-->00000261A3FAF2C0
00000261A3FAF2C0(node.pie.assemble.applePie)
00000261A3FAF2C0-.->00000261A3CA6260
end
00000261A3CA5F00{{def.bakedGood.chop}}
00000261A3CA60B0{{def.bakedGood.makeCrust}}
00000261A3CA6260{{def.bakedGood.assemble}}
00000261A3DC37F0{{def.oven.preHeat}}
00000261A3DC3930{{def.oven.turnOff}}
00000261A3DC3980{{def.order.ship}}
00000261A3DC3D40{{def.order.ship}}
```

**Figure 24:** The execution graph after the `PopulatePiePass` runs.

It is important to note that EF’s default `pass pipeline` only matches population passes with either node names or graph definition names. Opaque definition names are not matched. The example above shows that knowing the rules of the application’s `pass pipeline` can help EF developers name their nodes and definitions in such a way as to make more effective use of passes.

## Conclusion

The bakery example is trivial in nature, but shows several of the patterns and concepts found in the wild when using the Execution Framework. An inspection of OmniGraph’s use of EF will reveal the use of all of the patterns outlined above.

A full source listing for the example can be found at *source/extensions/omni.graph.exec/tests.cpp/TestBakeryDocs.cpp*.
examples-all.md
# All Data Type Examples This file contains example usage for all of the AutoNode data types in one place for easy reference and searching. For a view of the examples that is separated into more digestible portions see AutoNode Examples. ## Contents - [bool](#bool) - [bool[]](#id1) - [double](#double) - [double[]](#id2) - [float](#float) - [float[]](#id3) - [half](#half) - [half[]](#id4) - [int](#int) - [int[]](#id5) - [int64](#int64) - [int64[]](#id6) - [string](#string) - [token](#token) - [token[]](#id7) - [uchar](#uchar) - [uchar[]](#id8) - [uint](#uint) - [uint[]](#id9) - [uint64](#uint64) - [uint64[]](#id10) - [og.create_node_type(ui_name=str)](#og-create-node-type-ui-name-str) - @og.create_node_type(unique_name=str) - @og.create_node_type(add_execution_pins) - @og.create_node_type(metadata=dict(str,any)) - Multiple Simple Outputs - Multiple Tuple Outputs - double[2] - double[2][] - double[3] - double[3][] - double[4] - double[4][] - float[2] - float[2][] - float[3] - float[3][] - float[4] - float[4][] - half[2] - half[2][] - half[3] - half[3][] - half[4] - half[4][] - int[2] - int[2][] - int[3] - int[3][] - int[4] - int[4][] - colord[3] - colord[3][] - colorf[3] - colorf[3][] - colorh[3] - colorh[3][] - colord[4] - colord[4][] - colorf[4] - colorf[4][] - colorh[4] - colorh[4][] - frame[4] - frame[4][] - matrixd[2] - matrixd[2][] - **matrixd[3]** - **matrixd[3][]** - **matrixd[4]** - **matrixd[4][]** - **normald[3]** - **normald[3][]** - **normalf[3]** - **normalf[3][]** - **normalh[3]** - **normalh[3][]** - **pointd[3]** - **pointd[3][]** - **pointf[3]** - **pointf[3][]** - **pointh[3]** - **pointh[3][]** - **quatd[4]** - **quatd[4][]** - **quatf[4]** - **quatf[4][]** - **quath[4]** - **quath[4][]** - **texcoordd[2]** - **texcoordd[2][]** - **texcoordf[2]** - **texcoordf[2][]** - **texcoordh[2]** - **texcoordh[2][]** - **texcoordd[3]** - **texcoordd[3][]** - **texcoordf[3]** - **texcoordf[3][]** - **texcoordh[3]** - **texcoordh[3][]** - **timecode** - 
**timecode[]** - **vectord[3]** - **vectord[3][]** - **vectorf[3]** - **vectorf[3][]** - **vectorh[3]** - **vectorh[3][]** - **bundle** - **execution** - **objectId** - **objectId[]** - **target** ## bool Takes in two boolean values and outputs the logical AND of them. The types of both inputs and the return value are Python booleans. Note that the return type name is the Warp-compatible "boolean", which is just a synonym for "bool". ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_bool(first_value: ot.bool, second_value: ot.bool) -> ot.boolean: """Takes in two boolean values and outputs the logical AND of them. The types of both inputs and the return value are Python booleans. Note that the return type name is the Warp-compatible "boolean", which is just a synonym for "bool". """ return first_value and second_value ``` ## bool[] Takes in two arrays of boolean attributes and returns an array with the logical AND of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=bool) where "N" is the size of the array determined at runtime. ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_boolarray(first_value: ot.boolarray, second_value: ot.boolarray) -> ot.boolarray: """Takes in two arrays of boolean attributes and returns an array with the logical AND of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=bool) where "N" is the size of the array determined at runtime. """ return first_value & second_value ``` ## double Takes in two double precision values and outputs the sum of them. The types of both inputs and the return value are Python floats as Python does not distinguish between different precision levels. When put into Fabric and USD the values are stored as double precision values. 
Note that the return type is the Warp-compatible "float64" which is a synonym for "double".

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_double(first_value: ot.double, second_value: ot.double) -> ot.float64:
    """Takes in two double precision values and outputs the sum of them.
    The types of both inputs and the return value are Python floats as Python does not distinguish
    between different precision levels. When put into Fabric and USD the values are stored as double
    precision values.
    Note that the return type is the Warp-compatible "float64" which is a synonym for "double".
    """
    return first_value + second_value
```

## double[]

Takes in two arrays of double attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime.

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_doublearray(first_value: ot.doublearray, second_value: ot.doublearray) -> ot.doublearray:
    """Takes in two arrays of double attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.float64)
    where "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## float

Takes in two single-precision floating point values and outputs the sum of them. The types of both inputs and the return value are Python floats as Python does not distinguish between different precision levels. When put into Fabric and USD the values are stored as single-precision floating point values. Note that the return type is the Warp-compatible "float32" which is a synonym for "float".
```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_float(first_value: ot.float, second_value: ot.float) -> ot.float32: """Takes in two single-precision floating point values and outputs the sum of them. The types of both inputs and the return value are Python floats as Python does not distinguish between different precision levels. When put into Fabric and USD the values are stored as single-precision floating point values. Note that the return type is the Warp-compatible "float32" which is a synonym for "float". """ return first_value + second_value ``` ## float[] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_floatarray(first_value: ot.floatarray, second_value: ot.floatarray) -> ot.floatarray: """Takes in two arrays of float attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ## half ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_half(first_value: ot.half, second_value: ot.half) -> ot.float16: """Takes in two half-precision floating point values and outputs the sum of them. The types of both inputs and the return value are Python floats as Python does not distinguish between different precision levels. When put into Fabric and USD the values are stored as half precision floating point values. Note that the return type is the Warp-compatible "float16" which is a synonym for "half". 
""" return first_value + second_value ``` ## half[] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_halfarray(first_value: ot.halfarray, second_value: ot.halfarray) -> ot.halfarray: """Takes in two arrays of half attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ## int ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_int(first_value: ot.int, second_value: ot.int) -> ot.int32: """Takes in two 32-bit precision integer values and outputs the sum of them. The types of both inputs and the return value are Python ints as Python does not distinguish between different precision levels. When put into Fabric and USD the values are stored as 32-bit precision integer values. Note that the return type is the Warp-compatible "int32" which is a synonym for "int". """ return first_value + second_value ``` ## int[] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_intarray(first_value: ot.intarray, second_value: ot.intarray) -> ot.intarray: """Takes in two arrays of integer attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.int32) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ## int64 ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_int64(first_value: ot.int64, second_value: ot.int64) -> ot.int64: """Takes in two 64-bit precision integer values and outputs the sum of them. 
    The types of both inputs and the return value are Python ints as Python does not distinguish between
    different precision levels. When put into Fabric and USD the values are stored as 64-bit precision
    integer values.
    Note that the return type "int64" is already its own Warp-compatible name.
    """
    return first_value + second_value
```

## int64[]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int64array(first_value: ot.int64array, second_value: ot.int64array) -> ot.int64array:
    """Takes in two arrays of 64-bit integer attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.int64)
    where "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## string

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_string(first_value: ot.string, second_value: ot.string) -> ot.string:
    """Takes in two string values and outputs the concatenated string.
    The types of both inputs and the return value are Python str. When put into Fabric the values are
    stored as uchar arrays with a length value. USD stores it as a native string type.
""" return first_value + second_value ``` # Autonode token ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_token(first_value: ot.token, second_value: ot.token) -> ot.token: """Takes in two tokenized strings and outputs the string resulting from concatenating them together. The types of both inputs and the return value are Python strs as Python does not have the concept of a unique tokenized string. When put into Fabric and USD the values are stored as a single 64-bit unsigned integer that is a token. """ return first_value + second_value ``` # Autonode token[] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_tokenarray(first_value: ot.tokenarray, second_value: ot.tokenarray) -> ot.tokenarray: """Takes in two arrays of tokens and returns an array containing the concatenations of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype="<U") where "N" is the size of the array determined at runtime. """ return np.array([x + y for x, y in zip(first_value, second_value)]) ``` # Autonode uchar ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_uchar(first_value: ot.uchar, second_value: ot.uchar) -> ot.uchar: """Takes in two unsigned character values and outputs the concatenated string. The types of both inputs and the return value are Python str. When put into Fabric the values are stored as uchar arrays with a length value. USD stores it as a native string type. """ return first_value + second_value ``` # uchar ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_uchar(first_value: ot.uchar, second_value: ot.uchar) -> ot.uint8: """Takes in two 8-bit precision unsigned integer values and outputs the sum of them. 
    The types of both inputs and the return value are Python ints as Python does not distinguish between
    different precision levels or signs. When put into Fabric and USD the values are stored as 8-bit
    precision unsigned integer values.
    Note that the return type is the Warp-compatible "uint8" which is a synonym for "uchar".
    """
    return first_value + second_value
```

## uchar[]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_uchararray(first_value: ot.uchararray, second_value: ot.uchararray) -> ot.uchararray:
    """Takes in two arrays of 8-bit unsigned integer attributes and returns an array containing the
    sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.uint8)
    where "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## uint

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_uint(first_value: ot.uint, second_value: ot.uint) -> ot.uint:
    """Takes in two 32-bit precision unsigned integer values and outputs the sum of them.
    The types of both inputs and the return value are Python ints as Python does not distinguish between
    different precision levels or signs. When put into Fabric and USD the values are stored as 32-bit
    precision unsigned integer values.
    """
    return first_value + second_value
```

## uint[]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_uintarray(first_value: ot.uintarray, second_value: ot.uintarray) -> ot.uintarray:
    """Takes in two arrays of 32-bit unsigned integer attributes and returns an array containing the
    sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.uint32)
    where "N" is the size of the array determined at runtime.
""" return first_value + second_value ``` # uint64 ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_uint64(first_value: ot.uint64, second_value: ot.uint64) -> ot.uint64: """Takes in two 64-bit precision unsigned integer values and outputs the sum of them. The types of both inputs and the return value are Python ints as Python does not distinguish between different precision levels or signs. When put into Fabric and USD the values are stored as 64-bit precision unsigned integer values. """ return first_value + second_value ``` # uint64[] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_uint64array(first_value: ot.uint64array, second_value: ot.uint64array) -> ot.uint64array: """Takes in two arrays of 64-bit unsigned integer attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.uint64) where "N" is the size of the array determined at runtime. """ return first_value + second_value ```python @og.create_node_type def autonode_uint64array(first_value: ot.uint64array, second_value: ot.uint64array) -> ot.uint64array: """Takes in two arrays of 8-bit unsigned integer attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.uint64) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ## @og.create_node_type(ui_name=str) ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type(ui_name="Fluffy Bunny") def autonode_decoration_ui_name() -> ot.string: """This node type has no inputs and returns the UI name of its node type as output. It demonstrates how the optional ui_name argument can be used on the decorator to modify the name of the node type as it will appear to the user. 
""" # We know the name of the node type by construction node_type = og.get_node_type("omni.graph.autonode_decoration_ui_name") # Get the metadata containing the UI name - will always return "Fluffy Bunny" return node_type.get_metadata(og.MetadataKeys.UI_NAME) ``` ## @og.create_node_type(unique_name=str) ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type(unique_name="omni.graph.autonode_unique_name") def autonode_decoration_unique_name() -> ot.string: """This node type has no inputs and returns the unique name of its node type as output. It demonstrates how the optional unique_name argument can be used on the decorator to modify the name of the node type as it is used for registration and identification. """ # Look up the node type name using the supplied unique name rather than the one that would have been # automatically generated (omni.graph.autonode_decoration_unique_name) node_type = og.get_node_type("omni.graph.autonode_unique_name") return node_type.get_node_type() if node_type.is_valid() else "" ``` ## @og.create_node_type(add_execution_pins) ```python import inspect import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type(add_execution_pins=True) def autonode_decoration_add_execution_pins() -> ot.int: """This node type has no inputs and returns the number of attributes it has of type "execution". It demonstrates how the optional add_execution_pins argument can be used on the decorator to automatically include both an input and an output execution pin so that the node type can be easily included in the Action Graph. 
""" frame = inspect.currentframe().f_back node = frame.f_locals.get("node") # This will return 2, counting the automatically added input and output execution attributes return sum(1 for attr in node.get_attributes() if attr.get_resolved_type().role == og.AttributeRole.EXECUTION) ``` ## @og.create_node_type(metadata=dict(str,any)) ```python import omni.graph.core as og import omni.graph.core.types as ot ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type(metadata={"Emperor": "Palpatine"}) def autonode_decoration_metadata() -> ot.string: """This node type has no inputs and returns a string consisting of the value of the metadata whose name was specified in the decorator "metadata" argument. It demonstrates how the optional metadata argument can be used on the decorator to automatically add metadata to the node type definition. """ # We know the name of the node type by construction node_type = og.get_node_type("omni.graph.autonode_decoration_metadata") # Return the metadata with the custom name we specified - will always return "Palpatine" return node_type.get_metadata("Emperor") ``` ## Multiple Simple Outputs ```python import statistics as st import numpy as np import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_multi_simple(values: ot.floatarray) -> tuple[ot.float, ot.float, ot.float]: """Takes in a list of floating point values and returns three outputs that are the mean, median, and mode of the values in the list. The outputs will be named "out_0", "out_1", and "out_2". """ return (values.mean(), np.median(values), st.mode(values)) ``` ## Multiple Tuple Outputs ```python import numpy as np import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_multi_tuple(original: ot.matrix2d) -> tuple[ot.matrix2d, ot.matrix2d]: """Takes in a 2x2 matrix and returns two outputs that are the inverse and transpose of the matrix. 
    Reports an error if the matrix is not invertible.
    Note that even though the data types themselves are tuples the return values will be correctly
    interpreted as being separate output attributes with each of the outputs itself being a tuple value.
    The outputs will be named "out_1" and "out_2".
    """
    try:
        return (original.transpose(), np.linalg.inv(original))
    except np.linalg.LinAlgError as error:
        raise og.OmniGraphError(f"Could not invert matrix {original}") from error
```

## double[2]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_double2(first_value: ot.double2, second_value: ot.double2) -> ot.double2:
    """Takes in two double[2] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.float64).
    When put into Fabric and USD the values are stored as two double-precision floating point values.
    """
    return first_value + second_value
```

## double[2][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_double2array(first_value: ot.double2array, second_value: ot.double2array) -> ot.double2array:
    """Takes in two arrays of double2 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(2,N,), dtype=numpy.float64)
    where "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## double[3]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_double3(first_value: ot.double3, second_value: ot.double3) -> ot.double3:
    """Takes in two double[3] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float64).
    When put into Fabric and USD the values are stored as three double-precision floating point values.
""" return first_value + second_value ``` ### double[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_double3array(first_value: ot.double3array, second_value: ot.double3array) -> ot.double3array: """Takes in two arrays of double3 attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ### double[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_double4(first_value: ot.double4, second_value: ot.double4) -> ot.double4: """Takes in two double[4] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float64). When put into Fabric and USD the values are stored as four double-precision floating point values. """ return first_value + second_value ``` ### double[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_double4array(first_value: ot.double4array, second_value: ot.double4array) -> ot.double4array: """Takes in two arrays of double4 attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ### float[2] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_float2(first_value: ot.float2, second_value: ot.float2) -> ot.float2: """ (No description provided) """ return first_value + second_value ``` ```python """Takes in two float[2] values and outputs the sum of them. 
The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.float32). When put into Fabric and USD the values are stored as two single-precision floating point values. """ return first_value + second_value ``` ## float[2][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_float2array(first_value: ot.float2array, second_value: ot.float2array) -> ot.float2array: """Takes in two arrays of float2 attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(2,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ## float[3] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_float3(first_value: ot.float3, second_value: ot.float3) -> ot.float3: """Takes in two float[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric and USD the values are stored as three single-precision floating point values. """ return first_value + second_value ``` ## float[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_float3array(first_value: ot.float3array, second_value: ot.float3array) -> ot.float3array: """Takes in two arrays of float3 attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ## float[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_float4(first_value: ot.float4, second_value: ot.float4) -> ot.float4: """Takes in two float[4] values and outputs the sum of them. 
The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float32). When put into Fabric and USD the values are stored as four single-precision floating point values. """ return first_value + second_value ``` ## float[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_float4array(first_value: ot.float4array, second_value: ot.float4array) -> ot.float4array: """Takes in two arrays of float4 attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ## half[2] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_half2(first_value: ot.half2, second_value: ot.half2) -> ot.half2: """Takes in two half[2] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.float16). When put into Fabric and USD the values are stored as two 16-bit floating point values. """ return first_value + second_value ``` ## half[2][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_half2array(first_value: ot.half2array, second_value: ot.half2array) -> ot.half2array: """Takes in two arrays of half2 attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(2,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ## half[3] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_half3(first_value: ot.half3, second_value: ot.half3) -> ot.half3: """Takes in two half[3] values and outputs the sum of them. 
    The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float16).
    When put into Fabric and USD the values are stored as three 16-bit floating point values.
    """
    return first_value + second_value
```

## half[3][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_half3array(first_value: ot.half3array, second_value: ot.half3array) -> ot.half3array:
    """Takes in two arrays of half3 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16)
    where "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## half[4]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_half4(first_value: ot.half4, second_value: ot.half4) -> ot.half4:
    """Takes in two half[4] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float16).
    When put into Fabric and USD the values are stored as four 16-bit floating point values.
    """
    return first_value + second_value
```

## half[4][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_half4array(first_value: ot.half4array, second_value: ot.half4array) -> ot.half4array:
    """Takes in two arrays of half4 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float16)
    where "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## int[2]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int2(first_value: ot.int2, second_value: ot.int2) -> ot.int2:
    """Takes in two int[2] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.int32).
    When put into Fabric and USD the values are stored as two 32-bit integer values.
    """
    return first_value + second_value
```

## int[2][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int2array(first_value: ot.int2array, second_value: ot.int2array) -> ot.int2array:
    """Takes in two arrays of int2 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(2,N,), dtype=numpy.int32)
    where "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## int[3]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int3(first_value: ot.int3, second_value: ot.int3) -> ot.int3:
    """Takes in two int[3] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.int32).
    When put into Fabric and USD the values are stored as three 32-bit integer values.
    """
    return first_value + second_value
```

## int[3][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int3array(first_value: ot.int3array, second_value: ot.int3array) -> ot.int3array:
    """Takes in two arrays of int3 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.int32)
    where "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## int[4]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int4(first_value: ot.int4, second_value: ot.int4) -> ot.int4:
    """Takes in two int[4] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.int32).
    When put into Fabric and USD the values are stored as four 32-bit integer values.
    """
    return first_value + second_value
```

## int[4][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int4array(first_value: ot.int4array, second_value: ot.int4array) -> ot.int4array:
    """Takes in two arrays of int4 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.int32)
    where "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## colord[3]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_color3d(first_value: ot.color3d, second_value: ot.color3d) -> ot.color3d:
    """Takes in two colord[3] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float64).
    When put into Fabric the values are stored as 3 double-precision values. The color role is applied
    to USD and OmniGraph types as an aid to interpreting the values.
""" return first_value + second_value ``` ## colord[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color3darray(first_value: ot.color3darray, second_value: ot.color3darray) -> ot.color3darray: """Takes in two arrays of color3d attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colorf[3] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color3f(first_value: ot.color3f, second_value: ot.color3f) -> ot.color3f: """Takes in two colorf[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric the values are stored as 3 single-precision values. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colorf[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color3farray(first_value: ot.color3farray, second_value: ot.color3farray) -> ot.color3farray: """Takes in two arrays of color3f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ## colorh[3] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color3h(first_value: ot.color3h, second_value: ot.color3h) -> ot.color3h: """Takes in two colorh[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float16). When put into Fabric the values are stored as 3 half-precision values. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colorh[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color3harray(first_value: ot.color3harray, second_value: ot.color3harray) -> ot.color3harray: """Takes in two arrays of color3h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colord[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color4d(first_value: ot.color4d, second_value: ot.color4d) -> ot.color4d: """Takes in two color4d values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float64). When put into Fabric the values are stored as 4 double-precision values. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ## colord[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color4darray(first_value: ot.color4darray, second_value: ot.color4darray) -> ot.color4darray: """Takes in two arrays of color4d attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colorf[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color4f(first_value: ot.color4f, second_value: ot.color4f) -> ot.color4f: """Takes in two color4f values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float32). When put into Fabric the values are stored as 4 single-precision values. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colorf[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color4farray(first_value: ot.color4farray, second_value: ot.color4farray) -> ot.color4farray: """Takes in two arrays of color4f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ## colorh[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color4h(first_value: ot.color4h, second_value: ot.color4h) -> ot.color4h: """Takes in two color4h values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float16). When put into Fabric the values are stored as 4 half-precision values. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colorh[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color4harray(first_value: ot.color4harray, second_value: ot.color4harray) -> ot.color4harray: """Takes in two arrays of color4h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## frame[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_frame4d(first_value: ot.frame4d, second_value: ot.frame4d) -> ot.frame4d: """Takes in two frame4d values and outputs the sum of them. The types of both inputs and the return value are numpy.ndarray(shape=(4,4), dtype=numpy.float64). When put into Fabric the values are stored as a set of 16 double-precision values. USD uses the special frame4d type. """ return first_value + second_value ``` ## frame[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_frame4darray(first_value: ot.frame4darray, second_value: ot.frame4darray) -> ot.frame4darray: """Takes in two frame4darray values and outputs the sum of them. 
    The types of both inputs and the return value are numpy.ndarray(shape=(4,4,N), dtype=numpy.float64)
    where "N" is the size of the array determined at runtime.
    When put into Fabric the values are stored as an array of sets of 16 double-precision values.
    USD stores it as the native frame4d[] type.
    """
    return first_value + second_value
```

## matrixd[2]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_matrix2d(first_value: ot.matrix2d, second_value: ot.matrix2d) -> ot.matrix2d:
    """Takes in two matrix2d values and outputs the sum of them.
    The types of both inputs and the return value are numpy.ndarray(shape=(2,2), dtype=numpy.float64).
    When put into Fabric and USD the values are stored as a list of 4 double-precision values.
    """
    return first_value + second_value
```

## matrixd[2][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_matrix2darray(first_value: ot.matrix2darray, second_value: ot.matrix2darray) -> ot.matrix2darray:
    """Takes in two matrix2darray values and outputs the sum of them.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,2,2), dtype=numpy.float64)
    where "N" is the size of the array determined at runtime.
    When put into Fabric the values are stored as an array of sets of 4 double-precision values.
    USD stores it as the native matrix2d[] type.
    """
    return first_value + second_value
```

## matrixd[3]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_matrix3d(first_value: ot.matrix3d, second_value: ot.matrix3d) -> ot.matrix3d:
    """Takes in two matrix3d values and outputs the sum of them.
    The types of both inputs and the return value are numpy.ndarray(shape=(3,3), dtype=numpy.float64).
    When put into Fabric and USD the values are stored as a list of 9 double-precision values.
""" return first_value + second_value ``` ## matrixd[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_matrix3darray(first_value: ot.matrix3darray, second_value: ot.matrix3darray) -> ot.matrix3darray: """Takes in two matrix3darray values and outputs the sum of them. The types of both inputs and the return value are numpy.ndarray(shape=(3,3,N), dtype=numpy.float64) where "N" is the size of the array determined at runtime.. When put into Fabric the values are stored as an array of sets of 9 double-precision values. USD stores it as the native matrix3d[] type. """ return first_value + second_value ``` ## matrixd[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_matrix4d(first_value: ot.matrix4d, second_value: ot.matrix4d) -> ot.matrix4d: """Takes in two matrix4d values and outputs the sum of them. The types of both inputs and the return value are numpy.ndarray(shape=(4,4), dtype=numpy.float64). When put into Fabric and USD the values are stored as a list of 9 double-precision values. """ return first_value + second_value ``` ## matrixd[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_matrix4darray(first_value: ot.matrix4darray, second_value: ot.matrix4darray) -> ot.matrix4darray: """Takes in two matrix4darray values and outputs the sum of them. The types of both inputs and the return value are numpy.ndarray(shape=(4,4,N), dtype=numpy.float64) where "N" is the size of the array determined at runtime.. When put into Fabric the values are stored as an array of sets of 9 double-precision values. USD stores it as the native matrix4d[] type. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_matrix4darray(first_value: ot.matrix4darray, second_value: ot.matrix4darray) -> ot.matrix4darray: """Takes in two matrix4darray values and outputs the sum of them. The types of both inputs and the return value are numpy.ndarray(shape=(4,4,N), dtype=numpy.float64) where "N" is the size of the array determined at runtime.. When put into Fabric the values are stored as an array of sets of 16 double-precision values. USD stores it as the native matrix4d[] type. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_normal3d(first_value: ot.normal3d, second_value: ot.normal3d) -> ot.normal3d: """Takes in two normald[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float64). When put into Fabric the values are stored as 3 double-precision values. The normal role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_normal3darray(first_value: ot.normal3darray, second_value: ot.normal3darray) -> ot.normal3darray: """Takes in two arrays of normal3d attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The normal role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_normal3f(first_value: ot.normal3f, second_value: ot.normal3f) -> ot.normal3f: """Takes in two normalf[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric the values are stored as 3 single-precision values. The normal role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_normal3farray(first_value: ot.normal3farray, second_value: ot.normal3farray) -> ot.normal3farray: """Takes in two arrays of normal3f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The normal role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` # Python Code Snippets ## autonode_normal3h ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_normal3h(first_value: ot.normal3h, second_value: ot.normal3h) -> ot.normal3h: """Takes in two normalh[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float16). When put into Fabric the values are stored as 3 half-precision values. The normal role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ## autonode_normal3harray ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_normal3harray(first_value: ot.normal3harray, second_value: ot.normal3harray) -> ot.normal3harray: """Takes in two arrays of normal3h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The normal role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## autonode_point3d ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3d(first_value: ot.point3d, second_value: ot.point3d) -> ot.point3d: """Takes in two pointd[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float64). When put into Fabric the values are stored as 3 double-precision values. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## autonode_point3darray ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3darray(first_value: ot.point3darray, second_value: ot.point3darray) -> ot.point3darray: """Takes in two arrays of point3d attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ## autonode_point3f ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3f(first_value: ot.point3f, second_value: ot.point3f) -> ot.point3f: """Takes in two pointf[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric the values are stored as 3 single-precision values. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## autonode_point3farray ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3farray(first_value: ot.point3farray, second_value: ot.point3farray) -> ot.point3farray: """Takes in two arrays of point3f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3farray(first_value: ot.point3farray, second_value: ot.point3farray) -> ot.point3farray: """Takes in two arrays of point3f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3h(first_value: ot.point3h, second_value: ot.point3h) -> ot.point3h: """Takes in two pointh[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float16). When put into Fabric the values are stored as 3 half-precision values. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3harray(first_value: ot.point3harray, second_value: ot.point3harray) -> ot.point3harray: """Takes in two arrays of point3h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quatd(first_value: ot.quatd, second_value: ot.quatd) -> ot.quatd: """Takes in two quatd[4] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float64). When put into Fabric the values are stored as 4 double-precision values. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quatdarray(first_value: ot.quatdarray, second_value: ot.quatdarray) -> ot.quatdarray: """Takes in two arrays of quatd attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quatf(first_value: ot.quatf, second_value: ot.quatf) -> ot.quatf: """Takes in two quatf[4] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float32). When put into Fabric the values are stored as 4 single-precision values. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quatf(first_value: ot.quatf, second_value: ot.quatf) -> ot.quatf: """Takes in two quatf[4] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float32). When put into Fabric the values are stored as 4 single-precision values. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quatfarray(first_value: ot.quatfarray, second_value: ot.quatfarray) -> ot.quatfarray: """Takes in two arrays of quatf attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quath(first_value: ot.quath, second_value: ot.quath) -> ot.quath: """Takes in two quath[4] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float16). When put into Fabric the values are stored as 4 half-precision values. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quatharray(first_value: ot.quatharray, second_value: ot.quatharray) -> ot.quatharray: """Takes in two arrays of quath attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord2d(first_value: ot.texcoord2d, second_value: ot.texcoord2d) -> ot.texcoord2d: """Takes in two texcoordd[2] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.float64). When put into Fabric the values are stored as 2 double-precision values. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord2darray(first_value: ot.texcoord2darray, second_value: ot.texcoord2darray) -> ot.texcoord2darray: """Takes in two arrays of texcoordd attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(2,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord2darray( first_value: ot.texcoord2darray, second_value: ot.texcoord2darray ) -> ot.texcoord2darray: """Takes in two arrays of texcoord2d attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ### texcoordf[2] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord2f(first_value: ot.texcoord2f, second_value: ot.texcoord2f) -> ot.texcoord2f: """Takes in two texcoordf[2] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.float32). When put into Fabric the values are stored as 2 single-precision values. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### texcoordf[2][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord2farray( first_value: ot.texcoord2farray, second_value: ot.texcoord2farray ) -> ot.texcoord2farray: """Takes in two arrays of texcoord2f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### texcoordh[2] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord2h(first_value: ot.texcoord2h, second_value: ot.texcoord2h) -> ot.texcoord2h: """Takes in two texcoordh[2] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.float16). When put into Fabric the values are stored as 2 half-precision values. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ### texcoordh[2][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord2harray( first_value: ot.texcoord2harray, second_value: ot.texcoord2harray ) -> ot.texcoord2harray: """Takes in two arrays of texcoord2h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### texcoordd[3] ```python import omni.graph.core as og ``` ```python import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3d(first_value: ot.texcoord3d, second_value: ot.texcoord3d) -> ot.texcoord3d: """Takes in two texcoordd[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float64). When put into Fabric the values are stored as 3 double-precision values. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3darray(first_value: ot.texcoord3darray, second_value: ot.texcoord3darray) -> ot.texcoord3darray: """Takes in two arrays of texcoord3d attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3f(first_value: ot.texcoord3f, second_value: ot.texcoord3f) -> ot.texcoord3f: """Takes in two texcoordf[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric the values are stored as 3 single-precision values. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3farray(first_value: ot.texcoord3farray, second_value: ot.texcoord3farray) -> ot.texcoord3farray: """Takes in two arrays of texcoord3f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3h(first_value: ot.texcoord3h, second_value: ot.texcoord3h) -> ot.texcoord3h: """Takes in two texcoordh[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float16). When put into Fabric the values are stored as 3 half-precision values. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3harray(first_value: ot.texcoord3harray, second_value: ot.texcoord3harray) -> ot.texcoord3harray: """Takes in two arrays of texcoord3h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3harray( first_value: ot.texcoord3harray, second_value: ot.texcoord3harray ) -> ot.texcoord3harray: """Takes in two arrays of texcoord3h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### timecode ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_timecode(first_value: ot.timecode, second_value: ot.timecode) -> ot.timecode: """Takes in two timecodes outputs the sum of them. The types of both inputs and the return value are Python floats with the full precision required in order to represent the range of legal timecodes. When put into Fabric and USD the values are stored as a double-precision floating point value. 
""" return first_value + second_value ``` ### timecode[] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_timecodearray(first_value: ot.timecodearray, second_value: ot.timecodearray) -> ot.timecodearray: """Takes in two arrays of timecodes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ### vectord[3] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_vector3d(first_value: ot.vector3d, second_value: ot.vector3d) -> ot.vector3d: """Takes in two vectord[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float64). When put into Fabric the values are stored as 3 double-precision values. The vector role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### vectord[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_vector3darray(first_value: ot.vector3darray, second_value: ot.vector3darray) -> ot.vector3darray: """Takes in two arrays of vector3d attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The vector role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ### vectorf[3] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_vector3d(first_value: ot.vector3d, second_value: ot.vector3d) -> ot.vector3d: """Takes in two vectorf[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric the values are stored as 3 single-precision values. The vector role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_vector3f(first_value: ot.vector3f, second_value: ot.vector3f) -> ot.vector3f: """Takes in two vectorf[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric the values are stored as 3 single-precision values. The vector role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_vector3farray(first_value: ot.vector3farray, second_value: ot.vector3farray) -> ot.vector3farray: """Takes in two arrays of vector3f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The vector role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_vector3h(first_value: ot.vector3h, second_value: ot.vector3h) -> ot.vector3h: """Takes in two vectorh[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float16). When put into Fabric the values are stored as 3 half-precision values. The vector role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_vector3harray(first_value: ot.vector3harray, second_value: ot.vector3harray) -> ot.vector3harray: """Takes in two arrays of vector3h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The vector role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import inspect import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_bundle(bundle: ot.bundle, added: ot.int) -> ot.bundle: """Takes in a bundle value and outputs a bundle containing everything in the input bundle plus a count of "added" extra integer members named "added_0", "added_1", etc. Use the special value "added = 0" to indicate that the bundle should be cleared. The types of both inputs and the return value are og.BundleContents. When put into Fabric the bundle is stored as a data bucket and in USD it is represented as a target or reference to a prim when connected. Note how, since AutoNode definitions do not have direct access to the node, the inspect module must be used to get at it in order to construct an output bundle. 
""" frame = inspect.currentframe().f_back node = frame.f_locals.get("node") ``` ```python context = frame.f_locals.get("context") result = og.BundleContents(context, node, "outputs_out_0", read_only=False, gpu_by_default=False) result.clear() if bundle.valid: result.bundle = bundle if added > 0: first_index = result.size for index in range(added): result.bundle.create_attribute(f"added_{index + first_index}", og.Type(og.BaseDataType.INT)) return result ``` ### execution ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_execution(first_trigger: ot.execution, second_trigger: ot.execution) -> ot.execution: """Takes two execution pins and triggers the output only when both of them are enabled. The types of both inputs and the return value are Python ints as Python does not distinguish between different precision levels. When put into Fabric and USD the values are stored as 32-bit precision integer values. """ if first_trigger == og.ExecutionAttributeState.ENABLED and second_trigger == og.ExecutionAttributeState.ENABLED: return og.ExecutionAttributeState.ENABLED return og.ExecutionAttributeState.DISABLED ``` ### objectId ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_objectid(first_value: ot.objectid, second_value: ot.objectid) -> ot.objectid: """Takes in two objectId values and outputs the larger of them. The types of both inputs and the return value are Python ints as Python does not distinguish between different precision levels or signs. When put into Fabric and USD the values are stored as 64-bit unsigned integer values. 
""" return first_value if first_value > second_value else second_value ``` ### objectId[] ```python import numpy as np import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_objectidarray(first_value: ot.objectidarray, second_value: ot.objectidarray) -> ot.objectidarray: """Takes in two arrays of object IDs and returns an array containing the largest of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.uint64) where "N" is the size of the array determined at runtime. """ return np.maximum(first_value, second_value) ``` ### target ```python import omni.graph.core as og import omni.graph.core.types as ot ``` ```python from usdrt import Sdf @og.create_node_type def autonode_target(target_values: ot.target) -> ot.target: """Takes in target values and outputs the targets resulting from appending "TestChild" to the target. The types of both inputs and the return value are list[usdrt.Sdf.Path]. Unlike most other array types this is represented as a list rather than a numpy array since the value type is not one supported by numpy. When put into Fabric the value is stored as an SdfPath token, while USD uses the native rel type. """ return [target.AppendPath("Child") for target in target_values] ``` --- </footer> </div> </div> </section> </div>
examples-decoration.md
# Examples Using Custom Decoration

This file contains example usage of AutoNode decoration using the extra decorator parameters. For access to the
other types of examples see AutoNode Examples.

## Contents

- [@og.create_node_type(ui_name=str)](#og-create-node-type-ui-name-str)
- [@og.create_node_type(unique_name=str)](#og-create-node-type-unique-name-str)
- [@og.create_node_type(add_execution_pins)](#og-create-node-type-add-execution-pins)
- [@og.create_node_type(metadata=dict(str,any))](#og-create-node-type-metadata-dict-str-any)

The `@og.create_node_type` decorator takes a number of optional arguments that help provide the extra
information to the node type that is normally part of the .ogn definition.

```python
def create_node_type(
    func: callable = None,
    *,
    unique_name: str = None,
    ui_name: str = None,
    add_execution_pins: bool = False,
    metadata: dict[str, str] = None,
) -> callable:
    """Decorator to transform a Python function into an OmniGraph node type definition.

    The decorator is configured to allow use with and without parameters. When used without parameters all of
    the default values for the parameters are assumed.

    If the function is called from the __main__ context, as it would be if it were executed from the script
    editor or from a file, then the decorator is assumed to be creating a short-lived node type definition and
    the default module name "__autonode__" is applied to indicate this. Any attempts to save a scene containing
    these short-term node types will be flagged as a warning.

    Examples:
        >>> import omni.graph.core as og
        >>> @og.create_node_type
        >>> def double_float(a: ogdt.Float) -> ogdt.Float:
        >>>     return a * 2.0
        >>>
        >>> @og.create_node_type(add_execution_pins=True)
        >>> def double_float(a: ogdt.Float) -> ogdt.Float:
        >>>     return a * 2.0

    Args:
        func: the function object being wrapped. Should be a pure python function object or any other callable
            which has an `__annotations__` property.
If "None" then the decorator was called using the parameterized form "@create_node_type(...)" instead of "@create_node_type" and the function will be inferred in other ways. """ ``` unique_name: Override the default unique name, which is the function name in the module namespace ui_name: Name that appears in the node type's menu and node display. add_execution_pins: Include both input and output execution pins so that this can be used as a trigger node type in an action graph. metadata: Dictionary of extra metadata to apply to the node type Returns: Decorated version of the function that will create the node type definition """ These examples show how the definition of the node type is affected by each of them. ## @og.create_node_type(ui_name=str) ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type(ui_name="Fluffy Bunny") def autonode_decoration_ui_name() -> ot.string: """This node type has no inputs and returns the UI name of its node type as output. It demonstrates how the optional ui_name argument can be used on the decorator to modify the name of the node type as it will appear to the user. """ # We know the name of the node type by construction node_type = og.get_node_type("omni.graph.autonode_decoration_ui_name") # Get the metadata containing the UI name - will always return "Fluffy Bunny" return node_type.get_metadata(og.MetadataKeys.UI_NAME) ``` ## @og.create_node_type(unique_name=str) ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type(unique_name="omni.graph.autonode_unique_name") def autonode_decoration_unique_name() -> ot.string: """This node type has no inputs and returns the unique name of its node type as output. It demonstrates how the optional unique_name argument can be used on the decorator to modify the name of the node type as it is used for registration and identification. 
""" # Look up the node type name using the supplied unique name rather than the one that would have been # automatically generated (omni.graph.autonode_decoration_unique_name) node_type = og.get_node_type("omni.graph.autonode_unique_name") return node_type.get_node_type() if node_type.is_valid() else "" ``` ## @og.create_node_type(add_execution_pins) ```python import inspect import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type(add_execution_pins=True) def autonode_decoration_add_execution_pins() -> ot.int: """This node type has no inputs and returns the number of attributes it has of type "execution". It demonstrates how the optional add_execution_pins argument can be used on the decorator to automatically include both an input and an output execution pin so that the node type can be easily included in the Action Graph. """ frame = inspect.currentframe().f_back node = frame.f_locals.get("node") # This will return 2, counting the automatically added input and output execution attributes return sum(1 for attr in node.get_attributes() if attr.get_resolved_type().role == og.AttributeRole.EXECUTION) ``` ## @og.create_node_type(metadata=dict(str,any)) ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type(metadata={"Emperor": "Palpatine"}) def autonode_decoration_metadata() -> ot.string: pass ``` """This node type has no inputs and returns a string consisting of the value of the metadata whose name was specified in the decorator "metadata" argument. It demonstrates how the optional metadata argument can be used on the decorator to automatically add metadata to the node type definition.""" # We know the name of the node type by construction node_type = og.get_node_type("omni.graph.autonode_decoration_metadata") # Return the metadata with the custom name we specified - will always return "Palpatine" return node_type.get_metadata("Emperor")
examples-multi-outputs.md
# Multiple-Output Examples

This file contains example usage for AutoNode functions that have more than one output. For access to the other
types of examples see AutoNode Examples.

## Contents

* [Multiple Simple Outputs](#multiple-simple-outputs)
* [Multiple Tuple Outputs](#multiple-tuple-outputs)

## Multiple Simple Outputs

```python
import statistics as st

import numpy as np

import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_multi_simple(values: ot.floatarray) -> tuple[ot.float, ot.float, ot.float]:
    """Takes in a list of floating point values and returns three outputs that are the mean, median, and mode
    of the values in the list. The outputs will be named "out_0", "out_1", and "out_2".
    """
    return (values.mean(), np.median(values), st.mode(values))
```

## Multiple Tuple Outputs

```python
import numpy as np

import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_multi_tuple(original: ot.matrix2d) -> tuple[ot.matrix2d, ot.matrix2d]:
    """Takes in a 2x2 matrix and returns two outputs that are the transpose and inverse of the matrix. Reports
    an error if the matrix is not invertible. Note that even though the data types themselves are tuples the
    return values will be correctly interpreted as being separate output attributes, with each of the outputs
    itself being a tuple value. The outputs will be named "out_1" and "out_2".
    """
    try:
        return (original.transpose(), np.linalg.inv(original))
    except np.linalg.LinAlgError as error:
        raise og.OmniGraphError(f"Could not invert matrix {original}") from error
```
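Since the body of autonode_multi_simple is plain numpy and statistics code, the three output values can be previewed outside of a graph. A small sketch with sample input values (the values are assumptions, not from the original document):

```python
import statistics as st

import numpy as np

# Evaluate the mean/median/mode computation from autonode_multi_simple
# directly, without any OmniGraph runtime.
values = np.array([1.0, 2.0, 2.0, 5.0], dtype=np.float32)
outputs = (values.mean(), float(np.median(values)), st.mode(values))
# outputs -> (2.5, 2.0, 2.0), which would populate "out_0", "out_1", "out_2"
```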
examples-roles.md
# Role-Based Data Type Examples

This file contains example usage for all of the AutoNode role-based data types. For access to the other types
of examples see AutoNode Examples.

## Contents

- [colord[3]](#colord-3)
- [colord[3][]](#id1)
- [colorf[3]](#colorf-3)
- [colorf[3][]](#id2)
- [colorh[3]](#colorh-3)
- [colorh[3][]](#id3)
- [colord[4]](#colord-4)
- [colord[4][]](#id4)
- [colorf[4]](#colorf-4)
- [colorf[4][]](#id5)
- [colorh[4]](#colorh-4)
- [colorh[4][]](#id6)
- [frame[4]](#frame-4)
- [frame[4][]](#id7)
- [matrixd[2]](#matrixd-2)
- [matrixd[2][]](#id8)
- [matrixd[3]](#matrixd-3)
- [matrixd[3][]](#id9)
- [matrixd[4]](#matrixd-4)
- [matrixd[4][]](#id10)
- [normald[3]](#normald-3)
- [normald[3][]](#id11)
- `normalf[3]`
- `normalf[3][]`
- `normalh[3]`
- `normalh[3][]`
- `pointd[3]`
- `pointd[3][]`
- `pointf[3]`
- `pointf[3][]`
- `pointh[3]`
- `pointh[3][]`
- `quatd[4]`
- `quatd[4][]`
- `quatf[4]`
- `quatf[4][]`
- `quath[4]`
- `quath[4][]`
- `texcoordd[2]`
- `texcoordd[2][]`
- `texcoordf[2]`
- `texcoordf[2][]`
- `texcoordh[2]`
- `texcoordh[2][]`
- `texcoordd[3]`
- `texcoordd[3][]`
- `texcoordf[3]`
- `texcoordf[3][]`
- `texcoordh[3]`
- `texcoordh[3][]`
- `timecode`
- `timecode[]`
- `vectord[3]`
- `vectord[3][]`
- `vectorf[3]`
- `vectorf[3][]`
- `vectorh[3]`
- `vectorh[3][]`

### colord[3]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_color3d(first_value: ot.color3d, second_value: ot.color3d) -> ot.color3d:
    """Takes in two colord[3] values and outputs the sum of them. The types of both inputs and the return value
    are numpy.array(shape=(3,), dtype=numpy.float64). When put into Fabric the values are stored as 3
    double-precision values. The color role is applied to USD and OmniGraph types as an aid to interpreting the
    values.
""" return first_value + second_value ``` ## colord[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color3darray(first_value: ot.color3darray, second_value: ot.color3darray) -> ot.color3darray: """Takes in two arrays of color3d attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colorf[3] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color3f(first_value: ot.color3f, second_value: ot.color3f) -> ot.color3f: """Takes in two colorf[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric the values are stored as 3 single-precision values. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colorf[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color3farray(first_value: ot.color3farray, second_value: ot.color3farray) -> ot.color3farray: """Takes in two arrays of color3f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ## colorh[3] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color3h(first_value: ot.color3h, second_value: ot.color3h) -> ot.color3h: """Takes in two colorh[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float16). When put into Fabric the values are stored as 3 half-precision values. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colorh[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color3harray(first_value: ot.color3harray, second_value: ot.color3harray) -> ot.color3harray: """Takes in two arrays of color3h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colord[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color4d(first_value: ot.color4d, second_value: ot.color4d) -> ot.color4d: """Takes in two color4d values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float64). When put into Fabric the values are stored as 4 double-precision values. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ## colord[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color4darray(first_value: ot.color4darray, second_value: ot.color4darray) -> ot.color4darray: """Takes in two arrays of color4d attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colorf[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color4f(first_value: ot.color4f, second_value: ot.color4f) -> ot.color4f: """Takes in two color4f values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float32). When put into Fabric the values are stored as 4 single-precision values. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ## colorf[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color4farray(first_value: ot.color4farray, second_value: ot.color4farray) -> ot.color4farray: """Takes in two arrays of color4f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ## colorh[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color4h(first_value: ot.color4h, second_value: ot.color4h) -> ot.color4h: """Takes in two color4h values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float16). When put into Fabric the values are stored as 4 half-precision values. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### colorh[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_color4harray(first_value: ot.color4harray, second_value: ot.color4harray) -> ot.color4harray: """Takes in two arrays of color4h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The color role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### frame[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_frame4d(first_value: ot.frame4d, second_value: ot.frame4d) -> ot.frame4d: """Takes in two frame4d values and outputs the sum of them. The types of both inputs and the return value are numpy.ndarray(shape=(4,4), dtype=numpy.float64). When put into Fabric the values are stored as a set of 16 double-precision values. USD uses the special frame4d type. 
""" return first_value + second_value ``` ### frame[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_frame4darray(first_value: ot.frame4darray, second_value: ot.frame4darray) -> ot.frame4darray: """Takes in two frame4darray values and outputs the sum of them. The types of both inputs and the return value are numpy.ndarray(shape=(4,4,N), dtype=numpy.float64) where "N" is the size of the array determined at runtime. When put into Fabric the values are stored as an array of sets of 16 double-precision values. USD stores it as the native frame4d[] type. """ return first_value + second_value ``` ### matrixd[2] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_matrix2d(first_value: ot.matrix2d, second_value: ot.matrix2d) -> ot.matrix2d: """Takes in two matrix2d values and outputs the sum of them. The types of both inputs and the return value are numpy.ndarray(shape=(2,2), dtype=numpy.float64). When put into Fabric and USD the values are stored as a list of 4 double-precision values. """ return first_value + second_value ``` ### matrixd[2][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_matrix2darray(first_value: ot.matrix2darray, second_value: ot.matrix2darray) -> ot.matrix2darray: """Takes in two matrix2darray values and outputs the sum of them. The types of both inputs and the return value are numpy.ndarray(shape=(N,2,2), dtype=numpy.float64) where "N" is the size of the array determined at runtime.. When put into Fabric the values are stored as an array of sets of 9 double-precision values. USD stores it as the native matrix2d[] type. 
""" return first_value + second_value ``` ### matrixd[3] ``` # matrix3d ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_matrix3d(first_value: ot.matrix3d, second_value: ot.matrix3d) -> ot.matrix3d: """Takes in two matrix3d values and outputs the sum of them. The types of both inputs and the return value are numpy.ndarray(shape=(3,3), dtype=numpy.float64). When put into Fabric and USD the values are stored as a list of 9 double-precision values. """ return first_value + second_value ``` # matrix3darray ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_matrix3darray(first_value: ot.matrix3darray, second_value: ot.matrix3darray) -> ot.matrix3darray: """Takes in two matrix3darray values and outputs the sum of them. The types of both inputs and the return value are numpy.ndarray(shape=(3,3,N), dtype=numpy.float64) where "N" is the size of the array determined at runtime.. When put into Fabric the values are stored as an array of sets of 9 double-precision values. USD stores it as the native matrix3d[] type. """ return first_value + second_value ``` # matrix4d ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_matrix4d(first_value: ot.matrix4d, second_value: ot.matrix4d) -> ot.matrix4d: """Takes in two matrix4d values and outputs the sum of them. The types of both inputs and the return value are numpy.ndarray(shape=(4,4), dtype=numpy.float64). When put into Fabric and USD the values are stored as a list of 9 double-precision values. """ return first_value + second_value ``` # matrix4darray ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_matrix4darray(first_value: ot.matrix4darray, second_value: ot.matrix4darray) -> ot.matrix4darray: """Takes in two matrix4darray values and outputs the sum of them. 
    The types of both inputs and the return value are numpy.ndarray(shape=(4,4,N), dtype=numpy.float64) where
    "N" is the size of the array determined at runtime. When put into Fabric the values are stored as an array
    of sets of 16 double-precision values. USD stores it as the native matrix4d[] type.
    """
    return first_value + second_value
```

### normald[3]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_normal3d(first_value: ot.normal3d, second_value: ot.normal3d) -> ot.normal3d:
    """Takes in two normald[3] values and outputs the sum of them. The types of both inputs and the return value
    are numpy.array(shape=(3,), dtype=numpy.float64). When put into Fabric the values are stored as 3
    double-precision values. The normal role is applied to USD and OmniGraph types as an aid to interpreting the
    values.
    """
    return first_value + second_value
```

### normald[3][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_normal3darray(first_value: ot.normal3darray, second_value: ot.normal3darray) -> ot.normal3darray:
    """Takes in two arrays of normal3d attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N"
    is the size of the array determined at runtime.
The normal role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### normalf[3] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_normal3f(first_value: ot.normal3f, second_value: ot.normal3f) -> ot.normal3f: """Takes in two normalf[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric the values are stored as 3 single-precision values. The normal role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### normalf[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_normal3farray(first_value: ot.normal3farray, second_value: ot.normal3farray) -> ot.normal3farray: """Takes in two arrays of normal3f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The normal role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### normalh[3] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_normal3h(first_value: ot.normal3h, second_value: ot.normal3h) -> ot.normal3h: """Takes in two normalh[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float16). When put into Fabric the values are stored as 3 half-precision values. The normal role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ### normalh[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_normal3harray(first_value: ot.normal3harray, second_value: ot.normal3harray) -> ot.normal3harray: """Takes in two arrays of normal3h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The normal role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### pointd[3] ```python import omni.graph.core as og ``` ```python import omni.graph.core.types as ot @og.create_node_type def autonode_point3d(first_value: ot.point3d, second_value: ot.point3d) -> ot.point3d: """Takes in two pointd[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float64). When put into Fabric the values are stored as 3 double-precision values. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3darray(first_value: ot.point3darray, second_value: ot.point3darray) -> ot.point3darray: """Takes in two arrays of point3d attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3f(first_value: ot.point3f, second_value: ot.point3f) -> ot.point3f: """Takes in two pointf[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric the values are stored as 3 single-precision values. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3farray(first_value: ot.point3farray, second_value: ot.point3farray) -> ot.point3farray: """Takes in two arrays of point3f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3h(first_value: ot.point3h, second_value: ot.point3h) -> ot.point3h: """Takes in two pointh[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float16). When put into Fabric the values are stored as 3 half-precision values. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3harray(first_value: ot.point3harray, second_value: ot.point3harray) -> ot.point3harray: """Takes in two arrays of pointh attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_point3harray(first_value: ot.point3harray, second_value: ot.point3harray) -> ot.point3harray: """Takes in two arrays of point3h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The point role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### quatd[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quatd(first_value: ot.quatd, second_value: ot.quatd) -> ot.quatd: """Takes in two quatd[4] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float64). When put into Fabric the values are stored as 4 double-precision values. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ### quatd[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quatdarray(first_value: ot.quatdarray, second_value: ot.quatdarray) -> ot.quatdarray: """Takes in two arrays of quatd attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### quatf[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quatf(first_value: ot.quatf, second_value: ot.quatf) -> ot.quatf: """Takes in two quatf[4] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float32). When put into Fabric the values are stored as 4 single-precision values. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ### quatf[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quatfarray(first_value: ot.quatfarray, second_value: ot.quatfarray) -> ot.quatfarray: """Takes in two arrays of quatf attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ### quath[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quath(first_value: ot.quath, second_value: ot.quath) -> ot.quath: """Takes in two quath[4] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float128). When put into Fabric the values are stored as 4 quadruple-precision values. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python def autonode_quath(first_value: ot.quath, second_value: ot.quath) -> ot.quath: """Takes in two quath[4] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float16). When put into Fabric the values are stored as 4 half-precision values. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_quatharray(first_value: ot.quatharray, second_value: ot.quatharray) -> ot.quatharray: """Takes in two arrays of quath attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The quaternion role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord2d(first_value: ot.texcoord2d, second_value: ot.texcoord2d) -> ot.texcoord2d: """Takes in two texcoordd[2] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.float64). 
    When put into Fabric the values are stored as 2 double-precision values.
    The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values.
    """
    return first_value + second_value
```

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_texcoord2darray(first_value: ot.texcoord2darray, second_value: ot.texcoord2darray) -> ot.texcoord2darray:
    """Takes in two arrays of texcoord2d attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(2,N,), dtype=numpy.float64) where
    "N" is the size of the array determined at runtime.
    The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values.
    """
    return first_value + second_value
```

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_texcoord2f(first_value: ot.texcoord2f, second_value: ot.texcoord2f) -> ot.texcoord2f:
    """Takes in two texcoordf[2] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.float32).
    When put into Fabric the values are stored as 2 single-precision values.
    The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values.
    """
    return first_value + second_value
```

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_texcoord2farray(first_value: ot.texcoord2farray, second_value: ot.texcoord2farray) -> ot.texcoord2farray:
    """Takes in two arrays of texcoord2f attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(2,N,), dtype=numpy.float32) where
    "N" is the size of the array determined at runtime.
    The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values.
""" return first_value + second_value ``` ```python def autonode_texcoord2farray( first_value: ot.texcoord2farray, second_value: ot.texcoord2farray ) -> ot.texcoord2farray: """Takes in two arrays of texcoord2f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord2h(first_value: ot.texcoord2h, second_value: ot.texcoord2h) -> ot.texcoord2h: """Takes in two texcoordh[2] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.float16). When put into Fabric the values are stored as 2 half-precision values. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord2harray( first_value: ot.texcoord2harray, second_value: ot.texcoord2harray ) -> ot.texcoord2harray: """Takes in two arrays of texcoord2h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3d(first_value: ot.texcoord3d, second_value: ot.texcoord3d) -> ot.texcoord3d: """Takes in two texcoordd[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float64). When put into Fabric the values are stored as 3 double-precision values. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3darray( first_value: ot.texcoord3darray, second_value: ot.texcoord3darray ) -> ot.texcoord3darray: """Takes in two arrays of texcoord3d attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3f(first_value: ot.texcoord3f, second_value: ot.texcoord3f) -> ot.texcoord3f: """Takes in two texcoordf[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric the values are stored as 3 single-precision values. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3f(first_value: ot.texcoord3f, second_value: ot.texcoord3f) -> ot.texcoord3f: """Takes in two texcoordf[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric the values are stored as 3 single-precision values. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3farray( first_value: ot.texcoord3farray, second_value: ot.texcoord3farray ) -> ot.texcoord3farray: """Takes in two arrays of texcoord3f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3h(first_value: ot.texcoord3h, second_value: ot.texcoord3h) -> ot.texcoord3h: """Takes in two texcoordh[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float16). When put into Fabric the values are stored as 3 half-precision values. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. 
""" return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_texcoord3harray( first_value: ot.texcoord3harray, second_value: ot.texcoord3harray ) -> ot.texcoord3harray: """Takes in two arrays of texcoord3h attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. The texcoord role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_timecode(first_value: ot.timecode, second_value: ot.timecode) -> ot.timecode: """Takes in two timecodes outputs the sum of them. The types of both inputs and the return value are Python floats with the full precision required in order to represent the range of legal timecodes. When put into Fabric and USD the values are stored as a double-precision floating point value. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_timecodearray(first_value: ot.timecodearray, second_value: ot.timecodearray) -> ot.timecodearray: """Takes in two arrays of timecodes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_vector3d(first_value: ot.vector3d, second_value: ot.vector3d) -> ot.vector3d: """Takes in two vectord[3] values and outputs the sum of them. 
The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float64). When put into Fabric the values are stored as 3 double-precision values. The vector role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_vector3darray(first_value: ot.vector3darray, second_value: ot.vector3darray) -> ot.vector3darray: """Takes in two arrays of vector3d attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. The vector role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_vector3f(first_value: ot.vector3f, second_value: ot.vector3f) -> ot.vector3f: """Takes in two vectorf[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). When put into Fabric the values are stored as 3 single-precision values. The vector role is applied to USD and OmniGraph types as an aid to interpreting the values. """ return first_value + second_value ``` ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_vector3farray(first_value: ot.vector3farray, second_value: ot.vector3farray) -> ot.vector3farray: """Takes in two arrays of vector3f attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. 
    The vector role is applied to USD and OmniGraph types as an aid to interpreting the values.
    """
    return first_value + second_value
```

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_vector3h(first_value: ot.vector3h, second_value: ot.vector3h) -> ot.vector3h:
    """Takes in two vectorh[3] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float16).
    When put into Fabric the values are stored as 3 half-precision values.
    The vector role is applied to USD and OmniGraph types as an aid to interpreting the values.
    """
    return first_value + second_value
```

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_vector3harray(first_value: ot.vector3harray, second_value: ot.vector3harray) -> ot.vector3harray:
    """Takes in two arrays of vector3h attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where
    "N" is the size of the array determined at runtime.
    The vector role is applied to USD and OmniGraph types as an aid to interpreting the values.
    """
    return first_value + second_value
```
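Outside the OmniGraph runtime, the array-variant bodies above are plain numpy elementwise sums, so their behavior can be sanity-checked with numpy alone. The sketch below is illustrative only: `elementwise_sum` and the sample values are made up here, standing in for any of the `autonode_*array` bodies, and it uses the (3,N) float32 layout that the vector3f[] docstring describes.

```python
import numpy as np

def elementwise_sum(first_value: np.ndarray, second_value: np.ndarray) -> np.ndarray:
    # Same one-line body shared by the array-variant examples above.
    return first_value + second_value

# Two made-up arrays holding N=2 vector3f values, stored as float32.
first = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=np.float32)
second = np.ones_like(first)

result = elementwise_sum(first, second)
# numpy adds elementwise and preserves the float32 dtype.
```

Note that numpy broadcasting means the same body also works for the scalar tuple variants such as vector3f, where the inputs are single (3,) arrays.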
examples-simple.md
# Simple Data Type Examples

This file contains example usage for all of the AutoNode simple data types. For access to the other types of examples see AutoNode Examples.

## Contents

- [bool](#bool)
- [bool[]](#bool-array)
- [double](#double)
- [double[]](#double-array)
- [float](#float)
- [float[]](#float-array)
- [half](#half)
- [half[]](#half-array)
- [int](#int)
- [int[]](#int-array)
- [int64](#int64)
- [int64[]](#int64-array)
- [string](#string)
- [token](#token)
- [token[]](#token-array)
- [uchar](#uchar)
- [uchar[]](#uchar-array)
- [uint](#uint)
- [uint[]](#uint-array)
- [uint64](#uint64)
- [uint64[]](#uint64-array)

## bool

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_bool(first_value: ot.bool, second_value: ot.bool) -> ot.boolean:
    """Takes in two boolean values and outputs the logical AND of them.
    The types of both inputs and the return value are Python booleans.
    Note that the return type name is the Warp-compatible "boolean", which is just a synonym for "bool".
    """
    return first_value and second_value
```

## bool[]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_boolarray(first_value: ot.boolarray, second_value: ot.boolarray) -> ot.boolarray:
    """Takes in two arrays of boolean attributes and returns an array with the logical AND of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=bool) where
    "N" is the size of the array determined at runtime.
    """
    return first_value & second_value
```

## double

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_double(first_value: ot.double, second_value: ot.double) -> ot.float64:
    """Takes in two double precision values and outputs the sum of them.
    The types of both inputs and the return value are Python floats as Python does not
    distinguish between different precision levels.
    When put into Fabric and USD the values are stored as double-precision values.
    Note that the return type is the Warp-compatible "float64" which is a synonym for "double".
    """
    return first_value + second_value
```

## double[]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_doublearray(first_value: ot.doublearray, second_value: ot.doublearray) -> ot.doublearray:
    """Takes in two arrays of double attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.float64) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## float

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_float(first_value: ot.float, second_value: ot.float) -> ot.float32:
    """Takes in two single-precision floating point values and outputs the sum of them.
    The types of both inputs and the return value are Python floats as Python does not
    distinguish between different precision levels.
    When put into Fabric and USD the values are stored as single-precision floating point values.
    Note that the return type is the Warp-compatible "float32" which is a synonym for "float".
    """
    return first_value + second_value
```

## float[]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_floatarray(first_value: ot.floatarray, second_value: ot.floatarray) -> ot.floatarray:
    """Takes in two arrays of float attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.float32) where
    "N" is the size of the array determined at runtime.
""" return first_value + second_value ``` # autonode_floatarray ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_floatarray(first_value: ot.floatarray, second_value: ot.floatarray) -> ot.floatarray: """Takes in two arrays of float attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` # half ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_half(first_value: ot.half, second_value: ot.half) -> ot.float16: """Takes in two half-precision floating point values and outputs the sum of them. The types of both inputs and the return value are Python floats as Python does not distinguish between different precision levels. When put into Fabric and USD the values are stored as half precision floating point values. Note that the return type is the Warp-compatible "float16" which is a synonym for "half". """ return first_value + second_value ``` # half[] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_halfarray(first_value: ot.halfarray, second_value: ot.halfarray) -> ot.halfarray: """Takes in two arrays of half attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.float16) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` # int ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_int(first_value: ot.int, second_value: ot.int) -> ot.int32: """Takes in two 32-bit precision integer values and outputs the sum of them. 
    The types of both inputs and the return value are Python ints as Python does not
    distinguish between different precision levels.
    When put into Fabric and USD the values are stored as 32-bit precision integer values.
    Note that the return type is the Warp-compatible "int32" which is a synonym for "int".
    """
    return first_value + second_value
```

## int[]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_intarray(first_value: ot.intarray, second_value: ot.intarray) -> ot.intarray:
    """Takes in two arrays of integer attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.int32) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## int64

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int64(first_value: ot.int64, second_value: ot.int64) -> ot.int64:
    """Takes in two 64-bit precision integer values and outputs the sum of them.
    The types of both inputs and the return value are Python ints as Python does not
    distinguish between different precision levels.
    When put into Fabric and USD the values are stored as 64-bit precision integer values.
    """
    return first_value + second_value
```

## int64[]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int64array(first_value: ot.int64array, second_value: ot.int64array) -> ot.int64array:
    """Takes in two arrays of 64-bit integer attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.int64) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## string

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_string(first_value: ot.string, second_value: ot.string) -> ot.string:
    """Takes in two string values and outputs the concatenated string.
    The types of both inputs and the return value are Python str.
    When put into Fabric the values are stored as uchar arrays with a length value.
    USD stores it as a native string type.
    """
    return first_value + second_value
```

## token

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_token(first_value: ot.token, second_value: ot.token) -> ot.token:
    """Takes in two tokenized strings and outputs the string resulting from concatenating them together.
    The types of both inputs and the return value are Python strs as Python does not have the concept
    of a unique tokenized string.
    When put into Fabric and USD the values are stored as a single 64-bit unsigned integer that is a token.
    """
    return first_value + second_value
```

## token[]

```python
import numpy as np

import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_tokenarray(first_value: ot.tokenarray, second_value: ot.tokenarray) -> ot.tokenarray:
    """Takes in two arrays of tokens and returns an array containing the concatenations of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype="<U") where
    "N" is the size of the array determined at runtime.
    """
    return np.array([x + y for x, y in zip(first_value, second_value)])
```

## uchar

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_uchar(first_value: ot.uchar, second_value: ot.uchar) -> ot.uint8:
    """Takes in two 8-bit precision unsigned integer values and outputs the sum of them.
    The types of both inputs and the return value are Python ints as Python does not
    distinguish between different precision levels or signs.
    When put into Fabric and USD the values are stored as 8-bit precision unsigned integer values.
    Note that the return type is the Warp-compatible "uint8" which is a synonym for "uchar".
    """
    return first_value + second_value
```

## uchar[]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_uchararray(first_value: ot.uchararray, second_value: ot.uchararray) -> ot.uchararray:
    """Takes in two arrays of 8-bit unsigned integer attributes and returns an array containing the sum
    of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.uint8) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## uint

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_uint(first_value: ot.uint, second_value: ot.uint) -> ot.uint:
    """Takes in two 32-bit precision unsigned integer values and outputs the sum of them.
    The types of both inputs and the return value are Python ints as Python does not
    distinguish between different precision levels or signs.
    When put into Fabric and USD the values are stored as 32-bit precision unsigned integer values.
    """
    return first_value + second_value
```

## uint[]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_uintarray(first_value: ot.uintarray, second_value: ot.uintarray) -> ot.uintarray:
    """Takes in two arrays of 32-bit unsigned integer attributes and returns an array containing the sum
    of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.uint32) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## uint64

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_uint64(first_value: ot.uint64, second_value: ot.uint64) -> ot.uint64:
    """Takes in two 64-bit precision unsigned integer values and outputs the sum of them.
    The types of both inputs and the return value are Python ints as Python does not
    distinguish between different precision levels or signs.
    When put into Fabric and USD the values are stored as 64-bit precision unsigned integer values.
    """
    return first_value + second_value
```

## uint64[]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_uint64array(first_value: ot.uint64array, second_value: ot.uint64array) -> ot.uint64array:
    """Takes in two arrays of 64-bit unsigned integer attributes and returns an array containing the sum
    of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.uint64) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```
examples-special.md
# Special Data Type Examples

This file contains example usage for all of the AutoNode special data types. For access to the other types of examples see AutoNode Examples.

## Contents

- [bundle](#bundle)
- [execution](#execution)
- [objectId](#objectid)
- [objectId[]](#id1)
- [target](#target)

## bundle

```python
import inspect

import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_bundle(bundle: ot.bundle, added: ot.int) -> ot.bundle:
    """Takes in a bundle value and outputs a bundle containing everything in the input bundle plus a count
    of "added" extra integer members named "added_0", "added_1", etc. Use the special value "added = 0" to
    indicate that the bundle should be cleared.
    The types of both inputs and the return value are og.BundleContents. When put into Fabric the bundle
    is stored as a data bucket and in USD it is represented as a target or reference to a prim when
    connected.
    Note how, since AutoNode definitions do not have direct access to the node, the inspect module must be
    used to get at it in order to construct an output bundle.
""" frame = inspect.currentframe().f_back node = frame.f_locals.get("node") context = frame.f_locals.get("context") result = og.BundleContents(context, node, "outputs_out_0", read_only=False, gpu_by_default=False) result.clear() ``` ```python if bundle.valid: result.bundle = bundle if added > 0: first_index = result.size for index in range(added): result.bundle.create_attribute(f"added_{index + first_index}", og.Type(og.BaseDataType.INT)) return result ``` ## execution ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_execution(first_trigger: ot.execution, second_trigger: ot.execution) -> ot.execution: """Takes two execution pins and triggers the output only when both of them are enabled. The types of both inputs and the return value are Python ints as Python does not distinguish between different precision levels. When put into Fabric and USD the values are stored as 32-bit precision integer values. """ if first_trigger == og.ExecutionAttributeState.ENABLED and second_trigger == og.ExecutionAttributeState.ENABLED: return og.ExecutionAttributeState.ENABLED return og.ExecutionAttributeState.DISABLED ``` ## objectId ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_objectid(first_value: ot.objectid, second_value: ot.objectid) -> ot.objectid: """Takes in two objectId values and outputs the larger of them. The types of both inputs and the return value are Python ints as Python does not distinguish between different precision levels or signs. When put into Fabric and USD the values are stored as 64-bit unsigned integer values. 
""" return first_value if first_value > second_value else second_value ``` ## objectId[] ```python import numpy as np import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_objectidarray(first_value: ot.objectidarray, second_value: ot.objectidarray) -> ot.objectidarray: """Takes in two arrays of object IDs and returns an array containing the largest of each element. The types of both inputs and the return value are numpy.ndarray(shape=(N,), dtype=numpy.uint64) where "N" is the size of the array determined at runtime. """ return np.maximum(first_value, second_value) ``` ## target ```python import omni.graph.core as og import omni.graph.core.types as ot from usdrt import Sdf @og.create_node_type def autonode_target(target_values: ot.target) -> ot.target: """Takes in target values and outputs the targets resulting from appending "TestChild" to the target. The types of both inputs and the return value are list[usdrt.Sdf.Path]. Unlike most other array types this ``` is represented as a list rather than a numpy array since the value type is not one supported by numpy. When put into Fabric the value is stored as an SdfPath token, while USD uses the native rel type. """ return [target.AppendPath("Child") for target in target_values]
examples-tuple.md
# Tuple Data Type Examples

This file contains example usage for all of the AutoNode tuple data types. For access to the other types of examples see AutoNode Examples.

## Contents

- [double[2]](#double-2)
- [double[2][]](#id1)
- [double[3]](#double-3)
- [double[3][]](#id2)
- [double[4]](#double-4)
- [double[4][]](#id3)
- [float[2]](#float-2)
- [float[2][]](#id4)
- [float[3]](#float-3)
- [float[3][]](#id5)
- [float[4]](#float-4)
- [float[4][]](#id6)
- [half[2]](#half-2)
- [half[2][]](#id7)
- [half[3]](#half-3)
- [half[3][]](#id8)
- [half[4]](#half-4)
- [half[4][]](#id9)
- [int[2]](#int-2)
- [int[2][]](#id10)
- [int[3]](#int-3)
- [int[3][]](#id11)
- [int[4]](#int-4)
- [int[4][]](#id12)

## double[2]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_double2(first_value: ot.double2, second_value: ot.double2) -> ot.double2:
    """Takes in two double[2] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.float64).
    When put into Fabric and USD the values are stored as two double-precision floating point values.
    """
    return first_value + second_value
```

## double[2][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_double2array(first_value: ot.double2array, second_value: ot.double2array) -> ot.double2array:
    """Takes in two arrays of double2 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(2,N,), dtype=numpy.float64) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## double[3]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_double3(first_value: ot.double3, second_value: ot.double3) -> ot.double3:
    """Takes in two double[3] values and outputs the sum of them.
The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float64). When put into Fabric and USD the values are stored as three double-precision floating point values. """ return first_value + second_value ``` ## double[3][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_double3array(first_value: ot.double3array, second_value: ot.double3array) -> ot.double3array: """Takes in two arrays of double3 attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` ## double[4] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_double4(first_value: ot.double4, second_value: ot.double4) -> ot.double4: """Takes in two double[4] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float64). When put into Fabric and USD the values are stored as four double-precision floating point values. """ return first_value + second_value ``` ## double[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_double4array(first_value: ot.double4array, second_value: ot.double4array) -> ot.double4array: """Takes in two arrays of double4 attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. 
""" return first_value + second_value ``` # double[4][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_double4array(first_value: ot.double4array, second_value: ot.double4array) -> ot.double4array: """Takes in two arrays of double4 attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float64) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` # float[2] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_float2(first_value: ot.float2, second_value: ot.float2) -> ot.float2: """Takes in two float[2] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.float32). When put into Fabric and USD the values are stored as two single-precision floating point values. """ return first_value + second_value ``` # float[2][] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_float2array(first_value: ot.float2array, second_value: ot.float2array) -> ot.float2array: """Takes in two arrays of float2 attributes and returns an array containing the sum of each element. The types of both inputs and the return value are numpy.ndarray(shape=(2,N,), dtype=numpy.float32) where "N" is the size of the array determined at runtime. """ return first_value + second_value ``` # float[3] ```python import omni.graph.core as og import omni.graph.core.types as ot @og.create_node_type def autonode_float3(first_value: ot.float3, second_value: ot.float3) -> ot.float3: """Takes in two float[3] values and outputs the sum of them. The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float32). 
    When put into Fabric and USD the values are stored as three single-precision floating point values.
    """
    return first_value + second_value
```

## float[3][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_float3array(first_value: ot.float3array, second_value: ot.float3array) -> ot.float3array:
    """Takes in two arrays of float3 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float32) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## float[4]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_float4(first_value: ot.float4, second_value: ot.float4) -> ot.float4:
    """Takes in two float[4] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float32).
    When put into Fabric and USD the values are stored as four single-precision floating point values.
    """
    return first_value + second_value
```

## float[4][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_float4array(first_value: ot.float4array, second_value: ot.float4array) -> ot.float4array:
    """Takes in two arrays of float4 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float32) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## half[2]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_half2(first_value: ot.half2, second_value: ot.half2) -> ot.half2:
    """Takes in two half[2] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.float16).
    When put into Fabric and USD the values are stored as two 16-bit floating point values.
    """
    return first_value + second_value
```

## half[2][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_half2array(first_value: ot.half2array, second_value: ot.half2array) -> ot.half2array:
    """Takes in two arrays of half2 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(2,N,), dtype=numpy.float16) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## half[3]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_half3(first_value: ot.half3, second_value: ot.half3) -> ot.half3:
    """Takes in two half[3] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.float16).
    When put into Fabric and USD the values are stored as three 16-bit floating point values.
    """
    return first_value + second_value
```

## half[3][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_half3array(first_value: ot.half3array, second_value: ot.half3array) -> ot.half3array:
    """Takes in two arrays of half3 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.float16) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## half[4]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_half4(first_value: ot.half4, second_value: ot.half4) -> ot.half4:
    """Takes in two half[4] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.float16).
    When put into Fabric and USD the values are stored as four 16-bit floating point values.
    """
    return first_value + second_value
```

## half[4][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_half4array(first_value: ot.half4array, second_value: ot.half4array) -> ot.half4array:
    """Takes in two arrays of half4 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.float16) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## int[2]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int2(first_value: ot.int2, second_value: ot.int2) -> ot.int2:
    """Takes in two int[2] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(2,), dtype=numpy.int32).
    When put into Fabric and USD the values are stored as two 32-bit integer values.
    """
    return first_value + second_value
```

## int[2][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int2array(first_value: ot.int2array, second_value: ot.int2array) -> ot.int2array:
    """Takes in two arrays of int2 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(2,N,), dtype=numpy.int32) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## int[3]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int3(first_value: ot.int3, second_value: ot.int3) -> ot.int3:
    """Takes in two int[3] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(3,), dtype=numpy.int32).
    When put into Fabric and USD the values are stored as three 32-bit integer values.
    """
    return first_value + second_value
```

## int[3][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int3array(first_value: ot.int3array, second_value: ot.int3array) -> ot.int3array:
    """Takes in two arrays of int3 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(3,N,), dtype=numpy.int32) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```

## int[4]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int4(first_value: ot.int4, second_value: ot.int4) -> ot.int4:
    """Takes in two int[4] values and outputs the sum of them.
    The types of both inputs and the return value are numpy.array(shape=(4,), dtype=numpy.int32).
    When put into Fabric and USD the values are stored as four 32-bit integer values.
    """
    return first_value + second_value
```

## int[4][]

```python
import omni.graph.core as og
import omni.graph.core.types as ot

@og.create_node_type
def autonode_int4array(first_value: ot.int4array, second_value: ot.int4array) -> ot.int4array:
    """Takes in two arrays of int4 attributes and returns an array containing the sum of each element.
    The types of both inputs and the return value are numpy.ndarray(shape=(4,N,), dtype=numpy.int32) where
    "N" is the size of the array determined at runtime.
    """
    return first_value + second_value
```
examples.md
# AutoNode Examples

For simplicity the set of examples is broken out into subsections so that you can more easily navigate to the types that are of use to you. In addition, if you wish to peruse the set of all available examples, they have been aggregated into a single file.

## Types of Data Handled

- [Simple Data Type Examples](examples-simple.html)
- [Tuple Data Type Examples](examples-tuple.html)
- [Role-Based Data Type Examples](examples-roles.html)
- [Special Data Type Examples](examples-special.html)

To see the full list of data types available for use, look at the Data Types document.

## Special Examples

- [Multiple-Output Examples](examples-multi-outputs.html)
- [Examples Using Custom Decoration](examples-decoration.html)
- [All Data Type Examples](examples-all.html)
example_Overview.md
# Overview

This document covers the basic concepts and gives examples of how to build a graph extension. It mainly focuses on:

- how to build a simple graph model which manages the data for the graph
- how to create a custom graph delegate which controls the look of the graph
- how to create a graph example based on our generic base widget, GraphEditorCoreWidget

## Example

`omni.kit.graph.editor.example` is an extension we built as an example for developers to start their journey of building their own graph extension.

To preview how the graph looks and how it works, find the extension `omni.kit.graph.editor.example` and enable it in your app (e.g. Code, where it is enabled by default).

There is a start panel on the right-hand side of the graph editor where you can start to Open or Create your graph. You can drag and drop nodes from the left catalog widget, which contains a list of available nodes, into the graph editor area. There is also a simple toolbar on the top where you can open or save, or go back to the start frame. Once you start editing the graph, a dropdown widget lets you switch between different delegate styles for your graph.

In summary, the simple example demonstrates:

- saving and importing a graph
- node creation and deletion
- port connection and disconnection
- switching between different graph delegates with the same graph model
- using backdrop and subgraph nodes to organize the graph visually and hierarchically

## Make your own extension

You are welcome to fork the code as the starting point for your own extension and build from there. To simplify the demonstration of the graph model, this example is not USD based: it uses JSON for serialization, and all the nodes, ports, and their properties are string based. If you are looking for a USD-based graph extension example, please refer to `omni.kit.window.material_graph`, though it is considerably more complex.
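The string-based, JSON-serialized design described above can be sketched in a few lines. Note this is an illustrative stand-in, not the extension's actual model API — the `to_json`/`from_json` names and the node/connection shapes are assumptions:

```python
import json

# Hypothetical, minimal stand-in for a string-based graph model like the one in
# omni.kit.graph.editor.example: nodes, ports, and properties are plain strings.
def to_json(nodes, connections):
    """Serialize a node/connection description to a JSON string."""
    return json.dumps({"nodes": nodes, "connections": connections}, indent=2)

def from_json(text):
    """Deserialize a JSON string back into plain Python structures."""
    data = json.loads(text)
    return data["nodes"], data["connections"]

nodes = {"add1": {"type": "Add", "ports": ["a", "b", "out"]}}
connections = [["add1.out", "print1.in"]]

# Round-tripping through JSON preserves the whole graph description.
round_tripped = from_json(to_json(nodes, connections))
assert round_tripped == (nodes, connections)
```

Because everything is a string, no USD schema or type registry is needed, which is what keeps the example easy to follow.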
execution.md
# Running the service

The Scene Optimizer can be executed via the docker container, on bare metal via the Kit CLI, or as a Task on Omniverse Farm (recommended).

## Docker

1. Sign up to NGC.
2. Log in via docker.
3. Pull the `services` docker container from NGC’s docker repository:

```docker
docker pull YOUR_DOCKERHUB_USERNAME/my-service-image:v1
```

4. Run the container:

```docker
docker run -it --rm \
 -p 8000:8000 \
 -e USER=my_username \
 -e TOKEN=my_token \
 my-service-image --config_json_url=https://example.com/path/to/config.json
```

You can now access the service’s swagger UI to learn about its API, send requests, and get curl example commands.

## Bare Metal + Omniverse Kit CLI

Example command line when using it with Omniverse Kit:

```bash
```

Once the service is running, you can invoke the /request endpoint to process files.

## Omniverse Farm TaaS (Task as a Service)

When using it as a Task on Omniverse Farm, a job definition needs to be provided. The job definition informs the farm task about which service endpoint should be executed by the farm, as well as the JSON configuration defining the Scene Optimizer processes to be completed.
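As a sketch of what calling the service might look like once it is running locally, the snippet below builds an HTTP request against the `/request` endpoint using Python's standard library. The port, payload field, and config URL are illustrative assumptions — consult the service's swagger UI for the real request schema:

```python
import json
import urllib.request

# Assumed local endpoint from the `docker run -p 8000:8000` example above.
url = "http://localhost:8000/request"

# Hypothetical payload; the swagger UI documents the actual fields.
payload = {"config_json_url": "https://example.com/path/to/config.json"}

req = urllib.request.Request(
    url,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would submit the job; here we only build the request.
print(req.full_url, req.method)  # http://localhost:8000/request POST
```

For production use, prefer the curl examples generated by the swagger UI, since they reflect the deployed service's actual schema.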
ExecutorCreation.md
# Executor Creation

This is a practitioner’s guide to using the Execution Framework. Before continuing, it is recommended you first review the [Execution Framework Overview](#ef-framework) along with basic topics such as [Graphs Concepts](#ef-graph-concepts), [Pass Concepts](#ef-pass-concepts), and [Execution Concepts](#ef-execution-concepts).

Customizing execution can happen at many levels; let’s have a look at different examples.

## Customizing Visit Strategy

The default `ExecutorFallback`’s visit strategy and execution order is matching traversal over the entire graph, where each node gets computed only once, when all upstream nodes complete computation. Without changing the traversal order, we can change the visit strategy to only compute when the underlying node requests to compute.

### Listing 31: A custom visit strategy for visiting only nodes that requested compute.

```cpp
struct ExecutionVisitWithCacheCheck
{
    //! Called when the traversal wants to visit a node. This method determines what to do with the node (e.g. schedule
    //! it, defer it, etc).
    template <typename ExecutorInfo>
    static Status tryVisit(ExecutorInfo info) noexcept
    {
        auto& nodeData = info.getNodeData();

        auto triggeringTaskStatus = info.currentTask.getExecutionStatus();
        if (triggeringTaskStatus == Status::eSuccess)
            nodeData.hasComputedUpstream = true; // we only set to true...doesn't matter which thread does it first
        else if (triggeringTaskStatus == Status::eDeferred)
            nodeData.hasDeferredUpstream = true; // we only set to true...doesn't matter which thread does it first

        std::size_t requiredCount = info.nextNode->getParents().size() - info.nextNode->getCycleParentCount();
        if ((requiredCount == 0) || (++nodeData.visitCount == requiredCount))
        {
            if (nodeData.hasDeferredUpstream)
                return Status::eDeferred;
            else
            {
                // spawning a task within executor doesn't change the upstream path. just reference the same one.
                ExecutionTask newTask(info.getContext(), info.nextNode, info.getUpstreamPath());
                if (nodeData.hasComputedUpstream ||
                    info.getContext()->getStateInfo(newTask)->needsCompute(info.getExecutionStamp()))
                    return info.schedule(std::move(newTask));
                else
                    // continue downstream...there may be something dirty. Bypass scheduler to avoid unnecessary
                    // overhead
                    return info.continueExecute(newTask);
            }
        }
        return Status::eUnknown;
    }
};
```

In this modified version, a node is computed, and the result propagated downstream, only when compute was requested.

## Customizing Preallocated Per-node Data

Sometimes the visit strategy must store more data per node to achieve the desired execution behavior. We will use an example from a pipeline graph that dynamically generates more work based on data and a static graph.

```cpp
struct TestPipelineExecutionNodeData : public ExecutionNodeData
{
    DynamicNode* getNode(ExecutionTaskTag tag)
    {
        if (tag == ExecutionTask::kEmptyTag)
            return nullptr;

        auto findIt = generatedNodes.find(tag);
        return findIt != generatedNodes.end() ? &findIt->second : nullptr;
    }

    DynamicNode* createNode(ExecutionTask&& task)
    {
        if (!task.hasValidTag())
            return nullptr; // LCOV_EXCL_LINE

        auto findIt = generatedNodes.find(task.getTag());
        if (findIt != generatedNodes.end())
            return &findIt->second; // LCOV_EXCL_LINE

        auto added = generatedNodes.emplace(task.getTag(), std::move(task));
        return &added.first->second;
    }

    using DynamicNodes = std::map<ExecutionTaskTag, DynamicNode>;
    DynamicNodes generatedNodes;

    std::atomic<std::size_t> dynamicUpstreamCount{ 0 };
    std::atomic<std::size_t> dynamicVisitCount{ 0 };
};
```

```cpp
template <typename ExecutorInfo>
Status TestPipelineExecutionVisit::tryVisit(ExecutorInfo info) noexcept
{
    OMNI_GRAPH_EXEC_ASSERT(info.nextNode->getCycleParentCount() == 0);

    auto pipelineNodeDef = omni::graph::exec::unstable::cast<TestPipelineNodeDef>(info.nextNode->getNodeDef());
    if (!pipelineNodeDef)
        return Status::eFailure; // LCOV_EXCL_LINE

    auto executor = omni::graph::exec::unstable::cast<TestPipelineExecutor>(info.getExecutor());
    REQUIRE(executor);

    const ExecutionTask& currentTask = info.currentTask;
    auto& predData = info.getExecutor()->getNodeData(currentTask.getNode());
    auto& nodeData = info.getNodeData();

    std::size_t dynamicVisit = 0;
    if (!currentTask.hasValidTag()) // we enter a pre-visit that can statically generate work
    {
        nodeData.dynamicUpstreamCount += predData.generatedNodes.size();
        nodeData.visitCount++;
        dynamicVisit = nodeData.dynamicVisitCount;

        Status status = pipelineNodeDef->generate(
            currentTask, info.nextNode, TestPipelineNodeDef::VisitStep::ePreExecute, executor->getDynamicGraph());
        if (status == Status::eSuccess /*STATIC*/ && nodeData.visitCount >= info.nextNode->getParents().size())
        {
            ExecutionTask newTask(info.getContext(), info.nextNode, info.getUpstreamPath());
            (void)executor->continueExecute(newTask);
        }
    }
    else
    {
        dynamicVisit = ++nodeData.dynamicVisitCount;

        DynamicNode* predDynamicNode = predData.getNode(currentTask.getTag());
        predDynamicNode->done();

        pipelineNodeDef->generate(
            currentTask, info.nextNode, TestPipelineNodeDef::VisitStep::eExecute, executor->getDynamicGraph());
    }

    // this was the last dynamic call into the node
    if (nodeData.visitCount >= info.nextNode->getParents().size() && nodeData.dynamicUpstreamCount == dynamicVisit)
    {
        Status status = pipelineNodeDef->generate(
            currentTask, info.nextNode, TestPipelineNodeDef::VisitStep::ePostExecute, executor->getDynamicGraph());
        if (status == Status::eSuccess /*DYNAMIC*/)
        {
            ExecutionTask newTask(info.getContext(), info.nextNode, info.getUpstreamPath());
            (void)executor->continueExecute(newTask);
        }
    }

    // Kick dynamic work
    for (auto& pair : nodeData.generatedNodes)
    {
        DynamicNode& dynNode = pair.second;
        if (dynNode.trySchedule())
        {
            ExecutionTask newTask = dynNode.task();
            info.schedule(std::move(newTask));
        }
    }

    return Status::eUnknown;
}
```

## Customizing Scheduler

The default `ExecutorFallback`'s scheduler will run all the generated tasks serially on the calling thread. We can easily change that and request task dispatch from a custom scheduler.

### Listing 34: A custom scheduler dispatch implementation to run all generated tasks concurrently.

```cpp
struct TestTbbScheduler
{
    tbb::task_group g;

    TestTbbScheduler(IExecutionContext* context)
    {
    }
    ~TestTbbScheduler() noexcept
    {
    }

    template <typename Fn>
    Status schedule(Fn&& task, SchedulingInfo)
    {
        g.run(
            [task = captureScheduleFunction(task), this]() mutable
            {
                Status ret = invokeScheduleFunction(task);

                Status current, newValue = Status::eUnknown;
                do // LCOV_EXCL_LINE
                {
                    current = this->m_status.load();
                    newValue = ret | current;
                } while (!this->m_status.compare_exchange_weak(current, newValue));
            });
        return Status::eSuccess;
    }

    Status getStatus()
    {
        g.wait();
        return m_status;
    }

private:
    std::atomic<Status> m_status{ Status::eUnknown };
};
```

## Customizing Traversal

In all the examples above, the executor was iterating over all children of a node and was able to stop dispatching the node to compute. We can further customize the continuation loop over the children of a node by overriding the `Executor::continueExecute(const ExecutionTask&)` method. This ultimately allows us to change the entire traversal behavior.
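The counting logic that `ExecutionVisitWithCacheCheck` builds on — visit a node each time a parent finishes, but only act on the final visit once every parent has completed — is easy to see in a plain-Python sketch. This is an illustration of the idea only, not the Execution Framework API:

```python
# Minimal, illustrative model of the "visit when all parents are done" strategy:
# each completed edge triggers a visit, and the node computes on the final visit.
def execute(graph, roots):
    """graph maps node -> list of children; returns the compute order."""
    parent_count = {}
    for node, children in graph.items():
        for child in children:
            parent_count[child] = parent_count.get(child, 0) + 1

    visit_count = {}
    order = []

    def visit(node):
        required = parent_count.get(node, 0)
        visit_count[node] = visit_count.get(node, 0) + 1
        if required == 0 or visit_count[node] == required:
            order.append(node)            # "compute" the node exactly once
            for child in graph.get(node, []):
                visit(child)              # continuation into children

    for root in roots:
        visit(root)
    return order

# Diamond: a -> b, a -> c, b -> d, c -> d. Node d runs once, after both parents.
print(execute({"a": ["b", "c"], "b": ["d"], "c": ["d"], "d": []}, ["a"]))
# ['a', 'b', 'c', 'd']
```

A custom visit strategy like Listing 31 keeps this counting intact but adds a cache check before deciding to schedule the "compute" step.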
In this final example, we will take this to the extreme by also customizing `IExecutor::execute()` and delegating the entire execution to the implementation of `NodeDef`. We will use a Behavior Tree to illustrate it all. Make sure to follow the examples from Definition Creation to learn how the NodeGraphDefs were implemented.

```cpp
// Listing 35: A custom executor for behavior tree.
using BaseExecutorClass = Executor<Node, BtVisit, BtNodeData, SerialScheduler, DefaultSchedulingStrategy>;

class BtExecutor : public BaseExecutorClass
{
public:
    //! Factory method
    static omni::core::ObjectPtr<BtExecutor> create(omni::core::ObjectParam<ITopology> toExecute,
                                                    const ExecutionTask& thisTask)
    {
        return omni::core::steal(new BtExecutor(toExecute.get(), thisTask));
    }

    //! Custom execute method to bypass continuation and start visitation directly.
    //!
    //! Propagate the behavior tree status to the node instantiating the NodeGraphDef this executor operates on.
    //! This enables composability of behavior trees.
    Status execute_abi() noexcept override
    {
        auto& instantiatingNodeState = BtNodeState::forceGet(m_task.getContext()->getStateInfo(m_path));
        instantiatingNodeState.computeStatus = BtNodeState::Status::eSuccess;

        for (auto child : m_task.getNode()->getChildren())
        {
            if (BtVisit::tryVisit(Info(this, m_task, child)) == Status::eFailure)
            {
                instantiatingNodeState.computeStatus = BtNodeState::Status::eFailure;
                break;
            }
        }

        return Status::eSuccess;
    }

    //! We don't leverage continuation called from within executed task. Entire traversal logic is handled before
    //! from within NodeDef execution method. See nodes implementing @p BtNodeDefBase.
    Status continueExecute_abi(ExecutionTask* currentTask) noexcept override
    {
        return currentTask->getExecutionStatus();
    }

protected:
    //! Constructor
    BtExecutor(ITopology* toExecute, const ExecutionTask& currentTask) noexcept
        : BaseExecutorClass(toExecute, currentTask)
    {
    }
};
```

```cpp
struct BtVisit
{
    template <typename ExecutorInfo>
    static Status tryVisit(ExecutorInfo info) noexcept
    {
        // Illustrate that we can still leverage pre-allocated data to avoid potential cycles.
        // FWIW. They can as well be detected earlier in the pipeline.
        auto& nodeData = info.getNodeData();
        if (std::exchange(nodeData.executed, true))
        {
            return Status::eFailure; // LCOV_EXCL_LINE
        }

        // We don't engage the scheduler because there should be only single node under root...if not but we could get
        // all the independent branches executed concurrently when going via scheduler.
        ExecutionTask newTask(info.getContext(), info.nextNode, info.getUpstreamPath());
        if (newTask.execute(info.getExecutor()) == Status::eSuccess)
        {
            auto& nodeState = BtNodeState::forceGet(&newTask);
            return (nodeState.computeStatus == BtNodeState::Status::eSuccess) ? Status::eSuccess : Status::eFailure;
        }

        return Status::eFailure; // LCOV_EXCL_LINE
    }
};
```

```cpp
class BtSequenceNodeDef : public BtNodeDefBase
{
public:
    //! Factory method
    static omni::core::ObjectPtr<BtSequenceNodeDef> create()
    {
        return omni::core::steal(new BtSequenceNodeDef());
    }

protected:
    //! Specialized composition method for sequence behavior. We don't engage scheduler since all work needs to happen
    //! during the call and scheduler would only add overhead in here.
    Status execute_abi(ExecutionTask* info) noexcept override
    {
        auto& nodeState = BtNodeState::forceGet(info);
        nodeState.computeStatus = BtNodeState::Status::eSuccess;

        for (auto child : info->getNode()->getChildren())
        {
            ExecutionTask newTask(info->getContext(), child, info->getUpstreamPath());
            newTask.execute(getCurrentExecutor()); // bypass scheduler

            if (BtNodeState::forceGet(&newTask).computeStatus == BtNodeState::Status::eFailure)
            {
                nodeState.computeStatus = BtNodeState::Status::eFailure;
                break;
            }
        }

        return Status::eSuccess;
    }

    //! Constructor
    BtSequenceNodeDef() noexcept : BtNodeDefBase("tests.def.BtSequenceNodeDef")
    {
    }
};
```
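The sequence semantics that `BtSequenceNodeDef` implements — run children in order, stop at the first failure, succeed only if every child succeeds — can be sketched in a few lines of Python. This is a language-agnostic illustration of the behavior-tree pattern, not the Execution Framework API:

```python
# Illustrative behavior-tree "sequence" node: children are callables returning
# True (success) or False (failure); traversal stops at the first failure.
def sequence(children):
    def run():
        for child in children:
            if not child():
                return False  # first failure short-circuits the sequence
        return True  # success only when every child succeeded
    return run

calls = []
ok = lambda name: lambda: calls.append(name) or True
fail = lambda name: lambda: calls.append(name) or False

tree = sequence([ok("walk"), fail("open_door"), ok("enter")])
print(tree(), calls)  # False ['walk', 'open_door'] -- "enter" never ran
```

Because each `sequence` is itself a callable, sequences nest naturally, mirroring how `BtExecutor` propagates status upward to enable composition of behavior trees.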
explore-community-extensions_app_from_scratch.md
# Develop a Simple App This section provides an introduction to Application development and presents important foundational knowledge: - How Applications and Extensions are defined in `.kit` and `.toml` files. - How to explore existing Extensions and adding them to your Application. - How user settings can override Application configurations. - Controlling Application window layout. ## Kit and Toml Files If you have developed solutions before you are likely to have used configuration files. Configuration files present developers with a “low-code” approach to changing behaviors. With Kit SDK you will use configuration files to declare: - Package metadata - Dependencies - Settings Kit allows Applications and Services to be configured via `.kit` files and Extensions via `.toml` files. Both files present the same ease of readability and purpose of defining a configuration - they simply have different file Extensions. Let’s create a `.kit` file and register it with the build system: 1. Create a Kit file: 1. Create a file named `my_company.my_app.kit` in `.\source\apps`. 2. Add this content to the file: ```toml [package] title = "My App" description = "An Application created from a tutorial." version = "2023.0.0" [dependencies] "omni.kit.uiapp" = {} [settings] app.window.title = "My App" [[test]] args = [ "--/app/window/title=My Test App", ] ``` 2. Configure the build tool to recognize the new Application: 1. Open `.\premake5.lua`. 2. Find the section `-- Apps:`. 3. Add an entry for the new app: - Define the application: ```plaintext define_app("my_company.my_app") ``` - Run the `build` command. - Start the app: - Windows: ```plaintext .\_build\windows-x86_64\release\my_company.my_app.bat ``` - Linux: ```plaintext ./_build/linux-x86_64/release/my_company.my_app.sh ``` - Congratulations, you have created an Application! 
Let’s review the sections of `.kit` and `.toml` files:

### Package

This section provides information used for publishing and displaying information about the Application/Extension. For example, `version = "2023.0.0"` is used both in publishing and UI: a publishing process can alert a developer that the given version has already been published, and the version can be shown in an “About Window” and the Extension Manager.

### Dependencies

The dependencies section is a list of Extensions used by the Application/Extension. The above reference `"omni.kit.uiapp" = {}` points to the most recent version available but can be configured to use specific versions. Example of an Extension referenced by a specific version:

```toml
"omni.kit.converter.cad" = {version = "200.1", exact = true}
```

The dependencies can be hosted in Extension Registries for on-demand download or in various locations on the local workstation - including inside a project like kit-app-template.

### Settings

Settings provide a low-code mechanism to customize Application/Extension behavior. Some settings modify UI and others modify functionality - it all depends on how an Application/Extension makes use of the setting. An Omniverse developer should consider exposing settings to developers - and end users - to make Extensions as modular as possible.

#### Experiment

Change the title to `My Company App` - `app.window.title = "My Company App"` - and run the app again - still, no build required. Note the Application title bar shows the new name.

### Test

The test section can be thought of as a combined dependencies and settings section. It allows adding dependencies and settings for when running an Application or Extension in test mode.

```toml
[[test]]
args = [
    "--/app/window/title=My Test App",
]
```

We will cover this in greater detail later.

Note:

- Reference: Testing Extensions with Python.
- Reference: `.kit` and `.toml` configurations.
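The `--/app/window/title=My Test App` argument above follows Kit's convention of addressing a setting by its slash-separated path. As an illustration of that mechanism only (not Kit's actual parser), here is how such a command-line override maps onto a nested settings structure:

```python
# Hypothetical mini-parser for "--/path/to/setting=value" style overrides,
# illustrating how a CLI argument overrides a nested .kit setting.
def apply_override(settings, arg):
    path, _, value = arg.removeprefix("--/").partition("=")
    keys = path.split("/")
    node = settings
    for key in keys[:-1]:
        node = node.setdefault(key, {})
    node[keys[-1]] = value  # the override wins over the .kit default
    return settings

settings = {"app": {"window": {"title": "My App"}}}  # from the [settings] section
apply_override(settings, "--/app/window/title=My Test App")
print(settings["app"]["window"]["title"])  # My Test App
```

This is why the `[[test]]` args can change the window title without editing the `[settings]` section: the command-line value simply replaces the configured one at the same path.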
## Extension Manager

The Extension Manager window is a tool for developers to explore Extensions created on the Omniverse platform. It lists Extensions created by NVIDIA and the Omniverse community, and can be configured to list Extensions that exist on a local workstation. Let’s add the Extension Manager to the app so we can look for dependencies to add.

1. Add Extension Manager.
   - Open `.\source\apps\my_company.my_app.kit`.
   - Add the dependency `omni.kit.window.extensions`. The dependencies section should read:

   ```toml
   [dependencies]
   "omni.kit.uiapp" = {}
   "omni.kit.window.extensions" = {}
   ```

   - In order to point the Extension Manager to the right Extension Registry we need to add the following settings:

   ```toml
   # Extension Registries
   [settings.exts."omni.kit.registry.nucleus"]
   registries = [
       { name = "kit/default", url = "https://ovextensionsprod.blob.core.windows.net/exts/kit/prod/shared" },
       { name = "kit/sdk", url = "https://ovextensionsprod.blob.core.windows.net/exts/kit/prod/sdk/${kit_version_short}/${kit_git_hash}" },
   ]
   ```

   - Observe that - once you save the source kit file - the corresponding kit file in the build directory was updated as well. This is due to the use of symlinks. A build is not necessary when editing `.kit` files. See:
     - Windows: `.\_build\windows-x86_64\release\apps\my_company.my_app.kit`
     - Linux: `./_build/linux-x86_64/release/apps/my_company.my_app.kit`
2. Explore Extensions in Extension Manager.
   - Start the app:
     - Windows: `.\_build\windows-x86_64\release\my_company.my_app.bat`
     - Linux: `./_build/linux-x86_64/release/my_company.my_app.sh`
   - Open Extension Manager: Window > Extensions.
   - Please allow Extension Manager to sync with the Extension Registry. The listing might not load instantly.
   - Search for `graph editor example`. The Extension Manager should list `omni.kit.graph.editor.example` in the NVIDIA tab.
   - Click `INSTALL`.
   - Click the toggle `DISABLED` to enable the Extension.
   - Check `AUTOLOAD`.
   - Close the app and start again.
- Observe that the *Graph Editor Example* Extension is enabled. Look at the `[dependencies]` section in `.\source\apps\my_company.my_app.kit`: the `omni.kit.graph.editor.example` Extension is not listed. The point here is to make it clear that when an Extension is enabled by a user in the Extension Manager, the dependency is **NOT** added to the Application `.kit` file.

To also list Extensions created by the Omniverse community, add the `kit/community` registry to the `registries` setting:

```toml
registries = [
    { name = "kit/default", url = "https://ovextensionsprod.blob.core.windows.net/exts/kit/prod/shared" },
    { name = "kit/sdk", url = "https://ovextensionsprod.blob.core.windows.net/exts/kit/prod/sdk/${kit_version_short}/${kit_git_hash}" },
    { name = "kit/community", url = "https://dw290v42wisod.cloudfront.net/exts/kit/community" },
]
```

Restart the app and allow the Extension Manager to sync with the Extension Registry. The listing might not load instantly. You can now experiment by adding community Extensions such as `"heavyai.ui.component" = {}` to the `[dependencies]` section.

### Add Extensions

Let’s assume we found a few Extensions we want to use. Add the below to the `[dependencies]` section:

```toml
[dependencies]
"omni.kit.uiapp" = {}

# Viewport
"omni.kit.viewport.bundle" = {}

# Render Settings
"omni.rtx.window.settings" = {}

# Content Browser
"omni.kit.window.content_browser" = {}

# Stage Inspector
"omni.kit.window.stage" = {}

# Layer Inspector
"omni.kit.widget.layers" = {}

# Toolbar. Setting load order so that it loads last.
"omni.kit.window.toolbar" = { order = 1000 }

# Properties Inspector
"omni.kit.property.bundle" = {}

# DX shader caches (windows only)
[dependencies."filter:platform"."windows-x86_64"]
"omni.rtx.shadercache.d3d12" = {}
```

Add this setting:

```toml
app.content.emptyStageOnStart = true
```

Run the app again.

### Application Layout

The Application window layout is fairly organized already, but let’s take care of the floating Content Browser by docking it below the viewport window.

#### Add a Resource Extension

Extensions do not need to provide code. We use so-called “resource Extensions” to provide assets, data, and anything else that can be considered a resource.
In this example we create one to provide a layout file.

1. Create a new Extension using the `repo template new` command (see the command cheat-sheet).
   1. For `What do you want to add` choose `extension`.
   2. For `Choose a template` choose `python-extension-simple`.
   3. Enter a new name: `my_company.my_app.resources`. Do not use the default name.
   4. Leave the version as `0.1.0`.
2. The new Extension is created in `.\source\extensions\my_company.my_app.resources`.
3. Add a `layouts` directory inside `my_company.my_app.resources`. We’ll be adding a resource file here momentarily.
4. Configure the build to pick up the `layouts` directory by adding `{ "layouts", ext.target_dir.."/layouts" },` in the Extension’s `.\my_company.my_app.resources\premake5.lua` file:

```lua
-- Use folder name to build Extension name and tag. Version is specified explicitly.
local ext = get_current_extension_info()

-- That will also link whole current "target" folder into as extension target folder:
project_ext(ext)
repo_build.prebuild_link {
    { "data", ext.target_dir.."/data" },
    { "docs", ext.target_dir.."/docs" },
    { "layouts", ext.target_dir.."/layouts" },
    { "my_company", ext.target_dir.."/my_company" },
}
```

## Configure App to Recognize Extensions

By default, Extensions that are part of the Kit SDK are recognized by Applications. When we add Extensions like the one above, we need to add their paths to the Application’s `.kit` file. The below adds the search paths for these additional Extensions. Note the use of `${app}` as a token. This will be replaced with the path to the app at runtime.

Add this to `my_company.my_app.kit`:

```toml
[settings.app.exts]
# Add additional search paths for dependencies.
folders.'++' = [
    "${app}/../exts",
    "${app}/../extscache/"
]
```

**Note**
Reference: [Tokens](https://docs.omniverse.nvidia.com/kit/docs/kit-manual/latest/guide/tokens.html)

## Configure App to Provide Layout Capabilities

Add these Extensions to the `my_company.my_app.kit` `[dependencies]` section.
`omni.app.setup` provides layout capabilities.

```toml
# Layout capabilities
"omni.app.setup" = {}

# App resources
"my_company.my_app.resources" = {}
```

## Create a Layout File

1. Run a build to propagate the new Extension to the built solution and start the app.
2. Drag and drop the `Content Browser` on top of the lower docking manipulator within the `Viewport` window.
3. Save the layout:
   - Use the menu `Window` > `Layout` > `Save Layout...` command.
   - Save the layout as `.\source\extensions\my_company.my_app.resources\layouts\layout.json`.

## Use Layout

1. Add this to the `my_company.my_app.kit` file’s `[settings]` section. Again, here we are using a token: `${my_company.my_app.resources}`. That token is replaced with the path to the Extension at runtime:

   ```toml
   app.kit.editor.setup = true
   app.layout.default = "${my_company.my_app.resources}/layouts/layout.json"
   ```

2. Run a build so the `layouts` directory with its `layout.json` file is created in the `_build` directory structure.
3. Run the Application again and see the `Content Browser` docked.

A developer can provide end users with different layouts - or `workflows`. This topic can be further explored in the omni.app.setup reference.

You now have an Application and could skip ahead to the [Package App](#) and [Publish App](#) sections; however, this tutorial now continues with a more advanced example: [Develop a USD Explorer App](#).
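The `${app}` and `${my_company.my_app.resources}` tokens used throughout this tutorial can be illustrated outside of Kit with a small substitution helper. This is a sketch of the idea only - Kit performs its own token resolution - and the paths below are hypothetical:

```python
import re

def expand_tokens(text: str, tokens: dict) -> str:
    """Replace ${name} markers with values from `tokens`, leaving unknown markers intact."""
    return re.sub(r"\$\{([^}]+)\}", lambda m: tokens.get(m.group(1), m.group(0)), text)

# Hypothetical runtime values.
tokens = {"my_company.my_app.resources": "/opt/my_app/exts/my_company.my_app.resources"}

layout = expand_tokens("${my_company.my_app.resources}/layouts/layout.json", tokens)
print(layout)  # -> /opt/my_app/exts/my_company.my_app.resources/layouts/layout.json

# Unknown tokens are left as-is rather than raising an error.
print(expand_tokens("${app}/../exts", {}))  # -> ${app}/../exts
```

Leaving unknown markers untouched (instead of failing) mirrors the practical need to write `.kit` files whose tokens are only resolvable once the app is running.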
export-handler_Overview.md
# Overview — Omniverse Kit 1.0.30 documentation

## Overview

The file_exporter extension provides a standardized dialog for exporting files. It is a wrapper around the `FilePickerDialog`, but with reasonable defaults for common settings, so it’s a higher-level entry point to that interface. Users still have the ability to customize some parts, but we’ve boiled them down to just the essential ones.

Why you should use this extension:

- Present a consistent file export experience across the app.
- Customize only the essential parts while inheriting sensible defaults elsewhere.
- Reduce boilerplate code.
- Inherit future improvements.
- Checkpoints fully supported if available on the server.

## Quickstart

You can pop up a dialog in just 2 steps. First, retrieve the extension.

```python
# Get the singleton extension object, but as weakref to guard against the extension being removed.
file_exporter = get_file_exporter()
if not file_exporter:
    return
```

Then, invoke its show_window method.

```python
file_exporter.show_window(
    title="Export As ...",
    export_button_label="Save",
    export_handler=self.export_handler,
    filename_url="omniverse://ov-rc/NVIDIA/Samples/Marbles/foo",
    show_only_folders=True,
    enable_filename_input=False,
)
```

Note that the extension is a singleton, meaning there’s only one instance of it throughout the app. Basically, we are assuming that you’d never open more than one instance of the dialog at any one time. The advantage is that we can channel any development through this single extension and all users will inherit the same changes.

## Customizing the Dialog

You can customize these parts of the dialog.

- Title
  - The title of the dialog.
- Collections
  - Which of these collections, ["bookmarks", "omniverse", "my-computer"], to display.
- Filename Url
  - Url to open the dialog with.
- Postfix options
  - List of content labels appended to the filename.
- Extension options
  - List of filename extensions.
- Export options
  - Options to apply during the export process.
- Export label
  - Label for the export button.
- Export handler
  - User provided callback to handle the export process.

Note that these settings are applied when you show the window. Therefore, each time it’s displayed, the dialog can be tailored.

## Filename postfix options

Users might want to set up data libraries of just animations, materials, etc. However, one challenge of working in Omniverse is that everything is a USD file. To facilitate this workflow, we suggest adding a postfix to the filename, e.g. “file.animation.usd”. The file bar contains a dropdown that lists the postfix labels. A default list is provided but you can also provide your own.

```python
DEFAULT_FILE_POSTFIX_OPTIONS = [
    None,
    "anim",
    "cache",
    "curveanim",
    "geo",
    "material",
    "project",
    "seq",
    "skel",
    "skelanim",
]
```

A list of file extensions, furthermore, allows the user to specify what flavor of USD to export.

```python
DEFAULT_FILE_EXTENSION_TYPES = [
    ("*.usd", "Can be Binary or Ascii"),
    ("*.usda", "Human-readable text format"),
    ("*.usdc", "Binary format"),
]
```

When the user selects a combination of postfix and extension types, the file view will filter out all other file types, leaving only the matching ones.

## Export options

A common need is to provide user options for the export process. You create the widget for accepting those inputs, then add it to the details pane of the dialog. Do this by subclassing from `ExportOptionsDelegate` and overriding the methods `ExportOptionsDelegate._build_ui_impl` and (optionally) `ExportOptionsDelegate._destroy_impl`.
```python
class MyExportOptionsDelegate(ExportOptionsDelegate):
    def __init__(self):
        super().__init__(build_fn=self._build_ui_impl, destroy_fn=self._destroy_impl)
        self._widget = None

    def _build_ui_impl(self):
        self._widget = ui.Frame()
        with self._widget:
            with ui.VStack(style={"background_color": 0xFF23211F}):
                ui.Label("Checkpoint Description", alignment=ui.Alignment.CENTER)
                ui.Separator(height=5)
                model = ui.StringField(multiline=True, height=80).model
                model.set_value("This is my new checkpoint.")

    def _destroy_impl(self, _):
        if self._widget:
            self._widget.destroy()
        self._widget = None
```

Then provide the controller to the file picker for display.

```python
self._export_options = MyExportOptionsDelegate()
file_exporter.add_export_options_frame("Export Options", self._export_options)
```

## Export handler

Provide a handler for when the Export button is clicked. In addition to `filename` and `dirname`, the handler should expect a list of `selections` made from the UI.

```python
def export_handler(self, filename: str, dirname: str, extension: str = "", selections: List[str] = []):
    # NOTE: Get user inputs from self._export_options, if needed.
    print(f"> Export As '{filename}{extension}' to '{dirname}' with additional selections '{selections}'")
```

## Demo app

A complete demo, that includes the code snippets above, is included with this extension.
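The postfix-plus-extension filtering described above can be sketched in plain Python. This is an illustrative stand-in for the dialog’s internal behavior, not the extension’s actual API; the helper name and the `fnmatch`-based matching are assumptions:

```python
import fnmatch
from typing import Optional

def matches_filter(filename: str, postfix: Optional[str], extension_pattern: str) -> bool:
    """Return True if `filename` matches the selected postfix and extension pattern."""
    if not fnmatch.fnmatch(filename, extension_pattern):
        return False
    if postfix is None:  # No postfix selected: accept any file with a matching extension.
        return True
    stem = filename.rsplit(".", 1)[0]   # "file.anim.usd" -> "file.anim"
    return stem.endswith("." + postfix)

files = ["file.anim.usd", "file.material.usd", "file.anim.usda", "notes.txt"]
print([f for f in files if matches_filter(f, "anim", "*.usd")])  # -> ['file.anim.usd']
```

Note that `"file.anim.usda"` is rejected here because the `*.usd` pattern matches the extension exactly, which is the behavior a user would expect after picking a specific USD flavor.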
ext-omni-graph-action_Overview.md
# Overview

## Extension : omni.graph.action-1.101.1

## Documentation Generated : Apr 26, 2024

## Changelog

This extension is a bundle of **omni.graph.action_core** and **omni.graph.action_nodes**, which together provide the core functionality of OmniGraph Action Graph.
ext-omni-graph-template-cpp_Overview.md
# Overview

This is the gold standard template for creating a Kit extension that contains only C++ OmniGraph nodes.

## The Files

To use this template first copy the entire directory into a location that is visible to your build, such as `source/extensions`. The copy will have this directory structure. The highlighted lines should be renamed to match your extension, or removed if you do not want to use them.

```text
omni.graph.template.cpp/
    config/
        extension.toml
    data/
        icon.svg
        preview.png
    docs/
        CHANGELOG.md
        directory.txt
        Overview.md
        README.md
    nodes/
        OgnTemplateNodeCpp.cpp
        OgnTemplateNodeCpp.ogn
    plugins/
        Module.cpp
    premake5.lua
```

## The Build File

Kit normally uses premake for building so this example shows how to use the template `premake5.lua` file to customize your build. By default the build file is set up to correspond to the directory structure shown above. By using this standard layout the utility functions can do most of the work for you.

```lua
-- --------------------------------------------------------------------------------------------------------------------
-- Build file for the build tools used by the OmniGraph C++ extension. These are tools required in order to
-- run the build on that extension, and all extensions dependent on it.
-- --------------------------------------------------------------------------------------------------------------------
-- This sets up a shared extension configuration, used by most Kit extensions.
local ext = get_current_extension_info()

-- --------------------------------------------------------------------------------------------------------------------
-- Set up a variable containing standard configuration information for projects containing OGN files
local ogn = get_ogn_project_information(ext, "omni/graph/template/cpp")

-- --------------------------------------------------------------------------------------------------------------------
-- Put this project into the "omnigraph" IDE group
ext.group = "omnigraph"

-- --------------------------------------------------------------------------------------------------------------------
-- Set up the basic shared project information first
project_ext( ext )

-- --------------------------------------------------------------------------------------------------------------------
-- Define a build project to process the ogn files to create the generated code that will be used by the node
-- implementations. The (optional) "toc" value points to the directory where the table of contents with the OmniGraph
-- nodes in this extension will be generated. Omit it if you will be generating your own table of contents.
project_ext_ogn( ext, ogn, { toc="docs/Overview.md" } )

-- --------------------------------------------------------------------------------------------------------------------
-- The main plugin project is what implements the nodes and extension interface
project_ext_plugin( ext, ogn.plugin_project )

-- These lines add the files in the project to the IDE where the first argument is the group and the second
-- is the set of files in the source tree that are populated into that group.
add_files("impl", ogn.plugin_path)
add_files("nodes", ogn.nodes_path)
add_files("config", "config")
add_files("docs", ogn.docs_path)
add_files("data", "data")

-- Add the standard dependencies all OGN projects have. The second parameter is normally omitted for C++ nodes
-- as hot reload of C++ definitions is not yet supported.
add_ogn_dependencies(ogn)

-- Link the directories required to make the extension definition complete
repo_build.prebuild_link {
    { "docs", ext.target_dir.."/docs" },
    { "data", ext.target_dir.."/data" },
}

-- This optional line adds support for CUDA (.cu) files in your project. Only include it if you are building nodes
-- that will run on the GPU and implement CUDA code to do so. Your deps/ directory should contain a file with a
-- cuda dependency that looks like the following to access the cuda library:
--     <dependency name="cuda" linkPath="../_build/target-deps/cuda">
--         <package name="cuda" version="11.8.0_520.61-d8963068-${platform}" platforms="linux-x86_64"/>
--         <package name="cuda" version="11.8.0_520.61-abe3d9d7-${platform}" platforms="linux-aarch64"/>
--         <package name="cuda" version="11.8.0_522.06-abe3d9d7-${platform}" platforms="windows-x86_64"/>
--     </dependency>
-- add_cuda_build_support()

-- --------------------------------------------------------------------------------------------------------------------
-- With the above copy/link operations this is what the source and build trees will look like
--
--     SOURCE                       BUILD
--     omni.graph.template.cpp/     omni.graph.template.cpp/
--         config/                      config@ -> SOURCE/config
--         data/                        data@ -> SOURCE/data
--         docs/                        docs@ -> SOURCE/docs
--         nodes/                       ogn/ (generated by build)
--         plugins/
```

Normally your nodes will have tests automatically generated for them, which will be in Python even though the nodes are in C++. By convention the installed Python files are structured in a directory tree that matches a namespace corresponding to the extension name, in this case `omni/graph/template/cpp/`, which corresponds to the extension name *omni.graph.template.cpp*.
You’ll want to modify this to match your own extension’s name. Changing the first highlighted line is all you have to do to make that happen.

### The Configuration

Every extension requires a `config/extension.toml` file with metadata describing the extension to the extension management system. Below is the annotated version of this file, where the highlighted lines are the ones you should change to match your own extension.

```toml
# Main extension description values
[package]
# The current extension version number - uses [Semantic Versioning](https://semver.org/spec/v2.0.0.html)
version = "2.3.1"
# The title of the extension that will appear in the extension window
title = "OmniGraph C++ Template"
# Longer description of the extension
description = "Templates for setting up an extension containing only C++ OmniGraph nodes."
# Authors/owners of the extension - usually an email by convention
authors = ["NVIDIA <no-reply@nvidia.com>"]
# Category under which the extension will be organized
category = "Graph"
# Location of the main README file describing the extension for extension developers
readme = "docs/README.md"
# Location of the main CHANGELOG file describing the modifications made to the extension during development
changelog = "docs/CHANGELOG.md"
# Location of the repository in which the extension's source can be found
repository = "kit-omnigraph"
# Keywords to help identify the extension when searching
keywords = ["kit", "omnigraph", "nodes", "cpp", "c++"]
# Image that shows up in the preview pane of the extension window
preview_image = "data/preview.png"
# Image that shows up in the navigation pane of the extension window - can be a .png, .jpg, or .svg
icon = "data/icon.svg"
# Specifying this ensures that the extension is always published for the matching version of the Kit SDK
writeTarget.kit = true
# Specify the minimum level for support
support_level = "Enterprise"

# Other extensions that need to load in order for this one to work
[dependencies]
"omni.graph.core" = {}   # For basic functionality
"omni.graph.tools" = {}  # For node generation

# This extension has a compiled C++ project and so requires this declaration that it exists
[[native.plugin]]
path = "bin/*.plugin"
recursive = false

# Main pages published as part of documentation. (Only if you build and publish your documentation.)
[documentation]
pages = [
    "docs/Overview.md",
    "docs/CHANGELOG.md",
]
```

Contained in this file are references to the icon file in `data/icon.svg` and the preview image in `data/preview.png`, which control how your extension appears in the extension manager. You will want to customize those.

## The Plugin Module

Every C++ extension requires some standard code setup to register and deregister the node types at the proper time. The minimum requirements for the Carbonite wrappers that implement this are contained in the file `plugins/Module.cpp`.

```cpp
// Copyright (c) 2023-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
// ==============================================================================================================
//
// This file contains mostly boilerplate code required to register the extension and the nodes in it.
//
// See the full documentation for OmniGraph Native Interfaces online at
// https://docs.omniverse.nvidia.com/kit/docs/carbonite/latest/docs/OmniverseNativeInterfaces.html
//
// ==============================================================================================================

#include <omni/core/ModuleInfo.h>
#include <omni/core/Omni.h>
#include <omni/graph/core/ogn/Registration.h>

// These are the most common interfaces that will be used by nodes. Others that are used within the extension but
// not registered here will issue a warning and can be added.
OMNI_PLUGIN_IMPL_DEPS(omni::graph::core::IGraphRegistry, omni::fabric::IToken)

OMNI_MODULE_GLOBALS("omni.graph.template.cpp.plugin", "OmniGraph Template With C++ Nodes");

// This declaration is required in order for registration of C++ OmniGraph nodes to work
DECLARE_OGN_NODES();

namespace
{

void onStarted()
{
    // Macro required to register all of the C++ OmniGraph nodes in the extension
    INITIALIZE_OGN_NODES();
}

bool onCanUnload()
{
    return true;
}

void onUnload()
{
    // Macro required to deregister all of the C++ OmniGraph nodes in the extension
    RELEASE_OGN_NODES();
}

} // end of anonymous namespace

// Hook up the above functions to the module to be called at the right times
OMNI_MODULE_API omni::Result omniModuleGetExports(omni::ModuleExports* exports)
{
    OMNI_MODULE_SET_EXPORTS(exports);
    OMNI_MODULE_ON_MODULE_STARTED(exports, onStarted);
    OMNI_MODULE_ON_MODULE_CAN_UNLOAD(exports, onCanUnload);
    OMNI_MODULE_ON_MODULE_UNLOAD(exports, onUnload);
    OMNI_MODULE_GET_MODULE_DEPENDENCIES(exports, omniGetDependencies);
    return omni::core::kResultSuccess;
}
```

The first highlighted line shows where you customize the extension plugin name to match your own. The others indicate standard macros that set up the OmniGraph node type registration and deregistration process.
Without these lines your node types will not be known to OmniGraph and will not be available in any of the editors.

## Documentation

Everything in the `docs/` subdirectory is considered documentation for the extension.

- **README.md** The contents of this file appear in the extension manager window so you will want to customize it. The location of this file is configured in the `extension.toml` file as the **readme** value.
- **CHANGELOG.md** It is good practice to keep track of changes to your extension so that users know what is available. The location of this file is configured in the `extension.toml` file as the **changelog** value, and as an entry in the `[documentation]` pages.
- **Overview.md** This contains the main documentation page for the extension. It can stand alone or reference an arbitrarily complex set of files, images, and videos that document use of the extension. The **toctree** reference at the bottom of the file contains at least `GeneratedNodeDocumentation/`, which creates links to all of the documentation that is automatically generated for your nodes. The location of this file is configured in the `extension.toml` file in the `[documentation]` pages section.
- **directory.txt** This file can be deleted as it is specific to these instructions.

## The Node Type Definitions

You define a new node type using two files, examples of which are in the `nodes/` subdirectory. Tailor the definition of your node types for your computations. Start with the OmniGraph User Guide for information on how to configure your own definitions.

That’s all there is to creating a simple C++ node type! You can now open your app, enable the new extension, and your sample node type will be available to use within OmniGraph.

## OmniGraph Nodes In This Extension

- C++ Template Node
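For orientation, a `.ogn` file is a JSON description of a node type’s interface. The fragment below is a hand-written illustration of the general shape (node name, version, description, and typed input/output attributes) - it is not the contents of `OgnTemplateNodeCpp.ogn`, and the node and attribute names are hypothetical; consult the OmniGraph User Guide for the authoritative schema:

```json
{
    "OgnMyExampleNode": {
        "version": 1,
        "description": "Hypothetical node that doubles its input",
        "inputs": {
            "value": { "type": "float", "description": "Value to double", "default": 0.0 }
        },
        "outputs": {
            "result": { "type": "float", "description": "Twice the input value" }
        }
    }
}
```

The build’s code generator reads this description and emits the database interface that the paired implementation file (C++ here, Python in the Python template) uses to access the attributes.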
ext-omni-graph-template-python_Overview.md
# Overview

## Extension : omni.graph.template.python-1.3.1

## Documentation Generated : Apr 26, 2024

## Changelog

This is the gold standard template for creating a Kit extension that contains only Python OmniGraph nodes.

## The Files

To use this template first copy the entire directory into a location that is visible to your build, such as `source/extensions`. The copy will have this directory structure. The highlighted lines should be renamed to match your extension, or removed if you do not want to use them.

```text
omni.graph.template.python/
    config/
        extension.toml
    data/
        icon.svg
        preview.png
    docs/
        CHANGELOG.md
        directory.txt
        Overview.md
        README.md
    premake5.lua
    python/
        __init__.py
        _impl/
            __init__.py
            extension.py
        nodes/
            OgnTemplateNodePy.ogn
            OgnTemplateNodePy.py
        tests/
            __init__.py
            test_api.py
            test_omni_graph_template_python.py
```

The convention of having implementation details of a module in the `_impl/` subdirectory is to make it clear to the user that they should not be directly accessing anything in that directory, only what is exposed in the `__init__.py`.

## The Build File

Kit normally uses premake for building so this example shows how to use the template `premake5.lua` file to customize your build. By default the build file is set up to correspond to the directory structure shown above. By using this standard layout the utility functions can do most of the work for you.

```lua
-- --------------------------------------------------------------------------------------------------------------------
-- Build file for the build tools used by the OmniGraph Python template extension. These are tools required in order to
-- run the build on that extension, and all extensions dependent on it.
-- --------------------------------------------------------------------------------------------------------------------
-- This sets up a shared extension configuration, used by most Kit extensions.
local ext = get_current_extension_info()

-- --------------------------------------------------------------------------------------------------------------------
-- Set up a variable containing standard configuration information for projects containing OGN files.
-- The string corresponds to the Python module name, in this case omni.graph.template.python.
local ogn = get_ogn_project_information(ext, "omni/graph/template/python")

-- --------------------------------------------------------------------------------------------------------------------
-- Put this project into the "omnigraph" IDE group. You might choose a different name for convenience.
ext.group = "omnigraph"

-- --------------------------------------------------------------------------------------------------------------------
-- Define a build project to process the ogn files to create the generated code that will be used by the node
-- implementations. The (optional) "toc" value points to the directory where the table of contents with the OmniGraph
-- nodes in this extension will be generated. Omit it if you will be generating your own table of contents.
project_ext_ogn( ext, ogn, { toc="docs/Overview.md" })

-- --------------------------------------------------------------------------------------------------------------------
-- Build project responsible for generating the Python nodes and installing them and any scripts into the build tree.
project_ext( ext, { generate_ext_project=true })

-- These lines add the files in the project to the IDE where the first argument is the group and the second
-- is the set of files in the source tree that are populated into that group.
add_files("python", "*.py")
add_files("python/_impl", "python/_impl/**.py")
add_files("python/nodes", "python/nodes")
add_files("python/tests", "python/tests")
add_files("docs", "docs")
add_files("data", "data")

-- Add the standard dependencies all OGN projects have. The second parameter is a table of all directories
-- containing Python nodes. Here there is only one.
add_ogn_dependencies(ogn, { "python/nodes" })

-- Copy the init script directly into the build tree. This is required because the build will create an ogn/
-- subdirectory in the Python module so only the subdirectories can be linked.
repo_build.prebuild_copy {
    { "python/__init__.py", ogn.python_target_path },
}

-- Linking directories allows them to hot reload when files are modified in the source tree.
-- Docs are linked to get the README into the extension window.
-- Data contains the images used by the extension configuration preview.
-- The "nodes/" directory does not have to be mentioned here as it will be handled by add_ogn_dependencies() above.
repo_build.prebuild_link {
    { "docs", ext.target_dir.."/docs" },
    { "data", ext.target_dir.."/data" },
    { "python/tests", ogn.python_tests_target_path },
    { "python/_impl", ogn.python_target_path.."/_impl" },
}

-- With the above copy/link operations this is what the source and build trees will look like
--
--     SOURCE                          BUILD
--     omni.graph.template.python/     omni.graph.template.python/
--         config/                         config@ -> SOURCE/config
--         data/                           data@ -> SOURCE/data
--         docs/                           docs@ -> SOURCE/docs
--         python/                         ogn/ (generated by the build)
--             __init__.py                 omni/
--             _impl/                          graph/
--             nodes/                              template/
--                                                     python/
--                                                         __init__.py (copied from SOURCE/python)
--                                                         _impl@ -> SOURCE/python/_impl
--                                                         nodes@ -> SOURCE/python/nodes
--                                                         tests@ -> SOURCE/python/tests
--                                                         ogn/ (Generated by the build)
```

By convention the installed Python files are structured in a directory tree that matches a namespace corresponding to the extension name, in this case `omni/graph/template/python/`, which corresponds to the extension name **omni.graph.template.python**. You’ll want to modify this to match your own extension’s name. Changing the first highlighted line is all you have to do to make that happen.

## The Configuration

Every extension requires a `config/extension.toml` file:

```toml
# Main extension description values
[package]
# The current extension version number - uses [Semantic Versioning](https://semver.org/spec/v2.0.0.html)
version = "1.3.1"
# The title of the extension that will appear in the extension window
title = "OmniGraph Python Template"
# Longer description of the extension
description = "Templates for setting up an extension containing OmniGraph Python nodes only (no C++)."
# Authors/owners of the extension - usually an email by convention
authors = ["NVIDIA <no-reply@nvidia.com>"]
# Category under which the extension will be organized
category = "Graph"
# Location of the main README file describing the extension for extension developers
readme = "docs/README.md"
# Location of the main CHANGELOG file describing the modifications made to the extension during development
changelog = "docs/CHANGELOG.md"
# Location of the repository in which the extension's source can be found
repository = "https://gitlab-master.nvidia.com/omniverse/kit-extensions/kit-omnigraph"
# Keywords to help identify the extension when searching
keywords = ["kit", "omnigraph", "nodes", "python"]
# Image that shows up in the preview pane of the extension window
preview_image = "data/preview.png"
# Image that shows up in the navigation pane of the extension window - can be a .png, .jpg, or .svg
icon = "data/icon.svg"
# Specifying this ensures that the extension is always published for the matching version of the Kit SDK
writeTarget.kit = true
# Specify the minimum level for support
support_level = "Enterprise"

# Main module for the Python interface. This is how the module will be imported.
[[python.module]]
name = "omni.graph.template.python"

# Watch the .ogn files for hot reloading. Only useful during development as after delivery files cannot be changed.
[fswatcher.patterns]
include = ["*.ogn", "*.py"]
exclude = ["Ogn*Database.py"]

# Other extensions that need to load in order for this one to work
[dependencies]
"omni.graph" = {}        # For basic functionality
"omni.graph.tools" = {}  # For node generation

# Main pages published as part of documentation. (Only if you build and publish your documentation.)
[documentation]
pages = [
    "docs/Overview.md",
    "docs/CHANGELOG.md",
]

# Some extensions are only needed when writing tests, including those automatically generated from a .ogn file.
# Having special test-only dependencies lets you avoid introducing a dependency on the test environment when only
# using the functionality.
[[test]]
dependencies = [
    "omni.kit.test"  # Brings in the Kit testing framework
]
```

This file contains metadata describing the extension to the extension management system. The highlighted lines are the ones you should change to match your own extension.

## Documentation

Everything in the `docs/` subdirectory is considered documentation for the extension.

- **README.md** The contents of this file appear in the extension manager window so you will want to customize it. The location of this file is configured in the `extension.toml` file as the **readme** value.
- **CHANGELOG.md** It is good practice to keep track of changes to your extension so that users know what is available. The location of this file is configured in the `extension.toml` file as the **changelog** value, and as an entry in the `[documentation]` pages.
- **Overview.md** This contains the main documentation page for the extension. It can stand alone or reference an arbitrarily complex set of files, images, and videos that document use of the extension. The **toctree** reference at the bottom of the file contains at least `GeneratedNodeDocumentation/`, which creates links to all of the documentation that is automatically generated for your nodes. The location of this file is configured in the `extension.toml` file in the `[documentation]` pages section.
- **directory.txt** This file can be deleted as it is specific to these instructions.

## The Node Type Definitions

You define a new node type using two files, examples of which are in the `nodes/` subdirectory. Tailor the definition of your node types for your computations. Start with the OmniGraph User Guide for information on how to configure your own definitions.
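As a rough illustration of the Python half of such a definition: a node’s `compute` receives a generated database object whose `inputs` and `outputs` attributes mirror the `.ogn` declarations. The class and attribute names below are hypothetical, and the stand-in `db` object only mimics the generated interface for demonstration outside of Kit:

```python
from types import SimpleNamespace

class OgnMyExampleNode:
    """Hypothetical node implementation: doubles the input value."""

    @staticmethod
    def compute(db) -> bool:
        # Attribute access mirrors the names declared in the .ogn file.
        db.outputs.result = db.inputs.value * 2.0
        return True  # True indicates the compute succeeded

# Exercise the compute with a stand-in database object.
db = SimpleNamespace(inputs=SimpleNamespace(value=3.0),
                     outputs=SimpleNamespace(result=None))
OgnMyExampleNode.compute(db)
print(db.outputs.result)  # -> 6.0
```

In a real extension the database class is generated by the build from the `.ogn` file, so you never construct it yourself; this sketch only shows the shape of the contract between the two files.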
# Tests

While completely optional, it's always a good idea to add a few tests for your node to ensure that it works as you intend and continues to work when you make changes to it. Automated tests will be generated for each of your node type definitions to exercise basic functionality. What you want to write here are more complex tests that use your node types in more complex graphs. The sample tests in the `tests/` subdirectory show how you can integrate with the Kit testing framework to easily run tests on nodes built from your node type definition.

That's all there is to creating a simple Python node type! You can now open your app, enable the new extension, and your sample node type will be available to use within OmniGraph.

## OmniGraph Nodes In This Extension

- Python Template Node
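Inside Kit, such tests usually subclass `omni.kit.test.AsyncTestCase`. Since that framework is only available inside a running Kit instance, here is a standalone sketch of the same async-test shape using Python's built-in `unittest` (the class name, setup, and assertions are hypothetical stand-ins for graph evaluation):

```python
import unittest


class TestExampleNode(unittest.IsolatedAsyncioTestCase):
    """Mirrors the shape of an omni.kit.test.AsyncTestCase-based node test."""

    async def asyncSetUp(self):
        # In a real Kit test this would create a graph and add the node under test.
        self.inputs = (1.0, 2.0)

    async def test_compute(self):
        # In a real Kit test this would evaluate the graph and read the output attribute.
        a, b = self.inputs
        self.assertAlmostEqual(a + b, 3.0)
```

The `async` test methods matter because graph evaluation in Kit is awaited across frames; `omni.kit.test` supplies the event loop the same way `IsolatedAsyncioTestCase` does here.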
ext-omni-graph-ui-nodes_Overview.md
# Overview

This extension provides a set of standard ui-related nodes for general use in OmniGraph.

## OmniGraph Nodes In This Extension

- Button (BETA)
- Draw Debug Curve
- Get Active Camera
- Get Camera Position
- Get Camera Target
- Get Viewport Renderer
- Get Viewport Resolution
- Lock Viewport Render
- On New Frame
- On Picked (BETA)
- On Viewport Clicked (BETA)
- On Viewport Dragged (BETA)
- On Viewport Hovered (BETA)
- On Viewport Pressed (BETA)
- On Viewport Scrolled (BETA)
- On Widget Clicked (BETA)
- On Widget Value Changed (BETA)
- Print Text
- Read Mouse State
- Read Pick State (BETA)
- Read Viewport Click State (BETA)
- Read Viewport Drag State (BETA)
- Read Viewport Hover State (BETA)
- Read Viewport Press State (BETA)
- Read Viewport Scroll State (BETA)
- Set Active Camera
- Set Camera Position
- Set Camera Target
- Set Viewport Fullscreen
- Set Viewport Mode (BETA)
- Set Viewport Renderer
- Set Viewport Resolution
- Slider (BETA)
ext-omni-graph-ui_Overview.md
# Overview ## Extension : omni.graph.ui-1.67.1 ## Documentation Generated : Apr 26, 2024 ## Changelog ## Overview This extension provides basic user interface elements for OmniGraph.
ext-omni-graph_Overview.md
# Overview

This extension contains the Python bindings and scripts used by `omni.graph.core`.

## Python API

Automatically generated Python API documentation can be found at `omni.graph.core`.

## Python ABI

ABI bindings are available through these links:

- `omni.graph.core._omni_graph_core.Attribute`
- `omni.graph.core._omni_graph_core.AttributeData`
- `omni.graph.core._omni_graph_core.AttributePortType`
- `omni.graph.core._omni_graph_core.AttributeRole`
- `omni.graph.core._omni_graph_core.AttributeType`
- `omni.graph.core._omni_graph_core.BaseDataType`
- `omni.graph.core._omni_graph_core.BucketId`
- `omni.graph.core._omni_graph_core.ComputeGraph`
- `omni.graph.core._omni_graph_core.ConnectionInfo`
- `omni.graph.core._omni_graph_core.ConnectionType`
- `omni.graph.core._omni_graph_core.ExecutionAttributeState`
- `omni.graph.core._omni_graph_core.ExtendedAttributeType`
- `omni.graph.core._omni_graph_core.FileFormatVersion`
- `omni.graph.core._omni_graph_core.Graph`
- `omni.graph.core._omni_graph_core.GraphBackingType`
- `omni.graph.core._omni_graph_core.GraphContext`
- `omni.graph.core._omni_graph_core.GraphEvaluationMode`
- `omni.graph.core._omni_graph_core.GraphEvent`
- `omni.graph.core._omni_graph_core.GraphPipelineStage`
- `omni.graph.core._omni_graph_core.GraphRegistry`
- `omni.graph.core._omni_graph_core.GraphRegistryEvent`
- `omni.graph.core._omni_graph_core.IBundle2`
- `omni.graph.core._omni_graph_core.IBundleFactory`
- `omni.graph.core._omni_graph_core.IBundleFactory2`
- `omni.graph.core._omni_graph_core.IConstBundle2`
- `omni.graph.core._omni_graph_core.INodeCategories`
- `omni.graph.core._omni_graph_core.ISchedulingHints`
- `omni.graph.core._omni_graph_core.IVariable`
- `omni.graph.core._omni_graph_core.MemoryType`
- `omni.graph.core._omni_graph_core.Node`
- `omni.graph.core._omni_graph_core.NodeEvent`
- `omni.graph.core._omni_graph_core.NodeType`
- `omni.graph.core._omni_graph_core.OmniGraphBindingError`
- `omni.graph.core._omni_graph_core.PtrToPtrKind`
- `omni.graph.core._omni_graph_core.Severity`
- `omni.graph.core._omni_graph_core.Type`
- `omni.graph.core._omni_graph_core.eAccessLocation`
- `omni.graph.core._omni_graph_core.eAccessType`
- `omni.graph.core._omni_graph_core.eComputeRule`
- `omni.graph.core._omni_graph_core.ePurityStatus`
- `omni.graph.core._omni_graph_core.eThreadSafety`
- `omni.graph.core._omni_graph_core.eVariableScope`
- **AutoNode**
- **Data Types**
ExtendingOniInterfaces.md
# Extending an Omniverse Native Interface

## Overview

Once released, an Omniverse Native Interface's ABI may not be changed. This guarantees that any library or plugin that was dependent on a previous version of the interface will always be able to access it, even if newer versions of the interface become available later. The implementation of an interface may change, but the interface's ABI layer itself may not.

A change to an ABI may, for instance, mean adding a new function, changing the prototype of an existing function, or removing an existing function. None of these may occur on a released version of the interface, since that would break released apps that make use of the interface. If additional functionality is needed in an ONI interface, a new version of the interface can still be added. The new interface may either inherit from the previous version(s) or may be entirely standalone if needed. In cases where it is possible, it is always preferable to have the new version of the interface inherit from the previous version.

Note that it is possible to add new enum or flag values to an existing interface's header without breaking the ABI. However, care must still be taken when doing so to ensure that the behavior added by the new flags or enums is both backward and forward compatible and safe. For example, if an older version of the plugin is loaded, it must either fail gracefully or safely ignore any new flag/enum values passed in. Similarly, if a newer version of the plugin is loaded in an app expecting an older version, the expected behavior must still be supported without the new enums or flags being passed in. In general though, adding new flags or enums without also adding a new interface version should be avoided unless it can be absolutely guaranteed to be safe in all cases.

The process for adding a new version of an ONI interface is described below.
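The enum/flag compatibility concern above can be made concrete: an implementation built against an older interface version should mask off and safely ignore flag bits it does not know about, rather than failing. A small illustrative sketch (the flag values and function are hypothetical and not part of ONI):

```python
# Hypothetical flag bits understood by an older implementation.
FLAG_LOG = 0x1
FLAG_VERBOSE = 0x2
KNOWN_FLAGS = FLAG_LOG | FLAG_VERBOSE


def handle_request(flags: int) -> list:
    """Process a request, safely ignoring flag bits added by newer interface versions."""
    actions = []
    if flags & FLAG_LOG:
        actions.append("log")
    if flags & FLAG_VERBOSE:
        actions.append("verbose")
    # Bits outside KNOWN_FLAGS come from a newer caller; ignoring them keeps an
    # older implementation forward compatible instead of crashing or misbehaving.
    return actions
```

The reverse direction also holds: a newer implementation must behave as the older one did when only the old flag values are passed in.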
This document extends the example interfaces used in Creating a New Omniverse Native Interface and assumes that document has already been read. More information on ONI can be found in Omniverse Native Interfaces.

If further examples of extending a real ONI interface are needed, `omni::platforminfo::IOsInfo2_abi` (in `include/omni/platforminfo/IOsInfo2.h`) or `omni::structuredlog::IStructuredLogSettings2_abi` (in `include/omni/structuredlog/IStructuredLogSettings2.h`) may be used for reference. In both these cases, the new interface inherits from the previous one.

## Defining the New Interface Version

The main difference between the previous version of an interface and its new version is that the new version should inherit from the previous ABI interface class instead of `omni::core::IObject`. The new interface version must also be declared in its own new C++ header file, and it must have a different name than the previous version. Adding a version number to the new interface version's name is generally suggested.

It is always suggested that the implementation of the new version(s) of the interface be added to the same C++ plugin as the previous version(s). This reduces code duplication and internal dependencies, and allows all versions of an interface to be present in a single location.

To extend our `ILunch` interface to be able to ask if the user would like salad with lunch, a new version of the interface would be needed. The interface could be added to the C++ header `include/omni/meals/ILunch2.h` as follows:

```cpp
// file 'include/omni/meals/ILunch2.h'
#pragma once

#include "ILunch.h"

namespace omni
{
namespace meals
{

enum class OMNI_ATTR("prefix=e") Dressing
{
    eNone,
    eRaspberryVinaigrette,
    eBalsamicVinaigrette,
    eCaesar,
    eRanch,
    eFrench,
    eRussian,
    eThousandIsland,
};

// we must always forward declare each interface that will be referenced here.
class ILunch2;

// the interface's name must end in '_abi'.
class ILunch2_abi
    : public omni::core::Inherits<omni::meals::ILunch, OMNI_TYPE_ID("omni.meals.lunch2")>
{
protected:
    // all ABI functions must always be 'protected' and must end in '_abi'.
    virtual void addGardenSalad_abi(Dressing dressing) noexcept = 0;
    virtual void addWedgeSalad_abi(Dressing dressing) noexcept = 0;
    virtual void addColeSlaw_abi() noexcept = 0;
};

} // namespace meals
} // namespace omni

// include the generated header and declare the API interface. Note that this must be
// done at the global scope.
#define OMNI_BIND_INCLUDE_INTERFACE_DECL
#include "ILunch2.gen.h"

// this is the API version of the interface that code will call into. Custom members and
// helpers may also be added to this interface API as needed, but this API object may not
// hold any additional data members.
class omni::meals::ILunch2 : public omni::core::Generated<omni::meals::ILunch2_abi>
{
};

#define OMNI_BIND_INCLUDE_INTERFACE_IMPL
#include "ILunch2.gen.h"
```

Once created, this new header also needs to be added to the `omnibind` call in the premake script.
This should be added to the same interface generator project that previous versions used:

```lua
project "omni.meals.interfaces"
    location (workspaceDir.."/%{prj.name}")

    omnibind {
        { file="include/omni/meals/IBreakfast.h",
          api="include/omni/meals/IBreakfast.gen.h",
          py="source/bindings/python/omni.meals/PyIBreakfast.gen.h" },
        { file="include/omni/meals/ILunch.h",
          api="include/omni/meals/ILunch.gen.h",
          py="source/bindings/python/omni.meals/PyILunch.gen.h" },
        { file="include/omni/meals/IDinner.h",
          api="include/omni/meals/IDinner.gen.h",
          py="source/bindings/python/omni.meals/PyIDinner.gen.h" },
        -- new header(s) added here:
        { file="include/omni/meals/ILunch2.h",
          api="include/omni/meals/ILunch2.gen.h",
          py="source/bindings/python/omni.meals/PyILunch2.gen.h" },
    }

    dependson { "omni.core.interfaces" }
```

Building the interface generator project should then result in the new header files being generated.

## The New Interface's Python Bindings

The new interface's python bindings would be added to the python binding project just as they were before. This would simply require including the new generated header and calling the new generated inlined helper function. Note that the above header file will now generate two inlined helper functions in the bindings header. One helper function will add python bindings for the new version of the interface and one will add python bindings for the `Dressing` enum.

### Code Example

```cpp
#include <omni/python/PyBind.h>
#include <omni/meals/IBreakfast.h>
#include <omni/meals/ILunch.h>
#include <omni/meals/ILunch2.h> // <-- include the new API header file.
#include <omni/meals/IDinner.h>

#include "PyIBreakfast.gen.h"
#include "PyILunch.gen.h"
#include "PyILunch2.gen.h" // <-- include the new generated bindings header file.
#include "PyIDinner.gen.h"

OMNI_PYTHON_GLOBALS("omni.meals-pyd", "Python bindings for omni.meals.")

PYBIND11_MODULE(_meals, m)
{
    bindIBreakfast(m);
    bindILunch(m);
    bindIDinner(m);

    // call the new generated inlined helper functions.
    bindILunch2(m);
    bindDressing(m);
}
```

## Implementing the New Interface

In most cases, implementing the new version of the interface is as simple as changing the implementation object from inheriting from the previous version API (ie: `omni::meals::ILunch` in this case) to inheriting from the new version (`omni::meals::ILunch2`) instead, then adding the implementations of the new methods. If the new interface version does not inherit from the previous version, this can still be handled through inheritance in the implementation, but appropriate casting must occur when returning the new version's object from the creator function.

Once the new version's implementation is complete, a new entry needs to be added to the plugin's interface implementation listing object. This object is retrieved by the type factory from the plugin's `onLoad()` function. To add the new interface, the following simple changes would need to be made:

### Code Example

```cpp
omni::core::Result onLoad(const omni::core::InterfaceImplementation** out, uint32_t* outCount)
{
    // clang-format off
    static const char* breakfastInterfaces[] = { "omni.meals.IBreakfast" };
    static const char* lunchInterfaces[] = { "omni.meals.ILunch", "omni.meals.ILunch2" }; // <-- add new interface name.
    static const char* dinnerInterfaces[] = { "omni.meals.IDinner" };
    static omni::core::InterfaceImplementation impls[] =
    {
        {
            "omni.meals.breakfast",
            []() { return static_cast<omni::core::IObject*>(new Breakfast); },
            1, // version
            breakfastInterfaces, CARB_COUNTOF32(breakfastInterfaces)
        },
        {
            "omni.meals.lunch",
            // switch this to create the new interface version's object instead. Callers can then
            // cast between the new and old interface versions as needed.
            // ...
        },
        // ...
    };
    // ...
}
```
```cpp
            []() { return static_cast<omni::core::IObject*>(new Lunch2); },
            1, // version
            lunchInterfaces, CARB_COUNTOF32(lunchInterfaces)
        },
        {
            "omni.meals.dinner",
            []() { return static_cast<omni::core::IObject*>(new Dinner); },
            1, // version
            dinnerInterfaces, CARB_COUNTOF32(dinnerInterfaces)
        }
    };
```

Note that the structure of this interface implementation listing object can differ depending on how the implementation class is structured. For example, if all interfaces in the plugin are implemented internally as a single class, where that class inherits from all of the interfaces it implements, only a single entry would be needed for the listing. This case would look similar to this:

```cpp
omni::core::Result onLoad(const omni::core::InterfaceImplementation** out, uint32_t* outCount)
{
    // clang-format off
    static const char* interfacesImplemented[] = { "omni.meals.ITable", "omni.meals.IWaiter", "omni.meals.IKitchenStaff" };
    static omni::core::InterfaceImplementation impls[] =
    {
        {
            "omni.meals.IRestaurant",
            []() -> omni::core::IObject*
            {
                omni::meals::Restaurant* obj = omni::meals::Restaurant::getInstance();

                // cast to `omni::core::IObject` before return to ensure a good base object is given.
                return static_cast<omni::core::IObject*>(obj->cast(omni::core::IObject::kTypeId));
            },
            1, // version
            interfacesImplemented, CARB_COUNTOF32(interfacesImplemented)
        },
    };
    // clang-format on

    *out = impls;
    *outCount = CARB_COUNTOF32(impls);
    return omni::core::kResultSuccess;
}
```

When the caller receives this object from `omni::core::createType()`, it will then be able to use `omni::core::IObject::cast()` to convert the returned object to the interface it needs, instead of having to explicitly create each interface object provided through the plugin using multiple calls to `omni::core::createType()`. This typically ends up being a better user experience for developers.
extension-architecture_kit_sdk_overview.md
# Kit SDK Overview

Omniverse is a developer platform. It provides Nucleus for collaboration and data storage, and the Connector API provides USD conversion capabilities. The Omniverse developer platform provides the Kit SDK for developing Applications, Extensions, and Services. This tutorial is focused on creating Applications and Extensions on top of Kit SDK.

## Kit Apps & Extensions

The Kit SDK Extension Architecture allows developers to define Extensions and Applications. An Extension is defined by a `.toml` file and most commonly has a set of directories with Python or C++ code. Extensions can also bundle resources such as images. An Application is a single `.kit` file. These modules can state each other as dependencies to combine small capabilities into a greater whole, providing complex solutions.

Throughout this document you will encounter many Extensions and Applications. You will start to think of Extensions as "pieces of capabilities" and of Applications as "the collection of Extensions".

### Extension

- Defined by an `extension.toml` file
- Contains code (Python or C++) and/or resource files.
- Provides a user interface and/or runtime capability.

### App

- Defined by a `.kit` file.
- Combines dependencies into an end user workflow.

## Extension Architecture

At the foundation of Kit SDK, the Kit Kernel provides the ability to bootstrap Applications and execute code. All capability on top of the Kernel is provided by Extensions. Kit SDK contains hundreds of Extensions providing runtime functionality such as USD, rendering, and physics - and other Extensions providing workflow solutions such as USD Stage inspectors, viewport, and content browsers. By combining the Kit SDK Extensions with one or more custom Extensions, new workflow and service based solutions can be created.

The Extension Architecture of Kit has been designed for extreme modularity - enabling rapid development of reusable modules:

- Extensions are lego pieces of functionality.
- One Extension can state any number of other Extensions as dependencies.
- Applications provide a complete solution by combining many Extensions.
- Any Omniverse developer can create more Extensions.

Here's another way to conceptualize the stack of an Application. At the foundation level of an app we have the Kit Kernel. There are runtime Extensions such as USD, RTX, and PhysX. Also behind the scenes, there are framework Extensions that enable interfaces to be created, Extension management, and so on. Finally, we have the Extensions that provide end users with interfaces - such as the Viewport, Content Browser, and Stage inspector.

Applications you create will have the same stack - the only difference is what Extensions the Application makes use of and how they are configured. We will explore the Extensions available in Kit SDK, how to create Applications, and how to get started with Extension development.

# Getting Started

## Introduction

Welcome to the tutorial on getting started with our platform. In this tutorial, we will guide you through the process of setting up your developer environment.

## Prerequisites

Before you begin, ensure you have met the following requirements:

* You have installed the latest version of Python.
* You have installed the latest version of Node.js.
* You have a Windows/Linux/Mac machine.

## Installing Dependencies

To install the necessary dependencies, follow these steps:

1. Install Python:

```bash
sudo apt-get install python3
```

2. Install Node.js:

```bash
sudo apt-get install nodejs
```

## Setting Up the Developer Environment

In this tutorial, let's get the developer environment set up.
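Returning to the Extension Architecture described above: Extensions stating each other as dependencies forms a graph that must be resolved into a load order before an Application can start. The sketch below is purely illustrative (it is not Kit's actual dependency resolver, and the extension names are hypothetical), but it shows the underlying idea:

```python
from graphlib import TopologicalSorter

# Hypothetical app: each extension lists the extensions it depends on.
dependencies = {
    "my.company.app": ["omni.kit.viewport", "my.company.tools"],
    "my.company.tools": ["omni.usd"],
    "omni.kit.viewport": ["omni.usd"],
    "omni.usd": [],
}

# Resolve a startup order in which every extension loads after its dependencies.
order = list(TopologicalSorter(dependencies).static_order())
print(order)
```

Because each Extension only names its direct dependencies, small capabilities compose into a complete Application without any single module needing to know the whole stack.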
extension-config_Overview.md
# Overview — Omniverse Kit 107.0.0 documentation

## Overview

Module to enable usage of `pip install` in the Omniverse Kit environment. It wraps `pip install` calls and reroutes package installation into a user specified environment folder. It also extends the Kit Extension System by enabling extensions to depend on python packages and providing pip archive folders for offline pip install (prebundling).

## Important Notes

**This extension is not recommended for production. Installing packages at runtime relies on network availability, is slow, and is a security risk. Use this extension for prototyping and local development only.**

More information on pip packages in Kit can be found in [Using Python pip Packages](https://docs.omniverse.nvidia.com/kit/docs/kit-manual/latest/guide/using_pip_packages.html).

## Usage

The simplest example is to just call `omni.kit.pipapi.install()` before importing a package:

```python
import omni.kit.pipapi

omni.kit.pipapi.install("semver==2.13.0")

# use
import semver

ver = semver.VersionInfo.parse('1.2.3-pre.2+build.4')
print(ver)
```

It can also be used to call pip with custom arguments, e.g.:

```python
omni.kit.pipapi.call_pip(["--help"])
```

## Extension Config

All extensions that load after **omni.kit.pipapi** can specify these additional configuration settings in their extension.toml file:

### extension.toml

```toml
[python.pipapi]
# List of additional directories with pip archives to be passed into pip using ``--find-links`` arg.
# Relative paths are relative to extension root. Tokens can be used.
archiveDirs = ["path/to/pip_archive"]

# Commands passed to pip install before extension gets enabled. Can also contain flags, like `--upgrade`, `--no-index`, etc.
# Refer to: https://pip.pypa.io/en/stable/reference/requirements-file-format/
requirements = [
    "simplejson==6.1",
    "numpy"
]

# Optional list of modules to import before (check) and after pip install if different from packages in requirements.
modules = [
    "simplejson",
    "numpy"
]

# Allow going to online index. Required to be set to true for pip install call.
use_online_index = true

# Ignore import check for modules.
ignore_import_check = false

# Use this to specify a list of additional repositories if your pip package is hosted somewhere other
# than the default repo(s) configured in pip. Will pass these to pip with "--extra-index-url" argument
repositories = ["https://my.additional.pip_repo.com/"]

# Other arguments to pass to pip install. For example, to disable caching:
extra_args = ["--no-cache-dir"]
```

This is equivalent to just calling `omni.kit.pipapi.install()` in your extension's startup.
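To make the rerouting described in the Overview concrete, the sketch below shows the kind of pip command such a wrapper could assemble. This illustrates the mechanism only; it is not omni.kit.pipapi's actual implementation, and the function name and parameters are hypothetical:

```python
import sys


def build_pip_install_cmd(package, target_dir, archive_dirs=(), use_online_index=False):
    """Assemble a pip command that installs a package into a dedicated folder."""
    # --target reroutes the installation into the user specified environment folder.
    cmd = [sys.executable, "-m", "pip", "install", package, "--target", target_dir]
    for d in archive_dirs:
        # offline pip archives, analogous to the `archiveDirs` setting above
        cmd += ["--find-links", d]
    if not use_online_index:
        # stay offline unless the online index is explicitly allowed (use_online_index)
        cmd += ["--no-index"]
    return cmd


print(build_pip_install_cmd("semver==2.13.0", "/tmp/pip_envs", ["path/to/pip_archive"]))
```

Installing into a separate `--target` folder keeps runtime-installed packages out of the interpreter's site-packages, which is why the installed environment can be shared and cleaned independently of the app itself.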
extension-toml-important-keys_Overview.md
# Overview ## A simple extension demonstrating how to bundle an external renderer into a Kit extension. ### extension.toml: Important keys - **order = -100** _# Load the extension early in startup (before Open USD libraries)_ - **writeTarget.usd = true** _# Publish the extension with the version of Open USD built against_ ### extension.py: Important methods - **on_startup** _# Handle registration of the renderer for Open USD and the Viewport menu_ - **on_shutdown** _# Handle the removal of the renderer from the Viewport menu_ ### settings.py: - **register_sample_settings** _# Register UI for communication via HdRenderSettings API via RenderSettings Window_ - **deregister_sample_settings** _# Remove renderer specific UI from RenderSettings Window_
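As a sketch, the two keys above would appear in the renderer extension's `extension.toml` roughly like this (the section placement shown follows the usual Kit `extension.toml` layout, but treat it as an assumption to verify against your own config):

```toml
[core]
# Load the extension early in startup (before Open USD libraries)
order = -100

[package]
# Publish the extension with the version of Open USD built against
writeTarget.usd = true
```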
extension-types_index.md
# kit-rtp-texture: Omniverse Kit Extension & App Template

## Kit Extensions & Apps Example :package:

This repo is a gold standard for building Kit extensions and applications. The idea is that you fork it, trim down the parts you don't need, and use it to develop your extensions and applications, which can then be packaged, shared, and reused.

This README file provides a quick overview. In-depth documentation can be found at:

📖 [omniverse-docs.s3-website-us-east-1.amazonaws.com/kit-rtp-texture](http://omniverse-docs.s3-website-us-east-1.amazonaws.com/kit-rtp-texture)

Teamcity Project

## Extension Types

```mermaid
graph TD
  subgraph "python"
    A3(__init__.py + python code)
  end
  subgraph cpp
    A1(omni.ext-example_cpp_ext.plugin.dll)
  end
  subgraph "mixed"
    A2(__init__.py + python code)
    A2 --import _mixed_ext--> B2(example.mixed_ext.python)
    B2 -- carb::Framework --> C2(example.mixed_ext.plugin.dll)
  end
  Kit[Kit] --> A1
  Kit[Kit] --> A2
  Kit[Kit] --> A3
```

## Getting Started

1. build:

```
build.bat -r
```

2. run:

```
_build\windows-x86_64\release\omni.app.new_exts_demo_mini.bat
```

3. notice enabled extensions in "Extension Manager Window" of Kit. One of them brought its own test in "Test Runner" window.

To run tests:

```
repo.bat test
```

To run from python:

```
_build\windows-x86_64\release\example.pythonapp.bat
```

## Using a Local Build of Kit SDK

By default packman downloads the Kit SDK (from `deps/kit-sdk.packman.xml`). For development purposes a local build of the Kit SDK can be used. To use your local build, assuming it is located at, say, `C:/projects/kit`, use the `repo_source` tool to link:

```
repo source link C:/projects/kit
```

## Using a Local Build of another Extension

Other extensions can often come from the registry to be downloaded by kit at run-time or build-time (e.g. the `omni.app.my_app.kit` example). Developers often want to use a local clone of their repo to develop across multiple repos simultaneously.
To do that, an additional extension search path needs to be passed into kit pointing to the local repo. There are many ways to do it; recommended is using `deps/user.toml`. You can use that file to override any setting.

Create a `deps/user.toml` file in this repo with the search path to your repo added to the `app/exts/folders` setting, e.g.:

```toml
[app.exts]
folders."++" = ["c:/projects/extensions/kit-converters/_build/windows-x86_64/release/exts"]
```

The `repo source` tool can also be used to create and edit `user.toml`. Provide a path to a repo or a direct path to an extension(s):

- `repo source link [repo_path]` - If the repo produces kit extensions, add them to the `deps/user.toml` file.
- `repo source link [ext_path]` - If the path is a kit extension or a folder with kit extensions, add it to the `deps/user.toml` file.

Other options:

- Pass a CLI arg to any app like this: `--ext-folder c:/projects/extensions/kit-converters/_build/windows-x86_64/release/exts`.
- Use *Extension Manager UI (Gear button)*.
- Use other `user.toml` (or other) configuration files; refer to [Kit Documentation: Configuration](https://docs.omniverse.nvidia.com/kit/docs/kit-sdk/104.0/docs/guide/configuration.html#user-settings).

You can always find out where an extension is coming from in *Extension Manager* by selecting an extension and hovering over the open button. You can also find it in the log, by looking either for the `registered` message for each extension or `About to startup:` when it starts.

## Other Useful Links

- See [Kit Manual](https://docs.omniverse.nvidia.com/kit/docs/kit-manual)
- See [Kit Documentation](https://docs.omniverse.nvidia.com/kit/docs)

# Kit Developer Documentation Index

- **Build Systems** - See Anton's Video Tutorials for videos about the build systems.
extension-types_overview.md
# Kit Extensions & Apps Example :package:

This repo is a gold standard for building Kit extensions and applications. The idea is that you fork it, trim down the parts you don't need, and use it to develop your extensions and applications, which can then be packaged, shared, and reused.

This README file provides a quick overview. In-depth documentation can be found at: 📖 http://omniverse-docs.s3-website-us-east-1.amazonaws.com/kit-template

Teamcity Project

## Extension Types

```mermaid
graph TD
  subgraph "python"
    A3(__init__.py + python code)
  end
  subgraph cpp
    A1(omni.ext-example_cpp_ext.plugin.dll)
  end
  subgraph "mixed"
    A2(__init__.py + python code)
    A2 --import _mixed_ext--> B2(example.mixed_ext.python)
    B2 -- carb::Framework --> C2(example.mixed_ext.plugin.dll)
  end
  Kit[Kit] --> A1
  Kit[Kit] --> A2
  Kit[Kit] --> A3
```

## Getting Started

1. build:

```
build.bat -r
```

2. run:

```
_build\windows-x86_64\release\omni.app.new_exts_demo_mini.bat
```

3. notice enabled extensions in "Extension Manager Window" of Kit. One of them brought its own test in "Test Runner" window.

To run tests:

```
repo.bat test
```

To run from python:

```
_build\windows-x86_64\release\example.pythonapp.bat
```

## Using a Local Build of Kit SDK

By default packman downloads the Kit SDK (from `deps/kit-sdk.packman.xml`). For development purposes a local build of the Kit SDK can be used, assuming it is located at, say, `C:/projects/kit`.
Use the `repo_source` tool to link:

```
repo source link c:/projects/kit/kit
```

Or you can also do it manually: create a file `deps/kit-sdk.packman.xml.user` containing the following lines:

```xml
<project toolsVersion="5.6">
  <dependency name="kit_sdk_${config}" linkPath="../_build/${platform}/${config}/kit">
    <source path="c:/projects/kit/kit/_build/$platform/$config" />
  </dependency>
</project>
```

To see current source links:

```
repo source list
```

To remove a source link:

```
repo source unlink kit-sdk
```

To remove all source links:

```
repo source clear
```

## Using a Local Build of another Extension

Other extensions can often come from the registry to be downloaded by kit at run-time or build-time (e.g. the `omni.app.my_app.kit` example). Developers often want to use a local clone of their repo to develop across multiple repos simultaneously.

To do that, an additional extension search path needs to be passed into kit pointing to the local repo. There are many ways to do it; recommended is using `deps/user.toml`. You can use that file to override any setting.

Create a `deps/user.toml` file in this repo with the search path to your repo added to the `app/exts/folders` setting, e.g.:

```toml
[app.exts]
folders."++" = ["c:/projects/extensions/kit-converters/_build/windows-x86_64/release/exts"]
```

The `repo source` tool can also be used to create and edit `user.toml`. Provide a path to a repo or a direct path to an extension(s):

- `repo source link [repo_path]` - If the repo produces kit extensions, add them to the `deps/user.toml` file.
- `repo source link [ext_path]` - If the path is a kit extension or a folder with kit extensions, add it to the `deps/user.toml` file.

Other options:

- Pass a CLI arg to any app like this: `--ext-folder c:/projects/extensions/kit-converters/_build/windows-x86_64/release/exts`.
- Use *Extension Manager UI (Gear button)*.
- Use other `user.toml` (or other) configuration files; refer to *Kit Documentation: Configuration*.

You can always find out where an extension is coming from in *Extension Manager* by selecting an extension and hovering over the open button. You can also find it in the log, by looking either for the `registered` message for each extension or `About to startup:` when it starts.

## Other Useful Links

- See *Kit Manual*
- See *Kit Developer Documentation Index*
- See *Anton's Video Tutorials* for videos about the build systems.
extensions-loaded_Overview.md
# Overview — kit-omnigraph 2.3.1 documentation ## Overview This extension is an aggregator of the set of extensions required for basic Action Graph use. Action Graphs are a subset of OmniGraph that control execution flow through event triggers. Loading this bundle extension is a convenient way to load all of the extensions required to use the OmniGraph Action Graphs. There is no new functionality added by this bundle over and above what is provided by loading each of the extensions individually. ### Extensions Loaded - [omni.graph](https://docs.omniverse.nvidia.com/kit/docs/omni.graph/latest/Overview.html#ext-omni-graph) (in Omniverse Kit) - [omni.graph.action](../../omni.graph.action/1.101.1/Overview.html#ext-omni-graph-action) (in kit-omnigraph) - [omni.graph.nodes](../../omni.graph.nodes/1.141.2/Overview.html#ext-omni-graph-nodes) (in kit-omnigraph) - [omni.graph.ui](../../omni.graph.ui/1.67.1/Overview.html#ext-omni-graph-ui) (in kit-omnigraph) - [omni.graph.ui_nodes](../../omni.graph.ui_nodes/1.24.1/Overview.html#ext-omni-graph-ui-nodes) (in kit-omnigraph)
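A bundle extension like this typically contains no code of its own; its `extension.toml` simply lists the aggregated extensions as dependencies, using the same `[dependencies]` syntax as any other extension. A sketch based on the list above (version constraints omitted; consult the actual extension for its exact pinning):

```toml
[dependencies]
"omni.graph" = {}
"omni.graph.action" = {}
"omni.graph.nodes" = {}
"omni.graph.ui" = {}
"omni.graph.ui_nodes" = {}
```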
extensions.md
# Related Extensions ## Related Extensions - **omni.graph** - Python access to OmniGraph functionality - **omni.graph.action** - Support and nodes best suited for use in an [Action Graph](Glossary.html#term-Action-Graph) - **omni.graph.bundle.action** - Aggregator of the extensions required to use all Action Graph features - **omni.graph.core** - Core extension with all of the basic OmniGraph functionality - **omni.graph.examples.cpp** - Example nodes written in C++ - **omni.graph.examples.python** - Example nodes written in Python - **omni.graph.nodes** - Library of simple nodes for common use - **omni.graph.scriptnode** - Implementation of an OmniGraph node that lets you implement your own node computation without needing to add all of the extension support required for a regular node - **omni.graph.telemetry** - Omniverse Kit telemetry integration for OmniGraph - **omni.graph.template.cpp** - Template extension to copy when building an extension with only C++ OmniGraph nodes - **omni.graph.template.mixed** - Template extension to copy when building an extension with both C++ and Python OmniGraph nodes - **omni.graph.template.no_build** - Template extension to copy when creating an extension with only Python nodes without a premake build process - **omni.graph.template.python** - Template extension to copy when building an extension with only Python OmniGraph nodes - **omni.graph.tools** - Tools for build and runtime operations, such as the node generator - **omni.graph.tutorials** - Nodes that walk through use of various facets of the node description format - **omni.graph.ui** - User interface elements for interacting with the elements of an OmniGraph - **omni.graph.ui_nodes** - OmniGraph nodes that facilitate dynamic creation of user interface elements
extensions_advanced.md
# Extensions in-depth ## What is an Extension? An extension is, in its simplest form, just a folder with a config file (`extension.toml`). The extension system will find that extension, and if it's enabled, it will do whatever the config file tells it to do, which may include loading python modules, **Carbonite** plugins, and shared libraries, applying settings, etc. There are many variations: for example, you can have an extension whose sole purpose is just to enable 10 other extensions, or one that just applies some settings. ## Extension in a single folder Everything that an extension has should be contained or nested within its root folder. This is a convention we are trying to adhere to. Looking at other package managers, like those in the Linux ecosystem, the content of packages can be spread across the filesystem, which makes some things easier (like loading shared libraries), but also creates many other problems. Following this convention makes the installation step very simple - we just have to unpack a single folder. A typical extension might look like this: ``` [extensions-folder] └── omni.appwindow ├── bin │ └── windows-x86_64 │ └── debug │ └── omni.appwindow.plugin.dll ├── config │ └── extension.toml └── omni └── appwindow ├── _appwindow.cp37-win_amd64.pyd └── __init__.py ``` This example contains a **Carbonite** plugin and a python module (which contains the bindings to this plugin). ## Extension Id An extension id consists of 3 parts: `[ext_name]-[ext_tag]-[ext_version]`: - `[ext_name]`: Extension name. The extension folder or kit file name. - `[ext_tag]`: Extension tag. Optional. Used to have different implementations of the same extension. Also part of the folder or kit file name. - `[ext_version]`: Extension version. Defined in `extension.toml`. Can also be part of the folder name, but is ignored there. Extension id example: `omni.kit.commands-1.2.3-beta.1`. The extension name is `omni.kit.commands`, the version is `1.2.3-beta.1`, and the tag is empty. 
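As an illustration of how such an id string decomposes, here is a rough heuristic sketch (`split_ext_id` is a hypothetical helper, not part of the Kit API): the version is taken to start at the first dash-separated chunk that begins with a digit, and whatever sits between the name and the version is the optional tag.

```python
import re

def split_ext_id(ext_id):
    """Heuristic split of '[ext_name]-[ext_tag]-[ext_version]'."""
    chunks = ext_id.split("-")
    for i, chunk in enumerate(chunks):
        if re.match(r"\d", chunk):
            # Everything before the first digit-leading chunk is the name
            # plus the optional tag; the rest is the version.
            return chunks[0], "-".join(chunks[1:i]), "-".join(chunks[i:])
    return ext_id, "", ""  # no version part present

print(split_ext_id("omni.kit.commands-1.2.3-beta.1"))
# → ('omni.kit.commands', '', '1.2.3-beta.1')
print(split_ext_id("omni.kit.example-gpu-2.0.1"))
# → ('omni.kit.example', 'gpu', '2.0.1')
```

Note how the prerelease part (`beta.1`) stays attached to the version even though it follows a dash, because the version already started at `1.2.3`.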
## Extension Version The version is defined in the `[package.version]` config field. Semantic Versioning is used. A good example of valid and invalid versions: [link](https://regex101.com/r/Ly7O1x/3/) In short, it is `[major].[minor].[patch]-[prerelease].[build]`. Express compatibility with version changes: - For a breaking change, increment the major version - For a backwards compatible change, increment the minor version - For bugfixes, increment the patch version Use the `[prerelease]` part to test a new version, e.g. `1.2.3-beta.1`. ## Extension Package Id The extension package id is the extension id plus build metadata: `[ext_id]+[build_meta]`. One extension id can have one or multiple packages in order to support different targets. It is common for binary extensions to have packages like: - `[ext_id]+wx64.r` - windows-x86_64, release config. - `[ext_id]+lx64.r` - linux-x86_64, release config. - `[ext_id]+lx64.d` - linux-x86_64, debug config. - … The python version or kit version can also denote different targets. Refer to the `[package.target]` section of the extension config. ## Single File Extensions Single file extensions are supported - i.e. extensions consisting only of a config file, without any code. In this case the name of the config file will be used as the extension id. This is used to make a top-level extension which we call an **app**. Apps are used as an application entry point, to unroll the whole extension tree. The file extension can be anything (`.toml` for instance), but the recommendation is to name them with the `.kit` file extension, so that they can be associated with the `kit.exe` executable and launched with a single click. 
``` [extensions-folder] └── omni.exp.hello.kit ``` ## App Extensions When `.kit` files are passed to `kit.exe` they are treated specially. This: ```shell > kit.exe C:/abc/omni.exp.hello.kit ``` Is the same as: ```shell > kit.exe --ext-path C:/abc/omni.exp.hello.kit --enable omni.exp.hello ``` It adds this `.kit` file as an extension and enables it, ignoring any default app configs, and effectively starting an app. Single file (`.kit` file) extensions are considered apps, and a "launch" button is added for them in the UI of the extension browser. For regular extensions, specify the keyword `package.app = true` in the config file to mark your extension as an app that users can launch. App extensions can be published, versioned, etc. just like normal extensions. So for instance if the `omni.exp.hello` example from above is published, we can just run Kit as: ```shell > kit.exe omni.exp.hello.kit ``` Kit will pull it from the registry and start. ## Extension Search Paths Extensions are automatically searched for in specified folders. The core **Kit** config `kit-core.json` specifies default search folders in the `/app/exts/foldersCore` setting. This way **Kit** can find core extensions; it also looks for extensions in system-specific documents folders for user convenience. To add more folders to the search paths there are a few ways: 1. Pass the `--ext-folder [PATH]` CLI argument to kit. 2. Add to the array in the `/app/exts/folders` setting. 3. Use the `omni::ext::ExtensionManager::addPath` API to add more folders (also available in python). To specify a direct path to a specific extension use the `/app/exts/paths` setting or the `--ext-path [PATH]` CLI argument. Folders added last are searched first. This way they will be prioritized over others, allowing the user to override existing extensions. 
Example of adding an extension search path in a kit file: ```toml [settings.app.exts] folders.'++' = [ "C:/hello/my_extensions" ] ``` ### Custom Search Path Protocols Both folders and direct paths can be extended to support other url schemes. If no scheme is specified, they are assumed to be on the local filesystem. The extension system provides APIs to implement custom protocols. This way an extension can be written to enable searching for extensions in different locations, for example: git repos. E.g. `--ext-folder foo://abc/def` – The extension manager will redirect this search path to the implementor of the `foo` scheme, if it was registered. ### Git URLs as Extension Search Paths The extension `omni.kit.extpath.git` implements the following extension search path schemes: `git`, `git+http`, `git+https`, `git+ssh`. Optional URL query params are supported: - `dir` - subdirectory of a git repo to use as a search path (or direct path). - `branch` - git branch to checkout. - `tag` - git tag to checkout. - `sha` - git sha to checkout. Example of usage with a cmd arg: `--ext-folder git://github.com/bob/somerepo.git?branch=main&dir=exts` – Add the `exts` subfolder and `main` branch of this git repo as an extension search path. Example of usage in a kit file: ```toml [settings.app.exts] folders.'++' = [ "git+https://gitlab-master.nvidia.com/bob/some-repo.git?dir=exts&branch=feature" ] ``` After the first checkout, the git path is cached into a global cache. To pull updates: - use the extension manager properties pages - setting: `--/exts/omni.kit.extpath.git/autoUpdate=1` - API call: `omni.kit.extpath.git.update_all_git_paths()` The extension system automatically enables this extension if a path with a scheme is added. It enables the extensions specified in the `app/extensions/pathProtocolExtensions` setting, which by default is `["omni.kit.extpath.git"]`. 
> **Note:** Git installation is required for this functionality. It expects the `git` executable to be available in the system shell. ### Extension Discovery The extension system monitors any specified extension search folders (or direct paths) for changes. It automatically syncs all changed/added/removed extensions. Any subfolder which contains an `extension.toml` in its root or in a `config` folder is considered to be an extension. The subfolder name uniquely identifies the extension and is used to extract the extension name and tag: `[ext_name]-[ext_tag]` ``` [extensions-folder] └── omni.kit.example-gpu-2.0.1-stable.3+cp37 ``` In this example, we have an extension `omni.kit.example-gpu-2.0.1-stable.3+cp37`, where: - **name:** `omni.kit.example` - **tag:** `gpu` (optional, default is "") The version and other information (like supported platforms) are queried from the extension config file. They may also be included in the folder name, which is what the system does with packages downloaded from a remote registry. In this example, anything could have come after the "gpu" tag, e.g. `omni.kit.example-gpu-whatever`. ## Extension Dependencies When a Kit-based application starts, it discovers all extensions and does nothing with them until some of them are enabled, whether via config file or API. Each extension can depend on other extensions, and this is where the whole application tree can unroll. The user may enable a high-level extension like **omni.usd_viewer**, which will bring in dozens of others. An extension can express dependencies on other extensions using **name**, **version**, and optionally **tag**. It is important to keep extensions versioned properly and express breaking changes using [Semantic Versioning](https://semver.org/). This is a good place to grasp what **tag** is for. 
If extension `foo` depends on `bar`, you might implement other versions of `bar`, like `bar-light` or `bar-prototype`. If they still fulfill the same API contract and expected behavior, you can safely substitute `bar` without `foo` noticing. In other words, if the extension is an interface, **tag** is the implementation. The effect is that just enabling some high-level extension like `omni.kit.window.script_editor` will expand the whole dependency tree in the correct order, without the user having to specify all of the extensions or worry about initialization order. One can also substitute extensions in a tree with a different version or tag, towards the same end-user experience, but having swapped in-place a different low-level building block. When an extension is enabled, the manager tries to satisfy all of its dependencies by recursively solving the dependency graph. This is a difficult problem. If dependency resolution succeeds, the whole dependency tree is enabled in order, so that each extension's dependencies are enabled before it. The opposite is true for disabling extensions: all extensions which depend on the target extension are disabled first. More details on the dependency system can be found in the C++ unit tests: `source/tests/test.unit/ext/TestExtensions.cpp`. A dependency graph defines the order in which extensions are loaded - it is sorted topologically. There are, however, many ways to sort the same graph (think of independent branches). To give finer control over startup order, the `order` parameters can be used. Each extension can use the `core.order` config parameter to define its order, and the order of dependencies can also be overridden with the `order` param in a `[[dependencies]]` section. Those with lower order will start sooner. If multiple extensions depend on one extension and try to override its order, the one that is loaded last wins (according to the dependency tree). 
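These ordering rules — strict topological dependency order, with the soft `order` value deciding between otherwise-independent extensions — can be sketched as follows (an illustrative toy resolver: the extension names and the `startup_order` helper are made up, and the real extension manager does much more, such as version solving):

```python
import heapq

def startup_order(requires, order):
    """Toy startup ordering: every extension starts after its dependencies;
    among extensions that become ready together, lower 'order' starts sooner."""
    dependents = {ext: [] for ext in requires}
    indegree = {ext: len(deps) for ext, deps in requires.items()}
    for ext, deps in requires.items():
        for dep in deps:
            dependents[dep].append(ext)
    # Heap of ready-to-start extensions, keyed by the soft 'order' value (default 0).
    ready = [(order.get(ext, 0), ext) for ext, n in indegree.items() if n == 0]
    heapq.heapify(ready)
    started = []
    while ready:
        _, ext = heapq.heappop(ready)
        started.append(ext)
        for child in dependents[ext]:
            indegree[child] -= 1
            if indegree[child] == 0:
                heapq.heappush(ready, (order.get(child, 0), child))
    if len(started) != len(requires):
        raise RuntimeError("dependency graph contains a cycle")
    return started

requires = {
    "omni.app": ["omni.ui", "omni.usd"],
    "omni.ui": ["omni.kit.core"],
    "omni.usd": ["omni.kit.core"],
    "omni.kit.core": [],
}
# omni.usd asks to start before its independent sibling omni.ui.
print(startup_order(requires, {"omni.usd": -10}))
# → ['omni.kit.core', 'omni.usd', 'omni.ui', 'omni.app']
```

Without the `order` hint, `omni.ui` and `omni.usd` could start in either order; the hint only breaks that tie and can never violate the dependency order itself.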
In summary - the dependency order is always satisfied (or extensions won’t be started at all, if the graph contains cycles) and soft ordering is applied on top using config params. ## Extension Configuration File (extension.toml) An Extension config file can specify: 1. Dependencies to import 2. Settings 3. A variety of metadata/information which are used by the Extension Registry browser TOML is the format used. See this short [toml tutorial](https://learnxinyminutes.com/docs/toml/). Note in particular the `[[]]` TOML syntax for arrays of objects and quotes around keys which contain special symbols (e.g., `"omni.physx"`). The config file should be placed in the root of the extension folder or in a `config` subfolder. ### Note All relative paths in configs are relative to the extension folder. All paths accept tokens (like `${platform}`, `${config}`, `${kit}` etc). More info: [Tokens](tokens.html#list-tokens). There are no mandatory fields in a config, so even with an empty config, the extension will be considered valid and can be enabled - without any effect. Next we will list all the config fields the extension system uses, though a config file may contain more. The Extension system provides a mechanism to query config files and hook into itself. That allows us to extend the extension system itself and add new config sections. For instance `omni.kit.pipapi` allows extensions to specify pip packages to be installed before enabling them. More info on that: Hooks. That also means that **typos or unknown config settings will be left as-is and no warning will be issued**. ### Config Fields #### [core] section For generic extension properties. Used directly by the Extension Manager core system. ##### [core.reloadable] (default: true) Is the extension **reloadable**? The Extension system will monitor the extension’s files for changes and try to reload the extension when any of them change. 
If the extension is marked as **non-reloadable**, all other extensions that depend on it are also non-reloadable. ##### [core.order] (default: 0) When extensions are independent of each other they are enabled in an undefined order. An extension can be ordered to be before (negative) or after (positive) other extensions. #### [package] section Contains information used for publishing extensions and displaying user-facing details about the package. ##### [package.version] (default: "0.0.0") Extension version. This setting is required in order to publish extensions to the remote registry. The Semantic Versioning concept is baked into the extension system, so make sure to follow the basic rules: - Before you reach `1.0.0`, anything goes, but if you make breaking changes, increment the minor version. - After `1.0.0`, only make breaking changes when you increment the major version. - Incrementing the minor version implies a backwards-compatible change. Let's say extension `bar` depends on `foo-1.2.0`. That means that `foo-1.3.0`, `foo-1.4.5`, etc. are also suitable and can be enabled by the extension system. - Use version numbers with three numeric parts such as `1.0.0` rather than `1.0`. - Prerelease labels can also be used, like so: `1.3.4-beta`, `1.3.4-rc1.test.1`, or `1.3.4-stable`. ##### [package.title] (default: "") User-facing package name, used for UI. ##### [package.description] (default: "") User-facing package description, used for UI. ##### [package.category] (default: "") Extension category, used for UI. One of: - animation - graph - rendering - audio - simulation - example - internal - other ##### [package.app] (default: false) Whether the extension is an App. Used to mark an extension as an app in the UI. Adds a "Launch" button to run kit with only this extension (and its dependencies) enabled. For single-file extensions (.kit files), it defaults to true. ##### [package.feature] (default: false) Extension is a Feature. 
Used to show user-facing extensions, suitable to be enabled by the user from the UI. By default, the app can choose to show only those feature extensions. ##### [package.toggleable] (default: true) Indicates whether an extension can be toggled (i.e. enabled/disabled) by the user from the UI. There is another related setting, [core.reloadable], which can prevent the user from disabling an extension in the UI. ##### [package.authors] Lists people or organizations that are considered the "authors" of the package. Optionally include email addresses within angled brackets after each author. ##### [package.repository] URL of the extension source repository, used for display in the UI. ##### [package.keywords] Array of strings that describe this extension. Helpful when searching for it in an extension registry. ##### [package.changelog] Location of a CHANGELOG.MD file in the target (final) folder of the extension, relative to the root. The UI will load and show it. We can also insert the content of that file inline instead of specifying a filepath. It is important to keep the changelog updated when new versions of an extension are released and published. For more info on writing changelogs refer to Keep a Changelog. ##### [package.readme] Location of a README file in the target (final) folder of an extension, relative to the root. The UI will load and show it. We can also insert the content of that file inline instead of specifying a filepath. ##### [package.preview_image] (default: "") Location of a preview image in the target (final) folder of the extension, relative to the root. The preview image is shown in the "Overview" of the extension in the Extensions window. A screenshot of your extension might make a good preview image. ##### [package.icon] (default: "") Location of the icon in the target (final) folder of the extension, relative to the root. The icon is shown in the Extensions window. Recommended icon size is 256x256 pixels. 
##### [package.target] This section is used to describe the target platform this extension runs on. This is fairly arbitrary, but can include: - Operating system - CPU architecture - Python version - Build configuration The extension system will filter out extensions that don't match the current environment/platform. This is particularly important for extensions published to and downloaded from a remote extension registry. Normally you don't need to fill this section in manually. When extensions are published, it is automatically filled in with defaults, more in [Publishing Extensions](#kit-ext-publishing). But it can be overridden by setting: - `package.target.kit = ["*"]` – Kit version (major.minor), e.g. `"101.0"`, `"102.3"`. - `package.target.kitHash = ["*"]` – Kit git hash (8 symbols), e.g. `"abcdabcd"`. - `package.target.config = ["*"]` – Build config, e.g. `"debug"`, `"release"`. - `package.target.platform = ["*"]` – Build platform, e.g. `"windows-x86_64"`, `"linux-aarch64"`. - `package.target.python = ["*"]` – Python version, e.g. `"cp37"` (cpython 3.7). Refer to [PEP 0425](https://peps.python.org/pep-0425/). A full example: ```toml [package.target] config = ["debug"] platform = ["linux-*", "windows"] python = ["*"] ``` ##### [package.writeTarget] This section can be used to explicitly control whether [package.target] should be written. By default it is written based on rules described in [Extension Publishing](#kit-ext-publishing). But if for some [target] a field is set, such as `package.writeTarget.[target] = true/false`, that tells explicitly whether it should automatically be filled in. 
For example, if you want to target a specific kit version, to make the extension work **only** with that version, set: ```toml [package] writeTarget.kit = true ``` Or if you want your extension to work for all python versions, write: ```toml [package] writeTarget.python = false ``` The list of known targets is the same as in the `[package.target]` section: `kit`, `config`, `platform`, `python`. ## [dependencies] section This section is used to describe which extensions this extension depends on. The extension system will guarantee they are enabled before it loads your extension (assuming it doesn't fail to enable any of them). One can optionally specify a version and tag per dependency, as well as make a dependency optional. Each entry is the **name** of another extension. It may additionally specify **tag**, **version**, **optional**, and **exact**: ```toml "omni.physx" = { version="1.0", "tag"="gpu" } "omni.foo" = { version="3.2" } "omni.cool" = { optional = true } ``` **Note that it is highly recommended to use versions**, as it will help maintain stability for extensions (and the whole application) - i.e. if a breaking change happens in a dependency and dependents have not yet been updated, an older version can still be used instead (only the **major** and **minor** parts are needed for this according to semver). `optional` (default: `false`) – If the extension system can't resolve this dependency, the extension will still be enabled. It is expected that the extension can handle the absence of this dependency. Optional dependencies are not enabled unless they are a non-optional dependency of some other extension which is enabled, or unless they are enabled explicitly (using API, settings, CLI, etc.). `exact` (default: `false`) – Only an exact version match of the extension will be used. This flag is experimental and may change. 
`order` (default: `None`) – Override the `core.order` parameter of the extension that is depended on. Only applied if set. ## [python] section If an extension contains python modules or scripts, this is where to specify them. ### [[python.module]] Specifies python module(s) that are part of this extension. Multiple can be specified. Take notice of the `[[]]` syntax. When an extension is enabled, modules are imported in order. Here we specify 2 python modules to be imported (`import omni.hello` and `import omni.calculator`): ```toml [[python.module]] name = "omni.hello" [[python.module]] name = "omni.calculator" path = "." public = true ``` When modules are scheduled for import this way, they will be reloaded if the module is already present. `name` (required) – Python module name, can be empty. Think of it as what will be imported by other extensions that depend on you: `import omni.calculator`. `public` (default: `true`) – If public, the module will be available to be imported by other extensions (the extension folder is added to `sys.path`). Non-public modules have limited support and their use is not recommended. `path` (default: `"."`) – Path to the root folder where the python module is located. If relative, it is relative to the extension root. Think of it as what gets added to `sys.path`. By default the extension root folder is added if any `[[python.module]]` directive is specified. `searchExt` (default: `true`) – If true, imports said module and launches the extension search routine within the module. If false, only the module is imported. By default the extension system uses a custom fast importer. The fast importer only looks for python modules in extension root subfolders that correspond to the module namespace. In the example above it would only look in `[ext root]/omni/**`. 
If you have other subfolders that contain python modules, you need at least to specify the top-level namespace. E.g. if you also have `foo.bar` in `[ext root]/foo/bar.py`: ```toml [[python.module]] name = "foo" ``` would make it discoverable by the fast importer. You can also just specify an empty name to make the importer search all subfolders: ```toml [[python.module]] path = "." ``` An example of that is in `omni.kit.pip_archive`, which brings in a lot of different modules that would be tedious to list. ### [[python.scriptFolder]] Script folders can be added to `IAppScripting`, and they will be searched when a script file path is specified to be executed (with `--exec` or via API). Example: ```toml [[python.scriptFolder]] path = "scripts" ``` `path` (required) – Path to the script folder to be added. If the path is relative, it is relative to the extension root. ## [native] section Used to specify Carbonite plugins to be loaded. ### [[native.plugin]] When an extension is enabled, the extension system will search for Carbonite plugins using the `path` pattern and load all of them. It will also try to acquire the `omni::ext::IExt` interface if any of the plugins implement it. That provides an optional entry point in C++ code where your extension can be loaded. When an extension is disabled, it releases any acquired interfaces, which may lead to plugins being unloaded. Example: ```toml [[native.plugin]] path = "bin/${platform}/${config}/*.plugin" recursive = false ``` `path` (required) – Path to search for Carbonite plugins; may contain wildcards and Tokens. `recursive` (default: `false`) – Search recursively in folders. ### [[native.library]] Used to specify shared libraries to load when an extension is enabled. When an extension is enabled, the extension system will search for native shared libraries using `path` and load them. 
This mechanism is useful to "preload" libraries needed later, avoiding OS-specific calls in your code and the use of `PATH`/`LD_LIBRARY_PATH` etc. to locate and load DSOs/DLLs. With this approach we just load the needed libraries directly. When an extension is disabled, it tries to unload those shared libraries. Example: ```toml [[native.library]] path = "bin/${platform}/${config}/foo.dll" ``` `path` (required) – Path to search for shared libraries; may contain wildcards and Tokens. ## [settings] section Everything under this section is applied to the root of the global Carbonite settings (`carb.settings.plugin`). In case of conflict, the original setting is kept. It is good practice to namespace your settings with your extension name and put them all under the `exts` root key, e.g.: ```toml [settings] exts."omni.kit.renderer.core".compatibilityMode = true ``` > Note > Quotes are used here to distinguish between the `.` separator of a toml file and the `.` in the name of the extension. An important detail is that settings are applied in reverse order of extension startup (before any extensions start) and they don't override each other. Therefore a parent extension can specify settings for child extensions to use. ## [[env]] section This section is used to specify one or more environment variables to set when an extension is enabled. Just like settings, env vars are applied in reverse order of startup. By default they don't override variables that are already set, but the override behavior does allow parent extensions to override the env vars of extensions they depend on. Example: ```toml [[env]] name = "HELLO" value = "123" isPath = false append = false override = false platform = "windows-x86_64" ``` `name` (required) – Environment variable name. `value` (required) – Environment variable value to set. `isPath` (default: `false`) – Treat the value as a path. If relative, it is relative to the extension root folder. Tokens can also be used, as within any path. 
`append` (default: `false`) – Append the value to the already-set env var, if any. Platform-specific separators are used. `override` (default: `false`) – Override the value of the already-set env var, if any. `platform` (default: `""`) – Set only if the platform matches the pattern. Wildcards can be used. ## [fswatcher] section Used to configure the file system watcher used by the extension system to monitor for changes in extensions and auto reload them. ### [fswatcher.patterns] Specify files that are monitored. - include (default: `["*.toml", "*.py"]`) – File patterns to include. - exclude (default: `[]`) – File patterns to exclude. Example: ```toml [fswatcher.patterns] include = ["*.toml", "*.py", "*.txt"] exclude = ["*.cache"] ``` ### [fswatcher.paths] Specify folders that are monitored. The FS watcher will use OS-specific API to listen for changes on those folders. You can use these settings to limit the number of subscriptions if your extension has too many folders inside. - include (default: `["*/config/*", "*/./*"]` and python modules) – Folder path patterns to include. - exclude (default: `["*/__pycache__/*", "*/.git/*"]`) – Folder path patterns to exclude. Example: ```toml [fswatcher.paths] include = ["*/config"] exclude = ["*/data*"] ``` ## [[test]] section This section is read only by the testing system (the omni.kit.test extension) when running per-extension tests. Extension tests are run as a separate process where only the tested extension is enabled, and it runs all the tests that it has. Usually this section can be left empty, but extensions can specify additional extensions (which are not a part of their regular dependencies, or when a dependency is optional), additional cmd arguments, or filter out (or in) additional stdout messages. Each [[test]] entry will run a separate extension test process. Extension tests run in the context of an app. An app can be empty, which makes the extension test isolated so that only its dependencies are enabled. 
Testing in an empty app is the minimal recommended test coverage. Extension developers can then opt in to being tested in other apps and fine-tune their test settings per app. Example: ```toml [[test]] name = "default" enabled = true apps = [""] args = ["-v"] dependencies = ["omni.kit.capture"] pythonTests.include = ["omni.foo.*"] pythonTests.exclude = [] cppTests.libraries = ["bin/${lib_prefix}omni.appwindow.tests${lib_ext}"] timeout = 180 parallelizable = true unreliable = false profiling = false pyCoverageEnabled = false waiver = "" stdoutFailPatterns.include = ["*[error]*", "[fatal]*"] stdoutFailPatterns.exclude = [ "*Leaking graphics objects*", # Exclude graphics leaks until fixed ] ``` ### Test Configuration - **name** – Test process name. If there are multiple `[[test]]` entries, this name must be unique. - **enabled** – Whether tests are enabled. Defaults to true. Useful to disable tests per platform. - **apps** – List of apps to use this test configuration for. Used in case there are multiple `[[test]]` entries. Wildcards are supported. Defaults to `[""]`, which is an empty app. - **args** – Additional cmd arguments to pass into the extension test process. - **dependencies** – List of additional extensions to enable when running the extension tests. - **pythonTests.include** – List of tests to run. If empty, python modules are used instead (`[[python.module]]`), since all test names start with the module they are defined in. Can contain wildcards. - **pythonTests.exclude** – List of tests to exclude from running. Can contain wildcards. - **cppTests.libraries** – List of shared libraries with C++ doctests to load. - **timeout** (default: 180) – Test process timeout (in seconds). - **parallelizable** (default: true) – Whether the test process can run in parallel with other test processes. - **unreliable** (default: false) – If marked as unreliable, test failures won't fail the whole test run. 
- **profiling** (default: true) – Collects and outputs Chrome trace data via carb profiling for CPU events for the test process.
- **pyCoverageEnabled** (default: false) – Collects python code coverage using Coverage.py.
- **waiver** – String explaining why an extension contains no tests.
- **stdoutFailPatterns.include** – List of additional patterns to search stdout for and mark as a failure. Can contain wildcards.
- **stdoutFailPatterns.exclude** – List of additional patterns to search stdout for and exclude as a test failure. Can contain wildcards.

### [documentation] section

This section is read by the `omni.kit.documentation.builder` extension, and is used to specify a list of markdown files for in-app API documentation and offline sphinx generation.

Example:

```toml
[documentation]
pages = ["docs/Overview.md"]
menu = "Help/API/omni.kit.documentation.builder"
title = "Omni UI Documentation Builder"
```

- **pages** – List of .md file paths, relative to the extension root.
- **menu** – Menu item path to add to the popup in-app documentation window.

## Config Filters

Any part of a config can be filtered based on the current platform or build configuration. Use the `"filter:platform"."[platform_name]"` or `"filter:config"."[build_config]"` pair of keys. Anything under those keys will be merged on top of the tree they are located in (or filtered out if it doesn’t apply).
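As a rough illustrative sketch (not Kit's actual implementation), the merge-and-filter behavior can be modeled like this:

```python
# Illustrative sketch of resolving "filter:platform" / "filter:config" keys:
# matching branches are merged on top of the tree, non-matching ones are dropped.
def resolve(tree, platform, config):
    out = {}
    for key, value in tree.items():
        if key == "filter:platform":
            for name, sub in value.items():
                if name == platform:
                    out.update(resolve(sub, platform, config))
        elif key == "filter:config":
            for name, sub in value.items():
                if name == config:
                    out.update(resolve(sub, platform, config))
        else:
            out[key] = value
    return out

deps = {
    "omni.foo": {},
    "filter:platform": {"windows-x86_64": {"omni.fox": {}}, "linux-x86_64": {"omni.owl": {}}},
    "filter:config": {"debug": {"omni.cat": {}}},
}
resolved = resolve(deps, "windows-x86_64", "debug")
# keeps omni.foo, merges in omni.fox (platform match) and omni.cat (config match)
```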
| filter | values |
| --- | --- |
| platform | `windows-x86_64`, `linux-x86_64` |
| config | `debug`, `release` |

Here are some examples:

```toml
[dependencies]
"omni.foo" = {}
"filter:platform"."windows-x86_64"."omni.fox" = {}
"filter:platform"."linux-x86_64"."omni.owl" = {}
"filter:config"."debug"."omni.cat" = {}
```

After loading that extension on a Windows debug build, it would resolve to:

```toml
[dependencies]
"omni.foo" = {}
"omni.fox" = {}
"omni.cat" = {}
```

**Note**
You can debug this behavior by running in debug mode with the `--/app/extensions/debugMode=1` setting and looking into the log file.

## Example

Here is a full example of an `extension.toml` file:

```toml
[core]
reloadable = true
order = 0

[package]
version = "0.1.0"
category = "Example"
feature = false
app = false
title = "The Best Package"
description = "long and boring text.."
authors = ["John Smith <jsmith@email.com>"]
repository = "https://gitlab-master.nvidia.com/omniverse/kit"
keywords = ["banana", "apple"]
changelog = "docs/CHANGELOG.md"
readme = "docs/README.md"
preview_image = "data/preview.png"
icon = "data/icon.png"
# writeTarget.kit = true
# writeTarget.kitHash = true
# writeTarget.platform = true
# writeTarget.config = true
# writeTarget.python = true

[dependencies]
"omni.physx" = { version="1.0", "tag"="gpu" }
"omni.foo" = {}

# Modules are loaded in order. Here we specify 2 python modules to be imported (``import hello`` and ``import omni.physx``).
[[python.module]]
name = "hello"
path = "."
public = false

[[python.module]]
name = "omni.physx"

[[python.scriptFolder]]
path = "scripts"

# Native section, used if extension contains any Carbonite plugins to be loaded
[[native.plugin]]
path = "bin/${platform}/${config}/*.plugin"
recursive = false # false is default, hence it is optional

# Library section. Shared libraries will be loaded when the extension is enabled, note [[]] toml syntax for array of objects.
```
```toml
[[native.library]]
path = "bin/${platform}/${config}/foo.dll"

# Settings. They are applied on the root of global settings. In case of conflict original settings are kept.
[settings]
exts."omni.kit.renderer.core".compatibilityMode = true

# Environment variables. Example of adding "data" folder in extension root to PATH on Windows:
[[env]]
name = "PATH"
value = "data"
isPath = true
append = true
platform = "windows-x86_64"

# Fs Watcher patterns and folders. Specify which files are monitored for changes to reload an extension. Use wildcard for string matching.
[fswatcher]
patterns.include = ["*.toml", "*.py"]
patterns.exclude = []
paths.include = ["*"]
paths.exclude = ["*/__pycache__*", "*/.git*"]

# Documentation
[documentation]
pages = ["docs/Overview.md"]
menu = "Help/API/omni.kit.documentation.builder"
title = "Omni UI Documentation Builder"
```

# Extension Enabling/Disabling

Extensions can be enabled and disabled at runtime using the provided API. The default **Create** application comes with an Extension Manager UI which shows all the available extensions and allows a user to toggle them. An App configuration file can also be used to control which extensions are to be enabled.

You may also use command-line arguments to the Kit executable (or any Omniverse App based on Kit) to enable specific extensions:

Example:

```
> kit.exe --enable omni.kit.window.console --enable omni.kit.window.extensions
```

`--enable` adds the chosen extension to the “enabled list”. The command above will start only the extensions needed to show those 2 windows.

## Python Modules

Enabling an extension loads the python modules specified and searches for subclasses of `omni.ext.IExt`.
They are instantiated and the `on_startup` method is called, e.g.:

`hello.py`

```python
import omni.ext

class MyExt(omni.ext.IExt):
    def on_startup(self, ext_id):
        pass

    def on_shutdown(self):
        pass
```

When an extension is disabled, `on_shutdown` is called and all references to the extension object are released.

## Native Plugins

Enabling an extension loads all Carbonite plugins specified by search masks in the `native.plugin` section. If one or more plugins implement the `omni.ext.IExt` interface, they are loaded and initialized.

# IExt Interface

When an extension is enabled, if it implements the `omni::ext::IExt` interface, it is acquired and the `onStartup` method is called. When an extension is disabled, `onShutdown` is called and the interface is released.

# Settings

Settings to be applied when an extension is enabled can be specified in the `settings` section. They are applied on the root of global settings. In case of any conflicts, the original settings are kept. It is recommended to use the path `exts/[extension_name]` for extension settings, but in general any path can be used. It is also good practice to document each setting in the `extension.toml` file, for greater discoverability of which settings a particular extension supports.

# Tokens

When an extension is enabled, it sets tokens into the Carbonite `ITokens` interface with the path to the extension root folder. E.g. for the extension `omni.foo-bar`, the tokens `${omni.foo}` and `${omni.foo-bar}` are set.

# Extensions Manager

## Reloading

Extensions can be hot reloaded. The Extension system monitors the file system for changes to enabled extensions. If it finds any, the extensions are disabled and enabled again (which can involve reloading large parts of the dependency tree). This allows live editing of python code and recompilation of C++ plugins. Use the `fswatcher.patterns` and `fswatcher.paths` config settings (see above) to control which file changes trigger reloading.
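The wildcard include/exclude semantics of those fswatcher patterns can be sketched with `fnmatch` (an illustration only, not Kit's actual implementation):

```python
import fnmatch

# Illustrative sketch of include/exclude wildcard filtering, similar in
# spirit to fswatcher.patterns (not Kit's actual implementation).
include = ["*.toml", "*.py"]
exclude = ["*.cache", "*__pycache__*"]

def is_watched(name: str) -> bool:
    included = any(fnmatch.fnmatch(name, p) for p in include)
    excluded = any(fnmatch.fnmatch(name, p) for p in exclude)
    return included and not excluded

# only files matching an include pattern and no exclude pattern survive
watched = [f for f in ["extension.toml", "module.py", "data.cache"] if is_watched(f)]
```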
Use the `reloadable` config setting to disable reloading. This will also block the reloading of all extensions this extension depends on. The extension can still be unloaded directly using the API. New extensions can also be added and removed at runtime.

## Extension interfaces

The Extension manager is implemented in `omni.ext.plugin`, with the interface `omni::ext::IExtensions` (for C++) and the `omni.ext` module (for python). It is loaded by `omni.kit.app`, and you can get an extension manager instance using its interface: `omni::kit::IApp` (for C++) and `omni.kit.app` (for python).

## Runtime Information

At runtime, a user can query various pieces of information about each extension. Use `omni::ext::IExtensions::getExtensionDict()` to get a dictionary for each extension with all the relevant information. For python use `omni.ext.ExtensionManager.get_extension_dict()`. This dictionary contains:

- Everything the extension.toml contains, under the same path
- An additional `state` section, which contains:
  - `state/enabled` (bool): Indicates if the extension is currently enabled.
  - `state/reloadable` (bool): Indicates if the extension can be reloaded (used in the UI to disable extension unloading/reloading)

## Hooks

Both the C++ and python APIs for the Extension system provide a way to hook into certain actions/phases of the Extension System to enable extending it.
If you register a hook like this:

```python
import omni.ext
import omni.kit.app

def on_before_ext_enabled(self, ext_id: str, *_):
    pass

manager = omni.kit.app.get_app_interface().get_extension_manager()
# Register the callback; keep a reference to the returned holder object,
# the hook stays active while it is alive.
self._hook = manager.get_hooks().create_extension_state_change_hook(
    self.on_before_ext_enabled,
    omni.ext.ExtensionStateChangeType.BEFORE_EXTENSION_ENABLE,
)
```

```python
# Extensions/Enable Extension
import omni.kit.app

manager = omni.kit.app.get_app().get_extension_manager()

# enable immediately
manager.set_extension_enabled_immediate("omni.kit.window.about", True)
print(manager.is_extension_enabled("omni.kit.window.about"))

# or next update (frame), multiple commands are batched
manager.set_extension_enabled("omni.kit.window.about", True)
manager.set_extension_enabled("omni.kit.window.console", True)
```

```python
# Extensions/Get All Extensions
import omni.kit.app

# there are a lot of extensions, print only first N entries in each loop
PRINT_ONLY_N = 10

# get all registered local extensions (enabled and disabled)
manager = omni.kit.app.get_app().get_extension_manager()
for ext in manager.get_extensions()[:PRINT_ONLY_N]:
    print(ext["id"], ext["package_id"], ext["name"], ext["version"], ext["path"], ext["enabled"])

# get all registered non-local extensions (from the registry)
# this call blocks to download registry (slow). You need to call it at least once, or use refresh_registry() for non-blocking.
manager.sync_registry()
for ext in manager.get_registry_extensions()[:PRINT_ONLY_N]:
    print(ext["id"], ext["package_id"], ext["name"], ext["version"], ext["path"], ext["enabled"])

# functions above print all versions of each extension. There is other API to get them grouped by name (like in ext manager UI).
# "enabled_version" and "latest_version" contains the same dict as returned by functions above, e.g. with "id", "name", etc.
for summary in manager.fetch_extension_summaries()[:PRINT_ONLY_N]:
    print(summary["fullname"], summary["flags"], summary["enabled_version"]["id"], summary["latest_version"]["id"])

# get all versions for particular extension
for ext in manager.fetch_extension_versions("omni.kit.window.script_editor"):
    print(ext["id"])
```

```python
# Extensions/Get Extension Config
import omni.kit.app

manager = omni.kit.app.get_app().get_extension_manager()

# There could be multiple extensions with the same name, but different versions.
# Extension id is: [ext name]-[ext version]. Many functions accept an extension id:
data = manager.get_extension_dict(manager.get_enabled_extension_id("omni.kit.window.script_editor"))

# Extension dict contains the whole extension.toml as well as some runtime data:

# package section
print(data["package"])

# is enabled?
print(data["state/enabled"])

# resolved runtime dependencies
print(data["state/dependencies"])

# time it took to start it (ms)
print(data["state/startupTime"])

# can be converted to python dict for convenience and to prolong lifetime
data = data.get_dict()
print(type(data))
```

# Get Extension Path

```python
# Extensions/Get Extension Path
import omni.kit.app

manager = omni.kit.app.get_app().get_extension_manager()

# There could be multiple extensions with the same name, but different versions.
# Extension id is: [ext name]-[ext version].
# Many functions accept an extension id.
# You can get the extension id of an enabled extension by name or by python module name:
ext_id = manager.get_enabled_extension_id("omni.kit.window.script_editor")
print(ext_id)
ext_id = manager.get_extension_id_by_module("omni.kit.window.script_editor")
print(ext_id)

# There are a few ways to get the fs path to an extension:
print(manager.get_extension_path(ext_id))
print(manager.get_extension_dict(ext_id)["path"])
print(manager.get_extension_path_by_module("omni.kit.window.script_editor"))
```

# Other Settings

## /app/extensions/disableStartup (default: false)

Special mode where extensions are not started (the python and C++ startup functions are not called). Everything else will work as usual. One use-case might be to warm-up everything and get extensions downloaded. Another use case is getting the python environment set up without starting anything.

## /app/extensions/precacheMode (default: false)

Special mode where all dependencies are solved and extensions downloaded, then the app exits. It is useful for precaching all extensions before running an app, to get everything downloaded and check that all dependencies are correct.

## /app/extensions/debugMode (default: false)

Output more debug information into the info logging channel.

## /app/extensions/detailedSolverExplanation (default: false)

Output more information after the solver finishes, explaining why certain versions were chosen and what the available versions were (more costly).

## /app/extensions/registryEnabled (default: true)

If set to false, disables falling back to the extension registry when the application couldn’t resolve all its dependencies; resolution then fails immediately.

## /app/extensions/skipPublishVerification (default: false)

Skip the verification of the publish status of extensions before publishing and assume they are all published. Use wisely.

### /app/extensions/excluded (default: [])

List of extensions to exclude from startup. Can be used with or without a version.
Before solving the startup order, all of those extensions are removed from all dependencies.

### /app/extensions/preferLocalVersions (default: true)

If true, prefer local extension versions over remote ones during dependency solving. Otherwise all are treated equally, so it can become likely that newer versions are selected and downloaded.

### /app/extensions/syncRegistryOnStartup (default: false)

Force sync with the registry on startup. Otherwise the registry is only enabled if dependency solving fails (i.e. something is missing). The `--update-exts` command line switch enables this behavior.

### /app/extensions/publishExtraDict (default: {})

Extra data to write into the extension index root when published.

### /app/extensions/fsWatcherEnabled (default: true)

Globally disable all filesystem watchers that the extension system creates.

### /app/extensions/mkdirExtFolders (default: true)

Create non-existing extension folders when adding an extension search path.

### /app/extensions/installUntrustedExtensions (default: false)

Skip the untrusted-extensions check when automatically installing dependencies, and install anyway.

### /app/extensions/profileImportTime (default: false)

Replace the global import function with one that sends events to carb.profiler. It makes all imported modules show up in a profiler. Similar to PYTHONPROFILEIMPORTTIME.

### /app/extensions/fastImporter/enabled (default: true)

Enable the faster python importer, which doesn’t rely on sys.path and manually scans extensions instead.

### /app/extensions/fastImporter/searchInTopNamespaceOnly (default: true)

If true, the fast importer will skip searching for python files in subfolders of the extension root that don’t match the module names defined in [[python.module]].

### Cache file existence checks in the fast importer

This speeds up startup time, but extension python code can’t be modified without cleaning the cache.

### Enable parallel pulling of extensions from the registry

# Extension Registries

## Publishing Extensions

The Extension system supports adding external registry providers for publishing extensions to, and pulling extensions from. By default, Kit comes with the omni.kit.registry.nucleus extension, which adds support for Nucleus as an extension registry.

When an extension is enabled, the dependency solver resolves all dependencies. If a dependency is missing in the local cache, it will ask the registry for that particular extension, and it will be downloaded/installed at runtime. Installation is just the unpacking of a zip archive into the cache folder (the app/extensions/registryCache setting).

The Extension system will only enable the Extension registry when it can’t find all extensions locally. At that moment, it will try to enable any extensions specified in the setting app/extensions/registryExtension, which by default is omni.kit.registry.nucleus. The Registry system can be completely disabled with the app/extensions/registryEnabled setting.

The Extension manager provides an API to add other extension registries and query any existing ones (omni::ext::IExtensions::addRegistryProvider, omni::ext::IExtensions::getRegistryProviderCount, etc.). Multiple registries can be configured to be used at the same time. They are uniquely identified with a name. The setting app/extensions/registryPublishDefault sets which one to use by default when publishing and unpublishing extensions. The API provides a way to explicitly pass the registry to use.

To properly publish your extension to the registry use the publishing tool, refer to: Publishing Extensions Guide

Alternatively, the `kit.exe --publish` CLI command can be used during development:

Example:

```
> kit.exe --publish omni.my.ext-tag
```

If there is more than one version of this extension available, it will produce an error saying that you need to specify which one to publish.
Example:

```
> kit.exe --publish omni.my.ext-tag-1.2.0
```

To specify the registry to publish to, override the default registry name:

Example:

```
> kit.exe --publish omni.my.ext-tag-1.2.0 --/app/extensions/registryPublishDefault="kit/mr"
```

If the extension already exists in the registry, publishing will fail. To force overwriting, use the additional `--publish-overwrite` argument:

Example:

```
> kit.exe --publish omni.my.ext --publish-overwrite
```

The version must be specified in a config file for publishing to succeed. All `[package.target]` config subfields are filled in automatically if unspecified:

- If the extension config has the `[native]` field, `[package.target.platform]` and `[package.target.config]` are filled with the current platform information.
- If the extension config has the `[native]` and `[python]` fields, the field `[package.target.python]` is filled with the current python version.
- If the `/app/extensions/writeTarget/kitHash` setting is true, the field `[package.target.kitHash]` is filled with the current kit githash.

An Extension package name will look like this: `[ext_name]-[ext_tag]-[major].[minor].[patch]-[prerelease]+[build]`. Where:

- `[ext_name]-[ext_tag]` is the extension name (initially coming from the extension folder).
- `[major].[minor].[patch]-[prerelease]` is as the `[package.version]` field of the config specifies.
- `[build]` is composed from the `[package.target]` field.
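As an illustrative sketch of that naming pattern (the helper function and the build string below are hypothetical, not part of the Kit API):

```python
# Illustrative composition of an extension package name from its parts.
def package_name(ext_name: str, version: str, build: str = "") -> str:
    # ext_name may already include a tag, e.g. "omni.my.ext-tag";
    # build metadata, when present, is appended after a "+".
    name = f"{ext_name}-{version}"
    return f"{name}+{build}" if build else name

package_name("omni.my.ext-tag", "1.2.0")            # 'omni.my.ext-tag-1.2.0'
package_name("omni.my.ext-tag", "1.2.0", "wx64.r")  # 'omni.my.ext-tag-1.2.0+wx64.r'
```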
## Pulling Extensions

There are multiple ways to get Extensions (both new Extensions and updated versions of existing Extensions) from a registry:

- Use the UI provided by the `omni.kit.window.extensions` extension.
- If any extensions that are specified in the app config file - or required through dependencies - are missing from the local cache, the system will attempt to sync with the registry and pull them.

That means, if you have version “1.2.0” of an Extension locally, it won’t be updated to “1.2.1” automatically, because “1.2.0” satisfies the dependencies. To force an update, run Kit with `--update-exts`.

Example:

```
> kit.exe --update-exts
```

## Pre-downloading Extensions

You can also run Kit without starting any extensions. The benefit of doing this is that they will be downloaded and cached for the next run. To do that, run Kit with `--ext-precache-mode`.

Example:

```
> kit.exe --ext-precache-mode
```

## Authentication and Users

The Omniverse Client library is used to perform all operations with the nucleus registry. Syncing, downloading, and publishing extensions requires signing in. For automation, 2 separate accounts can be explicitly provided, for read and write operations.

User account read/write permissions can be set for the `omni.kit.registry.nucleus` extension. The “read” user account is used for syncing with the registry and downloading extensions. The “write” user account is used for publishing or unpublishing extensions. If no user is set, it defaults to a regular sign-in using a browser.

By default, kit comes with default read and write accounts set for the default registry.
Accounts setting example:

```toml
[exts."omni.kit.registry.nucleus"]
accounts = [
    { url = "omniverse://kit-extensions.ov.nvidia.com", read = "[user]:[password]", write = "[user]:[password]" }
]
```

Where `read` is the read user account and `write` is the write user account. Both are optional. The format is “user:password”.

# Building Extensions

Extensions are a runtime concept. This guide doesn’t describe how to build them, or how to build other extensions which might depend on another specific extension at build-time. One can use a variety of different tools and setups for that. We do however have some best-practice recommendations. The best sources of information on that topic are currently:

- The example `omni.example.hello` extension (and many other extensions). Copy and rename it to create a new extension.

We also strive to use folder linking as much as possible. Meaning we don’t copy python files and configs from the source folder to the target (build) folder, but link them. This permits live changes to those files under version control to be immediately reflected, even at runtime. Unfortunately we can’t link individual files, because of Windows limitations, so folder linking is used. This adds some verbosity to the way the folder structure is organized.

For example, for a simple python-only extension, we link the whole python namespace subfolder:

`source/extensions/omni.example.hello/omni` – [linked to] –> `_build/windows-x86_64/debug/exts/omni.example.hello/omni`

For an extension with binary components, we link python code parts and copy binary parts.
We specify other parts to link in the premake file:

```
repo_build.prebuild_link { "folder", ext.target_dir.."/folder" }
```

When working with the build system, it is always a good idea to look at what the final `_build/windows-x86_64/debug/exts` folder looks like: which folder links exist, where they point to, which files were copied, etc. Remember that the goal is to produce **one extension folder** which will potentially be zipped and published. Folder links are just zipped as-is, as if they were actual folders.
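The folder-linking idea can be sketched in python (paths are hypothetical; the real build sets links up via premake's `repo_build.prebuild_link`, not python):

```python
import os
import tempfile

# Illustrative folder link: edits under the source folder are immediately
# visible through the build folder, with no copying involved.
root = tempfile.mkdtemp()
src = os.path.join(root, "source", "extensions", "omni.example.hello", "omni")
dst = os.path.join(root, "_build", "exts", "omni.example.hello", "omni")
os.makedirs(src)
os.makedirs(os.path.dirname(dst))
os.symlink(src, dst, target_is_directory=True)

# A file created under src shows up under dst right away:
open(os.path.join(src, "__init__.py"), "w").close()
```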
extensions_basic.md
# Getting Started with Extensions

This guide will help you get started creating new extensions for **Kit** based apps and sharing them with other people. While this guide can be followed from any **Kit** based app with a UI, it was written for and tested in [Create](https://docs.omniverse.nvidia.com/app_create/app_create/overview.html).

### Note

For more comprehensive documentation on what an extension is and how it works, refer to Extensions (Advanced).

### Note

We recommend installing and using [Visual Studio Code](https://code.visualstudio.com/) as the main developer environment for the best experience.

## 1. Open Extension Manager UI: Window -> Extensions

This window shows all found extensions, regardless of whether they are enabled or disabled, local or remote.

## 2. Create New Extension Project: Press “Plus” button on the top left

It will ask you to select an empty folder to create a project in. You can create a new folder right in this dialog with a right-click. It will then ask you to pick an extension name. It is good practice to match it with the python module that the extension will contain. Save the extension folder to a location convenient for your development work.

A few things will happen next:

- The selected folder will be prepopulated with a new extension.
- The `exts` subfolder will be automatically added to the extension search paths.
- The `app` subfolder will be linked (symlinked) to the location of your **Kit** based app.
- The folder gets opened in **Visual Studio Code**, configured and ready to hack!
- The new extension is enabled and a new UI window pops up.

The small “Gear” icon (to the right of the search bar) opens the extension preferences. There you can see and edit the extension search paths. Notice your extension added at the end.

Have a look at the `README.md` file of the created folder for more information on its content.

Try changing some python files in the new extension and observe the changes immediately after saving.
You can create new extensions by just cloning an existing one and renaming it. You should be able to find it in the list of extensions immediately.

## 3. Push to git

When you are ready to share it with the world, push it to some public git repository host, for instance: [GitHub](https://github.com/). A link to your extension might look like:

## Git URL as Extension Search Paths

The repository link can be added right into the extension search paths in the UI. To get new changes pulled in from the repository, click on the little sync button.

### Note

Git must already be installed (the `git` command available in the shell) for this feature to work.

## More Advanced Things To Try

### Explore kit.exe

From the Visual Studio Code terminal in a newly created project you have easy access to the **Kit** executable. Try a few commands in the terminal:

- `app\kit\kit.exe -h` to get started
- `app\kit\kit.exe --ext-folder exts --enable company.hello.world` to start only the newly added extension. It has one dependency, which will automatically start a few more extensions.
- `app\kit\omni.app.mini.bat` to run another **Kit** based app. More developer oriented, minimalistic and fast to start.

### Explore other extensions

**Kit** comes with a lot of bundled extensions. Look inside `app/kit/exts`, `app/kit/extscore`, and `app/exts`. Most of them are written in python. All of the source to these extensions is available and can serve as an excellent reference to learn from.
extensions_usd_schema.md
# USD Schema Extensions

USD libraries are part of the omni.usd.libs extension and are loaded as one of the first extensions, to ensure that the USD dlls are available to other extensions. Each USD schema is an individual extension that can be part of any repository. USD schema extensions are loaded after omni.usd.libs, and ideally before omni.usd.

Example of a schema extension config.toml file:

```toml
[core]
reloadable = false
# Load at the start, load all schemas with order -100 (with order -1000 the USD libs are loaded)
order = -100

[package]
category = "Simulation"
keywords = ["physics", "usd"]

# pxr modules to load
[[python.module]]
name = "pxr.UsdPhysics"

# python loader module
[[python.module]]
name = "usd.physics.schema"

# pxr libraries to be preloaded
[[native.library]]
path = "bin/${lib_prefix}usdPhysics${lib_ext}"
```

A schema extension contains the pxr::Schema, its plugin registry, and the config.toml definition file. Additionally, it contains a loading module, omni/schema/_schema_name, with a python `__init__.py` file containing the plugin registry code. Example:

```python
import os
from pxr import Plug

pluginsRoot = os.path.join(os.path.dirname(__file__), '../../../plugins')
physicsSchemaPath = pluginsRoot + '/UsdPhysics/resources'

Plug.Registry().RegisterPlugins(physicsSchemaPath)
```
ext_blast.md
# Blast Destruction

## Overview

The Omniverse™ Blast Destruction (omni.blast) extension integrates the NVIDIA Omniverse™ Blast SDK into NVIDIA Omniverse™ Kit applications. It supports authoring of destructible content, and also implements destruction in PhysX SDK-driven simulation.

## Introductory Video

## User Guide

### Interface

The Blast window is divided into panels described in the following subsections.

#### Fracture Controls

These settings are used to author destructibles from mesh prims.

| Control | Effect |
|---------|--------|
| Combine Selected | Combines the selected mesh prims and/or destructible instances into a “multi-root” destructible. |
| Fracture Selected | Fracture the selected mesh prim. |
| Num Voronoi Sites | The number of pieces in which to fracture the selected meshes. |
| Random Seed | Seed value of the pseudorandom number generator used during fracture operations. |
| Auto Increment | If checked, the Random Seed will automatically increment after each fracture operation. |

### Select Parent

Select the parent(s) of the currently selected chunk(s). Removes children from the selection set.

### Select Children

Select the children of the currently selected chunk(s). Removes parents from the selection set.

### Select Source

Select the source prim associated with any part of the destructible.

### Contact Threshold

The minimum contact impulse to trigger a contact event with a destructible.

### Max Contact Impulse

Applied to kinematic destructibles; limits the force applied to an impacting object.

### Reset to Default

Set Max Contact Impulse back to ‘inf’ for the selection.

### Interior Material

The material applied to faces generated through fracture. Can be set before or after fracture, and will be applied to all descendants of the selected chunk(s).

### Interior UV Scale

Stretch to apply to material textures on newly-created interior faces.

### Apply Interior UV Scale

Apply the stretch value Interior UV Scale to the selected chunks.
### Recalculate Bond Areas

Recalculate the bond areas of the selected instances. Use this after scaling an instance to ensure correct areas. Bond areas are used in stress pressure calculations.

### Recalculate Attachment

Search for nearby static or dynamic geometry and form bonds with that geometry.

### Make External Bonds Unbreakable

Bonds created between a blast destructible and external geometry will never break.

### Remove External Bonds

Remove all bonds to external geometry.

### Create Instance

Creates an instance based on the selected destructible base or instance. On instances, this is equivalent to using Kit’s duplicate command on the instance prim.

### Reset Blast Data

Destroys fracture information, depending on the type of prim(s) selected:

- Base selected - destroys all destruction info (including combine) and restores the original mesh.
- Instance selected - destroys the selected instance.
- Anything else - searches for chunks under the selection and resets all fracture information for them.

### Important

Fracturing operations can increase geometry counts exponentially and have the potential to overwhelm computer resources. Use caution when increasing Num Voronoi Sites.

### Instance Stress Settings

Beginning with omni.blast-0.11, damage in Blast has been unified into a stress model simulated for each destructible instance. External accelerations applied to support-level chunks are used as input, and the solver determines the internal bond forces which are required to keep the bonded chunks from moving relative to one another. Given each bond’s area, these forces are translated into pressures. The material strength of the destructible is described in terms of pressure limits which it can withstand before breaking. The pressure is decomposed into components: the pressure in the bond-normal direction, and the pressure perpendicular to the bond normal.
Furthermore, the normal component can be either compressive (if the chunks are being pushed together) or tensile (if the chunks are being pulled apart). The pressure component perpendicular to the bond normal is called shear. For each component (compressive, tensile, or shear), the user can specify an "elastic limit" and a "fatal limit," described in the table below. Damage has units of acceleration, which is applied to the support-level chunk at the damage application point. If "Stress Gravity Enabled" is checked (see the table below), gravitational acceleration is applied to each support chunk every frame of the stress simulation. If "Stress Rotation Enabled" is checked, centrifugal acceleration is calculated and applied to each support chunk as well.

The stress solver replaces the damage shaders of older versions of Blast. With the stress model, damage spreads naturally through the system and fracture occurs because of physically modeled limits. Impact damage is applied by translating the contact impulse into an acceleration applied to the contacting support chunk. When fractured chunk islands (actors) break free, the excess force (that which exceeded the bonds' limits) is applied to the separating islands, so pieces naturally fly off with higher speeds when the system is hit harder. The excess force and contact impulse effects are each adjustable with a multiplier (see Residual Force Multiplier and Impact Damage Scale).

*Figure: This wall has fractured under its own weight due to the pressure of the overhang jutting out to the right.*

These settings are applied to the selected destructible instance(s). They control how stress is processed during simulation.

| Control | Effect |
|---------|--------|
| Stress Gravity Enabled | Whether or not the stress solver includes gravitational acceleration. |
| Stress Rotation Enabled | Whether or not the stress solver includes rotational (centrifugal) acceleration. |
| Maximum Solver Iterations Per Frame | Controls how many passes the solver can make per frame. Higher values converge on a stable solution faster. |
| Residual Force Multiplier | Multiplies the residual forces on bodies after their connecting bonds break. |
| Stress Limit Presets | Set stress limits based on various physical substances. Can be used as a rough starting point, then tweaked for a specific use case. |
| Compression Elastic Limit | Stress limit (in megapascals) under compression at which bonds start taking damage. |
| Compression Fatal Limit | Stress limit (in megapascals) under compression at which bonds break instantly. |
| Tension Elastic Limit | Stress limit (in megapascals) under tension at which bonds start taking damage. Use a negative value to fall back on the compression limit. |
| Tension Fatal Limit | Stress limit (in megapascals) under tension at which bonds break instantly. Use a negative value to fall back on the compression limit. |
| Shear Elastic Limit | Stress limit (in megapascals) for linear stress perpendicular to bond directions at which bonds start taking damage. Use a negative value to fall back on the compression limit. |
| Shear Fatal Limit | Stress limit (in megapascals) for linear stress perpendicular to bond directions at which bonds break instantly. Use a negative value to fall back on the compression limit. |
| Reset All to Default | Reset the values in this panel to their defaults. |

### Instance Damage Settings

These settings are applied to the selected destructible instance(s). They control how damage is processed during simulation.

| Control | Effect |
|---------|--------|
| Impact Damage Scale | A multiplier; contact impulses are multiplied by this amount before being used as stress solver inputs. If not positive, impact damage is disabled. |
| Allow Self Damage | If on, chunks may damage other chunks which belong to the same destructible. |

### Global Simulation Settings

These settings are general simulation settings.

| Control | Effect |
|---------|--------|
| Max New Actors Per Frame | Only this many Blast actors may be created per frame. Additional actors will be delayed to subsequent frames. |

## Debug Visualization

These settings are used to visualize various aspects of a destructible.

### Controls

| Control | Effect |
|---------|--------|
| Explode View Radius | When not simulating, separates the chunks for inspection and/or selection. |
| View Chunk Depth | Which chunk hierarchy depth to render while in exploded view. |
| Visualization Mode | Controls which instances debug data is rendered for (see the Visualization Mode table below). |
| Visualization Type | Controls what debug data is rendered (see the Visualization Type table below). |
### Visualization Mode

| Visualization Mode | Description |
|--------------------|-------------|
| Off | Disable the Blast debug visualization system. |
| Selected | Only render debug visualization for the selected actor/instance. |
| On | Render debug visualization for all instances. |

### Visualization Type

| Visualization Type | Description |
|--------------------|-------------|
| Support Graph | Shows representations of chunk centroids and bonds (drawn between the centroids). External bonds have a brown square around the bond centroid to distinguish them. The bond colors have meaning: |
| | Green - the bond's health is at or near its full value. |
| | Red - the bond's health is near zero. (Zero-health bonds are "broken" and not displayed.) |
| | Green -> Yellow -> Red continuum - the bond's health is somewhere between full value and zero. |
| | Light blue - the bond is an unbreakable external bond. |
| Max Stress Graph | Shows the maximum of the stress components for each bond, by coloring the bond lines drawn between the centroids. The colors have meaning: |
| | Green -> Yellow -> Red continuum - the stress is between 0 (green) and the bond's elastic limit (red). |
| | Blue -> Magenta continuum - the stress is between the bond's elastic limit (blue) and fatal limit (magenta). |
| Compression Graph | If the bond is under compression, shows the compression component of stress for each bond. Colors have the same meaning as described for the Max Stress Graph. |
| Tension Graph | If the bond is under tension, shows the tension component of stress for each bond. Colors have the same meaning as described for the Max Stress Graph. |
| Shear Graph | Shows the shear component of stress for each bond. Colors have the same meaning as described for the Max Stress Graph. |
| Bond Patches | Render bonded faces between chunks with matching colors. |

## Debug Damage Tool Settings

These settings control damage applied with Shift+B+(Left Mouse Button) during simulation. This is only intended for testing the behavior of a destructible.

| Control | Effect |
|---------|--------|
| Damage Amount | The base damage amount (acceleration in m/s^2) applied to the nearest support chunk in a destructible instance. |
| Damage Impulse | The outward impulse applied to rigid bodies which lie within the damage radius after fracture. This is in addition to the excess forces applied by the stress solver to chunks when they break free. |
| Damage Radius | The distance from the damage center (where the cursor is over scene geometry) to search for destructibles to damage. |

## Reset Button

This button resets all controls in the Blast window to their default settings.

## Push to USD Button

This pushes the current runtime destruction data to USD, allowing the scene to be saved and later restored in the same destroyed state.

## OmniGraph Blast Node Documentation

This section describes the Blast nodes for use in the OmniGraph system. You can find them under "omni.blast" in the "Add node" right-click menu or the pop-out side bar.

NOTE: The nodes automatically update the USD stage. It is possible for them to get out of sync. If that happens, just save and reload the scene. That will cause the Blast data to regenerate and should bring things back in sync.

### Authoring: Combine

The Combine node takes two mesh or destructible prims and combines them into a single destructible. These nodes can be chained together to create complex destructibles built out of many parts. The output can also be sent to Fracture nodes to break the result down. Note that fracturing after a combine applies to all parts touched by the point cloud. If you want points to apply only to a specific part, run that part through Fracture first, then combine, to simplify the process.
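The damageMinRadius/damageMaxRadius inputs listed below define a radial falloff for user-generated damage: full damage inside the minimum radius, none outside the maximum. A minimal sketch of such a falloff (a hand-written illustration assuming a linear ramp between the two radii, not the Blast implementation):

```python
def damage_falloff(distance, damage, min_radius, max_radius):
    """Scale a damage amount by distance from the damage center.

    Full damage inside min_radius, no damage outside max_radius,
    and (assumed) linear interpolation in between.
    """
    if distance <= min_radius:
        return damage
    if distance >= max_radius:
        return 0.0
    t = (distance - min_radius) / (max_radius - min_radius)
    return damage * (1.0 - t)

print(damage_falloff(0.5, 100.0, 1.0, 3.0))  # 100.0 (inside min radius)
print(damage_falloff(2.0, 100.0, 1.0, 3.0))  # 50.0 (halfway through the falloff)
print(damage_falloff(4.0, 100.0, 1.0, 3.0))  # 0.0 (outside max radius)
```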
| Input | Description |
|-------|-------------|
| prim1 | Prim to use as input to the combine. Connect to the "destructible" output from another Blast node or the "output" from an "import USD prim data" node. The import node should be connected to a mesh or an xform/scope containing meshes. |
| prim2 | Prim to use as input to the combine. Connect to the "destructible" output from another Blast node or the "output" from an "import USD prim data" node. The import node should be connected to a mesh or an xform/scope containing meshes. |
| contactThreshold | Force from an impact must be larger than this value to be used to apply damage. Set higher to prevent small movements from breaking bonds. |
| bondStrength | Base value to use for bond strength. How it is applied depends on the mode used. Bond strength is automatically recalculated when this changes. |
| bondStrengthMode | Controls how the Default Bond Strength is used. Bond strength is automatically recalculated when this changes. |
| maxContactImpulse | Force that PhysX can use to prevent objects from penetrating. It is only used for kinematic destructibles that are attached to the world. Setting the value low can approximate brittle materials, giving the bonds a chance to break before the objects are pushed apart. |
| damageMinRadius | Full damage from a user-generated fracture event is applied inside this radius. |
| damageMaxRadius | No damage from a user-generated fracture event is applied outside this radius. |
| impactDamageScale | Scale physical impacts by this amount when applying damage from collisions. |
| impactClusterRadius | If positive, contact reports will be aggregated into clusters of approximately the given radius, and damage accumulated per cluster. Each cluster is reported as one damage event. |
| allowSelfDamage | If on, chunks from a destructible actor may cause damage to sibling actors. Default behavior for legacy assets is to disable self-damage, which changes the legacy behavior. |

| Output | Description |
|--------|-------------|
| destructible | Result of the combine operation. Can be used as the input prim to another Blast authoring node. |

### Authoring: Fracture

The Fracture node takes a mesh or destructible and fractures it based on a point cloud input. For best results, the mesh should be closed and not contain degenerate triangles. Input points that are inside the mesh will be used to generate Voronoi sites; points outside the mesh are ignored.

Sending an already fractured mesh into another Fracture node will create layers of fracture. The newly generated chunks from the first fracture will be fractured again using another set of points. This allows you to sculpt the detail and density in the areas where you want it, without requiring the entire mesh to be broken up at that fidelity. During simulation, if all children of a given parent chunk are still intact, the parent is rendered instead.

| Input | Description |
|-------|-------------|
| prim | Prim to use as input to the fracture. Connect to the "destructible" output from another Blast node or the "output" from an "import USD prim data" node. The import node should be connected to a mesh prim. |
| points | Point cloud to use for the fracture. Points inside the mesh will be used as Voronoi sites; points outside will be ignored. |
| contactThreshold | Force from an impact must be larger than this value to be used to apply damage. Set higher to prevent small movements from breaking bonds. |
| bondStrength | Base value to use for bond strength. How it is applied depends on the mode used. Bond strength is automatically recalculated when this changes. |
| bondStrengthMode | Controls how the Default Bond Strength is used. Bond strength is automatically recalculated when this changes. |
| | "areaInstance": (Default) Multiply the default value by the area of the bond using the destructible instance scale. |
| | "areaBase": Multiply the default value by the area of the bond using the destructible base scale. |
| | "absolute": Use the default value directly. |
| interiorMaterial | Path to the material to use for interior faces created through fracture. |
| maxContactImpulse | Force that PhysX can use to prevent objects from penetrating. It is only used for kinematic destructibles that are attached to the world. Setting the value low can approximate brittle materials, giving the bonds a chance to break before the objects are pushed apart. |
| interiorUvScale | UV scale to use on interior faces generated through fracture. |
| damageMinRadius | Full damage from a user-generated fracture event is applied inside this radius. |
| damageMaxRadius | No damage from a user-generated fracture event is applied outside this radius. |
| impactDamageScale | Scale physical impacts by this amount when applying damage from collisions. |
| impactClusterRadius | If positive, contact reports will be aggregated into clusters of approximately the given radius, and damage accumulated per cluster. Each cluster is reported as one damage event. |
| allowSelfDamage | If on, chunks from a destructible actor may cause damage to sibling actors. Default behavior for legacy assets is to disable self-damage, which changes the legacy behavior. |

### Events: Flow Adapter

The Events Flow Adapter node takes a bundle of active events and translates it into outputs that can drive Flow emitter prim attributes. Currently only one layer is supported for all events. Work is ongoing to support mapping visible materials to Flow data via a new schema, and for Flow emitters to emit on multiple layers from a single input stream of data. Events will then use material data to drive emission specific to each material.

#### Input

| Input | Description |
|-------|-------------|
| events | Bundle of events that can be processed by adapter nodes. |

#### Output

| Output | Description |
|--------|-------------|
| positions | Unique world-space vectors. |
| faceVertexIndices | Indices into "positions" used to build faces. |
| faceVertexCounts | Defines how many indices make up each face. |
| velocities | Per-vertex velocity. |
| coupleRateSmokes | Per-vertex emission rate. |

#### Warning

The outputs prefixed with "subset" are not intended to be used yet. They will support multiple layers in the event bundle being passed to Flow, once materials can be mapped to Flow layer IDs and Flow emitters support multiple layers.

#### Experimental Output

| Experimental Output | Description |
|---------------------|-------------|
| subsetLayers | Flow layer IDs referenced by faces. It is possible for there to be duplicates. |
| subsetFaceCounts | Number of faces each layer ID slot represents. Faces are grouped by layer ID. |
| subsetEnabledStatus | Tells the Flow emitter whether each block of faces is enabled. Allows other data to be cached. |

### Events: Gather

The Events Gather node takes no inputs and produces a bundle of active events. These can come from bonds breaking and from contacts between objects.

- Destruction events are generated for all Blast-based prims when bonds break.
- Collision events are reported for all prims that have the rigid body and contact report APIs applied.

The output of this can be sent to adapter nodes to generate responses to the events.

Visual materials should have the **Rigid Body Material** applied. This can be added by selecting the material and choosing **Add -> Physics -> Rigid Body Material**. It is needed to report the correct materials for collision events. This is not strictly required yet, but will be when multiple materials are fully supported by this system.

# Demo Scenes

Several demo scenes can be accessed through the physics demo scenes menu option (`Window > Simulation > Physics / Demo Scenes`).
This will enable a Physics Demo Scenes window, which has a Blast Samples section.

# Tutorials

### Getting Started

This tutorial shows initial setup. First enable the Blast extension (if it isn't already):

```plaintext
Navigate to Window > Extensions
Enable the Blast Destruction extension
```

You can check `Autoload` if you will be working with Blast frequently and don't want to enable it by hand each time you load the application. Make sure it says `UP TO DATE` at the top of the extension details; if not, update to the latest release of the extension.

Make sure there is something for the mesh to interact with:

```plaintext
Navigate to Create > Physics > Physics Scene
And Create > Physics > Ground Plane
```

Next, create a mesh:

```plaintext
Navigate to Create > Mesh and pick a mesh primitive type to create (Cube, Sphere, etc.)
```

Be sure not to select `Create > Shape`; shapes do not support fracture. It must be a "Mesh" type prim. Alternatively, you can load a mesh from another USD source; just make sure it is a closed mesh.

Set up any physics properties you want on the mesh:

```plaintext
Right click on the mesh and navigate to Add > Physics
```

- **Make the object dynamic and collidable:**
  - Set the Rigid Body with Colliders Preset to make it dynamic and collidable.
- **Change physics properties:**
  - You can then change physics properties in the Properties > Physics panel.

Now fracture it:

- **Select the mesh to fracture:**
  - Make sure the mesh to fracture is selected.
- **Locate the Blast pane:**
  - Locate the Blast pane (by default it docks in the same area as the Console).
- **Adjust Blast settings:**
  - Adjust Blast settings as desired (see above for what the settings control).
- **Fracture the selected mesh:**
  - Click the Fracture Selected button to fracture it.
  - This will deactivate the original mesh and select the new Blast instance container (a prim named after the source mesh with __blastInst in the name).
  - The original mesh is not altered and can easily be restored later if desired.
- **Review the fracture:**
  - Scrub/adjust the Debug Visualization > Explode View Radius in the Blast panel to review the fracture.
- **Create additional instances:**
  - Additional copies of the destructible can be made by clicking the "Create Instance" button in the Blast panel or running the duplicate command (Ctrl + D or Right Click > Duplicate).

Running the simulation (by pressing Play), the destructible falls to the ground and fractures due to impact damage.

### Reset Fracture

Here we see how to undo a fracture operation:

- **Select the original mesh or Blast base container:**
  - Select the original mesh with Blast applied, or the Blast base container (a prim named after the source mesh with __blastBase in the name).
- **Reset Fracture Data:**
  - Click the Reset Fracture Data button in the Fracture Controls section.
- **Confirm deletion:**
  - A dialog will warn that deletion is permanent; agree to delete.
- **Re-activate the source mesh prim:**
  - This removes all generated Blast data and re-activates the source mesh prim.

In the video we change the Num Voronoi Sites number and re-fracture.

### Multilevel Fracture

Here we see how to fracture recursively. This allows for higher overall detail with lower rendering and simulation cost, by using a parent chunk for rendering and simulation until at least one child breaks off. It also allows for non-uniform fracture density throughout the mesh. Chunks of a fractured mesh may be broken down further:

1. Select a mesh with Blast applied
2. Adjust the explode view radius to make chunk selection easy
3. Select the desired chunk(s) to fracture further. You can also select them directly from the Stage view under the Blast base container while in explode view
4. Adjust the `Num Voronoi Sites` in the Blast panel to the desired number. Each chunk will be split into that many pieces
5. Click the `Fracture Selected` button to fracture the selected chunks
6. Changing the `View Chunk Depth` value selects different hierarchy depths to display
7. You may use the `Select Parent` and `Select Children` buttons to select up and down the hierarchy
8. Repeat as needed to get the desired level of granularity and fracture density throughout the mesh

### Static Attachment

This tutorial shows the preferred way of creating a "static" destructible, by emulating an attachment to the static world:

1. Create a static collision mesh
   - Create/load a mesh
   - Right click on it
   - Navigate to `Add > Physics > Colliders Preset`
2. Create a destructible prim in the usual way (see Getting Started above). Leave the source prim as dynamic; do not set it to kinematic or static
3. Place the destructible prim adjacent to (or slightly overlapping) a static collision mesh
4. Select the Blast instance container prim
5. Set the state of `Make External Bonds Unbreakable` based on how you want the base to behave
   - When checked, the base pieces will remain kinematic no matter how much damage they take
   - This is a good idea if the Blast instance container deeply penetrates the static geometry; otherwise physics will force the pieces apart when they become dynamic
   - When unchecked, the chunks can take damage and break free, becoming new dynamic rigid bodies
6. Press the `Recalculate Attachment` button to form bonds between the destructible and nearby static geometry ("external" bonds)
7. `Debug Visualization > Visualization Type > Support Graph` shows where external bonds have been formed when `Debug Visualization > Visualization Mode` is set to `Selected` or `On`

When simulating, the bonds formed keep the destructible "static" (kinematic). When bonds are damaged, chunk islands that are not connected via bonds to static geometry become separate dynamic bodies. The chunks that remain connected to static geometry via bonds remain kinematic.

### Dynamic Attachment

Here we see how to turn a part of a rigid body into a destructible. If the rigid body consists of multiple meshes, we may select one and make it destructible:

1. Create an Xform prim to contain the meshes with `Create > Xform`
2. Create/load meshes and add them to the Xform container
3. Set the hierarchy as dynamic and collidable
   - Right click on the Xform prim
   - Navigate to `Add > Physics > Rigid Body with Colliders Preset`
   - This will automatically set the contained meshes as colliders
4. Select one or more of the meshes in the container to make destructible
5. Click the `Fracture Selected` button to fracture the mesh as usual
6. Set the state of `Make External Bonds Unbreakable` based on how you want the base to behave
   - The same rules apply as for Static Attachment above
7. Press the `Recalculate Attachment` button
   - Bonds will be formed with any adjacent or overlapping collision geometry from the same rigid body

When simulating, the destructible mesh moves with its parent physics as a single rigid body. When bonds are damaged, chunk islands that are not connected via bonds to the parent physics geometry become separate dynamic bodies. The chunks that remain connected to the parent physics geometry via bonds remain rigidly connected to the physics body. Note: if no bonds were formed (because there was no adjacent or overlapping collision geometry from the parent rigid body), then all chunk islands become dynamic when the destructible is fractured.

### Stress Damage

This tutorial covers the basics of the new stress damage system in Blast destruction, available in omni.blast-0.11. The viewer will learn how it relates to the previous damage system, how various sorts of damage are integrated within the stress framework, and how to adjust stress settings in a destructible.
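The elastic and fatal limits adjusted in this tutorial gate how a bond responds to a given pressure. A small illustration of the documented scheme (a hand-rolled sketch, not Blast code; pressures in megapascals):

```python
def bond_state(pressure, elastic_limit, fatal_limit):
    """Classify a bond's response to a pressure component (MPa)."""
    if pressure >= fatal_limit:
        return "broken"         # at or past the fatal limit, bonds break instantly
    if pressure >= elastic_limit:
        return "taking damage"  # between the limits, bond health is reduced
    return "intact"             # below the elastic limit, no damage

# Limits from the tutorial: Compression Elastic Limit 0.05, Compression Fatal Limit 0.1
for p in (0.01, 0.07, 0.2):
    print(p, bond_state(p, 0.05, 0.1))
```

Lowering both limits, as the tutorial does in its final step, simply moves these thresholds down, so smaller overhangs produce enough pressure to break bonds.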
- Create a "wall" by first creating a mesh cube
  - `Create > Mesh > Cube`
  - With the new cube selected, in the Property window set the Transform > Scale (x, y, z) to 5.0, 2.0, 0.2, and Translate y = 100.0
  - Right click on the cube (wall)
  - Navigate to `Add > Physics > Rigid Body with Colliders Preset`
- Create the destructible wall
  - Select the wall in the viewport
  - In the Blast window, under the `Fracture Controls` panel, set `Num Voronoi Sites` to 1000
  - Click the `Fracture Selected` button. There will be a long pause until the controls become responsive and the Cube__blastBase and Cube__blastInst prims appear in the Stage view
  - With Cube__blastInst selected, press the `Recalculate Attachment` button to form permanent bonds between the destructible and the ground
- Set stress limits and debug damage amount
  - With Cube__blastInst selected, in the `Instance Stress Settings` panel, set `Compression Elastic Limit` to 0.05 and `Compression Fatal Limit` to 0.1
  - In the `Debug Damage Tool Settings` panel, set `Damage Amount` to 25000.0
- Check non-stress behavior
  - With Cube__blastInst selected, in the `Instance Stress Settings` panel, uncheck both `Stress Gravity Enabled` and `Stress Rotation Enabled`
  - Deselect the wall (you can deselect all by hitting the Esc key); otherwise outline drawing of the 1000-piece wall causes a large hitch
  - Shift+B+(Left Click) with the mouse cursor on the wall to apply damage
  - You can damage along the bottom of the wall, and the top will only break off when you've completely cut through to the other side
- Use the full stress solver
  - With Cube__blastInst selected, in the `Instance Stress Settings` panel, ensure both `Stress Gravity Enabled` and `Stress Rotation Enabled` are checked
  - Deselect the wall (you can deselect all by hitting the Esc key)
  - Shift+B+(Left Click) with the mouse cursor on the wall to apply damage
  - You can damage along the bottom of the wall, but before you cut all the way across, the stress of the overhanging piece should cause it to snap off on its own
- Repeat with a weaker wall (lower pressure limits)
  - With Cube__blastInst selected, in the `Instance Stress Settings` panel, set `Compression Elastic Limit` to 0.025 and `Compression Fatal Limit` to 0.05
  - Deselect the wall (you can deselect all by hitting the Esc key)
  - In the `Debug Damage Tool Settings` panel, set `Damage Amount` to 10000.0
  - Shift+B+(Left Click) with the mouse cursor on the wall to apply damage
  - Now as you damage along the bottom of the wall, it doesn't take as large an overhang for the stress to cause it to snap off

**Note**

- If Shift+B+(Left Mouse Button) does not fracture a destructible, try increasing Damage Amount or Damage Radius.
- Changing material(s) requires re-fracturing.
- Physics properties of the source asset propagate to the fractured elements during authoring.
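As a rough illustration of why increasing Damage Radius can help, the sketch below (a hypothetical helper, not the omni.blast API) searches chunk centroids within Damage Radius of the cursor hit point and applies Damage Amount to the nearest one; if nothing is in range, no damage is applied:

```python
import math

def apply_debug_damage(hit_point, chunk_centroids, damage_amount, damage_radius):
    """Return (chunk_index, damage) for the nearest support chunk in range, else None.

    Mirrors the documented behavior: damage is applied to the nearest
    support chunk found within Damage Radius of the damage center.
    """
    best_index, best_dist = None, damage_radius
    for i, centroid in enumerate(chunk_centroids):
        dist = math.dist(hit_point, centroid)
        if dist <= best_dist:
            best_index, best_dist = i, dist
    if best_index is None:
        return None  # nothing in range: try increasing Damage Radius
    return best_index, damage_amount

centroids = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (10.0, 0.0, 0.0)]
print(apply_debug_damage((1.9, 0.0, 0.0), centroids, 25000.0, 3.0))   # (1, 25000.0)
print(apply_debug_damage((100.0, 0.0, 0.0), centroids, 25000.0, 3.0))  # None
```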