1_0_0_48.md
# 1.0.48

**Release Date:** Feb 2021

## New Features

- News tab – shows the latest information about Omniverse
- Show Nucleus Web on the collaboration tab
- Improved keyboard navigation and accessibility
- Update info for installed apps and connectors automatically when the library tab is opened
- Improved the drag area for the main window
- Remove failed installers from the installation queue automatically
- Added a button to clear the search input
- Added a button to open the logs location
- Allow users to copy notification text
- Hide Launcher when started with the system
- Change the bell color to red if notifications contain errors
- Added a header for error notifications
- Added a link to show open-source licenses used in the launcher
- Create a user session in System Monitor after Nucleus is installed
- Show loading indicators and errors on the startup window

## Improvements

- Fixed installation controls that were not clickable
- Ignore OS errors when an installation is cancelled
- Fixed problems with loading internal launcher settings
- Fixed problems with initialization during authentication
- Fixed a bug where users went back in the collaboration tab and saw ‘null’ instead of a data path
- Fixed a bug where users got redirected to a broken Nucleus page when they clicked a notification
- Fixed left spacing in component details on the exchange tab
- Fixed issues with invalid usernames specified during the installation of the collaboration service
- Fixed an issue where users were not prompted to select data paths or install Cache
- Fixed an issue where previous Cache versions were not deleted automatically after updates
- Fixed the launch button on the library tab displaying “Up to date” when an update is available
- Fixed an issue where the “Cancel” button was visible when installing components
- Fixed text overflow in the installation progress
1_0_42.md
# 1.0.42

**Release Date:** Jan 2021

- Added News section to display content from Omniverse News in the launcher
- Fixed collecting hardware info on Linux when lshw returns an array
- Add a login session in System Monitor when Nucleus is installed
- Moved all licenses to the PACKAGE-LICENSES folder, added a LICENSES link to the about dialog
- Fixed an issue where users were not prompted to select data paths or install Cache
1_0_50.md
# 1.0.50

**Release Date:** March 2021

## Fixed

- Catch unexpected Starfleet responses and return the error that tells users to log in.
- Fixed licenses link not working on Linux.

## Removed

- Remove debug noise in logs from the auto-updater.
1_1_2.md
# 1.1.2

**Release Date:** March 2021

## Spotlight Features

The new “Learn Tab” available in Launcher gives you quick and immediate “in-launcher” access to our video learning portal. From introductory content for the beginner to highly focused deep dives for the experienced, Omniverse Learning Videos are now just a click away.

## New Capabilities

- Show available updates for components on the exchange tab.
- Show component versions in the list view on the exchange tab.
- Added `omniverse-launcher://exit` command to close the launcher.
- Register a custom protocol handler on Linux automatically.
- HTTP API to get the current authentication info.
- HTTP API to get a list of installed components and their settings.
- Added Learn tab.
- Added new options to group and sort content on the exchange tab.
- Added the list view for components on the exchange tab.
- Use the `omniverse-launcher://` custom protocol to accept commands from other apps.
- Added the telemetry service for events from external applications.

## Fixed/Altered

- Changed the aspect ratio of images used in component cards to 16:9.
- Fixed focusing the search bar automatically when nothing was typed in the input.
- Fixed reinstalling components that were marked as installed after a restart.
- Changed the gap between cards on the Exchange tab using the grid view.
- Fixed refreshing News and Learn pages when users click on the header tabs.
- Fixed News and Learn links to load webpages without headers and footers.
- Fixed the scrollbar on the Exchange tab not working correctly when dragged with a mouse.
- Fixed the clicking area for the notification bell.
- Fixed Nucleus showing up in the library.
- Fixed the “Uninstall” button position in the component settings dialog.
- Fixed the search input losing focus after typing.
- Fixed losing the search filters after selecting a card on the exchange tab.
- Changed how content is structured and searched on the exchange tab – moved the Apps and Connectors categories to the left menu.
- Improved the load speed and performance on the exchange tab.
- Show a placeholder in the installation queue if no downloads are queued.
- Load messages displayed in the footer from the server.
- Match the font size used for links in the settings dialog.
- Updated links on the Collaboration tab.
- Fixed extracting files from zip archives with a read-only flag on Windows.

## Bug Fixes

- Fixed an error that crashed the browser page and didn’t let users log in.
- Fixed showing invalid error messages in the exchange tab when Starfleet returns an unexpected response body.
- Fixed expiration of the authentication token.
1_2_8.md
# 1.2.8

**Release Date:** Aug 2021

## Spotlight Features

**Previous Versions**

Users can now install previous versions of Omniverse via a new and improved interface.

**Easy Links**

When opening files for the first time from a linked location, Launcher now confirms which app to open them with. Users can also set the default app manually in settings.

## Added

- Allow customizing system requirements for installable components.
- Display a notification when applications are successfully uninstalled.
- Added a smooth transition animation for the library tab.
- Added thin packaging support – allows splitting large components into smaller packages that can be reused.
- Added an “Exit” button to the user menu.
- Support installing previous versions of applications and connectors.
- Added a new dialog to select the default application used to open Omniverse links.
- Support markdown for component descriptions.
- Show an error message on startup if Launcher can’t read the library database file.

## Changed

- Scale the side menu on the exchange tab based on the screen size.
- Updated Starfleet integration to use `name` instead of `preferred_username`.
- Renamed the Download Launcher button to Download Enterprise Launcher.
- Display the “News” tab inside of Launcher.
- Use thumbnail images on the exchange tab to optimize page load time.
- Disable the Launch button for three seconds after a click to prevent launching apps multiple times.
- Display an error when iframes can’t load the external web page.
- Update privacy settings on a new login.
- Renamed the “Collaboration” tab to “Nucleus”.
- Updated “News” and “Learn” links to hide the website header.
- Support tables and headers for component descriptions via Markdown and HTML.
- Changed the inactive tab color on the library tab.
- Made Launcher errors more readable.
- “News” and “Learn” tabs now open a web browser instead of showing these pages in iframes.

## Fixed

- Fixed issues where users could install or update the same app more than once.
- Fixed resizing of the main window at the top.
- Fixed issues with scrolling the exchange tab when the featured section is available.
- Fixed showing the main window on system startup (Launcher will be hidden in the system tray).
- Ignore system errors when Launcher removes installed components.
- Fixed an issue where users could not change their current account without a reboot.
- Fixed a race condition when users rapidly click on the installer pause button multiple times.
- Fixed an issue with installers not queueing up in the correct order.
- Fixed a missing vendor prefix for desktop files on Linux to register a custom protocol handler.
- Fixed issues with floating carousel images for featured components.
- Preserve unknown fields in the privacy.toml file.
- Invalidate cached HTTP responses on startup.
- Fixed issues with cached URLs for loading applications and their versions.
- Fixed installing previous versions of applications that don’t support side-by-side installations.
- Fixed an issue where the thin package installer did not create intermediate folders for installed packages.
- Refresh auth tokens when external apps query the /auth endpoint.
- Fixed displaying a loading state on the Nucleus tab if the Nucleus installer fails.
- Fixed an issue with components not being marked as installed.
- Fixed sorting of exchange components in the left menu.
- Fixed displaying hidden components on the library tab after installation.
- Allow Launcher to start without downloading the GDPR agreement if it’s accepted already.
- Fixed running applications that require the `finalize` script after install.

## Removed

- Launcher Cleanup tool disabled from running during uninstall/reinstall on Windows.
- Removed OS and Video Driver from system requirements.
1_3_3.md
# 1.3.3

## Added

- Added the OS version to the com.nvidia.omniverse.launcher.session telemetry event.
- Added the locale to the com.nvidia.omniverse.launcher.session telemetry event.
- Added “Developer” and “Publisher” fields to component details.
- Show a “Welcome to Omniverse” page on the first launch.
- Support installing the Enterprise Launcher to a shared location on Linux.

## Fixed

- Fixed an issue where the “Add Connector” button was pointing to the wrong location on the Exchange tab.
- Fixed an issue where the default Omniverse app was not reset after its last version was uninstalled.
- Fixed an issue where the startup component didn’t react to background authentication.
- Fixed an issue where installers that were initiated from the library tab ignored the current queue and started a download immediately.
- Fixed Japanese translations.
- Fixed an issue that caused a delay when queuing new installers.
- Fixed an issue where components were added to the library before they were registered by scripts.
- Fixed an issue where component platforms were determined incorrectly when thin packaging was used.
- Fixed an issue where installers used an incorrect path with the latest component version instead of the specified version.
1_3_4.md
# 1.3.4

## Added

- Show the installation date for apps displayed on the library tab.
- Added “Collect debug info” button to the settings dialog to prepare a tarball with logs and configuration files.
- Show “External link” button for third-party components on the detailed page.

## Fixed

- Fixed an issue where links on the “Learn” tab didn’t work after watching a video.
- Fixed showing the latest component version instead of the currently installed version on the library tab.
- Fixed an issue with dangling installers in the queue.
1_4_0.md
# 1.4.0

**Release Date:** Nov 2022

## Changed

- Added retries for downloading EULA and GDPR agreements.

## Fixed

- Fixed an issue with scrolling the left menu on the exchange tab.
- Fixed an issue where Launcher dialogs were displayed behind the exchange view after navigation.
- Fixed an issue where thin packages could not install correctly if the file system had dangling symlinks.
- Remove all unused packages on startup.
- Fixed an issue where failed updates changed the latest registered app version in the library.
- Fixed an issue where scheduling two installers could not finish the download if authentication needed to be refreshed.
- Fixed an issue with collecting hardware info on Windows 11.
- Fixed sending multiple simultaneous session events.
1_5_1.md
# 1.5.1

## Added

- Launcher can now pull notifications from the server. This can be used to send important messages to users instead of displaying them in the toolbar.

## Changed

- Changed colors for messages displayed in the bottom toolbar (white for regular text and blue for links, instead of yellow).

## Fixed

- Escape the desktop entry path on Linux for opening custom protocol links.
- Fixed an issue where the News and Learn tabs were refreshed when Launcher lost or regained focus.
- Increased the default window size to fit the cache path in the Launcher settings.
- Raise an error when users try to install components that are already installed.
- Raise an error when users try to launch components that are not installed.
- Fixed an issue where some packages couldn’t be moved to the right location by the offline installer.
1_5_3.md
# 1.5.3

**Release Date:** March 2022

## Added

- Added a `check` argument to the `omniverse-launcher://install` command that controls whether an error is thrown if the same component is already installed.
- Support external links for third-party content.

## Fixed

- Fixed an issue where downloaded zip archives were not removed when the installer was cancelled.
- Improved Italian localization.
- Fixed an issue where licenses were required for third-party content.
- Fixed an issue where problematic components could crash the application.
1_5_4.md
# 1.5.4

## Fixed

- Fixed an issue where users couldn’t change configured paths in Launcher.
- Fixed Italian translations.
- Fixed an issue where Launcher couldn’t be started if System Monitor is installed but missing on disk.
- Fixed an issue where connector updates were started immediately instead of being queued on the library tab.
- Fixed an issue where components couldn’t be found by full names and tags on the Exchange tab.
1_5_5.md
# 1.5.5

**Release Date:** June 2022

## Fixed

- Fixed an issue related to refreshing the session when installing new apps after the desktop comes out of sleep mode.
- Fixed an issue where connectors displayed in the library were enlarged.
- Fixed an issue related to a blank error when Launcher is started without an internet connection.
- Fixed an issue where the “Data collection and use” dialog did not reset the language on dialog cancel.
- Fixed an issue where trailing slashes were omitted when Launcher opened omniverse:// links.
1_5_7.md
# 1.5.7

## Added

- Added new controls for changing the current language on the login screen.
- Added the Extensions content type in the Exchange.

## Fixed

- Fixed an issue where the close icon closed Launcher instead of hiding it to the system tray.
- Removed the crash reporter logging when using custom protocol commands.
- Fixed an issue that caused a crash when Launcher failed to collect hardware info.
- Fixed an issue where connectors were sometimes duplicated after the installation.
- Fixed an issue where the “Add connector” button on the Library tab opened a blank page.
1_6_1.md
# 1.6.10

**Release Date:** Sept 2022

## Added

- Integrated the Navigator filesystem directly into the Nucleus tab.
- Added a “Show in folder” option for downloaded files in Navigator.
- Extension installation.
- Added an `/open` command for opening files with installed applications.
- Added links for Nucleus Cache in the library.
- Display an icon for external packages on the Exchange tab.
- Added a new UI for Omniverse Drive 2.
- Added Ukrainian language translations.

## Fixed

- Updated dependencies to address minor potential security vulnerabilities.
- Improved error reporting when files are used by other processes.
- Fixed an issue where pressing Enter closed modal dialogs.
- Fixed selecting all notifications when users click on the notification bell.
- Fixed the “Invalid time value” error displayed for some OS locales on startup.
- Fixed an issue where Launcher displayed dates in different formats on the Library tab.
- Fixed scrolling the language selection in the settings displayed during the first installation.
- Fixed triggering the Google Translate dialog on the login result page.
- Fixed displaying user settings and notifications behind the Nucleus installation dialog.
- Fixed an issue where Launcher couldn’t download new updates after the first downloaded patch.
- Fixed translations.
1_7_1.md
# 1.7.1

## Added

- Renamed Enterprise Launcher to IT Managed Launcher.
- Added new UI elements on Exchange cards to filter releases by release channel. A release is classified as Alpha, Beta, Release, or Enterprise, depending on its maturity and stability. If an Alpha or Beta release is selected, a banner appears on the main image to emphasize the relative risk of that release. Alpha or Beta releases may not be feature complete or fully stable. Versions classified as Release (also known as GA or General Availability) are feature complete and stable. Release versions that are supported for Enterprise customers appear in the Enterprise list.
- Added a `/settings` HTTP API for GET requests to read the Launcher settings.
1_8_11.md
# 1.8.11

**Release Date:** July 2023

## Changed

- Updated the Nucleus tab with Navigator 3.3.2.
- Display an “Install” button for new versions in the settings menu on the Library tab.
- Extended the `omniverse-launcher://launch` command to support custom launch scripts.

## Fixed

- Fixed an issue where users were unable to exit from Launcher via the UI when not authenticated.
- Fixed incorrect links displayed in the GDPR/EULA error.
- Added retries for pulling component versions during installations.
- Fixed an issue where Launcher raised an error during installation if an installed package printed too many messages.
- Amended Italian and Korean translation strings.
- Fixed the beta banner displayed with the carousel.
- Fixed an issue where Launcher didn’t remember command line arguments specified for AppImage.
1_8_2.md
# 1.8.2

**Release Date:** Dec 2022

## Added

- Added new UI elements on Exchange cards to filter releases by release channel. A release is classified as Alpha, Beta, Release, or Enterprise, depending on its maturity and stability. If an Alpha or Beta release is selected, a banner appears on the main image to emphasize the relative risk of that release. Alpha or Beta releases may not be feature complete or fully stable. Versions classified as Release (also known as GA or General Availability) are feature complete and stable. Release versions that are supported for Enterprise customers appear in the Enterprise list.
- Added release classification labels and Beta banner (when applicable) to the Library tab.
1_8_7.md
# 1.8.7

## Added

- Show the Nucleus update version on the Nucleus tab.
- Added a Data Opt-In checkbox allowing users to opt in and out of data collection.
- Display the Cache version in the library.

## Fixed

- Updated the Nucleus tab with Navigator 3.3.
1_9_10.md
# 1.9.10

**Release Date:** Jan 2024

## Added

- Anonymize telemetry events when users opt out of data collection.
- Support feature targeting by OS, Launcher version, build, or user profile.
- Updated Nucleus Navigator to 3.3.4.
- Support silent launch for custom protocol commands.

## Changed

- Enabled “Newest” sorting for the Exchange tab by default.

## Fixed

- Fixed an issue where Launcher did not bring its main window to the front when Launcher opened a second instance.
- Fixed an issue where Launcher closed the application when its main window was closed from the taskbar.
- Fixed an issue where Launcher installation might hang before running installation scripts.
- Fixed an issue where Hub settings displayed incorrect information in the UI.
- Fixed an issue where Navigator did not refresh its server list after Nucleus installation.
- Fixed an issue where the Nucleus installation was displayed on top of other UI elements.
- Fixed an issue where closing the main window in IT Managed Launcher closed the entire application.
- Fixed a crash in IT Managed Launcher when a user opens the Nucleus tab without an internet connection.
- Users are now redirected to the library page when Nucleus is uninstalled in IT Managed Launcher.
- Fixed an issue with the version drop-down arrow being displayed incorrectly.
- Validated email input on the login screen.
- Fixed an issue where the email field was not updated in privacy.toml after users sign in.
- Fixed an incorrect error translation for the Czech language.
- Fixed an issue where the dialog close icon was not clickable when the scroll bar was displayed.
- Fixed an issue where the uninstall notification did not include the component version.
- Fixed an issue where Launcher could not register the first Nucleus account.

## Security

- Fix for CVE-2023-45133.
1_9_11.md
# 1.9.11

**Release Date:** April 2024

## Fixed

- Fixed an issue where Launcher was minimized to the system tray instead of exiting when users clicked the Exit option in the user settings menu.
- Fixed a race condition that could cause a settings reset. [OM-118568]
- Fixed gallery positioning for content packs. [OM-118695]
- Fixed beta banner positioning on the Exchange tab. [OM-119105]
- Fixed an issue on the Hub page settings that caused showing “infinity” in the disk chart for Linux. [HUB-965]
- Fixed cache size max validations on the Hub page settings tab. [OM-119136]
- Fixed cache size decimal point validations on the Hub page settings tab. [OM-119335]
- Fixed the Hub Total Disk Space chart to not allow available disk space to become negative. [HUB-966]
- Fixed an issue on the Hub page settings that caused the cache size not to be displayed. [HUB-960]
- Fixed an issue on the Hub page settings preventing editing the Cleanup Threshold. [OM-119137]
- Fixed Hub page settings chart drive/mount detection size based on the cache path. [HUB-970]
- Replaced the Omniverse Beta license agreement text with the NVIDIA License and added a license agreement link in the About dialog. [OM-120991]
1_9_8.md
# 1.9.8

## Added

- Featured Collections section added to the Exchange tab.
- Collection side nav expanded by default on the Exchange tab.
- Pass proxy settings to launched applications with the OMNI_PROXY_SERVER environment variable.
- Added custom UI for the Omniverse Hub app.
- Added window titles for the News and Learn tabs.
- Block user navigation on the Nucleus tab if Navigator tasks are active.
- Support displaying server notifications until a specific date.
- Support sending server notifications for specific platforms or Launcher versions.
- Added splash notifications displayed as a modal dialog.

## Changed

- Updated Nucleus Navigator to 3.3.3.
- Updated the UI for the collection menu on the Exchange tab.
- Updated the suggestion for installing the default Omniverse application.
- Updated the IT Managed Web Launcher Documentation tab to be a link to the online Omniverse documentation.
- Changed the default page to Library for IT Managed Launcher.
- Updated the look for featured collections.
- Updated the look for the side menu on the Exchange tab (only display categories).

## Fixed

- Display an error when a user tries to delete a version that is not installed.
- Fixed an issue that displayed an Update button for installed connectors and apps in IT Managed Launcher.
- Fixed an issue where “New” badges were displayed incorrectly for IT Managed Launcher.
- Fixed displaying duplicate connectors after installing with IT Managed Launcher.
- Fixed displaying a spinner for the connectors page in IT Managed Launcher.
- Fixed displaying applications and connectors on the Library tab after calling the omniverse-launcher://uninstall command.
- Fixed an issue where the uninstall notification was not shown if triggered by the omniverse-launcher://uninstall command.
- Fixed an issue where filtering content by collections that do not exist could crash the application.
- Fixed an issue where tags were not displayed for components on the Exchange tab.
- Fixed displaying regular notifications instead of errors if the installer returned an empty error message.
- Fixed displaying the Cache installation suggestion in IT Managed Launcher.
- Fixed an issue with webview links not opening in a browser window.
- Fixed an issue where IT Managed Launcher didn’t work without an internet connection.
- Fixed an issue where custom protocol command arguments were persisted to Linux .desktop files.
- Fixed an issue where Collaboration packages were not displayed on the Enterprise portal.
- Disable bringing IT Managed Launcher in front of other apps when custom protocol commands are used.
- Fixed issues with focusing input elements inside modal dialogs.
- Fixed an issue where the login result page opened Launcher or brought it in front of other applications.
- Fixed opening Nucleus settings from the menu on the Nucleus tab.
- Fixed incorrect coloring for the beta banner.
- Fixed an issue where buttons and pagination controls could be scrolled in the version dialog.
- Fixed an issue where autostart registry keys were kept after uninstall.
- Fixed the color for the name displayed in the channel dropdown.
- Fixed an issue where the Launcher API wasn’t hosted on 127.0.0.1.
- Fixed an issue where users could not close modal dialogs.
- Fixed an issue where the beta overlay was sometimes displayed compressed.
- Fixed an issue where UI and Navigator logs were not being saved to a log file.
- Fixed an issue blocking custom protocol commands on Ubuntu.
- Use the 127.0.0.1 address for registering a new account during Nucleus installation.

## Security

- Fix for CVE-2023-5217
- Fix for CVE-2023-4863
- Fix for CVE-2023-44270
a-simple-extension-demonstrating-how-to-bundle-an-external-renderer-into-a-kit-extension_Overview.md
# Overview

## A simple extension demonstrating how to bundle an external renderer into a Kit extension.

### extension.toml: Important keys

- **order = -100**: Load the extension early in startup (before Open USD libraries)
- **writeTarget.usd = true**: Publish the extension against the version of Open USD it was built with

### extension.py: Important methods

- **on_startup**: Handle registration of the renderer for Open USD and the Viewport menu
- **on_shutdown**: Handle the removal of the renderer from the Viewport menu

### settings.py:

- **register_sample_settings**: Register UI for communication via the HdRenderSettings API in the Render Settings window
- **deregister_sample_settings**: Remove renderer-specific UI from the Render Settings window
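Taken together, these keys might appear in `extension.toml` as in the minimal sketch below. Only `order = -100` and `writeTarget.usd = true` come from this extension; the `[core]` and `[package]` table names are assumptions based on typical Kit extension configs.

```toml
[core]
# Load the extension early in startup (before Open USD libraries).
order = -100

[package]
# Publish the extension against the version of Open USD it was built with.
writeTarget.usd = true
```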
a-third-way-to-add-an-extension_overview.md
# Omniverse Kit Project Template

This project is a template for developing apps and extensions for *Omniverse Kit*.

## Important Links

- Omniverse Documentation Site
- Github Repo for this Project Template
- Extension System

## Getting Started

1. Fork/clone the Kit Project Template repo to a local folder, for example `C:\projects\kit-project-template`.
2. (Optional) Open the cloned repo using Visual Studio Code: `code C:\projects\kit-project-template`. It will likely suggest installing a few extensions to improve the Python experience. None are required, but they may be helpful.
3. In a CLI terminal (in VS Code, CTRL + ` or choose Terminal → New Terminal from the menu), run `pull_kit_kernel.bat` (Windows) / `pull_kit_kernel.sh` (Linux) to pull the *Omniverse Kit Kernel*. A `kit` folder link will be created under the main folder in which your project is located.
4. Run the example app that includes the example extensions: `source/apps/my_name.my_app.bat` (Windows) / `./source/apps/my_name.my_app.sh` (Linux) to ensure everything is working. The first start will take a while as it pulls all the extensions from the extension registry and builds various caches. Subsequent starts will be much faster. Once finished, you should see a “Kit Base Editor” window and a welcome screen. Feel free to browse through the base application and exit when finished. You are now ready to begin development!

### An Omniverse App

If you look inside `source/apps/my_name.my_app.bat` or any other *Omniverse App*, they all run off of an SDK we call *Omniverse Kit*. The base application for *Omniverse Kit* (`kit.exe`) is the runtime that powers *Apps* built out of extensions. Think of it as `python.exe`. It is a small runtime that enables all the basics, like settings, Python, logging, and searching for extensions. **All other functionality is provided by extensions.**

# Packaging an App

Once you have developed, tested, and documented your app or extension, you will want to publish it. Before publishing the app, we must first assemble all its components into a single deliverable. This step is called “packaging”. To package an app, run `tools/package.bat` (or `repo package`). The package will be created in the `_build/packages` folder.

To use the package, unzip the package that was created by the above step into its own separate folder. Then run `pull_kit_kernel.bat` inside the new folder once before running the app.

# Version Lock

An app `kit` file fully defines all the extensions, but their versions are not *locked*; by default, the latest version of a given extension will be used. Also, many extensions have dependencies of their own, which cause Kit to download and enable other extensions. For the final app it is important that it always gets the same extensions and the same versions on each run. This provides reliable and reproducible builds. This is called a *version lock*, and there is a separate section at the end of the `kit` file to lock the versions of all extensions and all their dependencies.

It is important to update the version lock section when adding new extensions or updating existing ones. To update the version lock, the `precache_exts` tool is used.

**To update the version lock run:** `tools/update_version_lock.bat`.

Once you’ve done this, use your source control methods to commit any changes to the kit file as part of the source necessary to reproduce the build. A lock section in the kit file might look like the sketch below.
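A hedged sketch of a version lock section (the exact layout is produced by the tool; the extension names and versions here are illustrative only, not taken from this template):

```toml
# Version lock for all dependencies, regenerated by tools/update_version_lock.bat:
[settings.app.exts]
enabled = [
    "omni.kit.window.extensions-1.1.1",
    "omni.kit.window.title-1.1.2",
]
```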
The packaging tool will verify that version locks exist and will fail if they do not.

# An Omniverse Extension

This template includes one simple extension: `omni.hello.world`. It is loaded in the example app, but can also be loaded and tested in any other *Omniverse App*. You should feel free to copy or rename this extension to one that you wish to create. Please refer to the Omniverse Documentation for more information on developing extensions.

# Using Omniverse Launcher

1. Install *Omniverse Launcher*: download
2. Install and launch one of the *Omniverse* apps in the Launcher. For instance: *Code*.

# Add a new extension to your Omniverse App

If you want to add extensions from this repo to another existing Omniverse App:

1. In the *Omniverse App*, open the extension manager: *Window* → *Extensions*.
2. In the *Extension Manager Window*, open the settings page with the small gear button in the top left bar.
3. The settings page has a list of *Extension Search Paths*. Add the cloned repo’s `source/extensions` subfolder there as another search path: `C:/projects/kit-project-template/source/extensions`
4. Now you can find the `omni.hello.world` extension in the top left search bar. Select and enable it.
5. The “My Window” window will pop up. The *Extension Manager* watches for any file changes. You can try changing some code in this extension and see it applied immediately with a hot reload.

## Adding a new extension

- Now that the `source/extensions` folder has been added to the search path, you can add new extensions to this folder and they will be automatically found by the *App*.
- Look at the *Console* window for warnings and errors. It also has a small button to open the current log file.
- All the same commands work on Linux. Replace `.bat` with `.sh` and `\` with `/`.
- The extension name is a folder name in `source/extensions`, in this example: `omni.hello.world`. This is most often the same name as the “Extension ID”.
- The most important thing that an extension has is its config file: `extension.toml`. You should familiarize yourself with its contents.
- In the *Extensions* window, press the *Burger* button near the search bar and select *Show Extension Graph*. It will show how the current *App* comes to be by displaying all its dependencies.
- Extension system documentation can be found here.

## Alternative way to add a new extension

To get a better understanding of the extension topology, we recommend the following:

1. Run bare `kit.exe` with the `source/extensions` folder added as an extensions search path and the new extension enabled:

```bash
> kit\kit.exe --ext-folder source/extensions --enable omni.hello.world
```

- `--ext-folder [path]` - adds a new folder to the search path
- `--enable [extension]` - enables an extension on startup

Use `-h` for help:

```bash
> kit\kit.exe -h
```

2. After the *App* starts you should see:

- the new “My Window” popup.
- the extension search paths in the *Extensions* window, as in the previous section.
- the extension enabled in the list of extensions.

It starts much faster and will only have the extensions enabled that are required for this new extension (look at the `[dependencies]` section of `extension.toml`). You can enable more extensions: try adding `--enable omni.kit.window.extensions` to have the extensions window enabled (yes, the extension window is an extension too!):

```bash
> kit\kit.exe --ext-folder source/extensions --enable omni.hello.world --enable omni.kit.window.extensions
```

You should see a menu in the top left. From this UI you can browse and enable more extensions.

It is important to note that these enabled extensions are NOT added to your kit file, but instead live in a local “user” file as an addendum. If you want the extension to be a part of your app, you must add its name to the list of dependencies in the `[dependencies]` section, as sketched below.
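A minimal sketch of such a `[dependencies]` entry, using this template’s `omni.hello.world` extension (the rest of the kit file is omitted):

```toml
[dependencies]
# Make the extension a permanent part of the app:
"omni.hello.world" = {}
```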
## A third way to add an extension

Here is how to add an extension by copying the “hello world” extension as a template:

1. Copy `source/extensions/omni.hello.world` to `source/extensions/[new extension id]`
2. Rename the python module (namespace) in `source/extensions/[new extension id]`

No restart is needed; you should be able to find and enable `[new extension name]` in the extension manager.

# Running Tests

To run tests, we run a new configuration where only the tested extension (and its dependencies) is enabled, as in the example above plus the testing system (the omni.kit.test extension). There are 2 ways to run extension tests:

1. Run: `tools\test_ext.bat omni.hello.world`
2. Alternatively, in the *Extension Manager* (*Window → Extensions*) find your extension, click on the *TESTS* tab, then click *Run Test*

For more information about testing refer to: testing doc.

# Sharing extensions

To make an extension available to other users, use GitHub Releases.

1. Make sure the repo has the omniverse-kit-extension topic set for auto discovery.
2. For each new release, increment the extension version (in `extension.toml`) and update the changelog (in `docs/CHANGELOG.md`). Semantic versioning must be used to express the severity of API changes.

# Contributing

The source code for this repository is provided as-is and we are not accepting outside contributions.

# License

By using any part of the files in the KIT-PROJECT-TEMPLATE repo you agree to the terms of the NVIDIA Omniverse License Agreement, the most recent version of which is available here.
ABI.md
# ABI Compatibility

## Importance

Each C/C++ source file (compilation unit) is processed by the compiler to form an object file. These object files are then combined by a linker to form a binary (executable, static library, dynamic library, etc.). All of these items (object files, executables, static/dynamic libraries) can be generated at different times, yet when linked together they must agree on how functions exchange data.

When using plugins as dynamic libraries, as Carbonite does, it is monumentally important to ensure that changes can be made to both plugins and the Framework in such a way that preserves backwards compatibility. This allows the API to grow and evolve, while allowing binaries built at different times to function correctly.

A similar topic to ABI is that of Interoperability, often shortened to interop. Interop is the ability of two different languages to call functions and exchange data between themselves. Some languages are able to call functions directly with a C calling convention, whereas other languages may require a binding layer – code that is generated to translate function calls and data.

## Terminology

### “API” (Application Programmer Interface)

A programmer’s contract between two entities that describes how functions are called and data is exchanged. This contract involves the names of functions that can be called, the parameters and return values that are passed, and other concepts, such as atomicity and global state, that are considered meta-concepts and not enforced specifically in the code.

### “ABI” (Application Binary Interface)

The binary contract between two entities (generally modules) describing how functions are called and data is exchanged. This is similar to an “API”, but is a contract at the machine level, describing items such as calling convention; parameter type, count, and order; structure size and binary layout; enum values; etc. Whereas a programmer’s API might think of a function parameter as an `int`, the ABI considers this value a 32-bit machine word and how it is passed (stack or register), with no inherent type safety, no concept of signed or unsigned, etc. While an API describes a `struct`, the ABI considers only the binary representation of the data. An API might describe a function by name, but to the ABI this is just an address.

### “Breaking ABI”

In the terminology below, “Breaking ABI” or “ABI-Breaking” means that a change is made such that a compiled object with an understanding of the object’s binary layout before the change will no longer work correctly after the change. Non-exhaustive examples of changes that break ABI (expounded further below):

- Function calling convention
- Function return type or parameters (including changing pass-by-value to pass-by-reference)
- Ordering of members within a `struct` or `class`
- Type of a member (i.e. changing `size_t` to `unsigned`)
- Offset or alignment (such as by inserting a member)
- Size of a member (i.e. changing `char buffer[128]` to `char buffer[256]`)

Making these types of changes is acceptable for an `InterfaceType` by increasing the major version parameter in the `CARB_PLUGIN_INTERFACE` macro usage (or the latest version given to the `CARB_PLUGIN_INTERFACE_EX` macro usage) by at least one. Carbonite interfaces may support multiple versions in order to be backwards compatible.
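As a hedged illustration of this versioning rule (the interface below is invented for this example; `CARB_PLUGIN_INTERFACE` and `CARB_ABI` are the Carbonite macros discussed in this document):

```cpp
#include <carb/Interface.h> // Assumed Carbonite SDK header providing CARB_PLUGIN_INTERFACE/CARB_ABI.
#include <cstdint>

// Hypothetical interface used only for illustration.
struct IExample
{
    // This was ("example::IExample", 1, 0) before the ABI-breaking change below,
    // so the major version is increased to 2.
    CARB_PLUGIN_INTERFACE("example::IExample", 2, 0)

    // ABI-breaking change: the two parameters were reordered.
    void(CARB_ABI* resize)(uint32_t height, uint32_t width);
};
```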
### “ABI Boundary”

An “ABI Boundary” in this document is a potentially fragile point in the code where an ABI-Breaking change may occur. Generally this represents the boundary of a module or executable, since that is the typical granularity of a binary that is built at a given point in time. Calling a function in a different binary module is considered calling across an ABI Boundary. If static libraries or object files are stored in source control and not rebuilt when linking a binary, functions in those object files or static libraries could be considered ABI Boundaries as well. Any and all parameters, either passed to or returned from an object in a different binary, are considered to cross the ABI Boundary. Inline functions do not cross an ABI Boundary, and the restrictions below do not apply to them.

### “Semantically Compatible”

Certain changes are allowed to be made because they are semantically compatible with modules built prior to the change. Carbonite Interfaces use semantic versioning to determine compatibility. The version of an interface is specified in the `CARB_PLUGIN_INTERFACE` macro usage. There is also a `CARB_PLUGIN_INTERFACE_EX` macro that allows specifying latest and default versions. Any ABI-breaking change must be accompanied by a major version increase of at least one. Any change that is semantically compatible with older modules built prior to the change must be accompanied by a minor version increase of at least one while retaining the same major version. Carbonite interfaces may support multiple versions in order to be backwards compatible. In the case of a plugin supporting multiple versions, the version in the `CARB_PLUGIN_INTERFACE` macro is the highest version supported, or the latest version passed to the `CARB_PLUGIN_INTERFACE_EX` macro.

### “Interop”

As stated above, Interop (short for Interoperability) is the ability of two languages to exchange data and call functions between themselves. Many languages (such as Python and Rust) can call functions with a C calling convention, but cannot call functions with a C++ calling convention. Generally, for C++ types we require them to be trivially copyable and conform to StandardLayoutType at a bare minimum to consider them Interop-Safe. Carbonite provides the `CARB_ASSERT_INTEROP_SAFE` macro to ensure this. Interop safety allows two languages to agree on the data layout of a type, but the code that modifies that type still needs to ensure atomicity, memory ownership, and expected data. This is up to the programmer to implement.

## Calling Convention

An ABI has many different levels to it. For instance, system calls to an Operating System such as Linux or Windows generally require calls using the `syscall` CPU instruction, but the OS defines a system calling convention – a contract between the application and the OS that describes the registers or memory that correspond to function arguments and return values.

Applications are made from the building blocks of object files, static and dynamic libraries, and executables. Since these components can be built at different times, yet need to work together, a calling convention is also used to form a contract in how these different pieces call functions and exchange data. Different compilers on the same platform should agree on the same convention, which would allow a binary with objects compiled by GCC to call functions in a library of objects compiled with Clang.
C supports different types of platform-specific calling conventions specified per function, such as `__cdecl`, `__pascal`, `__stdcall`, `__fastcall`, etc. The default for the 32-bit x86 architecture was `__cdecl`. The calling convention indicates which registers (or where on the stack) arguments are placed in, where the return value is returned, and whether the caller or callee is responsible for cleanup.

Some applicable ABI references that discuss calling conventions:

- Itanium C++ ABI
- System V AMD64 ABI (used by Linux and MacOS on x86_64)
- Microsoft x64 Calling Convention (x86_64)
- ARM AAPCS ABI (used by Linux and MacOS on aarch64)

While calling convention is typically not on the mind of a programmer writing an API, it does affect ABI and therefore should be considered.

Calling convention also comes into play with Interop. Many languages (such as Python and Rust) can call functions with a C calling convention, but cannot call functions with a C++ calling convention. The distinction here is important: a C calling convention specifies how built-in types (such as `int` and `float`) are passed, as well as pointer types (including `const char*`-style strings) and even `struct`-by-value types (provided that they are standard layout and trivially copyable). More complicated C++ types (such as `omni::string` and `omni::function`) can also be passed by value, but these types require additional specification on top of the C calling convention, such as who is responsible for destruction and how copying works. This is handled by the C++ calling convention, which makes passing these types by value interop-unsafe. Carbonite provides the macro `CARB_ASSERT_INTEROP_SAFE`, which ensures that a type is standard layout and trivially copyable.

> **Note**
> Though references are not part of the C calling convention (as they are a C++ feature), the ABI for references is essentially the same as for pointers. They are therefore ABI-safe and interop-safe.

> **Warning**
> Changing the calling convention, return value, or parameters of a function will break its ABI. Changing almost anything about a function will break ABI. Ironically, changing the name of a function will typically not break ABI, but will affect compilation (“API”). This is because source code refers to names, whereas the compiled binary code typically does not.

## Built-in Types

C/C++ types (e.g. `int`, `uint32_t`, `size_t`, ...).

### Basic Types

Basic types (e.g., `int`, `float`, etc.) are always ABI-safe, as their characteristics are guaranteed to never change. However, note that these may differ between platforms/architectures. These types may be safely used as function parameters, return values, and members of structs and classes.

### Pointers and References

Both pointers and references have a well-defined ABI described by the calling convention and are therefore safe to use as members, function parameters, and return values, provided that the types referenced or pointed to are also ABI-safe.

### Variadic Arguments

Declaring a function as having variadic arguments (e.g., `log(const char* message, ...)`) has an ABI described by the calling convention and is therefore safe, provided that the types passed through the variadic arguments are also ABI-safe.

## Endianness

Endianness, also typically not thought of by programmers, should be considered part of the ABI. This is the ordering of bytes in memory of a binary word. A hardware architecture has one specific endianness: big or little.
For big-endian hardware, a 32-bit hexadecimal word `0x01020304` would be represented in memory bytes in the same order: `01 02 03 04`. For little-endian hardware, the bytes of `0x01020304` are stored backwards: `04 03 02 01`. Carbonite focuses mostly on little-endian because all supported architectures are little-endian. However, Network Byte Order as specified by the TCP/IP standard is big-endian. If a function kept the same datatype but instead required a parameter to be in network byte order, this would break ABI.

## Enum Values

Changing the values of existing enums is an ABI-breaking change for every function that uses them. Adding a new, previously-unassigned enum value is a semantically compatible change.

## Struct/Class Layout

Changing the layout of a `struct` or `class` that is passed across an ABI boundary is likely to affect ABI. See Best Practices below for some semantically-compatible methods of changing structs and classes. Consider the following struct:

```cpp
struct Version
{
    uint32_t major;
    uint32_t minor;
};
```

This struct has two members: major and minor. Its size is 8 bytes. Those 8 bytes are laid out in memory as follows:

```
00000000  00 00 00 00 00 00 00 00                 |........|
          \_________/ \_________/
             major       minor
          (bytes 0-3) (bytes 4-7)    total size: 8 bytes
```

The binary layout of this object represents its ABI. Now consider what would happen if we change the class as follows:

```cpp
struct Version
{
    uint32_t major;
    char dummy; // Added field
    uint32_t minor;
};
```

The binary representation of this object changes:

```
00000000  00 00 00 00 00 00 00 00 00 00 00 00     |............|
          \_________/ \/ \______/ \_________/
             major  dummy (padding)   minor
          (bytes 0-3) (4)  (5-7)  (bytes 8-11)   total size: 12 bytes
```

Notice how the object changed? The major member still has the same size and location. However, the size of the `struct` changed to 12 bytes instead of 8, and the minor member no longer starts at byte 4 but now at byte 8. Any module that was compiled with an understanding of `Version` from the top code block would not work properly with the bottom block. Therefore, we cannot change `Version` in this manner without Breaking ABI.

> **Warning**
> Changing the type, size, order, or alignment of any member within a struct will break its ABI.

## Inheritance

Changing inheritance or inheriting from multiple base classes may break ABI and is not recommended. Inheritance may be changed only if it does not affect the characteristics of any existing members.

## Non-virtual Functions

Adding member functions that are non-virtual to an existing struct or class does not break ABI. Typically these functions will be declared as `inline`; otherwise they would be written into an object file (or static library) that must be linked in order to operate. Functions declared as `inline` that cannot actually be compiled in-line where called will be written into any object files that reference them and then typically be coalesced down to a single function at link time. Changing parameters, return values, or the body of a non-virtual member function also does not break ABI, since this function is essentially copied into whatever module calls it. However, a module will not take advantage of any changes to this function until it is rebuilt with the changes.

## Virtual Functions

Changing a class that does not have a v-table to add a v-table (by adding a virtual function) will break ABI, as it causes all members of the class to change characteristics. If the class is a base class for other classes, it will break the ABI of all descendant classes. Since virtual functions in a class are in the order of their declarations, adding a new virtual function to the end of a `final` class (that already has virtual functions) is allowed; this is a semantically compatible change. It is very important that the functions are added to the end of the declaration, and that the class/struct is `final` so that it may not be inherited from.

## Members

Members of a struct or class must themselves be ABI-safe in order for the struct or class to be considered ABI-safe.

> **Caution**
> It is not ABI-safe to change a struct that is contained as a member within another struct.

## Constructor and Initialization

Adding a new type initializer or changing the existing type initializer is a non-ABI-breaking (allowed) change and can be done at any time. However, keep in mind that older modules will not have this change, so previous values of the initializer must be anticipated. Likewise, adding a new inline constructor to a class is a non-ABI-breaking (allowed) change.

## Standard Layout

Data that is exchanged across the ABI Boundary must conform to the C++ named requirements of a standard-layout type. This is often checked with the is_standard_layout type-trait.

## Copying and Moving

Data that is passed by value across the ABI Boundary must conform to the C++ is_trivially_copyable type-trait.

## C++ Runtime Library Types

C++ Runtime Library Types, such as `std::chrono::time_point`, `std::vector`, `std::string`, `std::optional`, `std::unique_ptr`, `std::function`, `std::shared_ptr`, `std::weak_ptr`, et al., have no guarantees about ABI safety; their layout could change with an update. Carbonite considers passing C++ Runtime Library Types to be **ABI unsafe**. However, it is acceptable to use these types within inline functions, as they do not cross the ABI Boundary.

> **Warning**
> Do not pass any C++ Runtime Library Types (or pointers or references to them) across an ABI Boundary.

## Exceptions

Exceptions are generally C++ Runtime Library Types that inherit from `std::exception` and as such are not considered by Carbonite to be ABI-safe. It is recommended that all interface functions be declared as `noexcept`.

> **Warning**
> Do not allow any exceptions to cross an ABI Boundary when unwinding.

## ABI-Breaking Changes Summarized

This is a non-exhaustive summation of the ABI-breaking changes mentioned above. Do not do the things listed here without an ABI-breaking InterfaceType version change (major version change).

- Changing Interface function calling convention, parameters, or return type
- Reordering members within an InterfaceType
- Adding members to an InterfaceType anywhere but at the end (note: adding members to the end is a semantically compatible change)
- Changing types, size, alignment, or other characteristics of members

## Best Practices

### Use Built-In Types

Built-in data types (i.e. `char`, `uint32_t`, `size_t`, `float`, etc.) are always ABI-safe. Pointers and references are ABI-safe only if the type pointed to or referenced is also ABI-safe.

### Use ABI-Safe Complex Types

Carbonite provides some helper classes that are ABI-safe:

- omni::function - A replacement for `std::function`
- omni::string - A replacement for `std::string`
- carb::RString (and variations) - A fast string-interning mechanism

Note that not all Carbonite types are intended to be ABI-safe.

> **Warning**
> The omni::function and omni::string types, while ABI-safe, should only be passed by reference or pointer. This is because these types are not trivially copyable. Passing them by reference or pointer maintains a C calling convention and interop safety.

### Add New Interface Functions to the End

Adding interface functions to the end of an InterfaceType allows the change to be semantically compatible (a minor version change only), as in the sketch below.
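A hedged sketch of such a semantically compatible addition (again with an invented interface; only the macros are Carbonite’s):

```cpp
#include <carb/Interface.h> // Assumed Carbonite SDK header.
#include <cstdint>

// Hypothetical interface used only for illustration.
struct IWidgetFactory
{
    // Minor version bumped from (1, 0) to (1, 1) for the addition below.
    CARB_PLUGIN_INTERFACE("example::IWidgetFactory", 1, 1)

    uint32_t(CARB_ABI* getWidgetCount)();

    // Added in version 1.1: new functions go at the end, so older modules
    // still find getWidgetCount at the same offset.
    void(CARB_ABI* releaseAllWidgets)();
};
```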
### Design Structs to be Changed

It can be advantageous to design structs for future change by including a version of some sort. Many Windows API structs contain a member that must be initialized to the size of the struct. This is an effort to plan for future changes in a way that does not break ABI compatibility. New members can then be added to the end of the struct (avoid removing members, resizing them, or adding them anywhere but the end, as this drastically complicates processing). This technique can be used to change a struct in a way that is safe, and it is employed by Carbonite structures such as carb::tasking::TaskDesc and carb::AcquireInterfaceOptions. As the struct grows, the size naturally changes, so the size of the struct makes a decent version. At the first point of the code where the struct is used, promote it to the latest version:

```cpp
static std::optional<AcquireInterfaceOptions> versionPromote(const AcquireInterfaceOptions& opt)
{
    CARB_LIKELY_IF(sizeof(opt) == opt.sizeofThis)
    {
        return opt;
    }

    // Version promotion to current goes here. Initial size of the struct was 48 bytes.

    CARB_LOG_ERROR("Unknown size of AcquireInterfaceOptions struct: %zu", opt.sizeofThis);
    return std::nullopt;
}

void* FrameworkImpl::acquireInterface(AcquireInterfaceOptions opt_)
{
    auto options = versionPromote(opt_);
    if (!options)
        return nullptr;

    auto& opt = options.value();
    // ...
}
```

When promoting, if the `sizeof` field is not equal to the current `sizeof` the struct, keep in mind that every member with an offset after the `sizeof` field is not initialized; reading it will result in undefined behavior. It is also good practice to do the promote into a copy of the struct (remember from above that structs must be trivially copyable).

> **Warning**
> While it may seem like a good idea to pass a struct by value and promote it in-place, this is dangerous and can lead to stack corruption. This is because of the calling convention: the space for the struct is allocated by the caller, so only the space for the `sizeof` struct is allocated. Writing to members beyond that will corrupt the stack. Instead, pass the struct by const-reference and copy into a new local struct. Some older Carbonite APIs were written unaware of this and pass the struct by value.

> **Warning**
> When working with arrays of structs, keep in mind that the size of passed-in structs might be different than the current size, so standard pointer incrementing will not work. Instead, you will have to cast to `uint8_t*` (a byte pointer) and manually calculate the pointer offset.

Example of working with arrays of versioned structs:

```cpp
void Scheduler::addTasks(TaskDesc* tasks, uint32_t numTasks, Counter* counter)
{
    CARB_CHECK(counter != detail::kListOfCounters, "addTasks() does not support list of counters");

    // Because the struct may be an older version (different size), we cannot increment the
    // pointer itself. We need to handle it at the byte level.
    uint8_t* p = reinterpret_cast<uint8_t*>(tasks);
    while (numTasks--)
    {
        TaskDesc* pTask = reinterpret_cast<TaskDesc*>(p);
        CARB_FATAL_UNLESS(pTask->size <= sizeof(TaskDesc), "Invalid TaskDesc size");
        p += pTask->size;
        addTask(*pTask, counter);
    }
}
```

It is good practice to have unit tests for all versions of a struct to ensure that they promote correctly. Remember to ensure that your struct conforms to the C++ type-trait is_standard_layout.

### Consider Thread Safety

While not strictly about ABI, it is good Interface design to consider thread safety. For example, the following could lead to problems, as you cannot atomically get the count and then fill the buffer (and the fill function doesn’t take a count for safety!):

```cpp
struct Widget;

struct MyInterface
{
    CARB_PLUGIN_INTERFACE("MyInterface", 1, 0)

    size_t(CARB_ABI* getCount)();
    void(CARB_ABI* fillWidgets)(Widget** outWidgets);
};
```

Since no container is currently ABI-safe, one possibility might be to take a function and a context:

```cpp
using WalkWidgetsFn = void(Widget*, void*);
void(CARB_ABI* walkWidgets)(WalkWidgetsFn* fn, void* context);

// Call example:
std::vector<Widget*> widgets;
iWidget->walkWidgets(
    [](Widget* w, void* context) { static_cast<std::vector<Widget*>*>(context)->push_back(w); },
    &widgets);
```
About.md
# Welcome screen: ABOUT

Shows general information about this application, the same as “Help” → “About”.
about_Overview.md
# Overview

CAD Converter Service Extension - [omni.services.convert.cad]

## About

The CAD Converter Service is a service for batch conversion of CAD files to USD.

## SAMPLE REQUEST

**Format**: “setting name” : default value

Request arguments:

- `"import_path": ""` – Full path to the CAD file to convert.
- `"output_path": ""` – Full path to the output folder.
- `"config_path": ""` – Full path to the converter config file. Refer to omni.kit.converter.hoops_core, omni.kit.converter.jt_core, and omni.kit.converter.dgn_core for configuration options.

### Sample Input JSON:

```json
{
  "import_path": "/ANCHOR.sldprt",
  "output_path": "/tmp/testing",
  "config_path": "/sample_config.json"
}
```

To use the DGN Converter, the parameter must be set to “DGN Converter”. This is shown with the following request body example:

### Sample Input JSON for DGN:

```json
{
  "import_path": "/tmp/input_file.dgn",
  "output_path": "/tmp",
  "config_path": "/tmp/sample_config.json"
}
```
action-registry_Overview.md
# Overview

## Overview

The Omni Kit Actions Core extension is a framework for creating, registering, and discovering actions. Actions are programmable objects that can encapsulate anything which occurs in a fire and forget manner. An action can be created in one extension and then executed from anywhere else in the application; for example, a user could execute the same action via a UI Menu, Hot Key, or Gesture.

### Actions

Actions can be:

- Created, registered, discovered, and executed from any extension in C++, Python, or a mix of the two.
- Created with associated metadata to aid in their discoverability and/or display in a UI.
- Executed in a ‘fire and forget’ manner.
- Executed using arbitrary parameters.

Actions are not stateful (although their end result may not manifest immediately), therefore they do not support undo/redo functionality.

### Action Registry

The Action Registry maintains a collection of all registered actions and allows any extension to:

- Register new actions.
- Deregister existing actions.
- Discover registered actions.

Here is an example of registering an action from Python that creates a new file when it is executed:

```python
action_registry = omni.kit.actions.core.get_action_registry()
actions_tag = "File Actions"
action_registry.register_action(
    extension_id,
    "new",
    omni.kit.window.file.new,
    display_name="File->New",
    description="Create a new USD stage.",
    tag=actions_tag,
)
```

For more examples, please consult the Python and C++ usage pages.

## User Guide

- Python Usage Examples
- C++ Usage Examples
- Changelog
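Once registered, an action can be discovered and run from anywhere else. A minimal sketch of executing the action registered above (assuming `extension_id` holds the same value used at registration):

```python
import omni.kit.actions.core

action_registry = omni.kit.actions.core.get_action_registry()

# Look up the action registered above and execute it ("fire and forget").
action = action_registry.get_action(extension_id, "new")
if action:
    action.execute()
```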
ActionGraph.md
# Action Graph

This extension is a collection of functionality required for OmniGraph Action Graphs.

- [Hands-on Introduction to Action Graphs](https://docs.omniverse.nvidia.com/app_code/prod_extensions/ext_omnigraph/tutorials/quickstart.html)
- [Action Graph Car Customizer Tutorial](https://docs.omniverse.nvidia.com/prod_extensions/prod_extensions/ext_omnigraph/car_customizer.html)
- [Converting Action Graph Nodes to IActionGraph](ConvertActionGraphNodesToAPI.html)

For a hands-on introduction to OmniGraph Action Graphs, see Action Graph Quickstart. For more comprehensive and thorough documentation on various OmniGraph features, see [OGN User Guide](../dev/ogn/ogn_user_guide.html#ogn-user-guide).

Action Graphs are composed of any number of separate chains of nodes, like deformer graphs. However, there are important differences that make Action Graphs better suited to particular applications.

## Event Sources

Action graphs are *event driven*, which means that each chain of nodes must start with an *Event Source* node. Each event source node can be thought of as an entry point of the graph. *Event Source* nodes are named with an *On* prefix; they never have an *execution* input attribute, and always have at least one output *execution* attribute.

Many event nodes don't need to compute every update. If that's the case, it is more efficient if the system can skip them until they are ready. For example, *On Keyboard Input* doesn't need to compute until a key press has been detected. These nodes can use the `compute-on-request` OGN scheduling hint in combination with `omni.graph.core.Node.request_compute()`.

```json
{
    "scheduling": ["compute-on-request"]
}
```

See [Scheduling Hints for OG Nodes](../dev/SchedulingHints.html#omnigraph-scheduling-hints) for more details about scheduling hints generally.

| Event Source Nodes |
| --- |
| [On Keyboard Input](../../../omni.graph.action_nodes/1.21.3/GeneratedNodeDocumentation/OgnOnKeyboardInput.html#omni-graph-action-onkeyboardinput) |
| [On Tick](../../../omni.graph.action_nodes/1.21.3/GeneratedNodeDocumentation/OgnOnTick.html#omni-graph-action-ontick) |
| [On Playback Tick](../../../omni.graph.action_nodes/1.21.3/GeneratedNodeDocumentation/OgnOnPlaybackTick.html#omni-graph-action-onplaybacktick) |
| On Impulse Event |
| On Object Change |
| On Custom Event |

## Execution Attributes

Action graphs make use of *execution*-type attributes. The *execution* evaluator works by following *execution* connections downstream and computing nodes it encounters until there are no more downstream connections to follow. The entire chain is executed to completion. When there is no downstream node, the execution terminates and the next node is popped off the *execution stack*.

Note that if there is more than one downstream connection from an *execution* attribute, each path will be followed in an undetermined order. Multiple downstream chains can be executed in a fixed order either by chaining the end of one to the start of the other, or by using the Sequence node.

## Flow Control

Many Action graphs will need to do different things depending on some state. In a python script you would use an *if* statement or a *while* loop to accomplish this. Similarly, in Action graph there are nodes which provide this branching functionality. Flow control nodes have more than one *execution* output attribute, which is used to branch the evaluation flow.
| Flow Control Nodes |
|--------------------|
| Branch |
| ForEach |
| For Loop |
| Flip Flop |
| Gate |
| Sequence |
| Delay |

## Latent Nodes (Computing over Time)

For some graphs it's useful to have a node that does its work over several update cycles. In other words, when that node is reached in the graph, the node does not need to complete what it is doing right away. For example, a node that downloads a file may take a few seconds. One way to accomplish this is to arrange for a node to be `ticked` every update so that it can start the download and then check for completion on subsequent computes. However, it's also possible to put the node into a `Latent` state in Action Graph. This allows the node to suspend the graph evaluation until it completes. An example of such a node is Delay, which suspends evaluation for some number of seconds.

Note that if the event node triggers again while the evaluation is suspended by a latent node, it will interrupt it. For example, if a key press triggers a Delay node, a subsequent key press will effectively reset the delay.

## Build Your Own

You can use the OmniGraph python or C++ APIs to implement your own nodes that are usable in Action graphs. The features that are particular to Action Graph are accessed in C++ through the `IActionGraph` interface.

## Convert Legacy Nodes

Converting Action Graph Nodes to IActionGraph explains what might need to change in older node implementations.
actions_Overview.md
# Overview

## Overview

The Omni Kit Actions Core extension is a framework for creating, registering, and discovering actions. Actions are programmable objects that can encapsulate anything which occurs in a fire and forget manner. An action can be created in one extension and then executed from anywhere else in the application; for example, a user could execute the same action via a UI Menu, Hot Key, or Gesture.

```mermaid
graph TD
    subgraph Interactions[Interactions]
        Interaction1(UI Menu)
        Interaction2(Hot Key)
        Interaction3(Gesture)
        Interaction4(Voice)
    end
    Extension[Extension] -->|Register| Action[Action]
    Interaction1 -->|Execute| Action[Action]
    Interaction2 -->|Execute| Action[Action]
    Interaction3 -->|Execute| Action[Action]
    Interaction4 -->|Execute| Action[Action]
```

## Actions

Actions can be:

- Created, registered, discovered, and executed from any extension in C++, Python, or a mix of the two.
- Created with associated metadata to aid in their discoverability and/or display in a UI.
- Executed in a ‘fire and forget’ manner.
- Executed using arbitrary parameters.

Actions are not stateful (although their end result may not manifest immediately), therefore they do not support undo/redo functionality.

## Action Registry

The Action Registry maintains a collection of all registered actions and allows any extension to:

- Register new actions.
- Deregister existing actions.
- Discover registered actions.

Here is an example of registering an action from Python that creates a new file when it is executed:

```python
action_registry = omni.kit.actions.core.get_action_registry()
actions_tag = "File Actions"
action_registry.register_action(
    extension_id,
    "new",
    omni.kit.window.file.new,
    display_name="File->New",
    description="Create a new USD stage.",
    tag=actions_tag,
)
```

For more examples, please consult the **Python** and **C++** usage pages.

## User Guide

* **Python Usage Examples**
* **C++ Usage Examples**
* **Changelog**

---
action_code_samples_cpp.md
# Action Code Samples - C++

This file contains a collection of examples for implementing OGN nodes that work in Action Graphs. The features that are particular to Action Graph are accessed in C++ through the `IActionGraph` interface. All the concepts of OmniGraph apply equally to Action Graph nodes. See [OGN User Guide](#ogn-user-guide).

`{note} The APIs used in these samples are available in kit-sdk version 105.1 or greater.`

## Contents

- [Action Code Samples - C++](#action-code-samples-c)
  - [Password Branch Node](#password-branch-node)
  - [OnSelect Node](#onselect-node)
  - [While Node](#while-node)
  - [DoN Node](#don-node)

## Password Branch Node

This example demonstrates branching the incoming control flow based on input data. The node activates the *opened* output if the password is correct.

```cpp
#include <omni/graph/action/IActionGraph.h>

#include <OgnPasswordDatabase.h>

class OgnPassword
{
public:
    static bool compute(OgnPasswordDatabase& db)
    {
        auto iActionGraph = omni::graph::action::getInterface();
        // enable the output execution if authorized
        if (db.inputs.password() == db.stringToToken("Mellon"))
            iActionGraph->setExecutionEnabled(outputs::opened.token(), kAccordingToContextIndex);
        else
            iActionGraph->setExecutionEnabled(outputs::denied.token(), kAccordingToContextIndex);
        return true;
    }
};

REGISTER_OGN_NODE()
```

[Python Version]

## OnSelect Node

This example demonstrates an event node that activates the *selected* output when the kit selection changes. The implementation has to consider instances of nodes vs authored nodes because there are potentially many instances of a given node computing concurrently. Each authored node has a subscription to the UsdContext event stream; in addition there is one Stamp in the authored node state, which is synchronized with many SyncStamp in the node instance states.

```cpp
// clang-format off
#include "UsdPCH.h"
// clang-format on

#include <omni/graph/action/IActionGraph.h>
#include <omni/kit/IApp.h>
#include <omni/usd/UsdContext.h>

#include <OgnOnSelectDatabase.h>

class OgnOnSelect
{
public:
    carb::ObjectPtr<carb::events::ISubscription> m_sub; // The stage event subscription handle
    omni::graph::exec::unstable::Stamp m_selectionChangedStamp; // The stamp set by the authoring node when the event occurs
    omni::graph::exec::unstable::SyncStamp m_selectionChangedSyncStamp; // The stamp used by each instance to sync with above

    static void initialize(const GraphContextObj& context, const NodeObj& nodeObj)
    {
        auto& authoringState = OgnOnSelectDatabase::sInternalState<OgnOnSelect>(nodeObj, kAuthoringGraphIndex);
        authoringState.m_sub = carb::events::createSubscriptionToPop(
            omni::usd::UsdContext::getContext()->getStageEventStream().get(),
            [nodeHandle = nodeObj.nodeHandle](carb::events::IEvent* e)
            {
                if (static_cast<omni::usd::StageEventType>(e->type) == omni::usd::StageEventType::eSelectionChanged)
                {
                    auto iNode = carb::getCachedInterface<omni::graph::core::INode>();
                    NodeObj nodeObj = iNode->getNodeFromHandle(nodeHandle);
                    if (nodeObj.isValid())
                    {
                        auto& authoringState =
                            OgnOnSelectDatabase::sInternalState<OgnOnSelect>(nodeObj, kAuthoringGraphIndex);
                        authoringState.m_selectionChangedStamp.next();
                    }
                }
            });
    }

    static bool compute(OgnOnSelectDatabase& db)
    {
        auto const& authoringState = OgnOnSelectDatabase::sInternalState<OgnOnSelect>(db.abi_node(), kAuthoringGraphIndex);
        auto& localState = db.internalState<OgnOnSelect>();
        if (localState.m_selectionChangedSyncStamp.makeSync(authoringState.m_selectionChangedStamp))
        {
            auto iActionGraph = omni::graph::action::getInterface();
            iActionGraph->setExecutionEnabled(outputs::selected.token(), kAccordingToContextIndex);
        }
        return true;
    }
};

REGISTER_OGN_NODE()
```

[Python Version]

## While Node

This example demonstrates activating an output several times in one update. The node activates the `loopBody` output while the condition is true, and finally activates the `finished` output.

```cpp
#include <omni/graph/action/IActionGraph.h>

#include <OgnWhileDatabase.h>

class OgnWhile
{
public:
    static bool compute(OgnWhileDatabase& db)
    {
        auto iActionGraph = omni::graph::action::getInterface();
        auto keepGoing = db.inputs.keepGoing();
        // enable the output execution if authorized
        if (keepGoing)
            iActionGraph->setExecutionEnabledAndPushed(outputs::loopBody.token(), kAccordingToContextIndex);
        else
            iActionGraph->setExecutionEnabled(outputs::finished.token(), kAccordingToContextIndex);
        return true;
    }
};

REGISTER_OGN_NODE()
```

[Python Version]

## DoN Node

This example demonstrates a node that enters a latent state for N ticks before triggering the finished output. While counting down, the evaluation will be "paused", but it will continue to activate a 'tick' output. The node logic is: if `state:count` is at the initial value (0), start the latent state. If the count has reached n, end the latent state and trigger the output. This is done with `omni::graph::action::IActionGraph::startLatentState` and `omni::graph::action::IActionGraph::endLatentState`.

```cpp
#include <omni/graph/action/IActionGraph.h>

#include <OgnDoNDatabase.h>

class OgnDoN
{
public:
    static bool compute(OgnDoNDatabase& db)
    {
        auto iActionGraph = omni::graph::action::getInterface();
        auto count = db.state.count();
        auto n = db.inputs.n();
        if (count == 0)
        {
            iActionGraph->startLatentState(kAccordingToContextIndex);
            db.state.count() += 1;
        }
        else if (count >= n)
        {
            db.state.count() = 0;
            iActionGraph->endLatentState(kAccordingToContextIndex);
            iActionGraph->setExecutionEnabled(outputs::finished.token(), kAccordingToContextIndex);
        }
        else
        {
            db.state.count() += 1;
            iActionGraph->setExecutionEnabled(outputs::tick.token(), kAccordingToContextIndex);
        }
        return true;
    }
};

REGISTER_OGN_NODE()
```

[Python Version]
action_code_samples_python.md
# Action Graph Code Samples - Python

This file contains a collection of examples for implementing OGN nodes that work in Action Graphs. The features that are particular to Action Graph are accessed with `omni.graph.action.IActionGraph`. All the concepts of OmniGraph apply equally to Action Graph nodes. See [OGN User Guide](#ogn-user-guide).

`{note} The APIs used in these samples are available in kit-sdk version 105.1 or greater.`

## Contents

- [Action Graph Code Samples - Python](#action-graph-code-samples-python)
  - [Password Branch Node](#password-branch-node)
  - [OnSelect Node](#onselect-node)
  - [While Node](#while-node)
  - [DoN Node](#don-node)

## Password Branch Node

This example demonstrates branching the incoming control flow based on input data. The node activates the *opened* output if the password is correct.

```python
from omni.graph.action import get_interface  # import the ActionGraph API

class OgnPassword:
    @staticmethod
    def compute(db) -> bool:
        password = db.inputs.password
        # enable the output execution if authorized
        if password == "Mellon":
            get_interface().set_execution_enabled("outputs:opened")
        else:
            get_interface().set_execution_enabled("outputs:denied")
        return True
```

## OnSelect Node

This example demonstrates an event node that activates the **selected** output when the kit selection changes. Note that this is simpler than the C++ implementation because we use one subscription per node instance instead of sharing the subscription between instances.

```python
import carb
import omni.kit.app
import omni.usd

from com.myextension.ogn.OgnOnSelectDatabase import OgnOnSelectDatabase  # The generated database class
from omni.graph.action import get_interface

class OgnOnSelectInternalState:
    """Convenience class for maintaining per-node state information"""

    def __init__(self):
        self.sub = None  # The stage-event subscription holder
        self.selection_changed = False  # Set to True when a selection change has happened

    def first_time_subscribe(self):
        """Set up the stage event subscription"""
        usd_context = omni.usd.get_context()
        events = usd_context.get_stage_event_stream()
        self.sub = events.create_subscription_to_pop(self._on_stage_event)

    def _on_stage_event(self, e: carb.events.IEvent):
        """The event callback"""
        if e is None:
            return
        if e.type == int(omni.usd.StageEventType.SELECTION_CHANGED):
            self.selection_changed = True

# -----------------------------------------------------------------------------
class OgnOnSelect:
    @staticmethod
    def internal_state():
        """Returns an object that will contain per-node state information"""
        return OgnOnSelectInternalState()

    @staticmethod
    def release(node):
        """Clean up the subscription when node is removed"""
        state = OgnOnSelectDatabase.per_node_internal_state(node)
        if state.sub:
            state.sub.unsubscribe()
        state.sub = None

    @staticmethod
    def compute(db) -> bool:
        state = db.state.internal_state
        if state.sub is None:
            # The initial compute call, set up our subscription
            state.first_time_subscribe()

        if state.selection_changed:
            state.selection_changed = False
            get_interface().set_execution_enabled("outputs:selected")
        return True
```

## While Node

This example demonstrates activating an output several times in one update. The node activates the **loopBody** output while the condition is true, and finally activates the **finished** output.
```python
from omni.graph.action import get_interface

class OgnWhile:
    @staticmethod
    def compute(db) -> bool:
        keep_going = db.inputs.keepGoing
        # enable the output execution if authorized
        if keep_going:
            get_interface().set_execution_enabled_and_pushed("outputs:loopBody")
        else:
            get_interface().set_execution_enabled("outputs:finished")
        return True
```

## DoN Node

This example demonstrates a node that enters a latent state for N ticks before triggering the `finished` output. While counting down, the evaluation will be "paused". The node logic is: if `state:count` is at the initial value (0), start the latent state. If the count has reached n, end the latent state and trigger the output. This is done with `omni.graph.action.IActionGraph.start_latent_state` and `omni.graph.action.IActionGraph.end_latent_state`.

```python
from omni.graph.action import get_interface

class OgnDoN:
    @staticmethod
    def compute(db) -> bool:
        count = db.state.count
        n = db.inputs.n
        if count == 0:
            get_interface().start_latent_state()
            db.state.count += 1
        elif count >= n:
            db.state.count = 0
            get_interface().end_latent_state()
            get_interface().set_execution_enabled("outputs:finished")
        else:
            get_interface().set_execution_enabled("outputs:tick")
            db.state.count += 1
        return True
```

[C++ Version]
activities-tab_Overview.md
# Overview

omni.activity.ui is an extension created to display progress and activities. It is the new generation of the activity monitor, replacing the old omni.kit.activity.widget.monitor extension, which had limited information on activities and didn't always provide accurate information about progress.

Current work on omni.activity.ui focuses on loading activities, but the solution is general and will be extended to unload and frame activities in the future.

There are currently two visual presentations for the activities. The main one is the Activity Progress window, which can be manually enabled from the menu: Window->Utilities->Activity Progress:

It can also be shown when the user clicks the status bar's progress area at the bottom right of the app. The Activity Progress window will be docked and on focus as a standard window.

The other window, the Activity Timeline window, can be enabled through the drop-down menu: Show Timeline, by clicking the hamburger button on the top right corner of the Activity Progress window.

Closing either window shouldn't affect the activities. When the window is re-enabled, the data will pick up from the model to show the current progress.

## Activity Progress Window

The Activity Progress window shows a simplified user interface about activity information that can be easily understood and displayed. There are two tabs in the Activity Progress window: Progress Bar and Activity. They share the same data model and present the data in two ways.

The Activity Progress window shows the total loading file number and total time at the bottom. The rotating spinner indicates whether loading is in progress. The total progress bar shows the current loading activity and the overall progress of the loading. The overall progress is linked to the progress of the status bar.

### Progress Bar Tab

The Progress Bar Tab focuses on the overall progress of the loading of USD, Material and Texture, which are normally the most time-consuming activities. We display the total loading size and speed for each category. The user can easily see how many of the files have been loaded vs the total numbers we've traced. All these numbers are dynamically updated when the data model changes.

### Activity Tab

The Activity Tab displays loading activities in the order of the most recent update. It is essentially a flattened treeview. It details the file name, loading duration and file size, if relevant. When you hover over a tree item, you will see more detailed information about the file path, duration and size.

## Activity Timeline Window

The Activity Timeline window gives advanced users (mostly developers) more details to explore. It currently shows 6 threads: Stage, USD, Textures, Render Thread, Meshes and Materials. It also contains two tabs: Timeline and Activities. They share the same data model but have two different data presentations, which help users understand the information from different perspectives.

### Timeline Tab

In the Timeline Tab, each thread is shown as a growing rectangle bar on its own lane, but the SubActivities for those are "bin packed" to use as few lanes as possible even when they are on many threads. Each timeline block represents an activity, whose width shows the duration of the activity. Different activities are color coded to provide better visual results and help users understand the data more intuitively. Hovering over an item shows more detailed information about the path, duration and size.
Users can double click to see what's happening in each activity; double clicking with shift will expand/collapse all. Dragging with the right mouse button pans the timeline view vertically and horizontally. Scrolling the middle mouse wheel zooms the timeline range in and out. Users can also middle-click twice to select a time range, which will zoom to fit the Timeline window. This will filter the activities treeview under the Activities Tab to only show the items which are within the selected time range. Here is an image showing a time range selection:

### Activities Tab

The data is presented in a regular treeview with the 6 threads as the root items. Each thread activity shows its direct subActivities. Users can also see the duration, start time, end time and size information for each activity. Hovering over an item shows more detailed information. The expansion and selection status are synced between the Timeline Tab and Activities Tab.

## Save and Load Activity Log

Both the Activity Progress window and the Activity Timeline window have a hamburger button on the top right corner where you can save, or choose to open, an .activity or .json log file. The saved log file has the same data from both windows, and it records all the activities happening for a certain stage. When you open the same .activity file from different windows, you get a different visual representation of the data.

A typical activity entry looks like this:

```json
{
    "children": [],
    "events": [
        {
            "size": 2184029,
            "time": 133129969539325474,
            "type": "BEGAN"
        },
        {
            "size": 2184029,
            "time": 133129969540887869,
            "type": "ENDED"
        }
    ],
    "name": "omniverse://ov-content/NVIDIA/Samples/Marbles/assets/standalone/SM_board_4/SM_board_4.usd"
}
```

This is really useful to send to others for debugging purposes, e.g. finding the performance bottleneck of stage loading or spotting a problematic texture.

## Dependencies

This extension depends on two core activity extensions: omni.activity.core and omni.activity.pump. omni.activity.core is the core activity progress processor, which defines the activity and event structure and provides APIs to subscribe to the events dispatched on the stream. omni.activity.pump makes sure the activity and the progress get pumped every frame.
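Because the saved log is plain JSON, it can also be post-processed outside of Kit. A small sketch that walks entries of the form shown above and prints the longest activities (interpreting the timestamps as 100-nanosecond ticks is an assumption, as is the file name):

```python
import json

def activity_durations(entry, results=None):
    """Recursively collect (name, duration-in-seconds) from an activity log entry."""
    if results is None:
        results = []
    began = [e["time"] for e in entry.get("events", []) if e["type"] == "BEGAN"]
    ended = [e["time"] for e in entry.get("events", []) if e["type"] == "ENDED"]
    if began and ended:
        # Timestamps appear to be 100 ns ticks; treating them as such is an assumption.
        results.append((entry.get("name", "?"), (max(ended) - min(began)) / 1e7))
    for child in entry.get("children", []):
        activity_durations(child, results)
    return results

with open("stage_load.activity") as f:  # hypothetical saved log file
    root = json.load(f)

# Print the ten longest activities, which are likely loading bottlenecks.
for name, seconds in sorted(activity_durations(root), key=lambda x: -x[1])[:10]:
    print(f"{seconds:8.3f}s  {name}")
```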
activity-timeline-window_Overview.md
# Overview

## Activity Progress Window

### Progress Bar Tab

### Activity Tab

## Activity Timeline Window

### Timeline Tab

Users can double click to see what's happening in each activity; double clicking with shift will expand/collapse all. Dragging with the right mouse button pans the timeline view vertically and horizontally. Scrolling the middle mouse wheel zooms the timeline range in and out. Users can also middle-click twice to select a time range, which will zoom to fit the Timeline window. This will filter the activities treeview under the Activities Tab to only show the items which are within the selected time range. Here is an image showing a time range selection:

### Activities Tab

The data is presented in a regular treeview with the 6 threads as the root items. Each thread activity shows its direct subActivities. Users can also see the duration, start time, end time and size information for each activity. Hovering over an item shows more detailed information. The expansion and selection status are synced between the Timeline Tab and Activities Tab.

## Save and Load Activity Log

Both the Activity Progress window and the Activity Timeline window have a hamburger button on the top right corner where you can save, or choose to open, an .activity or .json log file. The saved log file has the same data from both windows, and it records all the activities happening for a certain stage. When you open the same .activity file from different windows, you get a different visual representation of the data.

A typical activity entry looks like this:

```json
{
    "children": [],
    "events": [
        {
            "size": 2184029,
            "time": 133129969539325474,
            "type": "BEGAN"
        },
        {
            "size": 2184029,
            "time": 133129969540887869,
            "type": "ENDED"
        }
    ],
    "name": "omniverse://ov-content/NVIDIA/Samples/Marbles/assets/standalone/SM_board_4/SM_board_4.usd"
}
```

This is really useful to send to others for debugging purposes, e.g. finding the performance bottleneck of stage loading or spotting a problematic texture.

## Dependencies

This extension depends on two core activity extensions: omni.activity.core and omni.activity.pump. omni.activity.core is the core activity progress processor, which defines the activity and event structure and provides APIs to subscribe to the events dispatched on the stream. omni.activity.pump makes sure the activity and the progress get pumped every frame.
add-extensions_app_from_scratch.md
# Develop a Simple App This section provides an introduction to Application development and presents important foundational knowledge: - How Applications and Extensions are defined in `.kit` and `.toml` files. - How to explore existing Extensions and adding them to your Application. - How user settings can override Application configurations. - Controlling Application window layout. ## Kit and Toml Files If you have developed solutions before you are likely to have used configuration files. Configuration files present developers with a “low-code” approach to changing behaviors. With Kit SDK you will use configuration files to declare: - Package metadata - Dependencies - Settings Kit allows Applications and Services to be configured via `.kit` files and Extensions via `.toml` files. Both files present the same ease of readability and purpose of defining a configuration - they simply have different file Extensions. Let’s create a `.kit` file and register it with the build system: 1. Create a Kit file: 1. Create a file named `my_company.my_app.kit` in `.\source\apps`. 2. Add this content to the file: ```toml [package] title = "My App" description = "An Application created from a tutorial." version = "2023.0.0" [dependencies] "omni.kit.uiapp" = {} [settings] app.window.title = "My App" [[test]] args = [ "--/app/window/title=My Test App", ] ``` 2. Configure the build tool to recognize the new Application: 1. Open `.\premake5.lua`. 2. Find the section `-- Apps:`. 3. Add an entry for the new app: 1. Define the application: ``` define_app("my_company.my_app") ``` 2. Run the `build` command. 3. Start the app: - Windows: ``` .\_build\windows-x86_64\release\my_company.my_app.bat ``` - Linux: ``` ./_build/linux-x86_64/release/my_company.my_app.sh ``` 4. Congratulations, you have created an Application! 5. Let’s review the sections of `.kit` and `.toml` files: ### Package This section provides information used for publishing and displaying information about the Application/Extension. For example, `version = "2023.0.0"` is used both in publishing and UI: a publishing process can alert a developer that the given version has already been published and the version can be shown in an “About Window” and the Extension Manager. ```toml [package] title = "My App" description = "An Application created from a tutorial." version = "2023.0.0" ``` ### Dependencies Dependencies section is a list of Extensions used by the Application/Extension. The above reference `"omni.kit.uiapp" = {}` points to the most recent version available but can be configured to use specific versions. Example of an Extension referenced by a specific version: ```toml "omni.kit.converter.cad" = {version = "200.1", exact = true} ``` ### Settings Settings provide a low-code mechanism to customize Application/Extension behavior. Some settings modify UI and others modify functionality - it all depends on how an Application/Extension makes use of the setting. An Omniverse developer should consider exposing settings to developers - and end users - to make Extensions as modular as possible. ```toml [settings] app.window.title = "My App" ``` #### Experiment Change the title to `My Company App` - `app.window.title = "My Company App"` - and run the app again - still, no build required. Note the Application title bar shows the new name. ### Test The test section can be thought of as a combined dependencies and settings section. It allows adding dependencies and settings for when running an Application and Extension in test mode. 
```toml
[[test]]
args = [
    "--/app/window/title=My Test App",
]
```

Note: Reference: Testing Extensions with Python. Reference: .kit and .toml configurations.

The Extension Manager window is a tool for developers to explore Extensions created on the Omniverse platform. It lists Extensions created by NVIDIA and the Omniverse community, and can be configured to list Extensions that exist on a local workstation. Let's add the Extension Manager to the app so we can look for dependencies to add.

1. Add Extension Manager.
   - Open `.\source\apps\my_company.my_app.kit`.
   - Add dependency `omni.kit.window.extensions`. The dependencies section should read:

```toml
[dependencies]
"omni.kit.uiapp" = {}
"omni.kit.window.extensions" = {}
```

   - In order to point the Extension Manager to the right Extension Registry we need to add the following settings:

```toml
# Extension Registries
[settings.exts."omni.kit.registry.nucleus"]
registries = [
    { name = "kit/default", url = "https://ovextensionsprod.blob.core.windows.net/exts/kit/prod/shared" },
    { name = "kit/sdk", url = "https://ovextensionsprod.blob.core.windows.net/exts/kit/prod/sdk/${kit_version_short}/${kit_git_hash}" },
]
```

   - Observe that - once you save the source kit file - the corresponding kit file in the build directory was updated as well. This is due to the use of symlinks. A build is not necessary when editing .kit files. See:
     - Windows: `.\_build\windows-x86_64\release\apps\my_company.my_app.kit`
     - Linux: `./_build/linux-x86_64/release/apps/my_company.my_app.kit`

2. Explore Extensions in Extension Manager.
   - Start the app:
     - Windows: `.\_build\windows-x86_64\release\my_company.my_app.bat`
     - Linux: `./_build/linux-x86_64/release/my_company.my_app.sh`
   - Open Extension Manager: Window > Extensions.
   - Please allow Extension Manager to sync with the Extension Registry. The listing might not load instantly.
   - Search for `graph editor example`. The Extension Manager should list `omni.kit.graph.editor.example` in the NVIDIA tab.
   - Click `INSTALL`.
   - Click the toggle `DISABLED` to enable the Extension.
   - Check `AUTOLOAD`.
   - Close the app and start it again.
   - Observe that the *Graph Editor Example* Extension is enabled. Look at the `[dependencies]` section in `.\source\apps\my_company.my_app.kit`. The `omni.kit.graph.editor.example` Extension is not listed. The point here is to make it clear that when an Extension is enabled by a user in the Extension Manager, the dependency is **NOT** added to the Application `.kit`.

1. **User Settings**
   - Kit Applications allow a user's choices and settings to persist between sessions. Configurations like these are stored in a `user.config.json` file. The default location for this file is the `DATA PATH`: `[data path]\Kit\[application name]\[application version]\user.config.json`.
   - If you are not using Omniverse Launcher - or you cannot find the location of the `user.config.json` file for whatever reason - look at the first lines of the Application log. Look for the line that starts with `Loading user config located at:`.
   - Inspecting the file for this tutorial, you should see this section:

```json
"exts": {
    "enabled": {
        "0": "omni.kit.graph.editor.example-1.0.22"
    }
}
```

   - When you uncheck `AUTOLOAD` for `omni.kit.graph.editor.example` in the Extension Manager, you'll notice the `user.config.json` file no longer lists the Extension as enabled.
- **Note:** Because the config file can store both custom enabled Extensions and custom settings, as a developer you may want to at times delete your `user.config.json` file for the app you are developing to make sure you are viewing the Application like an end user will when they use it in its default state.
- **Note:** As a developer, we recommend that you refrain from using the `AUTOLOAD` functionality unless you truly want to enable an Extension without making it permanently available in the Application.

2. **Dependency Hierarchy**
   - While the selected Extension is `ENABLED`, select the `DEPENDENCIES` tab and click the `Toggle View` button. It might take a few seconds for the UI to refresh.
   - The Extension Manager presents up and downstream dependencies. This is useful for discovering how one Extension is composed of many. It's also a convenient way to find where an Extension is being referenced from.

3. **Explore Community Extensions**
   - Extensions developed by the community are not really any different from NVIDIA Extensions; they are just stored in a different location and have not been vetted by NVIDIA the same way NVIDIA Extensions have.
   - There are two settings that need to be added to the Application to make use of community Extensions:
     - Add `app.extensions.installUntrustedExtensions = true` to enable the app to install and load community Extensions.

```toml
[settings]
app.window.title = "My Company App"
app.extensions.installUntrustedExtensions = true
```

     - Add the URL to the Community Extension Registry by modifying the `registries` setting.

```toml
# Extension Registries
[settings.exts."omni.kit.registry.nucleus"]
registries = [
    { name = "kit/default", url = "https://ovextensionsprod.blob.core.windows.net/exts/kit/prod/shared" },
]
```

   - Restart the app and allow Extension Manager to sync with the Extension Registry. The listing might not load instantly. You can now experiment by adding community Extensions such as `"heavyai.ui.component" = {}` to the `[dependencies]` section.

## Add Extensions

Let's assume we found a few Extensions we want to use. Add the below `[dependencies]` section to the `my_company.my_app.kit` Application. The Extension Manager has been removed since that is a developer tool.

```toml
[dependencies]
"omni.kit.uiapp" = {}

# Viewport
"omni.kit.viewport.bundle" = {}

# Render Settings
"omni.rtx.window.settings" = {}

# Content Browser
"omni.kit.window.content_browser" = {}

# Stage Inspector
"omni.kit.window.stage" = {}

# Layer Inspector
"omni.kit.widget.layers" = {}

# Toolbar. Setting load order so that it loads last.
"omni.kit.window.toolbar" = { order = 1000 }

# Properties Inspector
"omni.kit.property.bundle" = {}

# DX shader caches (windows only)
[dependencies."filter:platform"."windows-x86_64"]
"omni.rtx.shadercache.d3d12" = {}
```

## Application Layout

The Application window layout is fairly organized already, but let's take care of the floating Content Browser by docking it below the viewport window.

### Add a Resource Extension

Extensions do not need to provide code. We use so-called "resource Extensions" to provide assets, data, and anything else that can be considered a resource. In this example we create it to provide a layout file.

1. Create a new Extension using the `repo template new` command (command cheat-sheet).
   1. For `What do you want to add` choose `extension`.
   2. For `Choose a template` choose `python-extension-simple`.
   3. Enter a new name: `my_company.my_app.resources`. Do not use the default name, and do not include URL or image links.
   4. Leave version as `0.1.0`.
2. The new Extension is created in `.\source\extensions\my_company.my_app.resources`.
3. Add a `layouts` directory inside `my_company.my_app.resources`. We'll be adding a resource file here momentarily.
4. Configure the build to pick up the `layouts` directory by adding `{ "layouts", ext.target_dir.."/layouts" },` in the Extension's `.\my_company.my_app.resources\premake5.lua` file:

```lua
-- Use folder name to build Extension name and tag. Version is specified explicitly.
local ext = get_current_extension_info()

-- That will also link whole current "target" folder into as extension target folder:
project_ext(ext)
repo_build.prebuild_link {
    { "data", ext.target_dir.."/data" },
    { "docs", ext.target_dir.."/docs" },
    { "layouts", ext.target_dir.."/layouts" },
    { "my_company", ext.target_dir.."/my_company" },
}
```

## Configure App to Recognize Extensions

By default, Extensions that are part of the Kit SDK will be recognized by Applications. When we add Extensions like the one above, we need to add paths to the Application's .kit file. The below adds the paths for these additional Extensions. Note the use of `${app}` as a token. This will be replaced with the path to the app at runtime.

Add this to `my_company.my_app.kit`:

```toml
[settings.app.exts]
# Add additional search paths for dependencies.
folders.'++' = [
    "${app}/../exts",
    "${app}/../extscache/"
]
```

**Note:** Reference: [Tokens](https://docs.omniverse.nvidia.com/kit/docs/kit-manual/latest/guide/tokens.html)

## Configure App to Provide Layout Capabilities

Add these Extensions to the `my_company.my_app.kit` `[dependencies]` section. `omni.app.setup` provides layout capabilities.

```toml
# Layout capabilities
"omni.app.setup" = {}

# App resources
"my_company.my_app.resources" = {}
```

## Create a Layout File

1. Run a build to propagate the new Extension to the built solution and start the app.
2. Drag and drop the `Content Browser` on top of the lower docker manipulator within the `Viewport` window.
3. Save the layout:
   - Use the menu `Window` > `Layout` > `Save Layout...` command.
   - Save the layout as `.\source\extensions\my_company.my_app.resources\layouts\layout.json`.

## Use Layout

1. Add this to the `my_company.my_app.kit` file's `[settings]` section. Again, here we are using a token: `${my_company.my_app.resources}`. That token is replaced with the path to the Extension at runtime.

```toml
app.kit.editor.setup = true
app.layout.default = "${my_company.my_app.resources}/layouts/layout.json"
```

2. Run a build so the `layouts` directory with its `layout.json` file is created in the `_build` directory structure.
3. Run the Application again and see the `Content Browser` being docked.

A developer can provide end users with different layouts - or `workflows`. This topic can be further explored in the omni.app.setup reference.

You now have an Application and could skip ahead to the Package App and Publish App sections; however, this tutorial now continues with a more advanced example: Develop a USD Explorer App.
add-language-from-kit-file_OVERVIEW.md
# Overview

## Omniverse Language & Font extension [omni.kit.language.core]

This extension has support for changing the font.

```python
omni.kit.language.core.register_language(("ja_JP", "Japanese", "日本語"),
                                         f"{extension_path}/data/fonts/NotoSansJP-SemiBold.ttf",
                                         1.2,
                                         [ f"{extension_path}/data/regions/japanese.txt",
                                           f"{extension_path}/data/regions/japanese_extended.txt" ],
                                         "いろはにほへと ちりぬるを わかよたれそ つねならむ うゐのおくやま けふこえて あさきゆめみし ゑひもせす")
```

## Change locale_id from python

You can set the current locale_id via the setting “/persistent/app/locale_id”, e.g.

```python
carb.settings.get_settings().set("/persistent/app/locale_id", "de_DE")
```

But you will have to restart Kit, or set it in the .kit file, as this setting has to be applied before the font is loaded.

## Change locale_id from .kit file

You can force the current locale_id via the setting “persistent.app.locale_id”, e.g.

```toml
[settings.persistent]
app.locale_id = "ja_JP"
```

NOTE: This will be changeable in Language Preferences, but the .kit file will always reset the value when Kit restarts.

## Add language from .kit file

```toml
[[settings.exts."omni.kit.language.core".language]]
name = "Japanese"
locale_info = ["ja_JP", "Japanese", "日本語"]
font_path = "${fonts}/NotoSansJP-SemiBold.ttf"
font_scale = 1.2
regions = [ "${kit}/exts/omni.kit.renderer.imgui/data/regions/japanese.txt",
            "${kit}/exts/omni.kit.renderer.imgui/data/regions/japanese_extended.txt" ]
pangram = "いろはにほへと ちりぬるを わかよたれそ つねならむ うゐのおくやま けふこえて あさきゆめみし ゑひもせす"
```

NOTE: For this to work, font and region paths have to be resolvable by Kit, which is why `${fonts}` and `${kit}` are used. Also note that NotoSansJP-SemiBold.ttf isn't currently in the font resources and is shown for demonstration only, although it can be added.
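To verify the active value at runtime, the setting can be read back through carb.settings:

```python
import carb.settings

settings = carb.settings.get_settings()

# Read the persistent locale setting; returns None if it has never been set.
locale_id = settings.get("/persistent/app/locale_id")
print(f"Current locale_id: {locale_id}")  # e.g. "ja_JP"
```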
add-menu_Overview.md
# Overview

This is the context menu used in the stage and viewport windows. For documentation on creating/adding to the context menu, see the documentation for omni.kit.widget.context_menu.

The supported functions are:

## get_widget_instance

Gets the instance of the omni.kit.widget.context_menu class.

## get_instance

Gets the instance of the context menu class.

## close_menu

Closes the currently open context menu. Used by tests so as not to leave the context menu in a bad state.

## reorder_menu_dict

Reorders menus using the “appear_after” value in the menu.

## post_notification

Posts a notification via omni.kit.notification_manager.

## get_hovered_prim

Gets the prim currently under the mouse cursor, or None.

## add_menu

Adds a custom menu to any context_menu.

## get_menu_dict

Gets custom menus; returns a list of dictionaries containing custom menu settings.

## get_menu_event_stream

Gets the menu event stream.

## Example of stage context menu
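As a sketch of `add_menu`, the following registers a custom entry; the menu dictionary keys follow the conventions documented for omni.kit.widget.context_menu, while the entry name, glyph, and callback here are hypothetical:

```python
import omni.kit.context_menu

def show_prim_paths(objects: dict):
    # Print the paths of the currently selected prims (hypothetical callback).
    print(objects.get("prim_list", []))

menu_dict = {
    "name": "Show Prim Paths",                                  # hypothetical entry name
    "glyph": "menu_search.svg",                                 # hypothetical icon
    "show_fn": lambda objects: bool(objects.get("prim_list")),  # only show when prims are selected
    "onclick_fn": show_prim_paths,
}

# Keep a reference to the returned object; the entry is removed when it is released.
menu_entry = omni.kit.context_menu.add_menu(menu_dict, "MENU")
```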
add-to-add-menu_Overview.md
# Overview

## Overview

This is the low level context menu that drives omni.kit.context_menu and other extensions. This is a widget.

### Implement a context menu in window

```python
import omni.ui as ui
import omni.usd
import omni.kit.widget.context_menu
from omni.kit.widget.context_menu import DefaultMenuDelegate

# custom menu delegate
class MyMenuDelegate(DefaultMenuDelegate):
    def get_parameters(self, name, kwargs):
        if name == "tearable":
            kwargs[name] = False

class MyContextMenu():
    def __init__(self, window: ui.Window):
        # set window to call _on_mouse_released on mouse release
        window.frame.set_mouse_released_fn(self._on_mouse_released)
        # create menu delegate
        self._menu_delegate = MyMenuDelegate()

    def _on_mouse_released(self, x, y, button, m):
        """Called when the user presses & releases the mouse button"""
        if button == 0:  # right mouse button only
            return

        # setup objects, this dictionary passed to all functions
        objects = {
            "stage": omni.usd.get_context().get_stage(),
            "prim_list": omni.usd.get_context().get_selection().get_selected_prim_paths(),
            "menu_xpos": x,
            "menu_ypos": y,
        }

        # setup context menus
        submenu_list = [
            {"name": "Sub Test Menu", "glyph": "menu_save.svg", "show_fn": [self.is_prim_selected], "onclick_fn": self.test_menu_clicked}
        ]

        # menu items
        menu_items = [
            {
                "name": "Sub Copy Menu",
                "glyph": "gamepad.svg",
                "onclick_fn": self.copy_menu_clicked
            },
        ]

        # context menu lists
        add_list = omni.kit.widget.context_menu.get_menu_dict("ADD", "")
        create_list = omni.kit.widget.context_menu.get_menu_dict("CREATE", "")

        # main menu list
        menu_list = [
            {
                "name": "Test Menu",
                "glyph": "menu_rename.svg",
                "show_fn": [self.is_prim_selected],
                "onclick_fn": self.test_menu_clicked
            },
            {
                "name": "Copy Menu",
                "glyph": "menu_link.svg",
                "onclick_fn": self.copy_menu_clicked
            },
            {
                "name": "",
                "header": "More things..."
            },
            {
                'name': {
                    'Things': submenu_list
                },
                "glyph": "menu_flow.svg",
            },
            {
                'name': {
                    'Add': add_list
                },
                "glyph": "menu_audio.svg",
            },
            {
                'name': {
                    'Create': create_list
                },
                "glyph": "physics_dark.svg",
            },
        ]

        # show menu
        omni.kit.widget.context_menu.get_instance().show_context_menu(
            "My test context menu", objects=objects, menu_list=menu_list, delegate=self._menu_delegate
        )

    # show_fn functions
    def is_prim_selected(self, objects: dict):
        return bool(objects["prim_list"])

    # click functions
    def copy_menu_clicked(self, objects: dict):
        print("copy_menu_clicked")
        # add code here

    def test_menu_clicked(self, objects):
        print("test_menu_clicked")
        # add code here
```

### Add to create menu

```python
menu_dict = {
    'glyph': f"{EXTENSION_FOLDER_PATH}/data/fish_icon.svg",
    'name': 'Fish',
    'onclick_fn': on_create_fish
}
self._context_menu = omni.kit.widget.context_menu.add_menu(menu_dict, "CREATE")
```

### Add to add menu

```python
menu_dict = {
    'glyph': f"{EXTENSION_FOLDER_PATH}/data/cheese_icon.svg",
    'name': 'Cheese',
    'onclick_fn': on_create_cheese
}
self._context_menu = omni.kit.widget.context_menu.add_menu(menu_dict, "ADD")
```

## Supported Parameters by Context Menu Dictionary

- “name” is the name shown on the menu. (If name is “” then a menu ui.Separator is added; can be combined with show_fn.)
- “glyph” is the icon shown on the menu; full paths to extensions can be used.
- “name_fn” function to get the menu item name.
- “show_fn” function or list of functions used to decide if the menu item is shown. All functions must return True to show.
- “enabled_fn” function or list of functions used to decide if the menu item is enabled. All functions must return True to be enabled.
- “onclick_fn” function to be called when the user clicks the menu item.
- “onclick_action” action to be called when the user clicks the menu item.
- “checked_fn” function returns True/False and shows a solid/grey tick.
- “header” can be used with a name of “” to show a named ui.Separator.
- “populate_fn” a function to be called to populate the menu. Can be combined with show_fn.
- “appear_after” an identifier of a menu name. Used by custom menus to allow the custom menu to change its order.
- “show_fn_async” an async function that sets the item's visible flag. These behave differently from show_fn callbacks: the item is created regardless and has its visibility set to False; it is then up to the show_fn_async callback to set the visible flag to True if required.

```python
menu_list = [
    {"name": "Test Menu", "glyph": "menu_rename.svg", "show_fn_async": is_item_checkpointable, "onclick_fn": self.test_menu_clicked},
]

async def is_item_checkpointable(objects: dict, menu_item: ui.MenuItem):
    """
    async show function. The `menu_item` is created but not visible; if this item is to be shown,
    then set `menu_item.visible = True`.
    This checks whether the item's server supports checkpoints and, if so, sets
    `menu_item.visible = True` so the menu item appears.
    """
    if "item" in objects:
        path = objects["item"].path
        if VersioningHelper.is_versioning_enabled():
            if await VersioningHelper.check_server_checkpoint_support_async(
                VersioningHelper.extract_server_from_url(path)
            ):
                menu_item.visible = True
```
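For instance, a custom entry can position itself relative to an existing item via “appear_after”; the entry name, glyph, and target item below are hypothetical:

```python
import omni.kit.widget.context_menu

menu_dict = {
    "name": "Export Selected",   # hypothetical entry name
    "glyph": "menu_save.svg",    # hypothetical icon
    "appear_after": "Copy",      # place this entry after an existing "Copy" item
    "onclick_fn": lambda objects: print(objects.get("prim_list", [])),
}

# Keep a reference to the returned object; the entry is removed when it is released.
entry = omni.kit.widget.context_menu.add_menu(menu_dict, "ADD")
```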
add-to-the-primpathwidget_Overview.md
# Overview

## Overview

The omni.kit.window.property extension offers a window that displays properties and enables users to modify them. Additional features such as property filtering for convenient searching are included, as well as convenience functions to add new properties. USD-related functionalities are incorporated through omni.kit.property.usd.

Building upon omni.kit.window.property, omni.kit.property.usd specializes in managing USD properties, including basic Usd.Prim information showcased through the PrimPathWidget, which is also used in the example below.

## Register new property window handler with non-schema properties

This example is also available in the Extension Manager as omni.kit.property.example.

What does this code do?

- Registers a "prim" handler named "example_properties" that uses the ExampleAttributeWidget class to build the UI.
- Removes the "prim" handler named "example_properties" on shutdown.
- Defines the `ExampleAttributeWidget` class, which overrides the `on_new_payload()` & `_customize_props_layout()` functions.

```python
import omni.ext
from pxr import Sdf, Usd, UsdGeom, Gf
from omni.kit.property.usd.prim_selection_payload import PrimSelectionPayload

class ExamplePropertyExtension(omni.ext.IExt):
    def __init__(self):
        super().__init__()
        self._registered = False
        self._menu_items = []

    def on_startup(self, ext_id):
        self._register_widget()

    def on_shutdown(self):
        if self._registered:
            self._unregister_widget()

    def _register_widget(self):
        import omni.kit.window.property as property_window_ext
        from .example_attribute_widget import ExampleAttributeWidget

        property_window = property_window_ext.get_window()
        if property_window:
            # register ExampleAttributeWidget class with property window.
            # you can have multiple of these but they must have different scheme names,
            # and the type is always "prim" or "layer":
            #   "prim" when a prim is selected
            #   "layer" only seen when root layer is selected in layer window
            property_window.register_widget("prim", "example_properties", ExampleAttributeWidget())
            self._registered = True
            # ordering of property widget is controlled by omni.kit.property.bundle

    def _unregister_widget(self):
        import omni.kit.window.property as property_window_ext

        property_window = property_window_ext.get_window()
        if property_window:
            # remove ExampleAttributeWidget class from property window
            property_window.unregister_widget("prim", "example_properties")
            self._registered = False
```

## ExampleAttributeWidget source

This widget class handles `Usd.Attribute`s for the prim/example_properties handler and builds the UI.

```python
import copy
import carb
import omni.ui as ui
import omni.usd

from pxr import Usd, Sdf, Vt, Gf, UsdGeom, Trace
from typing import List
from omni.kit.property.usd.usd_property_widget import UsdPropertiesWidget, UsdPropertyUiEntry
from omni.kit.property.usd.usd_property_widget import create_primspec_token, create_primspec_float, create_primspec_bool

class ExampleAttributeWidget(UsdPropertiesWidget):
    def __init__(self):
        super().__init__(title="Example Properties", collapsed=False)
        self._attribute_list = ["hovercraftWheels", "deafeningSilence", "randomOrder", "melancholyMerriment"]

        # As these attributes are not part of the schema, placeholders need to be added. These are not
        # part of the prim until the value is changed. They will be added via prim.CreateAttribute() function.
        self.add_custom_schema_attribute("melancholyMerriment", lambda p: p.IsA(UsdGeom.Xform) or p.IsA(UsdGeom.Mesh), None, "", {Sdf.PrimSpec.TypeNameKey: "float3", "customData": {"default": Gf.Vec3f(1.0, 1.0, 1.0)}})
        self.add_custom_schema_attribute("hovercraftWheels", lambda p: p.IsA(UsdGeom.Xform) or p.IsA(UsdGeom.Mesh), None, "", create_primspec_token(["None", "Square", "Round", "Triangle"], "Round"))
        self.add_custom_schema_attribute("deafeningSilence", lambda p: p.IsA(UsdGeom.Xform) or p.IsA(UsdGeom.Mesh), None, "", create_primspec_float(1.0))
        self.add_custom_schema_attribute("randomOrder", lambda p: p.IsA(UsdGeom.Xform) or p.IsA(UsdGeom.Mesh), None, "", create_primspec_bool(False))

    def on_new_payload(self, payload):
        """
        Called when a new payload is delivered. PropertyWidget can take this opportunity to update its UI models,
        or schedule full UI rebuild.

        Args:
            payload: The new payload to refresh UI or update model.

        Return:
            True if the UI needs to be rebuilt. build_impl will be called as a result.
            False if the UI does not need to be rebuilt. build_impl will not be called.
        """
        # nothing selected, so do not show widget. If you don't do this
        # your widget will be always on, like the path widget you see
        # at the top.
        if not payload or len(payload) == 0:
            return False

        # filter out special cases like large number of prims selected, as
        # this can cause UI stalls in certain cases
        if not super().on_new_payload(payload):
            return False

        # check if all selected prims are relevant classes/types
        used = []
        for prim_path in self._payload:
            prim = self._get_prim(prim_path)
            if not prim or not (prim.IsA(UsdGeom.Xform) or prim.IsA(UsdGeom.Mesh)):
                return False
            if self.is_custom_schema_attribute_used(prim):
                used.append(None)
            used.append(prim)

        return used is not None

    def _customize_props_layout(self, props):
        """
        To reorder the properties display order, reorder entries in props list.
        To override display group or name, call prop.override_display_group or prop.override_display_name respectively.
        If you want to hide/add certain property, remove/add them to the list.

        NOTE: All above changes won't go back to USD, they're pure UI overrides.

        Args:
            props: List of Tuple(property_name, property_group, metadata)

        Example:
            for prop in props:
                # Change display group:
                prop.override_display_group("New Display Group")

                # Change display name (you can change other metadata, it won't be written back to USD, it only affects the UI):
                prop.override_display_name("New Display Name")

            # add additional "property" that doesn't exist.
            props.append(UsdPropertyUiEntry("PlaceHolder", "Group", { Sdf.PrimSpec.TypeNameKey: "bool"}, Usd.Property))
        """
        from omni.kit.property.usd.custom_layout_helper import CustomLayoutFrame, CustomLayoutGroup, CustomLayoutProperty
        from omni.kit.property.usd.usd_property_widget_builder import UsdPropertiesWidgetBuilder
        from omni.kit.window.property.templates import HORIZONTAL_SPACING, LABEL_HEIGHT, LABEL_WIDTH

        self.add_custom_schema_attributes_to_props(props)

        # remove any unwanted props (all of the Xform & Mesh
        # attributes, as we don't want to display them in the widget)
        for attr in copy.copy(props):
            if not attr.attr_name in self._attribute_list:
                props.remove(attr)

        # custom UI attributes
        frame = CustomLayoutFrame(hide_extra=False)
        with frame:
            # Set layout order. This rearranges attributes in the widget to the following order.
            CustomLayoutProperty("melancholyMerriment", "Melancholy Merriment")
            CustomLayoutProperty("hovercraftWheels", "Hovercraft Wheels")
            CustomLayoutProperty("deafeningSilence", "Deafening Silence")
            CustomLayoutProperty("randomOrder", "Random Order")

        return frame.apply(props)
```

This will add a new CollapsableFrame called "Example Properties" to the property window, which will only be visible when prims of type Xform or Mesh are selected.

## Add to the PrimPathWidget

Add new menu items for adding custom properties to prims.

What does this code do?

- Registers add menus with omni.kit.property.usd.
- Removes the add menus on shutdown.
- Defines the functions that the menus require.

```python
def on_startup(self, ext_id):
    ...
    self._menu_items = []
    self._register_add_menus()

def on_shutdown(self):
    self._unregister_add_menus()
    ...

def _register_add_menus(self):
    from omni.kit.property.usd import PrimPathWidget

    # add menus to property window path/+add and context menus +add submenu.
    # show_fn: controls when option will be shown, IE when selected prim(s) are Xform or Mesh.
    # onclick_fn: is called when user selects menu item.
    self._menu_items.append(
        PrimPathWidget.add_button_menu_entry(
            "Example/Hovercraft Wheels",
            show_fn=ExamplePropertyExtension.prim_is_example_type,
            onclick_fn=ExamplePropertyExtension.click_add_hovercraft_wheels
        )
    )
    self._menu_items.append(
        PrimPathWidget.add_button_menu_entry(
            "Example/Deafening Silence",
            show_fn=ExamplePropertyExtension.prim_is_example_type,
            onclick_fn=ExamplePropertyExtension.click_add_deafening_silence
        )
    )
    self._menu_items.append(
        PrimPathWidget.add_button_menu_entry(
            "Example/Random Order",
            show_fn=ExamplePropertyExtension.prim_is_example_type,
            onclick_fn=ExamplePropertyExtension.click_add_random_order
        )
    )
    self._menu_items.append(
        PrimPathWidget.add_button_menu_entry(
            "Example/Melancholy Merriment",
            show_fn=ExamplePropertyExtension.prim_is_example_type,
            onclick_fn=ExamplePropertyExtension.click_add_melancholy_merriment
        )
    )

def _unregister_add_menus(self):
    from omni.kit.property.usd import PrimPathWidget

    # remove menus from property window path/+add and context menus +add submenu.
    for item in self._menu_items:
        PrimPathWidget.remove_button_menu_entry(item)
    self._menu_items = None

@staticmethod
def prim_is_example_type(objects: dict) -> bool:
    """checks if all selected prims are of the required type"""
    if "stage" not in objects or "prim_list" not in objects or not objects["stage"]:
        return False

    stage = objects["stage"]
    prim_list = objects["prim_list"]
    for path in prim_list:
        if isinstance(path, Usd.Prim):
            prim = path
        else:
            prim = stage.GetPrimAtPath(path)
        if prim:
            if not (prim.IsA(UsdGeom.Xform) or prim.IsA(UsdGeom.Mesh)):
                return False

    return len(prim_list) > 0

@staticmethod
def click_add_hovercraft_wheels(payload: PrimSelectionPayload):
    """create hovercraftWheels Prim.Attribute"""
    stage = payload.get_stage()
    for prim_path in payload:
        prim = stage.GetPrimAtPath(prim_path) if stage and prim_path else None
        if prim:
            attr = prim.CreateAttribute("hovercraftWheels", Sdf.ValueTypeNames.Token, False)
            attr.SetMetadata("allowedTokens", ["None", "Square", "Round", "Triangle"])
            attr.Set("Round")

@staticmethod
def click_add_deafening_silence(payload: PrimSelectionPayload):
    """create deafeningSilence Prim.Attribute"""
    stage = payload.get_stage()
    for prim_path in payload:
        prim = stage.GetPrimAtPath(prim_path) if stage and prim_path else None
        if prim:
            attr = prim.CreateAttribute("deafeningSilence", Sdf.ValueTypeNames.Float, False)
            attr.Set(1.0)

@staticmethod
def click_add_random_order(payload: PrimSelectionPayload):
    """create randomOrder Prim.Attribute"""
    stage = payload.get_stage()
    for prim_path in payload:
        prim = stage.GetPrimAtPath(prim_path) if stage and prim_path else None
        if prim:
            attr = prim.CreateAttribute("randomOrder", Sdf.ValueTypeNames.Bool, False)
            attr.Set(False)

@staticmethod
def click_add_melancholy_merriment(payload: PrimSelectionPayload):
    """create melancholyMerriment Prim.Attribute"""
    stage = payload.get_stage()
    for prim_path in payload:
        prim = stage.GetPrimAtPath(prim_path) if stage and prim_path else None
        if prim:
            attr = prim.CreateAttribute("melancholyMerriment", Sdf.ValueTypeNames.Float3, False)
            attr.Set(Gf.Vec3f(1.0, 1.0, 1.0))
```

Which will add a new “Example” submenu to the PrimPathWidget:
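The snippets above define the widget and its add menus, but the widget itself still has to be registered with the property window before it appears. Below is a minimal sketch of that registration, assuming the widget methods above live in a class named `ExamplePropertyWidget`; the scheme `"prim"` and the widget name `"example_properties"` are illustrative choices, not names taken from this document:

```python
import omni.ext
import omni.kit.window.property


class ExamplePropertyWidgetExtension(omni.ext.IExt):
    def on_startup(self, ext_id):
        # Register the widget under the "prim" scheme so it is considered
        # whenever prims are selected; the name only has to be unique.
        self._registered = False
        window = omni.kit.window.property.get_window()
        if window:
            window.register_widget("prim", "example_properties", ExamplePropertyWidget())
            self._registered = True

    def on_shutdown(self):
        # Always unregister on shutdown, otherwise the widget leaks
        # (the same caveat as for the add menus applies here).
        window = omni.kit.window.property.get_window()
        if window and self._registered:
            window.unregister_widget("prim", "example_properties")
            self._registered = False
```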
add-world-bonds_ext_assetutils.md
# Asset Utilities (NvBlastExtAssetUtils)

NvBlastExtAssetUtils provides simple utility functions for modifying NvBlastAsset objects. Three functions are provided, described in the following sections.

## Add World Bonds

The function NvBlastExtAssetUtilsAddWorldBonds allows the user to create an asset from an existing asset, with the addition of new bonds that connect support chunks to the world. (See the documentation for NvBlastBondDesc.)

For example, given an asset called `oldAsset`,

```text
const uint32_t worldBoundChunks[3] = { 1, 2, 3 }; // Chunks to bind to the world. These must be support chunks.
const NvcVec3 bondDirections[3] = { { -1, 0, 1 }, { 0, 0, -1}, { 1, 0, 0 } }; // Normal directions for the new bonds.

// Create a new asset
NvBlastAsset* newAsset = NvBlastExtAssetUtilsAddWorldBonds(oldAsset, worldBoundChunks, 3, bondDirections, NULL);
```

Memory for the new asset is allocated using the allocator available through NvBlastGlobals. Therefore the new asset may be freed using

```text
NVBLAST_FREE(newAsset);
```

## Merge Assets

The NvBlastExtAssetUtilsMergeAssets function will combine any number of assets, generating an asset descriptor which may be passed to NvBlastCreateAsset. This is done in order to allow the user to make adjustments to the descriptor before creating the merged asset.

The geometric data in each asset to be merged may be transformed so that the assets will have desired relative poses. In addition, the user may describe new bonds, in order to join support chunks of two different assets and create a larger support graph which spans the entire combined asset. The reference frame for the new bonds’ geometric data is that of the new asset.

For example, if one wants to merge two wall assets together, with a relative translation between them of 10 units in the x-direction, the code might look something like this:

```text
const NvBlastAsset* components[2] = { asset0, asset1 }; // asset0 and asset1 are already created
const NvcVec3 translations[2] = { { -5, 0, 0 }, { 5, 0, 0 } }; // Translate asset0 -5 in x, and asset1 +5 in x

// New bonds:
const uint32_t newBondCount = ... // Some number of new bonds
NvBlastExtAssetUtilsBondDesc newBondDescs[newBondCount];

newBondDescs[0].bond.normal.x = 1; // Normal in the +x direction, pointing from asset0 to asset1
newBondDescs[0].bond.normal.y = 0;
newBondDescs[0].bond.normal.z = 0;
newBondDescs[0].bond.area = 1;
newBondDescs[0].bond.centroid.x = 0;
newBondDescs[0].bond.centroid.y = 0;
newBondDescs[0].bond.centroid.z = 2.5; // Position is in the middle, off the ground
newBondDescs[0].bond.userData = 0;
newBondDescs[0].chunkIndices[0] = 5; // Connect from chunk[5] in components[componentIndices[0]] ...
newBondDescs[0].chunkIndices[1] = 13; // ... to chunk[13] in components[componentIndices[1]]
newBondDescs[0].componentIndices[0] = 0; // Connect the asset in components[0] ...
newBondDescs[0].componentIndices[1] = 1; // ... to the asset in components[1]

// Create merged asset descriptor
NvBlastAssetDesc mergedDesc = NvBlastExtAssetUtilsMergeAssets(components, NULL, translations, 2, newBondDescs, newBondCount);
```

Note, we passed in NULL for the list of relative rotations, meaning no asset will be rotated. Also note, the new bond descriptors can just as well apply to a single asset (by setting both component indices to the same index), allowing the user to create additional bonds within a single asset if desired.
The chunk and bond arrays referenced by the returned NvBlastAssetDesc are allocated using the NvBlastGlobals allocator, and it is up to the user to free this memory when it is no longer needed: ```text NVBLAST_FREE(mergedDesc.chunkDescs); NVBLAST_FREE(mergedDesc.bondDescs); ``` ## Transform In-Place The NvBlastExtAssetTransformInPlace function will apply an affine transformation (given by scaling, rotation, translation components) to the geometric data within an asset. To use this function, simply pass in an NvcVec3 pointer to represent scale (which may be non-uniform), an NvcQuat pointer to represent rotation, and an NvcVec3 pointer to represent translation. Any of these pointers may be NULL, in which case that transform component is implicitly considered to be the identity. This transforms: - Chunk centroids - Chunk volumes - Bond normals - Bond areas - Bond centroids The transformation of position vectors is done in the following order: scale, followed by rotation, followed by translation. The transformation of normal vectors uses the cofactors of the scale matrix (diagonals given by {scale.y*scale.z, scale.z*scale.x, scale.x*scale.y}), followed by rotation.
added_CHANGELOG.md
# Changelog All notable changes to this project will be documented in this file. The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html). ## [0.1.0] - 2023-08-24 ### Added - Added initial version of the Extension.
adding-a-menu-for-your-extension_Overview.md
# Overview

## Overview

omni.kit.menu.utils allows users to add/remove/update menus in the top bar of the window.

## Adding a menu for your extension

What does this do?

- Adds a menu item “Menu Example” to the “File” menu using the icon file.svg; when clicked it executes the action “omni.kit.menuext.extension” / “menu_example”.

NOTES:

- You need to keep a copy of your file menu list to prevent python from garbage collecting it.
- Although kit uses fastexit and on_shutdown is never called, a user can still disable your extension and on_shutdown will be called, so you need to clean up your menus, otherwise you will get leaks and the menu will still be shown.

### glyphs

- **Name/path of an icon/image.** Can either be a full path into the extension or just a name like “cog.svg”, which is loaded from kit/_build/resources/glyphs

### onclick_action / unclick_action

- **tuple containing the extension & action name**

### onclick/unclick

- **As menus use carb.input for hotkeys and carb.input also supports joypad buttons, joypad buttons can be bound to menus**
- **onclick is called when the menu item is selected / the input button is pressed**
- **unclick is called when the input button is released**
- **This can be used for long-press processing**

### appear_after

- **Example: appear_after=["Mesh", MenuItemOrder.FIRST]**
- **This can be used to have your menu item appear after other menu items; a list is used to give multiple items to try.**
- **e.g. “Mesh” would only exist if omni.kit.primitive.mesh is enabled**

### sub_menu

- **Used for submenus, like “File” “Recent”, and is a list of MenuItemDescription’s**

### name/name_fn

- **Value “name” is the menu item name**
- **Function “name_fn” is a function that returns the string used for the name**

### show_fn

- **Function that returns True/False; if it returns False, the menu item is not shown (hidden)**

### enabled/enable_fn

- **Value enabled is True/False**
- **Function enable_fn is a function that returns True/False**
- **When False the menu item is greyed out but still shown**

### ticked/ticked_value/ticked_fn

- **Value ticked. When True the menu item is tickable**
- **Value ticked_value True/False. When True a white tick is shown, otherwise a greyed tick is shown**
- **Function ticked_fn returns True/False. When True a white tick is shown, otherwise a greyed tick is shown**

### user

- **dictionary**
- **Can be used to pass information into delegate functions, the build_item function, the onclick function, etc.**
- **NOTE: menus also add information to this dictionary**

## Delegates

- **You can have a delegate associated with your menu**
- **Currently supported functions are:**
- **build_item(item: ui.Menu)**
- **creates the UI for menu items; be careful overriding this function as IconMenuDelegate has a custom build_item**
- **get_elided_length(menu_name: str)**
- **returns the max string length; any menu text over this length will be elided**
- **get_menu_alignment() returns MenuAlignment.LEFT or MenuAlignment.RIGHT**
- **used by the “Live” and “Cache” buttons on the top right.
These are right-aligned menu items with a custom build_item function**
- **update_menu_item(menu_item: Union[ui.Menu, ui.MenuItem], menu_refresh: bool)**
- **allows modification of ui.Menu/ui.MenuItem.**
- **menu_refresh indicates it is updating state during a ui triggered_fn**

```python
import carb.input
import omni.kit.menu.utils
import omni.ui as ui
from omni.kit.menu.utils import IconMenuDelegate, MenuItemDescription


# IconMenuDelegate provides the default build_item (see note above)
class FileMenuDelegate(IconMenuDelegate):
    def update_menu_item(self, menu_item, menu_refresh: bool):
        # hide this entry when it is built as a ui.MenuItem
        if isinstance(menu_item, ui.MenuItem):
            menu_item.visible = False

    def get_elided_length(self, menu_name):
        # elide long entries in the "Open Recent" submenu to 160
        if menu_name == "Open Recent":
            return 160
        return 0


# inside your extension's on_startup:
self._file_delegate = FileMenuDelegate()
self._file_menu_list = [
    MenuItemDescription(
        name="Menu Example",
        glyph="file.svg",
        onclick_action=("omni.kit.menuext.extension", "menu_example"),
        hotkey=(carb.input.KEYBOARD_MODIFIER_FLAG_CONTROL, carb.input.KeyboardInput.T),
    )
]

omni.kit.menu.utils.add_menu_items(self._file_menu_list, "File", -10, delegate=self._file_delegate)
```
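The `onclick_action` used above refers to an action that must be registered with `omni.kit.actions.core` under the same extension/action name pair. A minimal sketch of registering it; the extension id `"omni.kit.menuext.extension"` matches the example above, while the `on_menu_example` handler is a hypothetical placeholder:

```python
import omni.kit.actions.core


def on_menu_example():
    # hypothetical handler; replace with your extension's real logic
    print("Menu Example clicked")


action_registry = omni.kit.actions.core.get_action_registry()
action_registry.register_action(
    "omni.kit.menuext.extension",  # extension id, matching onclick_action above
    "menu_example",                # action id, matching onclick_action above
    on_menu_example,
    display_name="File->Menu Example",
    description="Example action executed from the File menu",
)

# on shutdown, remove it again:
# action_registry.deregister_action("omni.kit.menuext.extension", "menu_example")
```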
adding-preferences-to-your-extension_Overview.md
# Overview

## Overview

omni.kit.window.preferences provides a window where users customize settings.

NOTE: This document is going to refer to pages; a page is a `PreferenceBuilder` subclass that has a name and builds UI. See `MyExtensionPreferences` below

## Adding preferences to your extension

To create your own preferences pane, follow the steps below:

### 1. Register hooks/callbacks so the page can be added/removed from omni.kit.window.preferences as required

```python
def on_startup(self):
    ....
    manager = omni.kit.app.get_app().get_extension_manager()
    self._preferences_page = None
    self._hooks = []
    # Register hooks/callbacks so the page can be added/removed from omni.kit.window.preferences as required.
    # Keep a copy of manager.subscribe_to_extension_enable so it doesn't get garbage collected.
    self._hooks.append(
        manager.subscribe_to_extension_enable(
            on_enable_fn=lambda _: self._register_page(),
            on_disable_fn=lambda _: self._unregister_page(),
            ext_name="omni.kit.window.preferences",
            hook_name="my.extension omni.kit.window.preferences listener",
        )
    )
    ....

def on_shutdown(self):
    ....
    self._unregister_page()
    ....

def _register_page(self):
    try:
        from omni.kit.window.preferences import register_page
        from .my_extension_page import MyExtensionPreferences

        self._preferences_page = register_page(MyExtensionPreferences())
    except ModuleNotFoundError:
        pass

def _unregister_page(self):
    if self._preferences_page:
        try:
            import omni.kit.window.preferences

            omni.kit.window.preferences.unregister_page(self._preferences_page)
            self._preferences_page = None
        except ModuleNotFoundError:
            pass
```

### 2. Define settings in extension.toml

```toml
[settings]
persistent.exts."my.extension".mySettingFLOAT = 1.0
persistent.exts."my.extension".mySettingINT = 1
persistent.exts."my.extension".mySettingBOOL = true
persistent.exts."my.extension".mySettingSTRING = "my string"
persistent.exts."my.extension".mySettingCOLOR3 = [0.25, 0.5, 0.75]
persistent.exts."my.extension".mySettingDOUBLE3 = [2.5, 3.5, 4.5]
persistent.exts."my.extension".mySettingINT2 = [1, 2]
persistent.exts."my.extension".mySettingDOUBLE2 = [1.25, 1.65]
persistent.exts."my.extension".mySettingASSET = "${kit}/exts/my.extension/icons/kit.png"
persistent.exts."my.extension".mySettingCombo1 = "hovercraft"
persistent.exts."my.extension".mySettingCombo2 = 1
```

### 3. Subclass PreferenceBuilder to customize the UI

my_extension_page.py

```python
import carb.settings
import omni.kit.app
import omni.ui as ui
from omni.kit.window.preferences import PreferenceBuilder, show_file_importer, SettingType, PERSISTENT_SETTINGS_PREFIX


class MyExtensionPreferences(PreferenceBuilder):
    def __init__(self):
        super().__init__("My Custom Extension")

        # Update on setting change. This is required as the setting could be
        # changed via script or another extension.
        def on_change(item, event_type):
            if event_type == carb.settings.ChangeEventType.CHANGED:
                omni.kit.window.preferences.rebuild_pages()

        self._update_setting = omni.kit.app.SettingChangeSubscription(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingBOOL", on_change)

    def build(self):
        combo_list = ["my", "hovercraft", "is", "full", "of", "eels"]

        with ui.VStack(height=0):
            with self.add_frame("My Custom Extension"):
                with ui.VStack():
                    self.create_setting_widget("My FLOAT Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingFLOAT", SettingType.FLOAT)
                    self.create_setting_widget("My INT Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingINT", SettingType.INT)
self.create_setting_widget("My BOOL Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingBOOL", SettingType.BOOL) self.create_setting_widget("My STRING Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingSTRING", SettingType.STRING) self.create_setting_widget("My COLOR3 Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingCOLOR3", SettingType.COLOR3) self.create_setting_widget("My DOUBLE3 Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingDOUBLE3", SettingType.DOUBLE3) self.create_setting_widget("My INT2 Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingINT2", SettingType.INT2) self.create_setting_widget("My DOUBLE2 Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingDOUBLE2", SettingType.DOUBLE2) self.create_setting_widget("My ASSET Setting", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingASSET", SettingType.ASSET) self.create_setting_widget_combo("My COMBO Setting 1", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingCombo1", combo_list) ``` ```python self.create_setting_widget_combo("My COMBO Setting 2", PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingCombo2", combo_list, setting_is_index=True) ``` What does this do? - **subscribe_to_extension_enable** adds a callback for `on_enable_fn` / `on_disable_fn`. `on_enable_fn` will trigger on running kit or even enabling/disabling omni.kit.window.preferences from extension manager will trigger `on_enable_fn` / `on_disable_fn`. - **_register_page** registers new page to omni.kit.window.preferences - **_unregister_page** removes new page from omni.kit.window.preferences - **MyExtension** is definition of new page name “My Custom Extension” will appear in list of names on left hand side of omni.kit.window.preferences window NOTES: - Multiple extensions can add same page name like “My Custom Extension” and only one “My Custom Extension” will appear in list of names on left hand side of omni.kit.window.preferences window but all pages will be shown when selecting page - build function can build any UI wanted and isn’t restricted to `self.create_xxx` functions - mySettingCombo1 uses combobox with settings as string - mySettingCombo1 uses combobox with settings as integer My Custom Extension page will look like this
adding-widgets-to-your-ui_Overview.md
# Overview

## Overview

omni.kit.widget.settings has a library of widget functions that access values via settings, used by:

- omni.kit.property.transform
- omni.kit.viewport.menubar.core
- omni.kit.window.extensions
- omni.kit.window.preferences
- omni.rtx.window.settings

## Adding widgets to your UI

### 1. Define settings in toml

extension.toml

```toml
[settings]
persistent.exts."my.extension".mySettingFLOAT = 1.0
persistent.exts."my.extension".mySettingINT = 1
persistent.exts."my.extension".mySettingBOOL = true
persistent.exts."my.extension".mySettingSTRING = "my string"
persistent.exts."my.extension".mySettingCOLOR3 = [0.25, 0.5, 0.75]
persistent.exts."my.extension".mySettingDOUBLE3 = [2.5, 3.5, 4.5]
persistent.exts."my.extension".mySettingINT2 = [1, 2]
persistent.exts."my.extension".mySettingDOUBLE2 = [1.25, 1.65]
persistent.exts."my.extension".mySettingVECTOR3 = [0, 1, 2]
persistent.exts."my.extension".mySettingASSET = "${kit}/exts/my.extension/icons/kit.png"
persistent.exts."my.extension".mySettingCombo1 = "hovercraft"
persistent.exts."my.extension".mySettingCombo2 = 1
persistent.exts."my.extension".mySettingRADIO.value = "Two"
persistent.exts."my.extension".mySettingRADIO.items = ["One", "Two", "Three", "Four"]
```

### 2. Build the UI

example.py

```python
import omni.ui as ui
from omni.kit.widget.settings import SettingType, SettingWidgetType, create_setting_widget, create_setting_widget_combo, SettingsSearchableCombo

PERSISTENT_SETTINGS_PREFIX = "/persistent"

my_window = ui.Window("Widget Test", width=800, height=800)
with my_window.frame:
    with ui.VStack(spacing=10):
        combo_list = ["my", "hovercraft", "is", "full", "of", "eels"]

        with ui.HStack(height=24):
            ui.Label("My FLOAT Setting", width=ui.Percent(35))
            create_setting_widget(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingFLOAT", SettingType.FLOAT)
        with ui.HStack(height=24):
            ui.Label("My INT Setting", width=ui.Percent(35))
            create_setting_widget(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingINT", SettingType.INT)
        with ui.HStack(height=24):
            ui.Label("My BOOL Setting", width=ui.Percent(35))
            create_setting_widget(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingBOOL", SettingType.BOOL)
        with ui.HStack(height=24):
            ui.Label("My STRING Setting", width=ui.Percent(35))
            create_setting_widget(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingSTRING", SettingType.STRING)
        ui.Separator(height=5)
        with ui.HStack(height=24):
            ui.Label("My INT2 Setting", width=ui.Percent(35))
            create_setting_widget(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingINT2", SettingType.INT2)
        with ui.HStack(height=24):
            ui.Label("My DOUBLE2 Setting", width=ui.Percent(35))
            create_setting_widget(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingDOUBLE2", SettingType.DOUBLE2)
        with ui.HStack(height=24):
            ui.Label("My COLOR3 Setting", width=ui.Percent(35))
            create_setting_widget(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingCOLOR3", SettingType.COLOR3)
        with ui.HStack(height=24):
            ui.Label("My DOUBLE3 Setting", width=ui.Percent(35))
            create_setting_widget(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingDOUBLE3", SettingType.DOUBLE3)
        with ui.HStack(height=24):
            ui.Label("My VECTOR3 Setting", width=ui.Percent(35))
            create_setting_widget(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingVECTOR3", SettingWidgetType.VECTOR3)
        ui.Separator(height=5)
        with ui.HStack(height=24):
            ui.Label("My ASSET Setting", width=ui.Percent(35))
            create_setting_widget(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingASSET", SettingType.ASSET)
        with ui.HStack(height=24):
            ui.Label("My COMBO Setting 1", width=ui.Percent(35))
            create_setting_widget_combo(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingCombo1", combo_list)
        with ui.HStack(height=24):
            ui.Label("My COMBO Setting 2", width=ui.Percent(35))
            create_setting_widget_combo(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingCombo2", combo_list, setting_is_index=True)
        with ui.HStack(height=24):
            ui.Label("My RADIO Button Setting", word_wrap=True, width=ui.Percent(35))
            create_setting_widget(PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingRADIO/value", SettingWidgetType.RADIOBUTTON)
```

Which looks like this
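Because these widgets write straight to carb settings, reacting to user edits is just a settings subscription, using the same `omni.kit.app.SettingChangeSubscription` pattern shown for the preferences page earlier in this document. A minimal sketch against the mySettingFLOAT path from the toml above:

```python
import carb.settings
import omni.kit.app

PERSISTENT_SETTINGS_PREFIX = "/persistent"
FLOAT_PATH = PERSISTENT_SETTINGS_PREFIX + "/exts/my.extension/mySettingFLOAT"


def on_change(item, event_type):
    # called whenever the widget (or anything else) writes the setting
    if event_type == carb.settings.ChangeEventType.CHANGED:
        value = carb.settings.get_settings().get(FLOAT_PATH)
        print(f"mySettingFLOAT is now {value}")


# keep a reference to the subscription or it will be garbage collected
subscription = omni.kit.app.SettingChangeSubscription(FLOAT_PATH, on_change)
```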
aliases_index.md
# Omniverse Client Library

## Documentation

The latest documentation can be found at [omniverse-docs.s3-website-us-east-1.amazonaws.com/client_library/](http://omniverse-docs.s3-website-us-east-1.amazonaws.com/client_library/)

## Samples

The Omniverse Connect sample is available through the Omniverse Launcher. Documentation for the sample can be found at [docs.omniverse.nvidia.com/con_connect/con_connect/connect-sample.html](https://docs.omniverse.nvidia.com/con_connect/con_connect/connect-sample.html)

## Getting

You can get the latest build from Packman. There are separate packages for each platform. They are all named:

```
omni_client_library.{platform}
```

platform is one of:

- windows-x86_64
- linux-x86_64
- linux-aarch64

All packages use the same versioning scheme:

```
{major}.{minor}.{patch}
```

For example:

```xml
<project toolsVersion="5.0">
  <dependency name="omni_client_library" linkPath="_deps/omni_client_library">
    <package name="omni_client_library.windows-x86_64" version="2.10.0" />
  </dependency>
</project>
```

## Hub

If Hub is installed via Launcher, it will be used by default. If Hub is not installed, we will fall back to the previous behavior. This will change at some point and Hub will be required.

There are some environment variables to control this:

- OMNICLIENT_USE_HUB=0: Don’t use Hub (even if it’s installed)
- OMNICLIENT_HUB_CACHE_DIR: Use a local Hub with this cache directory.
- OMNICLIENT_HUB_EXE: When using a local Hub, this is the exe to launch (defaults to “hub.exe” next to your application).

A local Hub is spawned as a child process and will be shut down when your application terminates. The global Hub keeps running even after your application terminates.

Newer Hub versions are backwards compatible with older client-library versions, but older Hub versions are not compatible with newer client-library versions. If you install Hub via Launcher, Hub is guaranteed to be compatible with all applications available on Launcher.

To make it easier to install a compatible Hub executable version if Launcher is not available, client-library packages v2.30.0 and higher are shipped with an additional file deps/public-redist.packman.xml which specifies a compatible Hub version. You can use that file to make the Hub executable available by adding a few lines to your project's packman dependencies. For example:

```xml
<!-- Existing dependency to the client-library -->
<dependency name="omni_client_library" linkPath="../_build/target-deps/omni_client_library">
  <package name="omni_client_library.${platform}" version="2.28.1" />
</dependency>

<!-- Add these lines to make the correct version of Hub available -->
<import path="../_build/target-deps/omni_client_library/deps/public-redist.packman.xml" />
<dependency name="omni-hub" linkPath="../_build/target-deps/omni-hub" />

<!-- The Hub executable is now available in ../_build/target-deps/omni-hub/target/release/hub (hub.exe on Windows) -->
```

Once the Hub executable is available in a known location, you can use that location for OMNICLIENT_HUB_EXE. If possible, please package the Hub executable next to your application executable - this will allow client-library to find the Hub executable for local Hub even if OMNICLIENT_HUB_EXE is not set, and users just need to set the OMNICLIENT_HUB_CACHE_DIR environment variable to activate local Hub.

## Using

### URLs

Almost all the functions take a URL as the parameter. Currently the library supports a few URL types:

1. `omniverse://server:port/path`
2. `omni://server:port/path`
3.
`http://website/stuff/?omniverse://server:port/path`
4. `file:///c:/local%20path`
5. `c:\local path`
6. `https://www.nvidia.com`

The `port` is optional. For the 3rd case, we ignore everything before “omniverse:”

file: URLs must be in the correct file URI format as described here: https://en.wikipedia.org/wiki/File_URI_scheme Notably this means special characters must be percent-encoded (and there should be either one or three slashes, not two!)

The 5th type is not a URL at all, but the library understands certain “raw” local file paths. Note that although this is mostly reliable on Windows (because drive letters make it pretty obvious), **it is very unreliable on Linux** because a path which starts with “/” could be a reference to another path on the same server. For this reason, always prefer using “file:” URLs over raw file paths.

### Aliases

You can set aliases for URLs by adding a section to the global omniverse.toml file. For example:

```toml
[aliases]
"nvidia:" = "http://www.nvidia.com/"
```

The above alias would turn “nvidia:omniverse” into “http://www.nvidia.com/omniverse”

You can also set aliases at runtime with the “set_alias” function in Python or “omniClientSetAlias” in C.

### Basics

Include “OmniClient.h” and you can use the basic functions like omniClientList, omniClientRead, etc. without any extra work. Even calling omniClientInitialize and omniClientShutdown is not strictly required (though encouraged, because it enables extra checks).

## Providers

### File

The File provider implements the “file:” scheme and support for “raw” paths like “C:\file”

### Nucleus

The Nucleus provider implements the “omniverse:” scheme for loading files from an Omniverse Nucleus server. This is the only provider which supports “live mode”.

### HTTP(S)

The HTTP provider supports a variety of HTTP/HTTPS based URLs.

#### HTTP

Plain old HTTP/HTTPS URLs support `stat()` via the HTTP HEAD verb and `readFile()` via the HTTP GET verb.

#### Configuration

HTTP providers can be configured using the following environment variables:

- `OMNICLIENT_HTTP_VERBOSE`: Set to “1” to enable verbose logging, displaying additional HTTP information in the logs.
- `OMNICLIENT_HTTP_TIMEOUT`: Specifies the number of seconds the transfer speed must remain below 1 byte per second before the operation is considered too slow and aborted.
- `OMNICLIENT_HTTP_RETRIES`: Defines the number of times a failed operation will be retried before being aborted.

These configuration options apply to all HTTP and HTTP-based providers like HTTP, HTTPS, Azure, S3, CloudFront, etc.

#### Azure

Azure URLs are identified by the host ending in “.blob.core.windows.net”

“Container Public” containers support `stat`, `list`, and `readFile` operations.

“Blob Public” containers support `stat` and `readFile` operations (but not `list`, unless you specified a SAS token).

“Private” containers support `stat`, `list`, and `readFile` operations if you specified a SAS token.

A SAS token can be specified in `omniverse.toml` like this:

```toml
[azure."{account}.blob.core.windows.net/{container}"]
sasToken="{sas-token}"
```

#### S3

S3 URLs currently support `stat()`, `list()`, and `readFile()` operations.

S3 URLs can be identified in one of three ways:

- The URL ends in .amazonaws.com
- The URL ends in .cloudfront.net
- The URL has an S3 configuration in the TOML config file. Useful for S3 buckets with a custom domain name.

If the bucket requires access tokens or has a CloudFront distribution, omniverse.toml can be used to configure per bucket.
Example S3 section in `omniverse.toml`:

```toml
[s3]
[s3."{url}"]
bucket = "{bucket}"
region = "{region}"
accessKeyId = "{access-key-id}"
secretAccessKey = "{secret-access-key}"
cloudfront = "http(s)://{cloudfront-distribution-id}.cloudfront.net" # optional
cloudfrontList = false # optional
```

#### Note on using CloudFront

The preferred way to use CloudFront is to use CloudFront URLs directly in your application. This requires additional CloudFront configuration as outlined below. When using CloudFront URLs directly, the `cloudfront` and `cloudfrontList` options are ignored. These are only used if you are using S3 URLs and want to use CloudFront implicitly (which can be slightly more confusing to the end user, as they may not know that CloudFront is being used).

#### Server-Side CloudFront Configuration

CloudFront configuration can also be specified via a `.cloudfront.toml` file in the root of the S3 bucket.

Example CloudFront configuration in `.cloudfront.toml`:

```toml
cloudfront="http://dcb18d6mfegct.cloudfront.net"
cloudfrontList=false
```

This configuration is also optional. It is parsed before any configuration in omniverse.toml, so one can always override this server-side configuration with their local configuration.

#### API Operations when using CloudFront

If `cloudfrontList` is set to true or you are using CloudFront URLs, `list()` and `stat()` operations will also go through CloudFront, but this requires further AWS CloudFront configuration. By default CloudFront does not pass any query strings. The following HTTP query parameters have to be forwarded from CloudFront to S3 on the backend:

1. list-type
2. delimiter
3. prefix
4. continuation-token

See [AmazonCloudFront DeveloperGuide](https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html) for information on how to do this.

## Contents

- [C API](_build/docs/client_library/latest/client_library_api.html)
- [Python API](docs/python.html)
- [Changes](docs/changes.html)
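For Python users, the “Basics” section above has a direct equivalent in the omni.client bindings. A minimal sketch; the server URL and file name are placeholders, and the return shapes shown follow the documented Python API but are worth verifying against the version you ship:

```python
import omni.client

url = "omniverse://localhost/Projects"  # placeholder server/path

# list a folder; returns (Result, list of ListEntry)
result, entries = omni.client.list(url)
if result == omni.client.Result.OK:
    for entry in entries:
        print(entry.relative_path)

# stat a single item; returns (Result, ListEntry)
result, entry = omni.client.stat(url + "/scene.usd")

# read a file; returns (Result, version, content); content supports memoryview
result, version, content = omni.client.read_file(url + "/scene.usd")
if result == omni.client.Result.OK:
    data = memoryview(content).tobytes()
```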
allocator_api_globals_users_guide.md
# Globals API (NvBlastGlobals)

The NvBlastGlobals library is a utility library which is used by NvBlastTk (see High Level (Toolkit) API (NvBlastTk)) and some extensions (see Extensions (NvBlastExt)) and samples. It provides a global allocator, error callback, and profiler API.

## Allocator

**Include NvBlastGlobals.h**

A global allocator with interface `nvidia::NvAllocatorCallback` may be set by the user with the function `NvBlastGlobalSetAllocatorCallback` and accessed using `NvBlastGlobalGetAllocatorCallback`. An internal, default allocator is used if the user does not set their own, or if NULL is passed into `NvBlastGlobalSetAllocatorCallback`. This allocator is used by NvBlastTk, as well as any extension that allocates memory. In addition, utility macros are provided such as `NVBLAST_ALLOC`, `NVBLAST_FREE`, `NVBLAST_NEW`, and `NVBLAST_DELETE`.

## Error Callback

**Include NvBlastGlobals.h**

A global error message callback with interface `nvidia::NvErrorCallback` may be set by the user with the function `NvBlastGlobalSetErrorCallback` and accessed using `NvBlastGlobalGetErrorCallback`. An internal, default error callback is used if the user does not set their own, or if NULL is passed into `NvBlastGlobalSetErrorCallback`. This error callback is used by NvBlastTk, as well as many extensions. In addition, utility macros are provided such as `NVBLAST_LOG_ERROR` and `NVBLAST_LOG_WARNING`. Finally, a function with signature given by `Nv::Blast::logLL` is provided which uses the global error callback. This function may be passed into any NvBlast function’s log parameter.

## Profiler API

**Include NvBlastGlobals.h**

BlastTk contains many profiling zones, which use the global profiler accessible through this library. The user may implement the interface `nvidia::NvProfilerCallback` and pass it to the globals library using `NvBlastGlobalSetProfilerCallback`. The profiler callback may be retrieved with `NvBlastGlobalGetProfilerCallback`. A NULL pointer may be passed in, disabling profiling.
alpha-2016-10-21_CHANGELOG.md
# Changelog

## [5.0.4] - 22-January-2024

### Bugfixes

- Fixed issue https://github.com/NVIDIA-Omniverse/PhysX/issues/207, Island removal doesn’t work as expected

## [5.0.3] - 1-November-2023

### Bugfixes

- Fixed memory leak in NvBlastExtAuthoringFindAssetConnectingBonds reported in issue #185.

## [5.0.2] - 25-July-2023

### Bugfixes

- Fixed slice fracturing bug which set the local chunk transform to the identity in some cases

## [5.0.1] - 22-June-2023

### Bugfixes

- Use proper constructors for NvTransform and NvVec3 to avoid using garbage data

## [5.0.0] - 23-Jan-2023

### Changes

- Removed all PhysX dependencies from code outside of the ExtPx extension
- Replaced Px types with NvShared types
  - NvFoundation headers in include/shared/NvFoundation
  - Includes NvPreprocessor.h and NvcTypes.h (formerly in include/lowlevel)
  - Include basic Nv types, such as NvVec3 (used by the Tk library)
- Consolidated header structure
  - include/lowlevel/NvBlastPreprocessor.h is gone
  - Previously-defined NVBLAST_API has been renamed NV_C_API and is now defined in NvPreprocessor.h

## [4.0.2] - 31-Aug-2022

### Bugfixes

- Stress solver Linux crash fix. Explicitly allocating aligned data buffers for use with simd data.

## [4.0.1] - 10-Aug-2022

### Bugfixes

- Stress solver fixes:
  - More robust conversion from angular pressures to linear pressures.
  - Better error tolerance checking.
  - Force sign consistency.

## [4.0.0] - 31-May-2022

### New Features

- Fully integrated stress-damage system. A stress solver is used to determine how bond forces react to impacts and other externally-supplied accelerations. Stress limits (elastic and fatal) determine how bond health (area) deteriorates with bond force. When bonds break and new actors are generated, excess forces are applied to the previously joined bodies. Using a new stress solver with better convergence properties.
- Documentation publishing.

## [3.1.3] - 28-Feb-2022

### Changes

- Update triangulation ear clipping algorithm to avoid outputting triangle slivers.

## [3.1.2] - 24-Feb-2022

### Changes

- Change ExtStressSolver::create() (and downstream functions/classes) to take const NvBlastFamily.
- Change the order colors are compressed in PxVec4ToU32Color()

### Bug Fixes

- Fixed triangulation ear clipping bug that could cause input verts to not be used in output triangulation, leading to T junctions.

## [3.1.1] - 2022-01-12

### Changes

- Exposing NvcVec2 and NvcVec3 operators in new file include/globals/NvCMath.h.

## [3.1.0] - 2022-01-10

### Changes

- Exposing boolean tool API, along with spatial accelerators used with the tool.
  - include/extensions/authoring/NvBlastExtAuthoringBooleanTool.h contains virtual API class Nv::Blast::BooleanTool.
  - include/extensions/authoringCommon/NvBlastExtAuthoringAccelerator.h contains virtual API classes Nv::Blast::SpatialAccelerator and SpatialGrid.
  - include/extensions/authoring/NvBlastExtAuthoring.h has global functions:
    - NvBlastExtAuthoringCreateBooleanTool()
    - NvBlastExtAuthoringCreateSpatialGrid(…)
    - NvBlastExtAuthoringCreateGridAccelerator(…)
    - NvBlastExtAuthoringCreateSweepingAccelerator(…)
    - NvBlastExtAuthoringCreateBBoxBasedAccelerator(…)

## [3.0.0] - 2021-10-13

### Changes

- Rearranged folder layout. Public include files are now under a top-level “include” folder.

## [2.1.7] - 2021-07-18

### Bug Fixes

- Fixed edge case with convex hull overlap test in processWithMidplanes(), so that (0,0,0) normals aren’t generated.
- Prevented crash when no viable chunk is found by findClosestNode(), leading to lookup by invalid chunk index in damage shaders.

## [2.1.6] - 2021-06-24

### Changes

- Prioritize convex hulls over triangles in processWithMidplanes()
- Store local convex hulls longer in createFullBondListAveraged() so they can be used in processWithMidplanes()
- Fix bug with buildDescFromInternalFracture() when there are multiple root chunks.

## [2.1.5] - 2021-05-10

### Changes

- Bond centroid and normal calculations improved when splitting a child chunk. The normal will only be different from before with non-planar splitting surfaces.
- Mesh facet user data, which stores a splitting plane (or surface) identifier, will stay unique if the split mesh is fed back into a new instance of the fracture tool. The new IDs generated will be larger than any ID input using FractureTool::setChunkMesh(…).

## [2.1.4] - 2021-04-08

### Bug Fixes

- OM-29933: Crash fracturing dynamic attachment

## [2.1.3] - 2021-04-05

### Bug Fixes

- Bond area calculation was producing values twice the correct value.
- Fixed exception in TkGroupImpl.

## [2.1.2] - 2021-03-15

### Bug Fixes

- MR #18: Fix asset joint serialization (BlastTk)
- MR #19: Fix index out of bounds (BlastTk)

## [2.1.1] - 2021-03-02

### Changes

- Added Cap’n Proto serialization path for Family to match Asset.
- Fix bug with BlastAsset::getSupportChunkHealthMax() returning the wrong value.
- Add get/set/size data interface to FixedBoolArray.
- Allocate asset memory based on how much space it needs, not serialized data size.
- Release asset memory if deserialization fails.
- Removed FamilyHeader::getActorBufferSize(), use FamilyHeader::getActorsArraySize() instead.

## [2.0.1] - 2021-03-01

### Changes

- Added .pdb files to windows package.
- Bumping version to update dependency chain with linux built from gcc 5.5.0 (for CentOS7 compatibility).

## [2.0.0] - 2021-02-19

### Changes

- Add optional chunkId params to FractureTool::setSourceMeshes() and FractureTool::setChunkMesh()
- Rename functions and variables to better indicate what the indices are used for instead of using generic “chunkIndex” for everything

## [1.4.7] - 2020-10-20

### Changes

- Don’t include bonds that can’t take damage (already broken or unbreakable) in results when applying damage

### Bug Fixes

- Make sure all fields (specifically userData) on NvBlastBondDesc are initialized when creating bonds

## [1.4.6] - 2020-10-08

### Changes

- Updated license file
- Updated copyright dates

### Bug Fixes

- Pull request #15 “Fix Blast bond generation”
- Pull request #16 “Fix invalid pointer access in authoring tools”

## [1.4.5] - 2020-09-30

### Bug Fixes

- Allocate on heap instead of stack in importerHullsInProximityApexFree() to prevent crash

## [1.4.4] - 2020-09-29

### Changes

- Support unbreakable bonds and chunks by setting their health above Nv::Blast::kUnbreakableLimit
- Consolidate code when node is removed

## [1.4.3] - 2020-09-26

### Changes

- Per-chunk internal scaling.
ChunkInfo contains the struct TransformST (scale & translation)

### Bug Fixes

- Fixes many fracturing instabilities with per-chunk scaling

## [1.4.2] - 2020-08-28

### Bug Fixes

- Fixed mesh generation bug when using FractureToolImpl::createChunkMesh

## [1.4.1] - 2020-06-26

### Changes

- Change API references to ‘external’ instead of ‘world’ bonds
- Deprecate ‘world’ versions, should be removed on next major version bump

## [1.2.0] - 2020-01-23

### Changes

- Removed BlastTool
- Removed ApexImporter tool
- Removed ExtImport extension (for Apex)

### New Features

- Reenabling runtime fracture

### Known Issues

- Damage shaders in extensions can miss bonds if the damage volume is too small.
- Authoring code does not use the user-defined allocator (NvBlastGlobals) exclusively.

## [1.1.5] - 2019-09-16

### Changes

- Extensions API refactored to eliminate use of Px types.
- Numerous API changes to meet new coding conventions.
- Packman package manager updated to v. 5.7.2, cleaned up dependency files.
- Chunks created from islands use padded bounds to determine connectivity.
- FractureTool::deleteAllChildrenOfChunk renamed FractureTool::deleteChunkSubhierarchy, added ability to delete chunks.
- NvBlastAsset::testForValidChunkOrder (used when creating an NvBlastAsset) is now more strict, requiring parent chunk descriptors to come before their children. It is still less strict than the order created by NvBlastBuildAssetDescChunkReorderMap.

### New Features

- Authoring tools:
  - Ability to pass chunk connectivity info to uniteChunks function, enabling chunks split by island detection to be united.
  - Option to remove original merged chunks in uniteChunks function.
  - The function uniteChunks allows the user to specify a chunk set to merge. Chunks from that set, and all descendants, are considered for merging.
  - Ability to delete chunks (see note about FractureTool::deleteChunkSubhierarchy in Changes section, above).
  - Added FractureTool::setApproximateBonding function. Signals the tool to create bonds by proximity instead of just using cut plane data.

### Bug Fixes

- Authoring tools:
  - Fixed chunk reordering bug in BlastTool.
  - Chunks which have been merged using the uniteChunks function may be merged again
  - Restored chunk volume calculation
  - NvBlastBuildAssetDescChunkReorderMap failure cases fixed.

### Known Issues

- Damage shaders in extensions can miss bonds if the damage volume is too small.
- Authoring code does not use the user-defined allocator (NvBlastGlobals) exclusively.

## [1.1.4] - 2018-10-24

### Changes

- Unity plugin example updated to work with latest Blast SDK.

### New Features

- Authoring tools:
  - Island detection function islandDetectionAndRemoving has a new parameter, createAtNewDepth.
  - Bonds created between island-based chunks.
  - Added “agg” (aggregate) commandline switch to AuthoringTool. This allows multiple convex hulls per chunk to be generated.
  - Damage pattern authoring interface.

### Bug Fixes

- Build working on later C++ versions (e.g. deprecated UINT32_MAX removed).
- Authoring tools:
  - Fixed .obj material loading when obj folder is same as working directory.
  - Degenerate face generation fix.
  - Fixed memory leak in FractureTool.
  - Proper memory releasing in samples.

## [1.1.4] - 2018-07-09

### Changes

- Single-actor serialization bugfix when actor has world bonds.
- Updated PhysX package for Win64 (vc14 and vc15) and Linux64 to 3.4.24990349, improving GRB behavior and fixing GRB crash/failure on Volta and Turing.
- Documented JSON collision export option introduced in previous version.

### Known Issues

- Damage shaders in extensions can miss bonds if the damage volume is too small.
- Authoring code does not use the user-defined allocator (NvBlastGlobals) exclusively.

## [1.1.3] - 2018-05-30

### Changes

- No longer testing Win32 project scripts. Note generate_projects_vc14win32.bat has been renamed generate_projects_vc14win32_untested.bat.
- Using a PhysX Packman package that no longer includes APEX.
- Updated documentation:
  - Authoring documentation mentions restrictions for meshes to be fractured.
  - Added BlastTool reference to README.md.
  - Updated documentation paths in README.md.
- Using Packman5 for external packages.
- Authoring tools:
  - In NoiseConfiguration, surfaceResolution changed to samplingInterval. The latter is reciprocal of resolution and defined for all 3 axes.
  - Improved cutout robustness.
- Exporter (used by both authoring tools and ApexImporter) has a JSON collision export option.

### New Features

- VC15 Win64 project scripts. Run generate_projects_vc15win64.bat.
- Authoring tools:
  - Noisy cutout fracture.
  - Conic cutout option (tapers cut planes relative to central point).
  - Cutout option “useSmoothing.” Adds generated faces to the same smoothing group as the original face without noise.
  - Periodic cutout boundary conditions.

### Bug Fixes

- Packman target platform dependencies no longer pulling windows packages into other platforms.
- Fixed bond generation for cutout fracture.

### Known Issues

- Damage shaders in extensions can miss bonds if the damage volume is too small.
- Authoring code does not use the user-defined allocator (NvBlastGlobals) exclusively.

## [1.1.2] - 2018-01-26

### Changes

- Improvements to uniteChunks for hierarchy optimization.
- NvBlastExtAuthoringFindAssetConnectingBonds optimized.
- APEX dependency has been removed (ExtImport used it). Now ExtImport has a built-in NvParameterized read that can load an APEX Destructible asset.

### New Features

- FractureTool::setChunkMesh method.
- Distance threshold added to NvBlastExtAuthoringFindAssetConnectingBonds.
- NvBlastExtExporter: IMeshFileWriter::setInteriorIndex function, for control of interior material.
- Cutout and cut fracture methods: NvBlastExtAuthoringCreateCutoutSet and Nv::Blast::CutoutSet API, FractureTool::cut and FractureTool::cutout APIs.
- NvBlastExtAuthoring:
  - NvBlastExtAuthoringCreateMeshFromFacets function.
  - NvBlastExtUpdateGraphicsMesh function.
  - NvBlastExtAuthoringBuildCollisionMeshes function.
  - UV fitting on interior materials using new FractureTool::fitUvToRect and FractureTool::fitAllUvToRect functions.
  - Multi-material support in OBJ file format.

### Bug Fixes

- Fixed bug causing normals on every other depth level to be flipped when exporting Blast meshes.
- Fixed bug where faces are missed after hierarchy optimization on a sliced mesh.
- Fixed subtree chunk count generated in Nv::Blast::Asset::Create (led to a crash in authoring tools, fracturing a pre-fractured mesh).
- Fixed a crash when loading an obj with bad material indices.
- Fixed Actor::split so that visibility lists are correctly updated even when the number of split actors exceeds newActorsMaxCount.

### Known Issues

- Damage shaders in extensions can miss bonds if the damage volume is too small.
- Authoring code does not use the user-defined allocator (NvBlastGlobals) exclusively.
## [1.1.1] - 2017-10-10

### Changes

- NvBlastProgramParams moved to NvBlastExtDamageShaders
- Materials removed from NvBlastTk

### New Features

- Damage shader acceleration structure
- Extended support structures via new asset merge functions in NvBlastExtAssetUtils
- Ability to scale asset components when merging assets with NvBlastExtAssetUtilsMergeAssets
- NvBlastExtAuthoring
  - Option to fit multiple convex hulls to a chunk (uses VHACD)
  - deleteAllChildrenOfChunk and uniteChunks APIs
- Triangle damage shader for swept segments
- Impact damage spread shaders

### Bug Fixes

- Linux build fixes
- NvBlastExtAuthoring
  - Fracturing tools chunk index fix
  - VoronoiSitesGeneratorImpl::generateInSphere fix
  - More consistent use of NVBLAST_ALLOC and NVBLAST_FREE
  - Boolean tool bug fix

### Known Issues

- Damage shaders in extensions can miss bonds if the damage volume is too small.
- Authoring code does not use the user-defined allocator (NvBlastGlobals) exclusively.

## [1.1.0] - 2017-08-28

### Changes

- VC12 is no longer supported.
- New license header, consistent with PhysX license header.
- New serialization extension. NvBlastExtSerialization is now a modular serialization manager. It loads serializer sets for low-level, Tk, and ExtPx. Each serializer handles a particular file format and object type. Currently the universally available format for all object types is Cap’n Proto binary. The file format is universal, as it uses a header to inform the serialization manager which serializer is needed to deserialize the contained data. All authoring and import tools write using this format to files with a “.blast” filename extension.
- Corresponding to the new serialization, the old formats have been deprecated. In particular, the DataConverter tool has been removed. Instead see LegacyConverter in the New Features section.
- TkSerializable virtual base class has been removed. TkAsset and TkFamily are now derived directly from TkIdentifiable. Serialization functions have been removed, replaced by the new serialization extension.
- ExtPxAsset serialization functions have been removed, replaced by the new serialization extension.
- World bonds. A bond descriptor can now take the invalid index for one of its chunkIndices. This will cause an additional support graph node to be created within an asset being created with this descriptor. This node will not correspond to any chunk (it maps to the invalid index in the graph’s chunkIndices array). Actors that contain this new “world node” may be kept static by the user, emulating world attachment. This is easily tested using the new low-level function NvBlastActorIsBoundToWorld.
- With the addition of world bonds (see above), the NvBlastExtImport extension no longer creates an extra “earth chunk” to bind chunks to the world. Instead, it creates world bonds.
- ExtPxAsset now contains an NvBlastActorDesc, which is used as the default actor descriptor when creating an ExtPxFamily from the asset.
- TkFramework no longer has its own allocator and message handler. Instead, this is part of a new NvBlastGlobals API. This way, extensions and TkFramework may share the same allocator.
- SampleAssetViewer
  - Physics simulation now runs concurrently with graphics and some of the sample/blast logic.
  - New Damage tool added: line segment damage
  - Damage tool radius can be set individually for each tool (radial, cutter, line segment, hierarchical).
  - Cubes now removed when a scene is reloaded.
  - Cube throw velocity can be “charged” by holding down the ‘F’ key.
- New damage system built around “health,” see API changes in NvBlastExtShaders and NvBlastExtImpactDamageManager.
- NvBlastExtShearGraphShader uses a chunk-based method to find the closest graph node, improving performance.
- TkGroup no longer uses physx::PxTaskManager interface for task management. Instead, a TkGroupWorker interface has been added. The NvBlastExtPhysX extension uses the physx::PxTaskManager to implement this interface.
- Better error handling in AuthoringTool (stderr and user error handler).
- More consistent commandline switches in AuthoringTool and ApexImporter (--ll, --tk, --px flags).
- Various small clean-ups.

### New Features

- LegacyConverter for handling deprecated formats.
- **NvBlastExtAssetUtils extension**
  - Merge multiple assets into one.
  - Add “world bonds” to an asset (see “World bonds” in the Changes section).
  - Transform an NvBlastAsset’s geometric data in-place.
- **NvBlastExtAuthoring**
  - Open edge detection.
  - Rotation of voronoi cells used for fracturing.
- **“Globals” code (under sdk/globals)**
  - Includes a global allocator, message handler, and profiler API used by TkFramework and extensions.
- **NvBlastExtStress extension**
  - A PhysX-independent API for performing stress calculations with low-level Blast actors.
- **NvBlastActorIsSplitRequired() function**
  - For low-level actors. If this function returns false, NvBlastActorSplit() may be skipped as it will have no effect.
- **NvBlastExtShaders**
  - New “Segment Radial Damage” shader. Damages everything within a given distance of a line segment.
- **New NvBlastExtExporter extension**
  - Allows collision data to be stored in one of three ways:
    - JSON format.
    - FBX mesh format (separate file).
    - FBX mesh format in a second “collision” layer, alongside the graphics mesh nodes corresponding to Blast chunks.
- **LegacyConverter tool**
  - Converts .llasset, .tkasset, .bpxa, .pllasset, .ptkasset, and .pbpxa asset files to the new .blast format using the universal serialization scheme in the new NvBlastExtSerialization extension.
- **NvBlastExtAuthoring**
  - Mesh cleaner, tries to remove self intersections and open edges in the interior of a mesh.
  - Ability to set interior material to existing (external) material, or a new material id.
  - Material ID remapping API.

### Bug Fixes

- **NvBlastExtAuthoring**
  - Slicing normals fix.
- **Various instances of &array[0] to get the data buffer from a std::vector**
  - Now use data() member function. This had led to some crashes with empty vectors.
- **SampleAssetViewer**
  - Fixed dragging kinematic actor.
  - Now loads the commandline-defined asset also when sample resources were not downloaded yet.
- **Serialization documented.**
- **Fixed smoothing groups in FBX exporter code.**
- **Impulse passing from parent to child chunks fixed.**
- **Reading unskinned fbx meshes correctly.**
- **Collision hull generation from fbx meshes fixed.**
- **Win32/64 PerfTest crash fix.**

### Known Issues

- Damage shaders in extensions can miss bonds if the damage volume is too small.
- Authoring extension does not perform convex decomposition to fit chunks with multiple collision hulls.
- Authoring code does not use the user-defined allocator (NvBlastGlobals) exclusively.
## [1.0.0] - 2017-02-24

### Changes

- tclap, imgui, moved to Packman package
- Models and textures for the sample application have been moved to Packman
- Packman packages with platform-specific sections have been split into platform-specific packages
- Improvements to fracturing tools
- TkJoint events no longer contain actor data
- API cleanup:
  - NvBlastActorCreate -> NvBlastFamilyCreateFirstActor
  - NvBlastActorRelease -> NvBlastActorDeactivate
  - NvBlastActorDeserialize -> NvBlastFamilyDeserializeActor
  - Functions that operate on an object start with NvBlast[ObjectName]
  - Functions that create an object purely from a desc start with NvBlastCreate
  - Functions that get scratch start with NvBlast[Object]GetScratchFor[functionname], etc.
  - Object functions take the object as the first input parameter (non-optional output parameters always come first)
  - Removal of NvBlastCommon.h
  - More consistent parameter checking in low-level API
- NvBlastAlloc and NvBlastFree functions have been removed. Blast low-level no longer does (de)allocation. All memory is passed in and managed by the user
- All Blast low-level functions take a log (NvBlastLog) function pointer (which may still be NULL)
- Authoring tool now handles FBX mesh format
- Constructor for TkAssetDesc sets sane defaults
- Sample uses skinning for the 38k tower, for perf improvement
- Further optimizations to sample, including using 4 instead of 2 CPU cores and capping the actor count at 40k
- Linux build (SDK and tests)
- Renamed TkJointUpdateEvent::eventSubtype -> TkJointUpdateEvent::subtype
- “LowLevel” extension renamed “ConvertLL”
- Renamed TkEventReceiver -> TkEventListener

### New Features

- Serialization enabled for XBoxOne

### Bug Fixes

- Can change worker thread count in CPU dispatcher
- TkJoints created from the TkFramework::createJoint function are now released when the TkFramework is released
- Various fixes to unit tests
- Crash fix in CPU dispatcher
- Returning enough buffer space to handle hierarchical fracturing cases

### Known Issues

- Serialization requires documentation

### Changes

- Material API simplified (NvBlastProgramParams)
- Nv::Blast::ExtPhysics renamed Nv::Blast::ExtPx
- Various small changes to the low-level API (function renaming, argument list changes, etc.)
- Extensions libraries reconfigured according to major dependencies and functionality:
  - Authoring
  - Import (depends on PhysX and APEX)
  - PhysX (depends on PhysX)
  - Serialization (depends on PhysX and Cap’n Proto)
  - Shaders
- Source folder reorganization: low-level, Tk, and extensions all under an sdk folder

### New Features

- TkFamily serialization
- Versioned data serialization extensions for both low-level and Tk, based on Cap’n Proto
- TkJoint API, can create joints at runtime, attachments to Newtonian Reference Frame supported
- CMake projects
- PackMan used for dependencies
- Per-bond and per-chunk health initialization
- XBoxOne and Windows support for perf zones
- Timers in Tk
- Stress solver (automatic bond breaking)
- ExtPx asset serialization, combined TkAsset + PhysX collision meshes (.bpxa files)

### Removed Features

- TkComposite objects.
Composites may be created using the new TkJoint API in the TkFramework

### Known Issues

- Serialization requires documentation

### Features

- Blast (low-level) library
- BlastTk (high-level) library
- BlastExt (extensions) library including:
  - AssetAuthoring
  - DataConverter
  - BlastID Utilities
  - ApexImporter Utilities
  - Materials
  - Physics Manager
  - Sync Layer
- Tools:
  - ApexImporter
  - DataConverter
  - AuthoringTool
- Samples:
  - SampleAssetViewer

### Known Issues

- Documentation incomplete
- TkFamily cannot be serialized
- Data conversion utility for Tk library does not exist
- Material API is still changing
animation_index.md
# omni.kit.usd_docs: Omni USD Documentation

## USD in Kit

### FAQ

Q: Does Kit (and Omniverse) use Vanilla USD?

The short answer is no - we have some custom modifications which mean our version of USD, “nv_usd”, is partially ABI incompatible with a standard build of USD. Many of these changes are in Hydra and the Asset Resolver API. We are hoping to migrate towards using vanilla USD as soon as we can, as new APIs in USD like AR 2.0 make this possible. We do provide header files for nv_usd on request, which will allow you to build against nv_usd

Q: Does Kit ship with USD?

Yes - a full USD build is shipped as part of the Kit install. This contains standard command line tools like usdcat etc built against nv_usd. Some tools may require various env vars to be set before they can be used; we don’t currently supply a script which sets them for you.

### Layers

USD layers in Kit can be manipulated through the omni.kit.widget.stage Widget. This emits a number of commands which may be useful to understand

#### Layer/EditTarget Commands

These commands are all part of omni.kit.widget.layers and most of them can be invoked from various parts of that widget

- SetEditTarget
- CreateSublayer
- RemoveSublayer
- RemovePrimSpec
- MergeLayers
- FlattenLayers
- CreateLayerReference
- StitchPrimSpecsToLayer
- MovePrimSpecsToLayer
- MoveSublayer
- ReplaceSublayer
- SetLayerMuteness
- LockLayer

An example of using the SetLayerMuteness command to mute a specific layer in a large scene:

```python
import omni.kit.commands

omni.kit.commands.execute('SetLayerMuteness',
    layer_identifier='/media/USD/AnimalLogic/USD_ALab_0723_vanilla/USD_ALab_0723_vanilla/entity/ztl01_060/ztl01_060_light_pre_input_v002.usda',
    muted=True)
```

### Transforms

USD has a fairly flexible but complex transformation stack, see:

- UsdGeomXformable
- UsdGeomXformCommonAPI

#### Transform-related Commands

There are Kit commands which implement a specific subset of that functionality

- TransformPrim - Transform primitive - takes a 4x4 Matrix input and transforms the object
- TransformPrims
- TransformPrimSRT - Transform primitive - takes a set of vectors for scale, euler rotation, translate, rotation order
- TransformPrimsSRT - Transform multiple primitives
- AddXformOp - Add an attribute and corresponding XformOp to xformOpOrder
- EnableXformOp
- RemoveXformOp
- RemoveXformOpAndAttrbute
- ChangeRotationOp

### Prims

We can create USD Prims in Kit as you would expect. The “Create” menu contains a representative set of Meshes, Shapes, Lights etc

Most of those are calling a single command

- CreatePrim

If we call:

```python
import omni.kit.commands

omni.kit.commands.execute('CreatePrim',
    prim_type='Cylinder',
    attributes={'radius': 50, 'height': 100, 'extent': [(-50, -50, -50), (50, 50, 50)]})
```

we will see a cylinder in the viewport.
## Prims

We can create USD Prims in Kit as you would expect. The "Create" menu contains a representative set of Meshes, Shapes, Lights etc.

Most of those are calling a single command - CreatePrim

If we call:

```python
import omni.kit.commands

omni.kit.commands.execute('CreatePrim',
    prim_type='Cylinder',
    attributes={'radius': 50, 'height': 100, 'extent': [(-50, -50, -50), (50, 50, 50)]})
```

we will see a cylinder in the viewport. The resulting USD snippet is:

```usda
#usda 1.0

def Cylinder "Cylinder"
{
    uniform token axis = "Y"
    float3[] extent = [(-50, -50, -50), (50, 50, 50)]
    double height = 100
    double radius = 50
    custom bool refinementEnableOverride = 1
    custom int refinementLevel = 2
    double3 xformOp:rotateXYZ = (0, 0, 0)
    double3 xformOp:scale = (1, 1, 1)
    double3 xformOp:translate = (0, 0, 0)
    uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateXYZ", "xformOp:scale"]
}
```

## Prim State Manipulation Commands

- TogglePayLoadLoadSelectedPrims - Toggles the load/unload payload of the selected primitives
- SetPayLoadLoadSelectedPrims - Sets the load/unload payload of the selected primitives
- ToggleVisibilitySelectedPrims - Toggles the visibility of the selected primitives
- UnhideAllPrims - Unhide all prims which are hidden

## Prim Creation Commands

- CreateMeshPrim - create non-USD Primitive meshes (Cube, Cylinder, Plane, Sphere etc.)
- CreatePrimWithDefaultXform/CreatePrim - Create a typed USD Prim e.g. Shape (Cube, Cylinder, Cone, Capsule), Light, Camera, Scope, Xform etc.
- CreatePrims - Create multiple primitives

## Hierarchy/Prim Modification Commands

- ParentPrims - Move prims into children of "parent" primitives
- UnparentPrims - Move prims to the stage root ("/")
- GroupPrims - Group primitives
- CopyPrim - Copy a primitive
- CopyPrims
- CreateInstance - Instance a primitive
- CreateInstances
- DeletePrims - Delete primitives
- MovePrim - Move a prim
- MovePrims

## References/Payloads

Kit allows you to manipulate references and payloads on Prims, e.g. through "Add" in the Context Menu available in the Viewport and Stage Widget.

## Reference/Payload related Commands

- AddReference - add a Reference to a Prim
- RemoveReference - remove a Reference from a Prim
- ReplaceReference - replace a Reference on a Prim
- CreateReference - creates a new prim and adds the asset and path as references

Matching Command set for Payloads:

- AddPayload
- RemovePayload
- ReplacePayload
- CreatePayload

The example below adds a reference to "/var/tmp/my_reference.usd" to /World/Cone. Note that the stage indicated inside Usd.Stage.Open is just the stage from the current USD Context:

```python
import omni.kit.commands
from pxr import Usd, Sdf

omni.kit.commands.execute('AddReference',
    stage=Usd.Stage.Open(rootLayer=Sdf.Find('anon:0xe4a44d0:World0.usd'), sessionLayer=Sdf.Find('anon:0xe644f20:World0-session.usda')),
    prim_path=Sdf.Path('/World/Cone'),
    reference=Sdf.Reference('/var/tmp/my_reference.usd'))
```

## Properties and Metadata

See User Docs - Property Panel

These can be manipulated using standard USD APIs, but there are some Kit commands which can help. The "ChangeProperty" command is invoked whenever you change a property using one of Kit's USD Property Panel Widgets.

A handy tip - you can get API documentation for a property by hovering over its name in any Property Panel Widget.

## Commands

- ChangeProperty
- RemoveProperty
- ChangeMetadataInPrims
- ChangeMetadata
- ChangeAttributesColorSpace
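For example, the radius of the cylinder created earlier could be changed with ChangeProperty. This follows the same `prop_path`/`value`/`prev` pattern that appears in the material example later in this document; the prim path is illustrative:

```python
import omni.kit.commands
from pxr import Sdf

# Shrink the cylinder; 'prev' records the old value so the command is undoable.
omni.kit.commands.execute('ChangeProperty',
    prop_path=Sdf.Path('/World/Cylinder.radius'),
    value=25.0,
    prev=50.0)
```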
## Materials

The RTX renderer and IRay both use MDL as their shading language. MDL based shaders are used in Kit, see:

- User Docs - Overview
- User Docs - Materials

To understand Material binding, let's start with a sphere created with Create->Shape->Sphere:

```usda
#usda 1.0

def Sphere "Sphere"
{
    float3[] extent = [(-50, -50, -50), (50, 50, 50)]
    double radius = 50
}
```

If we have this sphere selected, and we call Create->Material->OmniSurface from the Kit Menu Bar and then tweak the resulting material so it's red, it generates the following commands:

```python
import asyncio
import omni.kit.commands
from pxr import Gf, Sdf

async def assign_red_material():
    omni.kit.commands.execute('CreateAndBindMdlMaterialFromLibrary',
        mdl_name='OmniSurface.mdl',
        mtl_name='OmniSurface',
        mtl_created_list=None,
        bind_selected_prims=True)

    await omni.kit.app.get_app().next_update_async()

    omni.kit.commands.execute('ChangeProperty',
        prop_path=Sdf.Path('/World/Looks/OmniSurface/Shader.inputs:diffuse_reflection_color'),
        value=Gf.Vec3f(1.0, 0.0, 0.0),
        prev=Gf.Vec3f(1.0, 1.0, 1.0))

asyncio.ensure_future(assign_red_material())
```

**Note that if we run this as a script, the shader values will not change.**

Why not? In Kit, all MDL material params are populated in their corresponding USD shader nodes lazily, so they won't be populated upfront. The ChangeProperty command can only change properties that already exist in USD, so if we don't trigger parameter authoring, it will fail silently. The authoring can be triggered by selecting the shader.

Here is a working version of the above:

```python
import asyncio
import omni.kit.app
import omni.kit.commands
import omni.usd
from pxr import Gf, Sdf

async def assign_red_material():
    omni.kit.commands.execute('CreateAndBindMdlMaterialFromLibrary',
        mdl_name='OmniSurface.mdl',
        mtl_name='OmniSurface',
        mtl_created_list=None,
        bind_selected_prims=True)

    await omni.kit.app.get_app().next_update_async()

    # NOTE: selecting the shader triggers the lazy parameter authoring.
    selection = omni.usd.get_context().get_selection()
    selection.set_selected_prim_paths(["/World/Looks/OmniSurface/Shader"], False)

    await omni.kit.app.get_app().next_update_async()

    omni.kit.commands.execute('ChangeProperty',
        prop_path=Sdf.Path('/World/Looks/OmniSurface/Shader.inputs:diffuse_reflection_color'),
        value=Gf.Vec3f(1.0, 0.0, 0.0),
        prev=Gf.Vec3f(1.0, 1.0, 1.0))

asyncio.ensure_future(assign_red_material())
```

Running that snippet, the resulting USD scene looks like this:

```usda
#usda 1.0

def Xform "World"
{
    def Sphere "Sphere"
    {
        float3[] extent = [(-50, -50, -50), (50, 50, 50)]
        rel material:binding = </World/Looks/OmniSurface> (
            bindMaterialAs = "weakerThanDescendants"
        )
        double radius = 50
    }

    def Scope "Looks"
    {
        def Material "OmniSurface"
        {
            token outputs:mdl:displacement.connect = </World/Looks/OmniSurface/Shader.outputs:out>
            token outputs:mdl:surface.connect = </World/Looks/OmniSurface/Shader.outputs:out>
            token outputs:mdl:volume.connect = </World/Looks/OmniSurface/Shader.outputs:out>

            def Shader "Shader"
            {
                uniform token info:implementationSource = "sourceAsset"
                uniform asset info:mdl:sourceAsset = @OmniSurface.mdl@
                uniform token info:mdl:sourceAsset:subIdentifier = "OmniSurface"
                color3f inputs:diffuse_reflection_color = (1, 0, 0) (
                    customData = {
                        float3 default = (1, 1, 1)
                    }
                    displayGroup = "Base"
                    displayName = "Color"
                    hidden = false
                )
                token outputs:out
            }
        }
    }
}
```

The first command in the script is an example of a composite command which is used to group together 3 other commands:

- CreatePrim - Create "/World/Looks", where all Material Prims live
- CreateMdlMaterialPrim - Create the "/World/Looks/OmniSurface" Material Prim
- BindMaterial - Bind this material to the selected prim(s)

Material Prims are created in a shared/common location and can be shared - i.e. bound to many prims in the scene - which is why the shader is created where it is.

### Material Commands

- BindMaterial - Set material binding
- SetMaterialStrength
- CreateMdlMaterialPrim
- CreatePreviewSurfaceMaterialPrim
- CreateAndBindMdlMaterialFromLibrary

### Material Queries

To query the bound material on a prim in USD:

```python
import omni.usd
from pxr import UsdShade

stage = omni.usd.get_context().get_stage()
prim = stage.GetPrimAtPath("/World/Sphere")
bound_material, _ = UsdShade.MaterialBindingAPI(prim).ComputeBoundMaterial()
print(f"Bound Material {bound_material}")
```

### Material Assignment

As well as prim-based material assignment, Kit and RTX also support:

- Collection-based Material Assignment
- GeomSubsets

For collection-based assignment, see Pixar Docs - Collection Based Material Assignment
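For the simple prim-based case, the BindMaterial command listed above can be used directly. A minimal sketch, assuming a material already exists at `/World/Looks/OmniSurface` and using the sphere from the earlier example; the argument names follow what Kit emits when binding via the UI:

```python
import omni.kit.commands

# Bind an existing material prim to a mesh prim.
omni.kit.commands.execute('BindMaterial',
    prim_path='/World/Sphere',                 # hypothetical target prim
    material_path='/World/Looks/OmniSurface')  # hypothetical material prim
```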
### GeomSubsets

GeomSubset encodes a subset of a piece of geometry (i.e. a UsdGeomImageable) as a set of indices. Currently only face-subsets are supported. For more details: Pixar Docs - GeomSubset

Here is an example of a single mesh containing 2 cubes, with different materials assigned to each cube:

```usda
#usda 1.0

def Mesh "cubeymccubeface"
{
    uniform token subsetFamily:materialBind:familyType = "partition"

    def GeomSubset "green"
    {
        uniform token elementType = "face"
        uniform token familyName = "materialBind"
        int[] indices = [12,13,14,15,16,17]
        custom rel material:binding = </World/Looks/OmniPBR2>
    }

    def GeomSubset "red"
    {
        uniform token elementType = "face"
        uniform token familyName = "materialBind"
        int[] indices = [0,1,2,3,4,5,6,7,8,9,10,11]
        custom rel material:binding = </World/Looks/OmniPBR1>
    }

    int[] faceVertexCounts = [4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4, 4]
    int[] faceVertexIndices = [0, 1, 3, 2, 0, 4, 5, 1, 1, 5, 6, 3, 2, 3, 6, 7, 0, 2, 7, 4, 4, 7, 6, 5, 8, 9, 11, 10, 8, 12, 13, 9, 9, 13, 14, 11, 10, 11, 14, 15, 8, 10, 15, 12, 12, 15, 14, 13, 16, 17, 19, 18, 16, 20, 21, 17, 17, 21, 22, 19, 18, 19, 22, 23, 16, 18, 23, 20, 20, 23, 22, 21]
    normal3f[] normals = [(0, -1, 0), (0, -1, 0), (0, -1, 0), (0, -1, 0), (0, 0, -1), (0, 0, -1), (0, 0, -1), (0, 0, -1), (1, 0, 0), (1, 0, 0), (1, 0, 0), (1, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, 1), (0, 0, 1), (-1, 0, 0), (-1, 0, 0), (-1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, 1, 0), (0, 1, 0), (0, 1, 0), (0, -1, 0), (0, -1, 0), (0, -1, 0), (0, -1, 0), (0, 0, -1), (0, 0, -1), (0, 0, -1), (0, 0, -1), (1, 0, 0), (1, 0, 0), (1, 0, 0), (1, 0, 0), (0, 0, 1), (0, 0, 1), (0, 0, 1), (0, 0, 1), (-1, 0, 0), (-1, 0, 0), (-1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, 1, 0), (0, 1, 0), (0, 1, 0)] (
        interpolation = "faceVarying"
    )
    point3f[] points = [(-50, -50, -50), (50, -50, -50), (-50, -50, 50), (50, -50, 50), (-50, 50, -50), (50, 50, -50), (50, 50, 50), (-50, 50, 50), (150, -50, -50), (250, -50, -50), (150, -50, 50), (250, -50, 50), (150, 50, -50), (250, 50, -50), (250, 50, 50), (150, 50, 50), (50, -50, -50), (150, -50, -50), (50, -50, 50), (150, -50, 50), (50, 50, -50), (150, 50, -50), (150, 50, 50), (50, 50, 50)]
    float2[] primvars:st = [(1, 0), (0, 0), (0, 1), (1, 1), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0), (0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (0, 1), (1, 1), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0), (0, 0), (0, 1), (1, 1), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0), (0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (0, 1), (1, 1), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0), (0, 0), (0, 1), (1, 1), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0), (0, 0), (0, 1), (1, 1), (1, 0), (0, 0), (0, 1), (1, 1), (1, 0), (1, 1), (0, 1), (0, 0), (1, 0), (1, 1), (0, 1), (0, 0)] (
        interpolation = "faceVarying"
    )
    double3 xformOp:translate = (0, 0, 200)
    uniform token[] xformOpOrder = ["xformOp:translate"]
}
```

# USD Collections

USD Collections can be used in Kit for:

- Light-Linking in the RTX Pathtracer
- Material Assignment

There are some widgets for manipulating, viewing and authoring collections in Kit.
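Collections themselves can also be authored with the standard pxr API. A minimal sketch (the prim paths are illustrative):

```python
import omni.usd
from pxr import Usd

stage = omni.usd.get_context().get_stage()
prim = stage.GetPrimAtPath("/World")

# Apply a named collection to /World and include a prim in it.
collection = Usd.CollectionAPI.Apply(prim, "lighting")
collection.CreateIncludesRel().AddTarget("/World/Sphere")
```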
# The USD Scene in Kit

Saving an empty USD scene in Kit will give you the scene below… it's worth digging into it a bit:

- the customLayerData is read by Kit only - other USD clients will ignore it. It specifies some basic camera settings, render settings (empty unless individual settings are set to non-default values), and muted layer state
- Some standard USD defaults - upAxis, timecode etc. common to all "root" USD files
- A default light

```usda
#usda 1.0
(
    customLayerData = {
        dictionary cameraSettings = {
            dictionary Front = {
                double3 position = (0, 0, 50000)
                double radius = 500
            }
            dictionary Perspective = {
                double3 position = (500.0000000000001, 500.0000000000001, 499.9999999999998)
                double3 target = (0, 0, 0)
            }
            dictionary Right = {
                double3 position = (-50000, 0, -1.1102230246251565e-11)
                double radius = 500
            }
            dictionary Top = {
                double3 position = (-4.329780281177466e-12, 50000, 1.1102230246251565e-11)
                double radius = 500
            }
            string boundCamera = "/OmniverseKit_Persp"
        }
        dictionary omni_layer = {
            dictionary muteness = {
            }
        }
        dictionary renderSettings = {
        }
    }
    defaultPrim = "World"
    endTimeCode = 100
    metersPerUnit = 0.01
    startTimeCode = 0
    timeCodesPerSecond = 24
    upAxis = "Y"
)

def Xform "World"
{
    def DistantLight "defaultLight" (
        prepend apiSchemas = ["ShapingAPI"]
    )
    {
        float angle = 1
        float intensity = 3000
        float shaping:cone:angle = 180
        float shaping:cone:softness
        float shaping:focus
        color3f shaping:focusTint
        asset shaping:ies:file
        double3 xformOp:rotateXYZ = (315, 0, 0)
        double3 xformOp:scale = (1, 1, 1)
        double3 xformOp:translate = (0, 0, 0)
        uniform token[] xformOpOrder = ["xformOp:translate", "xformOp:rotateXYZ", "xformOp:scale"]
    }
}
```

# Audio

Kit includes a set of specialised Audio Schemas and a number of audio related commands. Note that the Kit audio schema is similar to, but not a conformant implementation of, the standard USD audio schema.

The audio prims are typed USD prims, so can be created with CreatePrimWithDefaultXform, e.g.:

```python
import omni.kit.commands

omni.kit.commands.execute('CreatePrimWithDefaultXform', prim_type='Sound', attributes={})

omni.kit.commands.execute('CreatePrimWithDefaultXform', prim_type='Sound', attributes={'auralMode': 'nonSpatial'})

omni.kit.commands.execute('CreatePrimWithDefaultXform', prim_type='Listener', attributes={})
```

There is also a specialised API for working with Kit Audio.

# Audio Commands

- CreateAudioPrimFromAssetPath - create a new audio prim referencing an audio file

# Physics

Start with the User Docs:

- User Docs - Physics
- User Docs - Zero Gravity
- User Docs - Vehicle Dynamics

See also:

- USD Physics Proposal

For Developer Docs, if the extension omni.physx.ui is enabled in Kit, you will see an entry for "Python Scripting Manual" under the Help Menu. This contains an overview, API reference, schema description, commands etc.
# Lights

USD comes with a set of basic Light types, see:

- User Docs - Lighting

Many of these are supported by the RTX Renderer. In theory the common set are portable across multiple renderers, but there may be some disparities - e.g. illumination levels are not always the same across renderers.

For example, to get the Animal Logic ALab scene (ALab download) to look roughly right, we made the following adjustments:

```usda
over "lightrig"
{
    over "lgt_roof"
    {
        float exposure = 9.400001
    }

    over "lgt_bnc"
    {
        float exposure = 0.2
        float intensity = 19
    }

    over "lgt_fill01"
    {
        float exposure = 6.2000003
    }

    over "lgt_fill02"
    {
        float exposure = 5.9
    }

    over "lgt_drawer"
    {
        float exposure = 4.5
        float intensity = 18.800001
    }

    over "lgt_sun_bench"
    {
        float exposure = 19.4
        float intensity = 22.9
    }

    over "lgt_sun_spools"
    {
        float exposure = 17
    }

    over "lgt_sun_leaves"
    {
        float exposure = 20.2
    }
}
```

# Cameras

Kit uses standard USD Cameras, so you can create one like this:

```python
import omni.kit.commands

omni.kit.commands.execute('CreatePrimWithDefaultXform',
    prim_type='Camera',
    attributes={'focusDistance': 400, 'focalLength': 24})
```

You can duplicate the existing Viewport camera with:

```python
import omni.kit.commands

omni.kit.commands.execute('DuplicateFromActiveViewportCameraCommand', viewport_name='Viewport')
```

Note that manipulating the built-in cameras ("Perspective", "Top", "Front", "Right") does not cause commands to be emitted. The idea is that these are session layer cameras used to navigate/manipulate the scene, where you might not want undo/commands. However, if you create any new cameras, commands will be emitted for all relevant operations, and undo will be enabled.

Kit also adds several custom attributes to the base USD Camera schema, such as:

- Sensor Model Attributes
- Synthetic Data Generation Attributes
- Fisheye Lens Attributes

# Animation

Kit can play back sampled USD-based animation. There is a built-in omni.timeline control that will allow you to control animation playback.

There are many extensions available for Kit-based apps to work with and author USD-based animation, e.g.:

- User Docs - Animation
- User Docs - Keyframer
- User Docs - Sequencer

# Curves

USD has a couple of schemas for curves, see:

- Pixar Docs - USDGeomBasisCurves
- Pixar Docs - USDGeomNurbsCurves

Only the USDGeomBasisCurves schema is currently supported by RTX. Currently nonperiodic linear/Bezier/spline tubes can be rendered (ribbons not yet).

Simple hair curve example:

```python
import omni.usd
from omni.kit.usd_docs import simple_hair_01_usda

usd_context = omni.usd.get_context()
usd_context.open_stage(simple_hair_01_usda)
```

# Points

Kit/RTX can render USDGeomPoints. See docs:

- Pixar Docs - USDGeomPoints

Here is an example with 2 different interpolation modes for width:

```usda
def Xform "Points"
```

# Particles

Kit/RTX can render USD PointInstancers. See docs:

- Pixar Docs - USDGeomPointInstancer

There are a number of Kit extensions which allow authoring and manipulation of these in various ways, see:

- User Docs - PointClouds Extension
- User Docs - Particle System
- User Docs - Surface Instancer

# Render Settings

See User Docs - Render Settings

Kit will store Render Settings for RTX and IRay in the root USD layer. Render Settings can also be saved as standalone USD files which can be loaded/saved like "presets". This is explained in the user docs above. Settings are saved when you set a non-default value, and are part of the root customLayerData described above.
Example:

```usda
#usda 1.0
(
    customLayerData = {
        dictionary renderSettings = {
            int "rtx:directLighting:sampledLighting:autoEnableLightCountThreshold" = 15
            bool "rtx:directLighting:sampledLighting:enabled" = 1
            bool "rtx:ecoMode:enabled" = 1
            int "rtx:post:aa:op" = 1
            double "rtx:post:scaling:staticRatio" = 0.8599999807775021
            double "rtx:post:taa:alpha" = 0
            int "rtx:post:taa:samples" = 13
        }
    }
)
```

Kit's Render Settings predate the Pixar proposal for a standard set of schemas for Render Settings and related concepts, see:

- Pixar White Paper

We do hope to adopt this in the future.

# Asset Resolver

Kit ships with an Asset Resolver (currently using a slightly modified Asset Resolver 1.0 API) for resolving Omniverse URLs, i.e. those with the prefix "omni://" or "omniverse://".

Currently this does not easily allow you to fall back to a different URI scheme, although this will be possible with the AR 2.0 API which we hope to implement soon.

You can use standard USD tools such as usdresolve, e.g.:

```
./usdresolve "omniverse://mynucleus_server.ov.nvidia.com/Projects/ALab/USD_ALab_0730_OVMaterials/entry.usda"
```
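The same resolution can be done programmatically from Python inside Kit via the standard Ar API; a minimal sketch (the URL below is a placeholder, and inside Kit the bundled resolver handles the omniverse:// scheme):

```python
from pxr import Ar

resolver = Ar.GetResolver()
# Returns the resolved path, or an empty result if the asset cannot be resolved.
resolved = resolver.Resolve("omniverse://mynucleus_server.ov.nvidia.com/Projects/ALab/USD_ALab_0730_OVMaterials/entry.usda")
print(resolved)
```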
anonymous-data-mode_KitTelemetry.md
# Configuring the `omni.kit.telemetry` Extension ## Overview The `omni.kit.telemetry` extension is responsible for a few major tasks. These largely occur in the background and require no direct interaction from the rest of the app. All of this behavior occurs during the startup of the extension automatically. The major tasks that occur during extension startup are: - Launch the telemetry transmitter app. This app is shipped with the extension and is responsible for parsing, validating, and transmitting all structured log messages produced by the app. Only the specific messages that have been approved and validated will be transmitted. More on this below. - Collect system information and emit structured log messages and crash reporter metadata values for it. The collected system information includes CPU, memory, OS, GPU, and display information. Only information about the capabilities of the system is collected, never any user specific information. - Emit various startup events. This includes events that identify the run environment being used (ie: cloud/enterprise/individual, cloud node/cluster name, etc), the name and version of the app, the various session IDs (ie: telemetry, launcher, cloud, etc), and the point at which the app is ready for the user to interact with it. - Provide interfaces that allow some limited access to information about the session. The `omni::telemetry::ITelemetry` and `omni::telemetry::ITelemetry2` interfaces can be used to access this information. These interfaces are read-only for the most part. Once the extension has successfully started up, it is generally not interacted with again for the duration of the app’s session. ## The Telemetry Transmitter The telemetry transmitter is a separate app that is bundled with the `omni.kit.telemetry` extension. It is launched during the extension’s startup. For the most part the configuration of the transmitter is automatic. However, its configuration can be affected by passing specific settings to the Kit based app itself. In general, any settings under the `/telemetry/` settings branch will be passed directly on to the transmitter when it is launched. There are some settings that may be slightly adjusted or added to however depending on the launch mode. The transmitter process will also inherit any settings under the `/log/` (with a few exceptions) and `/structuredLog/extraFields/` settings branches. In almost all cases, the transmitter process will be unique in the system. At any given time, only a single instance of the transmitter process will be running. If another instance of the transmitter is launched while another one is running, the new instance will immediately exit. This single instance of the transmitter will however handle events produced by all Kit based apps, even if multiple apps are running simultaneously. This limitation can be overcome by specifying a new launch guard name with the `/telemetry/launchGuardName` setting, but is not recommended without also including additional configuration changes for the transmitter such as the log folder to be scanned. Having multiple transmitters running simultaneously could result in duplicate messages being sent and more contention on accessing log files. When the transmitter is successfully launched, it will keep track of how many Kit based apps have attempted to launch it. The transmitter will continue to run until all Kit based apps that tried to launch it have exited. 
This is true regardless of how each Kit based app exits - whether through a normal exit, crashing, or being terminated by the user. The only cases where the transmitter will exit early will be if it detects that another instance is already running, and if it detects that the user has not given any consent to transmit any data. In the latter case, the transmitter exits because it has no job to perform without user consent. When the transmitter is run with authentication enabled (ie: the `/telemetry/transmitter/0/authenticate=true` or `/telemetry/authenticate=true` settings), it requires a way to deliver the authentication token to it. This is usually provided by downloading a JSON file from a certain configurable URL. The authentication token may arrive with an expiry time. The transmitter will request a renewed authentication token only once the expiry time has passed. The authentication token is never stored locally in a file by the transmitter. If the transmitter is unable to acquire an authentication token for any reason (ie: URL not available, attempt to download the token failed or was rejected, etc), that endpoint in the transmitter will simply pause its event processing queue until a valid authentication token can be acquired. When the transmitter starts up, it performs the following checks: - Reads the current privacy consent settings for the user. These settings are found in the `privacy.toml` file that the Kit based app loaded on startup. By default this file is located in `~/.nvidia-omniverse/config/privacy.toml` but can be relocated for a session using the `/structuredLog/privacySettingsFile` setting. - Loads its configuration settings and builds all the requested transmission profiles. The same set of parsed, validated events can be sent to multiple endpoints if the transmitter is configured to do so. - Downloads the appropriate approved schemas package for the current telemetry mode. Each schema in the package is then loaded and validated. Information about each event in each schema is then stored internally. - Parses out the extra fields passed to it. Each of the named extra fields will be added to each validated message before it is transmitted. - In newer versions of the transmitter (v0.5.0 and later), the list of current schema IDs is downloaded and parsed if running in ‘open endpoint’ mode (ie: authentication is off and the `schemaid` extra field is passed on to it). This is used to set the latest value for the `schemaid` field. - Outputs its startup settings to its log file. Depending on how the Kit based app is launched, this log file defaults to either `${kit}/logs/` or `~/.nvidia-omniverse/logs/`. The default name for the log file is `omni.telemetry.transmitter.log`. While the transmitter is running, it repeatedly performs the following operations: - Scans the log directory for new structured log messages. If no new messages are found, the transmitter will sleep for one minute (by default) before trying again. - All new messages that are found are then validated against the set of loaded events. Any message that fails validation (ie: not formatted correctly or its event type isn’t present in the approved events list) will simply be dropped and not transmitted. - Send the set of new approved, validated events to each of the requested endpoints. The transmitter will remove any endpoint that repeatedly fails to be contacted but continue doing its job for all other endpoints. If all endpoints are removed, the transmitter will simply exit. 
- Update the progress tags for each endpoint in each log file to indicate how far into the log file it has successfully processed and transmitted. If the transmitter exits and the log files persist, the next run will simply pick up where it left off.
- Check whether the transmitter should exit. This can occur if all of the launching Kit based apps have exited or if all endpoints have been removed due to them being unreachable.

## Anonymous Data Mode

An anonymous data mode is also supported for Omniverse telemetry. This guarantees that all user information is cleared out, if loaded, very early on startup. Enabling this also enables open endpoint usage, and sets the transmitter to 'production' mode. All consent levels will also be enabled once a random user ID is chosen for the session. This mode is enabled using the `/telemetry/enableAnonymousData` setting (boolean). For more information, please see the [Anonymous Data Mode documentation](#).

## Configuration Options Available to the `omni.kit.telemetry` Extension

The `omni.kit.telemetry` extension will do its best to automatically detect the mode that it should run in. However, sometimes an app can be run in a setting where the correct mode cannot be accurately detected. In these cases the extension will just fall back to its default mode. The current mode can be explicitly chosen using the `/telemetry/mode` setting. However, some choices of mode (ie: 'test') may not function properly without the correct build of the extension and transmitter. The extension can run in the following modes:

- `Production`: Only transmits events that are approved for public users. Internal-only events will only be emitted to local log files and will not be transmitted anywhere. The default transmission endpoint is Kratos (public). This is the default mode.
- `Developer`: Transmits events that are approved for both public users and internal users. The default transmission endpoints are Kratos (public) and NVDF (internal only).
- `Test`: Send only locally defined test events. This mode is typically only used for early iterative testing purposes during development. This mode in the transmitter allows locally defined schemas to be provided. The default transmission endpoints are Kratos (public) and NVDF (internal only).

The extension also detects the 'run environment' it is in as best it can. This detection cannot be overridden by a setting. The current run environment can be retrieved with the `omni::telemetry::ITelemetry2::getRunEnvironment()` function (C++) or the `omni.telemetry.ITelemetry2().run_environment` property (python). The following run environments are detected and supported:

- **Individual**: This is the default mode. This launches the transmitter in its default mode as well (ie: `production` unless otherwise specified). If consent is given, all generated and approved telemetry events will be sent to both Kratos (public) and NVDF (internal only). This mode requires that the user be logged into the Omniverse Launcher app since it provides the authentication information that the public data endpoint requires. If the Omniverse Launcher is not running, data transmission will just be paused until the Launcher app is running. This mode is chosen only if no others are detected. This run environment is typically picked for individual users who install their Omniverse apps through the desktop Omniverse Launcher app. This run environment is referred to as "OVI".
- **Cloud**: This launches the transmitter in 'cloud' mode.
In this mode the final output from the transmitter is not sent anywhere, but rather written to a local file on disk. The intent is that another log consumer service will monitor for changes on this log file and consume events as they become available. This allows more control over which data is ingested and how that data is ingested. This run environment is typically launched through the Omniverse Cloud cockpit web portal and is referred to as “OVC”. - **Enterprise**: This launches the transmitter in ‘enterprise’ mode. In this mode, data is sent to an open endpoint data collector. No authentication is needed in this mode. The data coming in does however get validated before storing. This run environment is typically detected when using the Omniverse Enterprise Launcher app to install or launch the Kit based app. This run environment is referred to as “OVE”. Many of the structured logging and telemetry settings that come from the Carbonite components of the telemetry system also affect how the `omni.kit.telemetry` extension starts up. Some of the more useful settings that affect this are listed below. Other settings listed in the above Carbonite documentation can be referred to for additional information. The following settings can control the startup behavior of the `omni.kit.telemetry` extension, the transmitter launch, and structured logging for the app: - Settings used for configuring the transmitter to use an open endpoint: - `/structuredLog/privacySettingsFile`: Sets the location of the privacy settings TOML file. This setting should only be used when configuring an app in a container to use a special privacy settings file instead of the default one. The default location and name for this file is `~/.nvidia-omniverse/config/privacy.toml`. This setting is undefined by default. - `/telemetry/openTestEndpointUrl`: Sets the URL to use as the test mode open endpoint URL for the transmitter. This value is specified in the extension’s configuration files and may override anything given on the command line or other global config files. - `/telemetry/openEndpointUrl`: Sets the URL to use as the dev or production mode open endpoint URL for the transmitter. This value is specified in the extension’s configuration files and may override anything given on the command line or other global config files. - `/telemetry/enterpriseOpenTestEndpointUrl`: Sets the URL to use as the test mode open endpoint URL for OVE for the transmitter. This value is specified in the extension’s configuration files and may override anything given on the command line or other global config files. - `/telemetry/enterpriseOpenEndpointUrl`: Sets the URL to use as the dev or production mode open endpoint URL for OVE for the transmitter. This value is specified in the extension’s configuration files and may override anything given on the command line or other global config files. - `/telemetry/useOpenEndpoint`: Boolean value to explicitly launch the transmitter in ‘open endpoint’ mode. This will configure the transmitter to set its endpoint to the Kratos open endpoint URL for the current telemetry mode and run environment. In most cases this setting and ensuring that the privacy settings are provided by the user are enough to successfully launch the transmitter in open endpoint mode. This defaults to `false`. - `/telemetry/enableAnonymousData`: Boolean value to override several other telemetry, privacy, and endpoint settings. 
This will clear out all user information settings (both in the settings registry and cached in the running process), choose a random user ID for the session, enable all consent levels, enable `/telemetry/useOpenEndpoint`, and clear out `/telemetry/mode` so that only production mode events can be used.

- **Logging Control Settings:**
  - `/telemetry/log/level`: Sets the logging level that will be passed on to the transmitter. This allows the transmitter to be given a different logging level than the Kit based app was launched with. This defaults to `warning`.
  - `/telemetry/log/file`: Sets the logging output filename that will be passed on to the transmitter. This allows the transmitter to be given a different log output file than was requested for the Kit based app. This defaults to `omni.telemetry.transmitter.log` in the structured log system's log directory (defaults to `~/.nvidia-omniverse/logs/`) but can be overridden to point to the `${kit}/logs/` directory when the app is run in 'portable' mode or from a local developer build.
  - Any other `/log/` settings passed to the Kit based app with the exception of `/log/enableStandardStreamOutput`, `/log/file`, and `/log/level` will be inherited by the transmitter when it is launched.
  - Any settings under the `/structuredLog/extraFields/` branch will be passed along to the transmitter unmodified.
  - Any settings under the `/telemetry/` branch will be passed along to the transmitter unmodified.
  - The `/structuredLog/privacySettingsFile` setting will be passed along to the transmitter if it is specified for the Kit based app. If not, the various privacy consent settings will be passed on individually. Note that the transmitter may still override these privacy settings if it detects a `privacy.toml` file in the expected location.
  - The `/structuredLog/logDirectory` setting will be passed on to the transmitter if explicitly given to the Kit based app.
  - `/telemetry/testLogFile`: Specifies the path to the special log file to use to output some additional information from the transmitter. This log file does not get created by the Carbonite logging system nor does it use its settings. This file provides some extra diagnostic information from the transmitter unaffected by the normal logging system. This defaults to disabling the test log.
- **Telemetry Destination Behavior Control Settings:**
  - `/telemetry/enableNVDF`: For OVI run environments, this boolean setting controls whether the NVDF (internal only) endpoint will be added to the transmitter during its launch. For other run environments or for OVI run environments using the open endpoint, this is ignored. This is enabled by default.
  - `/telemetry/nvdfTestEndpoint`: A boolean setting used to specify whether the 'test' or 'production' NVDF (internal only) endpoint should be used. This setting is only used if `/telemetry/enableNVDF` is also enabled and being used in the OVI run environment. This defaults to `false`.
  - `/telemetry/endpoint`: Overrides the default public endpoint to use in the transmitter. This setting is ignored in the OVE run environment and when using an open endpoint. This defaults to an empty string.
  - `/telemetry/cloudLogEndpoint`: Allows the default endpoint to use for OVC to be overridden. This is expected to be a local file URI that points to the file that the transmitter's final output will be written to. This setting is ignored unless the current run environment is OVC.
This defaults to `file:///${omni_logs}/kit.transmitter.out.log`. Note that this file URI must either set the server name to `localhost` or leave it blank (implying `localhost`). A common point of confusion with local file URIs is in the number of slashes after the `file://` tag. In general, a file URI is of the form `file://<host>/<local_path>`, where `<host>` is the server name (ie: must be `localhost`) and `<local_path>` is the absolute file path on the given host system. On Windows, this might look like `file:///c:/path/to/file.txt`. On POSIX systems, it might look like `file:////path/to/file.txt`.

### Startup Behavior Settings

- `skipDeferredStartup`: A boolean setting to allow the extension's various startup tasks to be run serially instead of in parallel. This is often needed for unit test purposes. If enabled, this will cause the extension to take slightly longer to start up. This defaults to `false`.

### OVC Run Environment Settings

- `cluster`: A string setting that specifies the name of the cluster the session will run on. This defaults to an empty string.
- `node`: A string setting that specifies the name of the node that the session will run on. This defaults to an empty string.
- `extraFieldsToAdd`: A string setting that specifies which extra fields under `/structuredLog/extraFields/` should be added to each message by the transmitter. This is expected to be a comma-separated list of key names. This defaults to an empty string.
  - Note: If the `schemaid` field name is present, the transmitter will automatically retrieve and add the correct schema ID value to each message. This requires the `/telemetry/runEnvironment` setting to determine which schema ID to use.
- `runEnvironment`: A string setting that specifies the run environment detected by the `omni.kit.telemetry` extension.

## Crash Reporter Metadata

The `omni.kit.telemetry` extension sets or modifies several crash reporter metadata values during its startup. The following metadata values are managed by this extension:

- `environmentName`: This metadata value is originally set by Kit-kernel but can be modified by `omni.kit.telemetry` if it is left at the value `default`. Its value will be replaced by the current detected run environment.
- `runEnvironment`: Contains the current detected run environment.
- `externalBuild`: Set to `true` if the current Kit app is being run by an external user or has not been detected as an internal-only session. Set to `false` if an internal user or session has been detected.
- `launcherSessionId`: If the OVI launcher app is currently running in the system, this value is set to the session ID for the launcher.
- `cloudPodSessionId`: If in the OVC run environment, this will contain the cloud session ID.
- `cpuName`: The friendly name of the system's main CPU.
- `cpuId`: The internal ID of the system's main CPU.
- `cpuVendor`: The name of the system's main CPU vendor.
- `osName`: The friendly name of the operating system.
- `osDistro`: The distribution name of the operating system.
- `osVersion`: The detailed version number or code of the operating system.
- `primaryDisplayRes`: The resolution of the system's primary display (if any).
- `desktopSize`: The size of the entire system desktop for the current user.
- `desktopOrigin`: The top-left origin point of the desktop window.
On some systems this may just be (0, 0), but others such as Windows allow for negative origin points. - `displayCount`: The number of attached displays (if any). - `displayRes_<n>`: The current resolution in pixels of the n-th display. - `gpu_<n>`: The name of the n-th GPU attached to the system. - `gpuVRAM_<n>`: The amount of video memory the n-th GPU attached to the system has. - `gpuDriver_<n>`: The active driver version for the n-th GPU attached to the system.
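As a quick diagnostic, the interfaces mentioned above can be queried from Python inside a Kit app. A minimal sketch (which settings are populated depends on how the app was launched):

```python
import carb.settings
import omni.telemetry

# Read back a couple of the settings discussed above.
settings = carb.settings.get_settings()
print("telemetry mode:", settings.get("/telemetry/mode"))
print("anonymous data:", settings.get("/telemetry/enableAnonymousData"))

# The detected run environment (ie: OVI/OVC/OVE) via ITelemetry2.
telemetry = omni.telemetry.ITelemetry2()
print("run environment:", telemetry.run_environment)
```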
api-and-changelog_index.md
# Omni Asset Validator (UI) Kit UIs for validating assets against Omniverse specific rules to ensure they run smoothly across Omniverse products. It includes the following components: - An **Asset Validator Window** to select rules and run validation on individual USD layer files, recursively search a folder for layer files and validate them all, or validate a live / in-memory `Usd.Stage`, such as the main stage in OV Create or other USD based applications. - **Content Browser** context menus to launch the **Asset Validator Window** preset to the selected file/folder. - **Layer Window** context menus to launch the **Asset Validator Window** preset to either the in-memory layer or the selected layer’s file URI. - **Stage Window** context menus to launch the **Asset Validator Window** preset to the currently open stage (e.g the main stage of the application). ## Launching the **Asset Validator Window** In the main menu of any Kit based application (e.g. Create, View) you will find a **Window > Asset Validator** submenu. Click this menu item to open or close the **Asset Validator Window**. The window state will be preserved, so do not worry about losing your settings. For other ways to launch the **Asset Validator Window**, see [Launching from other Windows](#launching-from-other-windows). | # | Option | Description | |---|--------|-------------| | 1 | Asset Mode Selector | Switches between **URI Mode** and **Stage Mode**. See [Choosing your Asset Mode](#choosing-your-asset-mode) for more details. | | 2 | Asset Description | Describes the selected Asset. In URI Mode this should be the fully qualified URI. In Stage Mode it will be an automated message. | | 3 | Asset URI Browser Button | Click the button to launch a **File Picker** and select any USD layer or folder. Only available in **URI Mode**. | | 4 | Enable/Disable Rules Buttons | These buttons toggle all Rules or the Rules within each category. | ## Choosing your Asset Mode The **Asset Validator Window** operates in 2 distinct modes, called **Asset Modes**. - **URI Mode** operates on an Omniverse URI, including files & folders on your local disk, networked drives, nucleus servers, or any other protocol supported by Omniverse (e.g. https). - **Stage Mode** operates on a live / in-memory `Usd.Stage`. This can be the main stage of an Omniverse application like Create or View, or any other stage in memory within the application (e.g. a bespoke stage authored in the **Script Editor**). Once the window has been launched, you can freely switch between these modes using the dropdown menu at the top left corner. Your asset and rule selections will be preserved when you switch modes, so do not worry about losing your place. By default, the window opens in **URI Mode** (1): To locate an Asset URI, use the **Asset URI Browser Button** (2) or paste a link into the **Asset Description** (3). In this mode, the description must be the fully qualified URI of the Asset (e.g. `omniverse://localhost/NVIDIA/Samples/Astronaut/Astronaut.usd`). Alternatively, you can switch to **Stage Mode** (1), which is preset to validate the main stage of the application (2): **Caution** Using **Stage Mode** on the main stage is a live operation that will temporarily alter the main stage. Any edits will occur on a bespoke session sublayer, which requires switching the authoring layer temporarily. It will not alter your actual layers, so you don’t need to save or lock the layers first, and the authoring layer will be restored when the validation completes. 
## Validating the selected Asset

Regardless of which **Asset Mode** you have chosen, you will be presented with a list of pre-defined **validation rules**, organized by category, which has been configured based on the extensions you have loaded and the particular app you are running. If you would like to configure this further, see the section below called **Configuring Rules with Carbonite Settings**.

### Enabling and Disabling Rules

You are free to enable/disable any of the individual rules using the checkboxes (1), or you may use the **Enable/Disable Rules Buttons** (2) to do this in-bulk or per-category. If you want to reset the rules to their default state, use the **Reset Default Rules Button** (3).

**Tip** While the rule names may seem a bit cryptic, the tooltips help to explain the reasoning behind each rule.

### Running the Validator

When you are ready, press the "Analyze Asset Validation" button at the bottom of the window (1):

**Caution** There may be a brief pause while the system contacts the appropriate server, particularly if you have not interacted with this file or folder recently.

The **Asset Validator Window** will now advance to an in-progress results listing of individual assets. This may initially be a blank page, but as each asset is located by the Omni Client Library, a new section will appear with a grey **Waiting** status.

**Tip** These sections are clickable and will expand to show you more details.

## Reviewing Results

As validation of each asset completes, the relevant section will update to one of the following statuses:

- **Valid**: Assets that pass all rules without issue will be blue with a check-mark icon.
- **Failure**: Assets that failed or errored on any of the validation rules will be marked in red.
- **Warning**: Assets that generated no failures may have still generated warnings and will be marked in yellow.

Regardless of status, you can hover over the section header for a quick summary (1), or click the section header for detailed reports of each issue (1). There may be many issues, as some rules run per-prim-per-variant.

## Copying Issues

To generate a report, use the `Copy Output` functionality. Just select the issues or assets you are interested in and click the (1) `Copy Output` button. This will copy the contents of the selected issues to your clipboard in a human-readable format. You can also (2) `Save Output` to a .csv file.

## Fixing Issues

Asset validation includes new functionality to fix issues. Issues that have suggestions can make use of `Fix errors on Selected`. Just select the issue(s) you want to fix and select the corresponding action. `Fix errors on Selected` will perform the fix. Once the scene is fixed, you can save your changes by saving the scene. Some issues may offer multiple locations to apply the fix (the `At` ComboBox). Asset validation offers a default option, but you can choose the one that suits you best.

**Note** Repeated validation of the same asset will be substantially quicker. Any interaction with servers should be cached, and the stage itself may still be in-memory.

## Launching from other Windows

### Using the Content Browser

You may have already located your USD layer file or folder in the Content Browser.
For convenience, you can right click on any USD layer file and select "Validate USD" from the context menu. Doing the same on a folder will provide a context menu entry called "Search for USD files and Validate".

Clicking either of these will not run a validation directly; it will instead launch the Asset Validator Window, preset to URI Mode and pointing to the file or folder that you just specified in the Content Browser.

### Using the Layer Window

If you want to validate layers that you already have open, begin in the Layer Window. Right clicking on any layer (except session layers) will provide related context menus. Clicking any of the related menu entries will not run a validation directly, but will instead launch the Asset Validator Window, preset to either URI Mode or Stage Mode, and pointing to the asset you requested to validate.

Layers will provide either "Validate Layer (w/unsaved changes)" (1) or "Validate Layer (in-memory)" entries, depending on whether there are currently unsaved changes to the layer. Both of these "live" options will operate in Stage Mode, on a bespoke stage containing only that layer and its sublayers.

Layers that are associated with a file (i.e. are not anonymous) will also provide an entry called "Validate Layer (from file)". Use this entry to operate in URI Mode on the file, ignoring any changes you have made in the application.

**Important** Any edits will occur on a session sublayer, so neither option will save or alter the layer you have selected. You don't need to lock the layer first.

### Using the Stage Window

If you want to validate the application's main stage (i.e. the entire layer stack), begin in the **Stage Window**. Right clicking on the empty space (with no prims selected) will provide a context menu entry called "Validate Stage" (1). Clicking this will not run a validation directly; it will instead launch the **Asset Validator Window**, preset to **Stage Mode** and pointing to the same live `Usd.Stage` that you are currently authoring:

> This is a live operation that will temporarily alter the main stage. Any edits will occur on a bespoke session sublayer, which requires switching the authoring layer temporarily. It will not alter your actual layers, so you don't need to save or lock the layers first, and the authoring layer will be restored when the validation completes.

## Configuring Rules with Carbonite Settings

As with many Omniverse tools, the **Asset Validator Window** is configurable at runtime using Carbonite settings. The following **Asset Validator Window** settings work in concert with the `omni.asset_validator.core` settings to hide rules that may not be of use in a particular app, company, or team. Any rules hidden from the UI via these settings **will not be used during validation** runs initiated from the **Asset Validator Window** nor the `omni.asset_validator.ui` Python API.

#### Settings

- `hideDisabledCategories` can be used to exclude rules in the UI based on the `enabledCategories` / `disabledCategories` settings in `omni.asset_validator.core`
- `hideDisabledRules` can be used to exclude rules in the UI based on the `enabledRules` / `disabledRules` settings in `omni.asset_validator.core`

## API and Changelog

We provide a public Python API for controlling the **Asset Validation Window**. Client code is able to:

- Show and hide the window.
- Change **Asset Mode** and set the selected **URI** or **Stage** (including bespoke in-memory stages authored at runtime by the client).
- Enable and disable individual rules (e.g.
click the checkboxes).
- Initiate validation runs (e.g. click the run button) and await the results.
  - Results will be both displayed in the UI and returned by the coroutine.
- Reset the window back to the selection page (e.g. click the back button) or reset to the default state.

- [Python API](api.html)
- [Changelog](CHANGELOG.html)
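For validation without the window, a minimal sketch using the `omni.asset_validator.core` engine that backs this UI (the URL is a placeholder, and `validate()` can also be pointed at a live `Usd.Stage`):

```python
import omni.asset_validator.core as av

# Run all enabled rules against an asset URI and print the issues found.
engine = av.ValidationEngine()
results = engine.validate("omniverse://localhost/NVIDIA/Samples/Astronaut/Astronaut.usd")
print(results)
```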
api-changes_Overview.md
# Overview

## Introduction

The Omni USD extension (`omni.usd`) is the Python frontend of `omni.usd.core`. It serves to load and initialize the Pixar USD library and the Omniverse USD Resolver, which supports opening USD from Omniverse/HTTP URLs. It provides synchronous and asynchronous interfaces in C++ and Python to manage `omni.usd.UsdContext`, which will be introduced in section [UsdContext](#usdcontext). Besides those, this extension also provides common utilities and undoable commands for wrapped USD operations that can be shared by other extensions. It is the foundation component for all other extensions that need to access USD.

## UsdContext

`omni.usd.UsdContext` is the container for a `PXR_NS::UsdStage` that manages the lifecycle of a UsdStage instance. For historical reasons, UsdContext also undertakes extra responsibilities, including managing Hydra Engines and rendering threads, selections, etc. Developers can create multiple instances of UsdContext, while a default instance of UsdContext is instantiated after initialization of the **omni.usd** extension, which can be accessed through the API `omni.usd.get_context()`. Most of the components instantiated by a Kit application use the default UsdContext as the source of a UsdStage, like the Viewport, Stage Window, and Layer Window. Therefore, all status changes from the default UsdContext and all authoring to the stage inside it will be synced by those components, and rendered if they are enabled.

## Programming Guide

The following section mainly introduces how to program with `omni.usd` in Python. For its C++ counterpart, you can refer to the class `omni::usd::UsdContext` for reference.

### UsdContext

As mentioned above, UsdContext is a thin container for a UsdStage. It doesn't encapsulate all APIs of UsdStage, but exposes the UsdStage instance for developers to interact with, so that developers can still use the native USD API to interact with a stage.

## How to Access Default UsdContext and Manage Stage

Here are the common steps to get the default UsdContext and open/close a stage.

### Python

1. Import package:

```python
import omni.usd
```

2. Get the default UsdContext:

```python
usd_context = omni.usd.get_context()
```

3. Open a stage:

```python
result, error_str = await usd_context.open_stage_async(stage_url)

# Gets UsdStage handle
stage = usd_context.get_stage()
```

4. Save a stage:

UsdContext provides APIs to save the stage. Unlike the USD API `Usd.Stage.Save()`, `omni.usd.UsdContext.save_stage_async()`, `omni.usd.UsdContext.save_as_stage_async()` or `omni.usd.UsdContext.save_layers_async()` only saves layers in the local layer stack. It also provides more options for developers to choose which set of layers to save.

```python
# Saves the current stage (all layers in the local layer stack).
result, error_str = await usd_context.save_stage_async()
...
# Saves the current stage to a new location.
result, error_str = await usd_context.save_as_stage_async(new_location_url)
...
# Saves a specified set of layers only.
result, error_str, saved_layers = await usd_context.save_layers_async("", list_of_layer_identifiers)
...
# Saves a specified set of layers and also save-as root layer to a new location.
# Unlike save_as_stage_async, it only saves those layers specified if they have pending edits. Those
# layers that are not given but have pending edits will keep pending edits still.
result, error_str, saved_layers = await usd_context.save_layers_async(new_root_location, list_of_layer_identifiers)
```

5.
Close a stage:

```python
error_str = await usd_context.close_stage_async()
```

You can also attach an already opened stage with the USD API to the UsdContext, like so:

```python
from pxr import Usd

stage = Usd.Stage.Open(stage_url)
# More setups to stage before it's attached to a UsdContext.
...
result, error_str = await usd_context.attach_stage_async(stage)
```

Along with those async APIs, synchronous APIs without the "_async" suffix are also supported. It's recommended to use the asynchronous APIs to avoid blocking the main thread.

### C++

In C++, you need to include the header `omni/usd/UsdContext.h` to access UsdContext:

```cpp
#include <omni/usd/UsdContext.h>

...
auto usdContext = omni::usd::UsdContext::getContext();
...
```

# How to subscribe to Stage Events

Through the APIs of UsdContext, developers can also query status changes of the stage with `omni.usd.UsdContext.get_stage_state()`, and subscribe to event changes by creating an event stream subscription with `omni.usd.UsdContext.get_stage_event_stream()`:

## Python

```python
import omni.usd
import carb

def on_stage_event(event: carb.events.IEvent):
    if event.type == int(omni.usd.StageEventType.OPENING):
        ...
    elif event.type == int(omni.usd.StageEventType.OPENED):
        ...
    elif event.type == int(omni.usd.StageEventType.CLOSING):
        ...
    elif event.type == int(omni.usd.StageEventType.CLOSED):
        ...

usd_context = omni.usd.get_context()
subscription = usd_context.get_stage_event_stream().create_subscription_to_pop(
    on_stage_event, name="Example"
)
```

## C++

```cpp
#include <omni/usd/UsdContext.h>
#include <carb/events/EventsUtils.h>

...
auto usdContext = omni::usd::UsdContext::getContext();
auto stageStream = usdContext->getStageEventStream();
auto stageEvtSub = createSubscriptionToPop(stageStream.get(),
    [](carb::events::IEvent* e)
    {
        switch (static_cast<omni::usd::StageEventType>(e->type))
        {
            case omni::usd::StageEventType::eOpened:
                ...
                break;
            case omni::usd::StageEventType::eClosed:
                ...
                break;
            default:
                break;
        }
    },
    0, "Example"
);
...
```

You can refer to `omni.usd.StageEventType` for a full list of supported events.

# How to Create Another UsdContext

Module omni.usd supports multiple UsdContexts, so it can have different stages opened in addition to the default one, and also different Hydra Engines attached to render other viewports. A UsdContext is indexed by a unique name string, so you need to provide a unique name when you want to create/access a specific UsdContext. You can refer to `omni.usd.create_context()` and `omni.usd.destroy_context()` for creating/destroying a UsdContext. And you can get the context through `omni.usd.get_context()` with the `name` argument to access it.

## Selections

UsdContext also provides interfaces to manage prim selections in a stage, through which you can set/query user selections with the API; it also provides an event to detect selection changes. See `omni.usd.Selection` for more details.

### Python

```python
import omni.usd

context = omni.usd.get_context()
selection = context.get_selection()
...
# How to select multiple prims
# [...] is a list of prim paths in string type.
selection.set_selected_prim_paths([...], True)
...
# How to select/unselect single prim, where:
# select_or_unselect means if you want to select a prim or unselect a prim.
# clear_selected_or_not means if you want to clear all selected prims before this action.
selection.set_prim_path_selected(prim_path_in_str, select_or_unselect, True, clear_selected_or_not, True)
...
# How to clear all selected prims
selection.clear_selected_prim_paths()
...
# How to check if a prim is selected or not.
if selection.is_prim_path_selected(prim_path_in_str):
    ...
# How to get all selected prim paths.
prim_paths = selection.get_selected_prim_paths()
...
# How to select all prims with specific types, where [...] is a list of prim type names.
selection.select_all_prims([...])
```

### C++

```cpp
#include <omni/usd/UsdContext.h>
#include <carb/events/EventsUtils.h>

...
auto usdContext = omni::usd::UsdContext::getContext();
auto selection = usdContext->getSelection();
...
```

In order to subscribe to selection changes, you can refer to "How to Subscribe to Stage Events" above.

## Commands

Module `omni.usd` provides a set of pre-defined undoable USD commands to help other extensions implement undo/redo workflows around USD. For the full list of supported commands, refer to `omni.usd.commands`. You can also refer to `omni.kit.commands` for details of the command system supported in Kit.

## Utils

Module `omni.usd` extends several pieces of metadata for instructing UI implementations, and it provides encapsulated APIs to access that metadata. For example, you can instruct the Stage Window not to show a specific prim, or you can instruct the UI to avoid removing your prim. Refer to `omni.usd.editor` for more details about all provided APIs.

## Thread Safety

UsdStage access is not thread-safe. You should therefore avoid authoring USD with multiple writers in Python, as omni.usd doesn't provide thread-safe APIs but exposes the UsdStage handle directly. In C++, you need to ensure all write accesses are guarded by the USD lock (see omni::usd::UsdContext for more details).

## API Changes

Since Kit 106.0, the old layer interface accessed through **omni.usd.UsdContext.get_layers**, which had been deprecated since Kit 104.0, is removed. To access the corresponding APIs, refer to `omni.kit.usd.layers` for more details.
api-contents_blast-sdk_api.md
# blast-sdk-5.0.4

## Directory hierarchy

- **dir extensions**
  - **dir extensions/assetutils**
    - file extensions/assetutils/NvBlastExtAssetUtils.h
  - **dir extensions/authoring**
    - file extensions/authoring/NvBlastExtAuthoring.h
    - file extensions/authoring/NvBlastExtAuthoringBondGenerator.h
    - file extensions/authoring/NvBlastExtAuthoringBooleanTool.h
    - file extensions/authoring/NvBlastExtAuthoringCutout.h
    - file extensions/authoring/NvBlastExtAuthoringFractureTool.h
    - file extensions/authoring/NvBlastExtAuthoringMeshCleaner.h
  - **dir extensions/authoringCommon**
    - file extensions/authoringCommon/NvBlastExtAuthoringAccelerator.h
    - file extensions/authoringCommon/NvBlastExtAuthoringConvexMeshBuilder.h
    - file extensions/authoringCommon/NvBlastExtAuthoringMesh.h
    - file extensions/authoringCommon/NvBlastExtAuthoringPatternGenerator.h
    - file extensions/authoringCommon/NvBlastExtAuthoringTypes.h
  - **dir extensions/serialization**
    - file extensions/serialization/NvBlastExtLlSerialization.h
    - file extensions/serialization/NvBlastExtSerialization.h
    - file extensions/serialization/NvBlastExtTkSerialization.h
  - **dir extensions/shaders**
  - **dir extensions/stress**
    - file extensions/stress/NvBlastExtStressSolver.h
- **dir globals**
  - file globals/NvBlastAllocator.h
  - file globals/NvBlastDebugRender.h
  - file globals/NvBlastGlobals.h
  - file globals/NvCMath.h
- **dir lowlevel**
  - file lowlevel/NvBlast.h
  - file lowlevel/NvBlastTypes.h
- **dir toolkit**
  - file toolkit/NvBlastTk.h
  - file toolkit/NvBlastTkActor.h
  - file toolkit/NvBlastTkAsset.h
  - file toolkit/NvBlastTkEvent.h
  - file toolkit/NvBlastTkFamily.h
  - file toolkit/NvBlastTkFramework.h
  - file toolkit/NvBlastTkGroup.h
  - file toolkit/NvBlastTkGroupTaskManager.h
  - file toolkit/NvBlastTkIdentifiable.h
  - file toolkit/NvBlastTkJoint.h
  - file toolkit/NvBlastTkObject.h
  - file toolkit/NvBlastTkType.h

## Namespace hierarchy

- namespace Nv
  - namespace Nv::Blast
    - class Nv::Blast::Allocator
    - struct Nv::Blast::AuthoringResult
    - struct Nv::Blast::BeamPatternDesc
    - class Nv::Blast::BlastBondGenerator
    - struct Nv::Blast::BondGenerationConfig
    - class Nv::Blast::BooleanTool
    - struct Nv::Blast::ChunkInfo
    - struct Nv::Blast::CollisionHull
    - struct Nv::Blast::ConvexDecompositionParams
    - class Nv::Blast::ConvexMeshBuilder
    - struct Nv::Blast::CutoutConfiguration
    - class Nv::Blast::CutoutSet
    - struct Nv::Blast::DamagePattern
    - struct Nv::Blast::DebugBuffer
    - struct Nv::Blast::DebugLine
    - struct Nv::Blast::Edge
    - struct Nv::Blast::ExtForceMode
    - class Nv::Blast::ExtSerialization
    - class Nv::Blast::ExtStressSolver
    - struct Nv::Blast::ExtStressSolverSettings
    - struct Nv::Blast::Facet
    - class Nv::Blast::FractureTool
    - struct Nv::Blast::HullPolygon
    - struct Nv::Blast::LlObjectTypeID
    - class Nv::Blast::Mesh
    - class Nv::Blast::MeshCleaner
    - struct Nv::Blast::NoiseConfiguration
    - struct Nv::Blast::PatternDescriptor
    - Nv::Blast::PatternGenerator
    - struct Nv::Blast::PlaneChunkIndexer
    - class Nv::Blast::RandomGeneratorBase
    - struct Nv::Blast::RegularRadialPatternDesc
    - struct Nv::Blast::SlicingConfiguration
    - class Nv::Blast::SpatialAccelerator
    - class Nv::Blast::SpatialGrid
    - class Nv::Blast::TkActor
    - struct Nv::Blast::TkActorData
    - struct Nv::Blast::TkActorDesc
    - class Nv::Blast::TkAsset
    - struct Nv::Blast::TkAssetDesc
    - struct Nv::Blast::TkAssetJointDesc
    - struct Nv::Blast::TkEvent
    - class Nv::Blast::TkEventListener
    - class Nv::Blast::TkFamily
    - struct Nv::Blast::TkFractureCommands
    - struct Nv::Blast::TkFractureEvents
    - class Nv::Blast::TkFramework
    - class Nv::Blast::TkGroup
    - struct Nv::Blast::TkGroupDesc
    - struct Nv::Blast::TkGroupStats
    - class Nv::Blast::TkGroupTaskManager
    - class Nv::Blast::TkGroupWorker
    - class Nv::Blast::TkIdentifiable
    - class Nv::Blast::TkJoint
    - struct Nv::Blast::TkJointData
    - struct Nv::Blast::TkJointDesc
    - struct Nv::Blast::TkJointUpdateEvent
    - class Nv::Blast::TkObject
    - struct Nv::Blast::TkObjectTypeID
    - struct Nv::Blast::TkSplitEvent
    - class Nv::Blast::TkType
    - struct Nv::Blast::TkTypeIndex
    - struct Nv::Blast::TransformST
    - struct Nv::Blast::Triangle
    - struct Nv::Blast::TriangleIndexed
    - struct Nv::Blast::UniformPatternDesc
    - struct Nv::Blast::Vertex
    - class Nv::Blast::VoronoiSitesGenerator
- namespace nvidia
  - namespace nvidia::task

## Structs

- struct NvBlastActor
- struct NvBlastActorDesc
- struct NvBlastActorSplitEvent
- struct NvBlastAsset
- struct NvBlastAssetDesc
- struct NvBlastAssetMemSizeData
- struct NvBlastBond
- struct NvBlastBondDesc
- struct NvBlastBondFractureData
- struct NvBlastChunk
- struct NvBlastChunkDesc
- struct NvBlastChunkFractureData
- struct NvBlastDamageProgram
- struct NvBlastDataBlock
- struct NvBlastExtAssetUtilsBondDesc
- struct NvBlastExtCapsuleRadialDamageDesc
- struct NvBlastExtDamageAccelerator
- struct NvBlastExtImpactSpreadDamageDesc
- struct NvBlastExtMaterial
- struct NvBlastExtProgramParams
- struct NvBlastExtRadialDamageDesc
- struct NvBlastExtShearDamageDesc
- struct NvBlastExtTriangleIntersectionDamageDesc
- struct NvBlastFamily
- struct NvBlastFractureBuffers
- struct NvBlastGraphShaderActor
- struct NvBlastID
- struct NvBlastMessage
- struct NvBlastSubgraphShaderActor
- struct NvBlastSupportGraph
- struct NvBlastTimers

## API contents

- Classes
- Macros
- Directories
- Files
- Functions
- Namespaces
- Structs
- Typedefs
- Variables
api-documentation_Overview.md
# Overview

This extension enables support for changing scene-wide wetness parameters and enables runtime puddle placement in Drivesim.

## Attention

This extension is not related to the SimReady Environment System UI. Both can coexist and will be merged at some point.

## API documentation

Refer to

- [omni::avreality::rain](api/namespace_omni__avreality__rain.html#_CPPv4N4omni9avreality4rainE) for the C++ API documentation
- [omni.avreality.rain](omni.avreality.rain.html#module-omni.avreality.rain) for the Python API documentation

## Describing wetness

### Wetness description in the scene

Wetness in the scene is described using a few parameters. The main ones affecting a properly configured scene are:

- Enabled: Whether wetness support is enabled in the scene. Wetness is activated upon request, as it adds some computational load to the scene.
- Wetness: The amount of wetness in the scene, from fully dry (equivalent to wetness being off) to fully wet.
- Water albedo: The tint that standing water applies to the underlying asset.
- Water transparency: How much of the underlying asset is visible under standing water.
- Accumulation scale: The amount of water present in areas that can accumulate water, from fully dry to maximum accumulation.

These are expected to be changed at runtime depending on weather conditions. See [Runtime wetness settings](#runtime-wetness-settings) for details on how to change the wetness state of the scene at runtime.

Some other properties are of interest when authoring an asset:

- Porosity: Describes the areas of the asset that can absorb water.
- Water accumulation: Describes where standing water would accumulate in case of high precipitation.
- Water albedo: How the accumulated water tints the underlying asset in case of standing water, if the water is not fully transparent.
- Water transparency: How transparent the accumulated water is. This defines how much of the underlying asset remains visible.

See [Authoring wetness](#authoring-wetness) for details on how to enable assets for wetness.

### Puddle placement

For drivable-surface type assets, this extension gives the ability to dynamically place puddles. Puddles are configured using the following parameters:

- Center: The center of the puddle.
- Radius: The radius of the puddle.
- Depth: The maximum depth of the puddle at its center.

See [Puddle placement](#puddle-placement) for details on how to place puddles in the scene.

## Note

Puddle shape can only be circular for now, with a depth that is maximum at the center and decreases toward the border.

# Scene preparation

## Preparation for wetness and puddle placement

The scene needs to be properly set up to be able to react to wetness changes and to enable puddle placement.

### C++

```cpp
#include <omni/avreality/rain/WetnessController.h>

using omni::avreality::rain::WetnessController;

...
WetnessController::loadShaderParameters();
```

### Python

```python
from omni.avreality.rain import WetnessController

...
WetnessController.load_shaders_parameters()
```

The above sequences populate the session layer so that default MDL parameters, if not actually authored, are set on the related prims.

## Additional setup for puddle placement

Puddle placement at runtime requires an additional configuration step. Dynamic textures need to be bound to the shaders so they can be updated at runtime and consumed by the renderer.

### C++

```cpp
#include <omni/avreality/rain/PuddleBaker.h>

using omni::avreality::rain::PuddleBaker;

...
std::vector<std::pair<pxr::SdfPath, std::string>> mapShaderPathsToAccumulationMapNames;
// Fill in the above.

PuddleBaker::assignShadersAccumulationMapTextureNames(mapShaderPathsToAccumulationMapNames);
```

### Python

```python
from typing import List, Tuple
from pxr import Sdf
from omni.avreality.rain import PuddleBaker

...
mapShaderPathsToAccumulationMapNames: List[Tuple[Sdf.Path, str]] = []
PuddleBaker.assign_shaders_accumulation_map_texture_names(mapShaderPathsToAccumulationMapNames)
```

### See also

`omni.avreality.rain.gather_drivable_shaders()` as an example of how to populate those lists.

See `omni::avreality::rain::WetnessController` and `omni::avreality::rain::PuddleBaker`.

# Runtime wetness settings

Wetness can be controlled at runtime using the `WetnessController` class (C++ and Python). This controls the configuration of the complete scene.

- Wetness support can be toggled on/off using the `applyGlobalWetnessState` (`apply_global_wetness_state`) call.
- The wetness amount can be set using the `applyGlobalWetness` (`apply_global_wetness`) call, ranging from 0 (dry) to 1 (wet).

Water accumulation can also be controlled using the `WetnessController`:

- To apply a global scale to the water accumulation, use the `applyGlobalWaterAccumulationScale` (`apply_global_water_accumulation_scale`) call.
- The color of standing water and its transparency level can be changed using the `applyGlobalWaterAlbedo` (`apply_global_water_albedo`) and `applyGlobalWaterTransparency` (`apply_global_water_transparency`) calls.

### C++

```cpp
#include <omni/avreality/rain/WetnessController.h>

using omni::avreality::rain::WetnessController;

...
WetnessController wc;
wc.applyGlobalWetnessState(true); // Enables wetness.
wc.applyGlobalWetness(0.5f); // Half-way wetness.
wc.applyGlobalWaterAccumulationScale(0.5f); // Half-way accumulation
```

### Python

```python
from omni.avreality.rain import WetnessController

...
wc = WetnessController()
wc.apply_global_wetness_state(True)  # Enable wetness
wc.apply_global_wetness(0.5)  # Half-way wetness
wc.apply_global_water_accumulation_scale(0.5)  # Half-way accumulation
```

See `omni::avreality::rain::WetnessController`.

## Authoring wetness

Assets need to be set up so they can properly react to scene wetness changes.

### Props

Props can define these properties to help with wetness interpretation in the scene:

- `Porosity map`: describes how porous the asset is, ranging from 0 (impermeable) to 1 (highly absorbent). This affects how the asset renders when the `wetness` increases.
- `Water accumulation map`: describes the regions of the asset where water would accumulate in case of high precipitation. This setting affects how the asset renders when the `Water accumulation scale` increases.

### Drivable surfaces

Drivable surfaces also provide a `porosity map`. Compared to props, the handling of accumulation for roads is twofold:

- Roads can exhibit an intrinsic accumulation property that describes potholes, cracks, and gutters. Akin to the `Water accumulation map` of props, this property is baked and cannot be edited.
- This leaves the actual `Water accumulation map` free. Puddle placement relies on it to give the ability to place puddles at runtime.

## Puddle placement

To place puddles, the scene must have been prepared after loading (see Scene preparation). Once this is done, the following steps apply (a sketch of the full workflow follows the examples below):

- Create a `PuddleBaker` instance.
- Allocate one or more puddles using its `acquirePuddle` (`acquire_puddle`) method.
- The puddles can be created using the following calls:
  - `createPuddle`
  - `setPuddlePosition`
  - `setPuddleRadius`
  - `setPuddleDepth`
  - `setPuddleColor`
  - `setPuddleOpacity`
- The puddles can then be updated using:
  - `setPuddlePosition`
  - `setPuddleRadius`
  - `setPuddleDepth`
  - `setPuddleColor`
  - `setPuddleOpacity`
- To remove a puddle, it needs to be released through `releasePuddle`.
- To do the actual puddle baking, one needs to call `bake` using the target texture name and its extent. The texture extent, puddle radius, and position must be expressed in the same coordinate space.

### C++

```cpp
#include <omni/avreality/rain/PuddleBaker.h>

using omni::avreality::rain::PuddleBaker;

...
unsigned int puddleCount = 2;
carb::Float2 positions[] = {{5.0f, 10.0f}, {10.0f, 5.0f}};
float radii[] = {5.0f, 7.0f};
float depths[] = {1.0f, 0.5f};

PuddleBaker::bake("my-dynamic-texture",
    carb::Float2{0.0f, 0.0f},
    carb::Float2{20.0f, 20.0f},
    gsl::span<const carb::Float2>(positions, puddleCount),
    gsl::span<const float>(radii, puddleCount),
    gsl::span<const float>(depths, puddleCount)); // gsl::span can be implicitly built with C++17.
```

### Python

```python
from omni.avreality.rain import PuddleBaker

...
puddle_count = 2
positions = [(5.0, 10.0), (10.0, 5.0)]
radii = [5.0, 7.0]
depths = [1.0, 0.5]

PuddleBaker.bake("my-dynamic-texture", (0.0, 0.0), (20.0, 20.0), positions, radii, depths)
```

See also `omni.avreality.rain.bake_puddles()` as an example of how to bake the water accumulation maps of all drivable surface shaders in the active scene.
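As a hedged illustration of the per-puddle workflow listed above, the sketch below chains the calls together; the exact signatures (including the handle type returned by `acquirePuddle` and an instance `bake` overload taking only the texture name and extent) are assumptions, not confirmed API.

```cpp
#include <omni/avreality/rain/PuddleBaker.h>

using omni::avreality::rain::PuddleBaker;

PuddleBaker baker;

// Acquire a puddle and configure it (an integer-like handle type is an assumption).
auto puddle = baker.acquirePuddle();
baker.setPuddlePosition(puddle, carb::Float2{5.0f, 10.0f});
baker.setPuddleRadius(puddle, 5.0f);
baker.setPuddleDepth(puddle, 1.0f);

// Bake the live puddles into the dynamic texture bound during scene preparation.
baker.bake("my-dynamic-texture", carb::Float2{0.0f, 0.0f}, carb::Float2{20.0f, 20.0f});

// Release the puddle when it is no longer needed.
baker.releasePuddle(puddle);
```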
API.md
# API (python)

## Module Summary

| Module | Description |
|--------|-------------|
| omni.example.cpp.ui_widget | omni.example.cpp.ui_widget |

## Module Details

### omni.example.cpp.ui_widget
api_hl_users_guide.md
# High Level (Toolkit) API (NvBlastTk)

## Table of Contents

- [Introduction to NvBlastTk](#tkintroduction)
- [NvBlastTk Class Hierarchy](#tk-class-hierarchy)
- [Linking and Header Files](#tk-include-and-library)
- [Creating the TkFramework](#framework-init)
- [Creating a TkAsset](#tkasset-creation)
- [Instancing a TkAsset: Creation of a TkActor and a TkFamily](#tkasset-instancing)
- [Groups](#tkgroups)
- [Applying Damage to Actors and Families](#damage-in-tk)
- [Joints](#tkjoints)
- [Events](#tkevents)
- [Object and Type Identification](#tktypes)

## Introduction to NvBlastTk

The high-level API, NvBlastTk (Tk stands for "toolkit"), is intended to be a more powerful library and a much more convenient entry point into the use of Blast. Like the low-level library, Tk is physics- and graphics-agnostic. Whereas the low-level API is C-style, Tk uses a C++ API. Everything in Tk is in the `Nv::Blast` namespace (the only exceptions are global-scope functions to create and access a framework singleton, see below). Every object in Tk is prefixed with 'Tk'. For example, the Tk framework interface is `Nv::Blast::TkFramework`.

**For the remainder of this page we will be in the Nv::Blast namespace, and will drop the explicit scope Nv::Blast:: from our names.**

BlastTk adds:

- An object class hierarchy (see [NvBlastTk Class Hierarchy](#tk-class-hierarchy), below).
- A global framework, **TkFramework** (a singleton). This keeps track of **TkIdentifiable** objects and allows the user to query them based upon either GUID or **TkIdentifiable** subclass type, and also provides a number of functions to create the various objects in BlastTk.
- Processing groups with a task interface (see **TkGroup**).
- Event dispatching for actor families (see **TkFamily**).
- Intra-actor and inter-actor joint management (see **TkJoint**). Note, these "joints" only hold descriptor data, since physical objects are not handled by BlastTk.

## NvBlastTk Class Hierarchy

There are two abstract interfaces, one of which derives from the other: **TkObject <- TkIdentifiable**.

- Lightweight objects are derived from **TkObject**.
- Objects which use a GUID and class identification are derived from **TkIdentifiable**.

### TkFramework Components

- **TkIdentifiable**.
- **TkAsset** derives from **TkIdentifiable**. This is mostly a wrapper for NvBlastAsset, however it also stores extra data associated with the asset such as internal joint descriptors.
- **TkFamily** derives from **TkIdentifiable**. One of these objects is made when a **TkActor** is instanced from a **TkAsset**. All actors that are created by splitting the family's original actor remain within the same family. Actor and joint events are dispatched from the **TkFamily**.
- **TkGroup** derives from **TkIdentifiable**. Groups are processing units. The user may create as many groups as they please, and add or remove actors as they please from groups. The group provides a worker (TkGroupWorker) interface which allows the user to process multiple jobs in the group asynchronously. These jobs, along with a call to TkGroup::endProcess(), perform the tasks of generating fracture commands, applying fracture commands, and actor splitting at the low level. The user is informed of splitting through listeners given to TkFamily objects.
- **TkActor** derives from **TkObject**. It is mostly a wrapper for NvBlastActor, but it also provides a number of damage functions to the user.
- **TkJoint** derives from **TkObject**.
Joint descriptors, defined in **TkAsset** descriptors, cause internal **TkJoint** objects to be created within an actor (joining chunks within the same actor). Alternatively, the TkFramework provides a function which allows the user to create an external joint between any two different actors. As actors split, internal joints may become external. The user gets notification whenever joints become external, or when actors joined by joints change or are deleted, through listeners attached to the associated TkFamily objects.

## Linking and Header Files

To use the BlastTk library, the application need only include the header NvBlastTk.h, found in the **include/toolkit** folder, and link against the appropriate version of the NvBlastTk library. Depending on the platform and configuration, various suffixes will be added to the library name. The general naming scheme is:

NvBlastTk(config)(arch).(ext)

- (config) is DEBUG, CHECKED, or PROFILE for the corresponding configurations. For a release configuration there is no (config) suffix.
- (arch) is _x86 or _x64 for Windows 32- and 64-bit builds, respectively, and empty for non-Windows platforms.
- (ext) is .lib for static linking and .dll for dynamic linking on Windows.

## Creating the TkFramework

As a reminder, in this document we assume we are in the Nv::Blast namespace:

```cpp
using namespace Nv::Blast;
```

In order to use NvBlastTk, one first has to create a TkFramework singleton. This simply requires a call to the global function NvBlastTkFrameworkCreate:

```cpp
TkFramework* framework = NvBlastTkFrameworkCreate();
```

The framework may be accessed via:

```cpp
TkFramework* framework = NvBlastTkFrameworkGet();
```

In the sections that follow, it is assumed that a framework has been created, and we have a pointer to it named 'framework' within scope.

Finally, to release the framework, use:

```cpp
framework->release();
```

This will release all assets, families, actors, joints, and groups.

## Creating a TkAsset

The TkAsset object is a high-level wrapper for the low-level NvBlastAsset. The descriptor used to create a TkAsset, a TkAssetDesc, is derived from NvBlastAssetDesc. The base fields should be filled in as described in Creating an Asset from a Descriptor (Authoring). The new field is an optional array of flags to be associated with each bond in the base descriptor. Currently the only flag is "BondJointed," which if set will cause an "internal joint" to be created in actors (TkActor type) created from the asset. See Joints for more on joints in BlastTk.

```cpp
TkAssetDesc desc;
myFunctionToFillInLowLevelAssetFields(desc);    // Fill in the low-level (NvBlastAssetDesc) fields as usual

std::vector<uint8_t> bondFlags(desc.bondCount, 0);  // Clear all flags

// Set BondJointed flags corresponding to joints selected by the user
for (uint32_t i = 0; i < desc.bondCount; ++i)
{
    if (myBondIsJointedFunction(i))  // User-authored
    {
        bondFlags[i] |= TkAssetDesc::BondJointed;
    }
}

desc.bondFlags = bondFlags.data();  // Reference the flag array from the descriptor

TkAsset* asset = framework->createAsset(desc);   // Create a new TkAsset
```

The createAsset function used above creates a low-level NvBlastAsset from the base fields of the descriptor, and then adds internal joint descriptors based upon the bonds' centroids and attached chunks.

An alternative method to create a TkAsset allows the user to pass in a pre-existing NvBlastAsset, and a list of joint descriptors.
If the TkAsset is to have no internal joints, then the joint descriptors are not necessary, and with an NvBlastAsset pointer **llAsset**, a TkAsset may be created simply by using:

```cpp
TkAsset* asset = framework->createAsset(llAsset);
```

By default, such a TkAsset will not "own" the llAsset. When the TkAsset is released, the llAsset memory will be untouched. You can pass ownership to the TkAsset using the last parameter of the createAsset function:

```cpp
TkAsset* asset = framework->createAsset(llAsset, nullptr, 0, true);
```

The last parameter sets ownership. N.B.: in order for the TkAsset to own the underlying llAsset, and therefore release it when the TkAsset is released, the memory for the llAsset must be allocated using the allocator accessed through NvBlastGlobals (see Globals API (NvBlastGlobals)).

If one wants to author internal joints in a TkAsset using this second createAsset method, one must pass in a valid array of joint descriptors of type TkAssetJointDesc. Each joint descriptor takes two positions and two node indices. The positions are the joint's attachment positions in asset space, and the node indices are those of the graph nodes that correspond to support chunks. These indices are not, in general, the same as the chunk indices. An example of initialization of the joint descriptors is given below.

```cpp
std::vector<TkAssetJointDesc> jointDescs(jointCount);   // Assume jointCount = the number of joints to add

jointDescs[0].nodeIndices[0] = 0;   // Attach node 0 to node 1
jointDescs[0].nodeIndices[1] = 1;
jointDescs[0].attachPositions[0] = nvidia::NvVec3( 1.0f, 2.0f, 3.0f );  // Attachment positions are often the same within an asset, but they don't have to be
jointDescs[0].attachPositions[1] = nvidia::NvVec3( 1.0f, 2.0f, 3.0f );
// ... etc.

TkAsset* asset = framework->createAsset(llAsset, jointDescs.data(), jointDescs.size());
```

The code above assumes you know the support graph nodes to which you'd like to attach joints. Often, the user only knows the corresponding chunk indices. Fortunately it's easy to map chunk indices to graph node indices. In order to get the map, use the low-level function

```cpp
const uint32_t* map = NvBlastAssetGetChunkToGraphNodeMap(llAsset, logFn);
```

This map is an array with an entry for every chunk index. To get the graph node index for a chunk indexed **chunkIndex**, use

```cpp
uint32_t nodeIndex = map[chunkIndex];
```

If the chunk indexed by **chunkIndex** does *not* correspond to a support chunk, then the mapped value will be UINT32_MAX, the invalid index. Otherwise, the mapped value will be a valid graph node index.
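Putting the last two snippets together, here is a brief sketch of filling a joint descriptor from authored chunk indices; chunkIndexA and chunkIndexB are hypothetical support chunk indices chosen by the author.

```cpp
const uint32_t* map = NvBlastAssetGetChunkToGraphNodeMap(llAsset, logFn);

TkAssetJointDesc jointDesc;
jointDesc.nodeIndices[0] = map[chunkIndexA];    // Map the authored support chunk indices
jointDesc.nodeIndices[1] = map[chunkIndexB];    // to their graph node indices
jointDesc.attachPositions[0] = nvidia::NvVec3(1.0f, 2.0f, 3.0f);
jointDesc.attachPositions[1] = nvidia::NvVec3(1.0f, 2.0f, 3.0f);
```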
Finally, to release a TkAsset, as with any TkObject-derived object, use the release() method:

```cpp
asset->release();
```

## Instancing a TkAsset: Creation of a TkActor and a TkFamily

Whereas with the Blast low-level (Low Level API (NvBlast)), one must explicitly create a family (NvBlastFamily) from an asset (NvBlastAsset) before creating the first actor (NvBlastActor) in the family, NvBlastTk creates a TkFamily automatically when an unfractured TkActor is instanced from a TkAsset using the framework's createActor function. This family is accessible through the actor and any actor that is created from splitting it. The family is *not* released automatically when all actors within it have been released. The user must use the TkFamily's release() method (see the TkObject base API) to do so, or wait until the framework is released. If a family that contains actors is released, the actors within will be released as well.

The TkFamily has a special role in NvBlastTk, holding user-supplied event listeners (Events). All *internal* actor creation and destruction events are broadcast to listeners through split events (TkSplitEvent). These signal when a fracturing operation has destroyed an actor and created child actors from it. TkActor creation or release that occurs from an explicit API call does not produce events, for example when creating a first unfractured instance of an asset using createActor, or when calling the release() method on a TkActor. TkJoint events are similarly broadcast to receivers (TkJointUpdateEvent). These signal when the actors which are joined by the joints change, so that the user may update a corresponding physical joint. They also signal when a joint no longer attaches actors and is therefore unreferenced. The user may invalidate or release the joint using the TkObject release() method when this occurs (more on joint ownership in Joints).

To create an unfractured TkActor instance from a TkAsset, one first fills in a descriptor (TkActorDesc) and passes it to the framework's createActor function. As with the TkAssetDesc, the TkActorDesc is derived from its low-level counterpart, the NvBlastActorDesc. In addition, the TkActorDesc holds a pointer to the TkAsset being instanced. An example of TkActor creation is given below, given a TkAsset pointer **asset**.

```cpp
TkActorDesc desc;   // The TkActorDesc constructor sets sane default values for the base (NvBlastActorDesc) fields, giving uniform chunk and bond healths of 1.0.

desc.asset = asset; // This field of TkActorDesc must be set to a valid asset pointer.

TkActor* actor = framework->createActor(desc);
```

The TkFamily created with the actor above may be accessed through the actor's getFamily method:

```cpp
TkFamily& family = actor->getFamily();
```

The returned value is a reference since a TkActor's family can never be NULL. Actors resulting from the split of a "parent" actor will always belong to the parent's family.
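Since the family is not released automatically, here is a minimal sketch of the explicit cleanup described above:

```cpp
// Explicitly release the family once its actors are no longer needed.
// Any actors still in the family are released along with it.
TkFamily& family = actor->getFamily();
family.release();
```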
For most applications, the user will need to create a listener object to pass to every family created, in order to keep their physics and graphics representations in sync with the splitting of the TkActor. For more on this, see Events.

## Groups

One important feature of NvBlastTk is the ability to multitask damage processing. The mechanism by which the toolkit does this is the group object, TkGroup. Groups are created at the request of the user; the user may create as many groups as they like. Actors may be added or removed from groups in any way the user wishes, with the only constraint being that a given actor may belong to no more than one group.

A group is a processing object, much like a scene in a physics simulation. Indeed, a natural pattern would be to associate one group per physics scene, and synchronize the group processing with scene simulation. Another pattern would be to subdivide the world into neighborhoods, and associate each neighborhood with a group. A distributed game could take advantage of this structure to similarly distribute computation.

Group processing is performed by *workers*, which have a TkGroupWorker API exposed to the user. The number of workers may be set by the user, with the idea being that this should correspond to the number of threads available for group processing. Processing starts with a call to TkGroup::startProcess(). This creates a number of jobs which the user may assign to workers as they like, each worker potentially on its own thread. The jobs calculate the effects of all damage taken by the group's actors. After all jobs have been run, the user must call TkGroup::endProcess(). This will result in all events being fired off to listeners associated with families with actors in the group. A convenience function, TkGroup::process(), is provided which uses one worker to perform all jobs sequentially on the calling thread. This is a useful shortcut to get BlastTk up and running quickly. A multithreaded group processing implementation is given by TkGroupTaskManager (in NvBlastTkGroupTaskManager.h).

Actors resulting from the split of a "parent" actor will be placed automatically into the group that the parent belonged to. This is similar to the assignment of families from a split, except that unlike families, the user then has the option to move the new actors to other groups, or no group at all. Also similar to families, groups are not automatically released when the last actor is removed from them. Unlike families, when a group is released, the actors which belong to the group are not released. They will, however, be removed from the group before the release is complete.

A typical usage is outlined below. See Applying Damage to Actors and Families for methods of applying damage to actors.

```cpp
// Create actors from descriptors desc1, desc2, ... etc., and attach a listener to each new family created
TkActor* actor1 = framework->createActor(desc1);
actor1->getFamily().addListener(gMyReceiver);   // gMyReceiver is a TkEventListener-derived object.  More on events in a subsequent section.
TkActor* actor2 = framework->createActor(desc2);
actor2->getFamily().addListener(gMyReceiver);
TkActor* actor3 = framework->createActor(desc3);
actor3->getFamily().addListener(gMyReceiver);
// etc...

// Let's create two groups.  First, create a group descriptor.  This may be used to create both groups.
TkGroupDesc groupDesc;
groupDesc.workerCount = 1;  // this example processes groups on the calling thread only

// Now create the groups
TkGroup* group1 = framework->createGroup(groupDesc);
TkGroup* group2 = framework->createGroup(groupDesc);

// Add actor1 and actor2 to group1, and actor3 to group2...
group1->addActor(actor1);
group1->addActor(actor2);
group2->addActor(actor3);
// etc...

// Now apply damage to all actors - *NOTE* damage is described in detail in the next section.
// For now we will just assume a "myDamageFunction" to apply the damage.
myDamageFunction(actor1);
myDamageFunction(actor2);
myDamageFunction(actor3);
// etc...

// Calling the groups' process functions will (synchronously) run all jobs to process damage taken by the contained actors.
group1->process();
group2->process();

// When the groups are no longer needed, they may be released with the usual release method.
group1->release();
group2->release();
```

**Multithreaded processing**

When distributing the jobs as mentioned above, every job must be processed exactly once (over all user tasks). The number of jobs processed per worker can range from a single job (resulting in a user task per job) to all jobs (like Nv::Blast::TkGroup::process() does). At any point in time, no more than the configured workerCount workers may be acquired. Return the worker at the end of each task.

```cpp
Nv::Blast::TkGroupWorker* worker = group->acquireWorker();
// process some jobs
group->returnWorker(worker);
```

## Applying Damage to Actors and Families

Damage in NvBlastTk uses the same damage program scheme as the low-level SDK (see Damage and Fracturing). One passes the program (NvBlastDamageProgram), damage descriptor (program-dependent), and material (also program-dependent) to a TkActor::damage function. Ultimately, the damage descriptor and material data are all parameters used by the damage program. The distinction is that the damage descriptor should describe properties of the thing doing the damage, while the material should describe properties of the actor (the thing being damaged). The interpretation of this data is entirely up to the program's functions, however.

For convenience, the user may set a default material in the actor's family. This assumes, of course, that the material parameters for this default are compatible with the program being used to damage the family's actors.

Examples of the three TkActor damage methods are given below.

### Multiple Damage Descriptors using NvBlastProgramParams
**N.B. - with this method of damage, the lifetime of the NvBlastProgramParams *must* extend at least until the TkGroup::endProcess call for the actor.**

```cpp
NvBlastDamageProgram program =
{
    myGraphShaderFunction,      // A function with the NvBlastGraphShaderFunction signature
    mySubgraphShaderFunction    // A function with the NvBlastSubgraphShaderFunction signature
};

// The example struct "RadialDamageDesc" is modeled after NvBlastExtRadialDamageDesc in the NvBlastExtShaders extension
RadialDamageDesc damageDescs[2];

damageDescs[0].compressive = 10.0f;
damageDescs[0].position[0] = 1.0f;
damageDescs[0].position[1] = 2.0f;
damageDescs[0].position[2] = 3.0f;
damageDescs[0].minRadius = 0.0f;
damageDescs[0].maxRadius = 1.0f;

damageDescs[1].compressive = 100.0f;
damageDescs[1].position[0] = 3.0f;
damageDescs[1].position[1] = 4.0f;
damageDescs[1].position[2] = 5.0f;
damageDescs[1].minRadius = 0.0f;
damageDescs[1].maxRadius = 5.0f;

// The example material "Material" is modeled after NvBlastExtMaterial in the NvBlastExtShaders extension
Material material;
material.health = 10.0f;
material.minDamageThreshold = 0.1f;
material.maxDamageThreshold = 0.8f;

// Set the damage params
NvBlastProgramParams params =
{
    damageDescs,
    2,
    &material
};

// Apply damage
actor->damage(program, &params);    // params must be kept around until TkGroup::endProcess is called!
```

### Single Damage Descriptor with Default TkFamily Material

This method of damage copies the damage descriptor into a buffer, so the user need not hold onto a copy after the damage function call. Only one damage descriptor may be passed in at once. To use this method, the user must first set a default material in the actor's family. For example:

```cpp
// The example material "Material" is modeled after NvBlastExtMaterial in the NvBlastExtShaders extension
Material material;
material.health = 10.0f;
material.minDamageThreshold = 0.1f;
material.maxDamageThreshold = 0.8f;

// Set the default material used by the material-less TkActor::damage call
actor->getFamily().setMaterial(&material);
```

**N.B. the lifetime of the material set *must* extend at least until the TkGroup::endProcess call for the actor.**

Then to apply damage, use:

```cpp
NvBlastDamageProgram program =
{
    myGraphShaderFunction,      // A function with the NvBlastGraphShaderFunction signature
    mySubgraphShaderFunction    // A function with the NvBlastSubgraphShaderFunction signature
};

// The example struct "RadialDamageDesc" is modeled after NvBlastExtRadialDamageDesc in the NvBlastExtShaders extension
RadialDamageDesc damageDesc;

damageDesc.compressive = 10.0f;
damageDesc.position[0] = 1.0f;
damageDesc.position[1] = 2.0f;
damageDesc.position[2] = 3.0f;
damageDesc.minRadius = 0.0f;
damageDesc.maxRadius = 1.0f;

// Apply damage
actor->damage(program, &damageDesc, (uint32_t)sizeof(RadialDamageDesc));
```

### Single Damage Descriptor with Specified Material

This method is just like the one above, except that the user has the opportunity to override the material used during damage.

**N.B. - the lifetime of the material passed in *must* extend at least until the TkGroup::endProcess call for the actor.**

This call is just like the one above with an extra material parameter:

```cpp
actor->damage(program, &damageDesc, (uint32_t)sizeof(RadialDamageDesc), &material);
```

## Joints

Joints in NvBlastTk are abstract representations of physical joints.
When joints become active, change the actors they join, or become unreferenced (the actors they join disappear), the user will receive notification via a TkJointUpdateEvent.

Joints may be defined as a part of a TkAsset, in which case they are considered "internal" joints. Upon splitting into multiple actors, however, an internal joint's chunks may come to belong to two different TkActors. When this happens, the user will receive a TkJointUpdateEvent of subtype TkJointUpdateEvent::External.

Joints may also be created externally at runtime, using the TkFramework::createJoint function. A joint created this way must be between two different TkActors. An externally created joint of this type has another distinguishing characteristic: it may join an actor to "the world," or "Newtonian Reference Frame" (NRF).

```cpp
TkJointDesc desc;

desc.families[0] = &actor0->getFamily();    // Assume we have a valid actor0 pointer
desc.chunkIndices[0] = 1;   // This chunk *must* be a support chunk in the asset that created desc.families[0]
desc.attachPositions[0] = nvidia::NvVec3(1.0f, 2.0f, 3.0f); // The attach position is in asset space

desc.families[1] = &actor1->getFamily();    // Assume we have a valid actor1 pointer... note, actor0 and actor1 could have the same family
desc.chunkIndices[1] = 10;  // This chunk *must* be a support chunk in the asset that created desc.families[1]
desc.attachPositions[1] = nvidia::NvVec3(4.0f, 5.0f, 6.0f); // The attach position is in asset space

// Create the external joint from the descriptor, which joins actor0 and actor1
TkJoint* joint = framework->createJoint(desc);

// Now join actor0 to the NRF

// desc.families[0] already contains actor0's family
desc.chunkIndices[0] = 2;   // Again, this chunk must be a support chunk in the asset that created desc.families[0]
desc.attachPositions[0] = nvidia::NvVec3(0.0f, 0.0f, 0.0f); // The attach position is in asset space

desc.families[1] = nullptr; // Setting the family to NULL designates the world (NRF)
// The value of desc.chunkIndices[1] is not used, since desc.families[1] is NULL
desc.attachPositions[1] = nvidia::NvVec3(0.0f, 0.0f, 10.0f);    // Attach position in the world

// Create the external joint which joins actor0 to the world
TkJoint* jointNRF = framework->createJoint(desc);
```

### Releasing Joints

TkJoints are not released by Blast, except when the TkFramework is released. Otherwise, the user is responsible for releasing TkJoints after they become unreferenced. This is facilitated by the Unreferenced subtype of the TkJointUpdateEvent. After receiving this event for a joint, the user may choose to release it, using the typical TkObject::release() method.

```cpp
joint->release();
```

Note, this method can be called at any time, even before the joint is unreferenced. When called, it will remove its references to its attached actors first, causing the joint to then become unreferenced.

It should be mentioned, however, that joints created with an asset are allocated differently from external joints created using TkFramework::createJoint. Internal joints created from the joint descriptors in a TkAsset are block-allocated with every TkFamily that instances the asset. Calling the release() method on those joints will remove any remaining references to them (as mentioned above), but will not perform any deallocation. Only when the TkFamily itself is released will the internal joint memory for that family be released.
## Events

NvBlastTk uses events to communicate the results of actor splitting, joint updates from actor splitting, and fracture event buffers that can be used to synchronize fracturing between multiple clients. Events are broadcast to listeners which implement the TkEventListener interface. Listeners are held by TkFamily objects. During a TkGroup::endProcess call, relevant events are broadcast to the listeners in the families associated with the actors in the group.

A typical user's receiver implementation might take on the form shown below. The opening of the listener class is reconstructed here for completeness; see the TkEventListener interface declaration in the toolkit headers for the exact signature.

```cpp
// An example listener which handles actor split and joint update events
class MyActorAndJointListener : public TkEventListener
{
public:
    // TkEventListener interface; called with an event buffer during TkGroup::endProcess
    void receive(const TkEvent* events, uint32_t eventCount) override
    {
        // Events are distinguished by their type
        for (uint32_t i = 0; i < eventCount; ++i)
        {
            const TkEvent& event = events[i];
            switch (event.type)
            {
            case TkSplitEvent::EVENT_TYPE:
            {
                const TkSplitEvent* splitEvent = event.getPayload<TkSplitEvent>();  // Split event payload

                // The parent actor may no longer be valid.  Instead, we receive the information it held
                // which we need to update our app's representation (e.g. removal of the corresponding physics actor)
                myRemoveActorFunction(splitEvent->parentData.family, splitEvent->parentData.index, splitEvent->parentData.userData);

                // The split event contains an array of "child" actors that came from the parent.  These are valid
                // TkActor pointers and may be used to create physics and graphics representations in our application
                for (uint32_t j = 0; j < splitEvent->numChildren; ++j)
                {
                    myCreateActorFunction(splitEvent->children[j]);
                }
            }
            break;

            case TkJointUpdateEvent::EVENT_TYPE:
            {
                const TkJointUpdateEvent* jointEvent = event.getPayload<TkJointUpdateEvent>();  // Joint update event payload

                // Joint events have three subtypes, see which one we have
                switch (jointEvent->subtype)
                {
                case TkJointUpdateEvent::External:
                    myCreateJointFunction(jointEvent->joint);   // An internal joint has been "exposed" (now joins two different actors).  Create a physics joint.
                    break;
                case TkJointUpdateEvent::Changed:
                    myUpdateJointFunction(jointEvent->joint);   // A joint's actors have changed, so we need to update its corresponding physics joint.
                    break;
                case TkJointUpdateEvent::Unreferenced:
                    myDestroyJointFunction(jointEvent->joint);  // This joint is no longer referenced, so we may delete the corresponding physics joint.
                    break;
                }
            }
            break;

            // Unhandled:
            case TkFractureCommands::EVENT_TYPE:
            case TkFractureEvents::EVENT_TYPE:
            default:
                break;
            }
        }
    }
};
```

Whenever a new TkActor is created by the user (via TkFramework::createActor, see Instancing a TkAsset: Creation of a TkActor and a TkFamily), its newly-made family should be given whatever listeners the user wishes to attach. For example,

```cpp
TkActor* actor = framework->createActor(actorDesc);
actor->getFamily().addListener(myListener); // myListener is an object which implements TkEventListener (see MyActorAndJointListener above, for example)
```

Listeners may also be removed from families at any time.

## Object and Type Identification

NvBlastTk objects that are derived from TkIdentifiable (TkAsset, TkFamily, and TkGroup) support an object and class (type) identification system. The TkIdentifiable interfaces setID and getID allow the user to set and access an NvBlastID for each object. The NvBlastID is a 128-bit identifier. TkIdentifiable objects are tracked by the TkFramework, which may be used to look up an object by its NvBlastID. Upon creation, TkIdentifiable objects are given a GUID, a unique NvBlastID. The user is welcome to change the object's GUID at any time, with the restriction that the GUID cannot be all zero bytes. With an object's GUID, one may look up the object using the TkFramework function findObjectByID:

```cpp
TkIdentifiable* object = framework->findObjectByID(id); // id = an NvBlastID GUID
```

If the object is found, a non-NULL pointer will be returned.
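For instance, here is a small sketch of assigning a user-defined GUID and looking the object up later; it assumes NvBlastID exposes its 128 bits as a 16-byte data array, and the identifier bytes themselves are hypothetical.

```cpp
#include <cstring>

NvBlastID id;
std::memcpy(id.data, "0123456789abcdef", 16);   // Hypothetical 16-byte identifier; must not be all zero bytes

asset->setID(id);   // Assign the GUID to a TkIdentifiable object (here, a TkAsset)

// ... later, look the object up by its GUID
TkIdentifiable* object = framework->findObjectByID(id);
```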
TkIdentifiable-derived classes also have a class identification system, the TkType interface. From an individual object one may use the TkIdentifiable interface getType to access the class's TkType interface. Alternatively, one may use the TkFramework getType function with a TkTypeIndex::Enum argument. For example, to get the TkType interface for the TkAsset class, use

```cpp
const TkType* assetType = framework->getType(TkTypeIndex::Asset);
```

The type interface may be used:

- to access class-specific object lists in the framework,
- to identify the class of a TkIdentifiable obtained through ID lookup or deserialization, or
- to obtain the class's name and format version number.

For example, to access a list of all families:

```cpp
// Get the TkFamily type interface
const TkType* familyType = framework->getType(TkTypeIndex::Family);

// Get the family count to allocate a buffer
const uint32_t familyCount = framework->getObjectCount(familyType);
std::vector<TkIdentifiable*> families(familyCount);

// Write the families to the buffer
const uint32_t familiesFound = framework->getObjects(families.data(), familyCount, familyType);
```

In the above code, the values of familyCount and familiesFound should be equal.

An alternative usage of TkFramework::getObjects allows the user to write to a (potentially) smaller buffer, iteratively. For example:

```cpp
uint32_t familiesFound;
uint32_t totalFamilyCount = 0;
do
{
    // Write to a fixed-size buffer
    TkIdentifiable* familyBuffer[16];
    familiesFound = framework->getObjects(familyBuffer, 16, familyType, totalFamilyCount);
    totalFamilyCount += familiesFound;

    // Process the families found so far
    myProcessFamiliesFunction(familyBuffer, familiesFound);
} while (familiesFound == 16);
```

To use the type interface to identify a class, perhaps after serialization or lookup by ID, one may do something like:

```cpp
// Assume we have a TkIdentifiable pointer called "object"

// Get the type interfaces of interest
const TkType* assetType = framework->getType(TkTypeIndex::Asset);
const TkType* familyType = framework->getType(TkTypeIndex::Family);

if (object->getType() == *assetType)
{
    TkAsset* asset = static_cast<TkAsset*>(object);
    // Process the object as a TkAsset
}
else if (object->getType() == *familyType)
{
    TkFamily* family = static_cast<TkFamily*>(object);
    // Process the object as a TkFamily
}
```

A TkIdentifiable-derived class may be queried for its name using the TkType interface, using TkType::getName(). This function returns a const char pointer to a string. Finally, one may query the class for its current format version number using TkType::getVersion().
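As a brief sketch of those last two calls (the return type of getVersion is assumed here to be an unsigned integer):

```cpp
const TkType* assetType = framework->getType(TkTypeIndex::Asset);
const char* className = assetType->getName();       // The class name string, e.g. for logging
const uint32_t version = assetType->getVersion();   // The class's current format version number
```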
api_ll_users_guide.md
# Low Level API (NvBlast) ## Introduction The low-level API is the core of Blast destruction. It is designed to be a minimal API that allows an experienced user to incorporate destruction into their application. Summarizing what the low-level API has, and doesn’t have: - There is no physics representation. The low-level API is agnostic with respect to any physics engine, and furthermore does not have any notion of collision geometry. The NvBlastActor is an abstraction which is intended to correspond to a rigid body. However, it is up to the user to implement that connection. The NvBlastActor references a list of visible chunk indices, which correspond to NvBlastChunk data in the asset. The NvBlastChunk contains a userData field which can be used to associate collision geometry with the actor based upon the visible chunks. The same is true for constraints created between actors. Bonds contain a userData field that can be used to inform the user that actors should have joints created at a particular location, but it is up to the user to create and manage physical joints between two actors. - There is no graphics representation. Just as there is no notion of collision geometry, there is also no notion of graphics geometry. The NvBlastChunk userData field (see the item above) can be used to associate graphics geometry with the actor based upon the visible chunks. - There is no notion of threading. The API is a collection of free functions which the user may call from appropriate threads. Blast guarantees that it is safe to operate on different actors from different threads. - There is no global memory manager, message handler, etc. All low-level API functions take an optional message function pointer argument in order to report warnings or errors. Memory is managed by the user, and functions that build objects require an appropriately-sized memory block to be passed in. A corresponding utility function that calculates the memory requirements is always present alongside such functions. Temporary storage needed by a function is always handled via user-supplied scratch space. For scratch, there is always a corresponding “RequiredScratch” function or documentation which lets the user know how much scratch space is needed based upon the function arguments. - Backwards-compatible, versioned, device-independent serialization is not handled by Blast. There is, however, a Blast extension which does, see Serialization (NvBlastExtSerialization). However, a simple form of serialization may be performed on assets and families (see Definitions) via simple memory copy. The data associated with these objects is available to the user, and may be copied and stored by the user. Simply casting a pointer to such a block of memory to the correct object type will produce a usable object for Blast. (The only restriction is that the block must be 16-byte aligned.) Families contain a number of actors and so this form of deserialization recreates all actors in the family. This form of serialization may be used between two devices which have the same endianness, and contain Blast SDKs which use the same object format. - Single-actor serialization and deserialization is, however, supported. This is not as light-weight as family serialization, but may be a better serialization model for a particular application. To deserialize a single actor, one must have a family to hold the actor, created from the appropriate asset. If none exists already, the user may create an empty family. 
After that, all actors that had been in that family may be deserialized into it one-at-a-time, in any order.

## Linking and Header Files

To use the low-level Blast SDK, the application need only include the header NvBlast.h, found in the top-level `include` folder, and link against the appropriate version of the NvBlast library. Depending on the platform and configuration, various suffixes will be added to the library name. The general naming scheme is:

NvBlast(config)(arch).(ext)

- (config) is DEBUG, CHECKED, or PROFILE for the corresponding configurations. For a release configuration there is no (config) suffix.
- (arch) is _x86 or _x64 for Windows 32- and 64-bit builds, respectively, and empty for non-Windows platforms.
- (ext) is .lib for static linking and .dll for dynamic linking on Windows.

## Creating an Asset from a Descriptor (Authoring)

The NvBlastAsset is an opaque type pointing to an object constructed by Blast in memory allocated by the user. To create an asset from a descriptor, use the function `NvBlastCreateAsset`. See the function documentation for a description of its parameters.

N.B., there are strict rules for the ordering of chunks within an asset, and also conditions on the chunks marked as "support" (using the NvBlastChunkDesc::SupportFlag). See the function documentation for these conditions. NvBlastCreateAsset does *not* reorder chunks or modify support flags to meet these conditions. If the conditions are not met, NvBlastCreateAsset fails and returns NULL. However, Blast provides helper functions to reorder chunk descriptors and modify the support flags within those descriptors so that they are valid for asset creation. The helper functions return a mapping from the original chunk ordering to the new chunk ordering, so that corresponding adjustments or mappings may be made for graphics and other data the user associates with chunks.

Example code is given below. Throughout, we assume the user has defined a logging function called `logFn`, with the signature of NvBlastLog. In all cases, the log function is optional, and NULL may be passed in its place.

```cpp
// Create chunk descriptors
std::vector<NvBlastChunkDesc> chunkDescs;
chunkDescs.resize( chunkCount );    // chunkCount > 0

chunkDescs[0].parentChunkIndex = UINT32_MAX;    // invalid index denotes a chunk hierarchy root
chunkDescs[0].centroid[0] = 0.0f;   // centroid position in asset-local space
chunkDescs[0].centroid[1] = 0.0f;
chunkDescs[0].centroid[2] = 0.0f;
chunkDescs[0].volume = 1.0f;    // Unit volume
chunkDescs[0].flags = NvBlastChunkDesc::NoFlags;
chunkDescs[0].userData = 0; // User-supplied ID.  For example, this can be the index of the chunkDesc.
                            // The userData can be left undefined.

chunkDescs[1].parentChunkIndex = 0; // child of chunk described by chunkDescs[0]
chunkDescs[1].centroid[0] = 2.0f;   // centroid position in asset-local space
chunkDescs[1].centroid[1] = 4.0f;
chunkDescs[1].centroid[2] = 6.0f;
chunkDescs[1].volume = 1.0f;    // Unit volume
chunkDescs[1].flags = NvBlastChunkDesc::SupportFlag;    // This chunk should be represented in the support graph
chunkDescs[1].userData = 1;
// ... etc. for all chunks

// Create bond descriptors
std::vector<NvBlastBondDesc> bondDescs;
bondDescs.resize( bondCount );  // bondCount > 0

bondDescs[0].chunkIndices[0] = 1;   // chunkIndices refer to chunk descriptor indices for support chunks
bondDescs[0].chunkIndices[1] = 2;
bondDescs[0].bond.normal[0] = 1.0f; // normal in the +x direction
bondDescs[0].bond.normal[1] = 0.0f;
bondDescs[0].bond.normal[2] = 0.0f;
bondDescs[0].bond.area = 1.0f;  // unit area
bondDescs[0].bond.centroid[0] = 1.0f;   // centroid position in asset-local space
bondDescs[0].bond.centroid[1] = 2.0f;
bondDescs[0].bond.centroid[2] = 3.0f;
bondDescs[0].userData = 0;  // this can be used to tell the user more information about this
                            // bond, for example to create a joint when this bond breaks

bondDescs[1].chunkIndices[0] = 1;
bondDescs[1].chunkIndices[1] = ~0;  // ~0 (UINT32_MAX) is the "invalid index."  This creates a world bond
// ... etc. for bondDescs[1], all other fields are filled in as usual

// ... etc. for all bonds

// Set the fields of the descriptor
NvBlastAssetDesc assetDesc;
assetDesc.chunkCount = chunkCount;
assetDesc.chunkDescs = chunkDescs.data();
assetDesc.bondCount = bondCount;
assetDesc.bondDescs = bondDescs.data();

// Now ensure the support coverage in the chunk descriptors is exact, and the chunks are correctly ordered
std::vector<char> scratch( chunkCount * sizeof(NvBlastChunkDesc) ); // This is enough scratch for both NvBlastEnsureAssetExactSupportCoverage and NvBlastReorderAssetDescChunks
NvBlastEnsureAssetExactSupportCoverage( chunkDescs.data(), chunkCount, scratch.data(), logFn );
std::vector<uint32_t> map(chunkCount);  // Will be filled with a map from the original chunk descriptor order to the new one
NvBlastReorderAssetDescChunks( chunkDescs.data(), chunkCount, bondDescs.data(), bondCount, map.data(), true, scratch.data(), logFn );

// Create the asset
scratch.resize( NvBlastGetRequiredScratchForCreateAsset( &assetDesc ) );    // Provide scratch memory for asset creation
void* mem = malloc( NvBlastGetAssetMemorySize( &assetDesc ) );  // Allocate memory for the asset object
NvBlastAsset* asset = NvBlastCreateAsset( mem, &assetDesc, scratch.data(), logFn );
```

It should be noted that the geometric information (centroid, volume, area, normal) in chunks and bonds is only used by damage shader functions. Depending on the shader, some, all, or none of the geometric information will be needed. The user may write damage shader functions that interpret this data in any way they wish.

## Cloning an Asset

To clone an asset, one only needs to copy the memory associated with the NvBlastAsset.

```cpp
uint32_t assetSize = NvBlastAssetGetSize( asset );
NvBlastAsset* newAsset = (NvBlastAsset*)malloc(assetSize);  // NOTE: the memory buffer MUST be 16-byte aligned!
memcpy( newAsset, asset, assetSize );   // this data may be copied into a buffer, stored to a file, etc.
```

N.B. the comment after the malloc call above. NvBlastAsset memory **must** be 16-byte aligned.

## Releasing an Asset

Blast low-level does no internal allocation; since the memory is allocated by the user, one simply has to free the memory they've allocated. The asset pointer returned by NvBlastCreateAsset has the same numerical value as the mem block passed in (if the function is successful, or NULL otherwise). So releasing an asset with memory allocated by `malloc` is simply done with a call to `free`:

```cpp
free( asset );
```

## Creating Actors and Families

Actors live within a family created from asset data. To create an actor, one must first create a family.
This family is used by the initial actor created from the asset, as well as all of the descendant actors created by recursively fracturing the initial actor. As with assets, family allocation is done by the user. To create a family, use:

```cpp
// Allocate memory for the family object - this depends on the asset being represented by the family.
void* mem = malloc( NvBlastAssetGetFamilyMemorySize( asset, logFn ) );
NvBlastFamily* family = NvBlastAssetCreateFamily( mem, asset, logFn );
```

When an actor is first created from an asset, it represents the root of the chunk hierarchy, that is, the unfractured object. To create this actor, use:

```cpp
// Set the fields of the descriptor
NvBlastActorDesc actorDesc;
actorDesc.asset = asset; // point to a valid asset
actorDesc.initialBondHealth = 1.0f; // this health value will be given to all bonds
actorDesc.initialChunkHealth = 1.0f; // this health value will be given to all lower-support chunks

// Provide scratch memory
std::vector<char> scratch( NvBlastFamilyGetRequiredScratchForCreateFirstActor( &actorDesc ) );

// Create the first actor
NvBlastActor* actor = NvBlastFamilyCreateFirstActor( family, &actorDesc, scratch.data(), logFn ); // ready to be associated with physics and graphics by the user
```

## Copying Actors (Serialization and Deserialization)

There are two ways to copy NvBlastActors: cloning an NvBlastFamily, and single-actor serialization. Cloning an NvBlastFamily is extremely fast, as it only requires a single memory copy. All actors in the family may be saved, loaded, or copied at once in this way.

## Cloning a Family

To clone a family, use the family pointer, which may be retrieved from any active actor in the family if needed, using the NvBlastActorGetFamily function:

```cpp
const NvBlastFamily* family = NvBlastActorGetFamily( actor, logFn );
```

Then the size of the family may be obtained using:

```cpp
size_t size = NvBlastFamilyGetSize( family, logFn );
```

Now this memory may be copied, saved to disk, etc. To clone the family, for example, we can duplicate the memory:

```cpp
std::vector<char> buffer( size );
NvBlastFamily* family2 = reinterpret_cast<NvBlastFamily*>( buffer.data() );
memcpy( family2, family, size );
```

**N.B.** If this data has been serialized from an external source, the family will not contain a valid reference to its associated asset. The user **must** set the family's asset. The family does, however, contain the asset's ID to help the user match the correct asset to the family. So one way of restoring the asset to the family follows:

```cpp
const NvBlastID assetID = NvBlastFamilyGetAssetID( family2, logFn );
// ... here the user must retrieve the asset using the ID or by some other means
NvBlastFamilySetAsset( family2, asset, logFn );
```

The data in family2 will contain the same actors as the original family. To access them, use:

```cpp
uint32_t actorCount = NvBlastFamilyGetActorCount( family2, logFn );
std::vector<NvBlastActor*> actors( actorCount );
uint32_t actorsWritten = NvBlastFamilyGetActors( actors.data(), actorCount, family2, logFn );
```

In the code above, actorsWritten should equal actorCount.
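Retrieving the asset from the ID is the user's responsibility; Blast only stores the ID. Below is a minimal sketch of one way to do this, assuming a user-maintained registry that maps asset IDs to loaded assets (the registry, its comparator, and its population at asset-load time are hypothetical, not part of the Blast API):

```cpp
#include <cstring>
#include <map>

// Hypothetical user-side registry, filled in when assets are created or loaded.
struct AssetIdLess
{
    bool operator()(const NvBlastID& a, const NvBlastID& b) const
    {
        return std::memcmp(a.data, b.data, sizeof(a.data)) < 0; // byte-wise ordering of the 16-byte ID
    }
};

std::map<NvBlastID, NvBlastAsset*, AssetIdLess> g_assetRegistry;

// After deserializing family2, look up its asset by ID and rebind it.
const NvBlastID assetID = NvBlastFamilyGetAssetID( family2, logFn );
auto it = g_assetRegistry.find( assetID );
if (it != g_assetRegistry.end())
{
    NvBlastFamilySetAsset( family2, it->second, logFn );
}
```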
## Single Actor Serialization

To perform single-actor serialization, first find the buffer size required to store the serialization data:

```cpp
size_t bufferSize = NvBlastActorGetSerializationSize( actor, logFn );
```

If you want an upper bound which is large enough for any actor in a family, you may use:

```cpp
size_t bufferSize = NvBlastAssetGetActorSerializationSizeUpperBound( asset, logFn );
```

Then create a buffer of that size and use NvBlastActorSerialize to write to the buffer:

```cpp
std::vector<char> buffer( bufferSize );
size_t bytesWritten = NvBlastActorSerialize( buffer.data(), bufferSize, actor, logFn );
```

To deserialize the buffer, an appropriate family must be created. It must not already hold a copy of the actor, and it must be formed using the correct asset (the one from which the actor was originally created):

```cpp
void* mem = malloc( NvBlastAssetGetFamilyMemorySize( asset, logFn ) );
NvBlastFamily* family = NvBlastAssetCreateFamily( mem, asset, logFn );
```

Then deserialize into the family:

```cpp
NvBlastActor* newActor = NvBlastFamilyDeserializeActor( family, buffer.data(), logFn );
```

If newActor is not NULL, then the actor was successfully deserialized.

## Deactivating an Actor

Actors may not be released in the usual sense of deallocation, because an actor's memory is stored as a block within the owning family. The memory is only released when the family is released. However, one may deactivate an actor using NvBlastActorDeactivate. This clears the actor's chunk lists and marks it as invalid, effectively disassociating it from the family. The user should consider this actor to be destroyed.

```cpp
bool success = NvBlastActorDeactivate( actor, logFn );
```

## Releasing a Family

As mentioned above, deactivating an actor does not actually do any deallocation; it simply invalidates the actor within its family. To actually deallocate memory, you must deallocate the family. Note that this will invalidate all actors in the family; it is a fast way to delete all actors that were created from repeated fracturing of a single instance. As with NvBlastAsset, memory is allocated by the user, so to release a family with memory allocated by **malloc**, simply free that memory with **free**:

```cpp
free( family );
```

## Damage and Fracturing

Damaging and fracturing is a staged process. In the first stage, an **NvBlastDamageProgram** creates lists of bonds and chunks to damage, so-called Fracture Commands. The lists are created from input specific to the NvBlastDamageProgram.

NvBlastDamagePrograms are composed of an **NvBlastGraphShaderFunction** and an **NvBlastSubgraphShaderFunction**, operating on support graphs (support chunks and bonds) and disconnected subsupport chunks, respectively. An implementer can freely define the shader functions and parameters. Different functions can have the effect of emulating different physical materials.

Blast provides reference implementations of such functions in **Damage Shaders (NvBlastExtShaders)**; see also NvBlastExtDamageShaders.h.

The NvBlastDamageProgram is used through **NvBlastActorGenerateFracture**, which provides the necessary internal data for the NvBlastActor being processed. The shader functions see the internal data as **NvBlastGraphShaderActor** and **NvBlastSubgraphShaderActor**, respectively.

The second stage is carried out with **NvBlastActorApplyFracture**. This function takes the previously generated Fracture Commands and applies them to the NvBlastActor.
The result of every applied command is reported as a corresponding Fracture Event, if requested. Fracture Commands and Fracture Events are both represented by an **NvBlastFractureBuffer**.

The splitting of the actor into child actors is not done until the third stage, **NvBlastActorSplit**, is called. Fractures may be repeatedly applied to an actor before splitting.

The **NvBlastActorGenerateFracture**, **NvBlastActorApplyFracture** and **NvBlastActorSplit** functions are profiled in Profile configurations. This is done through a pointer to an NvBlastTimers struct passed into the functions. If this pointer is not NULL, then timing values will be accumulated in the referenced struct.

The following example illustrates the process:

```cpp
// Step one: Generate Fracture Commands
//
// Damage programs (shader functions), material properties and damage descriptions relate to each other.
// Together they define how actors will break, by generating the desired set of Fracture Commands for bonds and chunks.
NvBlastDamageProgram damageProgram = { GraphShader, SubgraphShader };
NvBlastProgramParams programParams = { damageDescs, damageDescCount, materialProperties };

// Generating the set of Fracture Commands does not modify the NvBlastActor.
NvBlastActorGenerateFracture( fractureCommands, actor, damageProgram, &programParams, logFn, &timers );

// Step two: Apply Fracture Commands
//
// Applying Fracture Commands does modify the state of the NvBlastActor.
// The Fracture Events report the resulting state of each bond or chunk involved.
// Chunks fractured hard enough will also fracture their children, creating Fracture Events for each.
NvBlastActorApplyFracture( fractureEvents, actor, fractureCommands, logFn, &timers );

// Step three: Splitting
//
// The actor may be split into all its smallest pieces.
uint32_t maxNewActorCount = NvBlastActorGetMaxActorCountForSplit( actor, logFn );
std::vector<NvBlastActor*> newActors( maxNewActorCount ); // Make this memory available to the NvBlastActorSplitEvent.
NvBlastActorSplitEvent splitEvent;
splitEvent.newActors = newActors.data();

// Some temporary memory is necessary as well.
std::vector<char> scratch( NvBlastActorGetRequiredScratchForSplit( actor, logFn ) );

// New actors created are reported in splitEvent.newActors.
// If newActorCount != 0, then the old actor is deleted and is reported in splitEvent.deletedActor.
uint32_t newActorCount = NvBlastActorSplit( &splitEvent, actor, maxNewActorCount, scratch.data(), logFn, &timers );
```
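After the split, the user typically rebinds physics and graphics representations to the resulting actors. A minimal sketch of consuming the split results follows; `onActorCreated` and `onActorDestroyed` are hypothetical user callbacks, not Blast API functions:

```cpp
// Sketch: hand the split results back to user-side bookkeeping.
if (newActorCount > 0)
{
    onActorDestroyed( splitEvent.deletedActor );   // hypothetical user callback: release physics/graphics for the old actor
    for (uint32_t i = 0; i < newActorCount; ++i)
    {
        onActorCreated( splitEvent.newActors[i] ); // hypothetical user callback: create physics/graphics for each new actor
    }
}
```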
application-dependencies-management_creating_kit_apps.md
# Building an App

## Simple App

Here is an example of a very simple app with a dependency and a setting applied:

`repl.kit`:

```toml
[dependencies]
"omni.kit.console" = {}

[settings]
exts."omni.kit.console".autoRunREPL = true
```

Pass the `repl.kit` file to the `Kit` executable:

```
> kit.exe repl.kit
```

and it will enable a few extensions (including dependencies) to run a simple `REPL`.

## Application Dependencies Management

There are **conceptual differences** when specifying dependencies for an extension vs an app, although the syntax is the same:

- For extensions, dependencies are specified as broadly as possible. Versions describe compatibility with other extensions. Your extension can be used in many different apps with different extensions included.
- An app is the final leaf on a dependency chain, and is an end-product. All versions of dependencies must be locked in the final package, and in the version control system. That helps to guarantee reproducible builds for end users and developers.

If you pass an app to the `Kit` executable, it will first resolve all extension versions (either locally or using the registry system), and will then enable the latest compatible versions. Next time you run the app, someone may have published a newer version of some extension and you may get a different result. You also don't often have a clear view of the versions chosen, because one extension brings in other extensions that it depends on, and so on. That builds a tree of N-order dependencies.

To lock all dependencies, we want to write them back to the kit file. You can manually specify each version of each dependency with `exact=true` and lock all of them, but that would be very tedious to maintain. It would also make upgrading to newer versions very difficult.

To address this, `Kit` has a mode where it will write a dependency solution (of all the resolved versions) back to the tail of the kit file it was launched from. It will look something like this:

```toml
########################################################################################################################
# BEGIN GENERATED PART (Remove from 'BEGIN' to 'END' to regenerate)
########################################################################################################################

# Date: 09/15/21 15:50:53 (UTC)
# Kit SDK Version: 103.0+master.58543.0643d57a.teamcity

# Kit SDK dependencies:
#   carb.audio-0.1.0
#   carb.windowing.plugins-1.0.0
#   ...

# Version lock for all dependencies:
[settings.app.exts]
enabled = [
    "omni.kit.asset_converter-1.1.36",
    "omni.kit.tool.asset_importer-2.3.12",
    "omni.kit.widget.timeline_standalone-103.0.7",
    "omni.kit.window.timeline-103.0.7",
]

########################################################################################################################
# END GENERATED PART
########################################################################################################################
```

On top of that, we have a repo tool: `repo_precache_exts`. You specify a list of kit files in `repo.toml` to run **Kit** in that mode on:

```toml
[repo_precache_exts]
# Apps to run and precache
apps = [
    "${root}/_build/$platform/$config/apps/omni.app.my_app.kit"
]
```

Besides locking the versions of all extensions, it will also download/precache them. Everything is then packaged together, and the final app package is deployed into the **Launcher**. Usually, that tool runs as the final step of the build process.
To run it explicitly, call:

```
> repo.bat precache_exts -c release
```

By default, extensions are cached into the `_build/$platform/$config/extscache` folder. The version lock is written at the tail of the kit file, and the changed kit file can then be committed to version control.

## Updating Version Lock

**Short version**: run `build.bat` with the `-u` flag.

**Longer explanation**: You can remove the generated part of the kit file and run the build or precache tool. That will write it again. But if you ran it before, you have already downloaded extensions into `_build/$platform/$config/extscache`, and those local versions of extensions will be selected again, because local versions are preferred. This folder needs to be cleared first. To automate this process, the precache tool has a `-u` / `--update` flag:

```bash
$ repo precache_exts -h
usage: repo precache_exts [-h] [-c CONFIG] [-v] [-x] [-u]

Tool to precache kit apps. Downloads extensions without running.

optional arguments:
  -h, --help            show this help message and exit
  -c CONFIG, --config CONFIG
  -v, --info
  -x, --clean           Reset version lock and cache: remove generated part from kit files, clean cache path and exit.
  -u, --update          Update version lock: remove generated part from kit files, clean cache path, then run ext precaching.
```

The latest versions of `repo_build` and `repo_kit_tools` allow propagation of that flag to `build.bat`. Run `build.bat -h` to check if your project has the `-u` flag available. To use it, run `build.bat -u -r` to build a new release with updated versions.

## Version Specification Recommendations

The general advice is to write the required versions of app dependencies the same way as for extensions, in an open-ended form, like:

```toml
[dependencies]
"omni.kit.tool.asset_importer" = {}
```

or, say, locking only to a major version (Semantic Versioning is used):

```toml
[dependencies]
"omni.kit.tool.asset_importer" = { version = "2.0.0" }
```

Then the automatic version lock will select `2.0.0`, `2.0.1`, `2.1.0`, or `2.99.99` for you, but never `3.0.0`. You can also review the git diff at that point, to see which versions actually changed when the selection process ran.

### Windows or Linux only dependencies

Version locks, and all versions, are by default defined as cross-platform. While we build them only on your current platform, we assume that the same app will run on other platforms. If you need an extension that is for a single platform only, you can explicitly specify the version in this way:

```toml
[dependencies."filter:platform"."windows-x86_64"]
"omni.kit.window.section" = { version = "102.1.6", exact = true }
```

**Dependencies specified as exact are not written into an automatic version lock**. This way, you lock the version manually, and only for the selected platform.

### Caching extensions disabled by default

Often an app has extensions that are brought into the Launcher, but disabled by default. *Create* is an example of that. We still want to lock all the versions and download them, but we can't put them into the main app kit file, as that would enable them on startup. The solution is simple: use a separate kit file which includes the main one. For instance, the *Create* project has `omni.create.kit` and `omni.create.full.kit`.
The latter includes the former in its dependencies, but adds extra extensions. Both kit files are passed to the precache tool (specified in `repo.toml`).

### Deploying an App

Kit files fully describe how to run an app, and are also interpreted as extensions. This means that a kit file can be versioned and published into the registry like any other extension. Any *Kit* app can be found in the extension manager; click *Launch*, and the app will be run. It can also just be shared as a file, and anyone can pass it to `kit.exe`, or associate it with *Kit* and open it with a mouse double-click.

In practice, we deploy apps into the launcher with both *Kit* and all of the dependent extensions downloaded ahead of time. So an app in the launcher basically is:

- Kit file (1 or more)
- Precached extensions
- Kit SDK

The Kit SDK is already shared between apps using the thin-package feature of *Omniverse Launcher* (downloaded from packman). In the future, we can get to a point where you only need to publish a single Kit file to define an *Omniverse App* of any complexity. This goal guides and explains certain decisions described in the guide.

### Extensions without an App

Many repositories contain only extensions, without publishing an app. However, all their dependencies should still be downloaded at build time and version locked. You don't have to create a kit file for them; the precache tool will do it by default, using the setting:

```toml
generated_app_path = "${root}/source/apps/exts.deps.generated.kit"
```

In this location, a kit file with all extensions will be automatically generated. `exts.deps.generated.kit` is an app that contains all extensions from the repo as dependencies. It is used to:

- Lock all versions of their dependencies (reproducible builds).
- Precache (download) all dependencies before building.

This file is regenerated if:

- Any extension is added or removed from the repo.
- Any extension version is updated.
- This file is removed.

To update the version lock, the same `repo build -u` flag can be used. As with other kit files with a version lock, it should be version controlled to produce reproducible builds.

To disable this feature, set the generated app path to empty: `generated_app_path = ""`.

### Other Precache Tool Settings

As with any repo tool, to find more settings available for the precache tool, look into its `repo_tools.toml` file.
Since it comes with Kit, this file is a part of the `kit-sdk` package and can be found at: `_build/$platform/$config/kit/dev/repo_tools.toml`.
application-id_Overview.md
# GDN Asset Publisher [omni.gdn_asset_publisher]

The GDN asset publisher collects the chosen asset, zips it, and then uploads it to GDN.

## Asset Path:

The path to the stage. By default, when the extension is loaded it uses the currently opened stage, but you can click the browse button to browse to any other stage. You can also set the input field by clicking the "Set From Current Stage" button.

## Temp Dir:

Choose a folder location to collect the asset into (example: C:/Temp/MyAsset). Note: the zip file will be created in the parent directory of this folder.

## Application Id:

The application id of your application on GDN. It can be found in the Package Management -> Packages section.

## Token:

GDN access token.

When you click Publish, the asset will be collected, renamed (if necessary) to the standard the application on GDN expects, zipped, and uploaded. Once uploaded to GDN, the asset will be automatically staged within a few minutes.
apps.md
# Example Apps

## Simple App Window: `my_name.my_app.kit`

The simple application showcases how to create a very basic Kit-based application showing a simple window.

It leverages the kit compatibility mode for the UI, which enables it to run on virtually any computer with a GPU.

It uses the window extension example, which shows how you can create an extension that adds a window to your application.

## Editor App: `my_name.my_app.editor.kit`

The simple Editor application showcases how you can start leveraging more of the Omniverse Shared Extensions around USD editing to create an application that has the basic features of an app like Create.

It will require an RTX compatible GPU, Turing or above.

You can see how it uses kit.QuickLayout to set up the example Window at the bottom, next to the Content Browser. Of course, you can choose your own layout and add more functionality and Windows.

## Simple Viewport App: `my_name.my_app.viewport.kit`

This simple viewport application showcases how to create an application that leverages the RTX Renderer and the Kit Viewport to visualize USD files.

It will require an RTX compatible GPU, Turing or above.

The functionality is very limited, mostly to viewing the data, and is just a starting point for something you might want to build from.

You can see how it uses kit.QuickLayout to set up the example Window on the right of the Viewport, but you can set up any layout that you need.
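For orientation, a kit file for an app like the simple window example might look roughly like the sketch below. The extension and setting names are hypothetical placeholders rather than the actual contents of `my_name.my_app.kit`; a real app would list its actual window extension and whatever layout or settings it needs:

```toml
# my_name.my_app.kit - hypothetical sketch of a minimal windowed app
[package]
title = "My App"
version = "0.1.0"

[dependencies]
"my_name.my_app.window" = {}   # hypothetical extension that creates the app window
"omni.kit.uiapp" = {}          # assumed common base for UI apps

[settings]
app.window.title = "My App"
```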
Ar.md
# Ar module

Summary: The Ar (Asset Resolution) library is responsible for querying, reading, and writing asset data.

## Classes:

- **DefaultResolver** - Default asset resolution implementation used when no plugin implementation is provided.
- **DefaultResolverContext** - Resolver context object that specifies a search path to use during asset resolution.
- **Notice** - (No description provided)
- **ResolvedPath** - Represents a resolved asset path.
- **Resolver** - Interface for the asset resolution system.
- **ResolverContext** - An asset resolver context allows clients to provide additional data to the resolver for use during resolution.
- **ResolverContextBinder** - Helper object for managing the binding and unbinding of ArResolverContext objects with the asset resolver.
- **ResolverScopedCache** - Helper object for managing asset resolver cache scopes.
- **Timestamp** - Represents a timestamp for an asset.

## pxr.Ar.DefaultResolver

### Description

Default asset resolution implementation used when no plugin implementation is provided.

In order to resolve assets specified by relative paths, this resolver implements a simple "search path" scheme. The resolver will anchor the relative path to a series of directories and return the first absolute path where the asset exists.

The first directory will always be the current working directory. The resolver will then examine the directories specified via the following mechanisms (in order):

- The currently-bound ArDefaultResolverContext for the calling thread
- ArDefaultResolver::SetDefaultSearchPath
- The environment variable PXR_AR_DEFAULT_SEARCH_PATH. This is expected to be a list of directories delimited by the platform's standard path separator.

ArDefaultResolver supports creating an ArDefaultResolverContext via ArResolver::CreateContextFromString by passing a list of directories delimited by the platform's standard path separator.

### Methods

- **classmethod** SetDefaultSearchPath(searchPath) -> None
  - Set the default search path that will be used during asset resolution.
  - This must be called before the first call to ArGetResolver. The specified paths will be searched in addition to, and before, paths specified via the environment variable PXR_AR_DEFAULT_SEARCH_PATH.
  - Parameters:
    - **searchPath** (list [str]) –

## pxr.Ar.DefaultResolverContext

### Description

Resolver context object that specifies a search path to use during asset resolution.

This object is intended for use with the default ArDefaultResolver asset resolution implementation; see documentation for that class for more details on the search path resolution algorithm.

Example usage:

```
ArDefaultResolverContext ctx({"/Local/Models", "/Installed/Models"});
{
    // Bind the context object:
    ArResolverContextBinder binder(ctx);

    // While the context is bound, all calls to ArResolver::Resolve
    // (assuming ArDefaultResolver is the underlying implementation being
    // used) will include the specified paths during resolution.
    std::string resolvedPath = resolver.Resolve("ModelName/File.txt");
}

// Once the context is no longer bound (due to the ArResolverContextBinder
// going out of scope), its search path no longer factors into asset
// resolution.
```

### Methods

- **GetSearchPath** () -> list [str]
  - Return this context's search path.
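The same pattern is available from Python. Below is a minimal sketch (the search directories and asset path are placeholders); it assumes a USD build where `Ar.ResolverContextBinder` is constructible from Python, with the binding lasting for the binder object's lifetime:

```python
from pxr import Ar

# Build a context with two hypothetical search directories.
ctx = Ar.DefaultResolverContext(["/Local/Models", "/Installed/Models"])

resolver = Ar.GetResolver()
binder = Ar.ResolverContextBinder(ctx)  # bind: Resolve() now consults the search path
resolved = resolver.Resolve("ModelName/File.txt")
if resolved:
    print(resolved.GetPathString())
del binder                              # unbind when the binder goes away
```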
### Notice

#### Classes:

| Class Name | Description |
|------------|-------------|
| `ResolverChanged` | |
| `ResolverNotice` | |

#### Class: ResolverChanged

**Methods:**

| Method Name | Description |
|-------------|-------------|
| `AffectsContext` | |

#### Class: ResolverNotice

#### Class: ResolvedPath

Represents a resolved asset path.

**Methods:**

| Method Name | Description |
|-------------|-------------|
| `GetPathString` | Return the resolved path held by this object as a string. |

#### Class: Resolver

Interface for the asset resolution system.

An asset resolver is responsible for resolving asset information (including the asset's physical path) from a logical path. See ar_implementing_resolver for information on how to customize asset resolution behavior by implementing a subclass of ArResolver. Clients may use ArGetResolver to access the configured asset resolver.

**Methods:**

| Method Name | Description |
|-------------|-------------|
| `CanWriteAssetToPath(resolvedPath, whyNot)` | Returns true if an asset may be written to the given `resolvedPath`, false otherwise. |
| `CreateContextFromString(contextStr)` | Return an ArResolverContext created from the primary ArResolver implementation using the given `contextStr`. |
| `CreateContextFromStrings(contextStrs)` | Return an ArResolverContext created by combining the ArResolverContext objects created from the given `contextStrs`. |
| `CreateDefaultContext()` | Return an ArResolverContext that may be bound to this resolver to resolve assets when no other context is explicitly specified. |
| `CreateDefaultContextForAsset(assetPath)` | Return an ArResolverContext that may be bound to this resolver to resolve the asset located at `assetPath` or referenced by that asset when no other context is explicitly specified. |
| `CreateIdentifier(assetPath, anchorAssetPath)` | Returns an identifier for the asset specified by `assetPath`. |
| `CreateIdentifierForNewAsset(assetPath, ...)` | Returns an identifier for a new asset specified by `assetPath`. |
| `GetAssetInfo(assetPath, resolvedPath)` | Returns an ArAssetInfo populated with additional metadata (if any) about the asset at the given `assetPath`. |
| `GetCurrentContext()` | Returns the asset resolver context currently bound in this thread. |
| `GetExtension(assetPath)` | Returns the file extension for the given `assetPath`. |
| `GetModificationTimestamp(assetPath, resolvedPath)` | Returns an ArTimestamp representing the last time the asset at `assetPath` was modified. |
| `IsContextDependentPath(assetPath)` | Returns true if `assetPath` is a context-dependent path, false otherwise. |
| `RefreshContext(context)` | Refresh any caches associated with the given context. |
| `Resolve(assetPath)` | Returns the resolved path for the asset identified by the given `assetPath` if it exists. |
| `ResolveForNewAsset(assetPath)` | Returns the resolved path for the given `assetPath` that may be used to create a new asset. |

### CanWriteAssetToPath

```python
CanWriteAssetToPath(resolvedPath, whyNot) -> bool
```

Returns true if an asset may be written to the given `resolvedPath`, false otherwise.

If this function returns false and `whyNot` is not `nullptr`, it may be filled with an explanation.

**Parameters**

- **resolvedPath** (`ResolvedPath`) –
- **whyNot** (`str`) –

### CreateContextFromString

```python
CreateContextFromString(contextStr) -> ResolverContext
```

Return an ArResolverContext created from the primary ArResolver implementation using the given `contextStr`.

**Parameters**

- **contextStr** (`str`) –

```python
CreateContextFromString(uriScheme, contextStr) -> ResolverContext
```

Return an ArResolverContext created from the ArResolver registered for the given `uriScheme` using the given `contextStr`.

An empty `uriScheme` indicates the primary resolver and is equivalent to CreateContextFromString(string).

If no resolver is registered for `uriScheme`, returns an empty ArResolverContext.

**Parameters**

- **uriScheme** (`str`) –
- **contextStr** (`str`) –

### CreateContextFromStrings

```python
CreateContextFromStrings(contextStrs) -> ResolverContext
```

Return an ArResolverContext created by combining the ArResolverContext objects created from the given `contextStrs`.

`contextStrs` is a list of pairs of strings. The first element in the pair is the URI scheme for the ArResolver that will be used to create the ArResolverContext from the second element in the pair. An empty URI scheme indicates the primary resolver.

For example:

```text
ArResolverContext ctx = ArGetResolver().CreateContextFromStrings(
    { {"", "context str 1"}, {"my_scheme", "context str 2"} });
```

This will use the primary resolver to create an ArResolverContext using the string "context str 1" and use the resolver registered for the "my_scheme" URI scheme to create an ArResolverContext using "context str 2". These contexts will be combined into a single ArResolverContext and returned.

If no resolver is registered for a URI scheme in an entry in `contextStrs`, that entry will be ignored.

**Parameters**

- **contextStrs** (`list[tuple[str, str]]`) –

### CreateDefaultContext

```python
CreateDefaultContext() -> ResolverContext
```

Return an ArResolverContext that may be bound to this resolver to resolve assets when no other context is explicitly specified.

The returned ArResolverContext will contain the default context returned by the primary resolver and all URI resolvers.

### CreateDefaultContextForAsset

```python
CreateDefaultContextForAsset(assetPath) -> ResolverContext
```

Return an ArResolverContext that may be bound to this resolver to resolve the asset located at `assetPath` or referenced by that asset when no other context is explicitly specified.

The returned ArResolverContext will contain the default context for `assetPath` returned by the primary resolver and all URI resolvers.

**Parameters**

- **assetPath** (`str`) –

### CreateIdentifier

```python
CreateIdentifier(assetPath, anchorAssetPath) -> str
```

Returns an identifier for the asset specified by `assetPath`.

If `anchorAssetPath` is not empty, it is the resolved asset path that `assetPath` should be anchored to if it is a relative path.

**Parameters**

- **assetPath** (`str`) –
- **anchorAssetPath** (`ResolvedPath`) –

### CreateIdentifierForNewAsset

```python
CreateIdentifierForNewAsset(assetPath, anchorAssetPath) -> str
```

Returns an identifier for a new asset specified by `assetPath`.
If `anchorAssetPath` is not empty, it is the resolved asset path that `assetPath` should be anchored to if it is a relative path.

**Parameters**

- **assetPath** (`str`) –
- **anchorAssetPath** (`ResolvedPath`) –

### GetAssetInfo

```python
GetAssetInfo(assetPath, resolvedPath) -> ArAssetInfo
```

Returns an ArAssetInfo populated with additional metadata (if any) about the asset at the given `assetPath`.

`resolvedPath` is the resolved path computed for the given `assetPath`.

**Parameters**

- **assetPath** (`str`) –
- **resolvedPath** (`ResolvedPath`) –

### GetCurrentContext

```python
GetCurrentContext() -> ResolverContext
```

Returns the asset resolver context currently bound in this thread.

See ArResolver::BindContext, ArResolver::UnbindContext.

### GetExtension

```python
GetExtension(assetPath) -> str
```

Returns the file extension for the given `assetPath`.

The returned extension does not include a "." at the beginning.

**Parameters**

- **assetPath** (`str`) –

### GetModificationTimestamp

```python
GetModificationTimestamp(assetPath, resolvedPath) -> Timestamp
```

Returns an ArTimestamp representing the last time the asset at `assetPath` was modified.

`resolvedPath` is the resolved path computed for the given `assetPath`. If a timestamp cannot be retrieved, returns an invalid ArTimestamp.

**Parameters**

- **assetPath** (`str`) –
- **resolvedPath** (`ResolvedPath`) –

### IsContextDependentPath

```python
IsContextDependentPath(assetPath) -> bool
```

Returns true if `assetPath` is a context-dependent path, false otherwise.

A context-dependent path may result in different resolved paths depending on what asset resolver context is bound when Resolve is called. Assets located at the same context-dependent path may not be the same since those assets may have been loaded from different resolved paths. In this case, the assets' resolved paths must be consulted to determine if they are the same.

**Parameters**

- **assetPath** (`str`) –

### RefreshContext

```python
RefreshContext(context) -> None
```

Refresh any caches associated with the given context.

If doing so would invalidate asset paths that had previously been resolved, an ArNotice::ResolverChanged notice will be sent to inform clients of this.

**Parameters**

- **context** (`ResolverContext`) –

### Resolve

```python
Resolve(assetPath) -> ResolvedPath
```

Returns the resolved path for the asset identified by the given `assetPath` if it exists.

If the asset does not exist, returns an empty ArResolvedPath.

**Parameters**

- **assetPath** (`str`) –

### ResolveForNewAsset

```python
ResolveForNewAsset(assetPath) -> ResolvedPath
```

Returns the resolved path for the given `assetPath` that may be used to create a new asset.

If such a path cannot be computed for `assetPath`, returns an empty ArResolvedPath. Note that an asset might or might not already exist at the returned resolved path.

**Parameters**

- **assetPath** (`str`) –

### ResolverContext

An asset resolver context allows clients to provide additional data to the resolver for use during resolution.

Clients may provide this data via context objects of their own (subject to restrictions below).
An ArResolverContext is simply a wrapper around these objects that allows it to be treated as a single type. Note that an ArResolverContext may not hold multiple context objects with the same type.

A client-defined context object must provide the following:

- Default and copy constructors
- operator<
- operator==
- An overload for size_t hash_value(const T&)

Note that the user may define a free function:

std::string ArGetDebugString(const Context& ctx);

(Where Context is the type of the user's path resolver context.) This is optional; a default generic implementation has been predefined. This function should return a string representation of the context to be utilized for debugging purposes (such as in TF_DEBUG statements).

The ArIsContextObject template must also be specialized for this object to declare that it can be used as a context object. This is to avoid accidental use of an unexpected object as a context object. The AR_DECLARE_RESOLVER_CONTEXT macro can be used to do this as a convenience.

See AR_DECLARE_RESOLVER_CONTEXT, ArResolver::BindContext, ArResolver::UnbindContext, ArResolverContextBinder.

#### Methods:

- **Get**() - Returns a pointer to the context object of the given type held in this resolver context.
- **GetDebugString**() - Returns a debug string representing the contained context objects.
- **IsEmpty**() - Returns whether this resolver context is empty.

### pxr.Ar.ResolverContextBinder

Helper object for managing the binding and unbinding of ArResolverContext objects with the asset resolver.

See: Asset Resolver Context Operations.

### pxr.Ar.ResolverScopedCache

Helper object for managing asset resolver cache scopes.

A scoped resolution cache indicates to the resolver that results of calls to Resolve should be cached for a certain scope. This is important for performance and also for consistency: it ensures that repeated calls to Resolve with the same parameters will return the same result.

See: Scoped Resolution Cache.

### pxr.Ar.Timestamp

Represents a timestamp for an asset.

Timestamps are represented by Unix time, the number of seconds elapsed since 00:00:00 UTC 1/1/1970.

**Methods:**

| Method | Description |
| --- | --- |
| `GetTime()` | Return the time represented by this timestamp as a double. |
| `IsValid()` | Return true if this timestamp is valid, false otherwise. |

#### GetTime

Return the time represented by this timestamp as a double.

If this timestamp is invalid, issue a coding error and return a quiet NaN value.

#### IsValid

Return true if this timestamp is valid, false otherwise.
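As a quick illustration of pairing Resolve with GetModificationTimestamp from Python, here is a minimal sketch (the asset path is a placeholder):

```python
from pxr import Ar

resolver = Ar.GetResolver()
resolved = resolver.Resolve("ModelName/File.txt")  # placeholder asset path
if resolved:
    # The resolved path is passed back in when querying the timestamp.
    ts = resolver.GetModificationTimestamp("ModelName/File.txt", resolved)
    if ts.IsValid():
        print("last modified (Unix time):", ts.GetTime())
```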
architecture-pillars_index.md
# Execution Framework

The Omniverse ecosystem enjoys a bevy of software components (e.g. PhysX, RTX, USD, OmniGraph, etc). These software components can be assembled together to form domain specific applications and services. One of the powerful concepts of the Omniverse ecosystem is that the assembly of these components is not limited to compile time. Rather, users are able to assemble these components on-the-fly to create tailor-made tools, services, and experiences.

With this great power comes challenges. In particular, many of these software components are siloed and monolithic. Left on their own, they can starve other components of hardware resources and introduce non-deterministic behavior into the system. Often, the only way to integrate these components was a "don't call me, I'll call you" model. For such a dynamic environment to be viable, an intermediary must be present to guide these different components in a composable way. The **Execution Framework** is this intermediary.

The Omniverse Execution Framework's job is to orchestrate, at runtime, computation across different software components and logical application stages by decoupling the description of the compute from execution.

## Architecture Pillars

The Execution Framework (i.e. EF) has three main architecture pillars.

The first pillar is decoupling the authoring format from the computation back end. Multiple authoring front ends are able to populate EF's intermediate representation (IR). EF calls this intermediate representation the execution graph. Once populated by the front end, the execution graph is transformed and refined, taking into account the available hardware resources. By decoupling the authoring front end from the computation back end, developers are able to assemble software components without worrying about multiple hardware configurations. Furthermore, the decoupling allows EF to optimize the computation for the current execution environment (e.g. HyperScale).

The second pillar is extensibility. Extensibility allows developers to augment and extend EF's capabilities without changes to the core library. Graph transformations, traversals, execution behavior, computation logic, and scheduling are examples of EF features that can be extended by developers.

The third pillar is composability. This allows for the seamless integration of various software components and services, ensuring they work together harmoniously and efficiently.

# Composable Architecture

The third pillar of EF is **composability**. Composability is the principle of constructing novel building blocks out of existing smaller building blocks. Once constructed, these novel building blocks can be used to build yet other larger building blocks. In EF, these building blocks are nodes (i.e. `Node`). Each node stores two important pieces of information. The first is connectivity information to other nodes (i.e. topology edges). The second is the **computation definition**. Computation definitions in EF are defined by the `NodeDef` and `NodeGraphDef` classes. `NodeDef` defines opaque computation, while `NodeGraphDef` contains an entirely new graph. It is via `NodeGraphDef` that EF derives its composability power.
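To make the Node/NodeDef/NodeGraphDef split concrete, here is a deliberately simplified toy model in C++. This illustrates only the composition idea (a graph can itself serve as a node's definition); it is not the actual EF API or class layout:

```cpp
#include <functional>
#include <memory>
#include <vector>

// Toy model (NOT the EF API): a definition is either an opaque computation
// or an entire nested graph, which is what makes graphs composable.
struct Def
{
    virtual ~Def() = default;
    virtual void execute() = 0;
};

struct Node; // forward declaration

struct OpaqueDef : Def // plays the role of NodeDef: opaque compute
{
    std::function<void()> fn;
    void execute() override { fn(); }
};

struct GraphDef : Def // plays the role of NodeGraphDef: a graph as a definition
{
    std::vector<std::shared_ptr<Node>> nodes; // assume already topologically ordered
    void execute() override;
};

struct Node
{
    std::shared_ptr<Def> def;      // computation definition (opaque or nested graph)
    std::vector<Node*> downstream; // topology edges (unused by this toy executor)
};

void GraphDef::execute()
{
    for (auto& n : nodes) // a real executor would traverse edges, possibly in parallel
        n->def->execute();
}
```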
The big picture of what EF is trying to do is simple: take all of the software components that wish to run, generate nodes/graphs for the computation each component wants to perform, add edges between the different software components' nodes/graphs to define execution order, and then optimize the graph for the current execution environment. Once the **execution graph** is constructed, an **executor** traverses the graph (in parallel when possible), making sure each software component gets its chance to compute.

## Practical Examples

Let's take a look at how Omniverse USD Composer, built with Omniverse Kit, handles the update of the USD stage. Kit maintains a list of extensions (i.e. software components) that either the developer or user has requested to be loaded. These extensions register callbacks into Kit to be executed at fixed points in Kit's update loop. Using an empty scene and USD Composer's default extensions, the populated execution graph looks like this:

*Figure: USD Composer's execution graph used to update the USD stage (nodes include PhysX, OmniGraph, DebugDraw, and other extension callbacks).*

Notice in the figure above that each node in the graph is represented as an opaque node, except for the OmniGraph (OG) front end. The OmniGraph node further refines the compute definition by expressing its update pipeline with *pre-simulation*, *simulation*, and *post-simulation* nodes. This would not be possible without EF's **composable architecture**.

Below, we illustrate an example of a graph authored in OG that runs during the simulation stage of the OG pipeline. This example runs as part of Omniverse Kit, with a limited number of extensions loaded to increase the readability of the graph and to illustrate the dynamic aspect of the execution graph population.

*Figure: An example of the OmniGraph definition; **composability** creates a foundation for executing compound graphs.*

The final example in this overview focuses on execution pipelines in Omniverse Kit. Leveraging all of the architecture pillars, we can start customizing per-application (and/or per-scene) execution pipelines. There is no longer a need to base execution ordering only on a fixed number or keep runtime components siloed. In the figure below, as a proof of concept, we define at runtime a new custom execution pipeline. This new pipeline runs before the "legacy" one ordered by a magic number and introduces fixed and variable update times. Extending the ability of OG to choose the pipeline stage in which it runs, we are able to place it anywhere in this new custom pipeline. Any other runtime component can do the same thing and leverage the EF architecture to orchestrate executions in their application.
*Figure: A proof-of-concept custom execution pipeline, mixing fixed- and variable-rate stages with OmniGraph and PhysX nodes placed at arbitrary points in the pipeline.*

## Next Steps

Above we provided a brief overview of EF's philosophy and capabilities. Readers are encouraged to continue learning about EF by first reviewing Graph Concepts.
Architecture.md
# Architecture

## Architectural Principles

A few important architectural principles guide the design of the software and have been captured here. It's important that you know these principles, since they will help you understand our design decisions and guide you when faced with design trade-offs.

### Customer first

*The customer comes first, we come second*

Always think about the customer experience, i.e. the developer who will be utilizing this framework to build applications and/or plugins. The customer comes first, we come second. For example, we seek to minimize the work that developers have to do to transform their existing SDKs into Carbonite plugins. We also understand that our customers' time is precious, which is why we optimize the build process so they can rapidly build-inspect-learn.

Another manifestation of this principle is that we work directly with customers to design the right solution for their needs. Our design is therefore iterative; we don't believe that we can anticipate all of our customers' needs - we must find them and work with them.

Finally, we see ourselves in the role of janitors for the Carbonite Framework. Even though we work hard to make this Framework shine, the truly magical technology will be made by the people that use Carbonite to build plugins and applications. Our role is to facilitate and get out of the way of innovation.

### Simplicity

*Achieve more with less*

We strive for solutions that are elegant in their simplicity. These solutions are harder to design, but they result in a system that is easier for our customers to understand and control. As an added bonus for us, these systems are also easier to maintain. Think about the cognitive load you put on customers with your design; even if the C++ standard allows you to do something, that doesn't necessarily mean it's a good idea.

Don't add code unless it's needed. Unused code is like untested code: it's a liability. Even if you think something *could* be useful in the future, please defer adding it until it's actually needed. This also means that if you make code redundant you must remove it. We can always recover it later using source control, if needed.

### Zero impact to your environment

*Don't make a mess and expect others to deal with it*

We should minimize our dependency on local machine configuration; the goal is zero. This principle guides us to configure our code and build processes such that our repository builds directly from source control, without any manual configuration. There is an added bonus here: all of our configuration is accurately captured in the repository, and therefore versioned and branched with the associated source code.

This principle also guides us to prevent leaks of our configuration into the environment; an example of this is our requirement that the runtime be statically linked into the modules and that the choice of compiler is abstracted via the Carbonite [ABI](ABI.html). If we didn't do this, our users would have to install the corresponding runtime to run Carbonite-based applications, and developers would have to use the same compiler that was used to build the modules of Carbonite. Both of these would be examples of us spilling our internal choices into the environment.

It is admittedly extra work to contain your work this way, but the benefit is reaped by all the users and developers, and that makes it worth it. This is also in line with our architecture principle of customers coming first and us coming second.
### Don’t reinvent the wheel *Use well-tested internal and external code when possible* There are some things that Carbonite is good at, and other things best left to people and projects that have hardened, well-tested implementations that we can make use of, both inside and outside of NVIDIA. For external source, there is a process in place for requesting legal approval for licensing. Licensing from heavily-tested and hardened projects will allow us to more quickly develop solutions to our customers’ needs. In some cases, we recognize that there is a need to adapt certain code to Carbonite’s style and methodology (such as with Boost constructs). Boost is not desired due to increase in complexity and compile times. We should also make liberal use of the available constructs in the C++ STL, including containers. However, there are some places where the standard set of containers falls short, or where performance testing shows a bottleneck that can only be solved by an algorithmic shuffling. There is a significant cost to writing a new container, both in time to develop and thoroughly test, but also in maintenance and understanding by future developers. For containers specifically, a set of criteria exists that must pass muster with the Carbonite team before a container should be written: 1. What is the immediate business need? 2. What shortcomings of the existing container set creates the need for a new container? 3. Does the new container cause a 10%+ increase in performance over the required operation? 4. Can the new container achieve 100% code coverage in tests? ## Truly modular Make ravioli - not spaghetti In a lot of software documentation you will find references to “modular architecture” but in reality few of them have true modularity. Many systems that advertise modularity only have virtual modularity, i.e. the source code has been grouped into clusters of related functionality but these clusters must all be built together using the same build system/compiler and there isn’t a versioned ABI between the clusters. This type of modularity is understandable because true modularity comes at a cost. For instance, as part of the ABI you cannot leak memory management outside of your plugin. E.g. if you use the STL internally you cannot leak this decision out of your plugin because in doing so you force others to use the same STL and runtime. We recommend that the runtime be statically linked so that others don’t have to deal with your dependencies - they could be using a different version (see “Zero impact to your environment” principle). Another cost of the ABI is that the interfaces need to be maintained separately and versioned. When changes are made that affect the interface the version number needs to be adjusted accordingly (see later sections for details). In our case this is cost that we can justify because our goal is to make a framework that helps (rather than hinders) us in sharing different technologies developed and maintained by different groups within NVIDIA. The versioned Application Binary Interface (ABI) is our binding contract. It enables the sharing of pre-built and pre-verified Carbonite plugins between teams, in stark contrast to the fork-and-really-hard-to-merge strategy that we’ve had to employ with the monolithic source code repos of the past. Consumers of Carbonite plugins will not have to rig up build environments that match what the producing teams used. This also frees the producing teams to use whatever technology they want to build their plugins. 
For example, if they want to use the latest bleeding edge version of CUDA they can do so - without burdening anybody else with that choice or with having to build their binaries from their CUDA source files.

### Quality assurance is our responsibility

*When we break things, we should be the first to know*

Quality assurance of the Carbonite framework and foundational plugins is our responsibility. We write unit tests and system tests and run them in an automated fashion. We implement test doubles for plugins and functionality that is too costly or should not be run as part of testing. We respond to test failures, and bad tests are either improved or eliminated. The plugin architecture helps significantly here because the interface and the function of a plugin that implements it can be validated using black box testing.

### Rapid value delivery

*Always be delivering*

The modular architecture allows subsystems to be developed and iterated on in isolation. This makes the time from "code change to testable build" short. Built-in support that allows plugins to be reloaded at runtime shortens the change-to-test time, since changes can be tested interactively without relaunching and re-configuring for the test.

### Data-driven

*Allow the data to drive, rather than our assumptions*

Measure before you decide to optimize your code; don't assume you know where the bottleneck is. This principle requires us to provide great support for tools that enable measurement, like profiling. But this principle is farther reaching than just profiling and optimization. Our architecture should be highly configurable via data. We should avoid locking behavior and assumptions into the code when we know that users will require flexibility. This also facilitates A/B testing, which is highly beneficial not only for comparing a new solution to an old one, but also for providing a fallback strategy if we identify catastrophic problems with the new approach. In this type of scenario it's often called a "feature flag", i.e. there is a way via data to enable a feature in the code. Embrace this method of working; integrate new features or new versions as alternate code paths while they are being proven out. Additionally, when designing systems we should design them from the data up: think about the most efficient layout for the data and how we can fit an efficient but still user-friendly interface on top of that.

## Architectural Overview

Carbonite is a lightweight framework for building applications. It is the opposite of a monolithic software system since capabilities and features are encapsulated in plugins with versioned interfaces. This allows the assembly of an application from binary plugins; developers choose what to use. They can leave behind the plugins they don't need, improve the implementations of those that don't meet their needs, and build the plugins that are missing for their use case.

A Carbonite application is composed of

- application code,
- custom plugins, and
- Carbonite plugins.

These are all written on top of the

- Carbonite Framework & Utils,
- C++11 runtime, operating system, and drivers.

## Architecture diagram

*(Figure: Carbonite architecture)*

## Namespaces

Anyone looking through the public Carbonite headers will notice that there are two top-level namespaces used - **carb** and **omni**. Both should typically be seen as project-level organizational namespaces for Carbonite, but the split has historical significance.
Originally the Carbonite library started out with everything being part of the **carb** namespace. This provided top-level organization and symbol scoping early on during development. Later in the development of the SDK, the Omniverse Native Interfaces (ONI) system was added. With this came the introduction of the **omni** namespace. Generally, new development is encouraged to utilize ONI and exist in the **omni** namespace. For backwards compatibility and to limit downstream maintenance effects, existing items in the **carb** namespace are retained and used.

### Plugins, Framework, and Utils

It is important to emphasize that how developers split an application into application code vs plugin code is up to them. We encourage developers to use existing plugins if they meet their needs. Carbonite is, however, designed in such a way that most pieces can be omitted. Of the three building blocks:

- Framework
- Utils
- Plugins

only the Framework is mandatory, and it is quite small. It provides essential services, like the FileSystem, an extendable LogSystem, and plugin management. All other services in Carbonite are provided by plugins and utility code. It should be emphasized here that Carbonite doesn't attempt to cover all services that you may need. Instead we expand the catalog as we build useful plugins with customers or harvest plugins built entirely by customers. The plugins the Carbonite team maintains are developed in the Carbonite repo. Plugins developed and maintained by other teams are often called custom plugins. Those are housed outside the Carbonite repo, typically in the application repo, or in a separate repo if the plugin is being shared across multiple applications.

The general rule we've followed in designing Carbonite is the following:

> If a system is optional or we expect multiple implementations of it we make it a plugin.

Of course we only do this for systems. For smaller utility code we use the header-only Carbonite Utils library. In there you will find unicode conversion functions and path manipulation utilities, to name a few.

Plugins must follow strict rules in how they are built so that they can be shared as binary artifacts:

- Carbonite is 64-bit only. This means you can gleefully ignore making 32-bit versions of your plugins. Carbonite supports Linux-x86_64, Linux-aarch64 (Tegra) and Windows-x86_64 (MacOS Universal binary support is experimental). A plugin needs to include an implementation for these platforms (Windows and Linux) to be considered for adoption by the Carbonite team.
- A semantically versioned C-style ABI must be exposed by the plugin and used by clients.

It should be noted here that these are the rules that all Carbonite plugins must live by, so that they can be shared as binary artifacts. In custom plugins you have more flexibility and can decide to sacrifice these benefits, but such plugins cannot be accepted into the Carbonite repo; doing so would violate our architectural principles of *Truly modular* and *Zero impact to your environment*.

If your plugin is foundational and follows the rules above it can be submitted in source code form to the Carbonite repo, along with premake build scripts and a packman dependency specification. The plugin interface would be stored under `include/carb/<plugin-interface-name>` and contained in namespace `carb::<plugin-interface-name>`. All other files, including the implementation, would be stored under `source/plugins/<plugin-interface-name>`.
If there are multiple implementations of the interface, the different implementations are separated by postfixing `-<implementation-name>` to the plugin. For example: `source/plugins/carb.graphics-vulkan` and `source/plugins/carb.graphics-direct3d`. If you are writing a custom plugin you would do this similarly, but choose a different top-level namespace from `carb` and therefore another folder under include to store the interface.

### Thread safety

Our approach to thread safety in Carbonite is as follows:

1. If nothing else is specified in the documentation of a system then locking is left to the application.
2. Where this is not feasible we create an API that is lock-friendly (begin/end/commit) and document where locking must be performed (commit).
3. The last resort is to lock internally. This is only done where it's not feasible to push this control up the stack (e.g. a logger writing a message to a file or console).

The Framework itself is thread-safe in that multiple threads can acquire and release interfaces simultaneously. In an effort to allow maximum forward progress, the Framework lock is not held while a plugin is initialized.

### ABI and versioning

As already mentioned, Carbonite employs a versioned C-style ABI (Application Binary Interface) to facilitate easy sharing of plugins. This is a stronger contract than an API (Application Programming Interface) because an API doesn't guarantee binary compatibility. A module with a versioned API will commonly require consumers to rebuild their source code against the new version of the API. Put differently, just replacing the built binary with a new version will in most cases be disastrous. It is in this context that the terms **source compatible** and **binary compatible** emerge. For this discussion we will use the term **external source code** for all code that is outside the module. This code can be in other modules or application side.

A change to a module is only source compatible when

1. changes to external source code are not required
2. a rebuild of external source code is required

An example of this is adding data members to a transparent data type by expanding the data type. This type of change will not require any code changes externally, but a rebuild of all the external code is required because the size of the data structure has changed.

A change is both source and binary compatible when

1. changes to external source code are not required
2. a rebuild of external source code is not required

It's important to note that most engines and middleware are designed so that new versions can only be source compatible, which forces sharing to happen at the source code level rather than the binary artifact level. As we covered earlier, Carbonite needs to be **truly modular**, which means that many changes to plugins can be done in a binary compatible manner. The example above, adding data members to a transparent data type, can be achieved in a binary compatible manner by extending the data type via indirection. The pattern is as follows: a transparent data type contains an extension member at the end. This member is only used when the data type needs to be extended. For example:

```cpp
// This is version 1.0
struct ImageDesc
{
    int width;
    int height;
    void* ext; // This is to future proof the struct. Type is void because it's not used in v1.0.
};

// Usage:
ImageDesc i = {1024, 1024}; // compiler initializes 'ext' to nullptr, in both debug and release builds.
```

Later we realize that we want to support an optional name for these images as well, so we expand:

```cpp
struct ImageDescExt;

// This is version 1.1
struct ImageDesc
{
    int width;
    int height;
    ImageDescExt* ext; // The pointer has now become typed, sending a clear signal that it can be used.
};

struct ImageDescExt
{
    const char* name;
    void* ext; // This is to future proof the ImageDesc struct. Type is void because it's not used in v1.1
};

// Usage:
ImageDescExt e = {"my_awesome_image"};
ImageDesc t = {1024, 1024, &e};
```

As you can see, the size of each struct doesn't change; we just chain new data via the 'ext' member, and the compiler automatically sets the 'ext' member to nullptr when you initialize the struct using an initializer list. This gives us binary compatibility when extending an established plugin with optional features. When these new features are not optional we of course bump the major version number, and in that case the client must make code changes to accommodate the interface changes that have been made. In the case above this would lead to changing the data layout of the struct, like this:

```cpp
// This is version 2.0 - previous versions and extensions have been wiped
struct ImageDesc
{
    const char* name;
    int width;
    int height;
    void* ext; // This is to allow extensions in minor version upgrades. Type is void because it's not used in v2.0
};

// Usage:
ImageDesc t = {"my_awesome_image", 1024, 1024};
```

Notice how we purposefully re-order the data members in v2.0 of the struct. This is to cause compilation errors where the struct is being initialized - because those call sites need to be revisited when upgrading to this new major version of the plugin interface. If we are paranoid about this case we could also rename it for version 2.x, to `ImageDesc2`. That is bound to generate compiler errors everywhere external code interacts with it.

In Carbonite we use structs for grouping together input parameters because that way we can extend the list of parameters while still maintaining binary compatibility. The plugin interface functions are also exposed via structs, but these are never created by clients; they are acquired from the plugin and released by calling the plugin. This means that newer versions of the plugin can introduce optional functions that an older client won't know about, but this will cause no harm because the struct is always created by the plugin and thus always of the correct size. These structs can even contain state data that is opaque to the client. It is therefore perhaps appropriate to call them semi-transparent structs in terms of client visibility.

A plugin will expose a main interface struct. To avoid complicated versioning we use one version number to capture the version of a plugin interface and store this in the main interface struct. If the plugin exposes other interface structs (sometimes referred to as sub-interfaces) those must be fetched via the main interface struct. This guarantees that version checks are performed, because the main interface must be acquired first. Let's illustrate this with an example, using the Input plugin:

```cpp
namespace carb
{
namespace input
{

// this is a sub-interface - we keep it outside the main interface for cleanliness because most
// users will never use this interface, it is used by other plugins that handle input devices.
struct InputProvider;

struct Input
{
    CARB_PLUGIN_INTERFACE("carb::input::Input", 1, 0)

    /**
     * Get keyboard logical device name.
     *
     * @param keyboard Logical keyboard.
     * @return Specified keyboard logical device name string.
     */
    const char* (CARB_ABI* getKeyboardName)(Keyboard* keyboard);

    /*
     * Lots of code here with different Input interface functions, pages of really well documented and well
     * designed functions. Honestly. Check it out at source/plugins/carb/input/Input.h.
     */

    /**
     * Returns input provider's part of the input interface.
     *
     * @return Input provider interface.
     */
    InputProvider* (CARB_ABI* getInputProvider)();
};
}
}
```

If we need to make a change to the `InputProvider` interface struct we bump the version number for the Input plugin interface. In the example above the version is major 1 and minor 0, i.e. 1.0. Before we discuss the rules of how those numbers change, let's summarize what we just discussed:

> A plugin interface is the collection of all the headers that a plugin exposes. Any change to this interface requires that you modify the plugin version. The main interface struct is always acquired via the `Framework::acquireInterface` function that will perform version checks for you. Sub-interfaces must only be accessible via this main interface (using `get` functions or a factory pattern).

Now we turn our attention to the rules of how we set and adjust the version numbers.

- A plugin **interface** has a major and minor version number:
  - Major version is increased every time a non-backwards compatible change is made. Please realize that this means any aspect of the interface, including sub-interfaces - essentially any header that is externally accessible in the root folder for the plugin.
  - Minor version is increased every time a backwards compatible change is made to the interface of the plugin. This can involve adding a new optional function to the API. Clients of the interface can therefore use plugins that implement this new interface even if they were compiled against a lower minor version (since the additions are optional and these types of changes are **binary compatible**).
- A plugin **implementation** has a major version, minor version, and build identifier:
  - Major version matches the interface that this implementation supports.
  - Minor version matches the interface that this implementation supports, and all lower minor versions of that same major version.
  - Build identifier is a string that uniquely identifies the build, usually composed of repo name, branch name, and build number. It may also include a git hash for easy identification of the latest commit included. This string is set to "dev" on development builds.
- Please endeavor to **not** bump the major version unless absolutely needed. Interface changes cause pain for users of the plugin, and plugins that constantly bump their major version will not be popular. Quite the contrary: users will avoid them. Instead, spend the time and energy to make your changes backwards compatible if at all possible. Then save up for a larger refactoring that can be done at an opportune time where multiple changes are made at once to the interface. If you find it difficult to make backwards compatible changes please consult with the Carbonite team; they have experience that can hopefully help.

It should be clear from reading this that creating an ABI and managing changes so they cause the least disruption to clients requires dedication and resourcefulness. You should therefore only expose **necessary functionality** in a plugin interface. Optional and nice-to-have things are better provided in source code form via the header-only utils library.
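To make the client side of this contract concrete, here is a minimal sketch of an application acquiring the Input plugin's main interface and then reaching a sub-interface through it. It assumes the Framework has already been started and the plugin loaded; `carb::getFramework()`, `acquireInterface`, and `getInputProvider` are the pieces described above.

```cpp
#include <carb/Framework.h>
#include <carb/input/Input.h>

void useInput()
{
    carb::Framework* framework = carb::getFramework();

    // acquireInterface performs the interface version check on our behalf.
    carb::input::Input* input = framework->acquireInterface<carb::input::Input>();

    // Sub-interfaces are only reachable through the main interface, so the
    // version check above is guaranteed to have already happened.
    carb::input::InputProvider* provider = input->getInputProvider();
    (void)provider; // not used in this sketch
}
```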
### Opting into interface version changes

A real-world example shows why interface version bumps need to be opted into. At one point the `ILogging` interface was bumped from version 1.0 to 1.1, and Omniverse Kit extensions were built against this version of Carbonite and implicitly picked up the `ILogging` 1.1 version requirement. For whatever reason, two distinct Omniverse Kit applications both adopted version 105.2, but were based on different branches of code that used different Carbonite versions: one with the `ILogging` 1.1 change and one without. Extensions that were built against the "newer" 105.2 (with `ILogging` 1.1) were no longer compatible with the "older" 105.2 since they implicitly required `ILogging` 1.1 without needing any additional functionality from it.

After discussing this point in Carbonite Office Hours, OVCC-1472 was created to document this phenomenon and attempt to find a solution. Essentially the solution is this: version changes must be opted into. This means that when it becomes necessary to bump an interface's version, we maintain the previous version as the "default" version. If an extension/module/application requires use of the new functionality, that application must declare this by requesting the newer version of the interface.

This new method is recommended for all new interfaces, and should be switched to when a change to an existing interface is required (maintaining the previous version as the **default** version).

In order to do this, interfaces must be declared slightly differently. First, macros should be declared that define the latest version and the default version (which may be different). This cannot be generated by normal means (i.e. macros) and therefore must follow this boilerplate code:

```cpp
//! Latest version for carb::logging::ILogging
#define carb_logging_ILogging_latest CARB_HEXVERSION(1, 1)

#ifndef carb_logging_ILogging
//! The default current version for carb::logging::ILogging
#    define carb_logging_ILogging CARB_HEXVERSION(1, 0)
#endif
```

The recommended naming paradigm is to take the fully-qualified interface struct name with `::` changed to `_` to make the name valid. The latest version has a `_latest` suffix and is always defined. The current version does not have a suffix and is only defined if not already defined. This allows project settings to override the define by specifying it on the compiler command line (example: `-Dcarb_logging_ILogging=carb_logging_ILogging_latest` or `-Dcarb_logging_ILogging=CARB_HEXVERSION(1,1)`).

**NOTE**: The module that implements the interface **must** have `<current version>=<latest version>` in its project settings or passed on the compiler command line.

Next, the interface must be declared with `CARB_PLUGIN_INTERFACE_EX` instead of `CARB_PLUGIN_INTERFACE`:

```cpp
CARB_PLUGIN_INTERFACE_EX("carb::logging::ILogging", carb_logging_ILogging_latest, carb_logging_ILogging)
```

And finally, use the `CARB_VERSION_ATLEAST` macro with `#if` to conditionally include versioned code:

```cpp
#if CARB_VERSION_ATLEAST(carb_logging_ILogging, 1, 1)
    /**
     * Retrieves the extended StandardLogger2 interface from an old \ref StandardLogger instance.
     * @param logger The logger to retrieve the instance from. If `nullptr` then `nullptr` will be returned.
     * @returns The \ref StandardLogger2 interface for \p logger, or `nullptr`.
     */
    StandardLogger2*(CARB_ABI* getStandardLogger2)(StandardLogger* logger);

    // ...
#endif
```

## Deprecation and Retirement

Given the goal above to minimize changes to major versions and allow backwards compatibility through minor version changes, a technique is employed to deprecate and retire functions.

### Step 1: Deprecation

Along with a minor version increment, a function may be decorated with `CARB_DEPRECATED`, which signals intent that the function will eventually be removed and that further use of this function is discouraged. This breaks **source compatibility** in that a warning will be generated when using the function (if such warnings are enabled), but does not affect **binary compatibility**. The function must remain in the plugin, and must remain at the same location within the plugin's interface.

### Step 2: Retirement

At least a calendar month later, and with another minor version increment, a function may be **retired** by appending `_RETIRED_V(x)_(y)` to its name, where `(x)` is the major version and `(y)` is the minor version after the increment. The function should still remain in the plugin, and must remain at the same location within the plugin's interface. The modification of the function name will absolutely break source compatibility but will still not affect binary compatibility.

### Step 3: Optional removal

At least a calendar month later, and with another minor version increment, a function may be removed by changing its slot within the interface struct to `nullptr` or replacing the function with a stub function that performs a `CARB_FATAL_UNLESS(false, ...)` with an error message. It is important that the location within the interface struct does not change in order to maintain binary compatibility. **NOTE: This change breaks binary compatibility for the removed function but maintains overall binary compatibility.** This is only recommended when it is highly unlikely that any dependencies exist for older versions of the interface.

### Step 4: Removal and cleanup

When the next major version bump happens, all references to the function may be removed, and the `nullptr` or stub function may be removed from the interface struct. A common and recommended method for decorating code with reminders to remove elements is to place `static_assert` statements that check that the major version is equal to its current value. When the major version is incremented, the various `static_assert` statements will trigger errors that serve as reminders to perform the clean up.

## Structure

| Module | Source location | Namespace | Binary name |
|-----------------------|-----------------------|-----------------------|-----------------------|
| Carbonite Framework | source/carb | carb | carb.dll, libcarb.so |
| Carbonite Extras | source/carb/extras | carb::extras | header only |
| Carbonite Plugins | source/plugins/carb/&lt;plugin-name&gt; | carb::&lt;plugin-name&gt; | {lib}carb.&lt;plugin-name&gt;{-&lt;impl-name&gt;}.plugin.{dll\|so} |
| Custom App | user defined | user defined | &lt;user-defined&gt;.exe |
| Custom Plugins | user defined | anything but carb | &lt;user-defined&gt;.plugin.{dll\|so} |
Asserts.md
# Carbonite Asserts

## Overview

The assertion framework in Carbonite is designed to be usable for a variety of situations, extensible, and performant. At its core, an assertion framework allows logging a message and/or stopping in the debugger if an unexpected situation occurs. This allows a programmer to intervene and observe a malformed situation at the point where it first happens. Various options are available for debug and release builds. If the assert is not handled, a crash is triggered whereby the state of the program ideally is saved for later investigation.

The basic types of asserts provided by Carbonite are as follows:

- Debug asserts (CARB_ASSERT)
- Runtime check asserts (CARB_CHECK)
- Fatal conditions (CARB_FATAL_UNLESS)

> Note: Static asserts are best. If checks can be done at compile time (or link time), use `static_assert`.

## Default Implementation and Overriding

Debug asserts and Runtime check asserts can be enabled or disabled by default or by overriding. The `CARB_ASSERT` macro has an additional macro `CARB_ASSERT_ENABLED` that will be set to zero if debug asserts are not enabled, and non-zero if debug asserts are enabled. This can be used to conditionally enable code that may do additional work or may add members if debug asserts are enabled:

```c++
struct MyClass
{
#if CARB_ASSERT_ENABLED
    bool checkValue{};
    MyClass(bool value) : checkValue(value)
    {
    }
#else
    MyClass() = default;
#endif

    void foo()
    {
        CARB_ASSERT(checkValue);
    }
};
```

!!! warning "Warning"
    Adding extra members conditionally (as shown above) must not change the ABI for plugins. This is recommended for header-only classes only.

The runtime check assertion macro (`CARB_CHECK`) similarly has an additional macro `CARB_CHECK_ENABLED` that is set to zero if runtime check asserts are not enabled, and non-zero if runtime check asserts are enabled, and can be used in conditional code just as `CARB_ASSERT_ENABLED` is.

The `CARB_ASSERT_ENABLED` and `CARB_CHECK_ENABLED` macros are two-way: they reflect whether the corresponding asserts are enabled, but may also be defined explicitly to override that behavior. The default implementation defines `CARB_ASSERT_ENABLED` to match the value of `CARB_DEBUG`, effectively enabling debug asserts for debug builds and disabling them for non-debug builds. The default implementation of `CARB_CHECK_ENABLED` is always set to `1`, making it always enabled. However, this behavior can be overridden by defining `CARB_ASSERT_ENABLED` or `CARB_CHECK_ENABLED` to the desired value, either on the compiler command line (or in your build scripts such as `premake5.lua` or `Makefile`) or before including `carb/Defines.h`. This allows, for instance, enabling `CARB_ASSERT` for release or optimized builds, or disabling `CARB_CHECK` for certain builds.

!!! warning "Warning"
    Assertion expressions should not have side effects! When disabled, the expression is **not evaluated**.

Similarly, the `CARB_ASSERT` macro itself can be specified on the compiler command line or defined before including `carb/Defines.h`. This allows the assert behavior to be customized as desired.

The default implementation of `CARB_ASSERT` and `CARB_CHECK` is the same (but enabled at different times) and is as follows:

```c++
#define CARB_IMPL_ASSERT(cond, ...)                                                                                    \
    (CARB_LIKELY(cond) /* check1 */ ||                                                                                 \
     ![&](const char* funcname__, ...) CARB_NOINLINE { /* check2 */                                                    \
         return g_carbAssert ?                                                                                         \
                    g_carbAssert->reportFailedAssertion(#cond, __FILE__, funcname__, __LINE__, ##__VA_ARGS__) :        \
                    ::carb::assertHandlerFallback(#cond, __FILE__, funcname__, __LINE__, ##__VA_ARGS__);               \
     }(CARB_PRETTY_FUNCTION) ||                                                                                        \
     (CARB_BREAK_POINT(), false) /* check3 */)
```

`check1` is the expression that you're asserting. If it succeeds, great: execution continues. Also note the use of `CARB_LIKELY`: this tells compilers to optimize expecting that the result will produce `true`.

If `check1` fails, `check2` is evaluated. This calls the Assert Handler and passes a stringification of the expression, along with any additional arguments passed to the variadic portion of the `CARB_ASSERT` macro. If the Assert Handler returns `false`, execution continues (note the negation in front of the lambda) as if the assert expression had not failed. If `true` is returned however, `check3` is evaluated. This is not really a check at all; it merely invokes `CARB_BREAK_POINT()` and causes the entire expression to evaluate to `false`.

This macro has a few other tricks. In order to attempt to generate more ideal assembly, the call to the assertion handler is in a lambda that is declared as `CARB_NOINLINE` to prevent inlining (due to a bug in some versions of Microsoft Visual C++, this lambda also accepts variadic arguments in an attempt to prevent inlining). By avoiding inlining the lambda, the call to the assertion handler (which should be called rarely) can be removed from the fast path of code execution, preventing wasted bytes in the instruction cache.

Generally speaking, it is better to use the default behavior of `CARB_ASSERT` and `CARB_CHECK` and instead handle assertions using the Assert Handler.

## Assertion Handlers

When a `CARB_ASSERT`, `CARB_CHECK` or even `CARB_FATAL_UNLESS` assertion fails, the first step is to call the Assertion Handler. This gives the application the chance to notify the user about the assert, log the assert, attach a debugger, etc.
If the Carbonite Framework is started and built-in plugins are loaded, `g_carbAssert` will be set to the built-in `carb::assert::IAssert` plugin, and `carb::assert::IAssert::reportFailedAssertion()` will be called when an assert occurs. If the Framework is not started, `carb::assertHandlerFallback()` is called instead.

To override the Assertion Handler for the local module (EXE/DLL/SO), you can set the `g_carbAssert` global variable to a different implementation of the `carb::assert::IAssert` structure any time after `carb::assert::registerAssertForClient()` is called. Example:

```cpp
// Override the assert handler with our test one
class ScopedAssertionOverride : public carb::assert::IAssert
{
public:
    carb::assert::IAssert* m_prev;
    std::atomic_uint64_t m_count{ 0 };

private:
    static carb::assert::AssertFlags setAssertionFlags(carb::assert::AssertFlags, carb::assert::AssertFlags)
    {
        return 0;
    }
    static uint64_t getAssertionFailureCount()
    {
        return static_cast<ScopedAssertionOverride*>(g_carbAssert)->m_count.load();
    }
    static bool reportFailedAssertionVimpl(const char*, const char*, const char*, int32_t, const char*, ...)
    {
        static_cast<ScopedAssertionOverride*>(g_carbAssert)->m_count.fetch_add(1);
        return false;
    }

public:
    ScopedAssertionOverride()
        : carb::assert::IAssert({ &setAssertionFlags, &getAssertionFailureCount, &reportFailedAssertionVimpl })
    {
        m_prev = std::exchange(g_carbAssert, this);
    }
    ~ScopedAssertionOverride()
    {
        g_carbAssert = m_prev;
    }

    auto& count()
    {
        return m_count;
    }
};
```

While it may be possible to override the Assert Handler globally for an application, this is not currently tested or supported.

## Debug Asserts

Debug asserts are performed via `CARB_ASSERT` and enabled when `CARB_ASSERT_ENABLED` is non-zero. By default, these checks are available only when `CARB_DEBUG` is defined (i.e. debug builds). As such, they are best used for algorithmic checks and to ensure that changes to a system meet expectations. Since they are typically only enabled for debug builds, they are less useful if debug builds are not in wide usage.

Keep in mind that the condition checked by `CARB_ASSERT` is **compiled out** when not enabled, and therefore must have no side-effects.

## Runtime Check Asserts

Runtime checks are performed via `CARB_CHECK` and enabled when `CARB_CHECK_ENABLED` is non-zero. By default, these are always on, but can be disabled if a consumer of Carbonite desires. `CARB_CHECK` is not intended to be included in Shipping builds, but is designed to be included in optimized builds that will have a wide range of developers and QA personnel running them. However, since Carbonite currently only packages Debug and Release builds, `CARB_CHECK` is turned on for both builds.
This macro is therefore forward-looking to a potential future where Debug, Checked and Shipping builds are provided: Checked builds would have `CARB_CHECK` enabled, but Shipping builds would not.

Keep in mind that the condition checked by `CARB_CHECK` is **compiled out** when not enabled, and therefore must have no side-effects.

## Fatal Conditions

`CARB_FATAL_UNLESS` exists to terminate an application when a check fails. This should only be used when gracefully handling the error condition is impossible, and continuing to execute can lead to difficult-to-diagnose instability or data corruption. In contrast to `CARB_ASSERT` and `CARB_CHECK`, this macro is always enabled and **can never be disabled**, even in Release or hypothetical future Shipping builds, but may be overridden similarly to the other assertion macros.

Due to its specific behavior of always terminating, `CARB_FATAL_UNLESS` is similar to, but different from, the other asserts:

```c++
#define CARB_FATAL_UNLESS(cond, fmt, ...)                                                                              \
    (CARB_LIKELY(cond) ||                                                                                              \
     ([&](const char* funcname__, ...) CARB_NOINLINE {                                                                 \
         if (false)                                                                                                    \
             ::printf(fmt, ##__VA_ARGS__);                                                                             \
         g_carbAssert ? g_carbAssert->reportFailedAssertion(#cond, __FILE__, funcname__, __LINE__, fmt, ##__VA_ARGS__) : \
                        ::carb::assertHandlerFallback(#cond, __FILE__, funcname__, __LINE__, fmt, ##__VA_ARGS__);      \
     }(CARB_PRETTY_FUNCTION), std::terminate(), false))
```

`CARB_FATAL_UNLESS` can be overridden as described above, but a program is malformed if execution is allowed to continue past `CARB_FATAL_UNLESS` when the expression returns `false`; this is undefined behavior.

## How to use these macros

`CARB_ASSERT` has the lowest exposure, `CARB_CHECK` has middle exposure and `CARB_FATAL_UNLESS` has absolute exposure. These are varying degrees of guardrails to keep your program in check in a wide variety of circumstances. The principles listed below are not meant to be exhaustive, but to serve as a guide for usage.

In general, failing an assert should always be considered fatal, just at different points of the development cycle. As such, asserts should be used for things that **absolutely should never ever fail based on the current authorship of the program.** Asserts are an excellent practice to ensure that the expectations and assumptions of the code are called out in a program-enforced manner, as opposed to comments that may explain intent but are not enforced. In other words, asserts tell future programmers what the code assumes so that changes to the code either maintain those assumptions or require them to change.

Asserts lose effectiveness if they are not enabled. For instance, if use of `CARB_ASSERT` is prolific, but `CARB_ASSERT_ENABLED` is always `false` (because debug builds are too slow, say), then `CARB_ASSERT` is effectively useless.

### Use CARB_ASSERT for programmer-only checks or debug unit tests

CARB_ASSERT is typically only compiled into debug builds, which do not have wide exposure (i.e. many running instances of the program). As such, this makes it useful for algorithmic checks that it is not desirable to run always, but that will always succeed as long as the code has not been changed. When a programmer starts making changes to the code with debug builds, the asserts will be enabled, which ensures that the programmer continues to meet the existing assumptions of unchanged code.
A simple example of this is:

```c++
int a = 2;
// …
int b = 2;
// …
CARB_ASSERT((a + b) == 4);
// proceed with validated assumption that a+b == 4
```

As this code is written, there is no possible way that the assert would fail. However, if a programmer starts making changes to this code and decides that `a` should equal 3, then the assert will fail and indicate to the programmer that the code has an assumption that `a + b` should always equal 4. The programmer will then have to investigate why this is and evaluate the situation.

As a side note, sometimes it makes sense to have conditional code that is only intended to be enabled when changes are being made to a system, as the cost to assert is very high. For instance, perhaps you have a binary-tree implementation with a `validate()` function that will walk the entire tree and ensure that it is built correctly. This is cost-prohibitive to run in most cases, but it is also unnecessary to run unless the tree algorithm is changing. It would be better to have the calls to `validate()` compiled out unless specifically enabled:

```c++
#define BINARY_TREE_CHECKS 0 // Enable if you are working on this system
// ...
#if BINARY_TREE_CHECKS
CARB_ASSERT(validate()); // Run full validation (costly!)
#endif
```

### Use CARB_CHECK for more exposure

Since CARB_CHECK is always enabled by default (but can be disabled if desired), code utilizing it will run on many more instances and therefore has much greater exposure. This can be useful to find issues in complex code and non-deterministic code (especially multi-threading code). Like CARB_ASSERT, CARB_CHECK should only be used for situations where the assertion will never fire unless changes are being made to the code, and will let future programmers understand the assumptions made in the code. However, if debug builds are not utilized, or CARB_ASSERT_ENABLED is never true, it may make more sense to use CARB_CHECK instead.

As it has greater exposure, CARB_CHECK can also be used in situations to check non-deterministic execution, such as with multi-threaded code. This can be used to find edge cases or to find 1-in-1000 (or rarer) bugs with generally very little runtime cost.

### Use CARB_FATAL_UNLESS to catch unrecoverable issues as early as possible

The CARB_FATAL_UNLESS macro, which always guarantees to terminate the program if the assertion fails, should be used for unrecoverable issues, as early as possible. For instance, a heap `free()` function may determine that memory is corrupt, and should use CARB_FATAL_UNLESS to inform the user and terminate the program, since continuing on merely leads to further instability and eliminates information that is currently present that could be used to diagnose the issue. If continuing on could lead to data corruption, CARB_FATAL_UNLESS should be used to halt the program with a message to the user.

Another appropriate use of CARB_FATAL_UNLESS is for system or runtime-library calls that should never fail. For instance, in a RAII mutex class that wraps pthread's `pthread_mutex_t`, API functions such as `pthread_mutex_init()` are allowed to fail in various ways, but in practice should not fail unless given bad data or the system is experiencing instability. It is appropriate to use `CARB_FATAL_UNLESS` if `pthread_mutex_init()` returns anything other than zero, as this is generally not an error that can be recovered from.
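A minimal sketch of that RAII wrapper follows; the class itself is illustrative, not a Carbonite type, and `carb/Defines.h` is assumed to be where `CARB_FATAL_UNLESS` lives:

```c++
#include <pthread.h>

#include <carb/Defines.h> // assumed home of CARB_FATAL_UNLESS

// Illustrative RAII wrapper: a failed pthread_mutex_init() is treated as
// unrecoverable, so we terminate with a descriptive message.
class Mutex
{
    pthread_mutex_t m_mutex;

public:
    Mutex()
    {
        int result = pthread_mutex_init(&m_mutex, nullptr);
        CARB_FATAL_UNLESS(result == 0, "pthread_mutex_init() failed: %d", result);
    }
    ~Mutex()
    {
        pthread_mutex_destroy(&m_mutex);
    }
    void lock()
    {
        int result = pthread_mutex_lock(&m_mutex);
        CARB_FATAL_UNLESS(result == 0, "pthread_mutex_lock() failed: %d", result);
    }
    void unlock()
    {
        pthread_mutex_unlock(&m_mutex);
    }
};
```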
`CARB_FATAL_UNLESS` requires that a printf-style user message be provided, and as much information as possible should be imparted in the message.

## Providing your own assert macro

It is entirely possible that none of the above macros suffice for your needs. Perhaps you want to allow side-effects (i.e. have the expression always evaluated, even when the check is disabled) or to log failure conditions. In these cases, it is advisable to create your own assert macro rather than overriding the definition of any of the above macros. This also prevents breaking assumptions that other code makes when using the Carbonite macros.
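For illustration only, here is one possible shape for such a macro. `myLogError()` is a hypothetical project-specific logging function, and, unlike the Carbonite macros, the condition here is always evaluated, so side effects are permitted:

```c++
// Hypothetical project macro: always evaluates 'cond' (side effects allowed),
// logs on failure, and yields the condition's truth value for use in an 'if'.
#define MY_VERIFY(cond, msg)                                                                                           \
    ((cond) ? true : (myLogError("Verify failed: %s at %s:%d", (msg), __FILE__, __LINE__), false))
```

Used as `if (!MY_VERIFY(ptr != nullptr, "missing ptr")) return;`, this keeps the project's own guardrail explicit while leaving the Carbonite macros' semantics untouched.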
asset-structure_introduction.md
# Introduction

Blast is the NVIDIA GameWorks destruction library. It consists of a Low Level API (NvBlast), a High Level (Toolkit) API (NvBlastTk, also called Blast Toolkit or BlastTk), and Extensions (NvBlastExt, also called Blast Extensions or BlastExt). This layered API is designed to allow a short ramp-up time for first usage (through the Ext and Tk APIs) while also allowing for customization and optimization by experienced users through the low-level API.

This library is intended to replace APEX Destruction. It is being developed with years of user feedback and experience, with the goal of addressing shortcomings in performance, stability, and customizability of the APEX Destruction module.

## Asset Structure

Blast is currently designed to support rigid body, pre-fractured destruction. Future versions may support runtime fracturing or deformation.

The static data associated with a destructible is stored in an *asset*. Assets are instanced into actors, which may be damaged and fractured. When fractured, actors are broken into pieces called *chunks*. Connected groups of chunks belong to new actors. The grouping of chunks into actors is determined by the support graph in the asset.

Chunks are defined hierarchically, so that when a chunk is fractured its child chunks are created. The user may tag any chunk in this hierarchy as a *support* chunk. This is covered in more detail in the Support Model section. The user also supplies a description of the connections between support chunks. A *bond* represents the surface joining neighboring chunks. A bond is represented by a surface centroid, an average surface normal, and the surface area. These quantities don't need to be exact for Blast to operate effectively.

Multiple chunk hierarchies may exist in a single asset. The *root chunks* (see Definitions) will be visible when the asset is initially instanced. Subsequent fracturing has the effect of breaking the root chunks into their hierarchical descendants.

## Support Model

Blast requires that support chunks form an *exact cover* (see the definition of exact coverage in Definitions). The geometric interpretation of exact coverage is that the support chunks fill the space of the root (unfractured) chunk, without any volume being covered by more than one chunk. A helper function is provided to modify a set of chunk descriptors so that they have exact coverage. This function fills in missing coverage by assigning support to chunks at the highest place possible (closest to root) in the hierarchy, and redundant support is removed: if a chunk and one of its descendant chunks are both marked as support, the function will remove support from the descendant chunk.

Support chunks that are joined by bonds will be grouped together in the same actor when fracturing occurs. Bonds may be defined between any two support chunks, or between a support chunk and "the world." There is no corresponding "world chunk," but the bond represents a connection between the chunk and its external environment. All chunks with a support graph connected to the world will be put into the same actor. An expected use case is to make this actor static (or kinematic). Actors may be queried to determine if they are "world-bound."

In order to take advantage of the chunk hierarchy to reduce the number of chunks which represent an actor physically and graphically, Blast calculates a list of *visible chunks* from the support chunks in an actor.
These may be the support chunks, or they may be ancestors of support chunks if all descendant support chunks are in the actor. Support chunks do not have to be leaves in the chunk hierarchy, nor do they have to be at the same depth in the hierarchy.

Children of support chunks will always be the sole chunk in their actor, since there are no bonds defined between them. If an actor consists of a single *subsupport chunk* (see [Definitions](#definitions)), its visible chunk is that same chunk. The same is true if an actor consists of a single support chunk.

## Damage Model

Damage is defined as loss of an actor's material integrity. This is modeled by a simple health value associated with the bonds and chunks in the support graph. The user applies damage to an actor at a given location, with a maximum effect radius. The resulting loss of bond and chunk health is determined by a user-defined material function. In this way, the user can customize the effect of damage based upon the bonds' properties such as normal and area, as well as distance from the impact location.

Damage is applied during the processing of a damage event buffer. After all damage events are processed, bonds with non-positive health are considered to be broken. Blast performs island detection on the support graph to find all groups of support chunks that are connected by unbroken bonds, and any new islands found result in new actors.

If an actor is composed of a single support or subsupport chunk with subsupport descendants, then there is no bond structure to model damage. Instead, such a chunk is considered to have its own health value, which may be decreased by damage. When such a chunk's health becomes non-positive, its associated actor is deleted and replaced by actors that represent its child chunks, if any.

The effect of damage on leaf chunks depends upon which API is used. The low-level API does not delete leaf chunks. It is up to the user to delete them, and manage their physical and graphical representation outside of Blast if so desired.
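As a rough sketch of what such a user-defined material function might compute (the signature and the linear falloff are illustrative assumptions, not Blast's actual API):

```c++
// Illustrative only: full damage at the impact point, fading linearly to zero
// at the maximum effect radius. A real material function could also weight the
// result by bond area or by the bond's normal relative to the impact direction.
float healthLoss(float baseDamage, float distanceFromImpact, float maxRadius)
{
    if (distanceFromImpact >= maxRadius)
        return 0.0f;
    return baseDamage * (1.0f - distanceFromImpact / maxRadius);
}
```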
AttributeType.md
# Attribute Type Definition

Attributes in OmniGraph all have a type, which indicates what kind of data they can represent. The type definitions are attached to the attributes so that their associated data can be correctly stored and interpreted. In addition, the type definition can be used to disambiguate values that can be interpreted as multiple types. For example the number 5 could be interpreted as integer, floating point, double-precision, etc. The correct interpretation is needed in order to decide how much storage this value will occupy, as well as how the bits representing that number are to be arranged.

## The Type Structure

The attribute type is a simple structure that contains a few pieces that uniquely identify the entire type. The type definition in C++ is inherited from Fabric since that is where the data is stored. There is an OmniGraph wrapper around it that allows creation of some specific utilities, as well as interpretation of some OmniGraph-specific role types. Full documentation of the type structure is available for both `C++` and `Python`.

### Base Data Type

The base data type is the low level data type that stores a single value of the given attribute type. There are some simple values that are plain-old-data (POD) types and a few that represent USD-specific data such as prims. In C++ the value is specified by the enum `omni::fabric::BaseDataType`, and in Python it is in `omni.graph.core.BaseDataType`.

```python
import omni.graph.core as og

all_base_data_types = {
    og.BaseDataType.ASSET: "Data represents an Asset",
    og.BaseDataType.BOOL: "Data is a boolean",
    og.BaseDataType.CONNECTION: "Data is a special value representing a connection",
    og.BaseDataType.DOUBLE: "Data is a double precision floating point value",
    og.BaseDataType.FLOAT: "Data is a single precision floating point value",
    og.BaseDataType.HALF: "Data is a half precision floating point value",
    og.BaseDataType.INT: "Data is a 32-bit integer",
    og.BaseDataType.INT64: "Data is a 64-bit integer",
    og.BaseDataType.PRIM: "Data is a reference to a USD prim",
    og.BaseDataType.RELATIONSHIP: "Data is a relationship to a USD prim",
    og.BaseDataType.TAG: "Data is a special Fabric tag",
    og.BaseDataType.TOKEN: "Data is a reference to a unique shared string",
    og.BaseDataType.UCHAR: "Data is an 8-bit unsigned character",
    og.BaseDataType.UINT: "Data is a 32-bit unsigned integer",
    og.BaseDataType.UINT64: "Data is a 64-bit unsigned integer",
    og.BaseDataType.UNKNOWN: "Data type is currently unknown",
}
```

```cpp
enum class BaseDataType : uint8_t
{
    eUnknown = 0, //!< The base type is not defined
    eBool, //!< Boolean type
    eUChar, //!< Unsigned character (8-bit)
    eInt, //!< 32-bit integer
    eUInt, //!< 32-bit unsigned integer
    eInt64, //!< 64-bit integer
    eUInt64, //!< 64-bit unsigned integer
    eHalf, //!< Half-precision floating point number
    eFloat, //!< Full-precision floating point number
    eDouble, //!< Double-precision floating point number
    eToken, //!< Unique token identifying a constant shared string

    // RELATIONSHIP is stored as a 64-bit integer internally, but shouldn't be
    // treated as an integer type by nodes.
    eRelationship, //!< 64-bit handle to a USD relationship

    // For internal use only
    eAsset, //!< (INTERNAL) Handle to a USD asset
    eDeprecated1, //!< (INTERNAL) Handle to a USD prim
    eConnection, //!< (INTERNAL) Special type connecting to USD elements
    eTag,

    eCount // for compile-time checks
};
```

### Component Count

A component count corresponds to the number of repeated elements of the base data type that appear in the type as a whole. Only a small subset of component counts is supported, corresponding to those supported by USD. (Every type supports a component count of 1, meaning only a single element is present.)

| Base Type | Component Counts Supported |
|-----------|----------------------------|
| float | 2, 3, 4 |
| double | 2, 3, 4, 9, 16 |
| half | 2, 3, 4 |
| int | 2, 3, 4 |

The above indicate support when there is no role specified. When there is a role, fewer component counts are allowed, as follows:

| Role | Component Counts Supported |
|----------|----------------------------|
| Color | 3, 4 |
| Frame | 16 |
| Matrix | 4, 9, 16 |
| Normal | 3 |
| Point | 3 |
| Quaternion | 4 |
| TexCoord | 2, 3 |
| Vector | 3 |

#### Important

While component counts not appearing in the above tables can be specified in a type definition, they will not be supported in OmniGraph or Fabric.

### Array Depth

Many of the types also support arrays of the basic types. The value in the type is the array depth, where a depth of 0 is a single value and a depth of 1 is an array of values. Other depth levels are not currently supported. In Fabric every type except for `string` and `path` can be an array, since a single string value is already represented as an array of depth 1. OmniGraph also does not use arrays of the bundle, execution, and target types.

### Role

A role provides an interpretation of the data in the underlying type. For data storage the role value is irrelevant, because it does not alter the size of the data specified by the type.

#### Note: OmniGraph Unsupported Roles

Although all of the roles are supported in Fabric, some of them are not used by OmniGraph. These are the ones whose base type is tag (applied schema, prim type name, instanced attribute, and ancestor prim type).

In C++ the value is specified by the enum `omni::fabric::AttributeRole`, and in Python it is in `omni.graph.core.AttributeRole`.
```python
import omni.graph.core as og

all_roles = {
    og.AttributeRole.APPLIED_SCHEMA: "Data is to be interpreted as an applied schema",
    og.AttributeRole.BUNDLE: "Data is to be interpreted as an OmniGraph Bundle",
    og.AttributeRole.COLOR: "Data is to be interpreted as RGB or RGBA color",
    og.AttributeRole.EXECUTION: "Data is to be interpreted as an Action Graph execution pin",
    og.AttributeRole.FRAME: "Data is to be interpreted as a 4x4 matrix representing a reference frame",
    og.AttributeRole.MATRIX: "Data is to be interpreted as a square matrix of values",
    og.AttributeRole.NONE: "Data has no special role",
    og.AttributeRole.NORMAL: "Data is to be interpreted as a normal vector",
    og.AttributeRole.OBJECT_ID: "Data is to be interpreted as a unique object identifier",
    og.AttributeRole.PATH: "Data is to be interpreted as a path to a USD element",
    og.AttributeRole.POSITION: "Data is to be interpreted as a position or point vector",
    og.AttributeRole.PRIM_TYPE_NAME: "Data is to be interpreted as the name of a prim type",
    og.AttributeRole.QUATERNION: "Data is to be interpreted as a rotational quaternion",
    og.AttributeRole.TARGET: "Data is to be interpreted as a relationship target path",
    og.AttributeRole.TEXCOORD: "Data is to be interpreted as texture coordinates",
    og.AttributeRole.TEXT: "Data is to be interpreted as a text string",
    og.AttributeRole.TIMECODE: "Data is to be interpreted as a time code",
    og.AttributeRole.TRANSFORM: "Deprecated",
    og.AttributeRole.VECTOR: "Data is to be interpreted as a simple vector",
    og.AttributeRole.UNKNOWN: "Data role is currently unknown",
}
```

```c++
/**
 * @brief The role enum provides an interpretation of the base data as a specific type
 *
 * The roles are meant to provide some guidance on how to use the data after extraction from Fabric.
 * For example a length calculation makes sense for a "vector" role but not for a "position" role.
 * Some of the roles correspond to an equivalent role in USD, others are Fabric-specific.
 */
enum class AttributeRole : uint8_t
{
    eNone = 0, //!< No special role
    eVector, //!< A vector in space
    eNormal, //!< A surface normal
    ePosition, //!< A position in space
    eColor, //!< A color representation
    eTexCoord, //!< Texture coordinates
    eQuaternion, //!< A 4d quaternion vector
    eTransform, //!< (DEPRECATED)
    eFrame, //!< A 4x4 matrix representing a coordinate frame
    eTimeCode, //!< A double value representing a time code

    // eText is not a USD role. If a uchar[] attribute has role eText then
    // the corresponding USD attribute will have type "string", and be human
    // readable in USDA. If it doesn't, then it will have type "uchar[]" in USD
    // and appear as an array of numbers in USDA.
    eText, //!< (Non-USD) Interpret uchar-array as a string
    eAppliedSchema, //!< (Non-USD) eTag representing a USD applied schema
    ePrimTypeName, //!< (Non-USD) eTag representing a USD prim type
    eExecution, //!< (Non-USD) UInt value used for control flow in OmniGraph Action Graphs
    eMatrix, //!< A 4x4 or 3x3 matrix
    eObjectId, //!< (Non-USD) UInt64 value used for Python object identification
    eBundle, //!< (Non-USD) Representation of the OmniGraph "bundle" type, which is a set of attributes
    ePath, //!< (Non-USD) uchar-array representing a string that is interpreted as a USD Sdf Path
    eInstancedAttribute, //!< (Non-USD) eTag used in place of attribute types on instanced prims
    eAncestorPrimTypeName, //!< (Non-USD) eTag representing an ancestor type of a USD prim type
    eTarget, //!< (Non-USD) eRelationship representing path data in OmniGraph
    eUnknown,

    eCount // for compile-time checks
};
```

## Special Types

While most of the correlations between data types and roles are obvious, e.g. a **float[3]** and a **pointf[3]** are basically the same thing, there are a few combinations that require more explanation.

### String

String types have a base data type of **unsigned character**, a component count of 1, an array depth of 1, and a role of **text**. You can probably see why they are defined this way, but the implication is that even though it is literally an array type, conceptually it is a single value of type **string**, so there is often special case code to handle this specific type.

Arrays of strings are not possible, as that would require the currently unsupported array depth of 2. There is an alternative that provides a similar data type, which is an array of tokens. The only downside is that tokens require conversion back into a string for any kind of manipulation, and they are immutable, so you have to create a brand new string and convert it back into a token if you modify it.

### Path

A **path** type is the same as a string described above, except that it has a role of **path** instead of **text**. While use of it is the same, the interpretation of it is more restrictive in that it can be considered to be an Sdf Path representing a reference to some other data in Fabric or in USD.

### Bundle

A **bundle** type has a base data type of **relationship** because the bundle data itself is just a reference to a collection of other data. The data value is an opaque handle to the [IBundle2](https://docs.omniverse.nvidia.com/kit/docs/omni.graph.core/latest/_build/docs/omni.graph.core/latest/classomni_1_1graph_1_1core_1_1IBundle2__abi.html#_CPPv4N4omni5graph4core12IBundle2_abiE) / [IBundle2](https://docs.omniverse.nvidia.com/kit/docs/omni.graph/latest/omni.graph.core/omni.graph.core.IBundle2.html#omni.graph.core.IBundle2) interface in Omniverse Kit, which provides access to all of the data referenced by the bundle.

### Execution

An *execution* type is a special type only recognized by an Action Graph node. Attributes with this type are trigger points that either accept a signal from another node to begin executing, or send a signal to another node to begin executing. It has a numeric value so that some small amount of information can be encoded into the signal.

### Target

A *target* represents a path object that points to an object in Fabric or on the USD stage.
It is similar to the *path* attribute except that its underlying data provides a direct handle to the object, so if the object is renamed or moved around in the scene hierarchy the path object stays intact and still points to the same underlying object. The base data type is *relationship*, as it is for the *bundle* role.

## Union and Any

These types are not explicitly represented as data types because they are not data types themselves; they are representations of the possible data types an attribute can take. All of that is handled at a higher level, with the data type being a simple *token* for both of them. Attributes created with either of these types will have an extended attribute type (C++/Python) set. The union type will additionally carry data defining the set of attribute types it can accept. The list of accepted types for unions includes the base types, with combinations built up from a configuration file:

```json
{
    "unionDefinitions": {
        "$description": [
            "This file contains the definitions for attribute union groups. Attribute Union Groups are convenient groupings of related attribute types",
            "Each entry contains an union name as the key, and a list of the string representations for corresponding ogn types, or the names of other",
            "unions to include. Valid dictionary values are strings, lists of strings, or dictionaries.",
            "In the case of dictionaries, it must contain two keys:",
            "  'entries', whose values are attribute type names or union names, and",
            "  'append', whose value is a string to append to each resolved value in 'entries.'"
        ],
        "integral_scalers": ["uchar", "int", "uint", "uint64", "int64"],
        "integral_tuples": ["int[2]", "int[3]", "int[4]"],
        "decimal_scalers": ["double", "float", "half", "timecode"],
        "decimal_tuples": [
            "double[2]", "double[3]", "double[4]",
            "float[2]", "float[3]", "float[4]",
            "half[2]", "half[3]", "half[4]",
            "colord[3]", "colord[4]", "colorf[3]", "colorf[4]", "colorh[3]", "colorh[4]",
            "normald[3]", "normalf[3]", "normalh[3]",
            "pointd[3]", "pointf[3]", "pointh[3]",
            "texcoordd[2]", "texcoordd[3]", "texcoordf[2]", "texcoordf[3]", "texcoordh[2]", "texcoordh[3]",
            "quatd[4]", "quatf[4]", "quath[4]",
            "vectord[3]", "vectorf[3]", "vectorh[3]"
        ],
        "matrices": ["matrixd[2]", "matrixd[3]", "matrixd[4]", "transform[4]", "frame[4]"],
        "integral_array_elements": ["integral_scalers", "integral_tuples"],
        "integral_arrays": { "entries": ["integral_array_elements"], "append": "[]" },
        "integrals": ["integral_array_elements", "integral_arrays"],
        "decimal_array_elements": ["decimal_scalers", "decimal_tuples"],
        "decimal_arrays": { "entries": ["decimal_array_elements"], "append": "[]" },
        "decimals": ["decimal_array_elements", "decimal_arrays"],
        "numeric_scalers": ["integral_scalers", "decimal_scalers"],
        "numeric_tuples": ["integral_tuples", "decimal_tuples"],
        "numeric_array_elements": ["numeric_scalers", "numeric_tuples", "matrices"],
        "numeric_arrays": { "entries": ["numeric_array_elements"], "append": "[]" },
        "numerics": ["numeric_array_elements", "numeric_arrays"],
        "array_elements": ["numeric_array_elements", "token"],
        "arrays": ["numeric_arrays", "token[]"],
        "strings": ["path", "string", "token", "token[]"]
    }
}
```

## OGN and SDF type names

Though the underlying *Type* representations are consistent, the string representation of the type name varies slightly depending on whether you are accessing it for OmniGraph or for USD.
For example a floating point value with three components in OmniGraph is represented as **float[3]** but in USD is represented as **float3**. In most places that accept type names the OmniGraph style is preferred, but many also accept the USD style. A .ogn file will always use the OmniGraph style names. There is also a utility that yields a slightly different variation, dubbed the Fabric style, in `omni::fabric::Type::getTypeName()`.

This is the complete list of the differences between OmniGraph and USD style names.

| OmniGraph | USD |
|-----------|-----|
| float[2], float[3], float[4] | float2, float3, float4 |
| float[2][], float[3][], float[4][] | float2[], float3[], float4[] |
| double[2], double[3], double[4] | double2, double3, double4 |
| double[2][], double[3][], double[4][] | double2[], double3[], double4[] |
| half[2], half[3], half[4] | half2, half3, half4 |
| half[2][], half[3][], half[4][] | half2[], half3[], half4[] |
| int[2], int[3], int[4] | int2, int3, int4 |
| int[2][], int[3][], int[4][] | int2[], int3[], int4[] |
| matrixd[2], matrixd[3], matrixd[4] | matrix2d, matrix3d, matrix4d |
| matrixd[2][], matrixd[3][], matrixd[4][] | matrix2d[], matrix3d[], matrix4d[] |
| frame[4] | frame |
| frame[4][] | frame[] |
| colord[3], colord[4] | color3d, color4d |
| colord[3][], colord[4][] | color3d[], color4d[] |
| colorf[3], colorf[4] | color3f, color4f |
| colorf[3][], colorf[4][] | color3f[], color4f[] |
| colorh[3], colorh[4] | color3h, color4h |
| colorh[3][], colorh[4][] | color3h[], color4h[] |
| normald[3] | normal3d |
| normald[3][] | normal3d[] |
| normalf[3] | normal3f |
| normalf[3][] | normal3f[] |
| normalh[3] | normal3h |
| normalh[3][] | normal3h[] |
| pointd[3] | point3d |
| pointd[3][] | point3d[] |
| pointf[3] | point3f |
| pointf[3][] | point3f[] |
| pointh[3] | point3h |
| pointh[3][] | point3h[] |
| quatd[4] | quatd |
| quatd[4][] | quatd[] |
| quatf[4] | quatf |
| quatf[4][] | quatf[] |
| quath[4] | quath |
| quath[4][] | quath[] |
| texcoordd[2], texcoordd[3] | texCoord2d, texCoord3d |
| texcoordd[2][], texcoordd[3][] | texCoord2d[], texCoord3d[] |
| texcoordf[2], texcoordf[3] | texCoord2f, texCoord3f |
| texcoordf[2][], texcoordf[3][] | texCoord2f[], texCoord3f[] |
| texcoordh[2], texcoordh[3] | texCoord2h, texCoord3h |
| texcoordh[2][], texcoordh[3][] | texCoord2h[], texCoord3h[] |
| vectord[3], vectord[4] | vector3d, vector4d |
| vectord[3][], vectord[4][] | vector3d[], vector4d[] |
| vectorf[3], vectorf[4] | vector3f, vector4f |
| vectorf[3][], vectorf[4][] | vector3f[], vector4f[] |
| vectorh[3], vectorh[4] | vector3h, vector4h |
| vectorh[3][], vectorh[4][] | vector3h[], vector4h[] |

See also the OGN documentation on attribute types for how these types are used in a .ogn file and in running code.

## IAttributeType Interface

In addition to the basic structure there is a Carbonite interface class that provides a few simple utilities for converting between the OGN and Sdf representations of a type name, as well as for processing the strings that define the various union type descriptions. The documentation for omni::graph::core::IAttributeType / omni.graph.core.AttributeType has full details on what it can do.
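To make the conversion concrete, here is a small Python sketch. It assumes the `type_from_ogn_type_name` and `type_from_sdf_type_name` helpers exposed by `omni.graph.core.AttributeType` behave as described in that interface's documentation; treat it as illustrative rather than canonical.

```python
import omni.graph.core as og

# The same underlying Type can be reached from either naming style
ogn_style = og.AttributeType.type_from_ogn_type_name("float[3]")  # OmniGraph style
sdf_style = og.AttributeType.type_from_sdf_type_name("float3")    # USD/Sdf style

# Both should describe a 3-component single-precision float with no array depth
assert ogn_style == sdf_style
```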
# Attribute Data Types

The attribute data type is the most important part of the attribute. It describes the type of data the attribute references, and the type of generated interface the node writer will use to access that data. Attribute data at its core consists of a short list of data types, called [Base Data Types](#base-data-types). These types encapsulate a single value, such as a float or integer.

!!! warning
    Not all attribute types may be supported by the code generator. For a list of currently supported types use the command `generate_node.py --help`.

!!! note
    The information here is for the default type definitions. You can override the type definitions using a configuration file whose format is shown in [Type Definition Overrides](#type-definition-overrides).

!!! important
    When extracting bundle members in C++ you'll be passing in a template type to get the value. That is not the type shown here; these are the types you'll get as a return value. (e.g. pass in _OgnToken_ to get a return value of _NameToken_, or _float[]_ to get a return value of _ogn::array&lt;float&gt;_.)

## Base Data Types

This table shows the conversion of the **Type Name**, which is how the attribute type appears in the .ogn file `type` value of the attribute, to the various data types of the other locations where the attribute might be referenced.

| Type Name | USD | C++ | CUDA | Python | JSON | Description |
|-----------|-----|-----|------|--------|------|-------------|
| bool | bool | bool | bool* | bool | bool | True/False |
| double | double | double | double* | float | float | Double precision floating point value |
| float | float | float | float* | float | float | Single precision (32 bit) floating point value |
| half | half | pxr::GfHalf | __half* | float | float | 16 bit floating point value |
| int | int | int32_t | int* | int | integer | 32 bit signed integer |
| int64 | int64 | int64_t | longlong* | int | integer | 64 bit signed integer |
| path | string | ogn::string | ogn::string | str | string | Path to another node or attribute |
| string | string | ogn::string | ogn::string | str | string | Standard string |
| token | token | NameToken | NameToken* | str | string | Interned string with fast comparison and hashing |
| uchar | uchar | uchar_t | uchar_t* | int | integer | 8 bit unsigned integer |
| uint | uint | uint32_t | uint32_t* | int | integer | 32 bit unsigned integer |
| uint64 | uint64 | uint64_t | uint64_t* | int | integer | 64 bit unsigned integer |

Note: For C++ types on input attributes a `const` is prepended to the simple types.

In addition to the standard numbers, floating point values accept the special values `inf`, `-inf`, `nan`, and `snan`, meaning +Infinity, -Infinity, QuietNaN, and SignalingNaN, respectively. They translate into the nearest representation of the value according to the type used, for example `std::numeric_limits<double>::infinity()` for a C++ double-valued infinity, and `float('Inf')` for the same thing in Python.
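As a quick illustration of how those special values behave on the Python side, the sketch below uses only the standard library; the names are purely illustrative and the point is just the value semantics (Python has no distinct signaling NaN, so `snan` maps to the same `nan` representation).

```python
import math

# Python equivalents of the special .ogn default values named above
special_values = {
    "inf": float("inf"),    # +Infinity
    "-inf": float("-inf"),  # -Infinity
    "nan": float("nan"),    # QuietNaN (Python does not distinguish a signaling NaN)
}

assert math.isinf(special_values["inf"]) and special_values["inf"] > 0
assert math.isinf(special_values["-inf"]) and special_values["-inf"] < 0
assert math.isnan(special_values["nan"])
```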
Here are samples of base data type values in the various languages:

- **USD**:

```usd
custom int inputs:myInt = 1
custom float inputs:myFloat = 1.0
```

- **C++**:

```cpp
// The value type a C++ node implementation uses to access the attribute's data
static bool compute(OgnMyNodeDatabase& db)
{
    const int& iValue = db.inputs.myInt();
    const float& fValue = db.inputs.myFloat();
}
```

```c++
// The value type a C++ node implementation uses to pass the attribute's data to CUDA code. Note the use of attribute
// type definitions to make the function declarations more consistent.
extern "C" void runCUDAcompute(inputs::myInt_t*, inputs::myFloat_t*);
static bool compute(OgnMyNodeDatabase& db)
{
    const int* iValue = db.inputs.myInt();
    const float* fValue = db.inputs.myFloat();
    runCUDAcompute( iValue, fValue );
}
```

```c++
extern "C" void runCUDAcompute(inputs::myInt_t* intValue, inputs::myFloat_t* fValue)
{
}
```

- **Python**:

```python
# The value used by the Python typing system to provide a hint about the expected data type
@property
def myInt(self) -> int:
    return attributeValues.myInt

@property
def myFloat(self) -> float:
    return attributeValues.myFloat
```

- **OGN**:

```json
{
    "myNode": {
        "description": ["This is my node with one integer and one float input"],
        "version": 1,
        "inputs": {
            "myInt": {
                "description": ["This is an integer attribute"],
                "type": "int",
                "default": 0
            },
            "myFloat": {
                "description": ["This is a float attribute"],
                "type": "float",
                "default": 0.0
            }
        }
    }
}
```

## Array Data Types

An array type is a list of another data type with indeterminate length, analogous to a `std::vector` in C++ or a `list` in Python. Any of the base data types can be made into an array type by appending square brackets (`[]`) to the type name. For example an array of integers has type `int[]` and an array of floats has type `float[]`. The JSON schema type is "array" with the type of the array's "items" being the base type, although in the file it will just look like `[VALUE, VALUE, VALUE]`. Python uses the _numpy_ library to return both tuple and array data types.
| Type Name | USD | C++ | CUDA | Python | JSON | Description |
|-----------|-----|-----|------|--------|------|-------------|
| bool[] | bool[] | ogn::array&lt;bool&gt; | bool*,size_t | numpy.ndarray[numpy.bool] | bool[] | Array of True/False |
| double[] | double[] | ogn::array&lt;double&gt; | double*,size_t | numpy.ndarray[numpy.float64] | float[] | Array of double precision floating point values |
| float[] | float[] | ogn::array&lt;float&gt; | float*,size_t | numpy.ndarray[numpy.float32] | float[] | Array of 32 bit floating point values |
| half[] | half[] | ogn::array&lt;pxr::GfHalf&gt; | __half*,size_t | numpy.ndarray[numpy.float16] | float[] | Array of 16 bit floating point values |
| int[] | int[] | ogn::array&lt;int32_t&gt; | int*,size_t | numpy.ndarray[numpy.int32] | integer[] | Array of 32 bit signed integers |
| int64[] | int64[] | ogn::array&lt;int64_t&gt; | longlong*,size_t | numpy.ndarray[numpy.int64] | integer[] | Array of 64 bit signed integers |
| token[] | token[] | ogn::array&lt;NameToken&gt; | NameToken*,size_t | numpy.ndarray[numpy.str] | string[] | Array of interned strings |
| uchar[] | uchar[] | ogn::array&lt;uchar_t&gt; | uchar_t*,size_t | numpy.ndarray[numpy.uint8] | integer[] | Array of 8 bit unsigned integers |
| uint[] | uint[] | ogn::array&lt;uint32_t&gt; | uint32_t*,size_t | numpy.ndarray[numpy.uint32] | integer[] | Array of 32 bit unsigned integers |
| uint64[] | uint64[] | ogn::array&lt;uint64_t&gt; | uint64_t*,size_t | numpy.ndarray[numpy.uint64] | integer[] | Array of 64 bit unsigned integers |

Note: For C++ types on input attributes the array type is `ogn::const_array`.

Here are samples of array data type values in the various languages:

- **USD**:

```usd
custom int[] inputs:myIntArray = [1, 2, 3]
custom float[] inputs:myFloatArray = [1.0, 2.0, 3.0]
```

- **C++**:

```cpp
static bool compute(OgnMyNodeDatabase& db)
{
    ogn::const_array<int> const& iValue = db.inputs.myIntArray();
    auto const& fValue = db.inputs.myFloatArray();
}
```

- **CUDA**:

```cpp
extern "C" void runCUDAcompute(inputs::myIntArray_t*, size_t, inputs::myFloatArray_t*, size_t);
static bool compute(OgnMyNodeDatabase& db)
{
    int const* iValue = db.inputs.myIntArray();
    auto iSize = db.inputs.myIntArray.size();
    auto const fValue = db.inputs.myFloatArray();
    auto fSize = db.inputs.myFloatArray.size();
    runCUDAcompute( iValue, iSize, fValue, fSize );
}
```

```cpp
extern "C" void runCUDAcompute(inputs::myIntArray_t* iArray, size_t iSize, inputs::myFloatArray_t* fArray, size_t fSize)
{
    // In here it is true that the number of elements in iArray = iSize
}
```

- **Python**:

```python
import numpy as np

@property
def myIntArray(self) -> np.ndarray[np.int32]:
    return attributeValues.myIntArray

@property
def myFloatArray(self) -> np.ndarray[np.float32]:
    return attributeValues.myFloatArray
```

- **OGN**:

```json
{
    "myNode": {
        "description": ["This is my node with one integer array and one float array input"],
        "version": 1,
        "inputs": {
            "myIntArray": {
                "description": ["This is an integer array attribute"],
                "type": "int[]",
                "default": [1, 2, 3]
            },
            "myFloatArray": {
                "description": ["This is a float array attribute"],
                "type": "float[]",
                "default": [1.0, 2.0, 3.0]
            }
        }
    }
}
```

## Tuple Data Types

A tuple type is a list of another data type with fixed length, analogous to a `std::array` in C++ or a `tuple` in Python. Not every type can be a tuple, and the tuple count is restricted to a small subset of those supported by USD.
They are denoted by appending square brackets containing the tuple count to the type name. For example a tuple of two integers has type `int[2]` and a tuple of three floats has type `float[3]`. Since tuple types are implemented in C++ as raw data there is no differentiation between the types returned by input versus output attributes, just a `const` qualifier.

| Type Name | USD | C++ | CUDA | Python | JSON | Description |
|-----------|-----|-----|------|--------|------|-------------|
| double[2] | double2 | pxr::GfVec2d | double2* | numpy.ndarray[numpy.float64](2,) | [float,float] | 2-tuple of double precision floating point values |
| double[3] | double3 | pxr::GfVec3d | double3* | numpy.ndarray[numpy.float64](3,) | [float,float,float] | 3-tuple of double precision floating point values |
| double[4] | double4 | pxr::GfVec4d | double4* | numpy.ndarray[numpy.float64](4,) | [float,float,float,float] | 4-tuple of double precision floating point values |
| float[2] | float2 | pxr::GfVec2f | float2* | numpy.ndarray[numpy.float32](2,) | [float,float] | 2-tuple of 32 bit floating point values |
| float[3] | float3 | pxr::GfVec3f | float3* | numpy.ndarray[numpy.float32](3,) | [float,float,float] | 3-tuple of 32 bit floating point values |
| float[4] | float4 | pxr::GfVec4f | float4* | numpy.ndarray[numpy.float32](4,) | [float,float,float,float] | 4-tuple of 32 bit floating point values |
| half[2] | half2 | pxr::GfVec2h | __half2* | numpy.ndarray[numpy.float16](2,) | [float,float] | 2-tuple of 16 bit floating point values |
| half[3] | half3 | pxr::GfVec3h | __half3* | numpy.ndarray[numpy.float16](3,) | [float,float,float] | 3-tuple of 16 bit floating point values |
| half[4] | half4 | pxr::GfVec4h | __half4* | numpy.ndarray[numpy.float16](4,) | [float,float,float,float] | 4-tuple of 16 bit floating point values |
| int[2] | int2 | pxr::GfVec2i | int2* | numpy.ndarray[numpy.int32](2,) | [integer,integer] | 2-tuple of 32 bit signed integer values |
| int[3] | int3 | pxr::GfVec3i | int3* | numpy.ndarray[numpy.int32](3,) | [integer,integer,integer] | 3-tuple of 32 bit signed integer values |
| int[4] | int4 | pxr::GfVec4i | int4* | numpy.ndarray[numpy.int32](4,) | [integer,integer,integer,integer] | 4-tuple of 32 bit signed integer values |

Note: Owing to this implementation as a wrapper around raw data, all of these types can also be safely cast to other types that have an equivalent memory layout.
For example:

```cpp
MyFloat3& f3 = reinterpret_cast<MyFloat3&>(db.inputs.myFloat3Attribute());
```

Here's an example of how the class, typedef, USD, and CUDA types relate:

```cpp
GfVec3f const& fRaw = db.inputs.myFloat3();
ogn::float3 const& fOgn = reinterpret_cast<ogn::float3 const&>(fRaw);
carb::Float3 const& fCarb = reinterpret_cast<carb::Float3 const&>(fOgn);
vectorOperation( fCarb.x, fCarb.y, fCarb.z );
```

```c
callCUDAcode( fRaw );
```

```c++
extern "C" void callCUDAcode(float3 myFloat3) {...}
```

Here are samples of tuple data type values in the various languages:

- **USD**

```usd
custom int2 inputs:myIntTuple = (1, 2)
custom float3 inputs:myFloatTuple = (1.0, 2.0, 3.0)
```

- **C++**

```c++
static bool compute(OgnMyNodeDatabase& db)
{
    GfVec2i const& iValue = db.inputs.myIntTuple();
    GfVec3f const& fValue = db.inputs.myFloatTuple();
}
```

- **CUDA**

```c++
// Note how the signatures are not identical between the declaration here and the
// definition in the CUDA file. This is possible because the data types have identical
// memory layouts, in this case equivalent to int[2] and float[3].
extern "C" void runCUDAcompute(pxr::GfVec2i* iTuple, pxr::GfVec3f* fTuple);
static bool compute(OgnMyNodeDatabase& db)
{
    runCUDAcompute( db.inputs.myIntTuple(), db.inputs.myFloatTuple() );
}
```

```c++
extern "C" void runCUDAcompute(int2* iTuple, float3* fTuple)
{
}
```

- **Python**

```python
import numpy as np

@property
def myIntTuple(self) -> np.ndarray[np.int32]:
    return attributeValues.myIntTuple

@property
def myFloatTuple(self) -> np.ndarray[np.float32]:
    return attributeValues.myFloatTuple
```

- **OGN**

```json
{
    "myNode" : {
        "description" : ["This is my node with one integer tuple and one float tuple input"],
        "version" : 1,
        "inputs" : {
            "myIntTuple" : {
                "description" : ["This is an integer tuple attribute"],
                "type" : "int[2]",
                "default" : [1, 2]
            },
            "myFloatTuple" : {
                "description" : ["This is a float tuple attribute"],
                "type" : "float[3]",
                "default" : [1.0, 2.0, 3.0]
            }
        }
    }
}
```

## Tuple Array Data Types

Tuples can also be made into arrays by appending `[]` to the tuple type name, combining the behavior of tuples and arrays.

| Type Name | USD | C++ | CUDA | Python | JSON | Description |
|-----------|-----|-----|------|--------|------|-------------|
| half[2][] | half2[] | ogn::array&lt;pxr::GfVec2h&gt; | __half2**,size_t | numpy.ndarray[numpy.float16](N,2) | [float,float][] | Array of 2-tuples of 16 bit floating point values |
| half[3][] | half3[] | ogn::array&lt;pxr::GfVec3h&gt; | __half3**,size_t | numpy.ndarray[numpy.float16](N,3) | [float,float,float][] | Array of 3-tuples of 16 bit floating point values |
| half[4][] | half4[] | ogn::array&lt;pxr::GfVec4h&gt; | __half4**,size_t | numpy.ndarray[numpy.float16](N,4) | [float,float,float,float][] | Array of 4-tuples of 16 bit floating point values |
| int[2][] | int2[] | ogn::array&lt;pxr::GfVec2i&gt; | int2**,size_t | numpy.ndarray[numpy.int32](N,2) | [integer,integer][] | Array of 2-tuples of 32 bit signed integer values |
| int[3][] | int3[] | ogn::array&lt;pxr::GfVec3i&gt; | int3**,size_t | numpy.ndarray[numpy.int32](N,3) | [integer,integer,integer][] | Array of 3-tuples of 32 bit signed integer values |
| int[4][] | int4[] | ogn::array&lt;pxr::GfVec4i&gt; | int4**,size_t | numpy.ndarray[numpy.int32](N,4) | [integer,integer,integer,integer][] | Array of 4-tuples of 32 bit signed integer values |

Here are samples of arrays of tuple data type values in the various languages:

- **USD**

```usd
custom int2[] inputs:myIntTupleArray = [(1, 2), (3, 4), (5, 6)]
custom float3[] inputs:myFloatTupleArray = [(1.0, 2.0, 3.0)]
```

- **C++**

```cpp
static bool compute(OgnMyNodeDatabase& db)
{
    ogn::const_array<GfVec2i> const& iValue = db.inputs.myIntTupleArray();
    ogn::const_array<GfVec3f> const& fValue = db.inputs.myFloatTupleArray();
    // or auto const& fValue = db.inputs.myFloatTupleArray();
}
```

- **CUDA**

```cpp
extern "C" void runCUDAcompute(inputs::myIntTupleArray_t* iTuple, size_t iSize,
                               inputs::myFloatTupleArray_t* fTuple, size_t fSize);
static bool compute(OgnMyNodeDatabase& db)
{
    runCUDAcompute(db.inputs.myIntTupleArray(), db.inputs.myIntTupleArray.size(),
                   db.inputs.myFloatTupleArray(), db.inputs.myFloatTupleArray.size());
}
```

```c++
extern "C" void runCUDAcompute(int2** iTuple, size_t iSize, float3** fTuple, size_t fSize)
{
}
```

- **Python**

```python
import numpy as np

@property
def myIntTupleArray(self) -> np.ndarray:
    return attributeValues.myIntTupleArray

@property
def myFloatTupleArray(self) -> np.ndarray:
    return attributeValues.myFloatTupleArray
```

- **OGN**

```json
{
    "myNode": {
        "description": ["This is my node with one integer tuple array and one float tuple array input"],
        "version": 1,
        "inputs": {
            "myIntTupleArray": {
                "description": ["This is an integer tuple array attribute"],
                "type": "int[2][]",
                "default": [[1, 2], [3, 4], [5, 6]]
            },
            "myFloatTupleArray": {
                "description": ["This is a float tuple array attribute"],
                "type": "float[3][]",
                "default": []
            }
        }
    }
}
```

## Attribute Types With Roles

Some attributes have specific interpretations that are useful for determining how to use them at runtime. These roles are encoded into the type names for simplicity.

**Note**: The fundamental data in the attributes is unchanged when an AttributeRole is set. Adding the role just allows the interpretation of that data as a first class object of a non-trivial type. The "C++" column in the table below shows how the underlying data is represented.

For simplicity of specification, the type of base data is encoded in the type name, e.g. `colord` for colors using double values and `colorf` for colors using float values.

| Type Name | USD | C++ | CUDA | Python | JSON | Description |
|-----------|-----|-----|------|--------|------|-------------|
| colord[3] | color3d | GfVec3d | double3 | numpy.ndarray[numpy.float64](3,) | [float,float,float] | Color value with 3 members of type double precision float |
| colorf[3] | color3f | GfVec3f | float3 | numpy.ndarray[numpy.float32](3,) | [float,float,float] | Color value with 3 members of type 32 bit float |
| colorh[3] | color3h | GfVec3h | __half3 | numpy.ndarray[numpy.float16](3,) | [float,float,float] | Color value with 3 members of type 16 bit float |
| colord[4] | color4d | GfVec4d | double4 | numpy.ndarray[numpy.float64](4,) | [float,float,float,float] | Color value with 4 members of type double precision float |
| colorf[4] | color4f | GfVec4f | float4 | numpy.ndarray[numpy.float32](4,) | [float,float,float,float] | Color value with 4 members of type 32 bit float |
| colorh[4] | color4h | GfVec4h | __half4 | numpy.ndarray[numpy.float16](4,) | [float,float,float,float] | Color value with 4 members of type 16 bit float |
| execute | int64 | int64_t | int64 | int | integer | Execution state stored as a 64 bit integer |
| frame[4] | frame4d | GfMatrix4d | Matrix4d | numpy.ndarray[numpy.float64](4,4) | [[float,float,float,float],[float,float,float,float],[float,float,float,float],[float,float,float,float]] | Coordinate frame with 16 members of type double precision float |
| matrixd[2] | matrix2d | GfMatrix2d | Matrix2d | numpy.ndarray[numpy.float64](2,2) | [[float,float],[float,float]] | Transform matrix with 4 members of type double precision float |
| matrixd[3] | matrix3d | GfMatrix3d | Matrix3d | numpy.ndarray[numpy.float64](3,3) | [[float,float,float],[float,float,float],[float,float,float]] | Transform matrix with 9 members of type double precision float |
| matrixd[4] | matrix4d | GfMatrix4d | Matrix4d | numpy.ndarray[numpy.float64](4,4) | [[float,float,float,float],[float,float,float,float],[float,float,float,float],[float,float,float,float]] | Transform matrix with 16 members of type double precision float |
| normald[3] | normal3d | GfVec3d | double3 | numpy.ndarray[numpy.float64](3,) | [float,float,float] | Normal vector with 3 members of type double precision float |
| normalf[3] | normal3f | GfVec3f | float3 | numpy.ndarray[numpy.float32](3,) | [float,float,float] | Normal vector with 3 members of type 32 bit float |
| normalh[3] | normal3h | GfVec3h | __half3 | numpy.ndarray[numpy.float16](3,) | [float,float,float] | Normal vector with 3 members of type 16 bit float |
| objectId | uint64 | uint64_t | uint64 | int | integer | Object identifier stored as a 64 bit unsigned integer |
| pointd[3] | point3d | GfVec3d | double3 | numpy.ndarray[numpy.float64](3,) | [float,float,float] | Cartesian point value with 3 members of type double precision float |
| pointf[3] | point3f | GfVec3f | float3 | numpy.ndarray[numpy.float32](3,) | [float,float,float] | Cartesian point value with 3 members of type 32 bit float |
| pointh[3] | point3h | GfVec3h | __half3 | numpy.ndarray[numpy.float16](3,) | [float,float,float] | Cartesian point value with 3 members of type 16 bit float |
| quatd[4] | quatd | GfQuatd | double4 | numpy.ndarray[numpy.float64](4,) | [float,float,float,float] | Quaternion with 4 members of type double precision float as IJKR |
| quatf[4] | quatf | GfQuatf | float4 | numpy.ndarray[numpy.float32](4,) | [float,float,float,float] | Quaternion with 4 members of type 32 bit float as IJKR |
| quath[4] | quath | GfQuath | __half4 | numpy.ndarray[numpy.float16](4,) | [float,float,float,float] | Quaternion with 4 members of type 16 bit float as IJKR |
| texcoordd[2] | texCoord2d | GfVec2d | double2 | numpy.ndarray[numpy.float64](2,) | [float,float] | Texture coordinate with 2 members of type double precision float |
| texcoordf[2] | texCoord2f | GfVec2f | float2 | numpy.ndarray[numpy.float32](2,) | [float,float] | Texture coordinate with 2 members of type 32 bit float |
| texcoordh[2] | texCoord2h | GfVec2h | __half2 | numpy.ndarray[numpy.float16](2,) | [float,float] | Texture coordinate with 2 members of type 16 bit float |
| texcoordd[3] | texCoord3d | GfVec3d | double3 | numpy.ndarray[numpy.float64](3,) | [float,float,float] | Texture coordinate with 3 members of type double precision float |
| texcoordf[3] | texCoord3f | GfVec3f | float3 | numpy.ndarray[numpy.float32](3,) | [float,float,float] | Texture coordinate with 3 members of type 32 bit float |
| texcoordh[3] | texCoord3h | GfVec3h | __half3 | numpy.ndarray[numpy.float16](3,) | [float,float,float] | Texture coordinate with 3 members of type 16 bit float |
| timecode | timecode | double | double | float | float | Double value representing a timecode |
| vectord[3] | vector3d | GfVec3d | double3 | numpy.ndarray[numpy.float64](3,) | [float,float,float] | Vector with 3 members of type double precision float |
| vectorf[3] | vector3f | GfVec3f | float3 | numpy.ndarray[numpy.float32](3,) | [float,float,float] | Vector with 3 members of type 32 bit float |
| vectorh[3] | vector3h | GfVec3h | __half3 | numpy.ndarray[numpy.float16](3,) | [float,float,float] | Vector with 3 members of type 16 bit float |

Python and JSON do not have special types for role-based attributes, although that may change for Python once its interface is fully defined. The roles are all tuple types so the Python equivalents will all be of the form `Tuple[TYPE, TYPE…]`, and JSON data will be of the form `[TYPE, TYPE, TYPE]`. The types corresponding to the base types are seen above in *Base Data Types*.

The color role will serve for our example types here:

- **USD**:

```usd
custom color3d inputs:myColorRole = (1.0, 0.5, 1.0)
```

- **C++**:

```cpp
static bool compute(OgnMyNodeDatabase& db)
{
    GfVec3d const& colorValue = db.inputs.myColorRole();
    // or auto const& colorValue = db.inputs.myColorRole();
}
```

- **CUDA**:

```cpp
extern "C" void runCUDAcompute(pxr::GfVec3d* color);
static bool compute(OgnMyNodeDatabase& db)
{
    runCUDAcompute( db.inputs.myColorRole() );
}
```

```cpp
extern "C" void runCUDAcompute(double3* color)
{
}
```

- **Python**:

```python
import numpy as np

@property
def myColorRole(self) -> np.ndarray:
    return attributeValues.myColorRole
```

- **OGN**:

```json
{
    "myNode": {
        "description": ["This is my node with one color role input"],
        "version": 1,
        "inputs": {
            "myColorRole": {
                "description": ["This is a color role attribute"],
                "type": "colord[3]",
                "default": [0.0, 0.5, 1.0]
            }
        }
    }
}
```

### Arrays of Role-Based Data

Like base data types, there can also be arrays of role-based data, made by appending `[]` to the data type. The type names have the tuple specification followed by the array specification, e.g. `colord[3][]` for an array of 3d colors. JSON makes no distinction between arrays and tuples so it will be a multi-dimensional list. Both the Python and C++ tuple and array types nest for arrays of tuple types.
| Type Name | USD | C++ | CUDA | Python | JSON | Description |
|-----------|-----|-----|------|--------|------|-------------|
| colord[3][] | color3d[] | ogn::array&lt;GfVec3d&gt; | double3*,size_t | numpy.ndarray[numpy.float64](N,3) | [float,float,float][] | Array of color values with 3 members of type double precision float |
| colorf[3][] | color3f[] | ogn::array&lt;GfVec3f&gt; | float3*,size_t | numpy.ndarray[numpy.float32](N,3) | [float,float,float][] | Array of color values with 3 members of type 32 bit float |
| colorh[3][] | color3h[] | ogn::array&lt;GfVec3h&gt; | __half3*,size_t | numpy.ndarray[numpy.float16](N,3) | [float,float,float][] | Array of color values with 3 members of type 16 bit float |
| colord[4][] | color4d[] | ogn::array&lt;GfVec4d&gt; | double4*,size_t | numpy.ndarray[numpy.float64](N,4) | [float,float,float,float][] | Array of color values with 4 members of type double precision float |
| colorf[4][] | color4f[] | ogn::array&lt;GfVec4f&gt; | float4*,size_t | numpy.ndarray[numpy.float32](N,4) | [float,float,float,float][] | Array of color values with 4 members of type 32 bit float |
| colorh[4][] | color4h[] | ogn::array&lt;GfVec4h&gt; | __half4*,size_t | numpy.ndarray[numpy.float16](N,4) | [float,float,float,float][] | Array of color values with 4 members of type 16 bit float |
| frame[4][] | frame4d[] | ogn::array&lt;GfMatrix4d&gt; | Matrix4d*,size_t | numpy.ndarray[numpy.float64](N,4,4) | [[float,float,float,float],[float,float,float,float],[float,float,float,float],[float,float,float,float]][] | Array of coordinate frames with 16 members of type double precision float |
| matrixd[2][] | matrix2d[] | ogn::array&lt;GfMatrix2d&gt; | Matrix2d*,size_t | numpy.ndarray[numpy.float64](N,2,2) | [[float,float],[float,float]][] | Array of transform matrices with 4 members of type double precision float |
| matrixd[3][] | matrix3d[] | ogn::array&lt;GfMatrix3d&gt; | Matrix3d*,size_t | numpy.ndarray[numpy.float64](N,3,3) | [[float,float,float],[float,float,float],[float,float,float]][] | Array of transform matrices with 9 members of type double precision float |
| matrixd[4][] | matrix4d[] | ogn::array&lt;GfMatrix4d&gt; | Matrix4d*,size_t | numpy.ndarray[numpy.float64](N,4,4) | [[float,float,float,float],[float,float,float,float],[float,float,float,float],[float,float,float,float]][] | Array of transform matrices with 16 members of type double precision float |
| normald[3][] | normal3d[] | ogn::array&lt;GfVec3d&gt; | double3*,size_t | numpy.ndarray[numpy.float64](N,3) | [float,float,float][] | Array of normal vectors with 3 members of type double precision float |
| normalf[3][] | normal3f[] | ogn::array&lt;GfVec3f&gt; | float3*,size_t | numpy.ndarray[numpy.float32](N,3) | [float,float,float][] | Array of normal vectors with 3 members of type 32 bit float |
| normalh[3][] | normal3h[] | ogn::array&lt;GfVec3h&gt; | __half3*,size_t | numpy.ndarray[numpy.float16](N,3) | [float,float,float][] | Array of normal vectors with 3 members of type 16 bit float |
| objectId[] | uint64[] | ogn::array&lt;uint64_t&gt; | uint64*,size_t | numpy.ndarray[numpy.uint64](N,) | integer[] | Array of object identifiers stored as 64 bit unsigned integers |
| pointd[3][] | point3d[] | ogn::array&lt;GfVec3d&gt; | double3*,size_t | numpy.ndarray[numpy.float64](N,3) | [float,float,float][] | Array of cartesian point values with 3 members of type double precision float |
| pointf[3][] | point3f[] | ogn::array&lt;GfVec3f&gt; | float3*,size_t | numpy.ndarray[numpy.float32](N,3) | [float,float,float][] | Array of cartesian point values with 3 members of type 32 bit float |
| pointh[3][] | point3h[] | ogn::array&lt;GfVec3h&gt; | __half3*,size_t | numpy.ndarray[numpy.float16](N,3) | [float,float,float][] | Array of cartesian point values with 3 members of type 16 bit float |
| quatd[4][] | quatd[] | ogn::array&lt;GfQuatd&gt; | double4*,size_t | numpy.ndarray[numpy.float64](N,4) | [float,float,float,float][] | Array of quaternions with 4 members of type double precision float as IJKR |
| quatf[4][] | quatf[] | ogn::array&lt;GfQuatf&gt; | float4*,size_t | numpy.ndarray[numpy.float32](N,4) | [float,float,float,float][] | Array of quaternions with 4 members of type 32 bit float as IJKR |
| quath[4][] | quath[] | ogn::array&lt;GfQuath&gt; | __half4*,size_t | numpy.ndarray[numpy.float16](N,4) | [float,float,float,float][] | Array of quaternions with 4 members of type 16 bit float as IJKR |
| texcoordd[2][] | texCoord2d[] | ogn::array&lt;GfVec2d&gt; | double2*,size_t | numpy.ndarray[numpy.float64](N,2) | [float,float][] | Array of texture coordinates with 2 members of type double precision float |
| texcoordf[2][] | texCoord2f[] | ogn::array&lt;GfVec2f&gt; | float2*,size_t | numpy.ndarray[numpy.float32](N,2) | [float,float][] | Array of texture coordinates with 2 members of type 32 bit float |
| texcoordh[2][] | texCoord2h[] | ogn::array&lt;GfVec2h&gt; | __half2*,size_t | numpy.ndarray[numpy.float16](N,2) | [float,float][] | Array of texture coordinates with 2 members of type 16 bit float |
| texcoordd[3][] | texCoord3d[] | ogn::array&lt;GfVec3d&gt; | double3*,size_t | numpy.ndarray[numpy.float64](N,3) | [float,float,float][] | Array of texture coordinates with 3 members of type double precision float |
| texcoordf[3][] | texCoord3f[] | ogn::array&lt;GfVec3f&gt; | float3*,size_t | numpy.ndarray[numpy.float32](N,3) | [float,float,float][] | Array of texture coordinates with 3 members of type 32 bit float |
| texcoordh[3][] | texCoord3h[] | ogn::array&lt;GfVec3h&gt; | __half3*,size_t | numpy.ndarray[numpy.float16](N,3) | [float,float,float][] | Array of texture coordinates with 3 members of type 16 bit float |
| timecode[] | timecode[] | ogn::array&lt;double&gt; | double*,size_t | numpy.ndarray[numpy.float64](N,) | float[] | Array of double values representing timecodes |
| vectord[3][] | vector3d[] | ogn::array&lt;GfVec3d&gt; | double3*,size_t | numpy.ndarray[numpy.float64](N,3) | [float,float,float][] | Array of vectors with 3 members of type double precision float |
| vectorf[3][] | vector3f[] | ogn::array&lt;GfVec3f&gt; | float3*,size_t | numpy.ndarray[numpy.float32](N,3) | [float,float,float][] | Array of vectors with 3 members of type 32 bit float |
| vectorh[3][] | vector3h[] | ogn::array&lt;GfVec3h&gt; | __half3*,size_t | numpy.ndarray[numpy.float16](N,3) | [float,float,float][] | Array of vectors with 3 members of type 16 bit float |

As above, the color role will serve for our example types here:

- **USD**

```usd
custom color3d[] inputs:myColorRoles = [(1.0, 0.5, 1.0), (0.5, 1.0, 0.5)]
```

- **C++**

```cpp
static bool compute(OgnMyNodeDatabase& db)
{
    ogn::array<GfVec3d> const& colorValues = db.inputs.myColorRoles();
    // or auto const& colorValues = db.inputs.myColorRoles();
}
```

- **CUDA**
```cpp
extern "C" void runCUDAcompute(pxr::GfVec3d** color, size_t arraySize);
static bool compute(OgnMyNodeDatabase& db)
{
    runCUDAcompute(db.inputs.myColorRoles(), db.inputs.myColorRoles().size());
}
```

```cpp
extern "C" void runCUDAcompute(double3** color, size_t arraySize)
{
}
```

- **Python**

```python
import numpy as np

@property
def myColorRoles(self) -> np.ndarray:
    return attributeValues.myColorRoles
```

- **OGN**

```json
{
    "myNode": {
        "description": ["This is my node with one color role array input"],
        "version": 1,
        "inputs": {
            "myColorRoles": {
                "description": ["This is a color role array attribute"],
                "type": "colord[3][]",
                "default": [[0.0, 0.5, 1.0], [0.5, 1.0, 0.5]]
            }
        }
    }
}
```

## Bundle Type Attributes

There is a special type of attribute whose type is *bundle*. This attribute represents a set of attributes whose contents can only be known at runtime. It can still be in a tuple, an array, or both. In itself it has no data in Fabric; its purpose is to be a container for a description of other attributes of any of the above types, or even other bundles.

- **USD**

```usd
custom rel inputs:inBundle (
    doc="The input bundle is a relationship, which comes from another bundle attribute"
)
custom rel "outputs_outBundle" (
    doc="The output bundle is a relationship, which is consumed by another bundle attribute"
)
custom rel "state_stateBundle" (
    doc="The state bundle is a relationship. It is an internal state of the node and can not be connected."
)
```

- **C++**

```cpp
static bool compute(OgnMyNodeDatabase& db)
{
    // The simplest method of breaking open a bundle is to get an attribute by name
    auto const& inBundle = db.inputs.inBundle();
    auto myFloat3Attribute = inBundle.attributeByName(db.stringToToken("myFloat3"));
    if (auto asFloat3Array = myFloat3Attribute.get<float[][3]>())
    {
        handleFloat3Array(asFloat3Array); // The data is of type float[][3]
    }

    // The bundle has iteration capabilities
    for (auto& bundledAttribute : inBundle)
    {
        // Use the type information to find the actual data type and then cast it
        if ((bundledAttribute.type().baseType == BaseDataType::eInt)
            && (bundledAttribute.type().componentCount == 1)
            && (bundledAttribute.type().arrayDepth == 0))
        {
            CARB_ASSERT(nullptr != bundledAttribute.get<int>());
        }
    }
}
```

- **CUDA**

```cpp
extern "C" void runCUDAcompute(float3** value, size_t iSize);
static bool compute(OgnMyNodeDatabase& db)
{
    auto const& myBundle = db.inputs.myBundle();
    auto myFloat3Attribute = myBundle.attributeByName(db.stringToToken("myFloat3"));
    if (auto asFloat3Array = myFloat3Attribute.get<float[][3]>())
    {
        runCUDAcompute(asFloat3Array.data(), asFloat3Array.size());
    }
    return true;
}
```

```cpp
extern "C" void runCUDAcompute(float3** value, size_t iSize)
{
}
```

- **Python**

```python
from omni.graph.core.types import AttributeTypes, Bundle, BundledAttribute

@staticmethod
def compute(db) -> bool:
    attribute_count = db.myBundle.attribute_count()
    for bundled_attribute in db.myBundle.attributes():
        if bundled_attribute.type.base_type == AttributeTypes.INT:
            deal_with_integers(bundled_attribute.value)
    return True
```

- **OGN**

```json
{
    "myNode": {
        "description": ["This is my node with one bundled input"],
        "version": 1,
        "inputs": {
            "myBundle": {
                "description": ["This is an input bundle attribute"],
                "type": "bundle"
            }
        }
    }
}
```

See the tutorials on Tutorial 15 - Bundle Manipulation and Tutorial 16 - Bundle Data for more details on manipulating a bundle and its attributes. It's worth noting that, as a bundle does not represent actual data, these attributes are not allowed to have a default value.
If a bundle attribute is defined to live on the GPU, either at all times or as a decision made at runtime, this is equivalent to stating that any attributes existing inside the bundle will live on the GPU using the same criteria.

## Relationship Type Attributes

A relationship type contains a pointer to a list of paths. These paths point to prims or properties on the stage. In USD these types will always be represented by the `rel` type. They currently do not support default values in JSON, nor can they be included in Extended Attribute Types.

| Type Name | USD | C++ | CUDA | Python | JSON | Description |
|-----------|-----|-----|------|--------|------|-------------|
| target | rel | TargetPath* | TargetPath*,size_t | usdrt.Sdf.Path[] | N/A | Reference to another prim on the USD stage |

- **USD**

```usd
custom rel inputs:rel (
    doc="""Targets relationship, which could come from a prim or another target attribute"""
)
custom rel outputs:rel (
    doc="""Relationship that returns a path or paths"""
)
```

- **C++**

```cpp
static bool compute(OgnMyNodeDatabase& db)
{
    const TargetPath* paths = db.inputs.rel();
}
```

- **CUDA**

```cpp
extern "C" void runCUDAcompute(TargetPath*, size_t);
static bool compute(OgnMyNodeDatabase& db)
{
    const TargetPath* iPaths = db.inputs.rel();
    auto iSize = db.inputs.rel.size();
    runCUDAcompute(iPaths, iSize);
}
```

```c++
extern "C" void runCUDAcompute(TargetPath* iPaths, size_t iSize)
{
    // In here it is true that the number of elements in iPaths = iSize
}
```

- **Python**

```python
import usdrt

@property
def myRelationship(self):
    return attributeValues.rel
```

- **OGN**

```json
{
    "myNode": {
        "description": ["This is my node with one target input"],
        "version": 1,
        "inputs": {
            "myTarget": {
                "description": ["This is an input target attribute"],
                "type": "target"
            }
        }
    }
}
```

## Extended Attribute Types

Some attribute types are only determined at runtime by the data they receive. These types include the "any" type, which is a single attribute that can be any of the above types, and the "union" type, which specifies a subset of the above types it can take on. (The astute will notice that "union" is a subset of "any".) Extended attribute types allow a single node to handle several different attribute-type configurations. For example a generic 'Cos' node may be able to compute the cosine of any decimal type.
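Before looking at the per-language samples, here is a hedged Python sketch of how a compute function might branch on the runtime-resolved type of such an extended input. The attribute name `inputs:value` is illustrative, not from the source; the `get_resolved_type()` call and `og.BaseDataType` enum are the same ones used in the resolution examples later in this document.

```python
import omni.graph.core as og

@staticmethod
def compute(db) -> bool:
    # Ask the graph what concrete type this extended attribute resolved to
    attr = db.node.get_attribute("inputs:value")
    resolved = attr.get_resolved_type()
    if resolved.base_type == og.BaseDataType.UNKNOWN:
        # Nothing is connected yet, so there is no concrete type to compute with
        return True
    if resolved.base_type == og.BaseDataType.DOUBLE:
        pass  # double-specific handling would go here
    return True
```

Here are samples of extended attribute types in the various languages: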
- **USD**

```usd
custom token inputs:floatOrInt
custom token inputs:floatArrayOrIntArray
custom token inputs:anyType
```

- **C++**

```c++
static bool compute(OgnMyNodeDatabase& db)
{
    // Casting can be used to find the actual data type the attribute contains
    // Different types are cast in the same way as bundle attributes
    auto const& floatOrInt = db.inputs.floatOrInt();
    bool isFloat = (nullptr != floatOrInt.get<float>());
    bool isInt = (nullptr != floatOrInt.get<int>());
    auto const& floatOrIntArray = db.inputs.floatOrIntArray();
}
```

- **CUDA**

```cpp
// Fragment: dispatching resolved data to type-specific CUDA kernels.
// (asIntArray and intMultiplier would be extracted from other extended inputs in the same way.)
auto intPower = db.inputs.anyType().get<int>();
runCUDAcomputeInt(asIntArray.data(), asIntArray.size(), intMultiplier, intPower);
```

```cpp
extern "C" void runCUDAcomputeFloat(float** value, size_t iSize, float* multiplier, float* power)
{
    // ...CUDA implementation
}
extern "C" void runCUDAcomputeInt(int** value, size_t iSize, int* multiplier, int* power)
{
    // ...CUDA implementation
}
```

- **Python**

```python
from typing import Any

import numpy

@property
def floatOrInt(self) -> int | float:
    return attributeValues.floatOrInt

@property
def floatOrIntArray(self) -> numpy.ndarray:
    return attributeValues.floatOrIntArray

@property
def anyType(self) -> Any:
    return attributeValues.anyType
```

- **OGN**

```json
{
    "myNode" : {
        "description" : "This is my node with some extended inputs",
        "version" : 1,
        "inputs" : {
            "anyType" : {
                "description" : "This attribute accepts any type of data, determined at runtime",
                "type" : "any"
            },
            "floatOrInt" : {
                "description" : "This attribute accepts either float or int values",
                "type" : ["float", "int"]
            },
            "floatOrIntArray" : {
                "description" : ["This attribute accepts an array of float or int values.",
                                 "All values in the array must be of the same type, like a regular array attribute."],
                "type" : ["float[]", "int[]"]
            }
        }
    }
}
```

## Types permitted in Extended Attributes

A few specialized types are not included by extended types, either as part of a union group or by an "any" extended type. Bundle, target, and execution types are explicitly excluded, and attributes defined by an extended type cannot be resolved to, or connected to, one of these types.

## Extended Attribute Union Groups

As described above, union extended types are specified by providing a list of types in the OGN definition. These lists can become quite long if a node can handle a large subset of the possible types. For convenience there are special type names that can be used inside the JSON list to denote groups of types.
For example:

```json
{
    "myNode": {
        "description": "This is my node using union group types",
        "version": 1,
        "inputs": {
            "decimal": {
                "description": "This attribute accepts double, float and half",
                "type": ["decimal_scalers"]
            }
        }
    }
}
```

## List of Attribute Union Groups

| Group Type Name | Type Members |
|-----------------|--------------|
| integral_scalers | uchar, int, uint, uint64, int64 |
| integral_tuples | int[2], int[3], int[4] |
| integral_array_elements | integral_scalers, integral_tuples |
| integral_arrays | arrays of integral_array_elements |
| integrals | integral_array_elements, integral_arrays |
| matrices | matrixd[3], matrixd[4], transform[4], frame[4] |
| decimal_scalers | double, float, half, timecode |
| decimal_tuples | double[2], double[3], double[4], float[2], float[3], float[4], half[2], half[3], half[4], colord[3], colord[4], colorf[3], colorf[4], colorh[3], colorh[4], normald[3], normalf[3], normalh[3], pointd[3], pointf[3], pointh[3], texcoordd[2], texcoordd[3], texcoordf[2], texcoordf[3], texcoordh[2], texcoordh[3], quatd[4], quatf[4], quath[4], vectord[3], vectorf[3], vectorh[3] |
| decimal_array_elements | decimal_scalers, decimal_tuples |
| decimal_arrays | arrays of decimal_array_elements |
| decimals | decimal_array_elements, decimal_arrays |
| numeric_scalers | integral_scalers, decimal_scalers |
| numeric_tuples | integral_tuples, decimal_tuples |
| numeric_array_elements | numeric_scalers, numeric_tuples, matrices |
| numeric_arrays | arrays of numeric_array_elements |
| numerics | numeric_array_elements, numeric_arrays |
| array_elements | numeric_array_elements, token |
| arrays | numeric_arrays, token[] |
| strings | path, string, token, token[] |

## Extended Attribute Resolution

Extended attributes are useful for improving the usability of nodes across different types. However, the node author has an extra responsibility to resolve the extended type attributes when possible, in order to resolve possible ambiguity in the graph. If graph connections are unresolved at execution time, the node's computation will be skipped.

There are various helpful Python APIs for type resolution, including `omni.graph.core.resolve_base_coupled()` and `omni.graph.core.resolve_fully_coupled()`, which allow you to match unresolved inputs to resolved inputs.

```python
@staticmethod
def on_connection_type_resolve(node) -> None:
    a_attr = node.get_attribute("inputs:a")
    result_attr = node.get_attribute("outputs:result")
    og.resolve_fully_coupled([a_attr, result_attr])
```

You can also define your own semantics for custom type resolution. The following node takes two decimals, a and b, and returns their product. If one input is at a lower "significance" than the other, the less significant one will be "promoted" to prevent loss of precision. For example, if the inputs are `float` and `double`, the output will be a `double`. See `omni.graph.core.Type` for more information about creating custom types.

```python
@staticmethod
def on_connection_type_resolve(node) -> None:
    a_type = node.get_attribute("inputs:a").get_resolved_type()
    b_type = node.get_attribute("inputs:b").get_resolved_type()
    product_attr = node.get_attribute("outputs:product")
    product_type = product_attr.get_resolved_type()
    # we can only infer the output given both inputs are resolved and they are the same.
    if (a_type.base_type != og.BaseDataType.UNKNOWN
            and b_type.base_type != og.BaseDataType.UNKNOWN
            and product_type.base_type == og.BaseDataType.UNKNOWN):
        if a_type.base_type == b_type.base_type:
            base_type = a_type.base_type
        else:
            decimals = [og.BaseDataType.HALF, og.BaseDataType.FLOAT, og.BaseDataType.DOUBLE]
            try:
                a_ix = decimals.index(a_type.base_type)
            except ValueError:
                a_ix = -1
            try:
                b_ix = decimals.index(b_type.base_type)
            except ValueError:
                b_ix = -1
            if a_ix >= 0 or b_ix >= 0:
                base_type = a_type.base_type if a_ix > b_ix else b_type.base_type
            else:
                base_type = og.BaseDataType.DOUBLE
        product_attr.set_resolved_type(
            og.Type(base_type,
                    max(a_type.tuple_count, b_type.tuple_count),
                    max(a_type.array_depth, b_type.array_depth)))
```

## Type Definition Overrides

The generated types provide a default implementation you can use out of the box. Sometimes you might have your own favorite library for type manipulation, so you can provide a type definition configuration file that modifies the return types used by the generated code. There are four ways you can implement type overrides.

1. Use the `typeDefinitions` flag on the `generate_node.py` script to point to the file containing the configuration.
2. Use the `"typeDefinitions": "ConfigurationFile.json"` keyword in the .ogn file to point a single node to a configuration.
3. Use the `"typeDefinitions": {TypeConfigurationDictionary}` keyword in the .ogn file to implement simple targeted overrides in a single node.
4. Add the name of the type definitions file to your premake5.lua file in `get_ogn_project_information("omni/test", "ConfigurationFile.json")` to modify the types for every node in your extension.

The format used for the type definition information is the same for all methods. Here is a sample, with an embedded explanation of how it is formatted.

```json
{
    "typeDefinitions": {
        "$description": [
            "This file contains the casting information that will tell the OGN code generator what kind of data",
            "types it should generate for all of the attribute types. Any that do not explicitly appear in this file",
            "will use the default USD types (e.g. float[3] or int[][2]). As with regular .ogn files the keywords",
            "starting with a '$' are ignored by the parser and can be used for adding explanatory comments, such as",
            "this one.",
            "",
            "This file provides the type cast configuration for using POD data types. Note that as there is no",
            "POD equivalent for special types, including the 'half' float, they are left as their defaults. So long",
            "as none of your nodes use them they will not bring in USD. Other types such as 'any', 'bundle', 'string',",
            "and 'token' have explicit OGN types."
        ],
        "c++": {
            "$description": [
                "This section contains cast information for C++ data types. These are the types returned by the",
                "database generated for nodes written in C++. Each entry consists of a key value corresponding to",
                "the attribute type as it appears in .ogn files, and a value pair consisting of the raw data type",
                "to which the attribute value should be cast and the include file required to define it. Though there",
                "may be considerable duplication of include file definitions only one will be emitted by the generated code.",
                "Every supported type must be present in this file, using an empty list as the implementation if there",
                "is no explicit definition for them in this library. In those cases the hardcoded defaults will be",
                "used. If supported types are missing a warning will be logged as it may indicate an oversight.",
One", "caveat is allowed - if an array type is not specified but the non-array base type is then it is", "assumed that the array type information is the same as the non-array type information. e.g. the", "information for bool[] is the same as for bool[]." ], "any": [], "bool": ["bool"], "bundle": [], "colord[3]": ["double[3]"], "colord[4]": ["double[4]"], "colorf[3]": ["float[3]"], "colorf[4]": ["float[4]"], "colorh[3]": [], "colorh[4]": [], "double": ["double"], "double[2]": ["double[2]"], "double[3]": ["double[3]"], ... } } } ``` See Tutorial 19 - Extended Attribute Types for more examples on how to perform attribute resolution in C++ and Python. "double[4]": ["double[4]"], "execution": ["uint32_t"], "float": ["float"], "float[2]": ["float[2]"], "float[3]": ["float[3]"], "float[4]": ["float[4]"], "frame[4]": ["double[4][4]"], "half": [], "half[2]": [], "half[3]": [], "half[4]": [], "int": ["int"], "int[2]": ["int[2]"], "int[3]": ["int[3]"], "int[4]": ["int[4]"], "int64": ["int64_t"], "matrixd[2]": ["double[2][2]"], "matrixd[3]": ["double[3][3]"], "matrixd[4]": ["double[4][4]"], "normald[3]": ["double[3]"], "normalf[3]": ["float[3]"], "normalh[3]": [], "objectId": ["uint64_t"], "path": ["uint64_t"], "pointd[3]": ["double[3]"], "pointf[3]": ["float[3]"], "pointh[3]": [], "$Quaternion layout": "[i, j, k, r] must be used to match the memory layout used by GfQuat4d et. al.", "quatd[4]": ["double[4]"], "quatf[4]": ["float[4]"], "quath[4]": [], "string": [], "target": [], "texcoordd[2]": ["double[2]"], "texcoordd[3]": ["double[3]"], "texcoordf[2]": ["float[2]"], "texcoordf[3]": ["float[3]"], "texcoordh[2]": [], "texcoordh[3]": [], "timecode": ["double"], "token": [], "transform[4]": ["double[4][4]"], "uchar": ["uint8_t"], "uint": ["uint32_t"], "uint64": ["uint64_t"], "vectord[3]": ["double[3]"], "vectorf[3]": ["float[3]"], "vectorh[3]": [] ``` } ``` ## Enum Attribute Type While there isn’t an `enum` type as such, you can use the *token* type with some metadata supplied in the .ogn file to make an attribute behave like an enum, including having the UI provide a dropdown menu with the allowed choices. When accessing the data you use the same approach as with a *token* attribute. | Type Name | USD | C++ | CUDA | Python | JSON | Description | |-----------|-----|-----|------|--------|------|-------------| | token | token | NameToken | NameToken* | str | string | Interned string with fast comparison and hashing | > See the description in the OGN reference guide for details on the use of the `allowedTokens` keyword that lets the token operate as an enum. - **USD** ```usd custom token inputs:myEnum = "Red" ( allowedTokens = ["Red", "Green", "Blue"] ) ``` - **C++** ```cpp static bool compute(OgnMyNodeDatabase& db) { NameToken const& colorName = db.inputs.myEnum(); // or auto const& colorName = db.inputs.myEnum(); } ``` - **CUDA** ```cpp extern "C" runCUDAcompute(NameToken* color); static bool compute(OgnMyNodeDatabase& db) { runCUDAcompute( db.inputs.myEnum() ); } ``` ```cpp extern "C" void runCUDAcompute(NameToken* color) { } ``` - **Python** ```python @property def myEnum(self) -> str: return attributeValues.myEnum ``` - **OGN** ```json { "myNode" : { "description" : [ ] } } ``` "This is my node with one enum-style input", "Here is where the regular token is transformed into an enum." ], "version": 1, "inputs": { "myEnum": { "description": [ "This is an enum attribute. By adding the 'allowedTokens' keyword you tell OmniGraph", "that the token is only meant to have a value appearing on that list." 
], "type": "token", "default": "Red", "metadata": { "allowedTokens": [ "Red", "Green", "Blue" ] } } }
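To round out the example, here is a minimal sketch of how the enum-style token might be consumed inside a Python node's `compute()`. The `outputs:color` attribute and the RGB mapping are hypothetical additions for illustration; only the `inputs:myEnum` attribute comes from the .ogn definition above.

```python
# A hypothetical compute() consuming the enum-style token defined above.
# db.inputs.myEnum reads as a plain str; allowedTokens constrains the UI and
# authoring, so defensive code may still want to validate the value.
@staticmethod
def compute(db) -> bool:
    rgb = {"Red": (1.0, 0.0, 0.0), "Green": (0.0, 1.0, 0.0), "Blue": (0.0, 0.0, 1.0)}
    color = db.inputs.myEnum
    if color not in rgb:
        db.log_error(f"Unexpected token '{color}'")
        return False
    db.outputs.color = rgb[color]  # hypothetical outputs:color float[3] attribute
    return True
```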
auto-authoring_Overview.md
# Overview — Omniverse Kit 2.1.34 documentation

## Overview

**Extension** : omni.kit.usd.layers-2.1.34

**Documentation Generated** : May 08, 2024

### Overview

A layer is the atomic persistent storage unit for USD. The extension `omni.kit.usd.layers` is built on top of USD and provides utilities and workflows around layers. Through it, users can query layer states, subscribe to layer changes, switch the authoring mode, and start live-syncing for layers. This extension is the foundation for layer widgets like `omni.kit.widget.layers`, for all collaboration-related extensions like `omni.kit.collaboration.*`, and for all other components inside Kit that need easy access to layer states. The following sections introduce each module and how to access them through the Python API.

### Layer State Management

The USD runtime does not handle ACL information from the file system, nor can it detect changes from other users dynamically. Omniverse is designed for multiple users to work together simultaneously. Layer state management takes responsibility for managing layer states and provides a cached view of all layers in the current UsdContext for fast query and access. It also provides the ability to subscribe to layer changes through a Carbonite event stream. The following states/properties are extended or encapsulated from vanilla USD to give better access to layers and to support dynamic notification:

- **Writable or not**. USD only provides a runtime flag to make a layer read-only; it does not consider access permissions from the file system. We extended this to include file system permissions.
- **Muteness**. Muteness in USD is not serialized. In order to support persistence, this extension defines two influence scopes:
  - **Local**. In local scope, mute states are only local, without persistence.
  - **Global**. In global scope, mute states are saved for the next session and live-synced in a Live Session. In this mode, mute states are persisted inside the customLayerData of the root layer.
- **Layer Lock**. Lock status supports locking layers so that users can neither switch to them as the edit target nor save them. It is a flag saved inside the custom data of the layer to be locked, so it can be serialized and live-updated. A lock is not a real file ACL, only a UX hint to guide the UI.
- **Dirtiness**. Whether the layer has pending edits that are not saved yet. Unlike the dirty flag from PXR_NS::SdfLayer::IsDirty(), the dirty state is reported only for non-anonymous layers.
- **Up-to-date or not**. Whether the in-memory layer is out of sync with the version on disk or on the Nucleus server.
- **Layer Name**. Supports assigning a user-readable name to a layer and serializing it. By default, the layer name is the file name.
- **Layer Owner**. The owner who created the layer file. This only applies to files on a Nucleus server.

Refer to `omni.kit.usd.layers.LayersState` for all Python APIs and `omni::kit::usd::layers::ILayersState` for all C++ APIs to access and query layer properties/states, and see [Programming Guide](#programming-guide) for how to get the interface instance.

## Live Session

### Glossary

- **USD Stage**: A USD Stage is a runtime concept referring to the stage currently opened through the USD API.
- **Base Layer**: The base layer is the layer for which you create the Live Session; it is in the layer stack of the USD stage. Each Live Session is bound to a unique base layer.
- **Live Layer**: A Live Layer is a USD layer, with the extension .live, that supports Omni-Objects. Authoring to a Live Layer is live-synced.
- **Connectors**: Plugins or extensions of DCC tools that connect them to Omniverse.
- **Live User**: A Live User is a user instance that has joined the Live Session.
- **Live Session Owner**: The Live Session owner is the user who has permission to merge the Live Session into the base layer.
- **Presenter**: The Live Session Presenter is the user who controls the timeline to scrub/play the animation.
- **Live Prim**: A Live Prim is a USD prim that has one of its references or payloads in a Live Session.

### What's a Live Session?

A Live Session is a shared space that all Omniverse Connectors can join to live-sync the same USD stage. User stories about Live Sessions:

- A Live Session is bound to a base layer.
- Live users use Connectors to list or find Live Sessions.
- Live users use Connectors to join existing Live Sessions.
- Live users can invite other live users to join the Live Session.
- A live user can join multiple Live Sessions at the same time.
- A USD stage can have multiple Live Sessions joined at the same time for multiple base layers.

Live users who join the same Live Session can see other users' modifications to the USD stage in real time. They can be aware of each other by querying the session.

### Physical Structure of Live Session

All Live Sessions are physically mapped to `$(Base Directory)/.live/$(Layer Name).live/$(Session Name)/.live`, where:

- **Base Directory** is where the base layer is located.
- **Layer Name** is the name of the base layer.
- **Session Name** is the name of the session.

The Live Session folder includes:

- **A meta file (`__session__.toml`)**: this meta file records the metadata of the Live Session. It includes:
  - Description of the Live Session.
  - Date of creation. (Can be fetched from Nucleus)
  - Name of the Live Session.
  - Owner of the Live Session. (Can be fetched from Nucleus)
  - The layer this Live Session is bound to.
  - Presenter of the Live Session. (By default, the same as the owner)

  One example of `__session__.toml`:

  ```toml
  version = "1.0"
  name = "Test Live Session"
  stage_url = "omniverse://ov-content/test/test.usd"
  description = "A test Live Session for demo omni-objects"
  user_name = "lrohan@nvidia.com"
  presenter = "lrohan@nvidia.com"
  ```

- **Nucleus Channel file (`__session__.channel`)**: this channel file can be used to communicate with other peers to be aware of the users in this session. The protocol used to communicate through the channel is described in the next section.
- **Live Layers**: all the live files belonging to the Live Session. The root one is named root.live.
- **Shared Data Folder (shared_data)**: defines the standard location for shared data between clients. Currently, the Presence Layer (`omni.kit.collaboration.presence_layer`) uses it to share spatial awareness data.
- **Others.**

## How does a Live Session work?

Physically, a Live Session creates a space where users can work together. Users who join the same session insert the session's Live Layer into the layer stack of the opened stage. The Live Layer then contributes to the stage composition like any other USD layer. It is powered by the NVIDIA technology Omni-Objects, which creates an endpoint on the local client so that any user who accesses the same Live Layer sees other users' modifications transparently. Live Sessions can be created/joined both for subLayers in the local layer stack and for references or payloads.
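Tying the "Physical Structure of Live Session" layout above to something concrete, here is a small stdlib-only sketch that maps a base layer URL and session name onto the documented folder pattern. This is plain string manipulation for illustration only; real session discovery and creation go through `omni.kit.usd.layers.LiveSyncing`.

```python
# Maps a base layer URL to the documented Live Session layout:
#   $(Base Directory)/.live/$(Layer Name).live/$(Session Name)/.live
import posixpath

def live_session_folder(base_layer_url: str, session_name: str) -> str:
    base_dir, layer_name = posixpath.split(base_layer_url)
    return posixpath.join(base_dir, ".live", layer_name + ".live", session_name, ".live")

print(live_session_folder("omniverse://ov-content/test/test.usd", "Test Live Session"))
# omniverse://ov-content/test/.live/test.usd.live/Test Live Session/.live
```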
REMINDER: Omni-Objects is currently supported only by Nucleus servers. That means you can only create Live Sessions for base layers that are also stored on a Nucleus server.

## Data Share in Live Session

As described above, users in the same session can share data and view modifications from others. Before joining the session, all users see the same state of the base stage. All further modifications/states are shared in two ways:

- Data share of stage modifications. Modifications to the USD stage are broadcast through the Live Layer. All modifications are broadcast automatically to all users and contribute to the stage composition.
- Data share of everything else. Besides USD modifications, other data also needs to be shared across the session, such as user queries, user login/logout events, or session merge/stop events. This happens through two mechanisms: the Nucleus Channel provided by the extension omni.kit.collaboration.channel_manager, and the shared data folder described in the previous section. The Nucleus Channel is used for transient events, described in detail below. The shared data folder can host custom data defined by the application. Kit already provides a default method for developers to use, the Presence Layer (extension omni.kit.collaboration.presence_layer), through which users can share structured and persistent data with all users using the power and convenience of the USD API.

## Live Session Implementation

Kit supports two kinds of Live Sessions: Live Sessions for subLayers and Live Sessions for prims. As described in the section above, joining a Live Session adds a Live Layer into the stage's composition stack. When you join a Live Session for a subLayer, the Live Layer is inserted into your local subLayer stack, while for a Live Prim it is inserted into the prim's referenceList/payloadList (see the API omni.kit.usd.layers.LiveSyncing.join_live_session about joining a Live Session for a subLayer or prim).

As described, a Live Session is bound to a single base layer. However, USD allows the same base layer to be referenced multiple times, for example a layer that is referenced by multiple prims. Those prims can join Live Sessions for the same base layer separately. If those Live Prims join the same Live Session, they physically share the same Live Session instance. You can also stop a Live Session for a single prim, or stop all Live Prims for a specific base layer (see the API omni.kit.usd.layers.LiveSyncing.stop_live_session for details). See the Programming Guide section below for how to get the interface instance.

## How to Enable Your Extension to be Live Synced?

The section **Data Share in Live Session** mentions three ways to share data across a Live Session: the Live Layer, the Nucleus Channel, and the Presence Layer. So if your extension only makes modifications to the opened stage in the current UsdContext, you only need to make sure your changes are made to the Live Layer if you want them to be live-synced to other users. For other data, see `omni.kit.collaboration.presence_layer` for reference. Currently, the Nucleus Channel that is bound to a Live Session is not exposed externally, so if you want to share anything over a Nucleus Channel you have to open/operate a channel directly with `omni.kit.collaboration.channel_manager`.
**REMINDER: You must add `omni.kit.collaboration.channel_manager` to your extension's dependency list to enable the live-syncing feature for your extension along with this module.**

## Common Questions

### Question 1: Why can't I see my changes after joining a Live Session?

A: This normally happens when you pin your data authoring to a specific layer. A Live Session does not guarantee that your modifications to arbitrary layers are live-synced; only changes made to the Live Layer are. It is therefore the developer's responsibility to sync the data model between the Live Layer and your local modifications. It is strongly recommended to author into the current edit target, because when a stage joins a Live Session, the edit target is changed to the Live Layer. If you want your changes to be visible to other clients, and you don't need to support standalone (non-Live-Session) workflows separately, you should always make changes to the current edit target without worrying about the existence of other sublayers.

## Experimental Features

`omni.kit.usd.layers` provides a number of experimental features that make it easier for developers to implement related workflows.

### Specs Locking

The Specs Locking module aims to provide a universal solution for locking prims/properties in USD so that users cannot edit them. USD does not provide a layer that supports this directly, but it does provide PXR_NS::SdfLayerStateDelegate, which can monitor the authoring state information associated with a layer. This module uses it to monitor changes to the USD stage and revert any changes made to locked prims/properties. Currently, the lock states are not persistent.

Refer to `omni.kit.usd.layers.SpecsLocking` for all Python APIs, and `omni::kit::usd::layers::ISpecsLocking` for all C++ APIs, to access and query lock states, and see Programming Guide for how to get the interface instance.

### Auto Authoring

USD supports switching edit targets so that all authoring takes place in the specified layer. When working with multiple sublayers, this freedom can cause usability issues: the user has to be aware that the changes they make may be overridden by a stronger layer. We added a new mode called Auto Authoring to improve this. In this mode, all changes first go into a middle delegate layer, and edits are then distributed (per frame) into the corresponding layers where they have the strongest opinions. Edit targets cannot be switched freely in this mode, and users do not need to be aware of the existence of multiple sublayers or of how USD composes the changes.

Refer to `omni.kit.usd.layers.AutoAuthoring` for all Python APIs, and `omni::kit::usd::layers::IAutoAuthoring` for all C++ APIs, to access and query layer properties/states, and see Programming Guide for how to get the interface instance.

#### How does Auto Authoring Work?

When you switch to Auto Authoring mode (see `omni.kit.usd.layers.Layers` for how to switch the edit mode), an auto authoring layer is created under the session layer and all authoring is done inside this layer. For each modification to the stage, the Auto Authoring backend automatically forwards the change to the layer that has the corresponding strongest opinion. What if the changed prim did not exist before? Those newly created prims are forwarded to the

# Specs Linking

Layer linking is a concept that supports linking prim changes to specific layers, together with serialization of those links.
If prims/attributes are linked, all edits to those prims or attributes are forwarded to the specified layers. This is an experimental feature right now.

Refer to `omni.kit.usd.layers.SpecsLinking` for all Python APIs, and `omni::kit::usd::layers::ISpecsLinking` for all C++ APIs, to access and query link states, and see *Programming Guide* for how to get the interface instance.

# Programming Guide

Currently, both C++ and Python APIs are supported for querying layer states and controlling workflows. However, the Python API is full-fledged and the recommended way to access all APIs, so only the steps for accessing the Python APIs are introduced below.

## Getting Interfaces Bound to a UsdContext

All APIs are under the module `omni.kit.usd.layers`, and each `omni.usd.UsdContext` has a separate instance of `omni.kit.usd.layers.Layers`, through which you can get all interfaces. Here are the common steps to use the Python APIs for accessing the different interfaces:

1. Import the package:

```python
import omni.kit.usd.layers as layers
```

2. Get the layers instance for a specific UsdContext:

```python
layers_interface = layers.get_layers(usd_context_name or instance_of_usd_context)
```

3. Get specific interfaces:

```python
layers_state = layers_interface.get_layers_state()      # Layer State Management Interfaces
live_syncing = layers_interface.get_live_syncing()      # Live Session Interfaces
auto_authoring = layers_interface.get_auto_authoring()  # Auto Authoring Interfaces
specs_linking = layers_interface.get_specs_linking()    # Specs Linking Interfaces
specs_locking = layers_interface.get_specs_locking()    # Specs Locking Interfaces
...
```

4. If you want to subscribe to state changes from layers, see Subscribing to Changes below.

## Subscribing to Changes

All state changes and notifications from all interfaces under the module `omni.kit.usd.layers` are sent through the same Carbonite event stream, which you can access through the layers instance like this:

```python
def on_events(events: carb.events.IEvent):
    payload = layers.get_layer_event_payload(events)
    if payload.event_type == layers.LayerEventType.XXXXXX:
        ...

subscription = layers.get_event_stream().create_subscription_to_pop(on_events, name="xxxx")
...
```

For example, here is how to find, create, join, and query a Live Session:

```python
import omni.usd
import omni.kit.usd.layers as layers

usd_context = omni.usd.get_context()
stage = usd_context.get_stage()
layers_interface = layers.get_layers(usd_context)
live_syncing = layers_interface.get_live_syncing()
root_layer = stage.GetRootLayer()

session_name = "test"
live_session = live_syncing.find_live_session_by_name(root_layer.identifier, session_name)
# If the session does not exist, create a new one.
if not live_session:
    live_session = live_syncing.create_live_session(session_name, layer_identifier=root_layer.identifier)
success = live_syncing.join_live_session(live_session)
...
# Gets the current Live Session for a specific layer.
current_live_session = live_syncing.get_current_live_session(root_layer.identifier)
...
# Checks if a base layer is in a Live Session.
in_live_session = live_syncing.is_layer_in_live_session(root_layer.identifier)
```

# Layer Commands and Utils

Besides those interfaces, the module `omni.kit.usd.layers` also provides `omni.kit.usd.layers.LayerUtils` for operating on layers easily, plus undoable commands for layer operations.
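For instance, an undoable layer operation can be invoked through the Kit command system. The sketch below assumes a "CreateSublayer"-style command; the exact command name and argument list are illustrative assumptions and should be taken from the commands reference, not from this snippet.

```python
# Hypothetical invocation of an undoable layer command; the command name and
# keyword arguments here are illustrative assumptions.
import omni.kit.commands

omni.kit.commands.execute(
    "CreateSublayer",
    layer_identifier=root_layer.identifier,
    sublayer_position=0,
    new_layer_path="omniverse://ov-content/test/my_sublayer.usd",
)
# Undo/redo then work as usual:
omni.kit.commands.undo()
```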
# Error Handling

For all APIs, you can check the return value to see whether the call failed. If you want more information, you can get the detailed error type with the function `omni.kit.usd.layers.Layers.get_last_error_type()` and the error string with the function `omni.kit.usd.layers.Layers.get_last_error_string()`. See `omni.kit.usd.layers.LayerErrorType` for all error types and their descriptions.

REMINDER: you need to check the error type immediately after the API call, as a new API call will clear the error type.

# Thread Safety

The interfaces of the module `omni.kit.usd.layers` are not thread-safe. They should be called in the same loop/thread as the UsdContext they are bound to.

# Examples

The section Programming Guide introduces the common steps to access all APIs under the module `omni.kit.usd.layers`. This section provides examples and tutorials for specific applications.

## Example 1: Create, Join/Stop, and Query a Live Session for the Root Layer

The create, join, and query steps are shown in the Programming Guide above. Stopping the session looks like this:

```python
# Stops the Live Session.
live_syncing.stop_live_session(root_layer.identifier)
```
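A minimal sketch of the error-handling pattern described above, reusing the names from Example 1 (which `LayerErrorType` member a given failure sets is not specified here):

```python
# Check the error immediately after a failed call; the next API call clears it.
success = live_syncing.stop_live_session(root_layer.identifier)
if not success:
    error_type = layers_interface.get_last_error_type()
    error_string = layers_interface.get_last_error_string()
    print(f"stop_live_session failed: {error_type} ({error_string})")
```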
basic-principle_Overview.md
# Overview

This is the Nucleus registry implementation for the extension system. It is important for this extension to be enabled as early as possible so that other extensions can use it as a bootstrap to be downloaded. Once enabled, it registers itself in the extension manager as an extension registry provider.

## Settings

Refer to the `extension.toml` file for settings.

## Basic Principle

At the provided Nucleus URL, it stores all extensions as zip archives, plus a special `index.zip` file (zipped JSON) which contains information about each stored extension. The Omniverse Client library is used under the hood, so a local filesystem can also be used as storage.
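As a rough, stdlib-only sketch of the zipped-JSON idea described above: the file name inside the archive (`index.json`) and the schema of the entries are assumptions for illustration, not the registry's actual format.

```python
# Reads a zipped-JSON index like the index.zip described above.
import json
import zipfile

with zipfile.ZipFile("index.zip") as archive:
    with archive.open("index.json") as handle:  # assumed member name
        index = json.load(handle)

# Assumed shape: a mapping of extension id -> metadata about the stored archive.
for extension_id, info in index.items():
    print(extension_id, info)
```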
batch-position-helper_overview.md
# Overview

Omni.kit.widget.graph provides the base for the delegate of graph nodes. It also defines the standard interface for graph models. Together with the layout algorithm, GraphView provides a visualization layer that controls how graph nodes are displayed and how they connect/disconnect with each other to form a graph.

## Delegate

The graph node's delegate defines how the graph node looks. It has two types of layout: List and Column. It defines the appearance of the ports, header, footer, and connections. The delegate keeps multiple delegates and picks between them depending on routing conditions. A condition can be a type or a lambda expression. We use type routing to give specific kinds of nodes a unique look (e.g. a backdrop delegate or compound delegate), and we can use lambda routing to give particular node states a unique look (e.g. a fully expanded or collapsed delegate).

## Model

GraphModel defines the base class for the graph model. It defines the standard interface for interoperating with the components of the model-view architecture. The model manages two kinds of data elements: nodes and ports are the atomic data elements of the model.

## Widget

GraphNode represents the widget for a single node. It uses the model and the delegate to fill its layout. The overall graph layout follows the method developed by Sugiyama, which computes the coordinates for drawing entire directed graphs. GraphView is the visualization layer of omni.kit.widget.graph. It behaves like a regular widget and displays nodes and their connections.

## Batch Position Helper

This extension also adds support for batch position updates for graphs. Inheriting from GraphModelBatchPositionHelper makes it easy to move a collection of nodes together, e.g. for multi-selection or backdrops.

## Relationship with omni.kit.graph.editor.core

To let users easily set up a graph framework, we provide another extension, `omni.kit.graph.editor.core`, which is based on this extension. omni.kit.graph.editor.core operates more at the application level: it defines the graph to have a catalog view showing all the available graph nodes and a graph editor view for constructing the actual graph by dragging nodes from the catalog view, while `omni.kit.widget.graph` is the core definition of how graphs work. The documentation extension `omni.kit.graph.docs` explains how `omni.kit.graph.editor.core` is built up. We also provide a real graph example extension called `omni.kit.graph.editor.example`, based on `omni.kit.graph.editor.core`. It showcases how you can easily build a graph extension by feeding in your customized graph model, and how to control the graph's look by switching among different graph delegates. Here is an example graph built from `omni.kit.graph.editor.example`:
BestPractices.md
# Best Practices

This is a collection of best practices we recommend for making development with OmniGraph consistent and easy. Although there is certainly more than one way of accomplishing many goals, we believe that there is value in following a set of core guidelines. It not only helps keep your own code simple, it also makes it easier to read code written by others, since that code will have a familiar form. Feel free to adopt as many or as few of these guidelines as you like. Anything that follows our ABIs and APIs will still work, so choose the parts that work best for you.
bindings_Overview.md
# Overview

This is the gold standard template for creating a Kit extension that contains a mixture of Python and C++ OmniGraph nodes.

## The Files

To use this template, first copy the entire directory into a location that is visible to your build, such as `source/extensions`. The copy will have this directory structure. The highlighted lines should be renamed to match your extension, or removed if you do not want to use them.

```text
omni.graph.template.mixed/
    bindings/
        Bindings.cpp
    config/
        extension.toml
    data/
        icon.svg
        preview.png
    docs/
        CHANGELOG.md
        directory.txt
        Overview.md
        README.md
    nodes/
        OgnTemplateNodeMixedCpp.cpp
        OgnTemplateNodeMixedCpp.ogn
    plugins/
        Module.cpp
        OmniGraphTemplateMixed.h
    premake5.lua
    python/
        __init__.py
        _impl/
            __init__.py
            extension.py
        nodes/
            OgnTemplateNodePy.ogn
            OgnTemplateNodePy.py
        tests/
            __init__.py
            test_api.py
            test_omni_graph_template_python.py
```

The convention of having implementation details of a module in the `_impl/` subdirectory is to make it clear to the user that they should not directly access anything in that directory, only what is exposed in `__init__.py`.

## The Build File

Kit normally uses premake for building, so this example shows how to use the template `premake5.lua` file to customize your build. By default the build file is set up to correspond to the directory structure shown above. By using this standard layout, the utility functions can do most of the work for you.

```lua
-- --------------------------------------------------------------------------------------------------------------------
-- Build file for the build tools used by the OmniGraph Python mixed extension. These are tools required in order to
-- run the build on that extension, and all extensions dependent on it.
-- --------------------------------------------------------------------------------------------------------------------

-- This sets up a shared extension configuration, used by most Kit extensions.
local ext = get_current_extension_info()

-- --------------------------------------------------------------------------------------------------------------------
-- Set up a variable containing standard configuration information for projects containing OGN files
local ogn = get_ogn_project_information(ext, "omni/graph/template/mixed")

-- --------------------------------------------------------------------------------------------------------------------
-- Put this project into the "omnigraph" IDE group
ext.group = "omnigraph"

-- --------------------------------------------------------------------------------------------------------------------
-- Set up the basic shared project information first
project_ext( ext )

-- --------------------------------------------------------------------------------------------------------------------
-- ONI binding generation. The code generation is a separate project to ensure it happens before any of the
-- generated code is used. See the full documentation for OmniGraph Native Interfaces online at
-- OmniGraph Native Interfaces.html
--
project_ext_omnibind(ext, ext.id..".interfaces")
omnibind {
    {
        -- This file is written by the developer, containing the basic ONI definition of the interface
        file = "%{root}/include/omni/graph/template/mixed/IOmniGraphTemplateMixed.h",
        -- These two files are generated by omni.bind from the definition and can be used in the C++ and Python
        -- bindings code respectively.
        api = "%{root}/include/omni/graph/template/mixed/IOmniGraphTemplateMixed.gen.h",
        py = "%{root}/include/omni/graph/template/mixed/PyIOmniGraphTemplateMixed.gen.h"
    }
}
dependson { "omni.core.interfaces" }

-- --------------------------------------------------------------------------------------------------------------------
-- Define a build project to process the ogn files to create the generated code that will be used by the node
-- implementations. The (optional) "toc" value points to the directory where the table of contents with the OmniGraph
-- nodes in this extension will be generated. Omit it if you will be generating your own table of contents.
project_ext_ogn( ext, ogn, { toc="docs/Overview.md" } )

-- --------------------------------------------------------------------------------------------------------------------
-- The main plugin project is what implements the nodes and extension interface
project_ext_plugin( ext, ogn.plugin_project )

    -- These lines add the files in the project to the IDE where the first argument is the group and the second
    -- is the set of files in the source tree that are populated into that group.
    add_files("impl", ogn.plugin_path)
    add_files("nodes", ogn.nodes_path)
    add_files("config", "config")
    add_files("docs", ogn.docs_path)

    -- omni.bind must run on the interfaces first so that the generated interfaces exist for the build
    dependson { ext.id..".interfaces", ogn.python_project }

    -- Add the standard dependencies all OGN projects have. The second parameter is normally omitted for C++ nodes
    -- as hot reload of C++ definitions is not yet supported.
    add_ogn_dependencies(ogn)

    -- This optional line adds support for CUDA (.cu) files in your project. Only include it if you are building nodes
    -- that will run on the GPU and implement CUDA code to do so. Your deps/ directory should contain a file with a
    -- cuda dependency that looks like the following to access the cuda library:
    --     <dependency name="cuda" linkPath="../_build/target-deps/cuda">
    --         <package name="cuda" version="11.8.0_520.61-d8963068-${platform}" platforms="linux-x86_64"/>
    --         <package name="cuda" version="11.8.0_520.61-abe3d9d7-${platform}" platforms="linux-aarch64"/>
    --         <package name="cuda" version="11.8.0_522.06-abe3d9d7-${platform}" platforms="windows-x86_64"/>
    --     </dependency>
    -- add_cuda_build_support()

-- --------------------------------------------------------------------------------------------------------------------
-- Build project responsible for generating the Python nodes and installing them and any scripts into the build tree.
-- This includes support for Python bindings of the C++ interface ABI.
project_ext_bindings {
    ext = ext,                            -- Shared project definitions
    project_name = ogn.python_project,    -- Name of this project (omni.graph.template.mixed.python)
    module = "_o_g_t_m",                  -- Name of the bindings module (the normal ogn.bindings_module is too long for Windows)
    src = ogn.bindings_path,              -- Location of the bindings source files (bindings/)
    target_subdir = ogn.import_path,      -- Subdirectory in the build tree for Python files (omni/graph/template/mixed/)
    iface_project = ext.id..".interfaces" -- Project with omni.bind interfaces - ensures they generate first
}

    -- These lines add the files in the project to the IDE where the first argument is the group and the second
    -- is the set of files in the source tree that are populated into that group.
add_files("python", "*.py") add_files("python/_impl", "python/_impl/**.py") add_files("python/nodes", "python/nodes") add_files("python/tests", "python/tests") add_files("docs", "docs") add_files("data", "data") -- Add the standard dependencies all OGN projects have. The second parameter is a table of all directories -- containing Python nodes. Here there is only one. add_ogn_dependencies(ogn, {"python/nodes"}) -- Copy the init script directly into the build tree. This is required because the build will create an ogn/ -- subdirectory in the Python module so only the subdirectories can be linked. repo_build.prebuild_copy { {"python/__init__.py", ogn.python_target_path}, } -- Linking directories allows them to hot reload when files are modified in the source tree. -- Docs are linked to get the README into the extension window. -- Data contains the images used by the extension configuration preview. -- The "nodes/" directory does not have to be mentioned here as it will be handled by add_ogn_dependencies() above. repo_build.prebuild_link { {"docs", ext.target_dir.."/docs"}, {"data", ext.target_dir.."/data"}, {"python/tests", ogn.python_tests_target_path}, {"python/_impl", ogn.python_target_path.."/_impl"}, } -- -------------------------------------------------------------------------------------------------------------------- -- With the above copy/link operations this is what the source and build trees will look like -- -- SOURCE BUILD -- omni.graph.template.mixed/ omni.graph.template.mixed/ -- bindings/ config@ -> SOURCE/config -- config/ data@ -> SOURCE/data -- data/ docs@ -> SOURCE/docs -- docs/ ogn/ (generated by the build) -- nodes/ omni/ -- plugins/ graph/ -- python/ template/ -- __init__.py mixed/ -- _impl/ python/ -- nodes/ __init__.py (copied from SOURCE/python) -- _impl@ -> SOURCE/python/_impl -- nodes@ -> SOURCE/python/nodes -- tests@ -> SOURCE/python/tests -- ogn/ (generated by the build) ```code <span class="pre"> config/extension.toml </span> </code> file with metadata describing the extension to the extension management system. Below is the annotated version of this file, where the highlighted lines are the ones you should change to match your own extension. ```toml # Main extension description values [package] # The current extension version number - uses [Semantic Versioning](https://semver.org/spec/v2.0.0.html) version = "2.3.1" # The title of the extension that will appear in the extension window title = "OmniGraph Mixed C++ and Python Template" # Longer description of the extension description = "Templates for setting up an extension containing both OmniGraph Python and C++ nodes." 
# Authors/owners of the extension - usually an email by convention
authors = ["NVIDIA <no-reply@nvidia.com>"]
# Category under which the extension will be organized
category = "Graph"
# Location of the main README file describing the extension for extension developers
readme = "docs/README.md"
# Location of the main CHANGELOG file describing the modifications made to the extension during development
changelog = "docs/CHANGELOG.md"
# Location of the repository in which the extension's source can be found
repository = "https://gitlab-master.nvidia.com/omniverse/kit-extensions/kit-omnigraph"
# Keywords to help identify the extension when searching
keywords = ["kit", "omnigraph", "nodes", "python"]
# Image that shows up in the preview pane of the extension window
preview_image = "data/preview.png"
# Image that shows up in the navigation pane of the extension window - can be a .png, .jpg, or .svg
icon = "data/icon.svg"
# Specifying this ensures that the extension is always published for the matching version of the Kit SDK
writeTarget.kit = true
# Specify the minimum level for support
support_level = "Enterprise"

# This extension has a compiled C++ project and so requires this declaration that it exists
[[native.plugin]]
path = "bin/*.plugin"
recursive = false

# Main module for the Python interface. This is how the module will be imported.
[[python.module]]
name = "omni.graph.template.mixed"

# Watch the .ogn files for hot reloading. Only useful during development as after delivery files cannot be changed.
[fswatcher.patterns]
include = ["*.ogn", "*.py"]
exclude = ["Ogn*Database.py"]

# Other extensions that need to load in order for this one to work
[dependencies]
"omni.graph" = {}       # For basic functionality
"omni.graph.tools" = {} # For node generation

# Main pages published as part of documentation. (Only if you build and publish your documentation.)
[documentation]
pages = [
    "docs/Overview.md",
    "docs/CHANGELOG.md",
]

# Since this module publishes an interface the documentation for it should also be included in the output.
# The paths to the include files documenting the interface are relative to the directory this file lives in.
cpp_api = [
    "../../../include/omni/graph/template/mixed/IOmniGraphTemplateMixed.h",
]

# Some extensions are only needed when writing tests, including those automatically generated from a .ogn file.
# Having special test-only dependencies lets you avoid introducing a dependency on the test environment when only
# using the functionality.
[[test]]
dependencies = [
    "omni.kit.test" # Brings in the Kit testing framework
]
```

Contained in this file are references to the icon file in `data/icon.svg` and the preview image in `data/preview.png`, which control how your extension appears in the extension manager. You will want to customize those.

## The Plugin Module

Every C++ extension requires some standard code setup to register and deregister the node types at the proper time. The minimum requirements for the Carbonite wrappers that implement this are contained in the file `plugins/Module.cpp`.

```cpp
// Copyright (c) 2023-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
// ==============================================================================================================
//
// This file contains mostly boilerplate code required to register the interfaces with Carbonite.
//
// See the full documentation for OmniGraph Native Interfaces online at
// https://docs.omniverse.nvidia.com/kit/docs/carbonite/latest/docs/OmniverseNativeInterfaces.html
//
// ==============================================================================================================
#include "OmniGraphTemplateMixed.h"

#include <omni/core/ModuleInfo.h>
#include <omni/core/Omni.h>
#include <omni/graph/core/ogn/Registration.h>
#include <omni/graph/template/mixed/IOmniGraphTemplateMixed.h>

// These are the most common interfaces that will be used by nodes. Others that are used within the extension but
// not registered here will issue a warning and can be added.
OMNI_PLUGIN_IMPL_DEPS(omni::graph::core::IGraphRegistry, omni::fabric::IToken)

OMNI_MODULE_GLOBALS("omni.graph.template.mixed.plugin", "OmniGraph Template With Mixed Nodes");

// This declaration is required in order for registration of C++ OmniGraph nodes to work
DECLARE_OGN_NODES();

namespace
{

using namespace omni::graph::template_mixed;

omni::core::Result onLoad(const omni::core::InterfaceImplementation** out, uint32_t* outCount)
{
    // clang-format off
    static const char* omniGraphTemplateMixedImplemented[] = { "omni.graph.template.mixed.IOmniGraphTemplateMixed" };
    static omni::core::InterfaceImplementation impls[] = {
        {
            "nv.OmniGraphTemplateMixedImpl",
            []() { return static_cast<omni::core::IObject*>(new OmniGraphTemplateMixed()); },
            1, // version
            omniGraphTemplateMixedImplemented,
            CARB_COUNTOF32(omniGraphTemplateMixedImplemented)
        },
    };
    // clang-format on

    *out = impls;
    *outCount = CARB_COUNTOF32(impls);
    return omni::core::kResultSuccess;
}

void onStarted()
{
    // Macro required to register all of the C++ OmniGraph nodes in the extension
    INITIALIZE_OGN_NODES()
}

bool onCanUnload()
{
    return true;
}

void onUnload()
{
    // Macro required to deregister all of the C++ OmniGraph nodes in the extension
    RELEASE_OGN_NODES()
}

} // anonymous namespace

// Hook up the above functions to the module to be called at the right times
OMNI_MODULE_API omni::Result omniModuleGetExports(omni::ModuleExports* exports)
{
    OMNI_MODULE_SET_EXPORTS(exports);
    OMNI_MODULE_ON_MODULE_LOAD(exports, onLoad);
    OMNI_MODULE_ON_MODULE_STARTED(exports, onStarted);
    OMNI_MODULE_ON_MODULE_CAN_UNLOAD(exports, onCanUnload);
    OMNI_MODULE_ON_MODULE_UNLOAD(exports, onUnload);
    OMNI_MODULE_GET_MODULE_DEPENDENCIES(exports, omniGetDependencies);
    return omni::kResultSuccess;
}
```

The first highlighted line shows where you customize the extension plugin name to match your own. The macros ending in `_OGN_NODES` set up the OmniGraph node type registration and deregistration process. Without these lines your node types will not be known to OmniGraph and will not be available in any of the editors. The code in the `onLoad` method is what is required to register your interface definition with Carbonite. If you do not implement an interface then this method can be omitted.

### Bindings

If you have an interface set up then you will also want to expose it to Python.
Kit uses the [pybind11](https://github.com/pybind/pybind11) library for exposing Python bindings of C++ classes. This template shows how the bindings that are automatically generated by [ONI](https://docs.omniverse.nvidia.com/kit/docs/carbonite/latest/docs/OmniverseNativeInterfaces.html) can be added to your extension's Python module.

```cpp
// Copyright (c) 2023-2024, NVIDIA CORPORATION. All rights reserved.
//
// NVIDIA CORPORATION and its licensors retain all intellectual property
// and proprietary rights in and to this software, related documentation
// and any modifications thereto. Any use, reproduction, disclosure or
// distribution of this software and related documentation without an express
// license agreement from NVIDIA CORPORATION is strictly prohibited.
//
// ==============================================================================================================
// This file provides the bindings of the C++ ABI to the Python interface equivalent. Anything can be added to the
// Python interface here using the pybind11 syntax. The standard binding functions generated by omni.bind will
// add any bindings that were automatically generated from the ONI definitions.
// ==============================================================================================================
#include <omni/core/Omni.h>
#include <omni/graph/template/mixed/IOmniGraphTemplateMixed.h>
#include <omni/graph/template/mixed/PyIOmniGraphTemplateMixed.gen.h> // generated file
#include <omni/python/PyBind.h>

OMNI_PYTHON_GLOBALS("omni.graph.template.mixed-pyd", "Python bindings for omni::graph::template_mixed");

PYBIND11_MODULE(_o_g_t_m, m)
{
    // This function is defined by PyIOmniGraphTemplateMixed.gen.h and was generated by omni.bind.
    // Every function in the interface is enabled for Python but if there were any that were not, usually due to
    // multiple return values or return values that are "out" function arguments, then you can add manual bindings
    // for them here by adding ".def()" calls to the return value of this function.
    bindIOmniGraphTemplateMixed(m);
}
```

Notice how the names of the libraries and modules correspond to the ones you defined in the `premake5.lua` file. The others, such as `bindIOmniGraphTemplateMixed`, follow an obvious naming pattern.

## Documentation

Everything in the `docs/` subdirectory is considered documentation for the extension.

- **README.md** The contents of this file appear in the extension manager window so you will want to customize it. The location of this file is configured in the `extension.toml` file as the **readme** value.
- **CHANGELOG.md** It is good practice to keep track of changes to your extension so that users know what is available. The location of this file is configured in the `extension.toml` file as the **changelog** value, and as an entry in the `[documentation]` pages.
- **Overview.md** This contains the main documentation page for the extension. It can stand alone or reference an arbitrarily complex set of files, images, and videos that document use of the extension. The **toctree** reference at the bottom of the file contains at least `GeneratedNodeDocumentation/`, which creates links to all of the documentation that is automatically generated for your nodes. The location of this file is configured in the `extension.toml` file in the `[documentation]` pages section.
- **directory.txt** This file can be deleted as it is specific to these instructions.

## The Node Type Definitions

You define a new node type using two files, examples of which are in the `nodes/` and the `python/nodes` subdirectories. Although they do not have to be separated this way, it is cleaner to do so.
The Python implementations must be part of the build directory since they are used at runtime to define the node types, whereas the C++ implementations are only used at build time and do not need to be part of the shipped extension. (The plugin library will contain everything the node type needs.) Tailor the definition of your node types to your computations. Start with the OmniGraph User Guide for information on how to configure your own definitions; a minimal sketch of a Python node body appears after the node list below.

## Tests

While completely optional, it's always a good idea to add a few tests for your nodes to ensure that they work as you intend and continue to work when you make changes. Automated tests will be generated for each of your node type definitions to exercise basic functionality. What you want to write here are more complex tests that use your node types in more complex graphs. The sample tests in the `tests/` subdirectory show how you can integrate with the Kit testing framework to easily run tests on nodes built from your node type definitions.

That's all there is to creating an extension with both Python and C++ node types! You can now open your app, enable the new extension, and your sample node types will be available to use within OmniGraph.

### OmniGraph Nodes In This Extension

* C++ Template Node
* Python Template Node
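As a rough companion to the template files listed above, this is roughly what a minimal Python node body in `python/nodes/` can look like. The class and attribute names here are hypothetical and must match what the accompanying .ogn file declares; the real `OgnTemplateNodePy.py` in this extension is the authoritative reference.

```python
# OgnTemplateNodePy.py -- a minimal, hypothetical Python node implementation.
# The attribute names (inputs:value, outputs:value) are assumptions and must
# agree with the node's .ogn definition.
class OgnTemplateNodePy:
    @staticmethod
    def compute(db) -> bool:
        # Pass the input straight through to the output.
        db.outputs.value = db.inputs.value
        return True
```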
blast-destruction_index.md
# Blast Destruction ## Omniverse Blast Extension ## Omniverse USD Schema Destruction
blast-sdk-documentation_index.md
# Blast SDK Documentation

Blast is an NVIDIA Omniverse destruction library. It consists of three layers: the low-level API (NvBlast), a high-level "toolkit" wrapper (NvBlastTk), and extensions (prefixed with NvBlastExt). This layered API is designed to allow a short ramp-up time for first usage (through the Ext and Tk APIs) while also allowing for customization and optimization by experienced users through the low-level API.

Some notable features of NvBlast:

- C-style API consisting of stateless functions, with no global framework or context.
- Functions do not spawn tasks, allocate, or deallocate memory.
- A support structure may be defined that includes chunks from different hierarchical depths.
- Multiple chunk hierarchies may exist in a single asset.
- Damage behavior is completely defined by user-supplied "shader" functions.
- Has a portable memory layout for assets and actor families, which allows for memcopy cloning and direct binary serialization (on platforms with the same endianness).

Features of NvBlastTk:

- C++ API which includes a global framework.
- Manages objects, allocating and deallocating using a user-supplied callback.
- Generates "worker" objects to process damage, which the user may call from multiple threads.
- Uses an event system to inform the user of actor splitting and chunk fracturing.
- Introduces a joint representation which uses the event system to allow the user to update physical joints between actors.

Notably absent from NvBlast and NvBlastTk:

- There is no physics or collision representation.
- There is no graphics representation.

Blast, at the low-level and toolkit layers, is physics- and graphics-agnostic. It is entirely up to the user to create such representations when Blast objects are created. Updates to those objects (such as actor splitting) are passed to the user as the output of a split function in the low-level API, or through a split event in the toolkit API. This allows Blast to be used with any physics SDK and any rendering library. To help the user get started quickly, however, there is a PhysX-specific Blast extension which uses BlastTk and manages PhysX actors and joints. The source code for this extension, like that of all Blast extensions, is intended to be a reference implementation.

Current Blast extensions:

- ExtAssetUtils - NvBlastAsset utility functions. Add external bonds, merge assets, and transform geometric data.
- ExtAuthoring - a set of geometric tools which can split a mesh hierarchically and create a Blast asset, along with collision geometry and chunk graphics meshes in separate files.
- ExtExporter - standard mesh and collision writer tools in fbx, obj, and json formats.
- ExtSerialization and ExtTkSerialization - serialization extensions for the low-level and Tk layers. Uses Cap'n Proto to provide robust serialization across different platforms.
- ExtShaders - sample damage shaders to pass to both the low-level and Tk actor damage functions.
- ExtStress - a toolkit for performing stress calculations on low-level Blast actors, using a minimal API to assign masses and apply forces. Does not use any external physics library.
## Gallery

### Tower Explosion
### Bunny Impact Damage
### Layered Cube Explosion
### Table Impact Damage
### Tower Slice
### Wall Impact Damage
### Stress Solver
### Joints

### Contents

* Introduction
  * Asset Structure
  * Support Model
  * Damage Model
* Low Level API (NvBlast)
  * Introduction
  * Linking and Header Files
  * Creating an Asset from a Descriptor (Authoring)
  * Cloning an Asset
  * Releasing an Asset
  * Creating Actors and Families
  * Copying Actors (Serialization and Deserialization)
  * Cloning a Family
  * Single Actor Serialization
  * Deactivating an Actor
  * Releasing a family
  * Damage and Fracturing
* Globals API (NvBlastGlobals)
  * Allocator
  * Error Callback
  * Profiler API
* High Level (Toolkit) API (NvBlastTk)
  * Introduction to NvBlastTk
  * NvBlastTk Class Hierarchy
  * Linking and Header Files
  * Creating the TkFramework
  * Creating a TkAsset
  * Instancing a TkAsset: Creation of a TkActor and a TkFamily
  * Groups
  * Applying Damage to Actors and Families
    * Multiple Damage Descriptors using NvBlastProgramParams
    * Single Damage Descriptor with Default TkFamily Material
    * Single Damage Descriptor with Specified Material
  * Joints
  * Releasing Joints
  * Events
  * Object and Type Identification
* Extensions (NvBlastExt)
  * Damage Shaders (NvBlastExtShaders)
  * Stress Solver (NvBlastExtStress)
    * Features
    * Settings Tuning
    * Usage
  * Asset Utilities (NvBlastExtAssetUtils)
    * Add World Bonds
    * Merge Assets
    * Transform In-Place
  * Asset Authoring (NvBlastExtAuthoring)
    * FractureTool
    * Mesh Restrictions
    * ConvexMeshBuilder
    * BondGenerator
    * MeshCleaner
  * Serialization (NvBlastExtSerialization)
    * Introduction
    * Serialization (writing)
    * Using a Buffer Provider
    * Deserialization (reading)
    * Detecting the Object Type in a Buffer
    * Peeking at and Skipping Buffer Data
    * Cleaning Up
  * BlastTk Serialization (NvBlastExtTkSerialization)
* Definitions
* Copyrights
  * Boost
  * V-HACD
* Changelog
  * [5.0.4] - 22-January-2024 - Bugfixes
  * [5.0.3] - 1-November-2023 - Bugfixes
  * [5.0.2] - 25-July-2023
  * [5.0.1] - 22-June-2023 - Bugfixes
  * [5.0.0] - 23-Jan-2023 - Changes
  * [4.0.2] - 31-Aug-2022 - Bugfixes
  * [4.0.1] - 10-Aug-2022 - Bugfixes
  * [4.0.0] - 31-May-2022 - New Features
  * [3.1.3] - 28-Feb-2022 - Changes
  * [3.1.2] - 24-Feb-2022 - Changes - Bug Fixes
  * [3.1.1] - 2022-01-12 - Changes
  * [3.1.0] - 2022-01-10 - Changes
  * [3.0.0] - 2021-10-13 - Changes
  * [2.1.7] - 2021-07-18 - Bug Fixes
  * [2.1.6] - 2021-06-24 - Changes
  * [2.1.5] - 2021-05-10 - Changes
  * [2.1.4] - 2021-04-08 - Bug Fixes
  * [2.1.3] - 2021-04-05 - Bug Fixes
  * [2.1.2] - 2021-03-15 - Bug Fixes
  * [2.1.1] - 2021-03-02 - Changes
  * [2.0.1] - 2021-03-01 - Changes
  * [2.0.0] - 2021-02-19 - Changes
  * [1.4.7] - 2020-10-20 - Changes - Bug Fixes
  * [1.4.6] - 2020-10-08 - Changes - Bug Fixes
  * [1.4.5] - 2020-09-30 - Bug Fixes
  * [1.4.4] - 2020-09-29 - Changes
  * [1.4.3] - 2020-09-26 - Changes - Bug Fixes
  * [1.4.2] - 2020-08-28 - Bug Fixes
  * [1.4.1] - 2020-06-26 - Changes
  * [1.2.0] - 2020-01-23 - Changes - New Features - Known Issues
  * [1.1.5] - 2019-09-16 - Changes - New Features - Bug Fixes - Known Issues
  * [1.1.4] - 2018-10-24 - Changes - New Features - Bug Fixes - Known Issues
  * [1.1.3] - 2018-05-30 - Changes - New Features - Bug Fixes - Known Issues
  * [1.1.2] - 2018-01-26 - Changes - New Features - Bug Fixes - Known Issues
  * [1.1.1] - 2017-10-10 - Changes - New Features - Bug Fixes - Known Issues
  * [1.1.0] - 2017-08-28 - Changes - New Features - Bug Fixes - Known Issues
  * [1.0.0] - 2017-02-24 - Changes - New Features
* Changelog
  * [1.0.0] - 2018-03-15 - Changes - New Features - Bug Fixes - Known Issues
  * [1.0.0-beta] - 2017-01-24 - Changes - New Features - Removed Features - Known Issues
  * [1.0.0-alpha] - 2016-10-21 - Features - Known Issues
* API Documentation
  * Directory hierarchy
  * Namespace hierarchy
  * API contents
    * Classes
    * Macros
    * Directories
    * Files
    * Functions
    * Namespaces
    * Structs
    * Typedefs
    * Variables

# Index

- Index
- Search Page
bondgenerator_ext_authoring.md
# Asset Authoring (NvBlastExtAuthoring)

The Authoring extension provides tools for creating a Blast asset from a provided mesh. There are four tools for the creation of Blast assets.

## FractureTool

Nv::Blast::FractureTool (see NvBlastExtAuthoringFractureTool.h) is used to fracture an input mesh. It supports Voronoi fracturing, slicing, and “cutout” fracture (slicing based upon an image). Internal surfaces of output chunks can be tessellated, and noise can be applied to them. The slicing method supports slicing with a noisy slicing surface, which allows the creation of a jagged slicing line. Noisy slicing is switched on by setting a non-zero noise amplitude in the slicing parameters (Nv::Blast::SlicingConfiguration).

FractureTool supports two types of output:

1. Array of triangles - the tool fills a provided array with the triangles of a chunk; the chunk ID must be provided.
2. Buffered output - the tool fills a provided array with vertices, and another array of arrays with indices. The indices form triplets defining the vertices of each triangle.

## Mesh Restrictions

At the core of the fracturing tools is a geometric boolean algorithm based upon the paper *A topologically robust algorithm for Boolean operations on polyhedral shapes using approximate arithmetic* by Smith and Dodgson, Computer-Aided Design 39 (2007) 149-163, Elsevier. The constraints for a valid input mesh are given in the paper. Practically, the restrictions may be summarized as follows. Input meshes

* must be closed with CCW-oriented surfaces,
* must not have self-intersections,
* must not have T-junctions,
* may have multiple disconnected components.

Failure to meet the constraints (first three items) above will lead to unpredictable fracturing results.

## ConvexMeshBuilder

Nv::Blast::ConvexMeshBuilder is a tool for creating collision geometry for a physics engine. It receives mesh vertices and returns the convex hull of those vertices. If creation of a convex hull fails, the tool creates collision geometry as a bounding box of the provided vertices.

The tool provides a method to trim convex hulls against each other. This can be used along with noisy slicing to avoid “explosive” behavior due to penetration of neighboring collision hulls into each other. As a drawback, penetration of render meshes into each other is possible due to the trimmed collision geometry.

## BondGenerator

Nv::Blast::BlastBondGenerator is a tool for creating Blast bond descriptors from provided geometry data. It has a separate method which is optimized for working with FractureTool:

```cpp
int32_t Nv::Blast::BlastBondGenerator::buildDescFromInternalFracture(
    FractureTool* tool,
    const std::vector<bool>& chunkIsSupport,
    std::vector<NvBlastBondDesc>& resultBondDescs,
    std::vector<NvBlastChunkDesc>& resultChunkDescriptors);
```

Other methods can work with prefractured meshes created in third-party tools, and can be used for converting prefractured models to Blast assets.

Nv::Blast::BlastBondGenerator supports two modes of NvBlastBond data generation:

1. Exact - in this mode the exact common surface between chunks is found and considered as the interface between them. The exact normal, area, and centroid are computed.
2. Average - this mode uses approximations of the interface, and can be used for gathering NvBlastBond data for assets whose chunks penetrate each other, e.g. chunks with noise.

## MeshCleaner

Nv::Blast::MeshCleaner can be used to remove self-intersections and open edges in the interior of a mesh, making it more likely to fracture well.
To use it, first create a MeshCleaner using its creation function (exported by the NvBlastExtAuthoring library). Then, given an Nv::Blast::Mesh called “mesh”, simply call

```cpp
Nv::Blast::Mesh* newMesh = cleaner->cleanMesh(mesh);
```

If successful, newMesh will be a valid pointer to the cleaned mesh. Otherwise, newMesh will be NULL. When done, release the cleaner using

```cpp
cleaner->release();
```
boom-collision-audio_index.md
# Boom Collision Audio ## Omniverse Boom Extension ## Omniverse USD Schema Audio Boom
boost_copyrights.md
# Copyrights ## Boost Blast [Asset Authoring (NvBlastExtAuthoring)](extensions/ext_authoring.html#pageextauthoring) uses Boost (boost.org). This is licensed as follows. ``` Boost Software License - Version 1.0 - August 17th, 2003 Permission is hereby granted, free of charge, to any person or organization obtaining a copy of the software and accompanying documentation covered by this license (the "Software") to use, reproduce, display, distribute, execute, and transmit the Software, and to prepare derivative works of the Software, and to permit third-parties to whom the Software is furnished to do so, all subject to the following: The copyright notices in the Software and this entire statement, including the above license grant, this restriction and the following disclaimer, must be included in all copies of the Software, in whole or in part, and all derivative works of the Software, unless such copies or derivative works are solely in the form of machine-executable object code generated by a source language processor. THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. ``` ## V-HACD Blast [Asset Authoring (NvBlastExtAuthoring)](extensions/ext_authoring.html#pageextauthoring) uses V-HACD (by Khaled Mamou). This is licensed as follows. ``` Copyright (c) 2011 Khaled Mamou (kmamou at gmail dot com) All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: 1. Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution. 3. The names of the contributors may not be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE. ```
build.md
# Build a Project

Depending on the nature of your project, a ‘build step’ may be required as development progresses. Omniverse supports this step with a variety of scripts and tools that generate a representation of the final product, enabling subsequent testing, debugging, and packaging.

## Build-less Projects

Python-only extensions do not require a build step. Your project might simply be an Extension within another Application, consisting of purely Python code. If that’s the case, you can bypass the build step for the time being. However, if the complexity of your project increases, or you decide to create a tailored Application around your extension, introducing a build step into your development process might become necessary.

## “Repo Build”

Find more information about Repo Tools in the Repo Tools documentation.

The Repo Tool is the most likely method for building a Project. This tool operates on both Linux and Windows platforms, providing functionality ranging from creating new extensions from a template to packaging your final build for distribution.

For the current list of available tools via Repo, you can use the following command in a terminal:

```
repo -h
```

By default, you can execute a build which compiles a release target by entering:

```
repo build
```

Other options include:

- `-r` [default] will build only the *release* pipeline.
- `-d` will build only the *debug* pipeline.
- `-x` will delete all build files in the *_build* folder.

After executing a `repo build`, a `_build` folder is generated, containing the targets that you’ve selected to build. Within this directory, you will find a script file which loads the `.kit` file that you created for your Project. Alternatively, if you created an Extension, a script file should be present to load the Application you added your Extension to.
Building.md
# Building Carbonite

## Building Source

To build on Windows:

```bash
# build both release and debug
build

# debug build
build -d

# release build
build -r

# clean build output
build -c
```

The commands above also work on Linux, though `build` should be replaced with `./build.sh`.

Output from the build can be found in a platform/flavor subfolder of `_build/`. For example:

```
_build/windows-x86_64/release/
```

### Improving Build Times

The build process is divided into a “generation” step and a “build” step. The generation step only needs to be run once (or anytime you change `premake5.lua`). You can skip the generation step and only proceed with the build step as follows:

```bash
build -b
```

You can also build individual targets. For example:

```bash
# build carb.dll
build -t carb

# build carb.tokens.plugin.dll
build -t plugins\carb.tokens.plugins
```

On Mac OS and Linux, `ccache` can substantially reduce build times when doing rebuilds. To use `ccache`, first install it (on Ubuntu: `apt-get install ccache`; on Mac OS, you can use homebrew: `brew install ccache` or macports: `port install ccache`), then add `export CARB_CCACHE=ccache` to your shell startup file. (On Linux, the default shell is typically bash, so this will be `$HOME/.bashrc`. Mac OS uses zsh by default, so you need to use `$HOME/.zshrc`; if you switch to the version of bash Mac provides, it uses `$HOME/.profile`.) This unfortunately will remove the colored highlighting from Clang/GCC’s diagnostics most of the time.

### Tips

On Linux, the parallel build may make it difficult to read build warnings, and serializing the build with `-j1` is slow. You can avoid this slowdown by passing the `-k` parameter to make; this is done with the `-e` parameter of `build.sh`. This will rebuild everything possible, so you can then run a serial build without having to wait as long.

```bash
# build everything possible
./build.sh -e-k

# serial build, so the error message is easier to read
./build.sh -j1
```

Carbonite supports a number of debugging features, such as Clang’s address sanitizer on Linux. Use `./build.sh --help` to view the list of extra flags.

## Building Documentation

Documentation is built as follows:

```bash
./repo docs
```

Output from the build can be found in:

```
_build/docs/carbonite/latest
```

Curious readers can refer to How the Omniverse Documentation System Works for a deep-dive into how the documentation system works.

> Tip
> See Documentation Build Stages to decrease your build iteration time by understanding how the documentation build process works.

## Unity Builds

Unity builds are supported in the build system. Individual projects must opt in to unity builds in order to use them, however. More information can be found under the Unity Builds Walkthrough doc.
building_index.md
# Omniverse Carbonite SDK The Omniverse Carbonite SDK is the foundational software layer for Omniverse applications, microservices, tools, plugins, Connectors, and SDKs. The Omniverse Carbonite SDK provides the following high-level features: ## ABI Stable Interfaces The core of Carbonite provides tooling that enables developers to define and maintain ABI stable interfaces. We call these interfaces Omniverse Native Interfaces (ONI). These interfaces allow the creation of software components that are binary stable across compiler toolchains, OS revisions, and SDK releases. ## Plugins Carbonite provides a plugin system that allows the functionality of applications to be dynamically extended at runtime. These plugins export zero or more ABI stable interfaces for use by other loaded components. ## Cross-Platform Abstractions In order to be useful on a variety of hardware and operating systems, Carbonite exposes ABI-stable interfaces which provide a uniform abstraction over low-level platform facilities. ## Inline Headers Carbonite contains a rich suite of well tested, efficient, cross-platform, general purpose inline headers. ## Diagnostics Universally useful diagnostic APIs for profiling, crash reporting, and telemetry are provided by Carbonite as first-class citizens. ## Building ### To build on Windows: ```shell build ``` ### To build on Linux: ```shell ./build.sh ``` If your build fails, please reach out to #ct-carbonite on Slack. For faster build times, see [Building Carbonite](docs/Building.html#carb-building). ## Testing ### To run Carbonite’s unit tests on Windows: ```shell _build\windows-x86_64\release\test.unit.exe ``` To understand how to run debug builds of the unit test, run tests on other platforms, or run a subset of tests, see [Testing Carbonite](docs/Testing.html#carb-testing). ## License Carbonite is proprietary software of NVIDIA Corporation. License details can be found [here](docs/LicenseInfo.html#carb-license). # Contents - Manifesto - Changelog - Coding Style Guide - API - License ## Top Level - Carbonite Plugins/Interfaces - Omniverse Native Interfaces - Deploying a Carbonite Application ## Components - Asserts - Audio - Crash Reporter - Function - Carbonite Input Plugin - Overview - Localization - Logging - Memory - Python Bindings - String - Tasking - Telemetry - Unicode ## Guides - ABI Compatibility - Building - Unity Builds - Testing - Packaging - Releasing - Using Valgrind - Carbonite Interface Walkthrough - Creating a New Omniverse Native Interface - Troubleshooting - Extending an Omniverse Native Interface Walkthrough - Using omni.bind ## Documenting - Documentation Guidelines - Restructured Text Guide - C++ Documentation Guide - Python Documentation Guide
build_tools.md
# Build Tools

## Packman

The basic package management tool used on OV; see the packman documentation. A copy of the currently used packman lives in `tools/packman` in most repositories. It can be easily upgraded to newer versions.

## Repo Tools “RepoMan”

These are a set of small utility libraries whose source lives in https://gitlab-master.nvidia.com/omniverse/repo. They are dependencies of your project (you can set their versions by changing `deps/repo-deps.packman.xml`, or by using `repo update`). The first time you build your project, the chosen versions will be downloaded and linked into `_repo`.

They have a single entry point, which is `./repo.sh` (or `repo.bat`) in the root of your repo. Call `repo.bat` to see a list of all available tools. Each command can be explored with the `--help` flag.

Each tool defines its default settings in its `repo_tools.toml` file; e.g. look at `_repo/deps/repo_format/repo_tools.toml` for the format tool settings. `repo_tools.toml` is a tool definition file. A repo can override those settings using its `repo.toml` file: when any tool runs, this config is applied on top of the tool’s `repo_tools.toml`. Notice that `repo.toml` supports applying extra configuration using the `repo.import_configs` setting. This is used to share many settings between extension repos via a common package.

### repo build

Example usage: `repo.bat build -r` or `build.bat -r`

Simply put, this will build your project. In more detail, it will:

1. pull and link dependencies (via packman),
2. set up vscode (generate python stub files and all of the other plumbing needed to get good intellisense/code completion for vscode in your project, as well as with Kit, USD etc.),
3. generate license files,
4. copy and link files,
5. pip install,
6. generate projects,
7. call the toolchain build (which in the case of pure python is really just creating some symlinks).

This is equivalent to calling `./build.sh` (or `build.bat`) from the root of the repo.

### repo docs

Example usage:

```
repo.bat docs
```

Builds documentation from the release build of your project. Document your python code with Google Docstrings; more info at http://nv/repo_docs.

### repo publish_ext

Example usage:

```
repo.bat publish_ext -c release -n
```

This will publish extensions to the registry. *This will normally be called by TC rather than locally.*

### repo package

Example usage:

```
repo.bat package -a
```

Prepares the final package in `_build/packages`. It will build zip/7z artifacts which are passed between CI jobs. We don’t package Kit inside, to save space; instead we prepare a special bat file, `pull_kit_sdk.bat`, to pull it from packman before running tests.

### repo test

Example usage:

```
repo.bat test --config debug
```

A very simple entry point for running your tests locally or on TC. When you do a build, premake will generate bat/sh scripts that run your tests, e.g. `tests-python-omni.kit.widget.collection.sh`. Such a script just starts up Kit, enables the appropriate extensions, and runs their tests. `repo test` runs those scripts, as defined in `repo.toml`. As well as running tests, it will look for particular patterns in the output (stdout/stderr) to fail on, and others to ignore (configurable).

To run tests on TC it uses the `--from-package` flag to unpack the package and run the tests in it. You can do that locally by downloading the TC artifact into `_build/packages` and running with `--from-package`.

### repo source

This allows you to link to local versions of packman dependencies.

### repo format

This will format C++ and Python code according to OV conventions (using black for Python). It can also verify that formatting is correct.

### repo update

This updates your dependencies by modifying the `deps/*.xml` files to the latest versions (major/minor constraints can be specified). This is a local-only step. E.g. to update all repo tools, run `repo update repo_`.

### repo changelog

Future work is to update this so it can automatically generate the changelogs for extensions from git commits; currently it works mostly for Kit-based applications.

### repo build_number

Used by TC only, to generate the full build number.

### repo ci

Used to run TC entry points. Entry points are python scripts written for CI. A call like `repo ci build` uses `repo.toml` to find the script to call, in this example `build.py` in `repo_kit_tools`.

## Tools and CI

Many of these tools are used by Teamcity to execute various parts of the build pipeline (build/package/publish etc.), but many of them can also be executed locally to perform various tasks.

A simplified build pipeline, as used by most tools, is:

```
build -> package -> test -> publish
```

To run locally, just run

```
repo ci
```

which will list all available job entry points. E.g.

```
repo ci build
```

will run the build job. It will also print the actual script it is running:

```
Executing CI job 'build'. Script: ...
```

There are many repo tools; the version of each to use is defined in `deps/repo-deps.packman.xml` in each repository, and the code for these is downloaded to the `_repo` folder.

General notes:

- Some of these tools work with your local source, some of them work with packaged artefacts, and some give you the option of either. Normally TC jobs will be working with packaged artefacts.
- Some of them require you to specify a “config”, e.g. release or debug, usually via `-c`/`--config`. Some default to debug if nothing is passed.
- If you cannot work out why TC jobs are failing, it can be useful to log into the host after the job has completed. Ask on #ct-teamcity; they will allow you access to a host, where you can ssh in.
- Not all TC jobs are equal - some stages of the pipeline are triggered by any commit to an MR, while some (e.g. publishing) might only happen on master.
- Most building/packaging/testing etc. is done separately for the Windows/Linux (and now ARM) platforms, even for Python-only projects.

## VSCode

Kit and OV projects in general are set up to use VSCode. You’ll usually find the following in a `.vscode` folder in your repo (Note: work out when these files are generated/updated - at build time?):

- c_cpp_properties.json
- extensions.json
- global-snippets.code-snippets
- json.code-snippets
- launch.json
- settings.json
- settings.template.json
- tasks.json

To get going, install the VSCode python extension, close VSCode, run `build.bat` for the first time (the `-s` flag is enough), then open the project again. Python intellisense, linting and formatting should work (we bring our own version of python).
Bundles.md
# Bundles User Guide

## Introduction

Prior to version `1.59.0`, the `Read Prim` and `Read Prim into Bundle` nodes read `UsdPrim`s from a `UsdStage` as a single primitive in a bundle, meaning the attributes of the primitive were stored directly in the output bundle. As a consequence, `Read Prim*` node instances were able to read only one primitive at a time.

Since version `1.59.0`, new nodes were introduced that read `UsdPrim`s as multiple primitives in a bundle. To make this feature possible, some important changes had to be made to the bundle interface. The most prominent is that the bundle interface gained the ability to build a hierarchy of bundles: each bundle is allowed to have children, and those child bundles may have children of their own, making bundles recursive. This feature existed for some time in `omni.graph.core`, but it was not accessible through `omni.graph.nodes` until the `1.59.0` release.

## Backwards compatibility

To support backward compatibility, `Read Prim` and `Read Prim into Bundle` remain as they were, but have been deprecated and hidden. Deprecated nodes will produce a warning when instantiated in the graph. Old scenes should work as before, but it is strongly advised to update them to use the new workflow.

## Inspecting recursive bundles

Unfortunately, there are no robust tools to inspect the recursive content of a bundle yet. The `Bundle Inspector` node can report the number of children in a bundle, but nothing more for now. There are plans to make `Bundle Inspector` print the full hierarchy of a bundle.

## Primitive in a Bundle

There is no formal definition of a *primitive in a bundle*. A bundle is *considered* to carry a primitive if the `sourcePrimPath` and `sourcePrimType` attributes are present in it. Because of this ambiguity, the following terms are used interchangeably:

- Single Primitive in a Bundle (SPiB), Single Bundle in a Bundle (SBiB), or Primitive in a Bundle
- Multiple Primitives in a Bundle (MPiB) and Multiple Bundles in a Bundle (MBiB)

## Single Primitive in a Bundle

A bundle is considered a Single Primitive in a Bundle when the attributes of the `UsdPrim` are stored directly in the bundle. There are no children or other levels of indirection in that bundle.

## Multiple Primitives in a Bundle - the bundle hierarchy

A bundle is considered to hold Multiple Primitives in a Bundle when there is a hierarchy of bundles. The top-level bundle does not carry any primitives; it serves as a placeholder for children. Note that the bundle interface is flexible enough to mirror the hierarchy of the `UsdStage`.

## New nodes - Multiple Primitives in a Bundle

The new replacements for the deprecated nodes are:

- `Read Prims` - Reads `UsdPrim`s as multiple primitives and outputs a bundle.
- `Read Prim Attributes` - Reads a single `UsdPrim` and exposes its attributes as dynamic attributes. It does not output a bundle.

There are also new nodes that make working with Multiple Primitives in a Bundle possible:

- `Extract Prim` - Extracts a child bundle as a single primitive from a multiple-primitives input, using the primitive path (the `sourcePrimPath` attribute).
- `Get Prims` - Passes to the output those primitives from the input that match specific criteria.

## Extract Prim - extracting a primitive

The `Extract Prim` node takes multiple primitives as input, then searches for the single child with a specific `sourcePrimPath` attribute. It outputs the child that was found as a single primitive in a bundle.
`Extract Prim` gives access to a primitive’s attributes, or to its children.

## Get Prims - processing primitives

`Get Prims` filters an input containing multiple primitives. The input primitives can be filtered based on `path` or `type`, and simple wild-card expressions make the filters flexible. In the example below, `Get Prims` picks only two out of three children and exposes them in the output: it searches for primitives with a path that matches the expression `/World/C*`, so only `/World/Cube` and `/World/Cone` are carried to the output, while `/World/Mesh` is ignored. In other words, `Get Prims` hands over to the output only the specific primitives from the input that match; a minimal sketch of this matching behavior follows.
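To illustrate the path-matching semantics described above, here is a minimal, self-contained Python sketch. It models a bundle as a plain class and assumes `Get Prims` uses glob-style wild-cards (as the `/World/C*` example suggests); the `Bundle` class and `get_prims` helper are hypothetical illustrations, not the omni.graph API.

```python
from fnmatch import fnmatchcase


class Bundle:
    """Hypothetical stand-in for a bundle: attributes plus child bundles
    (a hierarchy of children corresponds to MPiB)."""

    def __init__(self, attributes=None, children=None):
        self.attributes = attributes or {}
        self.children = children or []

    def carries_primitive(self):
        # Per the guide, a bundle "carries a primitive" when both the
        # sourcePrimPath and sourcePrimType attributes are present.
        return {"sourcePrimPath", "sourcePrimType"}.issubset(self.attributes)


def get_prims(bundle, path_pattern):
    """Glob-style filtering of child primitives, as in the /World/C* example."""
    return [
        child
        for child in bundle.children
        if child.carries_primitive()
        and fnmatchcase(child.attributes["sourcePrimPath"], path_pattern)
    ]


# Top-level bundle (MPiB): a placeholder holding three child primitives.
world = Bundle(children=[
    Bundle({"sourcePrimPath": "/World/Cube", "sourcePrimType": "Cube"}),
    Bundle({"sourcePrimPath": "/World/Cone", "sourcePrimType": "Cone"}),
    Bundle({"sourcePrimPath": "/World/Mesh", "sourcePrimType": "Mesh"}),
])

matched = get_prims(world, "/World/C*")
print([c.attributes["sourcePrimPath"] for c in matched])
# ['/World/Cube', '/World/Cone'] -- /World/Mesh is ignored
```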
buttons.md
# Buttons and Images

## Common Styling for Buttons and Images

Here is a list of common styles you can customize on Buttons and Images:

> border_color (color): the border color if the button or image background has a border
> border_radius (float): the border radius if the user wants to round the button or image
> border_width (float): the border width if the button or image background has a border
> margin (float): the distance between the widget content and the parent widget defined boundary
> margin_width (float): the width distance between the widget content and the parent widget defined boundary
> margin_height (float): the height distance between the widget content and the parent widget defined boundary

## Button

The Button widget provides a command button. Click a button to execute a command. The command button is perhaps the most commonly used widget in any graphical user interface. It is rectangular and typically displays a text label or image describing its action.

In addition to the common style for Buttons and Images, here is a list of styles you can customize on Button:

> background_color (color): the background color of the button
> padding (float): the distance between the content widgets (e.g. Image or Label) and the border of the button
> stack_direction (enum): defines how the content widgets (e.g. Image or Label) on the button are placed. There are 6 types of stack_direction supported:

- ui.Direction.TOP_TO_BOTTOM : layout from top to bottom
- ui.Direction.BOTTOM_TO_TOP : layout from bottom to top
- ui.Direction.LEFT_TO_RIGHT : layout from left to right
- ui.Direction.RIGHT_TO_LEFT : layout from right to left
- ui.Direction.BACK_TO_FRONT : layout from back to front
- ui.Direction.FRONT_TO_BACK : layout from front to back

To control the style of the button content, you can customize `Button.Image` for an image on the button and `Button.Label` for text on the button.

Here is an example showing a list of buttons with different types of stack directions:

![Code Result](Buttons and Images_0.png)

```python
from omni.ui import color as cl

direction_flags = {
    "ui.Direction.TOP_TO_BOTTOM": ui.Direction.TOP_TO_BOTTOM,
    "ui.Direction.BOTTOM_TO_TOP": ui.Direction.BOTTOM_TO_TOP,
    "ui.Direction.LEFT_TO_RIGHT": ui.Direction.LEFT_TO_RIGHT,
    "ui.Direction.RIGHT_TO_LEFT": ui.Direction.RIGHT_TO_LEFT,
    "ui.Direction.BACK_TO_FRONT": ui.Direction.BACK_TO_FRONT,
    "ui.Direction.FRONT_TO_BACK": ui.Direction.FRONT_TO_BACK,
}

with ui.ScrollingFrame(
    height=50,
    vertical_scrollbar_policy=ui.ScrollBarPolicy.SCROLLBAR_ALWAYS_OFF,
    style={"ScrollingFrame": {"background_color": cl.transparent}},
):
    with ui.HStack():
        for key, value in direction_flags.items():
            button_style = {"Button": {"stack_direction": value}}
            ui_button = ui.Button(
                key, image_url="resources/icons/Nav_Flymode.png", image_width=24, height=40, style=button_style
            )
```

Here is an example of two buttons. Pressing the second button makes the name of the first button longer,
and pressing the first button makes its own name shorter:

![Code Result](Buttons and Images_1.png)

```python
from omni.ui import color as cl

style_system = {
    "Button": {
        "background_color": cl(0.85),
        "border_color": cl.yellow,
        "border_width": 2,
        "border_radius": 5,
        "padding": 5,
    },
    "Button.Label": {"color": cl.red, "font_size": 17},
    "Button:hovered": {"background_color": cl("#E5F1FB"), "border_color": cl("#0078D7"), "border_width": 2.0},
    "Button:pressed": {"background_color": cl("#CCE4F7"), "border_color": cl("#005499"), "border_width": 2.0},
}

def make_longer_text(button):
    """Set the text of the button longer"""
    button.text = "Longer " + button.text

def make_shorter_text(button):
    """Set the text of the button shorter"""
    splitted = button.text.split(" ", 1)
    button.text = splitted[1] if len(splitted) > 1 else splitted[0]

with ui.HStack(style=style_system):
    btn_with_text = ui.Button("Text", width=0)
    ui.Button("Press me", width=0, clicked_fn=lambda b=btn_with_text: make_longer_text(b))
    btn_with_text.set_clicked_fn(lambda b=btn_with_text: make_shorter_text(b))
```

Here is an example where you can tweak most of the Button’s style and see the results:

![Code Result](Buttons and Images_2.png)

```python
from omni.ui import color as cl

style = {
    "Button": {"stack_direction": ui.Direction.TOP_TO_BOTTOM},
    "Button.Image": {
        "color": cl("#99CCFF"),
        "alignment": ui.Alignment.CENTER,
    },
    "Button.Label": {"alignment": ui.Alignment.CENTER},
}

def direction(model, button, style=style):
    value = model.get_item_value_model().get_value_as_int()
    direction = (
        ui.Direction.TOP_TO_BOTTOM,
        ui.Direction.BOTTOM_TO_TOP,
        ui.Direction.LEFT_TO_RIGHT,
        ui.Direction.RIGHT_TO_LEFT,
        ui.Direction.BACK_TO_FRONT,
        ui.Direction.FRONT_TO_BACK,
    )[value]
    style["Button"]["stack_direction"] = direction
    button.set_style(style)

def align(model, button, image, style=style):
    value = model.get_item_value_model().get_value_as_int()
    alignment = (
        ui.Alignment.LEFT_TOP,
        ui.Alignment.LEFT_CENTER,
        ui.Alignment.LEFT_BOTTOM,
        ui.Alignment.CENTER_TOP,
        ui.Alignment.CENTER,
        ui.Alignment.CENTER_BOTTOM,
        ui.Alignment.RIGHT_TOP,
        ui.Alignment.RIGHT_CENTER,
        ui.Alignment.RIGHT_BOTTOM,
    )[value]
    if image:
        style["Button.Image"]["alignment"] = alignment
    else:
        style["Button.Label"]["alignment"] = alignment
    button.set_style(style)

def layout(model, button, padding, style=style):
    if padding == 0:
        padding = "padding"
    elif padding == 1:
        padding = "margin"
    elif padding == 2:
        padding = "margin_width"
    else:
        padding = "margin_height"
    style["Button"][padding] = model.get_value_as_float()
    button.set_style(style)

def spacing(model, button):
    button.spacing = model.get_value_as_float()

button = ui.Button("Label", style=style, width=64, height=64)

with ui.HStack(width=ui.Percent(50)):
    ui.Label('"Button": {"stack_direction"}', name="text")
    options = (
        0,
        "TOP_TO_BOTTOM",
        "BOTTOM_TO_TOP",
        "LEFT_TO_RIGHT",
        "RIGHT_TO_LEFT",
        "BACK_TO_FRONT",
        "FRONT_TO_BACK",
    )
    model = ui.ComboBox(*options).model
    model.add_item_changed_fn(lambda m, i, b=button: direction(m, b))

alignment = (
    4,
    "LEFT_TOP",
    "LEFT_CENTER",
    "LEFT_BOTTOM",
    "CENTER_TOP",
    "CENTER",
    "CENTER_BOTTOM",
    "RIGHT_TOP",
    "RIGHT_CENTER",
    "RIGHT_BOTTOM",
)

with ui.HStack(width=ui.Percent(50)):
    ui.Label('"Button.Image": {"alignment"}', name="text")
    model = ui.ComboBox(*alignment).model
    model.add_item_changed_fn(lambda m, i, b=button: align(m, b, 1))

with ui.HStack(width=ui.Percent(50)):
    ui.Label('"Button.Label": {"alignment"}', name="text")
    model = ui.ComboBox(*alignment).model
    model.add_item_changed_fn(lambda m, i, b=button: align(m, b, 0))

with ui.HStack(width=ui.Percent(50)):
    ui.Label("padding", name="text")
    model = ui.FloatSlider(min=0, max=500).model
    model.add_value_changed_fn(lambda m, b=button: layout(m, b, 0))

with ui.HStack(width=ui.Percent(50)):
    ui.Label("margin", name="text")
    model = ui.FloatSlider(min=0, max=500).model
    model.add_value_changed_fn(lambda m, b=button: layout(m, b, 1))

with ui.HStack(width=ui.Percent(50)):
    ui.Label("margin_width", name="text")
    model = ui.FloatSlider(min=0, max=500).model
    model.add_value_changed_fn(lambda m, b=button: layout(m, b, 2))

with ui.HStack(width=ui.Percent(50)):
    ui.Label("margin_height", name="text")
    model = ui.FloatSlider(min=0, max=500).model
    model.add_value_changed_fn(lambda m, b=button: layout(m, b, 3))

with ui.HStack(width=ui.Percent(50)):
    ui.Label("Button.spacing", name="text")
    model = ui.FloatSlider(min=0, max=50).model
    model.add_value_changed_fn(lambda m, b=button: spacing(m, b))
```

## Radio Button

RadioButton is the widget that allows the user to choose only one option from a predefined set of mutually exclusive options. RadioButtons are arranged in collections of two or more buttons within a RadioCollection, which is the central component of the system and controls the behavior of all the RadioButtons in the collection.

In addition to the common style for Buttons and Images, here is a list of styles you can customize on RadioButton:

> background_color (color): the background color of the RadioButton
> padding (float): the distance between the RadioButton content widget (e.g. Image) and the RadioButton border

To control the style of the button image, you can customize `RadioButton.Image`. For example, RadioButton.Image’s image_url defines the image when it’s not checked. You can define the image for the checked status with the `RadioButton.Image:checked` style.

Here is an example of a RadioCollection which contains 5 RadioButtons with style. There is also an IntSlider which shares the model with the RadioCollection, so that when the RadioButton value or the IntSlider value changes, the other one updates too.

```python
from omni.ui import color as cl

style = {
    "RadioButton": {
        "background_color": cl.cyan,
        "margin_width": 2,
        "padding": 1,
        "border_radius": 0,
        "border_color": cl.white,
        "border_width": 1.0,
    },
    "RadioButton.Image": {
        "image_url": "../exts/omni.kit.documentation.ui.style/icons/radio_off.svg",
    },
    "RadioButton.Image:checked": {
        "image_url": "../exts/omni.kit.documentation.ui.style/icons/radio_on.svg",
    },
}

collection = ui.RadioCollection()
for i in range(5):
    with ui.HStack(style=style):
        ui.RadioButton(radio_collection=collection, width=30, height=30)
        ui.Label(f"Option {i}", name="text")

ui.IntSlider(collection.model, min=0, max=4)
```

## ToolButton

ToolButton is functionally similar to Button, but provides a model that determines if the button is checked. This button toggles between checked (on) and unchecked (off) when the user clicks it.

Here is an example of a ToolButton:

```python
def update_label(model, label):
    checked = model.get_value_as_bool()
    label.text = f"The check status button is {checked}"

with ui.VStack(spacing=5):
    model = ui.ToolButton(text="click", name="toolbutton", width=100).model
    label = ui.Label("The check status button is False", name="text")
    model.add_value_changed_fn(lambda m, l=label: update_label(m, l))
```

## ColorWidget

The ColorWidget is a button that displays the color from the item model and can open a picker window. The color dialog’s function is to allow users to choose a color.
In addition to the common style for Buttons and Images, here is a list of styles you can customize on ColorWidget:

> background_color (color): the background color of the tooltip widget shown when hovering over the ColorWidget
> color (color): the text color of the tooltip widget shown when hovering over the ColorWidget

Here is an example of a ColorWidget with three FloatFields. The ColorWidget model is shared with the FloatFields, so that users can click and edit the field values to change the ColorWidget’s color, and value changes of the ColorWidget are also reflected in the FloatFields.

Here is an example of a ColorWidget with three FloatDrags. The ColorWidget model is shared with the FloatDrags, so that users can drag the field values to change the color, and value changes of the ColorWidget are also reflected in the FloatDrags.

Here is an example of a ColorWidget with a ComboBox. The ColorWidget model is shared with the ComboBox. Only value changes of the ColorWidget are reflected in the ComboBox.

Here is an interactive example with USD. You can create a Mesh in the Stage. Choose Pixar Storm as the renderer. Select the mesh and use this ColorWidget to change the color of the mesh. You can use Ctrl+z for undoing and Ctrl+y for redoing.

```python
from omni.ui import color as cl

with ui.HStack(spacing=5):
    color_model = ui.ColorWidget(width=0, height=0, style={"ColorWidget": {"border_width": 2, "border_color": cl.white, "border_radius": 4, "color": cl.pink, "margin": 2}}).model
    for item in color_model.get_item_children():
        component = color_model.get_item_value_model(item)
        ui.FloatField(component)
```

```python
from omni.ui import color as cl

with ui.HStack(spacing=5):
    color_model = ui.ColorWidget(0.125, 0.25, 0.5, width=0, height=0, style={"background_color": cl.pink}).model
    for item in color_model.get_item_children():
        component = color_model.get_item_value_model(item)
        ui.FloatDrag(component, min=0, max=1)
```

```python
with ui.HStack(spacing=5):
    color_model = ui.ColorWidget(width=0, height=0).model
    ui.ComboBox(color_model)
```

```python
import weakref

import omni.kit.commands
import omni.usd
from omni.usd.commands import UsdStageHelper
from pxr import Gf, UsdGeom


class SetDisplayColorCommand(omni.kit.commands.Command, UsdStageHelper):
    """
    Change prim display color undoable **Command**.

    Unlike ChangePropertyCommand, it can undo property creation.

    Args:
        gprim (Gprim): Prim to change display color on.
        color: Value to change to.
        prev: Value to undo to.
    """

    def __init__(self, gprim: UsdGeom.Gprim, color: any, prev: any):
        self._gprim = gprim
        self._color = color
        self._prev = prev

    def do(self):
        color_attr = self._gprim.CreateDisplayColorAttr()
        color_attr.Set([self._color])

    def undo(self):
        color_attr = self._gprim.GetDisplayColorAttr()
        if self._prev is None:
            color_attr.Clear()
        else:
            color_attr.Set([self._prev])


omni.kit.commands.register(SetDisplayColorCommand)


class FloatModel(ui.SimpleFloatModel):
    def __init__(self, parent):
        super().__init__()
        self._parent = weakref.ref(parent)

    def begin_edit(self):
        parent = self._parent()
        parent.begin_edit(None)

    def end_edit(self):
        parent = self._parent()
        parent.end_edit(None)


class USDColorItem(ui.AbstractItem):
    def __init__(self, model):
        super().__init__()
        self.model = model


class USDColorModel(ui.AbstractItemModel):
    def __init__(self):
        super().__init__()

        # Create root model
        self._root_model = ui.SimpleIntModel()
        self._root_model.add_value_changed_fn(lambda a: self._item_changed(None))

        # Create three models per component
        self._items = [USDColorItem(FloatModel(self)) for i in range(3)]
        for item in self._items:
            item.model.add_value_changed_fn(lambda a, item=item: self._on_value_changed(item))

        # Omniverse contexts
        self._usd_context = omni.usd.get_context()
        self._selection = self._usd_context.get_selection()
        self._events = self._usd_context.get_stage_event_stream()
        self._stage_event_sub = self._events.create_subscription_to_pop(
            self._on_stage_event, name="omni.example.ui ColorWidget stage update"
        )

        # Privates
        self._subscription = None
        self._gprim = None
        self._prev_color = None
        self._edit_mode_counter = 0

    def _on_stage_event(self, event):
        """Called with subscription to pop"""
        if event.type == int(omni.usd.StageEventType.SELECTION_CHANGED):
            self._on_selection_changed()

    def _on_selection_changed(self):
        """Called when the user changes the selection"""
        selection = self._selection.get_selected_prim_paths()
        stage = self._usd_context.get_stage()
        self._subscription = None
        self._gprim = None

        # When TC runs tests, it's possible that stage is None
        if selection and stage:
            self._gprim = UsdGeom.Gprim.Get(stage, selection[0])
            if self._gprim:
                color_attr = self._gprim.GetDisplayColorAttr()
                usd_watcher = omni.usd.get_watcher()
                self._subscription = usd_watcher.subscribe_to_change_info_path(
                    color_attr.GetPath(), self._on_usd_changed
                )

        # Change the widget color
        self._on_usd_changed()

    def _on_value_changed(self, item):
        """Called when the submodel is changed"""
        if not self._gprim:
            return
        if self._edit_mode_counter > 0:
            # Change USD only if we are in edit mode.
            color_attr = self._gprim.CreateDisplayColorAttr()
            color = Gf.Vec3f(
                self._items[0].model.get_value_as_float(),
                self._items[1].model.get_value_as_float(),
                self._items[2].model.get_value_as_float(),
            )
            color_attr.Set([color])
        self._item_changed(item)

    def _on_usd_changed(self, path=None):
        """Called with UsdWatcher when something in USD is changed"""
        color = self._get_current_color() or Gf.Vec3f(0.0)
        for i in range(len(self._items)):
            self._items[i].model.set_value(color[i])

    def _get_current_color(self):
        """Returns color of the current object"""
        if self._gprim:
            color_attr = self._gprim.GetDisplayColorAttr()
            if color_attr:
                color_array = color_attr.Get()
                if color_array:
                    return color_array[0]

    def get_item_children(self, item):
        """Reimplemented from the base class"""
        return self._items

    def get_item_value_model(self, item, column_id):
        """Reimplemented from the base class"""
        if item is None:
            return self._root_model
        return item.model

    def begin_edit(self, item):
        """
        Reimplemented from the base class.
        Called when the user starts editing.
        """
        if self._edit_mode_counter == 0:
            self._prev_color = self._get_current_color()
        self._edit_mode_counter += 1

    def end_edit(self, item):
        """
        Reimplemented from the base class.
        Called when the user finishes editing.
        """
        self._edit_mode_counter -= 1
        if not self._gprim or self._edit_mode_counter > 0:
            return
        color = Gf.Vec3f(
            self._items[0].model.get_value_as_float(),
            self._items[1].model.get_value_as_float(),
            self._items[2].model.get_value_as_float(),
        )
        omni.kit.commands.execute("SetDisplayColor", gprim=self._gprim, color=color, prev=self._prev_color)


with ui.HStack(spacing=5):
    ui.ColorWidget(USDColorModel(), width=0)
    ui.Label("Interactive ColorWidget with USD", name="text")
```

## Image

The Image type displays an image. The source of the image is specified as a URL using the source property. By default, specifying the width and height of the item causes the image to be scaled to fit that size. This behavior can be changed by setting the `fill_policy` property, allowing the image to be stretched or scaled instead. The alignment property controls how the scaled image is aligned in the parent defined space.

In addition to the common style for Buttons and Images, here is a list of styles you can customize on Image:

> image_url (str): the url path of the image source
> color (color): the overlay color of the image
> corner_flag (enum): defines which corner or corners to round. The supported corner flags are the same as Rectangle, since Image is eventually an image on top of a rectangle under the hood.
> fill_policy (enum): defines how the Image fills the rectangle. There are three types of fill_policy:

- ui.FillPolicy.STRETCH: stretch the image to fill the entire rectangle.
- ui.FillPolicy.PRESERVE_ASPECT_FIT: scale the image uniformly to fit, without stretching or cropping.
- ui.FillPolicy.PRESERVE_ASPECT_CROP: scale the image uniformly to fill, cropping if necessary.

> alignment (enum): defines how the image is positioned in the parent defined space. There are 9 alignments supported, which are quite self-explanatory:
- ui.Alignment.LEFT_CENTER
- ui.Alignment.LEFT_TOP
- ui.Alignment.LEFT_BOTTOM
- ui.Alignment.RIGHT_CENTER
- ui.Alignment.RIGHT_TOP
- ui.Alignment.RIGHT_BOTTOM
- ui.Alignment.CENTER
- ui.Alignment.CENTER_TOP
- ui.Alignment.CENTER_BOTTOM

By default, the Image is scaled uniformly to fit without stretching or cropping (ui.FillPolicy.PRESERVE_ASPECT_FIT) and aligned to ui.Alignment.CENTER:

![Code Result](Buttons and Images_9.png)

```python
source = "resources/desktop-icons/omniverse_512.png"
with ui.Frame(width=200, height=100):
    ui.Image(source)
```

Further example variations (result images omitted) include:

- The image stretched to fit and aligned to the left.
- The image scaled uniformly to fill, cropping if necessary, and aligned to the top.
- The image scaled uniformly to fit without cropping and aligned to the right. Notice the fill_policy and alignment are defined in style.
- The image with rounded corners and an overlaid color. Note image_url is in the style dictionary.
- The image scaled uniformly to fill, cropping if necessary, and aligned to the bottom, with a blue border.
- The image arranged in a HStack with different margin styles defined. Note image_url is in the style dict.

```python
styles = [
    {
        "": {"image_url": "resources/icons/Nav_Walkmode.png"},
        ":hovered": {"image_url": "resources/icons/Nav_Flymode.png"},
    },
    {
        "": {"image_url": "resources/icons/Move_local_64.png"},
        ":hovered": {"image_url": "resources/icons/Move_64.png"},
    },
    {
        "": {"image_url": "resources/icons/Rotate_local_64.png"},
        ":hovered": {"image_url": "resources/icons/Rotate_global.png"},
    },
]

def set_image(model, image):
    value = model.get_item_value_model().get_value_as_int()
    image.set_style(styles[value])

with ui.Frame(height=80):
    with ui.VStack():
        image = ui.Image(width=64, height=64, style=styles[0])
        with ui.HStack(width=ui.Percent(50)):
            ui.Label("Select a texture to display", name="text")
            model = ui.ComboBox(0, "Navigation", "Move", "Rotate").model
            model.add_item_changed_fn(lambda m, i, im=image: set_image(m, im))
```

## ImageWithProvider

ImageWithProvider also displays an image, just like Image. It is a much more advanced image widget: ImageWithProvider blocks until the image is loaded, while Image doesn’t block. Image can sometimes blink, because the image may not be loaded yet when the first frame is created. Users are recommended to use ImageWithProvider if the UI is updated often, because it doesn’t blink when recreated.

It has almost the same style list as Image, except the fill_policy has different enum values:

> fill_policy (enum): defines how the Image fills the rectangle. There are three types of fill_policy:

- ui.IwpFillPolicy.IWP_STRETCH: stretch the image to fill the entire rectangle.
- ui.IwpFillPolicy.IWP_PRESERVE_ASPECT_FIT: scale the image uniformly to fit, without stretching or cropping.
- ui.IwpFillPolicy.IWP_PRESERVE_ASPECT_CROP: scale the image uniformly to fill, cropping if necessary.

The image source comes from an `ImageProvider`, which could be a `ByteImageProvider`, `RasterImageProvider` or `VectorImageProvider`. `RasterImageProvider` and `VectorImageProvider` use image urls like Image.

Here is an example taken from Image. Notice the fill_policy value difference.

```python
from omni.ui import color as cl

source = "resources/desktop-icons/omniverse_512.png"
with ui.Frame(width=200, height=100):
    ui.ImageWithProvider(
        source,
        style={
            "ImageWithProvider": {
                "border_width": 5,
                "border_color": cl("#1ab3ff"),
                "corner_flag": ui.CornerFlag.TOP,
                "border_radius": 15,
                "fill_policy": ui.IwpFillPolicy.IWP_PRESERVE_ASPECT_CROP,
                "alignment": ui.Alignment.CENTER_BOTTOM,
            }
        },
    )
```

`ByteImageProvider` is really useful for creating gradient images.
Here is an example:

```python
self._byte_provider = ui.ByteImageProvider()
self._byte_provider.set_bytes_data(
    [
        255, 0, 0, 255,      # red
        255, 255, 0, 255,    # yellow
        0, 255, 0, 255,      # green
        0, 255, 255, 255,    # cyan
        0, 0, 255, 255,      # blue
    ],
    [5, 1],  # size
)
with ui.Frame(height=20):
    ui.ImageWithProvider(self._byte_provider, fill_policy=ui.IwpFillPolicy.IWP_STRETCH)
```

## Plot

The Plot class displays a line or histogram image. The data of the image is specified as a data array or a provider function.

In addition to the common style for Buttons and Images, here is a list of styles you can customize on Plot:

> color (color): the color of the plot - the line color in the line typed plot, or the rectangle bar color in the histogram typed plot
> selected_color (color): the selected color of the plot - the dot in the line typed plot, or the rectangle bar in the histogram typed plot
> background_color (color): the background color of the plot
> secondary_color (color): the color of the text and the border of the text box which shows the plot selection value
> background_selected_color (color): the background color of the text box which shows the plot selection value

Here are a couple of examples of Plots:

```python
import math

from omni.ui import color as cl

data = []
for i in range(360):
    data.append(math.cos(math.radians(i)))

def on_data_provider(index):
    return math.sin(math.radians(index))

with ui.Frame(height=20):
    with ui.HStack():
        plot_1 = ui.Plot(
            ui.Type.LINE,
            -1.0,
            1.0,
            *data,
            width=360,
            height=100,
            style={
                "Plot": {
                    "color": cl.red,
                    "background_color": cl(0.08),
                    "secondary_color": cl("#aa1111"),
                    "selected_color": cl.green,
                    "background_selected_color": cl.white,
                    "border_width": 5,
                    "border_color": cl.blue,
                    "border_radius": 20,
                }
            },
        )

        ui.Spacer(width=20)

        plot_2 = ui.Plot(
            ui.Type.HISTOGRAM,
            -1.0,
            1.0,
            on_data_provider,
            360,
            width=360,
            height=100,
            style={
                "Plot": {
                    "color": cl.blue,
                    "background_color": cl("#551111"),
                    "secondary_color": cl("#11AA11"),
                    "selected_color": cl(0.67),
                    "margin_height": 10,
                }
            },
        )
        plot_2.value_stride = 6
```
c-support_Overview.md
# Overview

## Overview

### Extension : omni.kit.commands-1.4.9

### Documentation Generated : May 08, 2024

Commands and Undo/Redo system.

A **Command** is an undo/redo system primitive. It is a class which gets instantiated, and the `do` method is called on the instance. The instance is stored in the undo stack if it contains an `undo` method. When undo is called, the `undo` method will be executed on the same instance.

To create a command, derive from `omni.kit.commands.Command` and add a `do` method and, optionally, an `undo` method. Taking the `redo` operation into account, the `do()`/`undo()` methods may be called any number of times. You can also create a **command** with only the `do()` method, which means it is not undoable and won’t be added to the `undo stack`.

Here is a simple example:

```python
import omni.kit.commands

class NumIncrement(omni.kit.commands.Command):
    def __init__(self, num: int):
        self._num = num

    def do(self):
        self._num = self._num + 1
        return self._num  # Result can be optionally returned

    def undo(self):
        self._num = self._num - 1
```

Here we create a **command** class `NumIncrement`. By inheriting from `omni.kit.commands.Command`, it is automatically discovered and registered by the **command system**. If a command is inside one of the public extension modules, you can also register it explicitly with:

```python
omni.kit.commands.register(NumIncrement)
```

To execute a command, one can call:

```python
x = omni.kit.commands.execute("NumIncrement", num=10)
```

from anywhere. Commands may also return values from the `do` method.

## Guidelines

There are some useful rules to follow when creating a command:

1. All arguments must be simple types (numbers, strings, lists, etc.) to enable serialization and calling of commands from a console.
2. Try to make commands as simple as possible. Compose complex commands out of other commands using grouping, to minimize side effects.
3. Write at least one test for each command!
4. To signal failure from a command, raise an error. This will automatically trigger the command (and any descendants) to call `undo` if they define it.

## Groups

Commands can be grouped, meaning that executing the group will execute all of them, and the `undo` and `redo` operations will also cover the whole group. First of all, commands executed inside of a command are grouped automatically:

```python
import omni.kit.commands

class SpawnFewPrims(omni.kit.commands.Command):
    def do(self):
        omni.kit.commands.execute("CreatePrimWithDefaultXform", prim_type="Sphere")
        omni.kit.commands.execute("CreatePrimWithDefaultXform", prim_type="Cone")

    def undo(self):
        pass
```

In this example, you don’t even need to write an `undo` method: undoing the command will automatically call undo on the nested commands. But you must still define an `undo` method to hint that the command is undoable.

One can also explicitly group commands using the API:

```python
import omni.kit.commands
import omni.kit.undo

omni.kit.undo.begin_group()
omni.kit.commands.execute("CreatePrimWithDefaultXform", prim_type="Sphere")
omni.kit.commands.execute("CreatePrimWithDefaultXform", prim_type="Cone")
omni.kit.undo.end_group()

# or similarly:
with omni.kit.undo.group():
    omni.kit.commands.execute("CreatePrimWithDefaultXform", prim_type="Sphere")
    omni.kit.commands.execute("CreatePrimWithDefaultXform", prim_type="Cone")
```

## C++ Support

Commands were originally written in (and only available to use from) Python, but they can now be registered, deregistered, executed, and undone/redone from C++.
- Commands registered from C++ should always be deregistered from C++ (although deregistering them from Python may not be fatal).
- Commands registered from Python should always be deregistered from Python (although deregistering them from C++ may not be fatal).
- All C++ commands have an ‘undo’ function on the Python side (unlike Python commands, which can be created without undo functionality), so when executed they will always be placed on the undo/redo stack.

## Command API Reference

API (python)
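Because every executed command that defines `undo` lands on the same undo/redo stack, a command can be rolled back and re-applied from Python regardless of where it was registered. A minimal sketch, assuming a Kit environment where `omni.kit.undo` is available and using the `CreatePrimWithDefaultXform` command from the grouping examples above:

```python
import omni.kit.commands
import omni.kit.undo

# Execute an undoable command; it is placed on the undo/redo stack.
omni.kit.commands.execute("CreatePrimWithDefaultXform", prim_type="Sphere")

# Roll the command back, then apply it again from the stack.
omni.kit.undo.undo()
omni.kit.undo.redo()
```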
c-usage-examples_Overview.md
# Overview — Kit Extension Template C++ 1.0.0 documentation

## Overview

An example C++ extension that can be used as a reference/template for creating new extensions. Demonstrates how to create a C++ object that will startup / shutdown along with the extension.

## C++ Usage Examples

### Defining Extensions

```c++
// When this extension is enabled, any class that derives from omni.ext.IExt
// will be instantiated and 'onStartup(extId)' called. When the extension is
// later disabled, a matching 'onShutdown()' call will be made on the object.
class ExampleCppHelloWorldExtension : public omni::ext::IExt
{
public:
    void onStartup(const char* extId) override
    {
        printf("ExampleCppHelloWorldExtension starting up (ext_id: %s).\n", extId);

        if (omni::kit::IApp* app = carb::getFramework()->acquireInterface<omni::kit::IApp>())
        {
            // Subscribe to update events.
            m_updateEventsSubscription = carb::events::createSubscriptionToPop(
                app->getUpdateEventStream(), [this](carb::events::IEvent*) { onUpdate(); });
        }
    }

    void onShutdown() override
    {
        printf("ExampleCppHelloWorldExtension shutting down.\n");

        // Unsubscribe from update events.
        m_updateEventsSubscription = nullptr;
    }

    void onUpdate()
    {
        if (m_updateCounter % 1000 == 0)
        {
            printf("Hello from the omni.example.cpp.hello_world extension! %d updates counted.\n", m_updateCounter);
        }
        m_updateCounter++;
    }

private:
    carb::events::ISubscriptionPtr m_updateEventsSubscription;
    int m_updateCounter = 0;
};
```
cad-converter-config-file-inputs_Overview.md
# Overview

omni.kit.converter.hoops_core uses the HOOPS Exchange SDK to convert a variety of CAD data formats to USD. When this extension loads, it will register itself with the CAD Converter service (omni.services.convert.cad) if it is available.

## CAD Converter Config File Inputs

Conversion options are configured by supplying a JSON file. Below are the available configuration options.

### JSON Converter Settings

**Format**: “setting name” : default value

```json
"sConfigFilePath": "C:/test/sample_config.json"
```

Configuration file path.

```json
"sFilePathIn": "C:/cad_part.jt"
```

Input file path. **NOTE**: This will override the import path argument from the service.

```json
"sFilePathOut": "C:/test/output/cad_part.usd"
```

Output file path. Supported extensions are .usd, .usda, and .usdc:

- usd = USD (binary by default)
- usda = USD ASCII
- usdc = USD Crate (binary)

**NOTE**: This will override the output path argument from the service.

```json
"bInstancing": true
```

Controls whether or not the USD model uses instances and prototypes. If false, then there is no instancing at all.

```json
"bGlobalXforms": false
```

When instancing = false, this flag controls whether global transforms are composited. If false, local transforms are applied.

```json
"iTessLOD": 2
```

Tessellation level of detail:

- iTessLOD0 = ExtraLow, ChordHeightRatio=50, AngleToleranceDeg=40
- iTessLOD1 = Low, ChordHeightRatio=600, AngleToleranceDeg=40
- iTessLOD2 = Medium, ChordHeightRatio=2000, AngleToleranceDeg=40
- iTessLOD3 = High, ChordHeightRatio=5000, AngleToleranceDeg=30
- iTessLOD4 = ExtraHigh, ChordHeightRatio=10000, AngleToleranceDeg=20

### Configuration Flags

- **bAccurateSu2faceCurvatures**: If true, respect surface curvature to control triangle elongation directions.
- **bOptimize**: Flag to invoke USD scene optimization.
- **bDedup**: If true, weld mesh elements in appropriate groups.
- **bUseMaterials**: Setting for how part colors are converted to USD.
  - If true, the converter creates OmniPBR materials with the color set as an attribute.
  - If false, the converter will convert colors to USD `displayColor` primvars.
- **bUseNormals**: If true, normals are passed to USD; if false, they are not.
- **bReportProgress**: If true, import/export progress is reported.
- **bUseCurrentStage**: If true, use the currently opened USD stage.

### Full sample_config.json

```json
{
    "sConfigFilePath": "C:/test/sample_config.json",
    "sFilePathIn": "C:/test/cad_part.jt",
    "sFilePathOut": "C:/test/converted/cad_part.usd",
    "bInstancing": true,
    "bGlobalXforms": false,
    "bOptimize": true,
    "bDedup": true,
    "iTessLOD": 2,
    "bAccurateSu2faceCurvatures": true,
    "bUseNormals": true,
    "bUseMaterials": true,
    "bReportProgress": true,
    "bUseCurrentStage": true
}
```
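Because the converter is driven entirely by this JSON file, a configuration can also be generated programmatically. A minimal sketch using Python’s standard json module; the settings and paths mirror the sample above and are placeholders only:

```python
import json

# Placeholder settings mirroring sample_config.json above.
config = {
    "sFilePathIn": "C:/test/cad_part.jt",
    "sFilePathOut": "C:/test/converted/cad_part.usd",
    "bInstancing": True,
    "bGlobalXforms": False,
    "bOptimize": True,
    "bDedup": True,
    "iTessLOD": 2,
    "bUseNormals": True,
    "bUseMaterials": True,
    "bReportProgress": True,
    "bUseCurrentStage": False,
}

# Write the config; this file's path is what sConfigFilePath refers to.
with open("sample_config.json", "w") as f:
    json.dump(config, f, indent=4)
```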
cad-converter_Overview.md
# CAD Converter ## Overview The CAD Converter extension enables conversion of many common CAD file formats to USD. USD Explorer includes the CAD Converter extension enabled by default. ## Supported CAD file formats The following file formats are supported by CAD Converter: - CATIA V5 Files (`*.CATPart, *.CATProduct, *.CGR`) - CATIA V6 Files (`*.3DXML`) - IFC Files (`*.ifc, *.ifczip`) - Siemens NX Files (`*.prt`) - Parasolid Files (`*.xmt, *.x_t, *.x_b, *.xmt_txt`) - SolidWorks Files (`*.sldprt, *.sldasm`) - STL Files (`*.stl`) - Autodesk Inventor Files (`*.IPT, *.IAM`) - AutoCAD 3D Files (`*.DWG, *.DXF`) - Creo - Pro/E Files (`*.ASM, *.PRT`) - Revit Files (`*.RVT, *.RFA`) - Solid Edge Files (`*.ASM, *.PAR, *.PWD, *.PSM`) - Step/Iges (`*.STEP, *.IGES`) - JT Files (`*.jt`) - DGN (`*.DGN`) ## Asset Converter File Formats The file formats *.fbx, *.obj, *.gltf, *.glb, *.lxo, *.md5, *.e57 and *.pts are supported by Asset Converter and are also available by default. ## Notes ### Note 1 If expert tools such as Creo, Revit or Alias are installed, we recommend using the corresponding connectors. These provide more extensive options for conversion. ### Note 2 CAD assemblies may not work when converting files from Nucleus. When converting assemblies with external references, we recommend either working with local files or using Omniverse Drive. ## Converter Options This section covers options for configuring conversions of CAD file formats to USD. ### Surface Tolerance (DGN Files) This is the maximum distance between the tessellated mesh and the original solid/surface; refer to the Open Design Alliance documentation for details. The more precise the value (e.g., 0.00001), the more triangles are generated for the mesh. A field is provided when selecting a DGN file. Valid values range from 0 to 1. If a value of 0 is provided, the surface tolerance of an object is calculated as the diagonal of its extents multiplied by 0.025 (a small illustration follows at the end of this page). ## Related Extensions These related extensions make up the CAD Converter. This extension provides import tasks to the extensions through their interfaces. The DGN Core extension, however, is launched and given its configuration options through a subprocess, to avoid library conflicts with those loaded by the other converters. ### Core Converters - CAD Core: omni.kit.converter.cad_core - DGN Core: omni.kit.converter.dgn_core - JT Core: omni.kit.converter.jt_core ### Services - CAD Converter Service: omni.services.convert.cad ### Utils - Converter Common: omni.kit.converter.common
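As an illustration of the surface-tolerance fallback described under Converter Options above, here is a small sketch. This is not converter code; the extents values are made up:

```python
import math

def default_surface_tolerance(extents_min, extents_max):
    """Fallback described above: when the supplied value is 0, the
    tolerance is the diagonal of the object's extents times 0.025."""
    diagonal = math.dist(extents_min, extents_max)
    return diagonal * 0.025

# Hypothetical part measuring 2.0 x 1.0 x 0.5 units
print(default_surface_tolerance((0.0, 0.0, 0.0), (2.0, 1.0, 0.5)))
```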
camera-manipulator_Overview.md
# Camera Manipulator — Omniverse Kit 105.0.5 documentation ## Camera Manipulator Mouse interaction with the Viewport is handled with classes from `omni.kit.manipulator.camera`, which is built on top of the `omni.ui.scene` framework. We've created an `omni.ui.scene.SceneView` to host the manipulator, and by simply assigning the camera manipulator's model into the SceneView's model, all of the edits to the Camera's transform will be pushed through to the `SceneView`. ```python # And finally add the camera-manipulator into that view with self.__scene_view.scene: self.__camera_manip = ViewportCameraManipulator(self.viewport_api) # Push the camera manipulator's model into the SceneView so it'll auto-update self.__scene_view.model = self.__camera_manip.model ``` The `omni.ui.scene.SceneView` only understands two values: 'view' and 'projection'. Our CameraManipulator's model will push out those values as edits are made; however, it supports a larger set of values to control the manipulator itself. ## Operations and Values The CameraManipulator model stores the amount of movement according to four modes, applied in this order: 1. Tumble 2. Look 3. Move (Pan) 4. Fly (FlightMode / WASD navigation) ### tumble Tumble values are specified in degrees as the amount to rotate around the current up-axis. These values should be pre-scaled with any speed before setting into the model. This allows for different manipulators/gestures to interpret speed differently, rather than lock to a constant speed. ```python # Tumble by 180 degrees around Y (as a movement across ui X would cause) model.set_floats('tumble', [0, 180, 0]) ``` ### look Look values are specified in degrees as the amount to rotate the view around the camera's local axes. These values should be pre-scaled with any speed before setting into the model. This allows for different manipulators/gestures to interpret speed differently, rather than lock to a constant speed. ```python # Look by 90 degrees around X (as a movement across ui Y would cause) # i.e. Look straight up model.set_floats('look', [90, 0, 0]) ``` ### move Move values are specified in world units and the amount to move the camera by. Move is applied after rotation, so the X, Y, Z are essentially left, up, back. These values should be pre-scaled with any speed before setting into the model. This allows for different manipulators/gestures to interpret speed differently, rather than lock to a constant speed. ```python # Move left by 30 units, up by 60 and back by 90 model.set_floats('move', [30, 60, 90]) ``` ### fly Fly values are the direction of flight in X, Y, Z. Fly is applied after rotation, so the X, Y, Z are essentially left, up, back. These values will be scaled with `fly_speed` before application. Because fly is a direction with `fly_speed` automatically applied, if a gesture/manipulator wants to fly slower without changing `fly_speed` globally, it must apply whatever factor is required before setting. ```python # Move left model.set_floats('fly', [1, 0, 0]) # Move up model.set_floats('fly', [0, 1, 0]) ``` ## Speed By default the Camera manipulator will map a full mouse move across the viewport as follows: - Pan: A full translation of the center-of-interest across the Viewport. - Tumble: A 180 degree rotation across X or Y. - Look: A 180 degree rotation across X and a 90 degree rotation across Y. These speeds can be adjusted by setting float values into the model. ### world_speed The Pan and Zoom speed can be adjusted with three floats set into the model as 'world_speed'.
```python # Half the movement speed for both Pan and Zoom pan_speed_x, pan_speed_y, zoom_speed_z = 0.5, 0.5, 0.5 model.set_floats('world_speed', [pan_speed_x, pan_speed_y, zoom_speed_z]) ``` ### rotation_speed The Tumble and Look speed can be adjusted with either a scalar value for all rotation axes or per component. ```python # Half the rotation speed for both Tumble and Look rot_speed_both = 0.5 model.set_floats('rotation_speed', [rot_speed_both]) # Half the rotation speed for both Tumble and Look in X and quarter it for Y rot_speed_x, rot_speed_y = 0.5, 0.25 model.set_floats('rotation_speed', [rot_speed_x, rot_speed_y]) ``` ### tumble_speed Tumble speed can be adjusted separately with either a scalar value for all rotation axes or per component. The final speed of a Tumble operation is `rotation_speed * tumble_speed`. ```python # Half the rotation speed for Tumble rot_speed_both = 0.5 model.set_floats('tumble_speed', [rot_speed_both]) # Half the rotation speed for Tumble in X and quarter it for Y rot_speed_x, rot_speed_y = 0.5, 0.25 model.set_floats('tumble_speed', [rot_speed_x, rot_speed_y]) ``` ### look_speed Look speed can be adjusted separately with either a scalar value for all rotation axes or per component. The final speed of a Look operation is `rotation_speed * look_speed`. ```python # Half the rotation speed for Look rot_speed_both = 0.5 model.set_floats('look_speed', [rot_speed_both]) # Half the rotation speed for Look in X and quarter it for Y rot_speed_x, rot_speed_y = 0.5, 0.25 model.set_floats('look_speed', [rot_speed_x, rot_speed_y]) ``` ### fly_speed The speed at which FlightMode (WASD navigation) will fly through the scene. FlightMode speed can be adjusted separately with either a scalar value for all axes or per component. ```python # Half the speed in all directions fly_speed = 0.5 model.set_floats('fly_speed', [fly_speed]) # Half the speed when moving in X or Y, but double it moving in Z fly_speed_x_y, fly_speed_z = 0.5, 2.0 model.set_floats('fly_speed', [fly_speed_x_y, fly_speed_x_y, fly_speed_z]) ``` ## Undo Because we're operating on a unique `omni.usd.UsdContext`, we don't want movement in the preview-window to affect the undo-stack. To accomplish that, we'll set the 'disable_undo' value to an array of 1 int; essentially saying `disable_undo=True`. ```python # Let's disable any undo for these movements as we're a preview-window model.set_ints('disable_undo', [1]) ``` ## Disabling operations By default the manipulator will allow Pan, Zoom, Tumble, and Look operations on a perspective camera, but only allow Pan and Zoom on an orthographic one. If we want to explicitly disable any operations, we again set int values as booleans into the model. ### disable_tumble ```python # Disable the Tumble manipulation model.set_ints('disable_tumble', [1]) ``` ### disable_look ```python # Disable the Look manipulation model.set_ints('disable_look', [1]) ``` ### disable_pan ```python # Disable the Pan manipulation. model.set_ints('disable_pan', [1]) ``` ### disable_zoom ```python # Disable the Zoom manipulation. model.set_ints('disable_zoom', [1]) ```
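Putting several of these values together: a sketch of locking down a preview window to pan/zoom only at half speed, using only the model values documented above (the specific numbers are arbitrary):

```python
# Slow pan/zoom and rotation to half speed, keep the undo-stack clean,
# and restrict the camera to Pan and Zoom only.
model.set_floats('world_speed', [0.5, 0.5, 0.5])
model.set_floats('rotation_speed', [0.5])
model.set_ints('disable_undo', [1])
model.set_ints('disable_tumble', [1])
model.set_ints('disable_look', [1])
```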
CameraUtil.md
# CameraUtil module Summary: Camera Utilities ## Module: pxr.CameraUtil Camera utilities. ### Classes: | Class | Description | |-------|-------------| | ConformWindowPolicy | | | Framing | Framing information. | | ScreenWindowParameters | Given a camera object, compute parameters suitable for setting up RenderMan. | ### class pxr.CameraUtil.ConformWindowPolicy **Methods:** | Method | Description | |--------|-------------| | GetValueFromName | | **Attributes:** | Attribute | Description | |-----------|-------------| | allValues | | #### pxr.CameraUtil.ConformWindowPolicy.GetValueFromName #### pxr.CameraUtil.ConformWindowPolicy.allValues ```python allValues = (CameraUtil.MatchVertically, CameraUtil.MatchHorizontally, CameraUtil.Fit, CameraUtil.Crop, CameraUtil.DontConform) ``` ### class pxr.CameraUtil.Framing Framing information. That is information determining how the filmback plane of a camera maps to the pixels of the rendered image (displayWindow together with pixelAspectRatio and window policy) and what pixels of the image will be filled by the renderer (dataWindow). The concepts of displayWindow and dataWindow are similar to the ones in OpenEXR, including that the x- and y-axis of the coordinate system point left and down, respectively. In fact, these windows mean the same here and in OpenEXR if the displayWindow has the same aspect ratio (when accounting for the pixelAspectRatio) as the filmback plane of the camera has (that is the ratio of the horizontalAperture to verticalAperture of, e.g., Usd's Camera or GfCamera). In particular, overscan can be achieved by making the dataWindow larger than the displayWindow. If the aspect ratios differ, a window policy is applied to the displayWindow to determine how the pixels correspond to the filmback plane. One such window policy is to take the largest rect that fits (centered) into the displayWindow and has the camera's aspect ratio. For example, if the displayWindow and dataWindow are the same and both have an aspect ratio smaller than the camera, the image is created by enlarging the camera frustum slightly in the bottom and top direction. When using the AOVs, the render buffer size is determined independently from the framing info. However, the dataWindow is supposed to be contained in the render buffer rect (in particular, the dataWindow cannot contain pixels with negative coordinates - this restriction does not apply if, e.g., hdPrman circumvents AOVs and writes directly to EXR). In other words, unlike in OpenEXR, the rect of pixels for which we allocate storage can differ from the rect the renderer fills with data (dataWindow). For example, an application can set the render buffer size to match the widget size but use a dataWindow and displayWindow that only fills the render buffer horizontally to have slates at the top and bottom. **Methods:** | Method | Description | |--------|-------------| | ApplyToProjectionMatrix(projectionMatrix, windowPolicy) | Given the projectionMatrix computed from a camera, applies the framing. | | IsValid() | Is display and data window non-empty. | **Attributes:** - dataWindow - displayWindow - pixelAspectRatio #### ApplyToProjectionMatrix ```python ApplyToProjectionMatrix(projectionMatrix, windowPolicy) -> Matrix4d ``` Given the projectionMatrix computed from a camera, applies the framing. To obtain a correct result, a rasterizer needs to use the resulting projection matrix and set the viewport to the data window. Parameters: - **projectionMatrix** (Matrix4d) – - **windowPolicy** (ConformWindowPolicy) – #### IsValid ```python IsValid() -> bool ``` Is display and data window non-empty.
### dataWindow `property dataWindow` ### displayWindow `property displayWindow` ### pixelAspectRatio `property pixelAspectRatio` ### class pxr.CameraUtil.ScreenWindowParameters Given a camera object, compute parameters suitable for setting up RenderMan. **Attributes:** | Attribute | Type | |-----------|------| | `fieldOfView` | float | | `screenWindow` | Vec4d | | `zFacingViewMatrix` | Matrix4d | #### fieldOfView `property fieldOfView` - Type: `float` - The field of view. More precisely, the full angle perspective field of view (in degrees) between screen space coordinates (-1,0) and (1,0). Give these parameters to RiProjection as parameter after "perspective". #### screenWindow `property screenWindow` - Type: `Vec4d` - The vector (left, right, bottom, top) defining the rectangle in the image plane. Give these parameters to RiScreenWindow. #### zFacingViewMatrix `property zFacingViewMatrix` - Type: `Matrix4d` - Returns the inverse of the transform for a camera that is y-Up and z-facing (vs the OpenGL camera that is (-z)-facing). Write this transform with RiConcatTransform before RiWorldBegin.
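A small usage sketch querying these values from a default GfCamera; the camera here is just a placeholder with stock apertures:

```python
from pxr import CameraUtil, Gf

camera = Gf.Camera()  # placeholder camera with default parameters
params = CameraUtil.ScreenWindowParameters(camera)

print(params.fieldOfView)        # full horizontal FOV in degrees
print(params.screenWindow)       # (left, right, bottom, top) Vec4d
print(params.zFacingViewMatrix)  # inverse transform for a z-facing camera
```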
capture.md
# Capture `omni.kit.widget.viewport.capture` is the interface for capturing a Viewport's state and AOVs. It provides a few convenient implementations that will write AOV(s) to disk or pass CPU buffers for pixel access. ## FileCapture A simple delegate to capture a single AOV to a single file. It can be used as is, or subclassed to access additional information/meta-data to be written. By default it will capture a color AOV, but it accepts an explicit AOV-name to capture instead. ```python from omni.kit.widget.viewport.capture import FileCapture capture = viewport_api.schedule_capture(FileCapture(image_path)) captured_aovs = await capture.wait_for_result() if captured_aovs: print(f'AOV "{captured_aovs[0]}" was written to "{image_path}"') else: print(f'No image was written to "{image_path}"') ``` A sample subclass implementation to write additional data. ```python from omni.kit.widget.viewport.capture import FileCapture class SideCarWriter(FileCapture): def __init__(self, image_path, aov_name=''): super().__init__(image_path, aov_name) def capture_aov(self, file_path, aov): # Call the default file-saver self.save_aov_to_file(file_path, aov) # Possibly append data to a custom image print(f'Wrote AOV "{aov["name"]}" to "{file_path}"') print(f' with view: {self.view}') print(f' with projection: {self.projection}') capture = viewport_api.schedule_capture(SideCarWriter(image_path)) captured_aovs = await capture.wait_for_result() if captured_aovs: print(f'AOV "{captured_aovs[0]}" was written to "{image_path}"') ``` ## ByteCapture A simple delegate to capture a single AOV and deliver it as CPU pixel data. It can be used as is with a free-standing function, or subclassed to access additional information/meta-data. By default it will capture a color AOV, but it accepts an explicit AOV-name to capture instead. ```python from omni.kit.widget.viewport.capture import ByteCapture def on_capture_completed(buffer, buffer_size, width, height, format): print(f'PixelData resolution: {width} x {height}') print(f'PixelData format: {format}') capture = viewport_api.schedule_capture(ByteCapture(on_capture_completed)) captured_aovs = await capture.wait_for_result() if captured_aovs: print(f'AOV "{captured_aovs[0]}" was delivered as pixel data') print(f'It had a camera view of {capture.view}') else: print('No AOV was captured') ``` A sample subclass implementation to write additional data. ```python from omni.kit.widget.viewport.capture import ByteCapture class SideCarData(ByteCapture): def __init__(self, image_path, aov_name=''): super().__init__(image_path, aov_name) def on_capture_completed(self, buffer, buffer_size, width, height, format): print(f'PixelData resolution: {width} x {height}') print(f'PixelData format: {format}') print(f'PixelData has a camera view of {self.view}') capture = viewport_api.schedule_capture(SideCarData(image_path)) captured_aovs = await capture.wait_for_result() if captured_aovs: print(f'AOV "{captured_aovs[0]}" was written to "{image_path}"') else: print(f'No image was written to "{image_path}"') ```
carb-license_LicenseInfo.md
# License Copyright (c) 2020-2023, NVIDIA CORPORATION. All rights reserved. NVIDIA CORPORATION and its licensors retain all intellectual property and proprietary rights in and to this software, related documentation and any modifications thereto. Any use, reproduction, disclosure or distribution of this software and related documentation without an express license agreement from NVIDIA CORPORATION is strictly prohibited.
carb-testing_Testing.md
# Testing Testing is one of the cornerstones of the Carbonite philosophy. Our goal is to keep all the plugins and the framework itself covered with tests and continuously run them on every MR. The isolated nature of the plugin interfaces makes unit testing easy and straightforward. This guide describes how to write new tests and work with existing ones. ## Folder Structure and Naming Everything test-related is in the `source/tests` folder. `source/tests/test.unit` contains all unit tests, which compile into the `test.unit` executable. All tests are grouped by the interface/namespace they test. For instance, `source/tests/test.unit/tasking` contains all tests for `carb.tasking.plugin`, while `source/tests/test.unit/framework` contains all framework tests. Each unit test source file must start with the `Test` prefix, e.g. `tests/test.unit/tasking/TestTasking.cpp`. Generally you want to test against the interface. If the interface has multiple plugins implementing it, you can just iterate over all of them in the test initialization code, without changing the test itself or with only small changes (like having separate shaders for different graphics backends). If you want to write tests against a particular implementation and it is no longer convenient to keep them in the same folder, the naming guideline is to add the implementation name: `/tests/test.unit/graphics.vulkan`. If you need to create special plugins for your testing, they should be put into `source/tests/plugins/`. ## The Testing Framework We use [doctest](https://github.com/onqtam/doctest) as our testing framework of choice. While some parts of its functionality will be covered in this guide, it is recommended you read the official [tutorial](https://github.com/onqtam/doctest/blob/master/doc/markdown/tutorial.md). This will, for instance, help you understand the concept of `SECTION()` and how it can be used to structure tests. This [CppCon 2017 talk](https://www.youtube.com/watch?v=eH1CxEC29l8) is also useful. ## Writing Tests The typical unit test can look like this (from `framework/TestFileSystem.cpp`): ```c++ #if CARB_PLATFORM_WINDOWS const char* kFileName = "carb.dll"; #else const char* kFileName = "libcarb.so"; #endif TEST_CASE("paths can be checked for existence", "[framework][filesystem]" "[component=carbonite][owner=adent][priority=mandatory]") { FrameworkScoped f; FileSystem* fs = f->getFileSystem(); REQUIRE(fs); SECTION("carb (relative) path exists") { CHECK(fs->exists(kFileName)); } SECTION("app (absolute) path exists") { CHECK(fs->exists(f->getAppPath())); } SECTION("made up path doesn't exist") { CHECK(fs->exists("doesn't exist") == false); } } ``` The general flow is to first get the framework, using the `FrameworkScoped` utility from `common/TestHelpers`. Then get the interface you need to test against, write your tests and clean up. Framework config can be used to control which plugins to load (at least to avoid loading overhead). It's up to the test writer how to organize initialization code; the only important thing is to clean up everything after the test is done.
In order to not have to write the same setup and teardown code over and over, you can create a C++ object using the RAII pattern, like this: ```c++ class AutoTempDir { public: AutoTempDir() { FileSystem* fs = getFramework()->getFileSystem(); bool res = fs->makeTempDirectory(m_path, sizeof(m_path)); REQUIRE(res); } ~AutoTempDir() { FileSystem* fs = getFramework()->getFileSystem(); bool res = fs->removeDirectory(m_path); CHECK(res); } const char* getPath() { return m_path; } private: char m_path[1024]; // receives the temp directory path (size is illustrative) }; TEST_CASE("temp directory", "[framework][filesystem]" "[component=carbonite][owner=ncournia][priority=mandatory]") { FrameworkScoped f; FileSystem* fs = f->getFileSystem(); REQUIRE(fs); SECTION("create and remove") { AutoTempDir autoTempDir; SECTION("while creating empty file inside") { std::string path = autoTempDir.getPath() + std::string("/empty.txt"); File* file = fs->openFileToWrite(path.c_str()); REQUIRE(file); fs->closeFile(file); } } } ```
carb.settings.acquire_settings_interface.md
# acquire_settings_interface ## acquire_settings_interface ### `carb.settings.acquire_settings_interface(plugin_name: str = None, library_path: str = None) -> carb.settings._settings.ISettings` - **Parameters**: - `plugin_name`: str, optional - `library_path`: str, optional - **Returns**: - `carb.settings._settings.ISettings`
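A minimal call sketch using the default arguments:

```python
import carb.settings

# Acquire the ISettings interface directly; the equivalent shorthand
# is carb.settings.get_settings().
settings = carb.settings.acquire_settings_interface()
```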
carb.settings.ChangeEventType.md
# ChangeEventType ## ChangeEventType ```python class carb.settings.ChangeEventType ``` **Bases:** ```python pybind11_object ``` **Members:** - CREATED: An Item was created - CHANGED: An Item was changed - DESTROYED: An Item was destroyed **Methods** | Method | Description | |--------|-------------| | `__init__(self, value)` | | **Attributes** | Attribute | Description | |-----------|-------------| | `CHANGED` | | | `CREATED` | | | `DESTROYED` | | | `name` | | | `value` | | ### __init__ ```python __init__(self: carb.settings._settings.ChangeEventType, value: int) -> None ``` ### name ```python property name ``` ### value ```python property value ```
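A short sketch of how these values typically show up in practice, using the subscription API documented under `carb.settings.ISettings`; the settings path here is hypothetical:

```python
import carb.settings

settings = carb.settings.get_settings()

def on_change(item, event_type):
    # event_type is a carb.settings.ChangeEventType value
    if event_type == carb.settings.ChangeEventType.CHANGED:
        print(f"Setting changed: {item}")

# Hypothetical key; subscribe, trigger a change, then clean up.
sub = settings.subscribe_to_node_change_events("/exts/example.ext/level", on_change)
settings.set_int("/exts/example.ext/level", 3)  # triggers the callback
settings.unsubscribe_to_change_events(sub)
```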
carb.settings.Classes.md
# carb.settings Classes ## Classes Summary | Class | Description | |-------|-------------| | ChangeEventType | Members: CREATED, CHANGED, DESTROYED | | ISettings | The Carbonite Settings interface | | SubscriptionId | Representation of a subscription |
carb.settings.Functions.md
# carb.settings Functions ## Functions Summary - **get_settings** - Returns cached `carb.settings.ISettings` interface (shorthand). - **lru_cache** - Least-recently-used cache decorator. - **acquire_settings_interface** - `acquire_settings_interface(plugin_name: str = None, library_path: str = None) -> carb.settings._settings.ISettings`
carb.settings.get_settings.md
# get_settings ## get_settings Returns cached `carb.settings.ISettings` interface (shorthand).
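A minimal usage sketch; the key path is hypothetical:

```python
import carb.settings

settings = carb.settings.get_settings()
settings.set("/exts/example.ext/enabled", True)   # hypothetical key
print(settings.get("/exts/example.ext/enabled"))  # True
```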
carb.settings.ISettings.md
# ISettings ## ISettings - **Bases:** `pybind11_object` - **Description:** The Carbonite Settings interface. Carbonite settings are built on top of the carb.dictionary interface. Many dictionary functions are replicated in settings, but explicitly use the settings database instead of a generic carb.dictionary.Item object. - **Key Features:** - Uses keys (or paths) that start with an optional forward-slash and are forward-slash separated (example: "/log/level"). - The settings database exists as a root-level carb.dictionary.Item (of type ItemType.DICTIONARY) that is created and maintained by the carb.settings system. - Portions of the settings database hierarchy can be subscribed to with `subscribe_to_tree_change_events()`, and individual keys may be subscribed to with `subscribe_to_node_change_events()`. ### Methods | Method | Description | |--------|-------------| | `__init__(*args, **kwargs)` | | | `create_dictionary_from_settings(self, path)` | Takes a snapshot of a portion of the setting database as a dictionary.Item. | | `destroy_item(self, arg0)` | Destroys the item at the given path. | | `get(self, path)` | Retrieve the stored value at the supplied path as a Python object. | | `get_as_bool(self, arg0)` | Attempts to get the supplied item as a boolean value, either directly or via conversion. | | `get_as_float(self, arg0)` | Attempts to get the supplied item as a floating-point value, either directly or via conversion. | | `get_as_int(self, arg0)` | Attempts to get the supplied item as an integer, either directly or via conversion. | | `get_as_string(self, arg0)` | Attempts to get the supplied item as a string value, either directly or via conversion. | | `get_settings_dictionary(self, path)` | Access the setting database as a dictionary.Item. | | `initialize_from_dictionary(self, arg0)` | Performs a one-time initialization from a given dictionary.Item. | | `is_accessible_as(self, arg0, arg1)` | Checks if the item could be accessible as the provided type, either directly or via a cast. | | `set(self, path, value)` | Sets the given value at the supplied path. | | `set_bool(self, arg0, arg1)` | Sets the boolean value at the supplied path. | | `set_bool_array(self, arg0, arg1)` | Sets the given array at the supplied path. | | `set_default(self, path, value)` | Sets a value at the given path, if and only if one does not already exist. | | `set_default_bool(self, arg0, arg1)` | Sets a value at the given path, if and only if one does not already exist. | | `set_default_float(self, arg0, arg1)` | Sets a value at the given path, if and only if one does not already exist. | | `set_default_int(self, arg0, arg1)` | Sets a value at the given path, if and only if one does not already exist. | | `set_default_string(self, arg0, arg1)` | Sets a value at the given path, if and only if one does not already exist. | | `set_float(self, arg0, arg1)` | Sets the floating-point value at the supplied path. | | `set_float_array(self, arg0, arg1)` | Sets the given array at the supplied path. | | `set_int(self, arg0, arg1)` | Sets the integer value at the supplied path. | | `set_int_array(self, arg0, arg1)` | Sets the given array at the supplied path. | | `set_string(self, arg0, arg1)` | Sets the string value at the supplied path. | | `set_string_array(self, arg0, arg1)` | Sets the given array at the supplied path. |
| `subscribe_to_node_change_events(self, arg0, ...)` | Subscribes to node change events about a specific item. | | `subscribe_to_tree_change_events(self, arg0, ...)` | Subscribes to change events for all items in a subtree. | | `unsubscribe_to_change_events(self, id)` | Unsubscribes from change events. | | `update(self, arg0, arg1, arg2, arg3)` | Merges the source dictionary.Item into the settings database. | ### carb.settings.ISettings.__init__ ```python __init__(self, *args, **kwargs) ``` ### carb.settings.ISettings.create_dictionary_from_settings ```python create_dictionary_from_settings(self: carb.settings._settings.ISettings, path: str = '') ``` Takes a snapshot of a portion of the setting database as a dictionary.Item. **Parameters** - **path** – An optional path from root to access. "/" or "" is interpreted to be the settings database root. ### carb.settings.ISettings.destroy_item Destroys the item at the given path. Any objects that reference the given path become invalid and their use is undefined behavior. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). ### carb.settings.ISettings.get Retrieve the stored value at the supplied path as a Python object. An array value will be returned as a list. If the value does not exist, None will be returned. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). **Returns** - A Python object representing the stored value. ### carb.settings.ISettings.get_as_bool Attempts to get the supplied item as a boolean value, either directly or via conversion. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). **Returns** - A boolean value representing the stored value. If conversion fails, False is returned. **Return type** - Boolean ### carb.settings.ISettings.get_as_float ```python get_as_float(self: carb.settings._settings.ISettings, arg0: str) -> float ``` Attempts to get the supplied item as a floating-point value, either directly or via conversion. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). **Returns** - A floating-point value representing the stored value. If conversion fails, 0.0 is returned. **Return type** - Float ### carb.settings.ISettings.get_as_int ```python get_as_int(self: carb.settings._settings.ISettings, arg0: str) -> int ``` Attempts to get the supplied item as an integer, either directly or via conversion. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). **Returns** - An integer value representing the stored value. If conversion fails, 0 is returned. **Return type** - Integer ### carb.settings.ISettings.get_as_string ```python get_as_string(self: carb.settings._settings.ISettings, arg0: str) -> str ``` Attempts to get the supplied item as a string value, either directly or via conversion. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). **Returns** - A string value representing the stored value. If conversion fails, "" is returned. **Return type** - String ### carb.settings.ISettings.get_settings_dictionary Accesses the setting database as a dictionary.Item, which allows use of carb.dictionary functions directly. WARNING: The root dictionary.Item is owned by carb.settings and must not be altered or destroyed. **Parameters** - **path** – An optional path from root to access. "/" or "" is interpreted to be the settings database root.
### carb.settings.ISettings.initialize_from_dictionary Performs a one-time initialization from a given dictionary.Item. NOTE: This function may only be called once. Subsequent calls will result in an error message logged. **Parameters** - **dictionary** – A dictionary.Item to initialize the settings database from. The items are copied into the root of the settings database. ### carb.settings.ISettings.is_accessible_as Checks if the item could be accessible as the provided type, either directly or via a cast. **Parameters** - **itemType** – carb.dictionary.ItemType to check for. - **path** – Settings database key path (i.e. "/log/level"). **Returns** - True if the item is accessible as the provided type; False otherwise. **Return type** - Boolean ### carb.settings.ISettings.set Sets the given value at the supplied path. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **value** – A Python object. The carb.dictionary.ItemType is inferred from the type of the object; if the type is not supported, the value is ignored. Both tuples and lists are treated as arrays (a special kind of ItemType.DICTIONARY). ### carb.settings.ISettings.set_bool Sets the boolean value at the supplied path. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **value** – A boolean value to store. ### carb.settings.ISettings.set_bool_array Sets the given array at the supplied path. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **array** – A tuple or list of boolean values. ### carb.settings.ISettings.set_default ```python set_default(self: carb.settings._settings.ISettings, path: str, value: object) -> None ``` Sets a value at the given path, if and only if one does not already exist. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **value** – A Python object. The carb.dictionary.ItemType is inferred from the type of the object; if the type is not supported, the value is ignored. Both tuples and lists are treated as arrays (a special kind of ItemType.DICTIONARY). ### carb.settings.ISettings.set_default_bool ```python set_default_bool(self: carb.settings._settings.ISettings, arg0: str, arg1: bool) -> None ``` Sets a value at the given path, if and only if one does not already exist. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **value** – Value that will be stored at the given path if a value does not already exist there. ### carb.settings.ISettings.set_default_float ```python set_default_float(self: carb.settings._settings.ISettings, arg0: str, arg1: float) -> None ``` Sets a value at the given path, if and only if one does not already exist. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **value** – Value that will be stored at the given path if a value does not already exist there. ### carb.settings.ISettings.set_default_int ```python set_default_int(self: carb.settings._settings.ISettings, arg0: str, arg1: int) -> None ``` Sets a value at the given path, if and only if one does not already exist. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **value** – Value that will be stored at the given path if a value does not already exist there.
### carb.settings.ISettings.set_default_string ```python set_default_string(self: carb.settings._settings.ISettings, arg0: str, arg1: str) -> None ``` Sets a value at the given path, if and only if one does not already exist. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **value** – Value that will be stored at the given path if a value does not already exist there. ### carb.settings.ISettings.set_float ```python set_float(self: carb.settings._settings.ISettings, arg0: str, arg1: float) -> None ``` Sets the floating-point value at the supplied path. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **value** – A floating-point value to store. ### carb.settings.ISettings.set_float_array ```python set_float_array(self: carb.settings._settings.ISettings, arg0: str, arg1: list) -> None ``` Sets the given array at the supplied path. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **array** – A tuple or list of floating-point values. ### carb.settings.ISettings.set_int Sets the integer value at the supplied path. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **value** – An integer value to store. ### carb.settings.ISettings.set_int_array Sets the given array at the supplied path. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **array** – A tuple or list of integer values. ### carb.settings.ISettings.set_string ```python set_string(self: carb.settings._settings.ISettings, arg0: str, arg1: str) -> None ``` Sets the string value at the supplied path. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **value** – A string value to store. ### carb.settings.ISettings.set_string_array ```python set_string_array(self: carb.settings._settings.ISettings, arg0: str, arg1: List[str]) -> None ``` Sets the given array at the supplied path. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). - **array** – A tuple or list of strings. ### carb.settings.ISettings.subscribe_to_node_change_events ```python subscribe_to_node_change_events(self: carb.settings._settings.ISettings, arg0: str, arg1: Callable[[carb.dictionary.Item, carb.settings._settings.ChangeEventType], None]) -> carb.settings._settings.SubscriptionId ``` Subscribes to node change events about a specific item. When finished with the subscription, call unsubscribe_to_change_events(). **Parameters** - **path** – Settings database key path (i.e. "/log/level") to subscribe to. - **eventFn** – A function that is called for each change event.
### carb.settings.ISettings.subscribe_to_tree_change_events ```python subscribe_to_tree_change_events(self: carb.settings._settings.ISettings, arg0: str, arg1: Callable[[carb::dictionary::Item, carb::dictionary::Item, carb.settings._settings.ChangeEventType], None]) -> carb.settings._settings.SubscriptionId ``` Subscribes to change events for all items in a subtree. When finished with the subscription, call unsubscribe_to_change_events(). **Parameters** - **path** – Settings database key path (i.e. "/log/level") to subscribe to. - **eventFn** – A function that is called for each change event. ### carb.settings.ISettings.unsubscribe_to_change_events ```python unsubscribe_to_change_events(self: carb.settings._settings.ISettings, id: carb.settings._settings.SubscriptionId) -> None ``` Unsubscribes from change events. **Parameters** - **id** – The handle returned from subscribe_to_tree_change_events() or subscribe_to_node_change_events(). ### carb.settings.ISettings.update ```python update(self: carb.settings._settings.ISettings, arg0: str, arg1: carb::dictionary::Item, arg2: str, arg3: object) -> None ``` Merges the source dictionary.Item into the settings database. Destination path need not exist; missing items in the path will be created as ItemType.DICTIONARY. **Parameters** - **path** – Settings database key path (i.e. "/log/level"). Used as the destination location within the setting database. "/" is considered to be the root. - **dictionary** – A dictionary.Item used as the base of the items to merge into the setting database. - **dictionaryPath** – A child path of `dictionary` to use as the root for merging. May be None or an empty string in order to use `dictionary` directly as the root. - **updatePolicy** – One of dictionary.UpdateAction to use as the policy for updating.
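A brief sketch of the `set` / `set_default` semantics described above; the key paths are hypothetical:

```python
import carb.settings

settings = carb.settings.get_settings()

settings.set("/exts/example.ext/quality", "high")
settings.set_default("/exts/example.ext/quality", "low")  # no effect: a value already exists
settings.set_default("/exts/example.ext/verbose", True)   # creates the value

print(settings.get("/exts/example.ext/quality"))  # "high"
print(settings.get("/exts/example.ext/verbose"))  # True
```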
carb.settings.lru_cache.md
# lru_cache ## lru_cache ```python carb.settings.lru_cache(maxsize=128, typed=False) ``` Least-recently-used cache decorator. If `maxsize` is set to None, the LRU features are disabled and the cache can grow without bound. If `typed` is True, arguments of different types will be cached separately. For example, f(3.0) and f(3) will be treated as distinct calls with distinct results. Arguments to the cached function must be hashable. View the cache statistics named tuple (hits, misses, maxsize, currsize) with f.cache_info(). Clear the cache and statistics with f.cache_clear(). Access the underlying function with f.__wrapped__. See: [Least recently used (LRU)](https://en.wikipedia.org/wiki/Cache_replacement_policies#Least_recently_used_(LRU))
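A quick sketch of the decorator in use; the memoized function here is arbitrary:

```python
from carb.settings import lru_cache

@lru_cache(maxsize=128)
def fib(n: int) -> int:
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(40))           # fast, thanks to memoization
print(fib.cache_info())  # CacheInfo(hits=..., misses=..., maxsize=128, currsize=...)
```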
carb.settings.md
# carb.settings ## Classes Summary: | Class | Description | |-------|-------------| | ChangeEventType | Members: CREATED, CHANGED, DESTROYED | | ISettings | The Carbonite Settings interface | | SubscriptionId | Representation of a subscription | ## Functions Summary: | Function | Description | |----------|-------------| | get_settings | Returns cached `carb.settings.ISettings` interface (shorthand). | | lru_cache | Least-recently-used cache decorator. | | acquire_settings_interface | acquire_settings_interface(plugin_name: str = None, library_path: str = None) -> carb.settings._settings.ISettings |
carb.settings.SubscriptionId.md
# SubscriptionId ## SubscriptionId ```python class carb.settings.SubscriptionId ``` Representation of a subscription ### Methods | Method | Description | |--------|-------------| | `__init__(*args, **kwargs)` | |
carb.tokens.acquire_tokens_interface.md
# acquire_tokens_interface ## acquire_tokens_interface