PegasusSimulator/PegasusSimulator/docs/source/api/sensors.sensor.rst

Sensor
======
.. automodule:: pegasus.simulator.logic.sensors.sensor
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/vehicles.vehicle.rst

Vehicle
=======
.. automodule:: pegasus.simulator.logic.vehicles.vehicle
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/state.rst

State
=====
.. automodule:: pegasus.simulator.logic.state
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/sensors.magnetometer.rst

Magnetometer
============
To simulate the measurements produced by a magnetometer, we resort to the same pre-computed tables of
declination angle D, inclination angle I, and magnetic field strength S provided in PX4-SITL :cite:p:`px4`,
obtained from the 2018 World Magnetic Model (WMM) :cite:p:`compute_mag_components`.
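As a rough illustration, given the declination D, inclination I (in radians) and strength S interpolated
from those tables, the Earth magnetic field vector expressed in the local NED frame can be assembled as
follows (a minimal sketch, not the actual implementation; table interpolation, rotation into the body
frame and sensor noise are omitted):

.. code:: Python

   import numpy as np

   def magnetic_field_ned(declination, inclination, strength):
       # Split the horizontal component by declination; the vertical one by inclination
       horizontal = strength * np.cos(inclination)
       return np.array([
           horizontal * np.cos(declination),  # north
           horizontal * np.sin(declination),  # east
           strength * np.sin(inclination),    # down
       ])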
.. automodule:: pegasus.simulator.logic.sensors.magnetometer
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/backends.backend.rst

Backend
=======
.. automodule:: pegasus.simulator.logic.backends.backend
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/params.rst

Params
======
.. automodule:: pegasus.simulator.params
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/backends.ros2_backend.rst

ROS2 Backend
============
.. automodule:: pegasus.simulator.logic.backends.ros2_backend
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/sensors.imu.rst

IMU
===
The IMU is composed of a gyroscope and an accelerometer, which measure
the angular velocity and acceleration of the vehicle in the body frame, respectively.
The bias/drift of the IMU is modeled as a slowly varying random walk (diffusion)
process, as described in :cite:p:`kalibr` and :cite:p:`maybeck`.
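A minimal sketch of such a measurement model for the gyroscope follows (assumed variable names; the
actual class also simulates the accelerometer, turn-on bias and the configured noise densities):

.. code:: Python

   import numpy as np

   def gyro_measurement(true_rate, bias, sigma_noise, sigma_bias, dt):
       # The bias evolves as a slowly varying random walk (diffusion process)
       bias = bias + sigma_bias * np.sqrt(dt) * np.random.randn(3)
       # Measurement = true angular velocity + bias + discretized white noise
       measurement = true_rate + bias + (sigma_noise / np.sqrt(dt)) * np.random.randn(3)
       return measurement, bias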
.. automodule:: pegasus.simulator.logic.sensors.imu
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/vehicles.multirotor.rst

Multirotor
==========
.. automodule:: pegasus.simulator.logic.vehicles.multirotor
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/graphs.ros2_camera.rst

ROS2 Camera
===========
.. automodule:: pegasus.simulator.logic.graphs.ros2_camera
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/sensors.gps.rst

GPS
===
To guarantee full compatibility with the PX4 navigation system, the projection from the
local to the global coordinate system, i.e., latitude and longitude, is performed by transforming
the vehicle position to the geographic coordinate system using the azimuthal equidistant
projection, in accordance with the World Geodetic System (WGS84) :cite:p:`azimuthal_projection`,
:cite:p:`Snyder1987`.
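For illustration, a minimal spherical version of the inverse azimuthal equidistant projection is
sketched below (assumed function and variable names; the actual implementation follows the WGS84
conventions used by PX4):

.. code:: Python

   import numpy as np

   EARTH_RADIUS = 6371000.0  # mean Earth radius in meters (spherical approximation)

   def local_to_geodetic(x_north, y_east, lat_ref, lon_ref):
       # Inverse azimuthal equidistant projection (see Snyder, 1987), angles in radians
       rho = np.hypot(x_north, y_east)
       if rho < 1e-9:
           return lat_ref, lon_ref
       c = rho / EARTH_RADIUS  # angular distance from the reference point
       lat = np.arcsin(np.cos(c) * np.sin(lat_ref) + (x_north * np.sin(c) * np.cos(lat_ref)) / rho)
       lon = lon_ref + np.arctan2(
           y_east * np.sin(c),
           rho * np.cos(lat_ref) * np.cos(c) - x_north * np.sin(lat_ref) * np.sin(c),
       )
       return lat, lon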
.. automodule:: pegasus.simulator.logic.sensors.gps
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/sensors.barometer.rst

Barometer
=========
This sensor follows the International Standard Atmosphere (ISA) model :cite:p:`baromter_implementation`.
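For reference, the tropospheric part of the ISA model relates pressure to altitude as sketched below
(standard atmosphere constants; a minimal sketch valid below roughly 11 km):

.. code:: Python

   def isa_pressure(altitude_m):
       # Sea-level pressure [Pa], temperature [K] and temperature lapse rate [K/m]
       P0, T0, L = 101325.0, 288.15, 0.0065
       # Gravity [m/s^2] and specific gas constant of dry air [J/(kg.K)]
       g, R = 9.80665, 287.05
       return P0 * (1.0 - L * altitude_m / T0) ** (g / (R * L))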
.. automodule:: pegasus.simulator.logic.sensors.barometer
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/graphs.graph.rst

Graph
======
.. automodule:: pegasus.simulator.logic.graphs.graph
:members:
:undoc-members:
:show-inheritance:
PegasusSimulator/PegasusSimulator/docs/source/api/index.rst

API Reference
=============
Sensors
-------
.. toctree::
:maxdepth: 2
sensors.sensor
sensors.barometer
sensors.gps
sensors.imu
sensors.magnetometer
Graphs
------
.. toctree::
:maxdepth: 2
graphs.graph
graphs.ros2_camera
Dynamics
--------
.. toctree::
:maxdepth: 2
dynamics.drag
dynamics.linear_drag
Thruster
--------
.. toctree::
:maxdepth: 2
thrusters.quadratic_thrust_curve
Control Backends
----------------
.. toctree::
:maxdepth: 2
backends.backend
backends.mavlink_backend
backends.ros2_backend
Vehicle
-------
.. toctree::
:maxdepth: 2
state
vehicles.vehicle
vehicles.vehicle_manager
vehicles.multirotor
Pegasus Interface
----------------
.. toctree::
:maxdepth: 2
pegasus_interface
Default Params
--------------
.. toctree::
:maxdepth: 2
params
PegasusSimulator/PegasusSimulator/docs/source/features/vehicles.rst

Vehicles
========
In this first version of the Pegasus Simulator, we only provide the implementation of a generic ``Multirotor`` vehicle
and a 3D asset for the ``3DR Iris`` quadrotor. Please check the :ref:`Roadmap` section for future plans regarding new vehicle
topologies and the :ref:`Contributing` section to contribute if you have a new vehicle model that you would like to see added.
To create a Multirotor object from scratch, consider the following example code:
.. code:: Python
from pegasus.simulator.params import ROBOTS
from pegasus.simulator.logic.dynamics import LinearDrag
from pegasus.simulator.logic.thrusters import QuadraticThrustCurve
from pegasus.simulator.logic.sensors import Barometer, IMU, Magnetometer, GPS
from pegasus.simulator.logic.vehicles.multirotor import Multirotor, MultirotorConfig
from pegasus.simulator.logic.backends.mavlink_backend import MavlinkBackend
# Auxiliary scipy import to set the rotation of the vehicle in quaternion
from scipy.spatial.transform import Rotation
# Create a multirotor configuration object
multirotor_config = MultirotorConfig()
multirotor_config.stage_prefix="quadrotor"
multirotor_config.usd_file = ""
# The default thrust curve for a quadrotor and dynamics relating to drag
multirotor_config.thrust_curve = QuadraticThrustCurve()
multirotor_config.drag = LinearDrag([0.50, 0.30, 0.0])
# Set the sensors for a quadrotor
# For each sensor we are using the default parameters, but you can also pass in a dictionary
# to configure them. Check the Sensors API documentation for more information.
multirotor_config.sensors = [Barometer(), IMU(), Magnetometer(), GPS()]
# The backends for actually sending commands to the vehicle.
# By default use mavlink (with default mavlink configurations).
# It can also be your own custom Control Backend implementation!
#
# Note: you can have multiple backends (this is useful for creating backends
# for logging purposes) but only the first backend in the list will be used
# to send commands to the vehicles. The others will just be used to receive the
# current state of the vehicle and the data produced by the sensors
multirotor_config.backends = [MavlinkBackend()]
# Create and spawn the multirotor object in the scene
Multirotor(
stage_prefix="/World/quadrotor",
# The path to the multirotor USD file
usd_file=ROBOTS['Iris'],
vehicle_id=0,
init_pos=[0.0, 0.0, 0.07],
init_orientation=Rotation.from_euler("XYZ", [0.0, 0.0, 0.0], degrees=True).as_quat(),
config=multirotor_config,
)
To define and use a custom multirotor frame, you must adhere to the adopted convention. Therefore, a vehicle
must be defined in a USD file and have a ``/body`` Xform and multiple ``/rotor<id>`` Xform objects. Each ``/rotor<id>`` Xform
must contain a Revolute Joint named ``/rotor<id>``, where the ``<id>`` of the joint must coincide with the Xform name.
An example tree of the 3DR Iris quadrotor is presented below.
.. image:: /_static/features/vehicle_standard.png
:width: 600px
:align: center
:alt: Vehicle standard for defining the frames
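In plain text, the same hierarchy looks roughly like this (illustrative sketch for a quadrotor, following
the convention described above):

.. code:: text

   /quadrotor
      /body
      /rotor0   (Xform containing a Revolute Joint named "rotor0")
      /rotor1   (Xform containing a Revolute Joint named "rotor1")
      /rotor2   (Xform containing a Revolute Joint named "rotor2")
      /rotor3   (Xform containing a Revolute Joint named "rotor3")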
Additionally, you should set the mass and moments of inertia of the bodies that compose the vehicle directly in the USD file,
as well as the physics colliders.
PegasusSimulator/PegasusSimulator/docs/source/features/px4_integration.rst

PX4 Integration
===============
The ``PX4-Autopilot`` support is provided by making use of the ``Control Backends API`` and implementing a custom
``MavlinkBackend``, which contains a built-in tool to launch and kill PX4 in SITL mode automatically.
To instantiate a ``MavlinkBackend`` via Python scripting, consider the following example:
.. code:: Python
# Import the Mavlink backend module
from pegasus.simulator.logic.backends.mavlink_backend import MavlinkBackend, MavlinkBackendConfig
# Create the multirotor configuration
# In this example we are showing the default parameters that are used if you do not specify them
mavlink_config = MavlinkBackendConfig({"vehicle_id": 0,
"connection_type": "tcpin",
"connection_ip": "localhost",
# The actual port that gets used = "connection_baseport" + "vehicle_id"
"connection_baseport": 4560,
"enable_lockstep": True,
"num_rotors": 4,
"input_offset": [0.0, 0.0, 0.0, 0.0],
"input_scaling": [1000.0, 1000.0, 1000.0, 1000.0],
"zero_position_armed": [100.0, 100.0, 100.0, 100.0],
"update_rate": 250.0,
# Settings for automatically launching PX4
# If px4_autolaunch==False, then "px4_dir" and "px4_vehicle_model" are unused
"px4_autolaunch": True,
"px4_dir": "PegasusInterface().px4_path",
"px4_vehicle_model": "iris",
})
config_multirotor.backends = [MavlinkBackend(mavlink_config)]
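Since the port that actually gets used is ``connection_baseport + vehicle_id``, spawning a second
vehicle only requires a different ``vehicle_id`` (a hypothetical example, with the remaining fields
left at their defaults):

.. code:: Python

   # Hypothetical second vehicle: PX4 will connect on port 4560 + 1 = 4561
   mavlink_config_2 = MavlinkBackendConfig({"vehicle_id": 1, "connection_baseport": 4560})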
.. note::
In general, the Pegasus Simulator does not need to know where you have PX4 running to simulate the vehicle and send data
through ``MAVLink``. However, if you intend to use the provided ``PX4 auto-launch`` feature, you must inform the Pegasus Simulator
where you have your local install of PX4.
By default, the simulator expects PX4 to be located at ``~/PX4-Autopilot`` directory. You can set the default
path for the ``PX4-Autopilot`` by either:
1. Using the GUI of the Pegasus Simulator when operating in extension mode.
.. image:: /_static/pegasus_GUI_px4_dir.png
:width: 600px
:align: center
:alt: Setting the PX4 path
2. Use the methods provided by :class:`PegasusInterface`, i.e.:
.. code:: Python
from pegasus.simulator.params import SIMULATION_ENVIRONMENTS
from pegasus.simulator.logic.interface.pegasus_interface import PegasusInterface
# Start the Pegasus Interface
pg = PegasusInterface()
# Set the default PX4 installation path used by the simulator
# This will be saved for future runs
pg.set_px4_path("path_to_px4_directory")
PegasusSimulator/PegasusSimulator/docs/source/features/environments.rst

Environments
============
At the moment, we only provide in the :ref:`Params` API a dictionary named ``SIMULATION_ENVIRONMENTS``,
which stores the paths to pre-made ``Isaac Sim`` worlds. As we update this simulation framework, expect
the list of default simulation environments to grow.
List of provided simulation environments
----------------------------------------
.. table::
:widths: 25 17
+----------------------------+--------------------------+
| World | Name |
+============================+==========================+
| |default_environment| | Default Environment |
+----------------------------+--------------------------+
| |black_gridroom| | Black Gridroom |
+----------------------------+--------------------------+
| |curved_gridroom| | Curved Gridroom |
+----------------------------+--------------------------+
| |hospital| | Hospital |
+----------------------------+--------------------------+
| |office| | Office |
+----------------------------+--------------------------+
| |simple_room| | Simple Room |
+----------------------------+--------------------------+
| |warehouse| | Warehouse |
+----------------------------+--------------------------+
| |warehouse_with_forklifts| | Warehouse with Forklifts |
+----------------------------+--------------------------+
| |warehouse_with_shelves| | Warehouse with Shelves |
+----------------------------+--------------------------+
| |full_warehouse| | Full Warehouse |
+----------------------------+--------------------------+
| |flat_plane| | Flat Plane |
+----------------------------+--------------------------+
| |rought_plane| | Rough Plane |
+----------------------------+--------------------------+
| |slope_plane| | Slope Plane |
+----------------------------+--------------------------+
| |stairs_plane| | Stairs Plane |
+----------------------------+--------------------------+
To spawn one of the provided environments when using the Pegasus Simulator
in standalone application mode, you can just add to your code:
.. code:: Python
from pegasus.simulator.params import SIMULATION_ENVIRONMENTS
from pegasus.simulator.logic.interface.pegasus_interface import PegasusInterface
# Start the Pegasus Interface
pg = PegasusInterface()
# Load the environment
pg.load_environment(SIMULATION_ENVIRONMENTS["Curved Gridroom"])
To index the dictionary of pre-made simulation environments, just use the names in the ``Name`` column of the table above.
.. note::
In this initial version it is not possible to spawn a custom 3D USD world using the Pegasus Simulator GUI.
If you use the Pegasus Simulator in extension mode and want to use your custom worlds, for now you need
to manually drag and drop the assets into the viewport like a caveman 👌️. This is for sure a feature on the :ref:`Roadmap`.
However, when using the Pegasus Simulator in standalone application mode, i.e., Python scripting,
you can load your own custom USD files using the ``load_environment(usd_path)`` method.
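For example, assuming a custom world saved at a hypothetical local path:

.. code:: Python

   pg.load_environment("/home/user/worlds/my_custom_world.usd")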
.. Definition of the image alias
.. |default_environment| image:: /_static/worlds/Default\ Environment.png
.. |black_gridroom| image:: /_static/worlds/Black\ Gridroom.png
.. |curved_gridroom| image:: /_static/worlds/Curved\ Gridroom.png
.. |hospital| image:: /_static/worlds/Hospital.png
.. |office| image:: /_static/worlds/Office.png
.. |simple_room| image:: /_static/worlds/Simple\ Room.png
.. |warehouse| image:: /_static/worlds/Warehouse.png
.. |warehouse_with_forklifts| image:: /_static/worlds/Warehouse\ with\ Forklifts.png
.. |warehouse_with_shelves| image:: /_static/worlds/Warehouse\ with\ Shelves.png
.. |full_warehouse| image:: /_static/worlds/Full\ Warehouse.png
.. |flat_plane| image:: /_static/worlds/Flat\ Plane.png
.. |rought_plane| image:: /_static/worlds/Rough\ Plane.png
.. |slope_plane| image:: /_static/worlds/Slope\ Plane.png
.. |stairs_plane| image:: /_static/worlds/Stairs\ Plane.png
Setting the Map Global Coordinates
----------------------------------
By default, the latitude, longitude and altitude of the origin of the simulated world
are set to the geographic coordinates of `Instituto Superior Técnico, Lisbon (Portugal)`, i.e.:
- **latitude=** 38.736832 (º)
- **longitude=** -9.137977 (º)
- **altitude=** 90.0 (m)
You can change the default coordinates by either:
1. Using the GUI of the Pegasus Simulator when operating in extension mode.
.. image:: /_static/features/setting_geographic_coordinates.png
:width: 600px
:align: center
:alt: Setting the geographic coordinates
2. Use the methods provided by :class:`PegasusInterface`, i.e.:
.. code:: Python
from pegasus.simulator.params import SIMULATION_ENVIRONMENTS
from pegasus.simulator.logic.interface.pegasus_interface import PegasusInterface
# Start the Pegasus Interface
pg = PegasusInterface()
# Change only the global coordinates for this instance of the code
# Future code runs will keep the same default coordinates
pg.set_global_coordinates(latitude, longitude, altitude)
# Change the default global coordinates for the simulator
# This will be saved for future runs
pg.set_new_global_coordinates(latitude, longitude, altitude)
PegasusSimulator/PegasusSimulator/docs/source/references/roadmap.rst

Roadmap
=======
This section presents a basic feature roadmap. The features in this roadmap are subject to
change, and there are no specific delivery dates nor a priority table. Some of the unchecked features
might already be implemented but not yet documented.
If there is some feature missing that you would like to see added, please check the :ref:`Contributing` section.
* Supported sensors
* |check_| Barometer
* |check_| GPS
* |check_| IMU (Accelerometer + Gyroscope)
* |check_| Magnetometer
* |check_| Camera (Implemented as a Graph Node)
* |uncheck_| UDP Camera
* Supported actuators
* |check_| Quadratic Thrust Curve
* |uncheck_| Gimbal Control
* Base vehicles
* |check_| 3DR Iris
* |uncheck_| Typhoon
* |uncheck_| Fixed-wing plane
* |uncheck_| VTOL
* API backends
* |check_| Mavlink (with direct PX4 integration)
* |check_| Direct ROS 2 interface
* |uncheck_| Python backend for Reinforcement Learning (RL)
* UI
* |check_| Select from NVIDIA samples worlds
* |check_| Select from Pegasus sample vehicles
* |uncheck_| Support for custom vehicles in the UI dropdown
* |uncheck_| Support for custom worlds in the UI dropdown
* |uncheck_| Add an option to clone and compile PX4-Autopilot directly from the Pegasus Simulator UI
.. |check| raw:: html
<input checked="" type="checkbox">
.. |check_| raw:: html
<input checked="" disabled="" type="checkbox">
.. |uncheck| raw:: html
<input type="checkbox">
.. |uncheck_| raw:: html
<input disabled="" type="checkbox">
PegasusSimulator/PegasusSimulator/docs/source/references/changelog.rst

Changelog
=========
All notable changes to this project are documented in this file. The format is based on `Keep a Changelog <https://keepachangelog.com/en/1.0.0/>`__.
The changelog for the Pegasus Simulator is located in the ``docs`` directory of the extension. If you propose any changes to the project via Pull Requests, please also update the changelog file accordingly. Check the :ref:`Contributing` section for more information.
.. include:: ../../../extensions/pegasus.simulator/docs/CHANGELOG.md
:parser: myst_parser.sphinx_
:start-line: 3
PegasusSimulator/PegasusSimulator/docs/source/references/known_issues.rst

Known Issues
============
ROS 2 control backend not working properly
------------------------------------------
At the moment, the Isaac Sim support for ROS 2 Humble (Ubuntu 22.04 LTS) is still a little bit shaky. Since this extension
was developed on Ubuntu 22.04 LTS, I was not able to fully test this functionality yet. Therefore, the ROS 2 control backend was
temporarily disabled in the GUI, and the only control backend available when using the Pegasus Simulator in extension mode is the PX4/MAVLink
one. This functionality will be re-activated as soon as it becomes stable. In the meantime, you can use the `ROS 2 interface
provided by the PX4 team <https://docs.px4.io/main/en/ros/ros2_comm.html>`__.
PegasusSimulator/PegasusSimulator/docs/source/references/bibliography.rst

Bibliography
============
.. bibliography::
PegasusSimulator/PegasusSimulator/docs/source/references/license.rst

License
=======
Pegasus Simulator License
-------------------------
| The Pegasus Simulator is an open-source framework that follows a `BSD-3 Clause License <https://opensource.org/licenses/BSD-3-Clause/>`__.
| **Author and lead developer:** Marcelo Jacinto (marcelo.jacinto@tecnico.ulisboa.pt)
.. code-block:: text
BSD 3-Clause License
Copyright (c) 2023, Pegasus Simulator, Marcelo Jacinto
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this
list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of the copyright holder nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY,
OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
This framework is built on top of `NVIDIA Omniverse <https://docs.omniverse.nvidia.com/>`__ and `Isaac
Sim <https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/overview.html>`__ which is made available for free under the
individual license. The license files for all the NVIDIA assets used in this framework are available in NVIDIA's `documentation <https://docs.omniverse.nvidia.com/app_isaacsim/common/licenses.html>`_.
3DR Iris Asset
--------------
.. include:: /licenses/assets/iris-license.rst
PegasusSimulator/PegasusSimulator/docs/source/references/contributing.rst

Contributing
============
The Pegasus Simulator is an open-source effort, started by me, Marcelo Jacinto, in January 2023. It is a tool that was
created with the original purpose of serving my Ph.D. workplan for the next 4 years, which means that you can expect
this repository to be maintained by me directly, hopefully until 2027.
With that said, it is very likely that you will stumble upon bugs in the code or missing features. Information on how
to contribute with code, documentation or suggestions for the project roadmap can be found in the following sections.
Issues, Bug Reporting and Feature Requests
------------------------------------------
- **Bug reports:** Report bugs you find in the GitHub Issues tab.
- **Feature requests:** Suggest new features you would like to see in the GitHub Discussions tab.
- **Code contributions:** Submit a GitHub Pull Request (read the next section).
Branch and Version Model
------------------------
This project uses a two-branch Git model:
- **main:** By default points to the latest stable tag version of the project.
- **dev:** Corresponds to an unstable versions of the code that are not well tested yet.
In this project, we avoid performing merges and give preference to a fork/pull-request structure. All code contributions
have to be made under the permissive BSD 3-clause license and must not impose any further constraints on its use.
Contributing with Code
----------------------
Please follow these steps to contribute with code:
1. Create an issue in the GitHub Issues tab to discuss new changes or additions to the code.
2. `Fork <https://docs.github.com/en/get-started/quickstart/fork-a-repo>`__ the repository.
3. `Create a new branch <https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-and-deleting-branches-within-your-repository>`__ for your changes.
4. Make your changes.
5. Run pre-commit on all files to make sure the code is well formatted (check the :ref:`Code Style` section).
6. Commit the changes following the guide in the :ref:`Commit Messages` section.
7. Update the documentation accordingly (check :ref:`Contributing with Documentation` section).
8. Push your changes to your forked repository.
9. `Submit a pull request <https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/proposing-changes-to-your-work-with-pull-requests/creating-a-pull-request-from-a-fork>`__ to the main branch of this project.
10. Ensure all the checks on the pull request are successful.
After sending a pull request, the developer team will review your code and provide feedback.
.. note::
Ensure that your code is well-formatted, documented and working.
Commit Messages
~~~~~~~~~~~~~~~
Each commit message should be short and provide a good description of what is being changed or added to the code. As such,
we suggest that every commit message start with one of the following prefixes:
* ``feat``: For new features.
* ``fix``: For bug fixes.
* ``rem``: For removing code or features.
* ``doc``: For adding or changing documentation.
* ``chore``: When none of the above are a good fit.
Here is an example of a "good" commit:
.. code:: bash
git commit -m "feat: new vehicle thruster dynamics"
.. note::
We do not enforce strictly this commit policy, but it is highly recommended.
Pull Requests
~~~~~~~~~~~~~
The description of the Pull Request should include:
- An overview of what is being added, changed or removed; enough to understand the broad purpose of the code.
- Links to related issues, supporting information or research papers (if useful).
- Information about what code testing has been conducted.
Code Style
~~~~~~~~~~
The inline code documentation follows the `Google Style Guides <https://google.github.io/styleguide/pyguide.html>`__ while the Python code follows the `PEP guidelines <https://peps.python.org/pep-0008/>`__. We use
the `pre-commit <https://pre-commit.com/>`__ tool for maintaining code quality and consistency over the codebase.
You can install ``pre-commit`` by running:
.. code:: bash
pip install pre-commit
If you do not want to pollute your Python environment, please use
`venv <https://docs.python.org/3/library/venv.html>`__ or `conda <https://docs.conda.io/en/latest/>`__.
To run ``pre-commit`` over the entire repository, execute:
.. code:: bash
pre-commit run --all-files
Contributing with Documentation
-------------------------------
I know, everyone hates to write documentation - it's boring... but it is needed. That's why we tried
to make it easy to contribute to it.
All the source files for the documentation are located in the ``docs`` directory. The documentation is written in
`reStructuredText <https://www.sphinx-doc.org/en/master/>`__ format. We use Sphinx with the
`Read the Docs Theme <https://readthedocs.org/projects/sphinx/>`__ for generating the documentation. Sending a pull
request for the documentation is the same as sending a pull request for the codebase. Please follow the steps
mentioned in the :ref:`Contributing with Code` section.
To build the documentation, you need to install a few Python
dependencies. If you do not want to pollute your Python environment, please use
`venv <https://docs.python.org/3/library/venv.html>`__ or `conda <https://docs.conda.io/en/latest/>`__.
To generate the html documentation, execute the following commands:
1. Enter the ``docs`` directory.
.. code:: bash
# (relative to the root of the repository)
cd docs
2. Install the python dependencies.
.. code:: bash
pip install -r requirements.txt
3. Build the documentation.
.. code:: bash
make html
4. Open the documentation in a browser.
.. code:: bash
xdg-open _build/html/index.html
Contributing with Assets
------------------------
Creating 3D models is a hard and time-consuming task. We encourage people to share models that they feel will be useful
for the community, as long as:
1. The assets are appropriately licensed.
2. They can be distributed in an open-source repository.
.. note::
Currently, we still do not have a standard approach for submitting open-source assets to be incorporated into the Pegasus Simulator,
but a possible future solution might lie in hosting small-sized assets in this repository and large
worlds on a Nucleus server. If you have a great idea regarding this subject, share it with us on the GitHub Issues tab!
Sponsor the project
-------------------
If you want to be a part of this project, or sponsor my work with some graphics cards, Jetson developer boards and other development
material, please reach out to me directly at ``marcelo.jacinto@tecnico.ulisboa.pt``.
Current sponsors:
- Dynamics Systems and Ocean Robotics (DSOR) group of the Institute for Systems and Robotics (ISR), a research unit of the Laboratory of Robotics and Engineering Systems (LARSyS).
- Instituto Superior Técnico, Universidade de Lisboa
The work developed by Marcelo Jacinto and João Pinto was supported by Ph.D. grants funded by Fundação para a Ciência e a Tecnologia (FCT).
.. raw:: html
<p float="left" align="center">
<img src="../../_static/dsor_logo.png" width="90" align="center" />
<img src="../../_static/logo_isr.png" width="200" align="center"/>
<img src="../../_static/larsys_logo.png" width="200" align="center"/>
<img src="../../_static/ist_logo.png" width="200" align="center"/>
<img src="../../_static/logo_fct.png" width="200" align="center"/>
</p>
Conv-AI/ov_extension/CHANGELOG.md

# Changelog
All notable changes to this project will be documented in this file.
# Release 1.0.3
**Fixed**
- Extension failed to install on latest Omniverse apps.
# Release 1.0.2
**Improved**
- Action Graph for the demo stage.
# Release 1.0.1
**Fixed**
- UI bug preventing Convai window from appearing.
# Release 1.0.0
**Added**
- GRPC streaming for reduced latency.
- Demo stage with Nvidia character.
# Release 0.1.0-alpha
- Added REST API integration.
Conv-AI/ov_extension/README.md

# Convai Omniverse Extension
## Introduction
The Convai Omniverse Extension provides seamless integration between the [Convai](https://convai.com/) API and Omniverse, allowing users to connect their 3D character assets with intelligent conversational agents. With this extension, users can define their character's backstory and voice at Convai and easily connect the character using its character ID.
## Installation
To install the Convai Omniverse Extension, follow these steps:
1. Clone the latest version of the repo.
2. Open the Omniverse app of your choice (e.g., Code) and from the `Window` menu click `Extensions`.
3. In the extensions tab, click the gear icon in the top right.
<p align="left">
<img height="350" src="images/extensions.png?raw=true">
</p>
4. Click the green plus icon in the `Edit` column and add the absolute path to the `exts` folder found in the repository directory.
<p align="left">
<img height="350" src="images/SearchPath.png?raw=true">
</p>
5. Select the `Third Party` tab, search for `Convai` in the top left search bar, and make sure to check `Enabled`.
<p align="left">
<img height="250" src="images/ConvaiSearch.png?raw=true">
</p>
6. The Convai window should appear, drag it and dock it in any suitable area of the UI.
7. If the Convai window does not appear, go to the `Window` menu and select `Convai` from the list.
## Configuration
To add your API Key and Character ID, follow these steps:
1. Sign up at [Convai](https://convai.com/).
2. On the website, click the gear icon in the top-right corner of the playground, then copy and paste the API key into the `Convai API Key` field in the Convai extension window.
3. Go to the [Dashboard](https://convai.com/pipeline/dashboard) and, on the left panel, either create a new character or select a sample one.
4. Copy the Character ID and paste it in the `Character ID` field in the extension window.
## Actions
Actions can be used to trigger events with the same name as the action in the `Action Graph`. They can be used to run animations based on the action received. To try out actions:
1. Add a few comma-separated actions to the `Comma separated actions` field (e.g., jump, kick, dance, etc.).
2. The character will select one of the actions based on the conversation and run any event with the same name as the action in the `Action Graph` (see the snippet below).
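Under the hood, the extension pushes a message-bus event whose registered name is the action name prefixed with `omni.graph.action.` (see `fire_event` in `exts/convai/convai/extension.py`), which is what custom-event nodes in the Action Graph listen for:

```python
# Illustrative excerpt based on fire_event() in extension.py
import carb.events
import omni.kit.app

def fire_event(event_name: str):
    reg_event_name = carb.events.type_from_string("omni.graph.action." + event_name)
    omni.kit.app.get_app().get_message_bus_event_stream().push(reg_event_name, payload={})
```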
## Running the Demo
1. Open your chosen Omniverse app (e.g., Code).
2. Go to `File->Open` and navigate to the repo directory.
3. Navigate to `<repo directory>/ConvaiDemoStage/ConvaiDemo.usd` and open it.
4. Click the `play` button from the `Toolbar` menu on the left.
<p align="left">
<img height="350" src="images/PlayToolbar.png?raw=true">
</p>
5. Click `Start Talking` in the `Convai` window to talk to the character then click `Stop` to send the request.
## Notes
- The extension is tested in Omniverse Code, but you are welcome to try it out in other apps as well.
- The demo stage includes only talk and idle animations. However, it is possible to add more animations and trigger them using the action selected by the character. More on that in the future.
Conv-AI/ov_extension/exts/convai/convai/extension.py

import math, os
import asyncio
import numpy as np
import omni.ext
import carb.events
import omni.ui as ui
import configparser
import pyaudio
import grpc
from .rpc import service_pb2 as convai_service_msg
from .rpc import service_pb2_grpc as convai_service
from .convai_audio_player import ConvaiAudioPlayer
from typing import Generator
import io
from pydub import AudioSegment
import threading
import traceback
import time
from collections import deque
import random
from functools import partial
__location__ = os.path.realpath(os.path.join(os.getcwd(), os.path.dirname(__file__)))
CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 12000
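# Microphone capture settings (assumed rationale): 16-bit mono PCM read in
# 1024-frame chunks at 12 kHz; RATE is also what is advertised to the Convai
# service via AudioConfig(sample_rate_hertz=RATE) when the stream is configured.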
def log(text: str, warning: bool =False):
print(f"[convai] {'[Warning]' if warning else ''} {text}")
class ConvaiExtension(omni.ext.IExt):
WINDOW_NAME = "Convai"
MENU_PATH = f"Window/{WINDOW_NAME}"
def on_startup(self, ext_id: str):
self.IsCapturingAudio = False
self.on_new_frame_sub = None
self.channel_address = None
self.channel = None
self.SessionID = None
self.channelState = grpc.ChannelConnectivity.IDLE
self.client = None
self.ConvaiGRPCGetResponseProxy = None
self.PyAudio = pyaudio.PyAudio()
self.stream = None
self.Tick = False
self.TickThread = None
self.ConvaiAudioPlayer = ConvaiAudioPlayer(self._on_start_talk_callback, self._on_stop_talk_callback)
self.LastReadyTranscription = ""
self.ResponseTextBuffer = ""
self.OldCharacterID = ""
self.response_UI_Label_text = ""
self.action_UI_Label_text = "<Action>"
self.transcription_UI_Label_text = ""
# self.response_UI_Label_text = "<Response will apear here>"
self.response_UI_Label_text = "" # Turn off response text due to unknown crash
self.StartTalking_Btn_text = "Start Talking"
self.StartTalking_Btn_state = True
self.UI_Lock = threading.Lock()
self.Mic_Lock = threading.Lock()
self.UI_update_counter = 0
self.on_new_update_sub = None
ui.Workspace.set_show_window_fn(ConvaiExtension.WINDOW_NAME, partial(self.show_window, None))
ui.Workspace.show_window(ConvaiExtension.WINDOW_NAME)
# # Put the new menu
editor_menu = omni.kit.ui.get_editor_menu()
if editor_menu:
self._menu = editor_menu.add_item(
ConvaiExtension.MENU_PATH, self.show_window, toggle=True, value=True
)
# self.show_window(None, True)
self.read_channel_address_from_config()
self.create_channel()
log("ConvaiExtension started")
def setup_UI(self):
self._window = ui.Window(ConvaiExtension.WINDOW_NAME, width=300, height=300)
self._window.set_visibility_changed_fn(self._visiblity_changed_fn)
with self._window.frame:
with ui.VStack():
with ui.HStack(height = ui.Length(30)):
l = ui.Label("Convai API key")
self.APIKey_input_UI = ui.StringField()
ui.Spacer(height=5)
with ui.HStack(height = ui.Length(30)):
l = ui.Label("Character ID")
self.CharID_input_UI = ui.StringField()
ui.Spacer(height=5)
# with ui.HStack(height = ui.Length(30)):
# l = ui.Label("Session(Leave empty for 1st time)")
# self.session_input_UI = ui.StringField()
# ui.Spacer(height=5)
with ui.HStack(height = ui.Length(30)):
l = ui.Label("Comma seperated actions")
self.actions_input_UI = ui.StringField()
self.actions_input_UI.set_tooltip("e.g. Dances, Jumps")
ui.Spacer(height=5)
# self.response_UI_Label = ui.Label("", height = ui.Length(60), word_wrap = True)
# self.response_UI_Label.alignment = ui.Alignment.CENTER
self.action_UI_Label = ui.Label("<Action>", height = ui.Length(30), word_wrap = False)
self.action_UI_Label.alignment = ui.Alignment.CENTER
ui.Spacer(height=5)
self.StartTalking_Btn = ui.Button("Start Talking", clicked_fn=lambda: self.on_start_talking_btn_click(), height = ui.Length(30))
self.transcription_UI_Label = ui.Label("", height = ui.Length(60), word_wrap = True)
self.transcription_UI_Label.alignment = ui.Alignment.CENTER
if self.on_new_update_sub is None:
self.on_new_update_sub = (
omni.kit.app.get_app()
.get_update_event_stream()
.create_subscription_to_pop(self._on_UI_update_event, name="convai new UI update")
)
self.read_UI_from_config()
return self._window
def _on_UI_update_event(self, e):
if self.UI_update_counter>1000:
self.UI_update_counter = 0
self.UI_update_counter += 1
if self._window is None:
return
if self.UI_Lock.locked():
log("UI_Lock is locked", 1)
return
with self.UI_Lock:
# self.response_UI_Label.text = str(self.response_UI_Label_text)
self.action_UI_Label.text = str(self.action_UI_Label_text)
self.transcription_UI_Label.text = str(self.transcription_UI_Label_text)
self.StartTalking_Btn.text = self.StartTalking_Btn_text
self.StartTalking_Btn.enabled = self.StartTalking_Btn_state
def start_tick(self):
if self.Tick:
log("Tick already started", 1)
return
self.Tick = True
self.TickThread = threading.Thread(target=self._on_tick)
self.TickThread.start()
def stop_tick(self):
if self.TickThread and self.Tick:
self.Tick = False
self.TickThread.join()
def read_channel_address_from_config(self):
config = configparser.ConfigParser()
config.read(os.path.join(__location__, 'convai.env'))
self.channel_address = config.get("CONVAI", "CHANNEL")
def read_UI_from_config(self):
config = configparser.ConfigParser()
config.read(os.path.join(__location__, 'convai.env'))
api_key = config.get("CONVAI", "API_KEY")
self.APIKey_input_UI.model.set_value(api_key)
character_id = config.get("CONVAI", "CHARACTER_ID")
self.CharID_input_UI.model.set_value(character_id)
actions_text = config.get("CONVAI", "ACTIONS")
self.actions_input_UI.model.set_value(actions_text)
def save_config(self):
config = configparser.ConfigParser()
config.read(os.path.join(__location__, 'convai.env'))
config.set("CONVAI", "API_KEY", self.APIKey_input_UI.model.get_value_as_string())
config.set("CONVAI", "CHARACTER_ID", self.CharID_input_UI.model.get_value_as_string())
config.set("CONVAI", "ACTIONS", self.actions_input_UI.model.get_value_as_string())
# config.set("CONVAI", "CHANNEL", self.channel_address)
with open(os.path.join(__location__, 'convai.env'), 'w') as file:
config.write(file)
def create_channel(self):
if (self.channel):
log("gRPC channel already created")
return
self.channel = grpc.secure_channel(self.channel_address, grpc.ssl_channel_credentials())
# self.channel.subscribe(self.on_channel_state_change, True)
log("Created gRPC channel")
def close_channel(self):
if (self.channel):
self.channel.close()
self.channel = None
log("close_channel - Closed gRPC channel")
else:
log("close_channel - gRPC channel already closed")
def on_start_talking_btn_click(self):
if (self.IsCapturingAudio):
# Change UI
with self.UI_Lock:
self.StartTalking_Btn_text = "Processing..."
# self.StartTalking_Btn_text = "Start Talking"
self.StartTalking_Btn_state = False
# Reset response UI text
self.response_UI_Label_text = ""
# Do one last mic read
self.read_mic_and_send_to_grpc(True)
# time.sleep(0.01)
# Stop Mic
self.stop_mic()
else:
# Reset Session ID if Character ID changes
if self.OldCharacterID != self.CharID_input_UI.model.get_value_as_string():
self.OldCharacterID = self.CharID_input_UI.model.get_value_as_string()
self.SessionID = ""
with self.UI_Lock:
# Reset transcription UI text
self.transcription_UI_Label_text = ""
self.LastReadyTranscription = ""
# Change Btn text
self.StartTalking_Btn_text = "Stop"
# Open Mic stream
self.start_mic()
# Stop any on-going audio
self.ConvaiAudioPlayer.stop()
# Save API key, character ID and session ID
self.save_config()
# Create gRPC stream
self.ConvaiGRPCGetResponseProxy = ConvaiGRPCGetResponseProxy(self)
def on_shutdown(self):
self.clean_grpc_stream()
self.close_channel()
self.stop_tick()
if self._menu:
self._menu = None
if self._window:
self._window.destroy()
self._window = None
# Deregister the function that shows the window from omni.ui
ui.Workspace.set_show_window_fn(ConvaiExtension.WINDOW_NAME, None)
log("ConvaiExtension shutdown")
def start_mic(self):
if self.IsCapturingAudio == True:
log("start_mic - mic is already capturing audio", 1)
return
self.stream = self.PyAudio.open(format=FORMAT,
channels=CHANNELS,
rate=RATE,
input=True,
frames_per_buffer=CHUNK)
self.IsCapturingAudio = True
self.start_tick()
log("start_mic - Started Recording")
def stop_mic(self):
if self.IsCapturingAudio == False:
log("stop_mic - mic has not started yet", 1)
return
self.stop_tick()
if self.stream:
self.stream.stop_stream()
self.stream.close()
else:
log("stop_mic - could not close mic stream since it is None", 1)
self.IsCapturingAudio = False
log("stop_mic - Stopped Recording")
def clean_grpc_stream(self):
if self.ConvaiGRPCGetResponseProxy:
self.ConvaiGRPCGetResponseProxy.Parent = None
del self.ConvaiGRPCGetResponseProxy
self.ConvaiGRPCGetResponseProxy = None
# self.close_channel()
def on_transcription_received(self, Transcription: str, IsTranscriptionReady: bool, IsFinal: bool):
'''
Called when user transcription is received
'''
self.UI_Lock.acquire()
self.transcription_UI_Label_text = self.LastReadyTranscription + " " + Transcription
self.UI_Lock.release()
if IsTranscriptionReady:
self.LastReadyTranscription = self.LastReadyTranscription + " " + Transcription
def on_data_received(self, ReceivedText: str, ReceivedAudio: bytes, SampleRate: int, IsFinal: bool):
'''
Called when new text and/or Audio data is received
'''
self.ResponseTextBuffer += str(ReceivedText)
if IsFinal:
with self.UI_Lock:
self.response_UI_Label_text = self.ResponseTextBuffer
self.transcription_UI_Label_text = self.ResponseTextBuffer
self.ResponseTextBuffer = ""
self.ConvaiAudioPlayer.append_to_stream(ReceivedAudio)
return
def on_actions_received(self, Action: str):
'''
Called when actions are received
'''
# Action.replace(".", "")
self.UI_Lock.acquire()
for InputAction in self.parse_actions():
# log (f"on_actions_received: {Action} - {InputAction} - {InputAction.find(Action)}")
if Action.find(InputAction) >= 0:
self.action_UI_Label_text = InputAction
self.fire_event(InputAction)
self.UI_Lock.release()
return
self.action_UI_Label_text = "None"
self.UI_Lock.release()
def on_session_ID_received(self, SessionID: str):
'''
Called when new SessionID is received
'''
self.SessionID = SessionID
def on_finish(self):
'''
Called when the response stream is done
'''
self.ConvaiGRPCGetResponseProxy = None
with self.UI_Lock:
self.StartTalking_Btn_text = "Start Talking"
self.StartTalking_Btn_state = True
self.clean_grpc_stream()
log("Received on_finish")
def on_failure(self, ErrorMessage: str):
'''
Called when there is an unsuccessful response
'''
log(f"on_failure called with message: {ErrorMessage}", 1)
with self.UI_Lock:
self.transcription_UI_Label_text = "ERROR: Please double check API key and the character ID - Send logs to support@convai.com for further assistance."
self.stop_mic()
self.on_finish()
def _on_tick(self):
while self.Tick:
time.sleep(0.1)
if self.IsCapturingAudio == False or self.ConvaiGRPCGetResponseProxy is None:
continue
self.read_mic_and_send_to_grpc(False)
def _on_start_talk_callback(self):
self.fire_event("start")
log("Character Started Talking")
def _on_stop_talk_callback(self):
self.fire_event("stop")
log("Character Stopped Talking")
def read_mic_and_send_to_grpc(self, LastWrite):
with self.Mic_Lock:
if self.stream:
data = self.stream.read(CHUNK)
else:
log("read_mic_and_send_to_grpc - could not read mic stream since it is none", 1)
data = bytes()
if self.ConvaiGRPCGetResponseProxy:
self.ConvaiGRPCGetResponseProxy.write_audio_data_to_send(data, LastWrite)
else:
log("read_mic_and_send_to_grpc - ConvaiGRPCGetResponseProxy is not valid", 1)
def fire_event(self, event_name):
def registered_event_name(event_name):
"""Returns the internal name used for the given custom event name"""
n = "omni.graph.action." + event_name
return carb.events.type_from_string(n)
reg_event_name = registered_event_name(event_name)
message_bus = omni.kit.app.get_app().get_message_bus_event_stream()
message_bus.push(reg_event_name, payload={})
def parse_actions(self):
actions = ["None"] + self.actions_input_UI.model.get_value_as_string().split(',')
actions = [a.lstrip(" ").rstrip(" ") for a in actions]
return actions
def show_window(self, menu, value):
# with self.UI_Lock:
if value:
self.setup_UI()
self._window.set_visibility_changed_fn(self._visiblity_changed_fn)
else:
if self._window:
self._window.visible = False
def _visiblity_changed_fn(self, visible):
# with self.UI_Lock:
# Called when the user pressed "X"
self._set_menu(visible)
if not visible:
# Destroy the window, since we are creating new window
# in show_window
asyncio.ensure_future(self._destroy_window_async())
def _set_menu(self, value):
"""Set the menu to create this window on and off"""
editor_menu = omni.kit.ui.get_editor_menu()
if editor_menu:
editor_menu.set_value(ConvaiExtension.MENU_PATH, value)
async def _destroy_window_async(self):
# with self.UI_Lock:
# wait one frame, this is due to the one frame defer
# in Window::_moveToMainOSWindow()
await omni.kit.app.get_app().next_update_async()
if self._window:
self._window.destroy()
self._window = None
class ConvaiGRPCGetResponseProxy:
def __init__(self, Parent: ConvaiExtension):
self.Parent = Parent
self.AudioBuffer = deque(maxlen=4096*2)
self.InformOnDataReceived = False
self.LastWriteReceived = False
self.client = None
self.NumberOfAudioBytesSent = 0
self.call = None
self._write_task = None
self._read_task = None
# self._main_task = asyncio.ensure_future(self.activate())
self.activate()
log("ConvaiGRPCGetResponseProxy constructor")
def activate(self):
# Validate API key
if (len(self.Parent.APIKey_input_UI.model.get_value_as_string()) == 0):
self.Parent.on_failure("API key is empty")
return
# Validate Character ID
if (len(self.Parent.CharID_input_UI.model.get_value_as_string()) == 0):
self.Parent.on_failure("Character ID is empty")
return
# Validate Channel
if self.Parent.channel is None:
log("grpc - self.Parent.channel is None", 1)
self.Parent.on_failure("gRPC channel was not created")
return
# Create the stub
self.client = convai_service.ConvaiServiceStub(self.Parent.channel)
threading.Thread(target=self.init_stream).start()
def init_stream(self):
log("grpc - stream initialized")
try:
for response in self.client.GetResponse(self.create_getGetResponseRequests()):
if response.HasField("audio_response"):
log("gRPC - audio_response: {} {} {}".format(response.audio_response.audio_config, response.audio_response.text_data, response.audio_response.end_of_response))
log("gRPC - session_id: {}".format(response.session_id))
self.Parent.on_session_ID_received(response.session_id)
self.Parent.on_data_received(
response.audio_response.text_data,
response.audio_response.audio_data,
response.audio_response.audio_config.sample_rate_hertz,
response.audio_response.end_of_response)
elif response.HasField("action_response"):
log(f"gRPC - action_response: {response.action_response.action}")
self.Parent.on_actions_received(response.action_response.action)
elif response.HasField("user_query"):
log(f"gRPC - user_query: {response.user_query}")
self.Parent.on_transcription_received(response.user_query.text_data, response.user_query.is_final, response.user_query.end_of_response)
else:
log("Stream Message: {}".format(response))
time.sleep(0.1)
except Exception as e:
if 'response' in locals() and response is not None and response.HasField("audio_response"):
self.Parent.on_failure(f"gRPC - Exception caught in loop: {str(e)} - Stream Message: {response}")
else:
self.Parent.on_failure(f"gRPC - Exception caught in loop: {str(e)}")
traceback.print_exc()
return
self.Parent.on_finish()
def create_initial_GetResponseRequest(self)-> convai_service_msg.GetResponseRequest:
action_config = convai_service_msg.ActionConfig(
classification = 'singlestep',
context_level = 1
)
action_config.actions[:] = self.Parent.parse_actions()
action_config.objects.append(
convai_service_msg.ActionConfig.Object(
name = "dummy",
description = "A dummy object."
)
)
log(f"gRPC - actions parsed: {action_config.actions}")
action_config.characters.append(
convai_service_msg.ActionConfig.Character(
name = "User",
bio = "Person playing the game and asking questions."
)
)
get_response_config = convai_service_msg.GetResponseRequest.GetResponseConfig(
character_id = self.Parent.CharID_input_UI.model.get_value_as_string(),
api_key = self.Parent.APIKey_input_UI.model.get_value_as_string(),
audio_config = convai_service_msg.AudioConfig(
sample_rate_hertz = RATE
),
action_config = action_config
)
if self.Parent.SessionID and self.Parent.SessionID != "":
get_response_config.session_id = self.Parent.SessionID
return convai_service_msg.GetResponseRequest(get_response_config = get_response_config)
def create_getGetResponseRequests(self)-> Generator[convai_service_msg.GetResponseRequest, None, None]:
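# Streaming protocol, as implemented below: the first request carries the
# configuration (API key, character ID, audio and action config); every
# subsequent request streams one chunk of microphone audio until the final
# (empty) write signals the end of the user's turn.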
req = self.create_initial_GetResponseRequest()
yield req
# for i in range(0, 10):
while 1:
IsThisTheFinalWrite = False
GetResponseData = None
if (0): # check if this is a text request
pass
else:
data, IsThisTheFinalWrite = self.consume_from_audio_buffer()
if len(data) == 0 and IsThisTheFinalWrite == False:
time.sleep(0.05)
continue
# Load the audio data to the request
self.NumberOfAudioBytesSent += len(data)
# if len(data):
# log(f"len(data) = {len(data)}")
GetResponseData = convai_service_msg.GetResponseRequest.GetResponseData(audio_data = data)
# Prepare the request
req = convai_service_msg.GetResponseRequest(get_response_data = GetResponseData)
yield req
if IsThisTheFinalWrite:
log(f"gRPC - Done Writing - {self.NumberOfAudioBytesSent} audio bytes sent")
break
time.sleep(0.1)
def write_audio_data_to_send(self, Data: bytes, LastWrite: bool):
self.AudioBuffer.append(Data)
if LastWrite:
self.LastWriteReceived = True
log(f"gRPC LastWriteReceived")
# if self.InformOnDataReceived:
# # Inform of new data to send
# self._write_task = asyncio.ensure_future(self.write_stream())
# # Reset
# self.InformOnDataReceived = False
def finish_writing(self):
self.write_audio_data_to_send(bytes(), True)
def consume_from_audio_buffer(self):
Length = len(self.AudioBuffer)
IsThisTheFinalWrite = False
data = bytes()
if Length:
data = self.AudioBuffer.pop()
# self.AudioBuffer = bytes()
if self.LastWriteReceived and Length == 0:
IsThisTheFinalWrite = True
else:
IsThisTheFinalWrite = False
if IsThisTheFinalWrite:
log(f"gRPC Consuming last mic write")
return data, IsThisTheFinalWrite
def __del__(self):
self.Parent = None
# if self._main_task:
# self._main_task.cancel()
# if self._write_task:
# self._write_task.cancel()
# if self._read_task:
# self._read_task.cancel()
# if self.call:
# self.call.cancel()
log("ConvaiGRPCGetResponseProxy Destructor")
Conv-AI/ov_extension/exts/convai/convai/convai_audio_player.py

# from .extension import ConvaiExtension, log
# from test import ConvaiExtension, log
import pyaudio
from pydub import AudioSegment
import io
class ConvaiAudioPlayer:
def __init__(self, start_taking_callback, stop_talking_callback):
self.start_talking_callback = start_taking_callback
self.stop_talking_callback = stop_talking_callback
self.AudioSegment = None
self.pa = pyaudio.PyAudio()
self.pa_stream = None
self.IsPlaying = False
def append_to_stream(self, data: bytes):
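# Each incoming chunk is expected to be WAV-encoded; the short fade-in/out
# (100 ms) smooths the transitions between consecutive chunks before they
# are appended to the playback buffer.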
segment = AudioSegment.from_wav(io.BytesIO(data)).fade_in(100).fade_out(100)
if self.AudioSegment is None:
self.AudioSegment = segment
else:
self.AudioSegment._data += segment._data
self.play()
def play(self):
if self.IsPlaying:
return
print("ConvaiAudioPlayer - Started playing")
self.start_talking_callback()
self.pa_stream = self.pa.open(
format=pyaudio.get_format_from_width(self.AudioSegment.sample_width),
channels=self.AudioSegment.channels,
rate=self.AudioSegment.frame_rate,
output=True,
stream_callback=self.stream_callback
)
self.IsPlaying = True
def pause(self):
'''
Pause playing
'''
self.IsPlaying = False
def stop(self):
'''
Pause playing and clear audio
'''
self.pause()
self.AudioSegment = None
def stream_callback(self, in_data, frame_count, time_info, status_flags):
if not self.IsPlaying:
frames = bytes()
else:
frames = self.consume_frames(frame_count)
if self.AudioSegment and len(frames) < frame_count*self.AudioSegment.frame_width:
print("ConvaiAudioPlayer - Stopped playing")
self.stop_talking_callback()
self.IsPlaying = False
return frames, pyaudio.paComplete
else:
return frames, pyaudio.paContinue
def consume_frames(self, count: int):
if self.AudioSegment is None:
return bytes()
FrameEnd = self.AudioSegment.frame_width*count
if FrameEnd > len(self.AudioSegment._data):
return bytes()
FramesToReturn = self.AudioSegment._data[0:FrameEnd]
if FrameEnd == len(self.AudioSegment._data):
self.AudioSegment._data = bytes()
else:
self.AudioSegment._data = self.AudioSegment._data[FrameEnd:]
# print("self.AudioSegment._data = self.AudioSegment._data[FrameEnd:]")
return FramesToReturn
if __name__ == '__main__':
import time
import pyaudio
import grpc
from rpc import service_pb2 as convai_service_msg
from rpc import service_pb2_grpc as convai_service
from typing import Generator
import io
from pydub import AudioSegment
import configparser
CHUNK = 1024
FORMAT = pyaudio.paInt16
CHANNELS = 1
RATE = 16000
RECORD_SECONDS = 3
p = pyaudio.PyAudio()
stream = p.open(format=FORMAT,
channels=CHANNELS,
rate=RATE,
input=True,
frames_per_buffer=CHUNK)
audio_player = ConvaiAudioPlayer(lambda: None, lambda: None)  # the constructor expects start/stop talking callbacks
def start_mic():
global stream
stream = p.open(format=FORMAT,  # use the PyAudio instance created above
channels=CHANNELS,
rate=RATE,
input=True,
frames_per_buffer=CHUNK)
print("start_mic - Started Recording")
def stop_mic():
global stream
if stream:
stream.stop_stream()
stream.close()
else:
print("stop_mic - could not close mic stream since it is None")
return
print("stop_mic - Stopped Recording")
def getGetResponseRequests(api_key: str, character_id: str, session_id: str = "") -> Generator[convai_service_msg.GetResponseRequest, None, None]:
action_config = convai_service_msg.ActionConfig(
classification = 'multistep',
context_level = 1
)
action_config.actions[:] = ["fetch", "jump", "dance", "swim"]
action_config.objects.append(
convai_service_msg.ActionConfig.Object(
name = "ball",
description = "A round object that can bounce around."
)
)
action_config.objects.append(
convai_service_msg.ActionConfig.Object(
name = "water",
description = "Liquid found in oceans, seas and rivers that you can swim in. You can also drink it."
)
)
action_config.characters.append(
convai_service_msg.ActionConfig.Character(
name = "User",
bio = "Person playing the game and asking questions."
)
)
action_config.characters.append(
convai_service_msg.ActionConfig.Character(
name = "Learno",
bio = "A medieval farmer from a small village."
)
)
get_response_config = convai_service_msg.GetResponseRequest.GetResponseConfig(
character_id = character_id,
api_key = api_key,
audio_config = convai_service_msg.AudioConfig(
sample_rate_hertz = 16000
),
action_config = action_config
)
# session_id = "f50b7bf00ad50f5c2c22065965948c16"
if session_id != "":
get_response_config.session_id = session_id
yield convai_service_msg.GetResponseRequest(
get_response_config = get_response_config
)
for i in range(0, int(RATE / CHUNK * RECORD_SECONDS)):
data = stream.read(CHUNK)
yield convai_service_msg.GetResponseRequest(
get_response_data = convai_service_msg.GetResponseRequest.GetResponseData(
audio_data = data
)
)
stream.stop_stream()
stream.close()
print("* recording stopped")
config = configparser.ConfigParser()
config.read("exts\convai\convai\convai.env")
api_key = config.get("CONVAI", "API_KEY")
character_id = config.get("CONVAI", "CHARACTER_ID")
channel_address = config.get("CONVAI", "CHANNEL")
channel = grpc.secure_channel(channel_address, grpc.ssl_channel_credentials())
client = convai_service.ConvaiServiceStub(channel)
for response in client.GetResponse(getGetResponseRequests(api_key, character_id)):
if response.HasField("audio_response"):
print("Stream Message: {} {} {}".format(response.session_id, response.audio_response.audio_config, response.audio_response.text_data))
audio_player.append_to_stream(response.audio_response.audio_data)
else:
print("Stream Message: {}".format(response))
p.terminate()
# start_mic()
time.sleep(10)
# while 1:
# audio_player = ConvaiAudioPlayer(None)
# # data = stream.read(CHUNK)
# # _, data = scipy.io.wavfile.read("F:/Work/Convai/Tests/Welcome.wav")
# f = open("F:/Work/Convai/Tests/Welcome.wav", "rb")
# data = f.read()
# print(type(data))
# audio_player.append_to_stream(data)
# time.sleep(0.2)
# break
# # stop_mic()
# time.sleep(2)
# with keyboard.Listener(on_press=on_press,on_release=on_release):
# while(1):
# time.sleep(0.1)
# continue
# print("running") | 7,714 | Python | 32.986784 | 150 | 0.577651 |
Conv-AI/ov_extension/exts/convai/convai/rpc/service_pb2_grpc.py | # Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
"""Client and server classes corresponding to protobuf-defined services."""
import grpc
from . import service_pb2 as service__pb2
class ConvaiServiceStub(object):
"""Missing associated documentation comment in .proto file."""
def __init__(self, channel):
"""Constructor.
Args:
channel: A grpc.Channel.
"""
self.Hello = channel.unary_unary(
'/service.ConvaiService/Hello',
request_serializer=service__pb2.HelloRequest.SerializeToString,
response_deserializer=service__pb2.HelloResponse.FromString,
)
self.HelloStream = channel.stream_stream(
'/service.ConvaiService/HelloStream',
request_serializer=service__pb2.HelloRequest.SerializeToString,
response_deserializer=service__pb2.HelloResponse.FromString,
)
self.SpeechToText = channel.stream_stream(
'/service.ConvaiService/SpeechToText',
request_serializer=service__pb2.STTRequest.SerializeToString,
response_deserializer=service__pb2.STTResponse.FromString,
)
self.GetResponse = channel.stream_stream(
'/service.ConvaiService/GetResponse',
request_serializer=service__pb2.GetResponseRequest.SerializeToString,
response_deserializer=service__pb2.GetResponseResponse.FromString,
)
self.GetResponseSingle = channel.unary_stream(
'/service.ConvaiService/GetResponseSingle',
request_serializer=service__pb2.GetResponseRequestSingle.SerializeToString,
response_deserializer=service__pb2.GetResponseResponse.FromString,
)
class ConvaiServiceServicer(object):
"""Missing associated documentation comment in .proto file."""
def Hello(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def HelloStream(self, request_iterator, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def SpeechToText(self, request_iterator, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetResponse(self, request_iterator, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def GetResponseSingle(self, request, context):
"""Missing associated documentation comment in .proto file."""
context.set_code(grpc.StatusCode.UNIMPLEMENTED)
context.set_details('Method not implemented!')
raise NotImplementedError('Method not implemented!')
def add_ConvaiServiceServicer_to_server(servicer, server):
rpc_method_handlers = {
'Hello': grpc.unary_unary_rpc_method_handler(
servicer.Hello,
request_deserializer=service__pb2.HelloRequest.FromString,
response_serializer=service__pb2.HelloResponse.SerializeToString,
),
'HelloStream': grpc.stream_stream_rpc_method_handler(
servicer.HelloStream,
request_deserializer=service__pb2.HelloRequest.FromString,
response_serializer=service__pb2.HelloResponse.SerializeToString,
),
'SpeechToText': grpc.stream_stream_rpc_method_handler(
servicer.SpeechToText,
request_deserializer=service__pb2.STTRequest.FromString,
response_serializer=service__pb2.STTResponse.SerializeToString,
),
'GetResponse': grpc.stream_stream_rpc_method_handler(
servicer.GetResponse,
request_deserializer=service__pb2.GetResponseRequest.FromString,
response_serializer=service__pb2.GetResponseResponse.SerializeToString,
),
'GetResponseSingle': grpc.unary_stream_rpc_method_handler(
servicer.GetResponseSingle,
request_deserializer=service__pb2.GetResponseRequestSingle.FromString,
response_serializer=service__pb2.GetResponseResponse.SerializeToString,
),
}
generic_handler = grpc.method_handlers_generic_handler(
'service.ConvaiService', rpc_method_handlers)
server.add_generic_rpc_handlers((generic_handler,))
# This class is part of an EXPERIMENTAL API.
class ConvaiService(object):
"""Missing associated documentation comment in .proto file."""
@staticmethod
def Hello(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_unary(request, target, '/service.ConvaiService/Hello',
service__pb2.HelloRequest.SerializeToString,
service__pb2.HelloResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def HelloStream(request_iterator,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.stream_stream(request_iterator, target, '/service.ConvaiService/HelloStream',
service__pb2.HelloRequest.SerializeToString,
service__pb2.HelloResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def SpeechToText(request_iterator,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.stream_stream(request_iterator, target, '/service.ConvaiService/SpeechToText',
service__pb2.STTRequest.SerializeToString,
service__pb2.STTResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def GetResponse(request_iterator,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.stream_stream(request_iterator, target, '/service.ConvaiService/GetResponse',
service__pb2.GetResponseRequest.SerializeToString,
service__pb2.GetResponseResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
@staticmethod
def GetResponseSingle(request,
target,
options=(),
channel_credentials=None,
call_credentials=None,
insecure=False,
compression=None,
wait_for_ready=None,
timeout=None,
metadata=None):
return grpc.experimental.unary_stream(request, target, '/service.ConvaiService/GetResponseSingle',
service__pb2.GetResponseRequestSingle.SerializeToString,
service__pb2.GetResponseResponse.FromString,
options, channel_credentials,
insecure, call_credentials, compression, wait_for_ready, timeout, metadata)
| 8,631 | Python | 42.376884 | 111 | 0.636543 |
Conv-AI/ov_extension/exts/convai/config/extension.toml | [python.pipapi]
# Commands passed to pip install before the extension gets enabled. Can also contain flags, like `--upgrade`, `--no-index`, etc.
# Refer to: https://pip.pypa.io/en/stable/reference/requirements-file-format/
requirements = [
"scipy",
"wavio",
"sounddevice",
"requests",
"googleapis-common-protos",
"grpcio==1.51.1",
"grpcio-tools==1.51.1",
"protobuf==4.21.10",
"PyAudio==0.2.12",
"pydub==0.25.1"
]
# Allow going to online index. Required to be set to true for pip install call.
use_online_index = true
# Ignore import check for modules.
ignore_import_check = false
[package]
# Semantic Versioning is used: https://semver.org/
version = "1.0.3"
# The title and description fields are primarily for displaying extension info in UI
title = "Convai"
description = "Integrate Convai API with Omniverse."
icon = "data/icon.png"
preview_image = "data/preview.png"
# Path (relative to the root) or content of readme markdown file for UI.
readme = "docs/README.md"
feature = true
# URL of the extension source repository.
repository = "https://github.com/Conv-AI/ov_extension"
# One of categories for UI.
category = "Convai"
# Keywords for the extension
keywords = ["kit", "convai", "chatgpt", "chatbot"]
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
"omni.anim.graph.bundle" = {}
# Main python module this extension provides, it will be publicly available as "import convai".
[[python.module]]
name = "convai"
| 1,493 | TOML | 24.322033 | 125 | 0.699263 |
Steigner/Isaac-ur_rtde/isaac_rtde.py | # MIT License
# Copyright (c) 2023 Fravebot
# Permission is hereby granted, free of charge, to any person obtaining a copy
# of this software and associated documentation files (the "Software"), to deal
# in the Software without restriction, including without limitation the rights
# to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
# copies of the Software, and to permit persons to whom the Software is
# furnished to do so, subject to the following conditions:
# The above copyright notice and this permission notice shall be included in all
# copies or substantial portions of the Software.
# THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
# IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
# FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
# AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
# LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
# OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
# SOFTWARE.
# Author: Martin Juříček
# Isaac Sim app library
from omni.isaac.kit import SimulationApp
simulation_app = SimulationApp({"headless": False})
# Isaac Sim extensions + core libraries
from omni.isaac.motion_generation.lula import RmpFlow
from omni.isaac.motion_generation import ArticulationMotionPolicy
from omni.isaac.core.robots import Robot
from omni.isaac.core.objects import cuboid
from omni.isaac.core import World
from omni.isaac.core.utils.stage import add_reference_to_stage
from omni.isaac.core.utils.nucleus import get_assets_root_path
from omni.isaac.motion_generation.interface_config_loader import (
load_supported_motion_policy_config,
)
# ur rtde communication
import rtde_control
import rtde_receive
import numpy as np
import argparse
import sys
parser = argparse.ArgumentParser()
parser.add_argument(
"--robot-ip",
type=str,
default="127.0.0.1",
help="IP adress of robot Real world UR Polyscope or VM UR Polyscope",
)
arg = parser.parse_args()
# set up paths and prims
robot_name = "UR5e"
prim_path = "/UR5e"
usd_path = get_assets_root_path() + "/Isaac/Robots/UniversalRobots/ur5e/ur5e.usd"
# add the robot USD reference to the stage in Isaac
add_reference_to_stage(usd_path=usd_path, prim_path=prim_path)
# add world
my_world = World(stage_units_in_meters=1.0)
my_world.scene.add_default_ground_plane()
# add robot to world
robot = my_world.scene.add(Robot(prim_path=prim_path, name=robot_name))
# The load_supported_motion_policy_config() function is currently the simplest way to load supported robots.
# In the future, Isaac Sim will provide a centralized registry of robots with Lula robot description files
# and RMP configuration files stored alongside the robot USD.
rmp_config = load_supported_motion_policy_config(robot_name, "RMPflow")
# Initialize an RmpFlow object and set up
rmpflow = RmpFlow(**rmp_config)
physics_dt = 1.0/60
articulation_rmpflow = ArticulationMotionPolicy(robot, rmpflow, physics_dt)
articulation_controller = robot.get_articulation_controller()
# Make a target to follow
target_cube = cuboid.VisualCuboid(
"/World/target", position=np.array([0.5, 0, 0.5]), color=np.array([1.0, 0, 0]), size=0.1, scale=np.array([0.5,0.5,0.5])
)
# Make an obstacle to avoid
ground = cuboid.VisualCuboid(
"/World/ground", position=np.array([0.0, 0, -0.0525]), color=np.array([0, 1.0, 0]), size=0.1, scale=np.array([40,40,1])
)
rmpflow.add_obstacle(ground)
# prereset world
my_world.reset()
# IP address of the real-world UR Polyscope robot or VM UR Polyscope
try:
rtde_r = rtde_receive.RTDEReceiveInterface(arg.robot_ip)
rtde_c = rtde_control.RTDEControlInterface(arg.robot_ip)
robot.set_joint_positions(np.array(rtde_r.getActualQ()))
except Exception:
print("[ERROR] Robot is not connected")
# close isaac sim
simulation_app.close()
sys.exit()
while simulation_app.is_running():
# on step render
my_world.step(render=True)
if my_world.is_playing():
# first frame -> reset world
if my_world.current_time_step_index == 0:
my_world.reset()
# set target to RMP Flow
rmpflow.set_end_effector_target(
target_position=target_cube.get_world_pose()[0], target_orientation=target_cube.get_world_pose()[1]
)
# Parameters
velocity = 0.1
acceleration = 0.1
dt = 1.0/500 # 2ms
lookahead_time = 0.1
gain = 300
        # joint_q = current joint positions of the simulated robot
joint_q = robot.get_joint_positions()
# time start period
t_start = rtde_c.initPeriod()
# run servoJ
rtde_c.servoJ(joint_q, velocity, acceleration, dt, lookahead_time, gain)
rtde_c.waitPeriod(t_start)
# Query the current obstacle position
rmpflow.update_world()
actions = articulation_rmpflow.get_next_articulation_action()
articulation_controller.apply_action(actions)
# get actual q from robot and update isaac model
robot.set_joint_positions(np.array(rtde_r.getActualQ()))
# rtde control stop script and disconnect
rtde_c.servoStop()
rtde_c.stopScript()
rtde_r.disconnect()
# close isaac sim
simulation_app.close() | 5,292 | Python | 33.148387 | 123 | 0.717876 |
Steigner/Isaac-ur_rtde/README.md | # Nvidia Isaac Universal Robots RTDE communication
## Introduction
This example was created in collaboration with the [Fravebot](https://www.fravebot.com/) company. It is a simple demonstration of using Nvidia's Isaac Sim software to control a UR5e collaborative robot from Universal Robots.
```
Software
------------------------------------
| Nvidia Isaac Sim version 2022.2.0
| Isaac Python version 3.7
| - ur-rtde 1.5.5
```
1) [Nvidia Isaac Requirements](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/requirements.html)
2) [Nvidia Omniverse Isaac Installation](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/install_workstation.html)
3) Install [ur_rtde](https://gitlab.com/sdurobotics/ur_rtde) (check the docs for Windows/Linux). Then install the Python bindings:
```console
user@user-pc:~/.../isaac_sim-2022.2.0$ ./python.sh -m pip install ur_rtde
```
## How-to-Start
1) Go to workspace Nvidia Omniverse Isaac
2) Clone repository
3) Run VM Polyscope or Real Robot.
4) Run the command below (`--robot-ip` is the IP address of the VM or the real robot):
```console
user@user-pc:~/.../isaac_sim-2022.2.0$ ./python.sh test_fravebot/isaac_rtde.py --robot-ip 192.168.200.135
```
## Video!
<p align="center">
<a href="https://www.youtube.com/watch?v=L3BNDRvnJjo"><img src="https://upload.wikimedia.org/wikipedia/commons/0/09/YouTube_full-color_icon_%282017%29.svg" alt="Nvidia Isaac Sim Universal Robots UR5e RTDE"/></a>
</p>
## :information_source: Contacts
:mailbox: juricek@fravebot.com
| 1,502 | Markdown | 33.15909 | 230 | 0.725033 |
HC2ER/OmniverseExtension-hnadi.tools.exploded_view/README.md | ## Adding This Extension
![Folders](https://github.com/HC2ER/omniverse.tools.exploded_view/blob/master/docs/pics/ADD1.png)
![Folders](https://github.com/HC2ER/omniverse.tools.exploded_view/blob/master/docs/pics/ADD2.png)
1. Download the package.
2. Go to the local directory where your Omniverse apps (Code or Create) are installed.
3. Put the whole folder into the "exts" or "extscache" folder, where other extensions are installed. Either folder works.
## Using This Extension
An exploded view is very useful for showing the details of products in many fields, such as architecture and mechanical engineering.
This extension provides an easy and reliable way to make exploded views in Omniverse.
### Step1: Match the formats
![Single Prim Format](https://github.com/HC2ER/omniverse.tools.exploded_view/blob/master/docs/pics/FORMAT1.png)
![Parent Group Format](https://github.com/HC2ER/omniverse.tools.exploded_view/blob/master/docs/pics/FORMAT2.png)
* Because the core algorithm reads and manipulates the xformOp:translate and xformOp:translate:pivot attributes of prims,
every single item needs a proper pivot coordinate near its geometric center before selection,
as does the parent group if you want to select it directly (see the sketch below).
If the formats do not match, adjust the model in your DCC tool, or edit the properties directly if it is already imported into Omniverse.
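A minimal sketch (run in Omniverse's Script Editor; the prim path is hypothetical) that reads the two attributes the extension relies on:
```python
import omni.usd

stage = omni.usd.get_context().get_stage()
prim = stage.GetPrimAtPath("/World/Engine/Piston_01")  # hypothetical path; replace with your prim

# Both attributes should exist, and the pivot should sit near the item's geometric center.
print("translate:", prim.GetAttribute("xformOp:translate").Get())
print("pivot:", prim.GetAttribute("xformOp:translate:pivot").Get())
```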
### Step2: Create the Exploded Model
![Main Functions](https://github.com/HC2ER/omniverse.tools.exploded_view/blob/master/docs/pics/MAIN1.png)
1. Select a group or all items at once and click the "Select Prims" button.
2. Change the X, Y, and Z ratio to control the explosion distance in different directions.
3. Change the coordinates of Pivot to control the explosion centre.
* All items in the Exploded_Model update dynamically as the X, Y, Z ratios and the Pivot change; the displacement rule is sketched below.
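Under the hood, each item is offset away from the explosion centre in proportion to the per-axis ratio. A minimal sketch of that displacement rule (it mirrors the logic in exploded_view.py; the function name is illustrative):
```python
from pxr import Gf

def exploded_position(translate: Gf.Vec3d, pivot: Gf.Vec3d, ratios: Gf.Vec3d) -> Gf.Vec3d:
    # Each axis moves by (distance from the pivot) * (axis ratio):
    # a ratio of 0 keeps the original layout; larger ratios spread items further apart.
    return Gf.Vec3d(
        translate[0] + (translate[0] - pivot[0]) * ratios[0],
        translate[1] + (translate[1] - pivot[1]) * ratios[1],
        translate[2] + (translate[2] - pivot[2]) * ratios[2],
    )

# A ratio of (1, 1, 1) doubles every item's distance from the pivot:
print(exploded_position(Gf.Vec3d(2, 0, 1), Gf.Vec3d(0, 0, 0), Gf.Vec3d(1, 1, 1)))  # (4, 0, 2)
```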
### Step3: Edit the Exploded Model
![Bind](https://github.com/HC2ER/omniverse.tools.exploded_view/blob/master/docs/pics/EDIT1.png)
![Bind](https://github.com/HC2ER/omniverse.tools.exploded_view/blob/master/docs/pics/EDIT2.png)
1. Select prims and click the "Add" or "Remove" button to add or remove them into or from the existed Exploded_Model.
2. Select prims and click the "Bind" button if you want to keep the relative distances of selected items during explosion.
3. Select a group and click the "Unbind" button to release their relative distances during explosion.
* The Pivot of the Exploded_Model updates dynamically as prims are added or removed.
### Other Functions
![Axono](https://github.com/HC2ER/omniverse.tools.exploded_view/blob/master/docs/pics/OTHER1.png)
![Axono](https://github.com/HC2ER/omniverse.tools.exploded_view/blob/master/docs/pics/OTHER2.png)
1. Click the "Axono" button and adjust the distance of the camera if you want an axonometric view.
2. Click the "Eye" button to hide or show the ORIGINAL prims.
3. Click the "Reset" button to reset the X, Y, Z ratio and Pivot.
4. Click the "Clear" button to delete the Exploded_Model.
### Future Development
* Add some animation functions to show the process of explosion.
* Improve the running speed and manipulation logic.
If you have any other questions about this extension, feel free to send a message or contact 460855381@qq.com.
| 3,274 | Markdown | 61.980768 | 128 | 0.780696 |
HC2ER/OmniverseExtension-hnadi.tools.exploded_view/hnadi/tools/exploded_view/exploded_view.py | from .utils import get_name_from_path, get_pure_list
import omni.usd
import omni.kit.commands
from pxr import Usd, Sdf, Gf
# ----------------------------------------------------SELECT-------------------------------------------------------------
def select_explode_Xform(x_coord, y_coord, z_coord, x_ratio, y_ratio, z_ratio):
global original_path
global current_model_path
global item_count
global default_pivot
global item_list0
global translate_list0
# Get current stage and active prim_paths
stage = omni.usd.get_context().get_stage()
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
if not selected_prim_path:
return
# A: If the whole group is selected
if len(selected_prim_path) == 1:
# Test members
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
group_prim = stage.GetPrimAtPath(selected_prim_path[0])
children_prims_list = group_prim.GetChildren()
# If no members
if len(children_prims_list) <= 1:
print("Please select a valid group or all items at once!")
return
else:
original_path = selected_prim_path
item_count = len(children_prims_list)
omni.kit.commands.execute('CopyPrim',
path_from= selected_prim_path[0],
path_to='/World/Exploded_Model',
exclusive_select=False)
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
# print(selected_prim_path[-1])
# sub_group_prim = stage.GetPrimAtPath(selected_prim_path[0])
# sub_children_prims_list = group_prim.GetChildren()
# original_path = selected_prim_path
# item_count = len(selected_prim_path)
# for i in sub_children_prim_list:
# name = get_name_from_path(i)
# name_list.append(name)
omni.kit.commands.execute('SelectPrims',
old_selected_paths=selected_prim_path,
new_selected_paths=[selected_prim_path[-1]],
expand_in_stage=True)
# B: If multiple prims are selected separately
else:
original_path = selected_prim_path
item_count = len(selected_prim_path)
name_list = []
group_list = []
for i in selected_prim_path:
name = get_name_from_path(i)
name_list.append(name)
# Copy
omni.kit.commands.execute('CopyPrim',
path_from = i,
path_to ='/World/item_01',
exclusive_select=False)
if selected_prim_path.index(i)<= 8:
group_list.append(f'/World/item_0{selected_prim_path.index(i)+1}')
else:
group_list.append(f'/World/item_{selected_prim_path.index(i)+1}')
# Group
omni.kit.commands.execute('GroupPrims',
prim_paths=group_list)
# Change group name
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
omni.kit.commands.execute('MovePrims',
paths_to_move={selected_prim_path[0]: '/World/Exploded_Model'})
# obj = stage.GetObjectAtPath(selected_prim_path[0])
# default_pivot = obj.GetAttribute('xformOp:translate:pivot').Get()
# Change members names back
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
group_prim = stage.GetPrimAtPath(selected_prim_path[0])
children_prims_list = group_prim.GetChildren()
# Move members out of the group
for i in children_prims_list:
ind = children_prims_list.index(i)
if ind <= 8:
omni.kit.commands.execute('MovePrims',
paths_to_move={f"{selected_prim_path[0]}/item_0{ind+1}": f"{selected_prim_path[0]}/" + name_list[ind]})
else:
omni.kit.commands.execute('MovePrims',
paths_to_move={f"{selected_prim_path[0]}/item_{ind+1}": f"{selected_prim_path[0]}/" + name_list[ind]})
# Choose Exploded_Model and get current path,count,pivot
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
current_model_path = selected_prim_path
obj = stage.GetObjectAtPath(selected_prim_path[0])
default_pivot = obj.GetAttribute('xformOp:translate:pivot').Get()
print(obj)
print(default_pivot)
# Get origin translate_list
outer_group_prim = stage.GetPrimAtPath(current_model_path[0])
children_prims_list = outer_group_prim.GetChildren()
item_list0 = children_prims_list
translate_list0 = []
for i in children_prims_list:
sub_children_prim_list = i.GetChildren()
if len(sub_children_prim_list) <= 1:
translate = i.GetAttribute('xformOp:translate').Get()
else:
translate = i.GetAttribute('xformOp:translate:pivot').Get()
translate_list0.append(translate)
# print("--------------------------------------------")
# print(original_path)
# print(current_model_path)
# print(item_count)
# print(default_pivot)
# print(item_list0)
# print(translate_list0)
# print("--------------------------------------------")
# Create Explosion_Centre
omni.kit.commands.execute('CreatePrimWithDefaultXform',
prim_type='Xform',
attributes={})
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
world_pivot_path = selected_prim_path
omni.kit.commands.execute('ChangeProperty',
prop_path=Sdf.Path(f"{world_pivot_path[0]}" + ".xformOp:translate"),
value=Gf.Vec3d(default_pivot[0], default_pivot[1], default_pivot[2]),
prev=Gf.Vec3d(0, 0, 0))
obj1 = stage.GetObjectAtPath(selected_prim_path[0])
default_pivot = obj1.GetAttribute('xformOp:translate').Get()
omni.kit.commands.execute('MovePrims',
paths_to_move={f"{world_pivot_path[0]}": f"{current_model_path[0]}/" + "Explosion_Centre"})
# Set_default_button_value
x_coord.model.set_value(default_pivot[0])
y_coord.model.set_value(default_pivot[1])
z_coord.model.set_value(default_pivot[2])
x_ratio.model.set_value(0)
y_ratio.model.set_value(0)
z_ratio.model.set_value(0)
# End
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
return
#------------------------------------------------------REMOVE-----------------------------------------------------------
def remove_item(x_coord, y_coord, z_coord, x_ratio, y_ratio, z_ratio):
try:
global original_path
global current_model_path
global item_count
global default_pivot
global item_list0
global translate_list0
stage = omni.usd.get_context().get_stage()
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
if not selected_prim_path:
return
# Remove correct items
for i in selected_prim_path:
path = str(i)
name = get_name_from_path(path)
if path != f"{current_model_path[0]}/" + "Explosion_Centre":
if path != current_model_path[0] and path.find(current_model_path[0]) != -1:
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
obj0 = stage.GetObjectAtPath(selected_prim_path[0])
children_prims_list_z0 = obj0.GetChildren()
new_count = len(children_prims_list_z0)
if new_count == 2:
print("Cannot remove the only item in Exploded_Model!")
return
else:
                        # Reset the ratios to 0 to record the item positions
x = x_ratio.model.get_value_as_float()
y = y_ratio.model.get_value_as_float()
z = z_ratio.model.get_value_as_float()
# If 0,pass
if x == 0.0 and y== 0.0 and z == 0.0:
pass
# If not, set 0
else:
x_ratio.model.set_value(0.0)
y_ratio.model.set_value(0.0)
z_ratio.model.set_value(0.0)
omni.kit.commands.execute('MovePrim',
path_from = path,
path_to = "World/" + name)
else:
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
print("Please select a valid item to remove!")
else:
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
print("Cannot remove Explosion_Centre!")
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
obj = stage.GetObjectAtPath(selected_prim_path[0])
children_prims_list_z = obj.GetChildren()
new_count2 = len(children_prims_list_z) -1
# If any item is removed
if new_count2 < item_count:
# Refresh item_count
item_count = new_count2
obj = stage.GetObjectAtPath(selected_prim_path[0])
# Refresh item_list0 and translate_list0
outer_group_prim = stage.GetPrimAtPath(current_model_path[0])
children_prims_list0 = outer_group_prim.GetChildren()
children_prims_list = get_pure_list(children_prims_list0)
item_list0 = children_prims_list
translate_list0 = []
for i in children_prims_list:
sub_children_prim_list = i.GetChildren()
if len(sub_children_prim_list) <= 1:
translate = i.GetAttribute('xformOp:translate').Get()
else:
translate = i.GetAttribute('xformOp:translate:pivot').Get()
translate_list0.append(translate)
# Refresh pivot
group_list = []
name_list = []
for i in item_list0:
item_path = str(i.GetPath())
name = get_name_from_path(item_path)
name_list.append(name)
group_list.append(item_path)
# S1 group
omni.kit.commands.execute('GroupPrims',
prim_paths=group_list)
# change group name
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
omni.kit.commands.execute('MovePrims',
paths_to_move={selected_prim_path[0]: f"{current_model_path[0]}/Sub_Exploded_Model"})
# S2 Get new pivot by group
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f"{current_model_path[0]}/Sub_Exploded_Model"],
expand_in_stage=True)
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
obj = stage.GetObjectAtPath(selected_prim_path[0])
default_pivot = obj.GetAttribute('xformOp:translate:pivot').Get()
# S3 Move members out of the group
group_path = selected_prim_path
outer_group_prim = stage.GetPrimAtPath(group_path[0])
children_prims_list = outer_group_prim.GetChildren()
for i in children_prims_list:
index = children_prims_list.index(i)
name = name_list[index]
omni.kit.commands.execute('MovePrim',
path_from = f"{group_path[0]}/{name_list[index]}",
path_to = f"{current_model_path[0]}/{name_list[index]}")
# S4 Delete the group
omni.kit.commands.execute('DeletePrims',
paths=[f"{current_model_path[0]}/Sub_Exploded_Model"])
# S5 Change pivot
omni.kit.commands.execute('ChangeProperty',
prop_path=Sdf.Path(f"{current_model_path[0]}/Explosion_Centre" + ".xformOp:translate"),
value=Gf.Vec3d(default_pivot[0], default_pivot[1], default_pivot[2]),
prev=Gf.Vec3d(0, 0, 0))
# Restore_default_panel
x_coord.model.set_value(default_pivot[0])
y_coord.model.set_value(default_pivot[1])
z_coord.model.set_value(default_pivot[2])
if x == 0.0 and y== 0.0 and z == 0.0:
pass
else:
x_ratio.model.set_value(x)
y_ratio.model.set_value(y)
z_ratio.model.set_value(z)
# Select
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
return
        # If nothing was removed, return
else:
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
return
    except Exception:
        print("Create a model to explode first!")
        return
#---------------------------------------------------------ADD-----------------------------------------------------------
def add_item(x_coord, y_coord, z_coord, x_ratio, y_ratio, z_ratio):
try:
global original_path
global current_model_path
global item_count
global default_pivot
global item_list0
global translate_list0
stage = omni.usd.get_context().get_stage()
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
if not selected_prim_path:
return
# Add correct items
for i in selected_prim_path:
path = str(i)
name = get_name_from_path(path)
if path.find(current_model_path[0]) == -1:
                # Reset the ratios to 0 to record the item positions
x = x_ratio.model.get_value_as_float()
y = y_ratio.model.get_value_as_float()
z = z_ratio.model.get_value_as_float()
# If 0, pass
if x == 0.0 and y== 0.0 and z == 0.0:
pass
# If not, set 0
else:
x_ratio.model.set_value(0.0)
y_ratio.model.set_value(0.0)
z_ratio.model.set_value(0.0)
omni.kit.commands.execute('MovePrim',
path_from = path,
path_to = f"{current_model_path[0]}/" + name)
else:
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
print("The selected item already existed in the model!")
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
obj = stage.GetObjectAtPath(selected_prim_path[0])
children_prims_list_z = obj.GetChildren()
new_count2 = len(children_prims_list_z) - 1
# print(new_count2)
        # If any item was added
if new_count2 > item_count:
# Refresh item_count
item_count = new_count2
obj = stage.GetObjectAtPath(selected_prim_path[0])
# Refresh item_list0 and translate_list0
outer_group_prim = stage.GetPrimAtPath(current_model_path[0])
children_prims_list0 = outer_group_prim.GetChildren()
children_prims_list = get_pure_list(children_prims_list0)
item_list0 = children_prims_list
translate_list0 = []
for i in children_prims_list:
sub_children_prim_list = i.GetChildren()
if len(sub_children_prim_list) <= 1:
translate = i.GetAttribute('xformOp:translate').Get()
else:
translate = i.GetAttribute('xformOp:translate:pivot').Get()
translate_list0.append(translate)
# Refresh pivot
group_list = []
name_list = []
for i in item_list0:
item_path = str(i.GetPath())
name = get_name_from_path(item_path)
name_list.append(name)
group_list.append(item_path)
# S1 Group
omni.kit.commands.execute('GroupPrims',
prim_paths=group_list)
# Change group name
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
omni.kit.commands.execute('MovePrims',
paths_to_move={selected_prim_path[0]: f"{current_model_path[0]}/Sub_Exploded_Model"})
# S2 Get new pivot by group
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f"{current_model_path[0]}/Sub_Exploded_Model"],
expand_in_stage=True)
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
obj = stage.GetObjectAtPath(selected_prim_path[0])
default_pivot = obj.GetAttribute('xformOp:translate:pivot').Get()
print(default_pivot)
# S3 Move members out of the group
group_path = selected_prim_path
outer_group_prim = stage.GetPrimAtPath(group_path[0])
children_prims_list = outer_group_prim.GetChildren()
for i in children_prims_list:
index = children_prims_list.index(i)
name = name_list[index]
omni.kit.commands.execute('MovePrim',
path_from= f"{group_path[0]}/{name_list[index]}",
path_to=f"{current_model_path[0]}/{name_list[index]}")
# S4 Delete the group
omni.kit.commands.execute('DeletePrims',
paths=[f"{current_model_path[0]}/Sub_Exploded_Model"])
# S5 Change pivot
omni.kit.commands.execute('ChangeProperty',
prop_path=Sdf.Path(f"{current_model_path[0]}/Explosion_Centre" + ".xformOp:translate"),
value=Gf.Vec3d(default_pivot[0], default_pivot[1], default_pivot[2]),
prev=Gf.Vec3d(0, 0, 0))
# Restore_default_panel
x_coord.model.set_value(default_pivot[0])
y_coord.model.set_value(default_pivot[1])
z_coord.model.set_value(default_pivot[2])
if x == 0.0 and y== 0.0 and z == 0.0:
pass
else:
x_ratio.model.set_value(x)
y_ratio.model.set_value(y)
z_ratio.model.set_value(z)
# Select
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
return
        # If nothing was added, return
else:
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
return
    except Exception:
        print("Create a model to explode first!")
        return
#--------------------------------------------------------BIND-----------------------------------------------------------
def bind_item(x_coord, y_coord, z_coord, x_ratio, y_ratio, z_ratio):
try:
global original_path
global current_model_path
global item_count
global default_pivot
global item_list0
global translate_list0
stage = omni.usd.get_context().get_stage()
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
if not selected_prim_path:
return
if len(selected_prim_path) < 2:
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
print("Bind at least 2 items in the model!")
return
group_list = []
for i in selected_prim_path:
path = str(i)
name = get_name_from_path(path)
if path != f"{current_model_path[0]}/" + "Explosion_Centre":
if path.find(current_model_path[0]) != -1:
group_list.append(i)
else:
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
print("Cannot bind the Explosion_Centre!")
# Restore values to 0 to bind
x = x_ratio.model.get_value_as_float()
y = y_ratio.model.get_value_as_float()
z = z_ratio.model.get_value_as_float()
# If 0,pass
if x == 0.0 and y== 0.0 and z == 0.0:
pass
# If not,set 0
else:
x_ratio.model.set_value(0.0)
y_ratio.model.set_value(0.0)
z_ratio.model.set_value(0.0)
# Bind items
omni.kit.commands.execute('GroupPrims',
prim_paths=group_list)
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
group_path = selected_prim_path[0]
# print(group_path)
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
obj = stage.GetObjectAtPath(selected_prim_path[0])
children_prims_list_z = obj.GetChildren()
new_count2 = len(children_prims_list_z) - 1
# print(new_count2)
# If bind actions
if new_count2 < item_count:
# Refresh item_count
item_count = new_count2
obj = stage.GetObjectAtPath(selected_prim_path[0])
# Refresh item_list0 and translate_list0
outer_group_prim = stage.GetPrimAtPath(current_model_path[0])
children_prims_list0 = outer_group_prim.GetChildren()
children_prims_list = get_pure_list(children_prims_list0)
item_list0 = children_prims_list
translate_list0 = []
for i in children_prims_list:
sub_children_prim_list = i.GetChildren()
if len(sub_children_prim_list) <= 1:
translate = i.GetAttribute('xformOp:translate').Get()
else:
translate = i.GetAttribute('xformOp:translate:pivot').Get()
translate_list0.append(translate)
            # The pivot is unchanged by binding
# Restore_default_panel
x_coord.model.set_value(default_pivot[0])
y_coord.model.set_value(default_pivot[1])
z_coord.model.set_value(default_pivot[2])
if x == 0.0 and y== 0.0 and z == 0.0:
pass
else:
x_ratio.model.set_value(x)
y_ratio.model.set_value(y)
z_ratio.model.set_value(z)
# Select
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
return
        # If nothing was bound, return
else:
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
return
    except Exception:
        print("Create a model to explode first!")
        return
#------------------------------------------------------UNBIND-----------------------------------------------------------
def unbind_item(x_coord, y_coord, z_coord, x_ratio, y_ratio, z_ratio):
try:
global original_path
global current_model_path
global item_count
global default_pivot
global item_list0
global translate_list0
stage = omni.usd.get_context().get_stage()
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
if not selected_prim_path:
return
# Test valid group
for i in selected_prim_path:
path0 = str(i)
if path0 != current_model_path[0] and path0.find(current_model_path[0]) != -1:
outer_group_prim = stage.GetPrimAtPath(path0)
children_prims_list0 = outer_group_prim.GetChildren()
if len(children_prims_list0) < 1:
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
print("Please select a valid group!")
return
else:
if selected_prim_path.index(i) == 0:
# Restore values to 0 to unbind
x = x_ratio.model.get_value_as_float()
y = y_ratio.model.get_value_as_float()
z = z_ratio.model.get_value_as_float()
# If 0,pass
if x == 0.0 and y== 0.0 and z == 0.0:
pass
# If not,set 0
else:
x_ratio.model.set_value(0.0)
y_ratio.model.set_value(0.0)
z_ratio.model.set_value(0.0)
for j in children_prims_list0:
path = str(j.GetPath())
name = get_name_from_path(path)
omni.kit.commands.execute('MovePrims',
paths_to_move={path: f"{current_model_path[0]}/{name}"})
# Delete group
omni.kit.commands.execute('DeletePrims',
paths=[path0])
else:
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
print("Please unbind a valid group!")
return
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
obj = stage.GetObjectAtPath(selected_prim_path[0])
children_prims_list_z = obj.GetChildren()
new_count2 = len(children_prims_list_z) - 1
# print(new_count2)
# If unbind actions
if new_count2 >= item_count:
# Refresh item_count
item_count = new_count2
obj = stage.GetObjectAtPath(selected_prim_path[0])
# Refresh item_list0 and translate_list0
outer_group_prim = stage.GetPrimAtPath(current_model_path[0])
children_prims_list0 = outer_group_prim.GetChildren()
children_prims_list = get_pure_list(children_prims_list0)
item_list0 = children_prims_list
translate_list0 = []
for i in children_prims_list:
sub_children_prim_list = i.GetChildren()
if len(sub_children_prim_list) <= 1:
translate = i.GetAttribute('xformOp:translate').Get()
else:
translate = i.GetAttribute('xformOp:translate:pivot').Get()
translate_list0.append(translate)
            # The pivot is unchanged by unbinding
# Restore_default_panel
x_coord.model.set_value(default_pivot[0])
y_coord.model.set_value(default_pivot[1])
z_coord.model.set_value(default_pivot[2])
if x == 0.0 and y== 0.0 and z == 0.0:
pass
else:
x_ratio.model.set_value(x)
y_ratio.model.set_value(y)
z_ratio.model.set_value(z)
# Select
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
return
        # If nothing was unbound, return
else:
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
return
    except Exception:
        print("Create a model to explode first!")
        return
#------------------------------------------------------ONCHANGE----------------------------------------------------------
def on_pivot_change(x_coord, y_coord, z_coord, x_button, y_button, z_button, a:float):
try:
global original_path
global current_model_path
global item_count
global default_pivot
global item_list0
global translate_list0
stage = omni.usd.get_context().get_stage()
# Select Model
if not current_model_path:
print("Please select items to explode at first")
return
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
# Get x,y,z value
x_position = x_coord.model.get_value_as_float()
y_position = y_coord.model.get_value_as_float()
z_position = z_coord.model.get_value_as_float()
# print(x_position, y_position, z_position)
# Change pivot
omni.kit.commands.execute('TransformPrimSRT',
path=Sdf.Path(f"{current_model_path[0]}/" + "Explosion_Centre"),
new_translation=Gf.Vec3d(x_position, y_position, z_position),
old_translation=Gf.Vec3d(0, 0, 0))
# Get new pivot
obj2 = stage.GetObjectAtPath(f"{current_model_path[0]}/" + "Explosion_Centre")
pivot = obj2.GetAttribute('xformOp:translate').Get()
# Get x,y,z ratio
x_ratio = x_button.model.get_value_as_float()
y_ratio = y_button.model.get_value_as_float()
z_ratio = z_button.model.get_value_as_float()
# Calculate each item
group_prim = stage.GetPrimAtPath(current_model_path[0])
children_prims_list0 = group_prim.GetChildren()
children_prims_list = get_pure_list(children_prims_list0)
# Move each item
for item in children_prims_list:
sub_children_prim_list = item.GetChildren()
index = children_prims_list.index(item)
translate = translate_list0[index]
# print(translate)
item_path = item.GetPrimPath()
# print(item_path)
if len(sub_children_prim_list) <= 1:
# If single item
x_distance = (translate[0] - pivot[0]) * x_ratio
y_distance = (translate[1] - pivot[1]) * y_ratio
z_distance = (translate[2] - pivot[2]) * z_ratio
omni.kit.commands.execute('TransformPrimSRT',
path=Sdf.Path(item_path),
new_translation=Gf.Vec3d(translate[0] + x_distance, translate[1] + y_distance, translate[2] + z_distance),
old_translation=translate)
else:
# If group item
x_distance = (translate[0] - pivot[0]) * x_ratio
y_distance = (translate[1] - pivot[1]) * y_ratio
z_distance = (translate[2] - pivot[2]) * z_ratio
omni.kit.commands.execute('TransformPrimSRT',
path=Sdf.Path(item_path),
new_translation=Gf.Vec3d(x_distance, y_distance, z_distance),
old_translation=translate)
# End
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f"{current_model_path[0]}/" + "Explosion_Centre"],
expand_in_stage=True)
    except Exception:
        x_coord.model.set_value(0.0)
        y_coord.model.set_value(0.0)
        z_coord.model.set_value(0.0)
        x_button.model.set_value(0.0)
        y_button.model.set_value(0.0)
        z_button.model.set_value(0.0)
        print("Create a model to explode first!")
        return
def on_ratio_change(x_button, y_button, z_button, x_coord, y_coord, z_coord, a:float):
try:
global original_path
global current_model_path
global item_count
global default_pivot
global item_list0
global translate_list0
stage = omni.usd.get_context().get_stage()
# Select Model
if not current_model_path:
print("Please select items to explode at first")
return
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f'{current_model_path[0]}'],
expand_in_stage=True)
# Get x,y,z value
x_position = x_coord.model.get_value_as_float()
y_position = y_coord.model.get_value_as_float()
z_position = z_coord.model.get_value_as_float()
# print(x_position, y_position, z_position)
# Change pivot
omni.kit.commands.execute('TransformPrimSRT',
path=Sdf.Path(f"{current_model_path[0]}/" + "Explosion_Centre"),
new_translation=Gf.Vec3d(x_position, y_position, z_position),
old_translation=Gf.Vec3d(0, 0, 0))
# Get new pivot
obj = stage.GetObjectAtPath(f"{current_model_path[0]}/" + "Explosion_Centre")
pivot = obj.GetAttribute('xformOp:translate').Get()
# Get x,y,z ratio
x_ratio = x_button.model.get_value_as_float()
y_ratio = y_button.model.get_value_as_float()
z_ratio = z_button.model.get_value_as_float()
# Calculate each item
group_prim = stage.GetPrimAtPath(current_model_path[0])
children_prims_list0 = group_prim.GetChildren()
children_prims_list = get_pure_list(children_prims_list0)
# Move each item
for item in children_prims_list:
sub_children_prim_list = item.GetChildren()
index = children_prims_list.index(item)
translate = translate_list0[index]
item_path = item.GetPrimPath()
if len(sub_children_prim_list) <= 1:
# If single item
x_distance = (translate[0] - pivot[0]) * x_ratio
y_distance = (translate[1] - pivot[1]) * y_ratio
z_distance = (translate[2] - pivot[2]) * z_ratio
omni.kit.commands.execute('TransformPrimSRT',
path=Sdf.Path(item_path),
new_translation=Gf.Vec3d(translate[0] + x_distance, translate[1] + y_distance, translate[2] + z_distance),
old_translation=translate)
else:
# If group item
x_distance = (translate[0] - pivot[0]) * x_ratio
y_distance = (translate[1] - pivot[1]) * y_ratio
z_distance = (translate[2] - pivot[2]) * z_ratio
omni.kit.commands.execute('TransformPrimSRT',
path=Sdf.Path(item_path),
new_translation=Gf.Vec3d(x_distance, y_distance, z_distance),
old_translation=translate)
# End
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=[f"{current_model_path[0]}/" + "Explosion_Centre"],
expand_in_stage=True)
    except Exception:
        x_coord.model.set_value(0.0)
        y_coord.model.set_value(0.0)
        z_coord.model.set_value(0.0)
        x_button.model.set_value(0.0)
        y_button.model.set_value(0.0)
        z_button.model.set_value(0.0)
        print("Create a model to explode first!")
        return
#-------------------------------------------------SECONDARY FUNCTION------------------------------------------------------
def hide_unhide_original_model():
try:
global original_path
global item_count
stage = omni.usd.get_context().get_stage()
visible_count = 0
for i in original_path:
obj = stage.GetObjectAtPath(i)
visual_attr = obj.GetAttribute('visibility').Get()
# print(type(visual_attr))
# print(visual_attr)
if visual_attr == "inherited":
visible_count += 1
# print(original_path)
# print(visible_count)
# print(item_count)
        # Only one prim still visible: hide it
if visible_count == 1:
omni.kit.commands.execute('ChangeProperty',
prop_path=Sdf.Path(f"{original_path[0]}.visibility"),
value='invisible',
prev=None)
        # Some prims hidden: show all of them
elif visible_count < item_count:
for i in original_path:
omni.kit.commands.execute('ChangeProperty',
prop_path=Sdf.Path(f"{i}.visibility"),
value='inherited',
prev=None)
        # All prims visible: hide all of them
elif visible_count == item_count:
for i in original_path:
omni.kit.commands.execute('ChangeProperty',
prop_path=Sdf.Path(f"{i}.visibility"),
value='invisible',
prev=None)
return
    except Exception:
        print("Cannot find ORIGINAL prims to hide or show!")
        return
def set_camera():
stage = omni.usd.get_context().get_stage()
world = stage.GetObjectAtPath('/World')
children_refs = world.GetChildren()
for i in children_refs:
path = str(i)
if path.find('/World/Axonometric_View') != -1:
print("Axonometric camera already existed!")
omni.kit.commands.execute('SelectPrims',
old_selected_paths=[],
new_selected_paths=['/World/Axonometric_View'],
expand_in_stage=True)
return
omni.kit.commands.execute('DuplicateFromActiveViewportCameraCommand',
viewport_name='Viewport')
omni.kit.commands.execute('CreatePrim',
prim_path='/World/Camera',
prim_type='Camera')
selected_prim_path = omni.usd.get_context().get_selection().get_selected_prim_paths()
camera_path = selected_prim_path
omni.kit.commands.execute('MovePrims',
paths_to_move={camera_path[0]: '/World/Axonometric_View'})
omni.kit.commands.execute('MovePrim',
path_from=camera_path[0],
path_to='/World/Axonometric_View',
time_code=Usd.TimeCode.Default(),
keep_world_transform=True)
omni.kit.commands.execute('ChangeProperty',
prop_path=Sdf.Path('/World/Axonometric_View.focalLength'),
value=500.0,
prev=0)
return
def reset_model(x_coord, y_coord, z_coord, x_ratio, y_ratio, z_ratio):
try:
global default_pivot
x_coord.model.set_value(default_pivot[0])
y_coord.model.set_value(default_pivot[1])
z_coord.model.set_value(default_pivot[2])
x = x_ratio.model.get_value_as_float()
y = y_ratio.model.get_value_as_float()
z = z_ratio.model.get_value_as_float()
# If 0,pass
if x == 0.0 and y== 0.0 and z == 0.0:
pass
# If not, set 0
else:
x_ratio.model.set_value(0.0)
y_ratio.model.set_value(0.0)
z_ratio.model.set_value(0.0)
return
    except Exception:
        print("Create a model to explode first!")
        return
def clear(x_coord, y_coord, z_coord, x_ratio, y_ratio, z_ratio):
try:
global original_path
global current_model_path
global item_count
global default_pivot
global item_list0
global translate_list0
omni.kit.commands.execute('DeletePrims',
paths=[current_model_path[0]])
original_path = None
current_model_path = None
item_count = None
default_pivot = None
item_list0 = None
translate_list0 = None
x_coord.model.set_value(0.0)
y_coord.model.set_value(0.0)
z_coord.model.set_value(0.0)
x_ratio.model.set_value(0.0)
y_ratio.model.set_value(0.0)
z_ratio.model.set_value(0.0)
print("All data clear")
return
    except Exception:
        print("Create a model to explode first!")
        return
| 42,513 | Python | 37.094982 | 126 | 0.532002 |
HC2ER/OmniverseExtension-hnadi.tools.exploded_view/hnadi/tools/exploded_view/extension.py | import omni.ext
import omni.usd
from .exploded_view_ui import Cretae_UI_Framework
class Main_Entrance(omni.ext.IExt):
def on_startup(self, ext_id):
Cretae_UI_Framework(self)
    def on_shutdown(self):
        print("[hnadi.tools.exploded_view] shutdown")
        # Destroy the window only if the UI framework actually created one
        if getattr(self, "_window", None):
            self._window.destroy()
            self._window = None | 379 | Python | 26.142855 | 53 | 0.659631 |
HC2ER/OmniverseExtension-hnadi.tools.exploded_view/hnadi/tools/exploded_view/utils.py | def get_name_from_path(path:str):
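    # Return the last path component, e.g. "/World/item" -> "item"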
reverse_str = list(reversed(path))
num = len(reverse_str)-((reverse_str.index("/"))+1)
name = path[num+1:]
return name
def get_pure_list(list:list):
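    # Filter the Explosion_Centre helper prim out of a list of child prims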
new_list = []
for i in list:
full_path0 = i.GetPrimPath()
full_path = str(full_path0)
if get_name_from_path(full_path) != "Explosion_Centre":
new_list.append(i)
# print(new_list)
return new_list | 450 | Python | 29.066665 | 63 | 0.586667 |
HC2ER/OmniverseExtension-hnadi.tools.exploded_view/hnadi/tools/exploded_view/exploded_view_style.py | # Copyright (c) 2022, HNADIACE. All rights reserved.
__all__ = ["HNADI_window_style"]
from omni.ui import color as cl
from omni.ui import constant as fl
from omni.ui import url
import omni.kit.app
import omni.ui as ui
import pathlib
EXTENSION_FOLDER_PATH = pathlib.Path(
omni.kit.app.get_app().get_extension_manager().get_extension_path_by_module(__name__)
)
## Color presets ##
# Theme color
main_color = cl.hnadi_color = cl("#F5B81B")
# Primary text color
white = cl.hnadi_text_color = cl("#DADADA") # lightest shade
# Window
cl.window_label_bg = cl("#0F0F0F") # window title background color
cl.window_bg = cl("#252525") # window background color, 60-90% transparency (exact value undefined)
# Collapsible frame
cl.clloapsible_bg_label = cl("#252525")
# Buttons
cl.button_bg = cl("#252525") # default background color + 1px #939393 border
cl.button_bg_hover = cl("#98999C")
cl.button_bg_click = cl("#636363")
cl.button_label = cl("#939393") # default button label color
cl.button_label_hover = cl("#383838") # button label color on hover
cl.button_label_click = cl("#DADADA")
# Combo box
cl.combobox_bg = cl("#252525")
cl.combobox_label = cl("#939393")
cl.combobox_bg_hover = cl("#0F0F0F")
cl.combobox_label_hover = cl("#AFAFAF")
# Checkbox / revert button
cl.revert_arrow_enabled = cl("#AFAFAF") # enabled state
cl.revert_arrow_disabled = cl("#383838") # disabled state
cl.checkbox_hover = cl("#DADADA")
cl.checkbox_click = cl("#F5B81B")
# Border
border_color = cl.border = cl("#636363") # 1-2 px thickness
# Slider
cl.slider_fill = cl("#F5B81B") # slider fill color (theme color)
cl.slider_bg = cl("#252525")
cl.floatslider_sele = cl("#BB8E1A") # slider pressed effect
cl.slider_text_color = cl("#98999C")
# Revert button (overrides the checkbox values above)
cl.revert_arrow_enabled = cl("#F5B81B") # enabled state
cl.revert_arrow_disabled = cl("#383838") # disabled state
# Probably unused
cl.transparent = cl(0, 0, 0, 0)
# HC Color
black = cl("#252525")
white = cl("#FFFFFF")
cls_temperature_gradient = [cl("#fe0a00"), cl("#f4f467"), cl("#a8b9ea"), cl("#2c4fac"), cl("#274483"), cl("#1f334e")]
## Spacing presets ##
fl.window_attr_hspacing = 8 # horizontal spacing between labels and widgets (all)
fl.window_attr_spacing = 4 # vertical spacing
fl.group_spacing = 4 # spacing between groups
fl.spacing = 4
fl.border_radius = 4
fl.border_width = 1
## Font sizes ##
fl.window_title_font_size = 18
fl.collapsable_font_size = 16
fl.text_font_size = 14
## URLs ##
url.icon_achiview = f"{EXTENSION_FOLDER_PATH}/image/achi_view.png"
url.icon_achiview_click = f"{EXTENSION_FOLDER_PATH}/image/achi_view_click.png"
url.icon_bowlgenerator = f"{EXTENSION_FOLDER_PATH}/image/bowl_generator.png"
url.icon_bowlgenerator_click = f"{EXTENSION_FOLDER_PATH}/image/bowl_generator_click.png"
url.icon_buildingblock = f"{EXTENSION_FOLDER_PATH}/image/building_block.png"
url.icon_buildingblock_click = f"{EXTENSION_FOLDER_PATH}/image/building_blockc_click.png"
url.icon_draincurve = f"{EXTENSION_FOLDER_PATH}/image/drain_curve.png"
url.icon_draincurve_click = f"{EXTENSION_FOLDER_PATH}/image/drain_curve_click.png"
url.icon_explodedview = f"{EXTENSION_FOLDER_PATH}/image/exploded_view.png"
url.icon_explodedview_click = f"{EXTENSION_FOLDER_PATH}/image/exploded_view_click.png"
url.icon_isochronouscircle = f"{EXTENSION_FOLDER_PATH}/image/isochronouscircle.png"
url.icon_isochronouscircle_click = f"{EXTENSION_FOLDER_PATH}/image/isochronouscircle_click.png"
url.icon_light_studio = f"{EXTENSION_FOLDER_PATH}/image/light_studio.png"
url.icon_lightstudio_click = f"{EXTENSION_FOLDER_PATH}/image/light_studio_click.png"
url.icon_solarpanel = f"{EXTENSION_FOLDER_PATH}/image/solar_panel.png"
url.icon_solarpanel_click = f"{EXTENSION_FOLDER_PATH}/image/solar_panel_click.png"
url.closed_arrow_icon = f"{EXTENSION_FOLDER_PATH}/image/closed.svg"
url.radio_btn_on_icon = f"{EXTENSION_FOLDER_PATH}/image/Slice 3.png"
url.radio_btn_off_icon = f"{EXTENSION_FOLDER_PATH}/image/Slice 1.png"
url.radio_btn_hovered_icon = f"{EXTENSION_FOLDER_PATH}/image/Slice 2.png"
## Style selector notes ##
"""
"Button":{"border_width":0.5}        # 1 - style a widget type: "WidgetType":{}
"Button::B1":{XXXX}                  # 2 - style one instance of a widget type: "WidgetType::InstanceName":{}
"Button::B1:hovered/pressed":{XXXX}  # 3 - style one state of an instance: "WidgetType::InstanceName:State":{}
"Button.Label::B1":{}                # 4 - style one attribute of an instance: "WidgetType.AttributeName::InstanceName":{}
"""
HNADI_window_style = {
    # Attribute label: attribute_name
"Label::attribute_name": {
"alignment": ui.Alignment.RIGHT_CENTER,
"margin_height": fl.window_attr_spacing,
"margin_width": fl.window_attr_hspacing,
"color": cl.button_label,
},
"Label::attribute_name:hovered": {"color": cl.hnadi_text_color},
    # Collapsible header
    # collapsible header text: collapsable_name
"Label::collapsable_name": {
"alignment": ui.Alignment.LEFT_CENTER,
"color": cl.hnadi_text_color,
"font_size": fl.collapsable_font_size,
},
    # collapsible frame spacing: group
"CollapsableFrame::group": {"margin_height": fl.group_spacing},
    # HeaderLine (separator line)
"HeaderLine": {"color": cl(.5, .5, .5, .5)},
    # Slider
"Slider": {
"border_radius": fl.border_radius,
"color": cl.slider_text_color,
"background_color": cl.slider_bg,
"secondary_color": cl.slider_fill,
"secondary_selected_color": cl.floatslider_sele,
"draw_mode": ui.SliderDrawMode.HANDLE,
},
# FloatSlider attribute_float
"Slider::attribute_float": {"draw_mode": ui.SliderDrawMode.FILLED},
"Slider::attribute_float:hovered": {
"color": cl.slider_text_color,
"background_color": cl.slider_bg
},
"Slider::attribute_float:pressed": {"color": cl.slider_text_color},
# IntSlider attribute_int
"Slider::attribute_int": {
"secondary_color": cl.slider_fill,
"secondary_selected_color": cl.floatslider_sele,
},
"Slider::attribute_int:hovered": {"color": cl.slider_text_color},
"Slider::attribute_float:pressed": {"color": cl.slider_text_color},
    # buttons (tool_button)
"Button::tool_button": {
"background_color": cl.button_bg,
"border_width": fl.border_width,
"border_color": cl.border,
"border_radius": fl.border_radius,
},
"Button::tool_button:hovered": {"background_color": cl.button_bg_hover},
"Button::tool_button:pressed": {"background_color": cl.button_bg_click},
"Button::tool_button:checked": {"background_color": cl.button_bg_click},
"Button::tool_button:pressed": {"background_color": cl.slider_fill},
"Button.Label::tool_button:hovered": {"color": cl.button_label_hover},
"Button.Label::tool_button:pressed": {"color": white},
"Button.Label::tool_button": {"color": cl.button_label},
    # # image button (image_button)
# "Button::image_button": {
# "background_color": cl.transparent,
# "border_radius": fl.border_radius,
# "fill_policy": ui.FillPolicy.PRESERVE_ASPECT_FIT,
# },
# "Button.Image::image_button": {
# "image_url": url.icon_achiview,
# "alignment": ui.Alignment.CENTER_TOP,
# "border_radius": fl.border_radius,
# },
# "Button.Image::image_button:checked": {"image_url": url.icon_achiview_click},
# "Button::image_button:hovered": {"background_color": cl.button_bg_hover},
# "Button::image_button:pressed": {"background_color": cl.button_bg_click},
# "Button::image_button:checked": {"background_color": cl.imagebutton_bg_click},
# Field attribute_field
"Field": {
"background_color": cl.slider_bg,
"border_radius": fl.border_radius,
"border_color": cl.border,
"border_width": fl.border_width,
},
"Field::attribute_field": {
"corner_flag": ui.CornerFlag.RIGHT,
"font_size": fl.text_font_size,
},
"Field::attribute_field:hovered":{"background_color": cl.combobox_bg_hover},
"Field::attribute_field:pressed":{"background_color": cl.combobox_bg_hover},
# cl.slider_fill
    # # combo box
"Rectangle::box": {
"background_color": cl.slider_fill,
"border_radius": fl.border_radius,
"border_color": cl.slider_fill,
"border_width": 0,
"color": cl.combobox_label,
},
# "ComboBox::dropdown_menu":{
# "background_color": cl.combobox_bg,
# "secondary_color": 0x0,
# "font_size": fl.text_font_size,
# },
# "ComboBox::dropdown_menu:hovered":{
# "color": cl.combobox_label_hover,
# "background_color": cl.combobox_bg_hover,
# "secondary_color": cl.combobox_bg_hover,
# },
# "ComboBox::dropdown_menu:pressed":{
# "background_color": cl.combobox_bg_hover,
# "border_color": cl.border,
# },
# "Rectangle::combobox_icon_cover": {"background_color": cl.field_bg},
    # RadioButton
# "Button::radiobutton":{
# "background_color":cl.transparent,
# "image_url": url.radio_btn_off_icon,
# },
# "Button::radiobutton:pressed":{"image_url": url.radio_btn_on_icon},
# "Button::radiobutton:checked":{"image_url": url.radio_btn_on_icon},
    # images
# "Image::radio_on": {"image_url": url.radio_btn_on_icon},
# "Image::radio_off": {"image_url": url.radio_btn_off_icon},
# "Image::collapsable_opened": {"color": cl.example_window_text, "image_url": url.example_window_icon_opened},
# "Image::collapsable_closed": {"color": cl.example_window_text, "image_url": url.example_window_icon_closed},
# "Image::collapsable_closed": {
# "color": cl.collapsible_header_text,
# "image_url": url.closed_arrow_icon,
# },
# "Image::collapsable_closed:hovered": {
# "color": cl.collapsible_header_text_hover,
# "image_url": url.closed_arrow_icon,
# },
}
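# Usage sketch (mirroring how exploded_view_ui.py applies this dict): attach
# the style to the window's root container so the named widgets defined above
# take effect:
#
#   with self._window.frame:
#       with ui.VStack(style=HNADI_window_style):
#           ui.Label("X ratio", name="attribute_name")
#           ui.Button("Select Prims", name="tool_button")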
| 9,302 | Python | 35.482353 | 117 | 0.65459 |
HC2ER/OmniverseExtension-hnadi.tools.exploded_view/hnadi/tools/exploded_view/exploded_view_ui.py | from os import path
from data.image_path import image_path
from functools import partial
import omni.ui as ui
from .exploded_view import select_explode_Xform, on_ratio_change, on_pivot_change, remove_item, add_item, bind_item, unbind_item, hide_unhide_original_model, reset_model, clear, set_camera
from .exploded_view_style import HNADI_window_style, main_color, white, border_color
# Connect to Extension
class Cretae_UI_Framework(ui.Window):
def __init__(self, transformer) -> None:
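        # Note: `self` is rebound to the extension instance passed in, so every
        # widget handle created below is stored on that instance rather than on
        # this helper class (an unusual but apparently intentional pattern).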
self = transformer
spacer_distance = distance = 6
overall_width = 380
overall_height = 395
self._window = ui.Window("Exploded View", width=overall_width, height=overall_height)
with self._window.frame:
with ui.VStack(style=HNADI_window_style):
# Column1 Main Functions UI
with ui.HStack(height = 170):
# two big buttons
ui.Spacer(width=6)
with ui.VStack(width = 120):
ui.Spacer(height = distance - 1)
select_button = create_button_type1(name="Select Prims", tooltip="Select a group or all items at once to explode.", pic=image_path.select, height=102, spacing=-45)
ui.Spacer(height = 4)
                        camera_button = create_button_type1_1(name="Axono", tooltip="Set an axonometric camera.", pic=image_path.Axono, height=53, spacing=-20)
ui.Spacer(width = 10)
# four main control sliders
with ui.VStack():
ui.Spacer(height = distance+2)
x_button = create_floatfield_ui(label="X ratio", tooltip="Explosion distance ratio in X direction", max=100.0)
ui.Spacer(height = 2)
y_button = create_floatfield_ui(label="Y ratio", tooltip="Explosion distance ratio in Y direction", max=100.0)
ui.Spacer(height = 2)
z_button = create_floatfield_ui(label="Z ratio", tooltip="Explosion distance ratio in Z direction", max=100.0)
ui.Spacer(height = 2)
ui.Spacer(height = 4)
with ui.HStack():
ui.Label("Pivot", name="attribute_name", width = 40, height=25, tooltip="Coordinates of the Explosion_Centre")
ui.Spacer(width=10)
with ui.HStack():
x_coord = create_coord_ui(name="X", color=0xFF5555AA)
ui.Spacer(width=10)
y_coord = create_coord_ui(name="Y", color=0xFF76A371)
ui.Spacer(width=10)
z_coord = create_coord_ui(name="Z", color=0xFFA07D4F)
ui.Spacer(width=6)
# Column2 Edit Functions UI
with ui.CollapsableFrame("Edit Exploded Model", name="group", build_header_fn=_build_collapsable_header):
with ui.VStack():
with ui.HStack():
ui.Spacer(width=6)
add_button = create_button_type2(name="Add", tooltip="Add items into the Exploded_Model.", pic=image_path.add1, spacing=-85)
ui.Spacer(width=1)
bind_button = create_button_type2(name="Bind", tooltip="Bind items together to keep their relative distances during explosion.", pic=image_path.bind, spacing=-75)
ui.Spacer(width=6)
with ui.HStack():
ui.Spacer(width=6)
remove_button = create_button_type2(name="Remove", tooltip="Remove items from the Exploded_Model", pic=image_path.remove1, spacing=-85)
ui.Spacer(width=1)
unbind_button = create_button_type2(name="Unbind", tooltip="Unbind items.", pic=image_path.unbind, spacing=-75)
ui.Spacer(width=6)
ui.Spacer(height = 5)
ui.Spacer(height = 6)
# Column3 Other Functions UI
with ui.HStack():
ui.Spacer(width=6)
hideorshow_button = create_button_type3(tooltip="Hide or show the ORIGINAL prims.", pic=image_path.hide_show)
reset_button = create_button_type3(tooltip="Reset the Exploded_Model.", pic=image_path.reset)
clear_button = create_button_type3(tooltip="Delete the Exploded_Model and all data.", pic=image_path.clear)
ui.Spacer(width=6)
# Connect functions to button
select_button.set_clicked_fn(partial(select_explode_Xform, x_coord, y_coord, z_coord, x_button, y_button, z_button))
camera_button.set_clicked_fn(set_camera)
x_button.model.add_value_changed_fn(partial(on_ratio_change, x_button, y_button, z_button, x_coord, y_coord, z_coord))
y_button.model.add_value_changed_fn(partial(on_ratio_change, x_button, y_button, z_button, x_coord, y_coord, z_coord))
z_button.model.add_value_changed_fn(partial(on_ratio_change, x_button, y_button, z_button, x_coord, y_coord, z_coord))
x_coord.model.add_value_changed_fn(partial(on_pivot_change, x_coord, y_coord, z_coord, x_button, y_button, z_button))
y_coord.model.add_value_changed_fn(partial(on_pivot_change, x_coord, y_coord, z_coord, x_button, y_button, z_button))
z_coord.model.add_value_changed_fn(partial(on_pivot_change, x_coord, y_coord, z_coord, x_button, y_button, z_button))
add_button.set_clicked_fn(partial(add_item, x_coord, y_coord, z_coord, x_button, y_button, z_button))
remove_button.set_clicked_fn(partial(remove_item, x_coord, y_coord, z_coord, x_button, y_button, z_button))
bind_button.set_clicked_fn(partial(bind_item, x_coord, y_coord, z_coord, x_button, y_button, z_button))
unbind_button.set_clicked_fn(partial(unbind_item, x_coord, y_coord, z_coord, x_button, y_button, z_button))
hideorshow_button.set_clicked_fn(hide_unhide_original_model)
reset_button.set_clicked_fn(partial(reset_model, x_coord, y_coord, z_coord, x_button, y_button, z_button))
clear_button.set_clicked_fn(partial(clear, x_coord, y_coord, z_coord, x_button, y_button, z_button))
def _build_collapsable_header(collapsed, title):
"""Build a custom title of CollapsableFrame"""
with ui.VStack():
ui.Spacer(height=5)
with ui.HStack():
ui.Label(title, name="collapsable_name")
if collapsed:
image_name = "collapsable_opened"
else:
image_name = "collapsable_closed"
ui.Image(name=image_name, width=10, height=10)
ui.Spacer(height=5)
ui.Line(style_type_name_override="HeaderLine")
# UI button style
def create_coord_ui(color:int, name:str):
    with ui.ZStack(width=13, height=25):
        # Note: the per-axis `color` argument is currently unused; the axis
        # label background uses the shared main_color instead.
        ui.Rectangle(name="vector_label", width=15, style={"background_color":main_color, "border_radius":3})
        ui.Label(name, alignment=ui.Alignment.CENTER, style={"color":white})
    coord_button = ui.FloatDrag(min=-99999999.9, max=99999999.9)
return coord_button
def create_floatfield_ui(label:str, max:float, tooltip:str, min=0.0):
with ui.HStack():
ui.Label(label, name="attribute_name", width=40, height=25, tooltip=tooltip)
ui.Spacer(width=1.5)
button = ui.FloatField(min=min, max=max, height=25)
button.model.set_value(0.0)
return button
def create_button_type1(name, tooltip, pic, height, spacing=-45):
style = {
"Button":{"stack_direction":ui.Direction.TOP_TO_BOTTOM},
"Button.Label":{"alignment":ui.Alignment.CENTER_BOTTOM},
# "border_width": 0.1,
# "border_color": border_color,
"border_radius": 4,
"Button.Image":{# "color":0xffFFFFFF,
"image_url":pic,
"alignment":ui.Alignment.CENTER_BOTTOM,},
":hovered":{
"background_gradient_color":main_color,
"background_color":0X500066FF}}
button = ui.Button(name, height=height, width=120, tooltip=tooltip, style=style)
button.spacing = spacing
return button
def create_button_type1_1(name, tooltip, pic, height, spacing=-20):
style = {
"Button":{"stack_direction":ui.Direction.LEFT_TO_RIGHT},
"Button.Image":{# "color":0xffFFFFFF,
"image_url":pic,
"alignment":ui.Alignment.CENTER_BOTTOM,},
"Button.Label":{"alignment":ui.Alignment.CENTER},
"border_radius": 4,
":hovered":{
"background_gradient_color":main_color,
"background_color":0X500066FF}}
button = ui.Button(name, height=height, width=120, tooltip=tooltip, style=style)
button.spacing = spacing
return button
def create_button_type2(name, tooltip, pic, height=40, spacing=-75):
style={
"Button":{"stack_direction":ui.Direction.LEFT_TO_RIGHT},
"Button.Image":{# "color":0xffFFFFFF,
"image_url":pic,
"alignment":ui.Alignment.CENTER,},
"Button.Label":{"alignment":ui.Alignment.CENTER},
"background_color":0x10CCCCCC,
":hovered":{
"background_gradient_color":0X500066FF,
"background_color":main_color},
"Button:pressed:":{"background_color":0xff000000}}
button = ui.Button(name, height=height, tooltip=tooltip, style=style)
button.spacing = spacing
return button
def create_button_type3(tooltip, pic, height=50):
style = {
"Button":{"stack_direction":ui.Direction.TOP_TO_BOTTOM},
"Button.Image":{# "color":0xffFFFFFF,
"image_url":pic,
"alignment":ui.Alignment.CENTER,},
"Button.Label":{"alignment":ui.Alignment.CENTER},
"border_radius": 4,
":hovered":{
"background_gradient_color":main_color,
"background_color":0X500066FF}}
button = ui.Button("", height = height, style = style, tooltip = tooltip)
return button | 10,328 | Python | 47.492958 | 190 | 0.583947 |
HC2ER/OmniverseExtension-hnadi.tools.exploded_view/config/extension.toml | [package]
# Semantic Versioning is used: https://semver.org/
version = "1.1.0"
# The title and description fields are primarily for displaying extension info in UI
title = "exploded view"
description="This extension provides an easy and reliable way to create exploded view for selected prims."
# Path (relative to the root) or content of readme markdown file for UI.
icon = "data/title.png"
preview_image = "data/1.png"
readme = "docs/README.md"
changelog = "docs/CHANGELOG.md"
# URL of the extension source repository.
# repository = "https://github.com"
# One of categories for UI.
category = "Other"
# Keywords for the extension
keywords = ["kit", "Layout Authoring", "Omni.ui", "Exploded View", "HNADI", "HC"]
# Use omni.ui to build simple UI
[dependencies]
"omni.kit.uiapp" = {}
# Main python module this extension provides, it will be publicly available as "import hnadi.tools.exploded_view".
[[python.module]]
name = "hnadi.tools.exploded_view" | 962 | TOML | 31.099999 | 111 | 0.730769 |
HC2ER/OmniverseExtension-hnadi.tools.exploded_view/data/image_path.py | from os import path
class image_path:
D = path.dirname(__file__)
add1 = D + "/add1.png"
Axono = D + "/Axono.png"
bind = D + "/bind.png"
clear = D + "/clear.png"
hide_show = D + "/hide&show.png"
preview = D + "/preview.png"
remove1 = D + "/remove1.png"
reset = D + "/reset.png"
select = D + "/select.png"
title = D + "/title.png"
unbind = D + "/unbind.png"
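    # Usage sketch (assumption): these resolve to absolute paths next to this
    # file, e.g. style={"Button.Image": {"image_url": image_path.select}}.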
| 407 | Python | 21.666665 | 36 | 0.525799 |
gazebosim/gz-omni/PACKAGE-INFO.yaml | Package : ignition-connection
Version : 000-linux-x86_64
Maintainers : N/A
Description : N/A
Repository : https://github.com/ignitionrobotics/ign-omni
Branch : main
| 164 | YAML | 22.571425 | 57 | 0.77439 |
gazebosim/gz-omni/launcher.toml | #displayed application name
name = "Ignition Omni Connector"
#displayed before application name in launcher
productArea = "Omniverse"
#unique identifier for component, all lower case, persists between versions
slug = "ignitionomni"
## install and launch instructions by environment
# Windows
# [defaults.windows-x86_64]
# entrypoint = "explorer.exe"
# args = ["${productRoot}"]
[defaults.windows-x86_64.environment]
[defaults.windows-x86_64.install]
pre-install = ""
pre-install-args = []
install = ""
install-args = []
post-install = ""
post-install-args = []
[defaults.windows-x86_64.uninstall]
pre-uninstall = ""
pre-uninstall-args = []
uninstall = ""
uninstall-args = []
post-uninstall = ""
post-uninstall-args = []
[defaults.windows-x86_64.register]
command = ''
args = []
[defaults.windows-x86_64.unregister]
command = ''
args = []
# Linux
#[defaults.linux-x86_64]
#entrypoint = "gnome-terminal"
#args = ["--working-directory=${productRoot}"]
[defaults.linux-x86_64.environment]
[defaults.linux-x86_64.install]
pre-install = ""
pre-install-args = []
install = ""
install-args = []
post-install = ""
post-install-args = []
[defaults.linux-x86_64.uninstall]
pre-uninstall = ""
pre-uninstall-args = []
uninstall = ""
uninstall-args = []
post-uninstall = ""
post-uninstall-args = []
[defaults.linux-x86_64.register]
command = ''
args = []
[defaults.linux-x86_64.unregister]
command = ''
args = []
| 1,413 | TOML | 18.638889 | 75 | 0.699929 |
gazebosim/gz-omni/README.md | # ign-omni
![](./tutorials/hybrid_diagram.png)
**NOTES**:
- This repository is under development; you might find compilation errors,
malfunctions, or undocumented features.
- This code will only run on Linux (for now).
- You should [install Ignition Fortress](https://ignitionrobotics.org/docs/fortress) (from source).
**Requirements**:
- ROS 2 Galactic.
- Ignition Gazebo Fortress
- Ubuntu 20.04
**Features**:
- Ignition -> IsaacSim
- Move/rotate models
- Create/remove models
- Joints
- Sensors (lidar, cameras)
- IsaacSim -> Ignition
- Create/remove models
## Tutorials
- [How to compile it](tutorials/01_compile.md)
- [Quickstart](tutorials/02_quickstart.md)
- [Mobile base simulation (turtlebot3) (Nav2)](tutorials/03_ROS_simulation.md)
- [Articulated arm Isaac Sim -> Ignition (Moveit 2)](tutorials/04_articulated_arm_issacsim_to_ignition.md)
- [Hybrid simulation (Nav2)](tutorials/05_hybrid_simulation.md)
## ROSCon 2022
[![](img/video_img.png)](https://vimeo.com/767140085)
| 1,026 | Markdown | 28.342856 | 108 | 0.71345 |
gazebosim/gz-omni/tools/repoman/package.py | import os
import sys
import packmanapi
packagemaker_path = packmanapi.install("packagemaker", package_version="4.0.0-rc9", link_path='_packages/packagemaker')
sys.path.append('_packages/packagemaker')
import packagemaker
def main():
pkg = packagemaker.PackageDesc()
pkg.version = os.getenv('BUILD_NUMBER', '0')
pkg.output_folder = '_unsignedpackages'
pkg.name = 'samples'
pkg.files = [
('_build/windows-x86_64/release/*.exe'),
('_build/windows-x86_64/release/*.dll'),
]
packagemaker.package(pkg)
if __name__ == '__main__' or __name__ == '__mp_main__':
main()
| 614 | Python | 22.653845 | 119 | 0.643322 |
gazebosim/gz-omni/tools/repoman/repoman.py | import os
import sys
import packmanapi
SCRIPT_DIR = os.path.dirname(os.path.realpath(__file__))
REPO_ROOT_DIR = os.path.join(SCRIPT_DIR, "..", "..")
HOST_DEPS_PATH = os.path.join(REPO_ROOT_DIR, "_build", "host-deps")
repoman_link_path = os.path.abspath(os.path.join(HOST_DEPS_PATH, "nvtools_repoman"))
packmanapi.install("repo_repoman", package_version="0.1.1-beta2", link_path=repoman_link_path)
sys.path.append(repoman_link_path)
import api
| 448 | Python | 27.062498 | 94 | 0.720982 |
gazebosim/gz-omni/tools/repoman/findwindowsbuildtools.py | import os
import sys
import argparse
import subprocess
import json
from xml.etree import ElementTree
import repoman
import packmanapi
DEPS = {
"nvtools_build": {
"version": "0.2.0",
"link_path_host": "nvtools_build",
}
}
'''
buildtools: C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools
vc: ../_build/host-deps/buildtools/VC/Tools/MSVC/14.27.29110
'''
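# Sketch of the host_deps.packman.xml entries this script rewrites (assumed
# layout, for illustration only):
#
#   <project toolsVersion="5.0">
#     <dependency name="buildtools" linkPath="../_build/host-deps/buildtools">
#       <source path="C:/Program Files (x86)/Microsoft Visual Studio/2019/BuildTools" />
#     </dependency>
#   </project>
#
# update_host_deps() replaces each <source path="..."> whose <dependency> name
# appears in find_replace_dict with the locations discovered by this script.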
def update_host_deps(host_deps_path:str, vs_path:str = "", msvc_ver:str = ""):
# Preserving comments in the XML host_deps file
# credit: https://stackoverflow.com/questions/33573807/faithfully-preserve-comments-in-parsed-xml
class CommentedTreeBuilder(ElementTree.TreeBuilder):
def comment(self, data):
self.start(ElementTree.Comment, {})
self.data(data)
self.end(ElementTree.Comment)
parser = ElementTree.XMLParser(target=CommentedTreeBuilder())
# python 3.8 adds insert_comments
#parser = ElementTree.XMLParser(target=ElementTree.TreeBuilder(insert_comments=True))
vc_path = os.path.join("..", "_build", "host-deps", "buildtools", "VC", "Tools", "MSVC", msvc_ver)
# Use winsdk.bat from Visual Studio tools to find the Windows SDK
windows_sdk_dir = ""
windows_sdk_ver = ""
windows_sdk_bin_dir = ""
windows_sdk_lib_dir = ""
windows_sdk_include_dir = ""
winsdk_bat_path = os.path.join(vs_path, "Common7", "Tools", "vsdevcmd", "core", "winsdk.bat")
if os.path.exists(winsdk_bat_path):
# We have a batch wrapper that calls the winsdk.bat file and emits the important env vars to be processed
script_path = os.path.split(os.path.abspath(__file__))[0]
cmd_line = []
cmd_line.append(os.path.join(script_path, "print_winsdk_env_vars.bat"))
cmd_line.append(winsdk_bat_path)
completed = subprocess.run(cmd_line, capture_output=True)
for line in completed.stdout.decode().splitlines():
if "WindowsSDKDir" in line:
windows_sdk_dir = line.split("=")[1].rstrip("\\")
elif "WindowsSdkVersion" in line:
windows_sdk_ver = line.split("=")[1].rstrip("\\")
if os.path.exists(windows_sdk_dir):
windows_sdk_bin_dir = os.path.join(windows_sdk_dir, "bin", windows_sdk_ver)
windows_sdk_include_dir = os.path.join(windows_sdk_dir, "include", windows_sdk_ver)
windows_sdk_lib_dir = os.path.join(windows_sdk_dir, "lib", windows_sdk_ver)
# Read the XML tree from the host_deps file
tree = ElementTree.parse(host_deps_path, parser)
root = tree.getroot()
# Replace the builtools and vc paths
find_replace_dict = {
"buildtools": vs_path,
"vc": vc_path,
"winsdk": windows_sdk_dir,
"winsdk_bin": windows_sdk_bin_dir,
"winsdk_include": windows_sdk_include_dir,
"winsdk_lib": windows_sdk_lib_dir,
}
for dependency in root.findall("dependency"):
for find_key in find_replace_dict.keys():
if "name" in dependency.attrib.keys() and find_key == dependency.attrib["name"] and find_replace_dict[find_key] != "":
for source in dependency.iter("source"):
source.attrib["path"] = find_replace_dict[find_key]
print("Updating <%s> attribute with <%s>" % (dependency.attrib["name"],source.attrib["path"]))
tree.write(host_deps_path)
'''
find_vs will search through the display names of the installed Visual Studio versions and
return the installation path for the first one that matches the input string provided
current display names:
* Visual Studio Community 2019
* Visual Studio Professional 2019
* Visual Studio Professional 2017
'''
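# Abridged sketch of the vswhere JSON consumed below (assumed field layout;
# only the two fields this function reads are shown):
# [
#   {
#     "displayName": "Visual Studio Community 2019",
#     "installationPath": "C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community"
#   }
# ]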
def find_vs(search_str:str, listall:bool = False) -> str:
program_files = os.getenv("ProgramFiles(x86)")
if not program_files:
print("ERROR: No Program Files (x86) directory found")
return None
vswhere_path = os.path.join(program_files, "Microsoft Visual Studio", "Installer", "vswhere.exe")
if not os.path.exists(vswhere_path):
print("ERROR: vswhere.exe is not found here, so no Visual Studio installations found: " + vswhere_path)
return None
# Run vswhere with a json-formatted output
cmd_line = list()
cmd_line.append(vswhere_path)
cmd_line.append("-products")
cmd_line.append("*")
cmd_line.append("-format")
cmd_line.append("json")
completed = subprocess.run(cmd_line, capture_output=True)
version_strings = completed.stdout.decode()
version_json = json.loads(version_strings)
# Find the requested version using the displayName attribute
last_version = None
for vs_version in version_json:
if listall:
print(vs_version["displayName"])
last_version = vs_version["installationPath"]
elif search_str in vs_version["displayName"]:
return vs_version["installationPath"]
return last_version
'''
find_msvc_ver will list the first MSVC version found in a Visual Studio installation
vs_install_path = "C:\\Program Files (x86)\\Microsoft Visual Studio\\2019\\Community"
returns something like "14.25.28610"
'''
def find_msvc_ver(vs_install_path:str) -> str:
# return first folder found in "VC/Tools/MSVC"
msvc_folder = os.path.join(vs_install_path, "VC", "Tools", "MSVC")
if not os.path.exists(msvc_folder):
print("ERROR: No MSVC folder found at " + msvc_folder)
return None
msvc_vers = os.listdir(msvc_folder)
if len(msvc_vers) > 0:
return msvc_vers[0]
else:
print("ERROR: No MSVC folder found at " + msvc_folder)
return None
def run_command():
parser = argparse.ArgumentParser()
parser.add_argument('-v',
'--visual-studio-version',
default='2019',
dest='vs_ver',
help='Different Visual Studio installation \"displayNames\" will be searched with this substring',
required=False)
parser.add_argument('-l',
'--list-all',
dest='list_all',
action='store_true',
help="Enable this to simply just list all Visual Studio installations rather than updating the host_deps file",
required=False)
parser.add_argument('-d',
'--host-deps-path',
dest='host_deps_path',
help="The path to the host_deps.packman.xml file",
required=False)
args, _ = parser.parse_known_args()
if not args.list_all:
print("Searching for an install of Visual Studio <%s>" % (args.vs_ver))
vs_path = find_vs(args.vs_ver, args.list_all)
if not vs_path:
print("ERROR: No Visual Studio Installation Found")
exit(1)
if not args.list_all and vs_path:
print("VS " + args.vs_ver + " found in: " + vs_path)
msvc_version = find_msvc_ver(vs_path)
if msvc_version:
print("VS " + args.vs_ver + " MSVC ver: " + msvc_version)
if args.host_deps_path and vs_path and msvc_version:
update_host_deps(args.host_deps_path, vs_path = vs_path, msvc_ver = msvc_version)
print("Update host dependencies file: " + args.host_deps_path)
if __name__ == "__main__" or __name__ == "__mp_main__":
run_command()
| 7,257 | Python | 37 | 130 | 0.637453 |
gazebosim/gz-omni/tools/repoman/clean.py | import os
import sys
import platform
def clean():
folders = [
'_build',
'_compiler',
'_builtpackages'
]
for folder in folders:
        # having to do the platform check because it's safer when you might be
        # removing folders with windows junctions.
if os.path.exists(folder):
print("Removing %s" % folder)
if platform.system() == 'Windows':
os.system("rmdir /q /s %s > nul 2>&1" % folder)
else:
os.system("rm -r -f %s > /dev/null 2>&1" % folder)
if os.path.exists(folder):
print("Warning: %s was not successfully removed, most probably due to a file lock on 1 or more of the files." % folder)
if __name__ == "__main__" or __name__ == "__mp_main__":
clean()
| 812 | Python | 29.11111 | 135 | 0.539409 |
gazebosim/gz-omni/tools/repoman/build.py | import os
import sys
import argparse
import repoman
import packmanapi
DEPS = {
"nvtools_build": {
"version": "0.3.2",
"link_path_host": "nvtools_build",
}
}
def run_command():
platform_host = repoman.api.get_and_validate_host_platform(["windows-x86_64", "linux-x86_64"])
repo_folders = repoman.api.get_repo_paths()
repoman.api.fetch_deps(DEPS, platform_host, repo_folders["host_deps"])
BUILD_SCRIPT = os.path.join(repo_folders["host_deps"], "nvtools_build", "build.py")
# Fetch the asset dependencies
parser = argparse.ArgumentParser()
parser.add_argument('-p', '--platform-target',
dest='platform_target', required=False)
options, _ = parser.parse_known_args()
# Checking if platform was passed
# We cannot use argparse's default, as we also need to set up the command line argument
# if it wasn't supplied. It is possible to also check for the host platform, if we want to
# make different default behavior when building on windows.
if not repoman.api.has_options_arg(options, 'platform_target'):
options.platform_target = 'linux-x86_64'
sys.argv.extend(["--platform-target", options.platform_target])
# We need the host-deps before we can run MSBuild
packmanapi.pull(os.path.join(repo_folders["root"], repo_folders["host_deps_xml"]), platform=platform_host)
# Construct arguments for the underlying script
script_argv = sys.argv[1:]
script_argv.extend(["--root", repo_folders["root"]])
script_argv.extend(["--deps-host", repo_folders["host_deps_xml"]])
script_argv.extend(["--deps-target", repo_folders["target_deps_xml"]])
if platform_host == "windows-x86_64":
script_argv.extend(["--premake-tool", os.path.join(repo_folders["host_deps"], "premake", "premake5.exe")])
# Look for different MSBuild versions
ms_build_path = ""
ms_build_locations = [
r"buildtools\MSBuild\15.0\Bin\MSBuild.exe",
r"buildtools\MSBuild\Current\Bin\MSBuild.exe",
]
for ms_build_location in ms_build_locations:
print("Checking if MSBuild.exe located here: " + os.path.join(repo_folders["host_deps"], ms_build_location))
if os.path.exists(os.path.join(repo_folders["host_deps"], ms_build_location)):
ms_build_path = os.path.join(repo_folders["host_deps"], ms_build_location)
break
print("Building using this MSBuild: " + ms_build_path)
script_argv.extend(["--msbuild-tool", ms_build_path])
script_argv.extend(["--vs-version", "vs2019"])
script_argv.extend(["--sln", os.path.join(repo_folders["compiler"], r"vs2019\Samples.sln")])
elif platform_host == "linux-x86_64":
script_argv.extend(["--premake-tool", os.path.join(repo_folders["host_deps"], "premake", "premake5")])
# Execute module script and set globals
repoman.api.run_script_with_custom_args(BUILD_SCRIPT, script_argv)
if __name__ == "__main__" or __name__ == "__mp_main__":
run_command()
| 3,071 | Python | 41.666666 | 120 | 0.645718 |
gazebosim/gz-omni/tools/repoman/filecopy.py | import os
import sys
import argparse
import packmanapi
import repoman
DEPS = {
"nvfilecopy": {
"version": "1.4",
"link_path_host": "nvtools_nvfilecopy",
}
}
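# Usage sketch (assumption): this script is invoked as
#   python tools/repoman/filecopy.py --platform-target linux-x86_64 <copy_spec.json>
# where the copy-spec JSON path is taken from the last positional argument.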
if __name__ == "__main__" or __name__ == "__mp_main__":
repo_folders = repoman.api.get_repo_paths()
deps_folders = repoman.api.fetch_deps(DEPS, None, repo_folders["host_deps"])
sys.path.append(deps_folders["nvfilecopy"])
import nvfilecopy
parser = argparse.ArgumentParser()
parser.add_argument('-p', '--platform-target', dest='platform_target', required=True)
options, _ = parser.parse_known_args()
platform_target = repoman.api.validate_platform(
"target",
options.platform_target,
["windows-x86_64", "linux-x86_64", "linux-aarch64"]
)
nvfilecopy.process_json_file(sys.argv[len(sys.argv) - 1], platform_target)
| 863 | Python | 25.999999 | 89 | 0.636153 |
gazebosim/gz-omni/tutorials/03_ROS_simulation.md | # Run a more complex simulation
<!-- TODO: Replace this with turtlebot4 instructions https://github.com/ignitionrobotics/ign-omni/pull/17 -->
For example, you can run the TurtleBot3. Compile the code from this PR: https://github.com/ROBOTIS-GIT/turtlebot3_simulations/pull/180
Use ROS 2 Galactic.
```
mkdir -p ~/turtlebot3_ws/src
cd ~/turtlebot3_ws/src
git clone https://github.com/ahcorde/turtlebot3_simulations -b ahcorde/ignition_support
git clone https://github.com/ignitionrobotics/ign_ros2_control -b galactic
rosdep install --from-paths ./ -i -y --rosdistro galactic
```
Compile it
```bash
cd ~/turtlebot3_ws/
source /opt/ros/galactic/setup.sh
export IGNITION_VERSION=fortress
colcon build --merge-install --event-handlers console_direct+
```
## Run Ignition
This should run in a separate terminal using your normal Ignition Gazebo installation.
```bash
source ~/turtlebot3_ws/install/setup.bash
TURTLEBOT3_MODEL=waffle ros2 launch turtlebot3_ignition ignition.launch.py
```
## Run IsaacSim
Launch `IsaacSim` and activate the `live sync`
![](live_sync.gif)
## Run the connector
Create this directory `omniverse://localhost/Users/ignition/` in the Nucleus server and run the connector:
```bash
export IGN_GAZEBO_RESOURCE_PATH="$HOME/turtlebot3_ws/src/turtlebot3_simulations/turtlebot3_gazebo/models:/opt/ros/galactic/share"
reset && bash run_ignition_omni.sh -p omniverse://localhost/Users/ignition/turtlebot3.usd -w empty -v --pose ignition
```
![](turtlebot3.gif)
| 1,482 | Markdown | 28.078431 | 132 | 0.765182 |
gazebosim/gz-omni/tutorials/01_compile.md | # How to compile it
## Install Ignition
```bash
sudo apt update
sudo apt install python3-pip wget lsb-release gnupg curl
sudo sh -c 'echo "deb http://packages.ros.org/ros/ubuntu $(lsb_release -sc) main" > /etc/apt/sources.list.d/ros-latest.list'
curl -s https://raw.githubusercontent.com/ros/rosdistro/master/ros.asc | sudo apt-key add -
sudo apt-get update
sudo apt-get install python3-vcstool python3-colcon-common-extensions
sudo apt-get install git libfreeimage-dev
sudo apt-get install ignition-edifice
```
For more information, see https://ignitionrobotics.org/docs/edifice/install_ubuntu_src.
# Compile ignition-omniverse
We need to compile some Ignition packages from source with a specific flag due to the `omni-client` library.
To make this process simple we have created the [`ign-omni-meta` repository](https://github.com/ignitionrobotics/ign-omni-meta).
To compile these libraries you should run:
```bash
mkdir -p ~/ign-omni/src
cd ~/ign-omni/src
git clone https://github.com/ignitionrobotics/ign-omni-meta
vcs import . < ign-omni-meta/repos.yaml
cd protobuf
git -C . apply ../ign-omni-meta/protobuf-cmake.patch
cd ~/ign-omni
colcon build --merge-install --event-handlers console_direct+ --packages-select protobuf
cp src/ign-omni-meta/colcon.meta .
colcon build --merge-install --event-handlers console_direct+ --packages-up-to ignition-omniverse1
```
You can ignore the following message:
```bash
WARNING:colcon.colcon_cmake.task.cmake.build:Could not run installation step for package 'ignition-omniverse1' because it has no 'install' target
```
**Note: There will be 2 builds of ignition, the default build when ignition-edifice is compiled from source, and a special build with pre cxx11 abi compiled as part of ignition-omniverse.**
| 1,759 | Markdown | 37.260869 | 189 | 0.77203 |
gazebosim/gz-omni/tutorials/02_quickstart.md | # Run it
Please review this [tutorial](./01_compile.md) if you need to install ign-omni.
## Run Ignition
Run the `shapes.sdf` world in Ignition Gazebo. This should run in a separate terminal using your normal Ignition Gazebo installation.
```bash
ign gazebo -v 4 shapes.sdf
```
## Run IsaacSim
If you have not already done so, install NVIDIA Omniverse, Isaac Sim and Omniverse Nucleus; for more information, see https://www.nvidia.com/en-us/omniverse/.
### (Optional) Create ignition user in nucleus
When Nucleus is first installed, it will prompt you to create a user. If the ignition user is not created at this time, it can be created later via the Omniverse app.
![](omniverse-create-user.gif)
Launch `IsaacSim` and activate the `live sync`
![](live_sync.gif)
## Run the connector
**Note**: `ignition-omni` will be built under `src/ign-omni/_build`; this is because
it uses a custom build system by NVIDIA which is hard-coded to put output in that directory.
In this case you need to source the special workspace that we have created
with the `ign-omni-meta` repository.
In a new terminal
```bash
source ~/ign-omni/install/setup.bash
cd ~/ign-omni/src/ign-omni
bash run_ignition_omni.sh -p omniverse://localhost/Users/ignition/shapes.usd -w shapes --pose ignition
```
You may replace `ignition` with any user registered in Nucleus.
Open the `shapes.usd` in Isaac Sim and optionally enable live sync.
![](isaac-shapes.png)
| 1,435 | Markdown | 29.553191 | 166 | 0.749826 |
gazebosim/gz-omni/tutorials/05_hybrid_simulation.md | # Hybrid simulation
This demo explains how to use the hybrid simulation. The concept of Hybrid simulation is defined as: *A user can separate their simulation workload between Ignition
and Isaac Sim, with both systems running in parallel. For example, sensors can be handled by Isaac Sim, with rendering handled by Ignition, or vice versa.*
To give more tangible examples, we can use ROS frameworks such as the Nav stack or MoveIt to consume the ROS data from both simulators. For example:
- MoveIt: we can simulate the joints of an articulated arm in Ignition (sensing the joint data and sending commands to the joints), while Isaac Sim simulates a camera attached to the robot or looking at the scene.
- Nav stack: we can simulate the diff drive controller in Ignition and all the sensors (lidar, cameras, etc.) in Isaac Sim.
The possibilities are vast; you just need to configure your own setup. The following image explains how the hybrid simulation works.
![](hybrid_diagram.png)
Both simulators will be connected using the `ign-omni` connector. The connector will share the model poses and joint states. Then we can use ROS to send sensor data to the ROS network or receive commands to operate a motor.
Once the connector is running, we need to define which sensor or actuator data will be provided by each simulator.
- Isaac Sim can share data on the ROS network using some of the predefined ROS plugins
![](isaac_ros_plugins.png)
- Ignition uses `ros_ign_bridge`; this package provides a network bridge which enables the exchange of messages between ROS 2 and Ignition Transport. You can follow this tutorial to learn more about [how to use ROS Ignition bridges](https://docs.ros.org/en/galactic/Tutorials/Simulators/Ignition/Setting-up-a-Robot-Simulation-Ignition.html)
## Demo
In particular we will follow these steps:
- Launch a world containing a ROS 2-controlled robot.
- Enable hybrid simulation, sharing the workload between Ignition and Isaac Sim
- Control the robot from ROS 2
- Visualize the data in the ROS network from both simulators in Rviz2
## Prerequisites
- ign-omni Connector (see the [compile instructions](01_compile.md))
- [Turtlebot4 Simulation](https://github.com/turtlebot/turtlebot4_simulator)
- Omniverse Isaac Sim
- [ROS & ROS2 Bridge](https://docs.omniverse.nvidia.com/app_isaacsim/app_isaacsim/ext_omni_isaac_ros_bridge.html)
- [How to use ROS Ignition bridges](https://docs.ros.org/en/galactic/Tutorials/Simulators/Ignition/Setting-up-a-Robot-Simulation-Ignition.html)
- ros_ign_bridge
- ROS 2: `sudo apt-get install ros-galactic-ros-ign-bridge`
## Launch a world containing a ROS 2-controlled robot
In this case we are going to simulate the Turtlebot4 available in this [repository](https://github.com/turtlebot/turtlebot4_simulator).
But first we are going to modify the lidar ros_ign_bridge. Edit the file `~/turtlebot4_ws/src/turtlebot4_simulator/turtlebot4_ignition_bringup/launch/ros_ign_bridge.launch.py`. We should remap the `/scan` ROS 2 topic to `/scan_ignition`. The original file is:
```python
remappings=[
(['/world/', LaunchConfiguration('world'),
'/model/', LaunchConfiguration('robot_name'),
'/link/rplidar_link/sensor/rplidar/scan'],
'/scan')
])
```
And you should modify it:
```python
remappings=[
(['/world/', LaunchConfiguration('world'),
'/model/', LaunchConfiguration('robot_name'),
'/link/rplidar_link/sensor/rplidar/scan'],
'/scan_ignition')
])
```
Now you should compile it. Follow the instructions in the [README.md](https://github.com/turtlebot/turtlebot4_simulator/blob/galactic/README.md) file to install it.
## Running the connector (ign-omni)
If you need to compile `ign-omni`, please review these [instructions](01_compile.md).
**Note**: `ignition-omni` will be built under `src/ign-omni/_build`, this is because
it uses a custom build system by NVidia which is hard coded to put output in that directory.
In this case you need to source the special workspace that we have created
with the `ign-omni-meta` repository.
Create an empty file in Nucleus at the following path `omniverse://localhost/Users/ignition/turtlebot4.usd`
## Running the example:
### Run the ROS 2 simulation
Run the Turtlebot4 simulation in Ignition. We are going to set the `slam` option to be able to create a map and navigate the scene, and we also set `rviz` to true to visualize all the data from the Nav2 stack in RViz2:
```bash
source ~/turtlebot4_ws/install/setup.bash
ros2 launch turtlebot4_ignition_bringup ignition.launch.py slam:=sync nav2:=true rviz:=true
```
### Run the connector
Once the simulation in Ignition is running we need to define some arguments to run the connector:
```bash
Ignition omniverse connector
Usage: ./_build/linux-x86_64/debug/ignition-omniverse1 [OPTIONS]
Options:
-h,--help Print this help message and exit
-p,--path TEXT REQUIRED Location of the omniverse stage. e.g. "omniverse://localhost/Users/ignition/stage.usd"
-w,--world TEXT REQUIRED Name of the ignition world
--pose ENUM:value in {ignition->0,isaacsim->1} OR {0,1} REQUIRED
Which simulator will handle the poses
-v,--verbose
```
In particular we need to define:
- `-p,--path`: this is the file inside Omniverse; Isaac Sim and the connector are going to share it thanks to live sync mode. **This file must live in Omniverse**.
- `-w,--world`: The name of the Ignition world
- `--pose`: This option has two values: `ignition` or `isaacsim`. It defines which simulator will handle the model poses and the joint states.
```bash
source ~/ign-omni/install/setup.bash
cd ~/ign-omni/src/ign-omni
export IGN_GAZEBO_RESOURCE_PATH=$IGN_GAZEBO_RESOURCE_PATH:`echo $HOME`/turtlebot4_ws/install/share/:`echo $HOME`/turtlebot4_ws/install/turtlebot4_description/share/:`echo $HOME`/turtlebot4_ws/install/irobot_create_description/share/
bash run_ignition_omni.sh -p omniverse://localhost/Users/ignition/turtlebot4.usd -w depot -v --pose ignition
```
### Run Isaac Sim
Launch `Isaac Sim`, load the file `omniverse://localhost/Users/ignition/turtlebot4.usd` and activate the `live sync`
![](../live_sync.gif)
### Let's configure Isaac Sim
Include the LIDAR ROS plugin in Isaac Sim
![](./issacsim_create_lidar.png)
Configure the Plugin:
- The frame_id should be `turtlebot4/rplidar_link/rplidar`
- The laser scan topic `/scan_isaac`
- The lidar prim `/depot/turtlebot4/rplidar_link/rplidar`
![](./issac_lidar_ROS.png)
Configure the Isaac Sim lidar:
- Disable `highLod`
- In case you want to visualize the data from the laser enable `drawLines`
![](./issac_lidar_config.png)
Now enable the ROS 2 extension
![](./ROS2_extension.gif)
There is an issue with the ROS 2 clock; for this reason we need to create a
simple republisher that updates the timestamp of the lidar messages.
Create a file called `lidar_republisher.py` and include this code:
```python
import rclpy
from rclpy.node import Node
from sensor_msgs.msg import LaserScan
class MinimalSubscriber(Node):
def __init__(self):
super().__init__('minimal_subscriber')
self.subscription = self.create_subscription(
LaserScan,
'/scan_isaac',
self.listener_callback,
10)
self.subscription # prevent unused variable warning
self.publisher_ = self.create_publisher(LaserScan, '/scan', 10)
my_new_param = rclpy.parameter.Parameter(
'use_sim_time',
rclpy.Parameter.Type.BOOL,
True
)
all_new_parameters = [my_new_param]
self.set_parameters(all_new_parameters)
def listener_callback(self, msg):
msg.header.stamp = self.get_clock().now().to_msg()
self.publisher_.publish(msg)
def main(args=None):
rclpy.init(args=args)
minimal_subscriber = MinimalSubscriber()
rclpy.spin(minimal_subscriber)
minimal_subscriber.destroy_node()
rclpy.shutdown()
if __name__ == '__main__':
main()
```
**Note: With this script you can choose which simulator is going to provide the lidar data**
- If you define `/scan_isaac` the data used is provided by Isaac Sim.
- If you define `/scan_ignition` the data used is provided by Ignition Gazebo.
Launch the node:
```bash
source ~/turtlebot4_ws/install/setup.bash
python3 lidar_republisher.py
```
At this point you should be able to visualize the map in RViz2. Then you should be able to set a `Nav2 Goal` using the RViz2 GUI.
![](./rviz_nav2.png)
![](./hybrid_nav2.gif)
| 8,706 | Markdown | 38.220721 | 341 | 0.728463 |
gazebosim/gz-omni/tutorials/04_articulated_arm_issacsim_to_ignition.md | # Articulated arm connection from Isaac Sim to Ignition
In this tutorial we will explain how to use the connector from Isaac Sim to Ignition.
## Prerequisites
- sdformat with USD support (see the [sdformat installation instructions](http://sdformat.org/tutorials?tut=install))
- Ignition fuel tools cli command (see the [ign fuel tools installation instructions](https://ignitionrobotics.org/api/fuel_tools/7.0/install.html))
- ign-omni Connector (see the [compile instructions](01_compile.md))
- Omniverse Isaac Sim
- ros_ign_bridge
- ROS: `sudo apt-get install ros-noetic-ros-ign-bridge`
## Convert SDF to USD
If sdformat is built with USD support, there should be a `sdf2usd` CLI program.
```bash
SDF to USD converter
Usage: sdf2usd [OPTIONS] [input] [output]
Positionals:
input TEXT Input filename. Defaults to input.sdf unless otherwise specified.
output TEXT Output filename. Defaults to output.usd unless otherwise specified.
Options:
-h,--help Print this help message and exit
--help-all Show all help
--version
```
- Convert the Franka Emika Panda robot to USD. Create the following file `panda.sdf`:
```xml
<?xml version="1.0" ?>
<sdf version="1.6">
<world name="fuel">
<physics name="1ms" type="ignored">
<max_step_size>0.001</max_step_size>
<real_time_factor>1.0</real_time_factor>
</physics>
<scene>
<ambient>1.0 1.0 1.0 1.0</ambient>
<background>0.8 0.8 0.8 1.0</background>
<grid>false</grid>
<origin_visual>false</origin_visual>
</scene>
<plugin
filename="ignition-gazebo-physics-system"
name="ignition::gazebo::systems::Physics">
</plugin>
<plugin
filename="ignition-gazebo-sensors-system"
name="ignition::gazebo::systems::Sensors">
<render_engine>ogre2</render_engine>
</plugin>
<plugin
filename="ignition-gazebo-user-commands-system"
name="ignition::gazebo::systems::UserCommands">
</plugin>
<plugin
filename="ignition-gazebo-scene-broadcaster-system"
name="ignition::gazebo::systems::SceneBroadcaster">
</plugin>
<light type="directional" name="sun">
<cast_shadows>true</cast_shadows>
<pose>0 0 10 0 0 0</pose>
<diffuse>0.8 0.8 0.8 1</diffuse>
<specular>0.2 0.2 0.2 1</specular>
<attenuation>
<range>1000</range>
<constant>0.9</constant>
<linear>0.01</linear>
<quadratic>0.001</quadratic>
</attenuation>
<direction>-0.5 0.1 -0.9</direction>
</light>
<include>
<name>panda</name>
<pose>0 0 0 0 0 0</pose>
<uri>https://fuel.ignitionrobotics.org/1.0/OpenRobotics/models/Panda with Ignition position controller model</uri>
</include>
<model name="ground_plane">
<static>true</static>
<link name="link">
<collision name="collision">
<geometry>
<plane>
<normal>0 0 1</normal>
<size>100 100</size>
</plane>
</geometry>
</collision>
<visual name="visual">
<geometry>
<plane>
<normal>0 0 1</normal>
<size>100 100</size>
</plane>
</geometry>
<material>
<ambient>0.8 0.8 0.8 1</ambient>
<diffuse>0.8 0.8 0.8 1</diffuse>
<specular>0.8 0.8 0.8 1</specular>
</material>
</visual>
</link>
</model>
</world>
</sdf>
```
- Download the panda model from fuel:
```bash
ign fuel download --url "https://fuel.ignitionrobotics.org/1.0/OpenRobotics/models/Panda with Ignition position controller model"
```
- Run the converter:
```bash
sdf2usd panda.sdf panda.usd
```
- Copy the file to this path `omniverse://localhost/Users/ignition/panda.usd`:
![](live_sync.gif)
- Load the model in Isaac Sim and activate the `live sync`
![](./panda_issac.png)
- Configure the ROS Plugins:
- Add ROS Clock plugin
- Add ROS joint state
- Configure *articulationPrim* to `/fuel/panda`
- Launch the simulation in Ignition Gazebo:
```bash
ign gazebo panda.sdf -v 4 -r
```
- Launch the connector:
```bash
source ~/ign-omni/install/setup.bash
cd ~/ign-omni/src/ign-omni
bash run_ignition_omni.sh -p omniverse://localhost/Users/ignition/panda.usd -w fuel --pose ignition
```
- Right now Isaac Sim does not provide the joint angles, but this data is available in ROS. You should launch a ROS -> Ignition bridge:
```bash
rosrun ros_ign_bridge parameter_bridge /joint_states@sensor_msgs/JointState]ignition.msgs.Model
```
- Then you can move the robot. There is a workspace available at `.local/share/ov/pkg/isaac_sim-2021.2.1/ros_workspace/` that you need to compile, and then run:
```bash
roslaunch isaac_moveit franka_isaac_execution.launch
```
| 4,854 | Markdown | 30.121795 | 160 | 0.639267 |
gazebosim/gz-omni/PACKAGE-LICENSES/sqlite-LICENSE.md | **
** The author disclaims copyright to this source code. In place of
** a legal notice, here is a blessing:
**
** May you do good and not evil.
** May you find forgiveness for yourself and forgive others.
** May you share freely, never taking more than you give.
**
| 277 | Markdown | 29.888886 | 67 | 0.693141 |
gazebosim/gz-omni/PACKAGE-LICENSES/openssl-LICENSE.md |
LICENSE ISSUES
==============
The OpenSSL toolkit stays under a double license, i.e. both the conditions of
the OpenSSL License and the original SSLeay license apply to the toolkit.
See below for the actual license texts.
OpenSSL License
---------------
/* ====================================================================
* Copyright (c) 1998-2019 The OpenSSL Project. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
*
* 3. All advertising materials mentioning features or use of this
* software must display the following acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit. (http://www.openssl.org/)"
*
* 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
* endorse or promote products derived from this software without
* prior written permission. For written permission, please contact
* openssl-core@openssl.org.
*
* 5. Products derived from this software may not be called "OpenSSL"
* nor may "OpenSSL" appear in their names without prior written
* permission of the OpenSSL Project.
*
* 6. Redistributions of any form whatsoever must retain the following
* acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit (http://www.openssl.org/)"
*
* THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
* EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR
* ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
* ====================================================================
*
* This product includes cryptographic software written by Eric Young
* (eay@cryptsoft.com). This product includes software written by Tim
* Hudson (tjh@cryptsoft.com).
*
*/
Original SSLeay License
-----------------------
/* Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com)
* All rights reserved.
*
* This package is an SSL implementation written
* by Eric Young (eay@cryptsoft.com).
* The implementation was written so as to conform with Netscapes SSL.
*
* This library is free for commercial and non-commercial use as long as
* the following conditions are aheared to. The following conditions
* apply to all code found in this distribution, be it the RC4, RSA,
* lhash, DES, etc., code; not just the SSL code. The SSL documentation
* included with this distribution is covered by the same copyright terms
* except that the holder is Tim Hudson (tjh@cryptsoft.com).
*
* Copyright remains Eric Young's, and as such any Copyright notices in
* the code are not to be removed.
* If this package is used in a product, Eric Young should be given attribution
* as the author of the parts of the library used.
* This can be in the form of a textual message at program startup or
* in documentation (online or textual) provided with the package.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* "This product includes cryptographic software written by
* Eric Young (eay@cryptsoft.com)"
* The word 'cryptographic' can be left out if the rouines from the library
* being used are not cryptographic related :-).
* 4. If you include any Windows specific code (or a derivative thereof) from
* the apps directory (application code) you must include an acknowledgement:
* "This product includes software written by Tim Hudson (tjh@cryptsoft.com)"
*
* THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* The licence and distribution terms for any publically available version or
* derivative of this code cannot be changed. i.e. this code cannot simply be
* copied and put under another distribution licence
* [including the GNU Public Licence.]
*/
| 6,121 | Markdown | 47.587301 | 80 | 0.729129 |
gazebosim/gz-omni/PACKAGE-LICENSES/ptex-LICENSE.md | PTEX components are licensed under the following terms:
PTEX SOFTWARE
Copyright 2014 Disney Enterprises, Inc. All rights reserved
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in
the documentation and/or other materials provided with the
distribution.
* The names "Disney", "Walt Disney Pictures", "Walt Disney Animation
Studios" or the names of its contributors may NOT be used to
endorse or promote products derived from this software without
specific prior written permission from Walt Disney Pictures.
Disclaimer: THIS SOFTWARE IS PROVIDED BY WALT DISNEY PICTURES AND
CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING,
BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE, NONINFRINGEMENT AND TITLE ARE DISCLAIMED.
IN NO EVENT SHALL WALT DISNEY PICTURES, THE COPYRIGHT HOLDER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND BASED ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
| 1,720 | Markdown | 48.171427 | 70 | 0.800581 |
gazebosim/gz-omni/PACKAGE-LICENSES/python-LICENSE.md | A. HISTORY OF THE SOFTWARE
==========================
Python was created in the early 1990s by Guido van Rossum at Stichting
Mathematisch Centrum (CWI, see http://www.cwi.nl) in the Netherlands
as a successor of a language called ABC. Guido remains Python's
principal author, although it includes many contributions from others.
In 1995, Guido continued his work on Python at the Corporation for
National Research Initiatives (CNRI, see http://www.cnri.reston.va.us)
in Reston, Virginia where he released several versions of the
software.
In May 2000, Guido and the Python core development team moved to
BeOpen.com to form the BeOpen PythonLabs team. In October of the same
year, the PythonLabs team moved to Digital Creations, which became
Zope Corporation. In 2001, the Python Software Foundation (PSF, see
https://www.python.org/psf/) was formed, a non-profit organization
created specifically to own Python-related Intellectual Property.
Zope Corporation was a sponsoring member of the PSF.
All Python releases are Open Source (see http://www.opensource.org for
the Open Source Definition). Historically, most, but not all, Python
releases have also been GPL-compatible; the table below summarizes
the various releases.
Release Derived Year Owner GPL-
from compatible? (1)
0.9.0 thru 1.2 1991-1995 CWI yes
1.3 thru 1.5.2 1.2 1995-1999 CNRI yes
1.6 1.5.2 2000 CNRI no
2.0 1.6 2000 BeOpen.com no
1.6.1 1.6 2001 CNRI yes (2)
2.1 2.0+1.6.1 2001 PSF no
2.0.1 2.0+1.6.1 2001 PSF yes
2.1.1 2.1+2.0.1 2001 PSF yes
2.1.2 2.1.1 2002 PSF yes
2.1.3 2.1.2 2002 PSF yes
2.2 and above 2.1.1 2001-now PSF yes
Footnotes:
(1) GPL-compatible doesn't mean that we're distributing Python under
the GPL. All Python licenses, unlike the GPL, let you distribute
a modified version without making your changes open source. The
GPL-compatible licenses make it possible to combine Python with
other software that is released under the GPL; the others don't.
(2) According to Richard Stallman, 1.6.1 is not GPL-compatible,
because its license has a choice of law clause. According to
CNRI, however, Stallman's lawyer has told CNRI's lawyer that 1.6.1
is "not incompatible" with the GPL.
Thanks to the many outside volunteers who have worked under Guido's
direction to make these releases possible.
B. TERMS AND CONDITIONS FOR ACCESSING OR OTHERWISE USING PYTHON
===============================================================
PYTHON SOFTWARE FOUNDATION LICENSE VERSION 2
--------------------------------------------
1. This LICENSE AGREEMENT is between the Python Software Foundation
("PSF"), and the Individual or Organization ("Licensee") accessing and
otherwise using this software ("Python") in source or binary form and
its associated documentation.
2. Subject to the terms and conditions of this License Agreement, PSF hereby
grants Licensee a nonexclusive, royalty-free, world-wide license to reproduce,
analyze, test, perform and/or display publicly, prepare derivative works,
distribute, and otherwise use Python alone or in any derivative version,
provided, however, that PSF's License Agreement and PSF's notice of copyright,
i.e., "Copyright (c) 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008, 2009, 2010,
2011, 2012, 2013, 2014, 2015, 2016, 2017, 2018, 2019, 2020 Python Software Foundation;
All Rights Reserved" are retained in Python alone or in any derivative version
prepared by Licensee.
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python.
4. PSF is making Python available to Licensee on an "AS IS"
basis. PSF MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, PSF MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
5. PSF SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. Nothing in this License Agreement shall be deemed to create any
relationship of agency, partnership, or joint venture between PSF and
Licensee. This License Agreement does not grant permission to use PSF
trademarks or trade name in a trademark sense to endorse or promote
products or services of Licensee, or any third party.
8. By copying, installing or otherwise using Python, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.
BEOPEN.COM LICENSE AGREEMENT FOR PYTHON 2.0
-------------------------------------------
BEOPEN PYTHON OPEN SOURCE LICENSE AGREEMENT VERSION 1
1. This LICENSE AGREEMENT is between BeOpen.com ("BeOpen"), having an
office at 160 Saratoga Avenue, Santa Clara, CA 95051, and the
Individual or Organization ("Licensee") accessing and otherwise using
this software in source or binary form and its associated
documentation ("the Software").
2. Subject to the terms and conditions of this BeOpen Python License
Agreement, BeOpen hereby grants Licensee a non-exclusive,
royalty-free, world-wide license to reproduce, analyze, test, perform
and/or display publicly, prepare derivative works, distribute, and
otherwise use the Software alone or in any derivative version,
provided, however, that the BeOpen Python License is retained in the
Software, alone or in any derivative version prepared by Licensee.
3. BeOpen is making the Software available to Licensee on an "AS IS"
basis. BEOPEN MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, BEOPEN MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF THE SOFTWARE WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
4. BEOPEN SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF THE
SOFTWARE FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS
AS A RESULT OF USING, MODIFYING OR DISTRIBUTING THE SOFTWARE, OR ANY
DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
5. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
6. This License Agreement shall be governed by and interpreted in all
respects by the law of the State of California, excluding conflict of
law provisions. Nothing in this License Agreement shall be deemed to
create any relationship of agency, partnership, or joint venture
between BeOpen and Licensee. This License Agreement does not grant
permission to use BeOpen trademarks or trade names in a trademark
sense to endorse or promote products or services of Licensee, or any
third party. As an exception, the "BeOpen Python" logos available at
http://www.pythonlabs.com/logos.html may be used according to the
permissions granted on that web page.
7. By copying, installing or otherwise using the software, Licensee
agrees to be bound by the terms and conditions of this License
Agreement.
CNRI LICENSE AGREEMENT FOR PYTHON 1.6.1
---------------------------------------
1. This LICENSE AGREEMENT is between the Corporation for National
Research Initiatives, having an office at 1895 Preston White Drive,
Reston, VA 20191 ("CNRI"), and the Individual or Organization
("Licensee") accessing and otherwise using Python 1.6.1 software in
source or binary form and its associated documentation.
2. Subject to the terms and conditions of this License Agreement, CNRI
hereby grants Licensee a nonexclusive, royalty-free, world-wide
license to reproduce, analyze, test, perform and/or display publicly,
prepare derivative works, distribute, and otherwise use Python 1.6.1
alone or in any derivative version, provided, however, that CNRI's
License Agreement and CNRI's notice of copyright, i.e., "Copyright (c)
1995-2001 Corporation for National Research Initiatives; All Rights
Reserved" are retained in Python 1.6.1 alone or in any derivative
version prepared by Licensee. Alternately, in lieu of CNRI's License
Agreement, Licensee may substitute the following text (omitting the
quotes): "Python 1.6.1 is made available subject to the terms and
conditions in CNRI's License Agreement. This Agreement together with
Python 1.6.1 may be located on the Internet using the following
unique, persistent identifier (known as a handle): 1895.22/1013. This
Agreement may also be obtained from a proxy server on the Internet
using the following URL: http://hdl.handle.net/1895.22/1013".
3. In the event Licensee prepares a derivative work that is based on
or incorporates Python 1.6.1 or any part thereof, and wants to make
the derivative work available to others as provided herein, then
Licensee hereby agrees to include in any such work a brief summary of
the changes made to Python 1.6.1.
4. CNRI is making Python 1.6.1 available to Licensee on an "AS IS"
basis. CNRI MAKES NO REPRESENTATIONS OR WARRANTIES, EXPRESS OR
IMPLIED. BY WAY OF EXAMPLE, BUT NOT LIMITATION, CNRI MAKES NO AND
DISCLAIMS ANY REPRESENTATION OR WARRANTY OF MERCHANTABILITY OR FITNESS
FOR ANY PARTICULAR PURPOSE OR THAT THE USE OF PYTHON 1.6.1 WILL NOT
INFRINGE ANY THIRD PARTY RIGHTS.
5. CNRI SHALL NOT BE LIABLE TO LICENSEE OR ANY OTHER USERS OF PYTHON
1.6.1 FOR ANY INCIDENTAL, SPECIAL, OR CONSEQUENTIAL DAMAGES OR LOSS AS
A RESULT OF MODIFYING, DISTRIBUTING, OR OTHERWISE USING PYTHON 1.6.1,
OR ANY DERIVATIVE THEREOF, EVEN IF ADVISED OF THE POSSIBILITY THEREOF.
6. This License Agreement will automatically terminate upon a material
breach of its terms and conditions.
7. This License Agreement shall be governed by the federal
intellectual property law of the United States, including without
limitation the federal copyright law, and, to the extent such
U.S. federal law does not apply, by the law of the Commonwealth of
Virginia, excluding Virginia's conflict of law provisions.
Notwithstanding the foregoing, with regard to derivative works based
on Python 1.6.1 that incorporate non-separable material that was
previously distributed under the GNU General Public License (GPL), the
law of the Commonwealth of Virginia shall govern this License
Agreement only as to issues arising under or with respect to
Paragraphs 4, 5, and 7 of this License Agreement. Nothing in this
License Agreement shall be deemed to create any relationship of
agency, partnership, or joint venture between CNRI and Licensee. This
License Agreement does not grant permission to use CNRI trademarks or
trade name in a trademark sense to endorse or promote products or
services of Licensee, or any third party.
8. By clicking on the "ACCEPT" button where indicated, or by copying,
installing or otherwise using Python 1.6.1, Licensee agrees to be
bound by the terms and conditions of this License Agreement.
ACCEPT
CWI LICENSE AGREEMENT FOR PYTHON 0.9.0 THROUGH 1.2
--------------------------------------------------
Copyright (c) 1991 - 1995, Stichting Mathematisch Centrum Amsterdam,
The Netherlands. All rights reserved.
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the name of Stichting Mathematisch
Centrum or CWI not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission.
STICHTING MATHEMATISCH CENTRUM DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL STICHTING MATHEMATISCH CENTRUM BE LIABLE
FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Licenses and Acknowledgements for Incorporated Software
This section is an incomplete, but growing list of licenses and acknowledgements for third-party software incorporated in the Python distribution.
Mersenne Twister
The _random module includes code based on a download from http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/MT2002/emt19937ar.html. The following are the verbatim comments from the original code:
A C-program for MT19937, with initialization improved 2002/1/26.
Coded by Takuji Nishimura and Makoto Matsumoto.
Before using, initialize the state by using init_genrand(seed)
or init_by_array(init_key, key_length).
Copyright (C) 1997 - 2002, Makoto Matsumoto and Takuji Nishimura,
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. The names of its contributors may not be used to endorse or promote
products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Any feedback is very welcome.
http://www.math.sci.hiroshima-u.ac.jp/~m-mat/MT/emt.html
email: m-mat @ math.sci.hiroshima-u.ac.jp (remove space)
Sockets
The socket module uses the functions getaddrinfo() and getnameinfo(), which are coded in separate source files from the WIDE Project, http://www.wide.ad.jp/.
Copyright (C) 1995, 1996, 1997, and 1998 WIDE Project.
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
3. Neither the name of the project nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE PROJECT AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE PROJECT OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
Asynchronous socket services
The asynchat and asyncore modules contain the following notice:
Copyright 1996 by Sam Rushing
All Rights Reserved
Permission to use, copy, modify, and distribute this software and
its documentation for any purpose and without fee is hereby
granted, provided that the above copyright notice appear in all
copies and that both that copyright notice and this permission
notice appear in supporting documentation, and that the name of Sam
Rushing not be used in advertising or publicity pertaining to
distribution of the software without specific, written prior
permission.
SAM RUSHING DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS SOFTWARE,
INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS, IN
NO EVENT SHALL SAM RUSHING BE LIABLE FOR ANY SPECIAL, INDIRECT OR
CONSEQUENTIAL DAMAGES OR ANY DAMAGES WHATSOEVER RESULTING FROM LOSS
OF USE, DATA OR PROFITS, WHETHER IN AN ACTION OF CONTRACT,
NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF OR IN
CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Cookie management
The http.cookies module contains the following notice:
Copyright 2000 by Timothy O'Malley <timo@alum.mit.edu>
All Rights Reserved
Permission to use, copy, modify, and distribute this software
and its documentation for any purpose and without fee is hereby
granted, provided that the above copyright notice appear in all
copies and that both that copyright notice and this permission
notice appear in supporting documentation, and that the name of
Timothy O'Malley not be used in advertising or publicity
pertaining to distribution of the software without specific, written
prior permission.
Timothy O'Malley DISCLAIMS ALL WARRANTIES WITH REGARD TO THIS
SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY
AND FITNESS, IN NO EVENT SHALL Timothy O'Malley BE LIABLE FOR
ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR
PERFORMANCE OF THIS SOFTWARE.
Execution tracing
The trace module contains the following notice:
portions copyright 2001, Autonomous Zones Industries, Inc., all rights...
err... reserved and offered to the public under the terms of the
Python 2.2 license.
Author: Zooko O'Whielacronx
http://zooko.com/
mailto:zooko@zooko.com
Copyright 2000, Mojam Media, Inc., all rights reserved.
Author: Skip Montanaro
Copyright 1999, Bioreason, Inc., all rights reserved.
Author: Andrew Dalke
Copyright 1995-1997, Automatrix, Inc., all rights reserved.
Author: Skip Montanaro
Copyright 1991-1995, Stichting Mathematisch Centrum, all rights reserved.
Permission to use, copy, modify, and distribute this Python software and
its associated documentation for any purpose without fee is hereby
granted, provided that the above copyright notice appears in all copies,
and that both that copyright notice and this permission notice appear in
supporting documentation, and that the name of neither Automatrix,
Bioreason or Mojam Media be used in advertising or publicity pertaining to
distribution of the software without specific, written prior permission.
UUencode and UUdecode functions
The uu module contains the following notice:
Copyright 1994 by Lance Ellinghouse
Cathedral City, California Republic, United States of America.
All Rights Reserved
Permission to use, copy, modify, and distribute this software and its
documentation for any purpose and without fee is hereby granted,
provided that the above copyright notice appear in all copies and that
both that copyright notice and this permission notice appear in
supporting documentation, and that the name of Lance Ellinghouse
not be used in advertising or publicity pertaining to distribution
of the software without specific, written prior permission.
LANCE ELLINGHOUSE DISCLAIMS ALL WARRANTIES WITH REGARD TO
THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANTABILITY AND
FITNESS, IN NO EVENT SHALL LANCE ELLINGHOUSE BE LIABLE
FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT
OF OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
Modified by Jack Jansen, CWI, July 1995:
- Use binascii module to do the actual line-by-line conversion
between ascii and binary. This results in a 1000-fold speedup. The C
version is still 5 times faster, though.
- Arguments more compliant with Python standard
XML Remote Procedure Calls
The xmlrpc.client module contains the following notice:
The XML-RPC client interface is
Copyright (c) 1999-2002 by Secret Labs AB
Copyright (c) 1999-2002 by Fredrik Lundh
By obtaining, using, and/or copying this software and/or its
associated documentation, you agree that you have read, understood,
and will comply with the following terms and conditions:
Permission to use, copy, modify, and distribute this software and
its associated documentation for any purpose and without fee is
hereby granted, provided that the above copyright notice appears in
all copies, and that both that copyright notice and this permission
notice appear in supporting documentation, and that the name of
Secret Labs AB or the author not be used in advertising or publicity
pertaining to distribution of the software without specific, written
prior permission.
SECRET LABS AB AND THE AUTHOR DISCLAIMS ALL WARRANTIES WITH REGARD
TO THIS SOFTWARE, INCLUDING ALL IMPLIED WARRANTIES OF MERCHANT-
ABILITY AND FITNESS. IN NO EVENT SHALL SECRET LABS AB OR THE AUTHOR
BE LIABLE FOR ANY SPECIAL, INDIRECT OR CONSEQUENTIAL DAMAGES OR ANY
DAMAGES WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS,
WHETHER IN AN ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS
ACTION, ARISING OUT OF OR IN CONNECTION WITH THE USE OR PERFORMANCE
OF THIS SOFTWARE.
Select kqueue
The select module contains the following notice for the kqueue interface:
Copyright (c) 2000 Doug White, 2006 James Knight, 2007 Christian Heimes
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. Redistributions in binary form must reproduce the above copyright
notice, this list of conditions and the following disclaimer in the
documentation and/or other materials provided with the distribution.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
SUCH DAMAGE.
SipHash24
The file Python/pyhash.c contains Marek Majkowski's implementation of Dan Bernstein's SipHash24 algorithm. It contains the following note:
<MIT License>
Copyright (c) 2013 Marek Majkowski <marek@popcount.org>
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
</MIT License>
Original location:
https://github.com/majek/csiphash/
Solution inspired by code from:
Samuel Neves (supercop/crypto_auth/siphash24/little)
djb (supercop/crypto_auth/siphash24/little2)
Jean-Philippe Aumasson (https://131002.net/siphash/siphash24.c)
strtod and dtoa
The file Python/dtoa.c, which supplies C functions dtoa and strtod for conversion of C doubles to and from strings, is derived from the file of the same name by David M. Gay, currently available from http://www.netlib.org/fp/. The original file, as retrieved on March 16, 2009, contains the following copyright and licensing notice:
/****************************************************************
*
* The author of this software is David M. Gay.
*
* Copyright (c) 1991, 2000, 2001 by Lucent Technologies.
*
* Permission to use, copy, modify, and distribute this software for any
* purpose without fee is hereby granted, provided that this entire notice
* is included in all copies of any software which is or includes a copy
* or modification of this software and in all copies of the supporting
* documentation for such software.
*
* THIS SOFTWARE IS BEING PROVIDED "AS IS", WITHOUT ANY EXPRESS OR IMPLIED
* WARRANTY. IN PARTICULAR, NEITHER THE AUTHOR NOR LUCENT MAKES ANY
* REPRESENTATION OR WARRANTY OF ANY KIND CONCERNING THE MERCHANTABILITY
* OF THIS SOFTWARE OR ITS FITNESS FOR ANY PARTICULAR PURPOSE.
*
***************************************************************/
OpenSSL
The modules hashlib, posix, ssl, crypt use the OpenSSL library for added performance if made available by the operating system. Additionally, the Windows and Mac OS X installers for Python may include a copy of the OpenSSL libraries, so we include a copy of the OpenSSL license here:
LICENSE ISSUES
==============
The OpenSSL toolkit stays under a dual license, i.e. both the conditions of
the OpenSSL License and the original SSLeay license apply to the toolkit.
See below for the actual license texts. Actually both licenses are BSD-style
Open Source licenses. In case of any license issues related to OpenSSL
please contact openssl-core@openssl.org.
OpenSSL License
---------------
/* ====================================================================
* Copyright (c) 1998-2008 The OpenSSL Project. All rights reserved.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* 1. Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
*
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
*
* 3. All advertising materials mentioning features or use of this
* software must display the following acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit. (http://www.openssl.org/)";
*
* 4. The names "OpenSSL Toolkit" and "OpenSSL Project" must not be used to
* endorse or promote products derived from this software without
* prior written permission. For written permission, please contact
* openssl-core@openssl.org.
*
* 5. Products derived from this software may not be called "OpenSSL"
* nor may "OpenSSL" appear in their names without prior written
* permission of the OpenSSL Project.
*
* 6. Redistributions of any form whatsoever must retain the following
* acknowledgment:
* "This product includes software developed by the OpenSSL Project
* for use in the OpenSSL Toolkit (http://www.openssl.org/)";
*
* THIS SOFTWARE IS PROVIDED BY THE OpenSSL PROJECT ``AS IS'' AND ANY
* EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
* PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE OpenSSL PROJECT OR
* ITS CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
* NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
* LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
* STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
* ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
* OF THE POSSIBILITY OF SUCH DAMAGE.
* ====================================================================
*
* This product includes cryptographic software written by Eric Young
* (eay@cryptsoft.com). This product includes software written by Tim
* Hudson (tjh@cryptsoft.com).
*
*/
Original SSLeay License
-----------------------
/* Copyright (C) 1995-1998 Eric Young (eay@cryptsoft.com)
* All rights reserved.
*
* This package is an SSL implementation written
* by Eric Young (eay@cryptsoft.com).
* The implementation was written so as to conform with Netscapes SSL.
*
* This library is free for commercial and non-commercial use as long as
* the following conditions are aheared to. The following conditions
* apply to all code found in this distribution, be it the RC4, RSA,
* lhash, DES, etc., code; not just the SSL code. The SSL documentation
* included with this distribution is covered by the same copyright terms
* except that the holder is Tim Hudson (tjh@cryptsoft.com).
*
* Copyright remains Eric Young's, and as such any Copyright notices in
* the code are not to be removed.
* If this package is used in a product, Eric Young should be given attribution
* as the author of the parts of the library used.
* This can be in the form of a textual message at program startup or
* in documentation (online or textual) provided with the package.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
* 1. Redistributions of source code must retain the copyright
* notice, this list of conditions and the following disclaimer.
* 2. Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in the
* documentation and/or other materials provided with the distribution.
* 3. All advertising materials mentioning features or use of this software
* must display the following acknowledgement:
* "This product includes cryptographic software written by
* Eric Young (eay@cryptsoft.com)"
* The word 'cryptographic' can be left out if the rouines from the library
* being used are not cryptographic related :-).
* 4. If you include any Windows specific code (or a derivative thereof) from
* the apps directory (application code) you must include an acknowledgement:
* "This product includes software written by Tim Hudson (tjh@cryptsoft.com)"
*
* THIS SOFTWARE IS PROVIDED BY ERIC YOUNG ``AS IS'' AND
* ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
* IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
* ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS BE LIABLE
* FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
* DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS
* OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
* HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
* LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
* OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
* SUCH DAMAGE.
*
* The licence and distribution terms for any publically available version or
* derivative of this code cannot be changed. i.e. this code cannot simply be
* copied and put under another distribution licence
* [including the GNU Public Licence.]
*/
libffi
The _ctypes extension is built using an included copy of the libffi sources unless the build is configured --with-system-libffi:
Copyright (c) 1996-2008 Red Hat, Inc and others.
Permission is hereby granted, free of charge, to any person obtaining
a copy of this software and associated documentation files (the
``Software''), to deal in the Software without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Software, and to
permit persons to whom the Software is furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED ``AS IS'', WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT
HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY,
WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
zlib
The zlib extension is built using an included copy of the zlib sources if the zlib version found on the system is too old to be used for the build:
Copyright (C) 1995-2011 Jean-loup Gailly and Mark Adler
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Jean-loup Gailly Mark Adler
jloup@gzip.org madler@alumni.caltech.edu
cfuhash
The implementation of the hash table used by the tracemalloc module is based on the cfuhash project:
Copyright (c) 2005 Don Owens
All rights reserved.
This code is released under the BSD license:
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of the author nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR
SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION)
HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED
OF THE POSSIBILITY OF SUCH DAMAGE.
| 37,296 | Markdown | 48.795728 | 332 | 0.76346 |
gazebosim/gz-omni/PACKAGE-LICENSES/IlmBase-LICENSE.md | Copyright (c) 2006, Industrial Light & Magic, a division of Lucasfilm
Entertainment Company Ltd. Portions contributed and copyright held by
others as indicated. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above
copyright notice, this list of conditions and the following
disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided with
the distribution.
* Neither the name of Industrial Light & Magic nor the names of
any other contributors to this software may be used to endorse or
promote products derived from this software without specific prior
written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS
IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
| 1,697 | Markdown | 47.514284 | 74 | 0.786682 |
gazebosim/gz-omni/PACKAGE-LICENSES/zlib-LICENSE.md | zlib.h -- interface of the 'zlib' general purpose compression library
version 1.2.11, January 15th, 2017
Copyright (C) 1995-2017 Jean-loup Gailly and Mark Adler
This software is provided 'as-is', without any express or implied
warranty. In no event will the authors be held liable for any damages
arising from the use of this software.
Permission is granted to anyone to use this software for any purpose,
including commercial applications, and to alter it and redistribute it
freely, subject to the following restrictions:
1. The origin of this software must not be misrepresented; you must not
claim that you wrote the original software. If you use this software
in a product, an acknowledgment in the product documentation would be
appreciated but is not required.
2. Altered source versions must be plainly marked as such, and must not be
misrepresented as being the original software.
3. This notice may not be removed or altered from any source distribution.
Jean-loup Gailly Mark Adler
jloup@gzip.org madler@alumni.caltech.edu | 1,067 | Markdown | 45.434781 | 74 | 0.781631 |
gazebosim/gz-omni/PACKAGE-LICENSES/bzip2-LICENSE.md |
--------------------------------------------------------------------------
This program, "bzip2", the associated library "libbzip2", and all
documentation, are copyright (C) 1996-2019 Julian R Seward. All
rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:
1. Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
2. The origin of this software must not be misrepresented; you must
not claim that you wrote the original software. If you use this
software in a product, an acknowledgment in the product
documentation would be appreciated but is not required.
3. Altered source versions must be plainly marked as such, and must
not be misrepresented as being the original software.
4. The name of the author may not be used to endorse or promote
products derived from this software without specific prior written
permission.
THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS
OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY
DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL
DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE
GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Julian Seward, jseward@acm.org
bzip2/libbzip2 version 1.0.8 of 13 July 2019
--------------------------------------------------------------------------
| 1,896 | Markdown | 43.116278 | 74 | 0.727321 |
gazebosim/gz-omni/PACKAGE-LICENSES/boost-LICENSE.md | Boost Software License - Version 1.0 - August 17th, 2003
Permission is hereby granted, free of charge, to any person or organization
obtaining a copy of the software and accompanying documentation covered by
this license (the "Software") to use, reproduce, display, distribute,
execute, and transmit the Software, and to prepare derivative works of the
Software, and to permit third-parties to whom the Software is furnished to
do so, all subject to the following:
The copyright notices in the Software and this entire statement, including
the above license grant, this restriction and the following disclaimer,
must be included in all copies of the Software, in whole or in part, and
all derivative works of the Software, unless such copies or derivative
works are solely in the form of machine-executable object code generated by
a source language processor.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE, TITLE AND NON-INFRINGEMENT. IN NO EVENT
SHALL THE COPYRIGHT HOLDERS OR ANYONE DISTRIBUTING THE SOFTWARE BE LIABLE
FOR ANY DAMAGES OR OTHER LIABILITY, WHETHER IN CONTRACT, TORT OR OTHERWISE,
ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
DEALINGS IN THE SOFTWARE.
| 1,338 | Markdown | 54.791664 | 75 | 0.807922 |
gazebosim/gz-omni/PACKAGE-LICENSES/nv_usd-LICENSE.md | Universal Scene Description (USD) components are licensed under the following terms:
Modified Apache 2.0 License
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor
and its affiliates, except as required to comply with Section 4(c) of
the License and to reproduce the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
============================================================
RapidJSON
============================================================
Tencent is pleased to support the open source community by making RapidJSON available.
Copyright (C) 2015 THL A29 Limited, a Tencent company, and Milo Yip. All rights reserved.
If you have downloaded a copy of the RapidJSON binary from Tencent, please note that the RapidJSON binary is licensed under the MIT License.
If you have downloaded a copy of the RapidJSON source code from Tencent, please note that RapidJSON source code is licensed under the MIT License, except for the third-party components listed below which are subject to different license terms. Your integration of RapidJSON into your own projects may require compliance with the MIT License, as well as the other licenses applicable to the third-party components included within RapidJSON. To avoid the problematic JSON license in your own projects, it's sufficient to exclude the bin/jsonchecker/ directory, as it's the only code under the JSON license.
A copy of the MIT License is included in this file.
Other dependencies and licenses:
Open Source Software Licensed Under the BSD License:
--------------------------------------------------------------------
The msinttypes r29
Copyright (c) 2006-2013 Alexander Chemeris
All rights reserved.
Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.
* Neither the name of copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE REGENTS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE REGENTS AND CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
Open Source Software Licensed Under the JSON License:
--------------------------------------------------------------------
json.org
Copyright (c) 2002 JSON.org
All Rights Reserved.
JSON_checker
Copyright (c) 2002 JSON.org
All Rights Reserved.
Terms of the JSON License:
---------------------------------------------------
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
The Software shall be used for Good, not Evil.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Terms of the MIT License:
--------------------------------------------------------------------
Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
============================================================
pygilstate_check
============================================================
The MIT License (MIT)
Copyright (c) 2014, Pankaj Pandey
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
============================================================
double-conversion
============================================================
Copyright 2006-2011, the V8 project authors. All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following
disclaimer in the documentation and/or other materials provided
with the distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived
from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
============================================================
OpenEXR/IlmBase/Half
============================================================
///////////////////////////////////////////////////////////////////////////
//
// Copyright (c) 2002, Industrial Light & Magic, a division of Lucas
// Digital Ltd. LLC
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without
// modification, are permitted provided that the following conditions are
// met:
// * Redistributions of source code must retain the above copyright
// notice, this list of conditions and the following disclaimer.
// * Redistributions in binary form must reproduce the above
// copyright notice, this list of conditions and the following disclaimer
// in the documentation and/or other materials provided with the
// distribution.
// * Neither the name of Industrial Light & Magic nor the names of
// its contributors may be used to endorse or promote products derived
// from this software without specific prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
// "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
// LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
// A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
// OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
// LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
// DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
// THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
// (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
// OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
///////////////////////////////////////////////////////////////////////////
============================================================
Apple Technical Q&A QA1361 - Detecting the Debugger
https://developer.apple.com/library/content/qa/qa1361/_index.html
============================================================
Sample code project: Detecting the Debugger
Version: 1.0
Abstract: Shows how to determine if code is being run under the debugger.
IMPORTANT: This Apple software is supplied to you by Apple
Inc. ("Apple") in consideration of your agreement to the following
terms, and your use, installation, modification or redistribution of
this Apple software constitutes acceptance of these terms. If you do
not agree with these terms, please do not use, install, modify or
redistribute this Apple software.
In consideration of your agreement to abide by the following terms, and
subject to these terms, Apple grants you a personal, non-exclusive
license, under Apple's copyrights in this original Apple software (the
"Apple Software"), to use, reproduce, modify and redistribute the Apple
Software, with or without modifications, in source and/or binary forms;
provided that if you redistribute the Apple Software in its entirety and
without modifications, you must retain this notice and the following
text and disclaimers in all such redistributions of the Apple Software.
Neither the name, trademarks, service marks or logos of Apple Inc. may
be used to endorse or promote products derived from the Apple Software
without specific prior written permission from Apple. Except as
expressly stated in this notice, no other rights or licenses, express or
implied, are granted by Apple herein, including but not limited to any
patent rights that may be infringed by your derivative works or by other
works in which the Apple Software may be incorporated.
The Apple Software is provided by Apple on an "AS IS" basis. APPLE
MAKES NO WARRANTIES, EXPRESS OR IMPLIED, INCLUDING WITHOUT LIMITATION
THE IMPLIED WARRANTIES OF NON-INFRINGEMENT, MERCHANTABILITY AND FITNESS
FOR A PARTICULAR PURPOSE, REGARDING THE APPLE SOFTWARE OR ITS USE AND
OPERATION ALONE OR IN COMBINATION WITH YOUR PRODUCTS.
IN NO EVENT SHALL APPLE BE LIABLE FOR ANY SPECIAL, INDIRECT, INCIDENTAL
OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) ARISING IN ANY WAY OUT OF THE USE, REPRODUCTION,
MODIFICATION AND/OR DISTRIBUTION OF THE APPLE SOFTWARE, HOWEVER CAUSED
AND WHETHER UNDER THEORY OF CONTRACT, TORT (INCLUDING NEGLIGENCE),
STRICT LIABILITY OR OTHERWISE, EVEN IF APPLE HAS BEEN ADVISED OF THE
POSSIBILITY OF SUCH DAMAGE.
============================================================
LZ4
============================================================
LZ4 - Fast LZ compression algorithm
Copyright (C) 2011-2017, Yann Collet.
BSD 2-Clause License (http://www.opensource.org/licenses/bsd-license.php)
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:
* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
You can contact the author at :
- LZ4 homepage : http://www.lz4.org
- LZ4 source repository : https://github.com/lz4/lz4
============================================================
stb
============================================================
stb_image - v2.19 - public domain image loader - http://nothings.org/stb
no warranty implied; use at your own risk
stb_image_resize - v0.95 - public domain image resizing
by Jorge L Rodriguez (@VinoBS) - 2014
http://github.com/nothings/stb
stb_image_write - v1.09 - public domain - http://nothings.org/stb/stb_image_write.h
writes out PNG/BMP/TGA/JPEG/HDR images to C stdio - Sean Barrett 2010-2015
no warranty implied; use at your own risk
ALTERNATIVE B - Public Domain (www.unlicense.org)
This is free and unencumbered software released into the public domain.
Anyone is free to copy, modify, publish, use, compile, sell, or distribute this
software, either in source code form or as a compiled binary, for any purpose,
commercial or non-commercial, and by any means.
In jurisdictions that recognize copyright laws, the author or authors of this
software dedicate any and all copyright interest in the software to the public
domain. We make this dedication for the benefit of the public at large and to
the detriment of our heirs and successors. We intend this dedication to be an
overt act of relinquishment in perpetuity of all present and future rights to
this software under copyright law.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION
WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
| 26,573 | Markdown | 57.792035 | 739 | 0.727317 |
gazebosim/gz-omni/PACKAGE-LICENSES/opensubdiv-LICENSE.md |
Modified Apache 2.0 License
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor
and its affiliates, except as required to comply with Section 4(c) of
the License and to reproduce the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
| 10,038 | Markdown | 56.365714 | 77 | 0.73421 |
gazebosim/gz-omni/PACKAGE-LICENSES/glew-LICENSE.md | The OpenGL Extension Wrangler Library
Copyright (C) 2002-2007, Milan Ikits <milan ikits[]ieee org>
Copyright (C) 2002-2007, Marcelo E. Magallon <mmagallo[]debian org>
Copyright (C) 2002, Lev Povalahev
All rights reserved.
Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* The name of the author may be used to endorse or promote products
derived from this software without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
THE POSSIBILITY OF SUCH DAMAGE.
Mesa 3-D graphics library
Version: 7.0
Copyright (C) 1999-2007 Brian Paul All Rights Reserved.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and associated documentation files (the "Software"),
to deal in the Software without restriction, including without limitation
the rights to use, copy, modify, merge, publish, distribute, sublicense,
and/or sell copies of the Software, and to permit persons to whom the
Software is furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
BRIAN PAUL BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN
AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
Copyright (c) 2007 The Khronos Group Inc.
Permission is hereby granted, free of charge, to any person obtaining a
copy of this software and/or associated documentation files (the
"Materials"), to deal in the Materials without restriction, including
without limitation the rights to use, copy, modify, merge, publish,
distribute, sublicense, and/or sell copies of the Materials, and to
permit persons to whom the Materials are furnished to do so, subject to
the following conditions:
The above copyright notice and this permission notice shall be included
in all copies or substantial portions of the Materials.
THE MATERIALS ARE PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
MATERIALS OR THE USE OR OTHER DEALINGS IN THE MATERIALS.
| 3,797 | Markdown | 50.324324 | 76 | 0.801949 |
gazebosim/gz-omni/source/ignition_live/ThreadSafe.hpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#ifndef IGNITION_OMNIVERSE_THREADSAFE_HPP
#define IGNITION_OMNIVERSE_THREADSAFE_HPP
#include <mutex>
#include <utility>
namespace ignition::omniverse
{
/// \brief Make an object threadsafe by locking it behind a mutex.
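/// Example (hypothetical usage):
///   ThreadSafe<std::vector<int>> safeVec(std::vector<int>{});
///   {
///     auto ref = safeVec.Lock();  // mutex acquired for the lifetime of ref
///     ref->push_back(42);
///   }  // mutex released when `ref` is destroyed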
template <typename T, typename MutexT = std::recursive_mutex>
class ThreadSafe
{
public:
class Ref
{
public:
Ref(T& _data, MutexT& _m);
~Ref();
// don't allow copying and moving
Ref(const Ref&) = delete;
Ref(Ref&&) = delete;
Ref& operator=(const Ref&) = delete;
T& operator*();
    T* operator->();
private:
MutexT& m;
T& data;
};
/// \brief Takes ownership of the data.
explicit ThreadSafe(T&& _data);
// don't allow copying
ThreadSafe(const ThreadSafe&) = delete;
ThreadSafe& operator=(const ThreadSafe&) = delete;
  // moving is ok; the mutex itself is not movable, so the new object gets a
  // fresh mutex and the moved-from object must not be locked
  ThreadSafe(ThreadSafe&& _other) : data(std::move(_other.data)) {}
  /// \brief Locks the mutex and returns a RAII reference to the data;
  /// the mutex is released when the returned Ref is destroyed.
Ref Lock();
private:
T data;
MutexT mutex;
};
template <typename T, typename MutexT>
ThreadSafe<T, MutexT>::Ref::Ref(T& _data, MutexT& _m) : data(_data), m(_m)
{
this->m.lock();
}
template <typename T, typename MutexT>
ThreadSafe<T, MutexT>::Ref::~Ref()
{
this->m.unlock();
}
template <typename T, typename MutexT>
T& ThreadSafe<T, MutexT>::Ref::operator*()
{
return this->data;
}
template <typename T, typename MutexT>
T* ThreadSafe<T, MutexT>::Ref::operator->()
{
  return &this->data;
}
template <typename T, typename MutexT>
ThreadSafe<T, MutexT>::ThreadSafe(T&& _data) : data(std::move(_data))
{
}
template <typename T, typename MutexT>
typename ThreadSafe<T, MutexT>::Ref ThreadSafe<T, MutexT>::Lock()
{
return Ref(this->data, this->mutex);
}
} // namespace ignition::omniverse
#endif
| 2,298 | C++ | 20.688679 | 75 | 0.680157 |
gazebosim/gz-omni/source/ignition_live/SetOp.hpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#ifndef IGNITION_OMNIVERSE_SET_OP_HPP
#define IGNITION_OMNIVERSE_SET_OP_HPP
#include <pxr/base/gf/vec3f.h>
#include <pxr/usd/usdGeom/xform.h>
namespace ignition
{
namespace omniverse
{
// A utility class to set the position, rotation, or scale values
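// Example (hypothetical usage): author a translate op on an xformable prim.
//   pxr::UsdGeomXformable xForm(prim);
//   pxr::UsdGeomXformOp translateOp;  // empty handle; SetOp creates the op
//   SetOp(xForm, translateOp, pxr::UsdGeomXformOp::TypeTranslate,
//         pxr::GfVec3d(1, 2, 3), pxr::UsdGeomXformOp::PrecisionDouble);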
class SetOp
{
public:
SetOp(pxr::UsdGeomXformable& xForm, pxr::UsdGeomXformOp& op,
pxr::UsdGeomXformOp::Type opType, const pxr::GfVec3d& value,
const pxr::UsdGeomXformOp::Precision precision)
{
if (!op)
{
op = xForm.AddXformOp(opType, precision);
}
if (op.GetPrecision() == pxr::UsdGeomXformOp::Precision::PrecisionFloat)
op.Set(pxr::GfVec3f(value));
else
op.Set(value);
}
};
} // namespace omniverse
} // namespace ignition
#endif
| 1,373 | C++ | 25.941176 | 76 | 0.706482 |
gazebosim/gz-omni/source/ignition_live/OmniClientpp.cpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include "OmniClientpp.hpp"
#include <ignition/common/Console.hh>
#include <OmniClient.h>
namespace ignition::omniverse
{
OmniverseLock::OmniverseLock(const std::string& _url) : url(_url)
{
omniClientLock(this->url.c_str(), nullptr, nullptr);
}
OmniverseLock::~OmniverseLock()
{
omniClientUnlock(this->url.c_str(), nullptr, nullptr);
}
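// Example (hypothetical usage): hold a lock on a stage for the scope of an
// edit so other Omniverse clients cannot modify it concurrently.
//   {
//     OmniverseLock lock("omniverse://localhost/Users/ign/stage.usd");
//     // ... edit the stage ...
//   }  // unlocked when `lock` goes out of scope
// Both eOmniClientResult_Ok and eOmniClientResult_OkLatest count as success.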
bool CheckClientResult(OmniClientResult result)
{
return result == eOmniClientResult_Ok || result == eOmniClientResult_OkLatest;
}
OmniverseSync::MaybeError<OmniClientListEntry> OmniverseSync::Stat(
const std::string& url) noexcept
{
MaybeError<OmniClientListEntry> ret(eOmniClientResult_Error);
omniClientWait(omniClientStat(
url.c_str(), &ret,
[](void* userData, OmniClientResult clientResult,
const OmniClientListEntry* entry) noexcept
{
auto* ret =
reinterpret_cast<MaybeError<OmniClientListEntry>*>(userData);
if (!CheckClientResult(clientResult))
{
*ret = clientResult;
return;
}
*ret = *entry;
}));
return ret;
}
} // namespace ignition::omniverse
| 1,742 | C++ | 27.112903 | 80 | 0.700918 |
gazebosim/gz-omni/source/ignition_live/OmniverseConnect.hpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#ifndef IGNITION_OMNIVERSE_CONNECT_HPP
#define IGNITION_OMNIVERSE_CONNECT_HPP
#include "OmniClientpp.hpp"
#include <pxr/usd/usd/stage.h>
#include <OmniClient.h>
#include <OmniUsdLive.h>
#include <string>
// Global for making the logging reasonable
static std::mutex gLogMutex;
namespace ignition::omniverse
{
static std::string normalizedStageUrl;
// Stage URL really only needs to contain the server in the URL. eg.
// omniverse://ov-prod
void PrintConnectedUsername(const std::string& stageUrl);
/// \brief Creates a new ignition stage in Omniverse; does nothing if the
/// stage already exists.
/// \details The new stage is authored with ignition metadata.
/// \return The url of the stage
MaybeError<std::string, GenericError> CreateOmniverseModel(
const std::string& destinationPath);
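// Example (hypothetical usage):
//   auto result = CreateOmniverseModel("omniverse://localhost/Users/ign/stage.usd");
//   if (!result)
//     std::cerr << result.Error() << std::endl;
//   else
//     std::string stageUrl = result.Value();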
void CheckpointFile(const char* stageUrl, const char* comment);
// Startup Omniverse
bool StartOmniverse();
} // namespace ignition::omniverse
#endif
| 1,581 | C++ | 27.763636 | 75 | 0.752056 |
gazebosim/gz-omni/source/ignition_live/GetOp.hpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#ifndef IGNITION_OMNIVERSE_GET_OP_HPP
#define IGNITION_OMNIVERSE_GET_OP_HPP
#include <pxr/base/gf/vec3f.h>
#include <pxr/base/gf/quatf.h>
#include <pxr/usd/usdGeom/xform.h>
#include <pxr/usd/usdGeom/xformCommonAPI.h>
#include <ignition/math/Quaternion.hh>
namespace ignition
{
namespace omniverse
{
// A utility class to get the position, rotation, or scale values
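// Example (hypothetical usage): read back the currently authored values.
//   pxr::UsdGeomXformable xForm(prim);
//   GetOp ops(xForm);
//   pxr::GfVec3d pos = ops.position;  // translate component
//   pxr::GfVec3f rot = ops.rotXYZ;    // rotation as XYZ Euler angles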
class GetOp
{
public:
GetOp(pxr::UsdGeomXformable& xForm)
{
this->position = pxr::GfVec3d(0);
this->rotXYZ = pxr::GfVec3f(0);
this->scale = pxr::GfVec3f(1);
ignition::math::Quaterniond orientQuat;
bool resetXformStack = false;
std::vector<pxr::UsdGeomXformOp> xFormOps =
xForm.GetOrderedXformOps(&resetXformStack);
// Get the current xform op values
for (size_t i = 0; i < xFormOps.size(); i++)
{
switch (xFormOps[i].GetOpType())
{
case pxr::UsdGeomXformOp::TypeTranslate:
translateOp = xFormOps[i];
translateOp.Get(&this->position);
break;
case pxr::UsdGeomXformOp::TypeRotateXYZ:
rotateOp = xFormOps[i];
rotateOp.Get(&this->rotXYZ);
break;
case pxr::UsdGeomXformOp::TypeOrient:
rotateOp = xFormOps[i];
rotateOp.Get(&this->rotQ);
orientQuat = ignition::math::Quaterniond(
this->rotQ.GetReal(),
this->rotQ.GetImaginary()[0],
this->rotQ.GetImaginary()[1],
this->rotQ.GetImaginary()[2]);
this->rotXYZ = pxr::GfVec3f(
orientQuat.Roll(), orientQuat.Pitch(), orientQuat.Yaw());
break;
case pxr::UsdGeomXformOp::TypeScale:
scaleOp = xFormOps[i];
scaleOp.Get(&this->scale);
          break;
        default:
          break;
      }
}
}
// Define storage for the different xform ops that Omniverse Kit likes to use
pxr::UsdGeomXformOp translateOp;
pxr::UsdGeomXformOp rotateOp;
pxr::UsdGeomXformOp scaleOp;
pxr::GfVec3d position;
pxr::GfVec3f rotXYZ;
pxr::GfVec3f scale;
pxr::GfQuatf rotQ;
};
} // namespace omniverse
} // namespace ignition
#endif
| 2,710 | C++ | 28.467391 | 79 | 0.653875 |
gazebosim/gz-omni/source/ignition_live/Error.hpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#ifndef IGNITION_OMNIVERSE_ERROR_HPP
#define IGNITION_OMNIVERSE_ERROR_HPP
#include <ostream>
#include <string>
#include <variant>
namespace ignition::omniverse
{
class GenericError
{
public:
std::string message;
explicit GenericError(const std::string& _message) : message(_message) {}
friend std::ostream& operator<<(std::ostream& _output,
const GenericError& _error)
{
_output << _error.message;
return _output;
}
};
/// \brief Represents the result of a function which may contain an error.
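/// Example (hypothetical usage, `ParseNumber` is a placeholder):
///   MaybeError<int, GenericError> result = ParseNumber(input);
///   if (!result)
///     std::cerr << result.Error() << std::endl;
///   else
///     Use(result.Value());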
template <typename T, typename ErrorT>
class MaybeError
{
public:
// allow implicit conversion
MaybeError(const T& _val) : data(_val) {}
MaybeError(const ErrorT& _error) : data(_error) {}
/// \brief `true` if there is no error
explicit operator bool() const
{
return !std::holds_alternative<ErrorT>(this->data);
}
/// \brief Get the value of the result, throws if there is an error.
const T& Value() const { return std::get<T>(this->data); }
/// \brief Get the error, throws if there is no error.
const ErrorT& Error() const { return std::get<ErrorT>(this->data); }
private:
std::variant<T, ErrorT> data;
};
} // namespace ignition::omniverse
#endif
| 1,842 | C++ | 26.102941 | 75 | 0.690011 |
gazebosim/gz-omni/source/ignition_live/FUSDLayerNoticeListener.cpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include "FUSDLayerNoticeListener.hpp"
#include <memory>
#include <string>
#include <ignition/common/Console.hh>
#include <ignition/transport/Node.hh>
namespace ignition
{
namespace omniverse
{
class FUSDLayerNoticeListener::Implementation
{
public:
std::shared_ptr<ThreadSafe<pxr::UsdStageRefPtr>> stage;
std::string worldName;
ignition::transport::Node node;
};
FUSDLayerNoticeListener::FUSDLayerNoticeListener(
std::shared_ptr<ThreadSafe<pxr::UsdStageRefPtr>> &_stage,
const std::string& _worldName)
: dataPtr(ignition::utils::MakeUniqueImpl<Implementation>())
{
this->dataPtr->stage = _stage;
this->dataPtr->worldName = _worldName;
}
void FUSDLayerNoticeListener::HandleGlobalLayerReload(
const pxr::SdfNotice::LayerDidReloadContent& n)
{
igndbg << "HandleGlobalLayerReload called" << std::endl;
}
// Print some interesting info about the LayerNotice
void FUSDLayerNoticeListener::HandleRootOrSubLayerChange(
const class pxr::SdfNotice::LayersDidChangeSentPerLayer& _layerNotice,
const pxr::TfWeakPtr<pxr::SdfLayer>& _sender)
{
  auto iter = _layerNotice.find(_sender);
  if (iter == _layerNotice.end())
    return;
for (auto & changeEntry : iter->second.GetEntryList())
{
const pxr::SdfPath& sdfPath = changeEntry.first;
if (changeEntry.second.flags.didRemoveNonInertPrim)
{
ignition::msgs::Entity req;
req.set_name(sdfPath.GetName());
req.set_type(ignition::msgs::Entity::MODEL);
ignition::msgs::Boolean rep;
bool result;
unsigned int timeout = 5000;
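      // Blocking request to the simulation's removal service; waits up to
      // `timeout` milliseconds for the reply.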
bool executed = this->dataPtr->node.Request(
"/world/" + this->dataPtr->worldName + "/remove",
req, timeout, rep, result);
if (executed)
{
if (rep.data())
{
igndbg << "Model was removed [" << sdfPath.GetName() << "]"
<< std::endl;
this->dataPtr->stage->Lock()->RemovePrim(sdfPath);
}
else
{
ignerr << "Error model was not removed [" << sdfPath.GetName()
<< "]" << std::endl;
}
}
ignmsg << "Deleted " << sdfPath.GetName() << std::endl;
}
else if (changeEntry.second.flags.didAddNonInertPrim)
{
ignmsg << "Added" << sdfPath.GetName() << std::endl;
}
}
}
} // namespace omniverse
} // namespace ignition
| 2,825 | C++ | 27.545454 | 75 | 0.665841 |
gazebosim/gz-omni/source/ignition_live/OmniverseConnect.cpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include "OmniverseConnect.hpp"
#include <ignition/common/Console.hh>
#include <pxr/base/gf/vec3f.h>
#include <pxr/usd/usd/notice.h>
#include <pxr/usd/usd/stage.h>
#include <pxr/usd/usd/primRange.h>
#include <pxr/usd/usdGeom/cylinder.h>
#include <pxr/usd/usdGeom/metrics.h>
#include <pxr/usd/usdGeom/xform.h>
#include <iostream>
#include <mutex>
#include <string>
#include <vector>
namespace ignition
{
namespace omniverse
{
// Stage URL really only needs to contain the server in the URL. eg.
// omniverse://ov-prod
void PrintConnectedUsername(const std::string& stageUrl)
{
// Get the username for the connection
std::string userName("_none_");
omniClientWait(omniClientGetServerInfo(
stageUrl.c_str(), &userName,
[](void* userData, OmniClientResult result,
struct OmniClientServerInfo const* info) noexcept
{
std::string* userName = static_cast<std::string*>(userData);
if (userData && userName && info && info->username)
{
userName->assign(info->username);
}
}));
{
std::unique_lock<std::mutex> lk(gLogMutex);
ignmsg << "Connected username: " << userName << std::endl;
}
}
// This struct is context for the omniClientStatSubscribe() callbacks
struct StatSubscribeContext
{
std::string* stageUrlPtr;
pxr::UsdStageRefPtr stage;
};
// Called immediately due to the stat subscribe function
static void clientStatCallback(void* userData, OmniClientResult result,
struct OmniClientListEntry const* entry) noexcept
{
StatSubscribeContext* context = static_cast<StatSubscribeContext*>(userData);
if (result != OmniClientResult::eOmniClientResult_Ok)
{
ignerr << "stage not found: " << *context->stageUrlPtr << std::endl;
exit(1);
}
}
// Called due to the stat subscribe function when the file is updated
static void clientStatSubscribeCallback(
void* userData, OmniClientResult result, OmniClientListEvent listEvent,
struct OmniClientListEntry const* entry) noexcept
{
StatSubscribeContext* context = static_cast<StatSubscribeContext*>(userData);
switch (listEvent)
{
case eOmniClientListEvent_Updated:
{
ignmsg << "Updated - user: " << entry->modifiedBy
<< " version: " << entry->version << std::endl;
// Mark the last updated time
// *context->lastUpdatedTimePtr = std::time(0);
break;
}
case eOmniClientListEvent_Created:
ignmsg << "Created: " << entry->createdBy << std::endl;
break;
case eOmniClientListEvent_Deleted:
ignmsg << "Deleted: " << entry->createdBy << std::endl;
exit(1);
break;
case eOmniClientListEvent_Locked:
ignmsg << "Locked: " << entry->createdBy << std::endl;
break;
default:
break;
}
}
MaybeError<std::string, GenericError> CreateOmniverseModel(
const std::string& destinationPath)
{
std::string stageUrl = destinationPath;
// Normalize the URL because the omniUsdLiveSetModeForUrl() interface keys off
// of the _normalized_ stage path
  std::string normalizedStageUrl;
  size_t bufferSize = 0;
  // First call with an empty buffer to query the required buffer size.
  omniClientNormalizeUrl(stageUrl.c_str(), normalizedStageUrl.data(),
                         &bufferSize);
  // Second call with a properly sized buffer; assigning the returned pointer
  // yields the normalized URL without writing past the string's size.
  std::vector<char> normalizedStageBuffer(bufferSize);
  normalizedStageUrl = omniClientNormalizeUrl(
      stageUrl.c_str(), normalizedStageBuffer.data(), &bufferSize);
  // According to the USD docs, `UsdStage::Open` should auto-create a new
  // stage if the path doesn't exist, but this doesn't work in Omniverse for
  // some reason. So we check whether the path exists and use
  // `UsdStage::CreateNew` if it does not.
auto entry = OmniverseSync::Stat(normalizedStageUrl);
if (!entry)
{
if (entry.Error() != eOmniClientResult_ErrorNotFound)
{
auto errString = omniClientGetResultString(entry.Error());
return GenericError("Failure to create stage in Omniverse (" +
std::string(errString) + ")");
}
else
{
auto stage = pxr::UsdStage::CreateNew(normalizedStageUrl);
// Specify ignition up-ness and units.
pxr::UsdGeomSetStageUpAxis(stage, pxr::UsdGeomTokens->z);
pxr::UsdGeomSetStageMetersPerUnit(stage, 1);
stage->SetMetadata(pxr::SdfFieldKeys->Comment,
"Created by ignition-omniverse");
stage->Save();
ignmsg << "Created omniverse stage at [" << normalizedStageUrl << "]"
<< std::endl;
}
}
return normalizedStageUrl;
}
void CheckpointFile(const char* stageUrl, const char* comment)
{
bool bCheckpointsSupported = false;
omniClientWait(omniClientGetServerInfo(
stageUrl, &bCheckpointsSupported,
[](void* UserData, OmniClientResult Result,
OmniClientServerInfo const* Info) noexcept
{
if (Result == eOmniClientResult_Ok && Info && UserData)
{
bool* bCheckpointsSupported = static_cast<bool*>(UserData);
*bCheckpointsSupported = Info->checkpointsEnabled;
}
}));
if (bCheckpointsSupported)
{
const bool bForceCheckpoint = true;
omniClientWait(omniClientCreateCheckpoint(
stageUrl, comment, bForceCheckpoint, nullptr,
[](void* userData, OmniClientResult result,
char const* checkpointQuery) noexcept {}));
}
}
// Startup Omniverse
bool StartOmniverse()
{
// Register a function to be called whenever the library wants to print
// something to a log
omniClientSetLogCallback(
[](char const* threadName, char const* component,
OmniClientLogLevel level, char const* message) noexcept
{
std::unique_lock<std::mutex> lk(gLogMutex);
switch (level)
{
case eOmniClientLogLevel_Debug:
case eOmniClientLogLevel_Verbose:
igndbg << "(" << component << ") " << message << std::endl;
break;
case eOmniClientLogLevel_Info:
ignmsg << "(" << component << ") " << message << std::endl;
break;
case eOmniClientLogLevel_Warning:
ignwarn << "(" << component << ") " << message << std::endl;
break;
case eOmniClientLogLevel_Error:
ignerr << "(" << component << ") " << message << std::endl;
break;
default:
igndbg << "(" << component << ") " << message << std::endl;
}
});
// The default log level is "Info", set it to "Debug" to see all messages
omniClientSetLogLevel(eOmniClientLogLevel_Info);
// Initialize the library and pass it the version constant defined in
// OmniClient.h This allows the library to verify it was built with a
// compatible version. It will return false if there is a version mismatch.
if (!omniClientInitialize(kOmniClientVersion))
{
return false;
}
omniClientRegisterConnectionStatusCallback(
nullptr,
[](void* userData, const char* url,
OmniClientConnectionStatus status) noexcept
{
std::unique_lock<std::mutex> lk(gLogMutex);
ignmsg << "Connection Status: "
<< omniClientGetConnectionStatusString(status) << " [" << url
<< "]" << std::endl;
if (status == eOmniClientConnectionStatus_ConnectError)
{
// We shouldn't just exit here - we should clean up a bit, but we're
// going to do it anyway
ignerr << "Failed connection, exiting." << std::endl;
exit(-1);
}
});
// Enable live updates
omniUsdLiveSetDefaultEnabled(true);
return true;
}
} // namespace omniverse
} // namespace ignition
| 8,231 | C++ | 31.666667 | 80 | 0.653019 |
gazebosim/gz-omni/source/ignition_live/Material.hpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#ifndef IGNITION_OMNIVERSE_MATERIAL_HPP
#define IGNITION_OMNIVERSE_MATERIAL_HPP
#include <ignition/msgs/visual.pb.h>
#include <pxr/usd/usd/stage.h>
#include <pxr/usd/usdGeom/gprim.h>
#include <pxr/usd/usdShade/material.h>
namespace ignition
{
namespace omniverse
{
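// Example (hypothetical usage): apply the material described by a visual
// message to a previously defined gprim.
//   if (!SetMaterial(usdCube, visualMsg, stage, stageDirUrl))
//     ignwarn << "Failed to set material" << std::endl;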
bool SetMaterial(const pxr::UsdGeomGprim& _gprim,
const ignition::msgs::Visual& _visualMsg,
const pxr::UsdStageRefPtr& _stage,
const std::string& _stageDirUrl);
} // namespace omniverse
} // namespace ignition
#endif
| 1,137 | C++ | 28.947368 | 75 | 0.71504 |
gazebosim/gz-omni/source/ignition_live/Scene.cpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include "Scene.hpp"
#include "FUSDLayerNoticeListener.hpp"
#include "FUSDNoticeListener.hpp"
#include "Material.hpp"
#include "Mesh.hpp"
#include <ignition/common/Console.hh>
#include <ignition/common/Filesystem.hh>
#include <ignition/math/Quaternion.hh>
#include <pxr/usd/usd/primRange.h>
#include <pxr/usd/usdGeom/camera.h>
#include <pxr/usd/usdGeom/xform.h>
#include <pxr/usd/usdLux/diskLight.h>
#include <pxr/usd/usdLux/distantLight.h>
#include <pxr/usd/usdLux/sphereLight.h>
#include <algorithm>
#include <chrono>
#include <string>
#include <thread>
#include <vector>
#include <pxr/base/tf/nullPtr.h>
using namespace std::chrono_literals;
namespace ignition
{
namespace omniverse
{
class Scene::Implementation
{
public:
std::string worldName;
std::shared_ptr<ThreadSafe<pxr::UsdStageRefPtr>> stage;
ignition::transport::Node node;
std::string stageDirUrl;
std::unordered_map<uint32_t, pxr::UsdPrim> entities;
std::unordered_map<std::string, uint32_t> entitiesByName;
std::shared_ptr<FUSDLayerNoticeListener> USDLayerNoticeListener;
std::shared_ptr<FUSDNoticeListener> USDNoticeListener;
Simulator simulatorPoses = {Simulator::Ignition};
bool UpdateSensors(const ignition::msgs::Sensor &_sensor,
const std::string &_usdSensorPath);
bool UpdateLights(const ignition::msgs::Light &_light,
const std::string &_usdLightPath);
bool UpdateScene(const ignition::msgs::Scene &_scene);
bool UpdateVisual(const ignition::msgs::Visual &_visual,
const std::string &_usdPath);
bool UpdateLink(const ignition::msgs::Link &_link,
const std::string &_usdModelPath);
bool UpdateJoint(const ignition::msgs::Joint &_joint,
const std::string &_modelName);
bool UpdateModel(const ignition::msgs::Model &_model);
void SetPose(const pxr::UsdGeomXformCommonAPI &_prim,
const ignition::msgs::Pose &_pose);
void ResetPose(const pxr::UsdGeomXformCommonAPI &_prim);
void SetScale(const pxr::UsdGeomXformCommonAPI &_xform,
const ignition::msgs::Vector3d &_scale);
void ResetScale(const pxr::UsdGeomXformCommonAPI &_prim);
void CallbackPoses(const ignition::msgs::Pose_V &_msg);
void CallbackJoint(const ignition::msgs::Model &_msg);
void CallbackScene(const ignition::msgs::Scene &_scene);
void CallbackSceneDeletion(const ignition::msgs::UInt32_V &_msg);
};
//////////////////////////////////////////////////
Scene::Scene(
const std::string &_worldName,
const std::string &_stageUrl,
Simulator _simulatorPoses)
: dataPtr(ignition::utils::MakeUniqueImpl<Implementation>())
{
ignmsg << "Opened stage [" << _stageUrl << "]" << std::endl;
this->dataPtr->worldName = _worldName;
this->dataPtr->stage = std::make_shared<ThreadSafe<pxr::UsdStageRefPtr>>(
pxr::UsdStage::Open(_stageUrl));
this->dataPtr->stageDirUrl = ignition::common::parentPath(_stageUrl);
this->dataPtr->simulatorPoses = _simulatorPoses;
}
//////////////////////////////////////////////////
std::shared_ptr<ThreadSafe<pxr::UsdStageRefPtr>> &Scene::Stage()
{
return this->dataPtr->stage;
}
//////////////////////////////////////////////////
void Scene::Implementation::SetPose(const pxr::UsdGeomXformCommonAPI &_prim,
const ignition::msgs::Pose &_pose)
{
if (this->simulatorPoses == Simulator::Ignition)
{
if (_prim)
{
pxr::UsdGeomXformCommonAPI xformApi(_prim);
const auto &pos = _pose.position();
const auto &orient = _pose.orientation();
ignition::math::Quaterniond quat(orient.w(), orient.x(), orient.y(),
orient.z());
xformApi.SetTranslate(pxr::GfVec3d(pos.x(), pos.y(), pos.z()));
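      // UsdGeomXformCommonAPI::SetRotate expects Euler angles in degrees,
      // so convert the quaternion to roll/pitch/yaw and radians to degrees.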
xformApi.SetRotate(pxr::GfVec3f(
ignition::math::Angle(quat.Roll()).Degree(),
ignition::math::Angle(quat.Pitch()).Degree(),
ignition::math::Angle(quat.Yaw()).Degree()),
pxr::UsdGeomXformCommonAPI::RotationOrderXYZ);
}
}
}
//////////////////////////////////////////////////
void Scene::Implementation::ResetPose(const pxr::UsdGeomXformCommonAPI &_prim)
{
pxr::UsdGeomXformCommonAPI xformApi(_prim);
xformApi.SetTranslate(pxr::GfVec3d(0));
xformApi.SetRotate(pxr::GfVec3f(0));
}
//////////////////////////////////////////////////
void Scene::Implementation::SetScale(const pxr::UsdGeomXformCommonAPI &_prim,
const ignition::msgs::Vector3d &_scale)
{
pxr::UsdGeomXformCommonAPI xformApi(_prim);
xformApi.SetScale(pxr::GfVec3f(_scale.x(), _scale.y(), _scale.z()));
}
//////////////////////////////////////////////////
void Scene::Implementation::ResetScale(const pxr::UsdGeomXformCommonAPI &_prim)
{
pxr::UsdGeomXformCommonAPI xformApi(_prim);
xformApi.SetScale(pxr::GfVec3f(1));
}
//////////////////////////////////////////////////
bool Scene::Implementation::UpdateVisual(const ignition::msgs::Visual &_visual,
const std::string &_usdLinkPath)
{
auto stage = this->stage->Lock();
std::string visualName = _visual.name();
std::string suffix = "_visual";
std::size_t found = visualName.find("_visual");
if (found != std::string::npos)
{
suffix = "";
}
std::string usdVisualPath = _usdLinkPath + "/" + _visual.name() + suffix;
auto prim = stage->GetPrimAtPath(pxr::SdfPath(usdVisualPath));
if (prim)
return true;
auto usdVisualXform =
pxr::UsdGeomXform::Define(*stage, pxr::SdfPath(usdVisualPath));
pxr::UsdGeomXformCommonAPI xformApi(usdVisualXform);
if (_visual.has_scale())
{
this->SetScale(xformApi, _visual.scale());
}
else
{
this->ResetScale(xformApi);
}
if (_visual.has_pose())
{
this->SetPose(xformApi, _visual.pose());
}
else
{
this->ResetPose(xformApi);
}
this->entities[_visual.id()] = usdVisualXform.GetPrim();
this->entitiesByName[usdVisualXform.GetPrim().GetName()] = _visual.id();
std::string usdGeomPath(usdVisualPath + "/geometry");
const auto &geom = _visual.geometry();
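  // Define the USD geometry prim matching the ignition geometry type. Each
  // branch also authors the `extent` attribute, i.e. the axis-aligned bounds
  // of the prim before any transform is applied.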
switch (geom.type())
{
case ignition::msgs::Geometry::BOX:
{
auto usdCube =
pxr::UsdGeomCube::Define(*stage, pxr::SdfPath(usdGeomPath));
usdCube.CreateSizeAttr().Set(1.0);
pxr::GfVec3f endPoint(0.5);
pxr::VtArray<pxr::GfVec3f> extentBounds;
extentBounds.push_back(-1.0 * endPoint);
extentBounds.push_back(endPoint);
usdCube.CreateExtentAttr().Set(extentBounds);
pxr::UsdGeomXformCommonAPI cubeXformAPI(usdCube);
cubeXformAPI.SetScale(pxr::GfVec3f(
geom.box().size().x(), geom.box().size().y(), geom.box().size().z()));
if (!SetMaterial(usdCube, _visual, *stage, this->stageDirUrl))
{
ignwarn << "Failed to set material" << std::endl;
}
break;
}
// TODO: Support cone
// case ignition::msgs::Geometry::CONE:
case ignition::msgs::Geometry::CYLINDER:
{
auto usdCylinder =
pxr::UsdGeomCylinder::Define(*stage, pxr::SdfPath(usdGeomPath));
double radius = geom.cylinder().radius();
double length = geom.cylinder().length();
usdCylinder.CreateRadiusAttr().Set(radius);
usdCylinder.CreateHeightAttr().Set(length);
pxr::GfVec3f endPoint(radius);
endPoint[2] = length * 0.5;
pxr::VtArray<pxr::GfVec3f> extentBounds;
extentBounds.push_back(-1.0 * endPoint);
extentBounds.push_back(endPoint);
usdCylinder.CreateExtentAttr().Set(extentBounds);
if (!SetMaterial(usdCylinder, _visual, *stage, this->stageDirUrl))
{
ignwarn << "Failed to set material" << std::endl;
}
break;
}
case ignition::msgs::Geometry::PLANE:
{
auto usdCube =
pxr::UsdGeomCube::Define(*stage, pxr::SdfPath(usdGeomPath));
usdCube.CreateSizeAttr().Set(1.0);
pxr::GfVec3f endPoint(0.5);
pxr::VtArray<pxr::GfVec3f> extentBounds;
extentBounds.push_back(-1.0 * endPoint);
extentBounds.push_back(endPoint);
usdCube.CreateExtentAttr().Set(extentBounds);
pxr::UsdGeomXformCommonAPI cubeXformAPI(usdCube);
cubeXformAPI.SetScale(
pxr::GfVec3f(geom.plane().size().x(), geom.plane().size().y(), 0.0025));
if (!SetMaterial(usdCube, _visual, *stage, this->stageDirUrl))
{
ignwarn << "Failed to set material" << std::endl;
}
break;
}
case ignition::msgs::Geometry::ELLIPSOID:
{
auto usdEllipsoid =
pxr::UsdGeomSphere::Define(*stage, pxr::SdfPath(usdGeomPath));
const auto maxRadii =
ignition::math::Vector3d(geom.ellipsoid().radii().x(),
geom.ellipsoid().radii().y(),
geom.ellipsoid().radii().z())
.Max();
usdEllipsoid.CreateRadiusAttr().Set(0.5);
pxr::UsdGeomXformCommonAPI xform(usdEllipsoid);
xform.SetScale(pxr::GfVec3f{
static_cast<float>(geom.ellipsoid().radii().x() / maxRadii),
static_cast<float>(geom.ellipsoid().radii().y() / maxRadii),
static_cast<float>(geom.ellipsoid().radii().z() / maxRadii),
});
// extents is the bounds before any transformation
pxr::VtArray<pxr::GfVec3f> extentBounds;
extentBounds.push_back(pxr::GfVec3f{static_cast<float>(-maxRadii)});
extentBounds.push_back(pxr::GfVec3f{static_cast<float>(maxRadii)});
usdEllipsoid.CreateExtentAttr().Set(extentBounds);
if (!SetMaterial(usdEllipsoid, _visual, *stage, this->stageDirUrl))
{
ignwarn << "Failed to set material" << std::endl;
}
break;
}
case ignition::msgs::Geometry::SPHERE:
{
auto usdSphere =
pxr::UsdGeomSphere::Define(*stage, pxr::SdfPath(usdGeomPath));
double radius = geom.sphere().radius();
usdSphere.CreateRadiusAttr().Set(radius);
pxr::VtArray<pxr::GfVec3f> extentBounds;
extentBounds.push_back(pxr::GfVec3f(-1.0 * radius));
extentBounds.push_back(pxr::GfVec3f(radius));
usdSphere.CreateExtentAttr().Set(extentBounds);
if (!SetMaterial(usdSphere, _visual, *stage, this->stageDirUrl))
{
ignwarn << "Failed to set material" << std::endl;
}
break;
}
case ignition::msgs::Geometry::CAPSULE:
{
auto usdCapsule =
pxr::UsdGeomCapsule::Define(*stage, pxr::SdfPath(usdGeomPath));
double radius = geom.capsule().radius();
double length = geom.capsule().length();
usdCapsule.CreateRadiusAttr().Set(radius);
usdCapsule.CreateHeightAttr().Set(length);
pxr::GfVec3f endPoint(radius);
endPoint[2] += 0.5 * length;
pxr::VtArray<pxr::GfVec3f> extentBounds;
extentBounds.push_back(-1.0 * endPoint);
extentBounds.push_back(endPoint);
usdCapsule.CreateExtentAttr().Set(extentBounds);
if (!SetMaterial(usdCapsule, _visual, *stage, this->stageDirUrl))
{
ignwarn << "Failed to set material" << std::endl;
}
break;
}
case ignition::msgs::Geometry::MESH:
{
auto usdMesh = UpdateMesh(geom.mesh(), usdGeomPath, *stage);
if (!usdMesh)
{
ignerr << "Failed to update visual [" << _visual.name() << "]"
<< std::endl;
return false;
}
if (!SetMaterial(usdMesh, _visual, *stage, this->stageDirUrl))
{
ignerr << "Failed to update visual [" << _visual.name() << "]"
<< std::endl;
return false;
}
break;
}
default:
ignerr << "Failed to update geometry (unsuported geometry type '"
<< _visual.type() << "')" << std::endl;
return false;
}
// TODO(ahcorde): When usdphysics will be available in nv-usd we should
// replace this code with pxr::UsdPhysicsCollisionAPI::Apply(geomPrim)
pxr::TfToken appliedSchemaNamePhysicsCollisionAPI("PhysicsCollisionAPI");
pxr::SdfPrimSpecHandle primSpec = pxr::SdfCreatePrimInLayer(
stage->GetEditTarget().GetLayer(),
pxr::SdfPath(usdGeomPath));
  pxr::SdfTokenListOp schemasListOp;
  // Use ReplaceOperations to append in place.
  if (!schemasListOp.ReplaceOperations(pxr::SdfListOpTypeExplicit,
      0, 0, {appliedSchemaNamePhysicsCollisionAPI})) {
    ignerr << "Error applying schema PhysicsCollisionAPI" << '\n';
  }
  primSpec->SetInfo(
      pxr::UsdTokens->apiSchemas, pxr::VtValue::Take(schemasListOp));
return true;
}
//////////////////////////////////////////////////
bool Scene::Implementation::UpdateLink(const ignition::msgs::Link &_link,
const std::string &_usdModelPath)
{
auto stage = this->stage->Lock();
std::string linkName = _link.name();
std::string suffix = "_link";
std::size_t found = linkName.find("_link");
if (found != std::string::npos)
{
suffix = "";
}
std::string usdLinkPath = _usdModelPath + "/" + _link.name();
auto prim = stage->GetPrimAtPath(pxr::SdfPath(usdLinkPath));
if (prim)
{
return true;
}
else
{
usdLinkPath = _usdModelPath + "/" + _link.name() + suffix;
prim = stage->GetPrimAtPath(pxr::SdfPath(usdLinkPath));
if (prim)
{
return true;
}
}
auto xform = pxr::UsdGeomXform::Define(*stage, pxr::SdfPath(usdLinkPath));
pxr::UsdGeomXformCommonAPI xformApi(xform);
if (_link.has_pose())
{
this->SetPose(xformApi, _link.pose());
}
else
{
this->ResetPose(xformApi);
}
this->entities[_link.id()] = xform.GetPrim();
this->entitiesByName[xform.GetPrim().GetName()] = _link.id();
for (const auto &visual : _link.visual())
{
if (!this->UpdateVisual(visual, usdLinkPath))
{
ignerr << "Failed to update link [" << _link.name() << "]" << std::endl;
return false;
}
}
for (const auto &sensor : _link.sensor())
{
std::string usdSensorPath = usdLinkPath + "/" + sensor.name();
if (!this->UpdateSensors(sensor, usdSensorPath))
{
ignerr << "Failed to add sensor [" << usdSensorPath << "]" << std::endl;
return false;
}
}
for (const auto &light : _link.light())
{
if (!this->UpdateLights(light, usdLinkPath + "/" + light.name()))
{
ignerr << "Failed to add light [" << usdLinkPath + "/" + light.name()
<< "]" << std::endl;
return false;
}
}
return true;
}
//////////////////////////////////////////////////
bool Scene::Implementation::UpdateJoint(
const ignition::msgs::Joint &_joint, const std::string &_modelName)
{
auto stage = this->stage->Lock();
auto jointUSD =
stage->GetPrimAtPath(pxr::SdfPath("/" + worldName + "/" + _joint.name()));
// TODO(ahcorde): This code is duplicated in the sdformat converter.
if (!jointUSD)
{
jointUSD =
stage->GetPrimAtPath(
pxr::SdfPath(
"/" + worldName + "/" + _modelName + "/" + _joint.name()));
if (!jointUSD)
{
switch (_joint.type())
{
case ignition::msgs::Joint::FIXED:
{
pxr::TfToken usdPrimTypeName("PhysicsFixedJoint");
auto jointFixedUSD = stage->DefinePrim(
pxr::SdfPath("/" + this->worldName + "/" + _joint.name()),
usdPrimTypeName);
auto body0 = jointFixedUSD.CreateRelationship(
pxr::TfToken("physics:body0"), false);
body0.AddTarget(pxr::SdfPath(
"/" + this->worldName + "/" + _joint.parent()));
auto body1 = jointFixedUSD.CreateRelationship(
pxr::TfToken("physics:body1"), false);
body1.AddTarget(pxr::SdfPath(
"/" + this->worldName + "/" + _joint.child()));
jointFixedUSD.CreateAttribute(pxr::TfToken("physics:localPos1"),
pxr::SdfValueTypeNames->Point3fArray, false).Set(
pxr::GfVec3f(0, 0, 0));
jointFixedUSD.CreateAttribute(pxr::TfToken("physics:localPos0"),
pxr::SdfValueTypeNames->Point3fArray, false).Set(
pxr::GfVec3f(_joint.pose().position().x(),
_joint.pose().position().y(),
_joint.pose().position().z()));
return true;
}
case ignition::msgs::Joint::REVOLUTE:
{
igndbg << "Creating a revolute joint" << '\n';
pxr::TfToken usdPrimTypeName("PhysicsRevoluteJoint");
auto revoluteJointUSD = stage->DefinePrim(
pxr::SdfPath("/" + this->worldName + "/" + _joint.name()),
usdPrimTypeName);
igndbg << "\tParent "
<< "/" + this->worldName + "/" + _joint.parent() << '\n';
igndbg << "\tchild "
<< "/" + this->worldName + "/" + _joint.child() << '\n';
pxr::TfTokenVector identifiersBody0 =
{pxr::TfToken("physics"), pxr::TfToken("body0")};
if (pxr::UsdRelationship body0 = revoluteJointUSD.CreateRelationship(
pxr::TfToken(pxr::SdfPath::JoinIdentifier(identifiersBody0)), false))
{
// TODO(ahcorde): The model name "panda" is hardcoded here (and below);
// this only works for the Franka Panda demo and should use the joint's
// actual model prim instead.
body0.AddTarget(
pxr::SdfPath("/" + this->worldName + "/panda/" + _joint.parent()),
pxr::UsdListPositionFrontOfAppendList);
}
else
{
igndbg << "Not able to create UsdRelationship for body0" << '\n';
}
pxr::TfTokenVector identifiersBody1 =
{pxr::TfToken("physics"), pxr::TfToken("body1")};
if (pxr::UsdRelationship body1 = revoluteJointUSD.CreateRelationship(
pxr::TfToken(pxr::SdfPath::JoinIdentifier(identifiersBody1)), false))
{
body1.AddTarget(
pxr::SdfPath("/" + this->worldName + "/panda/" + _joint.child()),
pxr::UsdListPositionFrontOfAppendList);
}
else
{
igndbg << "Not able to create UsdRelationship for body1" << '\n';
}
ignition::math::Vector3i axis(
_joint.axis1().xyz().x(),
_joint.axis1().xyz().y(),
_joint.axis1().xyz().z());
if (axis == ignition::math::Vector3i(1, 0, 0))
{
revoluteJointUSD.CreateAttribute(pxr::TfToken("physics:axis"),
pxr::SdfValueTypeNames->Token, false).Set(pxr::TfToken("X"));
}
else if (axis == ignition::math::Vector3i(0, 1, 0))
{
revoluteJointUSD.CreateAttribute(pxr::TfToken("physics:axis"),
pxr::SdfValueTypeNames->Token, false).Set(pxr::TfToken("Y"));
}
else if (axis == ignition::math::Vector3i(0, 0, 1))
{
revoluteJointUSD.CreateAttribute(pxr::TfToken("physics:axis"),
pxr::SdfValueTypeNames->Token, false).Set(pxr::TfToken("Z"));
}
revoluteJointUSD.CreateAttribute(pxr::TfToken("physics:localPos1"),
pxr::SdfValueTypeNames->Point3f, false).Set(
pxr::GfVec3f(0, 0, 0));
revoluteJointUSD.CreateAttribute(pxr::TfToken("physics:localPos0"),
pxr::SdfValueTypeNames->Point3f, false).Set(
pxr::GfVec3f(
_joint.pose().position().x(),
_joint.pose().position().y(),
_joint.pose().position().z()));
revoluteJointUSD.CreateAttribute(
pxr::TfToken("drive:angular:physics:damping"),
pxr::SdfValueTypeNames->Float, false).Set(100000.0f);
revoluteJointUSD.CreateAttribute(
pxr::TfToken("drive:angular:physics:stiffness"),
pxr::SdfValueTypeNames->Float, false).Set(1000000.0f);
revoluteJointUSD.CreateAttribute(
pxr::TfToken("drive:angular:physics:targetPosition"),
pxr::SdfValueTypeNames->Float, false).Set(0.0f);
revoluteJointUSD.CreateAttribute(
pxr::TfToken("physics:lowerLimit"),
pxr::SdfValueTypeNames->Float, false).Set(
static_cast<float>(_joint.axis1().limit_lower() * 180.0 / IGN_PI));
revoluteJointUSD.CreateAttribute(
pxr::TfToken("physics:upperLimit"),
pxr::SdfValueTypeNames->Float, false).Set(
static_cast<float>(_joint.axis1().limit_upper() * 180.0 / IGN_PI));
pxr::TfToken appliedSchemaNamePhysicsArticulationRootAPI(
"PhysicsArticulationRootAPI");
pxr::TfToken appliedSchemaNamePhysxArticulationAPI(
"PhysxArticulationAPI");
// TODO(ahcorde): "panda" is hardcoded; the articulation root should be
// applied to the joint's actual model prim.
pxr::SdfPrimSpecHandle primSpecPanda = pxr::SdfCreatePrimInLayer(
stage->GetEditTarget().GetLayer(),
pxr::SdfPath("/" + this->worldName + "/panda"));
pxr::SdfTokenListOp listOpPanda;
// Use ReplaceOperations to append in place.
if (!listOpPanda.ReplaceOperations(
pxr::SdfListOpTypeExplicit,
0,
0,
{appliedSchemaNamePhysicsArticulationRootAPI,
appliedSchemaNamePhysxArticulationAPI})) {
ignerr << "Not able to setup the schema PhysxArticulationAPI "
<< "and PhysicsArticulationRootAPI\n";
}
primSpecPanda->SetInfo(
pxr::UsdTokens->apiSchemas, pxr::VtValue::Take(listOpPanda));
pxr::TfToken appliedSchemaName("PhysicsDriveAPI:angular");
pxr::SdfPrimSpecHandle primSpec = pxr::SdfCreatePrimInLayer(
stage->GetEditTarget().GetLayer(),
pxr::SdfPath("/" + this->worldName + "/" + _joint.name()));
pxr::SdfTokenListOp listOp;
// Use ReplaceOperations to append in place.
if (!listOp.ReplaceOperations(pxr::SdfListOpTypeExplicit,
0, 0, {appliedSchemaName})) {
ignerr << "Not able to setup the schema PhysicsDriveAPI\n";
}
primSpec->SetInfo(
pxr::UsdTokens->apiSchemas, pxr::VtValue::Take(listOp));
break;
}
default:
return false;
}
}
}
auto attrTargetPos = jointUSD.GetAttribute(
pxr::TfToken("drive:angular:physics:targetPosition"));
if (attrTargetPos)
{
attrTargetPos.Set(pxr::VtValue(
static_cast<float>(
ignition::math::Angle(_joint.axis1().position()).Degree())));
}
else
{
jointUSD.CreateAttribute(
pxr::TfToken("drive:angular:physics:targetPosition"),
pxr::SdfValueTypeNames->Float, false).Set(
static_cast<float>(
ignition::math::Angle(_joint.axis1().position()).Degree()));
}
return true;
}
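//////////////////////////////////////////////////
// Conversion sketch (illustrative only): the drive and limit attributes
// written above expect degrees, while ignition reports radians. For a
// hypothetical position of pi/2 rad:
//
//   float target = static_cast<float>(
//       ignition::math::Angle(IGN_PI_2).Degree());  // 90.0f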
//////////////////////////////////////////////////
bool Scene::Implementation::UpdateModel(const ignition::msgs::Model &_model)
{
auto stage = this->stage->Lock();
std::string modelName = _model.name();
if (modelName.empty())
return true;
auto range = pxr::UsdPrimRange::Stage(*stage);
for (auto const &prim : range)
{
if (prim.GetName().GetString() == modelName)
{
ignwarn << "The model [" << _model.name() << "] is already available"
<< " in Isaac Sim" << std::endl;
std::string usdModelPath = "/" + worldName + "/" + modelName;
auto modelPrim = stage->GetPrimAtPath(
pxr::SdfPath(usdModelPath));
if (modelPrim)
{
this->entities[_model.id()] = modelPrim;
this->entitiesByName[modelPrim.GetName()] = _model.id();
for (const auto &link : _model.link())
{
std::string linkName = link.name();
std::string suffix = "_link";
std::size_t found = linkName.find("_link");
if (found != std::string::npos)
{
suffix = "";
}
std::string usdLinkPath = usdModelPath + "/" + linkName + suffix;
auto linkPrim = stage->GetPrimAtPath(
pxr::SdfPath(usdLinkPath));
if (!linkPrim)
{
usdLinkPath = usdModelPath + "/" + linkName;
linkPrim = stage->GetPrimAtPath(
pxr::SdfPath(usdLinkPath));
}
if (linkPrim)
{
this->entities[link.id()] = linkPrim;
this->entitiesByName[linkPrim.GetName()] = link.id();
for (const auto &visual : link.visual())
{
std::string visualName = visual.name();
std::string suffix = "_visual";
std::size_t found = visualName.find("_visual");
if (found != std::string::npos)
{
suffix = "";
}
std::string usdvisualPath =
usdLinkPath + "/" + visualName + suffix;
auto visualPrim = stage->GetPrimAtPath(
pxr::SdfPath(usdvisualPath));
if (visualPrim)
{
this->entities[visual.id()] = visualPrim;
this->entitiesByName[visualPrim.GetName()] = visual.id();
}
else
{
usdvisualPath =
usdLinkPath + "/" + visualName;
visualPrim = stage->GetPrimAtPath(
pxr::SdfPath(usdvisualPath));
if (visualPrim)
{
this->entities[visual.id()] = visualPrim;
this->entitiesByName[visualPrim.GetName()] = visual.id();
}
}
}
for (const auto &light : link.light())
{
std::string usdLightPath =
usdLinkPath + "/" + light.name();
auto lightPrim = stage->GetPrimAtPath(
pxr::SdfPath(usdLightPath));
if (lightPrim)
{
this->entities[light.id()] = lightPrim;
this->entitiesByName[lightPrim.GetName()] = light.id();
}
}
}
}
}
}
}
std::replace(modelName.begin(), modelName.end(), ' ', '_');
std::string usdModelPath = "/" + worldName + "/" + modelName;
this->entitiesByName[modelName] = _model.id();
auto xform = pxr::UsdGeomXform::Define(*stage, pxr::SdfPath(usdModelPath));
pxr::UsdGeomXformCommonAPI xformApi(xform);
if (_model.has_scale())
{
this->SetScale(xformApi, _model.scale());
}
else
{
this->ResetScale(xformApi);
}
if (_model.has_pose())
{
this->SetPose(xformApi, _model.pose());
}
else
{
this->ResetPose(xformApi);
}
this->entities[_model.id()] = xform.GetPrim();
for (const auto &link : _model.link())
{
if (!this->UpdateLink(link, usdModelPath))
{
ignerr << "Failed to update model [" << modelName << "]" << std::endl;
return false;
}
}
for (const auto &joint : _model.joint())
{
if (!this->UpdateJoint(joint, _model.name()))
{
ignerr << "Failed to update model [" << modelName << "]" << std::endl;
return false;
}
}
return true;
}
//////////////////////////////////////////////////
bool Scene::Implementation::UpdateScene(const ignition::msgs::Scene &_scene)
{
for (const auto &model : _scene.model())
{
if (!this->UpdateModel(model))
{
ignerr << "Failed to add model [" << model.name() << "]" << std::endl;
return false;
}
igndbg << "added model [" << model.name() << "]" << std::endl;
}
for (const auto &light : _scene.light())
{
if (!this->UpdateLights(light, "/" + worldName + "/" + light.name()))
{
ignerr << "Failed to add light [" << light.name() << "]" << std::endl;
return false;
}
}
return true;
}
//////////////////////////////////////////////////
bool Scene::Implementation::UpdateSensors(const ignition::msgs::Sensor &_sensor,
const std::string &_usdSensorPath)
{
auto stage = this->stage->Lock();
// TODO(ahcorde): This code is duplicated in the USD converter (sdformat)
if (_sensor.type() == "camera")
{
auto usdCamera = pxr::UsdGeomCamera::Define(
*stage, pxr::SdfPath(_usdSensorPath));
// TODO(ahcorde): The default focal length in USD is 50; this value was
// tuned by hand to better match the ignition Gazebo camera.
usdCamera.CreateFocalLengthAttr().Set(52.0f);
usdCamera.CreateClippingRangeAttr().Set(pxr::GfVec2f(
static_cast<float>(_sensor.camera().near_clip()),
static_cast<float>(_sensor.camera().far_clip())));
usdCamera.CreateHorizontalApertureAttr().Set(
static_cast<float>(
_sensor.camera().horizontal_fov() * 180.0f / IGN_PI));
ignition::math::Pose3d poseCameraYUp(0, 0, 0, IGN_PI_2, 0, -IGN_PI_2);
ignition::math::Quaterniond q(
_sensor.pose().orientation().w(),
_sensor.pose().orientation().x(),
_sensor.pose().orientation().y(),
_sensor.pose().orientation().z());
ignition::math::Pose3d poseCamera(
_sensor.pose().position().x(),
_sensor.pose().position().y(),
_sensor.pose().position().z(),
q.Roll() * 180.0 / IGN_PI,
q.Pitch() * 180.0 / IGN_PI,
q.Yaw() * 180.0 / IGN_PI);
poseCamera = poseCamera * poseCameraYUp;
usdCamera.AddTranslateOp(pxr::UsdGeomXformOp::Precision::PrecisionDouble)
.Set(
pxr::GfVec3d(
poseCamera.Pos().X(),
poseCamera.Pos().Y(),
poseCamera.Pos().Z()));
usdCamera.AddRotateXYZOp(pxr::UsdGeomXformOp::Precision::PrecisionDouble)
.Set(
pxr::GfVec3d(
poseCamera.Rot().Roll() * 180.0 / IGN_PI,
poseCamera.Rot().Pitch() * 180.0 / IGN_PI,
poseCamera.Rot().Yaw() * 180.0 / IGN_PI));
}
else if (_sensor.type() == "gpu_lidar")
{
pxr::UsdGeomXform::Define(
*stage, pxr::SdfPath(_usdSensorPath));
auto lidarPrim = stage->GetPrimAtPath(
pxr::SdfPath(_usdSensorPath));
lidarPrim.SetTypeName(pxr::TfToken("Lidar"));
lidarPrim.CreateAttribute(pxr::TfToken("minRange"),
pxr::SdfValueTypeNames->Float, false).Set(
static_cast<float>(_sensor.lidar().range_min()));
lidarPrim.CreateAttribute(pxr::TfToken("maxRange"),
pxr::SdfValueTypeNames->Float, false).Set(
static_cast<float>(_sensor.lidar().range_max()));
const auto horizontalFov = _sensor.lidar().horizontal_max_angle() -
_sensor.lidar().horizontal_min_angle();
// TODO(adlarkin) double check if these FOV calculations are correct
lidarPrim.CreateAttribute(pxr::TfToken("horizontalFov"),
pxr::SdfValueTypeNames->Float, false).Set(
static_cast<float>(horizontalFov * 180.0f / IGN_PI));
const auto verticalFov = _sensor.lidar().vertical_max_angle() -
_sensor.lidar().vertical_min_angle();
lidarPrim.CreateAttribute(pxr::TfToken("verticalFov"),
pxr::SdfValueTypeNames->Float, false).Set(
static_cast<float>(verticalFov * 180.0f / IGN_PI));
lidarPrim.CreateAttribute(pxr::TfToken("horizontalResolution"),
pxr::SdfValueTypeNames->Float, false).Set(
static_cast<float>(_sensor.lidar().horizontal_resolution()));
lidarPrim.CreateAttribute(pxr::TfToken("verticalResolution"),
pxr::SdfValueTypeNames->Float, false).Set(
static_cast<float>(_sensor.lidar().vertical_resolution()));
}
else
{
ignwarn << "This kind of sensor [" << _sensor.type()
<< "] is not supported" << std::endl;
return true;
}
return true;
}
//////////////////////////////////////////////////
bool Scene::Implementation::UpdateLights(const ignition::msgs::Light &_light,
const std::string &_usdLightPath)
{
// TODO: We can probably re-use code from sdformat
auto stage = this->stage->Lock();
const pxr::SdfPath sdfLightPath(_usdLightPath);
switch (_light.type())
{
case ignition::msgs::Light::POINT:
{
auto pointLight = pxr::UsdLuxSphereLight::Define(*stage, sdfLightPath);
pointLight.CreateTreatAsPointAttr().Set(true);
this->entities[_light.id()] = pointLight.GetPrim();
this->entitiesByName[pointLight.GetPrim().GetName()] = _light.id();
pointLight.CreateRadiusAttr(pxr::VtValue(0.1f));
pointLight.CreateColorAttr(pxr::VtValue(pxr::GfVec3f(
_light.diffuse().r(), _light.diffuse().g(), _light.diffuse().b())));
break;
}
case ignition::msgs::Light::SPOT:
{
auto diskLight = pxr::UsdLuxDiskLight::Define(*stage, sdfLightPath);
this->entities[_light.id()] = diskLight.GetPrim();
this->entitiesByName[diskLight.GetPrim().GetName()] = _light.id();
diskLight.CreateColorAttr(pxr::VtValue(pxr::GfVec3f(
_light.diffuse().r(), _light.diffuse().g(), _light.diffuse().b())));
break;
}
case ignition::msgs::Light::DIRECTIONAL:
{
auto directionalLight =
pxr::UsdLuxDistantLight::Define(*stage, sdfLightPath);
this->entities[_light.id()] = directionalLight.GetPrim();
this->entitiesByName[directionalLight.GetPrim().GetName()] = _light.id();
directionalLight.CreateColorAttr(pxr::VtValue(pxr::GfVec3f(
_light.diffuse().r(), _light.diffuse().g(), _light.diffuse().b())));
break;
}
default:
return false;
}
// This is a workaround to set the light's intensity attribute. Using the
// UsdLuxLightAPI sets the light's "inputs:intensity" attribute, but Isaac
// Sim reads the light's "intensity" attribute. Both are set here to stay
// compatible with other USD renderers.
const float usdLightIntensity =
static_cast<float>(_light.intensity()) * 1000.0f;
auto lightPrim = stage->GetPrimAtPath(sdfLightPath);
lightPrim
.CreateAttribute(pxr::TfToken("intensity"), pxr::SdfValueTypeNames->Float,
false)
.Set(usdLightIntensity);
return true;
}
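//////////////////////////////////////////////////
// Sketch (assumption, not in the original source): the schema-level
// counterpart of the workaround above would write "inputs:intensity"
// through the typed light API, e.g. for a point light:
//
//   pxr::UsdLuxSphereLight(lightPrim).CreateIntensityAttr().Set(
//       usdLightIntensity);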
//////////////////////////////////////////////////
bool Scene::Init()
{
bool result;
ignition::msgs::Empty req;
ignition::msgs::Scene ignScene;
if (!this->dataPtr->node.Request(
"/world/" + this->dataPtr->worldName + "/scene/info", req, 5000,
ignScene, result))
{
ignwarn << "Error requesting scene info, make sure the world ["
<< this->dataPtr->worldName
<< "] is available, ignition-omniverse will keep trying..."
<< std::endl;
if (!this->dataPtr->node.Request(
"/world/" + this->dataPtr->worldName + "/scene/info", req, -1,
ignScene, result))
{
ignerr << "Error request scene info" << std::endl;
return false;
}
}
if (!this->dataPtr->UpdateScene(ignScene))
{
ignerr << "Failed to init scene" << std::endl;
return false;
}
std::vector<std::string> topics;
this->dataPtr->node.TopicList(topics);
for (auto const &topic : topics)
{
if (topic.find("/joint_state") != std::string::npos)
{
if (!this->dataPtr->node.Subscribe(
topic, &Scene::Implementation::CallbackJoint, this->dataPtr.get()))
{
ignerr << "Error subscribing to topic [" << topic << "]" << std::endl;
return false;
}
else
{
ignmsg << "Subscribed to topic: [joint_state]" << std::endl;
}
}
}
std::string topic = "/world/" + this->dataPtr->worldName + "/pose/info";
// Subscribe to a topic by registering a callback.
if (!this->dataPtr->node.Subscribe(
topic, &Scene::Implementation::CallbackPoses, this->dataPtr.get()))
{
ignerr << "Error subscribing to topic [" << topic << "]" << std::endl;
return false;
}
else
{
ignmsg << "Subscribed to topic: [" << topic << "]" << std::endl;
}
topic = "/world/" + this->dataPtr->worldName + "/scene/info";
if (!this->dataPtr->node.Subscribe(
topic, &Scene::Implementation::CallbackScene, this->dataPtr.get()))
{
ignerr << "Error subscribing to topic [" << topic << "]" << std::endl;
return false;
}
else
{
ignmsg << "Subscribed to topic: [" << topic << "]" << std::endl;
}
topic = "/world/" + this->dataPtr->worldName + "/scene/deletion";
if (!this->dataPtr->node.Subscribe(
topic, &Scene::Implementation::CallbackSceneDeletion,
this->dataPtr.get()))
{
ignerr << "Error subscribing to topic [" << topic << "]" << std::endl;
return false;
}
else
{
ignmsg << "Subscribed to topic: [" << topic << "]" << std::endl;
}
this->dataPtr->USDLayerNoticeListener =
std::make_shared<FUSDLayerNoticeListener>(
this->dataPtr->stage,
this->dataPtr->worldName);
auto LayerReloadKey = pxr::TfNotice::Register(
pxr::TfCreateWeakPtr(this->dataPtr->USDLayerNoticeListener.get()),
&FUSDLayerNoticeListener::HandleGlobalLayerReload);
auto LayerChangeKey = pxr::TfNotice::Register(
pxr::TfCreateWeakPtr(this->dataPtr->USDLayerNoticeListener.get()),
&FUSDLayerNoticeListener::HandleRootOrSubLayerChange,
this->dataPtr->stage->Lock()->GetRootLayer());
this->dataPtr->USDNoticeListener = std::make_shared<FUSDNoticeListener>(
this->dataPtr->stage,
this->dataPtr->worldName,
this->dataPtr->simulatorPoses,
this->dataPtr->entitiesByName);
auto USDNoticeKey = pxr::TfNotice::Register(
pxr::TfCreateWeakPtr(this->dataPtr->USDNoticeListener.get()),
&FUSDNoticeListener::Handle);
return true;
}
//////////////////////////////////////////////////
void Scene::Save() { this->Stage()->Lock()->Save(); }
//////////////////////////////////////////////////
/// \brief Function called each time a topic update is received.
void Scene::Implementation::CallbackPoses(const ignition::msgs::Pose_V &_msg)
{
for (const auto &poseMsg : _msg.pose())
{
try
{
auto stage = this->stage->Lock();
const auto &prim = this->entities.at(poseMsg.id());
if (prim)
{
this->SetPose(pxr::UsdGeomXformCommonAPI(prim), poseMsg);
}
}
catch (const std::out_of_range &)
{
ignwarn << "Error updating pose, cannot find [" << poseMsg.name() << " - " << poseMsg.id() << "]"
<< std::endl;
}
}
}
//////////////////////////////////////////////////
/// \brief Function called each time a topic update is received.
void Scene::Implementation::CallbackJoint(const ignition::msgs::Model &_msg)
{
// this->UpdateModel(_msg);
for (const auto &joint : _msg.joint())
{
if (!this->UpdateJoint(joint, _msg.name()))
{
ignerr << "Failed to update model [" << _msg.name() << "]" << std::endl;
return;
}
}
}
//////////////////////////////////////////////////
void Scene::Implementation::CallbackScene(const ignition::msgs::Scene &_scene)
{
this->UpdateScene(_scene);
}
//////////////////////////////////////////////////
void Scene::Implementation::CallbackSceneDeletion(
const ignition::msgs::UInt32_V &_msg)
{
for (const auto id : _msg.data())
{
try
{
auto stage = this->stage->Lock();
const auto &prim = this->entities.at(id);
std::string primName = prim.GetName();
stage->RemovePrim(prim.GetPath());
ignmsg << "Removed [" << prim.GetPath() << "]" << std::endl;
this->entities.erase(id);
this->entitiesByName.erase(primName);
}
catch (const std::out_of_range &)
{
ignwarn << "Failed to delete [" << id << "] (Unable to find node)"
<< std::endl;
}
}
}
} // namespace omniverse
} // namespace ignition
| 39,838 | C++ | 33.552472 | 103 | 0.58394 |
gazebosim/gz-omni/source/ignition_live/Scene.hpp | /*
* Copyright (C) 2021 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License"); * you may not
* use this file except in compliance with the License. You may obtain a copy of
* the License at
*
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#ifndef IGNITION_OMNIVERSE_SCENE_HPP
#define IGNITION_OMNIVERSE_SCENE_HPP
#include "Error.hpp"
#include "ThreadSafe.hpp"
#include <ignition/utils/ImplPtr.hh>
#include <ignition/math/Pose3.hh>
#include <ignition/msgs/joint.pb.h>
#include <ignition/msgs/link.pb.h>
#include <ignition/msgs/model.pb.h>
#include <ignition/msgs/pose.pb.h>
#include <ignition/msgs/pose_v.pb.h>
#include <ignition/msgs/scene.pb.h>
#include <ignition/msgs/vector3d.pb.h>
#include <ignition/msgs/visual.pb.h>
#include <ignition/transport.hh>
#include <pxr/usd/usd/stage.h>
#include <pxr/usd/usdGeom/sphere.h>
#include <pxr/usd/usdGeom/capsule.h>
#include <pxr/usd/usdGeom/cube.h>
#include <pxr/usd/usdGeom/cylinder.h>
#include <pxr/usd/usdGeom/mesh.h>
#include <pxr/usd/usdShade/material.h>
#include <pxr/usd/usdGeom/xformCommonAPI.h>
#include <cstdint>
#include <memory>
#include <string>
#include <thread>
#include <unordered_map>
namespace ignition
{
namespace omniverse
{
enum class Simulator : int { Ignition, IsaacSim };
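/// \brief Mirrors an ignition world into a USD stage and keeps both in sync.
///
/// Example usage (sketch; the world name and stage URL are placeholders):
/// \code
///   Scene scene("shapes", "omniverse://localhost/Users/ignition/stage.usd",
///               Simulator::Ignition);
///   if (scene.Init())
///     scene.Save();
/// \endcode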
class Scene
{
public:
Scene(
const std::string &_worldName,
const std::string &_stageUrl,
Simulator _simulatorPoses);
/// \brief Initialize the scene and subscribe to updates. This blocks until
/// the scene is initialized.
/// \return true on success
bool Init();
/// \brief Equivalent to `scene.Stage().Lock()->Save()`.
void Save();
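/// \brief Get the thread-safe wrapper around the USD stage.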
std::shared_ptr<ThreadSafe<pxr::UsdStageRefPtr>> &Stage();
/// \internal
/// \brief Private data pointer
IGN_UTILS_UNIQUE_IMPL_PTR(dataPtr)
};
} // namespace omniverse
} // namespace ignition
#endif
| 2,162 | C++ | 24.75 | 80 | 0.726179 |
gazebosim/gz-omni/source/ignition_live/Mesh.hpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#ifndef IGNITION_OMNIVERSE_MESH_HPP
#define IGNITION_OMNIVERSE_MESH_HPP
#include <ignition/msgs/meshgeom.pb.h>
#include <pxr/usd/usd/stage.h>
#include <pxr/usd/usdGeom/mesh.h>
namespace ignition
{
namespace omniverse
{
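/// \brief Create or update a USD mesh prim from an ignition mesh message.
/// \param[in] _meshMsg Mesh message with the resource filename and scale.
/// \param[in] _path Stage path at which the mesh prim is defined.
/// \param[in] _stage Stage to define the prim in.
/// \return The resulting mesh, or an invalid pxr::UsdGeomMesh on failure.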
pxr::UsdGeomMesh UpdateMesh(const ignition::msgs::MeshGeom& _meshMsg,
const std::string& _path,
const pxr::UsdStageRefPtr& _stage);
}
} // namespace ignition
#endif
| 1,067 | C++ | 28.666666 | 75 | 0.705717 |
gazebosim/gz-omni/source/ignition_live/Material.cpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include "Material.hpp"
#include <ignition/common/Console.hh>
#include <ignition/math/Color.hh>
#include <pxr/usd/usd/tokens.h>
#include <pxr/usd/usdGeom/gprim.h>
#include <pxr/usd/usdShade/material.h>
#include <pxr/usd/usdShade/materialBindingAPI.h>
#include <iostream>
#include <map>
#include <memory>
#include <string>
#include <OmniClient.h>
namespace ignition
{
namespace omniverse
{
/// \brief Return the full path of a resource URL.
/// If the resource is a fuel URI, try to find the file in the local cache.
/// \param[in] _fullPath URI of the resource
/// \return The local path of the resource, or an empty string on failure.
std::string checkURI(const std::string &_fullPath)
{
// TODO (ahcorde): This code is duplicated in the USD converter (sdformat)
ignition::common::URI uri(_fullPath);
std::string fullPath = _fullPath;
std::string home;
if (!ignition::common::env("HOME", home, false))
{
ignwarn << "The HOME environment variable was not defined, "
<< "so the resource [" << fullPath << "] could not be found\n";
return "";
}
if (uri.Scheme() == "http" || uri.Scheme() == "https")
{
auto systemPaths = ignition::common::systemPaths();
std::vector<std::string> tokens = ignition::common::split(uri.Path().Str(), "/");
std::string server = tokens[0];
std::string versionServer = tokens[1];
std::string owner = ignition::common::lowercase(tokens[2]);
std::string type = ignition::common::lowercase(tokens[3]);
std::string modelName = ignition::common::lowercase(tokens[4]);
std::string modelVersion = ignition::common::lowercase(tokens[5]);
fullPath = ignition::common::joinPaths(
home, ".ignition", "fuel", server, owner, type, modelName, modelVersion);
systemPaths->AddFilePaths(fullPath);
for (std::size_t i = 7; i < tokens.size(); i++)
{
fullPath = ignition::common::joinPaths(
fullPath, ignition::common::lowercase(tokens[i]));
systemPaths->AddFilePaths(fullPath);
}
}
return fullPath;
}
/// \brief Copy a material file into the stage directory
/// \param[in] _path relative path where the copy will be located
/// \param[in] _fullPath full path of the file to copy
/// \param[in] _stageDirUrl stage directory URL to copy materials to
/// \return true if the file was copied
bool copyMaterial(
const std::string &_path,
const std::string &_fullPath,
const std::string &_stageDirUrl)
{
if (_path.empty() || _fullPath.empty())
{
return false;
}
if (!omniClientWaitFor(omniClientCopy(
_fullPath.c_str(),
std::string(_stageDirUrl + "/" + _path).c_str(),
nullptr,
nullptr), 1000))
{
ignerr << "omniClientCopy timeout. Not able to copy file ["
<< _fullPath << "] in nucleus ["
<< std::string(_stageDirUrl + "/" + _path) << "]." << std::endl;
return false;
}
return true;
}
/// \brief Create the path to copy the material
/// \param[in] _uri full path of the file to copy
/// \return A relative path to save the material, the path looks like:
/// materials/textures/<filename with extension>
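/// e.g. a URI ending in "albedo.png" (hypothetical name) maps to
/// "./materials/textures/albedo.png"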
std::string getMaterialCopyPath(const std::string &_uri)
{
return ignition::common::joinPaths(
".",
"materials",
"textures",
ignition::common::basename(_uri));
}
/// \brief Fill Material shader attributes and properties
/// \param[in] _prim USD primitive
/// \param[in] _name Name of the field attribute or property
/// \param[in] _vType Type of the field
/// \param[in] _value Value of the field
/// \param[in] _customData Custom data to set the field
/// \param[in] _displayName Display name
/// \param[in] _displayGroup Display group
/// \param[in] _doc Documentation of the field
/// \param[in] _colorSpace if the material is a texture, we can specify the
/// colorSpace of the image
template <typename T>
void CreateMaterialInput(
const pxr::UsdPrim &_prim, const std::string &_name,
const pxr::SdfValueTypeName &_vType, T _value,
const std::map<pxr::TfToken, pxr::VtValue> &_customData,
const pxr::TfToken &_displayName = pxr::TfToken(""),
const pxr::TfToken &_displayGroup = pxr::TfToken(""),
const std::string &_doc = "",
const pxr::TfToken &_colorSpace = pxr::TfToken(""))
{
auto shader = pxr::UsdShadeShader(_prim);
if (shader)
{
pxr::SdfValueTypeName vTypeName;
if (_vType.IsScalar())
{
vTypeName = _vType.GetScalarType();
}
else if (_vType.IsArray())
{
vTypeName = _vType.GetArrayType();
}
auto surfaceInput = shader.CreateInput(pxr::TfToken(_name), vTypeName);
surfaceInput.Set(_value);
auto attr = surfaceInput.GetAttr();
for (const auto &[key, customValue] : _customData)
{
attr.SetCustomDataByKey(key, customValue);
}
if (!_displayName.GetString().empty())
{
attr.SetDisplayName(_displayName);
}
if (!_displayGroup.GetString().empty())
{
attr.SetDisplayGroup(_displayGroup);
}
if (!_doc.empty())
{
attr.SetDocumentation(_doc);
}
if (!_colorSpace.GetString().empty())
{
attr.SetColorSpace(_colorSpace);
}
}
else
{
ignerr << "Not able to convert the prim to a UsdShadeShader" << std::endl;
}
}
/// \param[in] _stageDirUrl stage directory URL to copy materials if required
bool SetMaterial(const pxr::UsdGeomGprim &_gprim,
const ignition::msgs::Visual &_visualMsg,
const pxr::UsdStageRefPtr &_stage,
const std::string &_stageDirUrl)
{
if (!_visualMsg.has_material())
{
return true;
}
const std::string mtlPath = "/Looks/Material_" + _visualMsg.name() + "_" +
std::to_string(_visualMsg.id());
pxr::UsdShadeMaterial material =
pxr::UsdShadeMaterial::Define(_stage, pxr::SdfPath(mtlPath));
auto usdShader =
pxr::UsdShadeShader::Define(_stage, pxr::SdfPath(mtlPath + "/Shader"));
auto shaderPrim = usdShader.GetPrim();
auto shaderOut =
pxr::UsdShadeConnectableAPI(shaderPrim)
.CreateOutput(pxr::TfToken("out"), pxr::SdfValueTypeNames->Token);
material.CreateSurfaceOutput(pxr::TfToken("mdl")).ConnectToSource(shaderOut);
material.CreateVolumeOutput(pxr::TfToken("mdl")).ConnectToSource(shaderOut);
material.CreateDisplacementOutput(pxr::TfToken("mdl"))
.ConnectToSource(shaderOut);
usdShader.GetImplementationSourceAttr().Set(pxr::UsdShadeTokens->sourceAsset);
usdShader.SetSourceAsset(pxr::SdfAssetPath("OmniPBR.mdl"),
pxr::TfToken("mdl"));
usdShader.SetSourceAssetSubIdentifier(pxr::TfToken("OmniPBR"),
pxr::TfToken("mdl"));
std::map<pxr::TfToken, pxr::VtValue> customDataDiffuse = {
{pxr::TfToken("default"), pxr::VtValue(pxr::GfVec3f(0.2, 0.2, 0.2))},
{pxr::TfToken("range:max"),
pxr::VtValue(pxr::GfVec3f(100000, 100000, 100000))},
{pxr::TfToken("range:min"), pxr::VtValue(pxr::GfVec3f(0, 0, 0))}};
ignition::math::Color diffuse(
_visualMsg.material().diffuse().r(), _visualMsg.material().diffuse().g(),
_visualMsg.material().diffuse().b(), _visualMsg.material().diffuse().a());
CreateMaterialInput<pxr::GfVec3f>(
shaderPrim, "diffuse_color_constant", pxr::SdfValueTypeNames->Color3f,
pxr::GfVec3f(diffuse.R(), diffuse.G(), diffuse.B()), customDataDiffuse,
pxr::TfToken("Base Color"), pxr::TfToken("Albedo"),
"This is the base color");
std::map<pxr::TfToken, pxr::VtValue> customDataEmissive = {
{pxr::TfToken("default"), pxr::VtValue(pxr::GfVec3f(1, 0.1, 0.1))},
{pxr::TfToken("range:max"),
pxr::VtValue(pxr::GfVec3f(100000, 100000, 100000))},
{pxr::TfToken("range:min"), pxr::VtValue(pxr::GfVec3f(0, 0, 0))}};
ignition::math::Color emissive(_visualMsg.material().emissive().r(),
_visualMsg.material().emissive().g(),
_visualMsg.material().emissive().b(),
_visualMsg.material().emissive().a());
CreateMaterialInput<pxr::GfVec3f>(
shaderPrim, "emissive_color", pxr::SdfValueTypeNames->Color3f,
pxr::GfVec3f(emissive.R(), emissive.G(), emissive.B()),
customDataEmissive, pxr::TfToken("Emissive Color"),
pxr::TfToken("Emissive"), "The emission color");
std::map<pxr::TfToken, pxr::VtValue> customDataEnableEmission = {
{pxr::TfToken("default"), pxr::VtValue(0)}};
CreateMaterialInput<bool>(
shaderPrim, "enable_emission", pxr::SdfValueTypeNames->Bool,
emissive.A() > 0, customDataEnableEmission,
pxr::TfToken("Enable Emissive"), pxr::TfToken("Emissive"),
"Enables the emission of light from the material");
std::map<pxr::TfToken, pxr::VtValue> customDataIntensity = {
{pxr::TfToken("default"), pxr::VtValue(40)},
{pxr::TfToken("range:max"), pxr::VtValue(100000)},
{pxr::TfToken("range:min"), pxr::VtValue(0)}};
CreateMaterialInput<float>(
shaderPrim, "emissive_intensity", pxr::SdfValueTypeNames->Float,
emissive.A(), customDataIntensity, pxr::TfToken("Emissive Intensity"),
pxr::TfToken("Emissive"), "Intensity of the emission");
if (_visualMsg.material().has_pbr())
{
auto pbr = _visualMsg.material().pbr();
std::map<pxr::TfToken, pxr::VtValue> customDataMetallicConstant =
{
{pxr::TfToken("default"), pxr::VtValue(0.5)},
{pxr::TfToken("range:max"), pxr::VtValue(1)},
{pxr::TfToken("range:min"), pxr::VtValue(0)}
};
CreateMaterialInput<float>(
shaderPrim,
"metallic_constant",
pxr::SdfValueTypeNames->Float,
pbr.metalness(),
customDataMetallicConstant,
pxr::TfToken("Metallic Amount"),
pxr::TfToken("Reflectivity"),
"Metallic Material");
std::map<pxr::TfToken, pxr::VtValue> customDataRoughnessConstant =
{
{pxr::TfToken("default"), pxr::VtValue(0.5)},
{pxr::TfToken("range:max"), pxr::VtValue(1)},
{pxr::TfToken("range:min"), pxr::VtValue(0)}
};
CreateMaterialInput<float>(
shaderPrim,
"reflection_roughness_constant",
pxr::SdfValueTypeNames->Float,
pbr.roughness(),
customDataRoughnessConstant,
pxr::TfToken("Roughness Amount"),
pxr::TfToken("Reflectivity"),
"Higher roughness values lead to more blurry reflections");
if (!pbr.albedo_map().empty())
{
std::map<pxr::TfToken, pxr::VtValue> customDataDiffuseTexture =
{
{pxr::TfToken("default"), pxr::VtValue(pxr::SdfAssetPath())},
};
std::string copyPath = getMaterialCopyPath(pbr.albedo_map());
std::string albedoMapURI = checkURI(pbr.albedo_map());
std::string fullnameAlbedoMap =
ignition::common::findFile(
ignition::common::basename(albedoMapURI));
if (fullnameAlbedoMap.empty())
{
fullnameAlbedoMap = pbr.albedo_map();
}
copyMaterial(copyPath, fullnameAlbedoMap, _stageDirUrl);
CreateMaterialInput<pxr::SdfAssetPath>(
shaderPrim,
"diffuse_texture",
pxr::SdfValueTypeNames->Asset,
pxr::SdfAssetPath(copyPath),
customDataDiffuseTexture,
pxr::TfToken("Base Map"),
pxr::TfToken("Albedo"),
"",
pxr::TfToken("auto"));
}
if (!pbr.metalness_map().empty())
{
std::map<pxr::TfToken, pxr::VtValue> customDataMetallnessTexture =
{
{pxr::TfToken("default"), pxr::VtValue(pxr::SdfAssetPath())},
};
std::string copyPath = getMaterialCopyPath(pbr.metalness_map());
std::string fullnameMetallnessMap =
ignition::common::findFile(
ignition::common::basename(pbr.metalness_map()));
if (fullnameMetallnessMap.empty())
{
fullnameMetallnessMap = pbr.metalness_map();
}
copyMaterial(copyPath, fullnameMetallnessMap, _stageDirUrl);
CreateMaterialInput<pxr::SdfAssetPath>(
shaderPrim,
"metallic_texture",
pxr::SdfValueTypeNames->Asset,
pxr::SdfAssetPath(copyPath),
customDataMetallnessTexture,
pxr::TfToken("Metallic Map"),
pxr::TfToken("Reflectivity"),
"",
pxr::TfToken("raw"));
}
if (!pbr.normal_map().empty())
{
std::map<pxr::TfToken, pxr::VtValue> customDataNormalTexture =
{
{pxr::TfToken("default"), pxr::VtValue(pxr::SdfAssetPath())},
};
std::string copyPath = getMaterialCopyPath(pbr.normal_map());
std::string fullnameNormalMap =
ignition::common::findFile(
ignition::common::basename(pbr.normal_map()));
if (fullnameNormalMap.empty())
{
fullnameNormalMap = pbr.normal_map();
}
copyMaterial(copyPath, fullnameNormalMap, _stageDirUrl);
CreateMaterialInput<pxr::SdfAssetPath>(
shaderPrim,
"normalmap_texture",
pxr::SdfValueTypeNames->Asset,
pxr::SdfAssetPath(copyPath),
customDataNormalTexture,
pxr::TfToken("Normal Map"),
pxr::TfToken("Normal"),
"",
pxr::TfToken("raw"));
}
if (!pbr.roughness_map().empty())
{
std::map<pxr::TfToken, pxr::VtValue> customDataRoughnessTexture =
{
{pxr::TfToken("default"), pxr::VtValue(pxr::SdfAssetPath())},
};
std::string copyPath = getMaterialCopyPath(pbr.roughness_map());
std::string fullnameRoughnessMap =
ignition::common::findFile(
ignition::common::basename(pbr.roughness_map()));
if (fullnameRoughnessMap.empty())
{
fullnameRoughnessMap = pbr.roughness_map();
}
copyMaterial(copyPath, fullnameRoughnessMap, _stageDirUrl);
CreateMaterialInput<pxr::SdfAssetPath>(
shaderPrim,
"reflectionroughness_texture",
pxr::SdfValueTypeNames->Asset,
pxr::SdfAssetPath(copyPath),
customDataRoughnessTexture,
pxr::TfToken("RoughnessMap Map"),
pxr::TfToken("RoughnessMap"),
"",
pxr::TfToken("raw"));
std::map<pxr::TfToken, pxr::VtValue>
customDataRoughnessTextureInfluence =
{
{pxr::TfToken("default"), pxr::VtValue(0)},
{pxr::TfToken("range:max"), pxr::VtValue(1)},
{pxr::TfToken("range:min"), pxr::VtValue(0)}
};
CreateMaterialInput<bool>(
shaderPrim,
"reflection_roughness_texture_influence",
pxr::SdfValueTypeNames->Bool,
true,
customDataRoughnessTextureInfluence,
pxr::TfToken("Roughness Map Influence"),
pxr::TfToken("Reflectivity"),
"",
pxr::TfToken("raw"));
}
}
pxr::UsdShadeMaterialBindingAPI(_gprim).Bind(material);
return true;
}
} // namespace omniverse
} // namespace ignition
| 15,400 | C++ | 33.224444 | 85 | 0.633701 |
gazebosim/gz-omni/source/ignition_live/FUSDNoticeListener.cpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include "FUSDNoticeListener.hpp"
#include "GetOp.hpp"
#include <ignition/common/Console.hh>
#include <pxr/usd/sdf/path.h>
#include <pxr/usd/usd/notice.h>
#include <pxr/usd/usd/primRange.h>
#include <pxr/usd/usdGeom/sphere.h>
#include <pxr/usd/usdGeom/cube.h>
#include <pxr/usd/usdGeom/cylinder.h>
#include <ignition/transport/Node.hh>
#include <ignition/msgs/model.pb.h>
#include <sdf/Collision.hh>
#include <sdf/Geometry.hh>
#include <sdf/Root.hh>
#include <sdf/Link.hh>
#include <sdf/Model.hh>
#include <sdf/Sphere.hh>
#include <sdf/Visual.hh>
namespace ignition
{
namespace omniverse
{
class FUSDNoticeListener::Implementation
{
public:
void ParseCube(const pxr::UsdPrim &_prim, sdf::Link &_link);
void ParseCylinder(const pxr::UsdPrim &_prim, sdf::Link &_link);
void ParseSphere(const pxr::UsdPrim &_prim, sdf::Link &_link);
bool ParsePrim(const pxr::UsdPrim &_prim, sdf::Link &_link)
{
if (_prim.IsA<pxr::UsdGeomSphere>())
{
ParseSphere(_prim, _link);
return true;
}
else if (_prim.IsA<pxr::UsdGeomCylinder>())
{
// Cylinder parsing is not implemented yet (ParseCylinder is a stub), so
// fall through and let CreateSDF keep searching the children.
ParseCylinder(_prim, _link);
}
return false;
}
void CreateSDF(sdf::Link &_link, const pxr::UsdPrim &_prim)
{
if (!_prim)
return;
if (ParsePrim(_prim, _link))
{
return;
}
else
{
auto children = _prim.GetChildren();
for (const pxr::UsdPrim &childPrim : children)
{
if (ParsePrim(childPrim, _link))
{
return;
}
else
{
CreateSDF(_link, childPrim);
}
}
}
}
void jointStateCb(const ignition::msgs::Model &_msg);
std::shared_ptr<ThreadSafe<pxr::UsdStageRefPtr>> stage;
std::string worldName;
std::unordered_map<std::string, transport::Node::Publisher> revoluteJointPublisher;
/// \brief Ignition communication node.
public: transport::Node node;
Simulator simulatorPoses;
std::mutex jointStateMsgMutex;
std::unordered_map<std::string, double> jointStateMap;
std::unordered_map<std::string, uint32_t> * entitiesByName;
};
void FUSDNoticeListener::Implementation::ParseCube(
const pxr::UsdPrim &_prim, sdf::Link &_link)
{
// double size;
// auto variant_cylinder = pxr::UsdGeomCube(_prim);
// variant_cylinder.GetSizeAttr().Get(&size);
}
void FUSDNoticeListener::Implementation::ParseCylinder(
const pxr::UsdPrim &_prim, sdf::Link &_link)
{
// auto variant_cylinder = pxr::UsdGeomCylinder(_prim);
// double radius;
// double height;
// variant_cylinder.GetRadiusAttr().Get(&radius);
// variant_cylinder.GetHeightAttr().Get(&height);
}
void FUSDNoticeListener::Implementation::ParseSphere(
const pxr::UsdPrim &_prim, sdf::Link &_link)
{
double radius;
auto variant_sphere = pxr::UsdGeomSphere(_prim);
variant_sphere.GetRadiusAttr().Get(&radius);
sdf::Visual visual;
sdf::Collision collision;
sdf::Geometry geom;
sdf::Sphere sphere;
geom.SetType(sdf::GeometryType::SPHERE);
sphere.SetRadius(radius);
geom.SetSphereShape(sphere);
visual.SetName("sphere_visual");
visual.SetGeom(geom);
collision.SetName("sphere_collision");
collision.SetGeom(geom);
_link.AddVisual(visual);
_link.AddCollision(collision);
}
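// For reference, the link built above serialises (via sdf::Root::ToElement
// in Handle below) to roughly the following SDF, for a hypothetical radius
// of 1:
//
//   <link name="...">
//     <visual name="sphere_visual">
//       <geometry><sphere><radius>1</radius></sphere></geometry>
//     </visual>
//     <collision name="sphere_collision">...</collision>
//   </link>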
FUSDNoticeListener::FUSDNoticeListener(
std::shared_ptr<ThreadSafe<pxr::UsdStageRefPtr>> &_stage,
const std::string &_worldName,
Simulator _simulatorPoses,
std::unordered_map<std::string, uint32_t> &_entitiesByName)
: dataPtr(ignition::utils::MakeUniqueImpl<Implementation>())
{
this->dataPtr->stage = _stage;
this->dataPtr->worldName = _worldName;
this->dataPtr->simulatorPoses = _simulatorPoses;
this->dataPtr->entitiesByName = &_entitiesByName;
std::string jointStateTopic = "/joint_states";
this->dataPtr->node.Subscribe(
jointStateTopic,
&FUSDNoticeListener::Implementation::jointStateCb,
this->dataPtr.get());
}
void FUSDNoticeListener::Implementation::jointStateCb(
const ignition::msgs::Model &_msg)
{
std::lock_guard<std::mutex> lock(this->jointStateMsgMutex);
for(int i = 0; i < _msg.joint_size(); ++i)
{
this->jointStateMap[_msg.joint(i).name()] =
_msg.joint(i).axis1().position();
}
}
void FUSDNoticeListener::Handle(
const class pxr::UsdNotice::ObjectsChanged &ObjectsChanged)
{
auto stage = this->dataPtr->stage->Lock();
for (const pxr::SdfPath &objectsChanged : ObjectsChanged.GetResyncedPaths())
{
ignmsg << "Resynced Path: " << objectsChanged.GetText() << std::endl;
auto modelUSD = stage->GetPrimAtPath(objectsChanged);
std::string primName = modelUSD.GetName();
if (primName.find("ROS_") != std::string::npos ||
primName.find("PhysicsScene") != std::string::npos)
{
continue;
}
if (modelUSD)
{
std::string strPath = objectsChanged.GetText();
// Ignore link/visual/geometry sub-prims; only whole models are created
// from resync notices.
if (strPath.find("_link") != std::string::npos
|| strPath.find("_visual") != std::string::npos
|| strPath.find("geometry") != std::string::npos) {
return;
}
auto it = this->dataPtr->entitiesByName->find(modelUSD.GetName().GetString());
if (it != this->dataPtr->entitiesByName->end())
{
continue;
}
// TODO: this scan has no effect as written; `continue` only skips to the
// next prim. It presumably intended to skip creation when a prim with the
// same name already exists on the stage.
auto range = pxr::UsdPrimRange::Stage(*stage);
for (auto const &prim : range)
{
if (prim.GetName().GetString() == primName)
{
continue;
}
}
sdf::Root root;
sdf::Model model;
model.SetName(modelUSD.GetPath().GetName());
model.SetRawPose(ignition::math::Pose3d());
sdf::Link link;
link.SetName(modelUSD.GetPath().GetName());
this->dataPtr->CreateSDF(link, modelUSD);
model.AddLink(link);
root.SetModel(model);
// Prepare the input parameters.
ignition::msgs::EntityFactory req;
req.set_sdf(root.ToElement()->ToString(""));
req.set_name(modelUSD.GetPath().GetName());
req.set_allow_renaming(false);
igndbg << "root.ToElement()->ToString("") "
<< root.ToElement()->ToString("") << '\n';
ignition::msgs::Boolean rep;
bool result;
unsigned int timeout = 5000;
bool executed = this->dataPtr->node.Request(
"/world/" + this->dataPtr->worldName + "/create",
req, timeout, rep, result);
if (executed)
{
if (rep.data())
{
igndbg << "Model was inserted [" << modelUSD.GetPath().GetName()
<< "]" << '\n';
}
else
{
igndbg << "Error model was not inserted" << '\n';
}
}
}
}
ignition::msgs::Pose_V req;
if (this->dataPtr->simulatorPoses == Simulator::IsaacSim)
{
// This loop walks the stage to find revolute joints. For each one, the
// latest received joint state is republished as a position command for
// the corresponding ignition joint.
auto range = pxr::UsdPrimRange::Stage(*stage);
{
std::lock_guard<std::mutex> lock(this->dataPtr->jointStateMsgMutex);
for (auto const &prim : range)
{
std::string primType = prim.GetPrimTypeInfo().GetTypeName().GetText();
if (primType == std::string("PhysicsRevoluteJoint"))
{
std::string topic = transport::TopicUtils::AsValidTopic(
std::string("/model/") + std::string("panda") +
std::string("/joint/") + prim.GetPath().GetName() +
std::string("/0/cmd_pos"));
auto pub = this->dataPtr->revoluteJointPublisher.find(topic);
if (pub == this->dataPtr->revoluteJointPublisher.end())
{
// First time this joint is seen: advertise the command topic. The
// position is published from the next notice on.
this->dataPtr->revoluteJointPublisher[topic] =
this->dataPtr->node.Advertise<msgs::Double>(topic);
}
else
{
msgs::Double cmd;
cmd.set_data(this->dataPtr->jointStateMap[prim.GetName()]);
pub->second.Publish(cmd);
}
}
}
}
for (const pxr::SdfPath &objectsChanged :
ObjectsChanged.GetChangedInfoOnlyPaths())
{
if (std::string(objectsChanged.GetText()) == "/")
continue;
igndbg << "path " << objectsChanged.GetText() << std::endl;
auto modelUSD = stage->GetPrimAtPath(objectsChanged.GetParentPath());
auto property = modelUSD.GetPropertyAtPath(objectsChanged);
std::string strProperty = property.GetBaseName().GetText();
if (strProperty == "radius")
{
double radius;
auto attribute = modelUSD.GetAttributeAtPath(objectsChanged);
attribute.Get(&radius);
}
if (strProperty == "translate")
{
auto xform = pxr::UsdGeomXformable(modelUSD);
auto transforms = GetOp(xform);
auto currentPrim = modelUSD;
ignition::math::Quaterniond q(
transforms.rotXYZ[0],
transforms.rotXYZ[1],
transforms.rotXYZ[2]);
if (currentPrim.GetName() == "geometry")
{
currentPrim = currentPrim.GetParent();
auto visualXform = pxr::UsdGeomXformable(currentPrim);
auto visualOp = GetOp(visualXform);
transforms.position += visualOp.position;
ignition::math::Quaterniond qX, qY, qZ;
ignition::math::Angle angleX(IGN_DTOR(visualOp.rotXYZ[0]));
ignition::math::Angle angleY(IGN_DTOR(visualOp.rotXYZ[1]));
ignition::math::Angle angleZ(IGN_DTOR(visualOp.rotXYZ[2]));
qX = ignition::math::Quaterniond(angleX.Normalized().Radian(), 0, 0);
qY = ignition::math::Quaterniond(0, angleY.Normalized().Radian(), 0);
qZ = ignition::math::Quaterniond(0, 0, angleZ.Normalized().Radian());
q = ((q * qX) * qY) * qZ;
transforms.scale = pxr::GfVec3f(
transforms.scale[0] * visualOp.scale[0],
transforms.scale[1] * visualOp.scale[1],
transforms.scale[2] * visualOp.scale[2]);
}
auto currentPrimName = currentPrim.GetName().GetString();
int substrIndex = currentPrimName.size() - std::string("_visual").size();
if (substrIndex >= 0 && substrIndex < currentPrimName.size())
{
if (currentPrimName.substr(substrIndex).find("_visual") !=
std::string::npos)
{
currentPrim = currentPrim.GetParent();
auto linkXform = pxr::UsdGeomXformable(currentPrim);
auto linkOp = GetOp(linkXform);
transforms.position += linkOp.position;
ignition::math::Quaterniond qX, qY, qZ;
ignition::math::Angle angleX(IGN_DTOR(linkOp.rotXYZ[0]));
ignition::math::Angle angleY(IGN_DTOR(linkOp.rotXYZ[1]));
ignition::math::Angle angleZ(IGN_DTOR(linkOp.rotXYZ[2]));
qX = ignition::math::Quaterniond(angleX.Normalized().Radian(), 0, 0);
qY = ignition::math::Quaterniond(0, angleY.Normalized().Radian(), 0);
qZ = ignition::math::Quaterniond(0, 0, angleZ.Normalized().Radian());
q = ((q * qX) * qY) * qZ;
transforms.scale = pxr::GfVec3f(
transforms.scale[0] * linkOp.scale[0],
transforms.scale[1] * linkOp.scale[1],
transforms.scale[2] * linkOp.scale[2]);
}
}
currentPrimName = currentPrim.GetName().GetString();
substrIndex = currentPrimName.size() - std::string("_link").size();
if (substrIndex >= 0 && substrIndex < currentPrimName.size())
{
if (currentPrimName.substr(substrIndex).find("_link") !=
std::string::npos)
{
currentPrim = currentPrim.GetParent();
auto modelXform = pxr::UsdGeomXformable(currentPrim);
auto modelOp = GetOp(modelXform);
transforms.position += modelOp.position;
ignition::math::Quaterniond qX, qY, qZ;
ignition::math::Angle angleX(IGN_DTOR(modelOp.rotXYZ[0]));
ignition::math::Angle angleY(IGN_DTOR(modelOp.rotXYZ[1]));
ignition::math::Angle angleZ(IGN_DTOR(modelOp.rotXYZ[2]));
qX = ignition::math::Quaterniond(angleX.Normalized().Radian(), 0, 0);
qY = ignition::math::Quaterniond(0, angleY.Normalized().Radian(), 0);
qZ = ignition::math::Quaterniond(0, 0, angleZ.Normalized().Radian());
q = ((q * qX) * qY) * qZ;
transforms.scale = pxr::GfVec3f(
transforms.scale[0] * modelOp.scale[0],
transforms.scale[1] * modelOp.scale[1],
transforms.scale[2] * modelOp.scale[2]);
}
}
std::size_t found = std::string(currentPrim.GetName()).find("_link");
if (found != std::string::npos)
continue;
found = std::string(currentPrim.GetName()).find("_visual");
if (found != std::string::npos)
continue;
auto poseMsg = req.add_pose();
poseMsg->set_name(currentPrim.GetName());
poseMsg->mutable_position()->set_x(transforms.position[0]);
poseMsg->mutable_position()->set_y(transforms.position[1]);
poseMsg->mutable_position()->set_z(transforms.position[2]);
poseMsg->mutable_orientation()->set_x(q.X());
poseMsg->mutable_orientation()->set_y(q.Y());
poseMsg->mutable_orientation()->set_z(q.Z());
poseMsg->mutable_orientation()->set_w(q.W());
}
}
if (req.pose_size() > 0)
{
bool result;
ignition::msgs::Boolean rep;
unsigned int timeout = 100;
bool executed = this->dataPtr->node.Request(
"/world/" + this->dataPtr->worldName + "/set_pose_vector",
req, timeout, rep, result);
if (executed)
{
if (!result)
ignerr << "Service call failed" << std::endl;
}
else
ignerr << "Service [/world/" << this->dataPtr->worldName
<< "/set_pose_vector] call timed out" << std::endl;
}
}
}
} // namespace omniverse
} // namespace ignition
| 14,498 | C++ | 31.582022 | 85 | 0.608222 |
gazebosim/gz-omni/source/ignition_live/Joint.hpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#ifndef IGNITION_OMNIVERSE_JOINT_HPP
#define IGNITION_OMNIVERSE_JOINT_HPP
#include <ignition/msgs/joint.pb.h>
#include <pxr/usd/usd/prim.h>
#include <pxr/usd/usd/stage.h>
namespace ignition::omniverse
{
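/// \brief Define a fixed joint prim at the given path of the stage.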
pxr::UsdPrim CreateFixedJoint(const std::string& _path,
const pxr::UsdStageRefPtr& _stage);
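/// \brief Define a revolute joint prim at the given path of the stage.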
pxr::UsdPrim CreateRevoluteJoint(const std::string& _path,
const pxr::UsdStageRefPtr& _stage);
} // namespace ignition::omniverse
#endif
| 1,123 | C++ | 31.114285 | 75 | 0.705254 |
gazebosim/gz-omni/source/ignition_live/Mesh.cpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include "Mesh.hpp"
#include <ignition/common/Console.hh>
#include <ignition/common/Mesh.hh>
#include <ignition/common/MeshManager.hh>
#include <ignition/common/SubMesh.hh>
#include <ignition/common/URI.hh>
#include <ignition/common/Util.hh>
#include <pxr/usd/usdGeom/xformCommonAPI.h>
namespace ignition::omniverse
{
bool endsWith(const std::string_view &str, const std::string_view &suffix)
{
return str.size() >= suffix.size() &&
0 == str.compare(str.size() - suffix.size(), suffix.size(), suffix);
}
inline std::string removeDash(const std::string &_str)
{
std::string result = _str;
std::replace(result.begin(), result.end(), '-', '_');
return result;
}
pxr::UsdGeomMesh UpdateMesh(const ignition::msgs::MeshGeom &_meshMsg,
const std::string &_path,
const pxr::UsdStageRefPtr &_stage)
{
ignition::common::URI uri(_meshMsg.filename());
std::string fullname;
std::string home;
if (!ignition::common::env("HOME", home, false))
{
ignerr << "The HOME environment variable was not defined, "
<< "so the resource [" << fullname << "] could not be found\n";
return pxr::UsdGeomMesh();
}
if (uri.Scheme() == "https" || uri.Scheme() == "http")
{
auto systemPaths = ignition::common::systemPaths();
std::vector<std::string> tokens = ignition::common::split(uri.Path().Str(), "/");
std::string server = tokens[0];
std::string versionServer = tokens[1];
std::string owner = ignition::common::lowercase(tokens[2]);
std::string type = ignition::common::lowercase(tokens[3]);
std::string modelName = ignition::common::lowercase(tokens[4]);
std::string modelVersion = ignition::common::lowercase(tokens[5]);
fullname = ignition::common::joinPaths(
home, ".ignition", "fuel", server, owner, type, modelName, modelVersion);
systemPaths->AddFilePaths(fullname);
for (std::size_t i = 7; i < tokens.size(); i++)
{
fullname = ignition::common::joinPaths(
fullname, ignition::common::lowercase(tokens[i]));
systemPaths->AddFilePaths(fullname);
}
}
else
{
fullname = ignition::common::findFile(_meshMsg.filename());
}
auto ignMesh = ignition::common::MeshManager::Instance()->Load(fullname);
// Some meshes are split into several submeshes. This loop checks whether
// the USD path contains a submesh name; in that case one USD mesh is
// created per matching submesh.
bool isUSDPathInSubMeshName = false;
for (unsigned int i = 0; i < ignMesh->SubMeshCount(); ++i)
{
auto subMesh = ignMesh->SubMeshByIndex(i).lock();
if (ignMesh->SubMeshCount() != 1)
{
std::string pathLowerCase = ignition::common::lowercase(_path);
std::string subMeshLowerCase =
ignition::common::lowercase(subMesh->Name());
if (pathLowerCase.find(subMeshLowerCase) != std::string::npos)
{
isUSDPathInSubMeshName = true;
break;
}
}
}
for (unsigned int i = 0; i < ignMesh->SubMeshCount(); ++i)
{
pxr::VtArray<pxr::GfVec3f> meshPoints;
pxr::VtArray<pxr::GfVec2f> uvs;
pxr::VtArray<pxr::GfVec3f> normals;
pxr::VtArray<int> faceVertexIndices;
pxr::VtArray<int> faceVertexCounts;
auto subMesh = ignMesh->SubMeshByIndex(i).lock();
if (!subMesh)
{
ignerr << "Unable to get a shared pointer to submesh at index [" << i
<< "] of parent mesh [" << ignMesh->Name() << "]" << std::endl;
return pxr::UsdGeomMesh();
}
if (isUSDPathInSubMeshName)
{
if (ignMesh->SubMeshCount() != 1)
{
std::string pathLowerCase = ignition::common::lowercase(_path);
std::string subMeshLowerCase =
ignition::common::lowercase(subMesh->Name());
if (pathLowerCase.find(subMeshLowerCase) == std::string::npos)
{
continue;
}
}
}
// copy the submesh's vertices to the usd mesh's "points" array
for (unsigned int v = 0; v < subMesh->VertexCount(); ++v)
{
const auto &vertex = subMesh->Vertex(v);
meshPoints.push_back(pxr::GfVec3f(vertex.X(), vertex.Y(), vertex.Z()));
}
// copy the submesh's indices to the usd mesh's "faceVertexIndices" array
for (unsigned int j = 0; j < subMesh->IndexCount(); ++j)
faceVertexIndices.push_back(subMesh->Index(j));
// copy the submesh's texture coordinates
for (unsigned int j = 0; j < subMesh->TexCoordCount(); ++j)
{
const auto &uv = subMesh->TexCoord(j);
uvs.push_back(pxr::GfVec2f(uv[0], 1 - uv[1]));
}
// copy the submesh's normals
for (unsigned int j = 0; j < subMesh->NormalCount(); ++j)
{
const auto &normal = subMesh->Normal(j);
normals.push_back(pxr::GfVec3f(normal[0], normal[1], normal[2]));
}
// set the usd mesh's "faceVertexCounts" array according to
// the submesh primitive type
// TODO(adlarkin) support all primitive types. The computations are more
// involved for LINESTRIPS, TRIFANS, and TRISTRIPS. I will need to spend
// some time deriving what the number of faces for these primitive types
// are, given the number of indices. The "faceVertexCounts" array will
// also not have the same value for every element in the array for these
// more complex primitive types (see the TODO note in the for loop below)
unsigned int verticesPerFace = 0;
unsigned int numFaces = 0;
switch (subMesh->SubMeshPrimitiveType())
{
case ignition::common::SubMesh::PrimitiveType::POINTS:
verticesPerFace = 1;
numFaces = subMesh->IndexCount();
break;
case ignition::common::SubMesh::PrimitiveType::LINES:
verticesPerFace = 2;
numFaces = subMesh->IndexCount() / 2;
break;
case ignition::common::SubMesh::PrimitiveType::TRIANGLES:
verticesPerFace = 3;
numFaces = subMesh->IndexCount() / 3;
break;
case ignition::common::SubMesh::PrimitiveType::LINESTRIPS:
case ignition::common::SubMesh::PrimitiveType::TRIFANS:
case ignition::common::SubMesh::PrimitiveType::TRISTRIPS:
default:
ignerr << "Submesh " << subMesh->Name()
<< " has a primitive type that is not supported." << std::endl;
return pxr::UsdGeomMesh();
}
// TODO(adlarkin) update this loop to allow for varying element
// values in the array (see TODO note above). Right now, the
// array only allows for all elements to have one value, which in
// this case is "verticesPerFace"
for (unsigned int n = 0; n < numFaces; ++n)
faceVertexCounts.push_back(verticesPerFace);
std::string primName = _path + "/" + subMesh->Name();
primName = removeDash(primName);
if (endsWith(primName, "/"))
{
primName.erase(primName.size() - 1);
}
// TODO: primName is computed above but unused; the mesh is defined
// directly at _path, so multiple submeshes would overwrite the same prim
// when the submesh name is not already part of the path.
auto usdMesh = pxr::UsdGeomMesh::Define(_stage, pxr::SdfPath(_path));
usdMesh.CreatePointsAttr().Set(meshPoints);
usdMesh.CreateFaceVertexIndicesAttr().Set(faceVertexIndices);
usdMesh.CreateFaceVertexCountsAttr().Set(faceVertexCounts);
auto coordinates = usdMesh.CreatePrimvar(
pxr::TfToken("st"), pxr::SdfValueTypeNames->Float2Array,
pxr::UsdGeomTokens->vertex);
coordinates.Set(uvs);
usdMesh.CreateNormalsAttr().Set(normals);
usdMesh.SetNormalsInterpolation(pxr::TfToken("vertex"));
usdMesh.CreateSubdivisionSchemeAttr(pxr::VtValue(pxr::TfToken("none")));
const auto &meshMin = ignMesh->Min();
const auto &meshMax = ignMesh->Max();
pxr::VtArray<pxr::GfVec3f> extentBounds;
extentBounds.push_back(pxr::GfVec3f(meshMin.X(), meshMin.Y(), meshMin.Z()));
extentBounds.push_back(pxr::GfVec3f(meshMax.X(), meshMax.Y(), meshMax.Z()));
usdMesh.CreateExtentAttr().Set(extentBounds);
// TODO (ahcorde): Material inside the submesh
int materialIndex = subMesh->MaterialIndex();
if (materialIndex != -1)
{
auto material = ignMesh->MaterialByIndex(materialIndex);
// sdf::Material materialSdf = sdf::usd::convert(material);
// auto materialUSD = ParseSdfMaterial(&materialSdf, _stage);
// if(materialSdf.Emissive() != ignition::math::Color(0, 0, 0, 1)
// || materialSdf.Specular() != ignition::math::Color(0, 0, 0, 1)
// || materialSdf.PbrMaterial())
// {
// if (materialUSD)
// {
// pxr::UsdShadeMaterialBindingAPI(usdMesh).Bind(materialUSD);
// }
// }
}
pxr::UsdGeomXformCommonAPI meshXformAPI(usdMesh);
meshXformAPI.SetScale(pxr::GfVec3f(
_meshMsg.scale().x(), _meshMsg.scale().y(), _meshMsg.scale().z()));
return usdMesh;
}
return pxr::UsdGeomMesh();
}
} // namespace ignition::omniverse
| 9,358 | C++ | 34.721374 | 85 | 0.643941 |
gazebosim/gz-omni/source/ignition_live/main.cpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#include "GetOp.hpp"
#include "OmniverseConnect.hpp"
#include "Scene.hpp"
#include "SetOp.hpp"
#include "ThreadSafe.hpp"
#include <ignition/common/Console.hh>
#include <ignition/common/SystemPaths.hh>
#include <ignition/common/StringUtils.hh>
#include <ignition/utils/cli.hh>
#include <pxr/usd/sdf/path.h>
#include <pxr/usd/usd/prim.h>
#include <pxr/usd/usdGeom/xformCommonAPI.h>
#include <string>
using namespace ignition::omniverse;
constexpr double kTargetFps = 60;
constexpr std::chrono::duration<double> kUpdateRate(1 / kTargetFps);
int main(int argc, char* argv[])
{
CLI::App app("Ignition omniverse connector");
std::string destinationPath;
app.add_option("-p,--path", destinationPath,
// clang-format off
"Location of the omniverse stage. e.g. \"omniverse://localhost/Users/ignition/stage.usd\"")
// clang-format on
->required();
std::string worldName;
ignition::omniverse::Simulator simulatorPoses{
ignition::omniverse::Simulator::Ignition};
app.add_option("-w,--world", worldName, "Name of the ignition world")
->required();
std::map<std::string, ignition::omniverse::Simulator> map{
{"ignition", ignition::omniverse::Simulator::Ignition},
{"isaacsim", ignition::omniverse::Simulator::IsaacSim}};
app.add_option("--pose", simulatorPoses, "Which simulator will handle the poses")
->required()
->transform(CLI::CheckedTransformer(map, CLI::ignore_case));
app.add_flag_callback("-v,--verbose",
[]() { ignition::common::Console::SetVerbosity(4); });
CLI11_PARSE(app, argc, argv);
std::string ignGazeboResourcePath;
auto systemPaths = ignition::common::systemPaths();
ignition::common::env("IGN_GAZEBO_RESOURCE_PATH", ignGazeboResourcePath);
for (const auto& resourcePath :
ignition::common::Split(ignGazeboResourcePath, ':'))
{
systemPaths->AddFilePaths(resourcePath);
}
// Connect with omniverse
if (!StartOmniverse())
{
ignerr << "Not able to start Omniverse" << std::endl;
return -1;
}
// Open the USD model in Omniverse
const std::string stageUrl = [&]()
{
auto result = CreateOmniverseModel(destinationPath);
if (!result)
{
ignerr << result.Error() << std::endl;
exit(-1);
}
return result.Value();
}();
omniUsdLiveSetModeForUrl(stageUrl.c_str(),
OmniUsdLiveMode::eOmniUsdLiveModeEnabled);
PrintConnectedUsername(stageUrl);
Scene scene(worldName, stageUrl, simulatorPoses);
if (!scene.Init())
{
return -1;
}
auto lastUpdate = std::chrono::steady_clock::now();
// don't spam the console, show the fps only once a sec
auto nextShowFps =
lastUpdate.time_since_epoch() + std::chrono::duration<double>(1);
while (true)
{
std::this_thread::sleep_for((lastUpdate + kUpdateRate) -
std::chrono::steady_clock::now());
auto now = std::chrono::steady_clock::now();
if (now.time_since_epoch() > nextShowFps)
{
double curFps =
1 / std::chrono::duration<double>(now - lastUpdate).count();
nextShowFps = now.time_since_epoch() + std::chrono::duration<double>(1);
igndbg << "fps: " << curFps << std::endl;
}
lastUpdate = now;
scene.Save();
omniUsdLiveProcess();
}
return 0;
}
| 3,968 | C++ | 29.068182 | 108 | 0.662802 |
gazebosim/gz-omni/source/ignition_live/OmniClientpp.hpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
/**
* C++ wrappers for various omniclient apis
*/
#ifndef IGNITION_OMNIVERSE_OMNICLIENTPP_HPP
#define IGNITION_OMNIVERSE_OMNICLIENTPP_HPP
#include "Error.hpp"
#include <OmniClient.h>
#include <ostream>
#include <string>
namespace ignition::omniverse
{
/// \brief RAII wrapper to omniClientLock and omniClientUnlock
class OmniverseLock
{
public:
OmniverseLock(const std::string& _url);
~OmniverseLock();
OmniverseLock(const OmniverseLock&) = delete;
OmniverseLock(OmniverseLock&&) = delete;
OmniverseLock& operator=(const OmniverseLock&) = delete;
private:
const std::string url;
};
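// A minimal usage sketch (hypothetical URL); the lock is taken on
// construction and released when the object goes out of scope:
//
// {
// OmniverseLock lock("omniverse://localhost/Users/ignition/stage.usd");
// // ... edit the live stage while holding the lock ...
// } // omniClientUnlock runs here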
/// \brief Synchronous API for omniverse
class OmniverseSync
{
public:
template <typename T>
using MaybeError = MaybeError<T, OmniClientResult>;
static MaybeError<OmniClientListEntry> Stat(const std::string& url) noexcept;
};
} // namespace ignition::omniverse
#endif
| 1,507 | C++ | 23.721311 | 79 | 0.741871 |
gazebosim/gz-omni/source/ignition_live/FUSDLayerNoticeListener.hpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#ifndef IGNITION_OMNIVERSE_FUSDLAYERNOTICELISTENER_HPP
#define IGNITION_OMNIVERSE_FUSDLAYERNOTICELISTENER_HPP
#include "Scene.hpp"
#include "ThreadSafe.hpp"
#include <pxr/usd/usd/stage.h>
#include <ignition/common/Console.hh>
#include <ignition/utils/ImplPtr.hh>
namespace ignition
{
namespace omniverse
{
class FUSDLayerNoticeListener : public pxr::TfWeakBase
{
public:
FUSDLayerNoticeListener(
std::shared_ptr<ThreadSafe<pxr::UsdStageRefPtr>> &_stage,
const std::string& _worldName);
void HandleGlobalLayerReload(const pxr::SdfNotice::LayerDidReloadContent& n);
// Print some interesting info about the LayerNotice
void HandleRootOrSubLayerChange(
const class pxr::SdfNotice::LayersDidChangeSentPerLayer& _layerNotice,
const pxr::TfWeakPtr<pxr::SdfLayer>& _sender);
/// \internal
/// \brief Private data pointer
IGN_UTILS_UNIQUE_IMPL_PTR(dataPtr)
};
} // namespace omniverse
} // namespace ignition
#endif
| 1,577 | C++ | 27.178571 | 79 | 0.749524 |
gazebosim/gz-omni/source/ignition_live/FUSDNoticeListener.hpp | /*
* Copyright (C) 2022 Open Source Robotics Foundation
*
* Licensed under the Apache License, Version 2.0 (the "License");
* you may not use this file except in compliance with the License.
* You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS,
* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
* See the License for the specific language governing permissions and
* limitations under the License.
*
*/
#ifndef IGNITION_OMNIVERSE_FUSDNOTICELISTENER_HPP
#define IGNITION_OMNIVERSE_FUSDNOTICELISTENER_HPP
#include <memory>
#include <string>
#include "ThreadSafe.hpp"
#include "Scene.hpp"
#include <pxr/usd/usd/notice.h>
namespace ignition
{
namespace omniverse
{
class FUSDNoticeListener : public pxr::TfWeakBase
{
public:
FUSDNoticeListener(
std::shared_ptr<ThreadSafe<pxr::UsdStageRefPtr>> &_stage,
const std::string &_worldName,
Simulator _simulatorPoses,
std::unordered_map<std::string, uint32_t> &entitiesByName);
void Handle(const class pxr::UsdNotice::ObjectsChanged &ObjectsChanged);
/// \internal
/// \brief Private data pointer
IGN_UTILS_UNIQUE_IMPL_PTR(dataPtr)
};
} // namespace omniverse
} // namespace ignition
#endif
| 1,385 | C++ | 26.17647 | 75 | 0.740072 |
NVlabs/ACID/README.md | [![NVIDIA Source Code License](https://img.shields.io/badge/license-NSCL-blue.svg)](https://github.com/NVlabs/ACID/blob/master/LICENSE)
![Python 3.7](https://img.shields.io/badge/python-3.7-green.svg)
# ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation
### [Project Page](https://b0ku1.github.io/acid/) | [Paper](https://arxiv.org/abs/2203.06856)
<div style="text-align: center">
<img src="_media/model_figure.png" width="600"/>
</div>
This repository contains the codebase used in [**ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation**](https://b0ku1.github.io/acid/), which will appear in [RSS 2022](https://roboticsconference.org/program/papers/) and is nominated for Best Student Paper Award. Specifically, the repo contains code for:
* [**PlushSim**](./PlushSim/), the simulation environment used to generate all manipulation data.
* [**ACID model**](./ACID/), the implicit visual dynamics model's model and training code.
If you find our code or paper useful, please consider citing
```bibtex
@article{shen2022acid,
title={ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation},
author={Shen, Bokui and Jiang, Zhenyu and Choy, Christopher and J. Guibas, Leonidas and Savarese, Silvio and Anandkumar, Anima and Zhu, Yuke},
journal={Robotics: Science and Systems (RSS)},
year={2022}
}
```
# ACID model
Please see the [README](./ACID/README.md) for more detailed information.
# PlushSim
Please see the [README](./PlushSim/README.md) for more detailed information.
# License
Please check the [LICENSE](./LICENSE) file. ACID may be used non-commercially, meaning for research or evaluation purposes only. For business inquiries, please contact researchinquiries@nvidia.com.
| 1,794 | Markdown | 48.86111 | 337 | 0.758082 |
NVlabs/ACID/PlushSim/README.md | [![NVIDIA Source Code License](https://img.shields.io/badge/license-NSCL-blue.svg)](https://github.com/NVlabs/ACID/blob/master/LICENSE)
![Python 3.7](https://img.shields.io/badge/python-3.7-green.svg)
# PlushSim
<div style="text-align: center">
<img src="../_media/plushsim.png" width="600"/>
</div>
Our PlushSim simulation environment is based on [Omniverse Kit](https://docs.omniverse.nvidia.com/prod_kit/prod_kit.html). This codebase contains the docker image and the code to simulate and manipulate deformable objects.
## Prerequisites
Omniverse Kit has a set of hardware requirements. Specifically, it requires an RTX GPU (e.g. RTX 2080, RTX 30x0, Titan RTX, etc.). Also, 16GB+ of memory is recommended.
The codebase is tested on Linux Ubuntu 20.04.
## Getting the Docker Image
First, you need to install [Docker](https://docs.docker.com/engine/install/ubuntu/) and [NVIDIA Container Toolkit](https://github.com/NVIDIA/nvidia-docker) before proceeding.
After you have installed Docker and NVIDIA container toolkit, you can obtain the PlushSim Docker image from DockerHub, with command:
```
docker pull b0ku1/acid-docker:cleaned
```
## Preparing Simulation Assets
You can download the simulation assets `raw_assets.zip` at: [Google Drive](https://drive.google.com/file/d/1OO8Wi0PHF3ROmW8088JNOMJn4EcDLDPB/view?usp=sharing).
After you download it, unzip the assets within this directory. You should have a folder structure like:
```
PlushSim/
assets/
animals/
...
attic_clean/
...
```
## Generating Manipulation Trajectories
Generating manipulation data consists of two steps:
1. Start Docker image, and mount the correct directory.
2. Run script
To start the docker image with an interactive session, run the following command inside `PlushSim/`:
```
export PLUSHSIM_ROOT=$(pwd)
docker run -it -v $PLUSHSIM_ROOT:/result --gpus all b0ku1/acid-docker:cleaned bash
```
After entering the interactive session, you can run the following command to start generating manipulation trajectories:
```
./python.sh /result/scripts/data_gen_attic.py
```
The above script generates sample interaction sequences in `PlushSim/interaction_sequence`. `data_gen_attic.py` accepts various command line arguments; see the documentation in the Python script.
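For example, a customized run might look like the following (flag names and default values taken from the script's argument parser):
```
./python.sh /result/scripts/data_gen_attic.py --save_dir /result/interaction_sequence --num_interaction 18 --reset_every 6 --save_every 25
```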
## Visualizing the assets in GUI
To visualize the assets in the Omniverse GUI, you need to download and install [Omniverse](https://docs.omniverse.nvidia.com/prod_install-guide/prod_install-guide.html). The link contains NVIDIA's official installation instructions.
After you install Omniverse, you can open the `.usda` files in the assets folder. To run PlushSim's scripts outside of Docker (e.g. with your native Omniverse installation), you can find more information at [Omniverse Kit's Python Manual](https://docs.omniverse.nvidia.com/py/kit/index.html). For questions regarding Omniverse usage, please visit [NVIDIA developer forum](https://forums.developer.nvidia.com/c/omniverse/300).
## License
Please check the [LICENSE](../LICENSE) file. ACID may be used non-commercially, meaning for research or evaluation purposes only. For business inquiries, please contact researchinquiries@nvidia.com.
If you find our code or paper useful, please consider citing
```bibtex
@article{shen2022acid,
title={ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation},
author={Shen, Bokui and Jiang, Zhenyu and Choy, Christopher and J. Guibas, Leonidas and Savarese, Silvio and Anandkumar, Anima and Zhu, Yuke},
journal={Robotics: Science and Systems (RSS)},
year={2022}
}
``` | 3,640 | Markdown | 48.876712 | 425 | 0.765659 |
NVlabs/ACID/PlushSim/scripts/python_app.py | #!/usr/bin/env python
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
import carb
import omni.kit.app
import omni.kit
import os
import sys
import time
import asyncio
import argparse
DEFAULT_CONFIG = {
"width": 1024,
"height": 800,
"renderer": "PathTracing", # Can also be RayTracedLighting
"anti_aliasing": 3, # 3 for dlss, 2 for fxaa, 1 for taa, 0 to disable aa
"samples_per_pixel_per_frame": 64,
"denoiser": True,
"subdiv_refinement_level": 0,
"headless": True,
"max_bounces": 4,
"max_specular_transmission_bounces": 6,
"max_volume_bounces": 4,
"sync_loads": False,
"experience": f'{os.environ["EXP_PATH"]}/omni.bloky.python.kit',
}
class OmniKitHelper:
"""Helper class for launching OmniKit from a Python environment.
Launches and configures OmniKit and exposes useful functions.
Typical usage example:
.. highlight:: python
.. code-block:: python
config = {'width': 800, 'height': 600, 'renderer': 'PathTracing'}
kit = OmniKitHelper(config) # Start omniverse kit
# <Code to generate or load a scene>
kit.update() # Render a single frame"""
def __init__(self, config=DEFAULT_CONFIG):
"""The config variable is a dictionary containing the following entries
Args:
width (int): Width of the viewport and generated images. Defaults to 1024
height (int): Height of the viewport and generated images. Defaults to 800
renderer (str): Rendering mode, can be `RayTracedLighting` or `PathTracing`. Defaults to `PathTracing`
samples_per_pixel_per_frame (int): The number of samples to render per frame, used for `PathTracing` only. Defaults to 64
denoiser (bool): Enable this to use AI denoising to improve image quality. Defaults to True
subdiv_refinement_level (int): Number of subdivisons to perform on supported geometry. Defaults to 0
headless (bool): Disable UI when running. Defaults to True
max_bounces (int): Maximum number of bounces, used for `PathTracing` only. Defaults to 4
max_specular_transmission_bounces(int): Maximum number of bounces for specular or transmission, used for `PathTracing` only. Defaults to 6
max_volume_bounces(int): Maximum number of bounces for volumetric, used for `PathTracing` only. Defaults to 4
sync_loads (bool): When enabled, will pause rendering until all assets are loaded. Defaults to False
experience (str): The config json used to launch the application.
"""
# only import custom loop runner if we create this object
# from omni.kit.loop import _loop
# initialize vars
self._exiting = False
self._is_dirty_instance_mappings = True
self._previous_physics_dt = 1.0 / 60.0
self.config = DEFAULT_CONFIG
if config is not None:
self.config.update(config)
# Load app plugin
self._framework = carb.get_framework()
print(os.environ["CARB_APP_PATH"])
self._framework.load_plugins(
loaded_file_wildcards=["omni.kit.app.plugin"],
search_paths=[os.path.abspath(f'{os.environ["CARB_APP_PATH"]}/kit/plugins')],
)
print(DEFAULT_CONFIG)
# launch kit
self.last_update_t = time.time()
self.app = omni.kit.app.get_app()
self.kit_settings = None
self._start_app()
self.carb_settings = carb.settings.acquire_settings_interface()
self.setup_renderer(mode="default") # set rtx-defaults settings
self.setup_renderer(mode="non-default") # set rtx settings
self.timeline = omni.timeline.get_timeline_interface()
# Wait for new stage to open
new_stage_task = asyncio.ensure_future(omni.usd.get_context().new_stage_async())
print("OmniKitHelper Starting up ...")
while not new_stage_task.done():
time.sleep(0.001) # This sleep prevents a deadlock in certain cases
self.update()
self.update()
# Dock windows if they exist
main_dockspace = omni.ui.Workspace.get_window("DockSpace")
def dock_window(space, name, location):
window = omni.ui.Workspace.get_window(name)
if window and space:
window.dock_in(space, location)
return window
view = dock_window(main_dockspace, "Viewport", omni.ui.DockPosition.TOP)
self.update()
console = dock_window(view, "Console", omni.ui.DockPosition.BOTTOM)
prop = dock_window(view, "Property", omni.ui.DockPosition.RIGHT)
dock_window(view, "Main ToolBar", omni.ui.DockPosition.LEFT)
self.update()
dock_window(prop, "Render Settings", omni.ui.DockPosition.SAME)
self.update()
print("OmniKitHelper Startup Complete")
def _start_app(self):
args = [
os.path.abspath(__file__),
f'{self.config["experience"]}',
"--/persistent/app/viewport/displayOptions=0", # hide extra stuff in viewport
# Forces kit to not render until all USD files are loaded
f'--/rtx/materialDb/syncLoads={self.config["sync_loads"]}',
f'--/rtx/hydra/materialSyncLoads={self.config["sync_loads"]}',
f'--/omni.kit.plugin/syncUsdLoads={self.config["sync_loads"]}',
"--/app/content/emptyStageOnStart=False", # This is required due to a infinite loop but results in errors on launch
"--/app/hydraEngine/waitIdle=True",
"--/app/asyncRendering=False",
f'--/app/renderer/resolution/width={self.config["width"]}',
f'--/app/renderer/resolution/height={self.config["height"]}',
]
args.append(f"--portable")
args.append(f"--no-window")
args.append(f"--allow-root")
print(args)
self.app.startup("kit", f'{os.environ["CARB_APP_PATH"]}/kit', args)
def __del__(self):
if self._exiting is False and sys.meta_path is None:
print(
"\033[91m"
+ "ERROR: Python exiting while OmniKitHelper was still running, Please call shutdown() on the OmniKitHelper object to exit cleanly"
+ "\033[0m"
)
def shutdown(self):
self._exiting = True
print("Shutting Down OmniKitHelper...")
# We are exiting but something is still loading; wait for it to finish to avoid a deadlock
if self.is_loading():
print(" Waiting for USD resource operations to complete (this may take a few seconds)")
while self.is_loading():
self.app.update()
self.app.shutdown()
self._framework.unload_all_plugins()
print("Shutting Down Complete")
def get_stage(self):
"""Returns the current USD stage."""
return omni.usd.get_context().get_stage()
def set_setting(self, setting, value):
"""Convenience function to set settings.
Args:
setting (str): string representing the setting being changed
value: new value for the setting being changed, the type of this value must match its respective setting
"""
if isinstance(value, str):
self.carb_settings.set_string(setting, value)
elif isinstance(value, bool):
self.carb_settings.set_bool(setting, value)
elif isinstance(value, int):
self.carb_settings.set_int(setting, value)
elif isinstance(value, float):
self.carb_settings.set_float(setting, value)
else:
raise ValueError(f"Value of type {type(value)} is not supported.")
def set_physics_dt(self, physics_dt: float = 1.0 / 150.0, physics_substeps: int = 1):
"""Specify the physics step size to use when simulating, default is 1/60.
Note that a physics scene has to be in the stage for this to do anything
Args:
physics_dt (float): Use this value for physics step
"""
if self.get_stage() is None:
return
if physics_dt == self._previous_physics_dt:
return
if physics_substeps is None or physics_substeps <= 1:
physics_substeps = 1
self._previous_physics_dt = physics_dt
from pxr import UsdPhysics, PhysxSchema
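# e.g. physics_dt = 1/120 with physics_substeps = 4 gives 120 physics
# steps per second and a minimum frame rate of 30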
steps_per_second = int(1.0 / physics_dt)
min_steps = int(steps_per_second / physics_substeps)
physxSceneAPI = None
for prim in self.get_stage().Traverse():
if prim.IsA(UsdPhysics.Scene):
physxSceneAPI = PhysxSchema.PhysxSceneAPI.Apply(prim)
if physxSceneAPI is not None:
physxSceneAPI.GetTimeStepsPerSecondAttr().Set(steps_per_second)
settings = carb.settings.get_settings()
settings.set_int("persistent/simulation/minFrameRate", min_steps)
def update(self, dt=0.0, physics_dt=None, physics_substeps=None):
"""Render one frame. Optionally specify dt in seconds, specify None to use wallclock.
Specify physics_dt and physics_substeps to decouple the physics step size from rendering
For example: to render with a dt of 1/30 and simulate physics at 1/120 use:
- dt = 1/30.0
- physics_dt = 1/120.0
- physics_substeps = 4
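i.e. kit.update(dt=1 / 30.0, physics_dt=1 / 120.0, physics_substeps=4)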
Args:
dt (float): The step size used for the overall update, set to None to use wallclock
physics_dt (float, optional): If specified use this value for physics step
physics_substeps (int, optional): Maximum number of physics substeps to perform
"""
# don't update if exit was called
if self._exiting:
return
# a physics dt was specified and is > 0
if physics_dt is not None and physics_dt > 0.0:
self.set_physics_dt(physics_dt, physics_substeps)
# a dt was specified and is > 0
if dt is not None and dt > 0.0:
# if physics dt was not specified, use rendering dt
if physics_dt is None:
self.set_physics_dt(dt)
# self.loop_runner.set_runner_dt(dt)
self.app.update()
else:
# dt not specified, run in realtime
time_now = time.time()
dt = time_now - self.last_update_t
if physics_dt is None:
self.set_physics_dt(1.0 / 60.0, 4)
self.last_update_t = time_now
# self.loop_runner.set_runner_dt(dt)
self.app.update()
def play(self):
"""Starts the editor physics simulation"""
self.update()
self.timeline.play()
self.update()
def pause(self):
"""Pauses the editor physics simulation"""
self.update()
self.timeline.pause()
self.update()
def stop(self):
"""Stops the editor physics simulation"""
self.update()
self.timeline.stop()
self.update()
def get_status(self):
"""Get the status of the renderer to see if anything is loading"""
return omni.usd.get_context().get_stage_loading_status()
def is_loading(self):
"""convenience function to see if any files are being loaded
Returns:
bool: True if loading, False otherwise
"""
message, loaded, loading = self.get_status()
return loading > 0
def is_exiting(self):
"""get current exit status for this object
Returns:
bool: True if exit() was called previously, False otherwise
"""
return self._exiting
def execute(self, *args, **kwargs):
"""Allow use of omni.kit.commands interface"""
omni.kit.commands.execute(*args, **kwargs)
def setup_renderer(self, mode="non-default"):
"""Reset render settings to those in config. This should be used in case a new stage is opened and the desired config needs to be re-applied"""
rtx_mode = "/rtx-defaults" if mode == "default" else "/rtx"
self.set_setting(rtx_mode + "/rendermode", self.config["renderer"])
# Raytrace mode settings
self.set_setting(rtx_mode + "/post/aa/op", self.config["anti_aliasing"])
self.set_setting(rtx_mode + "/directLighting/sampledLighting/enabled", True)
# self.set_setting(rtx_mode + "/ambientOcclusion/enabled", True)
# Pathtrace mode settings
self.set_setting(rtx_mode + "/pathtracing/spp", self.config["samples_per_pixel_per_frame"])
self.set_setting(rtx_mode + "/pathtracing/totalSpp", self.config["samples_per_pixel_per_frame"])
self.set_setting(rtx_mode + "/pathtracing/clampSpp", self.config["samples_per_pixel_per_frame"])
self.set_setting(rtx_mode + "/pathtracing/maxBounces", self.config["max_bounces"])
self.set_setting(
rtx_mode + "/pathtracing/maxSpecularAndTransmissionBounces",
self.config["max_specular_transmission_bounces"],
)
self.set_setting(rtx_mode + "/pathtracing/maxVolumeBounces", self.config["max_volume_bounces"])
self.set_setting(rtx_mode + "/pathtracing/optixDenoiser/enabled", self.config["denoiser"])
self.set_setting(rtx_mode + "/hydra/subdivision/refinementLevel", self.config["subdiv_refinement_level"])
# Experimental, forces kit to not render until all USD files are loaded
self.set_setting(rtx_mode + "/materialDb/syncLoads", self.config["sync_loads"])
self.set_setting(rtx_mode + "/hydra/materialSyncLoads", self.config["sync_loads"])
self.set_setting("/omni.kit.plugin/syncUsdLoads", self.config["sync_loads"])
def create_prim(
self, path, prim_type, translation=None, rotation=None, scale=None, ref=None, semantic_label=None, attributes={}
):
"""Create a prim, apply specified transforms, apply semantic label and
set specified attributes.
args:
path (str): The path of the new prim.
prim_type (str): Prim type name
translation (tuple(float, float, float), optional): prim translation (applied last)
rotation (tuple(float, float, float), optional): prim rotation in radians with rotation
order ZYX.
scale (tuple(float, float, float), optional): scaling factor in x, y, z.
ref (str, optional): Path to the USD that this prim will reference.
semantic_label (str, optional): Semantic label.
attributes (dict, optional): Key-value pairs of prim attributes to set.
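Example (hypothetical prim path and attribute values):
create_prim("/World/Sphere", "Sphere", translation=(0, 0, 100),
semantic_label="sphere", attributes={"radius": 10.0})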
"""
from pxr import UsdGeom, Semantics
prim = self.get_stage().DefinePrim(path, prim_type)
for k, v in attributes.items():
prim.GetAttribute(k).Set(v)
xform_api = UsdGeom.XformCommonAPI(prim)
if ref:
prim.GetReferences().AddReference(ref)
if semantic_label:
sem = Semantics.SemanticsAPI.Apply(prim, "Semantics")
sem.CreateSemanticTypeAttr()
sem.CreateSemanticDataAttr()
sem.GetSemanticTypeAttr().Set("class")
sem.GetSemanticDataAttr().Set(semantic_label)
if rotation:
xform_api.SetRotate(rotation, UsdGeom.XformCommonAPI.RotationOrderXYZ)
if scale:
xform_api.SetScale(scale)
if translation:
xform_api.SetTranslate(translation)
return prim
def set_up_axis(self, axis):
"""Change the up axis of the current stage
Args:
axis: valid values are `UsdGeom.Tokens.y`, or `UsdGeom.Tokens.z`
"""
from pxr import UsdGeom, Usd
stage = self.get_stage()
rootLayer = stage.GetRootLayer()
rootLayer.SetPermissionToEdit(True)
with Usd.EditContext(stage, rootLayer):
UsdGeom.SetStageUpAxis(stage, axis)
| 16,266 | Python | 42.034391 | 151 | 0.624185 |
NVlabs/ACID/PlushSim/scripts/data_gen_attic.py | #!/usr/bin/env python
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
import os
import time
import argparse
import json
from utils import *
parser = argparse.ArgumentParser("Dataset generation")
################################################################
# save to args
parser.add_argument("--save_dir", type=str, default="/result/interaction_sequence")
parser.add_argument("--img_subdir", type=str, default='img')
parser.add_argument("--geom_subdir", type=str, default='geom')
parser.add_argument("--info_subdir", type=str, default='info')
parser.add_argument("--save_every", type=int, default=25)
################################################################
# interaction args
parser.add_argument("--num_interaction", type=int, default=18)
parser.add_argument("--reset_every", type=int, default=6)
################################################################
# scene args
parser.add_argument("--asset_root", type=str, default="/result/assets")
parser.add_argument("--scene_path", type=str, default="attic_lean/Attic_clean_v2.usda")
parser.add_argument("--plush_path", type=str, default="animals/teddy/teddy_scaled/teddy_scaled.usda")
parser.add_argument("--skip_layout_randomization", action="store_true", default=False)
parser.add_argument("--skip_lights_randomization", action="store_true", default=False)
args = parser.parse_args()
os.makedirs(args.save_dir, exist_ok=True)
os.makedirs(os.path.join(args.save_dir, args.img_subdir), exist_ok=True)
os.makedirs(os.path.join(args.save_dir, args.geom_subdir), exist_ok=True)
os.makedirs(os.path.join(args.save_dir, args.info_subdir), exist_ok=True)
img_dir = os.path.join(args.save_dir, args.img_subdir)
geom_dir = os.path.join(args.save_dir, args.geom_subdir)
info_dir = os.path.join(args.save_dir, args.info_subdir)
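# Directory layout: img/ stores rendered observations, geom/ stores the
# plush's simulation/visual geometry per saved frame, and info/ stores
# scene metadata plus per-reset collider and interaction records.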
def main():
from attic_scene import attic_scene
scene_path = os.path.join(args.asset_root, args.scene_path)
plush_path = os.path.join(args.asset_root, args.plush_path)
scene = attic_scene(
scene_path,
plush_path,
RESET_STATIC=True,
RAND_LAYOUT=not args.skip_layout_randomization,
RAND_LIGHTS=not args.skip_lights_randomization,)
start_time = time.time()
# save scene overall info
with open(os.path.join(info_dir, "scene_meta.json"), 'w') as fp:
json.dump(scene.get_scene_metadata(), fp)
# number of resets
num_resets = (args.num_interaction + args.reset_every - 1) // args.reset_every
for reset in range(num_resets):
# save scene reset collider info
np.savez_compressed(os.path.join(info_dir, f"clutter_info_{reset:04d}.npz"), **scene.get_scene_background_state())
num_steps = min(args.num_interaction, (reset + 1) * args.reset_every) - reset * args.reset_every
# sample interactions
actions = {
'grasp_points':[],
'target_points':[],
'grasp_pixels':[],
'start_frames':[],
'release_frames':[],
'static_frames':[], }
# save start frame
save_frame(f"{reset:04d}_{scene.frame:06d}", scene.get_observations(), img_dir)
np.savez_compressed(
os.path.join(geom_dir, f"{reset:04d}_{scene.frame:06d}.npz"),
**scene.get_scene_state_plush(convert_to=np.float16))
for interaction in range(num_steps):
# stop simulating
scene.kit.pause()
action = scene.sample_action()
if action is None:
scene.kit.play()
continue
grasp_point, target_point, grasp_pixel = action
actions['grasp_points'].append(np.array(grasp_point,np.float16))
actions['target_points'].append(np.array(target_point,np.float16))
actions['grasp_pixels'].append(np.array(grasp_pixel,np.uint16))
actions['start_frames'].append(np.array(scene.frame,np.uint16))
save_frame(f"{reset:04d}_{scene.frame:06d}", scene.get_observations(), img_dir)
np.savez_compressed(
os.path.join(geom_dir, f"{reset:04d}_{scene.frame:06d}.npz"),
**scene.get_scene_state_plush(convert_to=np.float16))
scene.kit.play()
init_traj = scene.gripper.plan_trajectory(scene.gripper.eef_default_loc, grasp_point)
# move
for pos in init_traj:
scene.step()
scene.gripper.set_translation(tuple(pos))
if scene.frame % args.save_every == args.save_every - 1:
save_frame(f"{reset:04d}_{scene.frame:06d}", scene.get_observations(), img_dir)
np.savez_compressed(
os.path.join(geom_dir, f"{reset:04d}_{scene.frame:06d}.npz"),
**scene.get_scene_state_plush(convert_to=np.float16))
scene.kit.pause()
#init_move_traj = scene.gripper.set_translation(grasp_point)
scene.gripper.grasp(scene.plush)
scene.kit.play()
traj = scene.gripper.plan_trajectory(grasp_point, target_point)
# move
for pos in traj:
scene.step()
scene.gripper.set_translation(tuple(pos))
if scene.frame % args.save_every == args.save_every - 1:
save_frame(f"{reset:04d}_{scene.frame:06d}", scene.get_observations(), img_dir)
np.savez_compressed(
os.path.join(geom_dir, f"{reset:04d}_{scene.frame:06d}.npz"),
**scene.get_scene_state_plush(convert_to=np.float16))
# wait until stable
for ff in range(scene.FALL_MAX):
scene.step()
if scene.check_scene_static():
print(f"grasp reaching a resting state after {ff} steps")
break
save_frame(f"{reset:04d}_{scene.frame:06d}", scene.get_observations(), img_dir)
np.savez_compressed(
os.path.join(geom_dir, f"{reset:04d}_{scene.frame:06d}.npz"),
**scene.get_scene_state_plush(convert_to=np.float16))
actions['release_frames'].append(np.array(scene.frame,np.uint16))
# release
scene.kit.pause()
scene.gripper.ungrasp()
# TODO: delete gripper collider
scene.kit.play()
for ff in range(scene.FALL_MAX+scene.DROP_MIN):
scene.step()
if scene.frame % args.save_every == args.save_every - 1:
save_frame(f"{reset:04d}_{scene.frame:06d}", scene.get_observations(), img_dir)
np.savez_compressed(
os.path.join(geom_dir, f"{reset:04d}_{scene.frame:06d}.npz"),
**scene.get_scene_state_plush(convert_to=np.float16))
if ff < scene.DROP_MIN:
continue
if scene.check_scene_static():
print(f"release reaching a resting state after {ff} steps")
break
scene.gripper.reset_translation()
save_frame(f"{reset:04d}_{scene.frame:06d}", scene.get_observations(), img_dir)
np.savez_compressed(
os.path.join(geom_dir, f"{reset:04d}_{scene.frame:06d}.npz"),
**scene.get_scene_state_plush(convert_to=np.float16))
actions['static_frames'].append(np.array(scene.frame,np.uint16))
np.savez_compressed(os.path.join(info_dir, f"interaction_info_{reset:04d}.npz"), **actions)
end_time = time.time()
from datetime import timedelta
time_str = str(timedelta(seconds=end_time - start_time))
print(f'Sampling {num_steps} interactions takes: {time_str}')
scene.reset()
# cleanup
scene.kit.shutdown()
if __name__ == "__main__":
main()
| 8,282 | Python | 43.05851 | 122 | 0.588747 |
NVlabs/ACID/PlushSim/scripts/syntheticdata.py | #!/usr/bin/env python
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Helper class for obtaining groundtruth data from OmniKit.
Support provided for RGB, Depth, Bounding Box (2D Tight, 2D Loose, 3D),
segmentation (instance and semantic), and camera parameters.
Typical usage example:
kit = OmniKitHelper() # Start omniverse kit
sd_helper = SyntheticDataHelper()
gt = sd_helper.get_groundtruth(('rgb', 'depth', 'boundingBox2DTight'))
"""
import math
import carb
import omni
import time
from pxr import UsdGeom, Semantics, Gf
import numpy as np
class SyntheticDataHelper:
def __init__(self):
self.app = omni.kit.app.get_app_interface()
ext_manager = self.app.get_extension_manager()
ext_manager.set_extension_enabled("omni.syntheticdata", True)
from omni.syntheticdata import sensors, helpers
import omni.syntheticdata._syntheticdata as sd # Must be imported after getting app interface
self.sd = sd
self.sd_interface = self.sd.acquire_syntheticdata_interface()
self.viewport = omni.kit.viewport.get_viewport_interface()
self.carb_settings = carb.settings.acquire_settings_interface()
self.sensor_helper_lib = sensors
self.generic_helper_lib = helpers
mode = "numpy"
self.sensor_helpers = {
"rgb": sensors.get_rgb,
"depth": sensors.get_depth_linear,
"depthLinear": self.get_depth_linear,
"instanceSegmentation": sensors.get_instance_segmentation,
"semanticSegmentation": self.get_semantic_segmentation,
"boundingBox2DTight": sensors.get_bounding_box_2d_tight,
"boundingBox2DLoose": sensors.get_bounding_box_2d_loose,
"boundingBox3D": sensors.get_bounding_box_3d,
"camera": self.get_camera_params,
"pose": self.get_pose,
}
self.sensor_types = {
"rgb": self.sd.SensorType.Rgb,
"depth": self.sd.SensorType.DepthLinear,
"depthLinear": self.sd.SensorType.DepthLinear,
"instanceSegmentation": self.sd.SensorType.InstanceSegmentation,
"semanticSegmentation": self.sd.SensorType.SemanticSegmentation,
"boundingBox2DTight": self.sd.SensorType.BoundingBox2DTight,
"boundingBox2DLoose": self.sd.SensorType.BoundingBox2DLoose,
"boundingBox3D": self.sd.SensorType.BoundingBox3D,
}
self.sensor_state = {s: False for s in list(self.sensor_helpers.keys())}
def get_depth_linear(self, viewport):
""" Get Depth Linear sensor output.
Args:
viewport (omni.kit.viewport._viewport.IViewportWindow): Viewport from which to retrieve/create sensor.
Return:
(numpy.ndarray): A float32 array of shape (height, width, 1).
"""
sensor = self.sensor_helper_lib.create_or_retrieve_sensor(viewport, self.sd.SensorType.DepthLinear)
data = self.sd_interface.get_sensor_host_float_texture_array(sensor)
h, w = data.shape[:2]
return np.frombuffer(data, np.float32).reshape(h, w, -1)
def get_semantic_segmentation(self, viewport):
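# Build an instance-id -> semantic-class lookup table: every instance
# present in the mappings is labeled class 1 (the plush), everything
# else stays 0 (background); np.take then applies the table per pixel.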
instance_data, instance_mappings = self.sensor_helpers['instanceSegmentation'](viewport, return_mapping=True)
ins_to_sem = np.zeros(np.max(instance_data)+1,dtype=np.uint8)
for im in instance_mappings[::-1]:
for i in im["instanceIds"]:
if i >= len(ins_to_sem):
continue
ins_to_sem[i] = 1 #if im['semanticLabel'] == 'teddy' else 2
return np.take(ins_to_sem, instance_data)
def get_camera_params(self, viewport):
"""Get active camera intrinsic and extrinsic parameters.
Returns:
A dict of the active camera's parameters.
pose (numpy.ndarray): camera position in world coordinates,
fov (float): horizontal field of view in radians
focal_length (float)
horizontal_aperture (float)
view_projection_matrix (numpy.ndarray(dtype=float64, shape=(4, 4)))
resolution (dict): resolution as a dict with 'width' and 'height'.
clipping_range (tuple(float, float)): Near and Far clipping values.
"""
stage = omni.usd.get_context().get_stage()
prim = stage.GetPrimAtPath(viewport.get_active_camera())
prim_tf = UsdGeom.Camera(prim).GetLocalTransformation()
focal_length = prim.GetAttribute("focalLength").Get()
horiz_aperture = prim.GetAttribute("horizontalAperture").Get()
fov = 2 * math.atan(horiz_aperture / (2 * focal_length))
x_min, y_min, x_max, y_max = viewport.get_viewport_rect()
width, height = x_max - x_min, y_max - y_min
aspect_ratio = width / height
near, far = prim.GetAttribute("clippingRange").Get()
view_proj_mat = self.generic_helper_lib.get_view_proj_mat(prim, aspect_ratio, near, far)
return {
"pose": np.array(prim_tf),
"fov": fov,
"focal_length": focal_length,
"horizontal_aperture": horiz_aperture,
"view_projection_matrix": view_proj_mat,
"resolution": {"width": width, "height": height},
"clipping_range": (near, far),
}
def get_pose(self):
"""Get pose of all objects with a semantic label.
"""
stage = omni.usd.get_context().get_stage()
mappings = self.generic_helper_lib.get_instance_mappings()
pose = []
for m in mappings:
prim_path = m[0]
prim = stage.GetPrimAtPath(prim_path)
prim_tf = UsdGeom.Xformable(prim).ComputeLocalToWorldTransform(0.0)
pose.append((str(prim_path), m[1], str(m[2]), np.array(prim_tf)))
return pose
async def initialize_async(self, viewport, sensor_types, timeout=10):
""" Initialize sensors in the list provided.
Args:
viewport (omni.kit.viewport._viewport.IViewportWindow): Viewport from which to retrieve/create sensor.
sensor_types (list of omni.syntheticdata._syntheticdata.SensorType): List of sensor types to initialize.
timeout (int): Maximum time in seconds to attempt to initialize sensors.
"""
start = time.time()
is_initialized = False
while not is_initialized and time.time() < (start + timeout):
sensors = []
for sensor_type in sensor_types:
sensors.append(self.sensor_helper_lib.create_or_retrieve_sensor(viewport, sensor_type))
await omni.kit.app.get_app_interface().next_update_async()
is_initialized = all(self.sd_interface.is_sensor_initialized(s) for s in sensors)
if not is_initialized:
uninitialized = [s for s in sensors if not self.sd_interface.is_sensor_initialized(s)]
raise TimeoutError(f"Unable to initialize sensors: [{uninitialized}] within {timeout} seconds.")
await omni.kit.app.get_app_interface().next_update_async() # Extra frame required to prevent access violation error
def get_groundtruth(self, gt_sensors, viewport, verify_sensor_init=True):
"""Get groundtruth from specified gt_sensors.
Args:
gt_sensors (list): List of strings of sensor names. Valid sensors names: rgb, depth,
instanceSegmentation, semanticSegmentation, boundingBox2DTight,
boundingBox2DLoose, boundingBox3D, camera
viewport (omni.kit.viewport._viewport.IViewportWindow): Viewport from which to retrieve/create sensor.
verify_sensor_init (bool): Additional check to verify creation and initialization of sensors.
Returns:
Dict of sensor outputs
"""
if isinstance(gt_sensors, str):
gt_sensors = (gt_sensors,)
# Create and initialize sensors
while verify_sensor_init:
flag = 0
# Render frame
self.app.update()
for sensor_name in gt_sensors:
if sensor_name != "camera" and sensor_name != "pose":
current_sensor = self.sensor_helper_lib.create_or_retrieve_sensor(
viewport, self.sensor_types[sensor_name]
)
if not self.sd_interface.is_sensor_initialized(current_sensor):
flag = 1
# Render frame
self.app.update()
self.app.update()
if flag == 0:
break
gt = {}
sensor_state = {}
# Process non-RT-only sensors
for sensor in gt_sensors:
if sensor not in ["camera", "pose"]:
if sensor == "instanceSegmentation":
gt[sensor] = self.sensor_helpers[sensor](viewport, parsed=True, return_mapping=True)
elif sensor == "boundingBox3D":
gt[sensor] = self.sensor_helpers[sensor](viewport, parsed=True, return_corners=True)
else:
gt[sensor] = self.sensor_helpers[sensor](viewport)
current_sensor = self.sensor_helper_lib.create_or_retrieve_sensor(viewport, self.sensor_types[sensor])
current_sensor_state = self.sd_interface.is_sensor_initialized(current_sensor)
sensor_state[sensor] = current_sensor_state
else:
gt[sensor] = self.sensor_helpers[sensor](viewport)
gt["state"] = sensor_state
return gt
| 9,968 | Python | 42.532751 | 124 | 0.623596 |
NVlabs/ACID/PlushSim/scripts/attic_scene.py | import os
import cv2
import time
import random
import asyncio
import numpy as np
from python_app import OmniKitHelper
import omni
import carb
from utils import *
RESOLUTION=720
# specify a custom config
CUSTOM_CONFIG = {
"width": RESOLUTION,
"height": RESOLUTION,
"anti_aliasing": 3, # 3 for dlss, 2 for fxaa, 1 for taa, 0 to disable aa
"renderer": "RayTracedLighting",
"samples_per_pixel_per_frame": 128,
"max_bounces": 10,
"max_specular_transmission_bounces": 6,
"max_volume_bounces": 4,
"subdiv_refinement_level": 2,
"headless": True,
"sync_loads": True,
"experience": f'{os.environ["EXP_PATH"]}/omni.bloky.kit',
}
"""
plush animal material: /Root/physics/stuff_animal
magic gripper: /Root/physics/magic_gripper
real object group: /Root/physics/real_objects
magic object group: /Root/physics/magic_objects
"""
class attic_scene(object):
def __init__(self,
SCENE_PATH,
PLUSH_ANIMAL_PATH,
PLUSH_SCALE=4,
FALL_MAX=300,
REST_THRESHOLD=8,
PHYSX_DT=1/150.,
SAVE_EVERY=25,
DROP_MIN=20,
RESET_STATIC=True,
RAND_LAYOUT=True,
RAND_LIGHTS=True,
ROBOT_SPEED=1.):
for k,v in locals().items():
if k != 'self':
self.__dict__[k] = v
self.plush_animal_mat = "/Root/physics/stuff_animal"
self.magic_gripper = "/Root/physics/magic_gripper"
self.fingerL = "/Root/physics/magic_gripper/fingerL"
self.fingerR = "/Root/physics/magic_gripper/fingerR"
self.real_object_group = "/Root/physics/real_objects"
self.magic_object_group = "/Root/physics/magic_objects"
self.front_path = "/Root/scene_front"
self.back_path = "/Root/scene_back"
self.scene_range = np.array([[-50*12,-50*8,0],[50*12,50*8,50*8]])
self.drop_range = np.array([[-50*self.PLUSH_SCALE,-50*self.PLUSH_SCALE,],
[50*self.PLUSH_SCALE,50*self.PLUSH_SCALE,]]) #/ 2.
self.back_clutter_range = np.array([[-50*12,50*8,],[50*12,50*12,]])
self.total_range = np.array([[-50*12,-50*12,0],[50*12,50*12,50*8]])
self.kit = OmniKitHelper(CUSTOM_CONFIG)
self.kit.set_physics_dt(physics_dt=self.PHYSX_DT)
physx_interface = omni.physx.get_physx_interface()
physx_interface.force_load_physics_from_usd()
physx_interface.reset_simulation()
async def load_stage(path):
await omni.usd.get_context().open_stage_async(path)
setup_task = asyncio.ensure_future(load_stage(SCENE_PATH))
while not setup_task.done():
self.kit.update()
self.kit.setup_renderer()
self.kit.update()
self.stage = omni.usd.get_context().get_stage()
self.front_group = self.stage.GetPrimAtPath(self.front_path)
self.back_group = self.stage.GetPrimAtPath(self.back_path)
from syntheticdata import SyntheticDataHelper
self.sd_helper = SyntheticDataHelper()
# force RayTracedLighting mode for better performance while simulating physics
self.kit.set_setting("/rtx/rendermode", "RayTracedLighting")
# wait until all materials are loaded
print("waiting for things to load...")
# if self.kit.is_loading():
# time.sleep(10)
while self.kit.is_loading():
time.sleep(0.1)
# set up cameras
self._setup_cameras()
_viewport_api = omni.kit.viewport.get_viewport_interface()
viewport = _viewport_api.get_instance_list()[0]
self._viewport = _viewport_api.get_viewport_window(viewport)
# touch the sensors to kick in anti-aliasing
for _ in range(20):
_ = self.sd_helper.get_groundtruth(
[ "rgb","depth","instanceSegmentation","semanticSegmentation",], self._viewport)
# set up objects
self._import_plush_animal(PLUSH_ANIMAL_PATH)
self._setup_robots()
# start off Omniverse
self.kit.play()
# store original sim and vis points for reset
self.sim_og_pts, self.vis_og_pts = self._get_plush_points()
# # stop Omniverse
# self.kit.pause()
# reset the scene
self.frame = 0
self.reset()
def step(self):
self.kit.update(self.PHYSX_DT)
self.frame += 1
return self.frame
def sample_action(self, grasp_point=None):
if grasp_point is None:
gt = self.sd_helper.get_groundtruth(
[ "rgb","depth","instanceSegmentation","semanticSegmentation",], self._viewport)
pts = get_partial_point_cloud(self._viewport, project_factor=100.)
semseg = gt['semanticSegmentation']
kernel = np.ones((2,2), np.uint8)
semseg = cv2.erode(semseg, kernel, iterations=1)
plush_pts = np.where(semseg == 1)
if len(plush_pts[0]) == 0:
return None
idx = random.randint(0,len(plush_pts[0])-1)
grasp_pixel = (plush_pts[0][idx], plush_pts[1][idx])
grasp_point = tuple(pts[grasp_pixel[0], grasp_pixel[1],:])
else:
grasp_pixel = None
target_point = self._sample_displacement_vector(grasp_point)
if target_point is None:
return None
return grasp_point, target_point, grasp_pixel
def reset(self):
self.kit.stop()
from pxr import Gf
self.frame = 0
print("Reseting plush geometry...")
self._reset_plush_geometry(self.sim_og_pts, self.vis_og_pts)
print("Finished reseting plush geometry...")
# randonly drop the plush into the scene
print("Reseting plush translation...")
self.plush_translateOp.Set(Gf.Vec3f((0.,0.,250.)))
print("Reseting plush rotation...")
def randrot():
return random.random() * 360.
rotx,roty,rotz = randrot(), randrot(), randrot()
self.plush_rotationOp.Set(rpy2quat(rotx,roty,rotz))
print("Finished reseting plush pose...")
print("Reseting scene...")
self._randomize_scene()
print("Finished reseting scene...")
self.kit.play()
# wait until stable
if self.RESET_STATIC:
print("Waiting to reach stable...")
for _ in range(self.DROP_MIN):
self.step()
for ff in range(self.FALL_MAX*6):
self.step()
if self.check_scene_static():
print(f"Initial configuration becomes static after {ff} steps")
break
print("Reset Finished")
self.frame = 0
def reset_to(self, state):
self.kit.stop()
loc = state['loc']
rot = state['rot']
sim = state['sim']
vis = state['vis']
self._reset_plush_geometry(sim, vis)
self.plush_translateOp.Set(loc)
self.plush_rotationOp.Set(rot)
self.kit.play()
def check_scene_static(self):
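# The scene counts as static once the 99th-percentile vertex speed of
# the plush (the last value from _get_object_velocity_stats) falls
# below REST_THRESHOLD.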
_,_,_,v = self._get_object_velocity_stats()
return v < self.REST_THRESHOLD
def get_scene_metadata(self):
from pxr import PhysxSchema
sbAPI = PhysxSchema.PhysxDeformableAPI(self.plush)
faces = sbAPI.GetSimulationIndicesAttr().Get()
return {'plush_path': self.PLUSH_ANIMAL_PATH,
'sim_faces':np.array(faces, int).tolist(),
'sim_pts':np.array(self.sim_og_pts, np.float16).tolist(),
'vis_pts':np.array(self.vis_og_pts, np.float16).tolist(),
'scene_range': self.scene_range.tolist(),
'back_clutter_range': self.back_clutter_range.tolist(),
'cam_info': self._get_camera_info()}
# background state is different per reset
def get_scene_background_state(self):
collider = {}
for p in find_immediate_children(self.front_group):
name = str(p.GetPath()).split("/")[-1]
e,f = find_collider(p)
collider[f"{name}_box"] = e
collider[f"{name}_tran"] = f
for p in find_immediate_children(self.back_group):
name = str(p.GetPath()).split("/")[-1]
e,f = find_collider(p)
collider[f"{name}_box"] = e
collider[f"{name}_tran"] = f
return collider
def get_scene_state_plush(self,raw=False,convert_to=None):
sim,vis = self._get_plush_points()
loc,rot,scale = self._get_plush_loc(),self._get_plush_rot(),self._get_plush_scale()
if not raw:
loc,rot,scale = tuple(loc),eval(str(rot)),tuple(scale)
state = {'sim':sim, 'vis':vis,
'loc':loc, 'rot':rot, 'scale':scale}
if convert_to is not None:
for k,v in state.items():
state[k] = np.array(v, convert_to)
return state
def get_observations(self,
sensors=["rgb","depth",
# "instanceSegmentation",
"semanticSegmentation",],
partial_pointcloud=False):
frame = self.sd_helper.get_groundtruth(sensors, self._viewport)
gt = {}
gt['rgb_img'] = frame['rgb'][:,:,:-1]
gt['seg_img'] = frame['semanticSegmentation']
gt['dep_img'] = frame['depth'].squeeze()
if partial_pointcloud:
gt['pxyz'] = get_partial_point_cloud(self._viewport, project_factor=100.)
return gt
################################################################
#
# Below are "private" functions ;)
#
################################################################
def _import_plush_animal(self, usda_path):
from omni.physx.scripts import physicsUtils
mesh_name = usda_path.split('/')[-1].split('.')[0]
from pxr import PhysxSchema, UsdGeom, UsdShade, Semantics, Gf
###################
# import object
abspath = carb.tokens.get_tokens_interface().resolve(usda_path)
physics_root = "/Root"
assert self.stage.DefinePrim(physics_root+f"/{mesh_name}").GetReferences().AddReference(abspath)
self.mesh_path = f"{physics_root}/{mesh_name}/{mesh_name}_obj/mesh"
self.plush= self.stage.GetPrimAtPath(self.mesh_path)
###################
# add deformable property
schema_parameters = {
"self_collision": True,
"vertex_velocity_damping": 0.005,
"sleep_damping": 10,
"sleep_threshold": 5,
"settling_threshold": 11,
"solver_position_iteration_count": 60,
"collisionRestOffset": 0.1,
"collisionContactOffset": 0.5,
"voxel_resolution": 45,
}
skin_mesh = UsdGeom.Mesh.Get(self.stage, self.mesh_path)
skin_mesh.AddTranslateOp().Set(Gf.Vec3f(0.0, 0.0, 300.0))
skin_mesh.AddOrientOp().Set(Gf.Quatf(0.707, 0.707, 0, 0))
skin_points = skin_mesh.GetPointsAttr().Get()
skin_indices = physicsUtils.triangulateMesh(skin_mesh)
# Create tet meshes for simulation and collision based on the skin mesh
simulation_resolution = schema_parameters["voxel_resolution"]
skin_mesh_scale = Gf.Vec3f(1.0, 1.0, 1.0)
collision_points, collision_indices = physicsUtils.create_conforming_tetrahedral_mesh(skin_points, skin_indices)
simulation_points, simulation_indices = physicsUtils.create_voxel_tetrahedral_mesh(collision_points, collision_indices, skin_mesh_scale, simulation_resolution)
# Apply PhysxDeformableBodyAPI and PhysxCollisionAPI to skin mesh and set parameter and tet meshes
deformable_body_api = PhysxSchema.PhysxDeformableBodyAPI.Apply(skin_mesh.GetPrim())
deformable_body_api.CreateSolverPositionIterationCountAttr().Set(schema_parameters['solver_position_iteration_count'])
deformable_body_api.CreateSelfCollisionAttr().Set(schema_parameters['self_collision'])
deformable_body_api.CreateCollisionIndicesAttr().Set(collision_indices)
deformable_body_api.CreateCollisionRestPointsAttr().Set(collision_points)
deformable_body_api.CreateSimulationIndicesAttr().Set(simulation_indices)
deformable_body_api.CreateSimulationRestPointsAttr().Set(simulation_points)
deformable_body_api.CreateVertexVelocityDampingAttr().Set(schema_parameters['vertex_velocity_damping'])
deformable_body_api.CreateSleepDampingAttr().Set(schema_parameters['sleep_damping'])
deformable_body_api.CreateSleepThresholdAttr().Set(schema_parameters['sleep_threshold'])
deformable_body_api.CreateSettlingThresholdAttr().Set(schema_parameters['settling_threshold'])
PhysxSchema.PhysxCollisionAPI.Apply(skin_mesh.GetPrim())
###################
# add deformable material
def add_physics_material_to_prim(stage, prim, materialPath):
bindingAPI = UsdShade.MaterialBindingAPI.Apply(prim)
materialPrim = UsdShade.Material(stage.GetPrimAtPath(materialPath))
bindingAPI.Bind(materialPrim, UsdShade.Tokens.weakerThanDescendants, "physics")
add_physics_material_to_prim(self.stage, self.plush, self.plush_animal_mat)
###################
# add collision group
physicsUtils.add_collision_to_collision_group(self.stage, self.mesh_path, self.real_object_group)
###################
# add semantic info
sem = Semantics.SemanticsAPI.Apply(self.stage.GetPrimAtPath(self.mesh_path), "Semantics")
sem.CreateSemanticTypeAttr()
sem.CreateSemanticDataAttr()
sem.GetSemanticTypeAttr().Set("class")
sem.GetSemanticDataAttr().Set("plush")
###################
# standarize transform
physicsUtils.setup_transform_as_scale_orient_translate(self.plush)
xform = UsdGeom.Xformable(self.plush)
ops = xform.GetOrderedXformOps()
self.plush_translateOp = ops[0]
self.plush_rotationOp = ops[1]
self.plush_scaleOp = ops[2]
scale_factor = self.PLUSH_SCALE
self.plush_scaleOp.Set((scale_factor,scale_factor,scale_factor))
def _get_object_velocity_stats(self):
from pxr import PhysxSchema
sbAPI = PhysxSchema.PhysxDeformableAPI(self.plush)
velocity = np.array(sbAPI.GetSimulationVelocitiesAttr().Get())
vnorm = np.linalg.norm(velocity, axis=1)
return np.percentile(vnorm, [0,50,90,99])
def _setup_robots(self):
actor = self.stage.GetPrimAtPath(self.magic_gripper)
fingerL = self.stage.GetPrimAtPath(self.fingerL)
fingerR = self.stage.GetPrimAtPath(self.fingerR)
self.gripper = magic_eef(actor,
self.stage,
eef_default_loc=(0.,0.,600.),
default_speed=self.ROBOT_SPEED,
fingerL=fingerL,
fingerR=fingerR)
def _setup_cameras(self):
from pxr import UsdGeom
stage = omni.usd.get_context().get_stage()
# Need to set this before setting viewport window size
carb.settings.acquire_settings_interface().set_int("/app/renderer/resolution/width", -1)
carb.settings.acquire_settings_interface().set_int("/app/renderer/resolution/height", -1)
viewport_window = omni.kit.viewport.get_default_viewport_window()
viewport_window.set_active_camera("/Root/cam_light/Camera")
viewport_window.set_texture_resolution(RESOLUTION,RESOLUTION)
viewport_window.set_window_size(RESOLUTION, RESOLUTION)
def _get_plush_loc(self):
return self.plush_translateOp.Get()
def _get_plush_rot(self):
return self.plush_rotationOp.Get()
def _get_plush_scale(self):
return self.plush_scaleOp.Get()
def _get_plush_points(self):
from pxr import PhysxSchema, UsdGeom
sbAPI = PhysxSchema.PhysxDeformableBodyAPI(self.plush)
sim = sbAPI.GetSimulationPointsAttr().Get()
mesh = UsdGeom.Mesh(self.plush)
vis = mesh.GetPointsAttr().Get()
return sim, vis
def _get_camera_info(self):
cam_info = {}
camera_pose, camera_intr = get_camera_params(self._viewport)
cam_name = get_camera_name(self._viewport)
cam_info[cam_name] = [camera_pose.tolist(), camera_intr.tolist()]
return cam_info
def _randomize_collection(self, collection_prim, scene_range, drop_range=None, rand_rot=True, padding=True):
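# Rejection-sample non-overlapping 2D placements for the children of
# collection_prim: rasterize already-placed colliders into an occupancy
# canvas, try up to 3 candidate translations per object, and park
# objects that never fit far outside the scene (y = -2000).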
extents,objs = [],[]
for p in find_immediate_children(collection_prim):
objs.append(str(p.GetPath()))
extent, transform = find_collider(p)
extents.append(transform_verts(extent, transform))
objects = [standardize_bbox(bbox) for bbox in np.array(extents)[:,:,:-1]]
canvas = get_canvas(scene_range)
if drop_range is not None:
fill_canvas(canvas, scene_range, drop_range)
translations = []
for b,n in zip(objects,objs):
for _ in range(3):
t = sample_bbox_translation(b, scene_range)
if padding:
tb = scale(pad_to_square(b + t))
else:
tb = b + t
if not overlaps_with_current(canvas, scene_range, tb):
fill_canvas(canvas, scene_range, tb)
translations.append((n,t))
break
if len(translations) == 0 or translations[-1][0] != n:
translations.append((n,np.array([0,-2000])))
def randrot():
return random.random() * 360.
from pxr import UsdGeom
from omni.physx.scripts import physicsUtils
for n,t in translations:
xform = UsdGeom.Xformable(self.stage.GetPrimAtPath(n))
physicsUtils.setup_transform_as_scale_orient_translate(xform)
ops = xform.GetOrderedXformOps()
translateOp = ops[0]
translateOp.Set(tuple(np.array(tuple(translateOp.Get())) + np.append(t, 0)))
if rand_rot:
orientOp = ops[1]
orientOp.Set(rpy2quat(0,0,randrot()))
def _randomize_lighting(self):
domelight = self.stage.GetPrimAtPath("/Root/cam_light/Lights/DomeLight")
light = self.stage.GetPrimAtPath("/Root/cam_light/Lights/DistantLight")
light1 = self.stage.GetPrimAtPath("/Root/cam_light/Lights/DistantLight_01")
temp = np.random.rand(1)[0] * 5000 + 2500
domelight.GetAttribute('colorTemperature').Set(temp)
light.GetAttribute('colorTemperature').Set(temp)
light1.GetAttribute('colorTemperature').Set(temp)
int_range = 10000
int_min = 2500
for l in [domelight, light, light1]:
intensity = np.random.rand(1)[0] * int_range + int_min
l.GetAttribute('intensity').Set(intensity)
def _randomize_scene(self):
if self.RAND_LAYOUT:
# randomize front scene
self._randomize_collection(self.front_group, self.scene_range[:,:-1], self.drop_range)
# randomize back scene
self._randomize_collection(self.back_group, self.back_clutter_range,rand_rot=False, padding=False)
if self.RAND_LIGHTS:
# randomize lights
self._randomize_lighting()
def _get_2d_layout_occupancy_map(self):
extents = []
for p in find_immediate_children(self.front_group):
extent, transform = find_collider(p)
extents.append(transform_verts(extent, transform))
for p in find_immediate_children(self.back_group):
extent, transform = find_collider(p)
extents.append(transform_verts(extent, transform))
objects = [standardize_bbox(bbox) for bbox in np.array(extents)[:,:,:-1]]
#canvas = get_canvas(self.scene_range[:,:-1])
canvas = get_canvas(self.total_range[:,:-1])
for b in objects:
fill_canvas(canvas, self.total_range[:,:-1], b)
return canvas
def _sample_displacement_vector(self, grasp_point):
sampled_for = 0
mean_len = 160
std_len = 80
max_len = 240
min_len = 80
canvas = self._get_2d_layout_occupancy_map()
        while True:
sampled_for = sampled_for + 1
move_len = np.clip(np.random.normal(loc=mean_len,scale=std_len), min_len, max_len)
move_dir = sample_direction_zup(100).squeeze()
#move_dir[1,:] = np.abs(move_dir[1,:])
move_vec = move_dir * move_len
target_pts = grasp_point + move_vec.T
in_world = np.logical_and(
target_pts > self.total_range[0],
target_pts < self.total_range[1]).all(axis=1)
occupancies = []
            try:
                # ensure no obstacle lies on the straight-line path, checked out to 1.3x the max move length
                for i in range(int(max_len*1.3)):
                    temp = grasp_point + (target_pts - grasp_point) / max_len * i
                    temp[:,0] = np.clip(temp[:,0], self.total_range[0,0], self.total_range[1,0])
                    temp[:,1] = np.clip(temp[:,1], self.total_range[0,1], self.total_range[1,1])
occupancies.append(get_occupancy_value(
canvas, self.total_range[:,:-1], temp[:,:-1]))
path_no_collision = (np.array(occupancies) == 0).all(axis=0)
viable = np.logical_and(in_world, path_no_collision)
in_idx = np.nonzero(viable)[0]
except:
continue
if len(in_idx) > 0:
target_point = target_pts[np.random.choice(in_idx)]
return target_point
else:
if sampled_for > 10:
break
return None
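    # Note: the sampler above is rejection sampling -- each attempt draws 100
    # candidate directions (sample_direction_zup(100)), keeps only targets that
    # stay inside total_range with a collision-free straight-line path on the
    # occupancy map, and gives up after 10 attempts (returning None).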
def _reset_plush_geometry(self, sim, vis):
from pxr import PhysxSchema, Gf, Vt
# reset simulation points
sbAPI = PhysxSchema.PhysxDeformableBodyAPI(self.plush)
sbAPI.GetSimulationPointsAttr().Set(sim)
# reset simulation points velocity
sbAPI = PhysxSchema.PhysxDeformableAPI(self.plush)
velocity = np.array(sbAPI.GetSimulationVelocitiesAttr().Get())
zero_velocity = np.zeros_like(velocity)
velocity_vec = Vt.Vec3fArray([Gf.Vec3f(tuple(m)) for m in zero_velocity])
sbAPI.GetSimulationVelocitiesAttr().Set(velocity_vec)
# reset visual points
mesh = UsdGeom.Mesh(self.plush)
mesh.GetPointsAttr().Set(vis) | 22,641 | Python | 42.710425 | 167 | 0.591184 |
NVlabs/ACID/PlushSim/scripts/utils.py | import os
import math
import omni
import numpy as np
from PIL import Image
from pxr import UsdGeom, Usd, UsdPhysics, Gf
import matplotlib.pyplot as plt
################################################################
# State Saving Utils
# (Geometry)
################################################################
def transform_points_cam_to_world(cam_pts, camera_pose):
world_pts = np.transpose(
np.dot(camera_pose[0:3, 0:3], np.transpose(cam_pts)) + np.tile(camera_pose[0:3, 3:], (1, cam_pts.shape[0])))
return world_pts
def project_depth_world_space(depth_image, camera_intr, camera_pose, project_factor=1.):
cam_pts = project_depth_cam_space(depth_image, camera_intr, keep_dim=False, project_factor=project_factor)
world_pts = transform_points_cam_to_world(cam_pts, camera_pose)
W, H = depth_image.shape
pts = world_pts.reshape([W, H, 3])
return pts
def project_depth_cam_space(depth_img, camera_intrinsics, keep_dim=True, project_factor=1.):
# Get depth image size
im_h = depth_img.shape[0]
im_w = depth_img.shape[1]
# Project depth into 3D point cloud in camera coordinates
pix_x, pix_y = np.meshgrid(np.linspace(0, im_w - 1, im_w), np.linspace(0, im_h - 1, im_h))
cam_pts_x = np.multiply(pix_x - im_w / 2., -depth_img / camera_intrinsics[0, 0])
cam_pts_y = np.multiply(pix_y - im_h / 2., depth_img / camera_intrinsics[1, 1])
cam_pts_z = depth_img.copy()
cam_pts_x.shape = (im_h * im_w, 1)
cam_pts_y.shape = (im_h * im_w, 1)
cam_pts_z.shape = (im_h * im_w, 1)
cam_pts = np.concatenate((cam_pts_x, cam_pts_y, cam_pts_z), axis=1) * project_factor
# print("cam_pts: ", cam_pts.max(axis=0), cam_pts.min(axis=0))
if keep_dim:
cam_pts = cam_pts.reshape([im_h, im_w, 3])
return cam_pts
def get_camera_params(viewport):
stage = omni.usd.get_context().get_stage()
prim = stage.GetPrimAtPath(viewport.get_active_camera())
prim_tf = np.array(UsdGeom.Camera(prim).GetLocalTransformation())
focal_length = prim.GetAttribute("focalLength").Get()
horiz_aperture = prim.GetAttribute("horizontalAperture").Get()
fov = 2 * math.atan(horiz_aperture / (2 * focal_length))
image_w, image_h = viewport.get_texture_resolution()
camera_focal_length = (float(image_w) / 2) / np.tan(fov/ 2)
cam_intr = np.array(
[[camera_focal_length, 0, float(image_h) / 2],
[0, camera_focal_length, float(image_w) / 2],
[0, 0, 1]])
return prim_tf.T, cam_intr
def get_partial_point_cloud(viewport, in_world_space=True, project_factor=1.):
from omni.syntheticdata import sensors
data = sensors.get_depth_linear(viewport)
h, w = data.shape[:2]
depth_data = -np.frombuffer(data, np.float32).reshape(h, w, -1)
camera_pose, camera_intr = get_camera_params(viewport)
if in_world_space:
return project_depth_world_space(depth_data.squeeze(), camera_intr, camera_pose, project_factor=project_factor)
else:
return project_depth_cam_space(depth_data.squeeze(), camera_intr, project_factor=project_factor)
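# Typical capture sketch (assumes an active viewport with synthetic-data sensors
# initialized; names mirror the functions above):
# pose, intr = get_camera_params(viewport)   # 4x4 camera pose and 3x3 intrinsics
# pts = get_partial_point_cloud(viewport)    # (H, W, 3) points in world space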
def export_visual_mesh(prim, export_path, loc=None, rot=None, binarize=True):
assert prim.IsA(UsdGeom.Mesh), "prim needs to be a UsdGeom.Mesh"
mesh = UsdGeom.Mesh(prim)
points = mesh.GetPointsAttr().Get()
if binarize:
path = os.path.splitext(export_path)[0]+'.npy'
np.save(path, np.array(points, np.float16))
else:
print(export_path)
faces = np.array(mesh.GetFaceVertexIndicesAttr().Get()).reshape(-1,3) + 1
uv = mesh.GetPrimvar("st").Get()
with open(export_path, "w") as fp:
fp.write("mtllib teddy.mtl\nusemtl Material.004\n")
for x,y,z in points:
fp.write(f"v {x:.3f} {y:.3f} {z:.3f}\n")
for u,v in uv:
fp.write(f"vt {u:=.4f} {v:.4f}\n")
for i, (x,y,z) in enumerate(faces):
fp.write(f"f {x}/{i*3+1} {y}/{i*3+2} {z}/{i*3+3}\n")
def get_sim_points(prim, loc=None, rot=None):
from pxr import PhysxSchema
sbAPI = PhysxSchema.PhysxDeformableBodyAPI(prim)
points = sbAPI.GetSimulationPointsAttr().Get()
if rot is not None:
points = np.array(points)
w,x,y,z = eval(str(rot))
from scipy.spatial.transform import Rotation
rot = Rotation.from_quat(np.array([x,y,z,w]))
points = rot.apply(points)
if loc is not None:
loc = np.array(tuple(loc))
points = points + loc
return points
def get_sim_faces(prim):
from pxr import PhysxSchema
sbAPI = PhysxSchema.PhysxDeformableAPI(prim)
faces = sbAPI.GetSimulationIndicesAttr().Get()
return faces
def export_simulation_voxels(prim, export_path, binarize=True, export_faces=False):
points = get_sim_points(prim)
if export_faces:
faces = get_sim_faces(prim)
if binarize:
path = os.path.splitext(export_path)[0]+'.npy'
if export_faces:
np.savez(path, points=np.array(points, np.float16), faces=np.array(faces, int))
else:
np.save(path, np.array(points, np.float16))
else:
with open(export_path, 'w') as fp:
for p in points:
fp.write(f"v {p[0]:.3f} {p[1]:.3f} {p[2]:.3f}\n")
if export_faces:
faces = np.array(faces, int).reshape([-1,4]) + 1
for f in faces:
fp.write(f"f {f[0]} {f[1]} {f[2]} {f[3]}\n")
def visualize_sensors(gt, save_path):
from omni.syntheticdata import visualize
# GROUNDTRUTH VISUALIZATION
# Setup a figure
fig, axes = plt.subplots(1, 3, figsize=(20, 6))
axes = axes.flat
for ax in axes:
ax.axis("off")
# RGB
axes[0].set_title("RGB")
for ax in axes[:-1]:
ax.imshow(gt["rgb"])
# DEPTH
axes[1].set_title("Depth")
depth_data = np.clip(gt["depth"], 0, 255)
axes[1].imshow(visualize.colorize_depth(depth_data.squeeze()))
# SEMSEG
axes[2].set_title("Semantic Segmentation")
semantic_seg = gt["semanticSegmentation"]
semantic_rgb = visualize.colorize_segmentation(semantic_seg)
axes[2].imshow(semantic_rgb, alpha=0.7)
# Save figure
fig.savefig(save_path)
plt.close(fig)
def save_frame(frame_name, frame_data, save_dir,
save_rgb=True, save_seg=True, save_depth=True, save_partial_pointcloud=False):
if save_rgb:
rgb = frame_data['rgb_img']
Image.fromarray(rgb).save(f"{save_dir}/rgb_{frame_name}.jpg")
if save_seg:
        seg = frame_data['seg_img']
sem = np.tile(seg[:,:,np.newaxis], (1,1,3)).astype(np.uint8) * 255
Image.fromarray(sem).save(f"{save_dir}/seg_{frame_name}.jpg")
if save_depth:
depth_img = Image.fromarray((frame_data['dep_img'].squeeze() * 1000).astype(np.uint16), mode='I;16').convert(mode='I')
depth_img.save(f"{save_dir}/depth_{frame_name}.png")
def save_state(state_name, state_data, save_dir):
loc, rot, sim, vis = state_data
state_dict = {}
state_dict['loc'] = np.array(tuple(loc))
state_dict['rot'] = np.array(eval(str(rot)))
state_dict['sim'] = np.array(sim)
state_dict['vis'] = np.array(vis)
np.savez(f"{save_dir}/state_{state_name}.npz", **state_dict)
################################################################
# Interaction Utils
################################################################
def sample_pick_point(partial_point_cloud, segmentation):
im_h = segmentation.shape[0]
im_w = segmentation.shape[1]
# point cloud "image" height and width
pc_h = partial_point_cloud.shape[0]
pc_w = partial_point_cloud.shape[1]
assert im_h == pc_h and im_w == pc_w, "partial_point_cloud dimension should match with that of segmentation mask"
def sample_spherical(npoints, ndim=3):
vec = np.random.randn(ndim, npoints)
vec /= np.linalg.norm(vec, axis=0)
return vec
def sample_direction(npoints):
phi = np.random.randn(npoints) * 2 * np.pi
theta = np.clip(np.random.normal(loc=np.pi / 4.,scale=np.pi / 12., size=npoints), np.pi / 6., np.pi / 2.)
x = np.cos(phi) * np.sin(theta)
z = np.sin(phi) * np.sin(theta)
y = np.cos(theta)
vec = np.vstack([x,y,z])
return vec
def sample_direction_zup(npoints):
phi = np.random.randn(npoints) * 2 * np.pi
theta = np.clip(np.random.normal(loc=np.pi / 4.,scale=np.pi / 12., size=npoints), np.pi / 6., np.pi / 2.)
x = np.cos(phi) * np.sin(theta)
y = np.sin(phi) * np.sin(theta)
z = np.cos(theta)
vec = np.vstack([x,y,z])
return vec
def interpolate(start_loc, end_loc, speed):
    start_loc = np.array(start_loc)
    end_loc = np.array(end_loc)
    dist = np.linalg.norm(end_loc - start_loc)
    # guard against a zero-chunk plan when the move is shorter than one step
    chunks = max(dist // speed, 1)
    return start_loc + np.outer(np.arange(chunks+1,dtype=float), (end_loc - start_loc) / chunks)
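# Worked example (illustrative): interpolate((0, 0, 0), (10, 0, 0), speed=2)
# gives dist=10 and chunks=5, returning 6 waypoints along x at [0, 2, 4, 6, 8, 10].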
class magic_eef(object):
def __init__(self, end_effector, stage, eef_default_loc=None, default_speed=1,
fingerL=None, fingerR=None):
self.end_effector = end_effector
self.eef_default_loc = eef_default_loc
self.default_speed = default_speed
self.stage = stage
xform = UsdGeom.Xformable(end_effector)
self.ops = xform.GetOrderedXformOps()
assert self.ops[0].GetOpType() == UsdGeom.XformOp.TypeTranslate,\
"Code is based on UsdGeom.Xformable with first op as translation"
assert self.ops[1].GetOpType() == UsdGeom.XformOp.TypeOrient,\
"Code is based on UsdGeom.Xformable with second op as orientation"
self.attachmentPath = None
self.set_translation(eef_default_loc)
self.fingerL=fingerL
if fingerL is not None:
xform = UsdGeom.Xformable(fingerL)
self.fingerL_ops = xform.GetOrderedXformOps()[0]
self.fingerL_ops.Set((-5,0,20))
self.fingerR=fingerR
if fingerR is not None:
xform = UsdGeom.Xformable(fingerR)
self.fingerR_ops = xform.GetOrderedXformOps()[0]
            self.fingerR_ops.Set((5,0,20))
def get_translation(self):
return self.ops[0].Get()
def set_translation(self, loc):
self.ops[0].Set(loc)
def reset_translation(self):
self.set_translation(self.eef_default_loc)
def get_orientation(self):
return self.ops[1].Get()
def set_orientation(self, rot):
self.ops[1].Set(rot)
def grasp(self, target_object):
# enable collision
self.end_effector.GetAttribute("physics:collisionEnabled").Set(True)
# create magic grasp
self.attachmentPath = target_object.GetPath().AppendChild("rigidAttachment_0")
omni.kit.commands.execute(
"AddSoftBodyRigidAttachmentCommand",
target_attachment_path=self.attachmentPath,
softbody_path=target_object.GetPath(),
rigidbody_path=self.end_effector.GetPath(),
)
attachmentPrim = self.stage.GetPrimAtPath(self.attachmentPath)
assert attachmentPrim
assert attachmentPrim.GetAttribute("physxEnableHaloParticleFiltering").Set(True)
assert attachmentPrim.GetAttribute("physxEnableVolumeParticleAttachments").Set(True)
assert attachmentPrim.GetAttribute("physxEnableSurfaceTetraAttachments").Set(True)
omni.physx.get_physx_interface().release_physics_objects()
self.fingerL_ops.Set((-5,0,20))
self.fingerR_ops.Set((5,0,20))
def ungrasp(self):
assert self.attachmentPath is not None, "nothing is grasped! (there is no attachment registered)"
# release magic grasp
omni.kit.commands.execute(
"DeletePrimsCommand",
paths=[self.attachmentPath]
)
self.end_effector.GetAttribute("physics:collisionEnabled").Set(False)
omni.physx.get_physx_interface().release_physics_objects()
self.attachmentPath = None
self.fingerL_ops.Set((-80,0,20))
self.fingerR_ops.Set((80,0,20))
#self.reset_translation()
def plan_trajectory(self, start_loc, end_loc, speed=None):
return interpolate(start_loc, end_loc, self.default_speed if speed is None else speed)
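# Minimal usage sketch for magic_eef (prim paths are hypothetical; assumes the
# stage already contains the gripper prims):
# actor = stage.GetPrimAtPath("/Root/magic_gripper")
# eef = magic_eef(actor, stage, eef_default_loc=(0., 0., 600.))
# for waypoint in eef.plan_trajectory(eef.get_translation(), (0., 0., 100.)):
#     eef.set_translation(tuple(waypoint))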
################################
# Random utils
################################
def get_camera_name(viewport):
stage = omni.usd.get_context().get_stage()
return stage.GetPrimAtPath(viewport.get_active_camera()).GetName()
def rpy2quat(roll,pitch,yaw):
roll*=0.5
pitch*=0.5
yaw*=0.5
cr = math.cos(roll)
cp = math.cos(pitch)
cy = math.cos(yaw)
sr = math.sin(roll)
sp = math.sin(pitch)
sy = math.sin(yaw)
cpcy = cp * cy
spsy = sp * sy
spcy = sp * cy
cpsy = cp * sy
qx = (sr * cpcy - cr * spsy)
qy = (cr * spcy + sr * cpsy)
qz = (cr * cpsy - sr * spcy)
qw = cr * cpcy + sr * spsy
return Gf.Quatf(qw,qx,qy,qz)
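# Sanity check (illustrative): rpy2quat(0, 0, math.pi / 2) is a pure 90-degree yaw,
# i.e. approximately Gf.Quatf(0.707, 0, 0, 0.707) in (w, x, y, z) order.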
################################
# Scene randomization utils
################################
def is_collider(prim):
try:
return prim.GetAttribute("physics:collisionEnabled").Get()
except:
return False
def find_collider(prim):
#from pxr import UsdPhysics
primRange = iter(Usd.PrimRange(prim))
extent, transform = None, None
for p in primRange:
#if p.HasAPI(UsdPhysics.CollisionAPI):
if is_collider(p):
extent = p.GetAttribute("extent").Get()
if extent is None:
# this means that the object is a cube
extent = np.array([[-50,-50,-50],[50,50,50]])
transform = omni.usd.get_world_transform_matrix(p, Usd.TimeCode.Default())
primRange.PruneChildren()
break
return np.array(extent), np.array(transform)
def find_immediate_children(prim):
primRange = Usd.PrimRange(prim)
primPath = prim.GetPath()
immediate_children = []
for p in primRange:
if p.GetPath().GetParentPath() == primPath:
immediate_children.append(p)
return immediate_children
def extent_to_cube(extent):
min_x,min_y,min_z = extent[0]
max_x,max_y,max_z = extent[1]
verts = np.array([
(max_x,max_y,max_z),
(max_x,max_y,min_z),
(max_x,min_y,max_z),
(max_x,min_y,min_z),
(min_x,max_y,max_z),
(min_x,max_y,min_z),
(min_x,min_y,max_z),
(min_x,min_y,min_z),])
faces = np.array([
(1,5,7,3),
(4,3,7,8),
(8,7,5,6),
(6,2,4,8),
(2,1,3,4),
(6,5,1,2),])
return verts, faces
def transform_verts(verts, transform):
verts_app = np.concatenate([verts,np.ones((verts.shape[0], 1))], axis=-1)
return (verts_app @ transform)[:,:-1]
def export_quad_obj(verts, faces, export_path):
with open(export_path, 'w') as fp:
for p in verts:
fp.write(f"v {p[0]:.3f} {p[1]:.3f} {p[2]:.3f}\n")
for f in faces:
fp.write(f"f {f[0]} {f[1]} {f[2]} {f[3]}\n")
def standardize_bbox(bbox):
return np.array([bbox.min(axis=0),bbox.max(axis=0)])
def get_bbox_translation_range(bbox, scene_range):
# bbox size
size_x,size_y = bbox[1] - bbox[0]
center_range = scene_range + np.array([[size_x, size_y],[-size_x,-size_y]]) / 2
center = np.mean(bbox, axis=0)
return center_range - center
def sample_bbox_translation(bbox, scene_range):
translation_range = get_bbox_translation_range(bbox, scene_range)
sample = np.random.rand(2)
return translation_range[0] + sample * (translation_range[1] - translation_range[0])
def get_canvas(scene_range):
scene_size = scene_range[1] - scene_range[0]
scene_size = ( scene_size * 1.1 ).astype(int)
return np.zeros(scene_size)
def fill_canvas(canvas, scene_range, bbox,val=1):
canvas_center = np.array(canvas.shape) / 2
cb = (bbox - np.mean(scene_range, axis=0) + canvas_center).astype(int)
if cb[0,0] < 0 or cb[0,1] < 0:
return
h,w = canvas.shape
if cb[1,0] >= h or cb[1,1] >= w:
return
canvas[cb[0,0]:cb[1,0], cb[0,1]:cb[1,1]] = val
def get_occupancy_value(canvas, scene_range, pts):
canvas_center = np.array(canvas.shape) / 2
pts = (pts - np.mean(scene_range, axis=0) + canvas_center).astype(int)
return canvas[pts[:,0], pts[:,1]]
def overlaps_with_current(canvas, scene_range, bbox,val=0):
canvas_center = np.array(canvas.shape) / 2
cb = (bbox - np.mean(scene_range, axis=0) + canvas_center).astype(int)
return (canvas[cb[0,0]:cb[1,0], cb[0,1]:cb[1,1]] != val).any()
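# Sketch of the 2D occupancy-map workflow these helpers implement (bboxes are
# hypothetical [min_xy, max_xy] arrays):
# canvas = get_canvas(scene_range)                      # empty grid over the scene
# fill_canvas(canvas, scene_range, bbox_a)              # rasterize a placed object
# clash = overlaps_with_current(canvas, scene_range, bbox_b)  # True if bbox_b hits bbox_a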
def pad_to_square(bbox):
size_x,size_y = (bbox[1] - bbox[0]) / 2.
center = np.mean(bbox, axis=0)
length = max(size_x,size_y)
return np.stack([center-length,center+length])
def scale(bbox,factor=1.1):
size_x,size_y = (bbox[1] - bbox[0]) / 2. *factor
center = np.mean(bbox, axis=0)
return np.stack([center-[size_x,size_y],center+[size_x,size_y]])
| 16,913 | Python | 36.923767 | 126 | 0.601845 |
NVlabs/ACID/PlushSim/scripts/writer.py | #!/usr/bin/env python
# Copyright (c) 2020, NVIDIA CORPORATION. All rights reserved.
#
# NVIDIA CORPORATION and its licensors retain all intellectual property
# and proprietary rights in and to this software, related documentation
# and any modifications thereto. Any use, reproduction, disclosure or
# distribution of this software and related documentation without an express
# license agreement from NVIDIA CORPORATION is strictly prohibited.
"""Helper class for writing groundtruth data offline.
"""
import atexit
import colorsys
import copy
import queue
import omni
import os
import threading
import numpy as np
from PIL import Image, ImageDraw
class DataWriter:
def __init__(self, data_dir, num_worker_threads, max_queue_size=500, sensor_settings=None):
from omni.isaac.synthetic_utils import visualization as vis
self.vis = vis
atexit.register(self.stop_threads)
self.data_dir = data_dir
# Threading for multiple scenes
self.num_worker_threads = num_worker_threads
# Initialize queue with a specified size
self.q = queue.Queue(max_queue_size)
self.threads = []
self._viewport = omni.kit.viewport.get_viewport_interface()
self.create_output_folders(sensor_settings)
def start_threads(self):
"""Start worker threads."""
for _ in range(self.num_worker_threads):
t = threading.Thread(target=self.worker, daemon=True)
t.start()
self.threads.append(t)
def stop_threads(self):
"""Waits for all tasks to be completed before stopping worker threads."""
print(f"Finish writing data...")
# Block until all tasks are done
self.q.join()
# Stop workers
for _ in range(self.num_worker_threads):
self.q.put(None)
for t in self.threads:
t.join()
print(f"Done.")
def worker(self):
"""Processes task from queue. Each tasks contains groundtruth data and metadata which is used to transform the output and write it to disk."""
while True:
groundtruth = self.q.get()
if groundtruth is None:
break
filename = groundtruth["METADATA"]["image_id"]
viewport_name = groundtruth["METADATA"]["viewport_name"]
for gt_type, data in groundtruth["DATA"].items():
if gt_type == "RGB":
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "DEPTH":
if groundtruth["METADATA"]["DEPTH"]["NPY"]:
self.depth_folder = self.data_dir + "/" + str(viewport_name) + "/depth/"
np.save(self.depth_folder + filename + ".npy", data)
if groundtruth["METADATA"]["DEPTH"]["COLORIZE"]:
self.save_image(viewport_name, gt_type, data, filename)
elif gt_type == "INSTANCE":
self.save_segmentation(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"]["INSTANCE"]["WIDTH"],
groundtruth["METADATA"]["INSTANCE"]["HEIGHT"],
groundtruth["METADATA"]["INSTANCE"]["COLORIZE"],
groundtruth["METADATA"]["INSTANCE"]["NPY"],
)
elif gt_type == "SEMANTIC":
self.save_segmentation(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"]["SEMANTIC"]["WIDTH"],
groundtruth["METADATA"]["SEMANTIC"]["HEIGHT"],
groundtruth["METADATA"]["SEMANTIC"]["COLORIZE"],
groundtruth["METADATA"]["SEMANTIC"]["NPY"],
)
elif gt_type in ["BBOX2DTIGHT", "BBOX2DLOOSE"]:
self.save_bbox(
viewport_name,
gt_type,
data,
filename,
groundtruth["METADATA"][gt_type]["COLORIZE"],
groundtruth["DATA"]["RGB"],
groundtruth["METADATA"][gt_type]["NPY"],
)
elif gt_type == "CAMERA":
self.camera_folder = self.data_dir + "/" + str(viewport_name) + "/camera/"
np.save(self.camera_folder + filename + ".npy", data)
elif gt_type == "POSES":
self.poses_folder = self.data_dir + "/" + str(viewport_name) + "/poses/"
np.save(self.poses_folder + filename + ".npy", data)
else:
raise NotImplementedError
self.q.task_done()
def save_segmentation(
self, viewport_name, data_type, data, filename, width=1280, height=720, display_rgb=True, save_npy=True
):
self.instance_folder = self.data_dir + "/" + str(viewport_name) + "/instance/"
self.semantic_folder = self.data_dir + "/" + str(viewport_name) + "/semantic/"
# Save ground truth data locally as npy
if data_type == "INSTANCE" and save_npy:
np.save(self.instance_folder + filename + ".npy", data)
if data_type == "SEMANTIC" and save_npy:
np.save(self.semantic_folder + filename + ".npy", data)
if display_rgb:
image_data = np.frombuffer(data, dtype=np.uint8).reshape(*data.shape, -1)
num_colors = 50 if data_type == "SEMANTIC" else None
color_image = self.vis.colorize_segmentation(image_data, width, height, 3, num_colors)
# color_image = visualize.colorize_instance(image_data)
color_image_rgb = Image.fromarray(color_image, "RGB")
if data_type == "INSTANCE":
color_image_rgb.save(f"{self.instance_folder}/{filename}.png")
if data_type == "SEMANTIC":
color_image_rgb.save(f"{self.semantic_folder}/{filename}.png")
def save_image(self, viewport_name, img_type, image_data, filename):
self.rgb_folder = self.data_dir + "/" + str(viewport_name) + "/rgb/"
self.depth_folder = self.data_dir + "/" + str(viewport_name) + "/depth/"
if img_type == "RGB":
# Save ground truth data locally as png
rgb_img = Image.fromarray(image_data, "RGBA")
rgb_img.save(f"{self.rgb_folder}/{filename}.png")
elif img_type == "DEPTH":
# Convert linear depth to inverse depth for better visualization
image_data = image_data * 100
image_data = np.reciprocal(image_data)
# Save ground truth data locally as png
image_data[image_data == 0.0] = 1e-5
image_data = np.clip(image_data, 0, 255)
image_data -= np.min(image_data)
if np.max(image_data) > 0:
image_data /= np.max(image_data)
depth_img = Image.fromarray((image_data * 255.0).astype(np.uint8))
depth_img.save(f"{self.depth_folder}/{filename}.png")
def save_bbox(self, viewport_name, data_type, data, filename, display_rgb=True, rgb_data=None, save_npy=True):
self.bbox_2d_tight_folder = self.data_dir + "/" + str(viewport_name) + "/bbox_2d_tight/"
self.bbox_2d_loose_folder = self.data_dir + "/" + str(viewport_name) + "/bbox_2d_loose/"
# Save ground truth data locally as npy
if data_type == "BBOX2DTIGHT" and save_npy:
np.save(self.bbox_2d_tight_folder + filename + ".npy", data)
if data_type == "BBOX2DLOOSE" and save_npy:
np.save(self.bbox_2d_loose_folder + filename + ".npy", data)
if display_rgb and rgb_data is not None:
color_image = self.vis.colorize_bboxes(data, rgb_data)
color_image_rgb = Image.fromarray(color_image, "RGBA")
if data_type == "BBOX2DTIGHT":
color_image_rgb.save(f"{self.bbox_2d_tight_folder}/{filename}.png")
if data_type == "BBOX2DLOOSE":
color_image_rgb.save(f"{self.bbox_2d_loose_folder}/{filename}.png")
def create_output_folders(self, sensor_settings=None):
"""Checks if the sensor output folder corresponding to each viewport is created. If not, it creates them."""
if not os.path.exists(self.data_dir):
os.mkdir(self.data_dir)
if sensor_settings is None:
sensor_settings = dict()
viewports = self._viewport.get_instance_list()
viewport_names = [self._viewport.get_viewport_window_name(vp) for vp in viewports]
sensor_settings_viewport = {
"rgb": {"enabled": True},
"depth": {"enabled": True, "colorize": True, "npy": True},
"instance": {"enabled": True, "colorize": True, "npy": True},
"semantic": {"enabled": True, "colorize": True, "npy": True},
"bbox_2d_tight": {"enabled": True, "colorize": True, "npy": True},
"bbox_2d_loose": {"enabled": True, "colorize": True, "npy": True},
"camera": {"enabled": True, "npy": True},
"poses": {"enabled": True, "npy": True},
}
for name in viewport_names:
sensor_settings[name] = copy.deepcopy(sensor_settings_viewport)
for viewport_name in sensor_settings:
viewport_folder = self.data_dir + "/" + str(viewport_name)
if not os.path.exists(viewport_folder):
os.mkdir(viewport_folder)
for sensor_name in sensor_settings[viewport_name]:
if sensor_settings[viewport_name][sensor_name]["enabled"]:
sensor_folder = self.data_dir + "/" + str(viewport_name) + "/" + str(sensor_name)
if not os.path.exists(sensor_folder):
os.mkdir(sensor_folder)
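# Usage sketch (hypothetical output path; `groundtruth` must follow the dict layout
# consumed by worker(), i.e. {"METADATA": {...}, "DATA": {...}}):
# writer = DataWriter("output/", num_worker_threads=4)
# writer.start_threads()
# writer.q.put(groundtruth)  # frames are then written to disk asynchronously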
| 10,072 | Python | 47.196172 | 150 | 0.552621 |
NVlabs/ACID/ACID/environment.yaml | name: acid_train
channels:
- conda-forge
- pytorch
- defaults
dependencies:
- cython=0.29.2
- imageio=2.4.1
- numpy=1.15.4
- numpy-base=1.15.4
- matplotlib=3.0.3
- matplotlib-base=3.0.3
- pandas=0.23.4
- pillow=5.3.0
- pyembree=0.1.4
- pytest=4.0.2
- python=3.7.10
- pytorch=1.4.0
- pyyaml=3.13
- scikit-image=0.14.1
- scipy=1.5.2
- tensorboardx=1.4
- torchvision=0.2.1
- tqdm=4.28.1
- trimesh=2.37.7
- pip
- pip:
- scikit-learn==0.24.2
- h5py==2.9.0
- plyfile==0.7
- polyscope==1.2.0
| 551 | YAML | 15.727272 | 26 | 0.575318 |
NVlabs/ACID/ACID/setup.py | try:
from setuptools import setup
except ImportError:
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize
from torch.utils.cpp_extension import BuildExtension, CppExtension, CUDAExtension
import numpy
# Get the numpy include directory.
numpy_include_dir = numpy.get_include()
# Extensions
# mcubes (marching cubes algorithm)
mcubes_module = Extension(
'src.utils.libmcubes.mcubes',
sources=[
'src/utils/libmcubes/mcubes.pyx',
'src/utils/libmcubes/pywrapper.cpp',
'src/utils/libmcubes/marchingcubes.cpp'
],
language='c++',
extra_compile_args=['-std=c++11'],
include_dirs=[numpy_include_dir]
)
# mise (efficient mesh extraction)
mise_module = Extension(
'src.utils.libmise.mise',
sources=[
'src/utils/libmise/mise.pyx'
],
)
# simplify (efficient mesh simplification)
simplify_mesh_module = Extension(
'src.utils.libsimplify.simplify_mesh',
sources=[
'src/utils/libsimplify/simplify_mesh.pyx'
],
include_dirs=[numpy_include_dir]
)
# Gather all extension modules
ext_modules = [
mcubes_module,
mise_module,
simplify_mesh_module,
]
setup(
ext_modules=cythonize(ext_modules),
cmdclass={
'build_ext': BuildExtension
}
)
| 1,311 | Python | 21.237288 | 81 | 0.691076 |
NVlabs/ACID/ACID/plush_train.py | import torch
import torch.optim as optim
from tensorboardX import SummaryWriter
import matplotlib; matplotlib.use('Agg')
import numpy as np
import os
import argparse
import time, datetime
from src import config, data
from src.checkpoints import CheckpointIO
from collections import defaultdict
import shutil
from matplotlib.backends.backend_agg import FigureCanvasAgg as FigureCanvas
from src.utils import common_util
import matplotlib.pyplot as plt
from PIL import Image
# Arguments
parser = argparse.ArgumentParser(
description='Train a Plush Env dynamics model.'
)
parser.add_argument('config', type=str, help='Path to config file.')
parser.add_argument('--no-cuda', action='store_true', help='Do not use cuda.')
parser.add_argument('--exit-after', type=int, default=-1,
                    help='Checkpoint and exit after specified number of seconds '
                    'with exit code 3.')
parser.add_argument('--debug', action='store_true', help='debugging')
parser.add_argument('--eval_only', action='store_true', help='run eval only')
args = parser.parse_args()
cfg = config.load_config(args.config, 'configs/default.yaml')
is_cuda = (torch.cuda.is_available() and not args.no_cuda)
device = torch.device("cuda" if is_cuda else "cpu")
# Set t0
t0 = time.time()
# Shorthands
out_dir = cfg['training']['out_dir']
if args.debug:
cfg['training']['batch_size'] = 2
cfg['training']['vis_n_outputs'] = 1
cfg['training']['print_every'] = 1
cfg['training']['backup_every'] = 1
cfg['training']['validate_every'] = 1
cfg['training']['visualize_every'] = 1
cfg['training']['checkpoint_every'] = 1
cfg['training']['visualize_total'] = 1
batch_size = cfg['training']['batch_size']
backup_every = cfg['training']['backup_every']
vis_n_outputs = cfg['generation']['vis_n_outputs']
exit_after = args.exit_after
model_selection_metric = cfg['training']['model_selection_metric']
if cfg['training']['model_selection_mode'] == 'maximize':
model_selection_sign = 1
elif cfg['training']['model_selection_mode'] == 'minimize':
model_selection_sign = -1
else:
raise ValueError('model_selection_mode must be '
'either maximize or minimize.')
# Output directory
if not os.path.exists(out_dir):
os.makedirs(out_dir)
shutil.copyfile(args.config, os.path.join(out_dir, 'config.yaml'))
# Dataset
train_loader = data.core.get_plush_loader(cfg, cfg['model']['type'], split='train')
val_loader = data.core.get_plush_loader(cfg, cfg['model']['type'], split='test')
# Model
model = config.get_model(cfg, device=device)
# Generator
generator = config.get_generator(model, cfg, device=device)
# Initialize training
optimizer = optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-4)
trainer = config.get_trainer(model, optimizer, cfg, device=device)
checkpoint_io = CheckpointIO(out_dir, model=model, optimizer=optimizer)
try:
load_dict = checkpoint_io.load('model_best.pt')
except FileNotFoundError:
load_dict = dict()
epoch_it = load_dict.get('epoch_it', 0)
it = load_dict.get('it', 0)
metric_val_best = load_dict.get(
'loss_val_best', -model_selection_sign * np.inf)
if metric_val_best == np.inf or metric_val_best == -np.inf:
metric_val_best = -model_selection_sign * np.inf
print('Current best validation metric (%s): %.8f'
% (model_selection_metric, metric_val_best))
logger = SummaryWriter(os.path.join(out_dir, 'logs'))
# Shorthands
print_every = cfg['training']['print_every']
checkpoint_every = cfg['training']['checkpoint_every']
validate_every = cfg['training']['validate_every']
visualize_every = cfg['training']['visualize_every']
# Print model
nparameters = sum(p.numel() for p in model.parameters())
print('Total number of parameters: %d' % nparameters)
print('output path: ', cfg['training']['out_dir'])
# For visualizations
data_vis_list = []
if cfg['model']['type'] == 'geom':
vis_dataset = data.core.get_geom_dataset(cfg, split='vis')
elif cfg['model']['type'] == 'combined':
vis_dataset = data.core.get_combined_dataset(cfg, split='vis')
# Build a data dictionary for visualization
np.random.seed(0)
data_idxes = np.random.randint(len(vis_dataset), size=cfg['training']['visualize_total'])
for i, id in enumerate(data_idxes):
data_vis = data.core.collate_pair_fn([vis_dataset[id]])
data_vis_list.append({'it': i, 'data': data_vis})
if args.eval_only:
eval_dict, figs = trainer.evaluate(val_loader)
metric_val = eval_dict[model_selection_metric]
for k, v in eval_dict.items():
print(f"metric {k}: {v}")
print('Validation metric (%s): %.4f'
% (model_selection_metric, metric_val))
for k,v in figs.items():
fig_path = os.path.join(out_dir, 'vis', f"{k}_eval_best.png")
v.savefig(fig_path)
for data_vis in data_vis_list:
out = generator.generate_mesh(data_vis['data'])
# Get statistics
try:
mesh, stats_dict = out
except TypeError:
mesh, stats_dict = out, {}
mesh.export(os.path.join(out_dir, 'vis', f"best_{data_vis['it']}.off"))
out2 = generator.generate_pointcloud(data_vis['data'])
for i,pcloud in enumerate(out2):
ipath = os.path.join(out_dir, 'vis', f"best_{data_vis['it']}_{i}.obj")
common_util.write_pointcoud_as_obj(ipath, pcloud)
pcloud_dict = [{"title":'source'if i == 0 else 'target',
"pts": p[:,:3],
"col": None if p.shape[1] == 3 else p[:,3:]
} for i,p in enumerate(out2)]
fig = common_util.side_by_side_point_clouds(pcloud_dict)
width, height = fig.get_size_inches() * fig.get_dpi()
canvas = FigureCanvas(fig)
canvas.draw()
img_path = os.path.join(out_dir, 'vis', f"best_{data_vis['it']}.png")
Image.fromarray(
np.frombuffer(
canvas.tostring_rgb(),
dtype='uint8').reshape(int(height), int(width), 3)).save(
img_path
)
plt.close(fig)
quit()
while True:
epoch_it += 1
for batch in train_loader:
it += 1
losses = trainer.train_step(batch, it)
for k,v in losses.items():
logger.add_scalar(f'train/{k}_loss', v, it)
# Print output
if (it % print_every) == 0:
t = datetime.datetime.now()
print_str = f"[Epoch {epoch_it:04d}] it={it:04d}, time: {time.time()-t0:.3f}, "
print_str += f"{t.hour:02d}:{t.minute:02d}, "
for k,v in losses.items():
print_str += f"{k}:{v:.4f}, "
print(print_str)
# Save checkpoint
if (checkpoint_every > 0 and (it % checkpoint_every) == 0):
print('Saving checkpoint')
checkpoint_io.save('model.pt', epoch_it=epoch_it, it=it,
loss_val_best=metric_val_best)
# Backup if necessary
if (backup_every > 0 and (it % backup_every) == 0):
print('Backup checkpoint')
checkpoint_io.save('model_%d.pt' % it, epoch_it=epoch_it, it=it,
loss_val_best=metric_val_best)
# Run validation
if validate_every > 0 and (it % validate_every) == 0:
print('Running Validation')
eval_dict, figs = trainer.evaluate(val_loader)
for k,v in figs.items():
fig_path = os.path.join(out_dir, 'vis', f"{k}_{it}.png")
v.savefig(fig_path)
logger.add_figure(k, v, it)
metric_val = eval_dict[model_selection_metric]
print('Validation metric (%s): %.4f'
% (model_selection_metric, metric_val))
for k, v in eval_dict.items():
print(f"metric {k}: {v}")
logger.add_scalar('val/%s' % k, v, it)
if model_selection_sign * (metric_val - metric_val_best) > 0:
metric_val_best = metric_val
print('New best model (loss %.4f)' % metric_val_best)
checkpoint_io.save('model_best.pt', epoch_it=epoch_it, it=it,
loss_val_best=metric_val_best)
# Visualize output
if visualize_every > 0 and (it % visualize_every) == 0:
print('Visualizing')
renders = []
for data_vis in data_vis_list:
out = generator.generate_mesh(data_vis['data'])
# Get statistics
try:
mesh, stats_dict = out
except TypeError:
mesh, stats_dict = out, {}
mesh.export(os.path.join(out_dir, 'vis', '{}_{}.off'.format(it, data_vis['it'])))
out2 = generator.generate_pointcloud(data_vis['data'])
for i,pcloud in enumerate(out2):
ipath = os.path.join(out_dir, 'vis', f"{it}_{data_vis['it']}_{i}.obj")
common_util.write_pointcoud_as_obj(ipath, pcloud)
name_dict = ['source', 'target', 'source_rollout', 'target_rollout']
pcloud_dict = [{"title":name_dict[i],
"pts": p[:,:3],
"col": None if p.shape[1] == 3 else p[:,3:]
} for i,p in enumerate(out2)]
fig = common_util.side_by_side_point_clouds(pcloud_dict)
width, height = fig.get_size_inches() * fig.get_dpi()
canvas = FigureCanvas(fig)
canvas.draw()
img_path = os.path.join(out_dir, 'vis', f"{it}_{data_vis['it']}.png")
Image.fromarray(
np.frombuffer(
canvas.tostring_rgb(),
dtype='uint8').reshape(int(height), int(width), 3)).save(
img_path
)
plt.close(fig)
# Exit if necessary
if exit_after > 0 and (time.time() - t0) >= exit_after:
print('Time limit reached. Exiting.')
checkpoint_io.save('model.pt', epoch_it=epoch_it, it=it,
loss_val_best=metric_val_best)
exit(3)
| 10,307 | Python | 39.108949 | 116 | 0.573979 |
NVlabs/ACID/ACID/README.md | [![NVIDIA Source Code License](https://img.shields.io/badge/license-NSCL-blue.svg)](https://github.com/NVlabs/ACID/blob/master/LICENSE)
![Python 3.7](https://img.shields.io/badge/python-3.7-green.svg)
# ACID model
<div style="text-align: center">
<img src="../_media/model_figure.png" width="600"/>
</div>
## Prerequisites
We use anaconda to manage necessary packages. You can create an anaconda environment called `acid_train` using
```bash
conda env create -f environment.yaml
conda activate acid_train
pip install torch-scatter==2.0.4 -f https://pytorch-geometric.com/whl/torch-1.4.0+cu101.html
```
Next, we need to compile the extension modules used for mesh utilities, which come from [Convolutional Occupancy Network](https://github.com/autonomousvision/convolutional_occupancy_networks).
You can do this via
```
python setup.py build_ext --inplace
```
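
To sanity-check the build, you can try importing the compiled modules from the repository root (a quick, optional check; the module paths come from `setup.py`):
```python
from src.utils.libmcubes import mcubes
from src.utils.libmise import mise
from src.utils.libsimplify import simplify_mesh
print("extension modules compiled and importable")
```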
## Get Raw Manipulation Data
You can obtain our pre-generated manipulation trajectories from [PlushSim](../PlushSim/) from this [Google Drive](https://drive.google.com/drive/folders/1wOIk58e3wCfgOeYFBC1caYP2KAoFijbW?usp=sharing) directory. The manipulation trajectories are broken down to 10GB chunks. We recommend using [`gdown`](https://github.com/wkentaro/gdown) for downloading.
After downloading, please run the following commands to decompress the data:
```
cat data_plush.zip.part-* > data_plush.zip
unzip data_plush.zip
```
You should have the following folder structure:
```
ACID/
data_plush/
metadata/
split1/
...
split2/
...
split3/
...
split1/
...
split2/
...
split3/
...
```
### Generating Training Data
To generate the input-output pairs used for ACID training, run the following scripts:
```
cd preprocess
python gen_data_flow_plush.py
python gen_data_flow_splits.py
python gen_data_contrastive_pairs_flow.py
```
This should create a `train_data` directory inside this folder, with the following structure:
```
ACID/
train_data/
flow/
split1/
split2/
split3/
train.pkl
test.pkl
pair/
split1/
split2/
split3/
```
If you wish to generate the data at another location, you can pass in different flags. Check out each preprocess script for details.
## Training
Finally, to train the ACID model from scratch, run:
```
python plush_train.py configs/plush_dyn_geodesics.yaml
```
For available training options, please take a look at `configs/default.yaml` and `configs/plush_dyn_geodesics.yaml`.
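Once training has produced a checkpoint, the same entry point can be re-run in evaluation mode via the `--eval_only` flag defined in `plush_train.py`; it loads the best checkpoint and writes metrics and visualizations to the output directory:
```
python plush_train.py configs/plush_dyn_geodesics.yaml --eval_only
```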
### Pretrained Weights
You can download pretrained weights on [Google Drive](https://drive.google.com/file/d/15ClJpMx8LlgPHXp1EeCP3Z4kD5h5bDKl/view?usp=sharing), please save `model_best.pt` to `result/geodesics/`.
## License
Please check the [LICENSE](../LICENSE) file. ACID may be used non-commercially, meaning for research or evaluation purposes only. For business inquiries, please contact researchinquiries@nvidia.com.
If you find our code or paper useful, please consider citing
```bibtex
@article{shen2022acid,
title={ACID: Action-Conditional Implicit Visual Dynamics for Deformable Object Manipulation},
author={Shen, Bokui and Jiang, Zhenyu and Choy, Christopher and J. Guibas, Leonidas and Savarese, Silvio and Anandkumar, Anima and Zhu, Yuke},
journal={Robotics: Science and Systems (RSS)},
year={2022}
}
``` | 3,489 | Markdown | 33.9 | 353 | 0.70364 |
NVlabs/ACID/ACID/src/training.py | import numpy as np
from collections import defaultdict
from tqdm import tqdm
class BaseTrainer(object):
''' Base trainer class.
'''
def evaluate(self, val_loader):
''' Performs an evaluation.
Args:
val_loader (dataloader): pytorch dataloader
'''
eval_list = defaultdict(list)
for data in tqdm(val_loader):
eval_step_dict = self.eval_step(data)
for k, v in eval_step_dict.items():
eval_list[k].append(v)
eval_dict = {k: np.mean(v) for k, v in eval_list.items()}
return eval_dict
def train_step(self, *args, **kwargs):
''' Performs a training step.
'''
raise NotImplementedError
def eval_step(self, *args, **kwargs):
''' Performs an evaluation step.
'''
raise NotImplementedError
def visualize(self, *args, **kwargs):
''' Performs visualization.
'''
raise NotImplementedError
| 988 | Python | 23.724999 | 65 | 0.571862 |
NVlabs/ACID/ACID/src/common.py | # import multiprocessing
import torch
import numpy as np
import math
import numpy as np
def compute_iou(occ1, occ2):
''' Computes the Intersection over Union (IoU) value for two sets of
occupancy values.
Args:
occ1 (tensor): first set of occupancy values
occ2 (tensor): second set of occupancy values
'''
occ1 = np.asarray(occ1)
occ2 = np.asarray(occ2)
# Put all data in second dimension
# Also works for 1-dimensional data
if occ1.ndim >= 2:
occ1 = occ1.reshape(occ1.shape[0], -1)
if occ2.ndim >= 2:
occ2 = occ2.reshape(occ2.shape[0], -1)
# Convert to boolean values
occ1 = (occ1 >= 0.5)
occ2 = (occ2 >= 0.5)
# Compute IOU
area_union = (occ1 | occ2).astype(np.float32).sum(axis=-1)
area_intersect = (occ1 & occ2).astype(np.float32).sum(axis=-1)
iou = (area_intersect / area_union)
return iou
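# Worked example (illustrative): occ1 = [0.9, 0.8, 0.1, 0.2] and occ2 = [1.0, 0.3, 0.7, 0.1]
# threshold at 0.5 to give intersection 1 and union 3, so compute_iou returns ~0.333.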
def chamfer_distance(points1, points2, give_id=False):
''' Returns the chamfer distance for the sets of points.
Args:
        points1 (tensor): first point set
        points2 (tensor): second point set
        give_id (bool): whether to return the IDs of nearest points
'''
return chamfer_distance_naive(points1, points2)
def chamfer_distance_naive(points1, points2):
''' Naive implementation of the Chamfer distance.
Args:
        points1 (tensor): first point set
        points2 (tensor): second point set
'''
assert(points1.size() == points2.size())
batch_size, T, _ = points1.size()
points1 = points1.view(batch_size, T, 1, 3)
points2 = points2.view(batch_size, 1, T, 3)
distances = (points1 - points2).pow(2).sum(-1)
chamfer1 = distances.min(dim=1)[0].mean(dim=1)
chamfer2 = distances.min(dim=2)[0].mean(dim=1)
chamfer = chamfer1 + chamfer2
return chamfer
def make_3d_grid(bb_min, bb_max, shape):
''' Makes a 3D grid.
Args:
bb_min (tuple): bounding box minimum
bb_max (tuple): bounding box maximum
shape (tuple): output shape
'''
size = shape[0] * shape[1] * shape[2]
pxs = torch.linspace(bb_min[0], bb_max[0], shape[0])
pys = torch.linspace(bb_min[1], bb_max[1], shape[1])
pzs = torch.linspace(bb_min[2], bb_max[2], shape[2])
pxs = pxs.view(-1, 1, 1).expand(*shape).contiguous().view(size)
pys = pys.view(1, -1, 1).expand(*shape).contiguous().view(size)
pzs = pzs.view(1, 1, -1).expand(*shape).contiguous().view(size)
p = torch.stack([pxs, pys, pzs], dim=1)
return p
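# Example (illustrative): make_3d_grid((-0.5,)*3, (0.5,)*3, (32,)*3) returns a
# (32768, 3) tensor of grid points, with x varying slowest and z varying fastest.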
def transform_points(points, transform):
''' Transforms points with regard to passed camera information.
Args:
points (tensor): points tensor
transform (tensor): transformation matrices
'''
assert(points.size(2) == 3)
assert(transform.size(1) == 3)
assert(points.size(0) == transform.size(0))
if transform.size(2) == 4:
R = transform[:, :, :3]
t = transform[:, :, 3:]
points_out = points @ R.transpose(1, 2) + t.transpose(1, 2)
elif transform.size(2) == 3:
K = transform
points_out = points @ K.transpose(1, 2)
return points_out
def b_inv(b_mat):
''' Performs batch matrix inversion.
Arguments:
b_mat: the batch of matrices that should be inverted
'''
    eye = b_mat.new_ones(b_mat.size(-1)).diag().expand_as(b_mat)
    # torch.gesv was deprecated and later removed; torch.solve is the equivalent
    # call in the torch 1.4 pinned by environment.yaml
    b_inv, _ = torch.solve(eye, b_mat)
    return b_inv
def project_to_camera(points, transform):
''' Projects points to the camera plane.
Args:
points (tensor): points tensor
transform (tensor): transformation matrices
'''
p_camera = transform_points(points, transform)
p_camera = p_camera[..., :2] / p_camera[..., 2:]
return p_camera
def fix_Rt_camera(Rt, loc, scale):
''' Fixes Rt camera matrix.
Args:
Rt (tensor): Rt camera matrix
loc (tensor): location
scale (float): scale
'''
# Rt is B x 3 x 4
# loc is B x 3 and scale is B
batch_size = Rt.size(0)
R = Rt[:, :, :3]
t = Rt[:, :, 3:]
scale = scale.view(batch_size, 1, 1)
R_new = R * scale
t_new = t + R @ loc.unsqueeze(2)
Rt_new = torch.cat([R_new, t_new], dim=2)
assert(Rt_new.size() == (batch_size, 3, 4))
return Rt_new
def normalize_coordinate(p, padding=0.1, plane='xz'):
''' Normalize coordinate to [0, 1] for unit cube experiments
Args:
p (tensor): point
        padding (float): conventional padding parameter of ONet for unit cube, so [-0.5, 0.5] -> [-0.55, 0.55]
plane (str): plane feature type, ['xz', 'xy', 'yz']
'''
if plane == 'xz':
xy = p[:, :, [0, 2]]
elif plane =='xy':
xy = p[:, :, [0, 1]]
else:
xy = p[:, :, [1, 2]]
xy_new = xy / (1 + padding + 10e-6) # (-0.5, 0.5)
xy_new = xy_new + 0.5 # range (0, 1)
    # if there are outliers out of the range
if xy_new.max() >= 1:
xy_new[xy_new >= 1] = 1 - 10e-6
if xy_new.min() < 0:
xy_new[xy_new < 0] = 0.0
return xy_new
def normalize_3d_coordinate(p, padding=0.1):
''' Normalize coordinate to [0, 1] for unit cube experiments.
Corresponds to our 3D model
Args:
p (tensor): point
        padding (float): conventional padding parameter of ONet for unit cube, so [-0.5, 0.5] -> [-0.55, 0.55]
'''
p_nor = p / (1 + padding + 10e-4) # (-0.5, 0.5)
p_nor = p_nor + 0.5 # range (0, 1)
    # if there are outliers out of the range
if p_nor.max() >= 1:
p_nor[p_nor >= 1] = 1 - 10e-4
if p_nor.min() < 0:
p_nor[p_nor < 0] = 0.0
return p_nor
def normalize_coord(p, vol_range, plane='xz'):
''' Normalize coordinate to [0, 1] for sliding-window experiments
Args:
p (tensor): point
vol_range (numpy array): volume boundary
plane (str): feature type, ['xz', 'xy', 'yz'] - canonical planes; ['grid'] - grid volume
'''
p[:, 0] = (p[:, 0] - vol_range[0][0]) / (vol_range[1][0] - vol_range[0][0])
p[:, 1] = (p[:, 1] - vol_range[0][1]) / (vol_range[1][1] - vol_range[0][1])
p[:, 2] = (p[:, 2] - vol_range[0][2]) / (vol_range[1][2] - vol_range[0][2])
if plane == 'xz':
x = p[:, [0, 2]]
elif plane =='xy':
x = p[:, [0, 1]]
elif plane =='yz':
x = p[:, [1, 2]]
else:
x = p
return x
def coordinate2index(x, reso, coord_type='2d'):
    ''' Converts normalized coordinates in [0, 1] to flattened plane/grid indices.
Args:
x (tensor): coordinate
reso (int): defined resolution
coord_type (str): coordinate type
'''
x = (x * reso).long()
if coord_type == '2d': # plane
index = x[:, :, 0] + reso * x[:, :, 1]
elif coord_type == '3d': # grid
index = x[:, :, 0] + reso * (x[:, :, 1] + reso * x[:, :, 2])
index = index[:, None, :]
return index
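# Worked example (illustrative): with reso=64 and a normalized 2D point (0.5, 0.25),
# index = floor(0.5*64) + 64*floor(0.25*64) = 32 + 64*16 = 1056.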
def coord2index(p, vol_range, reso=None, plane='xz'):
    ''' Normalizes points to [0, 1] within the volume and converts them to flattened indices (sliding-window experiments).
Args:
p (tensor): points
vol_range (numpy array): volume boundary
reso (int): defined resolution
plane (str): feature type, ['xz', 'xy', 'yz'] - canonical planes; ['grid'] - grid volume
'''
# normalize to [0, 1]
x = normalize_coord(p, vol_range, plane=plane)
if isinstance(x, np.ndarray):
x = np.floor(x * reso).astype(int)
else: #* pytorch tensor
x = (x * reso).long()
if x.shape[1] == 2:
index = x[:, 0] + reso * x[:, 1]
index[index > reso**2] = reso**2
elif x.shape[1] == 3:
index = x[:, 0] + reso * (x[:, 1] + reso * x[:, 2])
index[index > reso**3] = reso**3
return index[None]
def update_reso(reso, depth):
''' Update the defined resolution so that UNet can process.
Args:
reso (int): defined resolution
depth (int): U-Net number of layers
'''
base = 2**(int(depth) - 1)
if ~(reso / base).is_integer(): # when this is not integer, U-Net dimension error
for i in range(base):
if ((reso + i) / base).is_integer():
reso = reso + i
break
return reso
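# Worked example (illustrative): update_reso(42, depth=4) uses base 2**(4-1) = 8
# and rounds up to the next multiple of it, returning 48.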
def decide_total_volume_range(query_vol_metric, recep_field, unit_size, unet_depth):
    ''' Decides the input/query volume ranges and the resolution so that the U-Net can process them.
Args:
query_vol_metric (numpy array): query volume size
recep_field (int): defined the receptive field for U-Net
unit_size (float): the defined voxel size
unet_depth (int): U-Net number of layers
'''
reso = query_vol_metric / unit_size + recep_field - 1
reso = update_reso(int(reso), unet_depth) # make sure input reso can be processed by UNet
input_vol_metric = reso * unit_size
p_c = np.array([0.0, 0.0, 0.0]).astype(np.float32)
lb_input_vol, ub_input_vol = p_c - input_vol_metric/2, p_c + input_vol_metric/2
lb_query_vol, ub_query_vol = p_c - query_vol_metric/2, p_c + query_vol_metric/2
input_vol = [lb_input_vol, ub_input_vol]
query_vol = [lb_query_vol, ub_query_vol]
# handle the case when resolution is too large
if reso > 10000:
reso = 1
return input_vol, query_vol, reso
def add_key(base, new, base_name, new_name, device=None):
''' Add new keys to the given input
Args:
base (tensor): inputs
new (tensor): new info for the inputs
base_name (str): name for the input
new_name (str): name for the new info
device (device): pytorch device
'''
if (new is not None) and (isinstance(new, dict)):
if device is not None:
for key in new.keys():
new[key] = new[key].to(device)
base = {base_name: base,
new_name: new}
return base
class map2local(object):
    ''' Maps points into local voxel coordinates, with positional encoding
Args:
s (float): the defined voxel size
pos_encoding (str): method for the positional encoding, linear|sin_cos
'''
def __init__(self, s, pos_encoding='linear'):
super().__init__()
self.s = s
self.pe = positional_encoding(basis_function=pos_encoding)
def __call__(self, p):
        p = torch.remainder(p, self.s) / self.s # always positive
# p = torch.fmod(p, self.s) / self.s # same sign as input p!
p = self.pe(p)
return p
class positional_encoding(object):
''' Positional Encoding (presented in NeRF)
Args:
basis_function (str): basis function
'''
def __init__(self, basis_function='sin_cos'):
super().__init__()
self.func = basis_function
L = 10
freq_bands = 2.**(np.linspace(0, L-1, L))
self.freq_bands = freq_bands * math.pi
def __call__(self, p):
if self.func == 'sin_cos':
out = []
            p = 2.0 * p - 1.0 # change to the range [-1, 1]
for freq in self.freq_bands:
out.append(torch.sin(freq * p))
out.append(torch.cos(freq * p))
p = torch.cat(out, dim=2)
return p
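# Note on shapes (illustrative): with L = 10 frequency bands and 'sin_cos', an
# input of shape (B, N, 3) is lifted to (B, N, 60): one sin and one cos per band
# per coordinate.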
| 11,186 | Python | 29.399456 | 109 | 0.562846 |
NVlabs/ACID/ACID/src/config.py | import yaml
from torchvision import transforms
from src import data
from src import conv_onet
method_dict = {
'conv_onet': conv_onet
}
# General config
def load_config(path, default_path=None):
''' Loads config file.
Args:
path (str): path to config file
default_path (bool): whether to use default path
'''
# Load configuration from file itself
with open(path, 'r') as f:
cfg_special = yaml.load(f)
# Check if we should inherit from a config
inherit_from = cfg_special.get('inherit_from')
# If yes, load this config first as default
# If no, use the default_path
if inherit_from is not None:
cfg = load_config(inherit_from, default_path)
elif default_path is not None:
with open(default_path, 'r') as f:
cfg = yaml.load(f)
else:
cfg = dict()
# Include main configuration
update_recursive(cfg, cfg_special)
return cfg
def update_recursive(dict1, dict2):
''' Update two config dictionaries recursively.
Args:
dict1 (dict): first dictionary to be updated
dict2 (dict): second dictionary which entries should be used
'''
for k, v in dict2.items():
if k not in dict1:
dict1[k] = dict()
if isinstance(v, dict):
update_recursive(dict1[k], v)
else:
dict1[k] = v
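# Worked example (illustrative): with dict1 = {'a': {'b': 1}} and dict2 = {'a': {'c': 2}},
# update_recursive leaves dict1 as {'a': {'b': 1, 'c': 2}} instead of overwriting 'a'.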
# Models
def get_model(cfg, device=None, dataset=None):
''' Returns the model instance.
Args:
cfg (dict): config dictionary
device (device): pytorch device
dataset (dataset): dataset
'''
method = cfg['method']
model = method_dict[method].config.get_model(
cfg, device=device, dataset=dataset)
return model
# Trainer
def get_trainer(model, optimizer, cfg, device):
''' Returns a trainer instance.
Args:
model (nn.Module): the model which is used
optimizer (optimizer): pytorch optimizer
cfg (dict): config dictionary
device (device): pytorch device
'''
method = cfg['method']
trainer = method_dict[method].config.get_trainer(
model, optimizer, cfg, device)
return trainer
# Generator for final mesh extraction
def get_generator(model, cfg, device):
''' Returns a generator instance.
Args:
model (nn.Module): the model which is used
cfg (dict): config dictionary
device (device): pytorch device
'''
method = cfg['method']
generator = method_dict[method].config.get_generator(model, cfg, device)
return generator
| 2,573 | Python | 23.990291 | 76 | 0.624563 |
NVlabs/ACID/ACID/src/checkpoints.py | import os
import urllib
import torch
from torch.utils import model_zoo
class CheckpointIO(object):
''' CheckpointIO class.
It handles saving and loading checkpoints.
Args:
checkpoint_dir (str): path where checkpoints are saved
'''
def __init__(self, checkpoint_dir='./chkpts', **kwargs):
self.module_dict = kwargs
self.checkpoint_dir = checkpoint_dir
if not os.path.exists(checkpoint_dir):
os.makedirs(checkpoint_dir)
def register_modules(self, **kwargs):
''' Registers modules in current module dictionary.
'''
self.module_dict.update(kwargs)
def save(self, filename, **kwargs):
''' Saves the current module dictionary.
Args:
filename (str): name of output file
'''
if not os.path.isabs(filename):
filename = os.path.join(self.checkpoint_dir, filename)
outdict = kwargs
for k, v in self.module_dict.items():
outdict[k] = v.state_dict()
torch.save(outdict, filename)
def load(self, filename):
'''Loads a module dictionary from local file or url.
Args:
filename (str): name of saved module dictionary
'''
if is_url(filename):
return self.load_url(filename)
else:
return self.load_file(filename)
def load_file(self, filename):
'''Loads a module dictionary from file.
Args:
filename (str): name of saved module dictionary
'''
if not os.path.isabs(filename):
filename = os.path.join(self.checkpoint_dir, filename)
if os.path.exists(filename):
print(filename)
print('=> Loading checkpoint from local file...')
state_dict = torch.load(filename)
scalars = self.parse_state_dict(state_dict)
return scalars
        else:
            raise FileNotFoundError(filename)
def load_url(self, url):
'''Load a module dictionary from url.
Args:
url (str): url to saved model
'''
print(url)
print('=> Loading checkpoint from url...')
state_dict = model_zoo.load_url(url, progress=True)
scalars = self.parse_state_dict(state_dict)
return scalars
def parse_state_dict(self, state_dict):
'''Parse state_dict of model and return scalars.
Args:
state_dict (dict): State dict of model
'''
for k, v in self.module_dict.items():
if k in state_dict:
v.load_state_dict(state_dict[k])
else:
print('Warning: Could not find %s in checkpoint!' % k)
scalars = {k: v for k, v in state_dict.items()
if k not in self.module_dict}
return scalars
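# Usage sketch (mirrors how plush_train.py drives this class; model/optimizer are
# assumed to exist in scope):
# checkpoint_io = CheckpointIO(out_dir, model=model, optimizer=optimizer)
# checkpoint_io.save('model.pt', epoch_it=epoch_it, it=it)
# scalars = checkpoint_io.load('model.pt')  # e.g. {'epoch_it': ..., 'it': ...}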
def is_url(url):
scheme = urllib.parse.urlparse(url).scheme
return scheme in ('http', 'https') | 2,962 | Python | 28.63 | 70 | 0.568535 |