Dataset schema (column, dtype, value range or length range):

repo_name        stringclasses  6 values
pr_number        int64          99 to 20.3k
pr_title         stringlengths  8 to 158
pr_description   stringlengths  0 to 6.54k
author           stringlengths  4 to 18
date_created     unknown
date_merged      unknown
previous_commit  stringlengths  40 to 40
pr_commit        stringlengths  40 to 40
query            stringlengths  37 to 6.57k
filepath         stringlengths  8 to 153
before_content   stringlengths  0 to 876M
after_content    stringlengths  0 to 876M
label            int64          -1 to 1
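The schema above fixes one record type per row. A minimal sketch of validating a row against it in plain Python (the field names mirror the columns listed; the checks on hash length and label values follow the stated ranges, and everything else here is illustrative):

```python
# Sketch: validate one dataset row against the schema above.
# Dates are treated as plain strings since their dtype is "unknown".

SCHEMA = {
    "repo_name": str,
    "pr_number": int,
    "pr_title": str,
    "pr_description": str,
    "author": str,
    "date_created": str,
    "date_merged": str,
    "previous_commit": str,
    "pr_commit": str,
    "query": str,
    "filepath": str,
    "before_content": str,
    "after_content": str,
    "label": int,
}

def validate_row(row: dict) -> bool:
    """Check that a row has every schema column with the expected type,
    40-character commit hashes, and a label in {-1, 1}."""
    if set(row) != set(SCHEMA):
        return False
    if not all(isinstance(row[k], t) for k, t in SCHEMA.items()):
        return False
    if len(row["previous_commit"]) != 40 or len(row["pr_commit"]) != 40:
        return False
    return row["label"] in (-1, 1)
```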
tensorflow/graphics
486
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where. -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
copybara-service[bot]
"2021-01-29T04:02:31Z"
"2021-02-07T22:38:58Z"
9d257ad4a72ccf65e4349910b9fff7c0a5648073
f683a9a5794bade30ede447339394e84b44acc0b
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where. -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
./requirements.txt
tensorflow >= 2.2.0 tensorflow-addons >= 0.10.0 tensorflow-datasets >= 2.0.0 absl-py >= 0.6.1 h5py >= 2.10.0 matplotlib >= 2.2.5 numpy >= 1.15.4 psutil >= 5.7.0 scipy >= 1.1.0 tqdm >= 4.45.0 OpenEXR >= 1.3.2 termcolor >= 1.1.0 trimesh >= 2.37.22 # Required by trimesh. networkx
tensorflow >= 2.2.0 tensorflow-addons >= 0.10.0 tensorflow-datasets >= 2.0.0 absl-py >= 0.6.1 h5py >= 2.10.0 matplotlib >= 2.2.5 numpy >= 1.15.4 psutil >= 5.7.0 scipy >= 1.1.0 tqdm >= 4.45.0 OpenEXR >= 1.3.2 termcolor >= 1.1.0 trimesh >= 2.37.22 # Required by trimesh. networkx
-1
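The PR description in the record above lists four one-to-one API renames from TF1 compatibility symbols to their TF2 equivalents. As a sketch, the mapping can be expressed as a dict and applied as a plain textual substitution; a real migration would use AST-aware tooling (such as the official `tf_upgrade_v2` script), so this is only an illustration of the mapping itself:

```python
# The renames listed in the PR description, oldest name -> TF2 name.
RENAMES = {
    "tf.compat.v1.name_scope": "tf.name_scope",
    "tf.compat.v1.where": "tf.where",
    "tf.compat.v1.assert_equal": "tf.debugging.assert_equal",
    "tf.compat.v1.dimension_value": "tf.compat.dimension_value",
}

def migrate_source(src: str) -> str:
    """Apply longest names first so a shorter key never clobbers the
    prefix of a longer one before it is matched."""
    for old in sorted(RENAMES, key=len, reverse=True):
        src = src.replace(old, RENAMES[old])
    return src
```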
tensorflow/graphics
486
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where. -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
copybara-service[bot]
"2021-01-29T04:02:31Z"
"2021-02-07T22:38:58Z"
9d257ad4a72ccf65e4349910b9fff7c0a5648073
f683a9a5794bade30ede447339394e84b44acc0b
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where. -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
./tensorflow_graphics/rendering/voxels/tests/test_helpers.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Test helpers for the voxels module.""" import numpy as np def generate_random_test_voxels_render(): """Generates random test for the voxels rendering functions.""" batch_shape = np.random.randint(1, 3) voxels_shape = np.random.randint(2, 8, size=(3)).tolist() signals_dimension = np.random.randint(2, 4) random_voxels = np.random.uniform(size=[batch_shape] + voxels_shape + [signals_dimension]) return random_voxels def generate_preset_test_voxels_visual_hull_render(): """Generates preset test for the visual hull voxels rendering function.""" voxels = np.array([[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0.3, 0.7, 0.1], [1, 0.1, 0], [0.2, 0.1, 0.8]], [[0.1, 0.9, 0], [0.2, 1, 0.4], [0.3, 0.2, 0]]], [[[0.15, 0.69, 0.57], [0.07, 0.33, 0.55], [0, 0, 0]], [[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]], [[0.17, 0.01, 1.22], [0.2, 1, 0.4], [0.67, 0.94, 0.14]]], [[[0, 0, 0], [0.17, 0.33, 0.55], [0, 0, 0]], [[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]], [[0.1, 0.9, 0], [0.2, 1, 0.4], [0.88, 0.09, 0.45]]], [[[1, 0, 0], [0, 0, 0], [1, 0, 0]], [[0.88, 0.09, 0.5], [0.71, 0.61, 0.4], [0.14, 0, 0.22]], [[0.71, 0.61, 0.45], [0.71, 0.7, 0.43], [0.3, 0.2, 0]]]], [[[[0, 0, 0], [0, 0, 0], [0, 0, 1]], [[0.3, 0.7, 0.1], [0.15, 0.69, 0.5], [0.88, 0.09, 0.45]], [[0.07, 0.33, 0.55], [0.2, 1, 0.4], [0.4, 0.34, 0.43]]], [[[0, 1, 0], [0, 1, 0], [0, 1, 0]], [[0.19, 0.06, 0.24], [1, 0.1, 0], [0.2, 0.1, 0.8]], 
[[0.67, 0.94, 0.14], [0.2, 1, 0.4], [0.15, 0.69, 0.57]]], [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0.74, 0.67, 0.4], [0.64, 0.8, 0.19], [0.9, 0.6, 0.48]], [[0.1, 0.9, 0], [0.02, 0.37, 0.56], [0.62, 0.98, 0.19]]], [[[0.04, 0.87, 0.37], [0, 0, 0], [1, 0, 0]], [[0.3, 0.7, 0.1], [0.24, 0.12, 0.7], [0.76, 0.64, 0.79]], [[0.7, 0.2, 0.2], [0.4, 1, 0.9], [0.19, 0.66, 0.03]]]]]) images = np.array([[[[0, 0, 0], [0.77686984, 0.59343034, 0.59343034], [0.45118836, 0.87754357, 0.32967995]], [[0.19748120, 0.63940506, 0.67372021], [0.91107838, 0.73286470, 0.57683792], [0.64654532, 0.85772593, 0.82795514]], [[0.15633518, 0.28107627, 0.42305019], [0.91107838, 0.73286470, 0.57683792], [0.69272126, 0.86330457, 0.57258507]], [[0.86466472, 0, 0], [0.82271559, 0.50341470, 0.67372021], [0.82093385, 0.77909002, 0.58521709]]], [[[0, 0, 0.63212055], [0.73552274, 0.77236231, 0.65006225], [0.48829142, 0.81175293, 0.74842145]], [[0, 0.950212931, 0], [0.75092470, 0.22894841, 0.64654532], [0.63940506, 0.92792154, 0.67044104]], [[0, 0, 0], [0.89771579, 0.87381422, 0.65699148], [0.52288608, 0.89460078, 0.52763345]], [[0.64654532, 0.58104845, 0.30926567], [0.72746821, 0.76776373, 0.79607439], [0.724729, 0.844327, 0.676967]]]]) # pyformat: disable return voxels, images def generate_preset_test_voxels_absorption_render(): """Generates preset test for the absorption voxels rendering function.""" voxels = np.array([[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0.3, 0.7, 0.1], [1, 0.1, 0], [0.2, 0.1, 0.8]], [[0.1, 0.9, 0], [0.2, 1, 0.4], [0.3, 0.2, 0]]], [[[0.15, 0.69, 0.57], [0.07, 0.33, 0.55], [0, 0, 0]], [[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]], [[0.17, 0.01, 1.22], [0.2, 1, 0.4], [0.67, 0.94, 0.14]]], [[[0, 0, 0], [0.17, 0.33, 0.55], [0, 0, 0]], [[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]], [[0.1, 0.9, 0], [0.2, 1, 0.4], [0.88, 0.09, 0.45]]], [[[1, 0, 0], [0, 0, 0], [1, 0, 0]], [[0.88, 0.09, 0.5], [0.71, 0.61, 0.4], [0.14, 0, 0.22]], [[0.71, 0.61, 0.45], [0.71, 0.7, 0.43], [0.3, 0.2, 
0]]]], [[[[0, 0, 0], [0, 0, 0], [0, 0, 1]], [[0.3, 0.7, 0.1], [0.15, 0.69, 0.5], [0.88, 0.09, 0.45]], [[0.07, 0.33, 0.55], [0.2, 1, 0.4], [0.4, 0.34, 0.43]]], [[[0, 1, 0], [0, 1, 0], [0, 1, 0]], [[0.19, 0.06, 0.24], [1, 0.1, 0], [0.2, 0.1, 0.8]], [[0.67, 0.94, 0.14], [0.2, 1, 0.4], [0.15, 0.69, 0.57]]], [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0.74, 0.67, 0.4], [0.64, 0.8, 0.19], [0.9, 0.6, 0.48]], [[0.1, 0.9, 0], [0.02, 0.37, 0.56], [0.62, 0.98, 0.19]]], [[[0.04, 0.87, 0.37], [0, 0, 0], [1, 0, 0]], [[0.3, 0.7, 0.1], [0.24, 0.12, 0.7], [0.76, 0.64, 0.79]], [[0.7, 0.2, 0.2], [0.4, 1, 0.9], [0.19, 0.66, 0.03]]]]]) images = np.array([[[[0, 0, 0], [0.6175, 0.413375, 0.43], [0.27325, 0.7525, 0.2]], [[0.107375, 0.453075, 0.481625], [0.7919875, 0.54112625, 0.383775], [0.4523725, 0.736325, 0.70984]], [[0.085, 0.165, 0.275], [0.7919875, 0.54112625, 0.383775], [0.5212, 0.737375, 0.38]], [[0.75, 0, 0], [0.664084, 0.336275, 0.466], [0.64637875, 0.593425, 0.391625]]], [[[0, 0, 0.5], [0.5597, 0.59340875, 0.4478125], [0.3052, 0.653475, 0.5447]], [[0, 0.875, 0], [0.59275, 0.124575, 0.472], [0.4463875, 0.826425, 0.46804]], [[0, 0, 0], [0.76438, 0.7207, 0.44976], [0.351055, 0.7713925, 0.3484]], [[0.51, 0.435, 0.185], [0.53624, 0.58452, 0.6264125], [0.5294, 0.6985, 0.512425]]]]) # pyformat: disable return voxels, images def generate_preset_test_voxels_emission_absorption_render(): """Generates preset test for the emission absorption voxels rendering function.""" voxels = np.array([[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0.3, 0.7, 0.1], [1, 0.1, 0], [0.2, 0.1, 0.8]], [[0.1, 0.9, 0], [0.2, 1, 0.4], [0.3, 0.2, 0]]], [[[0.15, 0.69, 0.57], [0.07, 0.33, 0.55], [0, 0, 0]], [[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]], [[0.17, 0.01, 1.22], [0.2, 1, 0.4], [0.67, 0.94, 0.14]]], [[[0, 0, 0], [0.17, 0.33, 0.55], [0, 0, 0]], [[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]], [[0.1, 0.9, 0], [0.2, 1, 0.4], [0.88, 0.09, 0.45]]], [[[1, 0, 0], [0, 0, 0], [1, 0, 0]], [[0.88, 0.09, 0.5], 
[0.71, 0.61, 0.4], [0.14, 0, 0.22]], [[0.71, 0.61, 0.45], [0.71, 0.7, 0.43], [0.3, 0.2, 0]]]], [[[[0, 0, 0], [0, 0, 0], [0, 0, 1]], [[0.3, 0.7, 0.1], [0.15, 0.69, 0.5], [0.88, 0.09, 0.45]], [[0.07, 0.33, 0.55], [0.2, 1, 0.4], [0.4, 0.34, 0.43]]], [[[0, 1, 0], [0, 1, 0], [0, 1, 0]], [[0.19, 0.06, 0.24], [1, 0.1, 0], [0.2, 0.1, 0.8]], [[0.67, 0.94, 0.14], [0.2, 1, 0.4], [0.15, 0.69, 0.57]]], [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0.74, 0.67, 0.4], [0.64, 0.8, 0.19], [0.9, 0.6, 0.48]], [[0.1, 0.9, 0], [0.02, 0.37, 0.56], [0.62, 0.98, 0.19]]], [[[0.04, 0.87, 0.37], [0, 0, 0], [1, 0, 0]], [[0.3, 0.7, 0.1], [0.24, 0.12, 0.7], [0.76, 0.64, 0.79]], [[0.7, 0.2, 0.2], [0.4, 1, 0.9], [0.19, 0.66, 0.03]]]]]) images = np.array([[[[0, 0, 0], [0.19553845, 0.27123076, 0.82], [0.08, 0.39999998, 0.4]], [[0.10144142, 0.46858389, 0.8065], [0.47932099, 0.41181099, 0.6751], [0.22078022, 0.23262935, 1.11352]], [[0.0935, 0.18149999, 0.55], [0.47932099, 0.41181099, 0.6751], [0.30814825, 0.43694864, 0.67]], [[0, 0, 0], [0.5677705, 0.17392569, 0.766], [0.48741499, 0.44055107, 0.6865]]], [[[0, 0, 1], [0.28019208, 0.40287539, 0.7525], [0.13121746, 0.42573205, 0.8461]], [[0, 0, 0], [0.16451199, 0.064448, 0.848], [0.24191167, 0.69841443, 0.77812]], [[0, 0, 0], [0.56974806, 0.50646416, 0.74728], [0.09611898, 0.32276643, 0.6436]], [[0.0148, 0.32189999, 0.37], [0.3099809, 0.33312645, 0.9433], [0.55598098, 0.41542985, 0.9224]]]]) # pyformat: disable return voxels, images
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Test helpers for the voxels module.""" import numpy as np def generate_random_test_voxels_render(): """Generates random test for the voxels rendering functions.""" batch_shape = np.random.randint(1, 3) voxels_shape = np.random.randint(2, 8, size=(3)).tolist() signals_dimension = np.random.randint(2, 4) random_voxels = np.random.uniform(size=[batch_shape] + voxels_shape + [signals_dimension]) return random_voxels def generate_preset_test_voxels_visual_hull_render(): """Generates preset test for the visual hull voxels rendering function.""" voxels = np.array([[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0.3, 0.7, 0.1], [1, 0.1, 0], [0.2, 0.1, 0.8]], [[0.1, 0.9, 0], [0.2, 1, 0.4], [0.3, 0.2, 0]]], [[[0.15, 0.69, 0.57], [0.07, 0.33, 0.55], [0, 0, 0]], [[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]], [[0.17, 0.01, 1.22], [0.2, 1, 0.4], [0.67, 0.94, 0.14]]], [[[0, 0, 0], [0.17, 0.33, 0.55], [0, 0, 0]], [[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]], [[0.1, 0.9, 0], [0.2, 1, 0.4], [0.88, 0.09, 0.45]]], [[[1, 0, 0], [0, 0, 0], [1, 0, 0]], [[0.88, 0.09, 0.5], [0.71, 0.61, 0.4], [0.14, 0, 0.22]], [[0.71, 0.61, 0.45], [0.71, 0.7, 0.43], [0.3, 0.2, 0]]]], [[[[0, 0, 0], [0, 0, 0], [0, 0, 1]], [[0.3, 0.7, 0.1], [0.15, 0.69, 0.5], [0.88, 0.09, 0.45]], [[0.07, 0.33, 0.55], [0.2, 1, 0.4], [0.4, 0.34, 0.43]]], [[[0, 1, 0], [0, 1, 0], [0, 1, 0]], [[0.19, 0.06, 0.24], [1, 0.1, 0], [0.2, 0.1, 0.8]], 
[[0.67, 0.94, 0.14], [0.2, 1, 0.4], [0.15, 0.69, 0.57]]], [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0.74, 0.67, 0.4], [0.64, 0.8, 0.19], [0.9, 0.6, 0.48]], [[0.1, 0.9, 0], [0.02, 0.37, 0.56], [0.62, 0.98, 0.19]]], [[[0.04, 0.87, 0.37], [0, 0, 0], [1, 0, 0]], [[0.3, 0.7, 0.1], [0.24, 0.12, 0.7], [0.76, 0.64, 0.79]], [[0.7, 0.2, 0.2], [0.4, 1, 0.9], [0.19, 0.66, 0.03]]]]]) images = np.array([[[[0, 0, 0], [0.77686984, 0.59343034, 0.59343034], [0.45118836, 0.87754357, 0.32967995]], [[0.19748120, 0.63940506, 0.67372021], [0.91107838, 0.73286470, 0.57683792], [0.64654532, 0.85772593, 0.82795514]], [[0.15633518, 0.28107627, 0.42305019], [0.91107838, 0.73286470, 0.57683792], [0.69272126, 0.86330457, 0.57258507]], [[0.86466472, 0, 0], [0.82271559, 0.50341470, 0.67372021], [0.82093385, 0.77909002, 0.58521709]]], [[[0, 0, 0.63212055], [0.73552274, 0.77236231, 0.65006225], [0.48829142, 0.81175293, 0.74842145]], [[0, 0.950212931, 0], [0.75092470, 0.22894841, 0.64654532], [0.63940506, 0.92792154, 0.67044104]], [[0, 0, 0], [0.89771579, 0.87381422, 0.65699148], [0.52288608, 0.89460078, 0.52763345]], [[0.64654532, 0.58104845, 0.30926567], [0.72746821, 0.76776373, 0.79607439], [0.724729, 0.844327, 0.676967]]]]) # pyformat: disable return voxels, images def generate_preset_test_voxels_absorption_render(): """Generates preset test for the absorption voxels rendering function.""" voxels = np.array([[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0.3, 0.7, 0.1], [1, 0.1, 0], [0.2, 0.1, 0.8]], [[0.1, 0.9, 0], [0.2, 1, 0.4], [0.3, 0.2, 0]]], [[[0.15, 0.69, 0.57], [0.07, 0.33, 0.55], [0, 0, 0]], [[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]], [[0.17, 0.01, 1.22], [0.2, 1, 0.4], [0.67, 0.94, 0.14]]], [[[0, 0, 0], [0.17, 0.33, 0.55], [0, 0, 0]], [[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]], [[0.1, 0.9, 0], [0.2, 1, 0.4], [0.88, 0.09, 0.45]]], [[[1, 0, 0], [0, 0, 0], [1, 0, 0]], [[0.88, 0.09, 0.5], [0.71, 0.61, 0.4], [0.14, 0, 0.22]], [[0.71, 0.61, 0.45], [0.71, 0.7, 0.43], [0.3, 0.2, 
0]]]], [[[[0, 0, 0], [0, 0, 0], [0, 0, 1]], [[0.3, 0.7, 0.1], [0.15, 0.69, 0.5], [0.88, 0.09, 0.45]], [[0.07, 0.33, 0.55], [0.2, 1, 0.4], [0.4, 0.34, 0.43]]], [[[0, 1, 0], [0, 1, 0], [0, 1, 0]], [[0.19, 0.06, 0.24], [1, 0.1, 0], [0.2, 0.1, 0.8]], [[0.67, 0.94, 0.14], [0.2, 1, 0.4], [0.15, 0.69, 0.57]]], [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0.74, 0.67, 0.4], [0.64, 0.8, 0.19], [0.9, 0.6, 0.48]], [[0.1, 0.9, 0], [0.02, 0.37, 0.56], [0.62, 0.98, 0.19]]], [[[0.04, 0.87, 0.37], [0, 0, 0], [1, 0, 0]], [[0.3, 0.7, 0.1], [0.24, 0.12, 0.7], [0.76, 0.64, 0.79]], [[0.7, 0.2, 0.2], [0.4, 1, 0.9], [0.19, 0.66, 0.03]]]]]) images = np.array([[[[0, 0, 0], [0.6175, 0.413375, 0.43], [0.27325, 0.7525, 0.2]], [[0.107375, 0.453075, 0.481625], [0.7919875, 0.54112625, 0.383775], [0.4523725, 0.736325, 0.70984]], [[0.085, 0.165, 0.275], [0.7919875, 0.54112625, 0.383775], [0.5212, 0.737375, 0.38]], [[0.75, 0, 0], [0.664084, 0.336275, 0.466], [0.64637875, 0.593425, 0.391625]]], [[[0, 0, 0.5], [0.5597, 0.59340875, 0.4478125], [0.3052, 0.653475, 0.5447]], [[0, 0.875, 0], [0.59275, 0.124575, 0.472], [0.4463875, 0.826425, 0.46804]], [[0, 0, 0], [0.76438, 0.7207, 0.44976], [0.351055, 0.7713925, 0.3484]], [[0.51, 0.435, 0.185], [0.53624, 0.58452, 0.6264125], [0.5294, 0.6985, 0.512425]]]]) # pyformat: disable return voxels, images def generate_preset_test_voxels_emission_absorption_render(): """Generates preset test for the emission absorption voxels rendering function.""" voxels = np.array([[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0.3, 0.7, 0.1], [1, 0.1, 0], [0.2, 0.1, 0.8]], [[0.1, 0.9, 0], [0.2, 1, 0.4], [0.3, 0.2, 0]]], [[[0.15, 0.69, 0.57], [0.07, 0.33, 0.55], [0, 0, 0]], [[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]], [[0.17, 0.01, 1.22], [0.2, 1, 0.4], [0.67, 0.94, 0.14]]], [[[0, 0, 0], [0.17, 0.33, 0.55], [0, 0, 0]], [[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]], [[0.1, 0.9, 0], [0.2, 1, 0.4], [0.88, 0.09, 0.45]]], [[[1, 0, 0], [0, 0, 0], [1, 0, 0]], [[0.88, 0.09, 0.5], 
[0.71, 0.61, 0.4], [0.14, 0, 0.22]], [[0.71, 0.61, 0.45], [0.71, 0.7, 0.43], [0.3, 0.2, 0]]]], [[[[0, 0, 0], [0, 0, 0], [0, 0, 1]], [[0.3, 0.7, 0.1], [0.15, 0.69, 0.5], [0.88, 0.09, 0.45]], [[0.07, 0.33, 0.55], [0.2, 1, 0.4], [0.4, 0.34, 0.43]]], [[[0, 1, 0], [0, 1, 0], [0, 1, 0]], [[0.19, 0.06, 0.24], [1, 0.1, 0], [0.2, 0.1, 0.8]], [[0.67, 0.94, 0.14], [0.2, 1, 0.4], [0.15, 0.69, 0.57]]], [[[0, 0, 0], [0, 0, 0], [0, 0, 0]], [[0.74, 0.67, 0.4], [0.64, 0.8, 0.19], [0.9, 0.6, 0.48]], [[0.1, 0.9, 0], [0.02, 0.37, 0.56], [0.62, 0.98, 0.19]]], [[[0.04, 0.87, 0.37], [0, 0, 0], [1, 0, 0]], [[0.3, 0.7, 0.1], [0.24, 0.12, 0.7], [0.76, 0.64, 0.79]], [[0.7, 0.2, 0.2], [0.4, 1, 0.9], [0.19, 0.66, 0.03]]]]]) images = np.array([[[[0, 0, 0], [0.19553845, 0.27123076, 0.82], [0.08, 0.39999998, 0.4]], [[0.10144142, 0.46858389, 0.8065], [0.47932099, 0.41181099, 0.6751], [0.22078022, 0.23262935, 1.11352]], [[0.0935, 0.18149999, 0.55], [0.47932099, 0.41181099, 0.6751], [0.30814825, 0.43694864, 0.67]], [[0, 0, 0], [0.5677705, 0.17392569, 0.766], [0.48741499, 0.44055107, 0.6865]]], [[[0, 0, 1], [0.28019208, 0.40287539, 0.7525], [0.13121746, 0.42573205, 0.8461]], [[0, 0, 0], [0.16451199, 0.064448, 0.848], [0.24191167, 0.69841443, 0.77812]], [[0, 0, 0], [0.56974806, 0.50646416, 0.74728], [0.09611898, 0.32276643, 0.6436]], [[0.0148, 0.32189999, 0.37], [0.3099809, 0.33312645, 0.9433], [0.55598098, 0.41542985, 0.9224]]]]) # pyformat: disable return voxels, images
-1
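The `generate_random_test_voxels_render` helper recorded in the file above builds a voxel tensor of shape `[batch, d0, d1, d2, channels]`. Since `np.random.randint(low, high)` samples from the half-open interval `[low, high)`, the batch size is 1 or 2, each spatial dimension is 2 to 7, and the signal dimension is 2 or 3. A small re-derivation of that shape logic (using the modern `Generator` API rather than the legacy `np.random` functions in the recorded file):

```python
import numpy as np

def random_voxels_shape(rng: np.random.Generator) -> list:
    """Mirror the shape sampling in generate_random_test_voxels_render:
    integers(low, high) also samples from [low, high)."""
    batch = int(rng.integers(1, 3))            # 1 or 2
    spatial = rng.integers(2, 8, size=3).tolist()  # each in 2..7
    channels = int(rng.integers(2, 4))         # 2 or 3
    return [batch] + spatial + [channels]

# Uniform noise in [0, 1), matching np.random.uniform in the helper.
shape = random_voxels_shape(np.random.default_rng(0))
voxels = np.random.uniform(size=shape)
```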
tensorflow/graphics
486
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where. -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
copybara-service[bot]
"2021-01-29T04:02:31Z"
"2021-02-07T22:38:58Z"
9d257ad4a72ccf65e4349910b9fff7c0a5648073
f683a9a5794bade30ede447339394e84b44acc0b
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where. -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
./tensorflow_graphics/projects/local_implicit_grid/core/reconstruction.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Utility modules for reconstructing scenes. """ import os import numpy as np from skimage import measure import tensorflow.compat.v1 as tf from tensorflow_graphics.projects.local_implicit_grid.core import evaluator from tensorflow_graphics.projects.local_implicit_grid.core import local_implicit_grid_layer as lig from tensorflow_graphics.projects.local_implicit_grid.core import point_utils as pt class LIGOptimizer(object): """Class for using optimization to acquire feature grid.""" def __init__(self, ckpt, origin, grid_shape, part_size, occ_idx, indep_pt_loss=True, overlap=True, alpha_lat=1e-2, npts=2048, init_std=1e-2, learning_rate=1e-3, var_prefix='', nows=False): self.ckpt = ckpt self.ckpt_dir = os.path.dirname(ckpt) self.params = self._load_params(self.ckpt_dir) self.origin = origin self.grid_shape = grid_shape self.part_size = part_size self.occ_idx = occ_idx self.init_std = init_std self.learning_rate = learning_rate self.var_prefix = var_prefix self.nows = nows self.xmin = self.origin if overlap: true_shape = (np.array(grid_shape) - 1) / 2.0 self.xmax = self.origin + true_shape * part_size else: self.xmax = self.origin + (np.array(grid_shape) - 1) * part_size _, sj, sk = self.grid_shape self.occ_idx_flat = (self.occ_idx[:, 0]*(sj*sk)+ self.occ_idx[:, 1]*sk+self.occ_idx[:, 2]) self.indep_pt_loss = indep_pt_loss self.overlap = overlap self.alpha_lat = alpha_lat 
self.npts = int(npts) self._init_graph() def _load_params(self, ckpt_dir): param_file = os.path.join(ckpt_dir, 'params.txt') params = evaluator.parse_param_file(param_file) return params def _init_graph(self): """Initialize computation graph for tensorflow. """ self.graph = tf.Graph() with self.graph.as_default(): self.point_coords_ph = tf.placeholder( tf.float32, shape=[1, self.npts, 3]) # placeholder self.point_values_ph = tf.placeholder( tf.float32, shape=[1, self.npts, 1]) # placeholder self.point_coords = self.point_coords_ph self.point_values = self.point_values_ph self.liggrid = lig.LocalImplicitGrid( size=self.grid_shape, in_features=self.params['codelen'], out_features=1, num_filters=self.params['refiner_nf'], net_type='imnet', method='linear' if self.overlap else 'nn', x_location_max=(1.0 if self.overlap else 2.0), name='lig', interp=(not self.indep_pt_loss), min_grid_value=self.xmin, max_grid_value=self.xmax) si, sj, sk = self.grid_shape self.occ_idx_flat_ = tf.convert_to_tensor( self.occ_idx_flat[:, np.newaxis]) self.shape_ = tf.constant([si*sj*sk, self.params['codelen']], dtype=tf.int64) self.feat_sparse_ = tf.Variable( (tf.random.normal(shape=[self.occ_idx.shape[0], self.params['codelen']]) * self.init_std), trainable=True, name='feat_sparse') self.feat_grid = tf.scatter_nd(self.occ_idx_flat_, self.feat_sparse_, self.shape_) self.feat_grid = tf.reshape(self.feat_grid, [1, si, sj, sk, self.params['codelen']]) self.feat_norm = tf.norm(self.feat_sparse_, axis=-1) if self.indep_pt_loss: self.preds, self.weights = self.liggrid(self.feat_grid, self.point_coords, training=True) # preds: [b, n, 8, 1], weights: [b, n, 8] self.preds_interp = tf.reduce_sum( tf.expand_dims(self.weights, axis=-1)*self.preds, axis=2) # [b, n, 1] self.preds = tf.concat([self.preds, self.preds_interp[:, :, tf.newaxis, :]], axis=2) # preds: [b, n, 9, 1] self.point_values = tf.broadcast_to( self.point_values[:, :, tf.newaxis, :], shape=self.preds.shape) # [b, n, 9, 1] else: self.preds 
= self.liggrid(self.feat_grid, self.point_coords, training=True) # [b, n, 1] self.labels_01 = (self.point_values+1) / 2 # turn labels to 0, 1 labels self.loss_pt = tf.losses.sigmoid_cross_entropy( self.labels_01, logits=self.preds, reduction=tf.losses.Reduction.NONE) self.loss_lat = tf.reduce_mean(self.feat_norm) * self.alpha_lat self.loss = tf.reduce_mean(self.loss_pt) + self.loss_lat # compute accuracy metric if self.indep_pt_loss: self.pvalue = tf.sign(self.point_values[:, :, -1, 0]) self.ppred = tf.sign(self.preds[:, :, -1, 0]) else: self.pvalue = tf.sign(self.point_values[..., 0]) self.ppred = tf.sign(self.preds[:, :, 0]) self.accu = tf.reduce_sum(tf.cast( tf.logical_or(tf.logical_and(self.pvalue > 0, self.ppred > 0), tf.logical_and(self.pvalue < 0, self.ppred < 0)), tf.float32)) / float(self.npts) # get optimizer self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate) self.fgrid_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='feat_sparse') self.train_op = self.optimizer.minimize( self.loss, global_step=tf.train.get_or_create_global_step(), var_list=[self.fgrid_vars]) self.map_dict = self._get_var_mapping(model=self.liggrid, scope=self.var_prefix) self.sess = tf.Session() if not self.nows: self.saver = tf.train.Saver(self.map_dict) self.saver.restore(self.sess, self.ckpt) self._initialize_uninitialized(self.sess) def _get_var_mapping(self, model, scope=''): vars_ = model.trainable_variables varnames = [v.name for v in vars_] # .split(':')[0] varnames = [scope+v.replace('lig/', '').strip(':0') for v in varnames] map_dict = dict(zip(varnames, vars_)) return map_dict def _initialize_uninitialized(self, sess): global_vars = tf.global_variables() is_not_initialized = sess.run( [tf.is_variable_initialized(var) for var in global_vars]) not_initialized_vars = [v for (v, f) in zip(global_vars, is_not_initialized) if not f] if not_initialized_vars: sess.run(tf.variables_initializer(not_initialized_vars)) def optimize_feat_grid(self, 
point_coords, point_vals, steps=10000, print_every_n_steps=1000): """Optimize feature grid. Args: point_coords: [npts, 3] point coordinates. point_vals: [npts, 1] point values. steps: int, number of steps for gradient descent. print_every_n_steps: int, print every n steps. Returns: """ print_every_n_steps = int(print_every_n_steps) point_coords = point_coords.copy() point_vals = np.sign(point_vals.copy()) if point_coords.ndim == 3: point_coords = point_coords[0] if point_vals.ndim == 3: point_vals = point_vals[0] elif point_vals.ndim == 1: point_vals = point_vals[:, np.newaxis] # clip point_coords = np.clip(point_coords, self.xmin, self.xmax) # shuffle points seq = np.random.permutation(point_coords.shape[0]) point_coords = point_coords[seq] point_vals = point_vals[seq] point_coords = point_coords[np.newaxis] point_vals = point_vals[np.newaxis] # random point sampling function def random_point_sample(): sid = np.random.choice(point_coords.shape[1]-self.npts+1) eid = sid + self.npts return point_coords[:, sid:eid], point_vals[:, sid:eid] with self.graph.as_default(): for i in range(steps): pc, pv = random_point_sample() accu_, loss_, _ = self.sess.run([self.accu, self.loss, self.train_op], feed_dict={ self.point_coords_ph: pc, self.point_values_ph: pv}) if i % print_every_n_steps == 0: print('Step [{:6d}] Accu: {:5.4f} Loss: {:5.4f}'.format(i, accu_, loss_)) @property def feature_grid(self): with self.graph.as_default(): return self.sess.run(self.feat_grid) def occupancy_sparse_to_dense(occ_idx, grid_shape): dense = np.zeros(grid_shape, dtype=np.bool).ravel() occ_idx_f = (occ_idx[:, 0] * grid_shape[1] * grid_shape[2] + occ_idx[:, 1] * grid_shape[2] + occ_idx[:, 2]) dense[occ_idx_f] = True dense = np.reshape(dense, grid_shape) return dense def get_in_out_from_samples(mesh, npoints, sample_factor=10, std=0.01): """Get in/out point samples from a given mesh. Args: mesh: trimesh mesh. Original mesh to sample points from. 
npoints: int, number of points to sample on the mesh surface. sample_factor: int, number of samples to pick per surface point. std: float, std of samples to generate. Returns: surface_samples: [npoints, 6], where first 3 dims are xyz, last 3 dims are normals (nx, ny, nz). """ surface_point_samples, fid = mesh.sample(int(npoints), return_index=True) surface_point_normals = mesh.face_normals[fid] offsets = np.random.randn(int(npoints), sample_factor, 1) * std near_surface_samples = (surface_point_samples[:, np.newaxis, :] + surface_point_normals[:, np.newaxis, :] * offsets) near_surface_samples = np.concatenate([near_surface_samples, offsets], axis=-1) near_surface_samples = near_surface_samples.reshape([-1, 4]) surface_samples = np.concatenate([surface_point_samples, surface_point_normals], axis=-1) return surface_samples, near_surface_samples def get_in_out_from_ray(points_from_ray, sample_factor=10, std=0.01): """Get sample points from points from ray. Args: points_from_ray: [npts, 6], where first 3 dims are xyz, last 3 are ray dir. sample_factor: int, number of samples to pick per surface point. std: float, std of samples to generate. Returns: near_surface_samples: [npts*sample_factor, 4], where last dimension is distance to surface point. 
""" surface_point_samples = points_from_ray[:, :3] surface_point_normals = points_from_ray[:, 3:] # make sure normals are normalized to unit length n = surface_point_normals surface_point_normals = n / (np.linalg.norm(n, axis=1, keepdims=True)+1e-8) npoints = points_from_ray.shape[0] offsets = np.random.randn(npoints, sample_factor, 1) * std near_surface_samples = (surface_point_samples[:, np.newaxis, :] + surface_point_normals[:, np.newaxis, :] * offsets) near_surface_samples = np.concatenate([near_surface_samples, offsets], axis=-1) near_surface_samples = near_surface_samples.reshape([-1, 4]) return near_surface_samples def intrinsics_from_matrix(int_mat): return (int_mat[0, 0], int_mat[1, 1], int_mat[0, 2], int_mat[1, 2]) def encode_decoder_one_scene(near_surface_samples, ckpt_dir, part_size, overlap, indep_pt_loss, xmin=np.zeros(3), xmax=np.ones(3), res_per_part=16, npts=4096, init_std=1e-4, learning_rate=1e-3, steps=10000, nows=False, verbose=False): """Wrapper function for encoding and decoding one scene. Args: near_surface_samples: [npts*sample_factor, 4], where last dimension is distance to surface point. ckpt_dir: str, path to checkpoint directory to use. part_size: float, size of each part to use when autodecoding. overlap: bool, whether to use overlapping encoding. indep_pt_loss: bool, whether to use independent point loss in optimization. xmin: np.array of len 3, lower coordinates of the domain bounds. xmax: np.array of len 3, upper coordinates of the domain bounds. res_per_part: int, resolution of output evaluation per part. npts: int, number of points to use per step when doing gradient descent. init_std: float, std to use when initializing seed. learning_rate: float, learning rate for doing gradient descent. steps: int, number of optimization steps to take. nows: bool, no warmstarting from checkpoint. use random codebook. verbose: bool, verbose mode. Returns: v: float32 np.array, vertices of reconstructed mesh. 
f: int32 np.array, faces of reconstructed mesh. feat_grid: float32 np.array, feature grid. mask: bool np.array, mask of occupied cells. """ ckpt = tf.train.latest_checkpoint(ckpt_dir) np.random.shuffle(near_surface_samples) param_file = os.path.join(ckpt_dir, 'params.txt') params = evaluator.parse_param_file(param_file) _, occ_idx, grid_shape = pt.np_get_occupied_idx( near_surface_samples[:100000, :3], xmin=xmin-0.5*part_size, xmax=xmax+0.5*part_size, crop_size=part_size, ntarget=1, overlap=overlap, normalize_crops=False, return_shape=True) npts = min(npts, near_surface_samples.shape[0]) if verbose: print('LIG shape: {}'.format(grid_shape)) if verbose: print('Optimizing latent codes in LIG...') goptim = LIGOptimizer( ckpt, origin=xmin, grid_shape=grid_shape, part_size=part_size, occ_idx=occ_idx, indep_pt_loss=indep_pt_loss, overlap=overlap, alpha_lat=params['alpha_lat'], npts=npts, init_std=init_std, learning_rate=learning_rate, var_prefix='', nows=nows) goptim.optimize_feat_grid(near_surface_samples[:, :3], near_surface_samples[:, 3:], steps=steps) mask = occupancy_sparse_to_dense(occ_idx, grid_shape) # evaluate mesh for the current crop if verbose: print('Extracting mesh from LIG...') svg = evaluator.SparseLIGEvaluator( ckpt, num_filters=params['refiner_nf'], codelen=params['codelen'], origin=xmin, grid_shape=grid_shape, part_size=part_size, overlap=overlap, scope='') feat_grid = goptim.feature_grid[0] out_grid = svg.evaluate_feature_grid(feat_grid, mask=mask, res_per_part=res_per_part) v, f, _, _ = measure.marching_cubes_lewiner(out_grid, 0) v *= (part_size / float(res_per_part) * float(out_grid.shape[0]) / (float(out_grid.shape[0])-1)) v += xmin return v, f, feat_grid, mask
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Utility modules for reconstructing scenes."""

import os

import numpy as np
from skimage import measure
import tensorflow.compat.v1 as tf

from tensorflow_graphics.projects.local_implicit_grid.core import evaluator
from tensorflow_graphics.projects.local_implicit_grid.core import local_implicit_grid_layer as lig
from tensorflow_graphics.projects.local_implicit_grid.core import point_utils as pt


class LIGOptimizer(object):
  """Class for using optimization to acquire feature grid."""

  def __init__(self, ckpt, origin, grid_shape, part_size, occ_idx,
               indep_pt_loss=True, overlap=True, alpha_lat=1e-2, npts=2048,
               init_std=1e-2, learning_rate=1e-3, var_prefix='', nows=False):
    self.ckpt = ckpt
    self.ckpt_dir = os.path.dirname(ckpt)
    self.params = self._load_params(self.ckpt_dir)
    self.origin = origin
    self.grid_shape = grid_shape
    self.part_size = part_size
    self.occ_idx = occ_idx
    self.init_std = init_std
    self.learning_rate = learning_rate
    self.var_prefix = var_prefix
    self.nows = nows

    self.xmin = self.origin
    if overlap:
      true_shape = (np.array(grid_shape) - 1) / 2.0
      self.xmax = self.origin + true_shape * part_size
    else:
      self.xmax = self.origin + (np.array(grid_shape) - 1) * part_size
    _, sj, sk = self.grid_shape
    self.occ_idx_flat = (self.occ_idx[:, 0]*(sj*sk) +
                         self.occ_idx[:, 1]*sk + self.occ_idx[:, 2])
    self.indep_pt_loss = indep_pt_loss
    self.overlap = overlap
    self.alpha_lat = alpha_lat
    self.npts = int(npts)

    self._init_graph()

  def _load_params(self, ckpt_dir):
    param_file = os.path.join(ckpt_dir, 'params.txt')
    params = evaluator.parse_param_file(param_file)
    return params

  def _init_graph(self):
    """Initialize computation graph for tensorflow."""
    self.graph = tf.Graph()
    with self.graph.as_default():
      self.point_coords_ph = tf.placeholder(
          tf.float32, shape=[1, self.npts, 3])  # placeholder
      self.point_values_ph = tf.placeholder(
          tf.float32, shape=[1, self.npts, 1])  # placeholder

      self.point_coords = self.point_coords_ph
      self.point_values = self.point_values_ph

      self.liggrid = lig.LocalImplicitGrid(
          size=self.grid_shape,
          in_features=self.params['codelen'],
          out_features=1,
          num_filters=self.params['refiner_nf'],
          net_type='imnet',
          method='linear' if self.overlap else 'nn',
          x_location_max=(1.0 if self.overlap else 2.0),
          name='lig',
          interp=(not self.indep_pt_loss),
          min_grid_value=self.xmin,
          max_grid_value=self.xmax)

      si, sj, sk = self.grid_shape
      self.occ_idx_flat_ = tf.convert_to_tensor(
          self.occ_idx_flat[:, np.newaxis])
      self.shape_ = tf.constant([si*sj*sk, self.params['codelen']],
                                dtype=tf.int64)
      self.feat_sparse_ = tf.Variable(
          (tf.random.normal(shape=[self.occ_idx.shape[0],
                                   self.params['codelen']]) * self.init_std),
          trainable=True, name='feat_sparse')
      self.feat_grid = tf.scatter_nd(self.occ_idx_flat_,
                                     self.feat_sparse_,
                                     self.shape_)
      self.feat_grid = tf.reshape(self.feat_grid,
                                  [1, si, sj, sk, self.params['codelen']])
      self.feat_norm = tf.norm(self.feat_sparse_, axis=-1)

      if self.indep_pt_loss:
        self.preds, self.weights = self.liggrid(
            self.feat_grid, self.point_coords,
            training=True)  # preds: [b, n, 8, 1], weights: [b, n, 8]
        self.preds_interp = tf.reduce_sum(
            tf.expand_dims(self.weights, axis=-1)*self.preds,
            axis=2)  # [b, n, 1]
        self.preds = tf.concat([self.preds,
                                self.preds_interp[:, :, tf.newaxis, :]],
                               axis=2)  # preds: [b, n, 9, 1]
        self.point_values = tf.broadcast_to(
            self.point_values[:, :, tf.newaxis, :],
            shape=self.preds.shape)  # [b, n, 9, 1]
      else:
        self.preds = self.liggrid(self.feat_grid, self.point_coords,
                                  training=True)  # [b, n, 1]

      self.labels_01 = (self.point_values+1) / 2  # turn labels to 0, 1 labels
      self.loss_pt = tf.losses.sigmoid_cross_entropy(
          self.labels_01, logits=self.preds,
          reduction=tf.losses.Reduction.NONE)
      self.loss_lat = tf.reduce_mean(self.feat_norm) * self.alpha_lat
      self.loss = tf.reduce_mean(self.loss_pt) + self.loss_lat

      # compute accuracy metric
      if self.indep_pt_loss:
        self.pvalue = tf.sign(self.point_values[:, :, -1, 0])
        self.ppred = tf.sign(self.preds[:, :, -1, 0])
      else:
        self.pvalue = tf.sign(self.point_values[..., 0])
        self.ppred = tf.sign(self.preds[:, :, 0])
      self.accu = tf.reduce_sum(tf.cast(
          tf.logical_or(tf.logical_and(self.pvalue > 0, self.ppred > 0),
                        tf.logical_and(self.pvalue < 0, self.ppred < 0)),
          tf.float32)) / float(self.npts)

      # get optimizer
      self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate)
      self.fgrid_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
                                          scope='feat_sparse')
      self.train_op = self.optimizer.minimize(
          self.loss, global_step=tf.train.get_or_create_global_step(),
          var_list=[self.fgrid_vars])

      self.map_dict = self._get_var_mapping(model=self.liggrid,
                                            scope=self.var_prefix)
      self.sess = tf.Session()
      if not self.nows:
        self.saver = tf.train.Saver(self.map_dict)
        self.saver.restore(self.sess, self.ckpt)
      self._initialize_uninitialized(self.sess)

  def _get_var_mapping(self, model, scope=''):
    vars_ = model.trainable_variables
    varnames = [v.name for v in vars_]  # .split(':')[0]
    varnames = [scope+v.replace('lig/', '').strip(':0') for v in varnames]
    map_dict = dict(zip(varnames, vars_))
    return map_dict

  def _initialize_uninitialized(self, sess):
    global_vars = tf.global_variables()
    is_not_initialized = sess.run(
        [tf.is_variable_initialized(var) for var in global_vars])
    not_initialized_vars = [v for (v, f) in
                            zip(global_vars, is_not_initialized) if not f]
    if not_initialized_vars:
      sess.run(tf.variables_initializer(not_initialized_vars))

  def optimize_feat_grid(self, point_coords, point_vals, steps=10000,
                         print_every_n_steps=1000):
    """Optimize feature grid.

    Args:
      point_coords: [npts, 3] point coordinates.
      point_vals: [npts, 1] point values.
      steps: int, number of steps for gradient descent.
      print_every_n_steps: int, print every n steps.
    """
    print_every_n_steps = int(print_every_n_steps)
    point_coords = point_coords.copy()
    point_vals = np.sign(point_vals.copy())
    if point_coords.ndim == 3:
      point_coords = point_coords[0]
    if point_vals.ndim == 3:
      point_vals = point_vals[0]
    elif point_vals.ndim == 1:
      point_vals = point_vals[:, np.newaxis]

    # clip
    point_coords = np.clip(point_coords, self.xmin, self.xmax)

    # shuffle points
    seq = np.random.permutation(point_coords.shape[0])
    point_coords = point_coords[seq]
    point_vals = point_vals[seq]
    point_coords = point_coords[np.newaxis]
    point_vals = point_vals[np.newaxis]

    # random point sampling function
    def random_point_sample():
      sid = np.random.choice(point_coords.shape[1]-self.npts+1)
      eid = sid + self.npts
      return point_coords[:, sid:eid], point_vals[:, sid:eid]

    with self.graph.as_default():
      for i in range(steps):
        pc, pv = random_point_sample()
        accu_, loss_, _ = self.sess.run(
            [self.accu, self.loss, self.train_op],
            feed_dict={self.point_coords_ph: pc,
                       self.point_values_ph: pv})
        if i % print_every_n_steps == 0:
          print('Step [{:6d}] Accu: {:5.4f} Loss: {:5.4f}'.format(
              i, accu_, loss_))

  @property
  def feature_grid(self):
    with self.graph.as_default():
      return self.sess.run(self.feat_grid)


def occupancy_sparse_to_dense(occ_idx, grid_shape):
  dense = np.zeros(grid_shape, dtype=np.bool).ravel()
  occ_idx_f = (occ_idx[:, 0] * grid_shape[1] * grid_shape[2] +
               occ_idx[:, 1] * grid_shape[2] + occ_idx[:, 2])
  dense[occ_idx_f] = True
  dense = np.reshape(dense, grid_shape)
  return dense


def get_in_out_from_samples(mesh, npoints, sample_factor=10, std=0.01):
  """Get in/out point samples from a given mesh.

  Args:
    mesh: trimesh mesh. Original mesh to sample points from.
    npoints: int, number of points to sample on the mesh surface.
    sample_factor: int, number of samples to pick per surface point.
    std: float, std of samples to generate.

  Returns:
    surface_samples: [npoints, 6], where first 3 dims are xyz, last 3 dims
    are normals (nx, ny, nz).
  """
  surface_point_samples, fid = mesh.sample(int(npoints), return_index=True)
  surface_point_normals = mesh.face_normals[fid]
  offsets = np.random.randn(int(npoints), sample_factor, 1) * std
  near_surface_samples = (surface_point_samples[:, np.newaxis, :] +
                          surface_point_normals[:, np.newaxis, :] * offsets)
  near_surface_samples = np.concatenate([near_surface_samples, offsets],
                                        axis=-1)
  near_surface_samples = near_surface_samples.reshape([-1, 4])
  surface_samples = np.concatenate([surface_point_samples,
                                    surface_point_normals], axis=-1)
  return surface_samples, near_surface_samples


def get_in_out_from_ray(points_from_ray, sample_factor=10, std=0.01):
  """Get sample points from points from ray.

  Args:
    points_from_ray: [npts, 6], where first 3 dims are xyz, last 3 are ray
    dir.
    sample_factor: int, number of samples to pick per surface point.
    std: float, std of samples to generate.

  Returns:
    near_surface_samples: [npts*sample_factor, 4], where last dimension is
    distance to surface point.
  """
  surface_point_samples = points_from_ray[:, :3]
  surface_point_normals = points_from_ray[:, 3:]

  # make sure normals are normalized to unit length
  n = surface_point_normals
  surface_point_normals = n / (np.linalg.norm(n, axis=1, keepdims=True)+1e-8)
  npoints = points_from_ray.shape[0]
  offsets = np.random.randn(npoints, sample_factor, 1) * std
  near_surface_samples = (surface_point_samples[:, np.newaxis, :] +
                          surface_point_normals[:, np.newaxis, :] * offsets)
  near_surface_samples = np.concatenate([near_surface_samples, offsets],
                                        axis=-1)
  near_surface_samples = near_surface_samples.reshape([-1, 4])
  return near_surface_samples


def intrinsics_from_matrix(int_mat):
  return (int_mat[0, 0], int_mat[1, 1], int_mat[0, 2], int_mat[1, 2])


def encode_decoder_one_scene(near_surface_samples, ckpt_dir, part_size,
                             overlap, indep_pt_loss,
                             xmin=np.zeros(3), xmax=np.ones(3),
                             res_per_part=16, npts=4096, init_std=1e-4,
                             learning_rate=1e-3, steps=10000, nows=False,
                             verbose=False):
  """Wrapper function for encoding and decoding one scene.

  Args:
    near_surface_samples: [npts*sample_factor, 4], where last dimension is
    distance to surface point.
    ckpt_dir: str, path to checkpoint directory to use.
    part_size: float, size of each part to use when autodecoding.
    overlap: bool, whether to use overlapping encoding.
    indep_pt_loss: bool, whether to use independent point loss in
    optimization.
    xmin: np.array of len 3, lower coordinates of the domain bounds.
    xmax: np.array of len 3, upper coordinates of the domain bounds.
    res_per_part: int, resolution of output evaluation per part.
    npts: int, number of points to use per step when doing gradient descent.
    init_std: float, std to use when initializing seed.
    learning_rate: float, learning rate for doing gradient descent.
    steps: int, number of optimization steps to take.
    nows: bool, no warmstarting from checkpoint. use random codebook.
    verbose: bool, verbose mode.

  Returns:
    v: float32 np.array, vertices of reconstructed mesh.
    f: int32 np.array, faces of reconstructed mesh.
    feat_grid: float32 np.array, feature grid.
    mask: bool np.array, mask of occupied cells.
  """
  ckpt = tf.train.latest_checkpoint(ckpt_dir)
  np.random.shuffle(near_surface_samples)
  param_file = os.path.join(ckpt_dir, 'params.txt')
  params = evaluator.parse_param_file(param_file)

  _, occ_idx, grid_shape = pt.np_get_occupied_idx(
      near_surface_samples[:100000, :3],
      xmin=xmin-0.5*part_size, xmax=xmax+0.5*part_size,
      crop_size=part_size, ntarget=1, overlap=overlap,
      normalize_crops=False, return_shape=True)
  npts = min(npts, near_surface_samples.shape[0])
  if verbose:
    print('LIG shape: {}'.format(grid_shape))
  if verbose:
    print('Optimizing latent codes in LIG...')
  goptim = LIGOptimizer(
      ckpt, origin=xmin, grid_shape=grid_shape, part_size=part_size,
      occ_idx=occ_idx, indep_pt_loss=indep_pt_loss, overlap=overlap,
      alpha_lat=params['alpha_lat'], npts=npts, init_std=init_std,
      learning_rate=learning_rate, var_prefix='', nows=nows)
  goptim.optimize_feat_grid(near_surface_samples[:, :3],
                            near_surface_samples[:, 3:], steps=steps)

  mask = occupancy_sparse_to_dense(occ_idx, grid_shape)

  # evaluate mesh for the current crop
  if verbose:
    print('Extracting mesh from LIG...')
  svg = evaluator.SparseLIGEvaluator(
      ckpt, num_filters=params['refiner_nf'],
      codelen=params['codelen'], origin=xmin,
      grid_shape=grid_shape, part_size=part_size,
      overlap=overlap, scope='')
  feat_grid = goptim.feature_grid[0]
  out_grid = svg.evaluate_feature_grid(feat_grid, mask=mask,
                                       res_per_part=res_per_part)

  v, f, _, _ = measure.marching_cubes_lewiner(out_grid, 0)
  v *= (part_size / float(res_per_part) *
        float(out_grid.shape[0]) / (float(out_grid.shape[0])-1))
  v += xmin

  return v, f, feat_grid, mask
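The near-surface sampling done by `get_in_out_from_ray` above is easy to check in isolation. The sketch below is a standalone numpy re-implementation (the function name `near_surface_samples_from_ray` is ours, not part of the module): each surface point is displaced along its unit normal by Gaussian offsets, and the signed offset is stored in the fourth column, so the distance from each sample back to its base point equals the magnitude of that column.

```python
import numpy as np


def near_surface_samples_from_ray(points_from_ray, sample_factor=10, std=0.01):
  """Standalone sketch of the sampling in get_in_out_from_ray.

  points_from_ray: [npts, 6]; first 3 dims are xyz, last 3 are ray dir.
  Returns [npts*sample_factor, 4]: xyz of the jittered sample plus its
  signed offset along the (normalized) ray direction.
  """
  xyz = points_from_ray[:, :3]
  n = points_from_ray[:, 3:]
  # normalize directions to unit length (epsilon guards against zero rays)
  n = n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-8)
  npts = points_from_ray.shape[0]
  offsets = np.random.randn(npts, sample_factor, 1) * std
  # displace each point along its unit direction; keep the signed offset
  samples = xyz[:, np.newaxis, :] + n[:, np.newaxis, :] * offsets
  samples = np.concatenate([samples, offsets], axis=-1)
  return samples.reshape([-1, 4])


pts = np.concatenate([np.random.rand(5, 3), np.random.randn(5, 3)], axis=-1)
samples = near_surface_samples_from_ray(pts, sample_factor=4, std=0.01)
print(samples.shape)  # (20, 4): 5 surface points x 4 offsets each
```

Because the directions are unit length, `|sample_xyz - base_xyz|` reproduces `|offset|`, which is the invariant the downstream sign/distance supervision relies on.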
-1
tensorflow/graphics
486
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where. -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
copybara-service[bot]
"2021-01-29T04:02:31Z"
"2021-02-07T22:38:58Z"
9d257ad4a72ccf65e4349910b9fff7c0a5648073
f683a9a5794bade30ede447339394e84b44acc0b
./tensorflow_graphics/rendering/camera/__init__.py
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Camera module."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from tensorflow_graphics.rendering.camera import orthographic
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.rendering.camera import quadratic_radial_distortion
from tensorflow_graphics.util import export_api as _export_api

# API contains submodules of tensorflow_graphics.rendering.camera.
__all__ = _export_api.get_modules()
./tensorflow_graphics/g3doc/_book.yaml
upper_tabs:
# Tabs left of dropdown menu
- include: /_upper_tabs_left.yaml
- include: /api_docs/_upper_tabs_api.yaml
# Dropdown menu
- name: Resources
  path: /resources
  is_default: true
  menu:
  - include: /resources/_menu_toc.yaml
  lower_tabs:
    # Subsite tabs
    other:
    - name: Guide & Tutorials
      contents:
      - title: Overview
        path: /graphics/overview
      - title: Install
        path: /graphics/install
      - title: Contributing
        path: https://github.com/tensorflow/graphics/blob/master/CONTRIBUTING.md
        status: external
      - title: Debug
        path: /graphics/debug_mode
      - title: TensorBoard
        path: /graphics/tensorboard
      - heading: Tutorials
      - title: 6DOF alignment
        path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/6dof_alignment.ipynb
        status: external
      - title: Camera intrinsics optimization
        path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/intrinsics_optimization.ipynb
        status: external
      - title: Interpolation
        path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/interpolation.ipynb
        status: external
      - title: Reflectance
        path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/reflectance.ipynb
        status: external
      - title: Non-rigid deformation
        path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/non_rigid_deformation.ipynb
        status: external
      - title: Spherical harmonics rendering
        path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/spherical_harmonics_approximation.ipynb
        status: external
      - title: Environment map optimization
        path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/spherical_harmonics_optimization.ipynb
        status: external
      - title: Semantic mesh segmentation
        path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/mesh_segmentation_demo.ipynb
        status: external
    - name: API
      skip_translation: true
      contents:
      - include: /graphics/api_docs/python/tfg/_toc.yaml
- include: /_upper_tabs_right.yaml
./tensorflow_graphics/projects/local_implicit_grid/requirements.txt
absl-py>=0.7.1
numpy>=1.16.4
plyfile>=0.7.1
scipy>=1.3.1
scikit-image>=0.15.0
trimesh>=3.2.12
tensorflow>=1.14.0
./tensorflow_graphics/projects/pointnet/helpers.py
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A collection of training helper utilities."""

from __future__ import print_function

import argparse
import os
import tempfile
import time

import tensorflow as tf
import termcolor


class ArgumentParser(argparse.ArgumentParser):
  """Argument parser with default flags, and tensorboard helpers."""

  def __init__(self, *args, **kwargs):
    argparse.ArgumentParser.__init__(self, *args, **kwargs)
    # --- Query default logdir
    random_logdir = tempfile.mkdtemp(prefix="tensorboard_")
    default_logdir = os.environ.get("TENSORBOARD_DEFAULT_LOGDIR",
                                    random_logdir)
    # --- Add the default options
    self.add("--logdir", default_logdir, help="tensorboard dir")
    self.add("--tensorboard", True, help="should generate summaries?")
    self.add("--assert_gpu", True, help="asserts on missing GPU accelerator")
    self.add("--tf_quiet", True, help="no verbose tf startup")

  def add(self, name, default, **kwargs):
    """More compact argumentparser 'add' flag method."""
    helpstring = kwargs["help"] if "help" in kwargs else ""
    metavar = kwargs["metavar"] if "metavar" in kwargs else name

    # --- Fixes problems with bool arguments
    def str2bool(string):
      if isinstance(string, bool):
        return string  # already a bool (e.g. the default); pass it through
      if string.lower() in ("true", "yes"):
        return True
      if string.lower() in ("false", "no"):
        return False
      raise argparse.ArgumentTypeError("Bad value for boolean flag")

    mytype = type(default)
    if isinstance(default, bool):
      mytype = str2bool
    self.add_argument(name, metavar=metavar, default=default,
                      help=helpstring, type=mytype)

  def parse_args(self, args=None, namespace=None):
    """WARNING: programmatically changes the logdir flags."""
    flags = super(ArgumentParser, self).parse_args(args)
    # --- setup automatic logdir (timestamp)
    if "timestamp" in flags.logdir:
      timestamp = time.strftime("%a%d_%H:%M:%S")  # "Tue19_12:02:26"
      flags.logdir = flags.logdir.replace("timestamp", timestamp)
    if flags.tf_quiet:
      set_tensorflow_log_level(3)
    if flags.assert_gpu:
      assert_gpu_available()
    # --- ensure logdir ends in /
    if flags.logdir[-1] != "/":
      flags.logdir += "/"
    return flags


def assert_gpu_available():
  """Verifies a GPU accelerator is available."""
  physical_devices = tf.config.list_physical_devices("GPU")
  num_gpus = len(physical_devices)
  assert num_gpus >= 1, "execution requires one GPU"


def set_tensorflow_log_level(level=3):
  """Sets the log level of TensorFlow."""
  os.environ["TF_CPP_MIN_LOG_LEVEL"] = str(level)


def summary_command(parser, flags, log_to_file=True, log_to_summary=True):
  """Cache the command used to reproduce experiment in summary folder."""
  if not flags.tensorboard:
    return
  exec_string = "python " + parser.prog + " \\\n"
  nflags = len(vars(flags))
  for i, arg in enumerate(vars(flags)):
    exec_string += "  --{} ".format(arg)
    exec_string += "{}".format(getattr(flags, arg))
    if i + 1 < nflags:
      exec_string += " \\\n"
  exec_string += "\n"
  if log_to_file:
    with tf.io.gfile.GFile(os.path.join(flags.logdir, "command.txt"),
                           mode="w") as fid:
      fid.write(exec_string)
  if log_to_summary and flags.tensorboard:
    tf.summary.text("command", exec_string, step=0)


def setup_tensorboard(flags):
  """Creates summary writers, and setups default tensorboard paths."""
  if not flags.tensorboard:
    return
  # --- Do not allow experiment with same name
  assert (not tf.io.gfile.exists(flags.logdir) or
          not tf.io.gfile.listdir(flags.logdir)), \
      "CRITICAL: folder {} already exists".format(flags.logdir)
  # --- Log where summary can be found
  print("View results with: ")
  termcolor.cprint("  tensorboard --logdir {}".format(flags.logdir), "red")
  writer = tf.summary.create_file_writer(flags.logdir, flush_millis=10000)
  writer.set_as_default()
  # --- Log dir name tweak for "hypertune"
  log_dir = ""
  trial_id = int(os.environ.get("CLOUD_ML_TRIAL_ID", 0))
  if trial_id != 0:
    if log_dir.endswith(os.sep):
      log_dir = log_dir[:-1]  # removes trailing "/"
    log_dir += "_trial{0:03d}/".format(trial_id)


def handle_keyboard_interrupt(flags):
  """Informs user how to delete stale summaries."""
  print("Keyboard interrupt by user")
  if flags.logdir.startswith("gs://"):
    bucketpath = flags.logdir[5:]
    print("Delete these summaries with: ")
    termcolor.cprint("  gsutil rm -rf {}".format(flags.logdir), "red")
    baseurl = "  https://pantheon.google.com/storage/browser/{}"
    print("Or by visiting: ")
    termcolor.cprint(baseurl.format(bucketpath), "red")
  else:
    print("Delete these summaries with: ")
    termcolor.cprint("  rm -rf {}".format(flags.logdir), "red")
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A collection of training helper utilities."""

from __future__ import print_function

import argparse
import os
import tempfile
import time

import tensorflow as tf
import termcolor


class ArgumentParser(argparse.ArgumentParser):
  """Argument parser with default flags, and tensorboard helpers."""

  def __init__(self, *args, **kwargs):
    argparse.ArgumentParser.__init__(self, *args, **kwargs)
    # --- Query default logdir
    random_logdir = tempfile.mkdtemp(prefix="tensorboard_")
    default_logdir = os.environ.get("TENSORBOARD_DEFAULT_LOGDIR",
                                    random_logdir)
    # --- Add the default options
    self.add("--logdir", default_logdir, help="tensorboard dir")
    self.add("--tensorboard", True, help="should generate summaries?")
    self.add("--assert_gpu", True, help="asserts on missing GPU accelerator")
    self.add("--tf_quiet", True, help="no verbose tf startup")

  def add(self, name, default, **kwargs):
    """More compact argumentparser 'add' flag method."""
    helpstring = kwargs["help"] if "help" in kwargs else ""
    metavar = kwargs["metavar"] if "metavar" in kwargs else name

    # --- Fixes problems with bool arguments
    def str2bool(string):
      if isinstance(string, bool):
        return string
      if string.lower() in ("true", "yes"):
        return True
      if string.lower() in ("false", "no"):
        return False
      raise argparse.ArgumentTypeError("Bad value for boolean flag")

    mytype = type(default)
    if isinstance(default, bool):
      mytype = str2bool
    self.add_argument(
        name, metavar=metavar, default=default, help=helpstring, type=mytype)

  def parse_args(self, args=None, namespace=None):
    """WARNING: programmatically changes the logdir flags."""
    flags = super(ArgumentParser, self).parse_args(args)
    # --- setup automatic logdir (timestamp)
    if "timestamp" in flags.logdir:
      timestamp = time.strftime("%a%d_%H:%M:%S")  # "Tue19_12:02:26"
      flags.logdir = flags.logdir.replace("timestamp", timestamp)
    if flags.tf_quiet:
      set_tensorflow_log_level(3)
    if flags.assert_gpu:
      assert_gpu_available()
    # --- ensure logdir ends in /
    if flags.logdir[-1] != "/":
      flags.logdir += "/"
    return flags


def assert_gpu_available():
  """Verifies a GPU accelerator is available."""
  physical_devices = tf.config.list_physical_devices("GPU")
  num_gpus = len(physical_devices)
  assert num_gpus >= 1, "execution requires one GPU"


def set_tensorflow_log_level(level=3):
  """Sets the log level of TensorFlow."""
  os.environ["TF_CPP_MIN_LOG_LEVEL"] = str(level)


def summary_command(parser, flags, log_to_file=True, log_to_summary=True):
  """Cache the command used to reproduce experiment in summary folder."""
  if not flags.tensorboard:
    return
  exec_string = "python " + parser.prog + " \\\n"
  nflags = len(vars(flags))
  for i, arg in enumerate(vars(flags)):
    exec_string += "  --{} ".format(arg)
    exec_string += "{}".format(getattr(flags, arg))
    if i + 1 < nflags:
      exec_string += " \\\n"
  exec_string += "\n"
  if log_to_file:
    with tf.io.gfile.GFile(
        os.path.join(flags.logdir, "command.txt"), mode="w") as fid:
      fid.write(exec_string)
  if log_to_summary and flags.tensorboard:
    tf.summary.text("command", exec_string, step=0)


def setup_tensorboard(flags):
  """Creates summary writers, and setups default tensorboard paths."""
  if not flags.tensorboard:
    return
  # --- Do not allow experiment with same name
  assert (not tf.io.gfile.exists(flags.logdir) or
          not tf.io.gfile.listdir(flags.logdir)), \
      "CRITICAL: folder {} already exists".format(flags.logdir)
  # --- Log where summary can be found
  print("View results with: ")
  termcolor.cprint("  tensorboard --logdir {}".format(flags.logdir), "red")
  writer = tf.summary.create_file_writer(flags.logdir, flush_millis=10000)
  writer.set_as_default()
  # --- Log dir name tweak for "hypertune"
  log_dir = ""
  trial_id = int(os.environ.get("CLOUD_ML_TRIAL_ID", 0))
  if trial_id != 0:
    if log_dir.endswith(os.sep):
      log_dir = log_dir[:-1]  # removes trailing "/"
    log_dir += "_trial{0:03d}/".format(trial_id)


def handle_keyboard_interrupt(flags):
  """Informs user how to delete stale summaries."""
  print("Keyboard interrupt by user")
  if flags.logdir.startswith("gs://"):
    bucketpath = flags.logdir[5:]
    print("Delete these summaries with: ")
    termcolor.cprint("  gsutil rm -rf {}".format(flags.logdir), "red")
    baseurl = "  https://pantheon.google.com/storage/browser/{}"
    print("Or by visiting: ")
    termcolor.cprint(baseurl.format(bucketpath), "red")
  else:
    print("Delete these summaries with: ")
    termcolor.cprint("  rm -rf {}".format(flags.logdir), "red")
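Plain `type=bool` in argparse treats any non-empty string (including `"false"`) as truthy, which is why `ArgumentParser.add` swaps in `str2bool` for boolean defaults. A minimal standalone sketch of that behavior, using only the standard library (the parser and flag names here are illustrative, not part of this module):

```python
import argparse


def str2bool(string):
  # Mirrors the bool handling in ArgumentParser.add above.
  if isinstance(string, bool):
    return string
  if string.lower() in ("true", "yes"):
    return True
  if string.lower() in ("false", "no"):
    return False
  raise argparse.ArgumentTypeError("Bad value for boolean flag")


parser = argparse.ArgumentParser()
parser.add_argument("--tensorboard", type=str2bool, default=True)
flags = parser.parse_args(["--tensorboard", "false"])
print(flags.tensorboard)  # False
```

With `type=bool` instead, the same invocation would yield `True`, silently ignoring the user's intent.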
\ XbnXtensorflow_graphics/datasets/modelnet40/fakes/modelnet40_ply_hdf5_2048/ply_data_test0.h5eqXeqXpf&jfCȄ8Xtensorflow_graphics/datasets/modelnet40/fakes/modelnet40_ply_hdf5_2048/ply_data_test1.h5eqXeqXpg:|-L5P$ : rYtensorflow_graphics/datasets/modelnet40/fakes/modelnet40_ply_hdf5_2048/ply_data_train0.h5ezSezSph??J :VfYtensorflow_graphics/datasets/modelnet40/fakes/modelnet40_ply_hdf5_2048/ply_data_train1.h5ezSezSpi %{FDDz T mI@Ytensorflow_graphics/datasets/modelnet40/fakes/modelnet40_ply_hdf5_2048/ply_data_train2.h5ezSezSpj`u\69eZ؀ǻUtensorflow_graphics/datasets/modelnet40/fakes/modelnet40_ply_hdf5_2048/test_files.txtezSezSpk~$.n3:@3 @BRVtensorflow_graphics/datasets/modelnet40/fakes/modelnet40_ply_hdf5_2048/train_files.txte=e=p8xΰjY>*elĊӶ5tensorflow_graphics/datasets/modelnet40/modelnet40.pye=e=p "DK@wC!upN+`?tensorflow_graphics/datasets/modelnet40/modelnet40_checksums.pye=e=pο{ (Y3ѶC=$Bg?tensorflow_graphics/datasets/modelnet40/modelnet40_makefakes.pye=e=pldlھH`לah9tensorflow_graphics/datasets/modelnet40/modelnet40_run.pye=e=po,eREpF/:tensorflow_graphics/datasets/modelnet40/modelnet40_show.pye=e=p^A?XrE?Ơ:tensorflow_graphics/datasets/modelnet40/modelnet40_test.pye=e=p__uﱇ&R 5`5.tensorflow_graphics/datasets/pix3d/__init__.pyezSezSptv23H#g ?l0tensorflow_graphics/datasets/pix3d/checksums.tsvezSezSpy8h/ʯ1gE9tensorflow_graphics/datasets/pix3d/fakes/img/bed/0001.pnge .Pe .Ppz\aw4TEƮ`x9tensorflow_graphics/datasets/pix3d/fakes/img/bed/0002.pnge .Pe .Pp{-2`L|LkaZٌׅ*9tensorflow_graphics/datasets/pix3d/fakes/img/bed/0010.pnge .Pe .Pp*4ޘpg*|-:tensorflow_graphics/datasets/pix3d/fakes/mask/bed/0001.pnge .Pe .Pp,_{CGfuR`:tensorflow_graphics/datasets/pix3d/fakes/mask/bed/0002.pnge .Pe .Pp aDtLis-%4 S:tensorflow_graphics/datasets/pix3d/fakes/mask/bed/0010.pnge .Pe .PpD@al'()BLOtensorflow_graphics/datasets/pix3d/fakes/model/bed/IKEA_MALM_2/3d_keypoints.txte .Pe .Pp0^>{agYtensorflow_graphics/datasets/pix3d/fakes/model/bed/IKEA_MALM_2/malm_bed_2_obj0_object.mtle .Pe .PpY_^+r%3ϰ*. 
LTHtensorflow_graphics/datasets/pix3d/fakes/model/bed/IKEA_MALM_2/model.obje .Pe .Pp#; íAeyp$Htensorflow_graphics/datasets/pix3d/fakes/model/bed/IKEA_MALM_2/voxel.mate .Pe .Ppǘij~,"˪ 9Otensorflow_graphics/datasets/pix3d/fakes/model/bed/IKEA_MALM_3/3d_keypoints.txte .Pe .Pp0^>{agYtensorflow_graphics/datasets/pix3d/fakes/model/bed/IKEA_MALM_3/malm_bed_3_obj0_object.mtle .Pe .PpY/Ӌ$KU i,Htensorflow_graphics/datasets/pix3d/fakes/model/bed/IKEA_MALM_3/model.obje .Pe .Pp#u+,<,#Htensorflow_graphics/datasets/pix3d/fakes/model/bed/IKEA_MALM_3/voxel.mate .Pe .Pp$Y $^{'4Z3tensorflow_graphics/datasets/pix3d/fakes/pix3d.jsone .Pe .PpГc7v ȢѺ]Rm7tensorflow_graphics/datasets/pix3d/fakes/pix3d_test.npye .Pe .PpfÂ\T\ .S18tensorflow_graphics/datasets/pix3d/fakes/pix3d_train.npye .Pe .Ppkb΂(;&\>m7tensorflow_graphics/datasets/pix3d/fixed_masks/0045.pnge kMe kMp*i9"ƶ<[l7tensorflow_graphics/datasets/pix3d/fixed_masks/1745.pnge=e=p- 7pĻ.G=d3*+tensorflow_graphics/datasets/pix3d/pix3d.pye=e=p<w;n!搳40tensorflow_graphics/datasets/pix3d/pix3d_test.pye kMe kMpJ EљltNt-8tensorflow_graphics/datasets/pix3d/splits/pix3d_test.npye kMe kMp%- 1+<ÕL9tensorflow_graphics/datasets/pix3d/splits/pix3d_train.npye=e=p[VOZzL1tensorflow_graphics/datasets/shapenet/__init__.pye kMe kMpЅ.,_r2y VI3tensorflow_graphics/datasets/shapenet/checksums.tsve kMe kMp`djN[ \3]rqbDuptensorflow_graphics/datasets/shapenet/fakes/02691156/3d5354863690ac7eca27bba175814d1/models/model_normalized.obje kMe kMpcJ;j{K”TŠ3qtensorflow_graphics/datasets/shapenet/fakes/02691156/7eff60e0d72800b8ca8607f540cc62ba/models/model_normalized.obje kMe kMpD)=N;pzG%qtensorflow_graphics/datasets/shapenet/fakes/02691156/9550774ad1c19b24a5a118bd15e6e34f/models/model_normalized.obje kMe kMp)N5~Vvs3eqtensorflow_graphics/datasets/shapenet/fakes/02691156/a98038807a61926abce962d6c4b37336/models/model_normalized.obje kMe kMpR)wE')Lqtensorflow_graphics/datasets/shapenet/fakes/03001627/a800bd725fe116447a84e76181a9e08f/models/model_normalized.obje kMe kMp<t V1> 
ZDE3tensorflow_graphics/datasets/shapenet/fakes/all.csve kMe kMp'm2c :bCq3[99tensorflow_graphics/datasets/shapenet/fakes/taxonomy.jsone=e=p޳9İ!w61tensorflow_graphics/datasets/shapenet/shapenet.pye=e=p q 蜓 2K⤲y6tensorflow_graphics/datasets/shapenet/shapenet_test.pye kMe kMpe,덴q8&*>rWY0tensorflow_graphics/datasets/testing/__init__.pye kMe kMp !Mn ~~jQtensorflow_graphics/datasets/testing/metadata/model_net40/1.0.0/dataset_info.jsone kMe kMp 6Q5S^>q~j$tensorflow_graphics/g3doc/_book.yamle kMe kMpY4`S~[\1&tensorflow_graphics/g3doc/_index.ipynbe kMe kMp(z_fr- 'g%tensorflow_graphics/g3doc/_index.yamle=e=pv.<~sa~>QR'tensorflow_graphics/g3doc/build_docs.pye kMe kMpĤ16dJϯUa'tensorflow_graphics/g3doc/debug_mode.mde kMe kMp'?d_B<PE$tensorflow_graphics/g3doc/install.mde kMe kMp jYϹro$N&~2%tensorflow_graphics/g3doc/overview.mde kMe kMpϡ:T)Hs9(tensorflow_graphics/g3doc/tensorboard.mde kMe kMpƆZ<hWr0aVU(tensorflow_graphics/geometry/__init__.pye kMe kMpB aufҒu&0~Á4tensorflow_graphics/geometry/convolution/__init__.pye=e=p2=%z&5`=tensorflow_graphics/geometry/convolution/graph_convolution.pye=e=p7'KlkB8&~~` $9tensorflow_graphics/geometry/convolution/graph_pooling.pye kMe kMpJw0<5e*H:tensorflow_graphics/geometry/convolution/tests/__init__.pye=e=p+:5%>ҘHtensorflow_graphics/geometry/convolution/tests/graph_convolution_test.pye=e=p WtlrlKP ixz@Dtensorflow_graphics/geometry/convolution/tests/graph_pooling_test.pye=e=pfBZ^Qof 6Ux<tensorflow_graphics/geometry/convolution/tests/utils_test.pye=e=p C=C$8$znkHbb1tensorflow_graphics/geometry/convolution/utils.pye kMe kMp85.+;m\<G ;tensorflow_graphics/geometry/deformation_energy/__init__.pye=e=p#7G,&3vKtensorflow_graphics/geometry/deformation_energy/as_conformal_as_possible.pye kMe kMpJw0<5e*HAtensorflow_graphics/geometry/deformation_energy/tests/__init__.pye kMe kMpKVcp NH)ELVtensorflow_graphics/geometry/deformation_energy/tests/as_conformal_as_possible_test.pye kMe 
kMpyk]v+jo7tensorflow_graphics/geometry/representation/__init__.pye=e=p5Fʣ3tensorflow_graphics/geometry/representation/grid.pyeCeCp7H׆Iyj<tensorflow_graphics/geometry/representation/mesh/__init__.pye=e=pR#뷍{b$5EխHT;tensorflow_graphics/geometry/representation/mesh/normals.pye=e=p;@]%㢍׊ggV;tensorflow_graphics/geometry/representation/mesh/sampler.pye kMe kMpJw0<5e*HBtensorflow_graphics/geometry/representation/mesh/tests/__init__.pye kMe kMpahak ,_wLGItensorflow_graphics/geometry/representation/mesh/tests/mesh_test_utils.pye=e=p<5zP@}!KPg5,7Ftensorflow_graphics/geometry/representation/mesh/tests/normals_test.pye=e=p=7HU1R{u{Ftensorflow_graphics/geometry/representation/mesh/tests/sampler_test.pye=e=p@G5)Hժn0i,Dtensorflow_graphics/geometry/representation/mesh/tests/utils_test.pye=e=pBioRU J^Y(F@9tensorflow_graphics/geometry/representation/mesh/utils.pye=e=pC<xYҺWTWZ:4tensorflow_graphics/geometry/representation/point.pye=e=p"; n'5r+"4wE2tensorflow_graphics/geometry/representation/ray.pye kMe kMpJw0<5e*H=tensorflow_graphics/geometry/representation/tests/__init__.pye=e=pD R ,|*K>tensorflow_graphics/geometry/representation/tests/grid_test.pye kMe kMp }j(r&QI[?tensorflow_graphics/geometry/representation/tests/point_test.pye?Fe?FpE' rX~ǰvWVw[o=tensorflow_graphics/geometry/representation/tests/ray_test.pye?Fe?FpF'VT6[ԑMM$mABtensorflow_graphics/geometry/representation/tests/triangle_test.pye?Fe?FpQd70qR&)6, 'a+o7tensorflow_graphics/geometry/representation/triangle.pyeA ?eA ?p#eUT(t@y7tensorflow_graphics/geometry/transformation/__init__.pye?Fe?FpR0KNʏ6[@9tensorflow_graphics/geometry/transformation/axis_angle.pye?Fe?FpS qgI+悁 ]} ">tensorflow_graphics/geometry/transformation/dual_quaternion.pye?Fe?FpT$BzZ^J}eXF"O4tensorflow_graphics/geometry/transformation/euler.pye?Fe?FpU/L83-Dtensorflow_graphics/geometry/transformation/linear_blend_skinning.pye?Fe?FpV9<?F 
֤?*6tensorflow_graphics/geometry/transformation/look_at.pye?Fe?FpW\1/!_N5Lw9tensorflow_graphics/geometry/transformation/quaternion.pye?Fe?FpX!/1gltI `-_Atensorflow_graphics/geometry/transformation/rotation_matrix_2d.pye?Fe?FpY7XK}Euoz#Atensorflow_graphics/geometry/transformation/rotation_matrix_3d.pye?Fe?Fp[ Q24*#.OAtEtensorflow_graphics/geometry/transformation/rotation_matrix_common.pye Ie IpJw0<5e*H=tensorflow_graphics/geometry/transformation/tests/__init__.pye?Fe?Fp`E[%&܉EesI-Dtensorflow_graphics/geometry/transformation/tests/axis_angle_test.pye?Fe?Fp Gǔ"~1 |)Itensorflow_graphics/geometry/transformation/tests/dual_quaternion_test.pye?Fe?Fpe/2f<*г3,5dgp?tensorflow_graphics/geometry/transformation/tests/euler_test.pye Ie Ip e/]Y!֥dOtensorflow_graphics/geometry/transformation/tests/linear_blend_skinning_test.pye Ie Ip  ;b|{4J'Atensorflow_graphics/geometry/transformation/tests/look_at_test.pye?Fe?Fp{p#ؓ!ޏ_X yuMKDtensorflow_graphics/geometry/transformation/tests/quaternion_test.pye Ie Ip &\8/K`t_m-H]?SLtensorflow_graphics/geometry/transformation/tests/rotation_matrix_2d_test.pye Ie Ip _д.]:~O\Mps",Ltensorflow_graphics/geometry/transformation/tests/rotation_matrix_3d_test.pye Ie Ip oPvxF((Ptensorflow_graphics/geometry/transformation/tests/rotation_matrix_common_test.pye Ie Ip =nC,Q$>tensorflow_graphics/geometry/transformation/tests/test_data.pye?Fe?Fp'25!uMTJAtensorflow_graphics/geometry/transformation/tests/test_helpers.pye Ie Ip !TTSzW[V{%tensorflow_graphics/image/__init__.pye Ie Ip * E?}J|:T_1tensorflow_graphics/image/color_space/__init__.pye Ie Ip -.ǟ!󩣛 'P-2tensorflow_graphics/image/color_space/constants.pye?Fe?Fp ;9>r8[sLRLdH̟3tensorflow_graphics/image/color_space/linear_rgb.pye?Fe?Fp gmC38sk||-tensorflow_graphics/image/color_space/srgb.pye Ie Ip Jw0<5e*H7tensorflow_graphics/image/color_space/tests/__init__.pye Ie Ip  txUR(K˺nf>tensorflow_graphics/image/color_space/tests/linear_rgb_test.pye Ie Ip  
6EŇ>-8tensorflow_graphics/image/color_space/tests/srgb_test.pye?Fe?Fp,!GFr2}$tensorflow_graphics/image/matting.pye?Fe?Fp#T91$مSJt$tensorflow_graphics/image/pyramid.pye Ie Ip Jw0<5e*H+tensorflow_graphics/image/tests/__init__.pye?Fe?Fp7"Jbbkdm"'׿/tensorflow_graphics/image/tests/matting_test.pye?Fe?Fp"ȳ.uNF6`y/tensorflow_graphics/image/tests/pyramid_test.pye?Fe?FpUNyY l_I3tensorflow_graphics/image/tests/transformer_test.pye?Fe?Fp= ŞSV|;>a(tensorflow_graphics/image/transformer.pye?Fe?Fp0M/甹xY ;I>nJ"tensorflow_graphics/io/__init__.pye?Fe?Fp6 m@Otensorflow_graphics/io/exr.pye Ie Ip Jw0<5e*H(tensorflow_graphics/io/tests/__init__.pye?Fe?Fp=`um΋(tensorflow_graphics/io/tests/exr_test.pye?Fe?Fp Gw_EI@se'tensorflow_graphics/io/triangle_mesh.pye Ie Ip # YD7Eߟi($tensorflow_graphics/math/__init__.pye Ie Ip &\A:h) "JS#2tensorflow_graphics/math/interpolation/__init__.pye?Fe?Fp)3 z}kf5ş1tensorflow_graphics/math/interpolation/bspline.pye?Fe?Fp*[Je'/tensorflow_graphics/math/interpolation/slerp.pye Ie Ip *Jw0<5e*H8tensorflow_graphics/math/interpolation/tests/__init__.pye Ie Ip +c (`WGK}:<tensorflow_graphics/math/interpolation/tests/bspline_test.pye?Fe?Fp(ٲyɾAfL:tensorflow_graphics/math/interpolation/tests/slerp_test.pye?Fe?FpHmJmɏ]>tensorflow_graphics/math/interpolation/tests/trilinear_test.pye?Fe?Fp!o;/(<PcZ=tensorflow_graphics/math/interpolation/tests/weighted_test.pye?Fe?Fp<a}T QiZ/3tensorflow_graphics/math/interpolation/trilinear.pye?Fe?FprDeXײWhUk2tensorflow_graphics/math/interpolation/weighted.pye?Fe?Fp.W$|*]n"308*(tensorflow_graphics/math/math_helpers.pye Ie Ip 3q,EQM[ĝ-b.tensorflow_graphics/math/optimizer/__init__.pye?Fe?Fp$ڹջxwЏ"r9tensorflow_graphics/math/optimizer/levenberg_marquardt.pye Ie Ip 6Jw0<5e*H4tensorflow_graphics/math/optimizer/tests/__init__.pye?Fe?FpG3]7Rpx!|j#Dtensorflow_graphics/math/optimizer/tests/levenberg_marquardt_test.pye?Fe?Fp:A7`?^-/tensorflow_graphics/math/spherical_harmonics.pye Ie Ip ;Jw0<5e*H*tensorflow_graphics/math/tests/__init__.pye Ie Ip 
="Sg5aO2h臅g/3tensorflow_graphics/math/tests/math_helpers_test.pye Ie Ip ?UJqaf?}'ė:tensorflow_graphics/math/tests/spherical_harmonics_test.pye Ie Ip @)} !#f>-tensorflow_graphics/math/tests/vector_test.pye?Fe?Fpn1=;X_$LN("tensorflow_graphics/math/vector.pye Ee Ep CM\'7)s=$"tensorflow_graphics/nn/__init__.pye~<e~<pch66mW(tensorflow_graphics/nn/layer/__init__.pye?Fe?FpG/j-hˊ̦W-_G1tensorflow_graphics/nn/layer/graph_convolution.pye?Fe?Fp3!0_W~in K(tensorflow_graphics/nn/layer/pointnet.pye Ee Ep IJw0<5e*H.tensorflow_graphics/nn/layer/tests/__init__.pye?Fe?Fp.ط|[om.#<tensorflow_graphics/nn/layer/tests/graph_convolution_test.pye Ee Ep K cMI_ ? 3tensorflow_graphics/nn/layer/tests/pointnet_test.pye?Fe?Fp}] Mm}dȕz A'tensorflow_graphics/nn/loss/__init__.pye?Fe?Fp47uUJ)!{=#߱/tensorflow_graphics/nn/loss/chamfer_distance.pye Ee Ep QJw0<5e*H-tensorflow_graphics/nn/loss/tests/__init__.pye Ee Ep Rl38QGyύ(:tensorflow_graphics/nn/loss/tests/chamfer_distance_test.pye Ee Ep UFŴ DRQǴ)tensorflow_graphics/nn/metric/__init__.pye?Fe?Fp6 {5{wx`t@15-'tensorflow_graphics/nn/metric/fscore.pye?Fe?FpD {<χ_h0VjVۢ8tensorflow_graphics/nn/metric/intersection_over_union.pye?Fe?Fp8:E/V@t~\e]H*tensorflow_graphics/nn/metric/precision.pye?Fe?Fp9[f^HKO'tensorflow_graphics/nn/metric/recall.pye Ee Ep [Jw0<5e*H/tensorflow_graphics/nn/metric/tests/__init__.pye Ee Ep \ ѿ}Rz0bxѫ>G/O2tensorflow_graphics/nn/metric/tests/fscore_test.pye Ee Ep ]Mɐjz$IɽhCtensorflow_graphics/nn/metric/tests/intersection_over_union_test.pye Ee Ep ^I)&#Y$s8w5tensorflow_graphics/nn/metric/tests/precision_test.pye Ee Ep _=!pwl<ٸyugA2tensorflow_graphics/nn/metric/tests/recall_test.pye Ee Ep ac̘`xAyKjA2tensorflow_graphics/notebooks/6dof_alignment.ipynbe Ee Ep b*{!i!=Z9eJ)tensorflow_graphics/notebooks/__init__.pye Ee Ep c,̘#E+kEt,0*t1tensorflow_graphics/notebooks/interpolation.ipynbe?Fe?Fp:G3ɠ mMar./;tensorflow_graphics/notebooks/intrinsics_optimization.ipynbe Ee Ep f?i8|n<Jo OR+tensorflow_graphics/notebooks/matting.ipynbe|Oe|OpE4mCЙaf@ 
VF9tensorflow_graphics/notebooks/mesh_segmentation_dataio.pye Ee Ep hiNkn3yDJN3%T||8*':tensorflow_graphics/notebooks/mesh_segmentation_demo.ipynbe|Oe|OpF gy=h>4,tensorflow_graphics/notebooks/mesh_viewer.pye Ee Ep j.28A.0 Qȕ9tensorflow_graphics/notebooks/non_rigid_deformation.ipynbe|Oe|Op>6޺U.-٘<R/tensorflow_graphics/notebooks/reflectance.ipynbe Ee Ep mF"bde$R/Gr3tensorflow_graphics/notebooks/resources/__init__.pye Ee Ep n pNDǟN>tensorflow_graphics/notebooks/resources/tfg_simplified_logo.pye Ee Ep o\\SI^ni>tensorflow_graphics/notebooks/resources/triangulated_stripe.pye|Oe|Op?i=NE?Etensorflow_graphics/notebooks/spherical_harmonics_approximation.ipynbe|Oe|OpGIVwyڂ_r0@wmDtensorflow_graphics/notebooks/spherical_harmonics_optimization.ipynbe Ee Ep rjy}MBr5iKmP6tensorflow_graphics/notebooks/threejs_visualization.pye|Oe|OpA*nޱ*ʺ`Zic)tensorflow_graphics/opensource_only.filese Ee Ep uq#|[ 8#% &tensorflow_graphics/projects/README.mde Ee Ep vaTsSY/?lZ(tensorflow_graphics/projects/__init__.pye Ee Ep x$zF\A߿J-tensorflow_graphics/projects/cvxnet/README.mde Ee Ep ~Ƿijː2O@$4:LT+tensorflow_graphics/projects/cvxnet/eval.pye Ee Ep mG3M[,`Yc!3tensorflow_graphics/projects/cvxnet/lib/datasets.pye Ee Ep 2;t71Wb|] 8tensorflow_graphics/projects/cvxnet/lib/libmise/mise.pyxe Ee Ep /JUAw6XNM1tensorflow_graphics/projects/cvxnet/lib/models.pye Ee Ep  pü h Oˡ1tensorflow_graphics/projects/cvxnet/lib/resnet.pye Ee Ep / %љy{^(% f0tensorflow_graphics/projects/cvxnet/lib/utils.pye Ee Ep aR $WpB?4tensorflow_graphics/projects/cvxnet/requirements.txte Ee Ep &,oY-&/rZe,tensorflow_graphics/projects/cvxnet/setup.pye Ee Ep  v/$t`,v6,tensorflow_graphics/projects/cvxnet/train.pye!"Be!"Bp " XOTXL$:tensorflow_graphics/projects/local_implicit_grid/README.mde|Oe|OpJY=RJ'lEۨCBtensorflow_graphics/projects/local_implicit_grid/core/evaluator.pye|Oe|Opb 
@k!Ftensorflow_graphics/projects/local_implicit_grid/core/implicit_nets.pye|Oe|OpcH3R?׆<q|4tVRtensorflow_graphics/projects/local_implicit_grid/core/local_implicit_grid_layer.pye|Oe|Opd+\69" &XXBtensorflow_graphics/projects/local_implicit_grid/core/model_g2g.pye|Oe|OpKkyx­a)"K7>(|A.Btensorflow_graphics/projects/local_implicit_grid/core/model_g2v.pye|Oe|OpfX%72XD|yaDtensorflow_graphics/projects/local_implicit_grid/core/point_utils.pye|Oe|OpgebN %`3}bDtensorflow_graphics/projects/local_implicit_grid/core/postprocess.pye|Oe|OpM<%!$LQL'e`{=:Gtensorflow_graphics/projects/local_implicit_grid/core/reconstruction.pye|Oe|Op^sZ GD:cXStensorflow_graphics/projects/local_implicit_grid/core/regular_grid_interpolation.pye|Oe|OpًO͢|\lOHtensorflow_graphics/projects/local_implicit_grid/reconstruct_geometry.pye!"Be!"Bp r~c(nhY>UAtensorflow_graphics/projects/local_implicit_grid/requirements.txte|Oe|Op oߠyLV;Etensorflow_graphics/projects/local_implicit_grid/resample_geometry.pye!"Be!"Bp (C-" ТoJuD7tensorflow_graphics/projects/local_implicit_grid/run.she!"Be!"Bp p"JxX"dS+tensorflow_graphics/projects/nasa/README.mde!"Be!"Bp !>s5θDY]Mi5CH;)tensorflow_graphics/projects/nasa/eval.pye!"Be!"Bp 5Ɇ8{X7 1tensorflow_graphics/projects/nasa/lib/datasets.pye!"Be!"Bp "OLҖfiU=4tensorflow_graphics/projects/nasa/lib/model_utils.pye|Oe|Op`T w#ch0(/tensorflow_graphics/projects/nasa/lib/models.pye!"Be!"Bp 5 ,R]+W2jÎ=M8.tensorflow_graphics/projects/nasa/lib/utils.pye!"Be!"Bp n >jZA,? 
[2tensorflow_graphics/projects/nasa/requirements.txte!"Be!"Bp  7JzɳU2e){*tensorflow_graphics/projects/nasa/track.pye|Oe|OpLk}Y؂0c*tensorflow_graphics/projects/nasa/train.pye!"Be!"Bp  nS\Н4%sx<tensorflow_graphics/projects/neural_voxel_renderer/README.mde!"Be!"Bp nNnŹa m@=1[>tensorflow_graphics/projects/neural_voxel_renderer/__init__.pye!"Be!"Bp El i &s&=tensorflow_graphics/projects/neural_voxel_renderer/demo.ipynbe|Oe|Op0juj}$ZH=tensorflow_graphics/projects/neural_voxel_renderer/helpers.pye!"Be!"Bp i,!_ViKOcsMp<tensorflow_graphics/projects/neural_voxel_renderer/layers.pye!"Be!"Bp __Foc1~*ԺP<tensorflow_graphics/projects/neural_voxel_renderer/models.pye!"Be!"Bp  ni|JW˥jѮNtensorflow_graphics/projects/neural_voxel_renderer/prepare_tfrecords/README.mde!"Be!"Bp H3i\l!hh9ݐOtensorflow_graphics/projects/neural_voxel_renderer/prepare_tfrecords/data.protoe!"Be!"Bp 'l)-2x!e_tensorflow_graphics/projects/neural_voxel_renderer/prepare_tfrecords/download_colored_voxels.she!"Be!"Bp 09iKL.]ki!%ctensorflow_graphics/projects/neural_voxel_renderer/prepare_tfrecords/generate_tfrecords_nvr_plus.pye!"Be!"Bp >}5Aq_\S>tensorflow_graphics/projects/neural_voxel_renderer/train.ipynbe!"Be!"Bp B"YoM%bza/tensorflow_graphics/projects/pointnet/README.mde!"Be!"Bp aʰALU3?.  
1tensorflow_graphics/projects/pointnet/__init__.pye!"Be!"Bp 6@N#?%E#g03tensorflow_graphics/projects/pointnet/aiplatform.she!"Be!"Bp &x 5Jq0tensorflow_graphics/projects/pointnet/augment.pye!"Be!"Bp  'V\ljF.0tensorflow_graphics/projects/pointnet/helpers.pye!"Be!"Bp *OBgS .tensorflow_graphics/projects/pointnet/train.pye!"Be!"Bp ӿQuv&~b6P-<3tensorflow_graphics/projects/pointnet/train_test.pye|Oe|OpLGi[ K`{=$)tensorflow_graphics/rendering/__init__.pye8e8p<޷$򭖭v+($0tensorflow_graphics/rendering/camera/__init__.pye|Oe|OpNMMUB<VeF4tensorflow_graphics/rendering/camera/orthographic.pye|Oe|OpOEےL5@ahZ]8-3tensorflow_graphics/rendering/camera/perspective.pye|Oe|OpP& {]׍g۲RCtensorflow_graphics/rendering/camera/quadratic_radial_distortion.pye!_>e!_>p Jw0<5e*H6tensorflow_graphics/rendering/camera/tests/__init__.pye!_>e!_>p bҗk*qh[n%j?tensorflow_graphics/rendering/camera/tests/orthographic_test.pye|Oe|OpQSIXsZMFr~&d>tensorflow_graphics/rendering/camera/tests/perspective_test.pye!_>e!_>p 0+.\Ia <WݣNtensorflow_graphics/rendering/camera/tests/quadratic_radial_distortion_test.pye|Oe|Op$o*f"j!A]q<,tensorflow_graphics/rendering/framebuffer.pye|Oe|OpS2׉^J;})} pAtensorflow_graphics/rendering/kernels/rasterize_triangles_impl.cce|Oe|OpT ]D@p.gNIbTn]@tensorflow_graphics/rendering/kernels/rasterize_triangles_impl.he|Oe|OpV6mtUg@ 6z?tensorflow_graphics/rendering/kernels/rasterize_triangles_op.cce!_>e!_>p #X ?f@> _kjB2/tensorflow_graphics/rendering/light/__init__.pye|Oe|OpX!|ΥplƸ an2tensorflow_graphics/rendering/light/point_light.pye!_>e!_>p &Jw0<5e*H5tensorflow_graphics/rendering/light/tests/__init__.pye!_>e!_>p '*?7۲VU\/  =tensorflow_graphics/rendering/light/tests/point_light_test.pye|Oe|OpY8(a8AP͖*tensorflow_graphics/rendering/opengl/BUILDe!_>e!_>p *ղ+ ׇ~R&0tensorflow_graphics/rendering/opengl/__init__.pye!_>e!_>p + E>sn5<K3.tensorflow_graphics/rendering/opengl/cleanup.he|Oe|OpZ*S!E*azc:BC=tensorflow_graphics/rendering/opengl/egl_offscreen_context.cce!_>e!_>p 
-0EDl&[a=Rb=l<tensorflow_graphics/rendering/opengl/egl_offscreen_context.he!_>e!_>p .:8`ؒ6c4*0: k$0tensorflow_graphics/rendering/opengl/egl_util.cce!_>e!_>p / cMUF.f ֬/tensorflow_graphics/rendering/opengl/egl_util.he|Oe|Op[sƺ)UEfe}%2tensorflow_graphics/rendering/opengl/gl_program.cce!_>e!_>p 1>oFĴH쮀 i#1tensorflow_graphics/rendering/opengl/gl_program.he|Oe|Op\8Y=aBHz=Ͼ6t9tensorflow_graphics/rendering/opengl/gl_render_targets.cce|Oe|Op_&HL aHfR8tensorflow_graphics/rendering/opengl/gl_render_targets.he|Oe|Op`$  p(MI@tensorflow_graphics/rendering/opengl/gl_shader_storage_buffer.cce|Oe|Opa ˧?dѷZ&Ts?tensorflow_graphics/rendering/opengl/gl_shader_storage_buffer.he!_>e!_>p 6 ux5!%˕S y }-tensorflow_graphics/rendering/opengl/macros.he|Oe|Opb[̾!! E醧{",tensorflow_graphics/rendering/opengl/math.pye|Oe|Opc ki36RJ3\=tensorflow_graphics/rendering/opengl/rasterization_backend.pye|Oe|OpdNSWRV(2tensorflow_graphics/rendering/opengl/rasterizer.cce|Oe|Opi-=T;=.3yY?F0z@1tensorflow_graphics/rendering/opengl/rasterizer.he|Oe|Opk7##KT\!6O5tensorflow_graphics/rendering/opengl/rasterizer_op.cce|Oe|Opn iNz<" ??tensorflow_graphics/rendering/opengl/rasterizer_with_context.cce|Oe|Oppcƽd]ÙB:˹u>tensorflow_graphics/rendering/opengl/rasterizer_with_context.he|Oe|Op~=a xߤ3`7tensorflow_graphics/rendering/opengl/tests/math_test.pye|Oe|Opr iI` q6*]$iHtensorflow_graphics/rendering/opengl/tests/rasterization_backend_test.pye|Oe|Opsخigv@hK@tensorflow_graphics/rendering/opengl/tests/rasterizer_op_test.pye|Oe|OptsnjmEq'C@tensorflow_graphics/rendering/opengl/thread_safe_resource_pool.he|Oe|Opv 'tXc Ѿ!U8B]6tensorflow_graphics/rendering/rasterization_backend.pye!:e!:p F=w_<IXZmFtء5tensorflow_graphics/rendering/reflectance/__init__.pye|Oe|Opw]2š8/--8G8tensorflow_graphics/rendering/reflectance/blinn_phong.pyeXeXpx<1k2^Xi{ce7tensorflow_graphics/rendering/reflectance/lambertian.pyeXeXpy,h%y4Kj6_<c2tensorflow_graphics/rendering/reflectance/phong.pye!:e!:p 
LJw0<5e*H;tensorflow_graphics/rendering/reflectance/tests/__init__.pye!:e!:p M %2[^1woD5 a=SCtensorflow_graphics/rendering/reflectance/tests/blinn_phong_test.pye!:e!:p N?;|:# p߭Btensorflow_graphics/rendering/reflectance/tests/lambertian_test.pye!:e!:p O. bol#R5zt=tensorflow_graphics/rendering/reflectance/tests/phong_test.pye!:e!:p RJw0<5e*H/tensorflow_graphics/rendering/tests/__init__.pyeXeXp%vqomrLn t^ãAW.7tensorflow_graphics/rendering/tests/framebuffer_test.pyeXeXp|ѵV81lzk4tensorflow_graphics/rendering/triangle_rasterizer.pyeXeXp(2xb-7Ǧ/+ 0tensorflow_graphics/rendering/voxels/__init__.pyeXeXp*  Q+G2tensorflow_graphics/rendering/voxels/absorption.pyeXeXp+ 㐱7KU# Ϳ;tensorflow_graphics/rendering/voxels/emission_absorption.pyeXeXp-Jw0<5e*H6tensorflow_graphics/rendering/voxels/tests/__init__.pyeXeXp. ę[c8d˴T$R=tensorflow_graphics/rendering/voxels/tests/absorption_test.pyeXeXp/ Ojp4L:&I%z/LFtensorflow_graphics/rendering/voxels/tests/emission_absorption_test.pyeXeXp0)(iBZ(q/y:tensorflow_graphics/rendering/voxels/tests/test_helpers.pyeXeXp ?ބ`#X8"w%]0N>tensorflow_graphics/rendering/voxels/tests/visual_hull_test.pyeXeXpw9ߒ4~3tensorflow_graphics/rendering/voxels/visual_hull.pye!:e!:p kOl&0#7_tensorflow_graphics/tensorboard/mesh_visualizer/tf_mesh_dashboard/array-buffer-data-provider.jse!:e!:p 9KQxswEPPtensorflow_graphics/tensorboard/mesh_visualizer/tf_mesh_dashboard/mesh-viewer.jse!:e!:p Q7є&Y(s <$tensorflow_graphics/util/__init__.pyeXeXp1Pi}3uLFbz>-D\L#tensorflow_graphics/util/asserts.pye!:e!:p 6́>&M9Xtensorflow_graphics/util/doc.pyeXeXp|ߢ |iP%6i&tensorflow_graphics/util/export_api.pyeXeXp)w36f hN`O^D$tensorflow_graphics/util/safe_ops.pyeXeXpD.zkP!tensorflow_graphics/util/shape.pyeXeXp<b(r_0`$*E1%tensorflow_graphics/util/test_case.pye!:e!:p Jw0<5e*H*tensorflow_graphics/util/tests/__init__.pyeXeXp@,+$0FjG.tensorflow_graphics/util/tests/asserts_test.pye!:e!:p ={콜QF:epD:1tensorflow_graphics/util/tests/export_api_test.pye!:e!:p `a!4:)#1 
TP0/tensorflow_graphics/util/tests/safe_ops_test.pye!:e!:p 8IsbIY MI,tensorflow_graphics/util/tests/shape_test.pye!:e!:p  {]zȇ9cZZ0tensorflow_graphics/util/tests/test_case_test.pye!:e!:p Q? \'*PT@gC%tensorflow_graphics/util/tfg_flags.pyeXeXpc"|YEXϖ M&tensorflow_graphics/util/type_alias.pyTREE 360 3 K斨 ϒϋVx.github2 1 ~b}ȬnTilworkflows2 0 5]cn^ Isubmodules2 0 d-b ‡@tensorflow_graphics342 12 ߹x78. B|a#io5 1 uoD; \*BCktests2 0 # 2<B8nn21 3 r:I ?,0bloss4 1 %Wά ɴF(tests2 0 (<t<'/Dlayer6 1 Za2 !l|ftests3 0 #uJ<`dT3ПePmmetric10 1 +rM Gxtests5 0 O9ytm7r(math22 3 __A9;:G.tests4 0 pbtlf_(thoptimizer4 1 a۟3z@Ktests2 0 pn=,>ۨ$at;interpolation10 1 ,L͙lYl~tests5 0 KغQYI:util15 1 ǟ;@ȹ&s.tests6 0 W<"!8y *g3doc8 0 / #\so(riqimage15 2 ]y8DTN8[tests4 0 F{C9color_space7 1 aS@β2Ctests3 0 n7#5X2]datasets65 5 hfv4,Ƕ9pix3d25 3 "huCUI e!-HSfakes17 3 t6~/W_ne;img3 1 yiQ^r1 <8nbed3 0 p>C;w2HG?mask3 1 O'c5fxf[bed3 0 Dy8p5Kmodel8 1 7<Qbed8 2 o  E`$ WIKEA_MALM_24 0 +*N~L/Wl!$IKEA_MALM_34 0 nfo g?vnP5?splits2 0 ɀ՟Fut+fixed_masks2 0 6`Fލ(3^J:testing2 1 zspy g]`Ametadata1 1 C{*|Gk(.model_net401 1 peq`{"ֽP9 U1.0.01 0 Hynrsefeatures11 1 _=Yvn8s .test_data2 0 [6%I?.Lձshapenet11 1 (Q|",0hVfakes7 2 d-,[V<o026911564 4 d[~{ݖ$V3d5354863690ac7eca27bba175814d11 1 `8YP3){vTmodels1 0  Yj3dr2- 7eff60e0d72800b8ca8607f540cc62ba1 1 Nv.@nQs:`(O$models1 0 y==B@v*px9550774ad1c19b24a5a118bd15e6e34f1 1 ='_qs@0ߩ<models1 0 dٺarwjf3^Ka98038807a61926abce962d6c4b373361 1 ?H$)\*b 磻models1 0 +z |X0 䒸030016271 1 {ׅ=lkWll{a800bd725fe116447a84e76181a9e08f1 1 >M0VyhkjYVY;models1 0 lY8KEC`\"Ӵmodelnet4015 1 ~S|@zWLfakes7 1 p~^IƟ]8H`cmodelnet40_ply_hdf5_20487 0  ;%}geometry54 4 .3`2֋ϣsgH4kRconvolution8 1 v;j!:&jͿtests4 0 ,)-1ʕx!f|representation19 2 +vpRRJǾXUmesh9 1 642^! 
tests5 0 xY8)!yߵ‹Ltests5 0 炊0u<Իtransformation22 1 Qqed@Oc:(tests12 0 mV#yaC",/Z]<߻Ddeformation_energy4 1 r  6Dtests2 0 CְZFƬBZ$projects53 5 Xv]l*1nasa9 1 X8$FpkT"Vlib4 0 މl$wUFO(mcvxnet10 1 <>_e'jGop~ilib5 1 O@zI"?1libmise1 0 cNjB?:Mpointnet7 0 WlyvG"Fʒlocal_implicit_grid14 1 Phb$ yU+core9 0 j׾pbt?>neural_voxel_renderer11 1 J|YAĒ=jprepare_tfrecords4 0 3cn#jg6knotebooks16 1 /LeTRB:resources3 0 ugKNa2=rendering63 7 h cuݏDZlight4 1 o1WZQ:v*;dJtests2 0 VƙF0% Ur{tests2 0 wӥ=>+camera8 1 Jlzkۼ9ugtests4 0 O@0|Bg=>u?2opengl25 1 L5eTq!1 OXtests3 0 ᐑLz3Ԕ+voxels9 1 kS4|R D;\W#tests5 0 GwSw;{q#nkernels3 0 fڥڜtSܹ4reflectance8 1 "ocZ˝+OԎtests4 0 h?gBQtensorboard2 1 36`K9I=v mesh_visualizer2 1 scTI D6!tf_mesh_dashboard2 0 (krO?j /h[Tc<z2%;g
-1
tensorflow/graphics
486
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
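The TF1-to-TF2 symbol renames listed above can be captured as a table-driven source rewrite. A minimal sketch follows; the `TF2_MIGRATIONS` table and `migrate_source` helper are hypothetical illustration names, not part of this PR or of the official `tf_upgrade_v2` tool:

```python
# Hypothetical mapping of the TF1 compat symbols named in the PR description
# to their TensorFlow 2 equivalents.
TF2_MIGRATIONS = {
    "tf.compat.v1.name_scope": "tf.name_scope",
    "tf.compat.v1.where": "tf.where",
    "tf.compat.v1.assert_equal": "tf.debugging.assert_equal",
    "tf.compat.v1.dimension_value": "tf.compat.dimension_value",
}


def migrate_source(source: str) -> str:
    """Naively rewrites TF1 compat symbols in a source string to TF2 names."""
    # Replace longer symbols first so that a shorter symbol that is a prefix
    # of a longer one cannot clobber it.
    for old, new in sorted(TF2_MIGRATIONS.items(),
                           key=lambda kv: len(kv[0]),
                           reverse=True):
        source = source.replace(old, new)
    return source
```

A textual replace like this is only a sketch; the real migration in the PR was done by hand so that, for example, `tf.compat.v1.Session()` could be kept in the few tests that still depend on TF1-only utilities such as `assert_jacobian_is_finite()`.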
copybara-service[bot]
"2021-01-29T04:02:31Z"
"2021-02-07T22:38:58Z"
9d257ad4a72ccf65e4349910b9fff7c0a5648073
f683a9a5794bade30ede447339394e84b44acc0b
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
./tensorflow_graphics/datasets/pix3d/fixed_masks/0045.png
[binary PNG mask data; before and after file contents are byte-identical]
-1
tensorflow/graphics
486
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
copybara-service[bot]
"2021-01-29T04:02:31Z"
"2021-02-07T22:38:58Z"
9d257ad4a72ccf65e4349910b9fff7c0a5648073
f683a9a5794bade30ede447339394e84b44acc0b
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
./tensorflow_graphics/rendering/opengl/tests/rasterization_backend_test.py
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from absl.testing import parameterized
import numpy as np
import tensorflow as tf

from tensorflow_graphics.geometry.representation import grid
from tensorflow_graphics.geometry.transformation import look_at
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.rendering.opengl import math as glm
from tensorflow_graphics.rendering.opengl import rasterization_backend
from tensorflow_graphics.util import test_case

_IMAGE_HEIGHT = 5
_IMAGE_WIDTH = 7
_TRIANGLE_SIZE = 2.0


def _generate_vertices_and_view_matrices():
  camera_origin = ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
  camera_up = ((0.0, 1.0, 0.0), (0.0, 1.0, 0.0))
  look_at_point = ((0.0, 0.0, 1.0), (0.0, 0.0, -1.0))
  field_of_view = ((60 * np.math.pi / 180,), (60 * np.math.pi / 180,))
  near_plane = ((0.01,), (0.01,))
  far_plane = ((400.0,), (400.0,))
  aspect_ratio = ((float(_IMAGE_WIDTH) / float(_IMAGE_HEIGHT),),
                  (float(_IMAGE_WIDTH) / float(_IMAGE_HEIGHT),))
  # Construct the view projection matrix.
  world_to_camera = look_at.right_handed(camera_origin, look_at_point,
                                         camera_up)
  perspective_matrix = perspective.right_handed(field_of_view, aspect_ratio,
                                                near_plane, far_plane)
  view_projection_matrix = tf.linalg.matmul(perspective_matrix,
                                            world_to_camera)
  depth = 1.0
  vertices = (((-10.0 * _TRIANGLE_SIZE, 10.0 * _TRIANGLE_SIZE, depth),
               (10.0 * _TRIANGLE_SIZE, 10.0 * _TRIANGLE_SIZE, depth),
               (0.0, -10.0 * _TRIANGLE_SIZE, depth)),
              ((-_TRIANGLE_SIZE, 0.0, depth), (0.0, _TRIANGLE_SIZE, depth),
               (0.0, 0.0, depth)))
  return vertices, view_projection_matrix


def _proxy_rasterize(vertices, triangles, view_projection_matrices):
  return rasterization_backend.rasterize(vertices, triangles,
                                         view_projection_matrices,
                                         (_IMAGE_WIDTH, _IMAGE_HEIGHT))


class RasterizationBackendTest(test_case.TestCase):

  @parameterized.parameters(
      ("must have exactly 3 dimensions in axis -1", (2, 6, 32, 2), (17, 3),
       (2, 6, 4, 4)),
      ("must have exactly 3 dimensions in axis -1", (2, 6, 32, 3), (17, 2),
       (2, 6, 4, 4)),
      ("must have a rank of 2", (2, 6, 32, 3), (3, 17, 2), (2, 6, 4, 4)),
      ("must have exactly 4 dimensions in axis -1", (2, 6, 32, 3), (17, 3),
       (2, 6, 4, 3)),
      ("must have exactly 4 dimensions in axis -2", (2, 6, 32, 3), (17, 3),
       (2, 6, 3, 4)),
      ("Not all batch dimensions are broadcast-compatible", (3, 6, 32, 3),
       (17, 3), (5, 6, 4, 4)),
  )
  def test_rasterize_exception_raised(self, error_msg, *shapes):
    """Tests that the shape exceptions are properly raised."""
    self.assert_exception_is_raised(_proxy_rasterize, error_msg, shapes)

  @parameterized.parameters(
      (((32, 3), (17, 3), (4, 4)), (tf.float32, tf.int32, tf.float32)),
      (((None, 32, 3), (17, 3), (None, 4, 4)),
       (tf.float32, tf.int32, tf.float32)),
      (((None, 9, 32, 3), (17, 3), (None, 9, 4, 4)),
       (tf.float32, tf.int32, tf.float32)),
  )
  def test_rasterize_exception_not_raised(self, shapes, dtypes):
    self.assert_exception_is_not_raised(
        _proxy_rasterize, shapes=shapes, dtypes=dtypes)

  def test_rasterize_batch_vertices_only(self):
    triangles = np.array(((0, 1, 2),), np.int32)
    vertices, view_projection_matrix = _generate_vertices_and_view_matrices()
    predicted_fb = rasterization_backend.rasterize(
        vertices, triangles, view_projection_matrix[0],
        (_IMAGE_WIDTH, _IMAGE_HEIGHT))
    mask = predicted_fb.foreground_mask
    self.assertAllEqual(mask[0, ...], tf.ones_like(mask[0, ...]))
    gt_layer_1 = np.zeros((_IMAGE_HEIGHT, _IMAGE_WIDTH, 1), np.float32)
    gt_layer_1[_IMAGE_HEIGHT // 2:, _IMAGE_WIDTH // 2:, 0] = 1.0
    self.assertAllEqual(mask[1, ...], gt_layer_1)

  def test_rasterize_batch_view_only(self):
    triangles = np.array(((0, 1, 2),), np.int32)
    vertices, view_projection_matrix = _generate_vertices_and_view_matrices()
    predicted_fb = rasterization_backend.rasterize(
        vertices[0], triangles, view_projection_matrix,
        (_IMAGE_WIDTH, _IMAGE_HEIGHT))
    self.assertAllEqual(predicted_fb.foreground_mask[0, ...],
                        tf.ones_like(predicted_fb.foreground_mask[0, ...]))
    self.assertAllEqual(predicted_fb.foreground_mask[1, ...],
                        tf.zeros_like(predicted_fb.foreground_mask[1, ...]))

  def test_rasterize_preset(self):
    camera_origin = (0.0, 0.0, 0.0)
    camera_up = (0.0, 1.0, 0.0)
    look_at_point = (0.0, 0.0, 1.0)
    field_of_view = (60 * np.math.pi / 180,)
    near_plane = (0.01,)
    far_plane = (400.0,)
    # Construct the view projection matrix.
    model_to_eye_matrix = look_at.right_handed(camera_origin, look_at_point,
                                               camera_up)
    perspective_matrix = perspective.right_handed(
        field_of_view, (float(_IMAGE_WIDTH) / float(_IMAGE_HEIGHT),),
        near_plane, far_plane)
    view_projection_matrix = tf.linalg.matmul(perspective_matrix,
                                              model_to_eye_matrix)
    depth = 1.0
    vertices = ((-2.0 * _TRIANGLE_SIZE, 0.0, depth),
                (0.0, _TRIANGLE_SIZE, depth), (0.0, 0.0, depth),
                (0.0, -_TRIANGLE_SIZE, depth))
    triangles = np.array(((1, 2, 0), (0, 2, 3)), np.int32)
    predicted_fb = rasterization_backend.rasterize(
        vertices, triangles, view_projection_matrix,
        (_IMAGE_WIDTH, _IMAGE_HEIGHT))

    with self.subTest(name="triangle_index"):
      groundtruth_triangle_index = np.zeros((_IMAGE_HEIGHT, _IMAGE_WIDTH, 1),
                                            dtype=np.int32)
      groundtruth_triangle_index[..., :_IMAGE_WIDTH // 2, 0] = 0
      groundtruth_triangle_index[:_IMAGE_HEIGHT // 2, _IMAGE_WIDTH // 2:, 0] = 1
      self.assertAllEqual(groundtruth_triangle_index, predicted_fb.triangle_id)

    with self.subTest(name="mask"):
      groundtruth_mask = np.ones((_IMAGE_HEIGHT, _IMAGE_WIDTH, 1),
                                 dtype=np.int32)
      groundtruth_mask[..., :_IMAGE_WIDTH // 2, 0] = 0
      self.assertAllEqual(groundtruth_mask, predicted_fb.foreground_mask)

    attributes = np.array(
        ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))).astype(np.float32)
    perspective_correct_interpolation = lambda geometry, pixels: glm.perspective_correct_interpolation(  # pylint: disable=g-long-lambda,line-too-long
        geometry, attributes, pixels, model_to_eye_matrix, perspective_matrix,
        np.array((_IMAGE_WIDTH, _IMAGE_HEIGHT)).astype(np.float32),
        np.array((0.0, 0.0)).astype(np.float32))

    with self.subTest(name="barycentric_coordinates_triangle_0"):
      geometry_0 = tf.gather(vertices, triangles[0, :])
      pixels_0 = tf.transpose(
          grid.generate((3.5, 2.5), (6.5, 4.5), (4, 3)), perm=(1, 0, 2))
      barycentrics_gt_0 = perspective_correct_interpolation(
          geometry_0, pixels_0)
      self.assertAllClose(
          barycentrics_gt_0,
          predicted_fb.barycentrics.value[2:, 3:, :],
          atol=1e-3)

    with self.subTest(name="barycentric_coordinates_triangle_1"):
      geometry_1 = tf.gather(vertices, triangles[1, :])
      pixels_1 = tf.transpose(
          grid.generate((3.5, 0.5), (6.5, 1.5), (4, 2)), perm=(1, 0, 2))
      barycentrics_gt_1 = perspective_correct_interpolation(
          geometry_1, pixels_1)
      self.assertAllClose(
          barycentrics_gt_1,
          predicted_fb.barycentrics.value[0:2, 3:, :],
          atol=1e-3)
-1
tensorflow/graphics
486
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where. -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
copybara-service[bot]
"2021-01-29T04:02:31Z"
"2021-02-07T22:38:58Z"
9d257ad4a72ccf65e4349910b9fff7c0a5648073
f683a9a5794bade30ede447339394e84b44acc0b
Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. Following changes are made to the library code: - tf.compat.v1.name_scope -> tf.name_scope - tf.compat.v1.where. -> tf.where - tf.compat.v1.assert_equal -> tf.debugging.assert_equal - tf.compat.v1.dimension_value -> tf.compat.dimension_value Following changes are made to the test code: - Remove tf.compat.v1.get_variable() - Remove tf.compat.v1.global_variables_initializer() - Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
./tensorflow_graphics/image/color_space/tests/linear_rgb_test.py
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for srgb."""

from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np

from tensorflow_graphics.image.color_space import linear_rgb
from tensorflow_graphics.image.color_space import srgb
from tensorflow_graphics.util import test_case


class LinearRGBTest(test_case.TestCase):

  def test_cycle_srgb_linear_rgb_srgb_for_random_input(self):
    """Tests loop from sRGB to linear RGB and back for random inputs."""
    tensor_size = np.random.randint(3)
    tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
    srgb_input = np.random.uniform(size=tensor_shape + [3])
    linear_output = linear_rgb.from_srgb(srgb_input)
    srgb_recovered = srgb.from_linear_rgb(linear_output)
    self.assertAllClose(srgb_input, srgb_recovered)

  @parameterized.parameters(
      (((0., 0.5, 1.), (0.0404, 0.04045, 0.0405)),
       ((0., 0.214041, 1.), (0.003127, 0.003131, 0.003135))),)
  def test_from_srgb_preset(self, test_inputs, test_outputs):
    """Tests conversion from sRGB to linear RGB space for preset inputs."""
    self.assert_output_is_correct(linear_rgb.from_srgb, (test_inputs,),
                                  (test_outputs,))

  def test_from_srgb_jacobian_random(self):
    """Tests the Jacobian of the from_srgb function for random inputs."""
    tensor_size = np.random.randint(3)
    tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
    srgb_random_init = np.random.uniform(size=tensor_shape + [3])
    self.assert_jacobian_is_correct_fn(linear_rgb.from_srgb,
                                       [srgb_random_init])

  @parameterized.parameters(
      (np.array((0., 0.01, 0.02)),), (np.array((0.05, 0.06, 1.)),),
      (np.array((0.01, 0.04, 0.06)),))
  @flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
  def test_from_srgb_jacobian_preset(self, inputs_init):
    """Tests the Jacobian of the from_srgb function for preset inputs."""
    self.assert_jacobian_is_correct_fn(linear_rgb.from_srgb, [inputs_init])

  @parameterized.parameters(
      ((3,),),
      ((None, None, None, 3),),
  )
  def test_from_srgb_exception_not_raised(self, *shape):
    """Tests that the shape exceptions are not raised."""
    self.assert_exception_is_not_raised(linear_rgb.from_srgb, shape)

  @parameterized.parameters(
      ("must have a rank greater than 0", ()),
      ("must have exactly 3 dimensions in axis -1", (2, 3, 4)),
  )
  def test_from_srgb_exception_raised(self, error_msg, *shape):
    """Tests that the shape exceptions are properly raised."""
    self.assert_exception_is_raised(linear_rgb.from_srgb, error_msg, shape)


if __name__ == "__main__":
  test_case.main()
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/rendering/opengl/math.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module implements math routines used by OpenGL.""" import tensorflow as tf from tensorflow_graphics.geometry.transformation import look_at from tensorflow_graphics.math.interpolation import weighted from tensorflow_graphics.rendering.camera import perspective from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def model_to_eye(point_model_space, camera_position, look_at_point, up_vector, name=None): """Transforms points from model to eye coordinates. Note: In the following, A1 to An are optional batch dimensions which must be broadcast compatible. Args: point_model_space: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the 3D points in model space. camera_position: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the 3D position of the camera. look_at_point: A tensor of shape `[A1, ..., An, 3]`, with the last dimension storing the position where the camera is looking at. up_vector: A tensor of shape `[A1, ..., An, 3]`, where the last dimension defines the up vector of the camera. name: A name for this op. Defaults to 'model_to_eye'. Raises: ValueError: if the all the inputs are not of the same shape, or if any input of of an unsupported shape. Returns: A tensor of shape `[A1, ..., An, 3]`, containing `point_model_space` in eye coordinates. 
""" with tf.compat.v1.name_scope( name, "model_to_eye", [point_model_space, camera_position, look_at_point, up_vector]): point_model_space = tf.convert_to_tensor(value=point_model_space) camera_position = tf.convert_to_tensor(value=camera_position) look_at_point = tf.convert_to_tensor(value=look_at_point) up_vector = tf.convert_to_tensor(value=up_vector) shape.check_static( tensor=point_model_space, tensor_name="point_model_space", has_dim_equals=(-1, 3)) shape.compare_batch_dimensions( tensors=(point_model_space, camera_position), last_axes=-2, tensor_names=("point_model_space", "camera_position"), broadcast_compatible=True) model_to_eye_matrix = look_at.right_handed(camera_position, look_at_point, up_vector) batch_shape = tf.shape(input=point_model_space)[:-1] one = tf.ones( shape=tf.concat((batch_shape, (1,)), axis=-1), dtype=point_model_space.dtype) point_model_space = tf.concat((point_model_space, one), axis=-1) point_model_space = tf.expand_dims(point_model_space, axis=-1) res = tf.squeeze(tf.matmul(model_to_eye_matrix, point_model_space), axis=-1) return res[..., :-1] def eye_to_clip(point_eye_space, vertical_field_of_view, aspect_ratio, near, far, name=None): """Transforms points from eye to clip space. Note: In the following, A1 to An are optional batch dimensions which must be broadcast compatible. Args: point_eye_space: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the 3D points in eye coordinates. vertical_field_of_view: A tensor of shape `[A1, ..., An, 1]`, where the last dimension represents the vertical field of view of the frustum. Note that values for `vertical_field_of_view` must be in the range ]0,pi[. aspect_ratio: A tensor of shape `[A1, ..., An, 1]`, where the last dimension stores the width over height ratio of the frustum. Note that values for `aspect_ratio` must be non-negative. 
near: A tensor of shape `[A1, ..., An, 1]`, where the last dimension captures the distance between the viewer and the near clipping plane. Note that values for `near` must be non-negative. far: A tensor of shape `[A1, ..., An, 1]`, where the last dimension captures the distance between the viewer and the far clipping plane. Note that values for `far` must be non-negative. name: A name for this op. Defaults to 'eye_to_clip'. Raises: ValueError: If any input is of an unsupported shape. Returns: A tensor of shape `[A1, ..., An, 4]`, containing `point_eye_space` in homogeneous clip coordinates. """ with tf.compat.v1.name_scope( name, "eye_to_clip", [point_eye_space, vertical_field_of_view, aspect_ratio, near, far]): point_eye_space = tf.convert_to_tensor(value=point_eye_space) vertical_field_of_view = tf.convert_to_tensor(value=vertical_field_of_view) aspect_ratio = tf.convert_to_tensor(value=aspect_ratio) near = tf.convert_to_tensor(value=near) far = tf.convert_to_tensor(value=far) shape.check_static( tensor=point_eye_space, tensor_name="point_eye_space", has_dim_equals=(-1, 3)) shape.check_static( tensor=vertical_field_of_view, tensor_name="vertical_field_of_view", has_dim_equals=(-1, 1)) shape.check_static( tensor=aspect_ratio, tensor_name="aspect_ratio", has_dim_equals=(-1, 1)) shape.check_static(tensor=near, tensor_name="near", has_dim_equals=(-1, 1)) shape.check_static(tensor=far, tensor_name="far", has_dim_equals=(-1, 1)) shape.compare_batch_dimensions( tensors=(point_eye_space, vertical_field_of_view, aspect_ratio, near, far), last_axes=-2, tensor_names=("point_eye_space", "vertical_field_of_view", "aspect_ratio", "near", "far"), broadcast_compatible=True) perspective_matrix = perspective.right_handed(vertical_field_of_view, aspect_ratio, near, far) batch_shape = tf.shape(input=point_eye_space)[:-1] one = tf.ones( shape=tf.concat((batch_shape, (1,)), axis=-1), dtype=point_eye_space.dtype) point_eye_space = tf.concat((point_eye_space, one), axis=-1) 
point_eye_space = tf.expand_dims(point_eye_space, axis=-1) return tf.squeeze(tf.matmul(perspective_matrix, point_eye_space), axis=-1) def clip_to_ndc(point_clip_space, name=None): """Transforms points from clip to normalized device coordinates (ndc). Note: In the following, A1 to An are optional batch dimensions. Args: point_clip_space: A tensor of shape `[A1, ..., An, 4]`, where the last dimension represents points in clip space. name: A name for this op. Defaults to 'clip_to_ndc'. Raises: ValueError: If `point_clip_space` is not of size 4 in its last dimension. Returns: A tensor of shape `[A1, ..., An, 3]`, containing `point_clip_space` in normalized device coordinates. """ with tf.compat.v1.name_scope(name, "clip_to_ndc", [point_clip_space]): point_clip_space = tf.convert_to_tensor(value=point_clip_space) shape.check_static( tensor=point_clip_space, tensor_name="point_clip_space", has_dim_equals=(-1, 4)) w = point_clip_space[..., -1:] return point_clip_space[..., :3] / w def ndc_to_screen(point_ndc_space, lower_left_corner, screen_dimensions, near, far, name=None): """Transforms points from normalized device coordinates to screen coordinates. Note: In the following, A1 to An are optional batch dimensions which must be broadcast compatible between `point_ndc_space` and the other variables. Args: point_ndc_space: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents points in normalized device coordinates. lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last dimension captures the position (in pixels) of the lower left corner of the screen. screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last dimension is expressed in pixels and captures the width and the height (in pixels) of the screen. near: A tensor of shape `[A1, ..., An, 1]`, where the last dimension captures the distance between the viewer and the near clipping plane. Note that values for `near` must be non-negative. 
far: A tensor of shape `[A1, ..., An, 1]`, where the last dimension captures the distance between the viewer and the far clipping plane. Note that values for `far` must be greater than those of `near`. name: A name for this op. Defaults to 'ndc_to_screen'. Raises: InvalidArgumentError: if any input contains data not in the specified range of valid values. ValueError: If any input is of an unsupported shape. Returns: A tensor of shape `[A1, ..., An, 3]`, containing `point_ndc_space` in screen coordinates. """ with tf.compat.v1.name_scope( name, "ndc_to_screen", [point_ndc_space, lower_left_corner, screen_dimensions, near, far]): point_ndc_space = tf.convert_to_tensor(value=point_ndc_space) lower_left_corner = tf.convert_to_tensor(value=lower_left_corner) screen_dimensions = tf.convert_to_tensor(value=screen_dimensions) near = tf.convert_to_tensor(value=near) far = tf.convert_to_tensor(value=far) shape.check_static( tensor=point_ndc_space, tensor_name="point_ndc_space", has_dim_equals=(-1, 3)) shape.check_static( tensor=lower_left_corner, tensor_name="lower_left_corner", has_dim_equals=(-1, 2)) shape.check_static( tensor=screen_dimensions, tensor_name="screen_dimensions", has_dim_equals=(-1, 2)) shape.check_static(tensor=near, tensor_name="near", has_dim_equals=(-1, 1)) shape.check_static(tensor=far, tensor_name="far", has_dim_equals=(-1, 1)) shape.compare_batch_dimensions( tensors=(lower_left_corner, screen_dimensions, near, far), last_axes=-2, tensor_names=("lower_left_corner", "screen_dimensions", "near", "far"), broadcast_compatible=False) shape.compare_batch_dimensions( tensors=(point_ndc_space, near), last_axes=-2, tensor_names=("point_ndc_space", "near"), broadcast_compatible=True) screen_dimensions = asserts.assert_all_above( screen_dimensions, 0.0, open_bound=True) near = asserts.assert_all_above(near, 0.0, open_bound=True) far = asserts.assert_all_above(far, near, open_bound=True) ndc_to_screen_factor = tf.concat( (screen_dimensions, far - near), axis=-1) / 
2.0
    screen_center = tf.concat(
        (lower_left_corner + screen_dimensions / 2.0, (near + far) / 2.0),
        axis=-1)
    return ndc_to_screen_factor * point_ndc_space + screen_center


def model_to_screen(point_model_space,
                    model_to_eye_matrix,
                    perspective_matrix,
                    screen_dimensions,
                    lower_left_corner=(0.0, 0.0),
                    name=None):
  """Transforms points from model to screen coordinates.

  Note: Please refer to http://www.songho.ca/opengl/gl_transform.html for an
    in-depth review of this pipeline.

  Note: In the following, A1 to An are optional batch dimensions which must be
    broadcast compatible.

  Args:
    point_model_space: A tensor of shape `[A1, ..., An, 3]`, where the last
      dimension represents the 3D points in model space.
    model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
      two dimensions represent matrices to transform points from model to eye
      coordinates.
    perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
      two dimensions represent matrices to transform points from eye to clip
      coordinates.
    screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last
      dimension is expressed in pixels and captures the width and the height (in
      pixels) of the screen.
    lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last
      dimension captures the position (in pixels) of the lower left corner of
      the screen.
    name: A name for this op. Defaults to 'model_to_screen'.

  Raises:
    InvalidArgumentError: if any input contains data not in the specified range
      of valid values.
    ValueError: If any input is of an unsupported shape.

  Returns:
    A tuple of two tensors, respectively of shape `[A1, ..., An, 3]` and
    `[A1, ..., An, 1]`, where the first tensor contains the projection of
    `point_model_space` in screen coordinates, and the second represents the 'w'
    component of `point_model_space` in clip space.
""" with tf.compat.v1.name_scope(name, "model_to_screen", [ point_model_space, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner ]): point_model_space = tf.convert_to_tensor(value=point_model_space) model_to_eye_matrix = tf.convert_to_tensor(value=model_to_eye_matrix) perspective_matrix = tf.convert_to_tensor(value=perspective_matrix) shape.check_static( tensor=point_model_space, tensor_name="point_model_space", has_dim_equals=(-1, 3)) shape.check_static( tensor=model_to_eye_matrix, tensor_name="model_to_eye_matrix", has_dim_equals=((-1, 4), (-2, 4))) shape.check_static( tensor=perspective_matrix, tensor_name="perspective_matrix", has_dim_equals=((-1, 4), (-2, 4))) shape.compare_batch_dimensions( tensors=(point_model_space, model_to_eye_matrix, perspective_matrix), last_axes=(-2, -3, -3), tensor_names=("point_model_space", "model_to_eye_matrix", "perspective_matrix"), broadcast_compatible=True) batch_shape = tf.shape(input=point_model_space)[:-1] one = tf.ones( shape=tf.concat((batch_shape, (1,)), axis=-1), dtype=point_model_space.dtype) point_model_space = tf.concat((point_model_space, one), axis=-1) point_model_space = tf.expand_dims(point_model_space, axis=-1) view_projection_matrix = tf.linalg.matmul(perspective_matrix, model_to_eye_matrix) _, _, near, far = perspective.parameters_from_right_handed( perspective_matrix) point_clip_space = tf.squeeze( tf.matmul(view_projection_matrix, point_model_space), axis=-1) point_ndc_space = clip_to_ndc(point_clip_space) point_screen_space = ndc_to_screen(point_ndc_space, lower_left_corner, screen_dimensions, near, far) return point_screen_space, point_clip_space[..., 3:4] def perspective_correct_barycentrics(triangle_vertices_model_space, pixel_position, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner=(0.0, 0.0), name=None): """Computes perspective correct barycentrics. Note: In the following, A1 to An are optional batch dimensions. 
Args:
    triangle_vertices_model_space: A tensor of shape `[A1, ..., An, 3, 3]`,
      where the last dimension represents the vertices of a triangle in model
      space.
    pixel_position: A tensor of shape `[A1, ..., An, 2]`, where the last
      dimension stores the position (in pixels) where the interpolation is
      requested.
    model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
      two dimensions represent matrices to transform points from model to eye
      coordinates.
    perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
      two dimensions represent matrices to transform points from eye to clip
      coordinates.
    screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last
      dimension is expressed in pixels and captures the width and the height (in
      pixels) of the screen.
    lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last
      dimension captures the position (in pixels) of the lower left corner of
      the screen.
    name: A name for this op. Defaults to 'perspective_correct_barycentrics'.

  Raises:
    InvalidArgumentError: if any input contains data not in the specified range
      of valid values.
    ValueError: If any input is of an unsupported shape.

  Returns:
    A tensor of shape `[A1, ..., An, 3]`, containing perspective correct
    barycentric coordinates.
""" with tf.compat.v1.name_scope(name, "perspective_correct_barycentrics", [ triangle_vertices_model_space, pixel_position, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner ]): pixel_position = tf.convert_to_tensor(value=pixel_position) triangle_vertices_model_space = tf.convert_to_tensor( value=triangle_vertices_model_space) shape.check_static( tensor=pixel_position, tensor_name="pixel_position", has_dim_equals=(-1, 2)) shape.check_static( tensor=triangle_vertices_model_space, tensor_name="triangle_vertices_model_space", has_dim_equals=((-2, 3), (-1, 3))) vertices_screen, vertices_w = model_to_screen(triangle_vertices_model_space, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner) vertices_w = tf.squeeze(vertices_w, axis=-1) pixel_position = tf.expand_dims(pixel_position, axis=-2) barycentric_coordinates, _ = weighted.get_barycentric_coordinates( vertices_screen[..., :2], pixel_position) barycentric_coordinates = tf.squeeze(barycentric_coordinates, axis=-2) coeffs = barycentric_coordinates / vertices_w return tf.linalg.normalize(coeffs, ord=1, axis=-1)[0] def interpolate_attributes(attribute, barycentric, name=None): """Interpolates attributes using barycentric weights. Note: In the following, A1 to An are optional batch dimensions. Args: attribute: A tensor of shape `[A1, ..., An, 3, B]`, where the last dimension stores a per-vertex `B`-dimensional attribute. barycentric: A tensor of shape `[A1, ..., An, 3]`, where the last dimension contains barycentric coordinates. name: A name for this op. Defaults to 'interpolate_attributes'. Returns: A tensor of shape `[A1, ..., An, B]`, containing interpolated attributes. 
""" with tf.compat.v1.name_scope(name, "interpolate_attributes", (attribute, barycentric)): attribute = tf.convert_to_tensor(value=attribute) barycentric = tf.convert_to_tensor(value=barycentric) shape.check_static( tensor=attribute, tensor_name="attribute", has_dim_equals=(-2, 3)) shape.check_static( tensor=barycentric, tensor_name="barycentric", has_dim_equals=(-1, 3)) shape.compare_batch_dimensions( tensors=(attribute, barycentric), last_axes=(-2, -1), tensor_names=("attribute", "barycentric"), broadcast_compatible=True) barycentric = asserts.assert_normalized(barycentric, order=1) return tf.reduce_sum( input_tensor=tf.expand_dims(barycentric, axis=-1) * attribute, axis=-2) def perspective_correct_interpolation(triangle_vertices_model_space, attribute, pixel_position, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner=(0.0, 0.0), name=None): """Returns perspective corrected interpolation of attributes over triangles. Note: In the following, A1 to An are optional batch dimensions. Args: triangle_vertices_model_space: A tensor of shape `[A1, ..., An, 3, 3]`, where the last dimension represents the vertices of a triangle in model space. attribute: A tensor of shape `[A1, ..., An, 3, B]`, where the last dimension stores a per-vertex `B`-dimensional attribute. pixel_position: A tensor of shape `[A1, ..., An, 2]`, where the last dimension stores the position (in pixels) where the interpolation is requested. model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last two dimension represent matrices to transform points from model to eye coordinates. perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last two dimension represent matrices to transform points from eye to clip coordinates. screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last dimension is expressed in pixels and captures the width and the height (in pixels) of the screen. 
lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last dimension captures the position (in pixels) of the lower left corner of the screen. name: A name for this op. Defaults to 'perspective_correct_interpolation'. Raises: tf.errors.InvalidArgumentError: if any input contains data not in the specified range of valid values. ValueError: If any input is of an unsupported shape. Returns: A tensor of shape `[A1, ..., An, B]`, containing interpolated attributes. """ with tf.compat.v1.name_scope(name, "perspective_correct_interpolation", [ triangle_vertices_model_space, attribute, pixel_position, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner ]): barycentric = perspective_correct_barycentrics( triangle_vertices_model_space, pixel_position, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner) return interpolate_attributes(attribute, barycentric) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
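The `ndc_to_screen` mapping above is a per-axis affine transform: scale each NDC coordinate by half the screen (or depth) extent, then shift to the center of the screen volume. A minimal plain-Python sketch of that arithmetic for a single point (not the TensorFlow API; batching, shape checks, and range asserts omitted, and the function name here is illustrative):

```python
def ndc_to_screen_point(point_ndc, lower_left_corner, screen_dimensions, near, far):
    """Maps one NDC point (x, y, z each in [-1, 1]) to screen coordinates."""
    left, bottom = lower_left_corner
    width, height = screen_dimensions
    # Scale factors: half the screen extent per axis, half the depth range.
    factor = (width / 2.0, height / 2.0, (far - near) / 2.0)
    # Center of the screen volume.
    center = (left + width / 2.0, bottom + height / 2.0, (near + far) / 2.0)
    return tuple(f * c + o for f, c, o in zip(factor, point_ndc, center))

# The NDC origin maps to the screen center at mid depth.
print(ndc_to_screen_point((0.0, 0.0, 0.0), (0.0, 0.0), (640.0, 480.0), 1.0, 10.0))
# → (320.0, 240.0, 5.5)
```

With the same screen, the NDC corner `(1, 1, 1)` lands at `(640, 480, 10)`, i.e. the far upper-right of the screen volume, mirroring the `ndc_to_screen_factor * point_ndc_space + screen_center` expression in the TensorFlow implementation.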
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements math routines used by OpenGL."""

import tensorflow as tf

from tensorflow_graphics.geometry.transformation import look_at
from tensorflow_graphics.math.interpolation import weighted
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape


def model_to_eye(point_model_space,
                 camera_position,
                 look_at_point,
                 up_vector,
                 name=None):
  """Transforms points from model to eye coordinates.

  Note: In the following, A1 to An are optional batch dimensions which must be
    broadcast compatible.

  Args:
    point_model_space: A tensor of shape `[A1, ..., An, 3]`, where the last
      dimension represents the 3D points in model space.
    camera_position: A tensor of shape `[A1, ..., An, 3]`, where the last
      dimension represents the 3D position of the camera.
    look_at_point: A tensor of shape `[A1, ..., An, 3]`, with the last dimension
      storing the position the camera is looking at.
    up_vector: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
      defines the up vector of the camera.
    name: A name for this op. Defaults to 'model_to_eye'.

  Raises:
    ValueError: if all the inputs are not of the same shape, or if any input is
      of an unsupported shape.

  Returns:
    A tensor of shape `[A1, ..., An, 3]`, containing `point_model_space` in eye
    coordinates.
""" with tf.compat.v1.name_scope( name, "model_to_eye", [point_model_space, camera_position, look_at_point, up_vector]): point_model_space = tf.convert_to_tensor(value=point_model_space) camera_position = tf.convert_to_tensor(value=camera_position) look_at_point = tf.convert_to_tensor(value=look_at_point) up_vector = tf.convert_to_tensor(value=up_vector) shape.check_static( tensor=point_model_space, tensor_name="point_model_space", has_dim_equals=(-1, 3)) shape.compare_batch_dimensions( tensors=(point_model_space, camera_position), last_axes=-2, tensor_names=("point_model_space", "camera_position"), broadcast_compatible=True) model_to_eye_matrix = look_at.right_handed(camera_position, look_at_point, up_vector) batch_shape = tf.shape(input=point_model_space)[:-1] one = tf.ones( shape=tf.concat((batch_shape, (1,)), axis=-1), dtype=point_model_space.dtype) point_model_space = tf.concat((point_model_space, one), axis=-1) point_model_space = tf.expand_dims(point_model_space, axis=-1) res = tf.squeeze(tf.matmul(model_to_eye_matrix, point_model_space), axis=-1) return res[..., :-1] def eye_to_clip(point_eye_space, vertical_field_of_view, aspect_ratio, near, far, name=None): """Transforms points from eye to clip space. Note: In the following, A1 to An are optional batch dimensions which must be broadcast compatible. Args: point_eye_space: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the 3D points in eye coordinates. vertical_field_of_view: A tensor of shape `[A1, ..., An, 1]`, where the last dimension represents the vertical field of view of the frustum. Note that values for `vertical_field_of_view` must be in the range ]0,pi[. aspect_ratio: A tensor of shape `[A1, ..., An, 1]`, where the last dimension stores the width over height ratio of the frustum. Note that values for `aspect_ratio` must be non-negative. 
near: A tensor of shape `[A1, ..., An, 1]`, where the last dimension captures the distance between the viewer and the near clipping plane. Note that values for `near` must be non-negative. far: A tensor of shape `[A1, ..., An, 1]`, where the last dimension captures the distance between the viewer and the far clipping plane. Note that values for `far` must be non-negative. name: A name for this op. Defaults to 'eye_to_clip'. Raises: ValueError: If any input is of an unsupported shape. Returns: A tensor of shape `[A1, ..., An, 4]`, containing `point_eye_space` in homogeneous clip coordinates. """ with tf.compat.v1.name_scope( name, "eye_to_clip", [point_eye_space, vertical_field_of_view, aspect_ratio, near, far]): point_eye_space = tf.convert_to_tensor(value=point_eye_space) vertical_field_of_view = tf.convert_to_tensor(value=vertical_field_of_view) aspect_ratio = tf.convert_to_tensor(value=aspect_ratio) near = tf.convert_to_tensor(value=near) far = tf.convert_to_tensor(value=far) shape.check_static( tensor=point_eye_space, tensor_name="point_eye_space", has_dim_equals=(-1, 3)) shape.check_static( tensor=vertical_field_of_view, tensor_name="vertical_field_of_view", has_dim_equals=(-1, 1)) shape.check_static( tensor=aspect_ratio, tensor_name="aspect_ratio", has_dim_equals=(-1, 1)) shape.check_static(tensor=near, tensor_name="near", has_dim_equals=(-1, 1)) shape.check_static(tensor=far, tensor_name="far", has_dim_equals=(-1, 1)) shape.compare_batch_dimensions( tensors=(point_eye_space, vertical_field_of_view, aspect_ratio, near, far), last_axes=-2, tensor_names=("point_eye_space", "vertical_field_of_view", "aspect_ratio", "near", "far"), broadcast_compatible=True) perspective_matrix = perspective.right_handed(vertical_field_of_view, aspect_ratio, near, far) batch_shape = tf.shape(input=point_eye_space)[:-1] one = tf.ones( shape=tf.concat((batch_shape, (1,)), axis=-1), dtype=point_eye_space.dtype) point_eye_space = tf.concat((point_eye_space, one), axis=-1) 
point_eye_space = tf.expand_dims(point_eye_space, axis=-1) return tf.squeeze(tf.matmul(perspective_matrix, point_eye_space), axis=-1) def clip_to_ndc(point_clip_space, name=None): """Transforms points from clip to normalized device coordinates (ndc). Note: In the following, A1 to An are optional batch dimensions. Args: point_clip_space: A tensor of shape `[A1, ..., An, 4]`, where the last dimension represents points in clip space. name: A name for this op. Defaults to 'clip_to_ndc'. Raises: ValueError: If `point_clip_space` is not of size 4 in its last dimension. Returns: A tensor of shape `[A1, ..., An, 3]`, containing `point_clip_space` in normalized device coordinates. """ with tf.compat.v1.name_scope(name, "clip_to_ndc", [point_clip_space]): point_clip_space = tf.convert_to_tensor(value=point_clip_space) shape.check_static( tensor=point_clip_space, tensor_name="point_clip_space", has_dim_equals=(-1, 4)) w = point_clip_space[..., -1:] return point_clip_space[..., :3] / w def ndc_to_screen(point_ndc_space, lower_left_corner, screen_dimensions, near, far, name=None): """Transforms points from normalized device coordinates to screen coordinates. Note: In the following, A1 to An are optional batch dimensions which must be broadcast compatible between `point_ndc_space` and the other variables. Args: point_ndc_space: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents points in normalized device coordinates. lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last dimension captures the position (in pixels) of the lower left corner of the screen. screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last dimension is expressed in pixels and captures the width and the height (in pixels) of the screen. near: A tensor of shape `[A1, ..., An, 1]`, where the last dimension captures the distance between the viewer and the near clipping plane. Note that values for `near` must be non-negative. 
far: A tensor of shape `[A1, ..., An, 1]`, where the last dimension captures the distance between the viewer and the far clipping plane. Note that values for `far` must be greater than those of `near`. name: A name for this op. Defaults to 'ndc_to_screen'. Raises: InvalidArgumentError: if any input contains data not in the specified range of valid values. ValueError: If any input is of an unsupported shape. Returns: A tensor of shape `[A1, ..., An, 3]`, containing `point_ndc_space` in screen coordinates. """ with tf.compat.v1.name_scope( name, "ndc_to_screen", [point_ndc_space, lower_left_corner, screen_dimensions, near, far]): point_ndc_space = tf.convert_to_tensor(value=point_ndc_space) lower_left_corner = tf.convert_to_tensor(value=lower_left_corner) screen_dimensions = tf.convert_to_tensor(value=screen_dimensions) near = tf.convert_to_tensor(value=near) far = tf.convert_to_tensor(value=far) shape.check_static( tensor=point_ndc_space, tensor_name="point_ndc_space", has_dim_equals=(-1, 3)) shape.check_static( tensor=lower_left_corner, tensor_name="lower_left_corner", has_dim_equals=(-1, 2)) shape.check_static( tensor=screen_dimensions, tensor_name="screen_dimensions", has_dim_equals=(-1, 2)) shape.check_static(tensor=near, tensor_name="near", has_dim_equals=(-1, 1)) shape.check_static(tensor=far, tensor_name="far", has_dim_equals=(-1, 1)) shape.compare_batch_dimensions( tensors=(lower_left_corner, screen_dimensions, near, far), last_axes=-2, tensor_names=("lower_left_corner", "screen_dimensions", "near", "far"), broadcast_compatible=False) shape.compare_batch_dimensions( tensors=(point_ndc_space, near), last_axes=-2, tensor_names=("point_ndc_space", "near"), broadcast_compatible=True) screen_dimensions = asserts.assert_all_above( screen_dimensions, 0.0, open_bound=True) near = asserts.assert_all_above(near, 0.0, open_bound=True) far = asserts.assert_all_above(far, near, open_bound=True) ndc_to_screen_factor = tf.concat( (screen_dimensions, far - near), axis=-1) / 
2.0
    screen_center = tf.concat(
        (lower_left_corner + screen_dimensions / 2.0, (near + far) / 2.0),
        axis=-1)
    return ndc_to_screen_factor * point_ndc_space + screen_center


def model_to_screen(point_model_space,
                    model_to_eye_matrix,
                    perspective_matrix,
                    screen_dimensions,
                    lower_left_corner=(0.0, 0.0),
                    name=None):
  """Transforms points from model to screen coordinates.

  Note: Please refer to http://www.songho.ca/opengl/gl_transform.html for an
    in-depth review of this pipeline.

  Note: In the following, A1 to An are optional batch dimensions which must be
    broadcast compatible.

  Args:
    point_model_space: A tensor of shape `[A1, ..., An, 3]`, where the last
      dimension represents the 3D points in model space.
    model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
      two dimensions represent matrices to transform points from model to eye
      coordinates.
    perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
      two dimensions represent matrices to transform points from eye to clip
      coordinates.
    screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last
      dimension is expressed in pixels and captures the width and the height (in
      pixels) of the screen.
    lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last
      dimension captures the position (in pixels) of the lower left corner of
      the screen.
    name: A name for this op. Defaults to 'model_to_screen'.

  Raises:
    InvalidArgumentError: if any input contains data not in the specified range
      of valid values.
    ValueError: If any input is of an unsupported shape.

  Returns:
    A tuple of two tensors, respectively of shape `[A1, ..., An, 3]` and
    `[A1, ..., An, 1]`, where the first tensor contains the projection of
    `point_model_space` in screen coordinates, and the second represents the 'w'
    component of `point_model_space` in clip space.
""" with tf.compat.v1.name_scope(name, "model_to_screen", [ point_model_space, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner ]): point_model_space = tf.convert_to_tensor(value=point_model_space) model_to_eye_matrix = tf.convert_to_tensor(value=model_to_eye_matrix) perspective_matrix = tf.convert_to_tensor(value=perspective_matrix) shape.check_static( tensor=point_model_space, tensor_name="point_model_space", has_dim_equals=(-1, 3)) shape.check_static( tensor=model_to_eye_matrix, tensor_name="model_to_eye_matrix", has_dim_equals=((-1, 4), (-2, 4))) shape.check_static( tensor=perspective_matrix, tensor_name="perspective_matrix", has_dim_equals=((-1, 4), (-2, 4))) shape.compare_batch_dimensions( tensors=(point_model_space, model_to_eye_matrix, perspective_matrix), last_axes=(-2, -3, -3), tensor_names=("point_model_space", "model_to_eye_matrix", "perspective_matrix"), broadcast_compatible=True) batch_shape = tf.shape(input=point_model_space)[:-1] one = tf.ones( shape=tf.concat((batch_shape, (1,)), axis=-1), dtype=point_model_space.dtype) point_model_space = tf.concat((point_model_space, one), axis=-1) point_model_space = tf.expand_dims(point_model_space, axis=-1) view_projection_matrix = tf.linalg.matmul(perspective_matrix, model_to_eye_matrix) _, _, near, far = perspective.parameters_from_right_handed( perspective_matrix) point_clip_space = tf.squeeze( tf.matmul(view_projection_matrix, point_model_space), axis=-1) point_ndc_space = clip_to_ndc(point_clip_space) point_screen_space = ndc_to_screen(point_ndc_space, lower_left_corner, screen_dimensions, near, far) return point_screen_space, point_clip_space[..., 3:4] def perspective_correct_barycentrics(triangle_vertices_model_space, pixel_position, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner=(0.0, 0.0), name=None): """Computes perspective correct barycentrics. Note: In the following, A1 to An are optional batch dimensions. 
Args:
    triangle_vertices_model_space: A tensor of shape `[A1, ..., An, 3, 3]`,
      where the last dimension represents the vertices of a triangle in model
      space.
    pixel_position: A tensor of shape `[A1, ..., An, 2]`, where the last
      dimension stores the position (in pixels) where the interpolation is
      requested.
    model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
      two dimensions represent matrices to transform points from model to eye
      coordinates.
    perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
      two dimensions represent matrices to transform points from eye to clip
      coordinates.
    screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last
      dimension is expressed in pixels and captures the width and the height (in
      pixels) of the screen.
    lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last
      dimension captures the position (in pixels) of the lower left corner of
      the screen.
    name: A name for this op. Defaults to 'perspective_correct_barycentrics'.

  Raises:
    InvalidArgumentError: if any input contains data not in the specified range
      of valid values.
    ValueError: If any input is of an unsupported shape.

  Returns:
    A tensor of shape `[A1, ..., An, 3]`, containing perspective correct
    barycentric coordinates.
""" with tf.compat.v1.name_scope(name, "perspective_correct_barycentrics", [ triangle_vertices_model_space, pixel_position, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner ]): pixel_position = tf.convert_to_tensor(value=pixel_position) triangle_vertices_model_space = tf.convert_to_tensor( value=triangle_vertices_model_space) shape.check_static( tensor=pixel_position, tensor_name="pixel_position", has_dim_equals=(-1, 2)) shape.check_static( tensor=triangle_vertices_model_space, tensor_name="triangle_vertices_model_space", has_dim_equals=((-2, 3), (-1, 3))) lower_left_corner = tf.convert_to_tensor(value=lower_left_corner) screen_dimensions = tf.convert_to_tensor(value=screen_dimensions) lower_left_corner = shape.add_batch_dimensions( lower_left_corner, "lower_left_corner", model_to_eye_matrix.shape[:-2], last_axis=-2) screen_dimensions = shape.add_batch_dimensions( screen_dimensions, "screen_dimensions", model_to_eye_matrix.shape[:-2], last_axis=-2) vertices_screen, vertices_w = model_to_screen(triangle_vertices_model_space, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner) vertices_w = tf.squeeze(vertices_w, axis=-1) pixel_position = tf.expand_dims(pixel_position, axis=-2) barycentric_coordinates, _ = weighted.get_barycentric_coordinates( vertices_screen[..., :2], pixel_position) barycentric_coordinates = tf.squeeze(barycentric_coordinates, axis=-2) coeffs = barycentric_coordinates / vertices_w return tf.linalg.normalize(coeffs, ord=1, axis=-1)[0] def interpolate_attributes(attribute, barycentric, name=None): """Interpolates attributes using barycentric weights. Note: In the following, A1 to An are optional batch dimensions. Args: attribute: A tensor of shape `[A1, ..., An, 3, B]`, where the last dimension stores a per-vertex `B`-dimensional attribute. barycentric: A tensor of shape `[A1, ..., An, 3]`, where the last dimension contains barycentric coordinates. name: A name for this op. 
Defaults to 'interpolate_attributes'.

  Returns:
    A tensor of shape `[A1, ..., An, B]`, containing interpolated attributes.
  """
  with tf.compat.v1.name_scope(name, "interpolate_attributes",
                               (attribute, barycentric)):
    attribute = tf.convert_to_tensor(value=attribute)
    barycentric = tf.convert_to_tensor(value=barycentric)
    shape.check_static(
        tensor=attribute, tensor_name="attribute", has_dim_equals=(-2, 3))
    shape.check_static(
        tensor=barycentric, tensor_name="barycentric", has_dim_equals=(-1, 3))
    shape.compare_batch_dimensions(
        tensors=(attribute, barycentric),
        last_axes=(-2, -1),
        tensor_names=("attribute", "barycentric"),
        broadcast_compatible=True)
    barycentric = asserts.assert_normalized(barycentric, order=1)
    return tf.reduce_sum(
        input_tensor=tf.expand_dims(barycentric, axis=-1) * attribute, axis=-2)


def perspective_correct_interpolation(triangle_vertices_model_space,
                                      attribute,
                                      pixel_position,
                                      model_to_eye_matrix,
                                      perspective_matrix,
                                      screen_dimensions,
                                      lower_left_corner=(0.0, 0.0),
                                      name=None):
  """Returns perspective corrected interpolation of attributes over triangles.

  Note: In the following, A1 to An are optional batch dimensions.

  Args:
    triangle_vertices_model_space: A tensor of shape `[A1, ..., An, 3, 3]`,
      where the last dimension represents the vertices of a triangle in model
      space.
    attribute: A tensor of shape `[A1, ..., An, 3, B]`, where the last dimension
      stores a per-vertex `B`-dimensional attribute.
    pixel_position: A tensor of shape `[A1, ..., An, 2]`, where the last
      dimension stores the position (in pixels) where the interpolation is
      requested.
    model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
      two dimensions represent matrices to transform points from model to eye
      coordinates.
    perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
      two dimensions represent matrices to transform points from eye to clip
      coordinates.
screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last dimension is expressed in pixels and captures the width and the height (in pixels) of the screen. lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last dimension captures the position (in pixels) of the lower left corner of the screen. name: A name for this op. Defaults to 'perspective_correct_interpolation'. Raises: tf.errors.InvalidArgumentError: if any input contains data not in the specified range of valid values. ValueError: If any input is of an unsupported shape. Returns: A tensor of shape `[A1, ..., An, B]`, containing interpolated attributes. """ with tf.compat.v1.name_scope(name, "perspective_correct_interpolation", [ triangle_vertices_model_space, attribute, pixel_position, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner ]): barycentric = perspective_correct_barycentrics( triangle_vertices_model_space, pixel_position, model_to_eye_matrix, perspective_matrix, screen_dimensions, lower_left_corner) return interpolate_attributes(attribute, barycentric) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
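The reweighting step at the heart of `perspective_correct_barycentrics` — dividing the screen-space barycentric weights by each vertex's clip-space `w` and renormalizing so they sum to one — can be sketched in plain Python, independent of TensorFlow. The function name below is illustrative, not part of the module:

```python
def perspective_correct_weights(barycentric, vertex_w):
    """Reweights screen-space barycentrics by 1/w, then L1-normalizes."""
    coeffs = [b / w for b, w in zip(barycentric, vertex_w)]
    total = sum(coeffs)
    return [c / total for c in coeffs]

# Equal screen-space weights are skewed toward the vertex closest to the
# camera, i.e. the one with the smallest clip-space w.
weights = perspective_correct_weights([1.0 / 3.0] * 3, [1.0, 2.0, 4.0])
```

For `w = (1, 2, 4)` this yields weights `(4/7, 2/7, 1/7)`: attributes at the near vertex dominate the interpolation, which is exactly the perspective correction that naive screen-space interpolation misses.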
1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/rendering/opengl/rasterization_backend.py
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""OpenGL rasterization backend for TF Graphics."""

import tensorflow as tf

from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape

# pylint: disable=g-import-not-at-top
try:
  from tensorflow_graphics.rendering.opengl import gen_rasterizer_op as render_ops
except ImportError:
  import os
  dir_path = os.path.dirname(os.path.abspath(__file__))
  render_ops = tf.load_op_library(os.path.join(dir_path, "rasterizer_op.so"))
# pylint: enable=g-import-not-at-top


def _dim_value(dim):
  return 1 if dim is None else tf.compat.v1.dimension_value(dim)


# Empty vertex shader; all the work happens in the geometry shader.
vertex_shader = """
      #version 430
      void main() { }
      """

# Geometry shader that projects the vertices of visible triangles onto the image
# plane.
geometry_shader = """
      #version 430

      uniform mat4 view_projection_matrix;

      layout(points) in;
      layout(triangle_strip, max_vertices=3) out;

      out layout(location = 0) vec2 barycentric_coordinates;
      out layout(location = 1) float triangle_index;

      layout(binding=0) buffer triangular_mesh { float mesh_buffer[]; };

      vec3 get_vertex_position(int vertex_index) {
        // Triangles are packed as 3 consecutive vertices, each with 3 coordinates.
int offset = gl_PrimitiveIDIn * 9 + vertex_index * 3; return vec3(mesh_buffer[offset], mesh_buffer[offset + 1], mesh_buffer[offset + 2]); } void main() { vec3 positions[3] = {get_vertex_position(0), get_vertex_position(1), get_vertex_position(2)}; vec4 projected_vertices[3] = { view_projection_matrix * vec4(positions[0], 1.0), view_projection_matrix * vec4(positions[1], 1.0), view_projection_matrix * vec4(positions[2], 1.0)}; for (int i = 0; i < 3; ++i) { // gl_Position is a pre-defined size 4 output variable. gl_Position = projected_vertices[i]; barycentric_coordinates = vec2(i==0 ? 1.0 : 0.0, i==1 ? 1.0 : 0.0); triangle_index = gl_PrimitiveIDIn; EmitVertex(); } EndPrimitive(); } """ # Fragment shader that packs barycentric coordinates, and triangle index. fragment_shader = """ #version 430 in layout(location = 0) vec2 barycentric_coordinates; in layout(location = 1) float triangle_index; out vec4 output_color; void main() { output_color = vec4(round(triangle_index + 1.0), barycentric_coordinates, 1.0); } """ def rasterize(vertices, triangles, view_projection_matrices, image_size, name=None): """Rasterizes the scene. This rasterizer estimates which triangle is associated with each pixel using OpenGL. Note: In the following, A1 to An are optional batch dimensions which must be broadcast compatible for inputs `vertices` and `view_projection_matrices`. Args: vertices: A tensor of shape `[A1, ..., An, V, 3]` containing batches of `V` vertices, each defined by a 3D point. triangles: A tensor of shape `[T, 3]` containing `T` triangles, each associated with 3 vertices from `scene_vertices` view_projection_matrices: A tensor of shape `[A1, ..., An, 4, 4]` containing batches of view projection matrices image_size: An tuple of integers (width, height) containing the dimensions in pixels of the rasterized image. name: A name for this op. Defaults to 'rasterization_backend_rasterize'. Returns: A tuple of 3 elements. 
The first one of shape `[A1, ..., An, H, W, 1]` representing the triangle index associated with each pixel. If no triangle is associated to a pixel, the index is set to -1. The second element in the tuple is of shape `[A1, ..., An, H, W, 3]` and correspond to barycentric coordinates per pixel. The last element in the tuple is of shape `[A1, ..., An, H, W]` and stores a value of `0` of the pixel is assciated with the background, and `1` with the foreground. """ with tf.compat.v1.name_scope(name, "rasterization_backend_rasterize", (vertices, triangles, view_projection_matrices)): vertices = tf.convert_to_tensor(value=vertices) triangles = tf.convert_to_tensor(value=triangles) view_projection_matrices = tf.convert_to_tensor( value=view_projection_matrices) shape.check_static( tensor=vertices, tensor_name="vertices", has_rank_greater_than=1, has_dim_equals=((-1, 3))) shape.check_static( tensor=triangles, tensor_name="triangles", has_rank=2, has_dim_equals=((-1, 3))) shape.check_static( tensor=view_projection_matrices, tensor_name="view_projection_matrices", has_rank_greater_than=1, has_dim_equals=((-1, 4), (-2, 4))) shape.compare_batch_dimensions( tensors=(vertices, view_projection_matrices), tensor_names=("vertices", "view_projection_matrices"), last_axes=(-3, -3), broadcast_compatible=True) common_batch_shape = shape.get_broadcasted_shape( vertices.shape[:-2], view_projection_matrices.shape[:-2]) common_batch_shape = [_dim_value(dim) for dim in common_batch_shape] vertices = tf.broadcast_to(vertices, common_batch_shape + vertices.shape[-2:]) view_projection_matrices = tf.broadcast_to(view_projection_matrices, common_batch_shape + [4, 4]) geometry = tf.gather(vertices, triangles, axis=-2) rasterized = render_ops.rasterize( num_points=geometry.shape[-3], alpha_clear=0.0, enable_cull_face=True, variable_names=("view_projection_matrix", "triangular_mesh"), variable_kinds=("mat", "buffer"), variable_values=(view_projection_matrices, tf.reshape(geometry, 
shape=common_batch_shape + [-1])), output_resolution=image_size, vertex_shader=vertex_shader, geometry_shader=geometry_shader, fragment_shader=fragment_shader) triangle_index = tf.cast(rasterized[..., 0], tf.int32) - 1 barycentric_coordinates = rasterized[..., 1:3] barycentric_coordinates = tf.concat( (barycentric_coordinates, 1.0 - barycentric_coordinates[..., 0:1] - barycentric_coordinates[..., 1:2]), axis=-1) mask = tf.cast(rasterized[..., 3], tf.int32) return triangle_index, barycentric_coordinates, mask # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""OpenGL rasterization backend for TF Graphics."""

import tensorflow as tf

from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape

# pylint: disable=g-import-not-at-top
try:
  from tensorflow_graphics.rendering.opengl import gen_rasterizer_op as render_ops
except ImportError:
  import os
  dir_path = os.path.dirname(os.path.abspath(__file__))
  render_ops = tf.load_op_library(os.path.join(dir_path, "rasterizer_op.so"))
# pylint: enable=g-import-not-at-top


def _dim_value(dim):
  return 1 if dim is None else tf.compat.v1.dimension_value(dim)


# Empty vertex shader; all the work happens in the geometry shader.
vertex_shader = """
#version 430
void main() { }
"""

# Geometry shader that projects the vertices of visible triangles onto the
# image plane.
geometry_shader = """
#version 430

uniform mat4 view_projection_matrix;

layout(points) in;
layout(triangle_strip, max_vertices=3) out;

out layout(location = 0) vec2 barycentric_coordinates;
out layout(location = 1) float triangle_index;

layout(binding=0) buffer triangular_mesh { float mesh_buffer[]; };

vec3 get_vertex_position(int vertex_index) {
  // Triangles are packed as 3 consecutive vertices, each with 3 coordinates.
  int offset = gl_PrimitiveIDIn * 9 + vertex_index * 3;
  return vec3(mesh_buffer[offset], mesh_buffer[offset + 1],
              mesh_buffer[offset + 2]);
}

void main() {
  vec3 positions[3] = {get_vertex_position(0), get_vertex_position(1),
                       get_vertex_position(2)};
  vec4 projected_vertices[3] = {
      view_projection_matrix * vec4(positions[0], 1.0),
      view_projection_matrix * vec4(positions[1], 1.0),
      view_projection_matrix * vec4(positions[2], 1.0)};

  for (int i = 0; i < 3; ++i) {
    // gl_Position is a pre-defined size 4 output variable.
    gl_Position = projected_vertices[i];
    barycentric_coordinates = vec2(i==0 ? 1.0 : 0.0, i==1 ? 1.0 : 0.0);
    triangle_index = gl_PrimitiveIDIn;

    EmitVertex();
  }
  EndPrimitive();
}
"""

# Fragment shader that packs barycentric coordinates, and triangle index.
fragment_shader = """
#version 430

in layout(location = 0) vec2 barycentric_coordinates;
in layout(location = 1) float triangle_index;

out vec4 output_color;

void main() {
  output_color = vec4(round(triangle_index + 1.0), barycentric_coordinates,
                      1.0);
}
"""


def rasterize(vertices,
              triangles,
              view_projection_matrices,
              image_size,
              name=None):
  """Rasterizes the scene.

  This rasterizer estimates which triangle is associated with each pixel using
  OpenGL.

  Note:
    In the following, A1 to An are optional batch dimensions which must be
    broadcast compatible for inputs `vertices` and `view_projection_matrices`.

  Args:
    vertices: A tensor of shape `[A1, ..., An, V, 3]` containing batches of `V`
      vertices, each defined by a 3D point.
    triangles: A tensor of shape `[T, 3]` containing `T` triangles, each
      associated with 3 vertices from `scene_vertices`.
    view_projection_matrices: A tensor of shape `[A1, ..., An, 4, 4]`
      containing batches of view projection matrices.
    image_size: A tuple of integers (width, height) containing the dimensions
      in pixels of the rasterized image.
    name: A name for this op. Defaults to 'rasterization_backend_rasterize'.

  Returns:
    A tuple of 3 elements.
    The first one of shape `[A1, ..., An, H, W, 1]` representing the triangle
    index associated with each pixel. If no triangle is associated to a pixel,
    the index is set to -1.
    The second element in the tuple is of shape `[A1, ..., An, H, W, 3]` and
    corresponds to barycentric coordinates per pixel.
    The last element in the tuple is of shape `[A1, ..., An, H, W, 1]` and
    stores a value of `0` if the pixel is associated with the background, and
    `1` with the foreground.
  """
  with tf.compat.v1.name_scope(
      name, "rasterization_backend_rasterize",
      (vertices, triangles, view_projection_matrices)):
    vertices = tf.convert_to_tensor(value=vertices)
    triangles = tf.convert_to_tensor(value=triangles)
    view_projection_matrices = tf.convert_to_tensor(
        value=view_projection_matrices)

    shape.check_static(
        tensor=vertices,
        tensor_name="vertices",
        has_rank_greater_than=1,
        has_dim_equals=((-1, 3)))
    shape.check_static(
        tensor=triangles,
        tensor_name="triangles",
        has_rank=2,
        has_dim_equals=((-1, 3)))
    shape.check_static(
        tensor=view_projection_matrices,
        tensor_name="view_projection_matrices",
        has_rank_greater_than=1,
        has_dim_equals=((-1, 4), (-2, 4)))
    shape.compare_batch_dimensions(
        tensors=(vertices, view_projection_matrices),
        tensor_names=("vertices", "view_projection_matrices"),
        last_axes=(-3, -3),
        broadcast_compatible=True)

    common_batch_shape = shape.get_broadcasted_shape(
        vertices.shape[:-2], view_projection_matrices.shape[:-2])
    common_batch_shape = [_dim_value(dim) for dim in common_batch_shape]
    vertices = tf.broadcast_to(vertices,
                               common_batch_shape + vertices.shape[-2:])
    view_projection_matrices = tf.broadcast_to(view_projection_matrices,
                                               common_batch_shape + [4, 4])

    geometry = tf.gather(vertices, triangles, axis=-2)

    rasterized = render_ops.rasterize(
        num_points=geometry.shape[-3],
        alpha_clear=0.0,
        enable_cull_face=True,
        variable_names=("view_projection_matrix", "triangular_mesh"),
        variable_kinds=("mat", "buffer"),
        variable_values=(view_projection_matrices,
                         tf.reshape(geometry,
                                    shape=common_batch_shape + [-1])),
        output_resolution=image_size,
        vertex_shader=vertex_shader,
        geometry_shader=geometry_shader,
        fragment_shader=fragment_shader)
    triangle_index = tf.cast(rasterized[..., 0], tf.int32) - 1
    # Slicing of the tensor will result in all batch dimensions being
    # `None` for tensorflow graph mode, therefore we have to fix it in order to
    # have explicit shape.
    width, height = image_size
    triangle_index = tf.reshape(triangle_index,
                                common_batch_shape + [height, width, 1])
    barycentric_coordinates = rasterized[..., 1:3]
    barycentric_coordinates = tf.concat(
        (barycentric_coordinates, 1.0 - barycentric_coordinates[..., 0:1] -
         barycentric_coordinates[..., 1:2]),
        axis=-1)
    mask = tf.cast(rasterized[..., 3], tf.int32)
    mask = tf.reshape(mask, common_batch_shape + [height, width, 1])

  return triangle_index, barycentric_coordinates, mask


# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/rendering/opengl/tests/rasterization_backend_test.py
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from absl.testing import parameterized
import numpy as np
import tensorflow as tf

from tensorflow_graphics.geometry.representation import grid
from tensorflow_graphics.geometry.transformation import look_at
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.rendering.opengl import math as glm
from tensorflow_graphics.rendering.opengl import rasterization_backend
from tensorflow_graphics.util import test_case

_IMAGE_HEIGHT = 5
_IMAGE_WIDTH = 7
_TRIANGLE_SIZE = 2.0


def _generate_vertices_and_view_matrices():
  camera_origin = ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
  camera_up = ((0.0, 1.0, 0.0), (0.0, 1.0, 0.0))
  look_at_point = ((0.0, 0.0, 1.0), (0.0, 0.0, -1.0))
  field_of_view = ((60 * np.math.pi / 180,), (60 * np.math.pi / 180,))
  near_plane = ((0.01,), (0.01,))
  far_plane = ((400.0,), (400.0,))
  aspect_ratio = ((float(_IMAGE_WIDTH) / float(_IMAGE_HEIGHT),),
                  (float(_IMAGE_WIDTH) / float(_IMAGE_HEIGHT),))
  # Construct the view projection matrix.
  world_to_camera = look_at.right_handed(camera_origin, look_at_point,
                                         camera_up)
  perspective_matrix = perspective.right_handed(field_of_view, aspect_ratio,
                                                near_plane, far_plane)
  view_projection_matrix = tf.linalg.matmul(perspective_matrix,
                                            world_to_camera)
  depth = 1.0
  vertices = (((-10.0 * _TRIANGLE_SIZE, 10.0 * _TRIANGLE_SIZE, depth),
               (10.0 * _TRIANGLE_SIZE, 10.0 * _TRIANGLE_SIZE, depth),
               (0.0, -10.0 * _TRIANGLE_SIZE, depth)),
              ((-_TRIANGLE_SIZE, 0.0, depth), (0.0, _TRIANGLE_SIZE, depth),
               (0.0, 0.0, depth)))
  return vertices, view_projection_matrix


def _proxy_rasterize(vertices, triangles, view_projection_matrices):
  return rasterization_backend.rasterize(vertices, triangles,
                                         view_projection_matrices,
                                         (_IMAGE_WIDTH, _IMAGE_HEIGHT))


class RasterizationBackendTest(test_case.TestCase):

  @parameterized.parameters(
      ("must have exactly 3 dimensions in axis -1", (2, 6, 32, 2), (17, 3),
       (2, 6, 4, 4)),
      ("must have exactly 3 dimensions in axis -1", (2, 6, 32, 3), (17, 2),
       (2, 6, 4, 4)),
      ("must have a rank of 2", (2, 6, 32, 3), (3, 17, 2), (2, 6, 4, 4)),
      ("must have exactly 4 dimensions in axis -1", (2, 6, 32, 3), (17, 3),
       (2, 6, 4, 3)),
      ("must have exactly 4 dimensions in axis -2", (2, 6, 32, 3), (17, 3),
       (2, 6, 3, 4)),
      ("Not all batch dimensions are broadcast-compatible", (3, 6, 32, 3),
       (17, 3), (5, 6, 4, 4)),
  )
  def test_rasterize_exception_raised(self, error_msg, *shapes):
    """Tests that the shape exceptions are properly raised."""
    self.assert_exception_is_raised(_proxy_rasterize, error_msg, shapes)

  @parameterized.parameters(
      (((32, 3), (17, 3), (4, 4)), (tf.float32, tf.int32, tf.float32)),
      (((None, 32, 3), (17, 3), (None, 4, 4)),
       (tf.float32, tf.int32, tf.float32)),
      (((None, 9, 32, 3), (17, 3), (None, 9, 4, 4)),
       (tf.float32, tf.int32, tf.float32)),
  )
  def test_rasterize_exception_not_raised(self, shapes, dtypes):
    self.assert_exception_is_not_raised(
        _proxy_rasterize, shapes=shapes, dtypes=dtypes)

  def test_rasterize_batch_vertices_only(self):
    triangles = np.array(((0, 1, 2),), np.int32)
    vertices, view_projection_matrix = _generate_vertices_and_view_matrices()
    _, _, mask = rasterization_backend.rasterize(
        vertices, triangles, view_projection_matrix[0],
        (_IMAGE_WIDTH, _IMAGE_HEIGHT))
    self.assertAllEqual(mask[0, ...], tf.ones_like(mask[0, ...]))

    gt_layer_1 = np.zeros((_IMAGE_HEIGHT, _IMAGE_WIDTH), np.float32)
    gt_layer_1[_IMAGE_HEIGHT // 2:, _IMAGE_WIDTH // 2:] = 1.0
    self.assertAllEqual(mask[1, ...], gt_layer_1)

  def test_rasterize_batch_view_only(self):
    triangles = np.array(((0, 1, 2),), np.int32)
    vertices, view_projection_matrix = _generate_vertices_and_view_matrices()
    _, _, mask = rasterization_backend.rasterize(
        vertices[0], triangles, view_projection_matrix,
        (_IMAGE_WIDTH, _IMAGE_HEIGHT))
    self.assertAllEqual(mask[0, ...], tf.ones_like(mask[0, ...]))
    self.assertAllEqual(mask[1, ...], tf.zeros_like(mask[1, ...]))

  def test_rasterize_preset(self):
    camera_origin = (0.0, 0.0, 0.0)
    camera_up = (0.0, 1.0, 0.0)
    look_at_point = (0.0, 0.0, 1.0)
    field_of_view = (60 * np.math.pi / 180,)
    near_plane = (0.01,)
    far_plane = (400.0,)

    # Construct the view projection matrix.
    model_to_eye_matrix = look_at.right_handed(camera_origin, look_at_point,
                                               camera_up)
    perspective_matrix = perspective.right_handed(
        field_of_view, (float(_IMAGE_WIDTH) / float(_IMAGE_HEIGHT),),
        near_plane, far_plane)
    view_projection_matrix = tf.linalg.matmul(perspective_matrix,
                                              model_to_eye_matrix)

    depth = 1.0
    vertices = ((-2.0 * _TRIANGLE_SIZE, 0.0, depth),
                (0.0, _TRIANGLE_SIZE, depth), (0.0, 0.0, depth),
                (0.0, -_TRIANGLE_SIZE, depth))
    triangles = np.array(((1, 2, 0), (0, 2, 3)), np.int32)

    (predicted_triangle_index, predicted_barycentrics,
     predicted_mask) = rasterization_backend.rasterize(
         vertices, triangles, view_projection_matrix,
         (_IMAGE_WIDTH, _IMAGE_HEIGHT))

    with self.subTest(name="triangle_index"):
      groundtruth_triangle_index = np.zeros((_IMAGE_HEIGHT, _IMAGE_WIDTH),
                                            dtype=np.int32)
      groundtruth_triangle_index[..., :_IMAGE_WIDTH // 2] = -1
      groundtruth_triangle_index[:_IMAGE_HEIGHT // 2, _IMAGE_WIDTH // 2:] = 1
      self.assertAllEqual(groundtruth_triangle_index,
                          predicted_triangle_index)

    with self.subTest(name="mask"):
      groundtruth_mask = np.ones((_IMAGE_HEIGHT, _IMAGE_WIDTH),
                                 dtype=np.int32)
      groundtruth_mask[..., :_IMAGE_WIDTH // 2] = 0
      self.assertAllEqual(groundtruth_mask, predicted_mask)

    attributes = np.array(
        ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))).astype(np.float32)
    perspective_correct_interpolation = lambda geometry, pixels: glm.perspective_correct_interpolation(  # pylint: disable=g-long-lambda,line-too-long
        geometry, attributes, pixels, model_to_eye_matrix, perspective_matrix,
        np.array((_IMAGE_WIDTH, _IMAGE_HEIGHT)).astype(np.float32),
        np.array((0.0, 0.0)).astype(np.float32))

    with self.subTest(name="barycentric_coordinates_triangle_0"):
      geometry_0 = tf.gather(vertices, triangles[0, :])
      pixels_0 = tf.transpose(
          grid.generate((3.5, 2.5), (6.5, 4.5), (4, 3)), perm=(1, 0, 2))
      barycentrics_gt_0 = perspective_correct_interpolation(
          geometry_0, pixels_0)
      self.assertAllClose(
          barycentrics_gt_0, predicted_barycentrics[2:, 3:, :], atol=1e-3)

    with self.subTest(name="barycentric_coordinates_triangle_1"):
      geometry_1 = tf.gather(vertices, triangles[1, :])
      pixels_1 = tf.transpose(
          grid.generate((3.5, 0.5), (6.5, 1.5), (4, 2)), perm=(1, 0, 2))
      barycentrics_gt_1 = perspective_correct_interpolation(
          geometry_1, pixels_1)
      self.assertAllClose(
          barycentrics_gt_1, predicted_barycentrics[0:2, 3:, :], atol=1e-3)
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from absl.testing import parameterized
import numpy as np
import tensorflow as tf

from tensorflow_graphics.geometry.representation import grid
from tensorflow_graphics.geometry.transformation import look_at
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.rendering.opengl import math as glm
from tensorflow_graphics.rendering.opengl import rasterization_backend
from tensorflow_graphics.util import test_case

_IMAGE_HEIGHT = 5
_IMAGE_WIDTH = 7
_TRIANGLE_SIZE = 2.0


def _generate_vertices_and_view_matrices():
  camera_origin = ((0.0, 0.0, 0.0), (0.0, 0.0, 0.0))
  camera_up = ((0.0, 1.0, 0.0), (0.0, 1.0, 0.0))
  look_at_point = ((0.0, 0.0, 1.0), (0.0, 0.0, -1.0))
  field_of_view = ((60 * np.math.pi / 180,), (60 * np.math.pi / 180,))
  near_plane = ((0.01,), (0.01,))
  far_plane = ((400.0,), (400.0,))
  aspect_ratio = ((float(_IMAGE_WIDTH) / float(_IMAGE_HEIGHT),),
                  (float(_IMAGE_WIDTH) / float(_IMAGE_HEIGHT),))
  # Construct the view projection matrix.
  world_to_camera = look_at.right_handed(camera_origin, look_at_point,
                                         camera_up)
  perspective_matrix = perspective.right_handed(field_of_view, aspect_ratio,
                                                near_plane, far_plane)
  view_projection_matrix = tf.linalg.matmul(perspective_matrix,
                                            world_to_camera)
  depth = 1.0
  vertices = (((-10.0 * _TRIANGLE_SIZE, 10.0 * _TRIANGLE_SIZE, depth),
               (10.0 * _TRIANGLE_SIZE, 10.0 * _TRIANGLE_SIZE, depth),
               (0.0, -10.0 * _TRIANGLE_SIZE, depth)),
              ((-_TRIANGLE_SIZE, 0.0, depth), (0.0, _TRIANGLE_SIZE, depth),
               (0.0, 0.0, depth)))
  return vertices, view_projection_matrix


def _proxy_rasterize(vertices, triangles, view_projection_matrices):
  return rasterization_backend.rasterize(vertices, triangles,
                                         view_projection_matrices,
                                         (_IMAGE_WIDTH, _IMAGE_HEIGHT))


class RasterizationBackendTest(test_case.TestCase):

  @parameterized.parameters(
      ("must have exactly 3 dimensions in axis -1", (2, 6, 32, 2), (17, 3),
       (2, 6, 4, 4)),
      ("must have exactly 3 dimensions in axis -1", (2, 6, 32, 3), (17, 2),
       (2, 6, 4, 4)),
      ("must have a rank of 2", (2, 6, 32, 3), (3, 17, 2), (2, 6, 4, 4)),
      ("must have exactly 4 dimensions in axis -1", (2, 6, 32, 3), (17, 3),
       (2, 6, 4, 3)),
      ("must have exactly 4 dimensions in axis -2", (2, 6, 32, 3), (17, 3),
       (2, 6, 3, 4)),
      ("Not all batch dimensions are broadcast-compatible", (3, 6, 32, 3),
       (17, 3), (5, 6, 4, 4)),
  )
  def test_rasterize_exception_raised(self, error_msg, *shapes):
    """Tests that the shape exceptions are properly raised."""
    self.assert_exception_is_raised(_proxy_rasterize, error_msg, shapes)

  @parameterized.parameters(
      (((32, 3), (17, 3), (4, 4)), (tf.float32, tf.int32, tf.float32)),
      (((None, 32, 3), (17, 3), (None, 4, 4)),
       (tf.float32, tf.int32, tf.float32)),
      (((None, 9, 32, 3), (17, 3), (None, 9, 4, 4)),
       (tf.float32, tf.int32, tf.float32)),
  )
  def test_rasterize_exception_not_raised(self, shapes, dtypes):
    self.assert_exception_is_not_raised(
        _proxy_rasterize, shapes=shapes, dtypes=dtypes)

  def test_rasterize_batch_vertices_only(self):
    triangles = np.array(((0, 1, 2),), np.int32)
    vertices, view_projection_matrix = _generate_vertices_and_view_matrices()
    _, _, mask = rasterization_backend.rasterize(
        vertices, triangles, view_projection_matrix[0],
        (_IMAGE_WIDTH, _IMAGE_HEIGHT))
    self.assertAllEqual(mask[0, ...], tf.ones_like(mask[0, ...]))

    gt_layer_1 = np.zeros((_IMAGE_HEIGHT, _IMAGE_WIDTH, 1), np.float32)
    gt_layer_1[_IMAGE_HEIGHT // 2:, _IMAGE_WIDTH // 2:, 0] = 1.0
    self.assertAllEqual(mask[1, ...], gt_layer_1)

  def test_rasterize_batch_view_only(self):
    triangles = np.array(((0, 1, 2),), np.int32)
    vertices, view_projection_matrix = _generate_vertices_and_view_matrices()
    _, _, mask = rasterization_backend.rasterize(
        vertices[0], triangles, view_projection_matrix,
        (_IMAGE_WIDTH, _IMAGE_HEIGHT))
    self.assertAllEqual(mask[0, ...], tf.ones_like(mask[0, ...]))
    self.assertAllEqual(mask[1, ...], tf.zeros_like(mask[1, ...]))

  def test_rasterize_preset(self):
    camera_origin = (0.0, 0.0, 0.0)
    camera_up = (0.0, 1.0, 0.0)
    look_at_point = (0.0, 0.0, 1.0)
    field_of_view = (60 * np.math.pi / 180,)
    near_plane = (0.01,)
    far_plane = (400.0,)

    # Construct the view projection matrix.
    model_to_eye_matrix = look_at.right_handed(camera_origin, look_at_point,
                                               camera_up)
    perspective_matrix = perspective.right_handed(
        field_of_view, (float(_IMAGE_WIDTH) / float(_IMAGE_HEIGHT),),
        near_plane, far_plane)
    view_projection_matrix = tf.linalg.matmul(perspective_matrix,
                                              model_to_eye_matrix)

    depth = 1.0
    vertices = ((-2.0 * _TRIANGLE_SIZE, 0.0, depth),
                (0.0, _TRIANGLE_SIZE, depth), (0.0, 0.0, depth),
                (0.0, -_TRIANGLE_SIZE, depth))
    triangles = np.array(((1, 2, 0), (0, 2, 3)), np.int32)

    (predicted_triangle_index, predicted_barycentrics,
     predicted_mask) = rasterization_backend.rasterize(
         vertices, triangles, view_projection_matrix,
         (_IMAGE_WIDTH, _IMAGE_HEIGHT))

    with self.subTest(name="triangle_index"):
      groundtruth_triangle_index = np.zeros((_IMAGE_HEIGHT, _IMAGE_WIDTH, 1),
                                            dtype=np.int32)
      groundtruth_triangle_index[..., :_IMAGE_WIDTH // 2, 0] = -1
      groundtruth_triangle_index[:_IMAGE_HEIGHT // 2, _IMAGE_WIDTH // 2:,
                                 0] = 1
      self.assertAllEqual(groundtruth_triangle_index,
                          predicted_triangle_index)

    with self.subTest(name="mask"):
      groundtruth_mask = np.ones((_IMAGE_HEIGHT, _IMAGE_WIDTH, 1),
                                 dtype=np.int32)
      groundtruth_mask[..., :_IMAGE_WIDTH // 2, 0] = 0
      self.assertAllEqual(groundtruth_mask, predicted_mask)

    attributes = np.array(
        ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))).astype(np.float32)
    perspective_correct_interpolation = lambda geometry, pixels: glm.perspective_correct_interpolation(  # pylint: disable=g-long-lambda,line-too-long
        geometry, attributes, pixels, model_to_eye_matrix, perspective_matrix,
        np.array((_IMAGE_WIDTH, _IMAGE_HEIGHT)).astype(np.float32),
        np.array((0.0, 0.0)).astype(np.float32))

    with self.subTest(name="barycentric_coordinates_triangle_0"):
      geometry_0 = tf.gather(vertices, triangles[0, :])
      pixels_0 = tf.transpose(
          grid.generate((3.5, 2.5), (6.5, 4.5), (4, 3)), perm=(1, 0, 2))
      barycentrics_gt_0 = perspective_correct_interpolation(
          geometry_0, pixels_0)
      self.assertAllClose(
          barycentrics_gt_0, predicted_barycentrics[2:, 3:, :], atol=1e-3)

    with self.subTest(name="barycentric_coordinates_triangle_1"):
      geometry_1 = tf.gather(vertices, triangles[1, :])
      pixels_1 = tf.transpose(
          grid.generate((3.5, 0.5), (6.5, 1.5), (4, 2)), perm=(1, 0, 2))
      barycentrics_gt_1 = perspective_correct_interpolation(
          geometry_1, pixels_1)
      self.assertAllClose(
          barycentrics_gt_1, predicted_barycentrics[0:2, 3:, :], atol=1e-3)
1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/rendering/triangle_rasterizer.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module implements a differentiable rasterizer of triangular meshes. The resulting rendering contains perspective-correct interpolation of attributes defined at the vertices of the rasterized meshes. This rasterizer does not provide gradients through visibility, but it does through visible geometry and attributes. """ import tensorflow as tf from tensorflow_graphics.rendering import rasterization_backend from tensorflow_graphics.rendering.opengl import math as glm from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def _perspective_correct_barycentrics(vertices_per_pixel, model_to_eye_matrix, perspective_matrix, image_size_float): """Creates the pixels grid and computes barycentrics.""" # Construct the pixel grid with half-integer pixel centers. 
width = image_size_float[1] height = image_size_float[0] px = tf.linspace(0.5, width - 0.5, num=int(width)) py = tf.linspace(0.5, height - 0.5, num=int(height)) xv, yv = tf.meshgrid(px, py) pixel_position = tf.stack((xv, yv), axis=-1) return glm.perspective_correct_barycentrics(vertices_per_pixel, pixel_position, model_to_eye_matrix, perspective_matrix, (width, height)) def _perspective_correct_attributes(attribute, barycentrics, triangles, triangle_index, len_batch_shape): attribute = tf.gather(attribute, triangles, axis=-2) attribute_per_pixel = tf.gather( attribute, triangle_index, axis=-3, batch_dims=len_batch_shape) return glm.interpolate_attributes(attribute_per_pixel, barycentrics) def _dim_value(dim): return 1 if dim is None else tf.compat.v1.dimension_value(dim) def rasterize(vertices, triangles, attributes, model_to_eye_matrix, perspective_matrix, image_size, backend=rasterization_backend.RasterizationBackends.OPENGL, name=None): """Rasterizes the scene. Note: In the following, A1 to An are optional batch dimensions. Args: vertices: A tensor of shape `[A1, ..., An, V, 3]` containing batches of `V` vertices, each defined by a 3D point. triangles: A tensor of shape `[T, 3]` containing `T` triangles, each associated with 3 vertices from `vertices`. attributes: A dictionary of tensors, each of shape `[A1, ..., An, V, K_a]` containing batches of `V` vertices, each associated with K-dimensional attributes. K_a may vary by attribute. model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]` containing batches of matrices used to transform vertices from model to eye coordinates. perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]` containing batches of matrices used to project vertices from eye to clip coordinates. image_size: A tuple (height, width) containing the dimensions in pixels of the rasterized image. backend: A rasterization_backend.RasterizationBackends enum containing the backend method to use for rasterization. name: A name for this op. 
Defaults to 'triangle_rasterizer_rasterize'. Returns: A dictionary. The key "mask" is of shape `[A1, ..., An, height, width]` and stores a value of `0` of the pixel is assciated with the background, and `1` with the foreground. The key "barycentrics" is of shape `[A1, ..., An, height, width, 3]` and stores barycentric weights. Finally, the dictionary contains perspective correct interpolated attributes of shape `[A1, ..., An, height, width, K]` per entry in the `attributes` dictionary. """ with tf.compat.v1.name_scope(name, "triangle_rasterizer_rasterize", (vertices, triangles, attributes, model_to_eye_matrix, perspective_matrix)): vertices = tf.convert_to_tensor(value=vertices) triangles = tf.convert_to_tensor(value=triangles) model_to_eye_matrix = tf.convert_to_tensor(value=model_to_eye_matrix) perspective_matrix = tf.convert_to_tensor(value=perspective_matrix) shape.check_static( tensor=vertices, tensor_name="vertices", has_rank_greater_than=1, has_dim_equals=((-1, 3))) shape.check_static( tensor=triangles, tensor_name="triangles", has_rank=2, has_dim_equals=((-1, 3))) shape.check_static( tensor=model_to_eye_matrix, tensor_name="model_to_eye_matrix", has_dim_equals=(((-2, 4), (-1, 4)))) shape.check_static( tensor=perspective_matrix, tensor_name="perspective_matrix", has_dim_equals=(((-2, 4), (-1, 4)))) image_size_float = (float(image_size[0]), float(image_size[1])) image_size_backend = (int(image_size[1]), int(image_size[0])) view_projection_matrix = tf.linalg.matmul(perspective_matrix, model_to_eye_matrix) triangle_index, _, mask = rasterization_backend.rasterize( vertices, triangles, view_projection_matrix, image_size_backend, backend) outputs = {"mask": mask, "triangle_indices": triangle_index} batch_shape = triangle_index.shape[:-3] batch_shape = [_dim_value(dim) for dim in batch_shape] vertices = tf.gather(vertices, triangles, axis=-2) # Gather does not work on negative indices, which is the case for the pixel # associated to the background. 
triangle_index = triangle_index * mask vertices_per_pixel = tf.gather( vertices, triangle_index, axis=-3, batch_dims=len(batch_shape)) barycentrics = _perspective_correct_barycentrics(vertices_per_pixel, model_to_eye_matrix, perspective_matrix, image_size_float) mask_float = tf.cast(tf.expand_dims(mask, axis=-1), vertices.dtype) outputs["barycentrics"] = mask_float * barycentrics for key, attribute in attributes.items(): attribute = tf.convert_to_tensor(value=attribute) outputs[key] = mask_float * _perspective_correct_attributes( attribute, barycentrics, triangles, triangle_index, len(batch_shape)) return outputs # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
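The half-integer pixel-center grid built at the top of `_perspective_correct_barycentrics` (via `tf.linspace(0.5, width - 0.5, num=width)` and `tf.meshgrid`) can be sketched in plain Python without TensorFlow; `pixel_centers` is a hypothetical helper name used only for illustration, producing the same sample positions the rasterizer evaluates barycentrics at:

```python
def pixel_centers(width, height):
    """Half-integer pixel centers, mirroring tf.linspace(0.5, w - 0.5, num=w)."""
    px = [i + 0.5 for i in range(int(width))]   # x coordinates of column centers
    py = [j + 0.5 for j in range(int(height))]  # y coordinates of row centers
    # Row-major grid of (x, y) pairs, like tf.meshgrid(px, py) + tf.stack(axis=-1).
    return [[(x, y) for x in px] for y in py]
```

For a 3x2 image this yields centers (0.5, 0.5) through (2.5, 1.5), so every pixel is sampled at its middle rather than its corner, which is the convention the perspective-correct barycentric computation expects.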
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module implements a differentiable rasterizer of triangular meshes. The resulting rendering contains perspective-correct interpolation of attributes defined at the vertices of the rasterized meshes. This rasterizer does not provide gradients through visibility, but it does through visible geometry and attributes. """ import tensorflow as tf from tensorflow_graphics.rendering import rasterization_backend from tensorflow_graphics.rendering.opengl import math as glm from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def _perspective_correct_barycentrics(vertices_per_pixel, model_to_eye_matrix, perspective_matrix, image_size_float): """Creates the pixels grid and computes barycentrics.""" # Construct the pixel grid with half-integer pixel centers. 
width = image_size_float[1] height = image_size_float[0] px = tf.linspace(0.5, width - 0.5, num=int(width)) py = tf.linspace(0.5, height - 0.5, num=int(height)) xv, yv = tf.meshgrid(px, py) pixel_position = tf.stack((xv, yv), axis=-1) return glm.perspective_correct_barycentrics(vertices_per_pixel, pixel_position, model_to_eye_matrix, perspective_matrix, (width, height)) def _perspective_correct_attributes(attribute, barycentrics, triangles, triangle_index, len_batch_shape): attribute = tf.gather(attribute, triangles, axis=-2) attribute_per_pixel = tf.gather( attribute, triangle_index, axis=-3, batch_dims=len_batch_shape) return glm.interpolate_attributes(attribute_per_pixel, barycentrics) def _dim_value(dim): return 1 if dim is None else tf.compat.v1.dimension_value(dim) def rasterize(vertices, triangles, attributes, model_to_eye_matrix, perspective_matrix, image_size, backend=rasterization_backend.RasterizationBackends.OPENGL, name=None): """Rasterizes the scene. Note: In the following, A1 to An are optional batch dimensions. Args: vertices: A tensor of shape `[A1, ..., An, V, 3]` containing batches of `V` vertices, each defined by a 3D point. triangles: A tensor of shape `[T, 3]` containing `T` triangles, each associated with 3 vertices from `vertices`. attributes: A dictionary of tensors, each of shape `[A1, ..., An, V, K_a]` containing batches of `V` vertices, each associated with K-dimensional attributes. K_a may vary by attribute. model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]` containing batches of matrices used to transform vertices from model to eye coordinates. perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]` containing batches of matrices used to project vertices from eye to clip coordinates. image_size: A tuple (height, width) containing the dimensions in pixels of the rasterized image. backend: A rasterization_backend.RasterizationBackends enum containing the backend method to use for rasterization. name: A name for this op. 
Defaults to 'triangle_rasterizer_rasterize'. Returns: A dictionary. The key "mask" is of shape `[A1, ..., An, height, width, 1]` and stores a value of `0` if the pixel is associated with the background, and `1` with the foreground. The key "barycentrics" is of shape `[A1, ..., An, height, width, 3]` and stores barycentric weights. Finally, the dictionary contains perspective correct interpolated attributes of shape `[A1, ..., An, height, width, K]` per entry in the `attributes` dictionary. """ with tf.compat.v1.name_scope(name, "triangle_rasterizer_rasterize", (vertices, triangles, attributes, model_to_eye_matrix, perspective_matrix)): vertices = tf.convert_to_tensor(value=vertices) triangles = tf.convert_to_tensor(value=triangles) model_to_eye_matrix = tf.convert_to_tensor(value=model_to_eye_matrix) perspective_matrix = tf.convert_to_tensor(value=perspective_matrix) shape.check_static( tensor=vertices, tensor_name="vertices", has_rank_greater_than=1, has_dim_equals=((-1, 3))) shape.check_static( tensor=triangles, tensor_name="triangles", has_rank=2, has_dim_equals=((-1, 3))) shape.check_static( tensor=model_to_eye_matrix, tensor_name="model_to_eye_matrix", has_dim_equals=(((-2, 4), (-1, 4)))) shape.check_static( tensor=perspective_matrix, tensor_name="perspective_matrix", has_dim_equals=(((-2, 4), (-1, 4)))) image_size_float = (float(image_size[0]), float(image_size[1])) image_size_backend = (int(image_size[1]), int(image_size[0])) view_projection_matrix = tf.linalg.matmul(perspective_matrix, model_to_eye_matrix) triangle_index, _, mask = rasterization_backend.rasterize( vertices, triangles, view_projection_matrix, image_size_backend, backend) outputs = {"mask": mask, "triangle_indices": triangle_index} vertices = tf.gather(vertices, triangles, axis=-2) # Gather does not work on negative indices, which is the case for the pixel # associated with the background. 
triangle_index = triangle_index * mask # Extract batch shape in order to make sure it is preserved after `gather` # operation. batch_shape = triangle_index.shape[:-3] batch_shape = [_dim_value(dim) for dim in batch_shape] # Remove last dimension of `triangle_index` in order to make it compatible # with gather operations. triangle_index_lean = tf.squeeze(triangle_index, axis=-1) vertices_per_pixel = tf.gather( vertices, triangle_index_lean, axis=-3, batch_dims=len(batch_shape)) barycentrics = _perspective_correct_barycentrics(vertices_per_pixel, model_to_eye_matrix, perspective_matrix, image_size_float) mask_float = tf.cast(mask, vertices.dtype) outputs["barycentrics"] = mask_float * barycentrics for key, attribute in attributes.items(): attribute = tf.convert_to_tensor(value=attribute) outputs[key] = mask_float * _perspective_correct_attributes( attribute, barycentrics, triangles, triangle_index_lean, len(batch_shape)) return outputs # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
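The `triangle_index = triangle_index * mask` step above works because background pixels carry index `-1`, which `tf.gather` cannot consume; multiplying by the 0/1 mask maps them to the (arbitrary but valid) index `0`, and the same mask later zeroes out whatever attributes were gathered there. A plain-Python sketch of this trick (the function name is illustrative, not TFG API):

```python
def sanitize_triangle_indices(triangle_index, mask):
    """Map background indices (-1) to a gather-safe 0 using the 0/1 mask.

    Foreground entries (mask == 1) keep their triangle index; background
    entries (mask == 0) become 0, so a subsequent gather never sees -1.
    """
    return [idx * m for idx, m in zip(triangle_index, mask)]
```

The gathered values at background pixels are garbage by construction, which is why `mask_float` multiplies the barycentrics and attributes afterwards.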
1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/util/shape.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Shape utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import itertools import six import tensorflow as tf def _broadcast_shape_helper(shape_x, shape_y): """Helper function for is_broadcast_compatible and broadcast_shape. Args: shape_x: A `TensorShape`. shape_y: A `TensorShape`. Returns: Returns None if the shapes are not broadcast compatible, or a list containing the broadcasted dimensions otherwise. """ # To compute the broadcasted dimensions, we zip together shape_x and shape_y, # and pad with 1 to make them the same length. broadcasted_dims = reversed( list( six.moves.zip_longest( reversed(shape_x.dims), reversed(shape_y.dims), fillvalue=tf.compat.v1.Dimension(1)))) # Next we combine the dimensions according to the numpy broadcasting rules. # http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html return_dims = [] for (dim_x, dim_y) in broadcasted_dims: if dim_x.value is None or dim_y.value is None: # One or both dimensions is unknown. If either dimension is greater than # 1, we assume that the program is correct, and the other dimension will # be broadcast to match it. 
if dim_x.value is not None and dim_x.value > 1: return_dims.append(dim_x) elif dim_y.value is not None and dim_y.value > 1: return_dims.append(dim_y) else: return_dims.append(None) elif dim_x.value == 1: # We will broadcast dim_x to dim_y. return_dims.append(dim_y) elif dim_y.value == 1: # We will broadcast dim_y to dim_x. return_dims.append(dim_x) elif dim_x.value == dim_y.value: # The dimensions are compatible, so output is the same size in that # dimension. return_dims.append(dim_x.merge_with(dim_y)) else: return None return return_dims def is_broadcast_compatible(shape_x, shape_y): """Returns True if `shape_x` and `shape_y` are broadcast compatible. Args: shape_x: A `TensorShape`. shape_y: A `TensorShape`. Returns: True if a shape exists that both `shape_x` and `shape_y` can be broadcasted to. False otherwise. """ if shape_x.ndims is None or shape_y.ndims is None: return False return _broadcast_shape_helper(shape_x, shape_y) is not None def get_broadcasted_shape(shape_x, shape_y): """Returns the common shape for broadcast compatible shapes. Args: shape_x: A `TensorShape`. shape_y: A `TensorShape`. Returns: Returns None if the shapes are not broadcast compatible, or a list containing the broadcasted dimensions otherwise. 
""" if shape_x.ndims is None or shape_y.ndims is None: return None return _broadcast_shape_helper(shape_x, shape_y) def _check_type(variable, variable_name, expected_type): """Helper function for checking that inputs are of expected types.""" if isinstance(expected_type, (list, tuple)): expected_type_name = 'list or tuple' else: expected_type_name = expected_type.__name__ if not isinstance(variable, expected_type): raise ValueError('{} must be of type {}, but it is {}'.format( variable_name, expected_type_name, type(variable).__name__)) def _fix_axis_dim_pairs(pairs, name): """Helper function to make `pairs` a list if needed.""" if isinstance(pairs[0], int): pairs = [pairs] for pair in pairs: if len(pair) != 2: raise ValueError( '{} must consist of axis-value pairs, but found {}'.format( name, pair)) return pairs def _get_dim(tensor, axis): """Returns dimensionality of a tensor for a given axis.""" return tf.compat.v1.dimension_value(tensor.shape[axis]) def check_static(tensor, has_rank=None, has_rank_greater_than=None, has_rank_less_than=None, has_dim_equals=None, has_dim_greater_than=None, has_dim_less_than=None, tensor_name='tensor'): """Checks static shapes for rank and dimension constraints. This function can be used to check a tensor's shape for multiple rank and dimension constraints at the same time. Args: tensor: Any tensor with a static shape. has_rank: An int or `None`. If not `None`, the function checks if the rank of the `tensor` equals to `has_rank`. has_rank_greater_than: An int or `None`. If not `None`, the function checks if the rank of the `tensor` is greater than `has_rank_greater_than`. has_rank_less_than: An int or `None`. If not `None`, the function checks if the rank of the `tensor` is less than `has_rank_less_than`. has_dim_equals: Either a tuple or list containing a single pair of `int`s, or a list or tuple containing multiple such pairs. 
Each pair is in the form (`axis`, `dim`), which means the function should check if `tensor.shape[axis] == dim`. has_dim_greater_than: Either a tuple or list containing a single pair of `int`s, or a list or tuple containing multiple such pairs. Each pair is in the form (`axis`, `dim`), which means the function should check if `tensor.shape[axis] > dim`. has_dim_less_than: Either a tuple or list containing a single pair of `int`s, or a list or tuple containing multiple such pairs. Each pair is in the form (`axis`, `dim`), which means the function should check if `tensor.shape[axis] < dim`. tensor_name: A name for `tensor` to be used in the error message if one is thrown. Raises: ValueError: If any input is not of the expected types, or if one of the checks described above fails. """ rank = tensor.shape.ndims def _raise_value_error_for_rank(variable, error_msg): raise ValueError( '{} must have a rank {} {}, but it has rank {} and shape {}'.format( tensor_name, error_msg, variable, rank, tensor.shape.as_list())) def _raise_value_error_for_dim(tensor_name, error_msg, axis, value): raise ValueError( '{} must have {} {} dimensions in axis {}, but it has shape {}'.format( tensor_name, error_msg, value, axis, tensor.shape.as_list())) if has_rank is not None: _check_type(has_rank, 'has_rank', int) if rank != has_rank: _raise_value_error_for_rank(has_rank, 'of') if has_rank_greater_than is not None: _check_type(has_rank_greater_than, 'has_rank_greater_than', int) if rank <= has_rank_greater_than: _raise_value_error_for_rank(has_rank_greater_than, 'greater than') if has_rank_less_than is not None: _check_type(has_rank_less_than, 'has_rank_less_than', int) if rank >= has_rank_less_than: _raise_value_error_for_rank(has_rank_less_than, 'less than') if has_dim_equals is not None: _check_type(has_dim_equals, 'has_dim_equals', (list, tuple)) has_dim_equals = _fix_axis_dim_pairs(has_dim_equals, 'has_dim_equals') for axis, value in has_dim_equals: if _get_dim(tensor, axis) != value: 
_raise_value_error_for_dim(tensor_name, 'exactly', axis, value) if has_dim_greater_than is not None: _check_type(has_dim_greater_than, 'has_dim_greater_than', (list, tuple)) has_dim_greater_than = _fix_axis_dim_pairs(has_dim_greater_than, 'has_dim_greater_than') for axis, value in has_dim_greater_than: if not _get_dim(tensor, axis) > value: _raise_value_error_for_dim(tensor_name, 'greater than', axis, value) if has_dim_less_than is not None: _check_type(has_dim_less_than, 'has_dim_less_than', (list, tuple)) has_dim_less_than = _fix_axis_dim_pairs(has_dim_less_than, 'has_dim_less_than') for axis, value in has_dim_less_than: if not _get_dim(tensor, axis) < value: _raise_value_error_for_dim(tensor_name, 'less than', axis, value) def _check_tensors(tensors, tensors_name): """Helper function to check the type and length of tensors.""" _check_type(tensors, tensors_name, (list, tuple)) if len(tensors) < 2: raise ValueError('At least 2 tensors are required.') def _check_tensor_axis_lists(tensors, tensors_name, axes, axes_name): """Helper function to check that lengths of `tensors` and `axes` match.""" _check_type(axes, axes_name, (list, tuple)) if len(tensors) != len(axes): raise ValueError( '{} and {} must have the same length, but are {} and {}.'.format( tensors_name, axes_name, len(tensors), len(axes))) def _fix_axes(tensors, axes, allow_negative): """Makes all axes positive and checks for out of bound errors.""" axes = [ axis + tensor.shape.ndims if axis < 0 else axis for tensor, axis in zip(tensors, axes) ] if not all( ((allow_negative or (not allow_negative and axis >= 0)) and axis < tensor.shape.ndims) for tensor, axis in zip(tensors, axes)): rank_axis_pairs = zip([tensor.shape.ndims for tensor in tensors], axes) raise ValueError( 'Some axes are out of bounds. 
Given rank-axes pairs: {}'.format( [pair for pair in rank_axis_pairs])) return axes def _give_default_names(list_of_objects, name): """Helper function to give default names to objects for error messages.""" return [name + '_' + str(index) for index in range(len(list_of_objects))] def _all_are_equal(list_of_objects): """Helper function to check if all the items in a list are the same.""" if not list_of_objects: return True if isinstance(list_of_objects[0], list): list_of_objects = [tuple(obj) for obj in list_of_objects] return len(set(list_of_objects)) == 1 def _raise_error(tensor_names, batch_shapes): formatted_list = [(name, batch_shape) for name, batch_shape in zip(tensor_names, batch_shapes)] raise ValueError( 'Not all batch dimensions are identical: {}'.format(formatted_list)) def compare_batch_dimensions(tensors, last_axes, broadcast_compatible, initial_axes=0, tensor_names=None): """Compares batch dimensions for tensors with static shapes. Args: tensors: A list or tuple of tensors with static shapes to compare. last_axes: An `int` or a list or tuple of `int`s with the same length as `tensors`. If an `int`, it is assumed to be the same for all the tensors. Each entry should correspond to the last axis of the batch (with zero based indices). For instance, if there is only a single batch dimension, last axis should be `0`. broadcast_compatible: A 'bool', whether the batch shapes can be broadcast compatible in the numpy sense. initial_axes: An `int` or a list or tuple of `int`s with the same length as `tensors`. If an `int`, it is assumed to be the same for all the tensors. Each entry should correspond to the first axis of the batch (with zero based indices). Default value is `0`. tensor_names: Names of `tensors` to be used in the error message if one is thrown. If left as `None`, `tensor_i` is used. Raises: ValueError: If inputs have unexpected types, or if given axes are out of bounds, or if the check fails. 
""" _check_tensors(tensors, 'tensors') if isinstance(initial_axes, int): initial_axes = [initial_axes] * len(tensors) if isinstance(last_axes, int): last_axes = [last_axes] * len(tensors) _check_tensor_axis_lists(tensors, 'tensors', initial_axes, 'initial_axes') _check_tensor_axis_lists(tensors, 'tensors', last_axes, 'last_axes') initial_axes = _fix_axes(tensors, initial_axes, allow_negative=True) last_axes = _fix_axes(tensors, last_axes, allow_negative=True) batch_shapes = [ tensor.shape[init:last + 1] for tensor, init, last in zip(tensors, initial_axes, last_axes) ] if tensor_names is None: tensor_names = _give_default_names(tensors, 'tensor') if not broadcast_compatible: batch_ndims = [batch_shape.ndims for batch_shape in batch_shapes] batch_shapes = [batch_shape.as_list() for batch_shape in batch_shapes] if not _all_are_equal(batch_ndims): # If not all batch shapes have the same length, they cannot be identical. _raise_error(tensor_names, batch_shapes) for dims in zip(*batch_shapes): if _all_are_equal(dims): # Continue if all dimensions are None or have the same value. continue if None not in dims: # If all dimensions are known at this point, they are not identical. _raise_error(tensor_names, batch_shapes) # At this point dims must consist of both None's and int's. if len(set(dims)) != 2: # set(dims) should return (None, some_int). # Otherwise shapes are not identical. _raise_error(tensor_names, batch_shapes) else: if not all( is_broadcast_compatible(shape1, shape2) for shape1, shape2 in itertools.combinations(batch_shapes, 2)): raise ValueError( 'Not all batch dimensions are broadcast-compatible: {}'.format([ (name, batch_shape.as_list()) for name, batch_shape in zip(tensor_names, batch_shapes) ])) def compare_dimensions(tensors, axes, tensor_names=None): """Compares dimensions of tensors with static or dynamic shapes. Args: tensors: A list or tuple of tensors to compare. axes: An `int` or a list or tuple of `int`s with the same length as `tensors`. 
If an `int`, it is assumed to be the same for all the tensors. Each entry should correspond to the axis of the tensor being compared. tensor_names: Names of `tensors` to be used in the error message if one is thrown. If left as `None`, their `Tensor.name` fields are used instead. Raises: ValueError: If inputs have unexpected types, or if given axes are out of bounds, or if the check fails. """ _check_tensors(tensors, 'tensors') if isinstance(axes, int): axes = [axes] * len(tensors) _check_tensor_axis_lists(tensors, 'tensors', axes, 'axes') axes = _fix_axes(tensors, axes, allow_negative=False) if tensor_names is None: tensor_names = _give_default_names(tensors, 'tensor') dimensions = [_get_dim(tensor, axis) for tensor, axis in zip(tensors, axes)] if not _all_are_equal(dimensions): raise ValueError('Tensors {} must have the same number of dimensions in ' 'axes {}, but they are {}.'.format( list(tensor_names), list(axes), list(dimensions))) def is_static(tensor_shape): """Checks if the given tensor shape is static.""" if isinstance(tensor_shape, (list, tuple)): return None not in tensor_shape else: return None not in tensor_shape.as_list() # The util functions or classes are not exported. __all__ = []
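The broadcasting logic in `_broadcast_shape_helper` above follows the numpy rules: shapes are aligned from the right, padded with 1s, and a dimension of 1 stretches to match the other. For fully-known shapes (no `None` dimensions) the same rule can be sketched without `tf.compat.v1.Dimension`; `broadcast_shape` here is a hypothetical stand-in, not the TFG function:

```python
import itertools

def broadcast_shape(shape_x, shape_y):
    """Return the numpy-broadcast shape of two fully-known shapes, or None."""
    dims = []
    for dx, dy in itertools.zip_longest(
            reversed(shape_x), reversed(shape_y), fillvalue=1):
        if dx == 1:
            dims.append(dy)          # dx broadcasts up to dy
        elif dy == 1 or dx == dy:
            dims.append(dx)          # dy broadcasts, or the dims already agree
        else:
            return None              # incompatible, e.g. 3 vs 4
    return dims[::-1]                # undo the right-alignment reversal
```

The real helper additionally handles unknown (`None`) dimensions by optimistically assuming the program is correct, which is why it cannot simply return a plain list of ints.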
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Shape utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import itertools import numpy as np import six import tensorflow as tf def _broadcast_shape_helper(shape_x, shape_y): """Helper function for is_broadcast_compatible and broadcast_shape. Args: shape_x: A `TensorShape`. shape_y: A `TensorShape`. Returns: Returns None if the shapes are not broadcast compatible, or a list containing the broadcasted dimensions otherwise. """ # To compute the broadcasted dimensions, we zip together shape_x and shape_y, # and pad with 1 to make them the same length. broadcasted_dims = reversed( list( six.moves.zip_longest( reversed(shape_x.dims), reversed(shape_y.dims), fillvalue=tf.compat.v1.Dimension(1)))) # Next we combine the dimensions according to the numpy broadcasting rules. # http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html return_dims = [] for (dim_x, dim_y) in broadcasted_dims: if dim_x.value is None or dim_y.value is None: # One or both dimensions is unknown. If either dimension is greater than # 1, we assume that the program is correct, and the other dimension will # be broadcast to match it. 
if dim_x.value is not None and dim_x.value > 1: return_dims.append(dim_x) elif dim_y.value is not None and dim_y.value > 1: return_dims.append(dim_y) else: return_dims.append(None) elif dim_x.value == 1: # We will broadcast dim_x to dim_y. return_dims.append(dim_y) elif dim_y.value == 1: # We will broadcast dim_y to dim_x. return_dims.append(dim_x) elif dim_x.value == dim_y.value: # The dimensions are compatible, so output is the same size in that # dimension. return_dims.append(dim_x.merge_with(dim_y)) else: return None return return_dims def is_broadcast_compatible(shape_x, shape_y): """Returns True if `shape_x` and `shape_y` are broadcast compatible. Args: shape_x: A `TensorShape`. shape_y: A `TensorShape`. Returns: True if a shape exists that both `shape_x` and `shape_y` can be broadcasted to. False otherwise. """ if shape_x.ndims is None or shape_y.ndims is None: return False return _broadcast_shape_helper(shape_x, shape_y) is not None def get_broadcasted_shape(shape_x, shape_y): """Returns the common shape for broadcast compatible shapes. Args: shape_x: A `TensorShape`. shape_y: A `TensorShape`. Returns: Returns None if the shapes are not broadcast compatible, or a list containing the broadcasted dimensions otherwise. 
""" if shape_x.ndims is None or shape_y.ndims is None: return None return _broadcast_shape_helper(shape_x, shape_y) def _check_type(variable, variable_name, expected_type): """Helper function for checking that inputs are of expected types.""" if isinstance(expected_type, (list, tuple)): expected_type_name = 'list or tuple' else: expected_type_name = expected_type.__name__ if not isinstance(variable, expected_type): raise ValueError('{} must be of type {}, but it is {}'.format( variable_name, expected_type_name, type(variable).__name__)) def _fix_axis_dim_pairs(pairs, name): """Helper function to make `pairs` a list if needed.""" if isinstance(pairs[0], int): pairs = [pairs] for pair in pairs: if len(pair) != 2: raise ValueError( '{} must consist of axis-value pairs, but found {}'.format( name, pair)) return pairs def _get_dim(tensor, axis): """Returns dimensionality of a tensor for a given axis.""" return tf.compat.v1.dimension_value(tensor.shape[axis]) def check_static(tensor, has_rank=None, has_rank_greater_than=None, has_rank_less_than=None, has_dim_equals=None, has_dim_greater_than=None, has_dim_less_than=None, tensor_name='tensor'): """Checks static shapes for rank and dimension constraints. This function can be used to check a tensor's shape for multiple rank and dimension constraints at the same time. Args: tensor: Any tensor with a static shape. has_rank: An int or `None`. If not `None`, the function checks if the rank of the `tensor` equals to `has_rank`. has_rank_greater_than: An int or `None`. If not `None`, the function checks if the rank of the `tensor` is greater than `has_rank_greater_than`. has_rank_less_than: An int or `None`. If not `None`, the function checks if the rank of the `tensor` is less than `has_rank_less_than`. has_dim_equals: Either a tuple or list containing a single pair of `int`s, or a list or tuple containing multiple such pairs. 
Each pair is in the form (`axis`, `dim`), which means the function should check if `tensor.shape[axis] == dim`. has_dim_greater_than: Either a tuple or list containing a single pair of `int`s, or a list or tuple containing multiple such pairs. Each pair is in the form (`axis`, `dim`), which means the function should check if `tensor.shape[axis] > dim`. has_dim_less_than: Either a tuple or list containing a single pair of `int`s, or a list or tuple containing multiple such pairs. Each pair is in the form (`axis`, `dim`), which means the function should check if `tensor.shape[axis] < dim`. tensor_name: A name for `tensor` to be used in the error message if one is thrown. Raises: ValueError: If any input is not of the expected types, or if one of the checks described above fails. """ rank = tensor.shape.ndims def _raise_value_error_for_rank(variable, error_msg): raise ValueError( '{} must have a rank {} {}, but it has rank {} and shape {}'.format( tensor_name, error_msg, variable, rank, tensor.shape.as_list())) def _raise_value_error_for_dim(tensor_name, error_msg, axis, value): raise ValueError( '{} must have {} {} dimensions in axis {}, but it has shape {}'.format( tensor_name, error_msg, value, axis, tensor.shape.as_list())) if has_rank is not None: _check_type(has_rank, 'has_rank', int) if rank != has_rank: _raise_value_error_for_rank(has_rank, 'of') if has_rank_greater_than is not None: _check_type(has_rank_greater_than, 'has_rank_greater_than', int) if rank <= has_rank_greater_than: _raise_value_error_for_rank(has_rank_greater_than, 'greater than') if has_rank_less_than is not None: _check_type(has_rank_less_than, 'has_rank_less_than', int) if rank >= has_rank_less_than: _raise_value_error_for_rank(has_rank_less_than, 'less than') if has_dim_equals is not None: _check_type(has_dim_equals, 'has_dim_equals', (list, tuple)) has_dim_equals = _fix_axis_dim_pairs(has_dim_equals, 'has_dim_equals') for axis, value in has_dim_equals: if _get_dim(tensor, axis) != value: 
        _raise_value_error_for_dim(tensor_name, 'exactly', axis, value)

  if has_dim_greater_than is not None:
    _check_type(has_dim_greater_than, 'has_dim_greater_than', (list, tuple))
    has_dim_greater_than = _fix_axis_dim_pairs(has_dim_greater_than,
                                               'has_dim_greater_than')
    for axis, value in has_dim_greater_than:
      if not _get_dim(tensor, axis) > value:
        _raise_value_error_for_dim(tensor_name, 'greater than', axis, value)

  if has_dim_less_than is not None:
    _check_type(has_dim_less_than, 'has_dim_less_than', (list, tuple))
    has_dim_less_than = _fix_axis_dim_pairs(has_dim_less_than,
                                            'has_dim_less_than')
    for axis, value in has_dim_less_than:
      if not _get_dim(tensor, axis) < value:
        _raise_value_error_for_dim(tensor_name, 'less than', axis, value)


def _check_tensors(tensors, tensors_name):
  """Helper function to check the type and length of tensors."""
  _check_type(tensors, tensors_name, (list, tuple))
  if len(tensors) < 2:
    raise ValueError('At least 2 tensors are required.')


def _check_tensor_axis_lists(tensors, tensors_name, axes, axes_name):
  """Helper function to check that lengths of `tensors` and `axes` match."""
  _check_type(axes, axes_name, (list, tuple))
  if len(tensors) != len(axes):
    raise ValueError(
        '{} and {} must have the same length, but are {} and {}.'.format(
            tensors_name, axes_name, len(tensors), len(axes)))


def _fix_axes(tensors, axes, allow_negative):
  """Makes all axes positive and checks for out of bound errors."""
  axes = [
      axis + tensor.shape.ndims if axis < 0 else axis
      for tensor, axis in zip(tensors, axes)
  ]
  if not all(
      ((allow_negative or (not allow_negative and axis >= 0)) and
       axis < tensor.shape.ndims) for tensor, axis in zip(tensors, axes)):
    rank_axis_pairs = zip([tensor.shape.ndims for tensor in tensors], axes)
    raise ValueError(
        'Some axes are out of bounds. Given rank-axes pairs: {}'.format(
            [pair for pair in rank_axis_pairs]))
  return axes


def _give_default_names(list_of_objects, name):
  """Helper function to give default names to objects for error messages."""
  return [name + '_' + str(index) for index in range(len(list_of_objects))]


def _all_are_equal(list_of_objects):
  """Helper function to check if all the items in a list are the same."""
  if not list_of_objects:
    return True
  if isinstance(list_of_objects[0], list):
    list_of_objects = [tuple(obj) for obj in list_of_objects]
  return len(set(list_of_objects)) == 1


def _raise_error(tensor_names, batch_shapes):
  formatted_list = [(name, batch_shape)
                    for name, batch_shape in zip(tensor_names, batch_shapes)]
  raise ValueError(
      'Not all batch dimensions are identical: {}'.format(formatted_list))


def compare_batch_dimensions(tensors,
                             last_axes,
                             broadcast_compatible,
                             initial_axes=0,
                             tensor_names=None):
  """Compares batch dimensions for tensors with static shapes.

  Args:
    tensors: A list or tuple of tensors with static shapes to compare.
    last_axes: An `int` or a list or tuple of `int`s with the same length as
      `tensors`. If an `int`, it is assumed to be the same for all the
      tensors. Each entry should correspond to the last axis of the batch
      (with zero based indices). For instance, if there is only a single batch
      dimension, last axis should be `0`.
    broadcast_compatible: A 'bool', whether the batch shapes can be
      broadcast compatible in the numpy sense.
    initial_axes: An `int` or a list or tuple of `int`s with the same length
      as `tensors`. If an `int`, it is assumed to be the same for all the
      tensors. Each entry should correspond to the first axis of the batch
      (with zero based indices). Default value is `0`.
    tensor_names: Names of `tensors` to be used in the error message if one is
      thrown. If left as `None`, `tensor_i` is used.

  Raises:
    ValueError: If inputs have unexpected types, or if given axes are out of
      bounds, or if the check fails.
  """
  _check_tensors(tensors, 'tensors')
  if isinstance(initial_axes, int):
    initial_axes = [initial_axes] * len(tensors)
  if isinstance(last_axes, int):
    last_axes = [last_axes] * len(tensors)
  _check_tensor_axis_lists(tensors, 'tensors', initial_axes, 'initial_axes')
  _check_tensor_axis_lists(tensors, 'tensors', last_axes, 'last_axes')
  initial_axes = _fix_axes(tensors, initial_axes, allow_negative=True)
  last_axes = _fix_axes(tensors, last_axes, allow_negative=True)
  batch_shapes = [
      tensor.shape[init:last + 1]
      for tensor, init, last in zip(tensors, initial_axes, last_axes)
  ]
  if tensor_names is None:
    tensor_names = _give_default_names(tensors, 'tensor')
  if not broadcast_compatible:
    batch_ndims = [batch_shape.ndims for batch_shape in batch_shapes]
    batch_shapes = [batch_shape.as_list() for batch_shape in batch_shapes]
    if not _all_are_equal(batch_ndims):
      # If not all batch shapes have the same length, they cannot be identical.
      _raise_error(tensor_names, batch_shapes)
    for dims in zip(*batch_shapes):
      if _all_are_equal(dims):
        # Continue if all dimensions are None or have the same value.
        continue
      if None not in dims:
        # If all dimensions are known at this point, they are not identical.
        _raise_error(tensor_names, batch_shapes)
      # At this point dims must consist of both None's and int's.
      if len(set(dims)) != 2:
        # set(dims) should return (None, some_int).
        # Otherwise shapes are not identical.
        _raise_error(tensor_names, batch_shapes)
  else:
    if not all(
        is_broadcast_compatible(shape1, shape2)
        for shape1, shape2 in itertools.combinations(batch_shapes, 2)):
      raise ValueError(
          'Not all batch dimensions are broadcast-compatible: {}'.format([
              (name, batch_shape.as_list())
              for name, batch_shape in zip(tensor_names, batch_shapes)
          ]))


def compare_dimensions(tensors, axes, tensor_names=None):
  """Compares dimensions of tensors with static or dynamic shapes.

  Args:
    tensors: A list or tuple of tensors to compare.
    axes: An `int` or a list or tuple of `int`s with the same length as
      `tensors`.
      If an `int`, it is assumed to be the same for all the tensors. Each
      entry should correspond to the axis of the tensor being compared.
    tensor_names: Names of `tensors` to be used in the error message if one is
      thrown. If left as `None`, their `Tensor.name` fields are used instead.

  Raises:
    ValueError: If inputs have unexpected types, or if given axes are out of
      bounds, or if the check fails.
  """
  _check_tensors(tensors, 'tensors')
  if isinstance(axes, int):
    axes = [axes] * len(tensors)
  _check_tensor_axis_lists(tensors, 'tensors', axes, 'axes')
  axes = _fix_axes(tensors, axes, allow_negative=False)
  if tensor_names is None:
    tensor_names = _give_default_names(tensors, 'tensor')
  dimensions = [_get_dim(tensor, axis) for tensor, axis in zip(tensors, axes)]
  if not _all_are_equal(dimensions):
    raise ValueError('Tensors {} must have the same number of dimensions in '
                     'axes {}, but they are {}.'.format(
                         list(tensor_names), list(axes), list(dimensions)))


def is_static(tensor_shape):
  """Checks if the given tensor shape is static."""
  if isinstance(tensor_shape, (list, tuple)):
    return None not in tensor_shape
  else:
    return None not in tensor_shape.as_list()


def add_batch_dimensions(tensor, tensor_name, batch_shape, last_axis=None):
  """Broadcasts tensor to match batch dimensions.

  It will either broadcast to all provided batch dimensions, therefore
  increasing tensor shape by len(batch_shape) dimensions or will do nothing if
  batch dimensions already present and equal to expected batch dimensions.

  Args:
    tensor: A tensor to broadcast of a shape [A1, ..., An, B1, ..., Bn]. Where
      [A1, ..., An] is batch dimensions (it is allowed to have no batch
      dimensions), and [B1, ..., Bn] are other tensor dimensions. If
      [A1, ..., An] are present but different from values in `batch_shape`
      the error will be thrown.
    tensor_name: Name of `tensor` to be used in the error message if one is
      thrown.
    batch_shape: list of `int` representing desired batch dimensions.
    last_axis: An `int` corresponding to the last axis of the batch (with zero
      based indices). For instance, if there is only a single batch dimension,
      last axis should be `0`. If there is no batch dimensions it must be set
      to `None`.

  Returns:
    Tensor of a shape `batch_shape` + [B1, ..., Bn] or unmodified tensor if
    `batch_shape` = [A1, ..., An].

  Raises:
    ValueError if tensor already has batch dimensions different from desired
    one.
  """
  if last_axis is not None:
    last_axis = _fix_axes([tensor], [last_axis], allow_negative=True)[0]
    tensor_batch_shape = tensor.shape.as_list()[:last_axis + 1]
    if np.array_equal(tensor_batch_shape, batch_shape):
      return tensor
    elif tensor_batch_shape:
      raise ValueError(
          'Tensor {} has batch dimensions different from target '
          'one. Found {}, but expected no batch dimensions or {}'.format(
              tensor_name, tensor.shape[:last_axis + 1], batch_shape))

  return tf.broadcast_to(tensor, batch_shape + list(tensor.shape))


# The util functions or classes are not exported.
__all__ = []
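The axis-normalization and equality helpers above are plain Python once the TensorFlow shape objects are set aside. A minimal standalone sketch of that logic (the function names here are hypothetical stand-ins, not the library API; plain ranks replace `tensor.shape.ndims`):

```python
def fix_axes(ranks, axes, allow_negative):
    """Wraps negative axes by rank and checks bounds, like `_fix_axes`."""
    axes = [axis + rank if axis < 0 else axis
            for rank, axis in zip(ranks, axes)]
    for rank, axis in zip(ranks, axes):
        # An axis is valid when it falls inside [0, rank) after wrapping
        # (negative results are only tolerated when allow_negative is set).
        if not ((allow_negative or axis >= 0) and axis < rank):
            raise ValueError('Some axes are out of bounds.')
    return axes


def all_are_equal(items):
    """True when every item compares equal, mirroring `_all_are_equal`."""
    if not items:
        return True
    if isinstance(items[0], list):
        # Lists are unhashable; convert to tuples before building the set.
        items = [tuple(obj) for obj in items]
    return len(set(items)) == 1
```

For instance, `fix_axes([4, 3], [-1, 1], True)` normalizes the `-1` against rank 4 and returns `[3, 1]`.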
1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/util/type_alias.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Type aliases for Python 3 typing.""" from typing import Union, Sequence import numpy as np import tensorflow as tf Integer = Union[int, np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16, np.uint32, np.uint64] Float = Union[float, np.float16, np.float32, np.float64] TensorLike = Union[Integer, Float, Sequence, np.ndarray, tf.Tensor, tf.Variable]
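The aliases in `type_alias.py` exist purely for static typing. A stdlib-only sketch of the same `Union` pattern (names here are hypothetical; the real `TensorLike` additionally covers numpy scalars/arrays, `tf.Tensor`, and `tf.Variable`, which are omitted so this runs without those packages):

```python
from typing import Sequence, Union

# Hypothetical, trimmed-down mirror of the module's aliases.
Scalar = Union[int, float]
TensorLikeDemo = Union[Scalar, Sequence]


def first_element(x: TensorLikeDemo) -> float:
    """Consumes any TensorLikeDemo value uniformly: scalars pass through,
    sequences contribute their first element."""
    if isinstance(x, (int, float)):
        return float(x)
    return float(x[0])
```

Functions annotated with such an alias accept scalars, Python sequences, and array types interchangeably, which is exactly how tensorflow_graphics ops accept their inputs.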
./tensorflow_graphics/projects/nasa/lib/models.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Model Implementations.""" import tensorflow.compat.v1 as tf from tensorflow_graphics.projects.nasa.lib import model_utils tf.disable_eager_execution() def get_model(hparams): return model_dict[hparams.model](hparams) def nasa(hparams): """Construct the model function of NASA.""" # Parse model parameters from global configurations. n_parts = hparams.n_parts n_dims = hparams.n_dims transform_dims = (n_dims + 1)**2 # Using homogeneous coordinates. 
lr = hparams.lr level_set = hparams.level_set label_w = hparams.label_w minimal_w = hparams.minimal_w sample_vert = hparams.sample_vert sample_bbox = hparams.sample_bbox def _model_fn(features, labels, mode, params=None): is_training = (mode == tf.estimator.ModeKeys.TRAIN) batch_size = features['point'].shape[0] n_sample_frames = features['point'].shape[1] accum_size = batch_size * n_sample_frames if params == 'gen_mesh': latent_output = tf.constant([0, 0, 0], dtype=tf.float32) latent_holder = tf.placeholder(tf.float32, latent_output.shape) # Decode the tranformed shapes and compute the losses with tf.variable_scope('shape/decode', reuse=tf.AUTO_REUSE): transform = tf.reshape(features['transform'], [accum_size, n_parts, transform_dims]) joint = tf.reshape(features['joint'], [accum_size, n_parts, n_dims]) points = features['point'] n_points = tf.shape(points)[2] points = tf.reshape(points, [accum_size, n_points, n_dims]) if is_training: labels = tf.reshape(features['label'], [accum_size, n_points, 1]) predictions, parts = model_utils.nasa_indicator( points, transform, joint, hparams, need_transformation=True) indicator_loss = model_utils.compute_l2_indicator_loss( labels, predictions) minimal_loss = tf.reduce_mean(tf.square(parts[..., :sample_bbox, :])) part_points = tf.reshape(features['vert'], [accum_size, -1, n_dims]) part_weight = tf.reshape(features['weight'], [accum_size, -1, n_parts]) if sample_vert > 0: # If 0, use all vertices. 
n_vert = part_points.shape[1] sample_indices = tf.random.uniform([accum_size, sample_vert], minval=0, maxval=n_vert, dtype=tf.int32) part_points = tf.gather( part_points, sample_indices, axis=1, batch_dims=1) part_weight = tf.gather( part_weight, sample_indices, axis=1, batch_dims=1) unused_var, pred_parts = model_utils.nasa_indicator( part_points, transform, joint, hparams, need_transformation=True) part_label = tf.argmax(part_weight, axis=-1) part_label = tf.one_hot( part_label, depth=n_parts, axis=-1, dtype=tf.float32) * level_set part_label = tf.expand_dims( tf.transpose(part_label, [0, 2, 1]), axis=-1) label_loss = model_utils.compute_l2_indicator_loss( part_label, pred_parts) else: n_points = tf.shape(features['point'])[2] points = tf.reshape(features['point'], [accum_size, n_points, n_dims]) predictions, parts = model_utils.nasa_indicator( points, transform, joint, hparams, need_transformation=True, noise=labels) if params == 'gen_mesh': return latent_holder, latent_output, tf.concat( [parts, tf.expand_dims(predictions, axis=1)], axis=1) tf.summary.scalar('indicator', indicator_loss) loss = indicator_loss if label_w > 0: tf.summary.scalar('label', label_loss) indicator_loss += label_loss * label_w if minimal_w > 0: tf.summary.scalar('minimal', minimal_loss) indicator_loss += minimal_loss * minimal_w global_step = tf.train.get_or_create_global_step() optimizer = tf.train.AdamOptimizer(lr) update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS) with tf.control_dependencies(update_ops): train_op = optimizer.minimize( indicator_loss, global_step=global_step, name='optimizer_shape') return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op) return _model_fn model_dict = { 'nasa': nasa, }
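One detail worth unpacking from `_model_fn` above is the per-batch vertex sampling: `tf.gather(part_points, sample_indices, axis=1, batch_dims=1)` picks, for every batch element `b`, the rows of `part_points[b]` listed in `sample_indices[b]`. A pure-Python sketch of those semantics, with nested lists standing in for tensors:

```python
def batched_gather(points, indices):
    """Mirrors tf.gather(points, indices, axis=1, batch_dims=1):
    out[b][i] == points[b][indices[b][i]] for every batch element b."""
    return [[batch[i] for i in idx] for batch, idx in zip(points, indices)]
```

With `batch_dims=1` the leading batch axis of `points` and `indices` must match; each batch element is gathered with its own index list, which is how the model samples a different random subset of vertices per frame.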
./tensorflow_graphics/datasets/modelnet40/modelnet40_test.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Tests the ModelNet40 dataset with fake data.""" import os import tensorflow_datasets as tfds from tensorflow_graphics.datasets import modelnet40 class ModelNet40Test(tfds.testing.DatasetBuilderTestCase): """Tests the ModelNet40 dataset with fake data.""" DATASET_CLASS = modelnet40.ModelNet40 SPLITS = { "train": 24, # Number of fake train example "test": 16, # Number of fake test example } # If you are calling `download/download_and_extract` with a dict, like: # dl_manager.download({'some_key': 'http://a.org/out.txt', ...}) # then the tests needs to provide the fake output paths relative to the # fake data directory DL_EXTRACT_RESULT = "" EXAMPLE_DIR = os.path.join(os.path.dirname(__file__), "fakes") # SKIP_CHECKSUMS = True if __name__ == "__main__": tfds.testing.test_main()
./tensorflow_graphics/rendering/camera/tests/__init__.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
./tensorflow_graphics/datasets/shapenet/shapenet.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Shapenet Core dataset.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import collections import csv import json import os import textwrap import tensorflow.compat.v2 as tf import tensorflow_datasets as tfds from tensorflow_datasets import features as tfds_features from tensorflow_graphics.datasets import features as tfg_features _CITATION = """ @techreport{shapenet2015, title = {{ShapeNet: An Information-Rich 3D Model Repository}}, author = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher}, number = {arXiv:1512.03012 [cs.GR]}, institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago}, year = {2015} } """ _DESCRIPTION = """ ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked to an appropriate synset in WordNet (version 3.0). 
The synsets will be extracted from the taxonomy.json file in the ShapeNetCore.v2.zip archive and the splits from http://shapenet.cs.stanford.edu/shapenet/obj-zip/SHREC16/all.csv """ _TAXONOMY_FILE_NAME = 'taxonomy.json' _SPLIT_FILE_URL = \ 'http://shapenet.cs.stanford.edu/shapenet/obj-zip/SHREC16/all.csv' class ShapenetConfig(tfds.core.BuilderConfig): """Base class for Shapenet BuilderConfigs. The Shapenet database builder delegates the implementation of info, split_generators and generate_examples to the specified ShapenetConfig. This is done to allow multiple versions of the dataset. """ def info(self, dataset_builder): """Delegated Shapenet._info.""" raise NotImplementedError('Abstract method') def split_generators(self, dl_manager, dataset_builder): """Delegated Shapenet._split_generators.""" raise NotImplementedError('Abstract method') def generate_examples(self, **kwargs): """Delegated Shapenet._generate_examples.""" raise NotImplementedError('Abstract method') class MeshConfig(ShapenetConfig): """A Shapenet config for loading the original .obj files.""" _MODEL_SUBPATH = os.path.join('models', 'model_normalized.obj') def __init__(self, model_subpath=_MODEL_SUBPATH): super(MeshConfig, self).__init__( name='shapenet_trimesh', description=_DESCRIPTION, version=tfds.core.Version('1.0.0')) self.model_subpath = model_subpath def info(self, dataset_builder): return tfds.core.DatasetInfo( builder=dataset_builder, description=_DESCRIPTION, features=tfds_features.FeaturesDict({ 'trimesh': tfg_features.TriangleMesh(), 'label': tfds_features.ClassLabel(num_classes=353), 'model_id': tfds_features.Text(), }), supervised_keys=('trimesh', 'label'), # Homepage of the dataset for documentation homepage='https://shapenet.org/', citation=_CITATION, ) def split_generators(self, dl_manager, dataset_builder): # Extract the synset ids from the taxonomy file and update the ClassLabel # feature. 
with tf.io.gfile.GFile( os.path.join(dl_manager.manual_dir, _TAXONOMY_FILE_NAME)) as taxonomy_file: labels = [x['synsetId'] for x in json.loads(taxonomy_file.read())] # Remove duplicate labels (the json file contains two identical entries # for synset '04591713'). labels = list(collections.OrderedDict.fromkeys(labels)) dataset_builder.info.features['label'].names = labels split_file = dl_manager.download(_SPLIT_FILE_URL) fieldnames = ['id', 'synset', 'sub_synset', 'model_id', 'split'] model_items = collections.defaultdict(list) with tf.io.gfile.GFile(split_file) as csvfile: for row in csv.DictReader(csvfile, fieldnames): model_items[row['split']].append(row) return [ tfds.core.SplitGenerator( name=tfds.Split.TRAIN, gen_kwargs={ 'base_dir': dl_manager.manual_dir, 'models': model_items['train'] }, ), tfds.core.SplitGenerator( name=tfds.Split.TEST, gen_kwargs={ 'base_dir': dl_manager.manual_dir, 'models': model_items['test'] }, ), tfds.core.SplitGenerator( name=tfds.Split.VALIDATION, gen_kwargs={ 'base_dir': dl_manager.manual_dir, 'models': model_items['val'] }, ), ] def generate_examples(self, base_dir, models): """Yields examples. The structure of the examples: { 'trimesh': tensorflow_graphics.datasets.features.TriangleMesh 'label': tensorflow_datasets.features.ClassLabel 'model_id': tensorflow_datasets.features.Text } Args: base_dir: The base directory of shapenet. models: The list of models in the split. """ for model in models: synset = model['synset'] model_id = model['model_id'] model_filepath = os.path.join(base_dir, synset, model_id, self.model_subpath) # If the model doesn't exist, skip it. if not tf.io.gfile.exists(model_filepath): continue yield model_id, { 'trimesh': model_filepath, 'label': synset, 'model_id': model_id, } class Shapenet(tfds.core.GeneratorBasedBuilder): """ShapeNetCore V2. 
Example usage of the dataset: import tensorflow_datasets as tfds from tensorflow_graphics.datasets.shapenet import Shapenet data_set = Shapenet.load( split='train', download_and_prepare_kwargs={ 'download_config': tfds.download.DownloadConfig(manual_dir='~/shapenet_base') }) for example in data_set.take(1): trimesh, label, model_id = example['trimesh'], example['label'], example['model_id'] """ BUILDER_CONFIGS = [MeshConfig()] VERSION = tfds.core.Version('1.0.0') @staticmethod def load(*args, **kwargs): return tfds.load('shapenet', *args, **kwargs) # pytype: disable=wrong-arg-count MANUAL_DOWNLOAD_INSTRUCTIONS = textwrap.dedent("""\ manual_dir should contain the extracted ShapeNetCore.v2.zip archive. You need to register on https://shapenet.org/download/shapenetcore in order to get the link to download the dataset. """) def _info(self): return self.builder_config.info(self) def _split_generators(self, dl_manager): """Returns SplitGenerators.""" return self.builder_config.split_generators(dl_manager, self) def _generate_examples(self, **kwargs): """Yields examples.""" return self.builder_config.generate_examples(**kwargs)
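Two small idioms in `split_generators` above are easy to miss: `collections.OrderedDict.fromkeys` drops the duplicated '04591713' synset entry while keeping label order stable, and a `defaultdict(list)` buckets the CSV rows per split. A standalone sketch of both (the rows below are made-up stand-ins for the SHREC16 all.csv contents):

```python
import collections


def dedup_preserving_order(labels):
    """Same trick as the builder: dict keys are unique and keep insertion
    order, so this removes duplicates without reordering labels."""
    return list(collections.OrderedDict.fromkeys(labels))


def bucket_by_split(rows):
    """Groups csv.DictReader-style rows by their 'split' field."""
    buckets = collections.defaultdict(list)
    for row in rows:
        buckets[row['split']].append(row)
    return buckets
```

The ordering guarantee matters because the deduplicated list is assigned to `ClassLabel.names`, so a different order would silently change every integer label.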
Example usage of the dataset: import tensorflow_datasets as tfds from tensorflow_graphics.datasets.shapenet import Shapenet data_set = Shapenet.load( split='train', download_and_prepare_kwargs={ 'download_config': tfds.download.DownloadConfig(manual_dir='~/shapenet_base') }) for example in data_set.take(1): trimesh, label, model_id = example['trimesh'], example['label'], example['model_id'] """ BUILDER_CONFIGS = [MeshConfig()] VERSION = tfds.core.Version('1.0.0') @staticmethod def load(*args, **kwargs): return tfds.load('shapenet', *args, **kwargs) # pytype: disable=wrong-arg-count MANUAL_DOWNLOAD_INSTRUCTIONS = textwrap.dedent("""\ manual_dir should contain the extracted ShapeNetCore.v2.zip archive. You need to register on https://shapenet.org/download/shapenetcore in order to get the link to download the dataset. """) def _info(self): return self.builder_config.info(self) def _split_generators(self, dl_manager): """Returns SplitGenerators.""" return self.builder_config.split_generators(dl_manager, self) def _generate_examples(self, **kwargs): """Yields examples.""" return self.builder_config.generate_examples(**kwargs)
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/rendering/camera/orthographic.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. r"""This module implements orthographic camera functionalities. An orthographic camera represents three-dimensional objects in two dimensions by parallel projection, in which the projection lines are parallel to the camera axis. The camera axis is the line perpendicular to the image plane starting at the camera center. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def project(point_3d, name=None): r"""Projects a 3d point onto the 2d camera plane. Projects a 3d point \\((x, y, z)\\) to a 2d point \\((x', y')\\) onto the image plane, with $$ \begin{matrix} x' = x, & y' = y. \end{matrix} $$ Note: In the following, A1 to An are optional batch dimensions. Args: point_3d: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents a 3d point to project. name: A name for this op that defaults to "orthographic_project". Returns: A tensor of shape `[A1, ..., An, 2]`, where the last dimension represents a 2d point. Raises: ValueError: If the shape of `point_3d` is not supported. 
""" with tf.compat.v1.name_scope(name, "orthographic_project", [point_3d]): point_3d = tf.convert_to_tensor(value=point_3d) shape.check_static( tensor=point_3d, tensor_name="point_3d", has_dim_equals=(-1, 3)) point_xy, _ = tf.compat.v1.split(point_3d, (2, 1), axis=-1) return point_xy def ray(point_2d, name=None): r"""Computes the 3d ray for a 2d point (the z component of the ray is 1). Computes the 3d ray \\((r_x, r_y, 1)\\) for a 2d point \\((x', y')\\) on the image plane. For an orthographic camera the rays are constant over the image plane with $$ \begin{matrix} r_x = 0, & r_y = 0, & z = 1. \end{matrix} $$ Note: In the following, A1 to An are optional batch dimensions. Args: point_2d: A tensor of shape `[A1, ..., An, 2]`, where the last dimension represents a 2d point. name: A name for this op that defaults to "orthographic_ray". Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents a 3d ray. Raises: ValueError: If the shape of `point_2d` is not supported. """ with tf.compat.v1.name_scope(name, "orthographic_ray", [point_2d]): point_2d = tf.convert_to_tensor(value=point_2d) shape.check_static( tensor=point_2d, tensor_name="point_2d", has_dim_equals=(-1, 2)) ones = tf.ones_like(point_2d[..., :1]) # point_2d is multiplied by zero to ensure it has defined gradients. return tf.concat((point_2d * 0.0, ones), axis=-1) def unproject(point_2d, depth, name=None): r"""Unprojects a 2d point in 3d. Unprojects a 2d point \\((x', y')\\) to a 3d point \\((x, y, z)\\) given its depth \\(z\\), with $$ \begin{matrix} x = x', & y = y', & z = z. \end{matrix} $$ Note: In the following, A1 to An are optional batch dimensions. Args: point_2d: A tensor of shape `[A1, ..., An, 2]`, where the last dimension represents a 2d point to unproject. depth: A tensor of shape `[A1, ..., An, 1]`, where the last dimension represents the depth of a 2d point. name: A name for this op that defaults to "orthographic_unproject". 
Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents a 3d point. Raises: ValueError: If the shape of `point_2d`, `depth` is not supported. """ with tf.compat.v1.name_scope(name, "orthographic_unproject", [point_2d, depth]): point_2d = tf.convert_to_tensor(value=point_2d) depth = tf.convert_to_tensor(value=depth) shape.check_static( tensor=point_2d, tensor_name="point_2d", has_dim_equals=(-1, 2)) shape.check_static( tensor=depth, tensor_name="depth", has_dim_equals=(-1, 1)) shape.compare_batch_dimensions( tensors=(point_2d, depth), tensor_names=("point_2d", "depth"), last_axes=-2, broadcast_compatible=False) return tf.concat((point_2d, depth), axis=-1) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. r"""This module implements orthographic camera functionalities. An orthographic camera represents three-dimensional objects in two dimensions by parallel projection, in which the projection lines are parallel to the camera axis. The camera axis is the line perpendicular to the image plane starting at the camera center. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def project(point_3d, name=None): r"""Projects a 3d point onto the 2d camera plane. Projects a 3d point \\((x, y, z)\\) to a 2d point \\((x', y')\\) onto the image plane, with $$ \begin{matrix} x' = x, & y' = y. \end{matrix} $$ Note: In the following, A1 to An are optional batch dimensions. Args: point_3d: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents a 3d point to project. name: A name for this op that defaults to "orthographic_project". Returns: A tensor of shape `[A1, ..., An, 2]`, where the last dimension represents a 2d point. Raises: ValueError: If the shape of `point_3d` is not supported. 
""" with tf.compat.v1.name_scope(name, "orthographic_project", [point_3d]): point_3d = tf.convert_to_tensor(value=point_3d) shape.check_static( tensor=point_3d, tensor_name="point_3d", has_dim_equals=(-1, 3)) point_xy, _ = tf.compat.v1.split(point_3d, (2, 1), axis=-1) return point_xy def ray(point_2d, name=None): r"""Computes the 3d ray for a 2d point (the z component of the ray is 1). Computes the 3d ray \\((r_x, r_y, 1)\\) for a 2d point \\((x', y')\\) on the image plane. For an orthographic camera the rays are constant over the image plane with $$ \begin{matrix} r_x = 0, & r_y = 0, & z = 1. \end{matrix} $$ Note: In the following, A1 to An are optional batch dimensions. Args: point_2d: A tensor of shape `[A1, ..., An, 2]`, where the last dimension represents a 2d point. name: A name for this op that defaults to "orthographic_ray". Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents a 3d ray. Raises: ValueError: If the shape of `point_2d` is not supported. """ with tf.compat.v1.name_scope(name, "orthographic_ray", [point_2d]): point_2d = tf.convert_to_tensor(value=point_2d) shape.check_static( tensor=point_2d, tensor_name="point_2d", has_dim_equals=(-1, 2)) ones = tf.ones_like(point_2d[..., :1]) # point_2d is multiplied by zero to ensure it has defined gradients. return tf.concat((point_2d * 0.0, ones), axis=-1) def unproject(point_2d, depth, name=None): r"""Unprojects a 2d point in 3d. Unprojects a 2d point \\((x', y')\\) to a 3d point \\((x, y, z)\\) given its depth \\(z\\), with $$ \begin{matrix} x = x', & y = y', & z = z. \end{matrix} $$ Note: In the following, A1 to An are optional batch dimensions. Args: point_2d: A tensor of shape `[A1, ..., An, 2]`, where the last dimension represents a 2d point to unproject. depth: A tensor of shape `[A1, ..., An, 1]`, where the last dimension represents the depth of a 2d point. name: A name for this op that defaults to "orthographic_unproject". 
Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents a 3d point. Raises: ValueError: If the shape of `point_2d`, `depth` is not supported. """ with tf.compat.v1.name_scope(name, "orthographic_unproject", [point_2d, depth]): point_2d = tf.convert_to_tensor(value=point_2d) depth = tf.convert_to_tensor(value=depth) shape.check_static( tensor=point_2d, tensor_name="point_2d", has_dim_equals=(-1, 2)) shape.check_static( tensor=depth, tensor_name="depth", has_dim_equals=(-1, 1)) shape.compare_batch_dimensions( tensors=(point_2d, depth), tensor_names=("point_2d", "depth"), last_axes=-2, broadcast_compatible=False) return tf.concat((point_2d, depth), axis=-1) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/datasets/features/voxel_feature_test.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Tests for tensorflow_graphics.datasets.features.voxel_feature.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import numpy as np import tensorflow as tf import tensorflow_datasets as tfds from tensorflow_graphics.datasets.features import voxel_feature _TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), 'test_data') class VoxelGridFeatureTest(tfds.testing.FeatureExpectationsTestCase): """Test Cases for VoxelGrid FeatureConnector.""" def test_voxel(self): """Tests voxel I/O and encoding/decoding to DatasetFeature.""" mat_file_path = os.path.join(_TEST_DATA_DIR, 'cube.mat') expected_voxel = np.zeros((16, 16, 16), dtype=np.float32) expected_voxel[4:12, 4:12, 4:12] = 1. 
mat_dict = {'path': mat_file_path, 'key': 'voxels'} raising_inputs = {'path': mat_file_path, 'foo': 'voxels'} wrong_key = {'path': mat_file_path, 'key': 'foo'} wrong_path = {'path': '/somewhere/wrong', 'key': 'voxels'} wrong_dim = np.ones((1, 1, 1, 1)) self.assertFeature( feature=voxel_feature.VoxelGrid((16, 16, 16)), shape=(16, 16, 16), dtype=tf.float32, tests=[ # mat file tfds.testing.FeatureExpectationItem( value=mat_dict, expected=expected_voxel, ), # Voxel Grid tfds.testing.FeatureExpectationItem( value=expected_voxel, expected=expected_voxel, ), tfds.testing.FeatureExpectationItem( value=raising_inputs, raise_cls=ValueError, raise_msg='Missing keys in provided dictionary!', ), tfds.testing.FeatureExpectationItem( value=wrong_key, raise_cls=ValueError, raise_msg='Key `foo` not found in .mat file', ), tfds.testing.FeatureExpectationItem( value=wrong_path, raise_cls=FileNotFoundError, raise_msg='File `/somewhere/wrong` does not exist.', ), tfds.testing.FeatureExpectationItem( value=wrong_dim, raise_cls=ValueError, raise_msg='Only 3D Voxel Grids are supported.', ), ], ) if __name__ == '__main__': tfds.testing.test_main()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Tests for tensorflow_graphics.datasets.features.voxel_feature.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import numpy as np import tensorflow as tf import tensorflow_datasets as tfds from tensorflow_graphics.datasets.features import voxel_feature _TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), 'test_data') class VoxelGridFeatureTest(tfds.testing.FeatureExpectationsTestCase): """Test Cases for VoxelGrid FeatureConnector.""" def test_voxel(self): """Tests voxel I/O and encoding/decoding to DatasetFeature.""" mat_file_path = os.path.join(_TEST_DATA_DIR, 'cube.mat') expected_voxel = np.zeros((16, 16, 16), dtype=np.float32) expected_voxel[4:12, 4:12, 4:12] = 1. 
mat_dict = {'path': mat_file_path, 'key': 'voxels'} raising_inputs = {'path': mat_file_path, 'foo': 'voxels'} wrong_key = {'path': mat_file_path, 'key': 'foo'} wrong_path = {'path': '/somewhere/wrong', 'key': 'voxels'} wrong_dim = np.ones((1, 1, 1, 1)) self.assertFeature( feature=voxel_feature.VoxelGrid((16, 16, 16)), shape=(16, 16, 16), dtype=tf.float32, tests=[ # mat file tfds.testing.FeatureExpectationItem( value=mat_dict, expected=expected_voxel, ), # Voxel Grid tfds.testing.FeatureExpectationItem( value=expected_voxel, expected=expected_voxel, ), tfds.testing.FeatureExpectationItem( value=raising_inputs, raise_cls=ValueError, raise_msg='Missing keys in provided dictionary!', ), tfds.testing.FeatureExpectationItem( value=wrong_key, raise_cls=ValueError, raise_msg='Key `foo` not found in .mat file', ), tfds.testing.FeatureExpectationItem( value=wrong_path, raise_cls=FileNotFoundError, raise_msg='File `/somewhere/wrong` does not exist.', ), tfds.testing.FeatureExpectationItem( value=wrong_dim, raise_cls=ValueError, raise_msg='Only 3D Voxel Grids are supported.', ), ], ) if __name__ == '__main__': tfds.testing.test_main()
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/projects/local_implicit_grid/core/local_implicit_grid_layer.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Local Implicit Grid layer implemented in Tensorflow. """ import tensorflow.compat.v1 as tf from tensorflow_graphics.projects.local_implicit_grid.core import implicit_nets from tensorflow_graphics.projects.local_implicit_grid.core import regular_grid_interpolation layers = tf.keras.layers class LocalImplicitGrid(layers.Layer): """Local Implicit Grid layer. """ def __init__(self, size=(32, 32, 32), in_features=16, out_features=1, x_location_max=1, num_filters=128, net_type="imnet", method="linear", interp=True, min_grid_value=(0, 0, 0), max_grid_value=(1, 1, 1), name="lvoxgrid"): """Initialization function. Args: size: list or tuple of ints, grid dimension in each dimension. in_features: int, number of input channels. out_features: int, number of output channels. x_location_max: float, relative coordinate range for one voxel. num_filters: int, number of filters for refiner. net_type: str, one of occnet/deepsdf. method: str, one of linear/nn. interp: bool, interp final results across neighbors (only in linear mode). min_grid_value: tuple, lower bound of query points. max_grid_value: tuple, upper bound of query points. name: str, name of the layer. 
""" super(LocalImplicitGrid, self).__init__(name=name) # Print warning if x_location_max and method do not match if not ((x_location_max == 1 and method == "linear") or (x_location_max == 2 and method == "nn")): raise ValueError("Bad combination of x_location_max and method.") self.cin = in_features self.cout = out_features self.dim = len(size) self.x_location_max = x_location_max self.interp = interp self.min_grid_value = min_grid_value self.max_grid_value = max_grid_value self.num_filters = num_filters if self.dim not in [2, 3]: raise ValueError("`size` must be tuple or list of len 2 or 3.") if net_type == "imnet": self.net = implicit_nets.ImNet( in_features=in_features, num_filters=num_filters) elif net_type == "deepsdf": self.net = implicit_nets.DeepSDF( in_features=in_features, num_filters=num_filters) else: raise NotImplementedError if method not in ["linear", "nn"]: raise ValueError("`method` must be `linear` or `nn`.") self.method = method self.size_tensor = tf.constant(size) self.size = size if size[0] == size[1] == size[2] == 1: self.cubesize = None else: self.cubesize = tf.constant([1/(r-1) for r in size], dtype=tf.float32) def call(self, grid, pts, training=False): """Forward method for Learnable Voxel Grid. Args: grid: `[batch_size, *self.size, in_features]` tensor, input feature grid. pts: `[batch_size, num_points, dim]` tensor, coordinates of points that are within the range (0, 1). training: bool, flag indicating training phase. Returns: outputs: `[batch_size, num_points, out_features]` tensor, continuous function field value at locations specified at pts. Raises: RuntimeError: dimensions of grid does not match that of self. 
""" # assert that dimensions match grid = tf.ensure_shape(grid, (None, self.size[0], self.size[1], self.size[2], self.cin)) pts = tf.ensure_shape(pts, (None, None, self.dim)) lat, weights, xloc = self._interp(grid, pts) outputs = self._eval_net(lat, weights, xloc, training=training) return outputs def _interp(self, grid, pts): """Interpolation function to get local latent code, weights & relative loc. Args: grid: `[batch_size, *self.size, in_features]` tensor, input feature grid. pts: `[batch_size, num_points, dim]` tensor, coordinates of points that are within the range (0, 1). Returns: lat: `[batch_size, num_points, 2**dim, in_features]` tensor, neighbor latent codes for each input point. weights: `[batch_size, num_points, 2**dim]` tensor, bi/tri-linear interpolation weights for each neighbor. xloc: `[batch_size, num_points, 2**dim, dim]`tensor, relative coordinates. """ lat, weights, xloc = regular_grid_interpolation.get_interp_coefficients( grid, pts, min_grid_value=self.min_grid_value, max_grid_value=self.max_grid_value) xloc *= self.x_location_max return lat, weights, xloc def _eval_net(self, lat, weights, xloc, training=False): """Evaluate function values by querying shared dense network. Args: lat: `[batch_size, num_points, 2**dim, in_features]` tensor, neighbor latent codes for each input point. weights: `[batch_size, num_points, 2**dim]` tensor, bi/tri-linear interpolation weights for each neighbor. xloc: `[batch_size, num_points, 2**dim, dim]`tensor, relative coordinates. training: bool, flag indicating training phase. Returns: values: `[batch_size, num_point, out_features]` tensor, query values. 
""" nb, np, nn, nc = lat.get_shape().as_list() nd = self.dim if self.method == "linear": inputs = tf.concat([xloc, lat], axis=-1) # `[batch_size, num_points, 2**dim, dim+in_features]` inputs = tf.reshape(inputs, [-1, nc+nd]) values = self.net(inputs, training=training) values = tf.reshape(values, [nb, np, nn, self.cout]) # `[batch_size, num_points, 2**dim, out_features]` if self.interp: values = tf.reduce_sum(tf.expand_dims(weights, axis=-1)*values, axis=2) # `[batch_size, num_points out_features]` else: values = (values, weights) else: # nearest neighbor nid = tf.cast(tf.argmax(weights, axis=-1), tf.int32) # [batch_size, num_points] bid = tf.broadcast_to(tf.range(nb, dtype=tf.int32)[:, tf.newaxis], [nb, np]) pid = tf.broadcast_to(tf.range(np, dtype=tf.int32)[tf.newaxis, :], [nb, np]) gather_id = tf.stack((bid, pid, nid), axis=-1) lat_ = tf.gather_nd(lat, gather_id) # [batch_size, num_points, in_feat] xloc_ = tf.gather_nd(xloc, gather_id) # [batch_size, num_points, dim] inputs = tf.concat([xloc_, lat_], axis=-1) inputs = tf.reshape(inputs, [-1, nc+nd]) values = self.net(inputs, training=training) values = tf.reshape(values, [nb, np, self.cout]) # `[batch_size, num_points, out_features]` return values
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Local Implicit Grid layer implemented in Tensorflow. """ import tensorflow.compat.v1 as tf from tensorflow_graphics.projects.local_implicit_grid.core import implicit_nets from tensorflow_graphics.projects.local_implicit_grid.core import regular_grid_interpolation layers = tf.keras.layers class LocalImplicitGrid(layers.Layer): """Local Implicit Grid layer. """ def __init__(self, size=(32, 32, 32), in_features=16, out_features=1, x_location_max=1, num_filters=128, net_type="imnet", method="linear", interp=True, min_grid_value=(0, 0, 0), max_grid_value=(1, 1, 1), name="lvoxgrid"): """Initialization function. Args: size: list or tuple of ints, grid dimension in each dimension. in_features: int, number of input channels. out_features: int, number of output channels. x_location_max: float, relative coordinate range for one voxel. num_filters: int, number of filters for refiner. net_type: str, one of occnet/deepsdf. method: str, one of linear/nn. interp: bool, interp final results across neighbors (only in linear mode). min_grid_value: tuple, lower bound of query points. max_grid_value: tuple, upper bound of query points. name: str, name of the layer. 
""" super(LocalImplicitGrid, self).__init__(name=name) # Print warning if x_location_max and method do not match if not ((x_location_max == 1 and method == "linear") or (x_location_max == 2 and method == "nn")): raise ValueError("Bad combination of x_location_max and method.") self.cin = in_features self.cout = out_features self.dim = len(size) self.x_location_max = x_location_max self.interp = interp self.min_grid_value = min_grid_value self.max_grid_value = max_grid_value self.num_filters = num_filters if self.dim not in [2, 3]: raise ValueError("`size` must be tuple or list of len 2 or 3.") if net_type == "imnet": self.net = implicit_nets.ImNet( in_features=in_features, num_filters=num_filters) elif net_type == "deepsdf": self.net = implicit_nets.DeepSDF( in_features=in_features, num_filters=num_filters) else: raise NotImplementedError if method not in ["linear", "nn"]: raise ValueError("`method` must be `linear` or `nn`.") self.method = method self.size_tensor = tf.constant(size) self.size = size if size[0] == size[1] == size[2] == 1: self.cubesize = None else: self.cubesize = tf.constant([1/(r-1) for r in size], dtype=tf.float32) def call(self, grid, pts, training=False): """Forward method for Learnable Voxel Grid. Args: grid: `[batch_size, *self.size, in_features]` tensor, input feature grid. pts: `[batch_size, num_points, dim]` tensor, coordinates of points that are within the range (0, 1). training: bool, flag indicating training phase. Returns: outputs: `[batch_size, num_points, out_features]` tensor, continuous function field value at locations specified at pts. Raises: RuntimeError: dimensions of grid does not match that of self. 
""" # assert that dimensions match grid = tf.ensure_shape(grid, (None, self.size[0], self.size[1], self.size[2], self.cin)) pts = tf.ensure_shape(pts, (None, None, self.dim)) lat, weights, xloc = self._interp(grid, pts) outputs = self._eval_net(lat, weights, xloc, training=training) return outputs def _interp(self, grid, pts): """Interpolation function to get local latent code, weights & relative loc. Args: grid: `[batch_size, *self.size, in_features]` tensor, input feature grid. pts: `[batch_size, num_points, dim]` tensor, coordinates of points that are within the range (0, 1). Returns: lat: `[batch_size, num_points, 2**dim, in_features]` tensor, neighbor latent codes for each input point. weights: `[batch_size, num_points, 2**dim]` tensor, bi/tri-linear interpolation weights for each neighbor. xloc: `[batch_size, num_points, 2**dim, dim]`tensor, relative coordinates. """ lat, weights, xloc = regular_grid_interpolation.get_interp_coefficients( grid, pts, min_grid_value=self.min_grid_value, max_grid_value=self.max_grid_value) xloc *= self.x_location_max return lat, weights, xloc def _eval_net(self, lat, weights, xloc, training=False): """Evaluate function values by querying shared dense network. Args: lat: `[batch_size, num_points, 2**dim, in_features]` tensor, neighbor latent codes for each input point. weights: `[batch_size, num_points, 2**dim]` tensor, bi/tri-linear interpolation weights for each neighbor. xloc: `[batch_size, num_points, 2**dim, dim]`tensor, relative coordinates. training: bool, flag indicating training phase. Returns: values: `[batch_size, num_point, out_features]` tensor, query values. 
""" nb, np, nn, nc = lat.get_shape().as_list() nd = self.dim if self.method == "linear": inputs = tf.concat([xloc, lat], axis=-1) # `[batch_size, num_points, 2**dim, dim+in_features]` inputs = tf.reshape(inputs, [-1, nc+nd]) values = self.net(inputs, training=training) values = tf.reshape(values, [nb, np, nn, self.cout]) # `[batch_size, num_points, 2**dim, out_features]` if self.interp: values = tf.reduce_sum(tf.expand_dims(weights, axis=-1)*values, axis=2) # `[batch_size, num_points out_features]` else: values = (values, weights) else: # nearest neighbor nid = tf.cast(tf.argmax(weights, axis=-1), tf.int32) # [batch_size, num_points] bid = tf.broadcast_to(tf.range(nb, dtype=tf.int32)[:, tf.newaxis], [nb, np]) pid = tf.broadcast_to(tf.range(np, dtype=tf.int32)[tf.newaxis, :], [nb, np]) gather_id = tf.stack((bid, pid, nid), axis=-1) lat_ = tf.gather_nd(lat, gather_id) # [batch_size, num_points, in_feat] xloc_ = tf.gather_nd(xloc, gather_id) # [batch_size, num_points, dim] inputs = tf.concat([xloc_, lat_], axis=-1) inputs = tf.reshape(inputs, [-1, nc+nd]) values = self.net(inputs, training=training) values = tf.reshape(values, [nb, np, self.cout]) # `[batch_size, num_points, out_features]` return values
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/util/export_api.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """API export functions used to create the automated documentation.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import inspect def get_functions_and_classes(): """Extracts a list of public functions and classes for the API generation. Returns: A list of function and class names. """ caller = inspect.stack()[1] module = inspect.getmodule(caller[0]) return [ obj_name for obj_name, obj in inspect.getmembers(module) if inspect.isfunction(obj) or inspect.isclass(obj) and not obj_name.startswith("_") ] def get_modules(): """Extracts a list of public modules for the API generation. Returns: A list of module names. """ caller = inspect.stack()[1] module = inspect.getmodule(caller[0]) return [ obj_name for obj_name, obj in inspect.getmembers(module) if inspect.ismodule(obj) and obj.__name__.rsplit(".", 1)[0] == module.__name__ and not obj_name.startswith("_") ] # The util functions or classes are not exported. __all__ = []
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """API export functions used to create the automated documentation.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import inspect def get_functions_and_classes(): """Extracts a list of public functions and classes for the API generation. Returns: A list of function and class names. """ caller = inspect.stack()[1] module = inspect.getmodule(caller[0]) return [ obj_name for obj_name, obj in inspect.getmembers(module) if inspect.isfunction(obj) or inspect.isclass(obj) and not obj_name.startswith("_") ] def get_modules(): """Extracts a list of public modules for the API generation. Returns: A list of module names. """ caller = inspect.stack()[1] module = inspect.getmodule(caller[0]) return [ obj_name for obj_name, obj in inspect.getmembers(module) if inspect.ismodule(obj) and obj.__name__.rsplit(".", 1)[0] == module.__name__ and not obj_name.startswith("_") ] # The util functions or classes are not exported. __all__ = []
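The export helpers above rely on `inspect.getmembers` to enumerate a module's public symbols. A small self-contained sketch of the same pattern (note the explicit parentheses around the `or`: without them, `and` binds tighter and the underscore filter would only apply to classes):

```python
import inspect
import types

def public_names(module):
    """List public (non-underscore) functions and classes of a module."""
    return [
        name for name, obj in inspect.getmembers(module)
        if (inspect.isfunction(obj) or inspect.isclass(obj))
        and not name.startswith("_")
    ]

# Usage on a throwaway module object:
mod = types.ModuleType("demo")
mod.public_fn = lambda: None
mod._private_fn = lambda: None
```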
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/math/interpolation/tests/bspline_test.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for bspline.""" from absl.testing import parameterized import numpy as np from tensorflow_graphics.math.interpolation import bspline from tensorflow_graphics.util import test_case class BSplineTest(test_case.TestCase): @parameterized.parameters((0.0, (1.0,)), (1.0, (1.0,))) def test_constant_basis_boundary_values(self, position, weights): """Tests that basis functions of degree 0 return expected values.""" self.assertAllClose(bspline._constant(position), weights) # pylint: disable=protected-access @parameterized.parameters((0.0, (1.0, 0.0)), (1.0, (0.0, 1.0))) def test_linear_basis_boundary_values(self, position, weights): """Tests that basis functions of degree 1 return expected values.""" self.assertAllClose(bspline._linear(position), weights) # pylint: disable=protected-access @parameterized.parameters((0.0, (0.5, 0.5, 0.0)), (1.0, (0.0, 0.5, 0.5))) def test_quadratic_basis_boundary_values(self, position, weights): """Tests that basis functions of degree 2 return expected values.""" self.assertAllClose(bspline._quadratic(position), weights) # pylint: disable=protected-access @parameterized.parameters((0.0, (1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0, 0.0)), (1.0, (0.0, 1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0))) def test_cubic_basis_boundary_values(self, position, weights): """Tests that basis functions of degree 3 return expected values.""" self.assertAllClose(bspline._cubic(position), weights) # 
pylint: disable=protected-access @parameterized.parameters( (0.0, (1.0 / 24.0, 11.0 / 24.0, 11.0 / 24.0, 1.0 / 24.0, 0.0)), (1.0, (0.0, 1.0 / 24.0, 11.0 / 24.0, 11.0 / 24.0, 1.0 / 24.0))) def test_quartic_basis_boundary_values(self, position, weights): """Tests that basis functions of degree 4 return expected values.""" self.assertAllClose(bspline._quartic(position), weights) # pylint: disable=protected-access @parameterized.parameters( (((0.5,), (1.5,), (2.5,)), (((0.5, 0.5),), ((0.5, 0.5),), ((0.5, 0.5),)), (((0,), (1,), (2,))), 1, True), ((0.0, 1.0), ((0.5, 0.5, 0.0), (0.0, 0.5, 0.5)), (0, 0), 2, False), ) def test_knot_weights_sparse_mode_preset(self, positions, gt_weights, gt_shifts, degree, cyclical): """Tests that sparse mode returns correct results.""" weights, shifts = bspline.knot_weights( positions, num_knots=3, degree=degree, cyclical=cyclical, sparse_mode=True) self.assertAllClose(weights, gt_weights) self.assertAllClose(shifts, gt_shifts) @parameterized.parameters( (((0.5,),), (((0.5, 0.5, 0.0),),), 1), (((1.5,),), (((0.0, 0.5, 0.5),),), 1), (((2.5,),), (((0.5, 0.0, 0.5),),), 1), (((0.5,), (1.5,), (2.5,)), (((1.0 / 8.0, 0.75, 1.0 / 8.0),), ((1.0 / 8.0, 1.0 / 8.0, 0.75),), ((0.75, 1.0 / 8.0, 1.0 / 8.0),)), 2), ) def test_knot_weights_preset(self, position, weights, degree): """Tests that knot weights are correct when degree < num_knots - 1.""" self.assertAllClose( bspline.knot_weights( position, num_knots=3, degree=degree, cyclical=True), weights) @parameterized.parameters((((0.0,), (0.25,), (0.5,), (0.75,)),)) def test_full_degree_non_cyclical_knot_weights(self, positions): """Tests that noncyclical weights are correct when using max degree.""" cyclical_weights = bspline.knot_weights( positions=positions, num_knots=3, degree=2, cyclical=True) noncyclical_weights = bspline.knot_weights( positions=positions, num_knots=3, degree=2, cyclical=False) self.assertAllClose(cyclical_weights, noncyclical_weights) @parameterized.parameters( ("must have the same 
number of dimensions", ((None, 2), (None, 3, 3))), ("must have the same number of dimensions", ((2,), (3,))), ) def test_interpolate_with_weights_exception_is_raised(self, error_msg, shapes): """Tests that exception is raised when wrong number of knots is given.""" self.assert_exception_is_raised( bspline.interpolate_with_weights, error_msg, shapes=shapes) @parameterized.parameters( (((0.5,), (0.0,), (0.9,)), (((0.5, 1.5), (1.5, 1.5), (2.5, 3.5)),))) def test_interpolate_with_weights_preset(self, positions, knots): """Tests that interpolate_with_weights works correctly.""" degree = 1 cyclical = False interp1 = bspline.interpolate(knots, positions, degree, cyclical) weights = bspline.knot_weights(positions, 2, degree, cyclical) interp2 = bspline.interpolate_with_weights(knots, weights) self.assertAllClose(interp1, interp2) @parameterized.parameters( (1, 2), (1, None), (2, 2), (2, None), (3, 2), (3, None), (4, 2), (4, None), ) def test_knot_weights_exception_is_not_raised(self, positions_rank, dims): shapes = ([dims] * positions_rank,) self.assert_exception_is_not_raised( bspline.knot_weights, shapes=shapes, num_knots=3, degree=2, cyclical=True) @parameterized.parameters( ("Degree should be between 0 and 4.", 6, -1), ("Degree should be between 0 and 4.", 6, 5), ("Degree cannot be >= number of knots.", 2, 2), ("Degree cannot be >= number of knots.", 2, 3), ) def test_knot_weights_exception_is_raised(self, error_msg, num_knots, degree): self.assert_exception_is_raised( bspline.knot_weights, error_msg, shapes=((10, 1),), num_knots=num_knots, degree=degree, cyclical=True) @parameterized.parameters( (1, 0, True), (1, 0, False), (2, 1, True), (2, 1, False), (3, 1, True), (3, 1, False), (3, 2, True), (3, 2, False), (4, 1, True), (4, 1, False), (4, 3, True), (4, 3, False), (5, 1, True), (5, 1, False), (5, 4, True), (5, 4, False), ) def test_knot_weights_jacobian_is_correct(self, num_knots, degree, cyclical): """Tests that Jacobian is correct.""" positions_init = 
np.random.random_sample((10, 1)) scale = num_knots if cyclical else num_knots - degree positions_init *= scale def dense_mode_fn(positions): return bspline.knot_weights( positions=positions, num_knots=num_knots, degree=degree, cyclical=cyclical, sparse_mode=False) def sparse_mode_fn(positions): return bspline.knot_weights( positions=positions, num_knots=num_knots, degree=degree, cyclical=cyclical, sparse_mode=True)[0] with self.subTest(name="dense_mode"): self.assert_jacobian_is_correct_fn(dense_mode_fn, [positions_init]) with self.subTest(name="sparse_mode"): self.assert_jacobian_is_correct_fn(sparse_mode_fn, [positions_init]) if __name__ == "__main__": test_case.main()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for bspline.""" from absl.testing import parameterized import numpy as np from tensorflow_graphics.math.interpolation import bspline from tensorflow_graphics.util import test_case class BSplineTest(test_case.TestCase): @parameterized.parameters((0.0, (1.0,)), (1.0, (1.0,))) def test_constant_basis_boundary_values(self, position, weights): """Tests that basis functions of degree 0 return expected values.""" self.assertAllClose(bspline._constant(position), weights) # pylint: disable=protected-access @parameterized.parameters((0.0, (1.0, 0.0)), (1.0, (0.0, 1.0))) def test_linear_basis_boundary_values(self, position, weights): """Tests that basis functions of degree 1 return expected values.""" self.assertAllClose(bspline._linear(position), weights) # pylint: disable=protected-access @parameterized.parameters((0.0, (0.5, 0.5, 0.0)), (1.0, (0.0, 0.5, 0.5))) def test_quadratic_basis_boundary_values(self, position, weights): """Tests that basis functions of degree 2 return expected values.""" self.assertAllClose(bspline._quadratic(position), weights) # pylint: disable=protected-access @parameterized.parameters((0.0, (1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0, 0.0)), (1.0, (0.0, 1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0))) def test_cubic_basis_boundary_values(self, position, weights): """Tests that basis functions of degree 3 return expected values.""" self.assertAllClose(bspline._cubic(position), weights) # 
pylint: disable=protected-access @parameterized.parameters( (0.0, (1.0 / 24.0, 11.0 / 24.0, 11.0 / 24.0, 1.0 / 24.0, 0.0)), (1.0, (0.0, 1.0 / 24.0, 11.0 / 24.0, 11.0 / 24.0, 1.0 / 24.0))) def test_quartic_basis_boundary_values(self, position, weights): """Tests that basis functions of degree 4 return expected values.""" self.assertAllClose(bspline._quartic(position), weights) # pylint: disable=protected-access @parameterized.parameters( (((0.5,), (1.5,), (2.5,)), (((0.5, 0.5),), ((0.5, 0.5),), ((0.5, 0.5),)), (((0,), (1,), (2,))), 1, True), ((0.0, 1.0), ((0.5, 0.5, 0.0), (0.0, 0.5, 0.5)), (0, 0), 2, False), ) def test_knot_weights_sparse_mode_preset(self, positions, gt_weights, gt_shifts, degree, cyclical): """Tests that sparse mode returns correct results.""" weights, shifts = bspline.knot_weights( positions, num_knots=3, degree=degree, cyclical=cyclical, sparse_mode=True) self.assertAllClose(weights, gt_weights) self.assertAllClose(shifts, gt_shifts) @parameterized.parameters( (((0.5,),), (((0.5, 0.5, 0.0),),), 1), (((1.5,),), (((0.0, 0.5, 0.5),),), 1), (((2.5,),), (((0.5, 0.0, 0.5),),), 1), (((0.5,), (1.5,), (2.5,)), (((1.0 / 8.0, 0.75, 1.0 / 8.0),), ((1.0 / 8.0, 1.0 / 8.0, 0.75),), ((0.75, 1.0 / 8.0, 1.0 / 8.0),)), 2), ) def test_knot_weights_preset(self, position, weights, degree): """Tests that knot weights are correct when degree < num_knots - 1.""" self.assertAllClose( bspline.knot_weights( position, num_knots=3, degree=degree, cyclical=True), weights) @parameterized.parameters((((0.0,), (0.25,), (0.5,), (0.75,)),)) def test_full_degree_non_cyclical_knot_weights(self, positions): """Tests that noncyclical weights are correct when using max degree.""" cyclical_weights = bspline.knot_weights( positions=positions, num_knots=3, degree=2, cyclical=True) noncyclical_weights = bspline.knot_weights( positions=positions, num_knots=3, degree=2, cyclical=False) self.assertAllClose(cyclical_weights, noncyclical_weights) @parameterized.parameters( ("must have the same 
number of dimensions", ((None, 2), (None, 3, 3))), ("must have the same number of dimensions", ((2,), (3,))), ) def test_interpolate_with_weights_exception_is_raised(self, error_msg, shapes): """Tests that exception is raised when wrong number of knots is given.""" self.assert_exception_is_raised( bspline.interpolate_with_weights, error_msg, shapes=shapes) @parameterized.parameters( (((0.5,), (0.0,), (0.9,)), (((0.5, 1.5), (1.5, 1.5), (2.5, 3.5)),))) def test_interpolate_with_weights_preset(self, positions, knots): """Tests that interpolate_with_weights works correctly.""" degree = 1 cyclical = False interp1 = bspline.interpolate(knots, positions, degree, cyclical) weights = bspline.knot_weights(positions, 2, degree, cyclical) interp2 = bspline.interpolate_with_weights(knots, weights) self.assertAllClose(interp1, interp2) @parameterized.parameters( (1, 2), (1, None), (2, 2), (2, None), (3, 2), (3, None), (4, 2), (4, None), ) def test_knot_weights_exception_is_not_raised(self, positions_rank, dims): shapes = ([dims] * positions_rank,) self.assert_exception_is_not_raised( bspline.knot_weights, shapes=shapes, num_knots=3, degree=2, cyclical=True) @parameterized.parameters( ("Degree should be between 0 and 4.", 6, -1), ("Degree should be between 0 and 4.", 6, 5), ("Degree cannot be >= number of knots.", 2, 2), ("Degree cannot be >= number of knots.", 2, 3), ) def test_knot_weights_exception_is_raised(self, error_msg, num_knots, degree): self.assert_exception_is_raised( bspline.knot_weights, error_msg, shapes=((10, 1),), num_knots=num_knots, degree=degree, cyclical=True) @parameterized.parameters( (1, 0, True), (1, 0, False), (2, 1, True), (2, 1, False), (3, 1, True), (3, 1, False), (3, 2, True), (3, 2, False), (4, 1, True), (4, 1, False), (4, 3, True), (4, 3, False), (5, 1, True), (5, 1, False), (5, 4, True), (5, 4, False), ) def test_knot_weights_jacobian_is_correct(self, num_knots, degree, cyclical): """Tests that Jacobian is correct.""" positions_init = 
np.random.random_sample((10, 1)) scale = num_knots if cyclical else num_knots - degree positions_init *= scale def dense_mode_fn(positions): return bspline.knot_weights( positions=positions, num_knots=num_knots, degree=degree, cyclical=cyclical, sparse_mode=False) def sparse_mode_fn(positions): return bspline.knot_weights( positions=positions, num_knots=num_knots, degree=degree, cyclical=cyclical, sparse_mode=True)[0] with self.subTest(name="dense_mode"): self.assert_jacobian_is_correct_fn(dense_mode_fn, [positions_init]) with self.subTest(name="sparse_mode"): self.assert_jacobian_is_correct_fn(sparse_mode_fn, [positions_init]) if __name__ == "__main__": test_case.main()
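The boundary values asserted in these tests come from the closed forms of the uniform B-spline basis functions. A plain-Python sketch of the cubic case (a reference formula, not the library's `bspline._cubic` implementation, which operates on tensors):

```python
def cubic_bspline_weights(t):
    """Uniform cubic B-spline basis values at fractional position t in [0, 1].

    Matches the boundary values checked above:
    t=0 -> (1/6, 2/3, 1/6, 0) and t=1 -> (0, 1/6, 2/3, 1/6).
    """
    return (
        (1 - t) ** 3 / 6.0,
        (3 * t ** 3 - 6 * t ** 2 + 4) / 6.0,
        (-3 * t ** 3 + 3 * t ** 2 + 3 * t + 1) / 6.0,
        t ** 3 / 6.0,
    )
```

The four weights sum to one for every t, which is the partition-of-unity property the interpolation tests implicitly depend on.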
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/datasets/features/camera_feature_test.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Tests for tensorflow_graphics.datasets.features.camera_feature.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import tensorflow as tf import tensorflow_datasets as tfds from tensorflow_graphics.datasets.features import camera_feature class CameraFeatureTest(tfds.testing.FeatureExpectationsTestCase): """Test Cases for Camera FeatureConnector.""" def __get_camera_params(self): pose = {'R': np.eye(3).astype(np.float32), 't': np.zeros(3).astype(np.float32)} f = 35. 
optical_center = (640 / 2, 480 / 2) return pose, f, optical_center def test_simple_camera(self): """Tests camera parameters with fixed focal length, no skew and no aspect ratio.""" expected_pose, expected_f, expected_center = self.__get_camera_params() expected_intrinsics = np.asarray([[expected_f, 0, expected_center[0]], [0, expected_f, expected_center[1]], [0, 0, 1]], dtype=np.float32) expected_camera = {'pose': expected_pose, 'intrinsics': expected_intrinsics} inputs = {'f': expected_f, 'optical_center': expected_center, 'pose': expected_pose} lookat_inputs = { 'f': expected_f, 'optical_center': expected_center, 'pose': { 'look_at': np.array([0, 0, -1], dtype=np.float32), 'up': np.array([0, 1, 0], dtype=np.float32), 'position': np.array([0, 0, 0], dtype=np.float32) } } raising_pose_entry = { 'f': expected_f, 'optical_center': expected_center, 'pose': np.eye(4) } raising_pose_inputs = { 'f': expected_f, 'optical_center': expected_center, 'pose': {'rot': np.eye(3), 'trans': np.zeros(3)} } raising_lookat_inputs = { 'f': expected_f, 'optical_center': expected_center, 'pose': { 'l': np.array([0, 0, -1], dtype=np.float32), 'up': np.array([0, 1, 0], dtype=np.float32), 'C': np.array([0, 0, 0], dtype=np.float32) } } self.assertFeature( feature=camera_feature.Camera(), shape={ 'pose': { 'R': (3, 3), 't': (3,) }, 'intrinsics': (3, 3) }, dtype={ 'pose': { 'R': tf.float32, 't': tf.float32 }, 'intrinsics': tf.float32 }, tests=[ tfds.testing.FeatureExpectationItem( value=inputs, expected=expected_camera, ), tfds.testing.FeatureExpectationItem( value=lookat_inputs, expected=expected_camera ), tfds.testing.FeatureExpectationItem( value=raising_pose_inputs, raise_cls=ValueError, raise_msg='Wrong keys for pose feature provided' ), tfds.testing.FeatureExpectationItem( value=raising_lookat_inputs, raise_cls=ValueError, raise_msg='Wrong keys for pose feature provided' ), tfds.testing.FeatureExpectationItem( value=raising_pose_entry, raise_cls=ValueError, raise_msg='Pose needs to be a 
dictionary' ), ], ) def test_camera_with_aspect_ratio_and_skew(self): """Tests camera parameters with fixed focal length, aspect_ratio and skew.""" expected_pose, expected_f, expected_center = self.__get_camera_params() expected_aspect_ratio = expected_center[0] / expected_center[1] expected_skew = 0.6 expected_intrinsics = np.asarray( [[expected_f, expected_skew, expected_center[0]], [0, expected_aspect_ratio * expected_f, expected_center[1]], [0, 0, 1]], dtype=np.float32) expected_camera = {'pose': expected_pose, 'intrinsics': expected_intrinsics} inputs = {'f': expected_f, 'optical_center': expected_center, 'skew': expected_skew, 'aspect_ratio': expected_aspect_ratio, 'pose': expected_pose} self.assertFeature( feature=camera_feature.Camera(), shape={ 'pose': { 'R': (3, 3), 't': (3,) }, 'intrinsics': (3, 3) }, dtype={ 'pose': { 'R': tf.float32, 't': tf.float32 }, 'intrinsics': tf.float32 }, tests=[ tfds.testing.FeatureExpectationItem( value=inputs, expected=expected_camera, ), ], ) def test_full_camera_calibration_matrix(self): """Tests camera parameters with different focal length per camera axis and skew.""" expected_pose, _, expected_optical_center = self.__get_camera_params() expected_skew = 0.6 expected_f = (35., 40.) 
expected_intrinsics = np.array( [[expected_f[0], expected_skew, expected_optical_center[0]], [0, expected_f[1], expected_optical_center[1]], [0, 0, 1]], dtype=np.float32) expected_camera = {'pose': expected_pose, 'intrinsics': expected_intrinsics} inputs = {'f': expected_f, 'optical_center': expected_optical_center, 'skew': expected_skew, 'pose': expected_pose} raising_inputs = {'f': expected_f, 'aspect_ratio': 1.5, 'optical_center': expected_optical_center, 'skew': expected_skew, 'pose': expected_pose} self.assertFeature( feature=camera_feature.Camera(), shape={ 'pose': { 'R': (3, 3), 't': (3,) }, 'intrinsics': (3, 3) }, dtype={ 'pose': { 'R': tf.float32, 't': tf.float32 }, 'intrinsics': tf.float32 }, tests=[ tfds.testing.FeatureExpectationItem( value=inputs, expected=expected_camera, ), tfds.testing.FeatureExpectationItem( value=raising_inputs, raise_cls=ValueError, raise_msg='If aspect ratio is provided, f needs to ' 'be a single float', ), ], ) if __name__ == '__main__': tfds.testing.test_main()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Tests for tensorflow_graphics.datasets.features.camera_feature.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import tensorflow as tf import tensorflow_datasets as tfds from tensorflow_graphics.datasets.features import camera_feature class CameraFeatureTest(tfds.testing.FeatureExpectationsTestCase): """Test Cases for Camera FeatureConnector.""" def __get_camera_params(self): pose = {'R': np.eye(3).astype(np.float32), 't': np.zeros(3).astype(np.float32)} f = 35. 
optical_center = (640 / 2, 480 / 2) return pose, f, optical_center def test_simple_camera(self): """Tests camera parameters with fixed focal length, no skew and no aspect ratio.""" expected_pose, expected_f, expected_center = self.__get_camera_params() expected_intrinsics = np.asarray([[expected_f, 0, expected_center[0]], [0, expected_f, expected_center[1]], [0, 0, 1]], dtype=np.float32) expected_camera = {'pose': expected_pose, 'intrinsics': expected_intrinsics} inputs = {'f': expected_f, 'optical_center': expected_center, 'pose': expected_pose} lookat_inputs = { 'f': expected_f, 'optical_center': expected_center, 'pose': { 'look_at': np.array([0, 0, -1], dtype=np.float32), 'up': np.array([0, 1, 0], dtype=np.float32), 'position': np.array([0, 0, 0], dtype=np.float32) } } raising_pose_entry = { 'f': expected_f, 'optical_center': expected_center, 'pose': np.eye(4) } raising_pose_inputs = { 'f': expected_f, 'optical_center': expected_center, 'pose': {'rot': np.eye(3), 'trans': np.zeros(3)} } raising_lookat_inputs = { 'f': expected_f, 'optical_center': expected_center, 'pose': { 'l': np.array([0, 0, -1], dtype=np.float32), 'up': np.array([0, 1, 0], dtype=np.float32), 'C': np.array([0, 0, 0], dtype=np.float32) } } self.assertFeature( feature=camera_feature.Camera(), shape={ 'pose': { 'R': (3, 3), 't': (3,) }, 'intrinsics': (3, 3) }, dtype={ 'pose': { 'R': tf.float32, 't': tf.float32 }, 'intrinsics': tf.float32 }, tests=[ tfds.testing.FeatureExpectationItem( value=inputs, expected=expected_camera, ), tfds.testing.FeatureExpectationItem( value=lookat_inputs, expected=expected_camera ), tfds.testing.FeatureExpectationItem( value=raising_pose_inputs, raise_cls=ValueError, raise_msg='Wrong keys for pose feature provided' ), tfds.testing.FeatureExpectationItem( value=raising_lookat_inputs, raise_cls=ValueError, raise_msg='Wrong keys for pose feature provided' ), tfds.testing.FeatureExpectationItem( value=raising_pose_entry, raise_cls=ValueError, raise_msg='Pose needs to be a 
dictionary' ), ], ) def test_camera_with_aspect_ratio_and_skew(self): """Tests camera parameters with fixed focal length, aspect_ratio and skew.""" expected_pose, expected_f, expected_center = self.__get_camera_params() expected_aspect_ratio = expected_center[0] / expected_center[1] expected_skew = 0.6 expected_intrinsics = np.asarray( [[expected_f, expected_skew, expected_center[0]], [0, expected_aspect_ratio * expected_f, expected_center[1]], [0, 0, 1]], dtype=np.float32) expected_camera = {'pose': expected_pose, 'intrinsics': expected_intrinsics} inputs = {'f': expected_f, 'optical_center': expected_center, 'skew': expected_skew, 'aspect_ratio': expected_aspect_ratio, 'pose': expected_pose} self.assertFeature( feature=camera_feature.Camera(), shape={ 'pose': { 'R': (3, 3), 't': (3,) }, 'intrinsics': (3, 3) }, dtype={ 'pose': { 'R': tf.float32, 't': tf.float32 }, 'intrinsics': tf.float32 }, tests=[ tfds.testing.FeatureExpectationItem( value=inputs, expected=expected_camera, ), ], ) def test_full_camera_calibration_matrix(self): """Tests camera parameters with different focal length per camera axis and skew.""" expected_pose, _, expected_optical_center = self.__get_camera_params() expected_skew = 0.6 expected_f = (35., 40.) 
expected_intrinsics = np.array( [[expected_f[0], expected_skew, expected_optical_center[0]], [0, expected_f[1], expected_optical_center[1]], [0, 0, 1]], dtype=np.float32) expected_camera = {'pose': expected_pose, 'intrinsics': expected_intrinsics} inputs = {'f': expected_f, 'optical_center': expected_optical_center, 'skew': expected_skew, 'pose': expected_pose} raising_inputs = {'f': expected_f, 'aspect_ratio': 1.5, 'optical_center': expected_optical_center, 'skew': expected_skew, 'pose': expected_pose} self.assertFeature( feature=camera_feature.Camera(), shape={ 'pose': { 'R': (3, 3), 't': (3,) }, 'intrinsics': (3, 3) }, dtype={ 'pose': { 'R': tf.float32, 't': tf.float32 }, 'intrinsics': tf.float32 }, tests=[ tfds.testing.FeatureExpectationItem( value=inputs, expected=expected_camera, ), tfds.testing.FeatureExpectationItem( value=raising_inputs, raise_cls=ValueError, raise_msg='If aspect ratio is provided, f needs to ' 'be a single float', ), ], ) if __name__ == '__main__': tfds.testing.test_main()
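The three test cases above all expect the same calibration-matrix layout: `[[fx, skew, cx], [0, fy, cy], [0, 0, 1]]`, with `fy` either given directly or derived as `aspect_ratio * f`. A hedged sketch of that assembly (the function name and signature are illustrative; the real encoding lives inside the Camera FeatureConnector):

```python
import numpy as np

def build_intrinsics(f, optical_center, skew=0.0, aspect_ratio=None):
    """Assemble a 3x3 camera calibration matrix.

    `f` is either a single focal length (optionally scaled by aspect_ratio
    on the y axis) or a pair (fx, fy); mixing a pair with aspect_ratio is
    rejected, mirroring the error the test expects.
    """
    cx, cy = optical_center
    if isinstance(f, (tuple, list)):
        if aspect_ratio is not None:
            raise ValueError(
                'If aspect ratio is provided, f needs to be a single float')
        fx, fy = f
    else:
        fx = f
        fy = (aspect_ratio if aspect_ratio is not None else 1.0) * f
    return np.array([[fx, skew, cx],
                     [0., fy, cy],
                     [0., 0., 1.]], dtype=np.float32)
```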
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
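The "channel dimension" unification this PR describes can be illustrated with a small sketch. NumPy stands in for TensorFlow here, and the function name is illustrative rather than the actual backend API: per-pixel outputs of shape `[height, width]` gain a trailing dimension so they match multi-channel outputs.

```python
import numpy as np

def unify_rasterizer_outputs(mask, triangle_index):
  """Appends a trailing channel dimension so per-pixel outputs share rank.

  A rasterizer returning `mask` and `triangle_index` of shape
  [height, width] is normalized to [height, width, 1], consistent with
  multi-channel outputs such as barycentric coordinates.
  """
  return (np.expand_dims(mask, axis=-1),
          np.expand_dims(triangle_index, axis=-1))
```

In TensorFlow the analogous call would be `tf.expand_dims(..., axis=-1)`; the sketch only shows the shape contract, not the rasterization itself.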
./tensorflow_graphics/geometry/convolution/tests/graph_pooling_test.py
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for tensorflow_graphics.geometry.convolution.tests.graph_pooling."""
# pylint: disable=protected-access

import itertools

from absl.testing import parameterized
import numpy as np
import tensorflow as tf

import tensorflow_graphics.geometry.convolution.graph_pooling as gp
from tensorflow_graphics.geometry.convolution.tests import utils_test
from tensorflow_graphics.util import test_case


def _dense_to_sparse(data):
  """Convert a numpy array to a tf.SparseTensor."""
  return utils_test._dense_to_sparse(data)


def _batch_sparse_eye(batch_shape, num_vertices, dtype):
  """Generate a batch of identity matrices."""
  eye = np.eye(num_vertices, dtype=dtype)
  num_batch_dims = len(batch_shape)
  expand_shape = np.concatenate((np.ones(
      num_batch_dims, dtype=np.int32), (num_vertices, num_vertices)), axis=0)
  eye = np.reshape(eye, expand_shape)
  tile_shape = np.concatenate((batch_shape, (1, 1)), axis=0)
  return _dense_to_sparse(np.tile(eye, tile_shape))


class GraphPoolingTestPoolTests(test_case.TestCase):

  @parameterized.parameters(
      ("'sizes' must have an integer type.", np.float32, np.float32,
       np.float32),
      ("'data' must have a float type.", np.int32, np.float32, np.int32),
      ("'pool_map' and 'data' must have the same type.", np.float32,
       np.float64, np.int32))
  def test_pool_exception_raised_types(self, err_msg, data_type, pool_map_type,
                                       sizes_type):
    """Tests the correct exceptions are raised for invalid types."""
    data = np.ones((2, 3, 3), dtype=data_type)
    pool_map = _dense_to_sparse(np.ones((2, 3, 3), dtype=pool_map_type))
    sizes = np.array(((1, 2), (2, 3)), dtype=sizes_type)

    with self.assertRaisesRegexp(TypeError, err_msg):
      gp.pool(data, pool_map, sizes)

  @parameterized.parameters(
      ('data must have a rank greater than 1', (3,), (3,), None),
      ('pool_map must have a rank of 2', (3, 3), (3,), None),
      ('sizes must have a rank of 3', (4, 5, 3, 2), (4, 5, 3, 3), (3, 2)),
  )
  def test_pool_exception_raised_shapes(self, err_msg, data_shape,
                                        pool_map_shape, sizes_shape):
    """Tests the correct exceptions are raised for invalid shapes."""
    data = np.ones(data_shape, dtype=np.float32)
    pool_map = _dense_to_sparse(np.ones(pool_map_shape, dtype=np.float32))
    if sizes_shape is not None:
      sizes = np.ones(sizes_shape, dtype=np.int32)
    else:
      sizes = None

    with self.assertRaisesRegexp(ValueError, err_msg):
      gp.pool(data, pool_map, sizes)

  def test_pool_exception_raised_algorithm(self):
    """Tests the correct exception is raised for an invalid algorithm."""
    data = np.ones(shape=(2, 2))
    pool_map = _dense_to_sparse(np.ones(shape=(2, 2)))

    with self.assertRaisesRegexp(
        ValueError, 'The pooling method must be "weighted" or "max"'):
      gp.pool(data, pool_map, sizes=None, algorithm='mean')

  @parameterized.parameters(
      ((2, 3), 4, 3, np.float32),
      ((1,), 6, 1, np.float32),
      ((4, 1, 3), 9, 7, np.float64),
      ((2, 8, 4, 6), 19, 11, np.float64),
  )
  def test_pool_identity(self, batch_shape, num_vertices, num_features,
                         data_type):
    """Tests graph pooling with identity maps."""
    data_shape = np.concatenate((batch_shape, (num_vertices, num_features)))
    data = np.random.uniform(size=data_shape).astype(data_type)
    pool_map = _batch_sparse_eye(batch_shape, num_vertices, data_type)

    pooled_max = gp.pool(data, pool_map, sizes=None, algorithm='max', name=None)
    pooled_weighted = gp.pool(
        data, pool_map, sizes=None, algorithm='weighted', name=None)

    self.assertAllClose(pooled_max, data)
    self.assertAllClose(pooled_weighted, data)

  def test_pool_preset_padded(self):
    """Tests pooling with preset data and padding."""
    data = np.reshape(np.arange(12).astype(np.float32), (2, 3, 2))
    sizes = ((2, 3), (3, 3))
    pool_map = _dense_to_sparse(
        np.array((((0.5, 0.5, 0.), (0., 0., 1.), (0., 0., 0.)),
                  ((1., 0., 0.), (0., 1., 0.), (0., 0., 1.))),
                 dtype=np.float32))

    pooled_max = gp.pool(data, pool_map, sizes, algorithm='max')
    pooled_weighted = gp.pool(data, pool_map, sizes, algorithm='weighted')
    true_max = (((2., 3.), (4., 5.), (0., 0.)),
                ((6., 7.), (8., 9.), (10., 11.)))
    true_weighted = (((1., 2.), (4., 5.), (0., 0.)),
                     ((6., 7.), (8., 9.), (10., 11.)))

    self.assertAllClose(pooled_max, true_max)
    self.assertAllClose(pooled_weighted, true_weighted)

  def test_pool_preset(self):
    """Tests pooling with preset data."""
    pool_map = np.array(((0.5, 0.5, 0., 0.), (0., 0., 0.5, 0.5)),
                        dtype=np.float32)
    pool_map = _dense_to_sparse(pool_map)
    data = np.reshape(np.arange(8).astype(np.float32), (4, 2))
    max_true = data[(1, 3), :]
    max_weighted = (data[(0, 2), :] + max_true) * 0.5

    pooled_max = gp.pool(data, pool_map, sizes=None, algorithm='max', name=None)
    pooled_weighted = gp.pool(
        data, pool_map, sizes=None, algorithm='weighted', name=None)

    self.assertAllClose(pooled_max, max_true)
    self.assertAllClose(pooled_weighted, max_weighted)

  @parameterized.parameters((20, 10, 3), (2, 1, 1), (2, 5, 4), (2, 1, 3))
  def test_pool_random(self, num_input_vertices, num_output_vertices,
                       num_features):
    """Tests pooling with random inputs."""
    pool_map = 0.001 + np.random.uniform(
        size=(num_output_vertices, num_input_vertices))
    data = np.random.uniform(size=(num_input_vertices, num_features))
    true_weighted = np.matmul(pool_map, data)
    true_max = np.tile(
        np.max(data, axis=0, keepdims=True), (num_output_vertices, 1))
    pool_map = _dense_to_sparse(pool_map)

    with self.subTest(name='max'):
      pooled_max = gp.pool(data, pool_map, None, algorithm='max')

      self.assertAllClose(pooled_max, true_max)

    with self.subTest(name='weighted'):
      pooled_weighted = gp.pool(data, pool_map, None, algorithm='weighted')

      self.assertAllClose(pooled_weighted, true_weighted)

  def test_pool_jacobian(self):
    """Tests the jacobian is correct."""
    sizes = ((2, 4), (3, 5))
    data_init = np.random.uniform(size=(2, 5, 3))
    pool_map = np.random.uniform(size=(2, 3, 5))
    data_init[0, -1, :] = 0.
    pool_map[0, -1, :] = 0.
    pool_map = _dense_to_sparse(pool_map)

    def gp_pool(data, algorithm):
      return gp.pool(data, pool_map, sizes, algorithm=algorithm)

    with self.subTest(name='max'):
      self.assert_jacobian_is_correct_fn(lambda data: gp_pool(data, 'max'),
                                         [data_init])

    with self.subTest(name='weighted'):
      self.assert_jacobian_is_correct_fn(lambda data: gp_pool(data, 'weighted'),
                                         [data_init])


class GraphPoolingTestUnpoolTests(test_case.TestCase):

  @parameterized.parameters(
      ("'sizes' must have an integer type.", np.float32, np.float32,
       np.float32),
      ("'data' must have a float type.", np.int32, np.float32, np.int32),
      ("'pool_map' and 'data' must have the same type.", np.float32,
       np.float64, np.int32))
  def test_unpool_exception_raised_types(self, err_msg, data_type,
                                         pool_map_type, sizes_type):
    """Tests the correct exceptions are raised for invalid types."""
    data = np.ones((2, 3, 3), dtype=data_type)
    pool_map = _dense_to_sparse(np.ones((2, 3, 3), dtype=pool_map_type))
    sizes = np.array(((1, 2), (2, 3)), dtype=sizes_type)

    with self.assertRaisesRegexp(TypeError, err_msg):
      gp.unpool(data, pool_map, sizes)

  @parameterized.parameters(
      ('data must have a rank greater than 1', (3,), (3,), None),
      ('pool_map must have a rank of 2', (3, 3), (3,), None),
      ('sizes must have a rank of 3', (4, 5, 3, 2), (4, 5, 3, 3), (3, 2)),
      ('data must have a rank less than 6', (2, 3, 4, 5, 3, 2),
       (2, 3, 4, 5, 3, 3), None),
  )
  def test_unpool_exception_raised_shapes(self, err_msg, data_shape,
                                          pool_map_shape, sizes_shape):
    """Tests the correct exceptions are raised for invalid shapes."""
    data = np.ones(data_shape, dtype=np.float32)
    pool_map = _dense_to_sparse(np.ones(pool_map_shape, dtype=np.float32))
    if sizes_shape is not None:
      sizes = np.ones(sizes_shape, dtype=np.int32)
    else:
      sizes = None

    with self.assertRaisesRegexp(ValueError, err_msg):
      gp.unpool(data, pool_map, sizes)

  @parameterized.parameters(
      ((2, 3), 4, 3, np.float32),
      ((1,), 6, 1, np.float32),
      ((4, 1, 3), 9, 7, np.float64),
      ((2, 8, 4), 19, 11, np.float64),
  )
  def test_unpool_identity(self, batch_shape, num_vertices, num_features,
                           data_type):
    """Tests graph unpooling with identity maps."""
    data_shape = np.concatenate((batch_shape, (num_vertices, num_features)))
    data = np.random.uniform(size=data_shape).astype(data_type)
    pool_map = _batch_sparse_eye(batch_shape, num_vertices, data_type)

    unpooled = gp.unpool(data, pool_map, sizes=None)

    self.assertAllClose(unpooled, data)

  def test_unpool_preset_padded(self):
    """Tests pooling with preset data and padding."""
    data = np.reshape(np.arange(12).astype(np.float32), (2, 3, 2))
    data[0, -1, :] = 0.
    sizes = ((2, 3), (3, 3))
    pool_map = _dense_to_sparse(
        np.array((((0.5, 0.5, 0.), (0., 0., 1.), (0., 0., 0.)),
                  ((1., 0., 0.), (0., 1., 0.), (0., 0., 1.))),
                 dtype=np.float32))

    unpooled = gp.unpool(data, pool_map, sizes)

    true = (((0., 1.), (0., 1.), (2., 3.)),
            ((6., 7.), (8., 9.), (10., 11.)))
    self.assertAllClose(unpooled, true)

  @parameterized.parameters((20, 4), (2, 1), (12, 4), (6, 3))
  def test_unpool_random(self, num_vertices, num_features):
    """Tests pooling with random data inputs."""
    output_vertices = num_vertices // 2
    pool_map = np.zeros(shape=(output_vertices, num_vertices),
                        dtype=np.float32)
    for i in range(output_vertices):
      pool_map[i, (i * 2, i * 2 + 1)] = (0.5, 0.5)
    data = np.random.uniform(size=(output_vertices,
                                   num_features)).astype(np.float32)

    unpooled = gp.unpool(
        data, _dense_to_sparse(pool_map), sizes=None, name=None)

    with self.subTest(name='direct_unpool'):
      true = np.zeros(shape=(num_vertices, num_features)).astype(np.float32)
      true[0::2, :] = data
      true[1::2, :] = data
      self.assertAllClose(unpooled, true)

    with self.subTest(name='permute_pool_map'):
      permutation = np.random.permutation(num_vertices)
      pool_map_permute = pool_map[:, permutation]
      unpooled_permute = gp.unpool(data, _dense_to_sparse(pool_map_permute),
                                   None)
      true_permute = true[permutation, :]
      self.assertAllClose(unpooled_permute, true_permute)

  def test_unpool_jacobian_random(self):
    """Tests the jacobian is correct."""
    sizes = ((2, 4), (3, 5))
    data_init = np.random.uniform(size=(2, 3, 6))
    pool_map = np.random.uniform(size=(2, 3, 5))
    data_init[0, -1, :] = 0.
    pool_map[0, -1, :] = 0.
    pool_map = _dense_to_sparse(pool_map)

    def gp_unpool(data):
      return gp.unpool(data, pool_map, sizes)

    self.assert_jacobian_is_correct_fn(gp_unpool, [data_init])


class GraphPoolingUpsampleTransposeConvolutionTests(test_case.TestCase):

  @parameterized.parameters(
      ("'sizes' must have an integer type.", np.float32, np.float32,
       np.float32),
      ("'data' must have a float type.", np.int32, np.float32, np.int32),
      ("'pool_map' and 'data' must have the same type.", np.float32,
       np.float64, np.int32))
  def test_upsample_transposed_convolution_exception_raised_types(
      self, err_msg, data_type, pool_map_type, sizes_type):
    """Tests the correct exceptions are raised for invalid types."""
    data = np.ones((2, 3, 3), dtype=data_type)
    pool_map = _dense_to_sparse(np.ones((2, 3, 3), dtype=pool_map_type))
    sizes = np.array(((1, 2), (2, 3)), dtype=sizes_type)

    with self.assertRaisesRegexp(TypeError, err_msg):
      gp.upsample_transposed_convolution(
          data, pool_map, sizes, kernel_size=1, transposed_convolution_op=None)

  @parameterized.parameters(
      ('data must have a rank greater than 1', (3,), (3,), None),
      ('pool_map must have a rank of 2', (3, 3), (3,), None),
      ('sizes must have a rank of 3', (4, 5, 3, 2), (4, 5, 3, 3), (3, 2)),
      ('data must have a rank less than 6', (2, 3, 4, 5, 3, 2),
       (2, 3, 4, 5, 3, 3), None),
  )
  def test_upsample_transposed_convolution_exception_raised_shapes(
      self, err_msg, data_shape, pool_map_shape, sizes_shape):
    """Tests the correct exceptions are raised for invalid shapes."""
    data = np.ones(data_shape, dtype=np.float32)
    pool_map = _dense_to_sparse(np.ones(pool_map_shape, dtype=np.float32))
    if sizes_shape is not None:
      sizes = np.ones(sizes_shape, dtype=np.int32)
    else:
      sizes = None

    with self.assertRaisesRegexp(ValueError, err_msg):
      gp.upsample_transposed_convolution(
          data, pool_map, sizes, kernel_size=1, transposed_convolution_op=None)

  def test_upsample_transposed_convolution_exception_raised_callable(self):
    """Tests the correct exception is raised for an invalid convolution op."""
    data = np.ones((5, 3))
    pool_map = _dense_to_sparse(np.eye(5))
    err_msg = "'transposed_convolution_op' must be callable."

    with self.assertRaisesRegexp(TypeError, err_msg):
      gp.upsample_transposed_convolution(
          data,
          pool_map,
          sizes=None,
          kernel_size=1,
          transposed_convolution_op=1)

  @parameterized.parameters((1, 1, 1, np.float32), (5, 3, 1, np.float32),
                            (3, 6, 15, np.float64))
  def test_upsample_transposed_convolution_zero_kernel(self, num_vertices,
                                                       num_features,
                                                       kernel_size, data_type):
    """Tests the upsampling with a zero kernel."""
    data = np.random.uniform(size=(num_vertices,
                                   num_features)).astype(data_type)
    pool_map = np.zeros(
        shape=(num_vertices, num_vertices * kernel_size), dtype=data_type)
    for i in range(num_vertices):
      pool_map[i, np.arange(kernel_size * i,
                            kernel_size * (i + 1))] = (1.0 / kernel_size)
    pool_map = _dense_to_sparse(pool_map)
    # Transposed convolution op with a zero kernel.
    transposed_convolution_op = tf.keras.layers.Conv2DTranspose(
        filters=num_features,
        kernel_size=(1, kernel_size),
        strides=(1, kernel_size),
        padding='valid',
        use_bias=False,
        kernel_initializer=tf.compat.v1.keras.initializers.zeros())

    upsampled = gp.upsample_transposed_convolution(
        data,
        pool_map,
        sizes=None,
        kernel_size=kernel_size,
        transposed_convolution_op=transposed_convolution_op)
    # Initializes variables of the transpose conv layer.
    self.evaluate(tf.compat.v1.global_variables_initializer())

    self.assertAllEqual(
        tf.shape(input=upsampled), (num_vertices * kernel_size, num_features))
    self.assertAllEqual(upsampled, tf.zeros_like(upsampled))

  @parameterized.parameters(
      itertools.product((3,), (6,), (3,), range(3), range(6), range(6)),)
  def test_upsample_transposed_convolution_selector_kernel_random(
      self, num_vertices, num_features, kernel_size, kernel_index,
      feature1_index, feature2_index):
    """Tests the upsampling with an indicator kernel."""
    data = np.random.uniform(size=(num_vertices,
                                   num_features)).astype(np.float32)
    pool_map = np.zeros(
        shape=(num_vertices, num_vertices * kernel_size), dtype=np.float32)
    for i in range(num_vertices):
      pool_map[i, np.arange(kernel_size * i,
                            kernel_size * (i + 1))] = (1.0 / kernel_size)
    pool_map = _dense_to_sparse(pool_map)
    selection = np.zeros(
        shape=(1, kernel_size, num_features, num_features), dtype=np.float32)
    selection[0, kernel_index, feature1_index, feature2_index] = 1.
    initializer = tf.compat.v1.constant_initializer(value=selection)
    transposed_convolution_op = tf.keras.layers.Conv2DTranspose(
        filters=num_features,
        kernel_size=(1, kernel_size),
        strides=(1, kernel_size),
        padding='valid',
        use_bias=False,
        kernel_initializer=initializer)
    true = np.zeros(
        shape=(num_vertices * kernel_size, num_features), dtype=np.float32)
    input_column = feature2_index
    output_column = feature1_index
    output_row_start = kernel_index
    true[output_row_start::kernel_size, output_column] = (data[:, input_column])

    upsampled = gp.upsample_transposed_convolution(
        data,
        pool_map,
        sizes=None,
        kernel_size=kernel_size,
        transposed_convolution_op=transposed_convolution_op)
    # Initializes variables of the transpose conv layer.
    self.evaluate(tf.compat.v1.global_variables_initializer())

    self.assertAllEqual(upsampled, true)

  def test_upsample_transposed_convolution_preset_padded(self):
    """Tests upsampling with presets."""
    data = np.reshape(np.arange(12).astype(np.float32), (2, 3, 2))
    data[0, -1, :] = 0.
    sizes = ((2, 3), (3, 3))
    pool_map = _dense_to_sparse(
        np.array((((0.5, 0.5, 0.), (0., 0., 1.), (0., 0., 0.)),
                  ((1., 0., 0.), (0., 1., 0.), (0., 0., 1.))),
                 dtype=np.float32))
    kernel = np.ones(shape=(1, 2, 2, 2), dtype=np.float32)
    initializer = tf.compat.v1.constant_initializer(value=kernel)
    transposed_convolution_op = tf.keras.layers.Conv2DTranspose(
        filters=2,
        kernel_size=(1, 2),
        strides=(1, 2),
        padding='valid',
        use_bias=False,
        kernel_initializer=initializer)
    # Convolving with an all-ones kernel is equal to summation of the input.
    data_sum = np.tile(np.sum(data, axis=-1, keepdims=True), (1, 1, 2))
    true = np.zeros(shape=(2, 3, 2), dtype=np.float32)
    true[0, :, :] = data_sum[0, (0, 0, 1), :]
    true[1, :, :] = data_sum[1, :, :]

    upsampled = gp.upsample_transposed_convolution(
        data,
        pool_map,
        sizes=sizes,
        kernel_size=2,
        transposed_convolution_op=transposed_convolution_op)
    # Initializes variables of the transpose conv layer.
    self.evaluate(tf.compat.v1.global_variables_initializer())

    self.assertAllEqual(upsampled.shape, (2, 3, 2))
    self.assertAllClose(upsampled, true)

  def test_upsample_transposed_convolution_jacobian_random(self):
    """Tests the jacobian is correct."""
    num_filters = 6
    kernel_size = 1
    data_init = np.random.uniform(size=(2, 5, num_filters))
    pool_map = _batch_sparse_eye((2,), 5, np.float64)
    transposed_convolution_op = tf.keras.layers.Conv2DTranspose(
        filters=num_filters,
        kernel_size=(1, kernel_size),
        strides=(1, kernel_size),
        padding='valid',
        dtype='float64')
    # Calling the upsample_transposed_convolution to create the variables
    # in the transposed_convolution.
    gp.upsample_transposed_convolution(
        data_init,
        pool_map,
        sizes=None,
        kernel_size=kernel_size,
        transposed_convolution_op=transposed_convolution_op)
    # Initializes variables of the transpose conv layer.
    self.evaluate(tf.compat.v1.global_variables_initializer())

    def gp_upsample_transposed_convolution(data):
      return gp.upsample_transposed_convolution(
          data,
          pool_map,
          sizes=None,
          kernel_size=kernel_size,
          transposed_convolution_op=transposed_convolution_op)

    self.assert_jacobian_is_correct_fn(gp_upsample_transposed_convolution,
                                       [data_init])

  def test_upsample_transposed_convolution_jacobian_random_padding(self):
    """Tests the jacobian is correct with padded data."""
    num_filters = 6
    sizes = ((2, 4), (3, 5))
    data_init = np.random.uniform(size=(2, 3, num_filters))
    data_init[0, -1, :] = 0.
    pool_map = np.array(
        (((0.5, 0.5, 0., 0., 0.), (0., 0., 0.5, 0.5, 0.),
          (0., 0., 0., 0., 0.)),
         ((1., 0., 0., 0., 0.), (0., 1. / 3., 1. / 3., 1. / 3., 0.),
          (0., 0., 0., 0., 1.))),
        dtype=data_init.dtype)
    pool_map = _dense_to_sparse(pool_map)
    kernel_size = 2
    transposed_convolution_op = tf.keras.layers.Conv2DTranspose(
        filters=num_filters,
        kernel_size=(1, kernel_size),
        strides=(1, kernel_size),
        padding='valid',
        dtype='float64')
    # Calling the upsample_transposed_convolution to create the variables
    # in the transposed_convolution.
    gp.upsample_transposed_convolution(
        data_init,
        pool_map,
        sizes=sizes,
        kernel_size=kernel_size,
        transposed_convolution_op=transposed_convolution_op)
    # Initializes variables of the transpose conv layer.
    self.evaluate(tf.compat.v1.global_variables_initializer())

    def gp_upsample_transposed_convolution(data):
      return gp.upsample_transposed_convolution(
          data,
          pool_map,
          sizes=sizes,
          kernel_size=kernel_size,
          transposed_convolution_op=transposed_convolution_op)

    self.assert_jacobian_is_correct_fn(gp_upsample_transposed_convolution,
                                       [data_init])


if __name__ == '__main__':
  test_case.main()
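The pooling semantics these tests verify can be sketched without TensorFlow. The NumPy approximation of `gp.pool` below handles a single unpadded graph with a dense `pool_map` (the real API takes a `tf.SparseTensor` and supports batching and padding via `sizes`):

```python
import numpy as np

def pool(data, pool_map, algorithm='weighted'):
  """Pools [V_in, C] vertex features through a [V_out, V_in] pooling map.

  'weighted' averages input features with the pool_map weights (a matrix
  product); 'max' takes, per output vertex, the elementwise maximum over
  input vertices with nonzero pooling weight (each output vertex is
  assumed to pool at least one input vertex).
  """
  if algorithm == 'weighted':
    return pool_map @ data
  if algorithm == 'max':
    out = np.empty((pool_map.shape[0], data.shape[1]), dtype=data.dtype)
    for i, row in enumerate(pool_map):
      out[i] = data[row > 0].max(axis=0)
    return out
  raise ValueError('The pooling method must be "weighted" or "max"')
```

With the inputs of `test_pool_preset` (pool_map `[[0.5, 0.5, 0, 0], [0, 0, 0.5, 0.5]]` and `data = np.arange(8).reshape(4, 2)`), 'weighted' yields the pairwise averages `[[1, 2], [5, 6]]` and 'max' the pairwise maxima `[[2, 3], [6, 7]]`, matching `max_weighted` and `max_true` in that test.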
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for tensorflow_graphics.geometry.convolution.tests.graph_pooling."""
# pylint: disable=protected-access

import itertools

from absl.testing import parameterized
import numpy as np
import tensorflow as tf

import tensorflow_graphics.geometry.convolution.graph_pooling as gp
from tensorflow_graphics.geometry.convolution.tests import utils_test
from tensorflow_graphics.util import test_case


def _dense_to_sparse(data):
  """Convert a numpy array to a tf.SparseTensor."""
  return utils_test._dense_to_sparse(data)


def _batch_sparse_eye(batch_shape, num_vertices, dtype):
  """Generate a batch of identity matrices."""
  eye = np.eye(num_vertices, dtype=dtype)
  num_batch_dims = len(batch_shape)
  expand_shape = np.concatenate((np.ones(
      num_batch_dims, dtype=np.int32), (num_vertices, num_vertices)), axis=0)
  eye = np.reshape(eye, expand_shape)
  tile_shape = np.concatenate((batch_shape, (1, 1)), axis=0)
  return _dense_to_sparse(np.tile(eye, tile_shape))


class GraphPoolingTestPoolTests(test_case.TestCase):

  @parameterized.parameters(
      ("'sizes' must have an integer type.", np.float32, np.float32,
       np.float32),
      ("'data' must have a float type.", np.int32, np.float32, np.int32),
      ("'pool_map' and 'data' must have the same type.", np.float32,
       np.float64, np.int32))
  def test_pool_exception_raised_types(self, err_msg, data_type, pool_map_type,
                                       sizes_type):
    """Tests the correct exceptions are raised for invalid types."""
    data = np.ones((2, 3, 3), dtype=data_type)
    pool_map = _dense_to_sparse(np.ones((2, 3, 3), dtype=pool_map_type))
    sizes = np.array(((1, 2), (2, 3)), dtype=sizes_type)

    with self.assertRaisesRegexp(TypeError, err_msg):
      gp.pool(data, pool_map, sizes)

  @parameterized.parameters(
      ('data must have a rank greater than 1', (3,), (3,), None),
      ('pool_map must have a rank of 2', (3, 3), (3,), None),
      ('sizes must have a rank of 3', (4, 5, 3, 2), (4, 5, 3, 3), (3, 2)),
  )
  def test_pool_exception_raised_shapes(self, err_msg, data_shape,
                                        pool_map_shape, sizes_shape):
    """Tests the correct exceptions are raised for invalid shapes."""
    data = np.ones(data_shape, dtype=np.float32)
    pool_map = _dense_to_sparse(np.ones(pool_map_shape, dtype=np.float32))
    if sizes_shape is not None:
      sizes = np.ones(sizes_shape, dtype=np.int32)
    else:
      sizes = None

    with self.assertRaisesRegexp(ValueError, err_msg):
      gp.pool(data, pool_map, sizes)

  def test_pool_exception_raised_algorithm(self):
    """Tests the correct exception is raised for an invalid algorithm."""
    data = np.ones(shape=(2, 2))
    pool_map = _dense_to_sparse(np.ones(shape=(2, 2)))

    with self.assertRaisesRegexp(
        ValueError, 'The pooling method must be "weighted" or "max"'):
      gp.pool(data, pool_map, sizes=None, algorithm='mean')

  @parameterized.parameters(
      ((2, 3), 4, 3, np.float32),
      ((1,), 6, 1, np.float32),
      ((4, 1, 3), 9, 7, np.float64),
      ((2, 8, 4, 6), 19, 11, np.float64),
  )
  def test_pool_identity(self, batch_shape, num_vertices, num_features,
                         data_type):
    """Tests graph pooling with identity maps."""
    data_shape = np.concatenate((batch_shape, (num_vertices, num_features)))
    data = np.random.uniform(size=data_shape).astype(data_type)
    pool_map = _batch_sparse_eye(batch_shape, num_vertices, data_type)

    pooled_max = gp.pool(data, pool_map, sizes=None, algorithm='max', name=None)
    pooled_weighted = gp.pool(
        data, pool_map, sizes=None, algorithm='weighted', name=None)

    self.assertAllClose(pooled_max, data)
    self.assertAllClose(pooled_weighted, data)

  def test_pool_preset_padded(self):
    """Tests pooling with preset data and padding."""
    data = np.reshape(np.arange(12).astype(np.float32), (2, 3, 2))
    sizes = ((2, 3), (3, 3))
    pool_map = _dense_to_sparse(
        np.array((((0.5, 0.5, 0.), (0., 0., 1.), (0., 0., 0.)),
                  ((1., 0., 0.), (0., 1., 0.), (0., 0., 1.))),
                 dtype=np.float32))

    pooled_max = gp.pool(data, pool_map, sizes, algorithm='max')
    pooled_weighted = gp.pool(data, pool_map, sizes, algorithm='weighted')
    true_max = (((2., 3.), (4., 5.), (0., 0.)),
                ((6., 7.), (8., 9.), (10., 11.)))
    true_weighted = (((1., 2.), (4., 5.), (0., 0.)),
                     ((6., 7.), (8., 9.), (10., 11.)))

    self.assertAllClose(pooled_max, true_max)
    self.assertAllClose(pooled_weighted, true_weighted)

  def test_pool_preset(self):
    """Tests pooling with preset data."""
    pool_map = np.array(((0.5, 0.5, 0., 0.), (0., 0., 0.5, 0.5)),
                        dtype=np.float32)
    pool_map = _dense_to_sparse(pool_map)
    data = np.reshape(np.arange(8).astype(np.float32), (4, 2))
    max_true = data[(1, 3), :]
    max_weighted = (data[(0, 2), :] + max_true) * 0.5

    pooled_max = gp.pool(data, pool_map, sizes=None, algorithm='max', name=None)
    pooled_weighted = gp.pool(
        data, pool_map, sizes=None, algorithm='weighted', name=None)

    self.assertAllClose(pooled_max, max_true)
    self.assertAllClose(pooled_weighted, max_weighted)

  @parameterized.parameters((20, 10, 3), (2, 1, 1), (2, 5, 4), (2, 1, 3))
  def test_pool_random(self, num_input_vertices, num_output_vertices,
                       num_features):
    """Tests pooling with random inputs."""
    pool_map = 0.001 + np.random.uniform(
        size=(num_output_vertices, num_input_vertices))
    data = np.random.uniform(size=(num_input_vertices, num_features))
    true_weighted = np.matmul(pool_map, data)
    true_max = np.tile(
        np.max(data, axis=0, keepdims=True), (num_output_vertices, 1))
    pool_map = _dense_to_sparse(pool_map)

    with self.subTest(name='max'):
      pooled_max = gp.pool(data, pool_map, None, algorithm='max')

      self.assertAllClose(pooled_max, true_max)

    with self.subTest(name='weighted'):
      pooled_weighted = gp.pool(data, pool_map, None, algorithm='weighted')

      self.assertAllClose(pooled_weighted, true_weighted)

  def test_pool_jacobian(self):
    """Tests the jacobian is correct."""
    sizes = ((2, 4), (3, 5))
    data_init = np.random.uniform(size=(2, 5, 3))
    pool_map = np.random.uniform(size=(2, 3, 5))
    data_init[0, -1, :] = 0.
    pool_map[0, -1, :] = 0.
    pool_map = _dense_to_sparse(pool_map)

    def gp_pool(data, algorithm):
      return gp.pool(data, pool_map, sizes, algorithm=algorithm)

    with self.subTest(name='max'):
      self.assert_jacobian_is_correct_fn(lambda data: gp_pool(data, 'max'),
                                         [data_init])

    with self.subTest(name='weighted'):
      self.assert_jacobian_is_correct_fn(lambda data: gp_pool(data, 'weighted'),
                                         [data_init])


class GraphPoolingTestUnpoolTests(test_case.TestCase):

  @parameterized.parameters(
      ("'sizes' must have an integer type.", np.float32, np.float32,
       np.float32),
      ("'data' must have a float type.", np.int32, np.float32, np.int32),
      ("'pool_map' and 'data' must have the same type.", np.float32,
       np.float64, np.int32))
  def test_unpool_exception_raised_types(self, err_msg, data_type,
                                         pool_map_type, sizes_type):
    """Tests the correct exceptions are raised for invalid types."""
    data = np.ones((2, 3, 3), dtype=data_type)
    pool_map = _dense_to_sparse(np.ones((2, 3, 3), dtype=pool_map_type))
    sizes = np.array(((1, 2), (2, 3)), dtype=sizes_type)

    with self.assertRaisesRegexp(TypeError, err_msg):
      gp.unpool(data, pool_map, sizes)

  @parameterized.parameters(
      ('data must have a rank greater than 1', (3,), (3,), None),
      ('pool_map must have a rank of 2', (3, 3), (3,), None),
      ('sizes must have a rank of 3', (4, 5, 3, 2), (4, 5, 3, 3), (3, 2)),
      ('data must have a rank less than 6', (2, 3, 4, 5, 3, 2),
       (2, 3, 4, 5, 3, 3), None),
  )
  def test_unpool_exception_raised_shapes(self, err_msg, data_shape,
                                          pool_map_shape, sizes_shape):
    """Tests the correct exceptions are raised for invalid shapes."""
    data = np.ones(data_shape, dtype=np.float32)
    pool_map = _dense_to_sparse(np.ones(pool_map_shape, dtype=np.float32))
    if sizes_shape is not None:
      sizes = np.ones(sizes_shape, dtype=np.int32)
    else:
      sizes = None

    with self.assertRaisesRegexp(ValueError, err_msg):
      gp.unpool(data, pool_map, sizes)

  @parameterized.parameters(
      ((2, 3), 4, 3, np.float32),
      ((1,), 6, 1, np.float32),
      ((4, 1, 3), 9, 7, np.float64),
      ((2, 8, 4), 19, 11, np.float64),
  )
  def test_unpool_identity(self, batch_shape, num_vertices, num_features,
                           data_type):
    """Tests graph unpooling with identity maps."""
    data_shape = np.concatenate((batch_shape, (num_vertices, num_features)))
    data = np.random.uniform(size=data_shape).astype(data_type)
    pool_map = _batch_sparse_eye(batch_shape, num_vertices, data_type)

    unpooled = gp.unpool(data, pool_map, sizes=None)

    self.assertAllClose(unpooled, data)

  def test_unpool_preset_padded(self):
    """Tests pooling with preset data and padding."""
    data = np.reshape(np.arange(12).astype(np.float32), (2, 3, 2))
    data[0, -1, :] = 0.
    sizes = ((2, 3), (3, 3))
    pool_map = _dense_to_sparse(
        np.array((((0.5, 0.5, 0.), (0., 0., 1.), (0., 0., 0.)),
                  ((1., 0., 0.), (0., 1., 0.), (0., 0., 1.))),
                 dtype=np.float32))

    unpooled = gp.unpool(data, pool_map, sizes)

    true = (((0., 1.), (0., 1.), (2., 3.)),
            ((6., 7.), (8., 9.), (10., 11.)))
    self.assertAllClose(unpooled, true)

  @parameterized.parameters((20, 4), (2, 1), (12, 4), (6, 3))
  def test_unpool_random(self, num_vertices, num_features):
    """Tests pooling with random data inputs."""
    output_vertices = num_vertices // 2
    pool_map = np.zeros(shape=(output_vertices, num_vertices),
                        dtype=np.float32)
    for i in range(output_vertices):
      pool_map[i, (i * 2, i * 2 + 1)] = (0.5, 0.5)
    data = np.random.uniform(size=(output_vertices,
                                   num_features)).astype(np.float32)

    unpooled = gp.unpool(
        data, _dense_to_sparse(pool_map), sizes=None, name=None)

    with self.subTest(name='direct_unpool'):
      true = np.zeros(shape=(num_vertices, num_features)).astype(np.float32)
      true[0::2, :] = data
      true[1::2, :] = data
      self.assertAllClose(unpooled, true)

    with self.subTest(name='permute_pool_map'):
      permutation = np.random.permutation(num_vertices)
      pool_map_permute = pool_map[:, permutation]
      unpooled_permute = gp.unpool(data, _dense_to_sparse(pool_map_permute),
                                   None)
      true_permute = true[permutation, :]
      self.assertAllClose(unpooled_permute, true_permute)

  def test_unpool_jacobian_random(self):
    """Tests the jacobian is correct."""
    sizes = ((2, 4), (3, 5))
    data_init = np.random.uniform(size=(2, 3, 6))
    pool_map = np.random.uniform(size=(2, 3, 5))
    data_init[0, -1, :] = 0.
    pool_map[0, -1, :] = 0.
    pool_map = _dense_to_sparse(pool_map)

    def gp_unpool(data):
      return gp.unpool(data, pool_map, sizes)

    self.assert_jacobian_is_correct_fn(gp_unpool, [data_init])


class GraphPoolingUpsampleTransposeConvolutionTests(test_case.TestCase):

  @parameterized.parameters(
      ("'sizes' must have an integer type.", np.float32, np.float32,
       np.float32),
      ("'data' must have a float type.", np.int32, np.float32, np.int32),
      ("'pool_map' and 'data' must have the same type.", np.float32,
       np.float64, np.int32))
  def test_upsample_transposed_convolution_exception_raised_types(
      self, err_msg, data_type, pool_map_type, sizes_type):
    """Tests the correct exceptions are raised for invalid types."""
    data = np.ones((2, 3, 3), dtype=data_type)
    pool_map = _dense_to_sparse(np.ones((2, 3, 3), dtype=pool_map_type))
    sizes = np.array(((1, 2), (2, 3)), dtype=sizes_type)

    with self.assertRaisesRegexp(TypeError, err_msg):
      gp.upsample_transposed_convolution(
          data, pool_map, sizes, kernel_size=1, transposed_convolution_op=None)

  @parameterized.parameters(
      ('data must have a rank greater than 1', (3,), (3,), None),
      ('pool_map must have a rank of 2', (3, 3), (3,), None),
      ('sizes must have a rank of 3', (4, 5, 3, 2), (4, 5, 3, 3), (3, 2)),
      ('data must have a rank less than 6', (2, 3, 4, 5, 3, 2),
       (2, 3, 4, 5, 3, 3), None),
  )
  def test_upsample_transposed_convolution_exception_raised_shapes(
      self, err_msg, data_shape, pool_map_shape, sizes_shape):
    """Tests the correct exceptions are raised for invalid shapes."""
    data = np.ones(data_shape, dtype=np.float32)
    pool_map = _dense_to_sparse(np.ones(pool_map_shape, dtype=np.float32))
    if sizes_shape is not None:
      sizes = np.ones(sizes_shape, dtype=np.int32)
    else:
      sizes = None

    with self.assertRaisesRegexp(ValueError, err_msg):
      gp.upsample_transposed_convolution(
          data, pool_map, sizes, kernel_size=1, transposed_convolution_op=None)

  def test_upsample_transposed_convolution_exception_raised_callable(self):
    """Tests the correct exception is raised for an invalid convolution op."""
    data = np.ones((5, 3))
    pool_map = _dense_to_sparse(np.eye(5))
    err_msg = "'transposed_convolution_op' must be callable."

    with self.assertRaisesRegexp(TypeError, err_msg):
      gp.upsample_transposed_convolution(
          data,
          pool_map,
          sizes=None,
          kernel_size=1,
          transposed_convolution_op=1)

  @parameterized.parameters((1, 1, 1, np.float32), (5, 3, 1, np.float32),
                            (3, 6, 15, np.float64))
  def test_upsample_transposed_convolution_zero_kernel(self, num_vertices,
                                                       num_features,
                                                       kernel_size, data_type):
    """Tests the upsampling with a zero kernel."""
    data = np.random.uniform(size=(num_vertices,
                                   num_features)).astype(data_type)
    pool_map = np.zeros(
        shape=(num_vertices, num_vertices * kernel_size), dtype=data_type)
    for i in range(num_vertices):
      pool_map[i, np.arange(kernel_size * i,
                            kernel_size * (i + 1))] = (1.0 / kernel_size)
    pool_map = _dense_to_sparse(pool_map)
    # Transposed convolution op with a zero kernel.
    transposed_convolution_op = tf.keras.layers.Conv2DTranspose(
        filters=num_features,
        kernel_size=(1, kernel_size),
        strides=(1, kernel_size),
        padding='valid',
        use_bias=False,
        kernel_initializer=tf.compat.v1.keras.initializers.zeros())

    upsampled = gp.upsample_transposed_convolution(
        data,
        pool_map,
        sizes=None,
        kernel_size=kernel_size,
        transposed_convolution_op=transposed_convolution_op)
    # Initializes variables of the transpose conv layer.
    self.evaluate(tf.compat.v1.global_variables_initializer())

    self.assertAllEqual(
        tf.shape(input=upsampled), (num_vertices * kernel_size, num_features))
    self.assertAllEqual(upsampled, tf.zeros_like(upsampled))

  @parameterized.parameters(
      itertools.product((3,), (6,), (3,), range(3), range(6), range(6)),)
  def test_upsample_transposed_convolution_selector_kernel_random(
      self, num_vertices, num_features, kernel_size, kernel_index,
      feature1_index, feature2_index):
    """Tests the upsampling with an indicator kernel."""
    data = np.random.uniform(size=(num_vertices,
                                   num_features)).astype(np.float32)
    pool_map = np.zeros(
        shape=(num_vertices, num_vertices * kernel_size), dtype=np.float32)
    for i in range(num_vertices):
      pool_map[i, np.arange(kernel_size * i,
                            kernel_size * (i + 1))] = (1.0 / kernel_size)
    pool_map = _dense_to_sparse(pool_map)
    selection = np.zeros(
        shape=(1, kernel_size, num_features, num_features), dtype=np.float32)
    selection[0, kernel_index, feature1_index, feature2_index] = 1.
    initializer = tf.compat.v1.constant_initializer(value=selection)
    transposed_convolution_op = tf.keras.layers.Conv2DTranspose(
        filters=num_features,
        kernel_size=(1, kernel_size),
        strides=(1, kernel_size),
        padding='valid',
        use_bias=False,
        kernel_initializer=initializer)
    true = np.zeros(
        shape=(num_vertices * kernel_size, num_features), dtype=np.float32)
    input_column = feature2_index
    output_column = feature1_index
    output_row_start = kernel_index
    true[output_row_start::kernel_size, output_column] = (data[:, input_column])

    upsampled = gp.upsample_transposed_convolution(
        data,
        pool_map,
        sizes=None,
        kernel_size=kernel_size,
        transposed_convolution_op=transposed_convolution_op)
    # Initializes variables of the transpose conv layer.
    self.evaluate(tf.compat.v1.global_variables_initializer())

    self.assertAllEqual(upsampled, true)

  def test_upsample_transposed_convolution_preset_padded(self):
    """Tests upsampling with presets."""
    data = np.reshape(np.arange(12).astype(np.float32), (2, 3, 2))
    data[0, -1, :] = 0.
    sizes = ((2, 3), (3, 3))
    pool_map = _dense_to_sparse(
        np.array((((0.5, 0.5, 0.), (0., 0., 1.), (0., 0., 0.)),
                  ((1., 0., 0.), (0., 1., 0.), (0., 0., 1.))),
                 dtype=np.float32))
    kernel = np.ones(shape=(1, 2, 2, 2), dtype=np.float32)
    initializer = tf.compat.v1.constant_initializer(value=kernel)
    transposed_convolution_op = tf.keras.layers.Conv2DTranspose(
        filters=2,
        kernel_size=(1, 2),
        strides=(1, 2),
        padding='valid',
        use_bias=False,
        kernel_initializer=initializer)
    # Convolving with an all-ones kernel is equal to summation of the input.
    data_sum = np.tile(np.sum(data, axis=-1, keepdims=True), (1, 1, 2))
    true = np.zeros(shape=(2, 3, 2), dtype=np.float32)
    true[0, :, :] = data_sum[0, (0, 0, 1), :]
    true[1, :, :] = data_sum[1, :, :]

    upsampled = gp.upsample_transposed_convolution(
        data,
        pool_map,
        sizes=sizes,
        kernel_size=2,
        transposed_convolution_op=transposed_convolution_op)
    # Initializes variables of the transpose conv layer.
    self.evaluate(tf.compat.v1.global_variables_initializer())

    self.assertAllEqual(upsampled.shape, (2, 3, 2))
    self.assertAllClose(upsampled, true)

  def test_upsample_transposed_convolution_jacobian_random(self):
    """Tests the jacobian is correct."""
    num_filters = 6
    kernel_size = 1
    data_init = np.random.uniform(size=(2, 5, num_filters))
    pool_map = _batch_sparse_eye((2,), 5, np.float64)
    transposed_convolution_op = tf.keras.layers.Conv2DTranspose(
        filters=num_filters,
        kernel_size=(1, kernel_size),
        strides=(1, kernel_size),
        padding='valid',
        dtype='float64')
    # Calling the upsample_transposed_convolution to create the variables
    # in the transposed_convolution.
gp.upsample_transposed_convolution( data_init, pool_map, sizes=None, kernel_size=kernel_size, transposed_convolution_op=transposed_convolution_op) # Initializes variables of the transpose conv layer. self.evaluate(tf.compat.v1.global_variables_initializer()) def gp_upsample_transposed_convolution(data): return gp.upsample_transposed_convolution( data, pool_map, sizes=None, kernel_size=kernel_size, transposed_convolution_op=transposed_convolution_op) self.assert_jacobian_is_correct_fn(gp_upsample_transposed_convolution, [data_init]) def test_upsample_transposed_convolution_jacobian_random_padding(self): """Tests the jacobian is correct with padded data.""" num_filters = 6 sizes = ((2, 4), (3, 5)) data_init = np.random.uniform(size=(2, 3, num_filters)) data_init[0, -1, :] = 0. pool_map = np.array( (((0.5, 0.5, 0., 0., 0.), (0., 0., 0.5, 0.5, 0.), (0., 0., 0., 0., 0.)), ((1., 0., 0., 0., 0.), (0., 1. / 3., 1. / 3., 1. / 3., 0.), (0., 0., 0., 0., 1.))), dtype=data_init.dtype) pool_map = _dense_to_sparse(pool_map) kernel_size = 2 transposed_convolution_op = tf.keras.layers.Conv2DTranspose( filters=num_filters, kernel_size=(1, kernel_size), strides=(1, kernel_size), padding='valid', dtype='float64') # Calling the upsample_transposed_convolution to create the variables # in the transposed_convoution. gp.upsample_transposed_convolution( data_init, pool_map, sizes=sizes, kernel_size=kernel_size, transposed_convolution_op=transposed_convolution_op) # Initializes variables of the transpose conv layer. self.evaluate(tf.compat.v1.global_variables_initializer()) def gp_upsample_transposed_convolution(data): return gp.upsample_transposed_convolution( data, pool_map, sizes=sizes, kernel_size=kernel_size, transposed_convolution_op=transposed_convolution_op) self.assert_jacobian_is_correct_fn(gp_upsample_transposed_convolution, [data_init]) if __name__ == '__main__': test_case.main()
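The tests above build `pool_map` matrices whose rows hold the normalized weights with which input vertices contributed to each pooled vertex, and check that `gp.unpool` copies pooled features back to every contributing vertex. A minimal numpy sketch of that "copy back" behaviour, as a hypothetical re-implementation for illustration only (the library version also handles sparse maps and padded batches):

```python
import numpy as np

def unpool(data, pool_map):
    """Distribute pooled vertex features back to the unpooled vertices.

    pool_map[i, j] is the (normalized) weight with which unpooled vertex j
    contributed to pooled vertex i; copying features back amounts to a
    transposed matrix product against a binarized map.
    """
    # Binarize the weights so every contributing vertex receives a full copy
    # of the pooled feature, matching "copy back" unpooling semantics.
    scatter = (pool_map > 0).astype(data.dtype)
    return scatter.T @ data  # shape: [num_unpooled_vertices, num_features]

# Two pooled vertices, each averaging two of four input vertices.
data = np.array([[1.0, 2.0],
                 [3.0, 4.0]])
pool_map = np.array([[0.5, 0.5, 0.0, 0.0],
                     [0.0, 0.0, 0.5, 0.5]])
print(unpool(data, pool_map))
# -> each input vertex recovers the feature of the pooled vertex it fed:
#    [[1. 2.] [1. 2.] [3. 4.] [3. 4.]]
```

Permuting the columns of `pool_map` permutes the rows of the output in the same way, which is exactly the invariance checked by the permutation test above.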
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/geometry/transformation/tests/look_at_test.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for OpenGL lookAt functions.""" from absl.testing import parameterized import numpy as np from tensorflow_graphics.geometry.transformation import look_at from tensorflow_graphics.util import test_case class LookAtTest(test_case.TestCase): def test_look_at_right_handed_preset(self): """Tests that look_at_right_handed generates expected results.""" camera_position = ((0.0, 0.0, 0.0), (0.1, 0.2, 0.3)) look_at_point = ((0.0, 0.0, 1.0), (0.4, 0.5, 0.6)) up_vector = ((0.0, 1.0, 0.0), (0.7, 0.8, 0.9)) pred = look_at.right_handed(camera_position, look_at_point, up_vector) gt = (((-1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, -1.0, 0.0), (0.0, 0.0, 0.0, 1.0)), ((4.08248186e-01, -8.16496551e-01, 4.08248395e-01, -2.98023224e-08), (-7.07106888e-01, 1.19209290e-07, 7.07106769e-01, -1.41421378e-01), (-5.77350318e-01, -5.77350318e-01, -5.77350318e-01, 3.46410215e-01), (0.0, 0.0, 0.0, 1.0))) self.assertAllClose(pred, gt) @parameterized.parameters( ((3,), (3,), (3,)), ((None, 3), (None, 3), (None, 3)), ((None, 2, 3), (None, 2, 3), (None, 2, 3)), ) def test_look_at_right_handed_exception_not_raised(self, *shapes): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised(look_at.right_handed, shapes) @parameterized.parameters( ("must have exactly 3 dimensions in axis -1", (2,), (3,), (3,)), ("must have exactly 3 dimensions in axis -1", (3,), (2,), (3,)), ("must 
have exactly 3 dimensions in axis -1", (3,), (3,), (1,)), ("Not all batch dimensions are identical", (3,), (3, 3), (3, 3)), ) def test_look_at_right_handed_exception_raised(self, error_msg, *shapes): """Tests that the shape exceptions are properly raised.""" self.assert_exception_is_raised(look_at.right_handed, error_msg, shapes) def test_look_at_right_handed_jacobian_preset(self): """Tests the Jacobian of look_at_right_handed.""" camera_position_init = np.array(((0.0, 0.0, 0.0), (0.1, 0.2, 0.3))) look_at_init = np.array(((0.0, 0.0, 1.0), (0.4, 0.5, 0.6))) up_vector_init = np.array(((0.0, 1.0, 0.0), (0.7, 0.8, 0.9))) self.assert_jacobian_is_correct_fn( look_at.right_handed, [camera_position_init, look_at_init, up_vector_init]) def test_look_at_right_handed_jacobian_random(self): """Tests the Jacobian of look_at_right_handed.""" tensor_size = np.random.randint(1, 3) tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist() camera_position_init = np.random.uniform(size=tensor_shape + [3]) look_at_init = np.random.uniform(size=tensor_shape + [3]) up_vector_init = np.random.uniform(size=tensor_shape + [3]) self.assert_jacobian_is_correct_fn( look_at.right_handed, [camera_position_init, look_at_init, up_vector_init])
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for OpenGL lookAt functions.""" from absl.testing import parameterized import numpy as np from tensorflow_graphics.geometry.transformation import look_at from tensorflow_graphics.util import test_case class LookAtTest(test_case.TestCase): def test_look_at_right_handed_preset(self): """Tests that look_at_right_handed generates expected results.""" camera_position = ((0.0, 0.0, 0.0), (0.1, 0.2, 0.3)) look_at_point = ((0.0, 0.0, 1.0), (0.4, 0.5, 0.6)) up_vector = ((0.0, 1.0, 0.0), (0.7, 0.8, 0.9)) pred = look_at.right_handed(camera_position, look_at_point, up_vector) gt = (((-1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.0, 0.0, -1.0, 0.0), (0.0, 0.0, 0.0, 1.0)), ((4.08248186e-01, -8.16496551e-01, 4.08248395e-01, -2.98023224e-08), (-7.07106888e-01, 1.19209290e-07, 7.07106769e-01, -1.41421378e-01), (-5.77350318e-01, -5.77350318e-01, -5.77350318e-01, 3.46410215e-01), (0.0, 0.0, 0.0, 1.0))) self.assertAllClose(pred, gt) @parameterized.parameters( ((3,), (3,), (3,)), ((None, 3), (None, 3), (None, 3)), ((None, 2, 3), (None, 2, 3), (None, 2, 3)), ) def test_look_at_right_handed_exception_not_raised(self, *shapes): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised(look_at.right_handed, shapes) @parameterized.parameters( ("must have exactly 3 dimensions in axis -1", (2,), (3,), (3,)), ("must have exactly 3 dimensions in axis -1", (3,), (2,), (3,)), ("must 
have exactly 3 dimensions in axis -1", (3,), (3,), (1,)), ("Not all batch dimensions are identical", (3,), (3, 3), (3, 3)), ) def test_look_at_right_handed_exception_raised(self, error_msg, *shapes): """Tests that the shape exceptions are properly raised.""" self.assert_exception_is_raised(look_at.right_handed, error_msg, shapes) def test_look_at_right_handed_jacobian_preset(self): """Tests the Jacobian of look_at_right_handed.""" camera_position_init = np.array(((0.0, 0.0, 0.0), (0.1, 0.2, 0.3))) look_at_init = np.array(((0.0, 0.0, 1.0), (0.4, 0.5, 0.6))) up_vector_init = np.array(((0.0, 1.0, 0.0), (0.7, 0.8, 0.9))) self.assert_jacobian_is_correct_fn( look_at.right_handed, [camera_position_init, look_at_init, up_vector_init]) def test_look_at_right_handed_jacobian_random(self): """Tests the Jacobian of look_at_right_handed.""" tensor_size = np.random.randint(1, 3) tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist() camera_position_init = np.random.uniform(size=tensor_shape + [3]) look_at_init = np.random.uniform(size=tensor_shape + [3]) up_vector_init = np.random.uniform(size=tensor_shape + [3]) self.assert_jacobian_is_correct_fn( look_at.right_handed, [camera_position_init, look_at_init, up_vector_init])
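The first preset in `test_look_at_right_handed_preset` can be reproduced by hand with the standard right-handed (OpenGL `gluLookAt`) construction. A minimal unbatched numpy sketch, kept separate from the library's batched implementation:

```python
import numpy as np

def look_at_right_handed(eye, center, up):
    """Right-handed look-at matrix (gluLookAt convention), unbatched sketch."""
    eye, center, up = (np.asarray(v, dtype=np.float64)
                       for v in (eye, center, up))
    f = center - eye
    f /= np.linalg.norm(f)        # forward direction
    s = np.cross(f, up)
    s /= np.linalg.norm(s)        # right direction
    u = np.cross(s, f)            # recomputed (orthogonal) up
    m = np.eye(4)
    m[0, :3], m[1, :3], m[2, :3] = s, u, -f
    m[:3, 3] = -m[:3, :3] @ eye   # translate the eye to the origin
    return m

# Camera at the origin looking down +z with +y up, as in the preset above.
print(look_at_right_handed((0., 0., 0.), (0., 0., 1.), (0., 1., 0.)))
# -> [[-1. 0. 0. 0.] [0. 1. 0. 0.] [0. 0. -1. 0.] [0. 0. 0. 1.]]
```

This matches the first ground-truth matrix in the preset test: looking down +z in a right-handed frame flips both the x and z axes.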
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/geometry/representation/tests/__init__.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/projects/nasa/eval.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Reconstruction Evaluation.""" from os import path import numpy as np import tensorflow.compat.v1 as tf from tensorflow_graphics.projects.nasa.lib import datasets from tensorflow_graphics.projects.nasa.lib import models from tensorflow_graphics.projects.nasa.lib import utils tf.disable_eager_execution() flags = tf.app.flags logging = tf.logging tf.logging.set_verbosity(tf.logging.INFO) utils.define_flags() FLAGS = flags.FLAGS def build_eval_graph(input_fn, model_fn, hparams): """Build the evaluation computation graph.""" dataset = input_fn(None) batch = dataset.make_one_shot_iterator().get_next() batch_holder = { "transform": tf.placeholder( tf.float32, [1, 1, hparams.n_parts, hparams.n_dims + 1, hparams.n_dims + 1]), "joint": tf.placeholder(tf.float32, [1, 1, hparams.n_parts, hparams.n_dims]), "point": tf.placeholder(tf.float32, [1, 1, None, hparams.n_dims]), "label": tf.placeholder(tf.float32, [1, 1, None, 1]), } latent_holder, latent, occ = model_fn(batch_holder, None, None, "gen_mesh") # Eval Summary iou_holder = tf.placeholder(tf.float32, []) best_holder = tf.placeholder(tf.float32, []) tf.summary.scalar("IoU", iou_holder) tf.summary.scalar("Best_IoU", best_holder) return { "batch_holder": batch_holder, "latent_holder": latent_holder, "latent": latent, "occ": occ, "batch": batch, "iou_holder": iou_holder, "best_holder": best_holder, "merged_summary": tf.summary.merge_all(), } def 
evaluate(hook_dict, ckpt, saver, best_iou, hparams): """Evaluate a checkpoint on the whole test set.""" batch = hook_dict["batch"] merged_summary = hook_dict["merged_summary"] iou_holder = hook_dict["iou_holder"] best_holder = hook_dict["best_holder"] batch_holder = hook_dict["batch_holder"] latent_holder = hook_dict["latent_holder"] latent = hook_dict["latent"] occ = hook_dict["occ"] global_step = utils.parse_global_step(ckpt) assignment_map = { "shape/": "shape/", } tf.train.init_from_checkpoint(ckpt, assignment_map) init_op = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init_op) accum_iou = 0. example_cnt = 0 while True: try: batch_val = sess.run(batch) feed_dict = { batch_holder["transform"]: batch_val["transform"], batch_holder["joint"]: batch_val["joint"], } iou = utils.compute_iou(sess, feed_dict, latent_holder, batch_holder["point"], latent, occ[:, -1:], batch_val["points"], batch_val["labels"], hparams) accum_iou += iou example_cnt += 1 if hparams.gen_mesh_only > 0: # Generate meshes for evaluation unused_var = utils.save_mesh( sess, feed_dict, latent_holder, batch_holder["point"], latent, occ, batch_val, hparams, ) logging.info("Generated mesh No.{}".format(example_cnt)) except tf.errors.OutOfRangeError: accum_iou /= example_cnt if best_iou < accum_iou: best_iou = accum_iou saver.save(sess, path.join(hparams.train_dir, "best", "model.ckpt"), global_step) summary = sess.run( merged_summary, utils.make_summary_feed_dict( iou_holder, accum_iou, best_holder, best_iou, )) # If only generating meshes for the sequence, we can determinate the # evaluation after the first full loop over the test set. if hparams.gen_mesh_only: exit(0) break return summary, global_step def main(unused_argv): tf.random.set_random_seed(20200823) np.random.seed(20200823) input_fn = datasets.get_dataset("test", FLAGS) model_fn = models.get_model(FLAGS) best_iou = 0. 
with tf.summary.FileWriter(path.join(FLAGS.train_dir, "eval")) as eval_writer: hook_dict = build_eval_graph(input_fn, model_fn, FLAGS) saver = tf.train.Saver() for ckpt in tf.train.checkpoints_iterator(FLAGS.train_dir, timeout=1800): summary, global_step = evaluate(hook_dict, ckpt, saver, best_iou, FLAGS) eval_writer.add_summary(summary, global_step) eval_writer.flush() if __name__ == "__main__": tf.app.run(main)
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Reconstruction Evaluation.""" from os import path import numpy as np import tensorflow.compat.v1 as tf from tensorflow_graphics.projects.nasa.lib import datasets from tensorflow_graphics.projects.nasa.lib import models from tensorflow_graphics.projects.nasa.lib import utils tf.disable_eager_execution() flags = tf.app.flags logging = tf.logging tf.logging.set_verbosity(tf.logging.INFO) utils.define_flags() FLAGS = flags.FLAGS def build_eval_graph(input_fn, model_fn, hparams): """Build the evaluation computation graph.""" dataset = input_fn(None) batch = dataset.make_one_shot_iterator().get_next() batch_holder = { "transform": tf.placeholder( tf.float32, [1, 1, hparams.n_parts, hparams.n_dims + 1, hparams.n_dims + 1]), "joint": tf.placeholder(tf.float32, [1, 1, hparams.n_parts, hparams.n_dims]), "point": tf.placeholder(tf.float32, [1, 1, None, hparams.n_dims]), "label": tf.placeholder(tf.float32, [1, 1, None, 1]), } latent_holder, latent, occ = model_fn(batch_holder, None, None, "gen_mesh") # Eval Summary iou_holder = tf.placeholder(tf.float32, []) best_holder = tf.placeholder(tf.float32, []) tf.summary.scalar("IoU", iou_holder) tf.summary.scalar("Best_IoU", best_holder) return { "batch_holder": batch_holder, "latent_holder": latent_holder, "latent": latent, "occ": occ, "batch": batch, "iou_holder": iou_holder, "best_holder": best_holder, "merged_summary": tf.summary.merge_all(), } def 
evaluate(hook_dict, ckpt, saver, best_iou, hparams): """Evaluate a checkpoint on the whole test set.""" batch = hook_dict["batch"] merged_summary = hook_dict["merged_summary"] iou_holder = hook_dict["iou_holder"] best_holder = hook_dict["best_holder"] batch_holder = hook_dict["batch_holder"] latent_holder = hook_dict["latent_holder"] latent = hook_dict["latent"] occ = hook_dict["occ"] global_step = utils.parse_global_step(ckpt) assignment_map = { "shape/": "shape/", } tf.train.init_from_checkpoint(ckpt, assignment_map) init_op = tf.global_variables_initializer() with tf.Session() as sess: sess.run(init_op) accum_iou = 0. example_cnt = 0 while True: try: batch_val = sess.run(batch) feed_dict = { batch_holder["transform"]: batch_val["transform"], batch_holder["joint"]: batch_val["joint"], } iou = utils.compute_iou(sess, feed_dict, latent_holder, batch_holder["point"], latent, occ[:, -1:], batch_val["points"], batch_val["labels"], hparams) accum_iou += iou example_cnt += 1 if hparams.gen_mesh_only > 0: # Generate meshes for evaluation unused_var = utils.save_mesh( sess, feed_dict, latent_holder, batch_holder["point"], latent, occ, batch_val, hparams, ) logging.info("Generated mesh No.{}".format(example_cnt)) except tf.errors.OutOfRangeError: accum_iou /= example_cnt if best_iou < accum_iou: best_iou = accum_iou saver.save(sess, path.join(hparams.train_dir, "best", "model.ckpt"), global_step) summary = sess.run( merged_summary, utils.make_summary_feed_dict( iou_holder, accum_iou, best_holder, best_iou, )) # If only generating meshes for the sequence, we can determinate the # evaluation after the first full loop over the test set. if hparams.gen_mesh_only: exit(0) break return summary, global_step def main(unused_argv): tf.random.set_random_seed(20200823) np.random.seed(20200823) input_fn = datasets.get_dataset("test", FLAGS) model_fn = models.get_model(FLAGS) best_iou = 0. 
with tf.summary.FileWriter(path.join(FLAGS.train_dir, "eval")) as eval_writer: hook_dict = build_eval_graph(input_fn, model_fn, FLAGS) saver = tf.train.Saver() for ckpt in tf.train.checkpoints_iterator(FLAGS.train_dir, timeout=1800): summary, global_step = evaluate(hook_dict, ckpt, saver, best_iou, FLAGS) eval_writer.add_summary(summary, global_step) eval_writer.flush() if __name__ == "__main__": tf.app.run(main)
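The evaluation loop above accumulates a per-example IoU via `utils.compute_iou` and tracks the best running mean. A hedged numpy sketch of an occupancy IoU metric of this kind; the function name and the 0.5 threshold are assumptions for illustration, not the library's exact implementation:

```python
import numpy as np

def occupancy_iou(pred_occ, gt_labels, threshold=0.5):
    """Intersection-over-union between thresholded occupancies and labels."""
    pred = np.asarray(pred_occ) >= threshold
    gt = np.asarray(gt_labels) >= threshold
    intersection = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    # Both sets empty means perfect agreement on "everything is outside".
    return intersection / union if union > 0 else 1.0

pred = np.array([0.9, 0.8, 0.2, 0.6])
gt = np.array([1.0, 0.0, 0.0, 1.0])
print(occupancy_iou(pred, gt))  # 2 shared occupied points of 3 -> 0.666...
```

Averaging this quantity over all test examples, as the `while True` loop does, yields the `accum_iou` that is compared against `best_iou` before checkpointing.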
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/geometry/representation/ray.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tensorflow ray utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.math import vector from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def triangulate(startpoints, endpoints, weights, name=None): """Triangulates 3d points by miminizing the sum of squared distances to rays. The rays are defined by their start points and endpoints. At least two rays are required to triangulate any given point. Contrary to the standard reprojection-error metric, the sum of squared distances to rays can be minimized in a closed form. Note: In the following, A1 to An are optional batch dimensions. Args: startpoints: A tensor of ray start points with shape `[A1, ..., An, V, 3]`, the number of rays V around which the solution points live should be greater or equal to 2, otherwise triangulation is impossible. endpoints: A tensor of ray endpoints with shape `[A1, ..., An, V, 3]`, the number of rays V around which the solution points live should be greater or equal to 2, otherwise triangulation is impossible. The `endpoints` tensor should have the same shape as the `startpoints` tensor. weights: A tensor of ray weights (certainties) with shape `[A1, ..., An, V]`. Weights should have all positive entries. 
Weight should have at least two non-zero entries for each point (at least two rays should have certainties > 0). name: A name for this op. The default value of None means "ray_triangulate". Returns: A tensor of triangulated points with shape `[A1, ..., An, 3]`. Raises: ValueError: If the shape of the arguments is not supported. """ with tf.compat.v1.name_scope(name, "ray_triangulate", [startpoints, endpoints, weights]): startpoints = tf.convert_to_tensor(value=startpoints) endpoints = tf.convert_to_tensor(value=endpoints) weights = tf.convert_to_tensor(value=weights) shape.check_static( tensor=startpoints, tensor_name="startpoints", has_rank_greater_than=1, has_dim_equals=(-1, 3), has_dim_greater_than=(-2, 1)) shape.check_static( tensor=endpoints, tensor_name="endpoints", has_rank_greater_than=1, has_dim_equals=(-1, 3), has_dim_greater_than=(-2, 1)) shape.compare_batch_dimensions( tensors=(startpoints, endpoints, weights), last_axes=(-2, -2, -1), broadcast_compatible=False) weights = asserts.assert_all_above(weights, 0.0, open_bound=False) weights = asserts.assert_at_least_k_non_zero_entries(weights, k=2) left_hand_side_list = [] right_hand_side_list = [] # TODO(b/130892100): Replace the inefficient for loop and add comments here. 
for ray_id in range(weights.shape[-1]): weights_single_ray = weights[..., ray_id] startpoints_single_ray = startpoints[..., ray_id, :] endpoints_singleview = endpoints[..., ray_id, :] ray = endpoints_singleview - startpoints_single_ray ray = tf.nn.l2_normalize(ray, axis=-1) ray_x, ray_y, ray_z = tf.unstack(ray, axis=-1) zeros = tf.zeros_like(ray_x) cross_product_matrix = tf.stack( (zeros, -ray_z, ray_y, ray_z, zeros, -ray_x, -ray_y, ray_x, zeros), axis=-1) cross_product_matrix_shape = tf.concat( (tf.shape(input=cross_product_matrix)[:-1], (3, 3)), axis=-1) cross_product_matrix = tf.reshape( cross_product_matrix, shape=cross_product_matrix_shape) weights_single_ray = tf.expand_dims(weights_single_ray, axis=-1) weights_single_ray = tf.expand_dims(weights_single_ray, axis=-1) left_hand_side = weights_single_ray * cross_product_matrix left_hand_side_list.append(left_hand_side) dot_product = tf.matmul(cross_product_matrix, tf.expand_dims(startpoints_single_ray, axis=-1)) right_hand_side = weights_single_ray * dot_product right_hand_side_list.append(right_hand_side) left_hand_side_multi_rays = tf.concat(left_hand_side_list, axis=-2) right_hand_side_multi_rays = tf.concat(right_hand_side_list, axis=-2) points = tf.linalg.lstsq(left_hand_side_multi_rays, right_hand_side_multi_rays) points = tf.squeeze(points, axis=-1) return points # TODO(b/130893491): Add batch support for radii and return [A1, ... , 3, 2]. def intersection_ray_sphere(sphere_center, sphere_radius, ray, point_on_ray, name=None): """Finds positions and surface normals where the sphere and the ray intersect. Note: In the following, A1 to An are optional batch dimensions. Args: sphere_center: A tensor of shape `[3]` representing the 3d sphere center. sphere_radius: A tensor of shape `[1]` containing a strictly positive value defining the radius of the sphere. ray: A tensor of shape `[A1, ..., An, 3]` containing normalized 3D vectors. point_on_ray: A tensor of shape `[A1, ..., An, 3]`. 
name: A name for this op. The default value of None means "ray_intersection_ray_sphere". Returns: A tensor of shape `[2, A1, ..., An, 3]` containing the position of the intersections, and a tensor of shape `[2, A1, ..., An, 3]` the associated surface normals at that point. Both tensors contain NaNs when there is no intersections. The first dimension of the returned tensor provides access to the first and second intersections of the ray with the sphere. Raises: ValueError: if the shape of `sphere_center`, `sphere_radius`, `ray` or `point_on_ray` is not supported. tf.errors.InvalidArgumentError: If `ray` is not normalized. """ with tf.compat.v1.name_scope( name, "ray_intersection_ray_sphere", [sphere_center, sphere_radius, ray, point_on_ray]): sphere_center = tf.convert_to_tensor(value=sphere_center) sphere_radius = tf.convert_to_tensor(value=sphere_radius) ray = tf.convert_to_tensor(value=ray) point_on_ray = tf.convert_to_tensor(value=point_on_ray) shape.check_static( tensor=sphere_center, tensor_name="sphere_center", has_rank=1, has_dim_equals=(0, 3)) shape.check_static( tensor=sphere_radius, tensor_name="sphere_radius", has_rank=1, has_dim_equals=(0, 1)) shape.check_static(tensor=ray, tensor_name="ray", has_dim_equals=(-1, 3)) shape.check_static( tensor=point_on_ray, tensor_name="point_on_ray", has_dim_equals=(-1, 3)) shape.compare_batch_dimensions( tensors=(ray, point_on_ray), last_axes=(-2, -2), broadcast_compatible=False) sphere_radius = asserts.assert_all_above( sphere_radius, 0.0, open_bound=True) ray = asserts.assert_normalized(ray) vector_sphere_center_to_point_on_ray = sphere_center - point_on_ray distance_sphere_center_to_point_on_ray = tf.norm( tensor=vector_sphere_center_to_point_on_ray, axis=-1, keepdims=True) distance_projection_sphere_center_on_ray = vector.dot( vector_sphere_center_to_point_on_ray, ray) closest_distance_sphere_center_to_ray = tf.sqrt( tf.square(distance_sphere_center_to_point_on_ray) - 
tf.pow(distance_projection_sphere_center_on_ray, 2)) half_secant_length = tf.sqrt( tf.square(sphere_radius) - tf.square(closest_distance_sphere_center_to_ray)) distances = tf.stack( (distance_projection_sphere_center_on_ray - half_secant_length, distance_projection_sphere_center_on_ray + half_secant_length), axis=0) intersections_points = distances * ray + point_on_ray normals = tf.math.l2_normalize( intersections_points - sphere_center, axis=-1) return intersections_points, normals # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
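The geometric construction in `intersection_ray_sphere` above (project the sphere center onto the ray, then step plus or minus the half-chord length along the ray) can be sketched in plain NumPy. This is an illustrative re-derivation for a single ray, not the TensorFlow Graphics API; the function name and scalar interface are made up for the example:

```python
import numpy as np

def ray_sphere_intersections(center, radius, direction, origin):
    """Mirror of the math above: project the center onto the ray, then step
    +/- the half-chord length. Outputs are NaN when the ray misses."""
    oc = center - origin                            # point_on_ray -> center
    t_proj = np.dot(oc, direction)                  # projection distance on the ray
    closest_sq = np.dot(oc, oc) - t_proj ** 2       # squared closest distance to ray
    half_chord = np.sqrt(radius ** 2 - closest_sq)  # NaN when there is no hit
    ts = np.array([t_proj - half_chord, t_proj + half_chord])
    points = origin + ts[:, None] * direction       # first and second intersection
    normals = (points - center) / radius            # unit outward normals
    return points, normals

# Unit sphere at the origin, ray along +x starting at (-3, 0, 0).
points, normals = ray_sphere_intersections(
    np.zeros(3), 1.0, np.array([1.0, 0.0, 0.0]), np.array([-3.0, 0.0, 0.0]))
```

As in the library version, the first row of `points` is the near intersection and the second row the far one.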
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tensorflow ray utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.math import vector from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def triangulate(startpoints, endpoints, weights, name=None): """Triangulates 3d points by minimizing the sum of squared distances to rays. The rays are defined by their start points and endpoints. At least two rays are required to triangulate any given point. Contrary to the standard reprojection-error metric, the sum of squared distances to rays can be minimized in a closed form. Note: In the following, A1 to An are optional batch dimensions. Args: startpoints: A tensor of ray start points with shape `[A1, ..., An, V, 3]`, the number of rays V around which the solution points live should be greater or equal to 2, otherwise triangulation is impossible. endpoints: A tensor of ray endpoints with shape `[A1, ..., An, V, 3]`, the number of rays V around which the solution points live should be greater or equal to 2, otherwise triangulation is impossible. The `endpoints` tensor should have the same shape as the `startpoints` tensor. weights: A tensor of ray weights (certainties) with shape `[A1, ..., An, V]`. Weights should have all positive entries. 
Weights should have at least two non-zero entries for each point (at least two rays should have certainties > 0). name: A name for this op. The default value of None means "ray_triangulate". Returns: A tensor of triangulated points with shape `[A1, ..., An, 3]`. Raises: ValueError: If the shape of the arguments is not supported. """ with tf.compat.v1.name_scope(name, "ray_triangulate", [startpoints, endpoints, weights]): startpoints = tf.convert_to_tensor(value=startpoints) endpoints = tf.convert_to_tensor(value=endpoints) weights = tf.convert_to_tensor(value=weights) shape.check_static( tensor=startpoints, tensor_name="startpoints", has_rank_greater_than=1, has_dim_equals=(-1, 3), has_dim_greater_than=(-2, 1)) shape.check_static( tensor=endpoints, tensor_name="endpoints", has_rank_greater_than=1, has_dim_equals=(-1, 3), has_dim_greater_than=(-2, 1)) shape.compare_batch_dimensions( tensors=(startpoints, endpoints, weights), last_axes=(-2, -2, -1), broadcast_compatible=False) weights = asserts.assert_all_above(weights, 0.0, open_bound=False) weights = asserts.assert_at_least_k_non_zero_entries(weights, k=2) left_hand_side_list = [] right_hand_side_list = [] # TODO(b/130892100): Replace the inefficient for loop and add comments here. 
for ray_id in range(weights.shape[-1]): weights_single_ray = weights[..., ray_id] startpoints_single_ray = startpoints[..., ray_id, :] endpoints_singleview = endpoints[..., ray_id, :] ray = endpoints_singleview - startpoints_single_ray ray = tf.nn.l2_normalize(ray, axis=-1) ray_x, ray_y, ray_z = tf.unstack(ray, axis=-1) zeros = tf.zeros_like(ray_x) cross_product_matrix = tf.stack( (zeros, -ray_z, ray_y, ray_z, zeros, -ray_x, -ray_y, ray_x, zeros), axis=-1) cross_product_matrix_shape = tf.concat( (tf.shape(input=cross_product_matrix)[:-1], (3, 3)), axis=-1) cross_product_matrix = tf.reshape( cross_product_matrix, shape=cross_product_matrix_shape) weights_single_ray = tf.expand_dims(weights_single_ray, axis=-1) weights_single_ray = tf.expand_dims(weights_single_ray, axis=-1) left_hand_side = weights_single_ray * cross_product_matrix left_hand_side_list.append(left_hand_side) dot_product = tf.matmul(cross_product_matrix, tf.expand_dims(startpoints_single_ray, axis=-1)) right_hand_side = weights_single_ray * dot_product right_hand_side_list.append(right_hand_side) left_hand_side_multi_rays = tf.concat(left_hand_side_list, axis=-2) right_hand_side_multi_rays = tf.concat(right_hand_side_list, axis=-2) points = tf.linalg.lstsq(left_hand_side_multi_rays, right_hand_side_multi_rays) points = tf.squeeze(points, axis=-1) return points # TODO(b/130893491): Add batch support for radii and return [A1, ... , 3, 2]. def intersection_ray_sphere(sphere_center, sphere_radius, ray, point_on_ray, name=None): """Finds positions and surface normals where the sphere and the ray intersect. Note: In the following, A1 to An are optional batch dimensions. Args: sphere_center: A tensor of shape `[3]` representing the 3d sphere center. sphere_radius: A tensor of shape `[1]` containing a strictly positive value defining the radius of the sphere. ray: A tensor of shape `[A1, ..., An, 3]` containing normalized 3D vectors. point_on_ray: A tensor of shape `[A1, ..., An, 3]`. 
name: A name for this op. The default value of None means "ray_intersection_ray_sphere". Returns: A tensor of shape `[2, A1, ..., An, 3]` containing the position of the intersections, and a tensor of shape `[2, A1, ..., An, 3]` containing the associated surface normals at that point. Both tensors contain NaNs when there are no intersections. The first dimension of the returned tensor provides access to the first and second intersections of the ray with the sphere. Raises: ValueError: if the shape of `sphere_center`, `sphere_radius`, `ray` or `point_on_ray` is not supported. tf.errors.InvalidArgumentError: If `ray` is not normalized. """ with tf.compat.v1.name_scope( name, "ray_intersection_ray_sphere", [sphere_center, sphere_radius, ray, point_on_ray]): sphere_center = tf.convert_to_tensor(value=sphere_center) sphere_radius = tf.convert_to_tensor(value=sphere_radius) ray = tf.convert_to_tensor(value=ray) point_on_ray = tf.convert_to_tensor(value=point_on_ray) shape.check_static( tensor=sphere_center, tensor_name="sphere_center", has_rank=1, has_dim_equals=(0, 3)) shape.check_static( tensor=sphere_radius, tensor_name="sphere_radius", has_rank=1, has_dim_equals=(0, 1)) shape.check_static(tensor=ray, tensor_name="ray", has_dim_equals=(-1, 3)) shape.check_static( tensor=point_on_ray, tensor_name="point_on_ray", has_dim_equals=(-1, 3)) shape.compare_batch_dimensions( tensors=(ray, point_on_ray), last_axes=(-2, -2), broadcast_compatible=False) sphere_radius = asserts.assert_all_above( sphere_radius, 0.0, open_bound=True) ray = asserts.assert_normalized(ray) vector_sphere_center_to_point_on_ray = sphere_center - point_on_ray distance_sphere_center_to_point_on_ray = tf.norm( tensor=vector_sphere_center_to_point_on_ray, axis=-1, keepdims=True) distance_projection_sphere_center_on_ray = vector.dot( vector_sphere_center_to_point_on_ray, ray) closest_distance_sphere_center_to_ray = tf.sqrt( tf.square(distance_sphere_center_to_point_on_ray) - 
tf.pow(distance_projection_sphere_center_on_ray, 2)) half_secant_length = tf.sqrt( tf.square(sphere_radius) - tf.square(closest_distance_sphere_center_to_ray)) distances = tf.stack( (distance_projection_sphere_center_on_ray - half_secant_length, distance_projection_sphere_center_on_ray + half_secant_length), axis=0) intersections_points = distances * ray + point_on_ray normals = tf.math.l2_normalize( intersections_points - sphere_center, axis=-1) return intersections_points, normals # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
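The closed-form least-squares solve that `triangulate` performs per point can be written out in plain NumPy for a single point and V rays: each ray contributes a weighted cross-product-matrix block to the left-hand side, and the stacked system is solved with `lstsq`. This is an illustrative sketch of the same linear system, not the library implementation; the helper names are made up for the example:

```python
import numpy as np

def cross_matrix(d):
    """Skew-symmetric matrix [d]_x with cross_matrix(d) @ v == np.cross(d, v)."""
    x, y, z = d
    return np.array([[0.0, -z, y],
                     [z, 0.0, -x],
                     [-y, x, 0.0]])

def triangulate_single_point(startpoints, endpoints, weights):
    """Stack one weighted 3x3 block per ray and solve the least-squares system."""
    blocks, rhs = [], []
    for start, end, w in zip(startpoints, endpoints, weights):
        direction = (end - start) / np.linalg.norm(end - start)
        c = cross_matrix(direction)
        blocks.append(w * c)          # left-hand side rows for this ray
        rhs.append(w * (c @ start))   # matching right-hand side
    lhs = np.vstack(blocks)
    return np.linalg.lstsq(lhs, np.concatenate(rhs), rcond=None)[0]

# Two rays meeting at (1, 1, 0).
starts = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0]])
ends = np.array([[2.0, 1.0, 0.0], [1.0, 2.0, 0.0]])
point = triangulate_single_point(starts, ends, np.array([1.0, 1.0]))
```

The squared distance from a point p to a ray (s, d) is ||[d]_x (p - s)||^2, which is why minimizing the stacked system recovers the point closest to all rays.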
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/projects/local_implicit_grid/resample_geometry.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Compute point samples from a mesh after normalizing its scale.""" import os from absl import app from absl import flags import numpy as np from tensorflow_graphics.projects.local_implicit_grid.core import point_utils as pu import trimesh os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' flags.DEFINE_string('input_mesh', '', 'Input geometry file. Must be a trimesh supported type') flags.DEFINE_string('output_ply', '', 'Samples points ply file.') flags.DEFINE_float('sampling_density', 2e-4, 'Approx surface area based point sampling density.') FLAGS = flags.FLAGS def normalize_mesh(mesh, in_place=True): """Rescales vertex positions to lie inside unit cube.""" scale = 1.0 / np.max(mesh.bounds[1, :] - mesh.bounds[0, :]) centroid = mesh.centroid scaled_vertices = (mesh.vertices - centroid) * scale if in_place: scaled_mesh = mesh scaled_mesh.vertices = scaled_vertices else: scaled_mesh = mesh.copy() scaled_mesh.vertices = scaled_vertices scaled_mesh.fix_normals() return scaled_mesh def sample_mesh(mesh): """Samples oriented points from a mesh.""" num_samples = int(mesh.area / FLAGS.sampling_density) sample_pts, sample_face_ids = trimesh.sample.sample_surface(mesh, num_samples) sample_normals = mesh.face_normals[sample_face_ids] return sample_pts, sample_normals def main(argv): if len(argv) > 1: raise app.UsageError('Too many command-line arguments.') if not FLAGS.input_mesh: raise 
IOError('--input_mesh must be specified.') if not FLAGS.output_ply: raise IOError('--output_ply must be specified.') mesh = trimesh.load(FLAGS.input_mesh) mesh = normalize_mesh(mesh) sample_pts, sample_normals = sample_mesh(mesh) print('Computed {} samples from mesh.'.format(sample_pts.shape[0])) print('Writing sampled points to {}'.format(FLAGS.output_ply)) pu.write_point_ply(FLAGS.output_ply, sample_pts, sample_normals) if __name__ == '__main__': app.run(main)
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Compute point samples from a mesh after normalizing its scale.""" import os from absl import app from absl import flags import numpy as np from tensorflow_graphics.projects.local_implicit_grid.core import point_utils as pu import trimesh os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3' flags.DEFINE_string('input_mesh', '', 'Input geometry file. Must be a trimesh supported type') flags.DEFINE_string('output_ply', '', 'Samples points ply file.') flags.DEFINE_float('sampling_density', 2e-4, 'Approx surface area based point sampling density.') FLAGS = flags.FLAGS def normalize_mesh(mesh, in_place=True): """Rescales vertex positions to lie inside unit cube.""" scale = 1.0 / np.max(mesh.bounds[1, :] - mesh.bounds[0, :]) centroid = mesh.centroid scaled_vertices = (mesh.vertices - centroid) * scale if in_place: scaled_mesh = mesh scaled_mesh.vertices = scaled_vertices else: scaled_mesh = mesh.copy() scaled_mesh.vertices = scaled_vertices scaled_mesh.fix_normals() return scaled_mesh def sample_mesh(mesh): """Samples oriented points from a mesh.""" num_samples = int(mesh.area / FLAGS.sampling_density) sample_pts, sample_face_ids = trimesh.sample.sample_surface(mesh, num_samples) sample_normals = mesh.face_normals[sample_face_ids] return sample_pts, sample_normals def main(argv): if len(argv) > 1: raise app.UsageError('Too many command-line arguments.') if not FLAGS.input_mesh: raise 
IOError('--input_mesh must be specified.') if not FLAGS.output_ply: raise IOError('--output_ply must be specified.') mesh = trimesh.load(FLAGS.input_mesh) mesh = normalize_mesh(mesh) sample_pts, sample_normals = sample_mesh(mesh) print('Computed {} samples from mesh.'.format(sample_pts.shape[0])) print('Writing sampled points to {}'.format(FLAGS.output_ply)) pu.write_point_ply(FLAGS.output_ply, sample_pts, sample_normals) if __name__ == '__main__': app.run(main)
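The rescaling performed by `normalize_mesh` above reduces to a few array operations; the following NumPy sketch applies the same centering and scaling to raw vertices. Note one simplification: `trimesh`'s `centroid` is not the plain vertex mean used here, so this is an approximation for illustration only:

```python
import numpy as np

def normalize_vertices(vertices):
    """Scale vertices so the longest axis-aligned bounding-box edge is 1."""
    bounds_min = vertices.min(axis=0)
    bounds_max = vertices.max(axis=0)
    scale = 1.0 / np.max(bounds_max - bounds_min)
    centroid = vertices.mean(axis=0)  # simple vertex mean, an approximation
    return (vertices - centroid) * scale

vertices = np.array([[0.0, 0.0, 0.0],
                     [4.0, 0.0, 0.0],
                     [0.0, 2.0, 0.0],
                     [0.0, 0.0, 1.0]])
normalized = normalize_vertices(vertices)
extents = normalized.max(axis=0) - normalized.min(axis=0)
```

After normalization the longest bounding-box edge has length exactly 1 and no edge exceeds it, which is what keeps the mesh inside a unit cube.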
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/projects/cvxnet/lib/datasets.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Dataset implementations.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from os import path import tensorflow.compat.v1 as tf def get_dataset(data_name, split, args): return dataset_dict[data_name](split, args) def shapenet(split, args): """ShapeNet Dataset. Args: split: string, the split of the dataset, either "train" or "test". args: tf.app.flags.FLAGS, configurations. Returns: dataset: tf.data.Dataset, the shapenet dataset. 
""" total_points = 100000 data_dir = args.data_dir sample_bbx = args.sample_bbx if split != "train": sample_bbx = total_points sample_surf = args.sample_surf if split != "train": sample_surf = 0 image_h = args.image_h image_w = args.image_w image_d = args.image_d n_views = args.n_views depth_h = args.depth_h depth_w = args.depth_w depth_d = args.depth_d batch_size = args.batch_size if split == "train" else 1 dims = args.dims def _parser(example): fs = tf.parse_single_example( example, features={ "rgb": tf.FixedLenFeature([n_views * image_h * image_w * image_d], tf.float32), "depth": tf.FixedLenFeature([depth_d * depth_h * depth_w], tf.float32), "bbox_samples": tf.FixedLenFeature([total_points * (dims + 1)], tf.float32), "surf_samples": tf.FixedLenFeature([total_points * (dims + 1)], tf.float32), "name": tf.FixedLenFeature([], tf.string), }) fs["rgb"] = tf.reshape(fs["rgb"], [n_views, image_h, image_w, image_d]) fs["depth"] = tf.reshape(fs["depth"], [depth_d, depth_h, depth_w, 1]) fs["bbox_samples"] = tf.reshape(fs["bbox_samples"], [total_points, dims + 1]) fs["surf_samples"] = tf.reshape(fs["surf_samples"], [total_points, dims + 1]) return fs def _sampler(example): image = tf.gather( example["rgb"], tf.random.uniform((), minval=0, maxval=n_views if split == "train" else 1, dtype=tf.int32), axis=0) image = tf.image.resize_bilinear(tf.expand_dims(image, axis=0), [224, 224]) depth = example["depth"] / 1000. 
sample_points = [] sample_labels = [] if sample_bbx > 0: if split == "train": indices_bbx = tf.random.uniform([sample_bbx], minval=0, maxval=total_points, dtype=tf.int32) bbx_samples = tf.gather(example["bbox_samples"], indices_bbx, axis=0) else: bbx_samples = example["bbox_samples"] bbx_points, bbx_labels = tf.split(bbx_samples, [3, 1], axis=-1) sample_points.append(bbx_points) sample_labels.append(bbx_labels) if sample_surf > 0: indices_surf = tf.random.uniform([sample_surf], minval=0, maxval=total_points, dtype=tf.int32) surf_samples = tf.gather(example["surf_samples"], indices_surf, axis=0) surf_points, surf_labels = tf.split(surf_samples, [3, 1], axis=-1) sample_points.append(surf_points) sample_labels.append(surf_labels) points = tf.concat(sample_points, axis=0) point_labels = tf.cast(tf.concat(sample_labels, axis=0) <= 0., tf.float32) image = tf.reshape(image, [224, 224, image_d]) depth = tf.reshape(depth, [depth_d, depth_h, depth_w]) depth = tf.transpose(depth, [1, 2, 0]) points = tf.reshape(points, [sample_bbx + sample_surf, 3]) point_labels = tf.reshape(point_labels, [sample_bbx + sample_surf, 1]) return { "image": image, "depth": depth, "point": points, "point_label": point_labels, "name": example["name"], } data_pattern = path.join(data_dir, "{}-{}-*".format(args.obj_class, split)) data_files = tf.gfile.Glob(data_pattern) if not data_files: raise ValueError("{} did not match any files".format(data_pattern)) file_count = len(data_files) filenames = tf.data.Dataset.list_files(data_pattern, shuffle=True) data = filenames.interleave( lambda x: tf.data.TFRecordDataset([x]), cycle_length=file_count, num_parallel_calls=tf.data.experimental.AUTOTUNE) data = data.map(_parser, num_parallel_calls=tf.data.experimental.AUTOTUNE) data = data.map(_sampler, num_parallel_calls=tf.data.experimental.AUTOTUNE) if split == "train": data = data.shuffle(batch_size * 5).repeat(-1) return data.batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE) dataset_dict = { 
"shapenet": shapenet, }
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Dataset implementations.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from os import path import tensorflow.compat.v1 as tf def get_dataset(data_name, split, args): return dataset_dict[data_name](split, args) def shapenet(split, args): """ShapeNet Dataset. Args: split: string, the split of the dataset, either "train" or "test". args: tf.app.flags.FLAGS, configurations. Returns: dataset: tf.data.Dataset, the shapenet dataset. 
""" total_points = 100000 data_dir = args.data_dir sample_bbx = args.sample_bbx if split != "train": sample_bbx = total_points sample_surf = args.sample_surf if split != "train": sample_surf = 0 image_h = args.image_h image_w = args.image_w image_d = args.image_d n_views = args.n_views depth_h = args.depth_h depth_w = args.depth_w depth_d = args.depth_d batch_size = args.batch_size if split == "train" else 1 dims = args.dims def _parser(example): fs = tf.parse_single_example( example, features={ "rgb": tf.FixedLenFeature([n_views * image_h * image_w * image_d], tf.float32), "depth": tf.FixedLenFeature([depth_d * depth_h * depth_w], tf.float32), "bbox_samples": tf.FixedLenFeature([total_points * (dims + 1)], tf.float32), "surf_samples": tf.FixedLenFeature([total_points * (dims + 1)], tf.float32), "name": tf.FixedLenFeature([], tf.string), }) fs["rgb"] = tf.reshape(fs["rgb"], [n_views, image_h, image_w, image_d]) fs["depth"] = tf.reshape(fs["depth"], [depth_d, depth_h, depth_w, 1]) fs["bbox_samples"] = tf.reshape(fs["bbox_samples"], [total_points, dims + 1]) fs["surf_samples"] = tf.reshape(fs["surf_samples"], [total_points, dims + 1]) return fs def _sampler(example): image = tf.gather( example["rgb"], tf.random.uniform((), minval=0, maxval=n_views if split == "train" else 1, dtype=tf.int32), axis=0) image = tf.image.resize_bilinear(tf.expand_dims(image, axis=0), [224, 224]) depth = example["depth"] / 1000. 
sample_points = [] sample_labels = [] if sample_bbx > 0: if split == "train": indices_bbx = tf.random.uniform([sample_bbx], minval=0, maxval=total_points, dtype=tf.int32) bbx_samples = tf.gather(example["bbox_samples"], indices_bbx, axis=0) else: bbx_samples = example["bbox_samples"] bbx_points, bbx_labels = tf.split(bbx_samples, [3, 1], axis=-1) sample_points.append(bbx_points) sample_labels.append(bbx_labels) if sample_surf > 0: indices_surf = tf.random.uniform([sample_surf], minval=0, maxval=total_points, dtype=tf.int32) surf_samples = tf.gather(example["surf_samples"], indices_surf, axis=0) surf_points, surf_labels = tf.split(surf_samples, [3, 1], axis=-1) sample_points.append(surf_points) sample_labels.append(surf_labels) points = tf.concat(sample_points, axis=0) point_labels = tf.cast(tf.concat(sample_labels, axis=0) <= 0., tf.float32) image = tf.reshape(image, [224, 224, image_d]) depth = tf.reshape(depth, [depth_d, depth_h, depth_w]) depth = tf.transpose(depth, [1, 2, 0]) points = tf.reshape(points, [sample_bbx + sample_surf, 3]) point_labels = tf.reshape(point_labels, [sample_bbx + sample_surf, 1]) return { "image": image, "depth": depth, "point": points, "point_label": point_labels, "name": example["name"], } data_pattern = path.join(data_dir, "{}-{}-*".format(args.obj_class, split)) data_files = tf.gfile.Glob(data_pattern) if not data_files: raise ValueError("{} did not match any files".format(data_pattern)) file_count = len(data_files) filenames = tf.data.Dataset.list_files(data_pattern, shuffle=True) data = filenames.interleave( lambda x: tf.data.TFRecordDataset([x]), cycle_length=file_count, num_parallel_calls=tf.data.experimental.AUTOTUNE) data = data.map(_parser, num_parallel_calls=tf.data.experimental.AUTOTUNE) data = data.map(_sampler, num_parallel_calls=tf.data.experimental.AUTOTUNE) if split == "train": data = data.shuffle(batch_size * 5).repeat(-1) return data.batch(batch_size).prefetch(tf.data.experimental.AUTOTUNE) dataset_dict = { 
"shapenet": shapenet, }
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/geometry/transformation/euler.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. r"""This module implements Euler angle functionalities. The Euler angles are defined using a vector $$[\theta, \gamma, \beta]^T \in \mathbb{R}^3$$, where $$\theta$$ is the angle about $$x$$, $$\gamma$$ the angle about $$y$$, and $$\beta$$ is the angle about $$z$$. More details about Euler angles can be found on [this page.] (https://en.wikipedia.org/wiki/Euler_angles) Note: The angles are defined in radians. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import math import tensorflow as tf from tensorflow_graphics.geometry.transformation import quaternion from tensorflow_graphics.geometry.transformation import rotation_matrix_3d from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import safe_ops from tensorflow_graphics.util import shape def from_axis_angle(axis, angle, name=None): """Converts axis-angle to Euler angles. Note: In the following, A1 to An are optional batch dimensions. Args: axis: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents a normalized axis. angle: A tensor of shape `[A1, ..., An, 1]`, where the last dimension represents an angle. name: A name for this op that defaults to "euler_from_axis_angle". Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the three Euler angles. 
""" with tf.compat.v1.name_scope(name, "euler_from_axis_angle", [axis, angle]): return from_quaternion(quaternion.from_axis_angle(axis, angle)) def from_quaternion(quaternions, name=None): """Converts quaternions to Euler angles. Args: quaternions: A tensor of shape `[A1, ..., An, 4]`, where the last dimension represents a normalized quaternion. name: A name for this op that defaults to "euler_from_quaternion". Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the three Euler angles. """ def general_case(r00, r10, r21, r22, r20, eps_addition): """Handles the general case.""" theta_y = -tf.asin(r20) sign_cos_theta_y = safe_ops.nonzero_sign(tf.cos(theta_y)) r00 = safe_ops.nonzero_sign(r00) * eps_addition + r00 r22 = safe_ops.nonzero_sign(r22) * eps_addition + r22 theta_z = tf.atan2(r10 * sign_cos_theta_y, r00 * sign_cos_theta_y) theta_x = tf.atan2(r21 * sign_cos_theta_y, r22 * sign_cos_theta_y) return tf.stack((theta_x, theta_y, theta_z), axis=-1) def gimbal_lock(r01, r02, r20, eps_addition): """Handles Gimbal locks.""" sign_r20 = safe_ops.nonzero_sign(r20) r02 = safe_ops.nonzero_sign(r02) * eps_addition + r02 theta_x = tf.atan2(-sign_r20 * r01, -sign_r20 * r02) theta_y = -sign_r20 * tf.constant(math.pi / 2.0, dtype=r20.dtype) theta_z = tf.zeros_like(theta_x) angles = tf.stack((theta_x, theta_y, theta_z), axis=-1) return angles with tf.compat.v1.name_scope(name, "euler_from_quaternion", [quaternions]): quaternions = tf.convert_to_tensor(value=quaternions) shape.check_static( tensor=quaternions, tensor_name="quaternions", has_dim_equals=(-1, 4)) x, y, z, w = tf.unstack(quaternions, axis=-1) tx = safe_ops.safe_shrink(2.0 * x, -2.0, 2.0, True) ty = safe_ops.safe_shrink(2.0 * y, -2.0, 2.0, True) tz = safe_ops.safe_shrink(2.0 * z, -2.0, 2.0, True) twx = tx * w twy = ty * w twz = tz * w txx = tx * x txy = ty * x txz = tz * x tyy = ty * y tyz = tz * y tzz = tz * z # The following is clipped due to numerical instabilities that can take some # 
entries outside the [-1;1] range. r00 = safe_ops.safe_shrink(1.0 - (tyy + tzz), -1.0, 1.0, True) r10 = safe_ops.safe_shrink(txy + twz, -1.0, 1.0, True) r21 = safe_ops.safe_shrink(tyz + twx, -1.0, 1.0, True) r22 = safe_ops.safe_shrink(1.0 - (txx + tyy), -1.0, 1.0, True) r20 = safe_ops.safe_shrink(txz - twy, -1.0, 1.0, True) r01 = safe_ops.safe_shrink(txy - twz, -1.0, 1.0, True) r02 = safe_ops.safe_shrink(txz + twy, -1.0, 1.0, True) eps_addition = asserts.select_eps_for_addition(quaternions.dtype) general_solution = general_case(r00, r10, r21, r22, r20, eps_addition) gimbal_solution = gimbal_lock(r01, r02, r20, eps_addition) # The general solution is unstable close to the Gimbal lock, and the gimbal # solution is not too far off in these cases. is_gimbal = tf.less(tf.abs(tf.abs(r20) - 1.0), 1.0e-6) gimbal_mask = tf.stack((is_gimbal, is_gimbal, is_gimbal), axis=-1) return tf.compat.v1.where(gimbal_mask, gimbal_solution, general_solution) def from_rotation_matrix(rotation_matrix, name=None): """Converts rotation matrices to Euler angles. The rotation matrices are assumed to have been constructed by rotation around the $$x$$, then $$y$$, and finally the $$z$$ axis. Note: There is an infinite number of solutions to this problem. There are Gimbal locks when abs(rotation_matrix(2,0)) == 1, which are not handled. Note: In the following, A1 to An are optional batch dimensions. Args: rotation_matrix: A tensor of shape `[A1, ..., An, 3, 3]`, where the last two dimensions represent a rotation matrix. name: A name for this op that defaults to "euler_from_rotation_matrix". Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the three Euler angles. Raises: ValueError: If the shape of `rotation_matrix` is not supported. 
""" def general_case(rotation_matrix, r20, eps_addition): """Handles the general case.""" theta_y = -tf.asin(r20) sign_cos_theta_y = safe_ops.nonzero_sign(tf.cos(theta_y)) r00 = rotation_matrix[..., 0, 0] r10 = rotation_matrix[..., 1, 0] r21 = rotation_matrix[..., 2, 1] r22 = rotation_matrix[..., 2, 2] r00 = safe_ops.nonzero_sign(r00) * eps_addition + r00 r22 = safe_ops.nonzero_sign(r22) * eps_addition + r22 # cos_theta_y evaluates to 0 on Gimbal locks, in which case the output of # this function will not be used. theta_z = tf.atan2(r10 * sign_cos_theta_y, r00 * sign_cos_theta_y) theta_x = tf.atan2(r21 * sign_cos_theta_y, r22 * sign_cos_theta_y) angles = tf.stack((theta_x, theta_y, theta_z), axis=-1) return angles def gimbal_lock(rotation_matrix, r20, eps_addition): """Handles Gimbal locks.""" r01 = rotation_matrix[..., 0, 1] r02 = rotation_matrix[..., 0, 2] sign_r20 = safe_ops.nonzero_sign(r20) r02 = safe_ops.nonzero_sign(r02) * eps_addition + r02 theta_x = tf.atan2(-sign_r20 * r01, -sign_r20 * r02) theta_y = -sign_r20 * tf.constant(math.pi / 2.0, dtype=r20.dtype) theta_z = tf.zeros_like(theta_x) angles = tf.stack((theta_x, theta_y, theta_z), axis=-1) return angles with tf.compat.v1.name_scope(name, "euler_from_rotation_matrix", [rotation_matrix]): rotation_matrix = tf.convert_to_tensor(value=rotation_matrix) shape.check_static( tensor=rotation_matrix, tensor_name="rotation_matrix", has_rank_greater_than=1, has_dim_equals=((-1, 3), (-2, 3))) rotation_matrix = rotation_matrix_3d.assert_rotation_matrix_normalized( rotation_matrix) r20 = rotation_matrix[..., 2, 0] eps_addition = asserts.select_eps_for_addition(rotation_matrix.dtype) general_solution = general_case(rotation_matrix, r20, eps_addition) gimbal_solution = gimbal_lock(rotation_matrix, r20, eps_addition) is_gimbal = tf.equal(tf.abs(r20), 1) gimbal_mask = tf.stack((is_gimbal, is_gimbal, is_gimbal), axis=-1) return tf.compat.v1.where(gimbal_mask, gimbal_solution, general_solution) def inverse(euler_angle, 
name=None): """Computes the angles that would invert a transformation by euler_angle. Note: In the following, A1 to An are optional batch dimensions. Args: euler_angle: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the three Euler angles. name: A name for this op that defaults to "euler_inverse". Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the three Euler angles. Raises: ValueError: If the shape of `euler_angle` is not supported. """ with tf.compat.v1.name_scope(name, "euler_inverse", [euler_angle]): euler_angle = tf.convert_to_tensor(value=euler_angle) shape.check_static( tensor=euler_angle, tensor_name="euler_angle", has_dim_equals=(-1, 3)) return -euler_angle # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
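The quaternion-to-Euler conversion above can be checked on a single quaternion with a plain-NumPy sketch. `euler_from_quaternion_np` below is a hypothetical helper (not part of the library) that follows the same rotation-matrix-entry formulas as the general-case branch; it deliberately omits the gimbal-lock branch and the `safe_shrink` clipping:

```python
import numpy as np

def euler_from_quaternion_np(q):
    # q = (x, y, z, w), assumed normalized. Mirrors the general-case
    # branch: derive the needed rotation-matrix entries, then recover
    # the angles for the x-then-y-then-z rotation convention.
    x, y, z, w = q
    tx, ty, tz = 2.0 * x, 2.0 * y, 2.0 * z
    r00 = 1.0 - (ty * y + tz * z)
    r10 = ty * x + tz * w
    r20 = tz * x - ty * w
    r21 = tz * y + tx * w
    r22 = 1.0 - (tx * x + ty * y)
    theta_y = -np.arcsin(np.clip(r20, -1.0, 1.0))
    sign_cos = 1.0 if np.cos(theta_y) >= 0.0 else -1.0
    theta_z = np.arctan2(r10 * sign_cos, r00 * sign_cos)
    theta_x = np.arctan2(r21 * sign_cos, r22 * sign_cos)
    return np.array([theta_x, theta_y, theta_z])
```

For a pure z rotation of 0.3 rad, the quaternion `(0, 0, sin(0.15), cos(0.15))` maps back to the Euler angles `(0, 0, 0.3)`, matching the convention documented in the module.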
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. r"""This modules implements Euler angles functionalities. The Euler angles are defined using a vector $$[\theta, \gamma, \beta]^T \in \mathbb{R}^3$$, where $$\theta$$ is the angle about $$x$$, $$\gamma$$ the angle about $$y$$, and $$\beta$$ is the angle about $$z$$ More details about Euler angles can be found on [this page.] (https://en.wikipedia.org/wiki/Euler_angles) Note: The angles are defined in radians. """ from __future__ import absolute_import from __future__ import division from __future__ import print_function import math import tensorflow as tf from tensorflow_graphics.geometry.transformation import quaternion from tensorflow_graphics.geometry.transformation import rotation_matrix_3d from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import safe_ops from tensorflow_graphics.util import shape def from_axis_angle(axis, angle, name=None): """Converts axis-angle to Euler angles. Note: In the following, A1 to An are optional batch dimensions. Args: axis: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents a normalized axis. angle: A tensor of shape `[A1, ..., An, 1]`, where the last dimension represents an angle. name: A name for this op that defaults to "euler_from_axis_angle". Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the three Euler angles. 
""" with tf.compat.v1.name_scope(name, "euler_from_axis_angle", [axis, angle]): return from_quaternion(quaternion.from_axis_angle(axis, angle)) def from_quaternion(quaternions, name=None): """Converts quaternions to Euler angles. Args: quaternions: A tensor of shape `[A1, ..., An, 4]`, where the last dimension represents a normalized quaternion. name: A name for this op that defaults to "euler_from_quaternion". Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the three Euler angles. """ def general_case(r00, r10, r21, r22, r20, eps_addition): """Handles the general case.""" theta_y = -tf.asin(r20) sign_cos_theta_y = safe_ops.nonzero_sign(tf.cos(theta_y)) r00 = safe_ops.nonzero_sign(r00) * eps_addition + r00 r22 = safe_ops.nonzero_sign(r22) * eps_addition + r22 theta_z = tf.atan2(r10 * sign_cos_theta_y, r00 * sign_cos_theta_y) theta_x = tf.atan2(r21 * sign_cos_theta_y, r22 * sign_cos_theta_y) return tf.stack((theta_x, theta_y, theta_z), axis=-1) def gimbal_lock(r01, r02, r20, eps_addition): """Handles Gimbal locks.""" sign_r20 = safe_ops.nonzero_sign(r20) r02 = safe_ops.nonzero_sign(r02) * eps_addition + r02 theta_x = tf.atan2(-sign_r20 * r01, -sign_r20 * r02) theta_y = -sign_r20 * tf.constant(math.pi / 2.0, dtype=r20.dtype) theta_z = tf.zeros_like(theta_x) angles = tf.stack((theta_x, theta_y, theta_z), axis=-1) return angles with tf.compat.v1.name_scope(name, "euler_from_quaternion", [quaternions]): quaternions = tf.convert_to_tensor(value=quaternions) shape.check_static( tensor=quaternions, tensor_name="quaternions", has_dim_equals=(-1, 4)) x, y, z, w = tf.unstack(quaternions, axis=-1) tx = safe_ops.safe_shrink(2.0 * x, -2.0, 2.0, True) ty = safe_ops.safe_shrink(2.0 * y, -2.0, 2.0, True) tz = safe_ops.safe_shrink(2.0 * z, -2.0, 2.0, True) twx = tx * w twy = ty * w twz = tz * w txx = tx * x txy = ty * x txz = tz * x tyy = ty * y tyz = tz * y tzz = tz * z # The following is clipped due to numerical instabilities that can take some # 
quantities outside the [-1;1] range. r00 = safe_ops.safe_shrink(1.0 - (tyy + tzz), -1.0, 1.0, True) r10 = safe_ops.safe_shrink(txy + twz, -1.0, 1.0, True) r21 = safe_ops.safe_shrink(tyz + twx, -1.0, 1.0, True) r22 = safe_ops.safe_shrink(1.0 - (txx + tyy), -1.0, 1.0, True) r20 = safe_ops.safe_shrink(txz - twy, -1.0, 1.0, True) r01 = safe_ops.safe_shrink(txy - twz, -1.0, 1.0, True) r02 = safe_ops.safe_shrink(txz + twy, -1.0, 1.0, True) eps_addition = asserts.select_eps_for_addition(quaternions.dtype) general_solution = general_case(r00, r10, r21, r22, r20, eps_addition) gimbal_solution = gimbal_lock(r01, r02, r20, eps_addition) # The general solution is unstable close to the Gimbal lock, and the gimbal # solution is not too far off in these cases. is_gimbal = tf.less(tf.abs(tf.abs(r20) - 1.0), 1.0e-6) gimbal_mask = tf.stack((is_gimbal, is_gimbal, is_gimbal), axis=-1) return tf.compat.v1.where(gimbal_mask, gimbal_solution, general_solution) def from_rotation_matrix(rotation_matrix, name=None): """Converts rotation matrices to Euler angles. The rotation matrices are assumed to have been constructed by rotation around the $$x$$, then $$y$$, and finally the $$z$$ axis. Note: There is an infinite number of solutions to this problem. There are Gimbal locks when abs(rotation_matrix(2,0)) == 1, which are not handled. Note: In the following, A1 to An are optional batch dimensions. Args: rotation_matrix: A tensor of shape `[A1, ..., An, 3, 3]`, where the last two dimensions represent a rotation matrix. name: A name for this op that defaults to "euler_from_rotation_matrix". Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the three Euler angles. Raises: ValueError: If the shape of `rotation_matrix` is not supported. 
""" def general_case(rotation_matrix, r20, eps_addition): """Handles the general case.""" theta_y = -tf.asin(r20) sign_cos_theta_y = safe_ops.nonzero_sign(tf.cos(theta_y)) r00 = rotation_matrix[..., 0, 0] r10 = rotation_matrix[..., 1, 0] r21 = rotation_matrix[..., 2, 1] r22 = rotation_matrix[..., 2, 2] r00 = safe_ops.nonzero_sign(r00) * eps_addition + r00 r22 = safe_ops.nonzero_sign(r22) * eps_addition + r22 # cos_theta_y evaluates to 0 on Gimbal locks, in which case the output of # this function will not be used. theta_z = tf.atan2(r10 * sign_cos_theta_y, r00 * sign_cos_theta_y) theta_x = tf.atan2(r21 * sign_cos_theta_y, r22 * sign_cos_theta_y) angles = tf.stack((theta_x, theta_y, theta_z), axis=-1) return angles def gimbal_lock(rotation_matrix, r20, eps_addition): """Handles Gimbal locks.""" r01 = rotation_matrix[..., 0, 1] r02 = rotation_matrix[..., 0, 2] sign_r20 = safe_ops.nonzero_sign(r20) r02 = safe_ops.nonzero_sign(r02) * eps_addition + r02 theta_x = tf.atan2(-sign_r20 * r01, -sign_r20 * r02) theta_y = -sign_r20 * tf.constant(math.pi / 2.0, dtype=r20.dtype) theta_z = tf.zeros_like(theta_x) angles = tf.stack((theta_x, theta_y, theta_z), axis=-1) return angles with tf.compat.v1.name_scope(name, "euler_from_rotation_matrix", [rotation_matrix]): rotation_matrix = tf.convert_to_tensor(value=rotation_matrix) shape.check_static( tensor=rotation_matrix, tensor_name="rotation_matrix", has_rank_greater_than=1, has_dim_equals=((-1, 3), (-2, 3))) rotation_matrix = rotation_matrix_3d.assert_rotation_matrix_normalized( rotation_matrix) r20 = rotation_matrix[..., 2, 0] eps_addition = asserts.select_eps_for_addition(rotation_matrix.dtype) general_solution = general_case(rotation_matrix, r20, eps_addition) gimbal_solution = gimbal_lock(rotation_matrix, r20, eps_addition) is_gimbal = tf.equal(tf.abs(r20), 1) gimbal_mask = tf.stack((is_gimbal, is_gimbal, is_gimbal), axis=-1) return tf.compat.v1.where(gimbal_mask, gimbal_solution, general_solution) def inverse(euler_angle, 
name=None): """Computes the angles that would invert a transformation by euler_angle. Note: In the following, A1 to An are optional batch dimensions. Args: euler_angle: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the three Euler angles. name: A name for this op that defaults to "euler_inverse". Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the three Euler angles. Raises: ValueError: If the shape of `euler_angle` is not supported. """ with tf.compat.v1.name_scope(name, "euler_inverse", [euler_angle]): euler_angle = tf.convert_to_tensor(value=euler_angle) shape.check_static( tensor=euler_angle, tensor_name="euler_angle", has_dim_equals=(-1, 3)) return -euler_angle # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
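The gimbal-lock test used throughout the module above (`tf.less(tf.abs(tf.abs(r20) - 1.0), 1.0e-6)` and `tf.equal(tf.abs(r20), 1)`) can be illustrated in isolation. The sketch below, with illustrative names not taken from the library, applies the same criterion to a single 3x3 matrix: the x-then-y-then-z Euler recovery degenerates when the `[2, 0]` entry reaches +/-1, i.e. theta_y = -+pi/2:

```python
import numpy as np

def is_gimbal_locked(rotation_matrix, eps=1e-6):
    # Same criterion as the tolerance-based check above: lock occurs
    # when |R[2, 0]| is within eps of 1.
    return abs(abs(rotation_matrix[2, 0]) - 1.0) < eps

# A 90-degree rotation about the y axis drives R[2, 0] to -1,
# which is exactly the gimbal-lock configuration.
c, s = np.cos(np.pi / 2.0), np.sin(np.pi / 2.0)
rotation_y90 = np.array([[c, 0.0, s],
                         [0.0, 1.0, 0.0],
                         [-s, 0.0, c]])
```

The identity matrix, by contrast, has `R[2, 0] == 0` and is handled by the general-case branch.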
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/geometry/transformation/tests/axis_angle_test.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for axis-angle.""" from absl.testing import flagsaver from absl.testing import parameterized import numpy as np import tensorflow as tf from tensorflow_graphics.geometry.transformation import axis_angle from tensorflow_graphics.geometry.transformation import quaternion from tensorflow_graphics.geometry.transformation import rotation_matrix_3d from tensorflow_graphics.geometry.transformation.tests import test_helpers from tensorflow_graphics.util import test_case class AxisAngleTest(test_case.TestCase): @parameterized.parameters( ((3,),), ((None, 3),), ) def test_from_euler_exception_not_raised(self, *shapes): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised(axis_angle.from_euler, shapes) @parameterized.parameters( ("must have exactly 3 dimensions in axis -1", (None,)),) def test_from_euler_exception_raised(self, error_msg, *shapes): """Tests that the shape exceptions are properly raised.""" self.assert_exception_is_raised(axis_angle.from_euler, error_msg, shapes) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_from_euler_jacobian_random(self): """Test the Jacobian of the from_euler function. Note: Preset angles are not tested as the gradient of tf.norm is NaN at 0. 
""" x_init = test_helpers.generate_random_test_euler_angles() self.assert_jacobian_is_finite_fn(lambda x: axis_angle.from_euler(x)[0], [x_init]) self.assert_jacobian_is_finite_fn(lambda x: axis_angle.from_euler(x)[1], [x_init]) def test_from_euler_random(self): """Tests that from_euler allows to perform the expect rotation of points.""" random_euler_angles = test_helpers.generate_random_test_euler_angles() tensor_shape = random_euler_angles.shape[:-1] random_point = np.random.normal(size=tensor_shape + (3,)) random_matrix = rotation_matrix_3d.from_euler(random_euler_angles) random_axis, random_angle = axis_angle.from_euler(random_euler_angles) rotated_with_matrix = rotation_matrix_3d.rotate(random_point, random_matrix) rotated_with_axis_angle = axis_angle.rotate(random_point, random_axis, random_angle) self.assertAllClose(rotated_with_matrix, rotated_with_axis_angle) @parameterized.parameters( ((3,),), ((None, 3),), ((2, 3),), ) def test_from_euler_with_small_angles_approximation_exception_not_raised( self, *shapes): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised( axis_angle.from_euler_with_small_angles_approximation, shapes) @parameterized.parameters( ("must have exactly 3 dimensions in axis -1", (None,)),) def test_from_euler_with_small_angles_approximation_exception_raised( self, error_msg, *shapes): """Tests that the shape exceptions are properly raised.""" self.assert_exception_is_raised( axis_angle.from_euler_with_small_angles_approximation, error_msg, shapes) def test_from_euler_normalized_preset(self): """Tests that from_euler allows build normalized axis-angles.""" euler_angles = test_helpers.generate_preset_test_euler_angles() axis, angle = axis_angle.from_euler(euler_angles) self.assertAllEqual( axis_angle.is_normalized(axis, angle), np.ones(angle.shape, dtype=bool)) def test_from_euler_normalized_random(self): """Tests that from_euler allows build normalized axis-angles.""" random_euler_angles = 
test_helpers.generate_random_test_euler_angles() random_axis, random_angle = axis_angle.from_euler(random_euler_angles) self.assertAllEqual( axis_angle.is_normalized(random_axis, random_angle), np.ones(shape=random_angle.shape)) def test_from_euler_with_small_angles_approximation_random(self): # Only generate small angles. For a test tolerance of 1e-3, 0.23 was found # empirically to be the range where the small angle approximation works. random_euler_angles = test_helpers.generate_random_test_euler_angles( min_angle=-0.23, max_angle=0.23) exact_axis_angle = axis_angle.from_euler(random_euler_angles) approximate_axis_angle = ( axis_angle.from_euler_with_small_angles_approximation( random_euler_angles)) self.assertAllClose(exact_axis_angle, approximate_axis_angle, atol=1e-3) @parameterized.parameters( ((4,),), ((None, 4),), ((2, 4),), ) def test_from_quaternion_exception_not_raised(self, *shape): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised(axis_angle.from_quaternion, shape) @parameterized.parameters( ("must have exactly 4 dimensions in axis -1", (None,)),) def test_from_quaternion_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised(axis_angle.from_quaternion, error_msg, shape) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_from_quaternion_jacobian_random(self): """Test the Jacobian of the from_quaternion function. Note: Preset angles are not tested as the gradient of tf.norm is NaN at 0. 
""" x_init = test_helpers.generate_random_test_quaternions() self.assert_jacobian_is_finite_fn( lambda x: axis_angle.from_quaternion(x)[0], [x_init]) self.assert_jacobian_is_finite_fn( lambda x: axis_angle.from_quaternion(x)[1], [x_init]) def test_from_quaternion_normalized_preset(self): """Tests that from_quaternion returns normalized axis-angles.""" euler_angles = test_helpers.generate_preset_test_euler_angles() quat = quaternion.from_euler(euler_angles) axis, angle = axis_angle.from_quaternion(quat) self.assertAllEqual( axis_angle.is_normalized(axis, angle), np.ones(angle.shape, dtype=bool)) def test_from_quaternion_normalized_random(self): """Tests that from_quaternion returns normalized axis-angles.""" random_quaternions = test_helpers.generate_random_test_quaternions() random_axis, random_angle = axis_angle.from_quaternion(random_quaternions) self.assertAllEqual( axis_angle.is_normalized(random_axis, random_angle), np.ones(random_angle.shape)) def test_from_quaternion_preset(self): """Tests that axis_angle.from_quaternion produces the expected result.""" preset_euler_angles = test_helpers.generate_preset_test_euler_angles() preset_quaternions = quaternion.from_euler(preset_euler_angles) preset_axis_angle = axis_angle.from_euler(preset_euler_angles) self.assertAllClose( preset_axis_angle, axis_angle.from_quaternion(preset_quaternions), rtol=1e-3) def test_from_quaternion_random(self): """Tests that axis_angle.from_quaternion produces the expected result.""" random_euler_angles = test_helpers.generate_random_test_euler_angles() random_quaternions = quaternion.from_euler(random_euler_angles) random_axis_angle = axis_angle.from_euler(random_euler_angles) self.assertAllClose( random_axis_angle, axis_angle.from_quaternion(random_quaternions), rtol=1e-3) @parameterized.parameters( ((3, 3),), ((None, 3, 3),), ) def test_from_rotation_matrix_exception_not_raised(self, *shapes): """Tests that the shape exceptions are not raised.""" 
self.assert_exception_is_not_raised(axis_angle.from_rotation_matrix, shapes) @parameterized.parameters( ("must have a rank greater than 1", (3,)), ("must have exactly 3 dimensions in axis -1", (3, None)), ("must have exactly 3 dimensions in axis -2", (None, 3)), ) def test_from_rotation_matrix_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised(axis_angle.from_rotation_matrix, error_msg, shape) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_from_rotation_matrix_jacobian_random(self): """Test the Jacobian of the from_rotation_matrix function. Note: Preset angles are not tested as the gradient of tf.norm is NaN at 0. """ x_init = test_helpers.generate_random_test_rotation_matrix_3d() self.assert_jacobian_is_finite_fn( lambda x: axis_angle.from_rotation_matrix(x)[0], [x_init]) self.assert_jacobian_is_finite_fn( lambda x: axis_angle.from_rotation_matrix(x)[1], [x_init]) def test_from_rotation_matrix_normalized_preset(self): """Tests that from_rotation_matrix returns normalized axis-angles.""" preset_euler_angles = test_helpers.generate_preset_test_euler_angles() matrix = rotation_matrix_3d.from_euler(preset_euler_angles) axis, angle = axis_angle.from_rotation_matrix(matrix) self.assertAllEqual( axis_angle.is_normalized(axis, angle), np.ones(angle.shape, dtype=bool)) def test_from_rotation_matrix_normalized_random(self): """Tests that from_rotation_matrix returns normalized axis-angles.""" random_euler_angles = test_helpers.generate_random_test_euler_angles() matrix = rotation_matrix_3d.from_euler(random_euler_angles) axis, angle = axis_angle.from_rotation_matrix(matrix) self.assertAllEqual( axis_angle.is_normalized(axis, angle), np.ones(angle.shape, dtype=bool)) def test_from_rotation_matrix_random(self): """Tests rotation around Z axis.""" def get_rotation_matrix_around_z(angle_rad): return np.array([ [np.cos(angle_rad), -np.sin(angle_rad), 0], [np.sin(angle_rad), np.cos(angle_rad), 
0], [0, 0, 1], ]) tensor_size = np.random.randint(10) angle = ( np.array([ np.deg2rad(np.random.randint(720) - 360) for _ in range(tensor_size) ]).reshape((tensor_size, 1))) rotation_matrix = [get_rotation_matrix_around_z(i[0]) for i in angle] rotation_matrix = np.array(rotation_matrix).reshape((tensor_size, 3, 3)) tf_axis, tf_angle = axis_angle.from_rotation_matrix(rotation_matrix) axis = np.tile([[0., 0., 1.]], (angle.shape[0], 1)) tf_quat_gt = quaternion.from_axis_angle(axis, angle) tf_quat = quaternion.from_axis_angle(tf_axis, tf_angle) # Compare quaternions since axis orientation and angle ambiguity will # lead to more complex comparisons. for quat_gt, quat in zip(self.evaluate(tf_quat_gt), self.evaluate(tf_quat)): # Remember that q=-q for any quaternion. pos = np.allclose(quat_gt, quat) neg = np.allclose(quat_gt, -quat) self.assertTrue(pos or neg) @parameterized.parameters( ((3,), (1,)), ((None, 3), (None, 1)), ((2, 3), (2, 1)), ((1, 3), (1,)), ((3,), (1, 1)), ) def test_inverse_exception_not_raised(self, *shape): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised(axis_angle.inverse, shape) @parameterized.parameters( ("must have exactly 3 dimensions in axis -1", (None,), (1,)), ("must have exactly 1 dimensions in axis -1", (3,), (None,)), ) def test_inverse_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised(axis_angle.inverse, error_msg, shape) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_inverse_jacobian_preset(self): """Test the Jacobian of the inverse function.""" x_axis_init, x_angle_init = test_helpers.generate_preset_test_axis_angle() if tf.executing_eagerly(): # Because axis is returned as is, gradient calculation fails in graph mode # but not in eager mode. This is a side effect of having a graph rather # than a problem of the function. 
with self.subTest("axis"): self.assert_jacobian_is_correct_fn( lambda x: axis_angle.inverse(x, x_angle_init)[0], [x_axis_init]) with self.subTest("angle"): self.assert_jacobian_is_correct_fn( lambda x: axis_angle.inverse(x_axis_init, x)[1], [x_angle_init]) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_inverse_jacobian_random(self): """Test the Jacobian of the inverse function.""" x_axis_init, x_angle_init = test_helpers.generate_random_test_axis_angle() if tf.executing_eagerly(): # Because axis is returned as is, gradient calculation fails in graph mode # but not in eager mode. This is a side effect of having a graph rather # than a problem of the function. with self.subTest("axis"): self.assert_jacobian_is_correct_fn( lambda x: axis_angle.inverse(1.0 * x, x_angle_init)[0], [x_axis_init]) with self.subTest("angle"): self.assert_jacobian_is_correct_fn( lambda x: axis_angle.inverse(x_axis_init, x)[1], [x_angle_init]) def test_inverse_normalized_random(self): """Tests that axis-angle inversion return a normalized axis-angle.""" random_axis, random_angle = test_helpers.generate_random_test_axis_angle() inverse_axis, inverse_angle = axis_angle.inverse(random_axis, random_angle) self.assertAllEqual( axis_angle.is_normalized(inverse_axis, inverse_angle), np.ones(random_angle.shape)) def test_inverse_random(self): """Tests axis-angle inversion.""" random_axis, random_angle = test_helpers.generate_random_test_axis_angle() inverse_axis, inverse_angle = axis_angle.inverse(random_axis, random_angle) self.assertAllClose(inverse_axis, random_axis, rtol=1e-3) self.assertAllClose(inverse_angle, -random_angle, rtol=1e-3) @parameterized.parameters( ("must have exactly 3 dimensions in axis -1", (None,), (1,)), ("must have exactly 1 dimensions in axis -1", (3,), (None,)), ) def test_is_normalized_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised(axis_angle.is_normalized, error_msg, shape) def 
test_is_normalized_random(self): """Tests that is_normalized works as intended.""" # Samples normalized axis-angles. random_euler_angles = test_helpers.generate_random_test_euler_angles() with self.subTest(name=("is_normalized")): random_axis, random_angle = axis_angle.from_euler(random_euler_angles) pred = axis_angle.is_normalized(random_axis, random_angle) self.assertAllEqual(np.ones(shape=random_angle.shape, dtype=bool), pred) with self.subTest(name=("is_not_normalized")): random_axis *= 1.01 pred = axis_angle.is_normalized(random_axis, random_angle) self.assertAllEqual(np.zeros(shape=random_angle.shape, dtype=bool), pred) @parameterized.parameters( ((3,), (3,), (1,)), ((None, 3), (None, 3), (None, 1)), ((2, 3), (2, 3), (2, 1)), ((3,), (1, 3), (1, 2, 1)), ((1, 2, 3), (1, 3), (1,)), ((3,), (1, 3), (1,)), ) def test_rotate_exception_not_raised(self, *shapes): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised(axis_angle.rotate, shapes) @parameterized.parameters( ("must have exactly 3 dimensions in axis -1", (2,), (3,), (1,)), ("must have exactly 3 dimensions in axis -1", (3,), (2,), (1,)), ("must have exactly 1 dimensions in axis -1", (3,), (3,), (2,)), ) def test_rotate_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised(axis_angle.rotate, error_msg, shape) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_rotate_jacobian_preset(self): """Test the Jacobian of the rotate function.""" x_axis_init, x_angle_init = test_helpers.generate_preset_test_axis_angle() x_point_init = np.random.uniform(size=x_axis_init.shape) self.assert_jacobian_is_correct_fn( axis_angle.rotate, [x_point_init, x_axis_init, x_angle_init]) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_rotate_jacobian_random(self): """Test the Jacobian of the rotate function.""" x_axis_init, x_angle_init = test_helpers.generate_random_test_axis_angle() x_point_init = 
np.random.uniform(size=x_axis_init.shape) self.assert_jacobian_is_correct_fn( axis_angle.rotate, [x_point_init, x_axis_init, x_angle_init]) def test_rotate_random(self): """Tests that rotate provides the same results as quaternion.rotate.""" random_axis, random_angle = test_helpers.generate_random_test_axis_angle() tensor_shape = random_angle.shape[:-1] random_point = np.random.normal(size=tensor_shape + (3,)) random_quaternion = quaternion.from_axis_angle(random_axis, random_angle) ground_truth = quaternion.rotate(random_point, random_quaternion) prediction = axis_angle.rotate(random_point, random_axis, random_angle) self.assertAllClose(ground_truth, prediction, rtol=1e-6) if __name__ == "__main__": test_case.main()
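The z-rotation helper used by `test_from_rotation_matrix_random` above can be exercised in isolation. This standalone sketch (names are illustrative, not part of the test suite) builds the same matrix and reads the angle back with a quadrant-aware `arctan2` on the top-left 2x2 block:

```python
import numpy as np

def rotation_matrix_around_z(angle_rad):
    # Same construction as get_rotation_matrix_around_z in the test.
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s, c, 0.0],
                     [0.0, 0.0, 1.0]])

# For a pure z rotation the axis-angle representation is recoverable
# directly: axis = (0, 0, 1), angle = atan2(R[1, 0], R[0, 0]).
rotation = rotation_matrix_around_z(0.5)
recovered_angle = np.arctan2(rotation[1, 0], rotation[0, 0])
```

The matrix is orthonormal by construction, which is why the test can compare against `quaternion.from_axis_angle` without renormalizing.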
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for axis-angle.""" from absl.testing import flagsaver from absl.testing import parameterized import numpy as np import tensorflow as tf from tensorflow_graphics.geometry.transformation import axis_angle from tensorflow_graphics.geometry.transformation import quaternion from tensorflow_graphics.geometry.transformation import rotation_matrix_3d from tensorflow_graphics.geometry.transformation.tests import test_helpers from tensorflow_graphics.util import test_case class AxisAngleTest(test_case.TestCase): @parameterized.parameters( ((3,),), ((None, 3),), ) def test_from_euler_exception_not_raised(self, *shapes): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised(axis_angle.from_euler, shapes) @parameterized.parameters( ("must have exactly 3 dimensions in axis -1", (None,)),) def test_from_euler_exception_raised(self, error_msg, *shapes): """Tests that the shape exceptions are properly raised.""" self.assert_exception_is_raised(axis_angle.from_euler, error_msg, shapes) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_from_euler_jacobian_random(self): """Test the Jacobian of the from_euler function. Note: Preset angles are not tested as the gradient of tf.norm is NaN at 0. 
""" x_init = test_helpers.generate_random_test_euler_angles() self.assert_jacobian_is_finite_fn(lambda x: axis_angle.from_euler(x)[0], [x_init]) self.assert_jacobian_is_finite_fn(lambda x: axis_angle.from_euler(x)[1], [x_init]) def test_from_euler_random(self): """Tests that from_euler allows to perform the expect rotation of points.""" random_euler_angles = test_helpers.generate_random_test_euler_angles() tensor_shape = random_euler_angles.shape[:-1] random_point = np.random.normal(size=tensor_shape + (3,)) random_matrix = rotation_matrix_3d.from_euler(random_euler_angles) random_axis, random_angle = axis_angle.from_euler(random_euler_angles) rotated_with_matrix = rotation_matrix_3d.rotate(random_point, random_matrix) rotated_with_axis_angle = axis_angle.rotate(random_point, random_axis, random_angle) self.assertAllClose(rotated_with_matrix, rotated_with_axis_angle) @parameterized.parameters( ((3,),), ((None, 3),), ((2, 3),), ) def test_from_euler_with_small_angles_approximation_exception_not_raised( self, *shapes): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised( axis_angle.from_euler_with_small_angles_approximation, shapes) @parameterized.parameters( ("must have exactly 3 dimensions in axis -1", (None,)),) def test_from_euler_with_small_angles_approximation_exception_raised( self, error_msg, *shapes): """Tests that the shape exceptions are properly raised.""" self.assert_exception_is_raised( axis_angle.from_euler_with_small_angles_approximation, error_msg, shapes) def test_from_euler_normalized_preset(self): """Tests that from_euler allows build normalized axis-angles.""" euler_angles = test_helpers.generate_preset_test_euler_angles() axis, angle = axis_angle.from_euler(euler_angles) self.assertAllEqual( axis_angle.is_normalized(axis, angle), np.ones(angle.shape, dtype=bool)) def test_from_euler_normalized_random(self): """Tests that from_euler allows build normalized axis-angles.""" random_euler_angles = 
test_helpers.generate_random_test_euler_angles() random_axis, random_angle = axis_angle.from_euler(random_euler_angles) self.assertAllEqual( axis_angle.is_normalized(random_axis, random_angle), np.ones(shape=random_angle.shape)) def test_from_euler_with_small_angles_approximation_random(self): # Only generate small angles. For a test tolerance of 1e-3, 0.23 was found # empirically to be the range where the small angle approximation works. random_euler_angles = test_helpers.generate_random_test_euler_angles( min_angle=-0.23, max_angle=0.23) exact_axis_angle = axis_angle.from_euler(random_euler_angles) approximate_axis_angle = ( axis_angle.from_euler_with_small_angles_approximation( random_euler_angles)) self.assertAllClose(exact_axis_angle, approximate_axis_angle, atol=1e-3) @parameterized.parameters( ((4,),), ((None, 4),), ((2, 4),), ) def test_from_quaternion_exception_not_raised(self, *shape): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised(axis_angle.from_quaternion, shape) @parameterized.parameters( ("must have exactly 4 dimensions in axis -1", (None,)),) def test_from_quaternion_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised(axis_angle.from_quaternion, error_msg, shape) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_from_quaternion_jacobian_random(self): """Test the Jacobian of the from_quaternion function. Note: Preset angles are not tested as the gradient of tf.norm is NaN at 0. 
""" x_init = test_helpers.generate_random_test_quaternions() self.assert_jacobian_is_finite_fn( lambda x: axis_angle.from_quaternion(x)[0], [x_init]) self.assert_jacobian_is_finite_fn( lambda x: axis_angle.from_quaternion(x)[1], [x_init]) def test_from_quaternion_normalized_preset(self): """Tests that from_quaternion returns normalized axis-angles.""" euler_angles = test_helpers.generate_preset_test_euler_angles() quat = quaternion.from_euler(euler_angles) axis, angle = axis_angle.from_quaternion(quat) self.assertAllEqual( axis_angle.is_normalized(axis, angle), np.ones(angle.shape, dtype=bool)) def test_from_quaternion_normalized_random(self): """Tests that from_quaternion returns normalized axis-angles.""" random_quaternions = test_helpers.generate_random_test_quaternions() random_axis, random_angle = axis_angle.from_quaternion(random_quaternions) self.assertAllEqual( axis_angle.is_normalized(random_axis, random_angle), np.ones(random_angle.shape)) def test_from_quaternion_preset(self): """Tests that axis_angle.from_quaternion produces the expected result.""" preset_euler_angles = test_helpers.generate_preset_test_euler_angles() preset_quaternions = quaternion.from_euler(preset_euler_angles) preset_axis_angle = axis_angle.from_euler(preset_euler_angles) self.assertAllClose( preset_axis_angle, axis_angle.from_quaternion(preset_quaternions), rtol=1e-3) def test_from_quaternion_random(self): """Tests that axis_angle.from_quaternion produces the expected result.""" random_euler_angles = test_helpers.generate_random_test_euler_angles() random_quaternions = quaternion.from_euler(random_euler_angles) random_axis_angle = axis_angle.from_euler(random_euler_angles) self.assertAllClose( random_axis_angle, axis_angle.from_quaternion(random_quaternions), rtol=1e-3) @parameterized.parameters( ((3, 3),), ((None, 3, 3),), ) def test_from_rotation_matrix_exception_not_raised(self, *shapes): """Tests that the shape exceptions are not raised.""" 
self.assert_exception_is_not_raised(axis_angle.from_rotation_matrix, shapes) @parameterized.parameters( ("must have a rank greater than 1", (3,)), ("must have exactly 3 dimensions in axis -1", (3, None)), ("must have exactly 3 dimensions in axis -2", (None, 3)), ) def test_from_rotation_matrix_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised(axis_angle.from_rotation_matrix, error_msg, shape) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_from_rotation_matrix_jacobian_random(self): """Test the Jacobian of the from_rotation_matrix function. Note: Preset angles are not tested as the gradient of tf.norm is NaN at 0. """ x_init = test_helpers.generate_random_test_rotation_matrix_3d() self.assert_jacobian_is_finite_fn( lambda x: axis_angle.from_rotation_matrix(x)[0], [x_init]) self.assert_jacobian_is_finite_fn( lambda x: axis_angle.from_rotation_matrix(x)[1], [x_init]) def test_from_rotation_matrix_normalized_preset(self): """Tests that from_rotation_matrix returns normalized axis-angles.""" preset_euler_angles = test_helpers.generate_preset_test_euler_angles() matrix = rotation_matrix_3d.from_euler(preset_euler_angles) axis, angle = axis_angle.from_rotation_matrix(matrix) self.assertAllEqual( axis_angle.is_normalized(axis, angle), np.ones(angle.shape, dtype=bool)) def test_from_rotation_matrix_normalized_random(self): """Tests that from_rotation_matrix returns normalized axis-angles.""" random_euler_angles = test_helpers.generate_random_test_euler_angles() matrix = rotation_matrix_3d.from_euler(random_euler_angles) axis, angle = axis_angle.from_rotation_matrix(matrix) self.assertAllEqual( axis_angle.is_normalized(axis, angle), np.ones(angle.shape, dtype=bool)) def test_from_rotation_matrix_random(self): """Tests rotation around Z axis.""" def get_rotation_matrix_around_z(angle_rad): return np.array([ [np.cos(angle_rad), -np.sin(angle_rad), 0], [np.sin(angle_rad), np.cos(angle_rad), 
0], [0, 0, 1], ]) tensor_size = np.random.randint(10) angle = ( np.array([ np.deg2rad(np.random.randint(720) - 360) for _ in range(tensor_size) ]).reshape((tensor_size, 1))) rotation_matrix = [get_rotation_matrix_around_z(i[0]) for i in angle] rotation_matrix = np.array(rotation_matrix).reshape((tensor_size, 3, 3)) tf_axis, tf_angle = axis_angle.from_rotation_matrix(rotation_matrix) axis = np.tile([[0., 0., 1.]], (angle.shape[0], 1)) tf_quat_gt = quaternion.from_axis_angle(axis, angle) tf_quat = quaternion.from_axis_angle(tf_axis, tf_angle) # Compare quaternions since axis orientation and angle ambiguity will # lead to more complex comparisons. for quat_gt, quat in zip(self.evaluate(tf_quat_gt), self.evaluate(tf_quat)): # Remember that q=-q for any quaternion. pos = np.allclose(quat_gt, quat) neg = np.allclose(quat_gt, -quat) self.assertTrue(pos or neg) @parameterized.parameters( ((3,), (1,)), ((None, 3), (None, 1)), ((2, 3), (2, 1)), ((1, 3), (1,)), ((3,), (1, 1)), ) def test_inverse_exception_not_raised(self, *shape): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised(axis_angle.inverse, shape) @parameterized.parameters( ("must have exactly 3 dimensions in axis -1", (None,), (1,)), ("must have exactly 1 dimensions in axis -1", (3,), (None,)), ) def test_inverse_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised(axis_angle.inverse, error_msg, shape) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_inverse_jacobian_preset(self): """Test the Jacobian of the inverse function.""" x_axis_init, x_angle_init = test_helpers.generate_preset_test_axis_angle() if tf.executing_eagerly(): # Because axis is returned as is, gradient calculation fails in graph mode # but not in eager mode. This is a side effect of having a graph rather # than a problem of the function. 
with self.subTest("axis"): self.assert_jacobian_is_correct_fn( lambda x: axis_angle.inverse(x, x_angle_init)[0], [x_axis_init]) with self.subTest("angle"): self.assert_jacobian_is_correct_fn( lambda x: axis_angle.inverse(x_axis_init, x)[1], [x_angle_init]) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_inverse_jacobian_random(self): """Test the Jacobian of the inverse function.""" x_axis_init, x_angle_init = test_helpers.generate_random_test_axis_angle() if tf.executing_eagerly(): # Because axis is returned as is, gradient calculation fails in graph mode # but not in eager mode. This is a side effect of having a graph rather # than a problem of the function. with self.subTest("axis"): self.assert_jacobian_is_correct_fn( lambda x: axis_angle.inverse(1.0 * x, x_angle_init)[0], [x_axis_init]) with self.subTest("angle"): self.assert_jacobian_is_correct_fn( lambda x: axis_angle.inverse(x_axis_init, x)[1], [x_angle_init]) def test_inverse_normalized_random(self): """Tests that axis-angle inversion return a normalized axis-angle.""" random_axis, random_angle = test_helpers.generate_random_test_axis_angle() inverse_axis, inverse_angle = axis_angle.inverse(random_axis, random_angle) self.assertAllEqual( axis_angle.is_normalized(inverse_axis, inverse_angle), np.ones(random_angle.shape)) def test_inverse_random(self): """Tests axis-angle inversion.""" random_axis, random_angle = test_helpers.generate_random_test_axis_angle() inverse_axis, inverse_angle = axis_angle.inverse(random_axis, random_angle) self.assertAllClose(inverse_axis, random_axis, rtol=1e-3) self.assertAllClose(inverse_angle, -random_angle, rtol=1e-3) @parameterized.parameters( ("must have exactly 3 dimensions in axis -1", (None,), (1,)), ("must have exactly 1 dimensions in axis -1", (3,), (None,)), ) def test_is_normalized_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised(axis_angle.is_normalized, error_msg, shape) def 
test_is_normalized_random(self): """Tests that is_normalized works as intended.""" # Samples normalized axis-angles. random_euler_angles = test_helpers.generate_random_test_euler_angles() with self.subTest(name=("is_normalized")): random_axis, random_angle = axis_angle.from_euler(random_euler_angles) pred = axis_angle.is_normalized(random_axis, random_angle) self.assertAllEqual(np.ones(shape=random_angle.shape, dtype=bool), pred) with self.subTest(name=("is_not_normalized")): random_axis *= 1.01 pred = axis_angle.is_normalized(random_axis, random_angle) self.assertAllEqual(np.zeros(shape=random_angle.shape, dtype=bool), pred) @parameterized.parameters( ((3,), (3,), (1,)), ((None, 3), (None, 3), (None, 1)), ((2, 3), (2, 3), (2, 1)), ((3,), (1, 3), (1, 2, 1)), ((1, 2, 3), (1, 3), (1,)), ((3,), (1, 3), (1,)), ) def test_rotate_exception_not_raised(self, *shapes): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised(axis_angle.rotate, shapes) @parameterized.parameters( ("must have exactly 3 dimensions in axis -1", (2,), (3,), (1,)), ("must have exactly 3 dimensions in axis -1", (3,), (2,), (1,)), ("must have exactly 1 dimensions in axis -1", (3,), (3,), (2,)), ) def test_rotate_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised(axis_angle.rotate, error_msg, shape) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_rotate_jacobian_preset(self): """Test the Jacobian of the rotate function.""" x_axis_init, x_angle_init = test_helpers.generate_preset_test_axis_angle() x_point_init = np.random.uniform(size=x_axis_init.shape) self.assert_jacobian_is_correct_fn( axis_angle.rotate, [x_point_init, x_axis_init, x_angle_init]) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_rotate_jacobian_random(self): """Test the Jacobian of the rotate function.""" x_axis_init, x_angle_init = test_helpers.generate_random_test_axis_angle() x_point_init = 
np.random.uniform(size=x_axis_init.shape) self.assert_jacobian_is_correct_fn( axis_angle.rotate, [x_point_init, x_axis_init, x_angle_init]) def test_rotate_random(self): """Tests that rotate provides the same results as quaternion.rotate.""" random_axis, random_angle = test_helpers.generate_random_test_axis_angle() tensor_shape = random_angle.shape[:-1] random_point = np.random.normal(size=tensor_shape + (3,)) random_quaternion = quaternion.from_axis_angle(random_axis, random_angle) ground_truth = quaternion.rotate(random_point, random_quaternion) prediction = axis_angle.rotate(random_point, random_axis, random_angle) self.assertAllClose(ground_truth, prediction, rtol=1e-6) if __name__ == "__main__": test_case.main()
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/projects/nasa/lib/utils.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """General helper functions.""" from os import path import numpy as np from skimage import measure import tensorflow.compat.v1 as tf from tensorflow_graphics.projects.cvxnet.lib.libmise import mise from tensorflow_graphics.projects.nasa.lib import datasets from tensorflow_graphics.projects.nasa.lib import models import tensorflow_probability as tfp from tqdm import trange import trimesh tf.disable_eager_execution() tfd = tfp.distributions def define_flags(): """Define command line flags.""" flags = tf.app.flags # Dataset Parameters flags.DEFINE_enum("dataset", "amass", list(k for k in datasets.dataset_dict.keys()), "Name of the dataset.") flags.DEFINE_string("data_dir", None, "Directory to load data from.") flags.mark_flag_as_required("data_dir") flags.DEFINE_integer("sample_bbox", 1024, "Number of bbox samples.") flags.DEFINE_integer("sample_surf", 1024, "Number of surface samples.") flags.DEFINE_integer("batch_size", 12, "Batch size.") flags.DEFINE_integer("motion", 0, "Index of the motion for evaluation.") flags.DEFINE_integer("subject", 0, "Index of the subject for training.") # Model Parameters flags.DEFINE_enum("model", "nasa", list(k for k in models.model_dict.keys()), "Name of the model.") flags.DEFINE_integer("n_parts", 24, "Number of parts.") flags.DEFINE_integer("total_dim", 960, "Dimension of the latent vector (in total).") flags.DEFINE_bool("shared_decoder", False, "Whether to use 
shared decoder.") flags.DEFINE_float("soft_blend", 5., "The constant to blend parts.") flags.DEFINE_bool("projection", True, "Whether to use projected shape features.") flags.DEFINE_float("level_set", 0.5, "The value of the level_set.") flags.DEFINE_integer("n_dims", 3, "The dimension of the query points.") # Training Parameters flags.DEFINE_float("lr", 1e-4, "Learning rate") flags.DEFINE_string("train_dir", None, "Training directory.") flags.mark_flag_as_required("train_dir") flags.DEFINE_integer("max_steps", 200000, "Number of optimization steps.") flags.DEFINE_integer("save_every", 5000, "Number of steps to save checkpoint.") flags.DEFINE_integer("summary_every", 500, "Number of steps to save summaries.") flags.DEFINE_float("label_w", 0.5, "Weight of labeled vertices loss.") flags.DEFINE_float("minimal_w", 0.05, "Weight of minimal loss.") flags.DEFINE_bool("use_vert", True, "Whether to use vertices on the mesh for training.") flags.DEFINE_bool("use_joint", True, "Whether to use joint-based transformation.") flags.DEFINE_integer("sample_vert", 2048, "Number of vertex samples.") # Evaluation Parameters flags.DEFINE_bool("gen_mesh_only", False, "Whether to generate meshes only.") # Tracking Parameters flags.DEFINE_float("theta_lr", 5e-4, "Learning rate") flags.DEFINE_integer("max_steps_per_frame", 1792, "Number of optimization steps for tracking each frame.") flags.DEFINE_enum("gradient_type", "reparam", ["vanilla", "reparam"], "Type of gradient to use in theta optimization.") flags.DEFINE_integer("sample_track_vert", 1024, "Number of vertex samples for tracking each frame.") flags.DEFINE_integer("n_noisy_samples", 8, "Number of noisy samples per vertex") flags.DEFINE_float("bandwidth", 1e-2, "Bandwidth of the Gaussian noises.") flags.DEFINE_bool( "left_trans", False, "Whether to use left side transformation (True) or right side (False).") flags.DEFINE_string("joint_data", None, "Path to load joint data.") flags.DEFINE_float("glue_w", 20., "Weight of length 
constraint loss.") flags.DEFINE_float("trans_range", 1., "The range of allowed translations.") def gen_mesh(sess, feed_dict, latent_holder, point_holder, latent, occ, batch_val, hparams, idx=0): """Generating meshes given a trained NASA model.""" scale = 1.1 # Scale of the padded bbox regarding the tight one. level_set = hparams.level_set latent_val = sess.run(latent, feed_dict) mesh_extractor = mise.MISE(32, 3, level_set) points = mesh_extractor.query() gt_verts = batch_val["vert"].reshape([-1, 3]) gt_bbox = np.stack([gt_verts.min(axis=0), gt_verts.max(axis=0)], axis=0) gt_center = (gt_bbox[0] + gt_bbox[1]) * 0.5 gt_scale = (gt_bbox[1] - gt_bbox[0]).max() while points.shape[0] != 0: orig_points = points points = points.astype(np.float32) points = (np.expand_dims(points, axis=0) / mesh_extractor.resolution - 0.5) * scale points = points * gt_scale + gt_center n_points = points.shape[1] values = [] for i in range(0, n_points, 100000): # Add this to prevent OOM due to points overload. feed_dict[latent_holder] = latent_val feed_dict[point_holder] = np.expand_dims(points[:, i:i + 100000], axis=1) value = sess.run(occ[:, idx], feed_dict) values.append(value) values = np.concatenate(values, axis=1) values = values[0, :, 0].astype(np.float64) mesh_extractor.update(orig_points, values) points = mesh_extractor.query() value_grid = mesh_extractor.to_dense() try: value_grid = np.pad(value_grid, 1, "constant", constant_values=-1e6) verts, faces, normals, unused_var = measure.marching_cubes_lewiner( value_grid, min(level_set, value_grid.max())) del normals verts -= 1 verts /= np.array([ value_grid.shape[0] - 3, value_grid.shape[1] - 3, value_grid.shape[2] - 3 ], dtype=np.float32) verts = scale * (verts - 0.5) verts = verts * gt_scale + gt_center faces = np.stack([faces[..., 1], faces[..., 0], faces[..., 2]], axis=-1) mesh = trimesh.Trimesh(vertices=verts, faces=faces) return mesh except: # pylint: disable=bare-except return None def save_mesh(sess, feed_dict, latent_holder, 
point_holder, latent, occ, batch_val, hparams, pth="meshes"): """Generate and save meshes to disk given a trained NASA model.""" name = batch_val["name"][0].decode("utf-8") subject, motion, frame = amass_name_helper(name) pth = path.join(hparams.train_dir, pth, frame) if not tf.io.gfile.isdir(pth): tf.io.gfile.makedirs(pth) start = hparams.n_parts for i in range(start, hparams.n_parts + 1): mesh_model = gen_mesh( sess, feed_dict, latent_holder, point_holder, latent, occ, batch_val, hparams, idx=i) mesh_name = "full_pred.obj" if mesh_model is not None: with tf.io.gfile.GFile(path.join(pth, mesh_name), "w") as fout: mesh_model.export(fout, file_type="obj") return subject, motion, frame, mesh_model def save_pointcloud(data, hparams, pth="pointcloud"): """Save pointcloud to disk.""" name = data["name"][0].decode("utf-8") unused_subject, unused_motion, frame = amass_name_helper(name) pth = path.join(hparams.train_dir, pth, frame) if not tf.io.gfile.isdir(pth): tf.io.gfile.makedirs(pth) mesh_name = "pointcloud.obj" with tf.io.gfile.GFile(path.join(pth, mesh_name), "w") as fout: pointcloud = data["vert"].reshape([-1, 3]) for v in pointcloud: fout.write("v {0} {1} {2}\n".format(*v.tolist())) def amass_name_helper(name): name, frame = name.split("-") subject = name[:5] motion = name[6:] return subject, motion, frame def make_summary_feed_dict( iou_hook, iou, best_hook, best_iou, ): feed_dict = {} feed_dict[iou_hook] = iou feed_dict[best_hook] = best_iou return feed_dict def parse_global_step(ckpt): basename = path.basename(ckpt) return int(basename.split("-")[-1]) def compute_iou(sess, feed_dict, latent_holder, point_holder, latent, occ, point, label, hparams): """Compute IoU.""" iou = 0. 
eps = 1e-9 latent_val = sess.run(latent, feed_dict) n_points = point.shape[2] preds = [] for start in range(0, n_points, 100000): feed_dict[point_holder] = point[:, :, start:start + 100000] feed_dict[latent_holder] = latent_val pred = sess.run(occ, feed_dict) preds.append(pred) pred = np.concatenate(preds, axis=2) pred = (pred >= hparams.level_set).astype(np.float32) label = (label[:, :1] >= 0.5).astype(np.float32).squeeze(axis=1) iou += np.sum(pred * label) / np.maximum(np.sum(np.maximum(pred, label)), eps) return iou def compute_glue_loss(connect, end_pts, inv_transforms, inv_first_frame_trans, joints, hparams): """Compute the prior term as a glue loss.""" n_dims = hparams.n_dims # Invert the transformation r_inv = inv_transforms[..., :n_dims, :n_dims] t_inv = inv_transforms[..., :n_dims, -1:] r = tf.transpose(r_inv, [0, 2, 1]) t = -tf.matmul(r, t_inv) transforms = tf.concat( [tf.concat([r, t], axis=-1), inv_transforms[..., -1:, :]], axis=-2) transforms = tf.matmul(transforms, inv_first_frame_trans) # Compute transformations of father joints and apply it to vectors from frame0 father_transforms = tf.reduce_sum( tf.expand_dims(transforms, axis=1) * connect.reshape([hparams.n_parts, hparams.n_parts, 1, 1]), axis=0) end_pts_homo = tf.expand_dims( tf.concat([end_pts, tf.ones_like(end_pts[..., :1])], axis=-1), axis=-1) end_pts_transformed = tf.matmul(father_transforms, end_pts_homo) end_pts_transformed = tf.squeeze(end_pts_transformed, axis=-1)[..., :n_dims] # Compute vectors in current configuration pred_links = tf.reshape(joints, [hparams.n_parts, n_dims]) # Compute distance between links and transformed vectors return tf.reduce_sum(tf.square(pred_links - end_pts_transformed)) def vanilla_theta_gradient(model_fn, batch_holder, hparams): """A vanilla gradient estimator for the pose, theta.""" latent_holder, latent, occ_eval = model_fn(batch_holder, None, None, "gen_mesh") if hparams.sample_vert > 0: points = batch_holder["point"] weights = batch_holder["weight"] 
n_vert = tf.shape(points)[2] sample_indices = tf.random.uniform([1, 1, hparams.sample_vert], minval=0, maxval=n_vert, dtype=tf.int32) points = tf.gather(points, sample_indices, axis=2, batch_dims=2) weights = tf.gather(weights, sample_indices, axis=2, batch_dims=2) batch_holder["point"] = points batch_holder["weight"] = weights unused_var0, unused_var1, occ = model_fn(batch_holder, None, None, "gen_mesh") return latent_holder, latent, occ_eval, tf.reduce_mean( tf.square(occ - hparams.level_set)) def reparam_theta_gradient(model_fn, batch_holder, hparams): """A gradient estimator for the pose, theta, using the reparam trick.""" sigma = hparams.bandwidth n_samples = hparams.n_noisy_samples latent_holder, latent, occ_eval = model_fn(batch_holder, None, None, "gen_mesh") if hparams.sample_vert > 0: points = batch_holder["point"] weights = batch_holder["weight"] n_vert = tf.shape(points)[2] sample_indices = tf.random.uniform([1, 1, hparams.sample_vert], minval=0, maxval=n_vert, dtype=tf.int32) points = tf.gather(points, sample_indices, axis=2, batch_dims=2) weights = tf.gather(weights, sample_indices, axis=2, batch_dims=2) batch_holder["point"] = points batch_holder["weight"] = weights dist = tfd.Normal(loc=0., scale=sigma) n_pts = hparams.sample_vert if hparams.sample_vert > 0 else hparams.n_vert noises = dist.sample((1, hparams.n_parts, n_pts, n_samples, hparams.n_dims)) unused_var0, unused_var1, occ = model_fn(batch_holder, noises, None, "gen_mesh") occ = tf.reshape(occ, [1, hparams.n_parts + 1, -1, n_samples, 1]) occ = tf.reduce_mean(occ[:, hparams.n_parts:], axis=3) return latent_holder, latent, occ_eval, tf.reduce_mean( tf.square(occ - hparams.level_set)) def optimize_theta(feed_dict, loss, reset_op, train_op, rec_loss, glue_loss, sess, k, hparams): """Optimize the pose, theta, during tracking.""" sess.run(reset_op) loss_val = 0 glue_val = 0 with trange(hparams.max_steps_per_frame) as t: for unused_i in t: loss_val, unused_var, rec_val, glue_val = sess.run( [loss, 
train_op, rec_loss, glue_loss], feed_dict) t.set_description("Frame_{0} {1:.4f}|{2:.4f}".format( k, rec_val, glue_val)) return loss_val, glue_val
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """General helper functions.""" from os import path import numpy as np from skimage import measure import tensorflow.compat.v1 as tf from tensorflow_graphics.projects.cvxnet.lib.libmise import mise from tensorflow_graphics.projects.nasa.lib import datasets from tensorflow_graphics.projects.nasa.lib import models import tensorflow_probability as tfp from tqdm import trange import trimesh tf.disable_eager_execution() tfd = tfp.distributions def define_flags(): """Define command line flags.""" flags = tf.app.flags # Dataset Parameters flags.DEFINE_enum("dataset", "amass", list(k for k in datasets.dataset_dict.keys()), "Name of the dataset.") flags.DEFINE_string("data_dir", None, "Directory to load data from.") flags.mark_flag_as_required("data_dir") flags.DEFINE_integer("sample_bbox", 1024, "Number of bbox samples.") flags.DEFINE_integer("sample_surf", 1024, "Number of surface samples.") flags.DEFINE_integer("batch_size", 12, "Batch size.") flags.DEFINE_integer("motion", 0, "Index of the motion for evaluation.") flags.DEFINE_integer("subject", 0, "Index of the subject for training.") # Model Parameters flags.DEFINE_enum("model", "nasa", list(k for k in models.model_dict.keys()), "Name of the model.") flags.DEFINE_integer("n_parts", 24, "Number of parts.") flags.DEFINE_integer("total_dim", 960, "Dimension of the latent vector (in total).") flags.DEFINE_bool("shared_decoder", False, "Whether to use 
shared decoder.") flags.DEFINE_float("soft_blend", 5., "The constant to blend parts.") flags.DEFINE_bool("projection", True, "Whether to use projected shape features.") flags.DEFINE_float("level_set", 0.5, "The value of the level_set.") flags.DEFINE_integer("n_dims", 3, "The dimension of the query points.") # Training Parameters flags.DEFINE_float("lr", 1e-4, "Learning rate") flags.DEFINE_string("train_dir", None, "Training directory.") flags.mark_flag_as_required("train_dir") flags.DEFINE_integer("max_steps", 200000, "Number of optimization steps.") flags.DEFINE_integer("save_every", 5000, "Number of steps to save checkpoint.") flags.DEFINE_integer("summary_every", 500, "Number of steps to save summaries.") flags.DEFINE_float("label_w", 0.5, "Weight of labeled vertices loss.") flags.DEFINE_float("minimal_w", 0.05, "Weight of minimal loss.") flags.DEFINE_bool("use_vert", True, "Whether to use vertices on the mesh for training.") flags.DEFINE_bool("use_joint", True, "Whether to use joint-based transformation.") flags.DEFINE_integer("sample_vert", 2048, "Number of vertex samples.") # Evaluation Parameters flags.DEFINE_bool("gen_mesh_only", False, "Whether to generate meshes only.") # Tracking Parameters flags.DEFINE_float("theta_lr", 5e-4, "Learning rate") flags.DEFINE_integer("max_steps_per_frame", 1792, "Number of optimization steps for tracking each frame.") flags.DEFINE_enum("gradient_type", "reparam", ["vanilla", "reparam"], "Type of gradient to use in theta optimization.") flags.DEFINE_integer("sample_track_vert", 1024, "Number of vertex samples for tracking each frame.") flags.DEFINE_integer("n_noisy_samples", 8, "Number of noisy samples per vertex") flags.DEFINE_float("bandwidth", 1e-2, "Bandwidth of the Gaussian noises.") flags.DEFINE_bool( "left_trans", False, "Whether to use left side transformation (True) or right side (False).") flags.DEFINE_string("joint_data", None, "Path to load joint data.") flags.DEFINE_float("glue_w", 20., "Weight of length 
constraint loss.") flags.DEFINE_float("trans_range", 1., "The range of allowed translations.") def gen_mesh(sess, feed_dict, latent_holder, point_holder, latent, occ, batch_val, hparams, idx=0): """Generating meshes given a trained NASA model.""" scale = 1.1 # Scale of the padded bbox regarding the tight one. level_set = hparams.level_set latent_val = sess.run(latent, feed_dict) mesh_extractor = mise.MISE(32, 3, level_set) points = mesh_extractor.query() gt_verts = batch_val["vert"].reshape([-1, 3]) gt_bbox = np.stack([gt_verts.min(axis=0), gt_verts.max(axis=0)], axis=0) gt_center = (gt_bbox[0] + gt_bbox[1]) * 0.5 gt_scale = (gt_bbox[1] - gt_bbox[0]).max() while points.shape[0] != 0: orig_points = points points = points.astype(np.float32) points = (np.expand_dims(points, axis=0) / mesh_extractor.resolution - 0.5) * scale points = points * gt_scale + gt_center n_points = points.shape[1] values = [] for i in range(0, n_points, 100000): # Add this to prevent OOM due to points overload. feed_dict[latent_holder] = latent_val feed_dict[point_holder] = np.expand_dims(points[:, i:i + 100000], axis=1) value = sess.run(occ[:, idx], feed_dict) values.append(value) values = np.concatenate(values, axis=1) values = values[0, :, 0].astype(np.float64) mesh_extractor.update(orig_points, values) points = mesh_extractor.query() value_grid = mesh_extractor.to_dense() try: value_grid = np.pad(value_grid, 1, "constant", constant_values=-1e6) verts, faces, normals, unused_var = measure.marching_cubes_lewiner( value_grid, min(level_set, value_grid.max())) del normals verts -= 1 verts /= np.array([ value_grid.shape[0] - 3, value_grid.shape[1] - 3, value_grid.shape[2] - 3 ], dtype=np.float32) verts = scale * (verts - 0.5) verts = verts * gt_scale + gt_center faces = np.stack([faces[..., 1], faces[..., 0], faces[..., 2]], axis=-1) mesh = trimesh.Trimesh(vertices=verts, faces=faces) return mesh except: # pylint: disable=bare-except return None def save_mesh(sess, feed_dict, latent_holder, 
point_holder, latent, occ, batch_val, hparams, pth="meshes"): """Generate and save meshes to disk given a trained NASA model.""" name = batch_val["name"][0].decode("utf-8") subject, motion, frame = amass_name_helper(name) pth = path.join(hparams.train_dir, pth, frame) if not tf.io.gfile.isdir(pth): tf.io.gfile.makedirs(pth) start = hparams.n_parts for i in range(start, hparams.n_parts + 1): mesh_model = gen_mesh( sess, feed_dict, latent_holder, point_holder, latent, occ, batch_val, hparams, idx=i) mesh_name = "full_pred.obj" if mesh_model is not None: with tf.io.gfile.GFile(path.join(pth, mesh_name), "w") as fout: mesh_model.export(fout, file_type="obj") return subject, motion, frame, mesh_model def save_pointcloud(data, hparams, pth="pointcloud"): """Save pointcloud to disk.""" name = data["name"][0].decode("utf-8") unused_subject, unused_motion, frame = amass_name_helper(name) pth = path.join(hparams.train_dir, pth, frame) if not tf.io.gfile.isdir(pth): tf.io.gfile.makedirs(pth) mesh_name = "pointcloud.obj" with tf.io.gfile.GFile(path.join(pth, mesh_name), "w") as fout: pointcloud = data["vert"].reshape([-1, 3]) for v in pointcloud: fout.write("v {0} {1} {2}\n".format(*v.tolist())) def amass_name_helper(name): name, frame = name.split("-") subject = name[:5] motion = name[6:] return subject, motion, frame def make_summary_feed_dict( iou_hook, iou, best_hook, best_iou, ): feed_dict = {} feed_dict[iou_hook] = iou feed_dict[best_hook] = best_iou return feed_dict def parse_global_step(ckpt): basename = path.basename(ckpt) return int(basename.split("-")[-1]) def compute_iou(sess, feed_dict, latent_holder, point_holder, latent, occ, point, label, hparams): """Compute IoU.""" iou = 0. 
eps = 1e-9 latent_val = sess.run(latent, feed_dict) n_points = point.shape[2] preds = [] for start in range(0, n_points, 100000): feed_dict[point_holder] = point[:, :, start:start + 100000] feed_dict[latent_holder] = latent_val pred = sess.run(occ, feed_dict) preds.append(pred) pred = np.concatenate(preds, axis=2) pred = (pred >= hparams.level_set).astype(np.float32) label = (label[:, :1] >= 0.5).astype(np.float32).squeeze(axis=1) iou += np.sum(pred * label) / np.maximum(np.sum(np.maximum(pred, label)), eps) return iou def compute_glue_loss(connect, end_pts, inv_transforms, inv_first_frame_trans, joints, hparams): """Compute the prior term as a glue loss.""" n_dims = hparams.n_dims # Invert the transformation r_inv = inv_transforms[..., :n_dims, :n_dims] t_inv = inv_transforms[..., :n_dims, -1:] r = tf.transpose(r_inv, [0, 2, 1]) t = -tf.matmul(r, t_inv) transforms = tf.concat( [tf.concat([r, t], axis=-1), inv_transforms[..., -1:, :]], axis=-2) transforms = tf.matmul(transforms, inv_first_frame_trans) # Compute transformations of father joints and apply it to vectors from frame0 father_transforms = tf.reduce_sum( tf.expand_dims(transforms, axis=1) * connect.reshape([hparams.n_parts, hparams.n_parts, 1, 1]), axis=0) end_pts_homo = tf.expand_dims( tf.concat([end_pts, tf.ones_like(end_pts[..., :1])], axis=-1), axis=-1) end_pts_transformed = tf.matmul(father_transforms, end_pts_homo) end_pts_transformed = tf.squeeze(end_pts_transformed, axis=-1)[..., :n_dims] # Compute vectors in current configuration pred_links = tf.reshape(joints, [hparams.n_parts, n_dims]) # Compute distance between links and transformed vectors return tf.reduce_sum(tf.square(pred_links - end_pts_transformed)) def vanilla_theta_gradient(model_fn, batch_holder, hparams): """A vanilla gradient estimator for the pose, theta.""" latent_holder, latent, occ_eval = model_fn(batch_holder, None, None, "gen_mesh") if hparams.sample_vert > 0: points = batch_holder["point"] weights = batch_holder["weight"] 
n_vert = tf.shape(points)[2] sample_indices = tf.random.uniform([1, 1, hparams.sample_vert], minval=0, maxval=n_vert, dtype=tf.int32) points = tf.gather(points, sample_indices, axis=2, batch_dims=2) weights = tf.gather(weights, sample_indices, axis=2, batch_dims=2) batch_holder["point"] = points batch_holder["weight"] = weights unused_var0, unused_var1, occ = model_fn(batch_holder, None, None, "gen_mesh") return latent_holder, latent, occ_eval, tf.reduce_mean( tf.square(occ - hparams.level_set)) def reparam_theta_gradient(model_fn, batch_holder, hparams): """A gradient estimator for the pose, theta, using the reparam trick.""" sigma = hparams.bandwidth n_samples = hparams.n_noisy_samples latent_holder, latent, occ_eval = model_fn(batch_holder, None, None, "gen_mesh") if hparams.sample_vert > 0: points = batch_holder["point"] weights = batch_holder["weight"] n_vert = tf.shape(points)[2] sample_indices = tf.random.uniform([1, 1, hparams.sample_vert], minval=0, maxval=n_vert, dtype=tf.int32) points = tf.gather(points, sample_indices, axis=2, batch_dims=2) weights = tf.gather(weights, sample_indices, axis=2, batch_dims=2) batch_holder["point"] = points batch_holder["weight"] = weights dist = tfd.Normal(loc=0., scale=sigma) n_pts = hparams.sample_vert if hparams.sample_vert > 0 else hparams.n_vert noises = dist.sample((1, hparams.n_parts, n_pts, n_samples, hparams.n_dims)) unused_var0, unused_var1, occ = model_fn(batch_holder, noises, None, "gen_mesh") occ = tf.reshape(occ, [1, hparams.n_parts + 1, -1, n_samples, 1]) occ = tf.reduce_mean(occ[:, hparams.n_parts:], axis=3) return latent_holder, latent, occ_eval, tf.reduce_mean( tf.square(occ - hparams.level_set)) def optimize_theta(feed_dict, loss, reset_op, train_op, rec_loss, glue_loss, sess, k, hparams): """Optimize the pose, theta, during tracking.""" sess.run(reset_op) loss_val = 0 glue_val = 0 with trange(hparams.max_steps_per_frame) as t: for unused_i in t: loss_val, unused_var, rec_val, glue_val = sess.run( [loss, 
train_op, rec_loss, glue_loss], feed_dict) t.set_description("Frame_{0} {1:.4f}|{2:.4f}".format( k, rec_val, glue_val)) return loss_val, glue_val
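The reparameterized gradient estimator above smooths the occupancy field by averaging network evaluations over Gaussian perturbations of the query points. A minimal NumPy sketch of that Monte-Carlo smoothing, with a hypothetical `occ_fn` standing in for the occupancy network:

```python
import numpy as np

def smoothed_occupancy(occ_fn, points, sigma=0.05, n_samples=8, seed=0):
    """Monte-Carlo estimate of E[occ(p + eps)] with eps ~ N(0, sigma^2).

    occ_fn: callable mapping [n_pts, 3] points -> [n_pts] occupancies.
    points: [n_pts, 3] query points.
    Returns a [n_pts] array of smoothed occupancy values.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n_samples,) + points.shape)
    samples = points[None] + noise                 # [n_samples, n_pts, 3]
    occ = np.stack([occ_fn(s) for s in samples])   # [n_samples, n_pts]
    return occ.mean(axis=0)

# Toy occupancy: 1 inside the unit sphere, 0 outside.
occ_fn = lambda p: (np.linalg.norm(p, axis=-1) < 1.0).astype(np.float64)
pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
smooth = smoothed_occupancy(occ_fn, pts)
```

Because the noise is an explicit input rather than internal sampling, the smoothed loss stays differentiable with respect to the pose parameters that produced `points` — the essence of the reparameterization trick.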
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/image/color_space/tests/srgb_test.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for srgb.""" from absl.testing import flagsaver from absl.testing import parameterized import numpy as np from tensorflow_graphics.image.color_space import linear_rgb from tensorflow_graphics.image.color_space import srgb from tensorflow_graphics.util import test_case class SrgbTest(test_case.TestCase): def test_cycle_linear_rgb_srgb_linear_rgb_for_random_input(self): """Tests loop from linear RGB to sRGB and back for random inputs.""" tensor_size = np.random.randint(3) tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist() linear_input = np.random.uniform(size=tensor_shape + [3]) srgb_output = srgb.from_linear_rgb(linear_input) linear_reverse = linear_rgb.from_srgb(srgb_output) self.assertAllClose(linear_input, linear_reverse) @parameterized.parameters( (((0., 0.5, 1.), (0.00312, 0.0031308, 0.00314)), ((0., 0.735357, 1.), (0.04031, 0.04045, 0.040567))),) def test_from_linear_rgb_preset(self, test_inputs, test_outputs): """Tests conversion from linear to sRGB color space for preset inputs.""" self.assert_output_is_correct(srgb.from_linear_rgb, (test_inputs,), (test_outputs,)) def test_from_linear_rgb_jacobian_random(self): """Tests the Jacobian of the from_linear_rgb function for random inputs.""" tensor_size = np.random.randint(3) tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist() linear_random_init = np.random.uniform(size=tensor_shape + [3]) 
self.assert_jacobian_is_correct_fn(srgb.from_linear_rgb, [linear_random_init]) @parameterized.parameters((np.array((0., 0.001, 0.002)),), (np.array( (0.004, 0.005, 1.)),), (np.array((0.00312, 0.004, 0.00314)),)) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_from_linear_rgb_jacobian_preset(self, inputs_init): """Tests the Jacobian of the from_linear_rgb function for preset inputs.""" self.assert_jacobian_is_correct_fn(srgb.from_linear_rgb, [inputs_init]) @parameterized.parameters( ((3,),), ((None, None, None, 3),), ) def test_from_linear_rgb_exception_not_raised(self, *shape): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised(srgb.from_linear_rgb, shape) @parameterized.parameters( ("must have a rank greater than 0", ()), ("must have exactly 3 dimensions in axis -1", (2, 3, 4)), ) def test_from_linear_rgb_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are properly raised.""" self.assert_exception_is_raised(srgb.from_linear_rgb, error_msg, shape) if __name__ == "__main__": test_case.main()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for srgb.""" from absl.testing import flagsaver from absl.testing import parameterized import numpy as np from tensorflow_graphics.image.color_space import linear_rgb from tensorflow_graphics.image.color_space import srgb from tensorflow_graphics.util import test_case class SrgbTest(test_case.TestCase): def test_cycle_linear_rgb_srgb_linear_rgb_for_random_input(self): """Tests loop from linear RGB to sRGB and back for random inputs.""" tensor_size = np.random.randint(3) tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist() linear_input = np.random.uniform(size=tensor_shape + [3]) srgb_output = srgb.from_linear_rgb(linear_input) linear_reverse = linear_rgb.from_srgb(srgb_output) self.assertAllClose(linear_input, linear_reverse) @parameterized.parameters( (((0., 0.5, 1.), (0.00312, 0.0031308, 0.00314)), ((0., 0.735357, 1.), (0.04031, 0.04045, 0.040567))),) def test_from_linear_rgb_preset(self, test_inputs, test_outputs): """Tests conversion from linear to sRGB color space for preset inputs.""" self.assert_output_is_correct(srgb.from_linear_rgb, (test_inputs,), (test_outputs,)) def test_from_linear_rgb_jacobian_random(self): """Tests the Jacobian of the from_linear_rgb function for random inputs.""" tensor_size = np.random.randint(3) tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist() linear_random_init = np.random.uniform(size=tensor_shape + [3]) 
self.assert_jacobian_is_correct_fn(srgb.from_linear_rgb, [linear_random_init]) @parameterized.parameters((np.array((0., 0.001, 0.002)),), (np.array( (0.004, 0.005, 1.)),), (np.array((0.00312, 0.004, 0.00314)),)) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_from_linear_rgb_jacobian_preset(self, inputs_init): """Tests the Jacobian of the from_linear_rgb function for preset inputs.""" self.assert_jacobian_is_correct_fn(srgb.from_linear_rgb, [inputs_init]) @parameterized.parameters( ((3,),), ((None, None, None, 3),), ) def test_from_linear_rgb_exception_not_raised(self, *shape): """Tests that the shape exceptions are not raised.""" self.assert_exception_is_not_raised(srgb.from_linear_rgb, shape) @parameterized.parameters( ("must have a rank greater than 0", ()), ("must have exactly 3 dimensions in axis -1", (2, 3, 4)), ) def test_from_linear_rgb_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are properly raised.""" self.assert_exception_is_raised(srgb.from_linear_rgb, error_msg, shape) if __name__ == "__main__": test_case.main()
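The preset values in the srgb test above (the `0.0031308` threshold and outputs near `0.04045`) follow the standard sRGB transfer function from IEC 61966-2-1; a NumPy sketch of the linear-to-sRGB direction:

```python
import numpy as np

def from_linear_rgb(linear):
    """Standard sRGB opto-electronic transfer function (IEC 61966-2-1)."""
    linear = np.asarray(linear, dtype=np.float64)
    low = 12.92 * linear  # linear segment below the threshold
    high = 1.055 * np.power(np.maximum(linear, 1e-12), 1.0 / 2.4) - 0.055
    return np.where(linear <= 0.0031308, low, high)

srgb = from_linear_rgb([0.0, 0.0031308, 0.5, 1.0])
```

At the threshold, `12.92 * 0.0031308 ≈ 0.04045`, matching the test's expected output, and `from_linear_rgb(0.5) ≈ 0.735357` matches the preset case.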
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/projects/local_implicit_grid/core/model_g2g.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Model for voxel grid-to-grid encoding.""" import math import tensorflow.compat.v1 as tf from tensorflow_graphics.projects.local_implicit_grid.core import local_implicit_grid_layer as lig layers = tf.keras.layers class ResBlock3D(layers.Layer): """3D convolutional Residue Block. Maintains same resolution. """ def __init__(self, neck_channels, out_channels, final_relu=True): """Initialization. Args: neck_channels: int, number of channels in bottleneck layer. out_channels: int, number of output channels. final_relu: bool, add relu to the last layer. 
""" super(ResBlock3D, self).__init__() self.neck_channels = neck_channels self.out_channels = out_channels self.conv1 = layers.Conv3D(neck_channels, kernel_size=1, strides=1) self.conv2 = layers.Conv3D( neck_channels, kernel_size=3, strides=1, padding='same') self.conv3 = layers.Conv3D(out_channels, kernel_size=1, strides=1) self.bn1 = layers.BatchNormalization(axis=-1) self.bn2 = layers.BatchNormalization(axis=-1) self.bn3 = layers.BatchNormalization(axis=-1) self.shortcut = layers.Conv3D(out_channels, kernel_size=1, strides=1) self.final_relu = final_relu def call(self, x, training=False): identity = x x = self.conv1(x) x = self.bn1(x, training=training) x = tf.nn.relu(x) x = self.conv2(x) x = self.bn2(x, training=training) x = tf.nn.relu(x) x = self.conv3(x) x = self.bn3(x, training=training) x += self.shortcut(identity) if self.final_relu: x = tf.nn.relu(x) return x class UNet3D(tf.keras.layers.Layer): """UNet that consumes even dimension grid and outputs even dimension grid.""" def __init__(self, in_grid_res=32, out_grid_res=16, num_filters=16, max_filters=512, out_features=32, name='unet3d'): """Initialization. Args: in_grid_res: int, input grid resolution, must be powers of 2. out_grid_res: int, output grid resolution, must be powers of 2. num_filters: int, number of feature layers at smallest grid resolution. max_filters: int, max number of feature layers at any resolution. out_features: int, number of output feature channels. name: str, name of the layer. Raises: ValueError: if in_grid_res or out_grid_res is not powers of 2. 
""" super(UNet3D, self).__init__(name=name) self.in_grid_res = in_grid_res self.out_grid_res = out_grid_res self.num_filters = num_filters self.max_filters = max_filters self.out_features = out_features # assert dimensions acceptable if math.log(out_grid_res, 2) % 1 != 0 or math.log(in_grid_res, 2) % 1 != 0: raise ValueError('in_grid_res and out_grid_res must be 2**n.') self.num_in_level = math.log(self.in_grid_res, 2) self.num_out_level = math.log(self.out_grid_res, 2) self.num_in_level = int(self.num_in_level) # number of input levels self.num_out_level = int(self.num_out_level) # number of output levels self._create_layers() def _create_layers(self): num_filter_down = [ self.num_filters * (2**(i + 1)) for i in range(self.num_in_level) ] # num. features in downward path num_filter_down = [ n if n <= self.max_filters else self.max_filters for n in num_filter_down ] num_filter_up = num_filter_down[::-1][:self.num_out_level] self.num_filter_down = num_filter_down self.num_filter_up = num_filter_up self.conv_in = ResBlock3D(self.num_filters, self.num_filters) self.conv_out = ResBlock3D( self.out_features, self.out_features, final_relu=False) self.down_modules = [ResBlock3D(int(n / 2), n) for n in num_filter_down] self.up_modules = [ResBlock3D(n, n) for n in num_filter_up] self.dnpool = layers.MaxPool3D((2, 2, 2)) self.upsamp = layers.UpSampling3D((2, 2, 2)) self.up_final = layers.UpSampling3D((2, 2, 2)) def call(self, x, training=False): """Forward method. Args: x: `[batch, in_grid_res, in_grid_res, in_grid_res, in_features]` tensor, input voxel grid. training: bool, flag indicating whether model is in training mode. Returns: `[batch, out_grid_res, out_grid_res, out_grid_res, out_features]` tensor, output voxel grid. 
""" x = self.conv_in(x) x_dns = [x] for mod in self.down_modules: x_ = self.dnpool(mod(x_dns[-1], training=training)) x_dns.append(x_) x_ups = [x_dns.pop(-1)] for mod in self.up_modules: x_ = tf.concat([self.upsamp(x_ups[-1]), x_dns.pop(-1)], axis=-1) x_ = mod(x_, training=training) x_ups.append(x_) x = self.conv_out(x_ups[-1]) return x class UNet3DOdd(tf.keras.layers.Layer): """UNet that consumes even dimension grid and outputs odd dimension grid.""" def __init__(self, in_grid_res=32, out_grid_res=15, num_filters=16, max_filters=512, out_features=32, name='unet3dodd'): """Initialization. Args: in_grid_res: int, input grid resolution, must be powers of 2. out_grid_res: int, output grid resolution, must be powers of 2. num_filters: int, number of feature layers at smallest grid resolution. max_filters: int, max number of feature layers at any resolution. out_features: int, number of output feature channels. name: str, name of the layer. Raises: ValueError: if in_grid_res or out_grid_res are not 2**n or 2**n-1 for some positive integer n. """ super(UNet3DOdd, self).__init__(name=name) self.in_grid_res = in_grid_res self.out_grid_res = out_grid_res self.num_filters = num_filters self.max_filters = max_filters self.out_features = out_features # assert dimensions acceptable if math.log(out_grid_res + 1, 2) % 1 != 0 or math.log(in_grid_res, 2) % 1 != 0: raise ValueError( 'in_grid_res must be 2**n, out_grid_res must be 2**n-1.') self.num_in_level = math.log(self.in_grid_res, 2) self.num_out_level = math.log(self.out_grid_res + 1, 2) self.num_in_level = int(self.num_in_level) # number of input levels self.num_out_level = int(self.num_out_level) # number of output levels self._create_layers() def _create_layers(self): num_filter_down = [ self.num_filters * (2**(i + 1)) for i in range(self.num_in_level) ] # num. 
features in downward path num_filter_down = [ n if n <= self.max_filters else self.max_filters for n in num_filter_down ] num_filter_up = num_filter_down[::-1][1:self.num_out_level] self.num_filter_down = num_filter_down self.num_filter_up = num_filter_up self.conv_in = ResBlock3D(self.num_filters, self.num_filters) self.conv_out = ResBlock3D( self.out_features, self.out_features, final_relu=False) self.down_modules = [ResBlock3D(int(n / 2), n) for n in num_filter_down] self.up_modules = [ResBlock3D(n, n) for n in num_filter_up] self.dnpool = layers.MaxPool3D((2, 2, 2)) self.upsamp = layers.UpSampling3D((2, 2, 2)) self.up_final = layers.UpSampling3D((2, 2, 2)) def call(self, x, training=False): """Forward method. Args: x: `[batch, in_grid_res, in_grid_res, in_grid_res, in_features]` tensor, input voxel grid. training: bool, flag indicating whether model is in training mode. Returns: `[batch, out_grid_res, out_grid_res, out_grid_res, out_features]` tensor, output voxel grid. """ x = self.conv_in(x) x_dns = [x] for mod in self.down_modules: x_ = self.dnpool(mod(x_dns[-1], training=training)) x_dns.append(x_) x_ups = [x_dns.pop(-1)] for mod in self.up_modules: x_ = tf.concat([self.upsamp(x_ups[-1]), x_dns.pop(-1)], axis=-1) x_ = mod(x_, training=training) x_ups.append(x_) # odd layer x = self.upsamp(x_ups[-1])[:, :-1, :-1, :-1, :] x = self.conv_out(x) return x class ModelG2G(tf.keras.Model): """Grid-to-Grid Model with U-Net skip connections.""" def __init__(self, in_grid_res=32, out_grid_res=8, num_filters=256, codelen=128, out_features=1, net_type='imnet', name='g2g'): """Initialization. Args: in_grid_res: int, input grid resolution, must be powers of 2. out_grid_res: int, output grid resolution, must be powers of 2. num_filters: int, number of feature layers at smallest grid resolution. codelen: int, length of local latent codes. out_features: int, number of output feature channels. net_type: str, implicit function network architecture. imnet/deepsdf. 
name: str, name of the layer. Raises: NotImplementedError: if net_type is not imnet or deepsdf. ValueError: if in_grid_res or out_grid_res is not powers of 2. """ super(ModelG2G, self).__init__(name=name) if math.log(out_grid_res, 2) % 1 != 0 or math.log(in_grid_res, 2) % 1 != 0: raise ValueError('in_grid_res and out_grid_res must be powers of 2.') if net_type not in ['imnet', 'deepsdf']: raise NotImplementedError self.codelen = codelen self.out_features = out_features self.in_grid_res = in_grid_res self.out_grid_res = out_grid_res self.num_filters = num_filters self.net_type = net_type self.outgrid = None self.unet = UNet3D( in_grid_res=in_grid_res, out_grid_res=out_grid_res, num_filters=num_filters, out_features=codelen) self.lig = lig.LocalImplicitGrid( size=(out_grid_res, out_grid_res, out_grid_res), in_features=codelen, out_features=out_features, net_type=net_type) def call(self, voxgrid, pts, training=False): """Forward method. Args: voxgrid: `[batch, inres, inres, inres, nc]` tensor, input voxel grid. pts: `[batch, num_points, 3]` tensor, input query points. training: bool, flag indicating whether model is in training mode. Returns: vals: `[batch, num_points, 3]` tensor, predicted values at query points. """ self.outgrid = self.unet(voxgrid, training=training) val = self.lig(self.outgrid, pts, training=training) return val
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Model for voxel grid-to-grid encoding.""" import math import tensorflow.compat.v1 as tf from tensorflow_graphics.projects.local_implicit_grid.core import local_implicit_grid_layer as lig layers = tf.keras.layers class ResBlock3D(layers.Layer): """3D convolutional Residue Block. Maintains same resolution. """ def __init__(self, neck_channels, out_channels, final_relu=True): """Initialization. Args: neck_channels: int, number of channels in bottleneck layer. out_channels: int, number of output channels. final_relu: bool, add relu to the last layer. 
""" super(ResBlock3D, self).__init__() self.neck_channels = neck_channels self.out_channels = out_channels self.conv1 = layers.Conv3D(neck_channels, kernel_size=1, strides=1) self.conv2 = layers.Conv3D( neck_channels, kernel_size=3, strides=1, padding='same') self.conv3 = layers.Conv3D(out_channels, kernel_size=1, strides=1) self.bn1 = layers.BatchNormalization(axis=-1) self.bn2 = layers.BatchNormalization(axis=-1) self.bn3 = layers.BatchNormalization(axis=-1) self.shortcut = layers.Conv3D(out_channels, kernel_size=1, strides=1) self.final_relu = final_relu def call(self, x, training=False): identity = x x = self.conv1(x) x = self.bn1(x, training=training) x = tf.nn.relu(x) x = self.conv2(x) x = self.bn2(x, training=training) x = tf.nn.relu(x) x = self.conv3(x) x = self.bn3(x, training=training) x += self.shortcut(identity) if self.final_relu: x = tf.nn.relu(x) return x class UNet3D(tf.keras.layers.Layer): """UNet that consumes even dimension grid and outputs even dimension grid.""" def __init__(self, in_grid_res=32, out_grid_res=16, num_filters=16, max_filters=512, out_features=32, name='unet3d'): """Initialization. Args: in_grid_res: int, input grid resolution, must be powers of 2. out_grid_res: int, output grid resolution, must be powers of 2. num_filters: int, number of feature layers at smallest grid resolution. max_filters: int, max number of feature layers at any resolution. out_features: int, number of output feature channels. name: str, name of the layer. Raises: ValueError: if in_grid_res or out_grid_res is not powers of 2. 
""" super(UNet3D, self).__init__(name=name) self.in_grid_res = in_grid_res self.out_grid_res = out_grid_res self.num_filters = num_filters self.max_filters = max_filters self.out_features = out_features # assert dimensions acceptable if math.log(out_grid_res, 2) % 1 != 0 or math.log(in_grid_res, 2) % 1 != 0: raise ValueError('in_grid_res and out_grid_res must be 2**n.') self.num_in_level = math.log(self.in_grid_res, 2) self.num_out_level = math.log(self.out_grid_res, 2) self.num_in_level = int(self.num_in_level) # number of input levels self.num_out_level = int(self.num_out_level) # number of output levels self._create_layers() def _create_layers(self): num_filter_down = [ self.num_filters * (2**(i + 1)) for i in range(self.num_in_level) ] # num. features in downward path num_filter_down = [ n if n <= self.max_filters else self.max_filters for n in num_filter_down ] num_filter_up = num_filter_down[::-1][:self.num_out_level] self.num_filter_down = num_filter_down self.num_filter_up = num_filter_up self.conv_in = ResBlock3D(self.num_filters, self.num_filters) self.conv_out = ResBlock3D( self.out_features, self.out_features, final_relu=False) self.down_modules = [ResBlock3D(int(n / 2), n) for n in num_filter_down] self.up_modules = [ResBlock3D(n, n) for n in num_filter_up] self.dnpool = layers.MaxPool3D((2, 2, 2)) self.upsamp = layers.UpSampling3D((2, 2, 2)) self.up_final = layers.UpSampling3D((2, 2, 2)) def call(self, x, training=False): """Forward method. Args: x: `[batch, in_grid_res, in_grid_res, in_grid_res, in_features]` tensor, input voxel grid. training: bool, flag indicating whether model is in training mode. Returns: `[batch, out_grid_res, out_grid_res, out_grid_res, out_features]` tensor, output voxel grid. 
""" x = self.conv_in(x) x_dns = [x] for mod in self.down_modules: x_ = self.dnpool(mod(x_dns[-1], training=training)) x_dns.append(x_) x_ups = [x_dns.pop(-1)] for mod in self.up_modules: x_ = tf.concat([self.upsamp(x_ups[-1]), x_dns.pop(-1)], axis=-1) x_ = mod(x_, training=training) x_ups.append(x_) x = self.conv_out(x_ups[-1]) return x class UNet3DOdd(tf.keras.layers.Layer): """UNet that consumes even dimension grid and outputs odd dimension grid.""" def __init__(self, in_grid_res=32, out_grid_res=15, num_filters=16, max_filters=512, out_features=32, name='unet3dodd'): """Initialization. Args: in_grid_res: int, input grid resolution, must be powers of 2. out_grid_res: int, output grid resolution, must be powers of 2. num_filters: int, number of feature layers at smallest grid resolution. max_filters: int, max number of feature layers at any resolution. out_features: int, number of output feature channels. name: str, name of the layer. Raises: ValueError: if in_grid_res or out_grid_res are not 2**n or 2**n-1 for some positive integer n. """ super(UNet3DOdd, self).__init__(name=name) self.in_grid_res = in_grid_res self.out_grid_res = out_grid_res self.num_filters = num_filters self.max_filters = max_filters self.out_features = out_features # assert dimensions acceptable if math.log(out_grid_res + 1, 2) % 1 != 0 or math.log(in_grid_res, 2) % 1 != 0: raise ValueError( 'in_grid_res must be 2**n, out_grid_res must be 2**n-1.') self.num_in_level = math.log(self.in_grid_res, 2) self.num_out_level = math.log(self.out_grid_res + 1, 2) self.num_in_level = int(self.num_in_level) # number of input levels self.num_out_level = int(self.num_out_level) # number of output levels self._create_layers() def _create_layers(self): num_filter_down = [ self.num_filters * (2**(i + 1)) for i in range(self.num_in_level) ] # num. 
features in downward path num_filter_down = [ n if n <= self.max_filters else self.max_filters for n in num_filter_down ] num_filter_up = num_filter_down[::-1][1:self.num_out_level] self.num_filter_down = num_filter_down self.num_filter_up = num_filter_up self.conv_in = ResBlock3D(self.num_filters, self.num_filters) self.conv_out = ResBlock3D( self.out_features, self.out_features, final_relu=False) self.down_modules = [ResBlock3D(int(n / 2), n) for n in num_filter_down] self.up_modules = [ResBlock3D(n, n) for n in num_filter_up] self.dnpool = layers.MaxPool3D((2, 2, 2)) self.upsamp = layers.UpSampling3D((2, 2, 2)) self.up_final = layers.UpSampling3D((2, 2, 2)) def call(self, x, training=False): """Forward method. Args: x: `[batch, in_grid_res, in_grid_res, in_grid_res, in_features]` tensor, input voxel grid. training: bool, flag indicating whether model is in training mode. Returns: `[batch, out_grid_res, out_grid_res, out_grid_res, out_features]` tensor, output voxel grid. """ x = self.conv_in(x) x_dns = [x] for mod in self.down_modules: x_ = self.dnpool(mod(x_dns[-1], training=training)) x_dns.append(x_) x_ups = [x_dns.pop(-1)] for mod in self.up_modules: x_ = tf.concat([self.upsamp(x_ups[-1]), x_dns.pop(-1)], axis=-1) x_ = mod(x_, training=training) x_ups.append(x_) # odd layer x = self.upsamp(x_ups[-1])[:, :-1, :-1, :-1, :] x = self.conv_out(x) return x class ModelG2G(tf.keras.Model): """Grid-to-Grid Model with U-Net skip connections.""" def __init__(self, in_grid_res=32, out_grid_res=8, num_filters=256, codelen=128, out_features=1, net_type='imnet', name='g2g'): """Initialization. Args: in_grid_res: int, input grid resolution, must be powers of 2. out_grid_res: int, output grid resolution, must be powers of 2. num_filters: int, number of feature layers at smallest grid resolution. codelen: int, length of local latent codes. out_features: int, number of output feature channels. net_type: str, implicit function network architecture. imnet/deepsdf. 
name: str, name of the layer. Raises: NotImplementedError: if net_type is not imnet or deepsdf. ValueError: if in_grid_res or out_grid_res is not powers of 2. """ super(ModelG2G, self).__init__(name=name) if math.log(out_grid_res, 2) % 1 != 0 or math.log(in_grid_res, 2) % 1 != 0: raise ValueError('in_grid_res and out_grid_res must be powers of 2.') if net_type not in ['imnet', 'deepsdf']: raise NotImplementedError self.codelen = codelen self.out_features = out_features self.in_grid_res = in_grid_res self.out_grid_res = out_grid_res self.num_filters = num_filters self.net_type = net_type self.outgrid = None self.unet = UNet3D( in_grid_res=in_grid_res, out_grid_res=out_grid_res, num_filters=num_filters, out_features=codelen) self.lig = lig.LocalImplicitGrid( size=(out_grid_res, out_grid_res, out_grid_res), in_features=codelen, out_features=out_features, net_type=net_type) def call(self, voxgrid, pts, training=False): """Forward method. Args: voxgrid: `[batch, inres, inres, inres, nc]` tensor, input voxel grid. pts: `[batch, num_points, 3]` tensor, input query points. training: bool, flag indicating whether model is in training mode. Returns: vals: `[batch, num_points, 3]` tensor, predicted values at query points. """ self.outgrid = self.unet(voxgrid, training=training) val = self.lig(self.outgrid, pts, training=training) return val
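The `_create_layers` logic in `UNet3D` derives per-level channel counts by doubling `num_filters` at each downsampling level and capping at `max_filters`, then mirrors the tail of that list for the upward path. A plain-Python sketch of that bookkeeping (the function name `unet_filter_counts` is illustrative, not from the source):

```python
import math

def unet_filter_counts(in_grid_res=32, out_grid_res=16,
                       num_filters=16, max_filters=512):
    """Channel counts along the UNet3D down and up paths."""
    num_in_level = int(math.log(in_grid_res, 2))    # levels in the down path
    num_out_level = int(math.log(out_grid_res, 2))  # levels in the up path
    down = [min(num_filters * 2 ** (i + 1), max_filters)
            for i in range(num_in_level)]
    up = down[::-1][:num_out_level]
    return down, up

down, up = unet_filter_counts()
```

With the defaults, the down path widens as 32, 64, 128, 256, 512 channels and the up path narrows back from 512 to 64, which is why both `in_grid_res` and `out_grid_res` must be powers of two.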
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/geometry/deformation_energy/as_conformal_as_possible.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module implements TensorFlow As Rigid As Possible utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.geometry.transformation import quaternion from tensorflow_graphics.math import vector from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def energy(vertices_rest_pose, vertices_deformed_pose, quaternions, edges, vertex_weight=None, edge_weight=None, conformal_energy=True, aggregate_loss=True, name=None): """Estimates an As Conformal As Possible (ACAP) fitting energy. For a given mesh in rest pose, this function evaluates a variant of the ACAP [1] fitting energy for a batch of deformed meshes. The vertex weights and edge weights are defined on the rest pose. The method implemented here is similar to [2], but with an added free variable capturing a scale factor per vertex. [1]: Yusuke Yoshiyasu, Wan-Chun Ma, Eiichi Yoshida, and Fumio Kanehiro. "As-Conformal-As-Possible Surface Registration." Computer Graphics Forum. Vol. 33. No. 5. 2014.</br> [2]: Olga Sorkine, and Marc Alexa. "As-rigid-as-possible surface modeling". Symposium on Geometry Processing. Vol. 4. 2007. Note: In the description of the arguments, V corresponds to the number of vertices in the mesh, and E to the number of edges in this mesh. 
Note: In the following, A1 to An are optional batch dimensions. Args: vertices_rest_pose: A tensor of shape `[V, 3]` containing the position of all the vertices of the mesh in rest pose. vertices_deformed_pose: A tensor of shape `[A1, ..., An, V, 3]` containing the position of all the vertices of the mesh in deformed pose. quaternions: A tensor of shape `[A1, ..., An, V, 4]` defining a rigid transformation to apply to each vertex of the rest pose. See Section 2 from [1] for further details. edges: A tensor of shape `[E, 2]` defining indices of vertices that are connected by an edge. vertex_weight: An optional tensor of shape `[V]` defining the weight associated with each vertex. Defaults to a tensor of ones. edge_weight: A tensor of shape `[E]` defining the weight of edges. Common choices for these weights include uniform weighting, and cotangent weights. Defaults to a tensor of ones. conformal_energy: A `bool` indicating whether each vertex is associated with a scale factor or not. If this parameter is True, scaling information must be encoded in the norm of `quaternions`. If this parameter is False, this function implements the energy described in [2]. aggregate_loss: A `bool` defining whether the returned loss should be an aggregate measure. When True, the mean squared error is returned. When False, returns two losses for every edge of the mesh. name: A name for this op. Defaults to "as_conformal_as_possible_energy". Returns: When aggregate_loss is `True`, returns a tensor of shape `[A1, ..., An]` containing the ACAP energies. When aggregate_loss is `False`, returns a tensor of shape `[A1, ..., An, 2*E]` containing each term of the summation described in the equation 7 of [2]. Raises: ValueError: if the shape of `vertices_rest_pose`, `vertices_deformed_pose`, `quaternions`, `edges`, `vertex_weight`, or `edge_weight` is not supported. 
""" with tf.compat.v1.name_scope(name, "as_conformal_as_possible_energy", [ vertices_rest_pose, vertices_deformed_pose, quaternions, edges, conformal_energy, vertex_weight, edge_weight ]): vertices_rest_pose = tf.convert_to_tensor(value=vertices_rest_pose) vertices_deformed_pose = tf.convert_to_tensor(value=vertices_deformed_pose) quaternions = tf.convert_to_tensor(value=quaternions) edges = tf.convert_to_tensor(value=edges) if vertex_weight is not None: vertex_weight = tf.convert_to_tensor(value=vertex_weight) if edge_weight is not None: edge_weight = tf.convert_to_tensor(value=edge_weight) shape.check_static( tensor=vertices_rest_pose, tensor_name="vertices_rest_pose", has_rank=2, has_dim_equals=(-1, 3)) shape.check_static( tensor=vertices_deformed_pose, tensor_name="vertices_deformed_pose", has_rank_greater_than=1, has_dim_equals=(-1, 3)) shape.check_static( tensor=quaternions, tensor_name="quaternions", has_rank_greater_than=1, has_dim_equals=(-1, 4)) shape.compare_batch_dimensions( tensors=(vertices_deformed_pose, quaternions), last_axes=(-3, -3), broadcast_compatible=False) shape.check_static( tensor=edges, tensor_name="edges", has_rank=2, has_dim_equals=(-1, 2)) tensors_with_vertices = [vertices_rest_pose, vertices_deformed_pose, quaternions] names_with_vertices = ["vertices_rest_pose", "vertices_deformed_pose", "quaternions"] axes_with_vertices = [-2, -2, -2] if vertex_weight is not None: shape.check_static( tensor=vertex_weight, tensor_name="vertex_weight", has_rank=1) tensors_with_vertices.append(vertex_weight) names_with_vertices.append("vertex_weight") axes_with_vertices.append(0) shape.compare_dimensions( tensors=tensors_with_vertices, axes=axes_with_vertices, tensor_names=names_with_vertices) if edge_weight is not None: shape.check_static( tensor=edge_weight, tensor_name="edge_weight", has_rank=1) shape.compare_dimensions( tensors=(edges, edge_weight), axes=(0, 0), tensor_names=("edges", "edge_weight")) if not conformal_energy: quaternions = 
quaternion.normalize(quaternions) # Extracts the indices of vertices. indices_i, indices_j = tf.unstack(edges, axis=-1) # Extracts the vertices we need per term. vertices_i_rest = tf.gather(vertices_rest_pose, indices_i, axis=-2) vertices_j_rest = tf.gather(vertices_rest_pose, indices_j, axis=-2) vertices_i_deformed = tf.gather(vertices_deformed_pose, indices_i, axis=-2) vertices_j_deformed = tf.gather(vertices_deformed_pose, indices_j, axis=-2) # Extracts the weights we need per term. weights_shape = vertices_i_rest.shape.as_list()[-2] if vertex_weight is not None: weight_i = tf.gather(vertex_weight, indices_i) weight_j = tf.gather(vertex_weight, indices_j) else: weight_i = weight_j = tf.ones( weights_shape, dtype=vertices_rest_pose.dtype) weight_i = tf.expand_dims(weight_i, axis=-1) weight_j = tf.expand_dims(weight_j, axis=-1) if edge_weight is not None: weight_ij = edge_weight else: weight_ij = tf.ones(weights_shape, dtype=vertices_rest_pose.dtype) weight_ij = tf.expand_dims(weight_ij, axis=-1) # Extracts the rotation we need per term. quaternion_i = tf.gather(quaternions, indices_i, axis=-2) quaternion_j = tf.gather(quaternions, indices_j, axis=-2) # Computes the energy. 
deformed_ij = vertices_i_deformed - vertices_j_deformed rotated_rest_ij = quaternion.rotate((vertices_i_rest - vertices_j_rest), quaternion_i) energy_ij = weight_i * weight_ij * (deformed_ij - rotated_rest_ij) deformed_ji = vertices_j_deformed - vertices_i_deformed rotated_rest_ji = quaternion.rotate((vertices_j_rest - vertices_i_rest), quaternion_j) energy_ji = weight_j * weight_ij * (deformed_ji - rotated_rest_ji) energy_ij_squared = vector.dot(energy_ij, energy_ij, keepdims=False) energy_ji_squared = vector.dot(energy_ji, energy_ji, keepdims=False) if aggregate_loss: average_energy_ij = tf.reduce_mean( input_tensor=energy_ij_squared, axis=-1) average_energy_ji = tf.reduce_mean( input_tensor=energy_ji_squared, axis=-1) return (average_energy_ij + average_energy_ji) / 2.0 return tf.concat((energy_ij_squared, energy_ji_squared), axis=-1) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module implements TensorFlow As Rigid As Possible utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.geometry.transformation import quaternion from tensorflow_graphics.math import vector from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def energy(vertices_rest_pose, vertices_deformed_pose, quaternions, edges, vertex_weight=None, edge_weight=None, conformal_energy=True, aggregate_loss=True, name=None): """Estimates an As Conformal As Possible (ACAP) fitting energy. For a given mesh in rest pose, this function evaluates a variant of the ACAP [1] fitting energy for a batch of deformed meshes. The vertex weights and edge weights are defined on the rest pose. The method implemented here is similar to [2], but with an added free variable capturing a scale factor per vertex. [1]: Yusuke Yoshiyasu, Wan-Chun Ma, Eiichi Yoshida, and Fumio Kanehiro. "As-Conformal-As-Possible Surface Registration." Computer Graphics Forum. Vol. 33. No. 5. 2014.<br> [2]: Olga Sorkine, and Marc Alexa. "As-rigid-as-possible surface modeling". Symposium on Geometry Processing. Vol. 4. 2007. Note: In the description of the arguments, V corresponds to the number of vertices in the mesh, and E to the number of edges in this mesh. 
Note: In the following, A1 to An are optional batch dimensions. Args: vertices_rest_pose: A tensor of shape `[V, 3]` containing the position of all the vertices of the mesh in rest pose. vertices_deformed_pose: A tensor of shape `[A1, ..., An, V, 3]` containing the position of all the vertices of the mesh in deformed pose. quaternions: A tensor of shape `[A1, ..., An, V, 4]` defining a rigid transformation to apply to each vertex of the rest pose. See Section 2 from [1] for further details. edges: A tensor of shape `[E, 2]` defining indices of vertices that are connected by an edge. vertex_weight: An optional tensor of shape `[V]` defining the weight associated with each vertex. Defaults to a tensor of ones. edge_weight: A tensor of shape `[E]` defining the weight of edges. Common choices for these weights include uniform weighting, and cotangent weights. Defaults to a tensor of ones. conformal_energy: A `bool` indicating whether each vertex is associated with a scale factor or not. If this parameter is True, scaling information must be encoded in the norm of `quaternions`. If this parameter is False, this function implements the energy described in [2]. aggregate_loss: A `bool` defining whether the returned loss should be an aggregate measure. When True, the mean squared error is returned. When False, returns two losses for every edge of the mesh. name: A name for this op. Defaults to "as_conformal_as_possible_energy". Returns: When aggregate_loss is `True`, returns a tensor of shape `[A1, ..., An]` containing the ACAP energies. When aggregate_loss is `False`, returns a tensor of shape `[A1, ..., An, 2*E]` containing each term of the summation described in the equation 7 of [2]. Raises: ValueError: if the shape of `vertices_rest_pose`, `vertices_deformed_pose`, `quaternions`, `edges`, `vertex_weight`, or `edge_weight` is not supported. 
""" with tf.compat.v1.name_scope(name, "as_conformal_as_possible_energy", [ vertices_rest_pose, vertices_deformed_pose, quaternions, edges, conformal_energy, vertex_weight, edge_weight ]): vertices_rest_pose = tf.convert_to_tensor(value=vertices_rest_pose) vertices_deformed_pose = tf.convert_to_tensor(value=vertices_deformed_pose) quaternions = tf.convert_to_tensor(value=quaternions) edges = tf.convert_to_tensor(value=edges) if vertex_weight is not None: vertex_weight = tf.convert_to_tensor(value=vertex_weight) if edge_weight is not None: edge_weight = tf.convert_to_tensor(value=edge_weight) shape.check_static( tensor=vertices_rest_pose, tensor_name="vertices_rest_pose", has_rank=2, has_dim_equals=(-1, 3)) shape.check_static( tensor=vertices_deformed_pose, tensor_name="vertices_deformed_pose", has_rank_greater_than=1, has_dim_equals=(-1, 3)) shape.check_static( tensor=quaternions, tensor_name="quaternions", has_rank_greater_than=1, has_dim_equals=(-1, 4)) shape.compare_batch_dimensions( tensors=(vertices_deformed_pose, quaternions), last_axes=(-3, -3), broadcast_compatible=False) shape.check_static( tensor=edges, tensor_name="edges", has_rank=2, has_dim_equals=(-1, 2)) tensors_with_vertices = [vertices_rest_pose, vertices_deformed_pose, quaternions] names_with_vertices = ["vertices_rest_pose", "vertices_deformed_pose", "quaternions"] axes_with_vertices = [-2, -2, -2] if vertex_weight is not None: shape.check_static( tensor=vertex_weight, tensor_name="vertex_weight", has_rank=1) tensors_with_vertices.append(vertex_weight) names_with_vertices.append("vertex_weight") axes_with_vertices.append(0) shape.compare_dimensions( tensors=tensors_with_vertices, axes=axes_with_vertices, tensor_names=names_with_vertices) if edge_weight is not None: shape.check_static( tensor=edge_weight, tensor_name="edge_weight", has_rank=1) shape.compare_dimensions( tensors=(edges, edge_weight), axes=(0, 0), tensor_names=("edges", "edge_weight")) if not conformal_energy: quaternions = 
quaternion.normalize(quaternions) # Extracts the indices of vertices. indices_i, indices_j = tf.unstack(edges, axis=-1) # Extracts the vertices we need per term. vertices_i_rest = tf.gather(vertices_rest_pose, indices_i, axis=-2) vertices_j_rest = tf.gather(vertices_rest_pose, indices_j, axis=-2) vertices_i_deformed = tf.gather(vertices_deformed_pose, indices_i, axis=-2) vertices_j_deformed = tf.gather(vertices_deformed_pose, indices_j, axis=-2) # Extracts the weights we need per term. weights_shape = vertices_i_rest.shape.as_list()[-2] if vertex_weight is not None: weight_i = tf.gather(vertex_weight, indices_i) weight_j = tf.gather(vertex_weight, indices_j) else: weight_i = weight_j = tf.ones( weights_shape, dtype=vertices_rest_pose.dtype) weight_i = tf.expand_dims(weight_i, axis=-1) weight_j = tf.expand_dims(weight_j, axis=-1) if edge_weight is not None: weight_ij = edge_weight else: weight_ij = tf.ones(weights_shape, dtype=vertices_rest_pose.dtype) weight_ij = tf.expand_dims(weight_ij, axis=-1) # Extracts the rotation we need per term. quaternion_i = tf.gather(quaternions, indices_i, axis=-2) quaternion_j = tf.gather(quaternions, indices_j, axis=-2) # Computes the energy. 
deformed_ij = vertices_i_deformed - vertices_j_deformed rotated_rest_ij = quaternion.rotate((vertices_i_rest - vertices_j_rest), quaternion_i) energy_ij = weight_i * weight_ij * (deformed_ij - rotated_rest_ij) deformed_ji = vertices_j_deformed - vertices_i_deformed rotated_rest_ji = quaternion.rotate((vertices_j_rest - vertices_i_rest), quaternion_j) energy_ji = weight_j * weight_ij * (deformed_ji - rotated_rest_ji) energy_ij_squared = vector.dot(energy_ij, energy_ij, keepdims=False) energy_ji_squared = vector.dot(energy_ji, energy_ji, keepdims=False) if aggregate_loss: average_energy_ij = tf.reduce_mean( input_tensor=energy_ij_squared, axis=-1) average_energy_ji = tf.reduce_mean( input_tensor=energy_ji_squared, axis=-1) return (average_energy_ij + average_energy_ji) / 2.0 return tf.concat((energy_ij_squared, energy_ji_squared), axis=-1) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
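The per-edge residual that `energy` accumulates (the terms summed in equation 7 of [2]) can be sketched in plain NumPy for a single directed edge. This is an illustrative toy only: the function name is hypothetical, the rotation is given as an explicit 3x3 matrix rather than the quaternion used above, and all coordinates are made up.

```python
import numpy as np

def edge_energy(p_i_rest, p_j_rest, p_i_def, p_j_def, rot_i, w_i=1.0, w_ij=1.0):
    """Squared residual for the directed edge (i, j): how far the deformed
    edge is from the rotated rest edge, scaled by vertex and edge weights."""
    deformed_ij = p_i_def - p_j_def
    rotated_rest_ij = rot_i @ (p_i_rest - p_j_rest)
    residual = w_i * w_ij * (deformed_ij - rotated_rest_ij)
    return float(residual @ residual)

# A rigid 90-degree rotation about z applied to both endpoints is exactly
# explained by rot_i, so the residual (and hence the energy) is zero.
rot_z = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
p_i, p_j = np.array([1., 0., 0.]), np.array([0., 1., 0.])
zero_energy = edge_energy(p_i, p_j, rot_z @ p_i, rot_z @ p_j, rot_z)  # 0.0
```

A deformation that is not a rigid motion of the edge (e.g. a uniform scale with an identity `rot_i`) leaves a positive residual, which is exactly what the batched TensorFlow code above penalizes for every edge in both directions.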
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/projects/local_implicit_grid/core/point_utils.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Additional data utilities for point preprocessing. """ import numpy as np from plyfile import PlyData from plyfile import PlyElement def read_point_ply(filename): """Load point cloud from ply file. Args: filename: str, filename for ply file to load. Returns: v: np.array of shape [#v, 3], vertex coordinates n: np.array of shape [#v, 3], vertex normals """ pd = PlyData.read(filename)['vertex'] v = np.array(np.stack([pd[i] for i in ['x', 'y', 'z']], axis=-1)) n = np.array(np.stack([pd[i] for i in ['nx', 'ny', 'nz']], axis=-1)) return v, n def write_point_ply(filename, v, n): """Write point cloud to ply file. Args: filename: str, filename for ply file to write. v: np.array of shape [#v, 3], vertex coordinates n: np.array of shape [#v, 3], vertex normals """ vn = np.concatenate([v, n], axis=1) vn = [tuple(vn[i]) for i in range(vn.shape[0])] vn = np.array(vn, dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4'), ('nx', 'f4'), ('ny', 'f4'), ('nz', 'f4')]) el = PlyElement.describe(vn, 'vertex') PlyData([el]).write(filename) def np_pad_points(points, ntarget): """Pad point cloud to required size. If number of points is larger than ntarget, take ntarget random samples. If number of points is smaller than ntarget, pad by randomly resampling the existing points. Args: points: `[npoints, nchannel]` np array, where first 3 channels are xyz. ntarget: int, target number of points. 
Returns: result: `[ntarget, nchannel]` np array, padded points to ntarget numbers. """ if points.shape[0] < ntarget: mult = np.ceil(float(ntarget)/float(points.shape[0])) - 1 rand_pool = np.tile(points, [int(mult), 1]) nextra = ntarget-points.shape[0] extra_idx = np.random.choice(rand_pool.shape[0], nextra, replace=False) extra_pts = rand_pool[extra_idx] points_out = np.concatenate([points, extra_pts], axis=0) else: idx_choice = np.random.choice(points.shape[0], size=ntarget, replace=False) points_out = points[idx_choice] return points_out def np_gather_ijk_index(arr, index): arr_flat = arr.reshape(-1, arr.shape[-1]) _, j, k, _ = arr.shape index_transform = index[:, 0]*j*k+index[:, 1]*k+index[:, 2] return arr_flat[index_transform] def np_shifted_crop(v, idx_grid, shift, crop_size, ntarget): """Create a shifted crop.""" nchannel = v.shape[1] vxyz = v[:, :3] - shift * crop_size * 0.5 vall = v.copy() point_idxs = np.arange(v.shape[0]) point_grid_idx = np.floor(vxyz / crop_size).astype(np.int32) valid_mask = np.ones(point_grid_idx.shape[0]).astype(bool) for i in range(3): valid_mask = np.logical_and(valid_mask, point_grid_idx[:, i] >= 0) valid_mask = np.logical_and(valid_mask, point_grid_idx[:, i] < idx_grid.shape[i]) point_grid_idx = point_grid_idx[valid_mask] # translate to global grid index point_grid_idx = np_gather_ijk_index(idx_grid, point_grid_idx) vall = vall[valid_mask] point_idxs = point_idxs[valid_mask] crop_indices, revidx = np.unique(point_grid_idx, axis=0, return_inverse=True) ncrops = crop_indices.shape[0] sortarr = np.argsort(revidx) revidx_sorted = revidx[sortarr] vall_sorted = vall[sortarr] point_idxs_sorted = point_idxs[sortarr] bins = np.searchsorted(revidx_sorted, np.arange(ncrops)) bins = list(bins) + [v.shape[0]] sid = bins[0:-1] eid = bins[1:] # initialize outputs point_crops = np.zeros([ncrops, ntarget, nchannel]) crop_point_idxs = [] # extract crops and pad for i, (s, e) in enumerate(zip(sid, eid)): cropped_points = vall_sorted[s:e] 
crop_point_idx = point_idxs_sorted[s:e] crop_point_idxs.append(crop_point_idx) if cropped_points.shape[0] < ntarget: padded_points = np_pad_points(cropped_points, ntarget=ntarget) else: choice_idx = np.random.choice(cropped_points.shape[0], ntarget, replace=False) padded_points = cropped_points[choice_idx] point_crops[i] = padded_points return point_crops, crop_indices, crop_point_idxs def np_get_occupied_idx(v, xmin=(0., 0., 0.), xmax=(1., 1., 1.), crop_size=.125, ntarget=2048, overlap=True, normalize_crops=False, return_shape=False, return_crop_point_idxs=False): """Get crop indices for point clouds.""" v = v.copy()-xmin xmin = np.array(xmin) xmax = np.array(xmax) r = (xmax-xmin)/crop_size r = np.ceil(r) rr = r.astype(np.int32) if not overlap else (2*r-1).astype(np.int32) # create index grid idx_grid = np.stack(np.meshgrid(np.arange(rr[0]), np.arange(rr[1]), np.arange(rr[2]), indexing='ij'), axis=-1) # [rr[0], rr[1], rr[2], 3] shift_idxs = np.stack( np.meshgrid(np.arange(int(overlap)+1), np.arange(int(overlap)+1), np.arange(int(overlap)+1), indexing='ij'), axis=-1) shift_idxs = np.reshape(shift_idxs, [-1, 3]) point_crops = [] crop_indices = [] crop_point_idxs = [] for i in range(shift_idxs.shape[0]): sft = shift_idxs[i] skp = int(overlap)+1 idg = idx_grid[sft[0]::skp, sft[1]::skp, sft[2]::skp] pc, ci, cpidx = np_shifted_crop(v, idg, sft, crop_size=crop_size, ntarget=ntarget) point_crops.append(pc) crop_indices.append(ci) crop_point_idxs += cpidx point_crops = np.concatenate(point_crops, axis=0) # [ncrops, nsurface, 6] crop_indices = np.concatenate(crop_indices, axis=0) # [ncrops, 3] if normalize_crops: # normalize each crop crop_corners = crop_indices * 0.5 * crop_size crop_centers = crop_corners + 0.5 * crop_size # [ncrops, 3] crop_centers = crop_centers[:, np.newaxis, :] # [ncrops, 1, 3] point_crops[..., :3] = point_crops[..., :3] -crop_centers point_crops[..., :3] = point_crops[..., :3] / crop_size * 2 outputs = [point_crops, crop_indices] if return_shape: 
outputs += [idx_grid.shape[:3]] if return_crop_point_idxs: outputs += [crop_point_idxs] return tuple(outputs)
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Additional data utilities for point preprocessing. """ import numpy as np from plyfile import PlyData from plyfile import PlyElement def read_point_ply(filename): """Load point cloud from ply file. Args: filename: str, filename for ply file to load. Returns: v: np.array of shape [#v, 3], vertex coordinates n: np.array of shape [#v, 3], vertex normals """ pd = PlyData.read(filename)['vertex'] v = np.array(np.stack([pd[i] for i in ['x', 'y', 'z']], axis=-1)) n = np.array(np.stack([pd[i] for i in ['nx', 'ny', 'nz']], axis=-1)) return v, n def write_point_ply(filename, v, n): """Write point cloud to ply file. Args: filename: str, filename for ply file to write. v: np.array of shape [#v, 3], vertex coordinates n: np.array of shape [#v, 3], vertex normals """ vn = np.concatenate([v, n], axis=1) vn = [tuple(vn[i]) for i in range(vn.shape[0])] vn = np.array(vn, dtype=[('x', 'f4'), ('y', 'f4'), ('z', 'f4'), ('nx', 'f4'), ('ny', 'f4'), ('nz', 'f4')]) el = PlyElement.describe(vn, 'vertex') PlyData([el]).write(filename) def np_pad_points(points, ntarget): """Pad point cloud to required size. If number of points is larger than ntarget, take ntarget random samples. If number of points is smaller than ntarget, pad by randomly resampling the existing points. Args: points: `[npoints, nchannel]` np array, where first 3 channels are xyz. ntarget: int, target number of points. 
Returns: result: `[ntarget, nchannel]` np array, padded points to ntarget numbers. """ if points.shape[0] < ntarget: mult = np.ceil(float(ntarget)/float(points.shape[0])) - 1 rand_pool = np.tile(points, [int(mult), 1]) nextra = ntarget-points.shape[0] extra_idx = np.random.choice(rand_pool.shape[0], nextra, replace=False) extra_pts = rand_pool[extra_idx] points_out = np.concatenate([points, extra_pts], axis=0) else: idx_choice = np.random.choice(points.shape[0], size=ntarget, replace=False) points_out = points[idx_choice] return points_out def np_gather_ijk_index(arr, index): arr_flat = arr.reshape(-1, arr.shape[-1]) _, j, k, _ = arr.shape index_transform = index[:, 0]*j*k+index[:, 1]*k+index[:, 2] return arr_flat[index_transform] def np_shifted_crop(v, idx_grid, shift, crop_size, ntarget): """Create a shifted crop.""" nchannel = v.shape[1] vxyz = v[:, :3] - shift * crop_size * 0.5 vall = v.copy() point_idxs = np.arange(v.shape[0]) point_grid_idx = np.floor(vxyz / crop_size).astype(np.int32) valid_mask = np.ones(point_grid_idx.shape[0]).astype(bool) for i in range(3): valid_mask = np.logical_and(valid_mask, point_grid_idx[:, i] >= 0) valid_mask = np.logical_and(valid_mask, point_grid_idx[:, i] < idx_grid.shape[i]) point_grid_idx = point_grid_idx[valid_mask] # translate to global grid index point_grid_idx = np_gather_ijk_index(idx_grid, point_grid_idx) vall = vall[valid_mask] point_idxs = point_idxs[valid_mask] crop_indices, revidx = np.unique(point_grid_idx, axis=0, return_inverse=True) ncrops = crop_indices.shape[0] sortarr = np.argsort(revidx) revidx_sorted = revidx[sortarr] vall_sorted = vall[sortarr] point_idxs_sorted = point_idxs[sortarr] bins = np.searchsorted(revidx_sorted, np.arange(ncrops)) bins = list(bins) + [v.shape[0]] sid = bins[0:-1] eid = bins[1:] # initialize outputs point_crops = np.zeros([ncrops, ntarget, nchannel]) crop_point_idxs = [] # extract crops and pad for i, (s, e) in enumerate(zip(sid, eid)): cropped_points = vall_sorted[s:e] 
crop_point_idx = point_idxs_sorted[s:e] crop_point_idxs.append(crop_point_idx) if cropped_points.shape[0] < ntarget: padded_points = np_pad_points(cropped_points, ntarget=ntarget) else: choice_idx = np.random.choice(cropped_points.shape[0], ntarget, replace=False) padded_points = cropped_points[choice_idx] point_crops[i] = padded_points return point_crops, crop_indices, crop_point_idxs def np_get_occupied_idx(v, xmin=(0., 0., 0.), xmax=(1., 1., 1.), crop_size=.125, ntarget=2048, overlap=True, normalize_crops=False, return_shape=False, return_crop_point_idxs=False): """Get crop indices for point clouds.""" v = v.copy()-xmin xmin = np.array(xmin) xmax = np.array(xmax) r = (xmax-xmin)/crop_size r = np.ceil(r) rr = r.astype(np.int32) if not overlap else (2*r-1).astype(np.int32) # create index grid idx_grid = np.stack(np.meshgrid(np.arange(rr[0]), np.arange(rr[1]), np.arange(rr[2]), indexing='ij'), axis=-1) # [rr[0], rr[1], rr[2], 3] shift_idxs = np.stack( np.meshgrid(np.arange(int(overlap)+1), np.arange(int(overlap)+1), np.arange(int(overlap)+1), indexing='ij'), axis=-1) shift_idxs = np.reshape(shift_idxs, [-1, 3]) point_crops = [] crop_indices = [] crop_point_idxs = [] for i in range(shift_idxs.shape[0]): sft = shift_idxs[i] skp = int(overlap)+1 idg = idx_grid[sft[0]::skp, sft[1]::skp, sft[2]::skp] pc, ci, cpidx = np_shifted_crop(v, idg, sft, crop_size=crop_size, ntarget=ntarget) point_crops.append(pc) crop_indices.append(ci) crop_point_idxs += cpidx point_crops = np.concatenate(point_crops, axis=0) # [ncrops, nsurface, 6] crop_indices = np.concatenate(crop_indices, axis=0) # [ncrops, 3] if normalize_crops: # normalize each crop crop_corners = crop_indices * 0.5 * crop_size crop_centers = crop_corners + 0.5 * crop_size # [ncrops, 3] crop_centers = crop_centers[:, np.newaxis, :] # [ncrops, 1, 3] point_crops[..., :3] = point_crops[..., :3] -crop_centers point_crops[..., :3] = point_crops[..., :3] / crop_size * 2 outputs = [point_crops, crop_indices] if return_shape: 
outputs += [idx_grid.shape[:3]] if return_crop_point_idxs: outputs += [crop_point_idxs] return tuple(outputs)
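The pad-or-subsample policy that `np_pad_points` applies to each crop (subsample when there are too many points, pad from a tiled random pool when there are too few) can be sketched in isolation. The function name, seed handling, and toy arrays below are illustrative, not part of the library:

```python
import numpy as np

def pad_or_subsample(points, ntarget, seed=0):
    """Return exactly `ntarget` rows of `points`."""
    rng = np.random.default_rng(seed)
    npoints = points.shape[0]
    if npoints < ntarget:
        # Tile the cloud enough times to cover the deficit, then draw the
        # missing rows from that pool without replacement.
        mult = int(np.ceil(ntarget / npoints))
        pool = np.tile(points, (mult, 1))
        extra_idx = rng.choice(pool.shape[0], ntarget - npoints, replace=False)
        return np.concatenate([points, pool[extra_idx]], axis=0)
    # Too many points: keep a uniform random subset of size ntarget.
    return points[rng.choice(npoints, size=ntarget, replace=False)]

small = np.arange(9, dtype=np.float64).reshape(3, 3)  # 3 points, 3 channels
padded = pad_or_subsample(small, 5)                   # shape (5, 3)
```

Every output row is a copy of some input row, so downstream per-crop statistics are unaffected by the padding, which is why `np_shifted_crop` can safely feed fixed-size crops to a batched model.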
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/geometry/transformation/look_at.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module implements OpenGL lookAt functionalities.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.math import vector from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def right_handed(camera_position, look_at, up_vector, name=None): """Builds a right handed look at view matrix. Note: In the following, A1 to An are optional batch dimensions. Args: camera_position: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the 3D position of the camera. look_at: A tensor of shape `[A1, ..., An, 3]`, with the last dimension storing the position where the camera is looking at. up_vector: A tensor of shape `[A1, ..., An, 3]`, where the last dimension defines the up vector of the camera. name: A name for this op. Defaults to 'right_handed'. Raises: ValueError: if all the inputs are not of the same shape, or if any input is of an unsupported shape. Returns: A tensor of shape `[A1, ..., An, 4, 4]`, containing right handed look at matrices. 
""" with tf.compat.v1.name_scope(name, "right_handed", [camera_position, look_at, up_vector]): camera_position = tf.convert_to_tensor(value=camera_position) look_at = tf.convert_to_tensor(value=look_at) up_vector = tf.convert_to_tensor(value=up_vector) shape.check_static( tensor=camera_position, tensor_name="camera_position", has_dim_equals=(-1, 3)) shape.check_static( tensor=look_at, tensor_name="look_at", has_dim_equals=(-1, 3)) shape.check_static( tensor=up_vector, tensor_name="up_vector", has_dim_equals=(-1, 3)) shape.compare_batch_dimensions( tensors=(camera_position, look_at, up_vector), last_axes=-2, tensor_names=("camera_position", "look_at", "up_vector"), broadcast_compatible=False) z_axis = tf.linalg.l2_normalize(look_at - camera_position, axis=-1) horizontal_axis = tf.linalg.l2_normalize( vector.cross(z_axis, up_vector), axis=-1) vertical_axis = vector.cross(horizontal_axis, z_axis) batch_shape = tf.shape(input=horizontal_axis)[:-1] zeros = tf.zeros( shape=tf.concat((batch_shape, (3,)), axis=-1), dtype=horizontal_axis.dtype) one = tf.ones( shape=tf.concat((batch_shape, (1,)), axis=-1), dtype=horizontal_axis.dtype) matrix = tf.concat( (horizontal_axis, -vector.dot(horizontal_axis, camera_position), vertical_axis, -vector.dot(vertical_axis, camera_position), -z_axis, vector.dot(z_axis, camera_position), zeros, one), axis=-1) matrix_shape = tf.shape(input=matrix) output_shape = tf.concat((matrix_shape[:-1], (4, 4)), axis=-1) return tf.reshape(matrix, shape=output_shape) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module implements OpenGL lookAt functionalities.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.math import vector from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def right_handed(camera_position, look_at, up_vector, name=None): """Builds a right handed look at view matrix. Note: In the following, A1 to An are optional batch dimensions. Args: camera_position: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the 3D position of the camera. look_at: A tensor of shape `[A1, ..., An, 3]`, with the last dimension storing the position where the camera is looking at. up_vector: A tensor of shape `[A1, ..., An, 3]`, where the last dimension defines the up vector of the camera. name: A name for this op. Defaults to 'right_handed'. Raises: ValueError: if all the inputs are not of the same shape, or if any input is of an unsupported shape. Returns: A tensor of shape `[A1, ..., An, 4, 4]`, containing right handed look at matrices. 
""" with tf.compat.v1.name_scope(name, "right_handed", [camera_position, look_at, up_vector]): camera_position = tf.convert_to_tensor(value=camera_position) look_at = tf.convert_to_tensor(value=look_at) up_vector = tf.convert_to_tensor(value=up_vector) shape.check_static( tensor=camera_position, tensor_name="camera_position", has_dim_equals=(-1, 3)) shape.check_static( tensor=look_at, tensor_name="look_at", has_dim_equals=(-1, 3)) shape.check_static( tensor=up_vector, tensor_name="up_vector", has_dim_equals=(-1, 3)) shape.compare_batch_dimensions( tensors=(camera_position, look_at, up_vector), last_axes=-2, tensor_names=("camera_position", "look_at", "up_vector"), broadcast_compatible=False) z_axis = tf.linalg.l2_normalize(look_at - camera_position, axis=-1) horizontal_axis = tf.linalg.l2_normalize( vector.cross(z_axis, up_vector), axis=-1) vertical_axis = vector.cross(horizontal_axis, z_axis) batch_shape = tf.shape(input=horizontal_axis)[:-1] zeros = tf.zeros( shape=tf.concat((batch_shape, (3,)), axis=-1), dtype=horizontal_axis.dtype) one = tf.ones( shape=tf.concat((batch_shape, (1,)), axis=-1), dtype=horizontal_axis.dtype) matrix = tf.concat( (horizontal_axis, -vector.dot(horizontal_axis, camera_position), vertical_axis, -vector.dot(vertical_axis, camera_position), -z_axis, vector.dot(z_axis, camera_position), zeros, one), axis=-1) matrix_shape = tf.shape(input=matrix) output_shape = tf.concat((matrix_shape[:-1], (4, 4)), axis=-1) return tf.reshape(matrix, shape=output_shape) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/image/tests/matting_test.py
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for matting."""

from absl.testing import parameterized
import numpy as np
import tensorflow as tf

from tensorflow_graphics.image import matting
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import shape
from tensorflow_graphics.util import test_case


def _laplacian_matrix(image, size=3, eps=1e-5, name=None):
  """Generates the closed form matting Laplacian matrices.

  Generates the closed form matting Laplacian as proposed by Levin et al. in
  "A Closed Form Solution to Natural Image Matting".

  Args:
    image: A tensor of shape `[B, H, W, C]`.
    size: An `int` representing the size of the patches used to enforce
      smoothness.
    eps: A small number of type `float` to regularize the problem.
    name: A name for this op. Defaults to "matting_laplacian_matrix".

  Returns:
    A tensor of shape `[B, H, W, size^2, size^2]` containing the matting
    Laplacian matrices.

  Raises:
    ValueError: If `image` is not of rank 4.
  """
  with tf.compat.v1.name_scope(name, "matting_laplacian_matrix", [image]):
    image = tf.convert_to_tensor(value=image)

    shape.check_static(image, has_rank=4)
    if size % 2 == 0:
      raise ValueError("The patch size is expected to be an odd value.")

    pixels = size**2
    channels = tf.shape(input=image)[-1]
    dtype = image.dtype
    patches = tf.image.extract_patches(
        image,
        sizes=(1, size, size, 1),
        strides=(1, 1, 1, 1),
        rates=(1, 1, 1, 1),
        padding="VALID")
    batches = tf.shape(input=patches)[:-1]
    new_shape = tf.concat((batches, (pixels, channels)), axis=-1)
    patches = tf.reshape(patches, shape=new_shape)
    mean = tf.reduce_mean(input_tensor=patches, axis=-2, keepdims=True)
    demean = patches - mean
    covariance = tf.matmul(demean, demean, transpose_a=True) / pixels
    regularizer = (eps / pixels) * tf.eye(channels, dtype=dtype)
    covariance_inv = tf.linalg.inv(covariance + regularizer)
    covariance_inv = asserts.assert_no_infs_or_nans(covariance_inv)
    mat = tf.matmul(tf.matmul(demean, covariance_inv), demean, transpose_b=True)
    return tf.eye(pixels, dtype=dtype) - (1.0 + mat) / pixels


class MattingTest(test_case.TestCase):

  @parameterized.parameters((3, 1), (3, 3), (5, 3), (5, 1))
  def test_build_matrices_jacobian_random(self, size, channels):
    """Tests the Jacobian of the build_matrices function."""
    tensor_shape = np.random.randint(size, 6, size=3)
    image_init = np.random.uniform(
        0.0, 1.0, size=tensor_shape.tolist() + [channels])

    with self.subTest(name="laplacian"):
      self.assert_jacobian_is_correct_fn(
          lambda image: matting.build_matrices(image, size=size)[0],
          [image_init])
    with self.subTest(name="pseudo_inverse"):
      self.assert_jacobian_is_correct_fn(
          lambda image: matting.build_matrices(image, size=size)[1],
          [image_init])

  @parameterized.parameters((3, 1), (3, 3), (5, 3), (5, 1))
  def test_build_matrices_laplacian_zero_rows_and_columns(self, size, channels):
    """Tests that the laplacian matrix rows and columns sum to zero."""
    tensor_shape = np.random.randint(size, 6, size=3)
    image_init = np.random.uniform(
        0.0, 1.0, size=tensor_shape.tolist() + [channels])
    image = tf.convert_to_tensor(value=image_init)

    laplacian, _ = matting.build_matrices(image, size=size)
    rows = tf.reduce_sum(input_tensor=laplacian, axis=-2)
    columns = tf.reduce_sum(input_tensor=laplacian, axis=-1)

    with self.subTest(name="rows"):
      self.assertAllClose(rows, tf.zeros_like(rows))
    with self.subTest(name="columns"):
      self.assertAllClose(columns, tf.zeros_like(columns))

  @parameterized.parameters((3, 1), (3, 3), (5, 3), (5, 1))
  def test_build_matrices_laplacian_versions(self, size, channels):
    """Compares two ways of computing the laplacian matrix."""
    tensor_shape = np.random.randint(size, 6, size=3)
    image_init = np.random.uniform(
        0.0, 1.0, size=tensor_shape.tolist() + [channels])
    image = tf.convert_to_tensor(value=image_init)

    laplacian_v1, _ = matting.build_matrices(image, size=size)
    laplacian_v2 = _laplacian_matrix(image, size=size)

    self.assertAllClose(laplacian_v1, laplacian_v2)

  @parameterized.parameters(
      (3, (None, None, None, 1)),
      (3, (None, None, None, 3)),
      (5, (None, None, None, 1)),
      (5, (None, None, None, 3)),
      (3, (1, 3, 3, 1)),
      (3, (1, 3, 3, 3)),
      (5, (1, 5, 5, 1)),
      (5, (1, 5, 5, 3)),
  )
  def test_build_matrices_not_raised(self, size, *shapes):
    """Tests that the shape exceptions are not raised."""
    build_matrices = lambda image: matting.build_matrices(image, size=size)

    self.assert_exception_is_not_raised(build_matrices, shapes)

  @parameterized.parameters(
      ("tensor must have a rank of 4, but it has rank", 3, (1,)),
      ("tensor must have a rank of 4, but it has rank", 3, (1, 1, 1, 1, 1)),
      ("The patch size is expected to be an odd value.", 2, (1, 1, 1, 1)),
  )
  def test_build_matrices_raised(self, error_msg, size, *shapes):
    """Tests that the shape exceptions are properly raised."""
    build_matrices = lambda image: matting.build_matrices(image, size=size)

    self.assert_exception_is_raised(build_matrices, error_msg, shapes)

  @parameterized.parameters((3,), (5,))
  def test_linear_coefficients_jacobian_random(self, size):
    """Tests the Jacobian of the linear_coefficients function."""
    tensor_shape = np.random.randint(size, 6, size=3)
    matte_init = np.random.uniform(0.0, 1.0, size=tensor_shape.tolist() + [1])
    tensor_shape[1:3] -= (size - 1)
    num_coeffs = np.random.randint(2, 4)
    pseudo_inverse_init = np.random.uniform(
        0.0, 1.0, size=tensor_shape.tolist() + [num_coeffs, size**2])

    def a_fn(matte, pseudo_inverse):
      a, _ = matting.linear_coefficients(matte, pseudo_inverse)
      return a

    def b_fn(matte, pseudo_inverse):
      _, b = matting.linear_coefficients(matte, pseudo_inverse)
      return b

    with self.subTest(name="a"):
      self.assert_jacobian_is_correct_fn(a_fn,
                                         [matte_init, pseudo_inverse_init])
    with self.subTest(name="b"):
      self.assert_jacobian_is_correct_fn(b_fn,
                                         [matte_init, pseudo_inverse_init])

  @parameterized.parameters(
      ((None, None, None, 1), (None, None, None, 4, 9)),
      ((None, None, None, 1), (None, None, None, 2, 25)),
      ((1, 6, 6, 1), (1, 4, 4, 2, 9)),
      ((1, 10, 10, 1), (1, 6, 6, 2, 25)),
  )
  def test_linear_coefficients_not_raised(self, *shapes):
    """Tests that the shape exceptions are not raised."""
    self.assert_exception_is_not_raised(matting.linear_coefficients, shapes)

  @parameterized.parameters(
      ("must have exactly 1 dimensions in axis -1", (1, 6, 6, 2),
       (1, 4, 4, 2, 9)),
      ("Not all batch dimensions are identical.", (1, 6, 6, 1),
       (2, 4, 4, 2, 9)),
  )
  def test_linear_coefficients_raised(self, error_msg, *shapes):
    """Tests that the shape exceptions are properly raised."""
    self.assert_exception_is_raised(matting.linear_coefficients, error_msg,
                                    shapes)

  @parameterized.parameters((3,), (5,))
  def test_linear_coefficients_reconstruction_same_images(self, size):
    """Tests that the matte can be reconstructed by using the coefficients ."""
    tensor_shape = np.random.randint(size, 6, size=3).tolist()
    image = np.random.uniform(0.0, 1.0, size=tensor_shape + [1])

    _, pseudo_inverse = matting.build_matrices(image, size=size)
    a, b = matting.linear_coefficients(image, pseudo_inverse)
    reconstructed = matting.reconstruct(image, a, b)

    self.assertAllClose(image, reconstructed, atol=1e-4)

  @parameterized.parameters((3,), (5,))
  def test_linear_coefficients_reconstruction_opposite_images(self, size):
    """Tests that the matte can be reconstructed by using the coefficients ."""
    tensor_shape = np.random.randint(size, 6, size=3).tolist()
    image = np.random.uniform(0.0, 1.0, size=tensor_shape + [1])

    _, pseudo_inverse = matting.build_matrices(image, size=size)
    a, b = matting.linear_coefficients(1.0 - image, pseudo_inverse)
    reconstructed = matting.reconstruct(image, a, b)

    self.assertAllClose(1.0 - image, reconstructed, atol=1e-4)

  @parameterized.parameters((3,), (5,))
  def test_loss_jacobian_random(self, size):
    """Tests the Jacobian of the matting loss function."""
    tensor_shape = np.random.randint(size, 6, size=3)
    matte_init = np.random.uniform(0.0, 1.0, size=tensor_shape.tolist() + [1])
    tensor_shape[1:3] -= (size - 1)
    laplacian_init = np.random.uniform(
        0.0, 1.0, size=tensor_shape.tolist() + [size**2, size**2])

    with self.subTest(name="matte"):
      self.assert_jacobian_is_correct_fn(matting.loss,
                                         [matte_init, laplacian_init])

  @parameterized.parameters(
      ((None, None, None, 1), (None, None, None, 9, 9)),
      ((None, None, None, 1), (None, None, None, 25, 25)),
      ((1, 6, 6, 1), (1, 4, 4, 9, 9)),
      ((1, 10, 10, 1), (1, 6, 6, 25, 25)),
  )
  def test_loss_not_raised(self, *shapes):
    """Tests that the shape exceptions are not raised."""
    self.assert_exception_is_not_raised(matting.loss, shapes)

  @parameterized.parameters(
      ("must have exactly 1 dimensions in axis -1", (1, 6, 6, 2),
       (1, 4, 4, 9, 9)),
      ("must have exactly 9 dimensions in axis -2", (1, 6, 6, 1),
       (1, 4, 4, 1, 9)),
      ("Not all batch dimensions are identical.", (1, 6, 6, 1),
       (2, 4, 4, 9, 9)),
  )
  def test_loss_raised(self, error_msg, *shapes):
    """Tests that the shape exceptions are properly raised."""
    self.assert_exception_is_raised(matting.loss, error_msg, shapes)

  @parameterized.parameters((3,), (5,))
  def test_loss_opposite_images(self, size):
    """Tests that passing opposite images results in a loss close to 0.0."""
    tensor_shape = np.random.randint(size, 6, size=3).tolist()
    image = np.random.uniform(0.0, 1.0, size=tensor_shape + [1])

    laplacian, _ = matting.build_matrices(image, size=size)
    loss = matting.loss(1.0 - image, laplacian)

    self.assertAllClose(loss, 0.0, atol=1e-4)

  @parameterized.parameters((3,), (5,))
  def test_loss_same_images(self, size):
    """Tests that passing same images results in a loss close to 0.0."""
    tensor_shape = np.random.randint(size, 6, size=3).tolist()
    image = np.random.uniform(0.0, 1.0, size=tensor_shape + [1])

    laplacian, _ = matting.build_matrices(image, size=size)
    loss = matting.loss(image, laplacian)

    self.assertAllClose(loss, 0.0, atol=1e-4)

  @parameterized.parameters((3,), (5,))
  def test_loss_positive(self, size):
    """Tests that the loss is always greater or equal to 0.0."""
    tensor_shape = np.random.randint(size, 6, size=3).tolist()
    image = tf.random.uniform(minval=0.0, maxval=1.0, shape=tensor_shape + [3])
    matte = tf.random.uniform(minval=0.0, maxval=1.0, shape=tensor_shape + [1])

    laplacian, _ = matting.build_matrices(image, size=size)
    loss = matting.loss(matte, laplacian)

    self.assertAllGreaterEqual(loss, 0.0)

  @parameterized.parameters((1,), (3,))
  def test_reconstruct_jacobian_random(self, channels):
    """Tests the Jacobian of the reconstruct function."""
    tensor_shape = np.random.randint(1, 5, size=3).tolist()
    image_init = np.random.uniform(0.0, 1.0, size=tensor_shape + [channels])
    mul_init = np.random.uniform(0.0, 1.0, size=tensor_shape + [channels])
    add_init = np.random.uniform(0.0, 1.0, size=tensor_shape + [1])

    self.assert_jacobian_is_correct_fn(matting.reconstruct,
                                       [image_init, mul_init, add_init])

  @parameterized.parameters(
      ((None, None, None, 3), (None, None, None, 3), (None, None, None, 1)),
      ((1, 6, 6, 3), (1, 6, 6, 3), (1, 6, 6, 1)),
  )
  def test_reconstruct_not_raised(self, *shapes):
    """Tests that the shape exceptions are not raised."""
    self.assert_exception_is_not_raised(matting.reconstruct, shapes)

  @parameterized.parameters(
      ("tensor must have a rank of 4, but it has rank", (1, 6, 6),
       (1, 6, 6, 2), (1, 6, 6, 1)),
      ("tensor must have a rank of 4, but it has rank", (1, 6, 6, 2),
       (1, 6, 6), (1, 6, 6, 1)),
      ("tensor must have a rank of 4, but it has rank", (1, 6, 6, 2),
       (1, 6, 6, 2), (1, 6, 6)),
      ("must have exactly 1 dimensions in axis -1", (1, 6, 6, 2),
       (1, 6, 6, 2), (1, 6, 6, 2)),
      ("Not all batch dimensions are identical.", (1, 6, 6, 1),
       (1, 6, 6, 4), (1, 6, 6, 1)),
      ("Not all batch dimensions are identical.", (1, 6, 6, 1),
       (1, 4, 6, 1), (1, 6, 6, 1)),
      ("Not all batch dimensions are identical.", (1, 6, 6, 1),
       (1, 6, 6, 1), (1, 4, 6, 1)),
      ("Not all batch dimensions are identical.", (1, 6, 6, 1),
       (1, 6, 4, 1), (1, 6, 6, 1)),
      ("Not all batch dimensions are identical.", (1, 6, 6, 1),
       (1, 6, 6, 1), (1, 6, 4, 1)),
      ("Not all batch dimensions are identical.", (1, 6, 6, 1),
       (4, 6, 6, 1), (1, 6, 6, 1)),
      ("Not all batch dimensions are identical.", (1, 6, 6, 1),
       (1, 6, 6, 1), (4, 6, 6, 1)),
  )
  def test_reconstruct_raised(self, error_msg, *shapes):
    """Tests that the shape exceptions are properly raised."""
    self.assert_exception_is_raised(matting.reconstruct, error_msg, shapes)


if __name__ == "__main__":
  test_case.main()
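The helper `_laplacian_matrix` above computes, for each patch of p pixels, L = I - (1 + D Σ⁻¹ Dᵀ) / p, where D holds the demeaned patch values and Σ is the regularized covariance. Here is a hedged pure-Python sketch for a single single-channel patch, where Σ is a scalar and its regularized inverse is just a division; the function name `patch_laplacian` is mine, not part of the library:

```python
def patch_laplacian(patch, eps=1e-5):
    """Closed-form matting Laplacian for one single-channel patch.

    Mirrors the per-patch math of the TF helper above: with one channel
    the covariance is a scalar, so (covariance + eps/p * I)^-1 reduces
    to 1 / (variance + eps/p).
    """
    p = len(patch)
    mean = sum(patch) / p
    demean = [v - mean for v in patch]
    variance = sum(d * d for d in demean) / p
    inv = 1.0 / (variance + eps / p)  # scalar covariance inverse
    # L[i][j] = delta_ij - (1 + d_i * inv * d_j) / p
    return [[(1.0 if i == j else 0.0) -
             (1.0 + demean[i] * inv * demean[j]) / p
             for j in range(p)]
            for i in range(p)]
```

Because the demeaned values sum to zero, each row (and column) of this matrix sums to zero, which is exactly the property `test_build_matrices_laplacian_zero_rows_and_columns` checks against the library implementation.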
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/geometry/transformation/rotation_matrix_3d.py
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#    https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements TensorFlow 3d rotation matrix utility functions.

More details about rotation matrices can be found on
[this page](https://en.wikipedia.org/wiki/Rotation_matrix).
"""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

from absl import flags
import tensorflow as tf

from tensorflow_graphics.geometry.transformation import rotation_matrix_common
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
from tensorflow_graphics.util import tfg_flags

FLAGS = flags.FLAGS


def _build_matrix_from_sines_and_cosines(sin_angles, cos_angles):
  """Builds a rotation matrix from sines and cosines of Euler angles.

  Note:
    In the following, A1 to An are optional batch dimensions.

  Args:
    sin_angles: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
      represents the sine of the Euler angles.
    cos_angles: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
      represents the cosine of the Euler angles.

  Returns:
    A tensor of shape `[A1, ..., An, 3, 3]`, where the last two dimensions
    represent a 3d rotation matrix.
  """
  sin_angles.shape.assert_is_compatible_with(cos_angles.shape)

  sx, sy, sz = tf.unstack(sin_angles, axis=-1)
  cx, cy, cz = tf.unstack(cos_angles, axis=-1)
  m00 = cy * cz
  m01 = (sx * sy * cz) - (cx * sz)
  m02 = (cx * sy * cz) + (sx * sz)
  m10 = cy * sz
  m11 = (sx * sy * sz) + (cx * cz)
  m12 = (cx * sy * sz) - (sx * cz)
  m20 = -sy
  m21 = sx * cy
  m22 = cx * cy
  matrix = tf.stack((m00, m01, m02,
                     m10, m11, m12,
                     m20, m21, m22),
                    axis=-1)  # pyformat: disable
  output_shape = tf.concat((tf.shape(input=sin_angles)[:-1], (3, 3)), axis=-1)
  return tf.reshape(matrix, shape=output_shape)


def assert_rotation_matrix_normalized(matrix, eps=1e-3, name=None):
  """Checks whether a matrix is a rotation matrix.

  Note:
    In the following, A1 to An are optional batch dimensions.

  Args:
    matrix: A tensor of shape `[A1, ..., An, 3, 3]`, where the last two
      dimensions represent a 3d rotation matrix.
    eps: The absolute tolerance parameter.
    name: A name for this op that defaults to
      'assert_rotation_matrix_normalized'.

  Returns:
    The input matrix, with dependence on the assertion operator in the graph.

  Raises:
    tf.errors.InvalidArgumentError: If rotation_matrix_3d is not normalized.
  """
  if not FLAGS[tfg_flags.TFG_ADD_ASSERTS_TO_GRAPH].value:
    return matrix

  with tf.compat.v1.name_scope(name, "assert_rotation_matrix_normalized",
                               [matrix]):
    matrix = tf.convert_to_tensor(value=matrix)

    shape.check_static(
        tensor=matrix,
        tensor_name="matrix",
        has_rank_greater_than=1,
        has_dim_equals=((-2, 3), (-1, 3)))

    is_matrix_normalized = is_valid(matrix, atol=eps)
    with tf.control_dependencies([
        tf.compat.v1.assert_equal(
            is_matrix_normalized,
            tf.ones_like(is_matrix_normalized, dtype=tf.bool))
    ]):
      return tf.identity(matrix)


def from_axis_angle(axis, angle, name=None):
  """Convert an axis-angle representation to a rotation matrix.

  Note:
    In the following, A1 to An are optional batch dimensions, which must be
    broadcast compatible.

  Args:
    axis: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
      represents a normalized axis.
    angle: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
      represents an angle.
    name: A name for this op that defaults to
      "rotation_matrix_3d_from_axis_angle".

  Returns:
    A tensor of shape `[A1, ..., An, 3, 3]`, where the last two dimensions
    represent a 3d rotation matrix.

  Raises:
    ValueError: If the shape of `axis` or `angle` is not supported.
  """
  with tf.compat.v1.name_scope(name, "rotation_matrix_3d_from_axis_angle",
                               [axis, angle]):
    axis = tf.convert_to_tensor(value=axis)
    angle = tf.convert_to_tensor(value=angle)

    shape.check_static(tensor=axis, tensor_name="axis", has_dim_equals=(-1, 3))
    shape.check_static(
        tensor=angle, tensor_name="angle", has_dim_equals=(-1, 1))
    shape.compare_batch_dimensions(
        tensors=(axis, angle),
        tensor_names=("axis", "angle"),
        last_axes=-2,
        broadcast_compatible=True)
    axis = asserts.assert_normalized(axis)

    sin_axis = tf.sin(angle) * axis
    cos_angle = tf.cos(angle)
    cos1_axis = (1.0 - cos_angle) * axis
    _, axis_y, axis_z = tf.unstack(axis, axis=-1)
    cos1_axis_x, cos1_axis_y, _ = tf.unstack(cos1_axis, axis=-1)
    sin_axis_x, sin_axis_y, sin_axis_z = tf.unstack(sin_axis, axis=-1)
    tmp = cos1_axis_x * axis_y
    m01 = tmp - sin_axis_z
    m10 = tmp + sin_axis_z
    tmp = cos1_axis_x * axis_z
    m02 = tmp + sin_axis_y
    m20 = tmp - sin_axis_y
    tmp = cos1_axis_y * axis_z
    m12 = tmp - sin_axis_x
    m21 = tmp + sin_axis_x
    diag = cos1_axis * axis + cos_angle
    diag_x, diag_y, diag_z = tf.unstack(diag, axis=-1)
    matrix = tf.stack((diag_x, m01, m02,
                       m10, diag_y, m12,
                       m20, m21, diag_z),
                      axis=-1)  # pyformat: disable
    output_shape = tf.concat((tf.shape(input=axis)[:-1], (3, 3)), axis=-1)
    return tf.reshape(matrix, shape=output_shape)


def from_euler(angles, name=None):
  r"""Convert an Euler angle representation to a rotation matrix.

  The resulting matrix is $$\mathbf{R} = \mathbf{R}_z\mathbf{R}_y\mathbf{R}_x$$.

  Note:
    In the following, A1 to An are optional batch dimensions.

  Args:
    angles: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
      represents the three Euler angles. `[A1, ..., An, 0]` is the angle about
      `x` in radians, `[A1, ..., An, 1]` is the angle about `y` in radians and
      `[A1, ..., An, 2]` is the angle about `z` in radians.
    name: A name for this op that defaults to "rotation_matrix_3d_from_euler".

  Returns:
    A tensor of shape `[A1, ..., An, 3, 3]`, where the last two dimensions
    represent a 3d rotation matrix.

  Raises:
    ValueError: If the shape of `angles` is not supported.
  """
  with tf.compat.v1.name_scope(name, "rotation_matrix_3d_from_euler", [angles]):
    angles = tf.convert_to_tensor(value=angles)

    shape.check_static(
        tensor=angles, tensor_name="angles", has_dim_equals=(-1, 3))

    sin_angles = tf.sin(angles)
    cos_angles = tf.cos(angles)
    return _build_matrix_from_sines_and_cosines(sin_angles, cos_angles)


def from_euler_with_small_angles_approximation(angles, name=None):
  r"""Convert an Euler angle representation to a rotation matrix.

  The resulting matrix is $$\mathbf{R} = \mathbf{R}_z\mathbf{R}_y\mathbf{R}_x$$.
  Under the small angle assumption, $$\sin(x)$$ and $$\cos(x)$$ can be
  approximated by their second order Taylor expansions, where
  $$\sin(x) \approx x$$ and $$\cos(x) \approx 1 - \frac{x^2}{2}$$.
  In the current implementation, the smallness of the angles is not verified.

  Note:
    In the following, A1 to An are optional batch dimensions.

  Args:
    angles: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
      represents the three small Euler angles. `[A1, ..., An, 0]` is the angle
      about `x` in radians, `[A1, ..., An, 1]` is the angle about `y` in
      radians and `[A1, ..., An, 2]` is the angle about `z` in radians.
    name: A name for this op that defaults to "rotation_matrix_3d_from_euler".

  Returns:
    A tensor of shape `[A1, ..., An, 3, 3]`, where the last two dimensions
    represent a 3d rotation matrix.

  Raises:
    ValueError: If the shape of `angles` is not supported.
  """
  with tf.compat.v1.name_scope(
      name, "rotation_matrix_3d_from_euler_with_small_angles", [angles]):
    angles = tf.convert_to_tensor(value=angles)

    shape.check_static(
        tensor=angles, tensor_name="angles", has_dim_equals=(-1, 3))

    sin_angles = angles
    cos_angles = 1.0 - 0.5 * tf.square(angles)
    return _build_matrix_from_sines_and_cosines(sin_angles, cos_angles)


def from_quaternion(quaternion, name=None):
  """Convert a quaternion to a rotation matrix.

  Note:
    In the following, A1 to An are optional batch dimensions.

  Args:
    quaternion: A tensor of shape `[A1, ..., An, 4]`, where the last dimension
      represents a normalized quaternion.
    name: A name for this op that defaults to
      "rotation_matrix_3d_from_quaternion".

  Returns:
    A tensor of shape `[A1, ..., An, 3, 3]`, where the last two dimensions
    represent a 3d rotation matrix.

  Raises:
    ValueError: If the shape of `quaternion` is not supported.
  """
  with tf.compat.v1.name_scope(name, "rotation_matrix_3d_from_quaternion",
                               [quaternion]):
    quaternion = tf.convert_to_tensor(value=quaternion)

    shape.check_static(
        tensor=quaternion, tensor_name="quaternion", has_dim_equals=(-1, 4))
    quaternion = asserts.assert_normalized(quaternion)

    x, y, z, w = tf.unstack(quaternion, axis=-1)
    tx = 2.0 * x
    ty = 2.0 * y
    tz = 2.0 * z
    twx = tx * w
    twy = ty * w
    twz = tz * w
    txx = tx * x
    txy = ty * x
    txz = tz * x
    tyy = ty * y
    tyz = tz * y
    tzz = tz * z
    matrix = tf.stack((1.0 - (tyy + tzz), txy - twz, txz + twy,
                       txy + twz, 1.0 - (txx + tzz), tyz - twx,
                       txz - twy, tyz + twx, 1.0 - (txx + tyy)),
                      axis=-1)  # pyformat: disable
    output_shape = tf.concat((tf.shape(input=quaternion)[:-1], (3, 3)),
                             axis=-1)
    return tf.reshape(matrix, shape=output_shape)


def inverse(matrix, name=None):
  """Computes the inverse of a 3D rotation matrix.

  Note:
    In the following, A1 to An are optional batch dimensions.

  Args:
    matrix: A tensor of shape `[A1, ..., An, 3, 3]`, where the last two
      dimensions represent a 3d rotation matrix.
name: A name for this op that defaults to "rotation_matrix_3d_inverse".

  Returns:
    A tensor of shape `[A1, ..., An, 3, 3]`, where the last two dimensions
    represent a 3d rotation matrix.

  Raises:
    ValueError: If the shape of `matrix` is not supported.
  """
  with tf.compat.v1.name_scope(name, "rotation_matrix_3d_inverse", [matrix]):
    matrix = tf.convert_to_tensor(value=matrix)

    shape.check_static(
        tensor=matrix,
        tensor_name="matrix",
        has_rank_greater_than=1,
        has_dim_equals=((-2, 3), (-1, 3)))
    matrix = assert_rotation_matrix_normalized(matrix)

    ndims = matrix.shape.ndims
    perm = list(range(ndims - 2)) + [ndims - 1, ndims - 2]
    return tf.transpose(a=matrix, perm=perm)


def is_valid(matrix, atol=1e-3, name=None):
  """Determines if a matrix is a valid rotation matrix.

  Note:
    In the following, A1 to An are optional batch dimensions.

  Args:
    matrix: A tensor of shape `[A1, ..., An, 3, 3]`, where the last two
      dimensions represent a matrix.
    atol: Absolute tolerance parameter.
    name: A name for this op that defaults to "rotation_matrix_3d_is_valid".

  Returns:
    A tensor of type `bool` and shape `[A1, ..., An, 1]` where False indicates
    that the input is not a valid rotation matrix.
  """
  with tf.compat.v1.name_scope(name, "rotation_matrix_3d_is_valid", [matrix]):
    matrix = tf.convert_to_tensor(value=matrix)

    shape.check_static(
        tensor=matrix,
        tensor_name="matrix",
        has_rank_greater_than=1,
        has_dim_equals=((-2, 3), (-1, 3)))

    return rotation_matrix_common.is_valid(matrix, atol)


def rotate(point, matrix, name=None):
  """Rotate a point using a rotation matrix 3d.

  Note:
    In the following, A1 to An are optional batch dimensions, which must be
    broadcast compatible.

  Args:
    point: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
      represents a 3d point.
    matrix: A tensor of shape `[A1, ..., An, 3, 3]`, where the last two
      dimensions represent a 3d rotation matrix.
    name: A name for this op that defaults to "rotation_matrix_3d_rotate".
Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents a 3d point. Raises: ValueError: If the shape of `point` or `rotation_matrix_3d` is not supported. """ with tf.compat.v1.name_scope(name, "rotation_matrix_3d_rotate", [point, matrix]): point = tf.convert_to_tensor(value=point) matrix = tf.convert_to_tensor(value=matrix) shape.check_static( tensor=point, tensor_name="point", has_dim_equals=(-1, 3)) shape.check_static( tensor=matrix, tensor_name="matrix", has_rank_greater_than=1, has_dim_equals=((-2, 3), (-1, 3))) shape.compare_batch_dimensions( tensors=(point, matrix), tensor_names=("point", "matrix"), last_axes=(-2, -3), broadcast_compatible=True) matrix = assert_rotation_matrix_normalized(matrix) point = tf.expand_dims(point, axis=-1) common_batch_shape = shape.get_broadcasted_shape( point.shape[:-2], matrix.shape[:-2]) def dim_value(dim): return 1 if dim is None else tf.compat.v1.dimension_value(dim) common_batch_shape = [dim_value(dim) for dim in common_batch_shape] point = tf.broadcast_to(point, common_batch_shape + [3, 1]) matrix = tf.broadcast_to(matrix, common_batch_shape + [3, 3]) rotated_point = tf.matmul(matrix, point) return tf.squeeze(rotated_point, axis=-1) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
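The Euler-angle construction in the file above builds $$\mathbf{R} = \mathbf{R}_z\mathbf{R}_y\mathbf{R}_x$$ from sines and cosines. As a rough standalone sketch of that math — plain NumPy with a hypothetical helper name, not the library API:

```python
import numpy as np

def matrix_from_euler(angles):
    # Mirrors _build_matrix_from_sines_and_cosines: R = Rz @ Ry @ Rx,
    # with angles = (angle about x, angle about y, angle about z).
    sx, sy, sz = np.sin(angles)
    cx, cy, cz = np.cos(angles)
    return np.array([
        [cy * cz, sx * sy * cz - cx * sz, cx * sy * cz + sx * sz],
        [cy * sz, sx * sy * sz + cx * cz, cx * sy * sz - sx * cz],
        [-sy, sx * cy, cx * cy],
    ])

R = matrix_from_euler(np.array([0.1, -0.4, 0.7]))
# A rotation matrix is orthonormal with determinant +1, which is the
# property is_valid / assert_rotation_matrix_normalized verify up to a
# tolerance.
assert np.allclose(R @ R.T, np.eye(3))
assert np.isclose(np.linalg.det(R), 1.0)
```

`matrix_from_euler` here is a throwaway name for illustration; the library equivalent is `from_euler`, which additionally validates input shapes.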
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
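The fix described in this PR metadata concerns tensors without batch dimensions combining with batched tensors, which is standard broadcasting. A minimal NumPy illustration of the intended behaviour — the shapes and the `[width, height]` interpretation are invented for illustration, not taken from the rasterization code:

```python
import numpy as np

# Batched 2d points: a batch of 4.
points = np.random.rand(4, 2)
# Unbatched screen dimensions: a single [width, height] pair.
screen_dimensions = np.array([640.0, 480.0])

# Broadcasting aligns trailing axes, so the single pair applies to
# every element of the batch without an explicit tile.
scaled = points * screen_dimensions
assert scaled.shape == (4, 2)
```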
./tensorflow_graphics/nn/metric/__init__.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Metrics module.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from tensorflow_graphics.nn.metric import fscore from tensorflow_graphics.nn.metric import intersection_over_union from tensorflow_graphics.nn.metric import precision from tensorflow_graphics.nn.metric import recall from tensorflow_graphics.util import export_api as _export_api # API contains submodules of tensorflow_graphics.nn.metric. __all__ = _export_api.get_modules()
./tensorflow_graphics/geometry/transformation/tests/dual_quaternion_test.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for dual quaternion.""" from absl.testing import flagsaver from absl.testing import parameterized import tensorflow.compat.v2 as tf from tensorflow_graphics.geometry.transformation import dual_quaternion from tensorflow_graphics.geometry.transformation.tests import test_helpers from tensorflow_graphics.util import test_case class DualQuaternionTest(test_case.TestCase): @parameterized.parameters( ((8,),), ((None, 8),), ) def test_conjugate_exception_not_raised(self, *shape): """Tests that the shape exceptions of conjugate are not raised.""" self.assert_exception_is_not_raised(dual_quaternion.conjugate, shape) @parameterized.parameters( ("must have exactly 8 dimensions", (3,)),) def test_conjugate_exception_raised(self, error_msg, *shape): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised(dual_quaternion.conjugate, error_msg, shape) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_conjugate_jacobian_preset(self): """Tests the Jacobian of the conjugate function.""" x_init = test_helpers.generate_preset_test_dual_quaternions() self.assert_jacobian_is_correct_fn(dual_quaternion.conjugate, [x_init]) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_conjugate_jacobian_random(self): """Tests the Jacobian of the conjugate function.""" x_init = test_helpers.generate_random_test_dual_quaternions() 
self.assert_jacobian_is_correct_fn(dual_quaternion.conjugate, [x_init]) @flagsaver.flagsaver(tfg_add_asserts_to_graph=False) def test_conjugate_preset(self): """Tests if the conjugate function is providing correct results.""" x_init = test_helpers.generate_preset_test_dual_quaternions() x = tf.convert_to_tensor(value=x_init) y = tf.convert_to_tensor(value=x_init) x = dual_quaternion.conjugate(x) x_real, x_dual = tf.split(x, (4, 4), axis=-1) y_real, y_dual = tf.split(y, (4, 4), axis=-1) xyz_y_real, w_y_real = tf.split(y_real, (3, 1), axis=-1) xyz_y_dual, w_y_dual = tf.split(y_dual, (3, 1), axis=-1) y_real = tf.concat((-xyz_y_real, w_y_real), axis=-1) y_dual = tf.concat((-xyz_y_dual, w_y_dual), axis=-1) self.assertAllEqual(x_real, y_real) self.assertAllEqual(x_dual, y_dual)
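The conjugate checked by the test above negates the vector part of both the real and the dual quaternion while keeping the scalar parts. A standalone NumPy sketch of that operation — a hypothetical helper, assuming the test's `[x, y, z, w]` component layout:

```python
import numpy as np

def dual_quaternion_conjugate(dq):
    # dq packs two quaternions: [x, y, z, w, x', y', z', w'].
    real, dual = dq[..., :4], dq[..., 4:]
    flip = np.array([-1.0, -1.0, -1.0, 1.0])  # negate x, y, z; keep w
    return np.concatenate([real * flip, dual * flip], axis=-1)

dq = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
conj = dual_quaternion_conjugate(dq)
assert np.array_equal(conj, [-1.0, -2.0, -3.0, 4.0, -5.0, -6.0, -7.0, 8.0])
```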
./tensorflow_graphics/rendering/light/tests/point_light_test.py
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for point light."""

import math

from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
import tensorflow as tf

from tensorflow_graphics.rendering.light import point_light
from tensorflow_graphics.util import test_case


def fake_brdf(incoming_light_direction, outgoing_light_direction,
              surface_point_normal):
  del incoming_light_direction, surface_point_normal  # Unused.
  return outgoing_light_direction


def returning_zeros_brdf(incoming_light_direction, outgoing_light_direction,
                         surface_point_normal):
  del incoming_light_direction, outgoing_light_direction  # Unused.
  return tf.zeros_like(surface_point_normal)


def random_tensor(tensor_shape):
  return np.random.uniform(low=-100.0, high=100.0, size=tensor_shape)


class PointLightTest(test_case.TestCase):

  @parameterized.parameters(
      # Light direction is parallel to the surface normal.
      ([1.], [[0., 0., 1.]], [2., 0., 0.], [1.0 / (4. * math.pi), 0., 0.]),
      # Light direction is parallel to the surface normal and the reflected
      # light fall off is included in the calculation.
      ([1.], [[0., 0., 1.]], [2., 0., 0.],
       [0.25 / (4. * math.pi), 0., 0.], True),
      # Light direction is perpendicular to the surface normal.
      ([1.], [[3., 0., 0.]], [1., 2., 3.], [0., 0., 0.]),
      # Angle between surface normal and the incoming light direction is pi/3.
      ([1.], [[math.sqrt(3), 0., 1.]],
       [0., 1., 0.], [0., 0.125 / (4. * math.pi), 0.]),
      # Angle between surface normal and the incoming light direction is pi/4.
      ([1.], [[0., 1., 1.]], [1., 1., 0.],
       [0.25 / (4. * math.pi), 0.25 / (4. * math.pi), 0.]),
      # Light has 3 radiances.
      ([2., 4., 1.], [[0., 1., 1.]], [1., 1., 0.],
       [0.5 / (4. * math.pi), 1. / (4. * math.pi), 0.]),
      # Light is behind the surface.
      ([1.], [[0., 0., -2.]], [7., 0., 0.], [0., 0., 0.]),
      # Observation point is behind the surface.
      ([1.], [[0., 0., 2.]], [5., 0., -2.], [0., 0., 0.]),
      # Light and observation point are behind the surface.
      ([1.], [[0., 0., -2.]], [5., 0., -2.], [0., 0., 0.]),
  )
  def test_estimate_radiance_preset(self,
                                    light_radiance,
                                    light_pos,
                                    observation_pos,
                                    expected_result,
                                    reflected_light_fall_off=False):
    """Tests the output of the estimate radiance function with various parameters.

    In this test the point on the surface is always [0, 0, 0], the surface
    normal is [0, 0, 1] and the fake brdf function returns the (normalized)
    direction of the outgoing light as its output.

    Args:
      light_radiance: An array of size K representing the point light
        radiances.
      light_pos: An array of size [3,] representing the point light positions.
      observation_pos: An array of size [3,] representing the observation
        point.
      expected_result: An array of size [3,] representing the expected result
        of the estimated reflected radiance function.
      reflected_light_fall_off: A boolean specifying whether or not to include
        the fall off of the reflected light in the calculation. Defaults to
        False.
    """
    tensor_size = np.random.randint(1, 3) + 1
    tensor_shape = np.random.randint(1, 10, size=tensor_size).tolist()
    lights_tensor_size = np.random.randint(1, 3) + 1
    lights_tensor_shape = np.random.randint(
        1, 10, size=lights_tensor_size).tolist()
    point_light_radiance = np.tile(light_radiance, lights_tensor_shape + [1])
    point_light_position = np.tile(light_pos, lights_tensor_shape + [1])
    surface_point_normal = np.tile([0.0, 0.0, 1.0], tensor_shape + [1])
    surface_point_position = np.tile([0.0, 0.0, 0.0], tensor_shape + [1])
    observation_point = np.tile(observation_pos, tensor_shape + [1])
    expected = np.tile(expected_result,
                       tensor_shape + lights_tensor_shape + [1])

    pred = point_light.estimate_radiance(
        point_light_radiance,
        point_light_position,
        surface_point_position,
        surface_point_normal,
        observation_point,
        fake_brdf,
        reflected_light_fall_off=reflected_light_fall_off)

    self.assertAllClose(expected, pred)

  @flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
  def test_estimate_radiance_jacobian_random(self):
    """Tests the Jacobian of the point lighting equation."""
    tensor_size = np.random.randint(1, 3)
    tensor_shape = np.random.randint(1, 10, size=tensor_size).tolist()
    light_tensor_size = np.random.randint(1, 3)
    lights_tensor_shape = np.random.randint(
        1, 10, size=light_tensor_size).tolist()
    point_light_radiance_init = random_tensor(lights_tensor_shape + [1])
    point_light_position_init = random_tensor(lights_tensor_shape + [3])
    surface_point_position_init = random_tensor(tensor_shape + [3])
    surface_point_normal_init = random_tensor(tensor_shape + [3])
    observation_point_init = random_tensor(tensor_shape + [3])

    def estimate_radiance_fn(point_light_position, surface_point_position,
                             surface_point_normal, observation_point):
      return point_light.estimate_radiance(point_light_radiance_init,
                                           point_light_position,
                                           surface_point_position,
                                           surface_point_normal,
                                           observation_point, fake_brdf)

    self.assert_jacobian_is_correct_fn(estimate_radiance_fn, [
        point_light_position_init, surface_point_position_init,
        surface_point_normal_init, observation_point_init
    ])

  @flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
  def test_estimate_radiance_jacobian_preset(self):
    """Tests the Jacobian of the point lighting equation.

    Verifies that the Jacobian of the point lighting equation is correct when
    the light direction is orthogonal to the surface normal.
    """
    delta = 1e-5
    point_light_radiance_init = np.array(1.0).reshape((1, 1))
    point_light_position_init = np.array((delta, 1.0, 0.0)).reshape((1, 3))
    surface_point_position_init = np.array((0.0, 0.0, 0.0))
    surface_point_normal_init = np.array((1.0, 0.0, 0.0))
    observation_point_init = np.array((delta, 3.0, 0.0))

    def estimate_radiance_fn(point_light_position, surface_point_position,
                             surface_point_normal, observation_point):
      return point_light.estimate_radiance(point_light_radiance_init,
                                           point_light_position,
                                           surface_point_position,
                                           surface_point_normal,
                                           observation_point, fake_brdf)

    self.assert_jacobian_is_correct_fn(estimate_radiance_fn, [
        point_light_position_init, surface_point_position_init,
        surface_point_normal_init, observation_point_init
    ])

  @parameterized.parameters(
      ((1, 1), (1, 3), (3,), (3,), (3,)),
      ((4, 1, 1), (4, 1, 3), (1, 3), (1, 3), (1, 3)),
      ((3, 2, 1), (3, 2, 3), (2, 3), (2, 3), (2, 3)),
      ((1, 1), (3,), (1, 3), (1, 2, 3), (1, 3)),
      ((4, 5, 1), (3, 4, 5, 3), (1, 3), (1, 2, 2, 3), (1, 2, 3)),
      ((1,), (1, 2, 2, 3), (1, 2, 3), (1, 3), (3,)),
  )
  def test_estimate_radiance_shape_exception_not_raised(self, *shape):
    """Tests that the shape exceptions are not raised."""
    self.assert_exception_is_not_raised(
        point_light.estimate_radiance, shape, brdf=returning_zeros_brdf)

  @parameterized.parameters(
      ("must have exactly 3 dimensions in axis -1", (1, 1), (1, 1), (3,),
       (3,), (3,)),
      ("must have exactly 3 dimensions in axis -1", (5, 1), (5, 2), (3,),
       (3,), (3,)),
      ("must have exactly 3 dimensions in axis -1", (1, 1), (1, 4), (3,),
       (3,), (3,)),
      ("must have exactly 3 dimensions in axis -1", (1, 1), (1, 3), (1,),
       (3,), (3,)),
      ("must have exactly 3 dimensions in axis -1", (1, 1), (1, 3), (2,),
       (3,), (3,)),
      ("must have exactly 3 dimensions in axis -1", (1, 1), (1, 3), (4,),
       (3,), (3,)),
      ("must have exactly 3 dimensions in axis -1", (1, 1), (1, 3), (3,),
       (1,), (3,)),
      ("must have exactly 3 dimensions in axis -1", (1, 1), (1, 3), (3,),
       (2,), (3,)),
      ("must have exactly 3 dimensions in axis -1", (1, 1), (1, 3), (3,),
       (4,), (3,)),
      ("must have exactly 3 dimensions in axis -1", (1, 1), (1, 3), (3,),
       (3,), (4,)),
      ("must have exactly 3 dimensions in axis -1", (1, 1), (1, 3), (3,),
       (3,), (2,)),
      ("must have exactly 3 dimensions in axis -1", (1, 1), (1, 3), (3,),
       (3,), (1,)),
      ("Not all batch dimensions are broadcast-compatible.", (1, 3, 1),
       (1, 3, 3), (2, 3), (4, 3), (3,)),
      ("Not all batch dimensions are broadcast-compatible.", (1, 3, 1),
       (1, 4, 3), (2, 3), (3,), (3,)),
  )
  def test_estimate_radiance_shape_exception_raised(self, error_msg, *shape):
    """Tests that the shape exception is raised."""
    self.assert_exception_is_raised(
        point_light.estimate_radiance,
        error_msg,
        shape,
        brdf=returning_zeros_brdf)

  def test_estimate_radiance_value_exceptions_raised(self):
    """Tests that the value exceptions are raised correctly."""
    point_light_radiance = random_tensor(tensor_shape=(1, 1))
    point_light_position = random_tensor(tensor_shape=(1, 3))
    surface_point_position = random_tensor(tensor_shape=(3,))
    surface_point_normal = random_tensor(tensor_shape=(3,))
    observation_point = random_tensor(tensor_shape=(3,))

    # Verify that an InvalidArgumentError is raised as the given
    # surface_point_normal is not normalized.
    with self.assertRaises(tf.errors.InvalidArgumentError):
      self.evaluate(
          point_light.estimate_radiance(point_light_radiance,
                                        point_light_position,
                                        surface_point_position,
                                        surface_point_normal,
                                        observation_point,
                                        returning_zeros_brdf))


if __name__ == "__main__":
  test_case.main()
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
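The fix above relies on standard batch-dimension broadcasting: tensors that lack leading batch axes are treated as if those axes had size 1 and are stretched across the batch. A minimal NumPy sketch of that behavior (illustrative only; the variable names `vertices` and `screen_dimensions` are assumptions, not the library's actual code):

```python
import numpy as np

# A batch of 4 scenes with 7 projected vertices each.
vertices = np.zeros((4, 7, 3))

# `screen_dimensions` has no batch dimension at all.
screen_dimensions = np.array([640.0, 480.0])

# Broadcasting aligns trailing axes, so the (2,) tensor is applied
# across both batch axes without explicit tiling.
scaled = vertices[..., :2] * screen_dimensions
assert scaled.shape == (4, 7, 2)
```

This is why a `(2,)` shaped `screen_dimensions` can now be combined with fully batched vertex tensors.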
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/datasets/pix3d/pix3d.py
# Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""pix3d dataset."""

from __future__ import absolute_import
from __future__ import division
from __future__ import print_function

import json
import os

import numpy as np
import tensorflow as tf
from tensorflow_datasets import features as tfds_features
import tensorflow_datasets.public_api as tfds

from tensorflow_graphics.datasets import features as tfg_features

_CITATION = '''
@inproceedings{pix3d,
  title={Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling},
  author={Sun, Xingyuan and Wu, Jiajun and Zhang, Xiuming and Zhang, Zhoutong
          and Zhang, Chengkai and Xue, Tianfan and Tenenbaum, Joshua B
          and Freeman, William T},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2018}
}
'''

_DESCRIPTION = '''
Pix3D is a large-scale dataset of diverse image-shape pairs with pixel-level
2D-3D alignment. It has wide applications in shape-related tasks including
reconstruction, retrieval, viewpoint estimation, etc.

Pix3D contains 10,069 2D-3D pairs of 395 distinct 3D shapes, categorised into
nine object categories. Each sample comprises an image, a 3D shape represented
as a (non-watertight) triangle mesh and a voxel grid, a bounding box, a
segmentation mask, intrinsic and extrinsic camera parameters, and 2D and 3D
key points.

Notes:

  * The object and camera poses are provided with respect to the scene,
    whereas the camera is placed at the origin. Pix3D also provides the
    features `camera/position_with_respect_to_object` and
    `camera/inplane_rotation`. Those values are defined in object coordinates
    and will reproduce an image that is equivalent to the original image under
    a homography transformation. They are defined for viewer-centered
    algorithms whose predictions need to be rotated back to the canonical view
    for evaluation against ground truth shapes. This is necessary as most
    algorithms assume that the camera is looking at the object's center, while
    the raw input images are usually cropped or transformed before being sent
    into their pipeline.

  * There are two wrong segmentation masks in the annotations of the original
    Pix3D dataset (see https://github.com/xingyuansun/pix3d/issues/18 for
    details). We ignore those samples in this version of the dataset. However,
    if you want to use them, we provide our own rendered segmentation masks in
    `tensorflow_graphics/datasets/pix3d/fixed_masks/`. Feel free to copy those
    two masks to your local Pix3D directory in `<PIX3D_HOME>/mask/table/`.
    Additionally, you need to add the indices of these samples to the split
    files located at
    `<TF Graphics Repository>/tensorflow_graphics/datasets/pix3d/splits`.
    The index `7953` needs to be appended to the train index and `9657`
    belongs to the test index.

Train/Test split: Pix3D does not provide a standard train/test split.
Therefore, this implementation adopts the S2 split from Mesh R-CNN
(https://arxiv.org/abs/1906.02739, Sec. 4.2). This split ensures that the 3D
models appearing in the train and test sets are disjoint.
'''


class Pix3d(tfds.core.GeneratorBasedBuilder):
  """Pix3D is a large-scale dataset of diverse image-shape pairs with
  pixel-level 2D-3D alignment."""

  VERSION = tfds.core.Version('0.1.0')

  TRAIN_SPLIT_IDX = os.path.join(os.path.dirname(__file__),
                                 'splits/pix3d_train.npy')
  TEST_SPLIT_IDX = os.path.join(os.path.dirname(__file__),
                                'splits/pix3d_test.npy')

  CLASS_INDEX = ['background', 'bed', 'bookcase', 'chair', 'desk', 'misc',
                 'sofa', 'table', 'tool', 'wardrobe']

  def _info(self):
    return tfds.core.DatasetInfo(
        builder=self,
        # This is the description that will appear on the datasets page.
        description=_DESCRIPTION,
        # tfds.features.FeatureConnectors
        features=tfds.features.FeaturesDict({
            'image': tfds_features.Image(shape=(None, None, 3),
                                         dtype=tf.uint8),
            'image/filename': tfds_features.Text(),
            'image/source': tfds_features.Text(),
            '2d_keypoints': tfds_features.FeaturesDict({
                'num_annotators': tf.uint8,
                'num_keypoints': tf.uint8,
                'keypoints': tfds_features.Tensor(shape=(None,),
                                                  dtype=tf.float32),
            }),
            'mask': tfds_features.Image(shape=(None, None, 1),
                                        dtype=tf.uint8),
            'model': tfg_features.TriangleMesh(),
            'model/source': tfds_features.Text(),
            '3d_keypoints': tfds_features.Tensor(shape=(None, 3),
                                                 dtype=tf.float32),
            'voxel': tfg_features.VoxelGrid(shape=(128, 128, 128)),
            'pose': tfg_features.Pose(),  # Pose of object w.r.t. the world.
            'camera': tfds_features.FeaturesDict({
                'parameters': tfg_features.Camera(),
                'position_with_respect_to_object': tfds_features.Tensor(
                    shape=(3,),
                    dtype=tf.float32
                ),
                'inplane_rotation': tf.float32,
            }),
            'category': tfds_features.ClassLabel(
                num_classes=len(self.CLASS_INDEX)
            ),
            'bbox': tfds_features.BBoxFeature(),
            'truncated': tf.bool,
            'occluded': tf.bool,
            'slightly_occluded': tf.bool,
        }),
        # If there's a common (input, target) tuple from the features,
        # specify them here. They'll be used if as_supervised=True in
        # builder.as_dataset.
        supervised_keys=None,
        # Homepage of the dataset for documentation
        homepage='http://pix3d.csail.mit.edu/',
        citation=_CITATION,
    )

  def _split_generators(self, dl_manager):
    """Returns SplitGenerators."""
    pix3d_dir = dl_manager.download_and_extract(
        'http://pix3d.csail.mit.edu/data/pix3d.zip')

    return [
        tfds.core.SplitGenerator(
            name=tfds.Split.TRAIN,
            gen_kwargs={
                'samples_directory': pix3d_dir,
                'split_file': self.TRAIN_SPLIT_IDX
            },
        ),
        tfds.core.SplitGenerator(
            name=tfds.Split.TEST,
            gen_kwargs={
                'samples_directory': pix3d_dir,
                'split_file': self.TEST_SPLIT_IDX
            },
        ),
    ]

  def _generate_examples(self, samples_directory, split_file):
    """Yields examples.

    As Pix3D does not come with a predefined train/test split, we adopt one
    from Mesh R-CNN. The split ensures that the 3D models appearing in the
    train and test sets are disjoint.

    Args:
      samples_directory: `str`, path to the directory where Pix3D is stored.
      split_file: `str`, path to .npy file containing the indices of the
        current split.
    """
    with tf.io.gfile.GFile(os.path.join(samples_directory, 'pix3d.json'),
                           mode='r') as pix3d_index:
      pix3d = json.load(pix3d_index)

    split_samples = map(pix3d.__getitem__, np.load(split_file))

    def _build_bbox(box, img_size):
      """Create a BBox with correct order of coordinates.

      Args:
        box: Bounding box of the object as provided by Pix3D.
        img_size: Size of the image, in the format of [width, height].

      Returns:
        tfds.features.BBox.
      """
      xmin, ymin, xmax, ymax = box
      width, height = img_size
      return tfds_features.BBox(ymin=ymin / height, xmin=xmin / width,
                                ymax=ymax / height, xmax=xmax / width)

    def _build_camera(f, img_size):
      """Prepare features for `Camera` FeatureConnector.

      The pose originates from the official Pix3D GitHub repository and
      describes the camera's position with respect to the scene. The focal
      length is originally provided in mm, but will be converted to pixels
      here using the fixed sensor width of 32 mm, which also originates from
      the Pix3D GitHub repository.

      Link to the official Pix3D repository:
      https://github.com/xingyuansun/pix3d.

      Args:
        f: float, denoting the focal length in mm.
        img_size: tuple of two floats, denoting the image height and width.

      Returns:
        Dictionary with all camera parameters.
      """
      sensor_width = 32.
      return {
          'pose': {
              'R': np.array([[-1., 0., 0.], [0., -1., 0.], [0., 0., 1.]],
                            dtype=np.float32),
              't': np.zeros(3, dtype=np.float32)
          },
          'optical_center': (img_size[0] / 2, img_size[1] / 2),
          'f': (f / sensor_width * img_size[0])
      }

    def _build_2d_keypoints(keypoints):
      """Wraps the keypoint feature in a dict, because TFDS does not allow
      more than one unknown dimension.

      Args:
        keypoints: Array of dimension `[N, M, 2]`, where N denotes the number
          of annotators and M is the number of 2D keypoints. Keypoints are
          stored as (origin: top left corner; +x: rightward; +y: downward);
          [-1.0, -1.0] if an annotator marked this keypoint hard to label.

      Returns:
        Dictionary containing shape and flattened keypoints.
      """
      if keypoints.ndim != 3 or keypoints.shape[-1] != 2:
        raise ValueError('2D keypoints should be in shape (N, M, 2).')
      return {
          'num_annotators': keypoints.shape[0],
          'num_keypoints': keypoints.shape[1],
          'keypoints': keypoints.ravel()
      }

    for sample in split_samples:
      example = {
          'image': os.path.join(samples_directory, sample['img']),
          'image/filename': sample['img'],
          'image/source': sample['img_source'],
          '2d_keypoints': _build_2d_keypoints(
              np.asarray(sample['2d_keypoints'], dtype=np.float32)),
          'mask': os.path.join(samples_directory, sample['mask']),
          'model': os.path.join(samples_directory, sample['model']),
          'model/source': sample['model_source'],
          '3d_keypoints': np.loadtxt(
              os.path.join(samples_directory, sample['3d_keypoints']),
              dtype=np.float32),
          'voxel': {
              'path': os.path.join(samples_directory, sample['voxel']),
              'key': 'voxel'
          },
          'pose': {
              'R': sample['rot_mat'],
              't': sample['trans_mat']
          },
          'camera': {
              'parameters': _build_camera(
                  sample['focal_length'],
                  sample['img_size'],
              ),
              'position_with_respect_to_object': sample['cam_position'],
              'inplane_rotation': sample['inplane_rotation'],
          },
          'category': self.CLASS_INDEX.index(sample['category']),
          'bbox': _build_bbox(sample['bbox'], sample['img_size']),
          'truncated': sample['truncated'],
          'occluded': sample['occluded'],
          'slightly_occluded': sample['slightly_occluded']
      }

      yield sample['img'], example
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """pix3d dataset.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import json import os import numpy as np import tensorflow as tf from tensorflow_datasets import features as tfds_features import tensorflow_datasets.public_api as tfds from tensorflow_graphics.datasets import features as tfg_features _CITATION = ''' @inproceedings{pix3d, title={Pix3D: Dataset and Methods for Single-Image 3D Shape Modeling}, author={Sun, Xingyuan and Wu, Jiajun and Zhang, Xiuming and Zhang, Zhoutong and Zhang, Chengkai and Xue, Tianfan and Tenenbaum, Joshua B and Freeman, William T}, booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)}, year={2018} } ''' _DESCRIPTION = ''' Pix3D is a large-scale dataset of diverse image-shape pairs with pixel-level 2D-3D alignment. It has wide applications in shape-related tasks including reconstruction, retrieval, viewpoint estimation, etc. Pix3D contains 10,069 2D-3D pairs of 395 distinct 3D shapes, categorised into nine object categories. Each sample comprises of an image, 3D shape represented as (non-watertight) triangle mesh and voxel grid, bounding-box, segmentation mask, intrinsic and extrinsic camera parameters and 2D and 3D key points. Notes: * The object and camera poses are provided with respect to the scene, whereas the camera is placed at the origin. 
Pix3D also provides the features `camera/position_with_respect_to_object` and `camera/inplane_rotation`. Those values are defined in object coordinates and will reproduce an image that is equivalent to the original image under a homography transformation. They are defined for viewer-centered algorithms whose predictions need to be rotated back to the canonical view for evaluations against ground truth shapes. This is necessary as most algorithms assume that the camera is looking at the object's center, the raw input images are usually cropped or transformed before sending into their pipeline. * There are two wrong segmentation masks in the annotations of the original Pix3D dataset (See https://github.com/xingyuansun/pix3d/issues/18 for details). We ignore those samples in this version of the dataset. However, if you want to use them, we provide own rendered segmentation masks in `tensorflow_graphics/datasets/pix3d/fixed_masks/`. Feel free to copy those two masks to your local Pix3D directory in `<PIX3D_HOME>/mask/table/`. Additionally, you need to add the indices of these samples to the split files located at `<TF Graphics Repository>/tensorflow_graphics/datasets/pix3d/splits`. The index `7953` needs to be appended to the train index and `9657` belongs to the test index. Train/Test split: Pix3D does not provide a standard train/test split. Therefore, this implementation adopts the S2 split from Mesh R-CNN (https://arxiv.org/abs/1906.02739, Sec. 4.2). This split ensures that the 3D models appearing in the train and test sets are disjoint. 
''' class Pix3d(tfds.core.GeneratorBasedBuilder): """Pix3D is a large-scale dataset of diverse image-shape pairs with pixel-level 2D-3D alignment.""" VERSION = tfds.core.Version('0.1.0') TRAIN_SPLIT_IDX = os.path.join(os.path.dirname(__file__), 'splits/pix3d_train.npy') TEST_SPLIT_IDX = os.path.join(os.path.dirname(__file__), 'splits/pix3d_test.npy') CLASS_INDEX = ['background', 'bed', 'bookcase', 'chair', 'desk', 'misc', 'sofa', 'table', 'tool', 'wardrobe'] def _info(self): return tfds.core.DatasetInfo( builder=self, # This is the description that will appear on the datasets page. description=_DESCRIPTION, # tfds.features.FeatureConnectors features=tfds.features.FeaturesDict({ 'image': tfds_features.Image(shape=(None, None, 3), dtype=tf.uint8), 'image/filename': tfds_features.Text(), 'image/source': tfds_features.Text(), '2d_keypoints': tfds_features.FeaturesDict({ 'num_annotators': tf.uint8, 'num_keypoints': tf.uint8, 'keypoints': tfds_features.Tensor(shape=(None,), dtype=tf.float32), }), 'mask': tfds_features.Image(shape=(None, None, 1), dtype=tf.uint8), 'model': tfg_features.TriangleMesh(), 'model/source': tfds_features.Text(), '3d_keypoints': tfds_features.Tensor(shape=(None, 3), dtype=tf.float32), 'voxel': tfg_features.VoxelGrid(shape=(128, 128, 128)), 'pose': tfg_features.Pose(), # pose of object w.r.t to world. 'camera': tfds_features.FeaturesDict({ 'parameters': tfg_features.Camera(), 'position_with_respect_to_object': tfds_features.Tensor( shape=(3,), dtype=tf.float32 ), 'inplane_rotation': tf.float32, }), 'category': tfds_features.ClassLabel( num_classes=len(self.CLASS_INDEX) ), 'bbox': tfds_features.BBoxFeature(), 'truncated': tf.bool, 'occluded': tf.bool, 'slightly_occluded': tf.bool, }), # If there's a common (input, target) tuple from the features, # specify them here. They'll be used if as_supervised=True in # builder.as_dataset. 
supervised_keys=None, # Homepage of the dataset for documentation homepage='http://pix3d.csail.mit.edu/', citation=_CITATION, ) def _split_generators(self, dl_manager): """Returns SplitGenerators.""" pix3d_dir = dl_manager.download_and_extract( 'http://pix3d.csail.mit.edu/data/pix3d.zip') return [ tfds.core.SplitGenerator( name=tfds.Split.TRAIN, gen_kwargs={ 'samples_directory': pix3d_dir, 'split_file': self.TRAIN_SPLIT_IDX }, ), tfds.core.SplitGenerator( name=tfds.Split.TEST, gen_kwargs={ 'samples_directory': pix3d_dir, 'split_file': self.TEST_SPLIT_IDX }, ), ] def _generate_examples(self, samples_directory, split_file): """Yields examples. As Pix3D does not come with a predefined train/test split, we adopt one from Mesh R-CNN. The split ensures that the 3D models appearing in the train and test sets are disjoint. Args: samples_directory: `str`, path to the directory where Pix3D is stored. split_file: `str`, path to .npy file containing the indices of the current split. """ with tf.io.gfile.GFile(os.path.join(samples_directory, 'pix3d.json'), mode='r') as pix3d_index: pix3d = json.load(pix3d_index) split_samples = map(pix3d.__getitem__, np.load(split_file)) def _build_bbox(box, img_size): """Create a BBox with correct order of coordinates. Args: box: Bounding box of the object as provided by Pix3d img_size: size of the image, in the format of [width, height] Returns: tfds.features.BBox. """ xmin, ymin, xmax, ymax = box width, height = img_size return tfds_features.BBox(ymin=ymin / height, xmin=xmin / width, ymax=ymax / height, xmax=xmax / width) def _build_camera(f, img_size): """Prepare features for `Camera` FeatureConnector. The pose originates from the official Pix3D GitHub repository and describes the camera's position with respect to the scene. The focal length is originally provided in mm, but is converted to pixels here using the fixed sensor width of 32 mm, which also originates from the Pix3D GitHub repository. 
Link to the official Pix3D repository: https://github.com/xingyuansun/pix3d. Args: f: float, denoting the focal length in mm. img_size: tuple of two floats, denoting the image height and width. Returns: Dictionary with all Camera Parameters. """ sensor_width = 32. return { 'pose': { 'R': np.array([[-1., 0., 0.], [0., -1., 0.], [0., 0., 1.]], dtype=np.float32), 't': np.zeros(3, dtype=np.float32) }, 'optical_center': (img_size[0] / 2, img_size[1] / 2), 'f': (f / sensor_width * img_size[0]) } def _build_2d_keypoints(keypoints): """Wraps the keypoint feature in a dict, because TFDS does not allow more than one unknown dimension. Args: keypoints: Array of dimension `[N, M, 2]`, where N denotes the number of annotators and M is the number of 2D keypoints. Keypoints are stored as (origin: top left corner; +x: rightward; +y: downward); [-1.0, -1.0] if an annotator marked this keypoint hard to label. Returns: Dictionary containing shape and flattened keypoints. """ if keypoints.ndim != 3 or keypoints.shape[-1] != 2: raise ValueError('2D keypoints should be in shape (N, M, 2).') return { 'num_annotators': keypoints.shape[0], 'num_keypoints': keypoints.shape[1], 'keypoints': keypoints.ravel() } for sample in split_samples: example = { 'image': os.path.join(samples_directory, sample['img']), 'image/filename': sample['img'], 'image/source': sample['img_source'], '2d_keypoints': _build_2d_keypoints( np.asarray(sample['2d_keypoints'], dtype=np.float32)), 'mask': os.path.join(samples_directory, sample['mask']), 'model': os.path.join(samples_directory, sample['model']), 'model/source': sample['model_source'], '3d_keypoints': np.loadtxt( os.path.join(samples_directory, sample['3d_keypoints']), dtype=np.float32), 'voxel': { 'path': os.path.join(samples_directory, sample['voxel']), 'key': 'voxel' }, 'pose': { 'R': sample['rot_mat'], 't': sample['trans_mat'] }, 'camera': { 'parameters': _build_camera( sample['focal_length'], sample['img_size'], ), 'position_with_respect_to_object': 
sample['cam_position'], 'inplane_rotation': sample['inplane_rotation'], }, 'category': self.CLASS_INDEX.index(sample['category']), 'bbox': _build_bbox(sample['bbox'], sample['img_size']), 'truncated': sample['truncated'], 'occluded': sample['occluded'], 'slightly_occluded': sample['slightly_occluded'] } yield sample['img'], example
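The `f` entry produced by `_build_camera` above follows the pinhole relation `f_px = f_mm / sensor_width_mm * image_width_px`, using Pix3D's fixed 32 mm sensor width. A minimal standalone sketch of that conversion (the function name and sample values are ours, for illustration only):

```python
def focal_length_mm_to_pixels(f_mm, image_width_px, sensor_width_mm=32.0):
    """Converts a focal length given in mm to pixels for a pinhole camera.

    Mirrors the conversion in the Pix3D loader above, which assumes a
    fixed 32 mm sensor width.
    """
    return f_mm / sensor_width_mm * image_width_px


# A 35 mm focal length on a 480 px wide image: 35 / 32 * 480 = 525 px.
print(focal_length_mm_to_pixels(35.0, 480))
```

A focal length equal to the sensor width maps to exactly one pixel per pixel of image width, which is a quick sanity check on the formula.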
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/rendering/opengl/BUILD
# OpenGL functionalities for tf-graphics. licenses(["notice"]) # Apache 2.0 package(default_visibility = ["//visibility:public"]) genrule( name = "rasterizer_op", srcs = [ "cleanup.h", "egl_offscreen_context.cc", "egl_offscreen_context.h", "egl_util.cc", "egl_util.h", "gl_program.cc", "gl_program.h", "gl_render_targets.cc", "gl_render_targets.h", "gl_shader_storage_buffer.cc", "gl_shader_storage_buffer.h", "macros.h", "rasterizer.cc", "rasterizer.h", "rasterizer_op.cc", "rasterizer_with_context.cc", "rasterizer_with_context.h", "thread_safe_resource_pool.h", ], outs = ["rasterizer_op.so"], cmd = "TF_CFLAGS=$$(python -c 'import tensorflow as tf; print(\" \".join(tf.sysconfig.get_compile_flags()))');\ TF_LFLAGS=$$(python -c 'import tensorflow as tf; print(\" \".join(tf.sysconfig.get_link_flags()))');\ g++ -std=c++11 -shared $(SRCS) -o $(OUTS) -fPIC $${TF_CFLAGS[@]} $${TF_LFLAGS[@]}\ -DUSE_OZONE -Wl,-L/usr/lib/x86_64-linux-gnu/mesa-egl -Wl,-L/usr/lib/x86_64-linux-gnu -Wl,-lEGL -Wl,-lGLESv2 -O2;\ VAR_OUTS=$(OUTS);\ VAR_GENDIR=$(GENDIR);\ cp $(OUTS) $(BASEDIR)/$${VAR_OUTS#$$VAR_GENDIR}", )
# OpenGL functionalities for tf-graphics. licenses(["notice"]) # Apache 2.0 package(default_visibility = ["//visibility:public"]) genrule( name = "rasterizer_op", srcs = [ "cleanup.h", "egl_offscreen_context.cc", "egl_offscreen_context.h", "egl_util.cc", "egl_util.h", "gl_program.cc", "gl_program.h", "gl_render_targets.cc", "gl_render_targets.h", "gl_shader_storage_buffer.cc", "gl_shader_storage_buffer.h", "macros.h", "rasterizer.cc", "rasterizer.h", "rasterizer_op.cc", "rasterizer_with_context.cc", "rasterizer_with_context.h", "thread_safe_resource_pool.h", ], outs = ["rasterizer_op.so"], cmd = "TF_CFLAGS=$$(python -c 'import tensorflow as tf; print(\" \".join(tf.sysconfig.get_compile_flags()))');\ TF_LFLAGS=$$(python -c 'import tensorflow as tf; print(\" \".join(tf.sysconfig.get_link_flags()))');\ g++ -std=c++11 -shared $(SRCS) -o $(OUTS) -fPIC $${TF_CFLAGS[@]} $${TF_LFLAGS[@]}\ -DUSE_OZONE -Wl,-L/usr/lib/x86_64-linux-gnu/mesa-egl -Wl,-L/usr/lib/x86_64-linux-gnu -Wl,-lEGL -Wl,-lGLESv2 -O2;\ VAR_OUTS=$(OUTS);\ VAR_GENDIR=$(GENDIR);\ cp $(OUTS) $(BASEDIR)/$${VAR_OUTS#$$VAR_GENDIR}", )
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/geometry/representation/triangle.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tensorflow triangle utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.math import vector from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def normal(v0, v1, v2, clockwise=False, normalize=True, name=None): """Computes face normals (triangles). Note: In the following, A1 to An are optional batch dimensions, which must be broadcast compatible. Args: v0: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the first vertex of a triangle. v1: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the second vertex of a triangle. v2: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the third vertex of a triangle. clockwise: Winding order to determine front-facing triangles. normalize: A `bool` indicating whether output normals should be normalized by the function. name: A name for this op. Defaults to "triangle_normal". Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents a normalized vector. Raises: ValueError: If the shape of `v0`, `v1`, or `v2` is not supported. 
""" with tf.compat.v1.name_scope(name, "triangle_normal", [v0, v1, v2]): v0 = tf.convert_to_tensor(value=v0) v1 = tf.convert_to_tensor(value=v1) v2 = tf.convert_to_tensor(value=v2) shape.check_static(tensor=v0, tensor_name="v0", has_dim_equals=(-1, 3)) shape.check_static(tensor=v1, tensor_name="v1", has_dim_equals=(-1, 3)) shape.check_static(tensor=v2, tensor_name="v2", has_dim_equals=(-1, 3)) shape.compare_batch_dimensions( tensors=(v0, v1, v2), last_axes=-2, broadcast_compatible=True) normal_vector = vector.cross(v1 - v0, v2 - v0, axis=-1) normal_vector = asserts.assert_nonzero_norm(normal_vector) if not clockwise: normal_vector *= -1.0 if normalize: return tf.nn.l2_normalize(normal_vector, axis=-1) return normal_vector def area(v0, v1, v2, name=None): """Computes triangle areas. Note: Computed triangle area = 0.5 * | e1 x e2 | where e1 and e2 are edges of triangle. A degenerate triangle will return 0 area, whereas the normal for a degenerate triangle is not defined. In the following, A1 to An are optional batch dimensions, which must be broadcast compatible. Args: v0: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the first vertex of a triangle. v1: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the second vertex of a triangle. v2: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the third vertex of a triangle. name: A name for this op. Defaults to "triangle_area". Returns: A tensor of shape `[A1, ..., An, 1]`, where the last dimension represents the triangle area. 
""" with tf.compat.v1.name_scope(name, "triangle_area", [v0, v1, v2]): v0 = tf.convert_to_tensor(value=v0) v1 = tf.convert_to_tensor(value=v1) v2 = tf.convert_to_tensor(value=v2) shape.check_static(tensor=v0, tensor_name="v0", has_dim_equals=(-1, 3)) shape.check_static(tensor=v1, tensor_name="v1", has_dim_equals=(-1, 3)) shape.check_static(tensor=v2, tensor_name="v2", has_dim_equals=(-1, 3)) shape.compare_batch_dimensions( tensors=(v0, v1, v2), last_axes=-2, broadcast_compatible=True) normals = vector.cross(v1 - v0, v2 - v0, axis=-1) return 0.5 * tf.linalg.norm(tensor=normals, axis=-1, keepdims=True) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tensorflow triangle utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.math import vector from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def normal(v0, v1, v2, clockwise=False, normalize=True, name=None): """Computes face normals (triangles). Note: In the following, A1 to An are optional batch dimensions, which must be broadcast compatible. Args: v0: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the first vertex of a triangle. v1: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the second vertex of a triangle. v2: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the third vertex of a triangle. clockwise: Winding order to determine front-facing triangles. normalize: A `bool` indicating whether output normals should be normalized by the function. name: A name for this op. Defaults to "triangle_normal". Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents a normalized vector. Raises: ValueError: If the shape of `v0`, `v1`, or `v2` is not supported. 
""" with tf.compat.v1.name_scope(name, "triangle_normal", [v0, v1, v2]): v0 = tf.convert_to_tensor(value=v0) v1 = tf.convert_to_tensor(value=v1) v2 = tf.convert_to_tensor(value=v2) shape.check_static(tensor=v0, tensor_name="v0", has_dim_equals=(-1, 3)) shape.check_static(tensor=v1, tensor_name="v1", has_dim_equals=(-1, 3)) shape.check_static(tensor=v2, tensor_name="v2", has_dim_equals=(-1, 3)) shape.compare_batch_dimensions( tensors=(v0, v1, v2), last_axes=-2, broadcast_compatible=True) normal_vector = vector.cross(v1 - v0, v2 - v0, axis=-1) normal_vector = asserts.assert_nonzero_norm(normal_vector) if not clockwise: normal_vector *= -1.0 if normalize: return tf.nn.l2_normalize(normal_vector, axis=-1) return normal_vector def area(v0, v1, v2, name=None): """Computes triangle areas. Note: Computed triangle area = 0.5 * | e1 x e2 | where e1 and e2 are edges of triangle. A degenerate triangle will return 0 area, whereas the normal for a degenerate triangle is not defined. In the following, A1 to An are optional batch dimensions, which must be broadcast compatible. Args: v0: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the first vertex of a triangle. v1: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the second vertex of a triangle. v2: A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents the third vertex of a triangle. name: A name for this op. Defaults to "triangle_area". Returns: A tensor of shape `[A1, ..., An, 1]`, where the last dimension represents the triangle area. 
""" with tf.compat.v1.name_scope(name, "triangle_area", [v0, v1, v2]): v0 = tf.convert_to_tensor(value=v0) v1 = tf.convert_to_tensor(value=v1) v2 = tf.convert_to_tensor(value=v2) shape.check_static(tensor=v0, tensor_name="v0", has_dim_equals=(-1, 3)) shape.check_static(tensor=v1, tensor_name="v1", has_dim_equals=(-1, 3)) shape.check_static(tensor=v2, tensor_name="v2", has_dim_equals=(-1, 3)) shape.compare_batch_dimensions( tensors=(v0, v1, v2), last_axes=-2, broadcast_compatible=True) normals = vector.cross(v1 - v0, v2 - v0, axis=-1) return 0.5 * tf.linalg.norm(tensor=normals, axis=-1, keepdims=True) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
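The `area` function above implements `area = 0.5 * |e1 x e2|` for the two triangle edges `e1 = v1 - v0` and `e2 = v2 - v0`. A pure-Python sketch of the same formula for a single (unbatched) triangle — our own helper, not part of the library:

```python
def triangle_area(v0, v1, v2):
    """Area of a 3D triangle: half the norm of the cross product of two edges."""
    e1 = [b - a for a, b in zip(v0, v1)]  # edge v0 -> v1
    e2 = [b - a for a, b in zip(v0, v2)]  # edge v0 -> v2
    # Cross product e1 x e2, component by component.
    cx = e1[1] * e2[2] - e1[2] * e2[1]
    cy = e1[2] * e2[0] - e1[0] * e2[2]
    cz = e1[0] * e2[1] - e1[1] * e2[0]
    return 0.5 * (cx * cx + cy * cy + cz * cz) ** 0.5


# A right triangle with legs of length 3 and 4 has area 6.
print(triangle_area((0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (0.0, 4.0, 0.0)))
```

As the docstring notes, a degenerate (collinear) triangle yields area 0 here, while the corresponding normal would be undefined.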
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/g3doc/build_docs.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Script to generate external api_docs for tf-graphics.""" # flake8: noqa from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from absl import app from absl import flags from tensorflow_docs.api_generator import generate_lib os.environ["TFG_DOC_IMPORTS"] = "1" import tensorflow_graphics as tfg # pylint: disable=g-import-not-at-top FLAGS = flags.FLAGS flags.DEFINE_string("output_dir", "/tmp/graphics_api", "Where to output the docs") flags.DEFINE_string( "code_url_prefix", "https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics", "The url prefix for links to code.") flags.DEFINE_bool("search_hints", True, "Include metadata search hints in the generated files") flags.DEFINE_string("site_path", "graphics/api_docs/python", "Path prefix in the _toc.yaml") def main(_): doc_generator = generate_lib.DocGenerator( root_title="Tensorflow Graphics", py_modules=[("tfg", tfg)], base_dir=os.path.dirname(tfg.__file__), search_hints=FLAGS.search_hints, code_url_prefix=FLAGS.code_url_prefix, site_path=FLAGS.site_path) doc_generator.build(output_dir=FLAGS.output_dir) if __name__ == "__main__": app.run(main)
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Script to generate external api_docs for tf-graphics.""" # flake8: noqa from __future__ import absolute_import from __future__ import division from __future__ import print_function import os from absl import app from absl import flags from tensorflow_docs.api_generator import generate_lib os.environ["TFG_DOC_IMPORTS"] = "1" import tensorflow_graphics as tfg # pylint: disable=g-import-not-at-top FLAGS = flags.FLAGS flags.DEFINE_string("output_dir", "/tmp/graphics_api", "Where to output the docs") flags.DEFINE_string( "code_url_prefix", "https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics", "The url prefix for links to code.") flags.DEFINE_bool("search_hints", True, "Include metadata search hints in the generated files") flags.DEFINE_string("site_path", "graphics/api_docs/python", "Path prefix in the _toc.yaml") def main(_): doc_generator = generate_lib.DocGenerator( root_title="Tensorflow Graphics", py_modules=[("tfg", tfg)], base_dir=os.path.dirname(tfg.__file__), search_hints=FLAGS.search_hints, code_url_prefix=FLAGS.code_url_prefix, site_path=FLAGS.site_path) doc_generator.build(output_dir=FLAGS.output_dir) if __name__ == "__main__": app.run(main)
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/math/math_helpers.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module contains math routines that are shared by across different modules.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import tensorflow as tf from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import safe_ops from tensorflow_graphics.util import shape def cartesian_to_spherical_coordinates(point_cartesian, eps=None, name=None): """Function to transform Cartesian coordinates to spherical coordinates. This function assumes a right handed coordinate system with `z` pointing up. When `x` and `y` are both `0`, the function outputs `0` for `phi`. Note that the function is not smooth when `x = y = 0`. Note: In the following, A1 to An are optional batch dimensions. Args: point_cartesian: A tensor of shape `[A1, ..., An, 3]`. In the last dimension, the data follows the `x`, `y`, `z` order. eps: A small `float`, to be added to the denominator. If left as `None`, its value is automatically selected using `point_cartesian.dtype`. name: A name for this op. Defaults to `cartesian_to_spherical_coordinates`. Returns: A tensor of shape `[A1, ..., An, 3]`. The last dimensions contains (`r`,`theta`,`phi`), where `r` is the sphere radius, `theta` is the polar angle and `phi` is the azimuthal angle. Returns `NaN` gradient if x = y = 0. 
""" with tf.compat.v1.name_scope(name, "cartesian_to_spherical_coordinates", [point_cartesian]): point_cartesian = tf.convert_to_tensor(value=point_cartesian) shape.check_static( tensor=point_cartesian, tensor_name="point_cartesian", has_dim_equals=(-1, 3)) x, y, z = tf.unstack(point_cartesian, axis=-1) radius = tf.norm(tensor=point_cartesian, axis=-1) theta = tf.acos( tf.clip_by_value(safe_ops.safe_unsigned_div(z, radius, eps), -1., 1.)) phi = tf.atan2(y, x) return tf.stack((radius, theta, phi), axis=-1) def _double_factorial_loop_body(n, result, two): result = tf.compat.v1.where(tf.greater_equal(n, two), result * n, result) return n - two, result, two def _double_factorial_loop_condition(n, result, two): del result # Unused return tf.cast(tf.math.count_nonzero(tf.greater_equal(n, two)), tf.bool) def double_factorial(n): """Computes the double factorial of `n`. Note: In the following, A1 to An are optional batch dimensions. Args: n: A tensor of shape `[A1, ..., An]` containing positive integer values. Returns: A tensor of shape `[A1, ..., An]` containing the double factorial of `n`. """ n = tf.convert_to_tensor(value=n) two = tf.ones_like(n) * 2 result = tf.ones_like(n) _, result, _ = tf.while_loop( cond=_double_factorial_loop_condition, body=_double_factorial_loop_body, loop_vars=[n, result, two]) return result def factorial(n): """Computes the factorial of `n`. Note: In the following, A1 to An are optional batch dimensions. Args: n: A tensor of shape `[A1, ..., An]`. Returns: A tensor of shape `[A1, ..., An]`. """ n = tf.convert_to_tensor(value=n) return tf.exp(tf.math.lgamma(n + 1)) def spherical_to_cartesian_coordinates(point_spherical, name=None): """Function to transform spherical coordinates to Cartesian coordinates. Note: In the following, A1 to An are optional batch dimensions. Args: point_spherical: A tensor of shape `[A1, ..., An, 3]`. 
The last dimension contains r, theta, and phi that respectively correspond to the radius, polar angle and azimuthal angle; r must be non-negative. name: A name for this op. Defaults to 'spherical_to_cartesian_coordinates'. Raises: tf.errors.InvalidArgumentError: If r, theta or phi contains out of range data. Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension contains the cartesian coordinates in x,y,z order. """ with tf.compat.v1.name_scope(name, "spherical_to_cartesian_coordinates", [point_spherical]): point_spherical = tf.convert_to_tensor(value=point_spherical) shape.check_static( tensor=point_spherical, tensor_name="point_spherical", has_dim_equals=(-1, 3)) r, theta, phi = tf.unstack(point_spherical, axis=-1) r = asserts.assert_all_above(r, 0) tmp = r * tf.sin(theta) x = tmp * tf.cos(phi) y = tmp * tf.sin(phi) z = r * tf.cos(theta) return tf.stack((x, y, z), axis=-1) def square_to_spherical_coordinates(point_2d, name=None): """Maps points from a unit square to a unit sphere. Note: In the following, A1 to An are optional batch dimensions. Args: point_2d: A tensor of shape `[A1, ..., An, 2]` with values in [0,1]. name: A name for this op. Defaults to "math_square_to_spherical_coordinates". Returns: A tensor of shape `[A1, ..., An, 3]` with [..., 0] equal to one (the unit radius), [..., 1] having values in [0.0, pi] and [..., 2] with values in [0.0, 2pi]. Raises: ValueError: if the shape of `point_2d` is not supported. InvalidArgumentError: if at least an element of `point_2d` is outside of [0,1]. 
""" with tf.compat.v1.name_scope(name, "math_square_to_spherical_coordinates", [point_2d]): point_2d = tf.convert_to_tensor(value=point_2d) shape.check_static( tensor=point_2d, tensor_name="point_2d", has_dim_equals=(-1, 2)) point_2d = asserts.assert_all_in_range( point_2d, 0.0, 1.0, open_bounds=False) x, y = tf.unstack(point_2d, axis=-1) theta = 2.0 * tf.acos(tf.sqrt(1.0 - x)) phi = 2.0 * np.pi * y return tf.stack((tf.ones_like(theta), theta, phi), axis=-1) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module contains math routines that are shared by across different modules.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import numpy as np import tensorflow as tf from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import safe_ops from tensorflow_graphics.util import shape def cartesian_to_spherical_coordinates(point_cartesian, eps=None, name=None): """Function to transform Cartesian coordinates to spherical coordinates. This function assumes a right handed coordinate system with `z` pointing up. When `x` and `y` are both `0`, the function outputs `0` for `phi`. Note that the function is not smooth when `x = y = 0`. Note: In the following, A1 to An are optional batch dimensions. Args: point_cartesian: A tensor of shape `[A1, ..., An, 3]`. In the last dimension, the data follows the `x`, `y`, `z` order. eps: A small `float`, to be added to the denominator. If left as `None`, its value is automatically selected using `point_cartesian.dtype`. name: A name for this op. Defaults to `cartesian_to_spherical_coordinates`. Returns: A tensor of shape `[A1, ..., An, 3]`. The last dimensions contains (`r`,`theta`,`phi`), where `r` is the sphere radius, `theta` is the polar angle and `phi` is the azimuthal angle. Returns `NaN` gradient if x = y = 0. 
""" with tf.compat.v1.name_scope(name, "cartesian_to_spherical_coordinates", [point_cartesian]): point_cartesian = tf.convert_to_tensor(value=point_cartesian) shape.check_static( tensor=point_cartesian, tensor_name="point_cartesian", has_dim_equals=(-1, 3)) x, y, z = tf.unstack(point_cartesian, axis=-1) radius = tf.norm(tensor=point_cartesian, axis=-1) theta = tf.acos( tf.clip_by_value(safe_ops.safe_unsigned_div(z, radius, eps), -1., 1.)) phi = tf.atan2(y, x) return tf.stack((radius, theta, phi), axis=-1) def _double_factorial_loop_body(n, result, two): result = tf.compat.v1.where(tf.greater_equal(n, two), result * n, result) return n - two, result, two def _double_factorial_loop_condition(n, result, two): del result # Unused return tf.cast(tf.math.count_nonzero(tf.greater_equal(n, two)), tf.bool) def double_factorial(n): """Computes the double factorial of `n`. Note: In the following, A1 to An are optional batch dimensions. Args: n: A tensor of shape `[A1, ..., An]` containing positive integer values. Returns: A tensor of shape `[A1, ..., An]` containing the double factorial of `n`. """ n = tf.convert_to_tensor(value=n) two = tf.ones_like(n) * 2 result = tf.ones_like(n) _, result, _ = tf.while_loop( cond=_double_factorial_loop_condition, body=_double_factorial_loop_body, loop_vars=[n, result, two]) return result def factorial(n): """Computes the factorial of `n`. Note: In the following, A1 to An are optional batch dimensions. Args: n: A tensor of shape `[A1, ..., An]`. Returns: A tensor of shape `[A1, ..., An]`. """ n = tf.convert_to_tensor(value=n) return tf.exp(tf.math.lgamma(n + 1)) def spherical_to_cartesian_coordinates(point_spherical, name=None): """Function to transform spherical coordinates to Cartesian coordinates. Note: In the following, A1 to An are optional batch dimensions. Args: point_spherical: A tensor of shape `[A1, ..., An, 3]`. 
The last dimension contains r, theta, and phi that respectively correspond to the radius, polar angle and azimuthal angle; r must be non-negative. name: A name for this op. Defaults to 'spherical_to_cartesian_coordinates'. Raises: tf.errors.InvalidArgumentError: If r, theta or phi contains out of range data. Returns: A tensor of shape `[A1, ..., An, 3]`, where the last dimension contains the Cartesian coordinates in x, y, z order. """ with tf.compat.v1.name_scope(name, "spherical_to_cartesian_coordinates", [point_spherical]): point_spherical = tf.convert_to_tensor(value=point_spherical) shape.check_static( tensor=point_spherical, tensor_name="point_spherical", has_dim_equals=(-1, 3)) r, theta, phi = tf.unstack(point_spherical, axis=-1) r = asserts.assert_all_above(r, 0) tmp = r * tf.sin(theta) x = tmp * tf.cos(phi) y = tmp * tf.sin(phi) z = r * tf.cos(theta) return tf.stack((x, y, z), axis=-1) def square_to_spherical_coordinates(point_2d, name=None): """Maps points from a unit square to a unit sphere. Note: In the following, A1 to An are optional batch dimensions. Args: point_2d: A tensor of shape `[A1, ..., An, 2]` with values in [0,1]. name: A name for this op. Defaults to "math_square_to_spherical_coordinates". Returns: A tensor of shape `[A1, ..., An, 3]` containing spherical coordinates (r, theta, phi), with r equal to 1, [..., 1] having values in [0.0, pi] and [..., 2] having values in [0.0, 2pi]. Raises: ValueError: if the shape of `point_2d` is not supported. InvalidArgumentError: if at least an element of `point_2d` is outside of [0,1]. 
""" with tf.compat.v1.name_scope(name, "math_square_to_spherical_coordinates", [point_2d]): point_2d = tf.convert_to_tensor(value=point_2d) shape.check_static( tensor=point_2d, tensor_name="point_2d", has_dim_equals=(-1, 2)) point_2d = asserts.assert_all_in_range( point_2d, 0.0, 1.0, open_bounds=False) x, y = tf.unstack(point_2d, axis=-1) theta = 2.0 * tf.acos(tf.sqrt(1.0 - x)) phi = 2.0 * np.pi * y return tf.stack((tf.ones_like(theta), theta, phi), axis=-1) # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/projects/pointnet/train.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Training loop for PointNet v1 on modelnet40.""" # pylint: disable=missing-function-docstring import tensorflow as tf from tensorflow_graphics.datasets import modelnet40 from tensorflow_graphics.nn.layer import pointnet import tqdm # pylint: disable=g-bad-import-order from . import augment # pylint: disable=g-bad-import-order from . import helpers # pylint: disable=g-bad-import-order # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ parser = helpers.ArgumentParser() parser.add("--batch_size", 32) parser.add("--num_epochs", 250) parser.add("--num_points", 2048, help="subsampled (max 2048)") parser.add("--learning_rate", 1e-3, help="initial Adam learning rate") parser.add("--lr_decay", True, help="enable learning rate decay") parser.add("--bn_decay", .5, help="batch norm decay momentum") parser.add("--tb_every", 100, help="tensorboard frequency (iterations)") parser.add("--ev_every", 308, help="evaluation frequency (iterations)") parser.add("--augment", True, help="use augmentations") parser.add("--tqdm", True, help="enable the progress bar") FLAGS = parser.parse_args() # ------------------------------------------------------------------------------ # 
------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ if FLAGS.lr_decay: lr_scheduler = tf.keras.optimizers.schedules.ExponentialDecay( FLAGS.learning_rate, decay_steps=6250, #< 200,000 / 32 (batch size) (from original pointnet) decay_rate=0.7, staircase=True) optimizer = tf.keras.optimizers.Adam(learning_rate=lr_scheduler) else: optimizer = tf.keras.optimizers.Adam(learning_rate=FLAGS.learning_rate) # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ model = pointnet.PointNetVanillaClassifier( num_classes=40, momentum=FLAGS.bn_decay) # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ @tf.function def wrapped_tf_function(points, label): """Performs one step of minimization of the loss.""" # --- subsampling (order DOES matter) points = points[0:FLAGS.num_points, ...] 
# --- augmentation if FLAGS.augment: points = tf.map_fn(augment.rotate, points) points = augment.jitter(points) # --- training with tf.GradientTape() as tape: logits = model(points, training=True) loss = model.loss(label, logits) variables = model.trainable_variables gradients = tape.gradient(loss, variables) optimizer.apply_gradients(zip(gradients, variables)) return loss def train(example): """Performs one step of minimization of the loss and populates the summary.""" points = example["points"] label = example["label"] step = optimizer.iterations.numpy() # --- optimize loss = wrapped_tf_function(points, label) if step % FLAGS.tb_every == 0: tf.summary.scalar(name="loss", data=loss, step=step) # --- report rate in summaries if FLAGS.lr_decay and step % FLAGS.tb_every == 0: tf.summary.scalar(name="learning_rate", data=lr_scheduler(step), step=step) # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ def evaluate(): """Identify the best accuracy reached during training.""" step = optimizer.iterations.numpy() if "best_accuracy" not in evaluate.__dict__: evaluate.best_accuracy = 0 if step % FLAGS.ev_every != 0: return evaluate.best_accuracy aggregator = tf.keras.metrics.SparseCategoricalAccuracy() for example in ds_test: points, labels = example["points"], example["label"] logits = model(points, training=False) aggregator.update_state(labels, logits) accuracy = aggregator.result() evaluate.best_accuracy = max(accuracy, evaluate.best_accuracy) tf.summary.scalar(name="accuracy_test", data=accuracy, step=step) return evaluate.best_accuracy # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ 
ds_train, info = modelnet40.ModelNet40.load(split="train", with_info=True) num_examples = info.splits["train"].num_examples ds_train = ds_train.shuffle(num_examples, reshuffle_each_iteration=True) ds_train = ds_train.repeat(FLAGS.num_epochs) ds_train = ds_train.batch(FLAGS.batch_size) ds_test = modelnet40.ModelNet40.load(split="test").batch(FLAGS.batch_size) # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ try: helpers.setup_tensorboard(FLAGS) helpers.summary_command(parser, FLAGS) total = tf.data.experimental.cardinality(ds_train).numpy() pbar = tqdm.tqdm(ds_train, leave=False, total=total, disable=not FLAGS.tqdm) for train_example in pbar: train(train_example) best_accuracy = evaluate() pbar.set_postfix_str("best accuracy: {:.3f}".format(best_accuracy)) except KeyboardInterrupt: helpers.handle_keyboard_interrupt(FLAGS)
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Training loop for PointNet v1 on modelnet40.""" # pylint: disable=missing-function-docstring import tensorflow as tf from tensorflow_graphics.datasets import modelnet40 from tensorflow_graphics.nn.layer import pointnet import tqdm # pylint: disable=g-bad-import-order from . import augment # pylint: disable=g-bad-import-order from . import helpers # pylint: disable=g-bad-import-order # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ parser = helpers.ArgumentParser() parser.add("--batch_size", 32) parser.add("--num_epochs", 250) parser.add("--num_points", 2048, help="subsampled (max 2048)") parser.add("--learning_rate", 1e-3, help="initial Adam learning rate") parser.add("--lr_decay", True, help="enable learning rate decay") parser.add("--bn_decay", .5, help="batch norm decay momentum") parser.add("--tb_every", 100, help="tensorboard frequency (iterations)") parser.add("--ev_every", 308, help="evaluation frequency (iterations)") parser.add("--augment", True, help="use augmentations") parser.add("--tqdm", True, help="enable the progress bar") FLAGS = parser.parse_args() # ------------------------------------------------------------------------------ # 
------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ if FLAGS.lr_decay: lr_scheduler = tf.keras.optimizers.schedules.ExponentialDecay( FLAGS.learning_rate, decay_steps=6250, #< 200,000 / 32 (batch size) (from original pointnet) decay_rate=0.7, staircase=True) optimizer = tf.keras.optimizers.Adam(learning_rate=lr_scheduler) else: optimizer = tf.keras.optimizers.Adam(learning_rate=FLAGS.learning_rate) # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ model = pointnet.PointNetVanillaClassifier( num_classes=40, momentum=FLAGS.bn_decay) # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ @tf.function def wrapped_tf_function(points, label): """Performs one step of minimization of the loss.""" # --- subsampling (order DOES matter) points = points[0:FLAGS.num_points, ...] 
# --- augmentation if FLAGS.augment: points = tf.map_fn(augment.rotate, points) points = augment.jitter(points) # --- training with tf.GradientTape() as tape: logits = model(points, training=True) loss = model.loss(label, logits) variables = model.trainable_variables gradients = tape.gradient(loss, variables) optimizer.apply_gradients(zip(gradients, variables)) return loss def train(example): """Performs one step of minimization of the loss and populates the summary.""" points = example["points"] label = example["label"] step = optimizer.iterations.numpy() # --- optimize loss = wrapped_tf_function(points, label) if step % FLAGS.tb_every == 0: tf.summary.scalar(name="loss", data=loss, step=step) # --- report rate in summaries if FLAGS.lr_decay and step % FLAGS.tb_every == 0: tf.summary.scalar(name="learning_rate", data=lr_scheduler(step), step=step) # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ def evaluate(): """Identify the best accuracy reached during training.""" step = optimizer.iterations.numpy() if "best_accuracy" not in evaluate.__dict__: evaluate.best_accuracy = 0 if step % FLAGS.ev_every != 0: return evaluate.best_accuracy aggregator = tf.keras.metrics.SparseCategoricalAccuracy() for example in ds_test: points, labels = example["points"], example["label"] logits = model(points, training=False) aggregator.update_state(labels, logits) accuracy = aggregator.result() evaluate.best_accuracy = max(accuracy, evaluate.best_accuracy) tf.summary.scalar(name="accuracy_test", data=accuracy, step=step) return evaluate.best_accuracy # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ 
ds_train, info = modelnet40.ModelNet40.load(split="train", with_info=True) num_examples = info.splits["train"].num_examples ds_train = ds_train.shuffle(num_examples, reshuffle_each_iteration=True) ds_train = ds_train.repeat(FLAGS.num_epochs) ds_train = ds_train.batch(FLAGS.batch_size) ds_test = modelnet40.ModelNet40.load(split="test").batch(FLAGS.batch_size) # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ # ------------------------------------------------------------------------------ try: helpers.setup_tensorboard(FLAGS) helpers.summary_command(parser, FLAGS) total = tf.data.experimental.cardinality(ds_train).numpy() pbar = tqdm.tqdm(ds_train, leave=False, total=total, disable=not FLAGS.tqdm) for train_example in pbar: train(train_example) best_accuracy = evaluate() pbar.set_postfix_str("best accuracy: {:.3f}".format(best_accuracy)) except KeyboardInterrupt: helpers.handle_keyboard_interrupt(FLAGS)
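The training loop above decays the learning rate with `ExponentialDecay(staircase=True)`. A small pure-Python sketch of what that schedule computes (the helper name is ours, not a Keras API) can make the constants concrete:

```python
def staircase_exponential_decay(step, initial_lr=1e-3, decay_steps=6250, decay_rate=0.7):
    # With staircase=True the exponent is the integer number of completed
    # decay periods, so the rate is held constant within each period.
    return initial_lr * decay_rate ** (step // decay_steps)

print(staircase_exponential_decay(0))     # 0.001
print(staircase_exponential_decay(6250))  # ~0.0007 (first decay applied)
```

With `batch_size=32`, 6250 steps correspond to the 200,000 examples noted in the comment, so the rate drops by 0.7 roughly once per 200k training examples seen.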
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/math/interpolation/tests/slerp_test.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for slerp.""" from absl.testing import parameterized import numpy as np import tensorflow as tf from tensorflow_graphics.math.interpolation import slerp from tensorflow_graphics.util import test_case _SQRT2_DIV2 = np.sqrt(2.0).astype(np.float32) * 0.5 class SlerpTest(test_case.TestCase): def _pick_random_quaternion(self): """Creates a random quaternion with random shape.""" tensor_size = np.random.randint(3) tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist() return np.random.normal(size=tensor_shape + [4]) def _quaternion_slerp_helper(self, q1, q2, p): """Calls interpolate function for quaternions.""" return slerp.interpolate(q1, q2, p, slerp.InterpolationType.QUATERNION) def _vector_slerp_helper(self, q1, q2, p): """Calls interpolate function for vectors.""" return slerp.interpolate(q1, q2, p, slerp.InterpolationType.VECTOR) def test_interpolate_raises_exceptions(self): """Tests if unknown methods raise exceptions.""" vector1 = self._pick_random_quaternion() self.assert_exception_is_raised( slerp.interpolate, error_msg="Unknown interpolation type supplied.", shapes=[], vector1=vector1, vector2=-vector1, percent=0.1, method=2) def test_interpolate_with_weights_quaternion_preset(self): """Compares interpolate to quaternion_weights + interpolate_with_weights.""" q1 = self._pick_random_quaternion() q2 = q1 + tf.ones_like(q1) q1 = tf.nn.l2_normalize(q1, axis=-1) q2 = 
tf.nn.l2_normalize(q2, axis=-1) weight1, weight2 = slerp.quaternion_weights(q1, q2, 0.25) qf = slerp.interpolate_with_weights(q1, q2, weight1, weight2) qi = slerp.interpolate( q1, q2, 0.25, method=slerp.InterpolationType.QUATERNION) self.assertAllClose(qf, qi, atol=1e-9) def test_interpolate_with_weights_vector_preset(self): """Compares interpolate to vector_weights + interpolate_with_weights.""" # Any quaternion is a valid vector q1 = self._pick_random_quaternion() q2 = q1 + tf.ones_like(q1) weight1, weight2 = slerp.vector_weights(q1, q2, 0.75) qf = slerp.interpolate_with_weights(q1, q2, weight1, weight2) qi = slerp.interpolate(q1, q2, 0.75, method=slerp.InterpolationType.VECTOR) self.assertAllClose(qf, qi, atol=1e-9) @parameterized.parameters( # Orthogonal, same hemisphere (((1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.5,)), ((_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0),)), (((_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0), (0.0, 0.0, _SQRT2_DIV2, _SQRT2_DIV2), (0.5,)), ((0.5, 0.5, 0.5, 0.5),)), # Same hemisphere (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (0.0, 0.0, _SQRT2_DIV2, _SQRT2_DIV2), (0.5,)), ((0.408248290463863, 0.0, 0.816496580927726, 0.408248290463863),)), # Same quaternions (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (0.75,)), ((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0),)), # Anti-polar - small percent (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0), (0.2,)), ((-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0),)), # Anti-polar - large percent (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0), (0.8,)), ((-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0),)), # Extrapolation - same hemisphere (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0), (-0.5,)), ((0.408248290463863, -0.408248290463863, 0.816496580927726, 0.0),)), # Extrapolation - opposite hemisphere (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0), (-0.5,)), ((-0.408248290463863, 
-0.408248290463863, -0.816496580927726, 0.0),)), ) def test_quaternion_slerp_preset(self, test_inputs, test_outputs): """Tests the accuracy of qslerp against numpy-quaternion values.""" test_inputs = [np.array(test_input).astype(np.float32) for test_input in test_inputs] self.assert_output_is_correct(self._quaternion_slerp_helper, test_inputs, test_outputs, tile=False) def test_unnormalized_quaternion_weights_exception_raised(self): """Tests if quaternion_weights raise exceptions for unnormalized input.""" q1 = self._pick_random_quaternion() q2 = tf.nn.l2_normalize(q1, axis=-1) p = tf.constant((0.5), dtype=q1.dtype) with self.assertRaises(tf.errors.InvalidArgumentError): self.evaluate(slerp.quaternion_weights(q1, q2, p)) @parameterized.parameters( ((4,), (4,), (1,)), ((None, 4), (None, 4), (None, 1)), ((None, 4), (None, 4), (None, 4)), ) def test_quaternion_weights_exception_not_raised(self, *shapes): """Tests that valid input shapes do not raise exceptions for qslerp.""" self.assert_exception_is_not_raised(slerp.quaternion_weights, shapes) @parameterized.parameters( ("must have exactly 4 dimensions in axis -1", (3,), (4,), (1,)), ("must have exactly 4 dimensions in axis -1", (4,), (3,), (1,)), ("Not all batch dimensions are broadcast-compatible.", (2, 4), (3, 4), (1,)), ("Not all batch dimensions are broadcast-compatible.", (1, 4), (3, 4), (2,)), ) def test_quaternion_weights_exception_raised(self, error_msg, *shapes): """Tests that the shape exceptions are properly raised for qslerp.""" self.assert_exception_is_raised(slerp.quaternion_weights, error_msg, shapes) @parameterized.parameters( # Same quaternions (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (0.75,)), ( (0.25,), (0.75,), )), # Anti-polar - small percent (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0), (0.2,)), ( (-0.8,), (0.2,), )), # Anti-polar - large percent (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0), 
(0.8,)), ( (-0.2,), (0.8,), )), ) def test_quaternion_weights_preset(self, test_inputs, test_outputs): """Tests the accuracy of quaternion_weights for problem cases.""" test_inputs = [np.array(test_input).astype(np.float32) for test_input in test_inputs] self.assert_output_is_correct(slerp.quaternion_weights, test_inputs, test_outputs, tile=False) @parameterized.parameters( ((3,), (3,), (1,)), ((None, 4), (None, 4), (None, 1)), ) def test_vector_weights_exception_not_raised(self, *shapes): """Tests that valid inputs do not raise exceptions for vector_weights.""" self.assert_exception_is_not_raised(slerp.vector_weights, shapes) @parameterized.parameters( ("must have the same number of dimensions in axes", (None, 3), (None, 4), (1,)), ("must have the same number of dimensions in axes", (2, 3), (2, 4), (1,)), ("Not all batch dimensions are broadcast-compatible.", (2, 3), (3, 3), (1,)), ("Not all batch dimensions are broadcast-compatible.", (1, 3), (3, 3), (2,)), ) def test_vector_weights_exception_raised(self, error_msg, *shapes): """Tests that shape exceptions are properly raised for vector_weights.""" self.assert_exception_is_raised(slerp.vector_weights, error_msg, shapes) @parameterized.parameters( # Orthogonal, same hemisphere (((1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.5,)), ((_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0),)), (((_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0), (0.0, 0.0, _SQRT2_DIV2, _SQRT2_DIV2), (0.5,)), ((0.5, 0.5, 0.5, 0.5),)), # Same hemisphere (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (0.0, 0.0, _SQRT2_DIV2, _SQRT2_DIV2), (0.5,)), ((0.408248290463863, 0.0, 0.816496580927726, 0.408248290463863),)), # Same vectors (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (0.75,)), ((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0),)), # Anti-polar - equal weights (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0), (0.5,)), ((0.0, 0.0, 0.0, 0.0),)), # Anti-polar - small percent (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), 
(-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0), (0.25,)), ((0.5, 0.0, 0.5, 0.0),)), # Extrapolation - same hemisphere (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0), (-1.0,)), ((0.0, -_SQRT2_DIV2, _SQRT2_DIV2, 0.0),)), # Extrapolation - opposite hemisphere (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0), (1.5,)), ((-_SQRT2_DIV2, -0.0, -_SQRT2_DIV2, 0.0),)), # Unnormalized vectors (((4.0, 0.0), (0.0, 1.0), (0.5,)), ((2.82842712, _SQRT2_DIV2),)), ) def test_vector_slerp_preset(self, test_inputs, test_outputs): """Tests the accuracy of vector slerp results.""" test_inputs = [np.array(test_input).astype(np.float32) for test_input in test_inputs] self.assert_output_is_correct(self._vector_slerp_helper, test_inputs, test_outputs, tile=False) def test_vector_weights_reduce_to_lerp_preset(self): """Tests if vector slerp reduces to lerp for identical vectors as input.""" q1 = tf.constant((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0)) q2 = tf.constant((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0)) p = tf.constant((0.75,), dtype=q1.dtype) w1, w2 = slerp.vector_weights(q1, q2, p) self.assertAllClose(w1, (0.25,), rtol=1e-6) self.assertAllClose(w2, (0.75,), rtol=1e-6) if __name__ == "__main__": test_case.main()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for slerp.""" from absl.testing import parameterized import numpy as np import tensorflow as tf from tensorflow_graphics.math.interpolation import slerp from tensorflow_graphics.util import test_case _SQRT2_DIV2 = np.sqrt(2.0).astype(np.float32) * 0.5 class SlerpTest(test_case.TestCase): def _pick_random_quaternion(self): """Creates a random quaternion with random shape.""" tensor_size = np.random.randint(3) tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist() return np.random.normal(size=tensor_shape + [4]) def _quaternion_slerp_helper(self, q1, q2, p): """Calls interpolate function for quaternions.""" return slerp.interpolate(q1, q2, p, slerp.InterpolationType.QUATERNION) def _vector_slerp_helper(self, q1, q2, p): """Calls interpolate function for vectors.""" return slerp.interpolate(q1, q2, p, slerp.InterpolationType.VECTOR) def test_interpolate_raises_exceptions(self): """Tests if unknown methods raise exceptions.""" vector1 = self._pick_random_quaternion() self.assert_exception_is_raised( slerp.interpolate, error_msg="Unknown interpolation type supplied.", shapes=[], vector1=vector1, vector2=-vector1, percent=0.1, method=2) def test_interpolate_with_weights_quaternion_preset(self): """Compares interpolate to quaternion_weights + interpolate_with_weights.""" q1 = self._pick_random_quaternion() q2 = q1 + tf.ones_like(q1) q1 = tf.nn.l2_normalize(q1, axis=-1) q2 = 
tf.nn.l2_normalize(q2, axis=-1) weight1, weight2 = slerp.quaternion_weights(q1, q2, 0.25) qf = slerp.interpolate_with_weights(q1, q2, weight1, weight2) qi = slerp.interpolate( q1, q2, 0.25, method=slerp.InterpolationType.QUATERNION) self.assertAllClose(qf, qi, atol=1e-9) def test_interpolate_with_weights_vector_preset(self): """Compares interpolate to vector_weights + interpolate_with_weights.""" # Any quaternion is a valid vector q1 = self._pick_random_quaternion() q2 = q1 + tf.ones_like(q1) weight1, weight2 = slerp.vector_weights(q1, q2, 0.75) qf = slerp.interpolate_with_weights(q1, q2, weight1, weight2) qi = slerp.interpolate(q1, q2, 0.75, method=slerp.InterpolationType.VECTOR) self.assertAllClose(qf, qi, atol=1e-9) @parameterized.parameters( # Orthogonal, same hemisphere (((1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.5,)), ((_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0),)), (((_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0), (0.0, 0.0, _SQRT2_DIV2, _SQRT2_DIV2), (0.5,)), ((0.5, 0.5, 0.5, 0.5),)), # Same hemisphere (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (0.0, 0.0, _SQRT2_DIV2, _SQRT2_DIV2), (0.5,)), ((0.408248290463863, 0.0, 0.816496580927726, 0.408248290463863),)), # Same quaternions (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (0.75,)), ((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0),)), # Anti-polar - small percent (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0), (0.2,)), ((-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0),)), # Anti-polar - large percent (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0), (0.8,)), ((-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0),)), # Extrapolation - same hemisphere (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0), (-0.5,)), ((0.408248290463863, -0.408248290463863, 0.816496580927726, 0.0),)), # Extrapolation - opposite hemisphere (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0), (-0.5,)), ((-0.408248290463863, 
-0.408248290463863, -0.816496580927726, 0.0),)), ) def test_quaternion_slerp_preset(self, test_inputs, test_outputs): """Tests the accuracy of qslerp against numpy-quaternion values.""" test_inputs = [np.array(test_input).astype(np.float32) for test_input in test_inputs] self.assert_output_is_correct(self._quaternion_slerp_helper, test_inputs, test_outputs, tile=False) def test_unnormalized_quaternion_weights_exception_raised(self): """Tests if quaternion_weights raise exceptions for unnormalized input.""" q1 = self._pick_random_quaternion() q2 = tf.nn.l2_normalize(q1, axis=-1) p = tf.constant((0.5), dtype=q1.dtype) with self.assertRaises(tf.errors.InvalidArgumentError): self.evaluate(slerp.quaternion_weights(q1, q2, p)) @parameterized.parameters( ((4,), (4,), (1,)), ((None, 4), (None, 4), (None, 1)), ((None, 4), (None, 4), (None, 4)), ) def test_quaternion_weights_exception_not_raised(self, *shapes): """Tests that valid input shapes do not raise exceptions for qslerp.""" self.assert_exception_is_not_raised(slerp.quaternion_weights, shapes) @parameterized.parameters( ("must have exactly 4 dimensions in axis -1", (3,), (4,), (1,)), ("must have exactly 4 dimensions in axis -1", (4,), (3,), (1,)), ("Not all batch dimensions are broadcast-compatible.", (2, 4), (3, 4), (1,)), ("Not all batch dimensions are broadcast-compatible.", (1, 4), (3, 4), (2,)), ) def test_quaternion_weights_exception_raised(self, error_msg, *shapes): """Tests that the shape exceptions are properly raised for qslerp.""" self.assert_exception_is_raised(slerp.quaternion_weights, error_msg, shapes) @parameterized.parameters( # Same quaternions (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (0.75,)), ( (0.25,), (0.75,), )), # Anti-polar - small percent (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0), (0.2,)), ( (-0.8,), (0.2,), )), # Anti-polar - large percent (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0), 
(0.8,)), ( (-0.2,), (0.8,), )), ) def test_quaternion_weights_preset(self, test_inputs, test_outputs): """Tests the accuracy of quaternion_weights for problem cases.""" test_inputs = [np.array(test_input).astype(np.float32) for test_input in test_inputs] self.assert_output_is_correct(slerp.quaternion_weights, test_inputs, test_outputs, tile=False) @parameterized.parameters( ((3,), (3,), (1,)), ((None, 4), (None, 4), (None, 1)), ) def test_vector_weights_exception_not_raised(self, *shapes): """Tests that valid inputs do not raise exceptions for vector_weights.""" self.assert_exception_is_not_raised(slerp.vector_weights, shapes) @parameterized.parameters( ("must have the same number of dimensions in axes", (None, 3), (None, 4), (1,)), ("must have the same number of dimensions in axes", (2, 3), (2, 4), (1,)), ("Not all batch dimensions are broadcast-compatible.", (2, 3), (3, 3), (1,)), ("Not all batch dimensions are broadcast-compatible.", (1, 3), (3, 3), (2,)), ) def test_vector_weights_exception_raised(self, error_msg, *shapes): """Tests that shape exceptions are properly raised for vector_weights.""" self.assert_exception_is_raised(slerp.vector_weights, error_msg, shapes) @parameterized.parameters( # Orthogonal, same hemisphere (((1.0, 0.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0), (0.5,)), ((_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0),)), (((_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0), (0.0, 0.0, _SQRT2_DIV2, _SQRT2_DIV2), (0.5,)), ((0.5, 0.5, 0.5, 0.5),)), # Same hemisphere (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (0.0, 0.0, _SQRT2_DIV2, _SQRT2_DIV2), (0.5,)), ((0.408248290463863, 0.0, 0.816496580927726, 0.408248290463863),)), # Same vectors (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (0.75,)), ((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0),)), # Anti-polar - equal weights (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0), (0.5,)), ((0.0, 0.0, 0.0, 0.0),)), # Anti-polar - small percent (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), 
(-_SQRT2_DIV2, 0.0, -_SQRT2_DIV2, 0.0), (0.25,)), ((0.5, 0.0, 0.5, 0.0),)), # Extrapolation - same hemisphere (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0), (-1.0,)), ((0.0, -_SQRT2_DIV2, _SQRT2_DIV2, 0.0),)), # Extrapolation - opposite hemisphere (((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0), (-_SQRT2_DIV2, _SQRT2_DIV2, 0.0, 0.0), (1.5,)), ((-_SQRT2_DIV2, -0.0, -_SQRT2_DIV2, 0.0),)), # Unnormalized vectors (((4.0, 0.0), (0.0, 1.0), (0.5,)), ((2.82842712, _SQRT2_DIV2),)), ) def test_vector_slerp_preset(self, test_inputs, test_outputs): """Tests the accuracy of vector slerp results.""" test_inputs = [np.array(test_input).astype(np.float32) for test_input in test_inputs] self.assert_output_is_correct(self._vector_slerp_helper, test_inputs, test_outputs, tile=False) def test_vector_weights_reduce_to_lerp_preset(self): """Tests if vector slerp reduces to lerp for identical vectors as input.""" q1 = tf.constant((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0)) q2 = tf.constant((_SQRT2_DIV2, 0.0, _SQRT2_DIV2, 0.0)) p = tf.constant((0.75,), dtype=q1.dtype) w1, w2 = slerp.vector_weights(q1, q2, p) self.assertAllClose(w1, (0.25,), rtol=1e-6) self.assertAllClose(w2, (0.75,), rtol=1e-6) if __name__ == "__main__": test_case.main()
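The weight conventions exercised by the presets above (lerp fallback for identical quaternions, a negated first weight for anti-polar inputs) can be sketched in plain numpy. `quaternion_slerp_weights` below is an illustrative stand-in, not the library's `slerp.quaternion_weights`:

```python
import numpy as np

def quaternion_slerp_weights(q1, q2, t, eps=1e-7):
    """Illustrative slerp weights for unit quaternions (not the library code)."""
    dot = float(np.dot(q1, q2))
    if abs(dot) > 1.0 - eps:
        # Nearly parallel or anti-polar inputs: slerp degenerates to lerp,
        # with the first weight negated in the anti-polar case.
        return float((1.0 - t) * np.sign(dot)), float(t)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (float(np.sin((1.0 - t) * theta) / np.sin(theta)),
            float(np.sin(t * theta) / np.sin(theta)))

q = np.array([np.sqrt(0.5), 0.0, np.sqrt(0.5), 0.0])
print(quaternion_slerp_weights(q, q, 0.75))   # identical inputs: (0.25, 0.75)
print(quaternion_slerp_weights(q, -q, 0.2))   # anti-polar inputs: (-0.8, 0.2)
```

The two printed cases match the "same quaternions" and "anti-polar, small percent" presets in `test_quaternion_weights_preset` above.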
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
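The batch-dimension fix described above can be illustrated with a small numpy sketch; the shapes and names here are invented for illustration and are not the rasterization backend's actual code:

```python
import numpy as np

# Unbatched per-call parameters, as in the fixed use case.
screen_dimensions = np.array([640.0, 480.0])   # shape (2,), no batch dimension
lower_left_corner = np.array([0.0, 0.0])       # shape (2,), no batch dimension

# Batched tensor: a batch of 5 point sets with 100 2D points each.
points = np.ones((5, 100, 2))

# Broadcasting aligns trailing axes, so the unbatched parameters combine
# with the batched tensor without explicit tiling:
screen_points = (points - lower_left_corner) * screen_dimensions

# Giving outputs an explicit channel dimension, as the change does for
# `mask` and `triangle_index`:
mask = np.ones((5, 480, 640), dtype=bool)
mask = mask[..., np.newaxis]

print(screen_points.shape, mask.shape)  # (5, 100, 2) (5, 480, 640, 1)
```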
./tensorflow_graphics/datasets/shapenet/fakes/taxonomy.json
[{ "synsetId": "02691156", "name": "airplane,aeroplane,plane", "children": [ "02690373", "02842573", "02867715", "03174079", "03335030", "03595860", "04012084", "04160586", "20000000", "20000001", "20000002" ], "numInstances": 4045 }, { "synsetId": "02690373", "name": "airliner", "children": [ "03809312", "04583620" ], "numInstances": 1490 }, { "synsetId": "03809312", "name": "narrowbody aircraft,narrow-body aircraft,narrow-body", "children": [], "numInstances": 14 }, {"synsetId": "04583620"}, {"synsetId": "02842573"}, {"synsetId": "02867715"}, {"synsetId": "03174079"}, {"synsetId": "03335030"}, {"synsetId": "03595860"}, {"synsetId": "03321419"}, {"synsetId": "03604311"}, {"synsetId": "04012084"}, {"synsetId": "04160586"}, {"synsetId": "03373611"}, {"synsetId": "20000000"}, {"synsetId": "20000001"}, {"synsetId": "20000002"}, {"synsetId": "02747177"}, {"synsetId": "02773838"}, {"synsetId": "03986949"}, {"synsetId": "02801938"}, {"synsetId": "03482405"}, {"synsetId": "03050864"}, {"synsetId": "04582349"}, {"synsetId": "02808440"}, {"synsetId": "02818832"}, {"synsetId": "02831724"}, {"synsetId": "02920083"}, {"synsetId": "02920259"}, {"synsetId": "03114504"}, {"synsetId": "03115762"}, {"synsetId": "03388549"}, {"synsetId": "03482252"}, {"synsetId": "03962852"}, {"synsetId": "04222210"}, {"synsetId": "03540914"}, {"synsetId": "20000004"}, {"synsetId": "20000005"}, {"synsetId": "20000006"}, {"synsetId": "20000007"}, {"synsetId": "02828884"}, {"synsetId": "03360622"}, {"synsetId": "03891251"}, {"synsetId": "03920867"}, {"synsetId": "04177820"}, {"synsetId": "04590021"}, {"synsetId": "02834778"}, {"synsetId": "02843684"}, {"synsetId": "02871439"}, {"synsetId": "02876657"}, {"synsetId": "02823428"}, {"synsetId": "03359566"}, {"synsetId": "02952374"}, {"synsetId": "03603722"}, {"synsetId": "03923379"}, {"synsetId": "03983396"}, {"synsetId": "04591713"}, {"synsetId": "02880940"}, {"synsetId": "04263257"}, {"synsetId": "02924116"}, {"synsetId": "04146614"}, {"synsetId": 
"04487081"}, {"synsetId": "02933112"}, {"synsetId": "03018349"}, {"synsetId": "03237340"}, {"synsetId": "03742115"}, {"synsetId": "20000008"}, {"synsetId": "20000009"}, {"synsetId": "20000010"}, {"synsetId": "20000011"}, {"synsetId": "20000012"}, {"synsetId": "20000013"}, {"synsetId": "02942699"}, {"synsetId": "02884994"}, {"synsetId": "03196062"}, {"synsetId": "04569063"}, {"synsetId": "03358726"}, {"synsetId": "03789171"}, {"synsetId": "02946921"}, {"synsetId": "02823510"}, {"synsetId": "04255586"}, {"synsetId": "02954340"}, {"synsetId": "02799323"}, {"synsetId": "03049924"}, {"synsetId": "03610682"}, {"synsetId": "04387095"}, {"synsetId": "02958343"}, {"synsetId": "02701002"}, {"synsetId": "02814533"}, {"synsetId": "02930766"}, {"synsetId": "03100240"}, {"synsetId": "03119396"}, {"synsetId": "03141065"}, {"synsetId": "03881534"}, {"synsetId": "03498781"}, {"synsetId": "03543394"}, {"synsetId": "03594945"}, {"synsetId": "03670208"}, {"synsetId": "03770679"}, {"synsetId": "03870105"}, {"synsetId": "04037443"}, {"synsetId": "04322801"}, {"synsetId": "04097373"}, {"synsetId": "04166281"}, {"synsetId": "04285008"}, {"synsetId": "04285965"}, {"synsetId": "04459122"}, {"synsetId": "03001627"}, {"synsetId": "02738535"}, {"synsetId": "02957862"}, {"synsetId": "03262932"}, {"synsetId": "04593077"}, {"synsetId": "03786621"}, {"synsetId": "04062428"}, {"synsetId": "03002210"}, {"synsetId": "04429376"}, {"synsetId": "03002711"}, {"synsetId": "03260849"}, {"synsetId": "03376595"}, {"synsetId": "02946270"}, {"synsetId": "03168217"}, {"synsetId": "03632729"}, {"synsetId": "03649674"}, {"synsetId": "04099969"}, {"synsetId": "04331277"}, {"synsetId": "04590933"}, {"synsetId": "04373704"}, {"synsetId": "04576002"}, {"synsetId": "20000015"}, {"synsetId": "20000016"}, {"synsetId": "20000018"}, {"synsetId": "20000019"}, {"synsetId": "20000020"}, {"synsetId": "20000021"}, {"synsetId": "20000022"}, {"synsetId": "20000023"}, {"synsetId": "20000024"}, {"synsetId": "20000025"}, 
{"synsetId": "20000026"}, {"synsetId": "20000027"}, {"synsetId": "03046257"}, {"synsetId": "02694662"}, {"synsetId": "03909406"}, {"synsetId": "03452594"}, {"synsetId": "04548280"}, {"synsetId": "03085013"}, {"synsetId": "03207941"}, {"synsetId": "03211117"}, {"synsetId": "03196598"}, {"synsetId": "03676759"}, {"synsetId": "03211616"}, {"synsetId": "03361380"}, {"synsetId": "03782190"}, {"synsetId": "03085219"}, {"synsetId": "04152593"}, {"synsetId": "02769075"}, {"synsetId": "03085602"}, {"synsetId": "03261776"}, {"synsetId": "03325088"}, {"synsetId": "03775636"}, {"synsetId": "04559451"}, {"synsetId": "03337140"}, {"synsetId": "04529681"}, {"synsetId": "03467517"}, {"synsetId": "02676566"}, {"synsetId": "03513137"}, {"synsetId": "03379051"}, {"synsetId": "03492922"}, {"synsetId": "04265428"}, {"synsetId": "03593526"}, {"synsetId": "04522168"}, {"synsetId": "03624134"}, {"synsetId": "02812949"}, {"synsetId": "03158885"}, {"synsetId": "03636649"}, {"synsetId": "03367059"}, {"synsetId": "04380533"}, {"synsetId": "03642806"}, {"synsetId": "03691459"}, {"synsetId": "04349401"}, {"synsetId": "04502670"}, {"synsetId": "04599124"}, {"synsetId": "03710193"}, {"synsetId": "03759954"}, {"synsetId": "03761084"}, {"synsetId": "03790512"}, {"synsetId": "03769722"}, {"synsetId": "03785016"}, {"synsetId": "04466871"}, {"synsetId": "03797390"}, {"synsetId": "02824058"}, {"synsetId": "03063599"}, {"synsetId": "03928116"}, {"synsetId": "03452741"}, {"synsetId": "03086457"}, {"synsetId": "04515003"}, {"synsetId": "03938244"}, {"synsetId": "03948459"}, {"synsetId": "04086273"}, {"synsetId": "03991062"}, {"synsetId": "03957315"}, {"synsetId": "04004475"}, {"synsetId": "03280644"}, {"synsetId": "03643737"}, {"synsetId": "04074963"}, {"synsetId": "04090263"}, {"synsetId": "02961451"}, {"synsetId": "04250224"}, {"synsetId": "04099429"}, {"synsetId": "03773504"}, {"synsetId": "02693413"}, {"synsetId": "02781338"}, {"synsetId": "03466162"}, {"synsetId": "02929923"}, {"synsetId": 
"04363210"}, {"synsetId": "04225987"}, {"synsetId": "04256520"}, {"synsetId": "03100346"}, {"synsetId": "03164605"}, {"synsetId": "03015149"}, {"synsetId": "04344873"}, {"synsetId": "03165096"}, {"synsetId": "03693474"}, {"synsetId": "04177755"}, {"synsetId": "20000028"}, {"synsetId": "20000029"}, {"synsetId": "20000030"}, {"synsetId": "04330267"}, {"synsetId": "04379243"}, {"synsetId": "02699629"}, {"synsetId": "02874214"}, {"synsetId": "02894337"}, {"synsetId": "02964075"}, {"synsetId": "02964196"}, {"synsetId": "03063968"}, {"synsetId": "03090000"}, {"synsetId": "03092883"}, {"synsetId": "03116530"}, {"synsetId": "02789487"}, {"synsetId": "04255768"}, {"synsetId": "04591631"}, {"synsetId": "03011741"}, {"synsetId": "04061681"}, {"synsetId": "03179701"}, {"synsetId": "04164868"}, {"synsetId": "04608329"}, {"synsetId": "03238586"}, {"synsetId": "03246933"}, {"synsetId": "03620967"}, {"synsetId": "03850492"}, {"synsetId": "03904060"}, {"synsetId": "04436012"}, {"synsetId": "03982430"}, {"synsetId": "04301000"}, {"synsetId": "03653583"}, {"synsetId": "04381587"}, {"synsetId": "04398951"}, {"synsetId": "04603729"}, {"synsetId": "03231368"}, {"synsetId": "04600486"}, {"synsetId": "03630262"}, {"synsetId": "20000036"}, {"synsetId": "20000037"}, {"synsetId": "20000038"}, {"synsetId": "20000039"}, {"synsetId": "20000040"}, {"synsetId": "20000041"}, {"synsetId": "04401088"}, {"synsetId": "03179910"}, {"synsetId": "03488438"}, {"synsetId": "04044498"}, {"synsetId": "02992529"}, {"synsetId": "04460130"}, {"synsetId": "02814860"}, {"synsetId": "02826886"}, {"synsetId": "03029197"}, {"synsetId": "03047052"}, {"synsetId": "03519387"}, {"synsetId": "04028581"}, {"synsetId": "04206790"}, {"synsetId": "04220250"}, {"synsetId": "04312432"}, {"synsetId": "04361260"}, {"synsetId": "04501947"}, {"synsetId": "04556948"}, {"synsetId": "03347617"}, {"synsetId": "04468005"}, {"synsetId": "02971579"}, {"synsetId": "03394480"}, {"synsetId": "03896233"}, {"synsetId": "03078802"}, 
{"synsetId": "04349306"}, {"synsetId": "04530566"}, {"synsetId": "02858304"}, {"synsetId": "02792552"}, {"synsetId": "03545470"}, {"synsetId": "03981566"}, {"synsetId": "02947660"}, {"synsetId": "03329663"}, {"synsetId": "03464628"}, {"synsetId": "03790230"}, {"synsetId": "02932891"}, {"synsetId": "03859170"}, {"synsetId": "04273569"}, {"synsetId": "03939178"}, {"synsetId": "03977592"}, {"synsetId": "04024983"}, {"synsetId": "04095210"}, {"synsetId": "04158807"}, {"synsetId": "04244997"}, {"synsetId": "02951358"}, {"synsetId": "03254374"}, {"synsetId": "03609235"}, {"synsetId": "03199901"}, {"synsetId": "04115456"}, {"synsetId": "04409128"}, {"synsetId": "03351262"}, {"synsetId": "03900194"}, {"synsetId": "04128837"}, {"synsetId": "02793199"}, {"synsetId": "03045228"}, {"synsetId": "04128499"}, {"synsetId": "02981792"}, {"synsetId": "04194289"}, {"synsetId": "02965300"}, {"synsetId": "03095699"}, {"synsetId": "03845190"}, {"synsetId": "03541269"}, {"synsetId": "03896103"}, {"synsetId": "03673027"}, {"synsetId": "03141327"}, {"synsetId": "03947888"}, {"synsetId": "04146862"}, {"synsetId": "04224543"}, {"synsetId": "04309348"}, {"synsetId": "04409011"}, {"synsetId": "04552696"}, {"synsetId": "02687172"}, {"synsetId": "02956393"}, {"synsetId": "03140900"}, {"synsetId": "02811618"}, {"synsetId": "03180504"}, {"synsetId": "03180732"}, {"synsetId": "03465151"}, {"synsetId": "03718212"}, {"synsetId": "04348184"}, {"synsetId": "04347754"}, {"synsetId": "02755529"}, {"synsetId": "03811295"}, {"synsetId": "04363082"}, {"synsetId": "04567746"}, {"synsetId": "04610013"}, {"synsetId": "04554684"}, {"synsetId": "04591713"}]
[{ "synsetId": "02691156", "name": "airplane,aeroplane,plane", "children": [ "02690373", "02842573", "02867715", "03174079", "03335030", "03595860", "04012084", "04160586", "20000000", "20000001", "20000002" ], "numInstances": 4045 }, { "synsetId": "02690373", "name": "airliner", "children": [ "03809312", "04583620" ], "numInstances": 1490 }, { "synsetId": "03809312", "name": "narrowbody aircraft,narrow-body aircraft,narrow-body", "children": [], "numInstances": 14 }, {"synsetId": "04583620"}, {"synsetId": "02842573"}, {"synsetId": "02867715"}, {"synsetId": "03174079"}, {"synsetId": "03335030"}, {"synsetId": "03595860"}, {"synsetId": "03321419"}, {"synsetId": "03604311"}, {"synsetId": "04012084"}, {"synsetId": "04160586"}, {"synsetId": "03373611"}, {"synsetId": "20000000"}, {"synsetId": "20000001"}, {"synsetId": "20000002"}, {"synsetId": "02747177"}, {"synsetId": "02773838"}, {"synsetId": "03986949"}, {"synsetId": "02801938"}, {"synsetId": "03482405"}, {"synsetId": "03050864"}, {"synsetId": "04582349"}, {"synsetId": "02808440"}, {"synsetId": "02818832"}, {"synsetId": "02831724"}, {"synsetId": "02920083"}, {"synsetId": "02920259"}, {"synsetId": "03114504"}, {"synsetId": "03115762"}, {"synsetId": "03388549"}, {"synsetId": "03482252"}, {"synsetId": "03962852"}, {"synsetId": "04222210"}, {"synsetId": "03540914"}, {"synsetId": "20000004"}, {"synsetId": "20000005"}, {"synsetId": "20000006"}, {"synsetId": "20000007"}, {"synsetId": "02828884"}, {"synsetId": "03360622"}, {"synsetId": "03891251"}, {"synsetId": "03920867"}, {"synsetId": "04177820"}, {"synsetId": "04590021"}, {"synsetId": "02834778"}, {"synsetId": "02843684"}, {"synsetId": "02871439"}, {"synsetId": "02876657"}, {"synsetId": "02823428"}, {"synsetId": "03359566"}, {"synsetId": "02952374"}, {"synsetId": "03603722"}, {"synsetId": "03923379"}, {"synsetId": "03983396"}, {"synsetId": "04591713"}, {"synsetId": "02880940"}, {"synsetId": "04263257"}, {"synsetId": "02924116"}, {"synsetId": "04146614"}, {"synsetId": 
"04487081"}, {"synsetId": "02933112"}, {"synsetId": "03018349"}, {"synsetId": "03237340"}, {"synsetId": "03742115"}, {"synsetId": "20000008"}, {"synsetId": "20000009"}, {"synsetId": "20000010"}, {"synsetId": "20000011"}, {"synsetId": "20000012"}, {"synsetId": "20000013"}, {"synsetId": "02942699"}, {"synsetId": "02884994"}, {"synsetId": "03196062"}, {"synsetId": "04569063"}, {"synsetId": "03358726"}, {"synsetId": "03789171"}, {"synsetId": "02946921"}, {"synsetId": "02823510"}, {"synsetId": "04255586"}, {"synsetId": "02954340"}, {"synsetId": "02799323"}, {"synsetId": "03049924"}, {"synsetId": "03610682"}, {"synsetId": "04387095"}, {"synsetId": "02958343"}, {"synsetId": "02701002"}, {"synsetId": "02814533"}, {"synsetId": "02930766"}, {"synsetId": "03100240"}, {"synsetId": "03119396"}, {"synsetId": "03141065"}, {"synsetId": "03881534"}, {"synsetId": "03498781"}, {"synsetId": "03543394"}, {"synsetId": "03594945"}, {"synsetId": "03670208"}, {"synsetId": "03770679"}, {"synsetId": "03870105"}, {"synsetId": "04037443"}, {"synsetId": "04322801"}, {"synsetId": "04097373"}, {"synsetId": "04166281"}, {"synsetId": "04285008"}, {"synsetId": "04285965"}, {"synsetId": "04459122"}, {"synsetId": "03001627"}, {"synsetId": "02738535"}, {"synsetId": "02957862"}, {"synsetId": "03262932"}, {"synsetId": "04593077"}, {"synsetId": "03786621"}, {"synsetId": "04062428"}, {"synsetId": "03002210"}, {"synsetId": "04429376"}, {"synsetId": "03002711"}, {"synsetId": "03260849"}, {"synsetId": "03376595"}, {"synsetId": "02946270"}, {"synsetId": "03168217"}, {"synsetId": "03632729"}, {"synsetId": "03649674"}, {"synsetId": "04099969"}, {"synsetId": "04331277"}, {"synsetId": "04590933"}, {"synsetId": "04373704"}, {"synsetId": "04576002"}, {"synsetId": "20000015"}, {"synsetId": "20000016"}, {"synsetId": "20000018"}, {"synsetId": "20000019"}, {"synsetId": "20000020"}, {"synsetId": "20000021"}, {"synsetId": "20000022"}, {"synsetId": "20000023"}, {"synsetId": "20000024"}, {"synsetId": "20000025"}, 
{"synsetId": "20000026"}, {"synsetId": "20000027"}, {"synsetId": "03046257"}, {"synsetId": "02694662"}, {"synsetId": "03909406"}, {"synsetId": "03452594"}, {"synsetId": "04548280"}, {"synsetId": "03085013"}, {"synsetId": "03207941"}, {"synsetId": "03211117"}, {"synsetId": "03196598"}, {"synsetId": "03676759"}, {"synsetId": "03211616"}, {"synsetId": "03361380"}, {"synsetId": "03782190"}, {"synsetId": "03085219"}, {"synsetId": "04152593"}, {"synsetId": "02769075"}, {"synsetId": "03085602"}, {"synsetId": "03261776"}, {"synsetId": "03325088"}, {"synsetId": "03775636"}, {"synsetId": "04559451"}, {"synsetId": "03337140"}, {"synsetId": "04529681"}, {"synsetId": "03467517"}, {"synsetId": "02676566"}, {"synsetId": "03513137"}, {"synsetId": "03379051"}, {"synsetId": "03492922"}, {"synsetId": "04265428"}, {"synsetId": "03593526"}, {"synsetId": "04522168"}, {"synsetId": "03624134"}, {"synsetId": "02812949"}, {"synsetId": "03158885"}, {"synsetId": "03636649"}, {"synsetId": "03367059"}, {"synsetId": "04380533"}, {"synsetId": "03642806"}, {"synsetId": "03691459"}, {"synsetId": "04349401"}, {"synsetId": "04502670"}, {"synsetId": "04599124"}, {"synsetId": "03710193"}, {"synsetId": "03759954"}, {"synsetId": "03761084"}, {"synsetId": "03790512"}, {"synsetId": "03769722"}, {"synsetId": "03785016"}, {"synsetId": "04466871"}, {"synsetId": "03797390"}, {"synsetId": "02824058"}, {"synsetId": "03063599"}, {"synsetId": "03928116"}, {"synsetId": "03452741"}, {"synsetId": "03086457"}, {"synsetId": "04515003"}, {"synsetId": "03938244"}, {"synsetId": "03948459"}, {"synsetId": "04086273"}, {"synsetId": "03991062"}, {"synsetId": "03957315"}, {"synsetId": "04004475"}, {"synsetId": "03280644"}, {"synsetId": "03643737"}, {"synsetId": "04074963"}, {"synsetId": "04090263"}, {"synsetId": "02961451"}, {"synsetId": "04250224"}, {"synsetId": "04099429"}, {"synsetId": "03773504"}, {"synsetId": "02693413"}, {"synsetId": "02781338"}, {"synsetId": "03466162"}, {"synsetId": "02929923"}, {"synsetId": 
"04363210"}, {"synsetId": "04225987"}, {"synsetId": "04256520"}, {"synsetId": "03100346"}, {"synsetId": "03164605"}, {"synsetId": "03015149"}, {"synsetId": "04344873"}, {"synsetId": "03165096"}, {"synsetId": "03693474"}, {"synsetId": "04177755"}, {"synsetId": "20000028"}, {"synsetId": "20000029"}, {"synsetId": "20000030"}, {"synsetId": "04330267"}, {"synsetId": "04379243"}, {"synsetId": "02699629"}, {"synsetId": "02874214"}, {"synsetId": "02894337"}, {"synsetId": "02964075"}, {"synsetId": "02964196"}, {"synsetId": "03063968"}, {"synsetId": "03090000"}, {"synsetId": "03092883"}, {"synsetId": "03116530"}, {"synsetId": "02789487"}, {"synsetId": "04255768"}, {"synsetId": "04591631"}, {"synsetId": "03011741"}, {"synsetId": "04061681"}, {"synsetId": "03179701"}, {"synsetId": "04164868"}, {"synsetId": "04608329"}, {"synsetId": "03238586"}, {"synsetId": "03246933"}, {"synsetId": "03620967"}, {"synsetId": "03850492"}, {"synsetId": "03904060"}, {"synsetId": "04436012"}, {"synsetId": "03982430"}, {"synsetId": "04301000"}, {"synsetId": "03653583"}, {"synsetId": "04381587"}, {"synsetId": "04398951"}, {"synsetId": "04603729"}, {"synsetId": "03231368"}, {"synsetId": "04600486"}, {"synsetId": "03630262"}, {"synsetId": "20000036"}, {"synsetId": "20000037"}, {"synsetId": "20000038"}, {"synsetId": "20000039"}, {"synsetId": "20000040"}, {"synsetId": "20000041"}, {"synsetId": "04401088"}, {"synsetId": "03179910"}, {"synsetId": "03488438"}, {"synsetId": "04044498"}, {"synsetId": "02992529"}, {"synsetId": "04460130"}, {"synsetId": "02814860"}, {"synsetId": "02826886"}, {"synsetId": "03029197"}, {"synsetId": "03047052"}, {"synsetId": "03519387"}, {"synsetId": "04028581"}, {"synsetId": "04206790"}, {"synsetId": "04220250"}, {"synsetId": "04312432"}, {"synsetId": "04361260"}, {"synsetId": "04501947"}, {"synsetId": "04556948"}, {"synsetId": "03347617"}, {"synsetId": "04468005"}, {"synsetId": "02971579"}, {"synsetId": "03394480"}, {"synsetId": "03896233"}, {"synsetId": "03078802"}, 
{"synsetId": "04349306"}, {"synsetId": "04530566"}, {"synsetId": "02858304"}, {"synsetId": "02792552"}, {"synsetId": "03545470"}, {"synsetId": "03981566"}, {"synsetId": "02947660"}, {"synsetId": "03329663"}, {"synsetId": "03464628"}, {"synsetId": "03790230"}, {"synsetId": "02932891"}, {"synsetId": "03859170"}, {"synsetId": "04273569"}, {"synsetId": "03939178"}, {"synsetId": "03977592"}, {"synsetId": "04024983"}, {"synsetId": "04095210"}, {"synsetId": "04158807"}, {"synsetId": "04244997"}, {"synsetId": "02951358"}, {"synsetId": "03254374"}, {"synsetId": "03609235"}, {"synsetId": "03199901"}, {"synsetId": "04115456"}, {"synsetId": "04409128"}, {"synsetId": "03351262"}, {"synsetId": "03900194"}, {"synsetId": "04128837"}, {"synsetId": "02793199"}, {"synsetId": "03045228"}, {"synsetId": "04128499"}, {"synsetId": "02981792"}, {"synsetId": "04194289"}, {"synsetId": "02965300"}, {"synsetId": "03095699"}, {"synsetId": "03845190"}, {"synsetId": "03541269"}, {"synsetId": "03896103"}, {"synsetId": "03673027"}, {"synsetId": "03141327"}, {"synsetId": "03947888"}, {"synsetId": "04146862"}, {"synsetId": "04224543"}, {"synsetId": "04309348"}, {"synsetId": "04409011"}, {"synsetId": "04552696"}, {"synsetId": "02687172"}, {"synsetId": "02956393"}, {"synsetId": "03140900"}, {"synsetId": "02811618"}, {"synsetId": "03180504"}, {"synsetId": "03180732"}, {"synsetId": "03465151"}, {"synsetId": "03718212"}, {"synsetId": "04348184"}, {"synsetId": "04347754"}, {"synsetId": "02755529"}, {"synsetId": "03811295"}, {"synsetId": "04363082"}, {"synsetId": "04567746"}, {"synsetId": "04610013"}, {"synsetId": "04554684"}, {"synsetId": "04591713"}]
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/projects/cvxnet/lib/utils.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import collections from os import path import numpy as np import scipy as sp from skimage import measure import tensorflow.compat.v1 as tf from tensorflow_graphics.projects.cvxnet.lib import datasets from tensorflow_graphics.projects.cvxnet.lib import models from tensorflow_graphics.projects.cvxnet.lib.libmise import mise import trimesh Stats = collections.namedtuple("Stats", ["iou", "chamfer", "fscore"]) SYSNET_CLASSES = { "02691156": "airplane", "02933112": "cabinet", "03001627": "chair", "03636649": "lamp", "04090263": "rifle", "04379243": "table", "04530566": "watercraft", "02828884": "bench", "02958343": "car", "03211117": "display", "03691459": "speaker", "04256520": "sofa", "04401088": "telephone", "all": "all", } def define_flags(): """Define command line flags.""" flags = tf.app.flags # Model flags flags.DEFINE_enum("model", "multiconvex", list(k for k in models.model_dict.keys()), "Name of the model.") flags.DEFINE_float("sharpness", 75., "Sharpness term.") flags.DEFINE_integer("n_parts", 50, "Number of convexes used.") flags.DEFINE_integer("n_half_planes", 25, "Number of half spaces used.") flags.DEFINE_integer("latent_size", 256, "The size of latent code.") flags.DEFINE_integer("dims", 3, "The dimension of query points.")
flags.DEFINE_bool("image_input", False, "Use color images as input if True.") flags.DEFINE_float("vis_scale", 1.3, "Scale of bbox used when extracting meshes.") flags.DEFINE_float("level_set", 0.5, "Level set used for extracting surfaces.") # Dataset flags flags.DEFINE_enum("dataset", "shapenet", list(k for k in datasets.dataset_dict.keys()), "Name of the dataset.") flags.DEFINE_integer("image_h", 137, "The height of the color images.") flags.DEFINE_integer("image_w", 137, "The width of the color images.") flags.DEFINE_integer("image_d", 3, "The channels of color images.") flags.DEFINE_integer("depth_h", 224, "The height of depth images.") flags.DEFINE_integer("depth_w", 224, "The width of depth images.") flags.DEFINE_integer("depth_d", 20, "The number of depth views.") flags.DEFINE_integer("n_views", 24, "The number of color image views.") flags.DEFINE_string("data_dir", None, "The base directory to load data from.") flags.mark_flag_as_required("data_dir") flags.DEFINE_string("obj_class", "*", "Object class used from dataset.") # Training flags flags.DEFINE_float("lr", 1e-4, "Start learning rate.") flags.DEFINE_string( "train_dir", None, "The base directory to save training info and " "checkpoints.") flags.DEFINE_integer("save_every", 20000, "The number of steps to save checkpoint.") flags.DEFINE_integer("max_steps", 800000, "The number of steps of training.") flags.DEFINE_integer("batch_size", 32, "Batch size.") flags.DEFINE_integer("sample_bbx", 1024, "The number of bounding box sample points.") flags.DEFINE_integer("sample_surf", 1024, "The number of surface sample points.") flags.DEFINE_float("weight_overlap", 0.1, "Weight of overlap_loss.") flags.DEFINE_float("weight_balance", 0.01, "Weight of balance_loss.") flags.DEFINE_float("weight_center", 0.001, "Weight of center_loss.") flags.mark_flag_as_required("train_dir") # Eval flags flags.DEFINE_bool("extract_mesh", False, "Extract meshes and save to disk if True.") flags.DEFINE_bool("surface_metrics", False,
"Measure surface metrics and save to csv if True.") flags.DEFINE_string("mesh_dir", None, "Path to load ground truth meshes.") flags.DEFINE_string("trans_dir", None, "Path to load pred-to-target transformations.") flags.DEFINE_bool("eval_once", False, "Evaluate the model only once if True.") def mesh_name_helper(name): name = name[0].decode("utf-8") split = name.find("-") cls_name = name[:split] obj_name = name[split + 1:] return cls_name, obj_name def extract_mesh(input_val, params, indicators, input_holder, params_holder, points_holder, sess, args): """Extracting meshes from an indicator function. Args: input_val: np.array, [1, height, width, channel], input image. params: tf.Operation, hyperplane parameter hook. indicators: tf.Operation, indicator hook. input_holder: tf.Placeholder, input image placeholder. params_holder: tf.Placeholder, hyperplane parameter placeholder. points_holder: tf.Placeholder, query point placeholder. sess: tf.Session, running sess. args: tf.app.flags.FLAGS, configurations. Returns: mesh: trimesh.Trimesh, the extracted mesh. """ mesh_extractor = mise.MISE(64, 1, args.level_set) points = mesh_extractor.query() params_val = sess.run(params, {input_holder: input_val}) while points.shape[0] != 0: orig_points = points points = points.astype(np.float32) points = ( (np.expand_dims(points, axis=0) / mesh_extractor.resolution - 0.5) * args.vis_scale) n_points = points.shape[1] values = [] for i in range(0, n_points, 100000): # Add this to prevent OOM. 
value = sess.run(indicators, { params_holder: params_val, points_holder: points[:, i:i + 100000] }) values.append(value) values = np.concatenate(values, axis=1) values = values[0, :, 0].astype(np.float64) mesh_extractor.update(orig_points, values) points = mesh_extractor.query() value_grid = mesh_extractor.to_dense() value_grid = np.pad(value_grid, 1, "constant", constant_values=-1e6) verts, faces, normals, unused_var = measure.marching_cubes_lewiner( value_grid, min(args.level_set, value_grid.max() * 0.75)) del normals verts -= 1 verts /= np.array([ value_grid.shape[0] - 3, value_grid.shape[1] - 3, value_grid.shape[2] - 3 ], dtype=np.float32) verts = args.vis_scale * (verts - 0.5) faces = np.stack([faces[..., 1], faces[..., 0], faces[..., 2]], axis=-1) return trimesh.Trimesh(vertices=verts, faces=faces) def transform_mesh(mesh, name, trans_dir): """Transform mesh back to the same coordinate of ground truth. Args: mesh: trimesh.Trimesh, predicted mesh before transformation. name: Tensor, hash name of the mesh as recorded in the dataset. trans_dir: string, path to the directory for loading transformations. Returns: mesh: trimesh.Trimesh, the transformed mesh. """ if trans_dir is None: raise ValueError("Need to specify args.trans_dir for loading pred-to-target" "transformations.") cls_name, obj_name = mesh_name_helper(name) with tf.io.gfile.GFile( path.join(trans_dir, "test", cls_name, obj_name, "occnet_to_gaps.txt"), "r") as fin: tx = np.loadtxt(fin).reshape([4, 4]) mesh.apply_transform(np.linalg.inv(tx)) return mesh def save_mesh(mesh, name, eval_dir): """Save a mesh to disk. Args: mesh: trimesh.Trimesh, the mesh to save. name: Tensor, hash name of the mesh as recorded in the dataset. eval_dir: string, path to the directory to save the mesh. 
""" cls_name, obj_name = mesh_name_helper(name) cls_dir = path.join(eval_dir, "meshes", cls_name) if not tf.io.gfile.isdir(cls_dir): tf.io.gfile.makedirs(cls_dir) with tf.io.gfile.GFile(path.join(cls_dir, obj_name + ".obj"), "w") as fout: mesh.export(fout, file_type="obj") def distance_field_helper(source, target): target_kdtree = sp.spatial.cKDTree(target) distances, unused_var = target_kdtree.query(source, n_jobs=-1) return distances def compute_surface_metrics(mesh, name, mesh_dir): """Compute surface metrics (chamfer distance and f-score) for one example. Args: mesh: trimesh.Trimesh, the mesh to evaluate. name: Tensor, hash name of the mesh as recorded in the dataset. mesh_dir: string, path to the directory for loading ground truth meshes. Returns: chamfer: float, chamfer distance. fscore: float, f-score. """ if mesh_dir is None: raise ValueError("Need to specify args.mesh_dir for loading ground truth.") cls_name, obj_name = mesh_name_helper(name) with tf.io.gfile.GFile( path.join(mesh_dir, "test", cls_name, obj_name, "model_occnet.ply"), "rb", ) as fin: mesh_gt = trimesh.Trimesh(**trimesh.exchange.ply.load_ply(fin)) # Chamfer eval_points = 100000 point_gt = mesh_gt.sample(eval_points) point_gt = point_gt.astype(np.float32) point_pred = mesh.sample(eval_points) point_pred = point_pred.astype(np.float32) pred_to_gt = distance_field_helper(point_pred, point_gt) gt_to_pred = distance_field_helper(point_gt, point_pred) chamfer = np.mean(pred_to_gt**2) + np.mean(gt_to_pred**2) # Fscore tau = 1e-4 eps = 1e-9 pred_to_gt = (pred_to_gt**2) gt_to_pred = (gt_to_pred**2) prec_tau = (pred_to_gt <= tau).astype(np.float32).mean() * 100. recall_tau = (gt_to_pred <= tau).astype(np.float32).mean() * 100. fscore = (2 * prec_tau * recall_tau) / max(prec_tau + recall_tau, eps) # Following the tradition to scale chamfer distance up by 10. 
return chamfer * 100., fscore def init_stats(): """Initialize evaluation stats.""" stats = {} for k in SYSNET_CLASSES: stats[k] = { "cnt": 0, "iou": 0., "chamfer": 0., "fscore": 0., } return stats def update_stats(example_stats, name, shapenet_stats): """Update evaluation statistics. Args: example_stats: Stats, the stats of one example. name: Tensor, hash name of the example as recorded in the dataset. shapenet_stats: dict, the current stats of the whole dataset. """ cls_name, unused_var = mesh_name_helper(name) shapenet_stats[cls_name]["cnt"] += 1 shapenet_stats[cls_name]["iou"] += example_stats.iou shapenet_stats[cls_name]["chamfer"] += example_stats.chamfer shapenet_stats[cls_name]["fscore"] += example_stats.fscore shapenet_stats["all"]["cnt"] += 1 shapenet_stats["all"]["iou"] += example_stats.iou shapenet_stats["all"]["chamfer"] += example_stats.chamfer shapenet_stats["all"]["fscore"] += example_stats.fscore def average_stats(shapenet_stats): """Average the accumulated stats of the whole dataset.""" for k, v in shapenet_stats.items(): cnt = max(v["cnt"], 1) shapenet_stats[k] = { "iou": v["iou"] / cnt, "chamfer": v["chamfer"] / cnt, "fscore": v["fscore"] / cnt, } def write_stats(stats, eval_dir, step): """Write stats of the dataset to disk. Args: stats: dict, statistics to save. eval_dir: string, path to the directory to save the statistics. step: int, the global step of the checkpoint. """ if not tf.io.gfile.isdir(eval_dir): tf.io.gfile.makedirs(eval_dir) with tf.io.gfile.GFile(path.join(eval_dir, "stats_{}.csv".format(step)), "w") as fout: fout.write("class,iou,chamfer,fscore\n") for k in sorted(stats.keys()): if k == "all": continue fout.write("{0},{1},{2},{3}\n".format( SYSNET_CLASSES[k], stats[k]["iou"], stats[k]["chamfer"], stats[k]["fscore"], )) fout.write("all,{0},{1},{2}".format( stats["all"]["iou"], stats["all"]["chamfer"], stats["all"]["fscore"], ))
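The surface metrics above (`distance_field_helper` and `compute_surface_metrics`) reduce to squared nearest-neighbour distances between two sampled point clouds. The sketch below mirrors that computation on raw numpy arrays, with the `trimesh` mesh loading and sampling elided:

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_fscore(points_pred, points_gt, tau=1e-4, eps=1e-9):
    # Squared nearest-neighbour distances in both directions, as in
    # distance_field_helper / compute_surface_metrics.
    pred_to_gt = cKDTree(points_gt).query(points_pred)[0] ** 2
    gt_to_pred = cKDTree(points_pred).query(points_gt)[0] ** 2
    chamfer = pred_to_gt.mean() + gt_to_pred.mean()
    # Precision / recall at threshold tau, in percent.
    prec_tau = (pred_to_gt <= tau).mean() * 100.0
    recall_tau = (gt_to_pred <= tau).mean() * 100.0
    fscore = (2 * prec_tau * recall_tau) / max(prec_tau + recall_tau, eps)
    return chamfer * 100.0, fscore  # chamfer scaled as in the code above

rng = np.random.RandomState(0)
points = rng.rand(1000, 3).astype(np.float32)
chamfer, fscore = chamfer_and_fscore(points, points)  # identical clouds
print(chamfer, fscore)  # 0.0 100.0
```

On identical point clouds every nearest-neighbour distance is zero, so the chamfer distance is 0 and the f-score is 100, which is a quick sanity check for the metric.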
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import collections from os import path import numpy as np import scipy as sp from skimage import measure import tensorflow.compat.v1 as tf from tensorflow_graphics.projects.cvxnet.lib import datasets from tensorflow_graphics.projects.cvxnet.lib import models from tensorflow_graphics.projects.cvxnet.lib.libmise import mise import trimesh Stats = collections.namedtuple("Stats", ["iou", "chamfer", "fscore"]) SYSNET_CLASSES = { "02691156": "airplane", "02933112": "cabinet", "03001627": "chair", "03636649": "lamp", "04090263": "rifle", "04379243": "table", "04530566": "watercraft", "02828884": "bench", "02958343": "car", "03211117": "display", "03691459": "speaker", "04256520": "sofa", "04401088": "telephone", "all": "all", } def define_flags(): """Define command line flags.""" flags = tf.app.flags # Model flags flags.DEFINE_enum("model", "multiconvex", list(k for k in models.model_dict.keys()), "Name of the model.") flags.DEFINE_float("sharpness", 75., "Sharpness term.") flags.DEFINE_integer("n_parts", 50, "Number of convexes uesd.") flags.DEFINE_integer("n_half_planes", 25, "Number of half spaces used.") flags.DEFINE_integer("latent_size", 256, "The size of latent code.") flags.DEFINE_integer("dims", 3, "The dimension of query points.") 
flags.DEFINE_bool("image_input", False, "Use color images as input if True.") flags.DEFINE_float("vis_scale", 1.3, "Scale of bbox used when extracting meshes.") flags.DEFINE_float("level_set", 0.5, "Level set used for extracting surfaces.") # Dataset flags flags.DEFINE_enum("dataset", "shapenet", list(k for k in datasets.dataset_dict.keys()), "Name of the dataset.") flags.DEFINE_integer("image_h", 137, "The height of the color images.") flags.DEFINE_integer("image_w", 137, "The width of the color images.") flags.DEFINE_integer("image_d", 3, "The channels of color images.") flags.DEFINE_integer("depth_h", 224, "The height of depth images.") flags.DEFINE_integer("depth_w", 224, "The width of depth images.") flags.DEFINE_integer("depth_d", 20, "The number of depth views.") flags.DEFINE_integer("n_views", 24, "The number of color images views.") flags.DEFINE_string("data_dir", None, "The base directory to load data from.") flags.mark_flag_as_required("data_dir") flags.DEFINE_string("obj_class", "*", "Object class used from dataset.") # Training flags flags.DEFINE_float("lr", 1e-4, "Start learning rate.") flags.DEFINE_string( "train_dir", None, "The base directory to save training info and" "checkpoints.") flags.DEFINE_integer("save_every", 20000, "The number of steps to save checkpoint.") flags.DEFINE_integer("max_steps", 800000, "The number of steps of training.") flags.DEFINE_integer("batch_size", 32, "Batch size.") flags.DEFINE_integer("sample_bbx", 1024, "The number of bounding box sample points.") flags.DEFINE_integer("sample_surf", 1024, "The number of surface sample points.") flags.DEFINE_float("weight_overlap", 0.1, "Weight of overlap_loss") flags.DEFINE_float("weight_balance", 0.01, "Weight of balance_loss") flags.DEFINE_float("weight_center", 0.001, "Weight of center_loss") flags.mark_flag_as_required("train_dir") # Eval flags flags.DEFINE_bool("extract_mesh", False, "Extract meshes and set to disk if True.") flags.DEFINE_bool("surface_metrics", False, 
"Measure surface metrics and save to csv if True.") flags.DEFINE_string("mesh_dir", None, "Path to load ground truth meshes.") flags.DEFINE_string("trans_dir", None, "Path to load pred-to-target transformations.") flags.DEFINE_bool("eval_once", False, "Evaluate the model only once if True.") def mesh_name_helper(name): name = name[0].decode("utf-8") split = name.find("-") cls_name = name[:split] obj_name = name[split + 1:] return cls_name, obj_name def extract_mesh(input_val, params, indicators, input_holder, params_holder, points_holder, sess, args): """Extracting meshes from an indicator function. Args: input_val: np.array, [1, height, width, channel], input image. params: tf.Operation, hyperplane parameter hook. indicators: tf.Operation, indicator hook. input_holder: tf.Placeholder, input image placeholder. params_holder: tf.Placeholder, hyperplane parameter placeholder. points_holder: tf.Placeholder, query point placeholder. sess: tf.Session, running sess. args: tf.app.flags.FLAGS, configurations. Returns: mesh: trimesh.Trimesh, the extracted mesh. """ mesh_extractor = mise.MISE(64, 1, args.level_set) points = mesh_extractor.query() params_val = sess.run(params, {input_holder: input_val}) while points.shape[0] != 0: orig_points = points points = points.astype(np.float32) points = ( (np.expand_dims(points, axis=0) / mesh_extractor.resolution - 0.5) * args.vis_scale) n_points = points.shape[1] values = [] for i in range(0, n_points, 100000): # Add this to prevent OOM. 
value = sess.run(indicators, { params_holder: params_val, points_holder: points[:, i:i + 100000] }) values.append(value) values = np.concatenate(values, axis=1) values = values[0, :, 0].astype(np.float64) mesh_extractor.update(orig_points, values) points = mesh_extractor.query() value_grid = mesh_extractor.to_dense() value_grid = np.pad(value_grid, 1, "constant", constant_values=-1e6) verts, faces, normals, unused_var = measure.marching_cubes_lewiner( value_grid, min(args.level_set, value_grid.max() * 0.75)) del normals verts -= 1 verts /= np.array([ value_grid.shape[0] - 3, value_grid.shape[1] - 3, value_grid.shape[2] - 3 ], dtype=np.float32) verts = args.vis_scale * (verts - 0.5) faces = np.stack([faces[..., 1], faces[..., 0], faces[..., 2]], axis=-1) return trimesh.Trimesh(vertices=verts, faces=faces) def transform_mesh(mesh, name, trans_dir): """Transform mesh back to the same coordinate of ground truth. Args: mesh: trimesh.Trimesh, predicted mesh before transformation. name: Tensor, hash name of the mesh as recorded in the dataset. trans_dir: string, path to the directory for loading transformations. Returns: mesh: trimesh.Trimesh, the transformed mesh. """ if trans_dir is None: raise ValueError("Need to specify args.trans_dir for loading pred-to-target" "transformations.") cls_name, obj_name = mesh_name_helper(name) with tf.io.gfile.GFile( path.join(trans_dir, "test", cls_name, obj_name, "occnet_to_gaps.txt"), "r") as fin: tx = np.loadtxt(fin).reshape([4, 4]) mesh.apply_transform(np.linalg.inv(tx)) return mesh def save_mesh(mesh, name, eval_dir): """Save a mesh to disk. Args: mesh: trimesh.Trimesh, the mesh to save. name: Tensor, hash name of the mesh as recorded in the dataset. eval_dir: string, path to the directory to save the mesh. 
""" cls_name, obj_name = mesh_name_helper(name) cls_dir = path.join(eval_dir, "meshes", cls_name) if not tf.io.gfile.isdir(cls_dir): tf.io.gfile.makedirs(cls_dir) with tf.io.gfile.GFile(path.join(cls_dir, obj_name + ".obj"), "w") as fout: mesh.export(fout, file_type="obj") def distance_field_helper(source, target): target_kdtree = sp.spatial.cKDTree(target) distances, unused_var = target_kdtree.query(source, n_jobs=-1) return distances def compute_surface_metrics(mesh, name, mesh_dir): """Compute surface metrics (chamfer distance and f-score) for one example. Args: mesh: trimesh.Trimesh, the mesh to evaluate. name: Tensor, hash name of the mesh as recorded in the dataset. mesh_dir: string, path to the directory for loading ground truth meshes. Returns: chamfer: float, chamfer distance. fscore: float, f-score. """ if mesh_dir is None: raise ValueError("Need to specify args.mesh_dir for loading ground truth.") cls_name, obj_name = mesh_name_helper(name) with tf.io.gfile.GFile( path.join(mesh_dir, "test", cls_name, obj_name, "model_occnet.ply"), "rb", ) as fin: mesh_gt = trimesh.Trimesh(**trimesh.exchange.ply.load_ply(fin)) # Chamfer eval_points = 100000 point_gt = mesh_gt.sample(eval_points) point_gt = point_gt.astype(np.float32) point_pred = mesh.sample(eval_points) point_pred = point_pred.astype(np.float32) pred_to_gt = distance_field_helper(point_pred, point_gt) gt_to_pred = distance_field_helper(point_gt, point_pred) chamfer = np.mean(pred_to_gt**2) + np.mean(gt_to_pred**2) # Fscore tau = 1e-4 eps = 1e-9 pred_to_gt = (pred_to_gt**2) gt_to_pred = (gt_to_pred**2) prec_tau = (pred_to_gt <= tau).astype(np.float32).mean() * 100. recall_tau = (gt_to_pred <= tau).astype(np.float32).mean() * 100. fscore = (2 * prec_tau * recall_tau) / max(prec_tau + recall_tau, eps) # Following the tradition to scale chamfer distance up by 10. 
return chamfer * 100., fscore def init_stats(): """Initialize evaluation stats.""" stats = {} for k in SYSNET_CLASSES: stats[k] = { "cnt": 0, "iou": 0., "chamfer": 0., "fscore": 0., } return stats def update_stats(example_stats, name, shapenet_stats): """Update evaluation statistics. Args: example_stats: Stats, the stats of one example. name: Tensor, hash name of the example as recorded in the dataset. shapenet_stats: dict, the current stats of the whole dataset. """ cls_name, unused_var = mesh_name_helper(name) shapenet_stats[cls_name]["cnt"] += 1 shapenet_stats[cls_name]["iou"] += example_stats.iou shapenet_stats[cls_name]["chamfer"] += example_stats.chamfer shapenet_stats[cls_name]["fscore"] += example_stats.fscore shapenet_stats["all"]["cnt"] += 1 shapenet_stats["all"]["iou"] += example_stats.iou shapenet_stats["all"]["chamfer"] += example_stats.chamfer shapenet_stats["all"]["fscore"] += example_stats.fscore def average_stats(shapenet_stats): """Average the accumulated stats of the whole dataset.""" for k, v in shapenet_stats.items(): cnt = max(v["cnt"], 1) shapenet_stats[k] = { "iou": v["iou"] / cnt, "chamfer": v["chamfer"] / cnt, "fscore": v["fscore"] / cnt, } def write_stats(stats, eval_dir, step): """Write stats of the dataset to disk. Args: stats: dict, statistics to save. eval_dir: string, path to the directory to save the statistics. step: int, the global step of the checkpoint. """ if not tf.io.gfile.isdir(eval_dir): tf.io.gfile.makedirs(eval_dir) with tf.io.gfile.GFile(path.join(eval_dir, "stats_{}.csv".format(step)), "w") as fout: fout.write("class,iou,chamfer,fscore\n") for k in sorted(stats.keys()): if k == "all": continue fout.write("{0},{1},{2},{3}\n".format( SYSNET_CLASSES[k], stats[k]["iou"], stats[k]["chamfer"], stats[k]["fscore"], )) fout.write("all,{0},{1},{2}".format( stats["all"]["iou"], stats["all"]["chamfer"], stats["all"]["fscore"], ))
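The core of `compute_surface_metrics` — two-sided nearest-neighbor distances via a k-d tree, squared-distance Chamfer, and an F-score at threshold `tau` — can be exercised on raw point clouds without any mesh I/O. A minimal sketch using scipy's `cKDTree` (the helper name is ours; the ×100 Chamfer scaling is kept to match the code above):

```python
import numpy as np
from scipy.spatial import cKDTree

def chamfer_and_fscore(points_pred, points_gt, tau=1e-4):
    """Two-sided squared-distance Chamfer and F-score at threshold tau."""
    # Nearest-neighbor distance from each predicted point to the ground
    # truth set, and vice versa.
    pred_to_gt = cKDTree(points_gt).query(points_pred)[0]
    gt_to_pred = cKDTree(points_pred).query(points_gt)[0]
    pred_to_gt, gt_to_pred = pred_to_gt**2, gt_to_pred**2
    chamfer = np.mean(pred_to_gt) + np.mean(gt_to_pred)
    # Precision/recall as percentages of points within tau (squared).
    precision = (pred_to_gt <= tau).mean() * 100.0
    recall = (gt_to_pred <= tau).mean() * 100.0
    fscore = 2 * precision * recall / max(precision + recall, 1e-9)
    return chamfer * 100.0, fscore

# Sanity check: identical point sets give zero Chamfer, perfect F-score.
pts = np.random.RandomState(0).rand(500, 3).astype(np.float32)
chamfer, fscore = chamfer_and_fscore(pts, pts)
```

The `eps` in the denominator (here `1e-9`) only matters in the degenerate case where both precision and recall are zero.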
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
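The fix described above concerns inputs such as `screen_dimensions` that carry no batch dimensions while other tensors do. The underlying broadcasting rule — shapes are aligned from the rightmost axis, and missing leading axes are implicitly expanded — can be illustrated with plain NumPy (the shapes here are illustrative, not the rasterizer's actual ones):

```python
import numpy as np

# A batched tensor of per-example 2-d points and an unbatched screen size.
batched_points = np.ones((4, 100, 2))         # [batch, num_points, xy]
screen_dimensions = np.array([640.0, 480.0])  # shape [2], no batch dims

# Broadcasting aligns shapes from the right, so the [2] tensor is
# implicitly expanded to [4, 100, 2] before the division.
scaled = batched_points / screen_dimensions
```

TensorFlow follows the same NumPy broadcasting semantics, which is why an unbatched `[2]` tensor can be combined with batched `[A1, ..., An, 2]` tensors once the op handles the rank difference.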
./tensorflow_graphics/geometry/representation/point.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tensorflow point utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.math import vector from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def distance_to_ray(point, origin, direction, keepdims=True, name=None): """Computes the distance from a M-d point to a M-d ray. Note: In the following, A1 to An are optional batch dimensions, which must be broadcast compatible. Args: point: A tensor of shape `[A1, ..., An, M]`. origin: A tensor of shape `[A1, ..., An, M]`. direction: A tensor of shape `[A1, ..., An, M]`. The last dimension must be normalized. keepdims: A `bool`, whether to keep the last dimension with length 1 or to remove it. name: A name for this op. Defaults to "point_distance_to_ray". Returns: A tensor of shape `[A1, ..., An, 1]` containing the distance from each point to the corresponding ray. Raises: ValueError: If the shape of `point`, `origin`, or 'direction' is not supported. 
""" with tf.compat.v1.name_scope(name, "point_distance_to_ray", [point, origin, direction]): point = tf.convert_to_tensor(value=point) origin = tf.convert_to_tensor(value=origin) direction = tf.convert_to_tensor(value=direction) shape.compare_dimensions((point, origin, direction), -1, ("point", "origin", "direction")) shape.compare_batch_dimensions( tensors=(point, origin, direction), last_axes=-2, broadcast_compatible=True) direction = asserts.assert_normalized(direction) vec = point - origin dot = vector.dot(vec, direction) vec -= dot * direction return tf.norm(tensor=vec, axis=-1, keepdims=keepdims) def project_to_ray(point, origin, direction, name=None): """Computes the projection of a M-d point on a M-d ray. Note: In the following, A1 to An are optional batch dimensions, which must be broadcast compatible. Args: point: A tensor of shape `[A1, ..., An, M]`. origin: A tensor of shape `[A1, ..., An, M]`. direction: A tensor of shape `[A1, ..., An, M]`. The last dimension must be normalized. name: A name for this op. Defaults to "point_project_to_ray". Returns: A tensor of shape `[A1, ..., An, M]` containing the projected point. Raises: ValueError: If the shape of `point`, `origin`, or 'direction' is not supported. """ with tf.compat.v1.name_scope(name, "point_project_to_ray", [point, origin, direction]): point = tf.convert_to_tensor(value=point) origin = tf.convert_to_tensor(value=origin) direction = tf.convert_to_tensor(value=direction) shape.compare_dimensions((point, origin, direction), -1, ("point", "origin", "direction")) shape.compare_batch_dimensions( tensors=(point, origin, direction), last_axes=-2, broadcast_compatible=True) direction = asserts.assert_normalized(direction) vec = point - origin dot = vector.dot(vec, direction) return origin + dot * direction # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tensorflow point utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.math import vector from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def distance_to_ray(point, origin, direction, keepdims=True, name=None): """Computes the distance from a M-d point to a M-d ray. Note: In the following, A1 to An are optional batch dimensions, which must be broadcast compatible. Args: point: A tensor of shape `[A1, ..., An, M]`. origin: A tensor of shape `[A1, ..., An, M]`. direction: A tensor of shape `[A1, ..., An, M]`. The last dimension must be normalized. keepdims: A `bool`, whether to keep the last dimension with length 1 or to remove it. name: A name for this op. Defaults to "point_distance_to_ray". Returns: A tensor of shape `[A1, ..., An, 1]` containing the distance from each point to the corresponding ray. Raises: ValueError: If the shape of `point`, `origin`, or 'direction' is not supported. 
""" with tf.compat.v1.name_scope(name, "point_distance_to_ray", [point, origin, direction]): point = tf.convert_to_tensor(value=point) origin = tf.convert_to_tensor(value=origin) direction = tf.convert_to_tensor(value=direction) shape.compare_dimensions((point, origin, direction), -1, ("point", "origin", "direction")) shape.compare_batch_dimensions( tensors=(point, origin, direction), last_axes=-2, broadcast_compatible=True) direction = asserts.assert_normalized(direction) vec = point - origin dot = vector.dot(vec, direction) vec -= dot * direction return tf.norm(tensor=vec, axis=-1, keepdims=keepdims) def project_to_ray(point, origin, direction, name=None): """Computes the projection of a M-d point on a M-d ray. Note: In the following, A1 to An are optional batch dimensions, which must be broadcast compatible. Args: point: A tensor of shape `[A1, ..., An, M]`. origin: A tensor of shape `[A1, ..., An, M]`. direction: A tensor of shape `[A1, ..., An, M]`. The last dimension must be normalized. name: A name for this op. Defaults to "point_project_to_ray". Returns: A tensor of shape `[A1, ..., An, M]` containing the projected point. Raises: ValueError: If the shape of `point`, `origin`, or 'direction' is not supported. """ with tf.compat.v1.name_scope(name, "point_project_to_ray", [point, origin, direction]): point = tf.convert_to_tensor(value=point) origin = tf.convert_to_tensor(value=origin) direction = tf.convert_to_tensor(value=direction) shape.compare_dimensions((point, origin, direction), -1, ("point", "origin", "direction")) shape.compare_batch_dimensions( tensors=(point, origin, direction), last_axes=-2, broadcast_compatible=True) direction = asserts.assert_normalized(direction) vec = point - origin dot = vector.dot(vec, direction) return origin + dot * direction # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/rendering/camera/tests/quadratic_radial_distortion_test.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for quadratic_radial_distortion.""" from absl.testing import parameterized import numpy as np import tensorflow as tf from tensorflow_graphics.rendering.camera import quadratic_radial_distortion from tensorflow_graphics.util import test_case RANDOM_TESTS_NUM_IMAGES = 10 RANDOM_TESTS_HEIGHT = 8 RANDOM_TESTS_WIDTH = 8 RADII_SHAPE = (RANDOM_TESTS_NUM_IMAGES, RANDOM_TESTS_HEIGHT, RANDOM_TESTS_WIDTH) COEFFICIENT_SHAPE = (RANDOM_TESTS_NUM_IMAGES,) def _get_random_radii(): return np.random.rand(*RADII_SHAPE).astype('float32') def _get_zeros_radii(): return np.zeros(shape=RADII_SHAPE).astype('float32') def _get_ones_radii(): return np.ones(shape=RADII_SHAPE).astype('float32') def _get_random_coefficient(): return np.random.rand(*COEFFICIENT_SHAPE).astype('float32') def _get_zeros_coefficient(): return np.zeros(shape=COEFFICIENT_SHAPE).astype('float32') def _get_ones_coefficient(): return np.ones(shape=COEFFICIENT_SHAPE).astype('float32') def _make_shape_compatible(coefficients): return np.expand_dims(np.expand_dims(coefficients, axis=-1), axis=-1) class QuadraticRadialDistortionTest(test_case.TestCase): def test_distortion_factor_random_positive_distortion_coefficient(self): """Tests that distortion_factor produces the expected outputs.""" squared_radii = _get_random_radii() * 2.0 distortion_coefficient = _get_random_coefficient() * 2.0 distortion, mask = 
quadratic_radial_distortion.distortion_factor( squared_radii, distortion_coefficient) distortion_coefficient = _make_shape_compatible(distortion_coefficient) with self.subTest(name='distortion'): self.assertAllClose(1.0 + distortion_coefficient * squared_radii, distortion) # No overflow when distortion_coefficient >= 0.0. with self.subTest(name='mask'): self.assertAllInSet(mask, (False,)) def test_distortion_factor_preset_zero_distortion_coefficient(self): """Tests distortion_factor at zero distortion coefficient.""" squared_radii = _get_random_radii() * 2.0 distortion, mask = quadratic_radial_distortion.distortion_factor( squared_radii, 0.0) with self.subTest(name='distortion'): self.assertAllClose(tf.ones_like(squared_radii), distortion) # No overflow when distortion_coefficient = 0.0. with self.subTest(name='mask'): self.assertAllInSet(mask, (False,)) def test_distortion_factor_random_negative_distortion_coefficient(self): """Tests that distortion_factor produces the expected outputs.""" squared_radii = _get_random_radii() * 2.0 distortion_coefficient = _get_random_coefficient() * -0.2 distortion, mask = quadratic_radial_distortion.distortion_factor( squared_radii, distortion_coefficient) distortion_coefficient = _make_shape_compatible(distortion_coefficient) max_squared_radii = -1.0 / 3.0 / distortion_coefficient expected_overflow_mask = squared_radii > max_squared_radii valid_mask = np.logical_not(expected_overflow_mask) # We assert correctness of the mask, and of all the pixels that are not in # overflow. 
actual_distortion_when_valid = self.evaluate(distortion)[valid_mask] expected_distortion_when_valid = ( 1.0 + distortion_coefficient * squared_radii)[valid_mask] with self.subTest(name='distortion'): self.assertAllClose(expected_distortion_when_valid, actual_distortion_when_valid) with self.subTest(name='mask'): self.assertAllEqual(expected_overflow_mask, mask) def test_distortion_factor_preset_zero_radius(self): """Tests distortion_factor at the corner case of zero radius.""" squared_radii = _get_zeros_radii() distortion_coefficient = _get_random_coefficient() - 0.5 distortion, mask = quadratic_radial_distortion.distortion_factor( squared_radii, distortion_coefficient) with self.subTest(name='distortion'): self.assertAllClose(np.ones_like(squared_radii), distortion) with self.subTest(name='mask'): self.assertAllInSet(mask, (False,)) @parameterized.parameters(quadratic_radial_distortion.distortion_factor, quadratic_radial_distortion.undistortion_factor) def test_both_negative_radius_exception_raised(self, distortion_function): """Tests that an exception is raised when the squared radius is negative.""" squared_radii = _get_zeros_radii() - 0.5 distortion_coefficient = _get_random_coefficient() - 0.5 with self.assertRaises(tf.errors.InvalidArgumentError): self.evaluate(distortion_function(squared_radii, distortion_coefficient)) @parameterized.parameters((2, 2e-3), (3, 1e-8)) def test_undistortion_factor_random_positive_distortion_coefficient( self, num_iterations, tolerance): """Tests that undistortion_factor produces the expected outputs.""" distorted_squared_radii = _get_random_radii() * 2.0 distortion_coefficient = _get_random_coefficient() * 0.2 undistortion, mask = quadratic_radial_distortion.undistortion_factor( distorted_squared_radii, distortion_coefficient, num_iterations) distortion_coefficient = _make_shape_compatible(distortion_coefficient) undistorted_squared_radii = tf.square( undistortion) * distorted_squared_radii # We distort again the undistorted 
radii and compare to the original # distorted_squared_radii. redistorted_squared_radii = tf.square( 1.0 + distortion_coefficient * undistorted_squared_radii) * undistorted_squared_radii with self.subTest(name='distortion'): self.assertAllClose( distorted_squared_radii, redistorted_squared_radii, atol=tolerance) # Positive distortion_coefficients never overflow. with self.subTest(name='mask'): self.assertAllInSet(mask, (False,)) @parameterized.parameters((2, 2e-2), (3, 6e-3), (4, 6e-4)) def test_undistortion_factor_random_negative_distortion_coefficient( self, num_iterations, tolerance): """Tests that undistortion_factor produces the expected outputs.""" distorted_squared_radii = _get_random_radii() * 2.0 distortion_coefficient = _get_random_coefficient() * -0.2 undistortion, mask = quadratic_radial_distortion.undistortion_factor( distorted_squared_radii, distortion_coefficient, num_iterations) distortion_coefficient = _make_shape_compatible(distortion_coefficient) undistorted_squared_radii = tf.square( undistortion) * distorted_squared_radii # See explanation in the implementation comments for this formula. expected_overflow_mask = ( distorted_squared_radii * distortion_coefficient + 4.0 / 27.0 < 0) redistorted_squared_radii = tf.square( 1.0 + distortion_coefficient * undistorted_squared_radii) * undistorted_squared_radii valid_mask = np.logical_not(expected_overflow_mask) redistorted_squared_radii_when_valid = self.evaluate( redistorted_squared_radii)[valid_mask] distorted_squared_radii_when_valid = distorted_squared_radii[valid_mask] with self.subTest(name='distortion'): self.assertAllClose( distorted_squared_radii_when_valid, redistorted_squared_radii_when_valid, rtol=tolerance, atol=tolerance) # We assert correctness of the mask, and of all the pixels that are not in # overflow, distorting again the undistorted radii and comparing to the # original distorted_squared_radii. 
with self.subTest(name='mask'): self.assertAllEqual(expected_overflow_mask, mask) def test_undistortion_factor_zero_distortion_coefficient(self): """Tests undistortion_factor at zero distortion coefficient.""" squared_radii = _get_random_radii() * 2.0 undistortion, mask = quadratic_radial_distortion.undistortion_factor( squared_radii, 0.0) with self.subTest(name='distortion'): self.assertAllClose(tf.ones_like(squared_radii), undistortion) # No overflow when distortion_coefficient = 0.0. with self.subTest(name='mask'): self.assertAllEqual(np.zeros_like(squared_radii), mask) @parameterized.parameters( ('must have a rank greater than 1', (2,), (2, 1)), ('Not all batch dimensions are broadcast-compatible', (2, 2, 2), (3,)), ('Not all batch dimensions are broadcast-compatible', (2, 2, 2), (3, 3)), ) def test_distortion_factor_shape_exception_raised(self, error_msg, *shapes): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised( func=quadratic_radial_distortion.distortion_factor, error_msg=error_msg, shapes=shapes) @parameterized.parameters( ((2, 2), ()), ((1, 2, 2), (2,)), ((2, 2, 2), (2,)), ((2, 2), (2, 2)), ((2, 2, 2), (1, 2)), ((2, 3, 4), (1,)), ((2, 3, 4), (1, 1)), ((2, 3, 4), (2,)), ) def test_distortion_factor_shape_exception_not_raised(self, *shapes): """Tests that the shape exceptions are raised.""" self.assert_exception_is_not_raised( func=quadratic_radial_distortion.distortion_factor, shapes=shapes) @parameterized.parameters( ('must have a rank greater than 1', (2,), (2, 1)), ('Not all batch dimensions are broadcast-compatible', (2, 2, 2), (3,)), ('Not all batch dimensions are broadcast-compatible', (2, 2, 2), (3, 3)), ) def test_undistortion_factor_shape_exception_raised(self, error_msg, *shapes): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised( func=quadratic_radial_distortion.undistortion_factor, error_msg=error_msg, shapes=shapes) @parameterized.parameters( ((2, 2), ()), ((1, 2, 2), (2,)), ((2, 
2, 2), (2,)), ((2, 2), (2, 2)), ((2, 2, 2), (1, 2)), ((2, 3, 4), (1,)), ((2, 3, 4), (1, 1)), ((2, 3, 4), (2,)), ) def test_undistortion_factor_shape_exception_not_raised(self, *shapes): """Tests that the shape exceptions are raised.""" self.assert_exception_is_not_raised( func=quadratic_radial_distortion.undistortion_factor, shapes=shapes) @parameterized.parameters(quadratic_radial_distortion.distortion_factor, quadratic_radial_distortion.undistortion_factor) def test_both_radial_jacobian(self, distortion_function): """Test the Jacobians with respect to squared radii.""" squared_radii = _get_random_radii().astype(np.float64) * 0.5 distortion_coefficients = _get_random_coefficient().astype(np.float64) * 0.5 distortion_coefficients -= 0.25 def distortion_fn(squared_radii): distortion, _ = distortion_function(squared_radii, distortion_coefficients) return distortion self.assert_jacobian_is_correct_fn( distortion_fn, [squared_radii], delta=1e-7, atol=1e-3) @parameterized.parameters(quadratic_radial_distortion.distortion_factor, quadratic_radial_distortion.undistortion_factor) def test_both_distortion_coefficient_jacobian(self, distortion_function): """Test the Jacobians with respect to distortion coefficients.""" squared_radii = _get_random_radii().astype(np.float64) * 0.5 distortion_coefficients = _get_random_coefficient().astype(np.float64) * 0.5 distortion_coefficients -= 0.25 def distortion_fn(distortion_coefficients): distortion, _ = distortion_function(squared_radii, distortion_coefficients) return distortion self.assert_jacobian_is_correct_fn( distortion_fn, [distortion_coefficients], delta=1e-7, atol=1e-3) if __name__ == '__main__': test_case.main()
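The model under test maps an undistorted squared radius `s_u` to the distortion factor `1 + k * s_u`, and `undistortion_factor` inverts that mapping iteratively. The round trip the tests check (distort, undistort, re-distort) can be sketched with a simple fixed-point scheme — the library itself may use a different iteration, so this is only an illustration of the relationship, not its implementation:

```python
import numpy as np

def distortion_factor(squared_radius, k):
    # f(s) = 1 + k * s, applied to the undistorted squared radius.
    return 1.0 + k * squared_radius

def undistortion_factor(distorted_squared_radius, k, num_iterations=20):
    # Solve u = 1 / (1 + k * u**2 * s_d) by fixed-point iteration; the
    # undistorted squared radius is then u**2 * s_d.
    s_d = np.asarray(distorted_squared_radius, dtype=np.float64)
    u = np.ones_like(s_d)
    for _ in range(num_iterations):
        u = 1.0 / (1.0 + k * u**2 * s_d)
    return u

# Round trip: distort a radius, undistort it, and recover the original.
k, s_u = 0.1, 0.5
s_d = distortion_factor(s_u, k) ** 2 * s_u  # distorted squared radius
u = undistortion_factor(s_d, k)
s_u_recovered = u**2 * s_d
```

This also makes the overflow masks in the tests intuitive: for negative `k` the cubic `s_d = (1 + k s_u)^2 s_u` has a maximum at `s_u = -1/(3k)`, so distorted radii beyond it have no valid preimage and the op flags them instead of converging.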
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for quadratic_radial_distortion.""" from absl.testing import parameterized import numpy as np import tensorflow as tf from tensorflow_graphics.rendering.camera import quadratic_radial_distortion from tensorflow_graphics.util import test_case RANDOM_TESTS_NUM_IMAGES = 10 RANDOM_TESTS_HEIGHT = 8 RANDOM_TESTS_WIDTH = 8 RADII_SHAPE = (RANDOM_TESTS_NUM_IMAGES, RANDOM_TESTS_HEIGHT, RANDOM_TESTS_WIDTH) COEFFICIENT_SHAPE = (RANDOM_TESTS_NUM_IMAGES,) def _get_random_radii(): return np.random.rand(*RADII_SHAPE).astype('float32') def _get_zeros_radii(): return np.zeros(shape=RADII_SHAPE).astype('float32') def _get_ones_radii(): return np.ones(shape=RADII_SHAPE).astype('float32') def _get_random_coefficient(): return np.random.rand(*COEFFICIENT_SHAPE).astype('float32') def _get_zeros_coefficient(): return np.zeros(shape=COEFFICIENT_SHAPE).astype('float32') def _get_ones_coefficient(): return np.ones(shape=COEFFICIENT_SHAPE).astype('float32') def _make_shape_compatible(coefficients): return np.expand_dims(np.expand_dims(coefficients, axis=-1), axis=-1) class QuadraticRadialDistortionTest(test_case.TestCase): def test_distortion_factor_random_positive_distortion_coefficient(self): """Tests that distortion_factor produces the expected outputs.""" squared_radii = _get_random_radii() * 2.0 distortion_coefficient = _get_random_coefficient() * 2.0 distortion, mask = 
quadratic_radial_distortion.distortion_factor( squared_radii, distortion_coefficient) distortion_coefficient = _make_shape_compatible(distortion_coefficient) with self.subTest(name='distortion'): self.assertAllClose(1.0 + distortion_coefficient * squared_radii, distortion) # No overflow when distortion_coefficient >= 0.0. with self.subTest(name='mask'): self.assertAllInSet(mask, (False,)) def test_distortion_factor_preset_zero_distortion_coefficient(self): """Tests distortion_factor at zero distortion coefficient.""" squared_radii = _get_random_radii() * 2.0 distortion, mask = quadratic_radial_distortion.distortion_factor( squared_radii, 0.0) with self.subTest(name='distortion'): self.assertAllClose(tf.ones_like(squared_radii), distortion) # No overflow when distortion_coefficient = 0.0. with self.subTest(name='mask'): self.assertAllInSet(mask, (False,)) def test_distortion_factor_random_negative_distortion_coefficient(self): """Tests that distortion_factor produces the expected outputs.""" squared_radii = _get_random_radii() * 2.0 distortion_coefficient = _get_random_coefficient() * -0.2 distortion, mask = quadratic_radial_distortion.distortion_factor( squared_radii, distortion_coefficient) distortion_coefficient = _make_shape_compatible(distortion_coefficient) max_squared_radii = -1.0 / 3.0 / distortion_coefficient expected_overflow_mask = squared_radii > max_squared_radii valid_mask = np.logical_not(expected_overflow_mask) # We assert correctness of the mask, and of all the pixels that are not in # overflow. 
actual_distortion_when_valid = self.evaluate(distortion)[valid_mask] expected_distortion_when_valid = ( 1.0 + distortion_coefficient * squared_radii)[valid_mask] with self.subTest(name='distortion'): self.assertAllClose(expected_distortion_when_valid, actual_distortion_when_valid) with self.subTest(name='mask'): self.assertAllEqual(expected_overflow_mask, mask) def test_distortion_factor_preset_zero_radius(self): """Tests distortion_factor at the corner case of zero radius.""" squared_radii = _get_zeros_radii() distortion_coefficient = _get_random_coefficient() - 0.5 distortion, mask = quadratic_radial_distortion.distortion_factor( squared_radii, distortion_coefficient) with self.subTest(name='distortion'): self.assertAllClose(np.ones_like(squared_radii), distortion) with self.subTest(name='mask'): self.assertAllInSet(mask, (False,)) @parameterized.parameters(quadratic_radial_distortion.distortion_factor, quadratic_radial_distortion.undistortion_factor) def test_both_negative_radius_exception_raised(self, distortion_function): """Tests that an exception is raised when the squared radius is negative.""" squared_radii = _get_zeros_radii() - 0.5 distortion_coefficient = _get_random_coefficient() - 0.5 with self.assertRaises(tf.errors.InvalidArgumentError): self.evaluate(distortion_function(squared_radii, distortion_coefficient)) @parameterized.parameters((2, 2e-3), (3, 1e-8)) def test_undistortion_factor_random_positive_distortion_coefficient( self, num_iterations, tolerance): """Tests that undistortion_factor produces the expected outputs.""" distorted_squared_radii = _get_random_radii() * 2.0 distortion_coefficient = _get_random_coefficient() * 0.2 undistortion, mask = quadratic_radial_distortion.undistortion_factor( distorted_squared_radii, distortion_coefficient, num_iterations) distortion_coefficient = _make_shape_compatible(distortion_coefficient) undistorted_squared_radii = tf.square( undistortion) * distorted_squared_radii # We distort again the undistorted 
radii and compare to the original # distorted_squared_radii. redistorted_squared_radii = tf.square( 1.0 + distortion_coefficient * undistorted_squared_radii) * undistorted_squared_radii with self.subTest(name='distortion'): self.assertAllClose( distorted_squared_radii, redistorted_squared_radii, atol=tolerance) # Positive distortion_coefficients never overflow. with self.subTest(name='mask'): self.assertAllInSet(mask, (False,)) @parameterized.parameters((2, 2e-2), (3, 6e-3), (4, 6e-4)) def test_undistortion_factor_random_negative_distortion_coefficient( self, num_iterations, tolerance): """Tests that undistortion_factor produces the expected outputs.""" distorted_squared_radii = _get_random_radii() * 2.0 distortion_coefficient = _get_random_coefficient() * -0.2 undistortion, mask = quadratic_radial_distortion.undistortion_factor( distorted_squared_radii, distortion_coefficient, num_iterations) distortion_coefficient = _make_shape_compatible(distortion_coefficient) undistorted_squared_radii = tf.square( undistortion) * distorted_squared_radii # See explanation in the implementation comments for this formula. expected_overflow_mask = ( distorted_squared_radii * distortion_coefficient + 4.0 / 27.0 < 0) redistorted_squared_radii = tf.square( 1.0 + distortion_coefficient * undistorted_squared_radii) * undistorted_squared_radii valid_mask = np.logical_not(expected_overflow_mask) redistorted_squared_radii_when_valid = self.evaluate( redistorted_squared_radii)[valid_mask] distorted_squared_radii_when_valid = distorted_squared_radii[valid_mask] with self.subTest(name='distortion'): self.assertAllClose( distorted_squared_radii_when_valid, redistorted_squared_radii_when_valid, rtol=tolerance, atol=tolerance) # We assert correctness of the mask, and of all the pixels that are not in # overflow, distorting again the undistorted radii and comparing to the # original distorted_squared_radii. 
with self.subTest(name='mask'): self.assertAllEqual(expected_overflow_mask, mask) def test_undistortion_factor_zero_distortion_coefficient(self): """Tests undistortion_factor at zero distortion coefficient.""" squared_radii = _get_random_radii() * 2.0 undistortion, mask = quadratic_radial_distortion.undistortion_factor( squared_radii, 0.0) with self.subTest(name='distortion'): self.assertAllClose(tf.ones_like(squared_radii), undistortion) # No overflow when distortion_coefficient = 0.0. with self.subTest(name='mask'): self.assertAllEqual(np.zeros_like(squared_radii), mask) @parameterized.parameters( ('must have a rank greater than 1', (2,), (2, 1)), ('Not all batch dimensions are broadcast-compatible', (2, 2, 2), (3,)), ('Not all batch dimensions are broadcast-compatible', (2, 2, 2), (3, 3)), ) def test_distortion_factor_shape_exception_raised(self, error_msg, *shapes): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised( func=quadratic_radial_distortion.distortion_factor, error_msg=error_msg, shapes=shapes) @parameterized.parameters( ((2, 2), ()), ((1, 2, 2), (2,)), ((2, 2, 2), (2,)), ((2, 2), (2, 2)), ((2, 2, 2), (1, 2)), ((2, 3, 4), (1,)), ((2, 3, 4), (1, 1)), ((2, 3, 4), (2,)), ) def test_distortion_factor_shape_exception_not_raised(self, *shapes): """Tests that the shape exceptions are raised.""" self.assert_exception_is_not_raised( func=quadratic_radial_distortion.distortion_factor, shapes=shapes) @parameterized.parameters( ('must have a rank greater than 1', (2,), (2, 1)), ('Not all batch dimensions are broadcast-compatible', (2, 2, 2), (3,)), ('Not all batch dimensions are broadcast-compatible', (2, 2, 2), (3, 3)), ) def test_undistortion_factor_shape_exception_raised(self, error_msg, *shapes): """Tests that the shape exceptions are raised.""" self.assert_exception_is_raised( func=quadratic_radial_distortion.undistortion_factor, error_msg=error_msg, shapes=shapes) @parameterized.parameters( ((2, 2), ()), ((1, 2, 2), (2,)), ((2, 
2, 2), (2,)), ((2, 2), (2, 2)), ((2, 2, 2), (1, 2)), ((2, 3, 4), (1,)), ((2, 3, 4), (1, 1)), ((2, 3, 4), (2,)), ) def test_undistortion_factor_shape_exception_not_raised(self, *shapes): """Tests that the shape exceptions are raised.""" self.assert_exception_is_not_raised( func=quadratic_radial_distortion.undistortion_factor, shapes=shapes) @parameterized.parameters(quadratic_radial_distortion.distortion_factor, quadratic_radial_distortion.undistortion_factor) def test_both_radial_jacobian(self, distortion_function): """Test the Jacobians with respect to squared radii.""" squared_radii = _get_random_radii().astype(np.float64) * 0.5 distortion_coefficients = _get_random_coefficient().astype(np.float64) * 0.5 distortion_coefficients -= 0.25 def distortion_fn(squared_radii): distortion, _ = distortion_function(squared_radii, distortion_coefficients) return distortion self.assert_jacobian_is_correct_fn( distortion_fn, [squared_radii], delta=1e-7, atol=1e-3) @parameterized.parameters(quadratic_radial_distortion.distortion_factor, quadratic_radial_distortion.undistortion_factor) def test_both_distortion_coefficient_jacobian(self, distortion_function): """Test the Jacobians with respect to distortion coefficients.""" squared_radii = _get_random_radii().astype(np.float64) * 0.5 distortion_coefficients = _get_random_coefficient().astype(np.float64) * 0.5 distortion_coefficients -= 0.25 def distortion_fn(distortion_coefficients): distortion, _ = distortion_function(squared_radii, distortion_coefficients) return distortion self.assert_jacobian_is_correct_fn( distortion_fn, [distortion_coefficients], delta=1e-7, atol=1e-3) if __name__ == '__main__': test_case.main()
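The tests above exercise `quadratic_radial_distortion.distortion_factor`, which scales radii by `1 + k * r^2` and flags pixels where a negative coefficient makes the mapping non-invertible (the `max_squared_radii = -1 / (3k)` bound checked in `test_distortion_factor_random_negative_distortion_coefficient`). A minimal NumPy sketch of that relationship, not the library implementation (`distortion_factor` here is a hypothetical stand-in, and `k` is assumed to be a scalar rather than a batched coefficient tensor):

```python
import numpy as np

def distortion_factor(squared_radii, k):
    # Quadratic radial distortion: distortion = 1 + k * r^2.
    distortion = 1.0 + k * squared_radii
    if k < 0.0:
        # For k < 0 the map r -> r * (1 + k r^2) stops being monotonic
        # once r^2 exceeds -1 / (3k); flag those pixels as overflow.
        overflow = squared_radii > -1.0 / (3.0 * k)
    else:
        # Non-negative coefficients never overflow.
        overflow = np.zeros_like(squared_radii, dtype=bool)
    return distortion, overflow

r2 = np.array([0.5, 2.0, 10.0])
d, mask = distortion_factor(r2, -0.2)  # bound is -1/(3 * -0.2) = 1.666...
```

With `k = -0.2`, only the first radius stays below the invertibility bound, matching the overflow-mask behavior the tests assert.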
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/datasets/features/trimesh_feature_test.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Tests for tensorflow_graphics.datasets.features.trimesh_feature.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import numpy as np import tensorflow.compat.v2 as tf import tensorflow_datasets as tfds from tensorflow_graphics.datasets.features import trimesh_feature import trimesh _TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), 'test_data') class TrimeshFeatureTest(tfds.testing.FeatureExpectationsTestCase): def test_trimesh(self): obj_file_path = os.path.join(_TEST_DATA_DIR, 'cube.obj') obj_file = tf.io.gfile.GFile(obj_file_path) obj_mesh = trimesh.load(obj_file_path) expected_vertices = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0], [1.0, 1.0, 1.0]]) expected_faces = np.array( [[0, 6, 4], [0, 2, 6], [0, 3, 2], [0, 1, 3], [2, 7, 6], [2, 3, 7], [4, 6, 7], [4, 7, 5], [0, 4, 5], [0, 5, 1], [1, 5, 7], [1, 7, 3]], dtype=np.uint64) expected_trimesh = {'vertices': expected_vertices, 'faces': expected_faces} # Create a scene with two cubes. scene = trimesh.Scene() scene.add_geometry(obj_mesh) scene.add_geometry(obj_mesh) # The expected TriangleFeature for the scene. 
expected_scene_feature = { 'vertices': np.tile(expected_vertices, [2, 1]).astype(np.float32), 'faces': np.concatenate( [expected_faces, expected_faces + len(expected_vertices)], axis=0) } self.assertFeature( feature=trimesh_feature.TriangleMesh(), shape={ 'vertices': (None, 3), 'faces': (None, 3) }, dtype={ 'vertices': tf.float32, 'faces': tf.uint64 }, tests=[ # File path tfds.testing.FeatureExpectationItem( value=obj_file_path, expected=expected_trimesh, ), # File object tfds.testing.FeatureExpectationItem( value=obj_file, expected=expected_trimesh, ), # Trimesh tfds.testing.FeatureExpectationItem( value=obj_mesh, expected=expected_trimesh, ), # Scene tfds.testing.FeatureExpectationItem( value=scene, expected=expected_scene_feature, ), # FeaturesDict tfds.testing.FeatureExpectationItem( value=expected_scene_feature, expected=expected_scene_feature, ), # Invalid type tfds.testing.FeatureExpectationItem( value=np.random.rand(80, 3), raise_cls=ValueError, raise_msg='obj should be either a Trimesh or a Scene', ), ], ) if __name__ == '__main__': tfds.testing.test_main()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. # Lint as: python3 """Tests for tensorflow_graphics.datasets.features.trimesh_feature.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import os import numpy as np import tensorflow.compat.v2 as tf import tensorflow_datasets as tfds from tensorflow_graphics.datasets.features import trimesh_feature import trimesh _TEST_DATA_DIR = os.path.join(os.path.dirname(__file__), 'test_data') class TrimeshFeatureTest(tfds.testing.FeatureExpectationsTestCase): def test_trimesh(self): obj_file_path = os.path.join(_TEST_DATA_DIR, 'cube.obj') obj_file = tf.io.gfile.GFile(obj_file_path) obj_mesh = trimesh.load(obj_file_path) expected_vertices = np.array([[0.0, 0.0, 0.0], [0.0, 0.0, 1.0], [0.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 0.0], [1.0, 0.0, 1.0], [1.0, 1.0, 0.0], [1.0, 1.0, 1.0]]) expected_faces = np.array( [[0, 6, 4], [0, 2, 6], [0, 3, 2], [0, 1, 3], [2, 7, 6], [2, 3, 7], [4, 6, 7], [4, 7, 5], [0, 4, 5], [0, 5, 1], [1, 5, 7], [1, 7, 3]], dtype=np.uint64) expected_trimesh = {'vertices': expected_vertices, 'faces': expected_faces} # Create a scene with two cubes. scene = trimesh.Scene() scene.add_geometry(obj_mesh) scene.add_geometry(obj_mesh) # The expected TriangleFeature for the scene. 
expected_scene_feature = { 'vertices': np.tile(expected_vertices, [2, 1]).astype(np.float32), 'faces': np.concatenate( [expected_faces, expected_faces + len(expected_vertices)], axis=0) } self.assertFeature( feature=trimesh_feature.TriangleMesh(), shape={ 'vertices': (None, 3), 'faces': (None, 3) }, dtype={ 'vertices': tf.float32, 'faces': tf.uint64 }, tests=[ # File path tfds.testing.FeatureExpectationItem( value=obj_file_path, expected=expected_trimesh, ), # File object tfds.testing.FeatureExpectationItem( value=obj_file, expected=expected_trimesh, ), # Trimesh tfds.testing.FeatureExpectationItem( value=obj_mesh, expected=expected_trimesh, ), # Scene tfds.testing.FeatureExpectationItem( value=scene, expected=expected_scene_feature, ), # FeaturesDict tfds.testing.FeatureExpectationItem( value=expected_scene_feature, expected=expected_scene_feature, ), # Invalid type tfds.testing.FeatureExpectationItem( value=np.random.rand(80, 3), raise_cls=ValueError, raise_msg='obj should be either a Trimesh or a Scene', ), ], ) if __name__ == '__main__': tfds.testing.test_main()
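The scene test above builds its expectation by tiling the cube's vertices and offsetting the second copy's face indices by the first copy's vertex count. That index bookkeeping can be sketched in plain NumPy (a toy single-triangle mesh, not the cube fixture from the test):

```python
import numpy as np

# A mesh is (vertices, faces); faces index into the vertex array.
vertices = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]], dtype=np.uint64)

# Adding the same mesh to a scene twice stacks the vertex arrays, so the
# second copy's face indices must be shifted past the first copy's vertices.
scene_vertices = np.tile(vertices, [2, 1])
scene_faces = np.concatenate([faces, faces + len(vertices)], axis=0)
```

This mirrors the `np.tile` / `np.concatenate` construction of `expected_scene_feature` above: without the `+ len(vertices)` offset, both triangles would index the first copy's vertices.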
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/math/optimizer/tests/__init__.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/math/vector.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tensorflow vector utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def cross(vector1, vector2, axis=-1, name=None): """Computes the cross product between two tensors along an axis. Note: In the following, A1 to An are optional batch dimensions, which should be broadcast compatible. Args: vector1: A tensor of shape `[A1, ..., Ai = 3, ..., An]`, where the dimension i = axis represents a 3d vector. vector2: A tensor of shape `[A1, ..., Ai = 3, ..., An]`, where the dimension i = axis represents a 3d vector. axis: The dimension along which to compute the cross product. name: A name for this op which defaults to "vector_cross". Returns: A tensor of shape `[A1, ..., Ai = 3, ..., An]`, where the dimension i = axis represents the result of the cross product. 
""" with tf.compat.v1.name_scope(name, "vector_cross", [vector1, vector2]): vector1 = tf.convert_to_tensor(value=vector1) vector2 = tf.convert_to_tensor(value=vector2) shape.check_static( tensor=vector1, tensor_name="vector1", has_dim_equals=(axis, 3)) shape.check_static( tensor=vector2, tensor_name="vector2", has_dim_equals=(axis, 3)) shape.compare_batch_dimensions( tensors=(vector1, vector2), last_axes=-1, broadcast_compatible=True) vector1_x, vector1_y, vector1_z = tf.unstack(vector1, axis=axis) vector2_x, vector2_y, vector2_z = tf.unstack(vector2, axis=axis) n_x = vector1_y * vector2_z - vector1_z * vector2_y n_y = vector1_z * vector2_x - vector1_x * vector2_z n_z = vector1_x * vector2_y - vector1_y * vector2_x return tf.stack((n_x, n_y, n_z), axis=axis) def dot(vector1, vector2, axis=-1, keepdims=True, name=None): """Computes the dot product between two tensors along an axis. Note: In the following, A1 to An are optional batch dimensions, which should be broadcast compatible. Args: vector1: Tensor of rank R and shape `[A1, ..., Ai, ..., An]`, where the dimension i = axis represents a vector. vector2: Tensor of rank R and shape `[A1, ..., Ai, ..., An]`, where the dimension i = axis represents a vector. axis: The dimension along which to compute the dot product. keepdims: If True, retains reduced dimensions with length 1. name: A name for this op which defaults to "vector_dot". Returns: A tensor of shape `[A1, ..., Ai = 1, ..., An]`, where the dimension i = axis represents the result of the dot product. 
""" with tf.compat.v1.name_scope(name, "vector_dot", [vector1, vector2]): vector1 = tf.convert_to_tensor(value=vector1) vector2 = tf.convert_to_tensor(value=vector2) shape.compare_batch_dimensions( tensors=(vector1, vector2), last_axes=-1, broadcast_compatible=True) shape.compare_dimensions( tensors=(vector1, vector2), axes=axis, tensor_names=("vector1", "vector2")) return tf.reduce_sum( input_tensor=vector1 * vector2, axis=axis, keepdims=keepdims) def reflect(vector, normal, axis=-1, name=None): r"""Computes the reflection direction for an incident vector. For an incident vector \\(\mathbf{v}\\) and normal $$\mathbf{n}$$ this function computes the reflected vector as \\(\mathbf{r} = \mathbf{v} - 2(\mathbf{n}^T\mathbf{v})\mathbf{n}\\). Note: In the following, A1 to An are optional batch dimensions, which should be broadcast compatible. Args: vector: A tensor of shape `[A1, ..., Ai, ..., An]`, where the dimension i = axis represents a vector. normal: A tensor of shape `[A1, ..., Ai, ..., An]`, where the dimension i = axis represents a normal around which the vector needs to be reflected. The normal vector needs to be normalized. axis: The dimension along which to compute the reflection. name: A name for this op which defaults to "vector_reflect". Returns: A tensor of shape `[A1, ..., Ai, ..., An]`, where the dimension i = axis represents a reflected vector. """ with tf.compat.v1.name_scope(name, "vector_reflect", [vector, normal]): vector = tf.convert_to_tensor(value=vector) normal = tf.convert_to_tensor(value=normal) shape.compare_dimensions( tensors=(vector, normal), axes=axis, tensor_names=("vector", "normal")) shape.compare_batch_dimensions( tensors=(vector, normal), last_axes=-1, broadcast_compatible=True) normal = asserts.assert_normalized(normal, axis=axis) dot_product = dot(vector, normal, axis=axis) return vector - 2.0 * dot_product * normal # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tensorflow vector utility functions.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.util import asserts from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def cross(vector1, vector2, axis=-1, name=None): """Computes the cross product between two tensors along an axis. Note: In the following, A1 to An are optional batch dimensions, which should be broadcast compatible. Args: vector1: A tensor of shape `[A1, ..., Ai = 3, ..., An]`, where the dimension i = axis represents a 3d vector. vector2: A tensor of shape `[A1, ..., Ai = 3, ..., An]`, where the dimension i = axis represents a 3d vector. axis: The dimension along which to compute the cross product. name: A name for this op which defaults to "vector_cross". Returns: A tensor of shape `[A1, ..., Ai = 3, ..., An]`, where the dimension i = axis represents the result of the cross product. 
""" with tf.compat.v1.name_scope(name, "vector_cross", [vector1, vector2]): vector1 = tf.convert_to_tensor(value=vector1) vector2 = tf.convert_to_tensor(value=vector2) shape.check_static( tensor=vector1, tensor_name="vector1", has_dim_equals=(axis, 3)) shape.check_static( tensor=vector2, tensor_name="vector2", has_dim_equals=(axis, 3)) shape.compare_batch_dimensions( tensors=(vector1, vector2), last_axes=-1, broadcast_compatible=True) vector1_x, vector1_y, vector1_z = tf.unstack(vector1, axis=axis) vector2_x, vector2_y, vector2_z = tf.unstack(vector2, axis=axis) n_x = vector1_y * vector2_z - vector1_z * vector2_y n_y = vector1_z * vector2_x - vector1_x * vector2_z n_z = vector1_x * vector2_y - vector1_y * vector2_x return tf.stack((n_x, n_y, n_z), axis=axis) def dot(vector1, vector2, axis=-1, keepdims=True, name=None): """Computes the dot product between two tensors along an axis. Note: In the following, A1 to An are optional batch dimensions, which should be broadcast compatible. Args: vector1: Tensor of rank R and shape `[A1, ..., Ai, ..., An]`, where the dimension i = axis represents a vector. vector2: Tensor of rank R and shape `[A1, ..., Ai, ..., An]`, where the dimension i = axis represents a vector. axis: The dimension along which to compute the dot product. keepdims: If True, retains reduced dimensions with length 1. name: A name for this op which defaults to "vector_dot". Returns: A tensor of shape `[A1, ..., Ai = 1, ..., An]`, where the dimension i = axis represents the result of the dot product. 
""" with tf.compat.v1.name_scope(name, "vector_dot", [vector1, vector2]): vector1 = tf.convert_to_tensor(value=vector1) vector2 = tf.convert_to_tensor(value=vector2) shape.compare_batch_dimensions( tensors=(vector1, vector2), last_axes=-1, broadcast_compatible=True) shape.compare_dimensions( tensors=(vector1, vector2), axes=axis, tensor_names=("vector1", "vector2")) return tf.reduce_sum( input_tensor=vector1 * vector2, axis=axis, keepdims=keepdims) def reflect(vector, normal, axis=-1, name=None): r"""Computes the reflection direction for an incident vector. For an incident vector \\(\mathbf{v}\\) and normal $$\mathbf{n}$$ this function computes the reflected vector as \\(\mathbf{r} = \mathbf{v} - 2(\mathbf{n}^T\mathbf{v})\mathbf{n}\\). Note: In the following, A1 to An are optional batch dimensions, which should be broadcast compatible. Args: vector: A tensor of shape `[A1, ..., Ai, ..., An]`, where the dimension i = axis represents a vector. normal: A tensor of shape `[A1, ..., Ai, ..., An]`, where the dimension i = axis represents a normal around which the vector needs to be reflected. The normal vector needs to be normalized. axis: The dimension along which to compute the reflection. name: A name for this op which defaults to "vector_reflect". Returns: A tensor of shape `[A1, ..., Ai, ..., An]`, where the dimension i = axis represents a reflected vector. """ with tf.compat.v1.name_scope(name, "vector_reflect", [vector, normal]): vector = tf.convert_to_tensor(value=vector) normal = tf.convert_to_tensor(value=normal) shape.compare_dimensions( tensors=(vector, normal), axes=axis, tensor_names=("vector", "normal")) shape.compare_batch_dimensions( tensors=(vector, normal), last_axes=-1, broadcast_compatible=True) normal = asserts.assert_normalized(normal, axis=axis) dot_product = dot(vector, normal, axis=axis) return vector - 2.0 * dot_product * normal # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/rendering/voxels/visual_hull.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module implements the visual hull voxel rendering.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def render(voxels, axis=2, name=None): """Renders the visual hull of a voxel grid, as described in ["Escaping Plato's Cave: 3D Shape From Adversarial Rendering" (Henzler 2019)](https://github.com/henzler/platonicgan). Note: In the following, A1 to An are optional batch dimensions. Args: voxels: A tensor of shape `[A1, ..., An, Vx, Vy, Vz, Vd]`, where Vx, Vy, Vz are the dimensions of the voxel grid and Vd the dimension of the information stored in each voxel (e.g. 3 for RGB color). axis: An index to the projection axis (0 for X, 1 for Y or 2 for Z). name: A name for this op. Defaults to "visual_hull_render". Returns: A tensor of shape `[A1, ..., An, Vx, Vy, Vd]` representing images of size (Vx,Vy). Raises: ValueError: If the shape of the input tensors are not supported. 
""" with tf.compat.v1.name_scope(name, "visual_hull_render", [voxels]): voxels = tf.convert_to_tensor(value=voxels) shape.check_static( tensor=voxels, tensor_name="voxels", has_rank_greater_than=3) if axis not in [0, 1, 2]: raise ValueError("'axis' needs to be 0, 1 or 2") image = tf.reduce_sum(input_tensor=voxels, axis=axis - 4) image = tf.ones_like(image) - tf.math.exp(-image) return image # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """This module implements the visual hull voxel rendering.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function import tensorflow as tf from tensorflow_graphics.util import export_api from tensorflow_graphics.util import shape def render(voxels, axis=2, name=None): """Renders the visual hull of a voxel grid, as described in ["Escaping Plato's Cave: 3D Shape From Adversarial Rendering" (Henzler 2019)](https://github.com/henzler/platonicgan). Note: In the following, A1 to An are optional batch dimensions. Args: voxels: A tensor of shape `[A1, ..., An, Vx, Vy, Vz, Vd]`, where Vx, Vy, Vz are the dimensions of the voxel grid and Vd the dimension of the information stored in each voxel (e.g. 3 for RGB color). axis: An index to the projection axis (0 for X, 1 for Y or 2 for Z). name: A name for this op. Defaults to "visual_hull_render". Returns: A tensor of shape `[A1, ..., An, Vx, Vy, Vd]` representing images of size (Vx,Vy). Raises: ValueError: If the shape of the input tensors are not supported. 
""" with tf.compat.v1.name_scope(name, "visual_hull_render", [voxels]): voxels = tf.convert_to_tensor(value=voxels) shape.check_static( tensor=voxels, tensor_name="voxels", has_rank_greater_than=3) if axis not in [0, 1, 2]: raise ValueError("'axis' needs to be 0, 1 or 2") image = tf.reduce_sum(input_tensor=voxels, axis=axis - 4) image = tf.ones_like(image) - tf.math.exp(-image) return image # API contains all public functions and classes. __all__ = export_api.get_functions_and_classes()
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/geometry/representation/mesh/tests/utils_test.py
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for utils.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from absl.testing import parameterized import numpy as np from tensorflow_graphics.geometry.representation.mesh import utils from tensorflow_graphics.util import test_case class UtilsTest(test_case.TestCase): @parameterized.parameters( (np.array(((0, 1, 2),)), [[0, 1], [0, 2], [1, 2]]), (np.array( ((0, 1, 2), (0, 1, 3))), [[0, 1], [0, 2], [0, 3], [1, 2], [1, 3]]), ) def test_extract_undirected_edges_from_triangular_mesh_preset( self, test_inputs, test_outputs): """Tests that the output contain the expected edges.""" edges = utils.extract_unique_edges_from_triangular_mesh( test_inputs, directed_edges=False) edges.sort(axis=1) # Ensure edge tuple ordered by first vertex. 
self.assertEqual(sorted(edges.tolist()), test_outputs) @parameterized.parameters( (np.array( ((0, 1, 2),)), [[0, 1], [0, 2], [1, 0], [1, 2], [2, 0], [2, 1]]), (np.array( ((0, 1, 2), (0, 1, 3))), [[0, 1], [0, 2], [0, 3], [1, 0], [1, 2], [1, 3], [2, 0], [2, 1], [3, 0], [3, 1]]), ) def test_extract_directed_edges_from_triangular_mesh_preset( self, test_inputs, test_outputs): """Tests that the output contain the expected edges.""" edges = utils.extract_unique_edges_from_triangular_mesh( test_inputs, directed_edges=True) self.assertEqual(sorted(edges.tolist()), test_outputs) @parameterized.parameters( (1, "'faces' must be a numpy.ndarray."), (np.array((1,)), "must have a rank equal to 2"), (np.array((((1,),),)), "must have a rank equal to 2"), (np.array(((1,),)), "must have exactly 3 dimensions in the last axis"), (np.array(((1, 1),)), "must have exactly 3 dimensions in the last axis"), (np.array( ((1, 1, 1, 1),)), "must have exactly 3 dimensions in the last axis"), ) def test_extract_edges_from_triangular_mesh_raised( self, invalid_input, error_msg): """Tests that the shape exceptions are properly raised.""" with self.assertRaisesRegexp(ValueError, error_msg): utils.extract_unique_edges_from_triangular_mesh(invalid_input) @parameterized.parameters( (np.array(((0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1))), np.float16, [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]), (np.array(((0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1))), np.float32, [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]), (np.array(((0, 1), (0, 2), (0, 3), (1, 0), (1, 2), (1, 3), (2, 0), (2, 1), (3, 0), (3, 1))), np.float64, [1.0 / 3, 1.0 / 3, 1.0 / 3, 1.0 / 3, 1.0 / 3, 1.0 / 3, 0.5, 0.5, 0.5, 0.5]), ) def test_get_degree_based_edge_weights_preset( self, test_inputs, test_dtype, test_outputs): """Tests that the output contain the expected edges.""" weights = utils.get_degree_based_edge_weights(test_inputs, test_dtype) self.assertAllClose(weights.tolist(), test_outputs) @parameterized.parameters( (1, "'edges' must be a 
numpy.ndarray."), (np.array((1,)), "must have a rank equal to 2"), (np.array((((1,),),)), "must have a rank equal to 2"), (np.array(((1,),)), "must have exactly 2 dimensions in the last axis"), (np.array( ((1, 1, 1),)), "must have exactly 2 dimensions in the last axis"), ) def test_get_degree_based_edge_weights_invalid_edges_raised( self, invalid_input, error_msg): """Tests that the shape exceptions are properly raised.""" with self.assertRaisesRegexp(ValueError, error_msg): utils.get_degree_based_edge_weights(invalid_input) @parameterized.parameters( (np.bool, "must be a numpy float type"), (np.int, "must be a numpy float type"), (np.complex, "must be a numpy float type"), (np.uint, "must be a numpy float type"), (np.int16, "must be a numpy float type"), ) def test_get_degree_based_edge_weights_dtype_raised( self, invalid_type, error_msg): """Tests that the shape exceptions are properly raised.""" with self.assertRaisesRegexp(ValueError, error_msg): utils.get_degree_based_edge_weights(np.array(((1, 1),)), invalid_type) if __name__ == "__main__": test_case.main()
# Copyright 2020 The TensorFlow Authors # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # https://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. """Tests for utils.""" from __future__ import absolute_import from __future__ import division from __future__ import print_function from absl.testing import parameterized import numpy as np from tensorflow_graphics.geometry.representation.mesh import utils from tensorflow_graphics.util import test_case class UtilsTest(test_case.TestCase): @parameterized.parameters( (np.array(((0, 1, 2),)), [[0, 1], [0, 2], [1, 2]]), (np.array( ((0, 1, 2), (0, 1, 3))), [[0, 1], [0, 2], [0, 3], [1, 2], [1, 3]]), ) def test_extract_undirected_edges_from_triangular_mesh_preset( self, test_inputs, test_outputs): """Tests that the output contain the expected edges.""" edges = utils.extract_unique_edges_from_triangular_mesh( test_inputs, directed_edges=False) edges.sort(axis=1) # Ensure edge tuple ordered by first vertex. 
self.assertEqual(sorted(edges.tolist()), test_outputs) @parameterized.parameters( (np.array( ((0, 1, 2),)), [[0, 1], [0, 2], [1, 0], [1, 2], [2, 0], [2, 1]]), (np.array( ((0, 1, 2), (0, 1, 3))), [[0, 1], [0, 2], [0, 3], [1, 0], [1, 2], [1, 3], [2, 0], [2, 1], [3, 0], [3, 1]]), ) def test_extract_directed_edges_from_triangular_mesh_preset( self, test_inputs, test_outputs): """Tests that the output contain the expected edges.""" edges = utils.extract_unique_edges_from_triangular_mesh( test_inputs, directed_edges=True) self.assertEqual(sorted(edges.tolist()), test_outputs) @parameterized.parameters( (1, "'faces' must be a numpy.ndarray."), (np.array((1,)), "must have a rank equal to 2"), (np.array((((1,),),)), "must have a rank equal to 2"), (np.array(((1,),)), "must have exactly 3 dimensions in the last axis"), (np.array(((1, 1),)), "must have exactly 3 dimensions in the last axis"), (np.array( ((1, 1, 1, 1),)), "must have exactly 3 dimensions in the last axis"), ) def test_extract_edges_from_triangular_mesh_raised( self, invalid_input, error_msg): """Tests that the shape exceptions are properly raised.""" with self.assertRaisesRegexp(ValueError, error_msg): utils.extract_unique_edges_from_triangular_mesh(invalid_input) @parameterized.parameters( (np.array(((0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1))), np.float16, [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]), (np.array(((0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1))), np.float32, [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]), (np.array(((0, 1), (0, 2), (0, 3), (1, 0), (1, 2), (1, 3), (2, 0), (2, 1), (3, 0), (3, 1))), np.float64, [1.0 / 3, 1.0 / 3, 1.0 / 3, 1.0 / 3, 1.0 / 3, 1.0 / 3, 0.5, 0.5, 0.5, 0.5]), ) def test_get_degree_based_edge_weights_preset( self, test_inputs, test_dtype, test_outputs): """Tests that the output contain the expected edges.""" weights = utils.get_degree_based_edge_weights(test_inputs, test_dtype) self.assertAllClose(weights.tolist(), test_outputs) @parameterized.parameters( (1, "'edges' must be a 
numpy.ndarray."), (np.array((1,)), "must have a rank equal to 2"), (np.array((((1,),),)), "must have a rank equal to 2"), (np.array(((1,),)), "must have exactly 2 dimensions in the last axis"), (np.array( ((1, 1, 1),)), "must have exactly 2 dimensions in the last axis"), ) def test_get_degree_based_edge_weights_invalid_edges_raised( self, invalid_input, error_msg): """Tests that the shape exceptions are properly raised.""" with self.assertRaisesRegexp(ValueError, error_msg): utils.get_degree_based_edge_weights(invalid_input) @parameterized.parameters( (np.bool, "must be a numpy float type"), (np.int, "must be a numpy float type"), (np.complex, "must be a numpy float type"), (np.uint, "must be a numpy float type"), (np.int16, "must be a numpy float type"), ) def test_get_degree_based_edge_weights_dtype_raised( self, invalid_type, error_msg): """Tests that the shape exceptions are properly raised.""" with self.assertRaisesRegexp(ValueError, error_msg): utils.get_degree_based_edge_weights(np.array(((1, 1),)), invalid_type) if __name__ == "__main__": test_case.main()
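The parameterized presets in the test file above pin down the expected behavior: undirected extraction returns each edge of a triangle once (vertex pair ordered), directed extraction returns both orientations, and the degree-based weight of a directed edge `(i, j)` is `1 / out-degree(i)`. A pure-Python sketch consistent with those presets (illustrative names, not the tested `utils` implementation):

```python
from collections import Counter

def extract_unique_edges(faces, directed=False):
    """faces: list of (a, b, c) vertex-index triangles.

    Returns sorted unique edges; with directed=True, both (i, j) and
    (j, i) are emitted, matching the directed presets in the tests."""
    edges = set()
    for a, b, c in faces:
        for i, j in ((a, b), (b, c), (a, c)):
            if directed:
                edges.add((i, j))
                edges.add((j, i))
            else:
                edges.add((min(i, j), max(i, j)))
    return sorted(edges)

def degree_based_edge_weights(directed_edges):
    """Weight of edge (i, j) is 1 / out-degree(i), as in the presets."""
    degree = Counter(i for i, _ in directed_edges)
    return [1.0 / degree[i] for i, _ in directed_edges]
```

For the two-triangle mesh `((0, 1, 2), (0, 1, 3))`, vertices 0 and 1 each have out-degree 3 and vertices 2 and 3 have out-degree 2, which reproduces the `[1/3, ..., 1/3, 0.5, 0.5, 0.5, 0.5]` preset above.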
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./MANIFEST.in
recursive-include tensorflow_graphics *.so
recursive-include tensorflow_graphics *.so
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./tensorflow_graphics/projects/neural_voxel_renderer/prepare_tfrecords/README.md
# Dataset generation for Neural Voxel Renderer ___ The [training](https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/projects/neural_voxel_renderer/train.ipynb) and [inference](https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/projects/neural_voxel_renderer/demo.ipynb) examples use demo data to illustrate the functionality of Neural Voxel Renderer (NVR). In this document, we describe how to generate the full dataset to train NVR from scratch. **Warning:** the generated TFRecords will take ~350GB of disk space. ___ ## Download the colored voxels This dataset contains the colored voxels of 2040 chairs. The size of the dataset is **~16GB**. Each shape is represented as a 128<sup>3</sup> x 4 voxel grid, where each voxel contains an RGB and occupancy value. The color was obtained from a single image aligned with the voxels. ``` bash PATH_TO_COLOR_VOXELS=/tmp/colored_voxels/ mkdir $PATH_TO_COLOR_VOXELS bash download_colored_voxels.sh $PATH_TO_COLOR_VOXELS ``` ## Download the synthetic images The dataset contains the target images (rendered using Blender) and all the necessary information that was used to set up the scene in 3D (object rotation, translation, camera parameters, etc.). The size of the dataset is **~400MB**. ``` bash PATH_TO_SYNTHETIC_DATASET=/tmp/synthetic_dataset/ mkdir $PATH_TO_SYNTHETIC_DATASET wget -P $PATH_TO_SYNTHETIC_DATASET https://storage.googleapis.com/tensorflow-graphics/notebooks/neural_voxel_renderer/blender_dataset/default_chairs_test.tfrecord wget -P $PATH_TO_SYNTHETIC_DATASET https://storage.googleapis.com/tensorflow-graphics/notebooks/neural_voxel_renderer/blender_dataset/default_chairs_train.tfrecord ``` ## Run the script The script iterates over all the synthetic images and pairs them with the corresponding colored voxels, placed according to the scene set-up. 
Additionally, it estimates the rendered image directly from the voxels which is used as additional input in NVR plus. ``` python PATH_TO_TFRECORDS=/tmp/tfrecords/ mkdir $PATH_TO_TFRECORDS python generate_tfrecords_nvr_plus.py -- --mode test --voxels_dir $PATH_TO_COLOR_VOXELS --images_dir $PATH_TO_SYNTHETIC_DATASET --output_dir $PATH_TO_TFRECORDS python generate_tfrecords_nvr_plus.py -- --mode train --voxels_dir $PATH_TO_COLOR_VOXELS --images_dir $PATH_TO_SYNTHETIC_DATASET --output_dir $PATH_TO_TFRECORDS ```
# Dataset generation for Neural Voxel Renderer ___ The [training](https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/projects/neural_voxel_renderer/train.ipynb) and [inference](https://colab.research.google.com/github/tensorflow/graphics/blob/master/tensorflow_graphics/projects/neural_voxel_renderer/demo.ipynb) examples use demo data to illustrate the functionality of Neural Voxel Renderer (NVR). In this document, we describe how to generate the full dataset to train NVR from scratch. **Warning:** the generated TFRecords will take ~350GB of disk space. ___ ## Download the colored voxels This dataset contains the colored voxels of 2040 chairs. The size of the dataset is **~16GB**. Each shape is represented as a 128<sup>3</sup> x 4 voxel grid, where each voxel contains an RGB and occupancy value. The color was obtained from a single image aligned with the voxels. ``` bash PATH_TO_COLOR_VOXELS=/tmp/colored_voxels/ mkdir $PATH_TO_COLOR_VOXELS bash download_colored_voxels.sh $PATH_TO_COLOR_VOXELS ``` ## Download the synthetic images The dataset contains the target images (rendered using Blender) and all the necessary information that was used to set up the scene in 3D (object rotation, translation, camera parameters, etc.). The size of the dataset is **~400MB**. ``` bash PATH_TO_SYNTHETIC_DATASET=/tmp/synthetic_dataset/ mkdir $PATH_TO_SYNTHETIC_DATASET wget -P $PATH_TO_SYNTHETIC_DATASET https://storage.googleapis.com/tensorflow-graphics/notebooks/neural_voxel_renderer/blender_dataset/default_chairs_test.tfrecord wget -P $PATH_TO_SYNTHETIC_DATASET https://storage.googleapis.com/tensorflow-graphics/notebooks/neural_voxel_renderer/blender_dataset/default_chairs_train.tfrecord ``` ## Run the script The script iterates over all the synthetic images and pairs them with the corresponding colored voxels, placed according to the scene set-up. 
Additionally, it estimates the rendered image directly from the voxels which is used as additional input in NVR plus. ``` python PATH_TO_TFRECORDS=/tmp/tfrecords/ mkdir $PATH_TO_TFRECORDS python generate_tfrecords_nvr_plus.py -- --mode test --voxels_dir $PATH_TO_COLOR_VOXELS --images_dir $PATH_TO_SYNTHETIC_DATASET --output_dir $PATH_TO_TFRECORDS python generate_tfrecords_nvr_plus.py -- --mode train --voxels_dir $PATH_TO_COLOR_VOXELS --images_dir $PATH_TO_SYNTHETIC_DATASET --output_dir $PATH_TO_TFRECORDS ```
-1
tensorflow/graphics
480
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
copybara-service[bot]
"2021-01-19T21:31:22Z"
"2021-02-01T16:01:31Z"
d047500d9b6cb9b716e4b02859d5cc9efb004156
e539c142799936d76d84d0861951ed883a9b4673
- Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions.. - Fix use case when `screen_dimensions` or `lower_left_corner` don't have batch dimensions, while other tensors do have batch dimensions. - Unify rasterization backend output to produce channel dimension for `mask` and `triangle_index`
./.pylintrc
[MASTER] # Specify a configuration file. #rcfile= # Python code to execute, usually for sys.path manipulation such as # pygtk.require(). #init-hook= # Profiled execution. profile=no # Add files or directories to the blacklist. They should be base names, not # paths. ignore=CVS, __pycache__, .git, .tox, .pytest_cache, tensorflow_graphics/projects/* # Pickle collected data for later comparisons. persistent=yes # Use multiple processes to speed up Pylint. jobs=4 # List of plugins (as comma separated values of python modules names) to load, # usually to register additional checkers. load-plugins= [MESSAGES CONTROL] # Enable the message, report, category or checker with the given id(s). You can # either give multiple identifier separated by comma (,) or put this option # multiple time. See also the "--disable" option for examples. enable=indexing-exception,old-raise-syntax # Disable the message, report, category or checker with the given id(s). You # can either give multiple identifiers separated by comma (,) or put this # option multiple times (only on the command line, not in the configuration # file where it should appear only once).You can also use "--disable=all" to # disable everything first and then reenable specific checks. For example, if # you want to run only the similarities checker, you can use "--disable=all # --enable=similarities". 
If you want to run only the classes checker, but have # no Warning level messages displayed, use"--disable=all --enable=classes # --disable=W" disable=design, similarities, no-self-use, attribute-defined-outside-init, locally-disabled, star-args, pointless-except, bad-option-value, global-statement, fixme, suppressed-message, useless-suppression, locally-enabled, no-member, no-name-in-module, import-error, unsubscriptable-object, unbalanced-tuple-unpacking, undefined-variable, not-context-manager, E1130, # (TFG) Invalid-unary-operand-type for Numpy array R1705, # (TFG) Unnecessary "else" after "return" (no-else-return) R1720, # (TFG) Unnecessary "else" after "raise" (no-else-raise) R1721, # (TFG) Unnecessary use of a comprehension (unnecessary-comprehension) # Set the cache size for astng objects. cache-size=500 [REPORTS] # Set the output format. Available formats are text, parseable, colorized, msvs # (visual studio) and html. You can also give a reporter class, eg # mypackage.mymodule.MyReporterClass. output-format=text # Put messages in a separate file for each module / package specified on the # command line instead of printing them on stdout. Reports (if any) will be # written in a file name "pylint_global.[txt|html]". files-output=no # Tells whether to display a full report or only the messages reports=no # Python expression which should return a note less than 10 (10 is the highest # note). You have access to the variables errors warning, statement which # respectively contain the number of errors / warnings messages and the total # number of statements analyzed. This is used by the global evaluation report # (RP0004). evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10) # Add a comment according to your evaluation note. This is used by the global # evaluation report (RP0004). comment=no # Template used to display messages. This is a python new-style format string # used to format the message information. 
See doc for all details #msg-template= [TYPECHECK] # Tells whether missing members accessed in mixin class should be ignored. A # mixin class is detected if its name ends with "mixin" (case insensitive). ignore-mixin-members=yes # List of classes names for which member attributes should not be checked # (useful for classes with attributes dynamically set). ignored-classes=SQLObject # When zope mode is activated, add a predefined set of Zope acquired attributes # to generated-members. zope=no # List of members which are set dynamically and missed by pylint inference # system, and so shouldn't trigger E0201 when accessed. Python regular # expressions are accepted. generated-members=REQUEST,acl_users,aq_parent # List of decorators that create context managers from functions, such as # contextlib.contextmanager. contextmanager-decorators=contextlib.contextmanager,contextlib2.contextmanager [VARIABLES] # Tells whether we should check for unused import in __init__ files. init-import=no # A regular expression matching the beginning of the name of dummy variables # (i.e. not used). dummy-variables-rgx=^\*{0,2}(_$|unused_|dummy_) # List of additional names supposed to be defined in builtins. Remember that # you should avoid to define new builtins when possible. additional-builtins= [BASIC] # Required attributes for module, separated by a comma required-attributes= # List of builtins function names that should not be used, separated by a comma bad-functions=apply,input,reduce # Disable the report(s) with the given id(s). # All non-Google reports are disabled by default. 
disable-report=R0001,R0002,R0003,R0004,R0101,R0102,R0201,R0202,R0220,R0401,R0402,R0701,R0801,R0901,R0902,R0903,R0904,R0911,R0912,R0913,R0914,R0915,R0921,R0922,R0923 # Regular expression which should only match correct module names module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$ # Regular expression which should only match correct module level names const-rgx=^(_?[A-Z][A-Z0-9_]*|__[a-z0-9_]+__|_?[a-z][a-z0-9_]*)$ # Regular expression which should only match correct class names class-rgx=^_?[A-Z][a-zA-Z0-9]*$ # Regular expression which should only match correct function names function-rgx=^(?:(?P<camel_case>_?[A-Z][a-zA-Z0-9]*)|(?P<snake_case>_?[a-z][a-z0-9_]*))$ # Regular expression which should only match correct method names method-rgx=^(?:(?P<exempt>__[a-z0-9_]+__|next)|(?P<camel_case>_{0,2}[A-Z][a-zA-Z0-9]*)|(?P<snake_case>_{0,2}[a-z][a-z0-9_]*))$ # Regular expression which should only match correct instance attribute names attr-rgx=^_{0,2}[a-z][a-z0-9_]*$ # Regular expression which should only match correct argument names argument-rgx=^[a-z][a-z0-9_]*$ # Regular expression which should only match correct variable names variable-rgx=^[a-z][a-z0-9_]*$ # Regular expression which should only match correct attribute names in class # bodies class-attribute-rgx=^(_?[A-Z][A-Z0-9_]*|__[a-z0-9_]+__|_?[a-z][a-z0-9_]*)$ # Regular expression which should only match correct list comprehension / # generator expression variable names inlinevar-rgx=^[a-z][a-z0-9_]*$ # Good variable names which should always be accepted, separated by a comma good-names=main,_ # Bad variable names which should always be refused, separated by a comma bad-names= # Regular expression which should only match function or class names that do # not require a docstring. # # no-docstring-rgx=(__.*__|main) #< TF version no-docstring-rgx=(__.*__|main|.*Test|^test_|^_) # Minimum line length for functions/classes that require docstrings, shorter # ones are exempt. 
docstring-min-length=10 [FORMAT] # Maximum number of characters on a single line. max-line-length=80 # Regexp for a line that is allowed to be longer than the limit. ignore-long-lines=(?x) (^\s*(import|from)\s |\$Id:\s\/\/depot\/.+#\d+\s\$ |^[a-zA-Z_][a-zA-Z0-9_]*\s*=\s*("[^"]\S+"|'[^']\S+') |^\s*\#\ LINT\.ThenChange |^[^#]*\#\ type:\ [a-zA-Z_][a-zA-Z0-9_.,[\] ]*$ |pylint |""" |\# |lambda |(https?|ftp):) # Allow the body of an if to be on the same line as the test if there is no # else. single-line-if-stmt=y # List of optional constructs for which whitespace checking is disabled no-space-check= # Maximum number of lines in a module max-module-lines=99999 # String used as indentation unit. This is usually " " (4 spaces) or "\t" (1 # tab). indent-string=' ' [SIMILARITIES] # Minimum lines number of a similarity. min-similarity-lines=4 # Ignore comments when computing similarities. ignore-comments=yes # Ignore docstrings when computing similarities. ignore-docstrings=yes # Ignore imports when computing similarities. ignore-imports=no [MISCELLANEOUS] # List of note tags to take in consideration, separated by a comma. notes= [IMPORTS] # Deprecated modules which should not be used, separated by a comma deprecated-modules=regsub,TERMIOS,Bastion,rexec,sets # Create a graph of every (i.e. internal and external) dependencies in the # given file (report RP0402 must not be disabled) import-graph= # Create a graph of external dependencies in the given file (report RP0402 must # not be disabled) ext-import-graph= # Create a graph of internal dependencies in the given file (report RP0402 must # not be disabled) int-import-graph= [CLASSES] # List of interface methods to ignore, separated by a comma. This is used for # instance to not check methods defines in Zope's Interface base class. 
ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by # List of method names used to declare (i.e. assign) instance attributes. defining-attr-methods=__init__,__new__,setUp # List of valid names for the first argument in a class method. valid-classmethod-first-arg=cls,class_ # List of valid names for the first argument in a metaclass class method. valid-metaclass-classmethod-first-arg=mcs [DESIGN] # Maximum number of arguments for function / method max-args=5 # Argument names that match this expression will be ignored. Default to name # with leading underscore ignored-argument-names=_.* # Maximum number of locals for function / method body max-locals=15 # Maximum number of return / yield for function / method body max-returns=6 # Maximum number of branch for function / method body max-branches=12 # Maximum number of statements in function / method body max-statements=50 # Maximum number of parents for a class (see R0901). max-parents=7 # Maximum number of attributes for a class (see R0902). max-attributes=7 # Minimum number of public methods for a class (see R0903). min-public-methods=2 # Maximum number of public methods for a class (see R0904). max-public-methods=20 [EXCEPTIONS] # Exceptions that will emit a warning when being caught. Defaults to # "Exception" overgeneral-exceptions=Exception,StandardError,BaseException [AST] # Maximum line length for lambdas short-func-length=1 # List of module members that should be marked as deprecated. # All of the string functions are listed in 4.1.4 Deprecated string functions # in the Python 2.4 docs. 
deprecated-members=string.atof,string.atoi,string.atol,string.capitalize,string.expandtabs,string.find,string.rfind,string.index,string.rindex,string.count,string.lower,string.split,string.rsplit,string.splitfields,string.join,string.joinfields,string.lstrip,string.rstrip,string.strip,string.swapcase,string.translate,string.upper,string.ljust,string.rjust,string.center,string.zfill,string.replace,sys.exitfunc [DOCSTRING] # List of exceptions that do not need to be mentioned in the Raises section of # a docstring. ignore-exceptions=AssertionError,NotImplementedError,StopIteration,TypeError [TOKENS] # Number of spaces of indent required when the last token on the preceding line # is an open (, [, or {. indent-after-paren=4 [GOOGLE LINES] # Regexp for a proper copyright notice. copyright=Copyright \d{4} The TensorFlow Authors\. +All [Rr]ights [Rr]eserved\.
[MASTER]

# Specify a configuration file.
#rcfile=

# Python code to execute, usually for sys.path manipulation such as
# pygtk.require().
#init-hook=

# Profiled execution.
profile=no

# Add files or directories to the blacklist. They should be base names, not
# paths.
ignore=CVS,
       __pycache__,
       .git,
       .tox,
       .pytest_cache,
       tensorflow_graphics/projects/*

# Pickle collected data for later comparisons.
persistent=yes

# Use multiple processes to speed up Pylint.
jobs=4

# List of plugins (as comma separated values of python modules names) to load,
# usually to register additional checkers.
load-plugins=


[MESSAGES CONTROL]

# Enable the message, report, category or checker with the given id(s). You can
# either give multiple identifier separated by comma (,) or put this option
# multiple time. See also the "--disable" option for examples.
enable=indexing-exception,old-raise-syntax

# Disable the message, report, category or checker with the given id(s). You
# can either give multiple identifiers separated by comma (,) or put this
# option multiple times (only on the command line, not in the configuration
# file where it should appear only once). You can also use "--disable=all" to
# disable everything first and then reenable specific checks. For example, if
# you want to run only the similarities checker, you can use "--disable=all
# --enable=similarities". If you want to run only the classes checker, but have
# no Warning level messages displayed, use "--disable=all --enable=classes
# --disable=W"
disable=design,
        similarities,
        no-self-use,
        attribute-defined-outside-init,
        locally-disabled,
        star-args,
        pointless-except,
        bad-option-value,
        global-statement,
        fixme,
        suppressed-message,
        useless-suppression,
        locally-enabled,
        no-member,
        no-name-in-module,
        import-error,
        unsubscriptable-object,
        unbalanced-tuple-unpacking,
        undefined-variable,
        not-context-manager,
        E1130,  # (TFG) Invalid-unary-operand-type for Numpy array
        R1705,  # (TFG) Unnecessary "else" after "return" (no-else-return)
        R1720,  # (TFG) Unnecessary "else" after "raise" (no-else-raise)
        R1721,  # (TFG) Unnecessary use of a comprehension (unnecessary-comprehension)

# Set the cache size for astng objects.
cache-size=500


[REPORTS]

# Set the output format. Available formats are text, parseable, colorized, msvs
# (visual studio) and html. You can also give a reporter class, eg
# mypackage.mymodule.MyReporterClass.
output-format=text

# Put messages in a separate file for each module / package specified on the
# command line instead of printing them on stdout. Reports (if any) will be
# written in a file name "pylint_global.[txt|html]".
files-output=no

# Tells whether to display a full report or only the messages
reports=no

# Python expression which should return a note less than 10 (10 is the highest
# note). You have access to the variables errors warning, statement which
# respectively contain the number of errors / warnings messages and the total
# number of statements analyzed. This is used by the global evaluation report
# (RP0004).
evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)

# Add a comment according to your evaluation note. This is used by the global
# evaluation report (RP0004).
comment=no

# Template used to display messages. This is a python new-style format string
# used to format the message information. See doc for all details
#msg-template=


[TYPECHECK]

# Tells whether missing members accessed in mixin class should be ignored. A
# mixin class is detected if its name ends with "mixin" (case insensitive).
ignore-mixin-members=yes

# List of classes names for which member attributes should not be checked
# (useful for classes with attributes dynamically set).
ignored-classes=SQLObject

# When zope mode is activated, add a predefined set of Zope acquired attributes
# to generated-members.
zope=no

# List of members which are set dynamically and missed by pylint inference
# system, and so shouldn't trigger E0201 when accessed. Python regular
# expressions are accepted.
generated-members=REQUEST,acl_users,aq_parent

# List of decorators that create context managers from functions, such as
# contextlib.contextmanager.
contextmanager-decorators=contextlib.contextmanager,contextlib2.contextmanager


[VARIABLES]

# Tells whether we should check for unused import in __init__ files.
init-import=no

# A regular expression matching the beginning of the name of dummy variables
# (i.e. not used).
dummy-variables-rgx=^\*{0,2}(_$|unused_|dummy_)

# List of additional names supposed to be defined in builtins. Remember that
# you should avoid to define new builtins when possible.
additional-builtins=


[BASIC]

# Required attributes for module, separated by a comma
required-attributes=

# List of builtins function names that should not be used, separated by a comma
bad-functions=apply,input,reduce

# Disable the report(s) with the given id(s).
# All non-Google reports are disabled by default.
disable-report=R0001,R0002,R0003,R0004,R0101,R0102,R0201,R0202,R0220,R0401,R0402,R0701,R0801,R0901,R0902,R0903,R0904,R0911,R0912,R0913,R0914,R0915,R0921,R0922,R0923

# Regular expression which should only match correct module names
module-rgx=(([a-z_][a-z0-9_]*)|([A-Z][a-zA-Z0-9]+))$

# Regular expression which should only match correct module level names
const-rgx=^(_?[A-Z][A-Z0-9_]*|__[a-z0-9_]+__|_?[a-z][a-z0-9_]*)$

# Regular expression which should only match correct class names
class-rgx=^_?[A-Z][a-zA-Z0-9]*$

# Regular expression which should only match correct function names
function-rgx=^(?:(?P<camel_case>_?[A-Z][a-zA-Z0-9]*)|(?P<snake_case>_?[a-z][a-z0-9_]*))$

# Regular expression which should only match correct method names
method-rgx=^(?:(?P<exempt>__[a-z0-9_]+__|next)|(?P<camel_case>_{0,2}[A-Z][a-zA-Z0-9]*)|(?P<snake_case>_{0,2}[a-z][a-z0-9_]*))$

# Regular expression which should only match correct instance attribute names
attr-rgx=^_{0,2}[a-z][a-z0-9_]*$

# Regular expression which should only match correct argument names
argument-rgx=^[a-z][a-z0-9_]*$

# Regular expression which should only match correct variable names
variable-rgx=^[a-z][a-z0-9_]*$

# Regular expression which should only match correct attribute names in class
# bodies
class-attribute-rgx=^(_?[A-Z][A-Z0-9_]*|__[a-z0-9_]+__|_?[a-z][a-z0-9_]*)$

# Regular expression which should only match correct list comprehension /
# generator expression variable names
inlinevar-rgx=^[a-z][a-z0-9_]*$

# Good variable names which should always be accepted, separated by a comma
good-names=main,_

# Bad variable names which should always be refused, separated by a comma
bad-names=

# Regular expression which should only match function or class names that do
# not require a docstring.
#
# no-docstring-rgx=(__.*__|main)  #< TF version
no-docstring-rgx=(__.*__|main|.*Test|^test_|^_)

# Minimum line length for functions/classes that require docstrings, shorter
# ones are exempt.
docstring-min-length=10


[FORMAT]

# Maximum number of characters on a single line.
max-line-length=80

# Regexp for a line that is allowed to be longer than the limit.
ignore-long-lines=(?x)
  (^\s*(import|from)\s
   |\$Id:\s\/\/depot\/.+#\d+\s\$
   |^[a-zA-Z_][a-zA-Z0-9_]*\s*=\s*("[^"]\S+"|'[^']\S+')
   |^\s*\#\ LINT\.ThenChange
   |^[^#]*\#\ type:\ [a-zA-Z_][a-zA-Z0-9_.,[\] ]*$
   |pylint
   |"""
   |\#
   |lambda
   |(https?|ftp):)

# Allow the body of an if to be on the same line as the test if there is no
# else.
single-line-if-stmt=y

# List of optional constructs for which whitespace checking is disabled
no-space-check=

# Maximum number of lines in a module
max-module-lines=99999

# String used as indentation unit. This is usually "    " (4 spaces) or "\t" (1
# tab).
indent-string='  '


[SIMILARITIES]

# Minimum lines number of a similarity.
min-similarity-lines=4

# Ignore comments when computing similarities.
ignore-comments=yes

# Ignore docstrings when computing similarities.
ignore-docstrings=yes

# Ignore imports when computing similarities.
ignore-imports=no


[MISCELLANEOUS]

# List of note tags to take in consideration, separated by a comma.
notes=


[IMPORTS]

# Deprecated modules which should not be used, separated by a comma
deprecated-modules=regsub,TERMIOS,Bastion,rexec,sets

# Create a graph of every (i.e. internal and external) dependencies in the
# given file (report RP0402 must not be disabled)
import-graph=

# Create a graph of external dependencies in the given file (report RP0402 must
# not be disabled)
ext-import-graph=

# Create a graph of internal dependencies in the given file (report RP0402 must
# not be disabled)
int-import-graph=


[CLASSES]

# List of interface methods to ignore, separated by a comma. This is used for
# instance to not check methods defines in Zope's Interface base class.
ignore-iface-methods=isImplementedBy,deferred,extends,names,namesAndDescriptions,queryDescriptionFor,getBases,getDescriptionFor,getDoc,getName,getTaggedValue,getTaggedValueTags,isEqualOrExtendedBy,setTaggedValue,isImplementedByInstancesOf,adaptWith,is_implemented_by

# List of method names used to declare (i.e. assign) instance attributes.
defining-attr-methods=__init__,__new__,setUp

# List of valid names for the first argument in a class method.
valid-classmethod-first-arg=cls,class_

# List of valid names for the first argument in a metaclass class method.
valid-metaclass-classmethod-first-arg=mcs


[DESIGN]

# Maximum number of arguments for function / method
max-args=5

# Argument names that match this expression will be ignored. Default to name
# with leading underscore
ignored-argument-names=_.*

# Maximum number of locals for function / method body
max-locals=15

# Maximum number of return / yield for function / method body
max-returns=6

# Maximum number of branch for function / method body
max-branches=12

# Maximum number of statements in function / method body
max-statements=50

# Maximum number of parents for a class (see R0901).
max-parents=7

# Maximum number of attributes for a class (see R0902).
max-attributes=7

# Minimum number of public methods for a class (see R0903).
min-public-methods=2

# Maximum number of public methods for a class (see R0904).
max-public-methods=20


[EXCEPTIONS]

# Exceptions that will emit a warning when being caught. Defaults to
# "Exception"
overgeneral-exceptions=Exception,StandardError,BaseException


[AST]

# Maximum line length for lambdas
short-func-length=1

# List of module members that should be marked as deprecated.
# All of the string functions are listed in 4.1.4 Deprecated string functions
# in the Python 2.4 docs.
deprecated-members=string.atof,string.atoi,string.atol,string.capitalize,string.expandtabs,string.find,string.rfind,string.index,string.rindex,string.count,string.lower,string.split,string.rsplit,string.splitfields,string.join,string.joinfields,string.lstrip,string.rstrip,string.strip,string.swapcase,string.translate,string.upper,string.ljust,string.rjust,string.center,string.zfill,string.replace,sys.exitfunc


[DOCSTRING]

# List of exceptions that do not need to be mentioned in the Raises section of
# a docstring.
ignore-exceptions=AssertionError,NotImplementedError,StopIteration,TypeError


[TOKENS]

# Number of spaces of indent required when the last token on the preceding line
# is an open (, [, or {.
indent-after-paren=4


[GOOGLE LINES]

# Regexp for a proper copyright notice.
copyright=Copyright \d{4} The TensorFlow Authors\. +All [Rr]ights [Rr]eserved\.