repo_name
stringclasses 6
values | pr_number
int64 99
20.3k
| pr_title
stringlengths 8
158
| pr_description
stringlengths 0
6.54k
| author
stringlengths 4
18
| date_created
unknown | date_merged
unknown | previous_commit
stringlengths 40
40
| pr_commit
stringlengths 40
40
| query
stringlengths 37
6.57k
| filepath
stringlengths 8
153
| before_content
stringlengths 0
876M
| after_content
stringlengths 0
876M
| label
int64 -1
1
|
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./requirements.txt | tensorflow >= 2.2.0
tensorflow-addons >= 0.10.0
tensorflow-datasets >= 2.0.0
absl-py >= 0.6.1
h5py >= 2.10.0
matplotlib >= 2.2.5
numpy >= 1.15.4
psutil >= 5.7.0
scipy >= 1.1.0
tqdm >= 4.45.0
OpenEXR >= 1.3.2
termcolor >= 1.1.0
trimesh >= 2.37.22
# Required by trimesh.
networkx
| tensorflow >= 2.2.0
tensorflow-addons >= 0.10.0
tensorflow-datasets >= 2.0.0
absl-py >= 0.6.1
h5py >= 2.10.0
matplotlib >= 2.2.5
numpy >= 1.15.4
psutil >= 5.7.0
scipy >= 1.1.0
tqdm >= 4.45.0
OpenEXR >= 1.3.2
termcolor >= 1.1.0
trimesh >= 2.37.22
# Required by trimesh.
networkx
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/voxels/tests/test_helpers.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test helpers for the voxels module."""
import numpy as np
def generate_random_test_voxels_render():
"""Generates random test for the voxels rendering functions."""
batch_shape = np.random.randint(1, 3)
voxels_shape = np.random.randint(2, 8, size=(3)).tolist()
signals_dimension = np.random.randint(2, 4)
random_voxels = np.random.uniform(size=[batch_shape] + voxels_shape +
[signals_dimension])
return random_voxels
def generate_preset_test_voxels_visual_hull_render():
"""Generates preset test for the visual hull voxels rendering function."""
voxels = np.array([[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0.3, 0.7, 0.1], [1, 0.1, 0], [0.2, 0.1, 0.8]],
[[0.1, 0.9, 0], [0.2, 1, 0.4], [0.3, 0.2, 0]]],
[[[0.15, 0.69, 0.57], [0.07, 0.33, 0.55], [0, 0, 0]],
[[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]],
[[0.17, 0.01, 1.22], [0.2, 1, 0.4], [0.67, 0.94, 0.14]]],
[[[0, 0, 0], [0.17, 0.33, 0.55], [0, 0, 0]],
[[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]],
[[0.1, 0.9, 0], [0.2, 1, 0.4], [0.88, 0.09, 0.45]]],
[[[1, 0, 0], [0, 0, 0], [1, 0, 0]],
[[0.88, 0.09, 0.5], [0.71, 0.61, 0.4], [0.14, 0, 0.22]],
[[0.71, 0.61, 0.45], [0.71, 0.7, 0.43], [0.3, 0.2, 0]]]],
[[[[0, 0, 0], [0, 0, 0], [0, 0, 1]],
[[0.3, 0.7, 0.1], [0.15, 0.69, 0.5], [0.88, 0.09, 0.45]],
[[0.07, 0.33, 0.55], [0.2, 1, 0.4], [0.4, 0.34, 0.43]]],
[[[0, 1, 0], [0, 1, 0], [0, 1, 0]],
[[0.19, 0.06, 0.24], [1, 0.1, 0], [0.2, 0.1, 0.8]],
[[0.67, 0.94, 0.14], [0.2, 1, 0.4], [0.15, 0.69, 0.57]]],
[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0.74, 0.67, 0.4], [0.64, 0.8, 0.19], [0.9, 0.6, 0.48]],
[[0.1, 0.9, 0], [0.02, 0.37, 0.56], [0.62, 0.98, 0.19]]],
[[[0.04, 0.87, 0.37], [0, 0, 0], [1, 0, 0]],
[[0.3, 0.7, 0.1], [0.24, 0.12, 0.7], [0.76, 0.64, 0.79]],
[[0.7, 0.2, 0.2], [0.4, 1, 0.9], [0.19, 0.66, 0.03]]]]])
images = np.array([[[[0, 0, 0],
[0.77686984, 0.59343034, 0.59343034],
[0.45118836, 0.87754357, 0.32967995]],
[[0.19748120, 0.63940506, 0.67372021],
[0.91107838, 0.73286470, 0.57683792],
[0.64654532, 0.85772593, 0.82795514]],
[[0.15633518, 0.28107627, 0.42305019],
[0.91107838, 0.73286470, 0.57683792],
[0.69272126, 0.86330457, 0.57258507]],
[[0.86466472, 0, 0],
[0.82271559, 0.50341470, 0.67372021],
[0.82093385, 0.77909002, 0.58521709]]],
[[[0, 0, 0.63212055],
[0.73552274, 0.77236231, 0.65006225],
[0.48829142, 0.81175293, 0.74842145]],
[[0, 0.950212931, 0],
[0.75092470, 0.22894841, 0.64654532],
[0.63940506, 0.92792154, 0.67044104]],
[[0, 0, 0],
[0.89771579, 0.87381422, 0.65699148],
[0.52288608, 0.89460078, 0.52763345]],
[[0.64654532, 0.58104845, 0.30926567],
[0.72746821, 0.76776373, 0.79607439],
[0.724729, 0.844327, 0.676967]]]]) # pyformat: disable
return voxels, images
def generate_preset_test_voxels_absorption_render():
"""Generates preset test for the absorption voxels rendering function."""
voxels = np.array([[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0.3, 0.7, 0.1], [1, 0.1, 0], [0.2, 0.1, 0.8]],
[[0.1, 0.9, 0], [0.2, 1, 0.4], [0.3, 0.2, 0]]],
[[[0.15, 0.69, 0.57], [0.07, 0.33, 0.55], [0, 0, 0]],
[[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]],
[[0.17, 0.01, 1.22], [0.2, 1, 0.4], [0.67, 0.94, 0.14]]],
[[[0, 0, 0], [0.17, 0.33, 0.55], [0, 0, 0]],
[[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]],
[[0.1, 0.9, 0], [0.2, 1, 0.4], [0.88, 0.09, 0.45]]],
[[[1, 0, 0], [0, 0, 0], [1, 0, 0]],
[[0.88, 0.09, 0.5], [0.71, 0.61, 0.4], [0.14, 0, 0.22]],
[[0.71, 0.61, 0.45], [0.71, 0.7, 0.43], [0.3, 0.2, 0]]]],
[[[[0, 0, 0], [0, 0, 0], [0, 0, 1]],
[[0.3, 0.7, 0.1], [0.15, 0.69, 0.5], [0.88, 0.09, 0.45]],
[[0.07, 0.33, 0.55], [0.2, 1, 0.4], [0.4, 0.34, 0.43]]],
[[[0, 1, 0], [0, 1, 0], [0, 1, 0]],
[[0.19, 0.06, 0.24], [1, 0.1, 0], [0.2, 0.1, 0.8]],
[[0.67, 0.94, 0.14], [0.2, 1, 0.4], [0.15, 0.69, 0.57]]],
[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0.74, 0.67, 0.4], [0.64, 0.8, 0.19], [0.9, 0.6, 0.48]],
[[0.1, 0.9, 0], [0.02, 0.37, 0.56], [0.62, 0.98, 0.19]]],
[[[0.04, 0.87, 0.37], [0, 0, 0], [1, 0, 0]],
[[0.3, 0.7, 0.1], [0.24, 0.12, 0.7], [0.76, 0.64, 0.79]],
[[0.7, 0.2, 0.2], [0.4, 1, 0.9], [0.19, 0.66, 0.03]]]]])
images = np.array([[[[0, 0, 0],
[0.6175, 0.413375, 0.43],
[0.27325, 0.7525, 0.2]],
[[0.107375, 0.453075, 0.481625],
[0.7919875, 0.54112625, 0.383775],
[0.4523725, 0.736325, 0.70984]],
[[0.085, 0.165, 0.275],
[0.7919875, 0.54112625, 0.383775],
[0.5212, 0.737375, 0.38]],
[[0.75, 0, 0],
[0.664084, 0.336275, 0.466],
[0.64637875, 0.593425, 0.391625]]],
[[[0, 0, 0.5],
[0.5597, 0.59340875, 0.4478125],
[0.3052, 0.653475, 0.5447]],
[[0, 0.875, 0],
[0.59275, 0.124575, 0.472],
[0.4463875, 0.826425, 0.46804]],
[[0, 0, 0],
[0.76438, 0.7207, 0.44976],
[0.351055, 0.7713925, 0.3484]],
[[0.51, 0.435, 0.185],
[0.53624, 0.58452, 0.6264125],
[0.5294, 0.6985, 0.512425]]]]) # pyformat: disable
return voxels, images
def generate_preset_test_voxels_emission_absorption_render():
"""Generates preset test for the emission absorption voxels rendering function."""
voxels = np.array([[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0.3, 0.7, 0.1], [1, 0.1, 0], [0.2, 0.1, 0.8]],
[[0.1, 0.9, 0], [0.2, 1, 0.4], [0.3, 0.2, 0]]],
[[[0.15, 0.69, 0.57], [0.07, 0.33, 0.55], [0, 0, 0]],
[[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]],
[[0.17, 0.01, 1.22], [0.2, 1, 0.4], [0.67, 0.94, 0.14]]],
[[[0, 0, 0], [0.17, 0.33, 0.55], [0, 0, 0]],
[[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]],
[[0.1, 0.9, 0], [0.2, 1, 0.4], [0.88, 0.09, 0.45]]],
[[[1, 0, 0], [0, 0, 0], [1, 0, 0]],
[[0.88, 0.09, 0.5], [0.71, 0.61, 0.4], [0.14, 0, 0.22]],
[[0.71, 0.61, 0.45], [0.71, 0.7, 0.43], [0.3, 0.2, 0]]]],
[[[[0, 0, 0], [0, 0, 0], [0, 0, 1]],
[[0.3, 0.7, 0.1], [0.15, 0.69, 0.5], [0.88, 0.09, 0.45]],
[[0.07, 0.33, 0.55], [0.2, 1, 0.4], [0.4, 0.34, 0.43]]],
[[[0, 1, 0], [0, 1, 0], [0, 1, 0]],
[[0.19, 0.06, 0.24], [1, 0.1, 0], [0.2, 0.1, 0.8]],
[[0.67, 0.94, 0.14], [0.2, 1, 0.4], [0.15, 0.69, 0.57]]],
[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0.74, 0.67, 0.4], [0.64, 0.8, 0.19], [0.9, 0.6, 0.48]],
[[0.1, 0.9, 0], [0.02, 0.37, 0.56], [0.62, 0.98, 0.19]]],
[[[0.04, 0.87, 0.37], [0, 0, 0], [1, 0, 0]],
[[0.3, 0.7, 0.1], [0.24, 0.12, 0.7], [0.76, 0.64, 0.79]],
[[0.7, 0.2, 0.2], [0.4, 1, 0.9], [0.19, 0.66, 0.03]]]]])
images = np.array([[[[0, 0, 0],
[0.19553845, 0.27123076, 0.82],
[0.08, 0.39999998, 0.4]],
[[0.10144142, 0.46858389, 0.8065],
[0.47932099, 0.41181099, 0.6751],
[0.22078022, 0.23262935, 1.11352]],
[[0.0935, 0.18149999, 0.55],
[0.47932099, 0.41181099, 0.6751],
[0.30814825, 0.43694864, 0.67]],
[[0, 0, 0],
[0.5677705, 0.17392569, 0.766],
[0.48741499, 0.44055107, 0.6865]]],
[[[0, 0, 1],
[0.28019208, 0.40287539, 0.7525],
[0.13121746, 0.42573205, 0.8461]],
[[0, 0, 0],
[0.16451199, 0.064448, 0.848],
[0.24191167, 0.69841443, 0.77812]],
[[0, 0, 0],
[0.56974806, 0.50646416, 0.74728],
[0.09611898, 0.32276643, 0.6436]],
[[0.0148, 0.32189999, 0.37],
[0.3099809, 0.33312645, 0.9433],
[0.55598098, 0.41542985, 0.9224]]]]) # pyformat: disable
return voxels, images
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Test helpers for the voxels module."""
import numpy as np
def generate_random_test_voxels_render():
"""Generates random test for the voxels rendering functions."""
batch_shape = np.random.randint(1, 3)
voxels_shape = np.random.randint(2, 8, size=(3)).tolist()
signals_dimension = np.random.randint(2, 4)
random_voxels = np.random.uniform(size=[batch_shape] + voxels_shape +
[signals_dimension])
return random_voxels
def generate_preset_test_voxels_visual_hull_render():
"""Generates preset test for the visual hull voxels rendering function."""
voxels = np.array([[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0.3, 0.7, 0.1], [1, 0.1, 0], [0.2, 0.1, 0.8]],
[[0.1, 0.9, 0], [0.2, 1, 0.4], [0.3, 0.2, 0]]],
[[[0.15, 0.69, 0.57], [0.07, 0.33, 0.55], [0, 0, 0]],
[[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]],
[[0.17, 0.01, 1.22], [0.2, 1, 0.4], [0.67, 0.94, 0.14]]],
[[[0, 0, 0], [0.17, 0.33, 0.55], [0, 0, 0]],
[[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]],
[[0.1, 0.9, 0], [0.2, 1, 0.4], [0.88, 0.09, 0.45]]],
[[[1, 0, 0], [0, 0, 0], [1, 0, 0]],
[[0.88, 0.09, 0.5], [0.71, 0.61, 0.4], [0.14, 0, 0.22]],
[[0.71, 0.61, 0.45], [0.71, 0.7, 0.43], [0.3, 0.2, 0]]]],
[[[[0, 0, 0], [0, 0, 0], [0, 0, 1]],
[[0.3, 0.7, 0.1], [0.15, 0.69, 0.5], [0.88, 0.09, 0.45]],
[[0.07, 0.33, 0.55], [0.2, 1, 0.4], [0.4, 0.34, 0.43]]],
[[[0, 1, 0], [0, 1, 0], [0, 1, 0]],
[[0.19, 0.06, 0.24], [1, 0.1, 0], [0.2, 0.1, 0.8]],
[[0.67, 0.94, 0.14], [0.2, 1, 0.4], [0.15, 0.69, 0.57]]],
[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0.74, 0.67, 0.4], [0.64, 0.8, 0.19], [0.9, 0.6, 0.48]],
[[0.1, 0.9, 0], [0.02, 0.37, 0.56], [0.62, 0.98, 0.19]]],
[[[0.04, 0.87, 0.37], [0, 0, 0], [1, 0, 0]],
[[0.3, 0.7, 0.1], [0.24, 0.12, 0.7], [0.76, 0.64, 0.79]],
[[0.7, 0.2, 0.2], [0.4, 1, 0.9], [0.19, 0.66, 0.03]]]]])
images = np.array([[[[0, 0, 0],
[0.77686984, 0.59343034, 0.59343034],
[0.45118836, 0.87754357, 0.32967995]],
[[0.19748120, 0.63940506, 0.67372021],
[0.91107838, 0.73286470, 0.57683792],
[0.64654532, 0.85772593, 0.82795514]],
[[0.15633518, 0.28107627, 0.42305019],
[0.91107838, 0.73286470, 0.57683792],
[0.69272126, 0.86330457, 0.57258507]],
[[0.86466472, 0, 0],
[0.82271559, 0.50341470, 0.67372021],
[0.82093385, 0.77909002, 0.58521709]]],
[[[0, 0, 0.63212055],
[0.73552274, 0.77236231, 0.65006225],
[0.48829142, 0.81175293, 0.74842145]],
[[0, 0.950212931, 0],
[0.75092470, 0.22894841, 0.64654532],
[0.63940506, 0.92792154, 0.67044104]],
[[0, 0, 0],
[0.89771579, 0.87381422, 0.65699148],
[0.52288608, 0.89460078, 0.52763345]],
[[0.64654532, 0.58104845, 0.30926567],
[0.72746821, 0.76776373, 0.79607439],
[0.724729, 0.844327, 0.676967]]]]) # pyformat: disable
return voxels, images
def generate_preset_test_voxels_absorption_render():
"""Generates preset test for the absorption voxels rendering function."""
voxels = np.array([[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0.3, 0.7, 0.1], [1, 0.1, 0], [0.2, 0.1, 0.8]],
[[0.1, 0.9, 0], [0.2, 1, 0.4], [0.3, 0.2, 0]]],
[[[0.15, 0.69, 0.57], [0.07, 0.33, 0.55], [0, 0, 0]],
[[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]],
[[0.17, 0.01, 1.22], [0.2, 1, 0.4], [0.67, 0.94, 0.14]]],
[[[0, 0, 0], [0.17, 0.33, 0.55], [0, 0, 0]],
[[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]],
[[0.1, 0.9, 0], [0.2, 1, 0.4], [0.88, 0.09, 0.45]]],
[[[1, 0, 0], [0, 0, 0], [1, 0, 0]],
[[0.88, 0.09, 0.5], [0.71, 0.61, 0.4], [0.14, 0, 0.22]],
[[0.71, 0.61, 0.45], [0.71, 0.7, 0.43], [0.3, 0.2, 0]]]],
[[[[0, 0, 0], [0, 0, 0], [0, 0, 1]],
[[0.3, 0.7, 0.1], [0.15, 0.69, 0.5], [0.88, 0.09, 0.45]],
[[0.07, 0.33, 0.55], [0.2, 1, 0.4], [0.4, 0.34, 0.43]]],
[[[0, 1, 0], [0, 1, 0], [0, 1, 0]],
[[0.19, 0.06, 0.24], [1, 0.1, 0], [0.2, 0.1, 0.8]],
[[0.67, 0.94, 0.14], [0.2, 1, 0.4], [0.15, 0.69, 0.57]]],
[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0.74, 0.67, 0.4], [0.64, 0.8, 0.19], [0.9, 0.6, 0.48]],
[[0.1, 0.9, 0], [0.02, 0.37, 0.56], [0.62, 0.98, 0.19]]],
[[[0.04, 0.87, 0.37], [0, 0, 0], [1, 0, 0]],
[[0.3, 0.7, 0.1], [0.24, 0.12, 0.7], [0.76, 0.64, 0.79]],
[[0.7, 0.2, 0.2], [0.4, 1, 0.9], [0.19, 0.66, 0.03]]]]])
images = np.array([[[[0, 0, 0],
[0.6175, 0.413375, 0.43],
[0.27325, 0.7525, 0.2]],
[[0.107375, 0.453075, 0.481625],
[0.7919875, 0.54112625, 0.383775],
[0.4523725, 0.736325, 0.70984]],
[[0.085, 0.165, 0.275],
[0.7919875, 0.54112625, 0.383775],
[0.5212, 0.737375, 0.38]],
[[0.75, 0, 0],
[0.664084, 0.336275, 0.466],
[0.64637875, 0.593425, 0.391625]]],
[[[0, 0, 0.5],
[0.5597, 0.59340875, 0.4478125],
[0.3052, 0.653475, 0.5447]],
[[0, 0.875, 0],
[0.59275, 0.124575, 0.472],
[0.4463875, 0.826425, 0.46804]],
[[0, 0, 0],
[0.76438, 0.7207, 0.44976],
[0.351055, 0.7713925, 0.3484]],
[[0.51, 0.435, 0.185],
[0.53624, 0.58452, 0.6264125],
[0.5294, 0.6985, 0.512425]]]]) # pyformat: disable
return voxels, images
def generate_preset_test_voxels_emission_absorption_render():
"""Generates preset test for the emission absorption voxels rendering function."""
voxels = np.array([[[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0.3, 0.7, 0.1], [1, 0.1, 0], [0.2, 0.1, 0.8]],
[[0.1, 0.9, 0], [0.2, 1, 0.4], [0.3, 0.2, 0]]],
[[[0.15, 0.69, 0.57], [0.07, 0.33, 0.55], [0, 0, 0]],
[[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]],
[[0.17, 0.01, 1.22], [0.2, 1, 0.4], [0.67, 0.94, 0.14]]],
[[[0, 0, 0], [0.17, 0.33, 0.55], [0, 0, 0]],
[[0.71, 0.61, 0.43], [1, 0.1, 0], [0.71, 0.61, 0.43]],
[[0.1, 0.9, 0], [0.2, 1, 0.4], [0.88, 0.09, 0.45]]],
[[[1, 0, 0], [0, 0, 0], [1, 0, 0]],
[[0.88, 0.09, 0.5], [0.71, 0.61, 0.4], [0.14, 0, 0.22]],
[[0.71, 0.61, 0.45], [0.71, 0.7, 0.43], [0.3, 0.2, 0]]]],
[[[[0, 0, 0], [0, 0, 0], [0, 0, 1]],
[[0.3, 0.7, 0.1], [0.15, 0.69, 0.5], [0.88, 0.09, 0.45]],
[[0.07, 0.33, 0.55], [0.2, 1, 0.4], [0.4, 0.34, 0.43]]],
[[[0, 1, 0], [0, 1, 0], [0, 1, 0]],
[[0.19, 0.06, 0.24], [1, 0.1, 0], [0.2, 0.1, 0.8]],
[[0.67, 0.94, 0.14], [0.2, 1, 0.4], [0.15, 0.69, 0.57]]],
[[[0, 0, 0], [0, 0, 0], [0, 0, 0]],
[[0.74, 0.67, 0.4], [0.64, 0.8, 0.19], [0.9, 0.6, 0.48]],
[[0.1, 0.9, 0], [0.02, 0.37, 0.56], [0.62, 0.98, 0.19]]],
[[[0.04, 0.87, 0.37], [0, 0, 0], [1, 0, 0]],
[[0.3, 0.7, 0.1], [0.24, 0.12, 0.7], [0.76, 0.64, 0.79]],
[[0.7, 0.2, 0.2], [0.4, 1, 0.9], [0.19, 0.66, 0.03]]]]])
images = np.array([[[[0, 0, 0],
[0.19553845, 0.27123076, 0.82],
[0.08, 0.39999998, 0.4]],
[[0.10144142, 0.46858389, 0.8065],
[0.47932099, 0.41181099, 0.6751],
[0.22078022, 0.23262935, 1.11352]],
[[0.0935, 0.18149999, 0.55],
[0.47932099, 0.41181099, 0.6751],
[0.30814825, 0.43694864, 0.67]],
[[0, 0, 0],
[0.5677705, 0.17392569, 0.766],
[0.48741499, 0.44055107, 0.6865]]],
[[[0, 0, 1],
[0.28019208, 0.40287539, 0.7525],
[0.13121746, 0.42573205, 0.8461]],
[[0, 0, 0],
[0.16451199, 0.064448, 0.848],
[0.24191167, 0.69841443, 0.77812]],
[[0, 0, 0],
[0.56974806, 0.50646416, 0.74728],
[0.09611898, 0.32276643, 0.6436]],
[[0.0148, 0.32189999, 0.37],
[0.3099809, 0.33312645, 0.9433],
[0.55598098, 0.41542985, 0.9224]]]]) # pyformat: disable
return voxels, images
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/projects/local_implicit_grid/core/reconstruction.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Utility modules for reconstructing scenes.
"""
import os
import numpy as np
from skimage import measure
import tensorflow.compat.v1 as tf
from tensorflow_graphics.projects.local_implicit_grid.core import evaluator
from tensorflow_graphics.projects.local_implicit_grid.core import local_implicit_grid_layer as lig
from tensorflow_graphics.projects.local_implicit_grid.core import point_utils as pt
class LIGOptimizer(object):
"""Class for using optimization to acquire feature grid."""
def __init__(self, ckpt, origin, grid_shape, part_size, occ_idx,
indep_pt_loss=True, overlap=True, alpha_lat=1e-2, npts=2048,
init_std=1e-2, learning_rate=1e-3, var_prefix='', nows=False):
self.ckpt = ckpt
self.ckpt_dir = os.path.dirname(ckpt)
self.params = self._load_params(self.ckpt_dir)
self.origin = origin
self.grid_shape = grid_shape
self.part_size = part_size
self.occ_idx = occ_idx
self.init_std = init_std
self.learning_rate = learning_rate
self.var_prefix = var_prefix
self.nows = nows
self.xmin = self.origin
if overlap:
true_shape = (np.array(grid_shape) - 1) / 2.0
self.xmax = self.origin + true_shape * part_size
else:
self.xmax = self.origin + (np.array(grid_shape) - 1) * part_size
_, sj, sk = self.grid_shape
self.occ_idx_flat = (self.occ_idx[:, 0]*(sj*sk)+
self.occ_idx[:, 1]*sk+self.occ_idx[:, 2])
self.indep_pt_loss = indep_pt_loss
self.overlap = overlap
self.alpha_lat = alpha_lat
self.npts = int(npts)
self._init_graph()
def _load_params(self, ckpt_dir):
param_file = os.path.join(ckpt_dir, 'params.txt')
params = evaluator.parse_param_file(param_file)
return params
def _init_graph(self):
"""Initialize computation graph for tensorflow.
"""
self.graph = tf.Graph()
with self.graph.as_default():
self.point_coords_ph = tf.placeholder(
tf.float32,
shape=[1, self.npts, 3]) # placeholder
self.point_values_ph = tf.placeholder(
tf.float32,
shape=[1, self.npts, 1]) # placeholder
self.point_coords = self.point_coords_ph
self.point_values = self.point_values_ph
self.liggrid = lig.LocalImplicitGrid(
size=self.grid_shape,
in_features=self.params['codelen'],
out_features=1,
num_filters=self.params['refiner_nf'],
net_type='imnet',
method='linear' if self.overlap else 'nn',
x_location_max=(1.0 if self.overlap else 2.0),
name='lig',
interp=(not self.indep_pt_loss),
min_grid_value=self.xmin,
max_grid_value=self.xmax)
si, sj, sk = self.grid_shape
self.occ_idx_flat_ = tf.convert_to_tensor(
self.occ_idx_flat[:, np.newaxis])
self.shape_ = tf.constant([si*sj*sk, self.params['codelen']],
dtype=tf.int64)
self.feat_sparse_ = tf.Variable(
(tf.random.normal(shape=[self.occ_idx.shape[0],
self.params['codelen']]) *
self.init_std),
trainable=True,
name='feat_sparse')
self.feat_grid = tf.scatter_nd(self.occ_idx_flat_,
self.feat_sparse_,
self.shape_)
self.feat_grid = tf.reshape(self.feat_grid,
[1, si, sj, sk, self.params['codelen']])
self.feat_norm = tf.norm(self.feat_sparse_, axis=-1)
if self.indep_pt_loss:
self.preds, self.weights = self.liggrid(self.feat_grid,
self.point_coords,
training=True)
# preds: [b, n, 8, 1], weights: [b, n, 8]
self.preds_interp = tf.reduce_sum(
tf.expand_dims(self.weights, axis=-1)*self.preds,
axis=2) # [b, n, 1]
self.preds = tf.concat([self.preds,
self.preds_interp[:, :, tf.newaxis, :]],
axis=2) # preds: [b, n, 9, 1]
self.point_values = tf.broadcast_to(
self.point_values[:, :, tf.newaxis, :],
shape=self.preds.shape) # [b, n, 9, 1]
else:
self.preds = self.liggrid(self.feat_grid,
self.point_coords,
training=True) # [b, n, 1]
self.labels_01 = (self.point_values+1) / 2 # turn labels to 0, 1 labels
self.loss_pt = tf.losses.sigmoid_cross_entropy(
self.labels_01,
logits=self.preds,
reduction=tf.losses.Reduction.NONE)
self.loss_lat = tf.reduce_mean(self.feat_norm) * self.alpha_lat
self.loss = tf.reduce_mean(self.loss_pt) + self.loss_lat
# compute accuracy metric
if self.indep_pt_loss:
self.pvalue = tf.sign(self.point_values[:, :, -1, 0])
self.ppred = tf.sign(self.preds[:, :, -1, 0])
else:
self.pvalue = tf.sign(self.point_values[..., 0])
self.ppred = tf.sign(self.preds[:, :, 0])
self.accu = tf.reduce_sum(tf.cast(
tf.logical_or(tf.logical_and(self.pvalue > 0, self.ppred > 0),
tf.logical_and(self.pvalue < 0, self.ppred < 0)),
tf.float32)) / float(self.npts)
# get optimizer
self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate)
self.fgrid_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
scope='feat_sparse')
self.train_op = self.optimizer.minimize(
self.loss,
global_step=tf.train.get_or_create_global_step(),
var_list=[self.fgrid_vars])
self.map_dict = self._get_var_mapping(model=self.liggrid,
scope=self.var_prefix)
self.sess = tf.Session()
if not self.nows:
self.saver = tf.train.Saver(self.map_dict)
self.saver.restore(self.sess, self.ckpt)
self._initialize_uninitialized(self.sess)
def _get_var_mapping(self, model, scope=''):
vars_ = model.trainable_variables
varnames = [v.name for v in vars_] # .split(':')[0]
varnames = [scope+v.replace('lig/', '').strip(':0') for v in varnames]
map_dict = dict(zip(varnames, vars_))
return map_dict
def _initialize_uninitialized(self, sess):
global_vars = tf.global_variables()
is_not_initialized = sess.run(
[tf.is_variable_initialized(var) for var in global_vars])
not_initialized_vars = [v for (v, f) in zip(global_vars,
is_not_initialized) if not f]
if not_initialized_vars:
sess.run(tf.variables_initializer(not_initialized_vars))
def optimize_feat_grid(self, point_coords, point_vals, steps=10000,
print_every_n_steps=1000):
"""Optimize feature grid.
Args:
point_coords: [npts, 3] point coordinates.
point_vals: [npts, 1] point values.
steps: int, number of steps for gradient descent.
print_every_n_steps: int, print every n steps.
Returns:
"""
print_every_n_steps = int(print_every_n_steps)
point_coords = point_coords.copy()
point_vals = np.sign(point_vals.copy())
if point_coords.ndim == 3:
point_coords = point_coords[0]
if point_vals.ndim == 3:
point_vals = point_vals[0]
elif point_vals.ndim == 1:
point_vals = point_vals[:, np.newaxis]
# clip
point_coords = np.clip(point_coords, self.xmin, self.xmax)
# shuffle points
seq = np.random.permutation(point_coords.shape[0])
point_coords = point_coords[seq]
point_vals = point_vals[seq]
point_coords = point_coords[np.newaxis]
point_vals = point_vals[np.newaxis]
# random point sampling function
def random_point_sample():
sid = np.random.choice(point_coords.shape[1]-self.npts+1)
eid = sid + self.npts
return point_coords[:, sid:eid], point_vals[:, sid:eid]
with self.graph.as_default():
for i in range(steps):
pc, pv = random_point_sample()
accu_, loss_, _ = self.sess.run([self.accu, self.loss, self.train_op],
feed_dict={
self.point_coords_ph: pc,
self.point_values_ph: pv})
if i % print_every_n_steps == 0:
print('Step [{:6d}] Accu: {:5.4f} Loss: {:5.4f}'.format(i,
accu_, loss_))
@property
def feature_grid(self):
with self.graph.as_default():
return self.sess.run(self.feat_grid)
def occupancy_sparse_to_dense(occ_idx, grid_shape):
dense = np.zeros(grid_shape, dtype=np.bool).ravel()
occ_idx_f = (occ_idx[:, 0] * grid_shape[1] * grid_shape[2] +
occ_idx[:, 1] * grid_shape[2] + occ_idx[:, 2])
dense[occ_idx_f] = True
dense = np.reshape(dense, grid_shape)
return dense
def get_in_out_from_samples(mesh, npoints, sample_factor=10, std=0.01):
"""Get in/out point samples from a given mesh.
Args:
mesh: trimesh mesh. Original mesh to sample points from.
npoints: int, number of points to sample on the mesh surface.
sample_factor: int, number of samples to pick per surface point.
std: float, std of samples to generate.
Returns:
surface_samples: [npoints, 6], where first 3 dims are xyz, last 3 dims are
normals (nx, ny, nz).
"""
surface_point_samples, fid = mesh.sample(int(npoints), return_index=True)
surface_point_normals = mesh.face_normals[fid]
offsets = np.random.randn(int(npoints), sample_factor, 1) * std
near_surface_samples = (surface_point_samples[:, np.newaxis, :] +
surface_point_normals[:, np.newaxis, :] * offsets)
near_surface_samples = np.concatenate([near_surface_samples, offsets],
axis=-1)
near_surface_samples = near_surface_samples.reshape([-1, 4])
surface_samples = np.concatenate([surface_point_samples,
surface_point_normals], axis=-1)
return surface_samples, near_surface_samples
def get_in_out_from_ray(points_from_ray, sample_factor=10, std=0.01):
"""Get sample points from points from ray.
Args:
points_from_ray: [npts, 6], where first 3 dims are xyz, last 3 are ray dir.
sample_factor: int, number of samples to pick per surface point.
std: float, std of samples to generate.
Returns:
near_surface_samples: [npts*sample_factor, 4], where last dimension is
distance to surface point.
"""
surface_point_samples = points_from_ray[:, :3]
surface_point_normals = points_from_ray[:, 3:]
# make sure normals are normalized to unit length
n = surface_point_normals
surface_point_normals = n / (np.linalg.norm(n, axis=1, keepdims=True)+1e-8)
npoints = points_from_ray.shape[0]
offsets = np.random.randn(npoints, sample_factor, 1) * std
near_surface_samples = (surface_point_samples[:, np.newaxis, :] +
surface_point_normals[:, np.newaxis, :] * offsets)
near_surface_samples = np.concatenate([near_surface_samples, offsets],
axis=-1)
near_surface_samples = near_surface_samples.reshape([-1, 4])
return near_surface_samples
def intrinsics_from_matrix(int_mat):
return (int_mat[0, 0], int_mat[1, 1], int_mat[0, 2], int_mat[1, 2])
def encode_decoder_one_scene(near_surface_samples, ckpt_dir, part_size,
overlap, indep_pt_loss,
xmin=np.zeros(3),
xmax=np.ones(3),
res_per_part=16, npts=4096, init_std=1e-4,
learning_rate=1e-3, steps=10000, nows=False,
verbose=False):
"""Wrapper function for encoding and decoding one scene.
Args:
near_surface_samples: [npts*sample_factor, 4], where last dimension is
distance to surface point.
ckpt_dir: str, path to checkpoint directory to use.
part_size: float, size of each part to use when autodecoding.
overlap: bool, whether to use overlapping encoding.
indep_pt_loss: bool, whether to use independent point loss in optimization.
xmin: np.array of len 3, lower coordinates of the domain bounds.
xmax: np.array of len 3, upper coordinates of the domain bounds.
res_per_part: int, resolution of output evaluation per part.
npts: int, number of points to use per step when doing gradient descent.
init_std: float, std to use when initializing seed.
learning_rate: float, learning rate for doing gradient descent.
steps: int, number of optimization steps to take.
nows: bool, no warmstarting from checkpoint. use random codebook.
verbose: bool, verbose mode.
Returns:
v: float32 np.array, vertices of reconstructed mesh.
f: int32 np.array, faces of reconstructed mesh.
feat_grid: float32 np.array, feature grid.
mask: bool np.array, mask of occupied cells.
"""
ckpt = tf.train.latest_checkpoint(ckpt_dir)
np.random.shuffle(near_surface_samples)
param_file = os.path.join(ckpt_dir, 'params.txt')
params = evaluator.parse_param_file(param_file)
_, occ_idx, grid_shape = pt.np_get_occupied_idx(
near_surface_samples[:100000, :3],
xmin=xmin-0.5*part_size, xmax=xmax+0.5*part_size, crop_size=part_size,
ntarget=1, overlap=overlap, normalize_crops=False, return_shape=True)
npts = min(npts, near_surface_samples.shape[0])
if verbose: print('LIG shape: {}'.format(grid_shape))
if verbose: print('Optimizing latent codes in LIG...')
goptim = LIGOptimizer(
ckpt, origin=xmin, grid_shape=grid_shape, part_size=part_size,
occ_idx=occ_idx, indep_pt_loss=indep_pt_loss, overlap=overlap,
alpha_lat=params['alpha_lat'], npts=npts, init_std=init_std,
learning_rate=learning_rate, var_prefix='', nows=nows)
goptim.optimize_feat_grid(near_surface_samples[:, :3],
near_surface_samples[:, 3:], steps=steps)
mask = occupancy_sparse_to_dense(occ_idx, grid_shape)
# evaluate mesh for the current crop
if verbose: print('Extracting mesh from LIG...')
svg = evaluator.SparseLIGEvaluator(
ckpt, num_filters=params['refiner_nf'],
codelen=params['codelen'], origin=xmin,
grid_shape=grid_shape, part_size=part_size,
overlap=overlap, scope='')
feat_grid = goptim.feature_grid[0]
out_grid = svg.evaluate_feature_grid(feat_grid,
mask=mask,
res_per_part=res_per_part)
v, f, _, _ = measure.marching_cubes_lewiner(out_grid, 0)
v *= (part_size / float(res_per_part) *
float(out_grid.shape[0]) / (float(out_grid.shape[0])-1))
v += xmin
return v, f, feat_grid, mask
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Utility modules for reconstructing scenes.
"""
import os
import numpy as np
from skimage import measure
import tensorflow.compat.v1 as tf
from tensorflow_graphics.projects.local_implicit_grid.core import evaluator
from tensorflow_graphics.projects.local_implicit_grid.core import local_implicit_grid_layer as lig
from tensorflow_graphics.projects.local_implicit_grid.core import point_utils as pt
class LIGOptimizer(object):
"""Class for using optimization to acquire feature grid."""
def __init__(self, ckpt, origin, grid_shape, part_size, occ_idx,
indep_pt_loss=True, overlap=True, alpha_lat=1e-2, npts=2048,
init_std=1e-2, learning_rate=1e-3, var_prefix='', nows=False):
self.ckpt = ckpt
self.ckpt_dir = os.path.dirname(ckpt)
self.params = self._load_params(self.ckpt_dir)
self.origin = origin
self.grid_shape = grid_shape
self.part_size = part_size
self.occ_idx = occ_idx
self.init_std = init_std
self.learning_rate = learning_rate
self.var_prefix = var_prefix
self.nows = nows
self.xmin = self.origin
if overlap:
true_shape = (np.array(grid_shape) - 1) / 2.0
self.xmax = self.origin + true_shape * part_size
else:
self.xmax = self.origin + (np.array(grid_shape) - 1) * part_size
_, sj, sk = self.grid_shape
self.occ_idx_flat = (self.occ_idx[:, 0]*(sj*sk)+
self.occ_idx[:, 1]*sk+self.occ_idx[:, 2])
self.indep_pt_loss = indep_pt_loss
self.overlap = overlap
self.alpha_lat = alpha_lat
self.npts = int(npts)
self._init_graph()
def _load_params(self, ckpt_dir):
param_file = os.path.join(ckpt_dir, 'params.txt')
params = evaluator.parse_param_file(param_file)
return params
def _init_graph(self):
"""Initialize computation graph for tensorflow.
"""
self.graph = tf.Graph()
with self.graph.as_default():
self.point_coords_ph = tf.placeholder(
tf.float32,
shape=[1, self.npts, 3]) # placeholder
self.point_values_ph = tf.placeholder(
tf.float32,
shape=[1, self.npts, 1]) # placeholder
self.point_coords = self.point_coords_ph
self.point_values = self.point_values_ph
self.liggrid = lig.LocalImplicitGrid(
size=self.grid_shape,
in_features=self.params['codelen'],
out_features=1,
num_filters=self.params['refiner_nf'],
net_type='imnet',
method='linear' if self.overlap else 'nn',
x_location_max=(1.0 if self.overlap else 2.0),
name='lig',
interp=(not self.indep_pt_loss),
min_grid_value=self.xmin,
max_grid_value=self.xmax)
si, sj, sk = self.grid_shape
self.occ_idx_flat_ = tf.convert_to_tensor(
self.occ_idx_flat[:, np.newaxis])
self.shape_ = tf.constant([si*sj*sk, self.params['codelen']],
dtype=tf.int64)
self.feat_sparse_ = tf.Variable(
(tf.random.normal(shape=[self.occ_idx.shape[0],
self.params['codelen']]) *
self.init_std),
trainable=True,
name='feat_sparse')
self.feat_grid = tf.scatter_nd(self.occ_idx_flat_,
self.feat_sparse_,
self.shape_)
self.feat_grid = tf.reshape(self.feat_grid,
[1, si, sj, sk, self.params['codelen']])
self.feat_norm = tf.norm(self.feat_sparse_, axis=-1)
if self.indep_pt_loss:
self.preds, self.weights = self.liggrid(self.feat_grid,
self.point_coords,
training=True)
# preds: [b, n, 8, 1], weights: [b, n, 8]
self.preds_interp = tf.reduce_sum(
tf.expand_dims(self.weights, axis=-1)*self.preds,
axis=2) # [b, n, 1]
self.preds = tf.concat([self.preds,
self.preds_interp[:, :, tf.newaxis, :]],
axis=2) # preds: [b, n, 9, 1]
self.point_values = tf.broadcast_to(
self.point_values[:, :, tf.newaxis, :],
shape=self.preds.shape) # [b, n, 9, 1]
else:
self.preds = self.liggrid(self.feat_grid,
self.point_coords,
training=True) # [b, n, 1]
self.labels_01 = (self.point_values+1) / 2 # turn labels to 0, 1 labels
self.loss_pt = tf.losses.sigmoid_cross_entropy(
self.labels_01,
logits=self.preds,
reduction=tf.losses.Reduction.NONE)
self.loss_lat = tf.reduce_mean(self.feat_norm) * self.alpha_lat
self.loss = tf.reduce_mean(self.loss_pt) + self.loss_lat
# compute accuracy metric
if self.indep_pt_loss:
self.pvalue = tf.sign(self.point_values[:, :, -1, 0])
self.ppred = tf.sign(self.preds[:, :, -1, 0])
else:
self.pvalue = tf.sign(self.point_values[..., 0])
self.ppred = tf.sign(self.preds[:, :, 0])
self.accu = tf.reduce_sum(tf.cast(
tf.logical_or(tf.logical_and(self.pvalue > 0, self.ppred > 0),
tf.logical_and(self.pvalue < 0, self.ppred < 0)),
tf.float32)) / float(self.npts)
# get optimizer
self.optimizer = tf.train.AdamOptimizer(learning_rate=self.learning_rate)
self.fgrid_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES,
scope='feat_sparse')
self.train_op = self.optimizer.minimize(
self.loss,
global_step=tf.train.get_or_create_global_step(),
var_list=[self.fgrid_vars])
self.map_dict = self._get_var_mapping(model=self.liggrid,
scope=self.var_prefix)
self.sess = tf.Session()
if not self.nows:
self.saver = tf.train.Saver(self.map_dict)
self.saver.restore(self.sess, self.ckpt)
self._initialize_uninitialized(self.sess)
def _get_var_mapping(self, model, scope=''):
vars_ = model.trainable_variables
varnames = [v.name for v in vars_] # .split(':')[0]
varnames = [scope+v.replace('lig/', '').strip(':0') for v in varnames]
map_dict = dict(zip(varnames, vars_))
return map_dict
def _initialize_uninitialized(self, sess):
global_vars = tf.global_variables()
is_not_initialized = sess.run(
[tf.is_variable_initialized(var) for var in global_vars])
not_initialized_vars = [v for (v, f) in zip(global_vars,
is_not_initialized) if not f]
if not_initialized_vars:
sess.run(tf.variables_initializer(not_initialized_vars))
def optimize_feat_grid(self, point_coords, point_vals, steps=10000,
print_every_n_steps=1000):
"""Optimize feature grid.
Args:
point_coords: [npts, 3] point coordinates.
point_vals: [npts, 1] point values.
steps: int, number of steps for gradient descent.
print_every_n_steps: int, print every n steps.
Returns:
"""
print_every_n_steps = int(print_every_n_steps)
point_coords = point_coords.copy()
point_vals = np.sign(point_vals.copy())
if point_coords.ndim == 3:
point_coords = point_coords[0]
if point_vals.ndim == 3:
point_vals = point_vals[0]
elif point_vals.ndim == 1:
point_vals = point_vals[:, np.newaxis]
# clip
point_coords = np.clip(point_coords, self.xmin, self.xmax)
# shuffle points
seq = np.random.permutation(point_coords.shape[0])
point_coords = point_coords[seq]
point_vals = point_vals[seq]
point_coords = point_coords[np.newaxis]
point_vals = point_vals[np.newaxis]
# random point sampling function
def random_point_sample():
sid = np.random.choice(point_coords.shape[1]-self.npts+1)
eid = sid + self.npts
return point_coords[:, sid:eid], point_vals[:, sid:eid]
with self.graph.as_default():
for i in range(steps):
pc, pv = random_point_sample()
accu_, loss_, _ = self.sess.run([self.accu, self.loss, self.train_op],
feed_dict={
self.point_coords_ph: pc,
self.point_values_ph: pv})
if i % print_every_n_steps == 0:
print('Step [{:6d}] Accu: {:5.4f} Loss: {:5.4f}'.format(i,
accu_, loss_))
@property
def feature_grid(self):
with self.graph.as_default():
return self.sess.run(self.feat_grid)
def occupancy_sparse_to_dense(occ_idx, grid_shape):
dense = np.zeros(grid_shape, dtype=np.bool).ravel()
occ_idx_f = (occ_idx[:, 0] * grid_shape[1] * grid_shape[2] +
occ_idx[:, 1] * grid_shape[2] + occ_idx[:, 2])
dense[occ_idx_f] = True
dense = np.reshape(dense, grid_shape)
return dense
def get_in_out_from_samples(mesh, npoints, sample_factor=10, std=0.01):
"""Get in/out point samples from a given mesh.
Args:
mesh: trimesh mesh. Original mesh to sample points from.
npoints: int, number of points to sample on the mesh surface.
sample_factor: int, number of samples to pick per surface point.
std: float, std of samples to generate.
Returns:
surface_samples: [npoints, 6], where first 3 dims are xyz, last 3 dims are
normals (nx, ny, nz).
"""
surface_point_samples, fid = mesh.sample(int(npoints), return_index=True)
surface_point_normals = mesh.face_normals[fid]
offsets = np.random.randn(int(npoints), sample_factor, 1) * std
near_surface_samples = (surface_point_samples[:, np.newaxis, :] +
surface_point_normals[:, np.newaxis, :] * offsets)
near_surface_samples = np.concatenate([near_surface_samples, offsets],
axis=-1)
near_surface_samples = near_surface_samples.reshape([-1, 4])
surface_samples = np.concatenate([surface_point_samples,
surface_point_normals], axis=-1)
return surface_samples, near_surface_samples
def get_in_out_from_ray(points_from_ray, sample_factor=10, std=0.01):
"""Get sample points from points from ray.
Args:
points_from_ray: [npts, 6], where first 3 dims are xyz, last 3 are ray dir.
sample_factor: int, number of samples to pick per surface point.
std: float, std of samples to generate.
Returns:
near_surface_samples: [npts*sample_factor, 4], where last dimension is
distance to surface point.
"""
surface_point_samples = points_from_ray[:, :3]
surface_point_normals = points_from_ray[:, 3:]
# make sure normals are normalized to unit length
n = surface_point_normals
surface_point_normals = n / (np.linalg.norm(n, axis=1, keepdims=True)+1e-8)
npoints = points_from_ray.shape[0]
offsets = np.random.randn(npoints, sample_factor, 1) * std
near_surface_samples = (surface_point_samples[:, np.newaxis, :] +
surface_point_normals[:, np.newaxis, :] * offsets)
near_surface_samples = np.concatenate([near_surface_samples, offsets],
axis=-1)
near_surface_samples = near_surface_samples.reshape([-1, 4])
return near_surface_samples
def intrinsics_from_matrix(int_mat):
return (int_mat[0, 0], int_mat[1, 1], int_mat[0, 2], int_mat[1, 2])
def encode_decoder_one_scene(near_surface_samples, ckpt_dir, part_size,
overlap, indep_pt_loss,
xmin=np.zeros(3),
xmax=np.ones(3),
res_per_part=16, npts=4096, init_std=1e-4,
learning_rate=1e-3, steps=10000, nows=False,
verbose=False):
"""Wrapper function for encoding and decoding one scene.
Args:
near_surface_samples: [npts*sample_factor, 4], where last dimension is
distance to surface point.
ckpt_dir: str, path to checkpoint directory to use.
part_size: float, size of each part to use when autodecoding.
overlap: bool, whether to use overlapping encoding.
indep_pt_loss: bool, whether to use independent point loss in optimization.
xmin: np.array of len 3, lower coordinates of the domain bounds.
xmax: np.array of len 3, upper coordinates of the domain bounds.
res_per_part: int, resolution of output evaluation per part.
npts: int, number of points to use per step when doing gradient descent.
init_std: float, std to use when initializing seed.
learning_rate: float, learning rate for doing gradient descent.
steps: int, number of optimization steps to take.
nows: bool, no warmstarting from checkpoint. use random codebook.
verbose: bool, verbose mode.
Returns:
v: float32 np.array, vertices of reconstructed mesh.
f: int32 np.array, faces of reconstructed mesh.
feat_grid: float32 np.array, feature grid.
mask: bool np.array, mask of occupied cells.
"""
ckpt = tf.train.latest_checkpoint(ckpt_dir)
np.random.shuffle(near_surface_samples)
param_file = os.path.join(ckpt_dir, 'params.txt')
params = evaluator.parse_param_file(param_file)
_, occ_idx, grid_shape = pt.np_get_occupied_idx(
near_surface_samples[:100000, :3],
xmin=xmin-0.5*part_size, xmax=xmax+0.5*part_size, crop_size=part_size,
ntarget=1, overlap=overlap, normalize_crops=False, return_shape=True)
npts = min(npts, near_surface_samples.shape[0])
if verbose: print('LIG shape: {}'.format(grid_shape))
if verbose: print('Optimizing latent codes in LIG...')
goptim = LIGOptimizer(
ckpt, origin=xmin, grid_shape=grid_shape, part_size=part_size,
occ_idx=occ_idx, indep_pt_loss=indep_pt_loss, overlap=overlap,
alpha_lat=params['alpha_lat'], npts=npts, init_std=init_std,
learning_rate=learning_rate, var_prefix='', nows=nows)
goptim.optimize_feat_grid(near_surface_samples[:, :3],
near_surface_samples[:, 3:], steps=steps)
mask = occupancy_sparse_to_dense(occ_idx, grid_shape)
# evaluate mesh for the current crop
if verbose: print('Extracting mesh from LIG...')
svg = evaluator.SparseLIGEvaluator(
ckpt, num_filters=params['refiner_nf'],
codelen=params['codelen'], origin=xmin,
grid_shape=grid_shape, part_size=part_size,
overlap=overlap, scope='')
feat_grid = goptim.feature_grid[0]
out_grid = svg.evaluate_feature_grid(feat_grid,
mask=mask,
res_per_part=res_per_part)
v, f, _, _ = measure.marching_cubes_lewiner(out_grid, 0)
v *= (part_size / float(res_per_part) *
float(out_grid.shape[0]) / (float(out_grid.shape[0])-1))
v += xmin
return v, f, feat_grid, mask
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/camera/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Camera module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow_graphics.rendering.camera import orthographic
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.rendering.camera import quadratic_radial_distortion
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.rendering.camera.
__all__ = _export_api.get_modules()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Camera module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow_graphics.rendering.camera import orthographic
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.rendering.camera import quadratic_radial_distortion
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.rendering.camera.
__all__ = _export_api.get_modules()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/g3doc/_book.yaml | upper_tabs:
# Tabs left of dropdown menu
- include: /_upper_tabs_left.yaml
- include: /api_docs/_upper_tabs_api.yaml
# Dropdown menu
- name: Resources
path: /resources
is_default: true
menu:
- include: /resources/_menu_toc.yaml
lower_tabs:
# Subsite tabs
other:
- name: Guide & Tutorials
contents:
- title: Overview
path: /graphics/overview
- title: Install
path: /graphics/install
- title: Contributing
path: https://github.com/tensorflow/graphics/blob/master/CONTRIBUTING.md
status: external
- title: Debug
path: /graphics/debug_mode
- title: TensorBoard
path: /graphics/tensorboard
- heading: Tutorials
- title: 6DOF alignment
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/6dof_alignment.ipynb
status: external
- title: Camera intrinsics optimization
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/intrinsics_optimization.ipynb
status: external
- title: Interpolation
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/interpolation.ipynb
status: external
- title: Reflectance
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/reflectance.ipynb
status: external
- title: Non-rigid deformation
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/non_rigid_deformation.ipynb
status: external
- title: Spherical harmonics rendering
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/spherical_harmonics_approximation.ipynb
status: external
- title: Environment map optimization
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/spherical_harmonics_optimization.ipynb
status: external
- title: Semantic mesh segmentation
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/mesh_segmentation_demo.ipynb
status: external
- name: API
skip_translation: true
contents:
- include: /graphics/api_docs/python/tfg/_toc.yaml
- include: /_upper_tabs_right.yaml
| upper_tabs:
# Tabs left of dropdown menu
- include: /_upper_tabs_left.yaml
- include: /api_docs/_upper_tabs_api.yaml
# Dropdown menu
- name: Resources
path: /resources
is_default: true
menu:
- include: /resources/_menu_toc.yaml
lower_tabs:
# Subsite tabs
other:
- name: Guide & Tutorials
contents:
- title: Overview
path: /graphics/overview
- title: Install
path: /graphics/install
- title: Contributing
path: https://github.com/tensorflow/graphics/blob/master/CONTRIBUTING.md
status: external
- title: Debug
path: /graphics/debug_mode
- title: TensorBoard
path: /graphics/tensorboard
- heading: Tutorials
- title: 6DOF alignment
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/6dof_alignment.ipynb
status: external
- title: Camera intrinsics optimization
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/intrinsics_optimization.ipynb
status: external
- title: Interpolation
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/interpolation.ipynb
status: external
- title: Reflectance
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/reflectance.ipynb
status: external
- title: Non-rigid deformation
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/non_rigid_deformation.ipynb
status: external
- title: Spherical harmonics rendering
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/spherical_harmonics_approximation.ipynb
status: external
- title: Environment map optimization
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/spherical_harmonics_optimization.ipynb
status: external
- title: Semantic mesh segmentation
path: https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics/notebooks/mesh_segmentation_demo.ipynb
status: external
- name: API
skip_translation: true
contents:
- include: /graphics/api_docs/python/tfg/_toc.yaml
- include: /_upper_tabs_right.yaml
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/projects/local_implicit_grid/requirements.txt | absl-py>=0.7.1
numpy>=1.16.4
plyfile>=0.7.1
scipy>=1.3.1
scikit-image>==0.15.0
trimesh>=3.2.12
tensorflow>=1.14.0
| absl-py>=0.7.1
numpy>=1.16.4
plyfile>=0.7.1
scipy>=1.3.1
scikit-image>==0.15.0
trimesh>=3.2.12
tensorflow>=1.14.0
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/projects/pointnet/helpers.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A collection of training helper utilities."""
from __future__ import print_function
import argparse
import os
import tempfile
import time
import tensorflow as tf
import termcolor
class ArgumentParser(argparse.ArgumentParser):
"""Argument parser with default flags, and tensorboard helpers."""
def __init__(self, *args, **kwargs):
argparse.ArgumentParser.__init__(self, *args, **kwargs)
# --- Query default logdir
random_logdir = tempfile.mkdtemp(prefix="tensorboard_")
default_logdir = os.environ.get("TENSORBOARD_DEFAULT_LOGDIR", random_logdir)
# --- Add the default options
self.add("--logdir", default_logdir, help="tensorboard dir")
self.add("--tensorboard", True, help="should generate summaries?")
self.add("--assert_gpu", True, help="asserts on missing GPU accelerator")
self.add("--tf_quiet", True, help="no verbose tf startup")
def add(self, name, default, **kwargs):
"""More compact argumentparser 'add' flag method."""
helpstring = kwargs["help"] if "help" in kwargs else ""
metavar = kwargs["metavar"] if "metavar" in kwargs else name
# --- Fixes problems with bool arguments
def str2bool(string):
if isinstance(string, bool):
return str
if string.lower() in ("true", "yes"):
return True
if string.lower() in ("false", "no"):
return False
raise argparse.ArgumentTypeError("Bad value for boolean flag")
mytype = type(default)
if isinstance(default, bool):
mytype = str2bool
self.add_argument(
name, metavar=metavar, default=default, help=helpstring, type=mytype)
def parse_args(self, args=None, namespace=None):
"""WARNING: programmatically changes the logdir flags."""
flags = super(ArgumentParser, self).parse_args(args)
# --- setup automatic logdir (timestamp)
if "timestamp" in flags.logdir:
timestamp = time.strftime("%a%d_%H:%M:%S") # "Tue19_12:02:26"
flags.logdir = flags.logdir.replace("timestamp", timestamp)
if flags.tf_quiet:
set_tensorflow_log_level(3)
if flags.assert_gpu:
assert_gpu_available()
# --- ensure logdir ends in /
if flags.logdir[-1] != "/":
flags.logdir += "/"
return flags
def assert_gpu_available():
"""Verifies a GPU accelerator is available."""
physical_devices = tf.config.list_physical_devices("GPU")
num_gpus = len(physical_devices)
assert num_gpus >= 1, "execution requires one GPU"
def set_tensorflow_log_level(level=3):
"""Sets the log level of TensorFlow."""
os.environ["TF_CPP_MIN_LOG_LEVEL"] = str(level)
def summary_command(parser, flags, log_to_file=True, log_to_summary=True):
"""Cache the command used to reproduce experiment in summary folder."""
if not flags.tensorboard:
return
exec_string = "python " + parser.prog + " \\\n"
nflags = len(vars(flags))
for i, arg in enumerate(vars(flags)):
exec_string += " --{} ".format(arg)
exec_string += "{}".format(getattr(flags, arg))
if i + 1 < nflags:
exec_string += " \\\n"
exec_string += "\n"
if log_to_file:
with tf.io.gfile.GFile(
os.path.join(flags.logdir, "command.txt"), mode="w") as fid:
fid.write(exec_string)
if log_to_summary and flags.tensorboard:
tf.summary.text("command", exec_string, step=0)
def setup_tensorboard(flags):
"""Creates summary writers, and setups default tensorboard paths."""
if not flags.tensorboard:
return
# --- Do not allow experiment with same name
assert (not tf.io.gfile.exists(flags.logdir) or
not tf.io.gfile.listdir(flags.logdir)), \
"CRITICAL: folder {} already exists".format(flags.logdir)
# --- Log where summary can be found
print("View results with: ")
termcolor.cprint(" tensorboard --logdir {}".format(flags.logdir), "red")
writer = tf.summary.create_file_writer(flags.logdir, flush_millis=10000)
writer.set_as_default()
# --- Log dir name tweak for "hypertune"
log_dir = ""
trial_id = int(os.environ.get("CLOUD_ML_TRIAL_ID", 0))
if trial_id != 0:
if log_dir.endswith(os.sep):
log_dir = log_dir[:-1] # removes trailing "/"
log_dir += "_trial{0:03d}/".format(trial_id)
def handle_keyboard_interrupt(flags):
"""Informs user how to delete stale summaries."""
print("Keyboard interrupt by user")
if flags.logdir.startswith("gs://"):
bucketpath = flags.logdir[5:]
print("Delete these summaries with: ")
termcolor.cprint(" gsutil rm -rf {}".format(flags.logdir), "red")
baseurl = " https://pantheon.google.com/storage/browser/{}"
print("Or by visiting: ")
termcolor.cprint(baseurl.format(bucketpath), "red")
else:
print("Delete these summaries with: ")
termcolor.cprint(" rm -rf {}".format(flags.logdir), "red")
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""A collection of training helper utilities."""
from __future__ import print_function
import argparse
import os
import tempfile
import time
import tensorflow as tf
import termcolor
class ArgumentParser(argparse.ArgumentParser):
"""Argument parser with default flags, and tensorboard helpers."""
def __init__(self, *args, **kwargs):
argparse.ArgumentParser.__init__(self, *args, **kwargs)
# --- Query default logdir
random_logdir = tempfile.mkdtemp(prefix="tensorboard_")
default_logdir = os.environ.get("TENSORBOARD_DEFAULT_LOGDIR", random_logdir)
# --- Add the default options
self.add("--logdir", default_logdir, help="tensorboard dir")
self.add("--tensorboard", True, help="should generate summaries?")
self.add("--assert_gpu", True, help="asserts on missing GPU accelerator")
self.add("--tf_quiet", True, help="no verbose tf startup")
def add(self, name, default, **kwargs):
"""More compact argumentparser 'add' flag method."""
helpstring = kwargs["help"] if "help" in kwargs else ""
metavar = kwargs["metavar"] if "metavar" in kwargs else name
# --- Fixes problems with bool arguments
def str2bool(string):
if isinstance(string, bool):
return str
if string.lower() in ("true", "yes"):
return True
if string.lower() in ("false", "no"):
return False
raise argparse.ArgumentTypeError("Bad value for boolean flag")
mytype = type(default)
if isinstance(default, bool):
mytype = str2bool
self.add_argument(
name, metavar=metavar, default=default, help=helpstring, type=mytype)
def parse_args(self, args=None, namespace=None):
"""WARNING: programmatically changes the logdir flags."""
flags = super(ArgumentParser, self).parse_args(args)
# --- setup automatic logdir (timestamp)
if "timestamp" in flags.logdir:
timestamp = time.strftime("%a%d_%H:%M:%S") # "Tue19_12:02:26"
flags.logdir = flags.logdir.replace("timestamp", timestamp)
if flags.tf_quiet:
set_tensorflow_log_level(3)
if flags.assert_gpu:
assert_gpu_available()
# --- ensure logdir ends in /
if flags.logdir[-1] != "/":
flags.logdir += "/"
return flags
def assert_gpu_available():
"""Verifies a GPU accelerator is available."""
physical_devices = tf.config.list_physical_devices("GPU")
num_gpus = len(physical_devices)
assert num_gpus >= 1, "execution requires one GPU"
def set_tensorflow_log_level(level=3):
"""Sets the log level of TensorFlow."""
os.environ["TF_CPP_MIN_LOG_LEVEL"] = str(level)
def summary_command(parser, flags, log_to_file=True, log_to_summary=True):
"""Cache the command used to reproduce experiment in summary folder."""
if not flags.tensorboard:
return
exec_string = "python " + parser.prog + " \\\n"
nflags = len(vars(flags))
for i, arg in enumerate(vars(flags)):
exec_string += " --{} ".format(arg)
exec_string += "{}".format(getattr(flags, arg))
if i + 1 < nflags:
exec_string += " \\\n"
exec_string += "\n"
if log_to_file:
with tf.io.gfile.GFile(
os.path.join(flags.logdir, "command.txt"), mode="w") as fid:
fid.write(exec_string)
if log_to_summary and flags.tensorboard:
tf.summary.text("command", exec_string, step=0)
def setup_tensorboard(flags):
"""Creates summary writers, and setups default tensorboard paths."""
if not flags.tensorboard:
return
# --- Do not allow experiment with same name
assert (not tf.io.gfile.exists(flags.logdir) or
not tf.io.gfile.listdir(flags.logdir)), \
"CRITICAL: folder {} already exists".format(flags.logdir)
# --- Log where summary can be found
print("View results with: ")
termcolor.cprint(" tensorboard --logdir {}".format(flags.logdir), "red")
writer = tf.summary.create_file_writer(flags.logdir, flush_millis=10000)
writer.set_as_default()
# --- Log dir name tweak for "hypertune"
log_dir = ""
trial_id = int(os.environ.get("CLOUD_ML_TRIAL_ID", 0))
if trial_id != 0:
if log_dir.endswith(os.sep):
log_dir = log_dir[:-1] # removes trailing "/"
log_dir += "_trial{0:03d}/".format(trial_id)
def handle_keyboard_interrupt(flags):
"""Informs user how to delete stale summaries."""
print("Keyboard interrupt by user")
if flags.logdir.startswith("gs://"):
bucketpath = flags.logdir[5:]
print("Delete these summaries with: ")
termcolor.cprint(" gsutil rm -rf {}".format(flags.logdir), "red")
baseurl = " https://pantheon.google.com/storage/browser/{}"
print("Or by visiting: ")
termcolor.cprint(baseurl.format(bucketpath), "red")
else:
print("Delete these summaries with: ")
termcolor.cprint(" rm -rf {}".format(flags.logdir), "red")
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./.git/index | DIRC hewh[ewh[ p l.m'WB- .bazelrc ewh[ewh[ p E+P)1|kȵ .flake8 ewh[ewh[ p YW;8l^*Ǻw\* .github/workflows/build.yml e=e= p }@H1Hj97r? &.github/workflows/tfg-nigthly-pypi.yml ewh[ewh[ p% AN1Q;Tqt#1
.gitignore ewh[ewh[ p* :)- |CJ^^ƳE .gitmodules ewh[ewh[ p, -YczAOi .pylintrc ewh[ewh[ p5 'jdžv;u0}` CONTRIBUTING.md ewh[ewh[ p8 ,^EiVs49GB3-
L LICENSE ewh[ewh[ p9 *ikSKvZ$tR MANIFEST.in ewh[ewh[ p: 2,ꐶ"]lL(nКK7 README.md ewh[ewh[ p; (^ ki[d WORKSPACE ewh[ewh[ p< |