repo_name (stringclasses, 6 values) | pr_number (int64, 99 to 20.3k) | pr_title (stringlengths, 8 to 158) | pr_description (stringlengths, 0 to 6.54k) | author (stringlengths, 4 to 18) | date_created (unknown) | date_merged (unknown) | previous_commit (stringlengths, 40 to 40) | pr_commit (stringlengths, 40 to 40) | query (stringlengths, 37 to 6.57k) | filepath (stringlengths, 8 to 153) | before_content (stringlengths, 0 to 876M) | after_content (stringlengths, 0 to 876M) | label (int64, -1 to 1) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/geometry/representation/mesh/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mesh module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow_graphics.geometry.representation.mesh import normals
from tensorflow_graphics.geometry.representation.mesh import sampler
from tensorflow_graphics.geometry.representation.mesh import utils
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.geometry.
__all__ = _export_api.get_modules()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mesh module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow_graphics.geometry.representation.mesh import normals
from tensorflow_graphics.geometry.representation.mesh import sampler
from tensorflow_graphics.geometry.representation.mesh import utils
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.geometry.
__all__ = _export_api.get_modules()
| -1 |
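The PR description quoted in the rows above reduces to a small TF1-to-TF2 API mapping. Below is a minimal, illustrative sketch of that migration, assuming TensorFlow 2.x; the function and variable names are made up for demonstration and do not come from the diff itself.

```python
import tensorflow as tf

# Before the migration the library code used the TF1 compatibility endpoints:
#   tf.compat.v1.name_scope, tf.compat.v1.where,
#   tf.compat.v1.assert_equal, tf.compat.v1.dimension_value.
# After the migration it uses their TF2 counterparts, roughly like this:

def scaled_difference(a, b, name="scaled_difference"):
  # tf.compat.v1.name_scope(name, default_name, values) -> tf.name_scope(name)
  with tf.name_scope(name):
    # tf.compat.v1.assert_equal -> tf.debugging.assert_equal
    tf.debugging.assert_equal(tf.shape(a), tf.shape(b))
    # tf.compat.v1.dimension_value -> tf.compat.dimension_value
    last_dim = tf.compat.dimension_value(a.shape[-1])
    # tf.compat.v1.where -> tf.where
    return tf.where(a > b, a - b, b - a) / last_dim
```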
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/projects/neural_voxel_renderer/layers.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utility functions for keras layers."""
import tensorflow as tf
layers = tf.keras.layers
initializer = tf.keras.initializers.glorot_normal()
rate = 0.7
def norm_layer(tensor, normalization):
if normalization.lower() == 'batchnorm':
tensor = layers.BatchNormalization()(tensor)
return tensor
def upconv(tensor, nfilters, size, strides,
alpha_lrelu=0.2, normalization='None'):
"""Upconvolution as upsampling and convolution."""
tensor = layers.UpSampling2D()(tensor)
tensor = layers.Conv2D(nfilters, size,
strides=strides,
padding='same',
kernel_initializer=initializer,
use_bias=False)(tensor)
tensor = norm_layer(tensor, normalization)
tensor = layers.LeakyReLU(alpha=alpha_lrelu)(tensor)
if normalization.lower() == 'dropout':
tensor = layers.Dropout(rate)(tensor)
return tensor
def conv_block_3d(tensor, nfilters, size, strides,
alpha_lrelu=0.2, normalization='None', relu=True):
"""3D convolution block with normalization and leaky relu."""
tensor = layers.Conv3D(nfilters, size,
strides=strides,
padding='same',
kernel_initializer=initializer,
use_bias=False)(tensor)
tensor = norm_layer(tensor, normalization)
if relu:
tensor = layers.LeakyReLU(alpha=alpha_lrelu)(tensor)
if normalization.lower() == 'dropout':
tensor = layers.Dropout(rate)(tensor)
return tensor
def conv_t_block_3d(tensor, nfilters, size, strides,
alpha_lrelu=0.2, normalization='None', relu=True):
"""2D transpose convolution block with normalization and leaky relu."""
tensor = layers.Conv3DTranspose(nfilters, size,
strides=strides,
padding='same',
kernel_initializer=initializer,
use_bias=False)(tensor)
tensor = norm_layer(tensor, normalization)
if relu:
tensor = layers.LeakyReLU(alpha=alpha_lrelu)(tensor)
if normalization.lower() == 'dropout':
tensor = layers.Dropout(rate)(tensor)
return tensor
def conv_block_2d(tensor, nfilters, size, strides,
alpha_lrelu=0.2, normalization='None'):
"""2D convolution block with normalization and leaky relu."""
tensor = layers.Conv2D(nfilters, size,
strides=strides,
padding='same',
kernel_initializer=initializer,
use_bias=False)(tensor)
tensor = norm_layer(tensor, normalization)
tensor = layers.LeakyReLU(alpha=alpha_lrelu)(tensor)
if normalization.lower() == 'dropout':
tensor = layers.Dropout(rate)(tensor)
return tensor
def conv_t_block_2d(tensor, nfilters, size, strides,
alpha_lrelu=0.2, normalization='None'):
"""2D transpose convolution block with normalization and leaky relu."""
tensor = layers.Conv2DTranspose(nfilters, size,
strides=strides,
padding='same',
kernel_initializer=initializer,
use_bias=False)(tensor)
tensor = norm_layer(tensor, normalization)
tensor = layers.LeakyReLU(alpha=alpha_lrelu)(tensor)
if normalization.lower() == 'dropout':
tensor = layers.Dropout(rate)(tensor)
return tensor
def residual_block_2d(x, nfilters, strides=(1, 1), normalization='None'):
"""2D residual block."""
shortcut = x
x = layers.Conv2D(nfilters,
kernel_size=(3, 3),
strides=strides,
padding='same',
kernel_initializer=initializer)(x)
x = norm_layer(x, normalization)
x = layers.LeakyReLU()(x)
x = layers.Conv2D(nfilters,
kernel_size=(3, 3),
strides=(1, 1),
padding='same',
kernel_initializer=initializer)(x)
x = norm_layer(x, normalization)
if strides != (1, 1):
shortcut = layers.Conv2D(nfilters,
kernel_size=(1, 1),
strides=strides,
padding='same')(shortcut)
x = norm_layer(x, normalization)
x = layers.add([shortcut, x])
x = layers.LeakyReLU()(x)
return x
def residual_block_3d(x, nfilters, strides=(1, 1, 1), normalization='None'):
"""3D residual block."""
shortcut = x
x = layers.Conv3D(nfilters,
kernel_size=(3, 3, 3),
strides=strides,
padding='same',
kernel_initializer=initializer)(x)
x = norm_layer(x, normalization)
x = layers.LeakyReLU()(x)
x = layers.Conv3D(nfilters,
kernel_size=(3, 3, 3),
strides=(1, 1, 1),
padding='same',
kernel_initializer=initializer)(x)
x = norm_layer(x, normalization)
if strides != (1, 1, 1):
shortcut = layers.Conv3D(nfilters,
kernel_size=(1, 1, 1),
strides=strides,
padding='same')(shortcut)
x = norm_layer(x, normalization)
x = layers.add([shortcut, x])
x = layers.LeakyReLU()(x)
return x
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utility functions for keras layers."""
import tensorflow as tf
layers = tf.keras.layers
initializer = tf.keras.initializers.glorot_normal()
rate = 0.7
def norm_layer(tensor, normalization):
if normalization.lower() == 'batchnorm':
tensor = layers.BatchNormalization()(tensor)
return tensor
def upconv(tensor, nfilters, size, strides,
alpha_lrelu=0.2, normalization='None'):
"""Upconvolution as upsampling and convolution."""
tensor = layers.UpSampling2D()(tensor)
tensor = layers.Conv2D(nfilters, size,
strides=strides,
padding='same',
kernel_initializer=initializer,
use_bias=False)(tensor)
tensor = norm_layer(tensor, normalization)
tensor = layers.LeakyReLU(alpha=alpha_lrelu)(tensor)
if normalization.lower() == 'dropout':
tensor = layers.Dropout(rate)(tensor)
return tensor
def conv_block_3d(tensor, nfilters, size, strides,
alpha_lrelu=0.2, normalization='None', relu=True):
"""3D convolution block with normalization and leaky relu."""
tensor = layers.Conv3D(nfilters, size,
strides=strides,
padding='same',
kernel_initializer=initializer,
use_bias=False)(tensor)
tensor = norm_layer(tensor, normalization)
if relu:
tensor = layers.LeakyReLU(alpha=alpha_lrelu)(tensor)
if normalization.lower() == 'dropout':
tensor = layers.Dropout(rate)(tensor)
return tensor
def conv_t_block_3d(tensor, nfilters, size, strides,
alpha_lrelu=0.2, normalization='None', relu=True):
"""2D transpose convolution block with normalization and leaky relu."""
tensor = layers.Conv3DTranspose(nfilters, size,
strides=strides,
padding='same',
kernel_initializer=initializer,
use_bias=False)(tensor)
tensor = norm_layer(tensor, normalization)
if relu:
tensor = layers.LeakyReLU(alpha=alpha_lrelu)(tensor)
if normalization.lower() == 'dropout':
tensor = layers.Dropout(rate)(tensor)
return tensor
def conv_block_2d(tensor, nfilters, size, strides,
alpha_lrelu=0.2, normalization='None'):
"""2D convolution block with normalization and leaky relu."""
tensor = layers.Conv2D(nfilters, size,
strides=strides,
padding='same',
kernel_initializer=initializer,
use_bias=False)(tensor)
tensor = norm_layer(tensor, normalization)
tensor = layers.LeakyReLU(alpha=alpha_lrelu)(tensor)
if normalization.lower() == 'dropout':
tensor = layers.Dropout(rate)(tensor)
return tensor
def conv_t_block_2d(tensor, nfilters, size, strides,
alpha_lrelu=0.2, normalization='None'):
"""2D transpose convolution block with normalization and leaky relu."""
tensor = layers.Conv2DTranspose(nfilters, size,
strides=strides,
padding='same',
kernel_initializer=initializer,
use_bias=False)(tensor)
tensor = norm_layer(tensor, normalization)
tensor = layers.LeakyReLU(alpha=alpha_lrelu)(tensor)
if normalization.lower() == 'dropout':
tensor = layers.Dropout(rate)(tensor)
return tensor
def residual_block_2d(x, nfilters, strides=(1, 1), normalization='None'):
"""2D residual block."""
shortcut = x
x = layers.Conv2D(nfilters,
kernel_size=(3, 3),
strides=strides,
padding='same',
kernel_initializer=initializer)(x)
x = norm_layer(x, normalization)
x = layers.LeakyReLU()(x)
x = layers.Conv2D(nfilters,
kernel_size=(3, 3),
strides=(1, 1),
padding='same',
kernel_initializer=initializer)(x)
x = norm_layer(x, normalization)
if strides != (1, 1):
shortcut = layers.Conv2D(nfilters,
kernel_size=(1, 1),
strides=strides,
padding='same')(shortcut)
x = norm_layer(x, normalization)
x = layers.add([shortcut, x])
x = layers.LeakyReLU()(x)
return x
def residual_block_3d(x, nfilters, strides=(1, 1, 1), normalization='None'):
"""3D residual block."""
shortcut = x
x = layers.Conv3D(nfilters,
kernel_size=(3, 3, 3),
strides=strides,
padding='same',
kernel_initializer=initializer)(x)
x = norm_layer(x, normalization)
x = layers.LeakyReLU()(x)
x = layers.Conv3D(nfilters,
kernel_size=(3, 3, 3),
strides=(1, 1, 1),
padding='same',
kernel_initializer=initializer)(x)
x = norm_layer(x, normalization)
if strides != (1, 1, 1):
shortcut = layers.Conv3D(nfilters,
kernel_size=(1, 1, 1),
strides=strides,
padding='same')(shortcut)
x = norm_layer(x, normalization)
x = layers.add([shortcut, x])
x = layers.LeakyReLU()(x)
return x
| -1 |
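The layers.py file in the row above defines functional-style Keras building blocks (convolution, transposed-convolution, and residual blocks with optional batch norm or dropout). The sketch below shows one way they could be composed; the import path is assumed from the filepath column and the input shape is arbitrary.

```python
import tensorflow as tf
# Import path assumed from the filepath column above.
from tensorflow_graphics.projects.neural_voxel_renderer import layers as nvr_layers

# Build a tiny 2D encoder out of the blocks defined in layers.py.
inputs = tf.keras.Input(shape=(64, 64, 3))
x = nvr_layers.conv_block_2d(inputs, nfilters=32, size=3, strides=1,
                             normalization='batchnorm')
x = nvr_layers.residual_block_2d(x, nfilters=64, strides=(2, 2),
                                 normalization='batchnorm')
model = tf.keras.Model(inputs, x)
model.summary()
```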
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/reflectance/lambertian.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the Lambertian reflectance."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import tensorflow as tf
from tensorflow_graphics.math import vector
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def brdf(direction_incoming_light,
direction_outgoing_light,
surface_normal,
albedo,
name=None):
"""Evaluates the brdf of a Lambertian surface.
Note:
In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
Note:
The gradient of this function is not smooth when the dot product of the
normal with any light is 0.0.
Args:
direction_incoming_light: A tensor of shape `[A1, ..., An, 3]`, where the
last dimension represents a normalized incoming light vector.
direction_outgoing_light: A tensor of shape `[A1, ..., An, 3]`, where the
last dimension represents a normalized outgoing light vector.
surface_normal: A tensor of shape `[A1, ..., An, 3]`, where the last
dimension represents a normalized surface normal.
albedo: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
represents albedo with values in [0,1].
name: A name for this op. Defaults to "lambertian_brdf".
Returns:
A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents
the amount of reflected light in any outgoing direction.
Raises:
ValueError: if the shape of `direction_incoming_light`,
`direction_outgoing_light`, `surface_normal`, `shininess` or `albedo` is not
supported.
InvalidArgumentError: if at least one element of `albedo` is outside of
[0,1].
"""
with tf.compat.v1.name_scope(name, "lambertian_brdf", [
direction_incoming_light, direction_outgoing_light, surface_normal, albedo
]):
direction_incoming_light = tf.convert_to_tensor(
value=direction_incoming_light)
direction_outgoing_light = tf.convert_to_tensor(
value=direction_outgoing_light)
surface_normal = tf.convert_to_tensor(value=surface_normal)
albedo = tf.convert_to_tensor(value=albedo)
shape.check_static(
tensor=direction_incoming_light,
tensor_name="direction_incoming_light",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=direction_outgoing_light,
tensor_name="direction_outgoing_light",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=surface_normal,
tensor_name="surface_normal",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=albedo, tensor_name="albedo", has_dim_equals=(-1, 3))
shape.compare_batch_dimensions(
tensors=(direction_incoming_light, direction_outgoing_light,
surface_normal, albedo),
tensor_names=("direction_incoming_light", "direction_outgoing_light",
"surface_normal", "albedo"),
last_axes=-2,
broadcast_compatible=True)
direction_incoming_light = asserts.assert_normalized(
direction_incoming_light)
direction_outgoing_light = asserts.assert_normalized(
direction_outgoing_light)
surface_normal = asserts.assert_normalized(surface_normal)
albedo = asserts.assert_all_in_range(albedo, 0.0, 1.0, open_bounds=False)
# Checks whether the incoming or outgoing light point behind the surface.
dot_incoming_light_surface_normal = vector.dot(-direction_incoming_light,
surface_normal)
dot_outgoing_light_surface_normal = vector.dot(direction_outgoing_light,
surface_normal)
min_dot = tf.minimum(dot_incoming_light_surface_normal,
dot_outgoing_light_surface_normal)
common_shape = shape.get_broadcasted_shape(min_dot.shape, albedo.shape)
d_val = lambda dim: 1 if dim is None else tf.compat.v1.dimension_value(dim)
common_shape = [d_val(dim) for dim in common_shape]
condition = tf.broadcast_to(tf.greater_equal(min_dot, 0.0), common_shape)
albedo = tf.broadcast_to(albedo, common_shape)
return tf.compat.v1.where(condition, albedo / math.pi,
tf.zeros_like(albedo))
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the Lambertian reflectance."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import tensorflow as tf
from tensorflow_graphics.math import vector
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def brdf(direction_incoming_light,
direction_outgoing_light,
surface_normal,
albedo,
name=None):
"""Evaluates the brdf of a Lambertian surface.
Note:
In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
Note:
The gradient of this function is not smooth when the dot product of the
normal with any light is 0.0.
Args:
direction_incoming_light: A tensor of shape `[A1, ..., An, 3]`, where the
last dimension represents a normalized incoming light vector.
direction_outgoing_light: A tensor of shape `[A1, ..., An, 3]`, where the
last dimension represents a normalized outgoing light vector.
surface_normal: A tensor of shape `[A1, ..., An, 3]`, where the last
dimension represents a normalized surface normal.
albedo: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
represents albedo with values in [0,1].
name: A name for this op. Defaults to "lambertian_brdf".
Returns:
A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents
the amount of reflected light in any outgoing direction.
Raises:
ValueError: if the shape of `direction_incoming_light`,
`direction_outgoing_light`, `surface_normal`, `shininess` or `albedo` is not
supported.
InvalidArgumentError: if at least one element of `albedo` is outside of
[0,1].
"""
with tf.compat.v1.name_scope(name, "lambertian_brdf", [
direction_incoming_light, direction_outgoing_light, surface_normal, albedo
]):
direction_incoming_light = tf.convert_to_tensor(
value=direction_incoming_light)
direction_outgoing_light = tf.convert_to_tensor(
value=direction_outgoing_light)
surface_normal = tf.convert_to_tensor(value=surface_normal)
albedo = tf.convert_to_tensor(value=albedo)
shape.check_static(
tensor=direction_incoming_light,
tensor_name="direction_incoming_light",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=direction_outgoing_light,
tensor_name="direction_outgoing_light",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=surface_normal,
tensor_name="surface_normal",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=albedo, tensor_name="albedo", has_dim_equals=(-1, 3))
shape.compare_batch_dimensions(
tensors=(direction_incoming_light, direction_outgoing_light,
surface_normal, albedo),
tensor_names=("direction_incoming_light", "direction_outgoing_light",
"surface_normal", "albedo"),
last_axes=-2,
broadcast_compatible=True)
direction_incoming_light = asserts.assert_normalized(
direction_incoming_light)
direction_outgoing_light = asserts.assert_normalized(
direction_outgoing_light)
surface_normal = asserts.assert_normalized(surface_normal)
albedo = asserts.assert_all_in_range(albedo, 0.0, 1.0, open_bounds=False)
# Checks whether the incoming or outgoing light point behind the surface.
dot_incoming_light_surface_normal = vector.dot(-direction_incoming_light,
surface_normal)
dot_outgoing_light_surface_normal = vector.dot(direction_outgoing_light,
surface_normal)
min_dot = tf.minimum(dot_incoming_light_surface_normal,
dot_outgoing_light_surface_normal)
common_shape = shape.get_broadcasted_shape(min_dot.shape, albedo.shape)
d_val = lambda dim: 1 if dim is None else tf.compat.v1.dimension_value(dim)
common_shape = [d_val(dim) for dim in common_shape]
condition = tf.broadcast_to(tf.greater_equal(min_dot, 0.0), common_shape)
albedo = tf.broadcast_to(albedo, common_shape)
return tf.compat.v1.where(condition, albedo / math.pi,
tf.zeros_like(albedo))
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| -1 |
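Per its docstring, lambertian.brdf in the row above returns albedo / pi wherever both light directions lie on the outer side of the surface and zero otherwise. A minimal usage sketch, assuming the import path implied by the filepath column:

```python
import tensorflow as tf
# Import path assumed from the filepath column above.
from tensorflow_graphics.rendering.reflectance import lambertian

# Normalized directions: light arrives straight down onto a surface whose
# normal points up, and is observed from directly above the surface.
incoming = tf.constant([[0.0, 0.0, -1.0]])
outgoing = tf.constant([[0.0, 0.0, 1.0]])
normal = tf.constant([[0.0, 0.0, 1.0]])
albedo = tf.constant([[0.5, 0.5, 0.5]])

reflectance = lambertian.brdf(incoming, outgoing, normal, albedo)
print(reflectance)  # approximately 0.5 / pi = 0.159 per channel
```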
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/nn/loss/tests/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/nn/layer/tests/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/nn/layer/tests/pointnet_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for pointnet layers."""
# pylint: disable=invalid-name
from absl.testing import parameterized
import tensorflow as tf
from tensorflow_graphics.nn.layer.pointnet import ClassificationHead
from tensorflow_graphics.nn.layer.pointnet import PointNetConv2Layer
from tensorflow_graphics.nn.layer.pointnet import PointNetDenseLayer
from tensorflow_graphics.nn.layer.pointnet import PointNetVanillaClassifier
from tensorflow_graphics.nn.layer.pointnet import VanillaEncoder
from tensorflow_graphics.util import test_case
class RandomForwardExecutionTest(test_case.TestCase):
@parameterized.parameters(
((32, 2048, 1, 3), (32), (.5), True),
((32, 2048, 1, 3), (32), (.5), False),
((32, 2048, 1, 2), (16), (.99), True),
)
def test_conv2(self, input_shape, channels, momentum, training):
B, N, X, _ = input_shape
inputs = tf.random.uniform(input_shape)
layer = PointNetConv2Layer(channels, momentum)
outputs = layer(inputs, training=training)
assert outputs.shape == (B, N, X, channels)
@parameterized.parameters(
((32, 1024), (40), (.5), True),
((32, 2048), (20), (.5), False),
((32, 512), (10), (.99), True),
)
def test_dense(self, input_shape, channels, momentum, training):
B, _ = input_shape
inputs = tf.random.uniform(input_shape)
layer = PointNetDenseLayer(channels, momentum)
outputs = layer(inputs, training=training)
assert outputs.shape == (B, channels)
@parameterized.parameters(
((32, 2048, 3), (.9), True),
((32, 2048, 2), (.5), False),
((32, 2048, 3), (.99), True),
)
def test_vanilla_encoder(self, input_shape, momentum, training):
B = input_shape[0]
inputs = tf.random.uniform(input_shape)
encoder = VanillaEncoder(momentum)
outputs = encoder(inputs, training=training)
assert outputs.shape == (B, 1024)
@parameterized.parameters(
((16, 1024), (20), (.9), True),
((8, 2048), (40), (.5), False),
((32, 512), (10), (.99), True),
)
def test_classification_head(self, input_shape, num_classes, momentum,
training):
B = input_shape[0]
inputs = tf.random.uniform(input_shape)
head = ClassificationHead(num_classes, momentum)
outputs = head(inputs, training=training)
assert outputs.shape == (B, num_classes)
@parameterized.parameters(
((32, 1024, 3), 40, True),
((32, 1024, 2), 40, False),
((16, 2048, 3), 20, True),
((16, 2048, 2), 20, False),
)
def test_vanilla_classifier(self, input_shape, num_classes, training):
B = input_shape[0]
C = num_classes
inputs = tf.random.uniform(input_shape)
model = PointNetVanillaClassifier(num_classes, momentum=.5)
logits = model(inputs, training)
assert logits.shape == (B, C)
labels = tf.random.uniform((B,), minval=0, maxval=C, dtype=tf.int64)
PointNetVanillaClassifier.loss(labels, logits)
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for pointnet layers."""
# pylint: disable=invalid-name
from absl.testing import parameterized
import tensorflow as tf
from tensorflow_graphics.nn.layer.pointnet import ClassificationHead
from tensorflow_graphics.nn.layer.pointnet import PointNetConv2Layer
from tensorflow_graphics.nn.layer.pointnet import PointNetDenseLayer
from tensorflow_graphics.nn.layer.pointnet import PointNetVanillaClassifier
from tensorflow_graphics.nn.layer.pointnet import VanillaEncoder
from tensorflow_graphics.util import test_case
class RandomForwardExecutionTest(test_case.TestCase):
@parameterized.parameters(
((32, 2048, 1, 3), (32), (.5), True),
((32, 2048, 1, 3), (32), (.5), False),
((32, 2048, 1, 2), (16), (.99), True),
)
def test_conv2(self, input_shape, channels, momentum, training):
B, N, X, _ = input_shape
inputs = tf.random.uniform(input_shape)
layer = PointNetConv2Layer(channels, momentum)
outputs = layer(inputs, training=training)
assert outputs.shape == (B, N, X, channels)
@parameterized.parameters(
((32, 1024), (40), (.5), True),
((32, 2048), (20), (.5), False),
((32, 512), (10), (.99), True),
)
def test_dense(self, input_shape, channels, momentum, training):
B, _ = input_shape
inputs = tf.random.uniform(input_shape)
layer = PointNetDenseLayer(channels, momentum)
outputs = layer(inputs, training=training)
assert outputs.shape == (B, channels)
@parameterized.parameters(
((32, 2048, 3), (.9), True),
((32, 2048, 2), (.5), False),
((32, 2048, 3), (.99), True),
)
def test_vanilla_encoder(self, input_shape, momentum, training):
B = input_shape[0]
inputs = tf.random.uniform(input_shape)
encoder = VanillaEncoder(momentum)
outputs = encoder(inputs, training=training)
assert outputs.shape == (B, 1024)
@parameterized.parameters(
((16, 1024), (20), (.9), True),
((8, 2048), (40), (.5), False),
((32, 512), (10), (.99), True),
)
def test_classification_head(self, input_shape, num_classes, momentum,
training):
B = input_shape[0]
inputs = tf.random.uniform(input_shape)
head = ClassificationHead(num_classes, momentum)
outputs = head(inputs, training=training)
assert outputs.shape == (B, num_classes)
@parameterized.parameters(
((32, 1024, 3), 40, True),
((32, 1024, 2), 40, False),
((16, 2048, 3), 20, True),
((16, 2048, 2), 20, False),
)
def test_vanilla_classifier(self, input_shape, num_classes, training):
B = input_shape[0]
C = num_classes
inputs = tf.random.uniform(input_shape)
model = PointNetVanillaClassifier(num_classes, momentum=.5)
logits = model(inputs, training)
assert logits.shape == (B, C)
labels = tf.random.uniform((B,), minval=0, maxval=C, dtype=tf.int64)
PointNetVanillaClassifier.loss(labels, logits)
if __name__ == "__main__":
test_case.main()
| -1 |
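The PointNet test in the row above checks output shapes of the layers on random inputs. Outside the test harness, the same call pattern looks roughly like the sketch below; the import mirrors the test file and the batch size and class count are arbitrary.

```python
import tensorflow as tf
# Import mirrors the one used in the test above.
from tensorflow_graphics.nn.layer.pointnet import PointNetVanillaClassifier

B, C = 8, 40
points = tf.random.uniform((B, 1024, 3))         # batch of B point clouds
model = PointNetVanillaClassifier(C, momentum=.5)

logits = model(points, True)                     # shape (B, C)
labels = tf.random.uniform((B,), minval=0, maxval=C, dtype=tf.int64)
loss = PointNetVanillaClassifier.loss(labels, logits)
```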
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/nn/metric/tests/fscore_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the fscore metric."""
from absl.testing import parameterized
import numpy as np
from tensorflow_graphics.nn.metric import fscore
from tensorflow_graphics.nn.metric import precision
from tensorflow_graphics.nn.metric import recall
from tensorflow_graphics.util import test_case
def random_tensor(tensor_shape):
return np.random.uniform(low=0.0, high=1.0, size=tensor_shape)
def random_tensor_shape():
tensor_size = np.random.randint(5) + 1
return np.random.randint(1, 10, size=(tensor_size)).tolist()
def binary_precision_function(ground_truth, predictions):
return precision.evaluate(ground_truth, predictions, classes=[1])
def binary_recall_function(ground_truth, predictions):
return recall.evaluate(ground_truth, predictions, classes=[1])
class FscoreTest(test_case.TestCase):
@parameterized.parameters(
# Precision = 0.5, Recall = 0.25.
((0, 1, 1, 1, 1), (1, 1, 0, 0, 0), 2 * (0.5 * 0.25) / (0.5 + 0.25)),
# Precision = 1, Recall = 1.
((0, 0, 0, 1, 1, 1, 0, 1), (0, 0, 0, 1, 1, 1, 0, 1), 1),
# Precision = 0, Recall = 0.
((0, 1, 0, 0, 0, 0), (0, 0, 0, 0, 0, 0), 0))
def test_evaluate_preset(self, ground_truth, predictions, expected_fscore):
tensor_shape = random_tensor_shape()
ground_truth_labels = np.tile(ground_truth, tensor_shape + [1])
predicted_labels = np.tile(predictions, tensor_shape + [1])
expected = np.tile(expected_fscore, tensor_shape)
result = fscore.evaluate(
ground_truth_labels,
predicted_labels,
precision_function=binary_precision_function,
recall_function=binary_recall_function)
self.assertAllClose(expected, result)
@parameterized.parameters(
("Not all batch dimensions are broadcast-compatible.", (1, 5, 3), (4, 3)),
("Not all batch dimensions are broadcast-compatible.", (3, 4), (2, 4, 5)),
)
def test_evaluate_shape_exception_raised(self, error_msg, *shape):
"""Tests that the shape exception is raised."""
self.assert_exception_is_raised(fscore.evaluate, error_msg, shape)
@parameterized.parameters(
((1, 5, 3), (2, 5, 1)),
((None, 2, 6), (4, 2, None)),
((3, 1, 1, 2), (3, 5, 8, 2)),
)
def test_evaluate_shape_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(fscore.evaluate, shapes)
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the fscore metric."""
from absl.testing import parameterized
import numpy as np
from tensorflow_graphics.nn.metric import fscore
from tensorflow_graphics.nn.metric import precision
from tensorflow_graphics.nn.metric import recall
from tensorflow_graphics.util import test_case
def random_tensor(tensor_shape):
return np.random.uniform(low=0.0, high=1.0, size=tensor_shape)
def random_tensor_shape():
tensor_size = np.random.randint(5) + 1
return np.random.randint(1, 10, size=(tensor_size)).tolist()
def binary_precision_function(ground_truth, predictions):
return precision.evaluate(ground_truth, predictions, classes=[1])
def binary_recall_function(ground_truth, predictions):
return recall.evaluate(ground_truth, predictions, classes=[1])
class FscoreTest(test_case.TestCase):
@parameterized.parameters(
# Precision = 0.5, Recall = 0.25.
((0, 1, 1, 1, 1), (1, 1, 0, 0, 0), 2 * (0.5 * 0.25) / (0.5 + 0.25)),
# Precision = 1, Recall = 1.
((0, 0, 0, 1, 1, 1, 0, 1), (0, 0, 0, 1, 1, 1, 0, 1), 1),
# Precision = 0, Recall = 0.
((0, 1, 0, 0, 0, 0), (0, 0, 0, 0, 0, 0), 0))
def test_evaluate_preset(self, ground_truth, predictions, expected_fscore):
tensor_shape = random_tensor_shape()
ground_truth_labels = np.tile(ground_truth, tensor_shape + [1])
predicted_labels = np.tile(predictions, tensor_shape + [1])
expected = np.tile(expected_fscore, tensor_shape)
result = fscore.evaluate(
ground_truth_labels,
predicted_labels,
precision_function=binary_precision_function,
recall_function=binary_recall_function)
self.assertAllClose(expected, result)
@parameterized.parameters(
("Not all batch dimensions are broadcast-compatible.", (1, 5, 3), (4, 3)),
("Not all batch dimensions are broadcast-compatible.", (3, 4), (2, 4, 5)),
)
def test_evaluate_shape_exception_raised(self, error_msg, *shape):
"""Tests that the shape exception is raised."""
self.assert_exception_is_raised(fscore.evaluate, error_msg, shape)
@parameterized.parameters(
((1, 5, 3), (2, 5, 1)),
((None, 2, 6), (4, 2, None)),
((3, 1, 1, 2), (3, 5, 8, 2)),
)
def test_evaluate_shape_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(fscore.evaluate, shapes)
if __name__ == "__main__":
test_case.main()
| -1 |
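The F-score test in the row above combines the precision and recall metrics through fscore.evaluate. A minimal sketch of the same call outside the test harness, reusing the first preset from the test (precision 0.5, recall 0.25):

```python
import numpy as np
# Imports mirror the ones used in the test above.
from tensorflow_graphics.nn.metric import fscore, precision, recall

ground_truth = np.array([0, 1, 1, 1, 1])
predictions = np.array([1, 1, 0, 0, 0])

result = fscore.evaluate(
    ground_truth,
    predictions,
    precision_function=lambda gt, p: precision.evaluate(gt, p, classes=[1]),
    recall_function=lambda gt, p: recall.evaluate(gt, p, classes=[1]))
# Expected F-score: 2 * (0.5 * 0.25) / (0.5 + 0.25) = 1/3.
```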
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/math/interpolation/tests/bspline_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for bspline."""
from absl.testing import parameterized
import numpy as np
from tensorflow_graphics.math.interpolation import bspline
from tensorflow_graphics.util import test_case
class BSplineTest(test_case.TestCase):
@parameterized.parameters((0.0, (1.0,)), (1.0, (1.0,)))
def test_constant_basis_boundary_values(self, position, weights):
"""Tests that basis functions of degree 0 return expected values."""
self.assertAllClose(bspline._constant(position), weights) # pylint: disable=protected-access
@parameterized.parameters((0.0, (1.0, 0.0)), (1.0, (0.0, 1.0)))
def test_linear_basis_boundary_values(self, position, weights):
"""Tests that basis functions of degree 1 return expected values."""
self.assertAllClose(bspline._linear(position), weights) # pylint: disable=protected-access
@parameterized.parameters((0.0, (0.5, 0.5, 0.0)), (1.0, (0.0, 0.5, 0.5)))
def test_quadratic_basis_boundary_values(self, position, weights):
"""Tests that basis functions of degree 2 return expected values."""
self.assertAllClose(bspline._quadratic(position), weights) # pylint: disable=protected-access
@parameterized.parameters((0.0, (1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0, 0.0)),
(1.0, (0.0, 1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0)))
def test_cubic_basis_boundary_values(self, position, weights):
"""Tests that basis functions of degree 3 return expected values."""
self.assertAllClose(bspline._cubic(position), weights) # pylint: disable=protected-access
@parameterized.parameters(
(0.0, (1.0 / 24.0, 11.0 / 24.0, 11.0 / 24.0, 1.0 / 24.0, 0.0)),
(1.0, (0.0, 1.0 / 24.0, 11.0 / 24.0, 11.0 / 24.0, 1.0 / 24.0)))
def test_quartic_basis_boundary_values(self, position, weights):
"""Tests that basis functions of degree 4 return expected values."""
self.assertAllClose(bspline._quartic(position), weights) # pylint: disable=protected-access
@parameterized.parameters(
(((0.5,), (1.5,), (2.5,)), (((0.5, 0.5),), ((0.5, 0.5),), ((0.5, 0.5),)),
(((0,), (1,), (2,))), 1, True),
((0.0, 1.0), ((0.5, 0.5, 0.0), (0.0, 0.5, 0.5)), (0, 0), 2, False),
)
def test_knot_weights_sparse_mode_preset(self, positions, gt_weights,
gt_shifts, degree, cyclical):
"""Tests that sparse mode returns correct results."""
weights, shifts = bspline.knot_weights(
positions,
num_knots=3,
degree=degree,
cyclical=cyclical,
sparse_mode=True)
self.assertAllClose(weights, gt_weights)
self.assertAllClose(shifts, gt_shifts)
@parameterized.parameters(
(((0.5,),), (((0.5, 0.5, 0.0),),), 1),
(((1.5,),), (((0.0, 0.5, 0.5),),), 1),
(((2.5,),), (((0.5, 0.0, 0.5),),), 1),
(((0.5,), (1.5,), (2.5,)),
(((1.0 / 8.0, 0.75, 1.0 / 8.0),), ((1.0 / 8.0, 1.0 / 8.0, 0.75),),
((0.75, 1.0 / 8.0, 1.0 / 8.0),)), 2),
)
def test_knot_weights_preset(self, position, weights, degree):
"""Tests that knot weights are correct when degree < num_knots - 1."""
self.assertAllClose(
bspline.knot_weights(
position, num_knots=3, degree=degree, cyclical=True), weights)
@parameterized.parameters((((0.0,), (0.25,), (0.5,), (0.75,)),))
def test_full_degree_non_cyclical_knot_weights(self, positions):
"""Tests that noncyclical weights are correct when using max degree."""
cyclical_weights = bspline.knot_weights(
positions=positions, num_knots=3, degree=2, cyclical=True)
noncyclical_weights = bspline.knot_weights(
positions=positions, num_knots=3, degree=2, cyclical=False)
self.assertAllClose(cyclical_weights, noncyclical_weights)
@parameterized.parameters(
("must have the same number of dimensions", ((None, 2), (None, 3, 3))),
("must have the same number of dimensions", ((2,), (3,))),
)
def test_interpolate_with_weights_exception_is_raised(self, error_msg,
shapes):
"""Tests that exception is raised when wrong number of knots is given."""
self.assert_exception_is_raised(
bspline.interpolate_with_weights, error_msg, shapes=shapes)
@parameterized.parameters(
(((0.5,), (0.0,), (0.9,)), (((0.5, 1.5), (1.5, 1.5), (2.5, 3.5)),)))
def test_interpolate_with_weights_preset(self, positions, knots):
"""Tests that interpolate_with_weights works correctly."""
degree = 1
cyclical = False
interp1 = bspline.interpolate(knots, positions, degree, cyclical)
weights = bspline.knot_weights(positions, 2, degree, cyclical)
interp2 = bspline.interpolate_with_weights(knots, weights)
self.assertAllClose(interp1, interp2)
@parameterized.parameters(
(1, 2),
(1, None),
(2, 2),
(2, None),
(3, 2),
(3, None),
(4, 2),
(4, None),
)
def test_knot_weights_exception_is_not_raised(self, positions_rank, dims):
shapes = ([dims] * positions_rank,)
self.assert_exception_is_not_raised(
bspline.knot_weights,
shapes=shapes,
num_knots=3,
degree=2,
cyclical=True)
@parameterized.parameters(
("Degree should be between 0 and 4.", 6, -1),
("Degree should be between 0 and 4.", 6, 5),
("Degree cannot be >= number of knots.", 2, 2),
("Degree cannot be >= number of knots.", 2, 3),
)
def test_knot_weights_exception_is_raised(self, error_msg, num_knots, degree):
self.assert_exception_is_raised(
bspline.knot_weights,
error_msg,
shapes=((10, 1),),
num_knots=num_knots,
degree=degree,
cyclical=True)
@parameterized.parameters(
(1, 0, True),
(1, 0, False),
(2, 1, True),
(2, 1, False),
(3, 1, True),
(3, 1, False),
(3, 2, True),
(3, 2, False),
(4, 1, True),
(4, 1, False),
(4, 3, True),
(4, 3, False),
(5, 1, True),
(5, 1, False),
(5, 4, True),
(5, 4, False),
)
def test_knot_weights_jacobian_is_correct(self, num_knots, degree, cyclical):
"""Tests that Jacobian is correct."""
positions_init = np.random.random_sample((10, 1))
scale = num_knots if cyclical else num_knots - degree
positions_init *= scale
def dense_mode_fn(positions):
return bspline.knot_weights(
positions=positions,
num_knots=num_knots,
degree=degree,
cyclical=cyclical,
sparse_mode=False)
def sparse_mode_fn(positions):
return bspline.knot_weights(
positions=positions,
num_knots=num_knots,
degree=degree,
cyclical=cyclical,
sparse_mode=True)[0]
with self.subTest(name="dense_mode"):
self.assert_jacobian_is_correct_fn(dense_mode_fn, [positions_init])
with self.subTest(name="sparse_mode"):
self.assert_jacobian_is_correct_fn(sparse_mode_fn, [positions_init])
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for bspline."""
from absl.testing import parameterized
import numpy as np
from tensorflow_graphics.math.interpolation import bspline
from tensorflow_graphics.util import test_case
class BSplineTest(test_case.TestCase):
@parameterized.parameters((0.0, (1.0,)), (1.0, (1.0,)))
def test_constant_basis_boundary_values(self, position, weights):
"""Tests that basis functions of degree 0 return expected values."""
self.assertAllClose(bspline._constant(position), weights) # pylint: disable=protected-access
@parameterized.parameters((0.0, (1.0, 0.0)), (1.0, (0.0, 1.0)))
def test_linear_basis_boundary_values(self, position, weights):
"""Tests that basis functions of degree 1 return expected values."""
self.assertAllClose(bspline._linear(position), weights) # pylint: disable=protected-access
@parameterized.parameters((0.0, (0.5, 0.5, 0.0)), (1.0, (0.0, 0.5, 0.5)))
def test_quadratic_basis_boundary_values(self, position, weights):
"""Tests that basis functions of degree 2 return expected values."""
self.assertAllClose(bspline._quadratic(position), weights) # pylint: disable=protected-access
@parameterized.parameters((0.0, (1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0, 0.0)),
(1.0, (0.0, 1.0 / 6.0, 2.0 / 3.0, 1.0 / 6.0)))
def test_cubic_basis_boundary_values(self, position, weights):
"""Tests that basis functions of degree 3 return expected values."""
self.assertAllClose(bspline._cubic(position), weights) # pylint: disable=protected-access
@parameterized.parameters(
(0.0, (1.0 / 24.0, 11.0 / 24.0, 11.0 / 24.0, 1.0 / 24.0, 0.0)),
(1.0, (0.0, 1.0 / 24.0, 11.0 / 24.0, 11.0 / 24.0, 1.0 / 24.0)))
def test_quartic_basis_boundary_values(self, position, weights):
"""Tests that basis functions of degree 4 return expected values."""
self.assertAllClose(bspline._quartic(position), weights) # pylint: disable=protected-access
@parameterized.parameters(
(((0.5,), (1.5,), (2.5,)), (((0.5, 0.5),), ((0.5, 0.5),), ((0.5, 0.5),)),
(((0,), (1,), (2,))), 1, True),
((0.0, 1.0), ((0.5, 0.5, 0.0), (0.0, 0.5, 0.5)), (0, 0), 2, False),
)
def test_knot_weights_sparse_mode_preset(self, positions, gt_weights,
gt_shifts, degree, cyclical):
"""Tests that sparse mode returns correct results."""
weights, shifts = bspline.knot_weights(
positions,
num_knots=3,
degree=degree,
cyclical=cyclical,
sparse_mode=True)
self.assertAllClose(weights, gt_weights)
self.assertAllClose(shifts, gt_shifts)
@parameterized.parameters(
(((0.5,),), (((0.5, 0.5, 0.0),),), 1),
(((1.5,),), (((0.0, 0.5, 0.5),),), 1),
(((2.5,),), (((0.5, 0.0, 0.5),),), 1),
(((0.5,), (1.5,), (2.5,)),
(((1.0 / 8.0, 0.75, 1.0 / 8.0),), ((1.0 / 8.0, 1.0 / 8.0, 0.75),),
((0.75, 1.0 / 8.0, 1.0 / 8.0),)), 2),
)
def test_knot_weights_preset(self, position, weights, degree):
"""Tests that knot weights are correct when degree < num_knots - 1."""
self.assertAllClose(
bspline.knot_weights(
position, num_knots=3, degree=degree, cyclical=True), weights)
@parameterized.parameters((((0.0,), (0.25,), (0.5,), (0.75,)),))
def test_full_degree_non_cyclical_knot_weights(self, positions):
"""Tests that noncyclical weights are correct when using max degree."""
cyclical_weights = bspline.knot_weights(
positions=positions, num_knots=3, degree=2, cyclical=True)
noncyclical_weights = bspline.knot_weights(
positions=positions, num_knots=3, degree=2, cyclical=False)
self.assertAllClose(cyclical_weights, noncyclical_weights)
@parameterized.parameters(
("must have the same number of dimensions", ((None, 2), (None, 3, 3))),
("must have the same number of dimensions", ((2,), (3,))),
)
def test_interpolate_with_weights_exception_is_raised(self, error_msg,
shapes):
"""Tests that exception is raised when wrong number of knots is given."""
self.assert_exception_is_raised(
bspline.interpolate_with_weights, error_msg, shapes=shapes)
@parameterized.parameters(
(((0.5,), (0.0,), (0.9,)), (((0.5, 1.5), (1.5, 1.5), (2.5, 3.5)),)))
def test_interpolate_with_weights_preset(self, positions, knots):
"""Tests that interpolate_with_weights works correctly."""
degree = 1
cyclical = False
interp1 = bspline.interpolate(knots, positions, degree, cyclical)
weights = bspline.knot_weights(positions, 2, degree, cyclical)
interp2 = bspline.interpolate_with_weights(knots, weights)
self.assertAllClose(interp1, interp2)
@parameterized.parameters(
(1, 2),
(1, None),
(2, 2),
(2, None),
(3, 2),
(3, None),
(4, 2),
(4, None),
)
def test_knot_weights_exception_is_not_raised(self, positions_rank, dims):
shapes = ([dims] * positions_rank,)
self.assert_exception_is_not_raised(
bspline.knot_weights,
shapes=shapes,
num_knots=3,
degree=2,
cyclical=True)
@parameterized.parameters(
("Degree should be between 0 and 4.", 6, -1),
("Degree should be between 0 and 4.", 6, 5),
("Degree cannot be >= number of knots.", 2, 2),
("Degree cannot be >= number of knots.", 2, 3),
)
def test_knot_weights_exception_is_raised(self, error_msg, num_knots, degree):
self.assert_exception_is_raised(
bspline.knot_weights,
error_msg,
shapes=((10, 1),),
num_knots=num_knots,
degree=degree,
cyclical=True)
@parameterized.parameters(
(1, 0, True),
(1, 0, False),
(2, 1, True),
(2, 1, False),
(3, 1, True),
(3, 1, False),
(3, 2, True),
(3, 2, False),
(4, 1, True),
(4, 1, False),
(4, 3, True),
(4, 3, False),
(5, 1, True),
(5, 1, False),
(5, 4, True),
(5, 4, False),
)
def test_knot_weights_jacobian_is_correct(self, num_knots, degree, cyclical):
"""Tests that Jacobian is correct."""
positions_init = np.random.random_sample((10, 1))
scale = num_knots if cyclical else num_knots - degree
positions_init *= scale
def dense_mode_fn(positions):
return bspline.knot_weights(
positions=positions,
num_knots=num_knots,
degree=degree,
cyclical=cyclical,
sparse_mode=False)
def sparse_mode_fn(positions):
return bspline.knot_weights(
positions=positions,
num_knots=num_knots,
degree=degree,
cyclical=cyclical,
sparse_mode=True)[0]
with self.subTest(name="dense_mode"):
self.assert_jacobian_is_correct_fn(dense_mode_fn, [positions_init])
with self.subTest(name="sparse_mode"):
self.assert_jacobian_is_correct_fn(sparse_mode_fn, [positions_init])
if __name__ == "__main__":
test_case.main()
| -1 |
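The test file above exercises the public B-spline API (knot_weights, interpolate, interpolate_with_weights). A minimal usage sketch of that API follows; the control knots and query positions are invented for illustration, and the shapes follow the conventions visible in the tests above (knot index on the last axis, positions in [0, num_knots) for a cyclical spline).

import numpy as np
from tensorflow_graphics.math.interpolation import bspline

# Three 2-D control knots; the knot index lives on the last axis.
knots = np.array([[0.0, 1.0, 2.0],
                  [0.0, 2.0, 0.0]], dtype=np.float32)      # shape [2, 3]
positions = np.array([0.5, 1.5, 2.5], dtype=np.float32)    # parametric positions in [0, 3)

# Dense weights of shape [3, 3]: one weight per knot for every query position.
weights = bspline.knot_weights(positions, num_knots=3, degree=2, cyclical=True)

# Both calls below should yield the same [3, 2] set of interpolated points.
points_direct = bspline.interpolate(knots, positions, 2, True)
points_from_weights = bspline.interpolate_with_weights(knots, weights)

Precomputing the weights once, as in the second call, is useful when the same sample positions are reused against many sets of knots.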
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/geometry/deformation_energy/tests/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| -1 |
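The row above only touches an empty test package, but the PR description it repeats lists the TF1-to-TF2 symbol moves applied across the library. A rough before/after sketch of the first item (the name-scope change) is shown below on a hypothetical helper; it is not taken from the PR diff itself.

import tensorflow as tf

def scale_points_v1(points, factor, name=None):
  # TF1 pattern: tf.compat.v1.name_scope takes a default name and a list of watched values.
  with tf.compat.v1.name_scope(name, "scale_points", [points, factor]):
    return tf.convert_to_tensor(value=points) * factor

def scale_points_v2(points, factor, name="scale_points"):
  # TF2 pattern: tf.name_scope takes a single string; the default name and value list are gone.
  with tf.name_scope(name):
    return tf.convert_to_tensor(value=points) * factor

print(scale_points_v1([1.0, 2.0], 3.0).numpy())
print(scale_points_v2([1.0, 2.0], 3.0).numpy())

The remaining items in the list are analogous one-for-one symbol moves (tf.where, tf.debugging.assert_equal, tf.compat.dimension_value).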
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/geometry/deformation_energy/as_conformal_as_possible.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements TensorFlow As Rigid As Possible utility functions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import quaternion
from tensorflow_graphics.math import vector
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def energy(vertices_rest_pose,
vertices_deformed_pose,
quaternions,
edges,
vertex_weight=None,
edge_weight=None,
conformal_energy=True,
aggregate_loss=True,
name=None):
"""Estimates an As Conformal As Possible (ACAP) fitting energy.
For a given mesh in rest pose, this function evaluates a variant of the ACAP
[1] fitting energy for a batch of deformed meshes. The vertex weights and edge
weights are defined on the rest pose.
The method implemented here is similar to [2], but with an added free variable
capturing a scale factor per vertex.
[1]: Yusuke Yoshiyasu, Wan-Chun Ma, Eiichi Yoshida, and Fumio Kanehiro.
"As-Conformal-As-Possible Surface Registration." Computer Graphics Forum. Vol.
33. No. 5. 2014.</br>
[2]: Olga Sorkine, and Marc Alexa.
"As-rigid-as-possible surface modeling". Symposium on Geometry Processing.
Vol. 4. 2007.
Note:
In the description of the arguments, V corresponds to
the number of vertices in the mesh, and E to the number of edges in this
mesh.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
vertices_rest_pose: A tensor of shape `[V, 3]` containing the position of
all the vertices of the mesh in rest pose.
vertices_deformed_pose: A tensor of shape `[A1, ..., An, V, 3]` containing
the position of all the vertices of the mesh in deformed pose.
quaternions: A tensor of shape `[A1, ..., An, V, 4]` defining a rigid
transformation to apply to each vertex of the rest pose. See Section 2
from [1] for further details.
edges: A tensor of shape `[E, 2]` defining indices of vertices that are
connected by an edge.
vertex_weight: An optional tensor of shape `[V]` defining the weight
associated with each vertex. Defaults to a tensor of ones.
edge_weight: A tensor of shape `[E]` defining the weight of edges. Common
choices for these weights include uniform weighting, and cotangent
weights. Defaults to a tensor of ones.
conformal_energy: A `bool` indicating whether each vertex is associated with
a scale factor or not. If this parameter is True, scaling information must
be encoded in the norm of `quaternions`. If this parameter is False, this
function implements the energy described in [2].
aggregate_loss: A `bool` defining whether the returned loss should be an
aggregate measure. When True, the mean squared error is returned. When
False, returns two losses for every edge of the mesh.
name: A name for this op. Defaults to "as_conformal_as_possible_energy".
Returns:
When aggregate_loss is `True`, returns a tensor of shape `[A1, ..., An]`
containing the ACAP energies. When aggregate_loss is `False`, returns a
tensor of shape `[A1, ..., An, 2*E]` containing each term of the summation
described in the equation 7 of [2].
Raises:
ValueError: if the shape of `vertices_rest_pose`, `vertices_deformed_pose`,
`quaternions`, `edges`, `vertex_weight`, or `edge_weight` is not supported.
"""
with tf.compat.v1.name_scope(name, "as_conformal_as_possible_energy", [
vertices_rest_pose, vertices_deformed_pose, quaternions, edges,
conformal_energy, vertex_weight, edge_weight
]):
vertices_rest_pose = tf.convert_to_tensor(value=vertices_rest_pose)
vertices_deformed_pose = tf.convert_to_tensor(value=vertices_deformed_pose)
quaternions = tf.convert_to_tensor(value=quaternions)
edges = tf.convert_to_tensor(value=edges)
if vertex_weight is not None:
vertex_weight = tf.convert_to_tensor(value=vertex_weight)
if edge_weight is not None:
edge_weight = tf.convert_to_tensor(value=edge_weight)
shape.check_static(
tensor=vertices_rest_pose,
tensor_name="vertices_rest_pose",
has_rank=2,
has_dim_equals=(-1, 3))
shape.check_static(
tensor=vertices_deformed_pose,
tensor_name="vertices_deformed_pose",
has_rank_greater_than=1,
has_dim_equals=(-1, 3))
shape.check_static(
tensor=quaternions,
tensor_name="quaternions",
has_rank_greater_than=1,
has_dim_equals=(-1, 4))
shape.compare_batch_dimensions(
tensors=(vertices_deformed_pose, quaternions),
last_axes=(-3, -3),
broadcast_compatible=False)
shape.check_static(
tensor=edges, tensor_name="edges", has_rank=2, has_dim_equals=(-1, 2))
tensors_with_vertices = [vertices_rest_pose,
vertices_deformed_pose,
quaternions]
names_with_vertices = ["vertices_rest_pose",
"vertices_deformed_pose",
"quaternions"]
axes_with_vertices = [-2, -2, -2]
if vertex_weight is not None:
shape.check_static(
tensor=vertex_weight, tensor_name="vertex_weight", has_rank=1)
tensors_with_vertices.append(vertex_weight)
names_with_vertices.append("vertex_weight")
axes_with_vertices.append(0)
shape.compare_dimensions(
tensors=tensors_with_vertices,
axes=axes_with_vertices,
tensor_names=names_with_vertices)
if edge_weight is not None:
shape.check_static(
tensor=edge_weight, tensor_name="edge_weight", has_rank=1)
shape.compare_dimensions(
tensors=(edges, edge_weight),
axes=(0, 0),
tensor_names=("edges", "edge_weight"))
if not conformal_energy:
quaternions = quaternion.normalize(quaternions)
# Extracts the indices of vertices.
indices_i, indices_j = tf.unstack(edges, axis=-1)
# Extracts the vertices we need per term.
vertices_i_rest = tf.gather(vertices_rest_pose, indices_i, axis=-2)
vertices_j_rest = tf.gather(vertices_rest_pose, indices_j, axis=-2)
vertices_i_deformed = tf.gather(vertices_deformed_pose, indices_i, axis=-2)
vertices_j_deformed = tf.gather(vertices_deformed_pose, indices_j, axis=-2)
# Extracts the weights we need per term.
weights_shape = vertices_i_rest.shape.as_list()[-2]
if vertex_weight is not None:
weight_i = tf.gather(vertex_weight, indices_i)
weight_j = tf.gather(vertex_weight, indices_j)
else:
weight_i = weight_j = tf.ones(
weights_shape, dtype=vertices_rest_pose.dtype)
weight_i = tf.expand_dims(weight_i, axis=-1)
weight_j = tf.expand_dims(weight_j, axis=-1)
if edge_weight is not None:
weight_ij = edge_weight
else:
weight_ij = tf.ones(weights_shape, dtype=vertices_rest_pose.dtype)
weight_ij = tf.expand_dims(weight_ij, axis=-1)
# Extracts the rotation we need per term.
quaternion_i = tf.gather(quaternions, indices_i, axis=-2)
quaternion_j = tf.gather(quaternions, indices_j, axis=-2)
# Computes the energy.
deformed_ij = vertices_i_deformed - vertices_j_deformed
rotated_rest_ij = quaternion.rotate((vertices_i_rest - vertices_j_rest),
quaternion_i)
energy_ij = weight_i * weight_ij * (deformed_ij - rotated_rest_ij)
deformed_ji = vertices_j_deformed - vertices_i_deformed
rotated_rest_ji = quaternion.rotate((vertices_j_rest - vertices_i_rest),
quaternion_j)
energy_ji = weight_j * weight_ij * (deformed_ji - rotated_rest_ji)
energy_ij_squared = vector.dot(energy_ij, energy_ij, keepdims=False)
energy_ji_squared = vector.dot(energy_ji, energy_ji, keepdims=False)
if aggregate_loss:
average_energy_ij = tf.reduce_mean(
input_tensor=energy_ij_squared, axis=-1)
average_energy_ji = tf.reduce_mean(
input_tensor=energy_ji_squared, axis=-1)
return (average_energy_ij + average_energy_ji) / 2.0
return tf.concat((energy_ij_squared, energy_ji_squared), axis=-1)
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements TensorFlow As Rigid As Possible utility functions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import quaternion
from tensorflow_graphics.math import vector
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def energy(vertices_rest_pose,
vertices_deformed_pose,
quaternions,
edges,
vertex_weight=None,
edge_weight=None,
conformal_energy=True,
aggregate_loss=True,
name=None):
"""Estimates an As Conformal As Possible (ACAP) fitting energy.
For a given mesh in rest pose, this function evaluates a variant of the ACAP
[1] fitting energy for a batch of deformed meshes. The vertex weights and edge
weights are defined on the rest pose.
The method implemented here is similar to [2], but with an added free variable
capturing a scale factor per vertex.
[1]: Yusuke Yoshiyasu, Wan-Chun Ma, Eiichi Yoshida, and Fumio Kanehiro.
"As-Conformal-As-Possible Surface Registration." Computer Graphics Forum. Vol.
33. No. 5. 2014.</br>
[2]: Olga Sorkine, and Marc Alexa.
"As-rigid-as-possible surface modeling". Symposium on Geometry Processing.
Vol. 4. 2007.
Note:
In the description of the arguments, V corresponds to
the number of vertices in the mesh, and E to the number of edges in this
mesh.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
vertices_rest_pose: A tensor of shape `[V, 3]` containing the position of
all the vertices of the mesh in rest pose.
vertices_deformed_pose: A tensor of shape `[A1, ..., An, V, 3]` containing
the position of all the vertices of the mesh in deformed pose.
quaternions: A tensor of shape `[A1, ..., An, V, 4]` defining a rigid
transformation to apply to each vertex of the rest pose. See Section 2
from [1] for further details.
edges: A tensor of shape `[E, 2]` defining indices of vertices that are
connected by an edge.
vertex_weight: An optional tensor of shape `[V]` defining the weight
associated with each vertex. Defaults to a tensor of ones.
edge_weight: A tensor of shape `[E]` defining the weight of edges. Common
choices for these weights include uniform weighting, and cotangent
weights. Defaults to a tensor of ones.
conformal_energy: A `bool` indicating whether each vertex is associated with
a scale factor or not. If this parameter is True, scaling information must
be encoded in the norm of `quaternions`. If this parameter is False, this
function implements the energy described in [2].
aggregate_loss: A `bool` defining whether the returned loss should be an
aggregate measure. When True, the mean squared error is returned. When
False, returns two losses for every edge of the mesh.
name: A name for this op. Defaults to "as_conformal_as_possible_energy".
Returns:
When aggregate_loss is `True`, returns a tensor of shape `[A1, ..., An]`
containing the ACAP energies. When aggregate_loss is `False`, returns a
tensor of shape `[A1, ..., An, 2*E]` containing each term of the summation
described in the equation 7 of [2].
Raises:
ValueError: if the shape of `vertices_rest_pose`, `vertices_deformed_pose`,
`quaternions`, `edges`, `vertex_weight`, or `edge_weight` is not supported.
"""
with tf.compat.v1.name_scope(name, "as_conformal_as_possible_energy", [
vertices_rest_pose, vertices_deformed_pose, quaternions, edges,
conformal_energy, vertex_weight, edge_weight
]):
vertices_rest_pose = tf.convert_to_tensor(value=vertices_rest_pose)
vertices_deformed_pose = tf.convert_to_tensor(value=vertices_deformed_pose)
quaternions = tf.convert_to_tensor(value=quaternions)
edges = tf.convert_to_tensor(value=edges)
if vertex_weight is not None:
vertex_weight = tf.convert_to_tensor(value=vertex_weight)
if edge_weight is not None:
edge_weight = tf.convert_to_tensor(value=edge_weight)
shape.check_static(
tensor=vertices_rest_pose,
tensor_name="vertices_rest_pose",
has_rank=2,
has_dim_equals=(-1, 3))
shape.check_static(
tensor=vertices_deformed_pose,
tensor_name="vertices_deformed_pose",
has_rank_greater_than=1,
has_dim_equals=(-1, 3))
shape.check_static(
tensor=quaternions,
tensor_name="quaternions",
has_rank_greater_than=1,
has_dim_equals=(-1, 4))
shape.compare_batch_dimensions(
tensors=(vertices_deformed_pose, quaternions),
last_axes=(-3, -3),
broadcast_compatible=False)
shape.check_static(
tensor=edges, tensor_name="edges", has_rank=2, has_dim_equals=(-1, 2))
tensors_with_vertices = [vertices_rest_pose,
vertices_deformed_pose,
quaternions]
names_with_vertices = ["vertices_rest_pose",
"vertices_deformed_pose",
"quaternions"]
axes_with_vertices = [-2, -2, -2]
if vertex_weight is not None:
shape.check_static(
tensor=vertex_weight, tensor_name="vertex_weight", has_rank=1)
tensors_with_vertices.append(vertex_weight)
names_with_vertices.append("vertex_weight")
axes_with_vertices.append(0)
shape.compare_dimensions(
tensors=tensors_with_vertices,
axes=axes_with_vertices,
tensor_names=names_with_vertices)
if edge_weight is not None:
shape.check_static(
tensor=edge_weight, tensor_name="edge_weight", has_rank=1)
shape.compare_dimensions(
tensors=(edges, edge_weight),
axes=(0, 0),
tensor_names=("edges", "edge_weight"))
if not conformal_energy:
quaternions = quaternion.normalize(quaternions)
# Extracts the indices of vertices.
indices_i, indices_j = tf.unstack(edges, axis=-1)
# Extracts the vertices we need per term.
vertices_i_rest = tf.gather(vertices_rest_pose, indices_i, axis=-2)
vertices_j_rest = tf.gather(vertices_rest_pose, indices_j, axis=-2)
vertices_i_deformed = tf.gather(vertices_deformed_pose, indices_i, axis=-2)
vertices_j_deformed = tf.gather(vertices_deformed_pose, indices_j, axis=-2)
# Extracts the weights we need per term.
weights_shape = vertices_i_rest.shape.as_list()[-2]
if vertex_weight is not None:
weight_i = tf.gather(vertex_weight, indices_i)
weight_j = tf.gather(vertex_weight, indices_j)
else:
weight_i = weight_j = tf.ones(
weights_shape, dtype=vertices_rest_pose.dtype)
weight_i = tf.expand_dims(weight_i, axis=-1)
weight_j = tf.expand_dims(weight_j, axis=-1)
if edge_weight is not None:
weight_ij = edge_weight
else:
weight_ij = tf.ones(weights_shape, dtype=vertices_rest_pose.dtype)
weight_ij = tf.expand_dims(weight_ij, axis=-1)
# Extracts the rotation we need per term.
quaternion_i = tf.gather(quaternions, indices_i, axis=-2)
quaternion_j = tf.gather(quaternions, indices_j, axis=-2)
# Computes the energy.
deformed_ij = vertices_i_deformed - vertices_j_deformed
rotated_rest_ij = quaternion.rotate((vertices_i_rest - vertices_j_rest),
quaternion_i)
energy_ij = weight_i * weight_ij * (deformed_ij - rotated_rest_ij)
deformed_ji = vertices_j_deformed - vertices_i_deformed
rotated_rest_ji = quaternion.rotate((vertices_j_rest - vertices_i_rest),
quaternion_j)
energy_ji = weight_j * weight_ij * (deformed_ji - rotated_rest_ji)
energy_ij_squared = vector.dot(energy_ij, energy_ij, keepdims=False)
energy_ji_squared = vector.dot(energy_ji, energy_ji, keepdims=False)
if aggregate_loss:
average_energy_ij = tf.reduce_mean(
input_tensor=energy_ij_squared, axis=-1)
average_energy_ji = tf.reduce_mean(
input_tensor=energy_ji_squared, axis=-1)
return (average_energy_ij + average_energy_ji) / 2.0
return tf.concat((energy_ij_squared, energy_ji_squared), axis=-1)
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| -1 |
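As a sanity check of the energy above, the sketch below evaluates it on a made-up two-triangle mesh with identity rotations and a pure translation, which should give a numerically zero energy. The shapes follow the docstring; a leading batch dimension of one is added to the deformed vertices and quaternions.

import numpy as np
from tensorflow_graphics.geometry.deformation_energy import as_conformal_as_possible as acap

vertices_rest = np.array([[0., 0., 0.], [1., 0., 0.],
                          [0., 1., 0.], [1., 1., 0.]], dtype=np.float32)     # [V=4, 3]
vertices_deformed = (vertices_rest + 0.05).astype(np.float32)[np.newaxis]    # [1, 4, 3]
quaternions = np.tile([0., 0., 0., 1.], (1, 4, 1)).astype(np.float32)        # [1, 4, 4], identity (x, y, z, w)
edges = np.array([[0, 1], [1, 2], [2, 0], [1, 3], [3, 2]], dtype=np.int32)   # [E=5, 2]

loss = acap.energy(vertices_rest, vertices_deformed, quaternions, edges,
                   conformal_energy=False)
print(loss.numpy())  # expected: approximately [0.]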
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/nn/metric/recall.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the recall metric."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import safe_ops
from tensorflow_graphics.util import shape
def _cast_to_int(prediction):
return tf.cast(x=prediction, dtype=tf.int32)
def evaluate(ground_truth,
prediction,
classes=None,
reduce_average=True,
prediction_to_category_function=_cast_to_int,
name=None):
"""Computes the recall metric for the given ground truth and predictions.
Note:
In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
Args:
ground_truth: A tensor of shape `[A1, ..., An, N]`, where the last axis
represents the ground truth labels. Will be cast to int32.
prediction: A tensor of shape `[A1, ..., An, N]`, where the last axis
represents the predictions (which can be continuous).
classes: An integer or a list/tuple of integers representing the classes for
which the recall will be evaluated. In case 'classes' is 'None', the
number of classes will be inferred from the given values and the recall
will be calculated for each of the classes. Defaults to 'None'.
reduce_average: Whether to calculate the average of the recall for each
class and return a single recall value. Defaults to true.
prediction_to_category_function: A function to associate a `prediction` to a
category. Defaults to rounding down the value of the prediction to the
nearest integer value.
name: A name for this op. Defaults to "recall_evaluate".
Returns:
A tensor of shape `[A1, ..., An, C]`, where the last axis represents the
recall calculated for each of the requested classes.
Raises:
ValueError: if the shape of `ground_truth`, `prediction` is not supported.
"""
with tf.compat.v1.name_scope(name, "recall_evaluate",
[ground_truth, prediction]):
ground_truth = tf.cast(
x=tf.convert_to_tensor(value=ground_truth), dtype=tf.int32)
prediction = tf.convert_to_tensor(value=prediction)
shape.compare_batch_dimensions(
tensors=(ground_truth, prediction),
tensor_names=("ground_truth", "prediction"),
last_axes=-1,
broadcast_compatible=True)
prediction = prediction_to_category_function(prediction)
if classes is None:
num_classes = tf.math.maximum(
tf.math.reduce_max(input_tensor=ground_truth),
tf.math.reduce_max(input_tensor=prediction)) + 1
classes = tf.range(num_classes)
else:
classes = tf.convert_to_tensor(value=classes)
# Make sure classes is a tensor of rank 1.
classes = tf.reshape(classes, [1]) if tf.rank(classes) == 0 else classes
# Create a confusion matrix for each of the classes (with dimensions
# [A1, ..., An, C, N]).
classes = tf.expand_dims(classes, -1)
ground_truth_per_class = tf.equal(tf.expand_dims(ground_truth, -2), classes)
prediction_per_class = tf.equal(tf.expand_dims(prediction, -2), classes)
    # Calculate the recall for each of the classes.
true_positives = tf.math.reduce_sum(
input_tensor=tf.cast(
x=tf.math.logical_and(ground_truth_per_class, prediction_per_class),
dtype=tf.float32),
axis=-1)
total_ground_truth_positives = tf.math.reduce_sum(
input_tensor=tf.cast(x=ground_truth_per_class, dtype=tf.float32),
axis=-1)
recall_per_class = safe_ops.safe_signed_div(true_positives,
total_ground_truth_positives)
if reduce_average:
return tf.math.reduce_mean(input_tensor=recall_per_class, axis=-1)
else:
return recall_per_class
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the recall metric."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import safe_ops
from tensorflow_graphics.util import shape
def _cast_to_int(prediction):
return tf.cast(x=prediction, dtype=tf.int32)
def evaluate(ground_truth,
prediction,
classes=None,
reduce_average=True,
prediction_to_category_function=_cast_to_int,
name=None):
"""Computes the recall metric for the given ground truth and predictions.
Note:
In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
Args:
ground_truth: A tensor of shape `[A1, ..., An, N]`, where the last axis
represents the ground truth labels. Will be cast to int32.
prediction: A tensor of shape `[A1, ..., An, N]`, where the last axis
represents the predictions (which can be continuous).
classes: An integer or a list/tuple of integers representing the classes for
which the recall will be evaluated. In case 'classes' is 'None', the
number of classes will be inferred from the given values and the recall
will be calculated for each of the classes. Defaults to 'None'.
reduce_average: Whether to calculate the average of the recall for each
class and return a single recall value. Defaults to true.
prediction_to_category_function: A function to associate a `prediction` to a
category. Defaults to rounding down the value of the prediction to the
nearest integer value.
name: A name for this op. Defaults to "recall_evaluate".
Returns:
A tensor of shape `[A1, ..., An, C]`, where the last axis represents the
recall calculated for each of the requested classes.
Raises:
ValueError: if the shape of `ground_truth`, `prediction` is not supported.
"""
with tf.compat.v1.name_scope(name, "recall_evaluate",
[ground_truth, prediction]):
ground_truth = tf.cast(
x=tf.convert_to_tensor(value=ground_truth), dtype=tf.int32)
prediction = tf.convert_to_tensor(value=prediction)
shape.compare_batch_dimensions(
tensors=(ground_truth, prediction),
tensor_names=("ground_truth", "prediction"),
last_axes=-1,
broadcast_compatible=True)
prediction = prediction_to_category_function(prediction)
if classes is None:
num_classes = tf.math.maximum(
tf.math.reduce_max(input_tensor=ground_truth),
tf.math.reduce_max(input_tensor=prediction)) + 1
classes = tf.range(num_classes)
else:
classes = tf.convert_to_tensor(value=classes)
# Make sure classes is a tensor of rank 1.
classes = tf.reshape(classes, [1]) if tf.rank(classes) == 0 else classes
# Create a confusion matrix for each of the classes (with dimensions
# [A1, ..., An, C, N]).
classes = tf.expand_dims(classes, -1)
ground_truth_per_class = tf.equal(tf.expand_dims(ground_truth, -2), classes)
prediction_per_class = tf.equal(tf.expand_dims(prediction, -2), classes)
    # Calculate the recall for each of the classes.
true_positives = tf.math.reduce_sum(
input_tensor=tf.cast(
x=tf.math.logical_and(ground_truth_per_class, prediction_per_class),
dtype=tf.float32),
axis=-1)
total_ground_truth_positives = tf.math.reduce_sum(
input_tensor=tf.cast(x=ground_truth_per_class, dtype=tf.float32),
axis=-1)
recall_per_class = safe_ops.safe_signed_div(true_positives,
total_ground_truth_positives)
if reduce_average:
return tf.math.reduce_mean(input_tensor=recall_per_class, axis=-1)
else:
return recall_per_class
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| -1 |
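A minimal, hand-checked sketch of the metric above (the labels are invented): with ground truth [1, 1, 0, 0] and predictions [1, 0, 0, 0], class 0 is fully recovered and class 1 only half, so the per-class recalls are [1.0, 0.5] and their mean is 0.75.

import numpy as np
from tensorflow_graphics.nn.metric import recall

ground_truth = np.array([1, 1, 0, 0], dtype=np.int32)
prediction = np.array([1, 0, 0, 0], dtype=np.int32)

per_class = recall.evaluate(ground_truth, prediction, reduce_average=False)  # [1.0, 0.5]
averaged = recall.evaluate(ground_truth, prediction)                         # 0.75
print(per_class.numpy(), averaged.numpy())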
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/projects/cvxnet/lib/utils.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utility functions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
from os import path
import numpy as np
import scipy as sp
from skimage import measure
import tensorflow.compat.v1 as tf
from tensorflow_graphics.projects.cvxnet.lib import datasets
from tensorflow_graphics.projects.cvxnet.lib import models
from tensorflow_graphics.projects.cvxnet.lib.libmise import mise
import trimesh
Stats = collections.namedtuple("Stats", ["iou", "chamfer", "fscore"])
SYSNET_CLASSES = {
"02691156": "airplane",
"02933112": "cabinet",
"03001627": "chair",
"03636649": "lamp",
"04090263": "rifle",
"04379243": "table",
"04530566": "watercraft",
"02828884": "bench",
"02958343": "car",
"03211117": "display",
"03691459": "speaker",
"04256520": "sofa",
"04401088": "telephone",
"all": "all",
}
def define_flags():
"""Define command line flags."""
flags = tf.app.flags
# Model flags
flags.DEFINE_enum("model", "multiconvex",
list(k for k in models.model_dict.keys()),
"Name of the model.")
flags.DEFINE_float("sharpness", 75., "Sharpness term.")
flags.DEFINE_integer("n_parts", 50, "Number of convexes uesd.")
flags.DEFINE_integer("n_half_planes", 25, "Number of half spaces used.")
flags.DEFINE_integer("latent_size", 256, "The size of latent code.")
flags.DEFINE_integer("dims", 3, "The dimension of query points.")
flags.DEFINE_bool("image_input", False, "Use color images as input if True.")
flags.DEFINE_float("vis_scale", 1.3,
"Scale of bbox used when extracting meshes.")
flags.DEFINE_float("level_set", 0.5,
"Level set used for extracting surfaces.")
# Dataset flags
flags.DEFINE_enum("dataset", "shapenet",
list(k for k in datasets.dataset_dict.keys()),
"Name of the dataset.")
flags.DEFINE_integer("image_h", 137, "The height of the color images.")
flags.DEFINE_integer("image_w", 137, "The width of the color images.")
flags.DEFINE_integer("image_d", 3, "The channels of color images.")
flags.DEFINE_integer("depth_h", 224, "The height of depth images.")
flags.DEFINE_integer("depth_w", 224, "The width of depth images.")
flags.DEFINE_integer("depth_d", 20, "The number of depth views.")
flags.DEFINE_integer("n_views", 24, "The number of color images views.")
flags.DEFINE_string("data_dir", None, "The base directory to load data from.")
flags.mark_flag_as_required("data_dir")
flags.DEFINE_string("obj_class", "*", "Object class used from dataset.")
# Training flags
flags.DEFINE_float("lr", 1e-4, "Start learning rate.")
flags.DEFINE_string(
"train_dir", None, "The base directory to save training info and"
"checkpoints.")
flags.DEFINE_integer("save_every", 20000,
"The number of steps to save checkpoint.")
flags.DEFINE_integer("max_steps", 800000, "The number of steps of training.")
flags.DEFINE_integer("batch_size", 32, "Batch size.")
flags.DEFINE_integer("sample_bbx", 1024,
"The number of bounding box sample points.")
flags.DEFINE_integer("sample_surf", 1024,
"The number of surface sample points.")
flags.DEFINE_float("weight_overlap", 0.1, "Weight of overlap_loss")
flags.DEFINE_float("weight_balance", 0.01, "Weight of balance_loss")
flags.DEFINE_float("weight_center", 0.001, "Weight of center_loss")
flags.mark_flag_as_required("train_dir")
# Eval flags
flags.DEFINE_bool("extract_mesh", False,
"Extract meshes and set to disk if True.")
flags.DEFINE_bool("surface_metrics", False,
"Measure surface metrics and save to csv if True.")
flags.DEFINE_string("mesh_dir", None, "Path to load ground truth meshes.")
flags.DEFINE_string("trans_dir", None,
"Path to load pred-to-target transformations.")
flags.DEFINE_bool("eval_once", False, "Evaluate the model only once if True.")
def mesh_name_helper(name):
name = name[0].decode("utf-8")
split = name.find("-")
cls_name = name[:split]
obj_name = name[split + 1:]
return cls_name, obj_name
def extract_mesh(input_val, params, indicators, input_holder, params_holder,
points_holder, sess, args):
"""Extracting meshes from an indicator function.
Args:
input_val: np.array, [1, height, width, channel], input image.
params: tf.Operation, hyperplane parameter hook.
indicators: tf.Operation, indicator hook.
input_holder: tf.Placeholder, input image placeholder.
params_holder: tf.Placeholder, hyperplane parameter placeholder.
points_holder: tf.Placeholder, query point placeholder.
sess: tf.Session, running sess.
args: tf.app.flags.FLAGS, configurations.
Returns:
mesh: trimesh.Trimesh, the extracted mesh.
"""
mesh_extractor = mise.MISE(64, 1, args.level_set)
points = mesh_extractor.query()
params_val = sess.run(params, {input_holder: input_val})
while points.shape[0] != 0:
orig_points = points
points = points.astype(np.float32)
points = (
(np.expand_dims(points, axis=0) / mesh_extractor.resolution - 0.5) *
args.vis_scale)
n_points = points.shape[1]
values = []
for i in range(0, n_points, 100000): # Add this to prevent OOM.
value = sess.run(indicators, {
params_holder: params_val,
points_holder: points[:, i:i + 100000]
})
values.append(value)
values = np.concatenate(values, axis=1)
values = values[0, :, 0].astype(np.float64)
mesh_extractor.update(orig_points, values)
points = mesh_extractor.query()
value_grid = mesh_extractor.to_dense()
value_grid = np.pad(value_grid, 1, "constant", constant_values=-1e6)
verts, faces, normals, unused_var = measure.marching_cubes_lewiner(
value_grid, min(args.level_set,
value_grid.max() * 0.75))
del normals
verts -= 1
verts /= np.array([
value_grid.shape[0] - 3, value_grid.shape[1] - 3, value_grid.shape[2] - 3
],
dtype=np.float32)
verts = args.vis_scale * (verts - 0.5)
faces = np.stack([faces[..., 1], faces[..., 0], faces[..., 2]], axis=-1)
return trimesh.Trimesh(vertices=verts, faces=faces)
def transform_mesh(mesh, name, trans_dir):
"""Transform mesh back to the same coordinate of ground truth.
Args:
mesh: trimesh.Trimesh, predicted mesh before transformation.
name: Tensor, hash name of the mesh as recorded in the dataset.
trans_dir: string, path to the directory for loading transformations.
Returns:
mesh: trimesh.Trimesh, the transformed mesh.
"""
if trans_dir is None:
raise ValueError("Need to specify args.trans_dir for loading pred-to-target"
"transformations.")
cls_name, obj_name = mesh_name_helper(name)
with tf.io.gfile.GFile(
path.join(trans_dir, "test", cls_name, obj_name, "occnet_to_gaps.txt"),
"r") as fin:
tx = np.loadtxt(fin).reshape([4, 4])
mesh.apply_transform(np.linalg.inv(tx))
return mesh
def save_mesh(mesh, name, eval_dir):
"""Save a mesh to disk.
Args:
mesh: trimesh.Trimesh, the mesh to save.
name: Tensor, hash name of the mesh as recorded in the dataset.
eval_dir: string, path to the directory to save the mesh.
"""
cls_name, obj_name = mesh_name_helper(name)
cls_dir = path.join(eval_dir, "meshes", cls_name)
if not tf.io.gfile.isdir(cls_dir):
tf.io.gfile.makedirs(cls_dir)
with tf.io.gfile.GFile(path.join(cls_dir, obj_name + ".obj"), "w") as fout:
mesh.export(fout, file_type="obj")
def distance_field_helper(source, target):
target_kdtree = sp.spatial.cKDTree(target)
distances, unused_var = target_kdtree.query(source, n_jobs=-1)
return distances
def compute_surface_metrics(mesh, name, mesh_dir):
"""Compute surface metrics (chamfer distance and f-score) for one example.
Args:
mesh: trimesh.Trimesh, the mesh to evaluate.
name: Tensor, hash name of the mesh as recorded in the dataset.
mesh_dir: string, path to the directory for loading ground truth meshes.
Returns:
chamfer: float, chamfer distance.
fscore: float, f-score.
"""
if mesh_dir is None:
raise ValueError("Need to specify args.mesh_dir for loading ground truth.")
cls_name, obj_name = mesh_name_helper(name)
with tf.io.gfile.GFile(
path.join(mesh_dir, "test", cls_name, obj_name, "model_occnet.ply"),
"rb",
) as fin:
mesh_gt = trimesh.Trimesh(**trimesh.exchange.ply.load_ply(fin))
# Chamfer
eval_points = 100000
point_gt = mesh_gt.sample(eval_points)
point_gt = point_gt.astype(np.float32)
point_pred = mesh.sample(eval_points)
point_pred = point_pred.astype(np.float32)
pred_to_gt = distance_field_helper(point_pred, point_gt)
gt_to_pred = distance_field_helper(point_gt, point_pred)
chamfer = np.mean(pred_to_gt**2) + np.mean(gt_to_pred**2)
# Fscore
tau = 1e-4
eps = 1e-9
pred_to_gt = (pred_to_gt**2)
gt_to_pred = (gt_to_pred**2)
prec_tau = (pred_to_gt <= tau).astype(np.float32).mean() * 100.
recall_tau = (gt_to_pred <= tau).astype(np.float32).mean() * 100.
fscore = (2 * prec_tau * recall_tau) / max(prec_tau + recall_tau, eps)
# Following the tradition to scale chamfer distance up by 10.
return chamfer * 100., fscore
def init_stats():
"""Initialize evaluation stats."""
stats = {}
for k in SYSNET_CLASSES:
stats[k] = {
"cnt": 0,
"iou": 0.,
"chamfer": 0.,
"fscore": 0.,
}
return stats
def update_stats(example_stats, name, shapenet_stats):
"""Update evaluation statistics.
Args:
example_stats: Stats, the stats of one example.
name: Tensor, hash name of the example as recorded in the dataset.
shapenet_stats: dict, the current stats of the whole dataset.
"""
cls_name, unused_var = mesh_name_helper(name)
shapenet_stats[cls_name]["cnt"] += 1
shapenet_stats[cls_name]["iou"] += example_stats.iou
shapenet_stats[cls_name]["chamfer"] += example_stats.chamfer
shapenet_stats[cls_name]["fscore"] += example_stats.fscore
shapenet_stats["all"]["cnt"] += 1
shapenet_stats["all"]["iou"] += example_stats.iou
shapenet_stats["all"]["chamfer"] += example_stats.chamfer
shapenet_stats["all"]["fscore"] += example_stats.fscore
def average_stats(shapenet_stats):
"""Average the accumulated stats of the whole dataset."""
for k, v in shapenet_stats.items():
cnt = max(v["cnt"], 1)
shapenet_stats[k] = {
"iou": v["iou"] / cnt,
"chamfer": v["chamfer"] / cnt,
"fscore": v["fscore"] / cnt,
}
def write_stats(stats, eval_dir, step):
"""Write stats of the dataset to disk.
Args:
stats: dict, statistics to save.
eval_dir: string, path to the directory to save the statistics.
step: int, the global step of the checkpoint.
"""
if not tf.io.gfile.isdir(eval_dir):
tf.io.gfile.makedirs(eval_dir)
with tf.io.gfile.GFile(path.join(eval_dir, "stats_{}.csv".format(step)),
"w") as fout:
fout.write("class,iou,chamfer,fscore\n")
for k in sorted(stats.keys()):
if k == "all":
continue
fout.write("{0},{1},{2},{3}\n".format(
SYSNET_CLASSES[k],
stats[k]["iou"],
stats[k]["chamfer"],
stats[k]["fscore"],
))
fout.write("all,{0},{1},{2}".format(
stats["all"]["iou"],
stats["all"]["chamfer"],
stats["all"]["fscore"],
))
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Utility functions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
from os import path
import numpy as np
import scipy as sp
from skimage import measure
import tensorflow.compat.v1 as tf
from tensorflow_graphics.projects.cvxnet.lib import datasets
from tensorflow_graphics.projects.cvxnet.lib import models
from tensorflow_graphics.projects.cvxnet.lib.libmise import mise
import trimesh
Stats = collections.namedtuple("Stats", ["iou", "chamfer", "fscore"])
SYSNET_CLASSES = {
"02691156": "airplane",
"02933112": "cabinet",
"03001627": "chair",
"03636649": "lamp",
"04090263": "rifle",
"04379243": "table",
"04530566": "watercraft",
"02828884": "bench",
"02958343": "car",
"03211117": "display",
"03691459": "speaker",
"04256520": "sofa",
"04401088": "telephone",
"all": "all",
}
def define_flags():
"""Define command line flags."""
flags = tf.app.flags
# Model flags
flags.DEFINE_enum("model", "multiconvex",
list(k for k in models.model_dict.keys()),
"Name of the model.")
flags.DEFINE_float("sharpness", 75., "Sharpness term.")
flags.DEFINE_integer("n_parts", 50, "Number of convexes uesd.")
flags.DEFINE_integer("n_half_planes", 25, "Number of half spaces used.")
flags.DEFINE_integer("latent_size", 256, "The size of latent code.")
flags.DEFINE_integer("dims", 3, "The dimension of query points.")
flags.DEFINE_bool("image_input", False, "Use color images as input if True.")
flags.DEFINE_float("vis_scale", 1.3,
"Scale of bbox used when extracting meshes.")
flags.DEFINE_float("level_set", 0.5,
"Level set used for extracting surfaces.")
# Dataset flags
flags.DEFINE_enum("dataset", "shapenet",
list(k for k in datasets.dataset_dict.keys()),
"Name of the dataset.")
flags.DEFINE_integer("image_h", 137, "The height of the color images.")
flags.DEFINE_integer("image_w", 137, "The width of the color images.")
flags.DEFINE_integer("image_d", 3, "The channels of color images.")
flags.DEFINE_integer("depth_h", 224, "The height of depth images.")
flags.DEFINE_integer("depth_w", 224, "The width of depth images.")
flags.DEFINE_integer("depth_d", 20, "The number of depth views.")
flags.DEFINE_integer("n_views", 24, "The number of color images views.")
flags.DEFINE_string("data_dir", None, "The base directory to load data from.")
flags.mark_flag_as_required("data_dir")
flags.DEFINE_string("obj_class", "*", "Object class used from dataset.")
# Training flags
flags.DEFINE_float("lr", 1e-4, "Start learning rate.")
flags.DEFINE_string(
"train_dir", None, "The base directory to save training info and"
"checkpoints.")
flags.DEFINE_integer("save_every", 20000,
"The number of steps to save checkpoint.")
flags.DEFINE_integer("max_steps", 800000, "The number of steps of training.")
flags.DEFINE_integer("batch_size", 32, "Batch size.")
flags.DEFINE_integer("sample_bbx", 1024,
"The number of bounding box sample points.")
flags.DEFINE_integer("sample_surf", 1024,
"The number of surface sample points.")
flags.DEFINE_float("weight_overlap", 0.1, "Weight of overlap_loss")
flags.DEFINE_float("weight_balance", 0.01, "Weight of balance_loss")
flags.DEFINE_float("weight_center", 0.001, "Weight of center_loss")
flags.mark_flag_as_required("train_dir")
# Eval flags
flags.DEFINE_bool("extract_mesh", False,
"Extract meshes and set to disk if True.")
flags.DEFINE_bool("surface_metrics", False,
"Measure surface metrics and save to csv if True.")
flags.DEFINE_string("mesh_dir", None, "Path to load ground truth meshes.")
flags.DEFINE_string("trans_dir", None,
"Path to load pred-to-target transformations.")
flags.DEFINE_bool("eval_once", False, "Evaluate the model only once if True.")
def mesh_name_helper(name):
name = name[0].decode("utf-8")
split = name.find("-")
cls_name = name[:split]
obj_name = name[split + 1:]
return cls_name, obj_name
def extract_mesh(input_val, params, indicators, input_holder, params_holder,
points_holder, sess, args):
"""Extracting meshes from an indicator function.
Args:
input_val: np.array, [1, height, width, channel], input image.
params: tf.Operation, hyperplane parameter hook.
indicators: tf.Operation, indicator hook.
input_holder: tf.Placeholder, input image placeholder.
params_holder: tf.Placeholder, hyperplane parameter placeholder.
points_holder: tf.Placeholder, query point placeholder.
sess: tf.Session, running sess.
args: tf.app.flags.FLAGS, configurations.
Returns:
mesh: trimesh.Trimesh, the extracted mesh.
"""
mesh_extractor = mise.MISE(64, 1, args.level_set)
points = mesh_extractor.query()
params_val = sess.run(params, {input_holder: input_val})
while points.shape[0] != 0:
orig_points = points
points = points.astype(np.float32)
points = (
(np.expand_dims(points, axis=0) / mesh_extractor.resolution - 0.5) *
args.vis_scale)
n_points = points.shape[1]
values = []
for i in range(0, n_points, 100000): # Add this to prevent OOM.
value = sess.run(indicators, {
params_holder: params_val,
points_holder: points[:, i:i + 100000]
})
values.append(value)
values = np.concatenate(values, axis=1)
values = values[0, :, 0].astype(np.float64)
mesh_extractor.update(orig_points, values)
points = mesh_extractor.query()
value_grid = mesh_extractor.to_dense()
value_grid = np.pad(value_grid, 1, "constant", constant_values=-1e6)
verts, faces, normals, unused_var = measure.marching_cubes_lewiner(
value_grid, min(args.level_set,
value_grid.max() * 0.75))
del normals
verts -= 1
verts /= np.array([
value_grid.shape[0] - 3, value_grid.shape[1] - 3, value_grid.shape[2] - 3
],
dtype=np.float32)
verts = args.vis_scale * (verts - 0.5)
faces = np.stack([faces[..., 1], faces[..., 0], faces[..., 2]], axis=-1)
return trimesh.Trimesh(vertices=verts, faces=faces)
def transform_mesh(mesh, name, trans_dir):
"""Transform mesh back to the same coordinate of ground truth.
Args:
mesh: trimesh.Trimesh, predicted mesh before transformation.
name: Tensor, hash name of the mesh as recorded in the dataset.
trans_dir: string, path to the directory for loading transformations.
Returns:
mesh: trimesh.Trimesh, the transformed mesh.
"""
if trans_dir is None:
raise ValueError("Need to specify args.trans_dir for loading pred-to-target"
"transformations.")
cls_name, obj_name = mesh_name_helper(name)
with tf.io.gfile.GFile(
path.join(trans_dir, "test", cls_name, obj_name, "occnet_to_gaps.txt"),
"r") as fin:
tx = np.loadtxt(fin).reshape([4, 4])
mesh.apply_transform(np.linalg.inv(tx))
return mesh
def save_mesh(mesh, name, eval_dir):
"""Save a mesh to disk.
Args:
mesh: trimesh.Trimesh, the mesh to save.
name: Tensor, hash name of the mesh as recorded in the dataset.
eval_dir: string, path to the directory to save the mesh.
"""
cls_name, obj_name = mesh_name_helper(name)
cls_dir = path.join(eval_dir, "meshes", cls_name)
if not tf.io.gfile.isdir(cls_dir):
tf.io.gfile.makedirs(cls_dir)
with tf.io.gfile.GFile(path.join(cls_dir, obj_name + ".obj"), "w") as fout:
mesh.export(fout, file_type="obj")
def distance_field_helper(source, target):
target_kdtree = sp.spatial.cKDTree(target)
distances, unused_var = target_kdtree.query(source, n_jobs=-1)
return distances
def compute_surface_metrics(mesh, name, mesh_dir):
"""Compute surface metrics (chamfer distance and f-score) for one example.
Args:
mesh: trimesh.Trimesh, the mesh to evaluate.
name: Tensor, hash name of the mesh as recorded in the dataset.
mesh_dir: string, path to the directory for loading ground truth meshes.
Returns:
chamfer: float, chamfer distance.
fscore: float, f-score.
"""
if mesh_dir is None:
raise ValueError("Need to specify args.mesh_dir for loading ground truth.")
cls_name, obj_name = mesh_name_helper(name)
with tf.io.gfile.GFile(
path.join(mesh_dir, "test", cls_name, obj_name, "model_occnet.ply"),
"rb",
) as fin:
mesh_gt = trimesh.Trimesh(**trimesh.exchange.ply.load_ply(fin))
# Chamfer
eval_points = 100000
point_gt = mesh_gt.sample(eval_points)
point_gt = point_gt.astype(np.float32)
point_pred = mesh.sample(eval_points)
point_pred = point_pred.astype(np.float32)
pred_to_gt = distance_field_helper(point_pred, point_gt)
gt_to_pred = distance_field_helper(point_gt, point_pred)
chamfer = np.mean(pred_to_gt**2) + np.mean(gt_to_pred**2)
# Fscore
tau = 1e-4
eps = 1e-9
pred_to_gt = (pred_to_gt**2)
gt_to_pred = (gt_to_pred**2)
prec_tau = (pred_to_gt <= tau).astype(np.float32).mean() * 100.
recall_tau = (gt_to_pred <= tau).astype(np.float32).mean() * 100.
fscore = (2 * prec_tau * recall_tau) / max(prec_tau + recall_tau, eps)
# Following the tradition to scale chamfer distance up by 10.
return chamfer * 100., fscore
def init_stats():
"""Initialize evaluation stats."""
stats = {}
for k in SYSNET_CLASSES:
stats[k] = {
"cnt": 0,
"iou": 0.,
"chamfer": 0.,
"fscore": 0.,
}
return stats
def update_stats(example_stats, name, shapenet_stats):
"""Update evaluation statistics.
Args:
example_stats: Stats, the stats of one example.
name: Tensor, hash name of the example as recorded in the dataset.
shapenet_stats: dict, the current stats of the whole dataset.
"""
cls_name, unused_var = mesh_name_helper(name)
shapenet_stats[cls_name]["cnt"] += 1
shapenet_stats[cls_name]["iou"] += example_stats.iou
shapenet_stats[cls_name]["chamfer"] += example_stats.chamfer
shapenet_stats[cls_name]["fscore"] += example_stats.fscore
shapenet_stats["all"]["cnt"] += 1
shapenet_stats["all"]["iou"] += example_stats.iou
shapenet_stats["all"]["chamfer"] += example_stats.chamfer
shapenet_stats["all"]["fscore"] += example_stats.fscore
def average_stats(shapenet_stats):
"""Average the accumulated stats of the whole dataset."""
for k, v in shapenet_stats.items():
cnt = max(v["cnt"], 1)
shapenet_stats[k] = {
"iou": v["iou"] / cnt,
"chamfer": v["chamfer"] / cnt,
"fscore": v["fscore"] / cnt,
}
def write_stats(stats, eval_dir, step):
"""Write stats of the dataset to disk.
Args:
stats: dict, statistics to save.
eval_dir: string, path to the directory to save the statistics.
step: int, the global step of the checkpoint.
"""
if not tf.io.gfile.isdir(eval_dir):
tf.io.gfile.makedirs(eval_dir)
with tf.io.gfile.GFile(path.join(eval_dir, "stats_{}.csv".format(step)),
"w") as fout:
fout.write("class,iou,chamfer,fscore\n")
for k in sorted(stats.keys()):
if k == "all":
continue
fout.write("{0},{1},{2},{3}\n".format(
SYSNET_CLASSES[k],
stats[k]["iou"],
stats[k]["chamfer"],
stats[k]["fscore"],
))
fout.write("all,{0},{1},{2}".format(
stats["all"]["iou"],
stats["all"]["chamfer"],
stats["all"]["fscore"],
))
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
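As a rough, hypothetical sketch of the migration pattern listed above (not the actual diff from this PR; the function and scope names below are made up for illustration), the TF1-style calls map onto their TF2 counterparts roughly as follows:

# Hypothetical before/after sketch of the API migration described above.
import tensorflow as tf

def check_last_dim_v1(tensor):
  # TF1-style code of the kind being removed.
  with tf.compat.v1.name_scope("check_last_dim", values=[tensor]):
    dim = tf.compat.v1.dimension_value(tensor.shape[-1])
    return tf.compat.v1.assert_equal(dim, 3)

def check_last_dim_v2(tensor):
  # Equivalent TF2-style code of the kind being introduced.
  with tf.name_scope("check_last_dim"):
    dim = tf.compat.dimension_value(tensor.shape[-1])
    return tf.debugging.assert_equal(dim, 3)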
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/geometry/transformation/tests/rotation_matrix_3d_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for 3d rotation matrix."""
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import axis_angle
from tensorflow_graphics.geometry.transformation import quaternion
from tensorflow_graphics.geometry.transformation import rotation_matrix_3d
from tensorflow_graphics.geometry.transformation.tests import test_data as td
from tensorflow_graphics.geometry.transformation.tests import test_helpers
from tensorflow_graphics.util import test_case
class RotationMatrix3dTest(test_case.TestCase):
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_assert_rotation_matrix_normalized_passthrough(self):
"""Checks that the assert is a passthrough when the flag is False."""
angles = test_helpers.generate_preset_test_euler_angles()
matrix_input = rotation_matrix_3d.from_euler(angles)
matrix_output = rotation_matrix_3d.assert_rotation_matrix_normalized(
matrix_input)
self.assertTrue(matrix_input is matrix_output) # pylint: disable=g-generic-assert
@parameterized.parameters((np.float32), (np.float64))
def test_assert_rotation_matrix_normalized_preset(self, dtype):
"""Checks that assert_normalized function works as expected."""
angles = test_helpers.generate_preset_test_euler_angles().astype(dtype)
matrix = rotation_matrix_3d.from_euler(angles)
matrix_rescaled = matrix * 1.01
matrix_normalized = rotation_matrix_3d.assert_rotation_matrix_normalized(
matrix)
self.evaluate(matrix_normalized)
with self.assertRaises(tf.errors.InvalidArgumentError): # pylint: disable=g-error-prone-assert-raises
self.evaluate(rotation_matrix_3d.assert_rotation_matrix_normalized(
matrix_rescaled))
@parameterized.parameters(
((3, 3),),
((None, 3, 3),),
)
def test_assert_rotation_matrix_normalized_exception_not_raised(
self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(
rotation_matrix_3d.assert_rotation_matrix_normalized, shapes)
@parameterized.parameters(
("must have a rank greater than 1", (3,)),
("must have exactly 3 dimensions in axis -1", (3, None)),
("must have exactly 3 dimensions in axis -2", (None, 3)),
)
def test_assert_rotation_matrix_normalized_exception_raised(
self, error_msg, *shapes):
"""Tests that the shape exceptions are raised."""
self.assert_exception_is_raised(
rotation_matrix_3d.assert_rotation_matrix_normalized, error_msg, shapes)
@parameterized.parameters(
((3,), (1,)),
((None, 3), (None, 1)),
((1, 3), (1, 1)),
((2, 3), (2, 1)),
((1, 3), (1,)),
((3,), (1, 1)),
)
def test_from_axis_angle_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(rotation_matrix_3d.from_axis_angle,
shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (None,), (1,)),
("must have exactly 1 dimensions in axis -1", (3,), (None,)),
)
def test_from_axis_angle_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(rotation_matrix_3d.from_axis_angle,
error_msg, shapes)
def test_from_axis_angle_normalized_preset(self):
"""Tests that axis-angles can be converted to rotation matrices."""
euler_angles = test_helpers.generate_preset_test_euler_angles()
axis, angle = axis_angle.from_euler(euler_angles)
matrix_axis_angle = rotation_matrix_3d.from_axis_angle(axis, angle)
self.assertAllEqual(
rotation_matrix_3d.is_valid(matrix_axis_angle),
np.ones(euler_angles.shape[0:-1] + (1,)))
def test_from_axis_angle_normalized_random(self):
"""Tests that axis-angles can be converted to rotation matrices."""
tensor_shape = np.random.randint(1, 10, size=np.random.randint(3)).tolist()
random_axis = np.random.normal(size=tensor_shape + [3])
random_axis /= np.linalg.norm(random_axis, axis=-1, keepdims=True)
random_angle = np.random.normal(size=tensor_shape + [1])
matrix_axis_angle = rotation_matrix_3d.from_axis_angle(
random_axis, random_angle)
self.assertAllEqual(
rotation_matrix_3d.is_valid(matrix_axis_angle),
np.ones(tensor_shape + [1]))
@parameterized.parameters(
((td.AXIS_3D_X, td.ANGLE_45), (td.MAT_3D_X_45,)),
((td.AXIS_3D_Y, td.ANGLE_45), (td.MAT_3D_Y_45,)),
((td.AXIS_3D_Z, td.ANGLE_45), (td.MAT_3D_Z_45,)),
((td.AXIS_3D_X, td.ANGLE_90), (td.MAT_3D_X_90,)),
((td.AXIS_3D_Y, td.ANGLE_90), (td.MAT_3D_Y_90,)),
((td.AXIS_3D_Z, td.ANGLE_90), (td.MAT_3D_Z_90,)),
((td.AXIS_3D_X, td.ANGLE_180), (td.MAT_3D_X_180,)),
((td.AXIS_3D_Y, td.ANGLE_180), (td.MAT_3D_Y_180,)),
((td.AXIS_3D_Z, td.ANGLE_180), (td.MAT_3D_Z_180,)),
)
def test_from_axis_angle_preset(self, test_inputs, test_outputs):
"""Tests that an axis-angle maps to correct matrix."""
self.assert_output_is_correct(rotation_matrix_3d.from_axis_angle,
test_inputs, test_outputs)
def test_from_axis_angle_random(self):
"""Tests conversion to matrix."""
tensor_shape = np.random.randint(1, 10, size=np.random.randint(3)).tolist()
random_axis = np.random.normal(size=tensor_shape + [3])
random_axis /= np.linalg.norm(random_axis, axis=-1, keepdims=True)
random_angle = np.random.normal(size=tensor_shape + [1])
matrix_axis_angle = rotation_matrix_3d.from_axis_angle(
random_axis, random_angle)
random_quaternion = quaternion.from_axis_angle(random_axis, random_angle)
matrix_quaternion = rotation_matrix_3d.from_quaternion(random_quaternion)
self.assertAllClose(matrix_axis_angle, matrix_quaternion, rtol=1e-3)
# Checks that resulting rotation matrices are normalized.
self.assertAllEqual(
rotation_matrix_3d.is_valid(matrix_axis_angle),
np.ones(tensor_shape + [1]))
@parameterized.parameters(
((td.AXIS_3D_X, td.ANGLE_90, td.AXIS_3D_X), (td.AXIS_3D_X,)),
((td.AXIS_3D_X, td.ANGLE_90, td.AXIS_3D_Y), (td.AXIS_3D_Z,)),
((td.AXIS_3D_X, -td.ANGLE_90, td.AXIS_3D_Z), (td.AXIS_3D_Y,)),
((td.AXIS_3D_Y, -td.ANGLE_90, td.AXIS_3D_X), (td.AXIS_3D_Z,)),
((td.AXIS_3D_Y, td.ANGLE_90, td.AXIS_3D_Y), (td.AXIS_3D_Y,)),
((td.AXIS_3D_Y, td.ANGLE_90, td.AXIS_3D_Z), (td.AXIS_3D_X,)),
((td.AXIS_3D_Z, td.ANGLE_90, td.AXIS_3D_X), (td.AXIS_3D_Y,)),
((td.AXIS_3D_Z, -td.ANGLE_90, td.AXIS_3D_Y), (td.AXIS_3D_X,)),
((td.AXIS_3D_Z, td.ANGLE_90, td.AXIS_3D_Z), (td.AXIS_3D_Z,)),
)
def test_from_axis_angle_rotate_vector_preset(self, test_inputs,
test_outputs):
"""Tests the directionality of axis-angle rotations."""
def func(axis, angle, point):
matrix = rotation_matrix_3d.from_axis_angle(axis, angle)
return rotation_matrix_3d.rotate(point, matrix)
self.assert_output_is_correct(func, test_inputs, test_outputs)
@parameterized.parameters(
((3,),),
((None, 3),),
((2, 3),),
)
def test_from_euler_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(rotation_matrix_3d.from_euler, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (None,)),)
def test_from_euler_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(rotation_matrix_3d.from_euler, error_msg,
shapes)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_from_euler_jacobian_preset(self):
"""Test the Jacobian of the from_euler function."""
x_init = test_helpers.generate_preset_test_euler_angles()
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.from_euler, [x_init])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_from_euler_jacobian_random(self):
"""Test the Jacobian of the from_euler function."""
x_init = test_helpers.generate_random_test_euler_angles()
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.from_euler, [x_init])
def test_from_euler_normalized_preset(self):
"""Tests that euler angles can be converted to rotation matrices."""
euler_angles = test_helpers.generate_preset_test_euler_angles()
matrix = rotation_matrix_3d.from_euler(euler_angles)
self.assertAllEqual(
rotation_matrix_3d.is_valid(matrix),
np.ones(euler_angles.shape[0:-1] + (1,)))
def test_from_euler_normalized_random(self):
"""Tests that euler angles can be converted to rotation matrices."""
random_euler_angles = test_helpers.generate_random_test_euler_angles()
matrix = rotation_matrix_3d.from_euler(random_euler_angles)
self.assertAllEqual(
rotation_matrix_3d.is_valid(matrix),
np.ones(random_euler_angles.shape[0:-1] + (1,)))
@parameterized.parameters(
((td.AXIS_3D_0,), (td.MAT_3D_ID,)),
((td.ANGLE_45 * td.AXIS_3D_X,), (td.MAT_3D_X_45,)),
((td.ANGLE_45 * td.AXIS_3D_Y,), (td.MAT_3D_Y_45,)),
((td.ANGLE_45 * td.AXIS_3D_Z,), (td.MAT_3D_Z_45,)),
((td.ANGLE_90 * td.AXIS_3D_X,), (td.MAT_3D_X_90,)),
((td.ANGLE_90 * td.AXIS_3D_Y,), (td.MAT_3D_Y_90,)),
((td.ANGLE_90 * td.AXIS_3D_Z,), (td.MAT_3D_Z_90,)),
((td.ANGLE_180 * td.AXIS_3D_X,), (td.MAT_3D_X_180,)),
((td.ANGLE_180 * td.AXIS_3D_Y,), (td.MAT_3D_Y_180,)),
((td.ANGLE_180 * td.AXIS_3D_Z,), (td.MAT_3D_Z_180,)),
)
def test_from_euler_preset(self, test_inputs, test_outputs):
"""Tests that Euler angles create the expected matrix."""
self.assert_output_is_correct(rotation_matrix_3d.from_euler, test_inputs,
test_outputs)
def test_from_euler_random(self):
"""Tests that Euler angles produce the same result as axis-angle."""
angles = test_helpers.generate_random_test_euler_angles()
matrix = rotation_matrix_3d.from_euler(angles)
tensor_tile = angles.shape[:-1]
x_axis = np.tile(td.AXIS_3D_X, tensor_tile + (1,))
y_axis = np.tile(td.AXIS_3D_Y, tensor_tile + (1,))
z_axis = np.tile(td.AXIS_3D_Z, tensor_tile + (1,))
x_angle = np.expand_dims(angles[..., 0], axis=-1)
y_angle = np.expand_dims(angles[..., 1], axis=-1)
z_angle = np.expand_dims(angles[..., 2], axis=-1)
x_rotation = rotation_matrix_3d.from_axis_angle(x_axis, x_angle)
y_rotation = rotation_matrix_3d.from_axis_angle(y_axis, y_angle)
z_rotation = rotation_matrix_3d.from_axis_angle(z_axis, z_angle)
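    # The reference matrix composes the per-axis rotations as Rz * Ry * Rx.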
expected_matrix = tf.matmul(z_rotation, tf.matmul(y_rotation, x_rotation))
self.assertAllClose(expected_matrix, matrix, rtol=1e-3)
@parameterized.parameters(
((3,),),
((None, 3),),
)
def test_from_euler_with_small_angles_approximation_exception_not_raised(
self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(
rotation_matrix_3d.from_euler_with_small_angles_approximation, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (None,)),)
def test_from_euler_with_small_angles_approximation_exception_raised(
self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(
rotation_matrix_3d.from_euler_with_small_angles_approximation,
error_msg, shapes)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_from_euler_with_small_angles_approximation_jacobian_random(self):
"""Test the Jacobian of from_euler_with_small_angles_approximation."""
x_init = test_helpers.generate_random_test_euler_angles(
min_angle=-0.17, max_angle=0.17)
self.assert_jacobian_is_correct_fn(
rotation_matrix_3d.from_euler_with_small_angles_approximation, [x_init])
def test_from_euler_with_small_angles_approximation_random(self):
"""Tests small_angles approximation by comparing to exact calculation."""
# Only generate small angles. For a test tolerance of 1e-3, 0.16 was found
# empirically to be the range where the small angle approximation works.
random_euler_angles = test_helpers.generate_random_test_euler_angles(
min_angle=-0.16, max_angle=0.16)
exact_matrix = rotation_matrix_3d.from_euler(random_euler_angles)
approximate_matrix = (
rotation_matrix_3d.from_euler_with_small_angles_approximation(
random_euler_angles))
self.assertAllClose(exact_matrix, approximate_matrix, atol=1e-3)
@parameterized.parameters(
((4,),),
((None, 4),),
)
def test_from_quaternion_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(rotation_matrix_3d.from_quaternion,
shapes)
@parameterized.parameters(
("must have exactly 4 dimensions in axis -1", (None,)),)
def test_from_quaternion_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(rotation_matrix_3d.from_quaternion,
error_msg, shapes)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_from_quaternion_jacobian_preset(self):
"""Test the Jacobian of the from_quaternion function."""
x_init = test_helpers.generate_preset_test_quaternions()
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.from_quaternion,
[x_init])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_from_quaternion_jacobian_random(self):
"""Test the Jacobian of the from_quaternion function."""
x_init = test_helpers.generate_random_test_quaternions()
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.from_quaternion,
[x_init])
def test_from_quaternion_normalized_preset(self):
"""Tests that quaternions can be converted to rotation matrices."""
euler_angles = test_helpers.generate_preset_test_euler_angles()
quat = quaternion.from_euler(euler_angles)
matrix_quat = rotation_matrix_3d.from_quaternion(quat)
self.assertAllEqual(
rotation_matrix_3d.is_valid(matrix_quat),
np.ones(euler_angles.shape[0:-1] + (1,)))
def test_from_quaternion_normalized_random(self):
"""Tests that random quaternions can be converted to rotation matrices."""
random_quaternion = test_helpers.generate_random_test_quaternions()
tensor_shape = random_quaternion.shape[:-1]
random_matrix = rotation_matrix_3d.from_quaternion(random_quaternion)
self.assertAllEqual(
rotation_matrix_3d.is_valid(random_matrix),
np.ones(tensor_shape + (1,)))
def test_from_quaternion_preset(self):
"""Tests that a quaternion maps to correct matrix."""
preset_quaternions = test_helpers.generate_preset_test_quaternions()
preset_matrices = test_helpers.generate_preset_test_rotation_matrices_3d()
self.assertAllClose(preset_matrices,
rotation_matrix_3d.from_quaternion(preset_quaternions))
def test_from_quaternion_random(self):
"""Tests conversion to matrix."""
random_euler_angles = test_helpers.generate_random_test_euler_angles()
random_quaternions = quaternion.from_euler(random_euler_angles)
random_rotation_matrices = rotation_matrix_3d.from_euler(
random_euler_angles)
self.assertAllClose(random_rotation_matrices,
rotation_matrix_3d.from_quaternion(random_quaternions))
@parameterized.parameters(
((3, 3),),
((None, 3, 3),),
((2, 3, 3),),
)
def test_inverse_exception_not_raised(self, *shapes):
"""Checks the inputs of the rotate function."""
self.assert_exception_is_not_raised(rotation_matrix_3d.inverse, shapes)
@parameterized.parameters(
("must have a rank greater than 1", (3,)),
("must have exactly 3 dimensions in axis -1", (3, None)),
("must have exactly 3 dimensions in axis -2", (None, 3)),
)
def test_inverse_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(rotation_matrix_3d.inverse, error_msg,
shapes)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_inverse_jacobian_preset(self):
"""Test the Jacobian of the inverse function."""
x_init = test_helpers.generate_preset_test_rotation_matrices_3d()
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.inverse, [x_init])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_inverse_jacobian_random(self):
"""Test the Jacobian of the inverse function."""
x_init = test_helpers.generate_random_test_rotation_matrix_3d()
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.inverse, [x_init])
def test_inverse_normalized_random(self):
"""Checks that inverted rotation matrices are valid rotations."""
random_euler_angle = test_helpers.generate_random_test_euler_angles()
tensor_tile = random_euler_angle.shape[:-1]
random_matrix = rotation_matrix_3d.from_euler(random_euler_angle)
predicted_invert_random_matrix = rotation_matrix_3d.inverse(random_matrix)
self.assertAllEqual(
rotation_matrix_3d.is_valid(predicted_invert_random_matrix),
np.ones(tensor_tile + (1,)))
def test_inverse_random(self):
"""Checks that inverting rotated points results in no transformation."""
random_euler_angle = test_helpers.generate_random_test_euler_angles()
tensor_tile = random_euler_angle.shape[:-1]
random_matrix = rotation_matrix_3d.from_euler(random_euler_angle)
random_point = np.random.normal(size=tensor_tile + (3,))
rotated_random_points = rotation_matrix_3d.rotate(random_point,
random_matrix)
predicted_invert_random_matrix = rotation_matrix_3d.inverse(random_matrix)
predicted_invert_rotated_random_points = rotation_matrix_3d.rotate(
rotated_random_points, predicted_invert_random_matrix)
self.assertAllClose(
random_point, predicted_invert_rotated_random_points, rtol=1e-6)
@parameterized.parameters(
((3, 3),),
((None, 3, 3),),
((2, 3, 3),),
)
def test_is_valid_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(rotation_matrix_3d.is_valid, shapes)
@parameterized.parameters(
("must have a rank greater than 1", (3,)),
("must have exactly 3 dimensions in axis -1", (3, None)),
("must have exactly 3 dimensions in axis -2", (None, 3)),
)
def test_is_valid_exception_raised(self, error_msg, *shape):
"""Tests that the shape exceptions are raised."""
self.assert_exception_is_raised(rotation_matrix_3d.is_valid, error_msg,
shape)
def test_is_valid_random(self):
"""Tests that is_valid works as intended."""
random_euler_angle = test_helpers.generate_random_test_euler_angles()
tensor_tile = random_euler_angle.shape[:-1]
rotation_matrix = rotation_matrix_3d.from_euler(random_euler_angle)
pred_normalized = rotation_matrix_3d.is_valid(rotation_matrix)
with self.subTest(name="all_normalized"):
self.assertAllEqual(pred_normalized,
np.ones(shape=tensor_tile + (1,), dtype=bool))
with self.subTest(name="non_orthonormal"):
test_matrix = np.array([[2., 0., 0.], [0., 0.5, 0], [0., 0., 1.]])
pred_normalized = rotation_matrix_3d.is_valid(test_matrix)
self.assertAllEqual(pred_normalized, np.zeros(shape=(1,), dtype=bool))
with self.subTest(name="negative_orthonormal"):
test_matrix = np.array([[1., 0., 0.], [0., -1., 0.], [0., 0., 1.]])
pred_normalized = rotation_matrix_3d.is_valid(test_matrix)
self.assertAllEqual(pred_normalized, np.zeros(shape=(1,), dtype=bool))
@parameterized.parameters(
((3,), (3, 3)),
((None, 3), (None, 3, 3)),
((1, 3), (1, 3, 3)),
((2, 3), (2, 3, 3)),
((3,), (1, 3, 3)),
((1, 3), (3, 3)),
)
def test_rotate_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(rotation_matrix_3d.rotate, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (None,), (3, 3)),
("must have a rank greater than 1", (3,), (3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3, None)),
("must have exactly 3 dimensions in axis -2", (3,), (None, 3)),
)
def test_rotate_exception_raised(self, error_msg, *shapes):
"""Checks the inputs of the rotate function."""
self.assert_exception_is_raised(rotation_matrix_3d.rotate, error_msg,
shapes)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_rotate_jacobian_preset(self):
"""Test the Jacobian of the rotate function."""
x_matrix_init = test_helpers.generate_preset_test_rotation_matrices_3d()
tensor_shape = x_matrix_init.shape[:-1]
x_point_init = np.random.uniform(size=tensor_shape)
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.rotate,
[x_point_init, x_matrix_init])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_rotate_jacobian_random(self):
"""Test the Jacobian of the rotate function."""
x_matrix_init = test_helpers.generate_random_test_rotation_matrix_3d()
tensor_shape = x_matrix_init.shape[:-1]
x_point_init = np.random.uniform(size=tensor_shape)
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.rotate,
[x_point_init, x_matrix_init])
@parameterized.parameters(
((td.ANGLE_90 * td.AXIS_3D_X, td.AXIS_3D_X), (td.AXIS_3D_X,)),
((td.ANGLE_90 * td.AXIS_3D_X, td.AXIS_3D_Y), (td.AXIS_3D_Z,)),
((-td.ANGLE_90 * td.AXIS_3D_X, td.AXIS_3D_Z), (td.AXIS_3D_Y,)),
((-td.ANGLE_90 * td.AXIS_3D_Y, td.AXIS_3D_X), (td.AXIS_3D_Z,)),
((td.ANGLE_90 * td.AXIS_3D_Y, td.AXIS_3D_Y), (td.AXIS_3D_Y,)),
((td.ANGLE_90 * td.AXIS_3D_Y, td.AXIS_3D_Z), (td.AXIS_3D_X,)),
((td.ANGLE_90 * td.AXIS_3D_Z, td.AXIS_3D_X), (td.AXIS_3D_Y,)),
((-td.ANGLE_90 * td.AXIS_3D_Z, td.AXIS_3D_Y), (td.AXIS_3D_X,)),
((td.ANGLE_90 * td.AXIS_3D_Z, td.AXIS_3D_Z), (td.AXIS_3D_Z,)),
)
def test_rotate_vector_preset(self, test_inputs, test_outputs):
"""Tests that the rotate function produces the expected results."""
def func(angles, point):
matrix = rotation_matrix_3d.from_euler(angles)
return rotation_matrix_3d.rotate(point, matrix)
self.assert_output_is_correct(func, test_inputs, test_outputs)
def test_rotate_vs_rotate_quaternion_random(self):
"""Tests that the rotate provide the same results as quaternion.rotate."""
random_euler_angle = test_helpers.generate_random_test_euler_angles()
tensor_tile = random_euler_angle.shape[:-1]
random_matrix = rotation_matrix_3d.from_euler(random_euler_angle)
random_quaternion = quaternion.from_rotation_matrix(random_matrix)
random_point = np.random.normal(size=tensor_tile + (3,))
ground_truth = quaternion.rotate(random_point, random_quaternion)
prediction = rotation_matrix_3d.rotate(random_point, random_matrix)
self.assertAllClose(ground_truth, prediction, rtol=1e-6)
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for 3d rotation matrix."""
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import axis_angle
from tensorflow_graphics.geometry.transformation import quaternion
from tensorflow_graphics.geometry.transformation import rotation_matrix_3d
from tensorflow_graphics.geometry.transformation.tests import test_data as td
from tensorflow_graphics.geometry.transformation.tests import test_helpers
from tensorflow_graphics.util import test_case
class RotationMatrix3dTest(test_case.TestCase):
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_assert_rotation_matrix_normalized_passthrough(self):
"""Checks that the assert is a passthrough when the flag is False."""
angles = test_helpers.generate_preset_test_euler_angles()
matrix_input = rotation_matrix_3d.from_euler(angles)
matrix_output = rotation_matrix_3d.assert_rotation_matrix_normalized(
matrix_input)
self.assertTrue(matrix_input is matrix_output) # pylint: disable=g-generic-assert
@parameterized.parameters((np.float32), (np.float64))
def test_assert_rotation_matrix_normalized_preset(self, dtype):
"""Checks that assert_normalized function works as expected."""
angles = test_helpers.generate_preset_test_euler_angles().astype(dtype)
matrix = rotation_matrix_3d.from_euler(angles)
matrix_rescaled = matrix * 1.01
matrix_normalized = rotation_matrix_3d.assert_rotation_matrix_normalized(
matrix)
self.evaluate(matrix_normalized)
with self.assertRaises(tf.errors.InvalidArgumentError): # pylint: disable=g-error-prone-assert-raises
self.evaluate(rotation_matrix_3d.assert_rotation_matrix_normalized(
matrix_rescaled))
@parameterized.parameters(
((3, 3),),
((None, 3, 3),),
)
def test_assert_rotation_matrix_normalized_exception_not_raised(
self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(
rotation_matrix_3d.assert_rotation_matrix_normalized, shapes)
@parameterized.parameters(
("must have a rank greater than 1", (3,)),
("must have exactly 3 dimensions in axis -1", (3, None)),
("must have exactly 3 dimensions in axis -2", (None, 3)),
)
def test_assert_rotation_matrix_normalized_exception_raised(
self, error_msg, *shapes):
"""Tests that the shape exceptions are raised."""
self.assert_exception_is_raised(
rotation_matrix_3d.assert_rotation_matrix_normalized, error_msg, shapes)
@parameterized.parameters(
((3,), (1,)),
((None, 3), (None, 1)),
((1, 3), (1, 1)),
((2, 3), (2, 1)),
((1, 3), (1,)),
((3,), (1, 1)),
)
def test_from_axis_angle_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(rotation_matrix_3d.from_axis_angle,
shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (None,), (1,)),
("must have exactly 1 dimensions in axis -1", (3,), (None,)),
)
def test_from_axis_angle_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(rotation_matrix_3d.from_axis_angle,
error_msg, shapes)
def test_from_axis_angle_normalized_preset(self):
"""Tests that axis-angles can be converted to rotation matrices."""
euler_angles = test_helpers.generate_preset_test_euler_angles()
axis, angle = axis_angle.from_euler(euler_angles)
matrix_axis_angle = rotation_matrix_3d.from_axis_angle(axis, angle)
self.assertAllEqual(
rotation_matrix_3d.is_valid(matrix_axis_angle),
np.ones(euler_angles.shape[0:-1] + (1,)))
def test_from_axis_angle_normalized_random(self):
"""Tests that axis-angles can be converted to rotation matrices."""
tensor_shape = np.random.randint(1, 10, size=np.random.randint(3)).tolist()
random_axis = np.random.normal(size=tensor_shape + [3])
random_axis /= np.linalg.norm(random_axis, axis=-1, keepdims=True)
random_angle = np.random.normal(size=tensor_shape + [1])
matrix_axis_angle = rotation_matrix_3d.from_axis_angle(
random_axis, random_angle)
self.assertAllEqual(
rotation_matrix_3d.is_valid(matrix_axis_angle),
np.ones(tensor_shape + [1]))
@parameterized.parameters(
((td.AXIS_3D_X, td.ANGLE_45), (td.MAT_3D_X_45,)),
((td.AXIS_3D_Y, td.ANGLE_45), (td.MAT_3D_Y_45,)),
((td.AXIS_3D_Z, td.ANGLE_45), (td.MAT_3D_Z_45,)),
((td.AXIS_3D_X, td.ANGLE_90), (td.MAT_3D_X_90,)),
((td.AXIS_3D_Y, td.ANGLE_90), (td.MAT_3D_Y_90,)),
((td.AXIS_3D_Z, td.ANGLE_90), (td.MAT_3D_Z_90,)),
((td.AXIS_3D_X, td.ANGLE_180), (td.MAT_3D_X_180,)),
((td.AXIS_3D_Y, td.ANGLE_180), (td.MAT_3D_Y_180,)),
((td.AXIS_3D_Z, td.ANGLE_180), (td.MAT_3D_Z_180,)),
)
def test_from_axis_angle_preset(self, test_inputs, test_outputs):
"""Tests that an axis-angle maps to correct matrix."""
self.assert_output_is_correct(rotation_matrix_3d.from_axis_angle,
test_inputs, test_outputs)
def test_from_axis_angle_random(self):
"""Tests conversion to matrix."""
tensor_shape = np.random.randint(1, 10, size=np.random.randint(3)).tolist()
random_axis = np.random.normal(size=tensor_shape + [3])
random_axis /= np.linalg.norm(random_axis, axis=-1, keepdims=True)
random_angle = np.random.normal(size=tensor_shape + [1])
matrix_axis_angle = rotation_matrix_3d.from_axis_angle(
random_axis, random_angle)
random_quaternion = quaternion.from_axis_angle(random_axis, random_angle)
matrix_quaternion = rotation_matrix_3d.from_quaternion(random_quaternion)
self.assertAllClose(matrix_axis_angle, matrix_quaternion, rtol=1e-3)
# Checks that resulting rotation matrices are normalized.
self.assertAllEqual(
rotation_matrix_3d.is_valid(matrix_axis_angle),
np.ones(tensor_shape + [1]))
@parameterized.parameters(
((td.AXIS_3D_X, td.ANGLE_90, td.AXIS_3D_X), (td.AXIS_3D_X,)),
((td.AXIS_3D_X, td.ANGLE_90, td.AXIS_3D_Y), (td.AXIS_3D_Z,)),
((td.AXIS_3D_X, -td.ANGLE_90, td.AXIS_3D_Z), (td.AXIS_3D_Y,)),
((td.AXIS_3D_Y, -td.ANGLE_90, td.AXIS_3D_X), (td.AXIS_3D_Z,)),
((td.AXIS_3D_Y, td.ANGLE_90, td.AXIS_3D_Y), (td.AXIS_3D_Y,)),
((td.AXIS_3D_Y, td.ANGLE_90, td.AXIS_3D_Z), (td.AXIS_3D_X,)),
((td.AXIS_3D_Z, td.ANGLE_90, td.AXIS_3D_X), (td.AXIS_3D_Y,)),
((td.AXIS_3D_Z, -td.ANGLE_90, td.AXIS_3D_Y), (td.AXIS_3D_X,)),
((td.AXIS_3D_Z, td.ANGLE_90, td.AXIS_3D_Z), (td.AXIS_3D_Z,)),
)
def test_from_axis_angle_rotate_vector_preset(self, test_inputs,
test_outputs):
"""Tests the directionality of axis-angle rotations."""
def func(axis, angle, point):
matrix = rotation_matrix_3d.from_axis_angle(axis, angle)
return rotation_matrix_3d.rotate(point, matrix)
self.assert_output_is_correct(func, test_inputs, test_outputs)
@parameterized.parameters(
((3,),),
((None, 3),),
((2, 3),),
)
def test_from_euler_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(rotation_matrix_3d.from_euler, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (None,)),)
def test_from_euler_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(rotation_matrix_3d.from_euler, error_msg,
shapes)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_from_euler_jacobian_preset(self):
"""Test the Jacobian of the from_euler function."""
x_init = test_helpers.generate_preset_test_euler_angles()
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.from_euler, [x_init])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_from_euler_jacobian_random(self):
"""Test the Jacobian of the from_euler function."""
x_init = test_helpers.generate_random_test_euler_angles()
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.from_euler, [x_init])
def test_from_euler_normalized_preset(self):
"""Tests that euler angles can be converted to rotation matrices."""
euler_angles = test_helpers.generate_preset_test_euler_angles()
matrix = rotation_matrix_3d.from_euler(euler_angles)
self.assertAllEqual(
rotation_matrix_3d.is_valid(matrix),
np.ones(euler_angles.shape[0:-1] + (1,)))
def test_from_euler_normalized_random(self):
"""Tests that euler angles can be converted to rotation matrices."""
random_euler_angles = test_helpers.generate_random_test_euler_angles()
matrix = rotation_matrix_3d.from_euler(random_euler_angles)
self.assertAllEqual(
rotation_matrix_3d.is_valid(matrix),
np.ones(random_euler_angles.shape[0:-1] + (1,)))
@parameterized.parameters(
((td.AXIS_3D_0,), (td.MAT_3D_ID,)),
((td.ANGLE_45 * td.AXIS_3D_X,), (td.MAT_3D_X_45,)),
((td.ANGLE_45 * td.AXIS_3D_Y,), (td.MAT_3D_Y_45,)),
((td.ANGLE_45 * td.AXIS_3D_Z,), (td.MAT_3D_Z_45,)),
((td.ANGLE_90 * td.AXIS_3D_X,), (td.MAT_3D_X_90,)),
((td.ANGLE_90 * td.AXIS_3D_Y,), (td.MAT_3D_Y_90,)),
((td.ANGLE_90 * td.AXIS_3D_Z,), (td.MAT_3D_Z_90,)),
((td.ANGLE_180 * td.AXIS_3D_X,), (td.MAT_3D_X_180,)),
((td.ANGLE_180 * td.AXIS_3D_Y,), (td.MAT_3D_Y_180,)),
((td.ANGLE_180 * td.AXIS_3D_Z,), (td.MAT_3D_Z_180,)),
)
def test_from_euler_preset(self, test_inputs, test_outputs):
"""Tests that Euler angles create the expected matrix."""
self.assert_output_is_correct(rotation_matrix_3d.from_euler, test_inputs,
test_outputs)
def test_from_euler_random(self):
"""Tests that Euler angles produce the same result as axis-angle."""
angles = test_helpers.generate_random_test_euler_angles()
matrix = rotation_matrix_3d.from_euler(angles)
tensor_tile = angles.shape[:-1]
x_axis = np.tile(td.AXIS_3D_X, tensor_tile + (1,))
y_axis = np.tile(td.AXIS_3D_Y, tensor_tile + (1,))
z_axis = np.tile(td.AXIS_3D_Z, tensor_tile + (1,))
x_angle = np.expand_dims(angles[..., 0], axis=-1)
y_angle = np.expand_dims(angles[..., 1], axis=-1)
z_angle = np.expand_dims(angles[..., 2], axis=-1)
x_rotation = rotation_matrix_3d.from_axis_angle(x_axis, x_angle)
y_rotation = rotation_matrix_3d.from_axis_angle(y_axis, y_angle)
z_rotation = rotation_matrix_3d.from_axis_angle(z_axis, z_angle)
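    # The reference matrix composes the per-axis rotations as Rz * Ry * Rx.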
expected_matrix = tf.matmul(z_rotation, tf.matmul(y_rotation, x_rotation))
self.assertAllClose(expected_matrix, matrix, rtol=1e-3)
@parameterized.parameters(
((3,),),
((None, 3),),
)
def test_from_euler_with_small_angles_approximation_exception_not_raised(
self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(
rotation_matrix_3d.from_euler_with_small_angles_approximation, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (None,)),)
def test_from_euler_with_small_angles_approximation_exception_raised(
self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(
rotation_matrix_3d.from_euler_with_small_angles_approximation,
error_msg, shapes)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_from_euler_with_small_angles_approximation_jacobian_random(self):
"""Test the Jacobian of from_euler_with_small_angles_approximation."""
x_init = test_helpers.generate_random_test_euler_angles(
min_angle=-0.17, max_angle=0.17)
self.assert_jacobian_is_correct_fn(
rotation_matrix_3d.from_euler_with_small_angles_approximation, [x_init])
def test_from_euler_with_small_angles_approximation_random(self):
"""Tests small_angles approximation by comparing to exact calculation."""
# Only generate small angles. For a test tolerance of 1e-3, 0.16 was found
# empirically to be the range where the small angle approximation works.
random_euler_angles = test_helpers.generate_random_test_euler_angles(
min_angle=-0.16, max_angle=0.16)
exact_matrix = rotation_matrix_3d.from_euler(random_euler_angles)
approximate_matrix = (
rotation_matrix_3d.from_euler_with_small_angles_approximation(
random_euler_angles))
self.assertAllClose(exact_matrix, approximate_matrix, atol=1e-3)
@parameterized.parameters(
((4,),),
((None, 4),),
)
def test_from_quaternion_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(rotation_matrix_3d.from_quaternion,
shapes)
@parameterized.parameters(
("must have exactly 4 dimensions in axis -1", (None,)),)
def test_from_quaternion_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(rotation_matrix_3d.from_quaternion,
error_msg, shapes)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_from_quaternion_jacobian_preset(self):
"""Test the Jacobian of the from_quaternion function."""
x_init = test_helpers.generate_preset_test_quaternions()
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.from_quaternion,
[x_init])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_from_quaternion_jacobian_random(self):
"""Test the Jacobian of the from_quaternion function."""
x_init = test_helpers.generate_random_test_quaternions()
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.from_quaternion,
[x_init])
def test_from_quaternion_normalized_preset(self):
"""Tests that quaternions can be converted to rotation matrices."""
euler_angles = test_helpers.generate_preset_test_euler_angles()
quat = quaternion.from_euler(euler_angles)
matrix_quat = rotation_matrix_3d.from_quaternion(quat)
self.assertAllEqual(
rotation_matrix_3d.is_valid(matrix_quat),
np.ones(euler_angles.shape[0:-1] + (1,)))
def test_from_quaternion_normalized_random(self):
"""Tests that random quaternions can be converted to rotation matrices."""
random_quaternion = test_helpers.generate_random_test_quaternions()
tensor_shape = random_quaternion.shape[:-1]
random_matrix = rotation_matrix_3d.from_quaternion(random_quaternion)
self.assertAllEqual(
rotation_matrix_3d.is_valid(random_matrix),
np.ones(tensor_shape + (1,)))
def test_from_quaternion_preset(self):
"""Tests that a quaternion maps to correct matrix."""
preset_quaternions = test_helpers.generate_preset_test_quaternions()
preset_matrices = test_helpers.generate_preset_test_rotation_matrices_3d()
self.assertAllClose(preset_matrices,
rotation_matrix_3d.from_quaternion(preset_quaternions))
def test_from_quaternion_random(self):
"""Tests conversion to matrix."""
random_euler_angles = test_helpers.generate_random_test_euler_angles()
random_quaternions = quaternion.from_euler(random_euler_angles)
random_rotation_matrices = rotation_matrix_3d.from_euler(
random_euler_angles)
self.assertAllClose(random_rotation_matrices,
rotation_matrix_3d.from_quaternion(random_quaternions))
@parameterized.parameters(
((3, 3),),
((None, 3, 3),),
((2, 3, 3),),
)
def test_inverse_exception_not_raised(self, *shapes):
"""Checks the inputs of the rotate function."""
self.assert_exception_is_not_raised(rotation_matrix_3d.inverse, shapes)
@parameterized.parameters(
("must have a rank greater than 1", (3,)),
("must have exactly 3 dimensions in axis -1", (3, None)),
("must have exactly 3 dimensions in axis -2", (None, 3)),
)
def test_inverse_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(rotation_matrix_3d.inverse, error_msg,
shapes)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_inverse_jacobian_preset(self):
"""Test the Jacobian of the inverse function."""
x_init = test_helpers.generate_preset_test_rotation_matrices_3d()
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.inverse, [x_init])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_inverse_jacobian_random(self):
"""Test the Jacobian of the inverse function."""
x_init = test_helpers.generate_random_test_rotation_matrix_3d()
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.inverse, [x_init])
def test_inverse_normalized_random(self):
"""Checks that inverted rotation matrices are valid rotations."""
random_euler_angle = test_helpers.generate_random_test_euler_angles()
tensor_tile = random_euler_angle.shape[:-1]
random_matrix = rotation_matrix_3d.from_euler(random_euler_angle)
predicted_invert_random_matrix = rotation_matrix_3d.inverse(random_matrix)
self.assertAllEqual(
rotation_matrix_3d.is_valid(predicted_invert_random_matrix),
np.ones(tensor_tile + (1,)))
def test_inverse_random(self):
"""Checks that inverting rotated points results in no transformation."""
random_euler_angle = test_helpers.generate_random_test_euler_angles()
tensor_tile = random_euler_angle.shape[:-1]
random_matrix = rotation_matrix_3d.from_euler(random_euler_angle)
random_point = np.random.normal(size=tensor_tile + (3,))
rotated_random_points = rotation_matrix_3d.rotate(random_point,
random_matrix)
predicted_invert_random_matrix = rotation_matrix_3d.inverse(random_matrix)
predicted_invert_rotated_random_points = rotation_matrix_3d.rotate(
rotated_random_points, predicted_invert_random_matrix)
self.assertAllClose(
random_point, predicted_invert_rotated_random_points, rtol=1e-6)
@parameterized.parameters(
((3, 3),),
((None, 3, 3),),
((2, 3, 3),),
)
def test_is_valid_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(rotation_matrix_3d.is_valid, shapes)
@parameterized.parameters(
("must have a rank greater than 1", (3,)),
("must have exactly 3 dimensions in axis -1", (3, None)),
("must have exactly 3 dimensions in axis -2", (None, 3)),
)
def test_is_valid_exception_raised(self, error_msg, *shape):
"""Tests that the shape exceptions are raised."""
self.assert_exception_is_raised(rotation_matrix_3d.is_valid, error_msg,
shape)
def test_is_valid_random(self):
"""Tests that is_valid works as intended."""
random_euler_angle = test_helpers.generate_random_test_euler_angles()
tensor_tile = random_euler_angle.shape[:-1]
rotation_matrix = rotation_matrix_3d.from_euler(random_euler_angle)
pred_normalized = rotation_matrix_3d.is_valid(rotation_matrix)
with self.subTest(name="all_normalized"):
self.assertAllEqual(pred_normalized,
np.ones(shape=tensor_tile + (1,), dtype=bool))
with self.subTest(name="non_orthonormal"):
test_matrix = np.array([[2., 0., 0.], [0., 0.5, 0], [0., 0., 1.]])
pred_normalized = rotation_matrix_3d.is_valid(test_matrix)
self.assertAllEqual(pred_normalized, np.zeros(shape=(1,), dtype=bool))
with self.subTest(name="negative_orthonormal"):
test_matrix = np.array([[1., 0., 0.], [0., -1., 0.], [0., 0., 1.]])
pred_normalized = rotation_matrix_3d.is_valid(test_matrix)
self.assertAllEqual(pred_normalized, np.zeros(shape=(1,), dtype=bool))
@parameterized.parameters(
((3,), (3, 3)),
((None, 3), (None, 3, 3)),
((1, 3), (1, 3, 3)),
((2, 3), (2, 3, 3)),
((3,), (1, 3, 3)),
((1, 3), (3, 3)),
)
def test_rotate_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(rotation_matrix_3d.rotate, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (None,), (3, 3)),
("must have a rank greater than 1", (3,), (3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3, None)),
("must have exactly 3 dimensions in axis -2", (3,), (None, 3)),
)
def test_rotate_exception_raised(self, error_msg, *shapes):
"""Checks the inputs of the rotate function."""
self.assert_exception_is_raised(rotation_matrix_3d.rotate, error_msg,
shapes)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_rotate_jacobian_preset(self):
"""Test the Jacobian of the rotate function."""
x_matrix_init = test_helpers.generate_preset_test_rotation_matrices_3d()
tensor_shape = x_matrix_init.shape[:-1]
x_point_init = np.random.uniform(size=tensor_shape)
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.rotate,
[x_point_init, x_matrix_init])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_rotate_jacobian_random(self):
"""Test the Jacobian of the rotate function."""
x_matrix_init = test_helpers.generate_random_test_rotation_matrix_3d()
tensor_shape = x_matrix_init.shape[:-1]
x_point_init = np.random.uniform(size=tensor_shape)
self.assert_jacobian_is_correct_fn(rotation_matrix_3d.rotate,
[x_point_init, x_matrix_init])
@parameterized.parameters(
((td.ANGLE_90 * td.AXIS_3D_X, td.AXIS_3D_X), (td.AXIS_3D_X,)),
((td.ANGLE_90 * td.AXIS_3D_X, td.AXIS_3D_Y), (td.AXIS_3D_Z,)),
((-td.ANGLE_90 * td.AXIS_3D_X, td.AXIS_3D_Z), (td.AXIS_3D_Y,)),
((-td.ANGLE_90 * td.AXIS_3D_Y, td.AXIS_3D_X), (td.AXIS_3D_Z,)),
((td.ANGLE_90 * td.AXIS_3D_Y, td.AXIS_3D_Y), (td.AXIS_3D_Y,)),
((td.ANGLE_90 * td.AXIS_3D_Y, td.AXIS_3D_Z), (td.AXIS_3D_X,)),
((td.ANGLE_90 * td.AXIS_3D_Z, td.AXIS_3D_X), (td.AXIS_3D_Y,)),
((-td.ANGLE_90 * td.AXIS_3D_Z, td.AXIS_3D_Y), (td.AXIS_3D_X,)),
((td.ANGLE_90 * td.AXIS_3D_Z, td.AXIS_3D_Z), (td.AXIS_3D_Z,)),
)
def test_rotate_vector_preset(self, test_inputs, test_outputs):
"""Tests that the rotate function produces the expected results."""
def func(angles, point):
matrix = rotation_matrix_3d.from_euler(angles)
return rotation_matrix_3d.rotate(point, matrix)
self.assert_output_is_correct(func, test_inputs, test_outputs)
def test_rotate_vs_rotate_quaternion_random(self):
"""Tests that the rotate provide the same results as quaternion.rotate."""
random_euler_angle = test_helpers.generate_random_test_euler_angles()
tensor_tile = random_euler_angle.shape[:-1]
random_matrix = rotation_matrix_3d.from_euler(random_euler_angle)
random_quaternion = quaternion.from_rotation_matrix(random_matrix)
random_point = np.random.normal(size=tensor_tile + (3,))
ground_truth = quaternion.rotate(random_point, random_quaternion)
prediction = rotation_matrix_3d.rotate(random_point, random_matrix)
self.assertAllClose(ground_truth, prediction, rtol=1e-6)
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/opengl/tests/math_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for OpenGL math routines."""
import math
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import look_at
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.rendering.opengl import math as glm
from tensorflow_graphics.util import test_case
class MathTest(test_case.TestCase):
def test_model_to_eye_preset(self):
"""Tests that model_to_eye generates expected results.."""
point = ((2.0, 3.0, 4.0), (3.0, 4.0, 5.0))
camera_position = ((0.0, 0.0, 0.0), (0.1, 0.2, 0.3))
look_at_point = ((0.0, 0.0, 1.0), (0.4, 0.5, 0.6))
up_vector = ((0.0, 1.0, 0.0), (0.7, 0.8, 0.9))
pred = glm.model_to_eye(point, camera_position, look_at_point, up_vector)
gt = ((-2.0, 3.0, -4.0), (2.08616257e-07, 1.27279234, -6.58179379))
self.assertAllClose(pred, gt)
@parameterized.parameters(
((3,), (3,), (3,), (3,)),
((None, 3), (None, 3), (None, 3), (None, 3)),
((100, 3), (3,), (3,), (3,)),
((None, 1, 3), (None, 2, 3), (None, 2, 3), (None, 2, 3)),
)
def test_model_to_eye_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.model_to_eye, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (2,), (3,), (3,), (3,)),
("must have exactly 3 dimensions in axis -1", (3,), (2,), (3,), (3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (2,), (3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (3,), (2,)),
("Not all batch dimensions are identical", (3,), (2, 3), (3, 3), (3, 3)),
("Not all batch dimensions are broadcast-compatible", (2, 3), (3, 3),
(3, 3), (3, 3)),
)
def test_model_to_eye_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(glm.model_to_eye, error_msg, shapes)
def test_model_to_eye_jacobian_preset(self):
"""Tests the Jacobian of model_to_eye."""
point_init = np.array(((2.0, 3.0, 4.0), (3.0, 4.0, 5.0)))
camera_position_init = np.array(((0.0, 0.0, 0.0), (0.1, 0.2, 0.3)))
look_at_init = np.array(((0.0, 0.0, 1.0), (0.4, 0.5, 0.6)))
up_vector_init = np.array(((0.0, 1.0, 0.0), (0.7, 0.8, 0.9)))
self.assert_jacobian_is_correct_fn(
glm.model_to_eye,
[point_init, camera_position_init, look_at_init, up_vector_init])
def test_model_to_eye_jacobian_random(self):
"""Tests the Jacobian of model_to_eye."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
point_init = np.random.uniform(size=tensor_shape + [3])
camera_position_init = np.random.uniform(size=tensor_shape + [3])
look_at_init = np.random.uniform(size=tensor_shape + [3])
up_vector_init = np.random.uniform(size=tensor_shape + [3])
self.assert_jacobian_is_correct_fn(
glm.model_to_eye,
[point_init, camera_position_init, look_at_init, up_vector_init])
def test_eye_to_clip_preset(self):
"""Tests that eye_to_clip generates expected results."""
point = ((2.0, 3.0, 4.0), (3.0, 4.0, 5.0))
vertical_field_of_view = ((60.0 * math.pi / 180.0,),
(50.0 * math.pi / 180.0,))
aspect_ratio = ((1.5,), (1.6,))
near_plane = ((1.0,), (2.0,))
far_plane = ((10.0,), (11.0,))
pred = glm.eye_to_clip(point, vertical_field_of_view, aspect_ratio,
near_plane, far_plane)
gt = ((2.30940104, 5.19615173, -7.11111116, -4.0), (4.02095032, 8.57802773,
-12.11111069, -5.0))
self.assertAllClose(pred, gt)
@parameterized.parameters(
((3,), (1,), (1,), (1,), (1,)),
((None, 3), (None, 1), (None, 1), (None, 1), (None, 1)),
((None, 5, 3), (None, 5, 1), (None, 5, 1), (None, 5, 1), (None, 5, 1)),
)
def test_eye_to_clip_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.eye_to_clip, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (2,), (1,), (1,), (1,),
(1,)),
("must have exactly 1 dimensions in axis -1", (3,), (2,), (1,), (1,),
(1,)),
("must have exactly 1 dimensions in axis -1", (3,), (1,), (2,), (1,),
(1,)),
("must have exactly 1 dimensions in axis -1", (3,), (1,), (1,), (2,),
(1,)),
("must have exactly 1 dimensions in axis -1", (3,), (1,), (1,), (1,),
(2,)),
("Not all batch dimensions are broadcast-compatible", (3, 3), (2, 1),
(1,), (1,), (1,)),
)
def test_eye_to_clip_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(glm.eye_to_clip, error_msg, shapes)
def test_eye_to_clip_jacobian_preset(self):
"""Tests the Jacobian of eye_to_clip."""
point_init = np.array(((2.0, 3.0, 4.0), (3.0, 4.0, 5.0)))
vertical_field_of_view_init = np.array(
((60.0 * math.pi / 180.0,), (50.0 * math.pi / 180.0,)))
aspect_ratio_init = np.array(((1.5,), (1.6,)))
near_init = np.array(((1.0,), (2.0,)))
far_init = np.array(((10.0,), (11.0,)))
self.assert_jacobian_is_correct_fn(
glm.eye_to_clip, [
point_init, vertical_field_of_view_init, aspect_ratio_init,
near_init, far_init
],
atol=1e-5)
def test_eye_to_clip_jacobian_random(self):
"""Tests the Jacobian of eye_to_clip."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
point_init = np.random.uniform(size=tensor_shape + [3])
eps = np.finfo(np.float64).eps
vertical_field_of_view_init = np.random.uniform(
eps, math.pi - eps, size=tensor_shape + [1])
aspect_ratio_init = np.random.uniform(eps, 100.0, size=tensor_shape + [1])
near_init = np.random.uniform(eps, 100.0, size=tensor_shape + [1])
far_init = near_init + np.random.uniform(eps, 10.0, size=tensor_shape + [1])
self.assert_jacobian_is_correct_fn(
glm.eye_to_clip, [
point_init, vertical_field_of_view_init, aspect_ratio_init,
near_init, far_init
],
atol=1e-03)
def test_clip_to_ndc_preset(self):
"""Tests that clip_to_ndc generates expected results."""
point = ((4.0, 8.0, 16.0, 2.0), (4.0, 8.0, 16.0, 1.0))
pred = glm.clip_to_ndc(point)
gt = ((2.0, 4.0, 8.0), (4.0, 8.0, 16.0))
self.assertAllClose(pred, gt)
@parameterized.parameters(
((4,)),
((None, 4),),
((None, 5, 4),),
)
def test_clip_to_ndc_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.clip_to_ndc, shapes)
def test_clip_to_ndc_exception_raised(self):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(
glm.clip_to_ndc, "must have exactly 4 dimensions in axis -1", ((2,),))
def test_clip_to_ndc_jacobian_preset(self):
"""Tests the Jacobian of clip_to_ndc."""
point_init = np.array(((4.0, 8.0, 16.0, 2.0), (4.0, 8.0, 16.0, 1.0)))
self.assert_jacobian_is_correct_fn(glm.clip_to_ndc, [point_init])
def test_clip_to_ndc_jacobian_random(self):
"""Tests the Jacobian of clip_to_ndc."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
point_init = np.random.uniform(size=tensor_shape + [4])
self.assert_jacobian_is_correct_fn(
glm.clip_to_ndc, [point_init], atol=1e-04)
def test_ndc_to_screen_preset(self):
"""Tests that ndc_to_screen generates expected results."""
point = ((1.1, 2.2, 3.3), (5.1, 5.2, 5.3))
lower_left_corner = ((6.4, 4.8), (0.0, 0.0))
screen_dimensions = ((640.0, 480.0), (300.0, 400.0))
near = ((1.0,), (11.0,))
far = ((10.0,), (100.0,))
pred = glm.ndc_to_screen(point, lower_left_corner, screen_dimensions, near,
far)
gt = ((678.40002441, 772.79998779, 20.34999847), (915.0, 1240.0,
291.3500061))
self.assertAllClose(pred, gt)
@parameterized.parameters(
((3,), (2,), (2,), (1,), (1,)),
((None, 3), (None, 2), (None, 2), (None, 1), (None, 1)),
((None, 5, 3), (None, 5, 2), (None, 5, 2), (None, 5, 1), (None, 5, 1)),
)
def test_ndc_to_screen_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.ndc_to_screen, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (2,), (2,), (2,), (1,),
(1,)),
("must have exactly 2 dimensions in axis -1", (3,), (1,), (2,), (1,),
(1,)),
("must have exactly 2 dimensions in axis -1", (3,), (2,), (3,), (1,),
(1,)),
("must have exactly 1 dimensions in axis -1", (3,), (2,), (2,), (2,),
(1,)),
("must have exactly 1 dimensions in axis -1", (3,), (2,), (2,), (1,),
(3,)),
("Not all batch dimensions are identical", (3,), (2, 2), (3, 2), (3, 1),
(3, 1)),
("Not all batch dimensions are broadcast-compatible", (4, 3), (3, 2),
(3, 2), (3, 1), (3, 1)),
)
def test_ndc_to_screen_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(glm.ndc_to_screen, error_msg, shapes)
def test_ndc_to_screen_exception_near_raised(self):
"""Tests that an exception is raised when `near` is not strictly positive."""
point = np.random.uniform(size=(3,))
lower_left_corner = np.random.uniform(size=(2,))
screen_dimensions = np.random.uniform(1.0, 2.0, size=(2,))
near = np.random.uniform(-1.0, 0.0, size=(1,))
far = np.random.uniform(1.0, 2.0, size=(1,))
with self.subTest("negative_near"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
glm.ndc_to_screen(point, lower_left_corner, screen_dimensions, near,
far))
with self.subTest("zero_near"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
glm.ndc_to_screen(point, lower_left_corner, screen_dimensions,
np.array((0.0,)), far))
def test_ndc_to_screen_exception_far_raised(self):
"""Tests that an exception is raised if `far` is not greater than `near`."""
point = np.random.uniform(size=(3,))
lower_left_corner = np.random.uniform(size=(2,))
screen_dimensions = np.random.uniform(1.0, 2.0, size=(2,))
near = np.random.uniform(1.0, 10.0, size=(1,))
far = near + np.random.uniform(-1.0, 0.0, size=(1,))
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
glm.ndc_to_screen(point, lower_left_corner, screen_dimensions, near,
far))
def test_ndc_to_screen_exception_screen_dimensions_raised(self):
"""Tests that an exception is raised when `screen_dimensions` is not strictly positive."""
point = np.random.uniform(size=(3,))
lower_left_corner = np.random.uniform(size=(2,))
screen_dimensions = np.random.uniform(-1.0, 0.0, size=(2,))
near = np.random.uniform(1.0, 10.0, size=(1,))
far = near + np.random.uniform(0.1, 1.0, size=(1,))
with self.subTest("negative_screen_dimensions"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
glm.ndc_to_screen(point, lower_left_corner, screen_dimensions, near,
far))
with self.subTest("zero_screen_dimensions"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
glm.ndc_to_screen(point, lower_left_corner, np.array((0.0, 0.0)),
near, far))
def test_ndc_to_screen_jacobian_preset(self):
"""Tests the Jacobian of ndc_to_screen."""
point_init = np.array(((1.1, 2.2, 3.3), (5.1, 5.2, 5.3)))
lower_left_corner_init = np.array(((6.4, 4.8), (0.0, 0.0)))
screen_dimensions_init = np.array(((640.0, 480.0), (300.0, 400.0)))
near_init = np.array(((1.0,), (11.0,)))
far_init = np.array(((10.0,), (100.0,)))
self.assert_jacobian_is_correct_fn(glm.ndc_to_screen, [
point_init, lower_left_corner_init, screen_dimensions_init, near_init,
far_init
])
def test_ndc_to_screen_jacobian_random(self):
"""Tests the Jacobian of ndc_to_screen."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
point_init = np.random.uniform(size=tensor_shape + [3])
lower_left_corner_init = np.random.uniform(size=tensor_shape + [2])
screen_dimensions_init = np.random.uniform(
1.0, 1000.0, size=tensor_shape + [2])
near_init = np.random.uniform(1.0, 10.0, size=tensor_shape + [1])
far_init = near_init + np.random.uniform(0.1, 1.0, size=(1,))
self.assert_jacobian_is_correct_fn(glm.ndc_to_screen, [
point_init, lower_left_corner_init, screen_dimensions_init, near_init,
far_init
])
def test_model_to_screen_preset(self):
"""Tests that model_to_screen generates expected results."""
point_world_space = np.array(((3.1, 4.1, 5.1), (-1.1, 2.2, -3.1)))
camera_position = np.array(((0.0, 0.0, 0.0), (0.4, -0.8, 0.1)))
camera_up = np.array(((0.0, 1.0, 0.0), (0.0, 0.0, 1.0)))
look_at_point = np.array(((0.0, 0.0, 1.0), (0.0, 1.0, 0.0)))
vertical_field_of_view = np.array(
((60.0 * math.pi / 180.0,), (65 * math.pi / 180,)))
lower_left_corner = np.array(((0.0, 0.0), (10.0, 20.0)))
screen_dimensions = np.array(((501.0, 501.0), (400.0, 600.0)))
near = np.array(((0.01,), (1.0,)))
far = np.array(((4.0,), (3.0,)))
# Build matrices.
model_to_eye_matrix = look_at.right_handed(camera_position, look_at_point,
camera_up)
perspective_matrix = perspective.right_handed(
vertical_field_of_view,
screen_dimensions[..., 0:1] / screen_dimensions[..., 1:2], near, far)
pred_screen, pred_w = glm.model_to_screen(point_world_space,
model_to_eye_matrix,
perspective_matrix,
screen_dimensions,
lower_left_corner)
gt_screen = ((-13.23016357, 599.30444336, 4.00215721),
(98.07017517, -95.40383911, 3.1234405))
gt_w = ((5.1,), (3.42247,))
self.assertAllClose(pred_screen, gt_screen, atol=1e-5, rtol=1e-5)
self.assertAllClose(pred_w, gt_w)
@parameterized.parameters(
((3,), (4, 4), (4, 4), (2,), (2,)),
((640, 480, 3), (4, 4), (4, 4), (2,), (2,)),
((None, 3), (None, 4, 4), (None, 4, 4), (None, 2), (None, 2)),
((3,), (None, 1, 4, 4), (None, 1, 4, 4), (None, 1, 2), (None, 1, 2)),
)
def test_model_to_screen_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.model_to_screen, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (9.0, 12.0), (0.0, 0.0),
(2,), (4, 4), (4, 4)),
("must have exactly 4 dimensions in axis -1", (9.0, 12.0), (0.0, 0.0),
(3,), (4, 3), (4, 4)),
("must have exactly 4 dimensions in axis -2", (9.0, 12.0), (0.0, 0.0),
(3,), (3, 4), (4, 4)),
("must have exactly 4 dimensions in axis -1", (9.0, 12.0), (0.0, 0.0),
(3,), (4, 4), (4, 3)),
("must have exactly 4 dimensions in axis -2", (9.0, 12.0), (0.0, 0.0),
(3,), (4, 4), (3, 4)),
("Not all batch dimensions are broadcast-compatible", (9.0, 12.0),
(0.0, 0.0), (2, 3), (3, 4, 4), (3, 4, 4)),
)
def test_model_to_screen_exception_raised(self, error_msg, screen_dimensions,
lower_left_corner, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(
func=glm.model_to_screen,
error_msg=error_msg,
shapes=shapes,
screen_dimensions=screen_dimensions,
lower_left_corner=lower_left_corner)
def test_model_to_screen_jacobian_preset(self):
"""Tests the Jacobian of model_to_screen."""
point_world_space_init = np.array(((3.1, 4.1, 5.1), (-1.1, 2.2, -3.1)))
camera_position_init = np.array(((0.0, 0.0, 0.0), (0.4, -0.8, 0.1)))
camera_up_init = np.array(((0.0, 1.0, 0.0), (0.0, 0.0, 1.0)))
look_at_init = np.array(((0.0, 0.0, 1.0), (0.0, 1.0, 0.0)))
vertical_field_of_view_init = np.array(
((60.0 * math.pi / 180.0,), (65 * math.pi / 180,)))
lower_left_corner_init = np.array(((0.0, 0.0), (10.0, 20.0)))
screen_dimensions_init = np.array(((501.0, 501.0), (400.0, 600.0)))
near_init = np.array(((0.01,), (1.0,)))
far_init = np.array(((4.0,), (3.0,)))
# Build matrices.
model_to_eye_matrix = look_at.right_handed(camera_position_init,
look_at_init, camera_up_init)
perspective_matrix = perspective.right_handed(
vertical_field_of_view_init,
screen_dimensions_init[..., 0:1] / screen_dimensions_init[..., 1:2],
near_init, far_init)
args = [
point_world_space_init, model_to_eye_matrix, perspective_matrix,
screen_dimensions_init, lower_left_corner_init
]
with self.subTest(name="jacobian_y_projection"):
self.assert_jacobian_is_correct_fn(
lambda *args: glm.model_to_screen(*args)[0], args, atol=1e-4)
# TODO(julienvalentin): will be fixed before submission
# with self.subTest(name="jacobian_w"):
# self.assert_jacobian_is_correct_fn(
# lambda *args: glm.model_to_screen(*args)[1], args)
def test_model_to_screen_jacobian_random(self):
"""Tests the Jacobian of model_to_screen."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
point_world_space_init = np.random.uniform(size=tensor_shape + [3])
camera_position_init = np.random.uniform(size=tensor_shape + [3])
camera_up_init = np.random.uniform(size=tensor_shape + [3])
look_at_init = np.random.uniform(size=tensor_shape + [3])
vertical_field_of_view_init = np.random.uniform(
0.1, 1.0, size=tensor_shape + [1])
lower_left_corner_init = np.random.uniform(size=tensor_shape + [2])
screen_dimensions_init = np.random.uniform(
0.1, 1.0, size=tensor_shape + [2])
near_init = np.random.uniform(0.1, 1.0, size=tensor_shape + [1])
far_init = near_init + np.random.uniform(0.1, 1.0, size=tensor_shape + [1])
# Build matrices.
model_to_eye_matrix = look_at.right_handed(camera_position_init,
look_at_init, camera_up_init)
perspective_matrix = perspective.right_handed(
vertical_field_of_view_init,
screen_dimensions_init[..., 0:1] / screen_dimensions_init[..., 1:2],
near_init, far_init)
args = [
point_world_space_init, model_to_eye_matrix, perspective_matrix,
screen_dimensions_init, lower_left_corner_init
]
with self.subTest(name="jacobian_y_projection"):
self.assert_jacobian_is_correct_fn(
lambda *args: glm.model_to_screen(*args)[0], args, atol=1e-4)
# TODO(julienvalentin): will be fixed before submission
# with self.subTest(name="jacobian_w"):
# self.assert_jacobian_is_correct_fn(
# lambda *args: glm.model_to_screen(*args)[1], args)
def test_perspective_correct_interpolation_preset(self):
"""Tests that perspective_correct_interpolation generates expected results."""
camera_origin = np.array((0.0, 0.0, 0.0))
camera_up = np.array((0.0, 1.0, 0.0))
look_at_point = np.array((0.0, 0.0, 1.0))
fov = np.array((90.0 * np.math.pi / 180.0,))
bottom_left = np.array((0.0, 0.0))
image_size = np.array((501.0, 501.0))
near_plane = np.array((0.01,))
far_plane = np.array((10.0,))
batch_size = np.random.randint(1, 5)
triangle_x_y = np.random.uniform(-10.0, 10.0, (batch_size, 3, 2))
triangle_z = np.random.uniform(2.0, 10.0, (batch_size, 3, 1))
triangles = np.concatenate((triangle_x_y, triangle_z), axis=-1)
# Builds barycentric weights.
barycentric_weights = np.random.uniform(size=(batch_size, 3))
barycentric_weights = barycentric_weights / np.sum(
barycentric_weights, axis=-1, keepdims=True)
# Barycentric interpolation of vertex positions.
convex_combination = np.einsum("ba, bac -> bc", barycentric_weights,
triangles)
# Build matrices.
model_to_eye_matrix = look_at.right_handed(camera_origin, look_at_point,
camera_up)
perspective_matrix = perspective.right_handed(
fov, (image_size[0:1] / image_size[1:2]), near_plane, far_plane)
# Computes where those points project in screen coordinates.
pixel_position, _ = glm.model_to_screen(convex_combination,
model_to_eye_matrix,
perspective_matrix, image_size,
bottom_left)
# Builds attributes.
num_pixels = pixel_position.shape[0]
attribute_size = np.random.randint(10)
attributes = np.random.uniform(size=(num_pixels, 3, attribute_size))
prediction = glm.perspective_correct_interpolation(triangles, attributes,
pixel_position[..., 0:2],
model_to_eye_matrix,
perspective_matrix,
image_size, bottom_left)
groundtruth = np.einsum("ba, bac -> bc", barycentric_weights, attributes)
self.assertAllClose(prediction, groundtruth)
def test_perspective_correct_interpolation_jacobian_preset(self):
"""Tests the Jacobian of perspective_correct_interpolation."""
vertices_init = np.tile(
((-0.2857143, 0.2857143, 5.0), (0.2857143, 0.2857143, 0.5),
(0.0, -0.2857143, 1.0)), (2, 1, 1))
attributes_init = np.tile(
(((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))), (2, 1, 1))
pixel_position_init = np.array(((125.5, 375.5), (250.5, 250.5)))
camera_position_init = np.tile((0.0, 0.0, 0.0), (2, 3, 1))
look_at_init = np.tile((0.0, 0.0, 1.0), (2, 3, 1))
up_vector_init = np.tile((0.0, 1.0, 0.0), (2, 3, 1))
vertical_field_of_view_init = np.tile((1.0471975511965976,), (2, 3, 1))
screen_dimensions_init = np.tile((501.0, 501.0), (2, 3, 1))
near_init = np.tile((0.01,), (2, 3, 1))
far_init = np.tile((10.0,), (2, 3, 1))
lower_left_corner_init = np.tile((0.0, 0.0), (2, 3, 1))
# Build matrices.
model_to_eye_matrix_init = look_at.right_handed(camera_position_init,
look_at_init,
up_vector_init)
perspective_matrix_init = perspective.right_handed(
vertical_field_of_view_init,
screen_dimensions_init[..., 0:1] / screen_dimensions_init[..., 1:2],
near_init, far_init)
self.assert_jacobian_is_correct_fn(glm.perspective_correct_interpolation, [
vertices_init, attributes_init, pixel_position_init,
model_to_eye_matrix_init, perspective_matrix_init,
screen_dimensions_init, lower_left_corner_init
])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_perspective_correct_interpolation_jacobian_random(self):
"""Tests the Jacobian of perspective_correct_interpolation."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
vertices_init = np.random.uniform(size=tensor_shape + [3, 3])
num_attributes = np.random.randint(1, 10)
attributes_init = np.random.uniform(size=tensor_shape + [3, num_attributes])
pixel_position_init = np.random.uniform(size=tensor_shape + [2])
camera_position_init = np.random.uniform(size=tensor_shape + [3, 3])
look_at_init = np.random.uniform(size=tensor_shape + [3, 3])
up_vector_init = np.random.uniform(size=tensor_shape + [3, 3])
vertical_field_of_view_init = np.random.uniform(
0.1, 1.0, size=tensor_shape + [3, 1])
screen_dimensions_init = np.random.uniform(
1.0, 10.0, size=tensor_shape + [3, 2])
near_init = np.random.uniform(1.0, 10.0, size=tensor_shape + [3, 1])
far_init = near_init + np.random.uniform(
0.1, 1.0, size=tensor_shape + [3, 1])
lower_left_corner_init = np.random.uniform(size=tensor_shape + [3, 2])
# Build matrices.
model_to_eye_matrix_init = look_at.right_handed(camera_position_init,
look_at_init,
up_vector_init)
perspective_matrix_init = perspective.right_handed(
vertical_field_of_view_init,
screen_dimensions_init[..., 0:1] / screen_dimensions_init[..., 1:2],
near_init, far_init)
self.assert_jacobian_is_correct_fn(
glm.perspective_correct_interpolation, [
vertices_init, attributes_init, pixel_position_init,
model_to_eye_matrix_init, perspective_matrix_init,
screen_dimensions_init, lower_left_corner_init
],
atol=1e-4)
@parameterized.parameters(
((3, 3), (2,), (4, 4), (4, 4), (2,)),
((3, 3), (7, 2), (4, 4), (4, 4), (2,)),
((3, 3), (None, 2), (4, 4), (4, 4), (2,)),
((7, 3, 3), (2,), (4, 4), (4, 4), (2,)),
((None, 3, 3), (2,), (4, 4), (4, 4), (2,)),
)
def test_perspective_correct_barycentrics_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.perspective_correct_barycentrics,
shapes)
@parameterized.parameters(
("must have exactly 2 dimensions in axis -1", (3, 3), (2,), (4, 4),
(4, 4), (3,)),
("must have exactly 3 dimensions in axis -1", (3, 4), (2,), (4, 4),
(4, 4), (3,)),
("must have exactly 3 dimensions in axis -2", (4, 3), (2,), (4, 4),
(4, 4), (3,)),
)
def test_perspective_correct_barycentrics_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(glm.perspective_correct_barycentrics,
error_msg, shapes)
def test_perspective_correct_barycentrics_preset(self):
"""Tests that perspective_correct_barycentrics generates expected results."""
camera_origin = np.array((0.0, 0.0, 0.0))
camera_up = np.array((0.0, 1.0, 0.0))
look_at_point = np.array((0.0, 0.0, 1.0))
fov = np.array((90.0 * np.math.pi / 180.0,))
bottom_left = np.array((0.0, 0.0))
image_size = np.array((501.0, 501.0))
near_plane = np.array((0.01,))
far_plane = np.array((10.0,))
batch_size = np.random.randint(1, 5)
triangle_x_y = np.random.uniform(-10.0, 10.0, (batch_size, 3, 2))
triangle_z = np.random.uniform(2.0, 10.0, (batch_size, 3, 1))
triangles = np.concatenate((triangle_x_y, triangle_z), axis=-1)
# Builds barycentric weights.
barycentric_weights = np.random.uniform(size=(batch_size, 3))
barycentric_weights = barycentric_weights / np.sum(
barycentric_weights, axis=-1, keepdims=True)
# Barycentric interpolation of vertex positions.
convex_combination = np.einsum("ba, bac -> bc", barycentric_weights,
triangles)
# Build matrices.
model_to_eye_matrix = look_at.right_handed(camera_origin, look_at_point,
camera_up)
perspective_matrix = perspective.right_handed(
fov, (image_size[0:1] / image_size[1:2]), near_plane, far_plane)
# Computes where those points project in screen coordinates.
pixel_position, _ = glm.model_to_screen(convex_combination,
model_to_eye_matrix,
perspective_matrix, image_size,
bottom_left)
prediction = glm.perspective_correct_barycentrics(triangles,
pixel_position[..., 0:2],
model_to_eye_matrix,
perspective_matrix,
image_size, bottom_left)
self.assertAllClose(prediction, barycentric_weights)
def test_perspective_correct_barycentrics_jacobian_random(self):
"""Tests the Jacobian of perspective_correct_barycentrics."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
vertices_init = np.random.uniform(size=tensor_shape + [3, 3])
pixel_position_init = np.random.uniform(size=tensor_shape + [2])
camera_position_init = np.random.uniform(size=tensor_shape + [3, 3])
look_at_init = np.random.uniform(size=tensor_shape + [3, 3])
up_vector_init = np.random.uniform(size=tensor_shape + [3, 3])
vertical_field_of_view_init = np.random.uniform(
0.1, 1.0, size=tensor_shape + [3, 1])
screen_dimensions_init = np.random.uniform(
1.0, 10.0, size=tensor_shape + [3, 2])
near_init = np.random.uniform(1.0, 10.0, size=tensor_shape + [3, 1])
far_init = near_init + np.random.uniform(
0.1, 1.0, size=tensor_shape + [3, 1])
lower_left_corner_init = np.random.uniform(size=tensor_shape + [3, 2])
# Build matrices.
model_to_eye_matrix_init = look_at.right_handed(camera_position_init,
look_at_init,
up_vector_init)
perspective_matrix_init = perspective.right_handed(
vertical_field_of_view_init,
screen_dimensions_init[..., 0:1] / screen_dimensions_init[..., 1:2],
near_init, far_init)
self.assert_jacobian_is_correct_fn(
glm.perspective_correct_barycentrics, [
vertices_init, pixel_position_init, model_to_eye_matrix_init,
perspective_matrix_init, screen_dimensions_init,
lower_left_corner_init
],
atol=1e-4)
@parameterized.parameters(
((3, 7), (3,)),
((2, 3, 7), (2, 3)),
((None, 3, 7), (None, 3)),
)
def test_interpolate_attributes_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.interpolate_attributes, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -2", (2, 7), (3,)),
("must have exactly 3 dimensions in axis -1", (3, 7), (2,)),
("Not all batch dimensions are broadcast-compatible", (5, 3, 7), (4, 3)),
)
def test_interpolate_attributes_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(glm.interpolate_attributes, error_msg,
shapes)
def test_interpolate_attributes_random(self):
"""Checks the output of interpolate_attributes."""
attributes = np.random.uniform(-1.0, 1.0, size=(3,))
barycentric = np.random.uniform(0.0, 1.0, size=(3,))
barycentric = barycentric / np.linalg.norm(
barycentric, axis=-1, ord=1, keepdims=True)
groundtruth = np.sum(attributes * barycentric, keepdims=True)
attributes = np.reshape(attributes, (3, 1))
prediction = glm.interpolate_attributes(attributes, barycentric)
self.assertAllClose(groundtruth, prediction)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_interpolate_attributes_jacobian_random(self):
"""Tests the jacobian of interpolate_attributes."""
batch_size = np.random.randint(1, 5)
attributes = np.random.uniform(-1.0, 1.0, size=(batch_size, 3, 1))
barycentric = np.random.uniform(
0.0, 1.0, size=(
batch_size,
3,
))
barycentric = barycentric / np.linalg.norm(
barycentric, axis=-1, ord=1, keepdims=True)
self.assert_jacobian_is_correct_fn(glm.interpolate_attributes,
[attributes, barycentric])
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for OpenGL math routines."""
import math
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import look_at
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.rendering.opengl import math as glm
from tensorflow_graphics.util import test_case
class MathTest(test_case.TestCase):
def test_model_to_eye_preset(self):
"""Tests that model_to_eye generates expected results.."""
point = ((2.0, 3.0, 4.0), (3.0, 4.0, 5.0))
camera_position = ((0.0, 0.0, 0.0), (0.1, 0.2, 0.3))
look_at_point = ((0.0, 0.0, 1.0), (0.4, 0.5, 0.6))
up_vector = ((0.0, 1.0, 0.0), (0.7, 0.8, 0.9))
pred = glm.model_to_eye(point, camera_position, look_at_point, up_vector)
gt = ((-2.0, 3.0, -4.0), (2.08616257e-07, 1.27279234, -6.58179379))
self.assertAllClose(pred, gt)
@parameterized.parameters(
((3,), (3,), (3,), (3,)),
((None, 3), (None, 3), (None, 3), (None, 3)),
((100, 3), (3,), (3,), (3,)),
((None, 1, 3), (None, 2, 3), (None, 2, 3), (None, 2, 3)),
)
def test_model_to_eye_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.model_to_eye, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (2,), (3,), (3,), (3,)),
("must have exactly 3 dimensions in axis -1", (3,), (2,), (3,), (3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (2,), (3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (3,), (2,)),
("Not all batch dimensions are identical", (3,), (2, 3), (3, 3), (3, 3)),
("Not all batch dimensions are broadcast-compatible", (2, 3), (3, 3),
(3, 3), (3, 3)),
)
def test_model_to_eye_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(glm.model_to_eye, error_msg, shapes)
def test_model_to_eye_jacobian_preset(self):
"""Tests the Jacobian of model_to_eye."""
point_init = np.array(((2.0, 3.0, 4.0), (3.0, 4.0, 5.0)))
camera_position_init = np.array(((0.0, 0.0, 0.0), (0.1, 0.2, 0.3)))
look_at_init = np.array(((0.0, 0.0, 1.0), (0.4, 0.5, 0.6)))
up_vector_init = np.array(((0.0, 1.0, 0.0), (0.7, 0.8, 0.9)))
self.assert_jacobian_is_correct_fn(
glm.model_to_eye,
[point_init, camera_position_init, look_at_init, up_vector_init])
def test_model_to_eye_jacobian_random(self):
"""Tests the Jacobian of model_to_eye."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
point_init = np.random.uniform(size=tensor_shape + [3])
camera_position_init = np.random.uniform(size=tensor_shape + [3])
look_at_init = np.random.uniform(size=tensor_shape + [3])
up_vector_init = np.random.uniform(size=tensor_shape + [3])
self.assert_jacobian_is_correct_fn(
glm.model_to_eye,
[point_init, camera_position_init, look_at_init, up_vector_init])
def test_eye_to_clip_preset(self):
"""Tests that eye_to_clip generates expected results."""
point = ((2.0, 3.0, 4.0), (3.0, 4.0, 5.0))
vertical_field_of_view = ((60.0 * math.pi / 180.0,),
(50.0 * math.pi / 180.0,))
aspect_ratio = ((1.5,), (1.6,))
near_plane = ((1.0,), (2.0,))
far_plane = ((10.0,), (11.0,))
pred = glm.eye_to_clip(point, vertical_field_of_view, aspect_ratio,
near_plane, far_plane)
gt = ((2.30940104, 5.19615173, -7.11111116, -4.0), (4.02095032, 8.57802773,
-12.11111069, -5.0))
self.assertAllClose(pred, gt)
@parameterized.parameters(
((3,), (1,), (1,), (1,), (1,)),
((None, 3), (None, 1), (None, 1), (None, 1), (None, 1)),
((None, 5, 3), (None, 5, 1), (None, 5, 1), (None, 5, 1), (None, 5, 1)),
)
def test_eye_to_clip_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.eye_to_clip, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (2,), (1,), (1,), (1,),
(1,)),
("must have exactly 1 dimensions in axis -1", (3,), (2,), (1,), (1,),
(1,)),
("must have exactly 1 dimensions in axis -1", (3,), (1,), (2,), (1,),
(1,)),
("must have exactly 1 dimensions in axis -1", (3,), (1,), (1,), (2,),
(1,)),
("must have exactly 1 dimensions in axis -1", (3,), (1,), (1,), (1,),
(2,)),
("Not all batch dimensions are broadcast-compatible", (3, 3), (2, 1),
(1,), (1,), (1,)),
)
def test_eye_to_clip_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(glm.eye_to_clip, error_msg, shapes)
def test_eye_to_clip_jacobian_preset(self):
"""Tests the Jacobian of eye_to_clip."""
point_init = np.array(((2.0, 3.0, 4.0), (3.0, 4.0, 5.0)))
vertical_field_of_view_init = np.array(
((60.0 * math.pi / 180.0,), (50.0 * math.pi / 180.0,)))
aspect_ratio_init = np.array(((1.5,), (1.6,)))
near_init = np.array(((1.0,), (2.0,)))
far_init = np.array(((10.0,), (11.0,)))
self.assert_jacobian_is_correct_fn(
glm.eye_to_clip, [
point_init, vertical_field_of_view_init, aspect_ratio_init,
near_init, far_init
],
atol=1e-5)
def test_eye_to_clip_jacobian_random(self):
"""Tests the Jacobian of eye_to_clip."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
point_init = np.random.uniform(size=tensor_shape + [3])
eps = np.finfo(np.float64).eps
vertical_field_of_view_init = np.random.uniform(
eps, math.pi - eps, size=tensor_shape + [1])
aspect_ratio_init = np.random.uniform(eps, 100.0, size=tensor_shape + [1])
near_init = np.random.uniform(eps, 100.0, size=tensor_shape + [1])
far_init = near_init + np.random.uniform(eps, 10.0, size=tensor_shape + [1])
self.assert_jacobian_is_correct_fn(
glm.eye_to_clip, [
point_init, vertical_field_of_view_init, aspect_ratio_init,
near_init, far_init
],
atol=1e-03)
def test_clip_to_ndc_preset(self):
"""Tests that clip_to_ndc generates expected results."""
point = ((4.0, 8.0, 16.0, 2.0), (4.0, 8.0, 16.0, 1.0))
pred = glm.clip_to_ndc(point)
gt = ((2.0, 4.0, 8.0), (4.0, 8.0, 16.0))
self.assertAllClose(pred, gt)
@parameterized.parameters(
((4,)),
((None, 4),),
((None, 5, 4),),
)
def test_clip_to_ndc_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.clip_to_ndc, shapes)
def test_clip_to_ndc_exception_raised(self):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(
glm.clip_to_ndc, "must have exactly 4 dimensions in axis -1", ((2,),))
def test_clip_to_ndc_jacobian_preset(self):
"""Tests the Jacobian of clip_to_ndc."""
point_init = np.array(((4.0, 8.0, 16.0, 2.0), (4.0, 8.0, 16.0, 1.0)))
self.assert_jacobian_is_correct_fn(glm.clip_to_ndc, [point_init])
def test_clip_to_ndc_jacobian_random(self):
"""Tests the Jacobian of clip_to_ndc."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
point_init = np.random.uniform(size=tensor_shape + [4])
self.assert_jacobian_is_correct_fn(
glm.clip_to_ndc, [point_init], atol=1e-04)
def test_ndc_to_screen_preset(self):
"""Tests that ndc_to_screen generates expected results."""
point = ((1.1, 2.2, 3.3), (5.1, 5.2, 5.3))
lower_left_corner = ((6.4, 4.8), (0.0, 0.0))
screen_dimensions = ((640.0, 480.0), (300.0, 400.0))
near = ((1.0,), (11.0,))
far = ((10.0,), (100.0,))
pred = glm.ndc_to_screen(point, lower_left_corner, screen_dimensions, near,
far)
gt = ((678.40002441, 772.79998779, 20.34999847), (915.0, 1240.0,
291.3500061))
self.assertAllClose(pred, gt)
@parameterized.parameters(
((3,), (2,), (2,), (1,), (1,)),
((None, 3), (None, 2), (None, 2), (None, 1), (None, 1)),
((None, 5, 3), (None, 5, 2), (None, 5, 2), (None, 5, 1), (None, 5, 1)),
)
def test_ndc_to_screen_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.ndc_to_screen, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (2,), (2,), (2,), (1,),
(1,)),
("must have exactly 2 dimensions in axis -1", (3,), (1,), (2,), (1,),
(1,)),
("must have exactly 2 dimensions in axis -1", (3,), (2,), (3,), (1,),
(1,)),
("must have exactly 1 dimensions in axis -1", (3,), (2,), (2,), (2,),
(1,)),
("must have exactly 1 dimensions in axis -1", (3,), (2,), (2,), (1,),
(3,)),
("Not all batch dimensions are identical", (3,), (2, 2), (3, 2), (3, 1),
(3, 1)),
("Not all batch dimensions are broadcast-compatible", (4, 3), (3, 2),
(3, 2), (3, 1), (3, 1)),
)
def test_ndc_to_screen_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(glm.ndc_to_screen, error_msg, shapes)
def test_ndc_to_screen_exception_near_raised(self):
"""Tests that an exception is raised when `near` is not strictly positive."""
point = np.random.uniform(size=(3,))
lower_left_corner = np.random.uniform(size=(2,))
screen_dimensions = np.random.uniform(1.0, 2.0, size=(2,))
near = np.random.uniform(-1.0, 0.0, size=(1,))
far = np.random.uniform(1.0, 2.0, size=(1,))
with self.subTest("negative_near"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
glm.ndc_to_screen(point, lower_left_corner, screen_dimensions, near,
far))
with self.subTest("zero_near"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
glm.ndc_to_screen(point, lower_left_corner, screen_dimensions,
np.array((0.0,)), far))
def test_ndc_to_screen_exception_far_raised(self):
"""Tests that an exception is raised if `far` is not greater than `near`."""
point = np.random.uniform(size=(3,))
lower_left_corner = np.random.uniform(size=(2,))
screen_dimensions = np.random.uniform(1.0, 2.0, size=(2,))
near = np.random.uniform(1.0, 10.0, size=(1,))
far = near + np.random.uniform(-1.0, 0.0, size=(1,))
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
glm.ndc_to_screen(point, lower_left_corner, screen_dimensions, near,
far))
def test_ndc_to_screen_exception_screen_dimensions_raised(self):
"""Tests that an exception is raised when `screen_dimensions` is not strictly positive."""
point = np.random.uniform(size=(3,))
lower_left_corner = np.random.uniform(size=(2,))
screen_dimensions = np.random.uniform(-1.0, 0.0, size=(2,))
near = np.random.uniform(1.0, 10.0, size=(1,))
far = near + np.random.uniform(0.1, 1.0, size=(1,))
with self.subTest("negative_screen_dimensions"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
glm.ndc_to_screen(point, lower_left_corner, screen_dimensions, near,
far))
with self.subTest("zero_screen_dimensions"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
glm.ndc_to_screen(point, lower_left_corner, np.array((0.0, 0.0)),
near, far))
def test_ndc_to_screen_jacobian_preset(self):
"""Tests the Jacobian of ndc_to_screen."""
point_init = np.array(((1.1, 2.2, 3.3), (5.1, 5.2, 5.3)))
lower_left_corner_init = np.array(((6.4, 4.8), (0.0, 0.0)))
screen_dimensions_init = np.array(((640.0, 480.0), (300.0, 400.0)))
near_init = np.array(((1.0,), (11.0,)))
far_init = np.array(((10.0,), (100.0,)))
self.assert_jacobian_is_correct_fn(glm.ndc_to_screen, [
point_init, lower_left_corner_init, screen_dimensions_init, near_init,
far_init
])
def test_ndc_to_screen_jacobian_random(self):
"""Tests the Jacobian of ndc_to_screen."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
point_init = np.random.uniform(size=tensor_shape + [3])
lower_left_corner_init = np.random.uniform(size=tensor_shape + [2])
screen_dimensions_init = np.random.uniform(
1.0, 1000.0, size=tensor_shape + [2])
near_init = np.random.uniform(1.0, 10.0, size=tensor_shape + [1])
far_init = near_init + np.random.uniform(0.1, 1.0, size=(1,))
self.assert_jacobian_is_correct_fn(glm.ndc_to_screen, [
point_init, lower_left_corner_init, screen_dimensions_init, near_init,
far_init
])
def test_model_to_screen_preset(self):
"""Tests that model_to_screen generates expected results."""
point_world_space = np.array(((3.1, 4.1, 5.1), (-1.1, 2.2, -3.1)))
camera_position = np.array(((0.0, 0.0, 0.0), (0.4, -0.8, 0.1)))
camera_up = np.array(((0.0, 1.0, 0.0), (0.0, 0.0, 1.0)))
look_at_point = np.array(((0.0, 0.0, 1.0), (0.0, 1.0, 0.0)))
vertical_field_of_view = np.array(
((60.0 * math.pi / 180.0,), (65 * math.pi / 180,)))
lower_left_corner = np.array(((0.0, 0.0), (10.0, 20.0)))
screen_dimensions = np.array(((501.0, 501.0), (400.0, 600.0)))
near = np.array(((0.01,), (1.0,)))
far = np.array(((4.0,), (3.0,)))
# Build matrices.
model_to_eye_matrix = look_at.right_handed(camera_position, look_at_point,
camera_up)
perspective_matrix = perspective.right_handed(
vertical_field_of_view,
screen_dimensions[..., 0:1] / screen_dimensions[..., 1:2], near, far)
pred_screen, pred_w = glm.model_to_screen(point_world_space,
model_to_eye_matrix,
perspective_matrix,
screen_dimensions,
lower_left_corner)
gt_screen = ((-13.23016357, 599.30444336, 4.00215721),
(98.07017517, -95.40383911, 3.1234405))
gt_w = ((5.1,), (3.42247,))
self.assertAllClose(pred_screen, gt_screen, atol=1e-5, rtol=1e-5)
self.assertAllClose(pred_w, gt_w)
@parameterized.parameters(
((3,), (4, 4), (4, 4), (2,), (2,)),
((640, 480, 3), (4, 4), (4, 4), (2,), (2,)),
((None, 3), (None, 4, 4), (None, 4, 4), (None, 2), (None, 2)),
((3,), (None, 1, 4, 4), (None, 1, 4, 4), (None, 1, 2), (None, 1, 2)),
)
def test_model_to_screen_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.model_to_screen, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (9.0, 12.0), (0.0, 0.0),
(2,), (4, 4), (4, 4)),
("must have exactly 4 dimensions in axis -1", (9.0, 12.0), (0.0, 0.0),
(3,), (4, 3), (4, 4)),
("must have exactly 4 dimensions in axis -2", (9.0, 12.0), (0.0, 0.0),
(3,), (3, 4), (4, 4)),
("must have exactly 4 dimensions in axis -1", (9.0, 12.0), (0.0, 0.0),
(3,), (4, 4), (4, 3)),
("must have exactly 4 dimensions in axis -2", (9.0, 12.0), (0.0, 0.0),
(3,), (4, 4), (3, 4)),
("Not all batch dimensions are broadcast-compatible", (9.0, 12.0),
(0.0, 0.0), (2, 3), (3, 4, 4), (3, 4, 4)),
)
def test_model_to_screen_exception_raised(self, error_msg, screen_dimensions,
lower_left_corner, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(
func=glm.model_to_screen,
error_msg=error_msg,
shapes=shapes,
screen_dimensions=screen_dimensions,
lower_left_corner=lower_left_corner)
def test_model_to_screen_jacobian_preset(self):
"""Tests the Jacobian of model_to_screen."""
point_world_space_init = np.array(((3.1, 4.1, 5.1), (-1.1, 2.2, -3.1)))
camera_position_init = np.array(((0.0, 0.0, 0.0), (0.4, -0.8, 0.1)))
camera_up_init = np.array(((0.0, 1.0, 0.0), (0.0, 0.0, 1.0)))
look_at_init = np.array(((0.0, 0.0, 1.0), (0.0, 1.0, 0.0)))
vertical_field_of_view_init = np.array(
((60.0 * math.pi / 180.0,), (65 * math.pi / 180,)))
lower_left_corner_init = np.array(((0.0, 0.0), (10.0, 20.0)))
screen_dimensions_init = np.array(((501.0, 501.0), (400.0, 600.0)))
near_init = np.array(((0.01,), (1.0,)))
far_init = np.array(((4.0,), (3.0,)))
# Build matrices.
model_to_eye_matrix = look_at.right_handed(camera_position_init,
look_at_init, camera_up_init)
perspective_matrix = perspective.right_handed(
vertical_field_of_view_init,
screen_dimensions_init[..., 0:1] / screen_dimensions_init[..., 1:2],
near_init, far_init)
args = [
point_world_space_init, model_to_eye_matrix, perspective_matrix,
screen_dimensions_init, lower_left_corner_init
]
with self.subTest(name="jacobian_y_projection"):
self.assert_jacobian_is_correct_fn(
lambda *args: glm.model_to_screen(*args)[0], args, atol=1e-4)
# TODO(julienvalentin): will be fixed before submission
# with self.subTest(name="jacobian_w"):
# self.assert_jacobian_is_correct_fn(
# lambda *args: glm.model_to_screen(*args)[1], args)
def test_model_to_screen_jacobian_random(self):
"""Tests the Jacobian of model_to_screen."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
point_world_space_init = np.random.uniform(size=tensor_shape + [3])
camera_position_init = np.random.uniform(size=tensor_shape + [3])
camera_up_init = np.random.uniform(size=tensor_shape + [3])
look_at_init = np.random.uniform(size=tensor_shape + [3])
vertical_field_of_view_init = np.random.uniform(
0.1, 1.0, size=tensor_shape + [1])
lower_left_corner_init = np.random.uniform(size=tensor_shape + [2])
screen_dimensions_init = np.random.uniform(
0.1, 1.0, size=tensor_shape + [2])
near_init = np.random.uniform(0.1, 1.0, size=tensor_shape + [1])
far_init = near_init + np.random.uniform(0.1, 1.0, size=tensor_shape + [1])
# Build matrices.
model_to_eye_matrix = look_at.right_handed(camera_position_init,
look_at_init, camera_up_init)
perspective_matrix = perspective.right_handed(
vertical_field_of_view_init,
screen_dimensions_init[..., 0:1] / screen_dimensions_init[..., 1:2],
near_init, far_init)
args = [
point_world_space_init, model_to_eye_matrix, perspective_matrix,
screen_dimensions_init, lower_left_corner_init
]
with self.subTest(name="jacobian_y_projection"):
self.assert_jacobian_is_correct_fn(
lambda *args: glm.model_to_screen(*args)[0], args, atol=1e-4)
# TODO(julienvalentin): will be fixed before submission
# with self.subTest(name="jacobian_w"):
# self.assert_jacobian_is_correct_fn(
# lambda *args: glm.model_to_screen(*args)[1], args)
def test_perspective_correct_interpolation_preset(self):
"""Tests that perspective_correct_interpolation generates expected results."""
camera_origin = np.array((0.0, 0.0, 0.0))
camera_up = np.array((0.0, 1.0, 0.0))
look_at_point = np.array((0.0, 0.0, 1.0))
fov = np.array((90.0 * np.math.pi / 180.0,))
bottom_left = np.array((0.0, 0.0))
image_size = np.array((501.0, 501.0))
near_plane = np.array((0.01,))
far_plane = np.array((10.0,))
batch_size = np.random.randint(1, 5)
triangle_x_y = np.random.uniform(-10.0, 10.0, (batch_size, 3, 2))
triangle_z = np.random.uniform(2.0, 10.0, (batch_size, 3, 1))
triangles = np.concatenate((triangle_x_y, triangle_z), axis=-1)
# Builds barycentric weights.
barycentric_weights = np.random.uniform(size=(batch_size, 3))
barycentric_weights = barycentric_weights / np.sum(
barycentric_weights, axis=-1, keepdims=True)
# Barycentric interpolation of vertex positions.
convex_combination = np.einsum("ba, bac -> bc", barycentric_weights,
triangles)
# Build matrices.
model_to_eye_matrix = look_at.right_handed(camera_origin, look_at_point,
camera_up)
perspective_matrix = perspective.right_handed(
fov, (image_size[0:1] / image_size[1:2]), near_plane, far_plane)
# Computes where those points project in screen coordinates.
pixel_position, _ = glm.model_to_screen(convex_combination,
model_to_eye_matrix,
perspective_matrix, image_size,
bottom_left)
# Builds attributes.
num_pixels = pixel_position.shape[0]
attribute_size = np.random.randint(10)
attributes = np.random.uniform(size=(num_pixels, 3, attribute_size))
prediction = glm.perspective_correct_interpolation(triangles, attributes,
pixel_position[..., 0:2],
model_to_eye_matrix,
perspective_matrix,
image_size, bottom_left)
groundtruth = np.einsum("ba, bac -> bc", barycentric_weights, attributes)
self.assertAllClose(prediction, groundtruth)
def test_perspective_correct_interpolation_jacobian_preset(self):
"""Tests the Jacobian of perspective_correct_interpolation."""
vertices_init = np.tile(
((-0.2857143, 0.2857143, 5.0), (0.2857143, 0.2857143, 0.5),
(0.0, -0.2857143, 1.0)), (2, 1, 1))
attributes_init = np.tile(
(((1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0))), (2, 1, 1))
pixel_position_init = np.array(((125.5, 375.5), (250.5, 250.5)))
camera_position_init = np.tile((0.0, 0.0, 0.0), (2, 3, 1))
look_at_init = np.tile((0.0, 0.0, 1.0), (2, 3, 1))
up_vector_init = np.tile((0.0, 1.0, 0.0), (2, 3, 1))
vertical_field_of_view_init = np.tile((1.0471975511965976,), (2, 3, 1))
screen_dimensions_init = np.tile((501.0, 501.0), (2, 3, 1))
near_init = np.tile((0.01,), (2, 3, 1))
far_init = np.tile((10.0,), (2, 3, 1))
lower_left_corner_init = np.tile((0.0, 0.0), (2, 3, 1))
# Build matrices.
model_to_eye_matrix_init = look_at.right_handed(camera_position_init,
look_at_init,
up_vector_init)
perspective_matrix_init = perspective.right_handed(
vertical_field_of_view_init,
screen_dimensions_init[..., 0:1] / screen_dimensions_init[..., 1:2],
near_init, far_init)
self.assert_jacobian_is_correct_fn(glm.perspective_correct_interpolation, [
vertices_init, attributes_init, pixel_position_init,
model_to_eye_matrix_init, perspective_matrix_init,
screen_dimensions_init, lower_left_corner_init
])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_perspective_correct_interpolation_jacobian_random(self):
"""Tests the Jacobian of perspective_correct_interpolation."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
vertices_init = np.random.uniform(size=tensor_shape + [3, 3])
num_attributes = np.random.randint(1, 10)
attributes_init = np.random.uniform(size=tensor_shape + [3, num_attributes])
pixel_position_init = np.random.uniform(size=tensor_shape + [2])
camera_position_init = np.random.uniform(size=tensor_shape + [3, 3])
look_at_init = np.random.uniform(size=tensor_shape + [3, 3])
up_vector_init = np.random.uniform(size=tensor_shape + [3, 3])
vertical_field_of_view_init = np.random.uniform(
0.1, 1.0, size=tensor_shape + [3, 1])
screen_dimensions_init = np.random.uniform(
1.0, 10.0, size=tensor_shape + [3, 2])
near_init = np.random.uniform(1.0, 10.0, size=tensor_shape + [3, 1])
far_init = near_init + np.random.uniform(
0.1, 1.0, size=tensor_shape + [3, 1])
lower_left_corner_init = np.random.uniform(size=tensor_shape + [3, 2])
# Build matrices.
model_to_eye_matrix_init = look_at.right_handed(camera_position_init,
look_at_init,
up_vector_init)
perspective_matrix_init = perspective.right_handed(
vertical_field_of_view_init,
screen_dimensions_init[..., 0:1] / screen_dimensions_init[..., 1:2],
near_init, far_init)
self.assert_jacobian_is_correct_fn(
glm.perspective_correct_interpolation, [
vertices_init, attributes_init, pixel_position_init,
model_to_eye_matrix_init, perspective_matrix_init,
screen_dimensions_init, lower_left_corner_init
],
atol=1e-4)
@parameterized.parameters(
((3, 3), (2,), (4, 4), (4, 4), (2,)),
((3, 3), (7, 2), (4, 4), (4, 4), (2,)),
((3, 3), (None, 2), (4, 4), (4, 4), (2,)),
((7, 3, 3), (2,), (4, 4), (4, 4), (2,)),
((None, 3, 3), (2,), (4, 4), (4, 4), (2,)),
)
def test_perspective_correct_barycentrics_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.perspective_correct_barycentrics,
shapes)
@parameterized.parameters(
("must have exactly 2 dimensions in axis -1", (3, 3), (2,), (4, 4),
(4, 4), (3,)),
("must have exactly 3 dimensions in axis -1", (3, 4), (2,), (4, 4),
(4, 4), (3,)),
("must have exactly 3 dimensions in axis -2", (4, 3), (2,), (4, 4),
(4, 4), (3,)),
)
def test_perspective_correct_barycentrics_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(glm.perspective_correct_barycentrics,
error_msg, shapes)
def test_perspective_correct_barycentrics_preset(self):
"""Tests that perspective_correct_barycentrics generates expected results."""
camera_origin = np.array((0.0, 0.0, 0.0))
camera_up = np.array((0.0, 1.0, 0.0))
look_at_point = np.array((0.0, 0.0, 1.0))
fov = np.array((90.0 * np.math.pi / 180.0,))
bottom_left = np.array((0.0, 0.0))
image_size = np.array((501.0, 501.0))
near_plane = np.array((0.01,))
far_plane = np.array((10.0,))
batch_size = np.random.randint(1, 5)
triangle_x_y = np.random.uniform(-10.0, 10.0, (batch_size, 3, 2))
triangle_z = np.random.uniform(2.0, 10.0, (batch_size, 3, 1))
triangles = np.concatenate((triangle_x_y, triangle_z), axis=-1)
# Builds barycentric weights.
barycentric_weights = np.random.uniform(size=(batch_size, 3))
barycentric_weights = barycentric_weights / np.sum(
barycentric_weights, axis=-1, keepdims=True)
# Barycentric interpolation of vertex positions.
convex_combination = np.einsum("ba, bac -> bc", barycentric_weights,
triangles)
# Build matrices.
model_to_eye_matrix = look_at.right_handed(camera_origin, look_at_point,
camera_up)
perspective_matrix = perspective.right_handed(
fov, (image_size[0:1] / image_size[1:2]), near_plane, far_plane)
# Computes where those points project in screen coordinates.
pixel_position, _ = glm.model_to_screen(convex_combination,
model_to_eye_matrix,
perspective_matrix, image_size,
bottom_left)
prediction = glm.perspective_correct_barycentrics(triangles,
pixel_position[..., 0:2],
model_to_eye_matrix,
perspective_matrix,
image_size, bottom_left)
self.assertAllClose(prediction, barycentric_weights)
def test_perspective_correct_barycentrics_jacobian_random(self):
"""Tests the Jacobian of perspective_correct_barycentrics."""
tensor_size = np.random.randint(1, 3)
tensor_shape = np.random.randint(1, 5, size=(tensor_size)).tolist()
vertices_init = np.random.uniform(size=tensor_shape + [3, 3])
pixel_position_init = np.random.uniform(size=tensor_shape + [2])
camera_position_init = np.random.uniform(size=tensor_shape + [3, 3])
look_at_init = np.random.uniform(size=tensor_shape + [3, 3])
up_vector_init = np.random.uniform(size=tensor_shape + [3, 3])
vertical_field_of_view_init = np.random.uniform(
0.1, 1.0, size=tensor_shape + [3, 1])
screen_dimensions_init = np.random.uniform(
1.0, 10.0, size=tensor_shape + [3, 2])
near_init = np.random.uniform(1.0, 10.0, size=tensor_shape + [3, 1])
far_init = near_init + np.random.uniform(
0.1, 1.0, size=tensor_shape + [3, 1])
lower_left_corner_init = np.random.uniform(size=tensor_shape + [3, 2])
# Build matrices.
model_to_eye_matrix_init = look_at.right_handed(camera_position_init,
look_at_init,
up_vector_init)
perspective_matrix_init = perspective.right_handed(
vertical_field_of_view_init,
screen_dimensions_init[..., 0:1] / screen_dimensions_init[..., 1:2],
near_init, far_init)
self.assert_jacobian_is_correct_fn(
glm.perspective_correct_barycentrics, [
vertices_init, pixel_position_init, model_to_eye_matrix_init,
perspective_matrix_init, screen_dimensions_init,
lower_left_corner_init
],
atol=1e-4)
@parameterized.parameters(
((3, 7), (3,)),
((2, 3, 7), (2, 3)),
((None, 3, 7), (None, 3)),
)
def test_interpolate_attributes_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(glm.interpolate_attributes, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -2", (2, 7), (3,)),
("must have exactly 3 dimensions in axis -1", (3, 7), (2,)),
("Not all batch dimensions are broadcast-compatible", (5, 3, 7), (4, 3)),
)
def test_interpolate_attributes_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(glm.interpolate_attributes, error_msg,
shapes)
def test_interpolate_attributes_random(self):
"""Checks the output of interpolate_attributes."""
attributes = np.random.uniform(-1.0, 1.0, size=(3,))
barycentric = np.random.uniform(0.0, 1.0, size=(3,))
barycentric = barycentric / np.linalg.norm(
barycentric, axis=-1, ord=1, keepdims=True)
groundtruth = np.sum(attributes * barycentric, keepdims=True)
attributes = np.reshape(attributes, (3, 1))
prediction = glm.interpolate_attributes(attributes, barycentric)
self.assertAllClose(groundtruth, prediction)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_interpolate_attributes_jacobian_random(self):
"""Tests the jacobian of interpolate_attributes."""
batch_size = np.random.randint(1, 5)
attributes = np.random.uniform(-1.0, 1.0, size=(batch_size, 3, 1))
barycentric = np.random.uniform(
0.0, 1.0, size=(
batch_size,
3,
))
barycentric = barycentric / np.linalg.norm(
barycentric, axis=-1, ord=1, keepdims=True)
self.assert_jacobian_is_correct_fn(glm.interpolate_attributes,
[attributes, barycentric])
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
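The library-code renames listed above are mechanical, one-for-one substitutions at each call site. A minimal, hypothetical sketch of a migrated call site is shown below; the safe_normalize helper is illustrative only and is not part of this PR's diff.

import tensorflow as tf

def safe_normalize(vector, name="safe_normalize"):
  """Hypothetical helper showing the TF2-style calls adopted by the migration."""
  with tf.name_scope(name):  # was: tf.compat.v1.name_scope
    vector = tf.convert_to_tensor(value=vector, dtype=tf.float32)
    # Assumes the last dimension of `vector` is statically known.
    last_dim = tf.compat.dimension_value(vector.shape[-1])  # was: tf.compat.v1.dimension_value
    tf.debugging.assert_equal(last_dim, 3)  # was: tf.compat.v1.assert_equal
    norm = tf.norm(tensor=vector, axis=-1, keepdims=True)
    return tf.where(norm > 0.0, vector / norm, vector)  # was: tf.compat.v1.where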
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
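On the test side, the removals listed above amount to dropping TF1 session plumbing in favor of eager evaluation through the shared test harness. A minimal hypothetical sketch follows; MigrationStyleTest is illustrative and not a test taken from this PR.

import tensorflow as tf

from tensorflow_graphics.util import test_case

class MigrationStyleTest(test_case.TestCase):

  def test_doubling(self):
    prediction = tf.constant((1.0, 2.0)) * 2.0
    # TF1 style, removed by this migration:
    #   with tf.compat.v1.Session() as sess:
    #     sess.run(tf.compat.v1.global_variables_initializer())
    #     self.assertAllClose((2.0, 4.0), sess.run(prediction))
    # TF2 style: eager execution plus self.evaluate.
    self.assertAllClose((2.0, 4.0), self.evaluate(prediction))

if __name__ == "__main__":
  test_case.main()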
| ./tensorflow_graphics/rendering/reflectance/tests/blinn_phong_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for Blinn-Phong reflectance."""
import math
import sys
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.rendering.reflectance import blinn_phong
from tensorflow_graphics.util import test_case
class BlinnPhongTest(test_case.TestCase):
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_brdf_jacobian_random(self):
"""Tests the Jacobian of brdf."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
direction_incoming_light_init = np.random.uniform(
-1.0, 1.0, size=tensor_shape + [3])
direction_outgoing_light_init = np.random.uniform(
-1.0, 1.0, size=tensor_shape + [3])
surface_normal_init = np.random.uniform(-1.0, 1.0, size=tensor_shape + [3])
shininess_init = np.random.uniform(size=tensor_shape + [1])
albedo_init = np.random.random(tensor_shape + [3])
self.assert_jacobian_is_correct_fn(blinn_phong.brdf, [
direction_incoming_light_init, direction_outgoing_light_init,
surface_normal_init, shininess_init, albedo_init
])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_brdf_jacobian_preset(self):
delta = 1e-5
direction_incoming_light_init = np.array((delta, -1.0, 0.0))
direction_outgoing_light_init = np.array((delta, 1.0, 0.0))
surface_normal_init = np.array((1.0, 0.0, 0.0))
shininess_init = np.array((1.0,))
albedo_init = np.array((1.0, 1.0, 1.0))
self.assert_jacobian_is_correct_fn(
blinn_phong.brdf, [
direction_incoming_light_init, direction_outgoing_light_init,
surface_normal_init, shininess_init, albedo_init
],
delta=delta / 10.0)
@parameterized.parameters(
(-1.0, 1.0, 1.0 / math.pi),
(1.0, 1.0, 0.0),
(-1.0, -1.0, 0.0),
(1.0, -1.0, 0.0),
)
def test_brdf_random(self, incoming_yz, outgoing_yz, ratio):
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
shininess = np.zeros(shape=tensor_shape + [1])
albedo = np.random.uniform(low=0.0, high=1.0, size=tensor_shape + [3])
direction_incoming_light = np.random.uniform(
low=-1.0, high=1.0, size=tensor_shape + [3])
direction_outgoing_light = np.random.uniform(
low=-1.0, high=1.0, size=tensor_shape + [3])
surface_normal = np.array((0.0, 1.0, 1.0))
direction_incoming_light[..., 1:3] = incoming_yz
direction_outgoing_light[..., 1:3] = outgoing_yz
direction_incoming_light = direction_incoming_light / np.linalg.norm(
direction_incoming_light, axis=-1, keepdims=True)
direction_outgoing_light = direction_outgoing_light / np.linalg.norm(
direction_outgoing_light, axis=-1, keepdims=True)
surface_normal = surface_normal / np.linalg.norm(
surface_normal, axis=-1, keepdims=True)
gt = albedo * ratio
pred = blinn_phong.brdf(direction_incoming_light, direction_outgoing_light,
surface_normal, shininess, albedo)
self.assertAllClose(gt, pred)
def test_brdf_exceptions_raised(self):
"""Tests that the exceptions are raised correctly."""
direction_incoming_light = np.random.uniform(-1.0, 1.0, size=(3,))
direction_outgoing_light = np.random.uniform(-1.0, 1.0, size=(3,))
surface_normal = np.random.uniform(-1.0, 1.0, size=(3,))
shininess = np.random.uniform(0.0, 1.0, size=(1,))
albedo = np.random.uniform(0.0, 1.0, (3,))
with self.subTest(name="assert_on_direction_incoming_light_not_normalized"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
blinn_phong.brdf(direction_incoming_light, direction_outgoing_light,
surface_normal, shininess, albedo))
direction_incoming_light /= np.linalg.norm(
direction_incoming_light, axis=-1)
with self.subTest(name="assert_on_direction_outgoing_light_not_normalized"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
blinn_phong.brdf(direction_incoming_light, direction_outgoing_light,
surface_normal, shininess, albedo))
direction_outgoing_light /= np.linalg.norm(
direction_outgoing_light, axis=-1)
with self.subTest(name="assert_on_surface_normal_not_normalized"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
blinn_phong.brdf(direction_incoming_light, direction_outgoing_light,
surface_normal, shininess, albedo))
surface_normal /= np.linalg.norm(surface_normal, axis=-1)
with self.subTest(name="assert_on_albedo_not_normalized"):
albedo = np.random.uniform(-10.0, -sys.float_info.epsilon, (3,))
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
blinn_phong.brdf(direction_incoming_light, direction_outgoing_light,
surface_normal, shininess, albedo))
albedo = np.random.uniform(sys.float_info.epsilon, 10.0, (3,))
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
blinn_phong.brdf(direction_incoming_light, direction_outgoing_light,
surface_normal, shininess, albedo))
@parameterized.parameters(
((3,), (3,), (3,), (1,), (3,)),
((None, 3), (None, 3), (None, 3), (None, 1), (None, 3)),
((1, 3), (1, 3), (1, 3), (1, 1), (1, 3)),
((2, 3), (2, 3), (2, 3), (2, 1), (2, 3)),
((1, 3), (1, 2, 3), (1, 2, 1, 3), (1, 2, 1), (1, 3)),
((3,), (1, 3), (1, 2, 3), (1, 2, 2, 1), (1, 2, 2, 2, 3)),
((1, 2, 2, 2, 3), (1, 2, 2, 3), (1, 2, 3), (1, 1), (3,)),
)
def test_brdf_shape_exception_not_raised(self, *shape):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(blinn_phong.brdf, shape)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (1,), (3,), (3,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (2,), (3,), (3,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (4,), (3,), (3,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (1,), (3,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (2,), (3,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (4,), (3,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (1,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (2,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (4,), (1,),
(3,)),
("must have exactly 1 dimensions in axis -1", (3,), (3,), (3,), (2,),
(3,)),
("must have exactly 1 dimensions in axis -1", (3,), (3,), (3,), (3,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (3,), (1,),
(4,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (3,), (1,),
(2,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (3,), (1,),
(1,)),
("Not all batch dimensions are broadcast-compatible.", (2, 3), (3, 3),
(3,), (1,), (3,)),
)
def test_brdf_shape_exception_raised(self, error_msg, *shape):
"""Tests that the shape exception is raised."""
self.assert_exception_is_raised(blinn_phong.brdf, error_msg, shape)
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for Blinn-Phong reflectance."""
import math
import sys
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.rendering.reflectance import blinn_phong
from tensorflow_graphics.util import test_case
class BlinnPhongTest(test_case.TestCase):
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_brdf_jacobian_random(self):
"""Tests the Jacobian of brdf."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
direction_incoming_light_init = np.random.uniform(
-1.0, 1.0, size=tensor_shape + [3])
direction_outgoing_light_init = np.random.uniform(
-1.0, 1.0, size=tensor_shape + [3])
surface_normal_init = np.random.uniform(-1.0, 1.0, size=tensor_shape + [3])
shininess_init = np.random.uniform(size=tensor_shape + [1])
albedo_init = np.random.random(tensor_shape + [3])
self.assert_jacobian_is_correct_fn(blinn_phong.brdf, [
direction_incoming_light_init, direction_outgoing_light_init,
surface_normal_init, shininess_init, albedo_init
])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_brdf_jacobian_preset(self):
delta = 1e-5
direction_incoming_light_init = np.array((delta, -1.0, 0.0))
direction_outgoing_light_init = np.array((delta, 1.0, 0.0))
surface_normal_init = np.array((1.0, 0.0, 0.0))
shininess_init = np.array((1.0,))
albedo_init = np.array((1.0, 1.0, 1.0))
self.assert_jacobian_is_correct_fn(
blinn_phong.brdf, [
direction_incoming_light_init, direction_outgoing_light_init,
surface_normal_init, shininess_init, albedo_init
],
delta=delta / 10.0)
@parameterized.parameters(
(-1.0, 1.0, 1.0 / math.pi),
(1.0, 1.0, 0.0),
(-1.0, -1.0, 0.0),
(1.0, -1.0, 0.0),
)
def test_brdf_random(self, incoming_yz, outgoing_yz, ratio):
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
shininess = np.zeros(shape=tensor_shape + [1])
albedo = np.random.uniform(low=0.0, high=1.0, size=tensor_shape + [3])
direction_incoming_light = np.random.uniform(
low=-1.0, high=1.0, size=tensor_shape + [3])
direction_outgoing_light = np.random.uniform(
low=-1.0, high=1.0, size=tensor_shape + [3])
surface_normal = np.array((0.0, 1.0, 1.0))
direction_incoming_light[..., 1:3] = incoming_yz
direction_outgoing_light[..., 1:3] = outgoing_yz
direction_incoming_light = direction_incoming_light / np.linalg.norm(
direction_incoming_light, axis=-1, keepdims=True)
direction_outgoing_light = direction_outgoing_light / np.linalg.norm(
direction_outgoing_light, axis=-1, keepdims=True)
surface_normal = surface_normal / np.linalg.norm(
surface_normal, axis=-1, keepdims=True)
gt = albedo * ratio
pred = blinn_phong.brdf(direction_incoming_light, direction_outgoing_light,
surface_normal, shininess, albedo)
self.assertAllClose(gt, pred)
def test_brdf_exceptions_raised(self):
"""Tests that the exceptions are raised correctly."""
direction_incoming_light = np.random.uniform(-1.0, 1.0, size=(3,))
direction_outgoing_light = np.random.uniform(-1.0, 1.0, size=(3,))
surface_normal = np.random.uniform(-1.0, 1.0, size=(3,))
shininess = np.random.uniform(0.0, 1.0, size=(1,))
albedo = np.random.uniform(0.0, 1.0, (3,))
with self.subTest(name="assert_on_direction_incoming_light_not_normalized"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
blinn_phong.brdf(direction_incoming_light, direction_outgoing_light,
surface_normal, shininess, albedo))
direction_incoming_light /= np.linalg.norm(
direction_incoming_light, axis=-1)
with self.subTest(name="assert_on_direction_outgoing_light_not_normalized"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
blinn_phong.brdf(direction_incoming_light, direction_outgoing_light,
surface_normal, shininess, albedo))
direction_outgoing_light /= np.linalg.norm(
direction_outgoing_light, axis=-1)
with self.subTest(name="assert_on_surface_normal_not_normalized"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
blinn_phong.brdf(direction_incoming_light, direction_outgoing_light,
surface_normal, shininess, albedo))
surface_normal /= np.linalg.norm(surface_normal, axis=-1)
with self.subTest(name="assert_on_albedo_not_normalized"):
albedo = np.random.uniform(-10.0, -sys.float_info.epsilon, (3,))
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
blinn_phong.brdf(direction_incoming_light, direction_outgoing_light,
surface_normal, shininess, albedo))
albedo = np.random.uniform(sys.float_info.epsilon, 10.0, (3,))
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
blinn_phong.brdf(direction_incoming_light, direction_outgoing_light,
surface_normal, shininess, albedo))
@parameterized.parameters(
((3,), (3,), (3,), (1,), (3,)),
((None, 3), (None, 3), (None, 3), (None, 1), (None, 3)),
((1, 3), (1, 3), (1, 3), (1, 1), (1, 3)),
((2, 3), (2, 3), (2, 3), (2, 1), (2, 3)),
((1, 3), (1, 2, 3), (1, 2, 1, 3), (1, 2, 1), (1, 3)),
((3,), (1, 3), (1, 2, 3), (1, 2, 2, 1), (1, 2, 2, 2, 3)),
((1, 2, 2, 2, 3), (1, 2, 2, 3), (1, 2, 3), (1, 1), (3,)),
)
def test_brdf_shape_exception_not_raised(self, *shape):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(blinn_phong.brdf, shape)
@parameterized.parameters(
("must have exactly 3 dimensions in axis -1", (1,), (3,), (3,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (2,), (3,), (3,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (4,), (3,), (3,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (1,), (3,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (2,), (3,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (4,), (3,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (1,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (2,), (1,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (4,), (1,),
(3,)),
("must have exactly 1 dimensions in axis -1", (3,), (3,), (3,), (2,),
(3,)),
("must have exactly 1 dimensions in axis -1", (3,), (3,), (3,), (3,),
(3,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (3,), (1,),
(4,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (3,), (1,),
(2,)),
("must have exactly 3 dimensions in axis -1", (3,), (3,), (3,), (1,),
(1,)),
("Not all batch dimensions are broadcast-compatible.", (2, 3), (3, 3),
(3,), (1,), (3,)),
)
def test_brdf_shape_exception_raised(self, error_msg, *shape):
"""Tests that the shape exception is raised."""
self.assert_exception_is_raised(blinn_phong.brdf, error_msg, shape)
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/math/optimizer/levenberg_marquardt.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""This module implements a Levenberg-Marquardt optimizer.
Minimizes \\(\min_{\mathbf{x}} \sum_i \|\mathbf{r}_i(\mathbf{x})\|^2_2\\) where
\\(\mathbf{r}_i(\mathbf{x})\\)
are the residuals. This function implements Levenberg-Marquardt, an iterative
process that linearizes the residuals and iteratively finds a displacement
\\(\Delta \mathbf{x}\\) such that at iteration \\(t\\) an update
\\(\mathbf{x}_{t+1} = \mathbf{x}_{t} + \Delta \mathbf{x}\\) improving the
loss can be computed. The displacement is computed by solving an optimization
problem
\\(\min_{\Delta \mathbf{x}} \sum_i
\|\mathbf{J}_i(\mathbf{x}_{t})\Delta\mathbf{x} +
\mathbf{r}_i(\mathbf{x}_t)\|^2_2 + \lambda\|\Delta \mathbf{x} \|_2^2\\) where
\\(\mathbf{J}_i(\mathbf{x}_{t})\\) is the Jacobian of \\(\mathbf{r}_i\\)
computed at \\(\mathbf{x}_t\\), and \\(\lambda\\) is a scalar weight.
More details on Levenberg-Marquardt can be found on
[this page](https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm).
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.util import export_api
def _values_and_jacobian(residuals, variables):
"""Computes the residual values and the Jacobian matrix.
Args:
residuals: A list of residuals.
variables: A list of variables.
Returns:
The residual values and the Jacobian matrix.
"""
def _compute_residual_values(residuals, variables):
"""Computes the residual values."""
return tf.concat([
tf.reshape(residual(*variables), shape=(-1,)) for residual in residuals
],
axis=-1)
def _compute_jacobian(values, variables, tape):
"""Computes the Jacobian matrix."""
jacobians = tape.jacobian(
values, variables, unconnected_gradients=tf.UnconnectedGradients.ZERO)
return tf.concat([
tf.reshape(jacobian, shape=(tf.shape(input=jacobian)[0], -1))
for jacobian in jacobians
],
axis=-1)
with tf.GradientTape(watch_accessed_variables=False, persistent=True) as tape:
for variable in variables:
tape.watch(variable)
values = _compute_residual_values(residuals, variables)
jacobian = _compute_jacobian(values, variables, tape)
del tape
values = tf.expand_dims(values, axis=-1)
return values, jacobian
def minimize(residuals,
variables,
max_iterations,
regularizer=1e-20,
regularizer_multiplier=10.0,
callback=None,
name=None):
r"""Minimizes a set of residuals in the least-squares sense.
Args:
residuals: A residual or a list/tuple of residuals. A residual is a Python
`callable`.
variables: A variable or a list or tuple of variables defining the starting
point of the minimization.
max_iterations: The maximum number of iterations.
    regularizer: The regularizer is used to dampen the step size when the
      iterations become unstable. The bigger the regularizer is, the
      smaller the step size becomes.
    regularizer_multiplier: If an iteration does not decrease the objective, a
      new regularizer is computed by scaling it by this multiplier.
callback: A callback function that will be called at each iteration. In
graph mode the callback should return an op or list of ops that will
execute the callback logic. The callback needs to be of the form
f(iteration, objective_value, variables). A callback is a Python
`callable`. The callback could be used for logging, for example if one
wants to print the objective value at each iteration.
name: A name for this op. Defaults to "levenberg_marquardt_minimize".
Returns:
The value of the objective function and variables attained at the final
iteration of the minimization procedure.
Raises:
ValueError: If max_iterations is not at least 1.
InvalidArgumentError: This exception is only raised in graph mode if the
Cholesky decomposition is not successful. One likely fix is to increase
      the regularizer. In eager mode this exception is caught and the regularizer
is increased automatically.
Examples:
```python
x = tf.constant(np.random.random_sample(size=(1,2)), dtype=tf.float32)
y = tf.constant(np.random.random_sample(size=(3,1)), dtype=tf.float32)
def f1(x, y):
return x + y
def f2(x, y):
return x * y
def callback(iteration, objective_value, variables):
def print_output(iteration, objective_value, *variables):
print("Iteration:", iteration, "Objective Value:", objective_value)
for variable in variables:
print(variable)
inp = [iteration, objective_value] + variables
return tf.py_function(print_output, inp, [])
minimize_op = minimize(residuals=(f1, f2),
variables=(x, y),
max_iterations=10,
callback=callback)
if not tf.executing_eagerly():
    with tf.compat.v1.Session() as sess:
      sess.run(tf.compat.v1.global_variables_initializer())
sess.run(minimize_op)
```
"""
if not isinstance(variables, (tuple, list)):
variables = [variables]
with tf.compat.v1.name_scope(name, 'levenberg_marquardt_minimize', variables):
if not isinstance(residuals, (tuple, list)):
residuals = [residuals]
if isinstance(residuals, tuple):
residuals = list(residuals)
if isinstance(variables, tuple):
variables = list(variables)
variables = [tf.convert_to_tensor(value=variable) for variable in variables]
multiplier = tf.constant(regularizer_multiplier, dtype=variables[0].dtype)
if max_iterations <= 0:
raise ValueError("'max_iterations' needs to be at least 1.")
def _cond(iteration, regularizer, objective_value, variables):
"""Returns whether any iteration still needs to be performed."""
del regularizer, objective_value, variables
return iteration < max_iterations
def _body(iteration, regularizer, objective_value, variables):
"""Main optimization loop."""
iteration += tf.constant(1, dtype=tf.int32)
values, jacobian = _values_and_jacobian(residuals, variables)
# Solves the normal equation.
try:
updates = tf.linalg.lstsq(jacobian, values, l2_regularizer=regularizer)
shapes = [tf.shape(input=variable) for variable in variables]
splits = [tf.reduce_prod(input_tensor=shape) for shape in shapes]
updates = tf.split(tf.squeeze(updates, axis=-1), splits)
new_variables = [
variable - tf.reshape(update, shape)
for variable, update, shape in zip(variables, updates, shapes)
]
new_objective_value = tf.reduce_sum(input_tensor=[
tf.nn.l2_loss(residual(*new_variables)) for residual in residuals
])
# If the new estimated solution does not decrease the objective value,
# no updates are performed, but a new regularizer is computed.
cond = tf.less(new_objective_value, objective_value)
regularizer = tf.compat.v1.where(
cond, x=regularizer, y=regularizer * multiplier)
objective_value = tf.compat.v1.where(
cond, x=new_objective_value, y=objective_value)
variables = [
tf.compat.v1.where(cond, x=new_variable, y=variable)
for variable, new_variable in zip(variables, new_variables)
]
# Note that catching InvalidArgumentError will only work in eager mode.
except tf.errors.InvalidArgumentError:
regularizer *= multiplier
if callback is not None:
callback_ops = callback(iteration, objective_value, variables)
if callback_ops is not None:
if not isinstance(callback_ops, (tuple, list)):
callback_ops = [callback_ops]
with tf.control_dependencies(callback_ops):
iteration = tf.identity(iteration)
objective_value = tf.identity(objective_value)
variables = [tf.identity(v) for v in variables]
return iteration, regularizer, objective_value, variables
starting_value = tf.reduce_sum(input_tensor=[
tf.nn.l2_loss(residual(*variables)) for residual in residuals
])
dtype = variables[0].dtype
initial = (
tf.constant(0, dtype=tf.int32), # Initial iteration number.
tf.constant(regularizer, dtype=dtype), # Initial regularizer.
starting_value, # Initial objective value.
variables, # Initial variables.
)
_, _, final_objective_value, final_variables = tf.while_loop(
cond=_cond, body=_body, loop_vars=initial, parallel_iterations=1)
return final_objective_value, final_variables
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
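# ---------------------------------------------------------------------------
# Editor's illustrative sketch (not part of the module above): the damped
# least-squares update that `minimize` obtains from tf.linalg.lstsq(jacobian,
# values, l2_regularizer=regularizer) is delta = (J^T J + lambda I)^-1 J^T r.
# The NumPy version below makes that single step explicit; every name here
# (lm_step, residual_fn, jacobian_fn) is local to this sketch.
import numpy as np

def lm_step(residual_fn, jacobian_fn, x, regularizer):
  """Performs one damped Gauss-Newton (Levenberg-Marquardt) step on `x`."""
  r = residual_fn(x)  # Residual vector, shape (m,).
  j = jacobian_fn(x)  # Jacobian matrix, shape (m, n).
  damped = j.T @ j + regularizer * np.eye(j.shape[1])
  return x - np.linalg.solve(damped, j.T @ r)

# Example: minimizing |x^2 - 2|^2 drives x towards sqrt(2) ~ 1.4142.
x = np.array([1.0])
for _ in range(5):
  x = lm_step(lambda v: v ** 2 - 2.0, lambda v: np.diag(2.0 * v), x, 1e-3)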
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""This module implements a Levenberg-Marquardt optimizer.
Minimizes \\(\min_{\mathbf{x}} \sum_i \|\mathbf{r}_i(\mathbf{x})\|^2_2\\) where
\\(\mathbf{r}_i(\mathbf{x})\\)
are the residuals. This function implements Levenberg-Marquardt, an iterative
process that linearizes the residuals and iteratively finds a displacement
\\(\Delta \mathbf{x}\\) such that at iteration \\(t\\) an update
\\(\mathbf{x}_{t+1} = \mathbf{x}_{t} + \Delta \mathbf{x}\\) improving the
loss can be computed. The displacement is computed by solving an optimization
problem
\\(\min_{\Delta \mathbf{x}} \sum_i
\|\mathbf{J}_i(\mathbf{x}_{t})\Delta\mathbf{x} +
\mathbf{r}_i(\mathbf{x}_t)\|^2_2 + \lambda\|\Delta \mathbf{x} \|_2^2\\) where
\\(\mathbf{J}_i(\mathbf{x}_{t})\\) is the Jacobian of \\(\mathbf{r}_i\\)
computed at \\(\mathbf{x}_t\\), and \\(\lambda\\) is a scalar weight.
More details on Levenberg-Marquardt can be found on
[this page](https://en.wikipedia.org/wiki/Levenberg%E2%80%93Marquardt_algorithm).
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.util import export_api
def _values_and_jacobian(residuals, variables):
"""Computes the residual values and the Jacobian matrix.
Args:
residuals: A list of residuals.
variables: A list of variables.
Returns:
The residual values and the Jacobian matrix.
"""
def _compute_residual_values(residuals, variables):
"""Computes the residual values."""
return tf.concat([
tf.reshape(residual(*variables), shape=(-1,)) for residual in residuals
],
axis=-1)
def _compute_jacobian(values, variables, tape):
"""Computes the Jacobian matrix."""
jacobians = tape.jacobian(
values, variables, unconnected_gradients=tf.UnconnectedGradients.ZERO)
return tf.concat([
tf.reshape(jacobian, shape=(tf.shape(input=jacobian)[0], -1))
for jacobian in jacobians
],
axis=-1)
with tf.GradientTape(watch_accessed_variables=False, persistent=True) as tape:
for variable in variables:
tape.watch(variable)
values = _compute_residual_values(residuals, variables)
jacobian = _compute_jacobian(values, variables, tape)
del tape
values = tf.expand_dims(values, axis=-1)
return values, jacobian
def minimize(residuals,
variables,
max_iterations,
regularizer=1e-20,
regularizer_multiplier=10.0,
callback=None,
name=None):
r"""Minimizes a set of residuals in the least-squares sense.
Args:
residuals: A residual or a list/tuple of residuals. A residual is a Python
`callable`.
variables: A variable or a list or tuple of variables defining the starting
point of the minimization.
max_iterations: The maximum number of iterations.
    regularizer: The regularizer is used to dampen the step size when the
      iterations become unstable. The bigger the regularizer is, the
      smaller the step size becomes.
    regularizer_multiplier: If an iteration does not decrease the objective, a
      new regularizer is computed by scaling it by this multiplier.
callback: A callback function that will be called at each iteration. In
graph mode the callback should return an op or list of ops that will
execute the callback logic. The callback needs to be of the form
f(iteration, objective_value, variables). A callback is a Python
`callable`. The callback could be used for logging, for example if one
wants to print the objective value at each iteration.
name: A name for this op. Defaults to "levenberg_marquardt_minimize".
Returns:
The value of the objective function and variables attained at the final
iteration of the minimization procedure.
Raises:
ValueError: If max_iterations is not at least 1.
InvalidArgumentError: This exception is only raised in graph mode if the
Cholesky decomposition is not successful. One likely fix is to increase
      the regularizer. In eager mode this exception is caught and the regularizer
is increased automatically.
Examples:
```python
x = tf.constant(np.random.random_sample(size=(1,2)), dtype=tf.float32)
y = tf.constant(np.random.random_sample(size=(3,1)), dtype=tf.float32)
def f1(x, y):
return x + y
def f2(x, y):
return x * y
def callback(iteration, objective_value, variables):
def print_output(iteration, objective_value, *variables):
print("Iteration:", iteration, "Objective Value:", objective_value)
for variable in variables:
print(variable)
inp = [iteration, objective_value] + variables
return tf.py_function(print_output, inp, [])
minimize_op = minimize(residuals=(f1, f2),
variables=(x, y),
max_iterations=10,
callback=callback)
if not tf.executing_eagerly():
    with tf.compat.v1.Session() as sess:
      sess.run(tf.compat.v1.global_variables_initializer())
sess.run(minimize_op)
```
"""
if not isinstance(variables, (tuple, list)):
variables = [variables]
with tf.compat.v1.name_scope(name, 'levenberg_marquardt_minimize', variables):
if not isinstance(residuals, (tuple, list)):
residuals = [residuals]
if isinstance(residuals, tuple):
residuals = list(residuals)
if isinstance(variables, tuple):
variables = list(variables)
variables = [tf.convert_to_tensor(value=variable) for variable in variables]
multiplier = tf.constant(regularizer_multiplier, dtype=variables[0].dtype)
if max_iterations <= 0:
raise ValueError("'max_iterations' needs to be at least 1.")
def _cond(iteration, regularizer, objective_value, variables):
"""Returns whether any iteration still needs to be performed."""
del regularizer, objective_value, variables
return iteration < max_iterations
def _body(iteration, regularizer, objective_value, variables):
"""Main optimization loop."""
iteration += tf.constant(1, dtype=tf.int32)
values, jacobian = _values_and_jacobian(residuals, variables)
# Solves the normal equation.
try:
updates = tf.linalg.lstsq(jacobian, values, l2_regularizer=regularizer)
shapes = [tf.shape(input=variable) for variable in variables]
splits = [tf.reduce_prod(input_tensor=shape) for shape in shapes]
updates = tf.split(tf.squeeze(updates, axis=-1), splits)
new_variables = [
variable - tf.reshape(update, shape)
for variable, update, shape in zip(variables, updates, shapes)
]
new_objective_value = tf.reduce_sum(input_tensor=[
tf.nn.l2_loss(residual(*new_variables)) for residual in residuals
])
# If the new estimated solution does not decrease the objective value,
# no updates are performed, but a new regularizer is computed.
cond = tf.less(new_objective_value, objective_value)
regularizer = tf.compat.v1.where(
cond, x=regularizer, y=regularizer * multiplier)
objective_value = tf.compat.v1.where(
cond, x=new_objective_value, y=objective_value)
variables = [
tf.compat.v1.where(cond, x=new_variable, y=variable)
for variable, new_variable in zip(variables, new_variables)
]
# Note that catching InvalidArgumentError will only work in eager mode.
except tf.errors.InvalidArgumentError:
regularizer *= multiplier
if callback is not None:
callback_ops = callback(iteration, objective_value, variables)
if callback_ops is not None:
if not isinstance(callback_ops, (tuple, list)):
callback_ops = [callback_ops]
with tf.control_dependencies(callback_ops):
iteration = tf.identity(iteration)
objective_value = tf.identity(objective_value)
variables = [tf.identity(v) for v in variables]
return iteration, regularizer, objective_value, variables
starting_value = tf.reduce_sum(input_tensor=[
tf.nn.l2_loss(residual(*variables)) for residual in residuals
])
dtype = variables[0].dtype
initial = (
tf.constant(0, dtype=tf.int32), # Initial iteration number.
tf.constant(regularizer, dtype=dtype), # Initial regularizer.
starting_value, # Initial objective value.
variables, # Initial variables.
)
_, _, final_objective_value, final_variables = tf.while_loop(
cond=_cond, body=_body, loop_vars=initial, parallel_iterations=1)
return final_objective_value, final_variables
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
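# ---------------------------------------------------------------------------
# Editor's illustrative sketch (not part of the module above): under TF 2.x
# eager execution no session is required, so the docstring example can be
# driven directly. This assumes the `minimize` defined above is in scope.
import numpy as np
import tensorflow as tf

x = tf.constant(np.random.random_sample(size=(1, 2)), dtype=tf.float32)
y = tf.constant(np.random.random_sample(size=(3, 1)), dtype=tf.float32)
final_objective, (final_x, final_y) = minimize(
    residuals=(lambda a, b: a + b, lambda a, b: a * b),
    variables=(x, y),
    max_iterations=10)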
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/util/type_alias.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Type aliases for Python 3 typing."""
from typing import Union, Sequence
import numpy as np
import tensorflow as tf
Integer = Union[int, np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16,
np.uint32, np.uint64]
Float = Union[float, np.float16, np.float32, np.float64]
TensorLike = Union[Integer, Float, Sequence, np.ndarray, tf.Tensor, tf.Variable]
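# ---------------------------------------------------------------------------
# Editor's illustrative sketch (not part of the module above): these aliases
# are intended for annotating functions that accept anything convertible to a
# tensor. `scale` is a hypothetical example, not a library function.
def scale(points: TensorLike, factor: Float) -> tf.Tensor:
  """Scales `points`, accepting Python scalars, sequences, NumPy, or tensors."""
  return tf.convert_to_tensor(value=points, dtype=tf.float32) * factor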
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Type aliases for Python 3 typing."""
from typing import Union, Sequence
import numpy as np
import tensorflow as tf
Integer = Union[int, np.int8, np.int16, np.int32, np.int64, np.uint8, np.uint16,
np.uint32, np.uint64]
Float = Union[float, np.float16, np.float32, np.float64]
TensorLike = Union[Integer, Float, Sequence, np.ndarray, tf.Tensor, tf.Variable]
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/voxels/tests/visual_hull_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for visual hull voxel rendering."""
from absl.testing import flagsaver
from absl.testing import parameterized
import tensorflow as tf
from tensorflow_graphics.rendering.voxels import visual_hull
from tensorflow_graphics.rendering.voxels.tests import test_helpers
from tensorflow_graphics.util import test_case
class VisualHullTest(test_case.TestCase):
@parameterized.parameters(
(0, (8, 16, 6, 1)),
(1, (12, 8, 16, 6, 3)),
)
def test_render_shape_exception_not_raised(self, axis, *shape):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(visual_hull.render, shape, axis=axis)
@parameterized.parameters(
("must have a rank greater than 3", 2, (3,)),
("must have a rank greater than 3", 2, (16, 6, 3)),
("'axis' needs to be 0, 1 or 2", 5, (8, 16, 6, 1)),
)
def test_render_shape_exception_raised(self, error_msg, axis, *shape):
"""Tests that the shape exception is raised."""
self.assert_exception_is_raised(visual_hull.render,
error_msg,
shape,
axis=axis)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_render_jacobian_random(self):
"""Tests the Jacobian of render."""
voxels_init = test_helpers.generate_random_test_voxels_render()
self.assert_jacobian_is_correct_fn(visual_hull.render, [voxels_init])
def test_render_preset(self):
"""Checks that render returns the expected value."""
x_voxels_init, y_images_init = test_helpers.generate_preset_test_voxels_visual_hull_render(
)
voxels = tf.convert_to_tensor(value=x_voxels_init)
y_images = tf.convert_to_tensor(value=y_images_init)
y = visual_hull.render(voxels)
self.assertAllClose(y_images, y)
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for visual hull voxel rendering."""
from absl.testing import flagsaver
from absl.testing import parameterized
import tensorflow as tf
from tensorflow_graphics.rendering.voxels import visual_hull
from tensorflow_graphics.rendering.voxels.tests import test_helpers
from tensorflow_graphics.util import test_case
class VisualHullTest(test_case.TestCase):
@parameterized.parameters(
(0, (8, 16, 6, 1)),
(1, (12, 8, 16, 6, 3)),
)
def test_render_shape_exception_not_raised(self, axis, *shape):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(visual_hull.render, shape, axis=axis)
@parameterized.parameters(
("must have a rank greater than 3", 2, (3,)),
("must have a rank greater than 3", 2, (16, 6, 3)),
("'axis' needs to be 0, 1 or 2", 5, (8, 16, 6, 1)),
)
def test_render_shape_exception_raised(self, error_msg, axis, *shape):
"""Tests that the shape exception is raised."""
self.assert_exception_is_raised(visual_hull.render,
error_msg,
shape,
axis=axis)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_render_jacobian_random(self):
"""Tests the Jacobian of render."""
voxels_init = test_helpers.generate_random_test_voxels_render()
self.assert_jacobian_is_correct_fn(visual_hull.render, [voxels_init])
def test_render_preset(self):
"""Checks that render returns the expected value."""
x_voxels_init, y_images_init = test_helpers.generate_preset_test_voxels_visual_hull_render(
)
voxels = tf.convert_to_tensor(value=x_voxels_init)
y_images = tf.convert_to_tensor(value=y_images_init)
y = visual_hull.render(voxels)
self.assertAllClose(y_images, y)
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/io/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""`tensorflow_graphics.io` module."""
# pylint: disable=g-import-not-at-top
from tensorflow_graphics.util.doc import _import_tfg_docs
if _import_tfg_docs():
from tensorflow_graphics.io import triangle_mesh
from tensorflow_graphics.io import exr
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.io.
__all__ = _export_api.get_modules()
# pylint: enable=g-import-not-at-top
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""`tensorflow_graphics.io` module."""
# pylint: disable=g-import-not-at-top
from tensorflow_graphics.util.doc import _import_tfg_docs
if _import_tfg_docs():
from tensorflow_graphics.io import triangle_mesh
from tensorflow_graphics.io import exr
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.io.
__all__ = _export_api.get_modules()
# pylint: enable=g-import-not-at-top
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/geometry/transformation/tests/linear_blend_skinning_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for linear blend skinning."""
# pylint: disable=line-too-long
from absl.testing import flagsaver
from absl.testing import parameterized
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import linear_blend_skinning
from tensorflow_graphics.geometry.transformation.tests import test_helpers
from tensorflow_graphics.util import test_case
class LinearBlendSkinningTest(test_case.TestCase):
# pyformat: disable
@parameterized.parameters(
((3,), (7,), (7, 3, 3), (7, 3)),
((None, 3), (None, 9), (None, 9, 3, 3), (None, 9, 3)),
((7, 1, 3), (1, 4, 11), (5, 11, 3, 3), (1, 11, 3)),
((7, 4, 3), (4, 11), (11, 3, 3), (11, 3)),
((3,), (5, 4, 11), (11, 3, 3), (11, 3)),
)
# pyformat: enable
def test_blend_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(linear_blend_skinning.blend, shapes)
# pyformat: disable
@parameterized.parameters(
("points must have exactly 3 dimensions in axis -1",
(None,), (7,), (7, 3, 3), (7, 3)),
("bone_rotations must have a rank greater than 2", (3,), (7,), (3, 3), (3,)),
("bone_rotations must have exactly 3 dimensions in axis -1",
(3,), (7,), (7, 3, None), (7, 3)),
("bone_rotations must have exactly 3 dimensions in axis -2",
(3,), (7,), (7, None, 3), (7, 3)),
("bone_translations must have a rank greater than 1", (3,), (7,), (7, 3, 3), (3,)),
("bone_translations must have exactly 3 dimensions in axis -1",
(3,), (7,), (7, 3, 3), (7, None)),
(r"Tensors \[\'skinning_weights\', \'bone_rotations\'\] must have the same number of dimensions in axes",
(3,), (9,), (7, 3, 3), (9, 3)),
(r"Tensors \[\'skinning_weights\', \'bone_translations\'\] must have the same number of dimensions in axes",
(3,), (9,), (9, 3, 3), (7, 3)),
("Not all batch dimensions are broadcast-compatible",
(2, 3, 3), (3, 1, 7), (7, 3, 3), (7, 3)),
("Not all batch dimensions are broadcast-compatible",
(2, 3, 3), (2, 1, 7), (3, 7, 3, 3), (2, 7, 3)),
)
# pyformat: enable
def test_blend_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(linear_blend_skinning.blend, error_msg,
shapes)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_blend_jacobian_random(self):
"""Test the Jacobian of the blend function."""
(x_points_init, x_weights_init, x_rotations_init,
x_translations_init) = test_helpers.generate_random_test_lbs_blend()
self.assert_jacobian_is_correct_fn(
linear_blend_skinning.blend,
[x_points_init, x_weights_init, x_rotations_init, x_translations_init])
def test_blend_preset(self):
"""Checks that blend returns the expected value."""
(x_points_init, x_weights_init, x_rotations_init, x_translations_init,
y_blended_points_init) = test_helpers.generate_preset_test_lbs_blend()
x_points = tf.convert_to_tensor(value=x_points_init)
x_weights = tf.convert_to_tensor(value=x_weights_init)
x_rotations = tf.convert_to_tensor(value=x_rotations_init)
x_translations = tf.convert_to_tensor(value=x_translations_init)
y_blended_points = tf.convert_to_tensor(value=y_blended_points_init)
y = linear_blend_skinning.blend(x_points, x_weights, x_rotations,
x_translations)
self.assertAllClose(y_blended_points, y)
if __name__ == "__main__":
test_case.main()
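# ---------------------------------------------------------------------------
# Editor's illustrative sketch (not part of the test above): for one point p,
# weights w_k, rotations R_k and translations t_k, linear blend skinning
# computes sum_k w_k * (R_k @ p + t_k). The NumPy reference below mirrors the
# (points, weights, rotations, translations) shapes exercised by these tests.
import numpy as np

def lbs_blend(point, weights, rotations, translations):
  """Blends a single 3D point against W bones."""
  transformed = np.einsum('wij,j->wi', rotations, point) + translations
  return np.einsum('w,wi->i', weights, transformed)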
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for linear blend skinning."""
# pylint: disable=line-too-long
from absl.testing import flagsaver
from absl.testing import parameterized
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import linear_blend_skinning
from tensorflow_graphics.geometry.transformation.tests import test_helpers
from tensorflow_graphics.util import test_case
class LinearBlendSkinningTest(test_case.TestCase):
# pyformat: disable
@parameterized.parameters(
((3,), (7,), (7, 3, 3), (7, 3)),
((None, 3), (None, 9), (None, 9, 3, 3), (None, 9, 3)),
((7, 1, 3), (1, 4, 11), (5, 11, 3, 3), (1, 11, 3)),
((7, 4, 3), (4, 11), (11, 3, 3), (11, 3)),
((3,), (5, 4, 11), (11, 3, 3), (11, 3)),
)
# pyformat: enable
def test_blend_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(linear_blend_skinning.blend, shapes)
# pyformat: disable
@parameterized.parameters(
("points must have exactly 3 dimensions in axis -1",
(None,), (7,), (7, 3, 3), (7, 3)),
("bone_rotations must have a rank greater than 2", (3,), (7,), (3, 3), (3,)),
("bone_rotations must have exactly 3 dimensions in axis -1",
(3,), (7,), (7, 3, None), (7, 3)),
("bone_rotations must have exactly 3 dimensions in axis -2",
(3,), (7,), (7, None, 3), (7, 3)),
("bone_translations must have a rank greater than 1", (3,), (7,), (7, 3, 3), (3,)),
("bone_translations must have exactly 3 dimensions in axis -1",
(3,), (7,), (7, 3, 3), (7, None)),
(r"Tensors \[\'skinning_weights\', \'bone_rotations\'\] must have the same number of dimensions in axes",
(3,), (9,), (7, 3, 3), (9, 3)),
(r"Tensors \[\'skinning_weights\', \'bone_translations\'\] must have the same number of dimensions in axes",
(3,), (9,), (9, 3, 3), (7, 3)),
("Not all batch dimensions are broadcast-compatible",
(2, 3, 3), (3, 1, 7), (7, 3, 3), (7, 3)),
("Not all batch dimensions are broadcast-compatible",
(2, 3, 3), (2, 1, 7), (3, 7, 3, 3), (2, 7, 3)),
)
# pyformat: enable
def test_blend_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(linear_blend_skinning.blend, error_msg,
shapes)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_blend_jacobian_random(self):
"""Test the Jacobian of the blend function."""
(x_points_init, x_weights_init, x_rotations_init,
x_translations_init) = test_helpers.generate_random_test_lbs_blend()
self.assert_jacobian_is_correct_fn(
linear_blend_skinning.blend,
[x_points_init, x_weights_init, x_rotations_init, x_translations_init])
def test_blend_preset(self):
"""Checks that blend returns the expected value."""
(x_points_init, x_weights_init, x_rotations_init, x_translations_init,
y_blended_points_init) = test_helpers.generate_preset_test_lbs_blend()
x_points = tf.convert_to_tensor(value=x_points_init)
x_weights = tf.convert_to_tensor(value=x_weights_init)
x_rotations = tf.convert_to_tensor(value=x_rotations_init)
x_translations = tf.convert_to_tensor(value=x_translations_init)
y_blended_points = tf.convert_to_tensor(value=y_blended_points_init)
y = linear_blend_skinning.blend(x_points, x_weights, x_rotations,
x_translations)
self.assertAllClose(y_blended_points, y)
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/geometry/transformation/tests/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/image/tests/transformer_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for image transformation functionalities."""
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_addons import image as tfa_image
from tensorflow_graphics.image import transformer
from tensorflow_graphics.util import test_case
class TransformerTest(test_case.TestCase, parameterized.TestCase):
@parameterized.parameters(
((None, 1, 2, None), (None, 3, 3)),
((1, 2, 3, 4), (1, 3, 3)),
)
def test_perspective_transform_exception_not_raised(self, *shape):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(transformer.perspective_transform,
shape)
@parameterized.parameters(
("must have a rank of 4.", (1, 1, 1), (1, 3, 3)),
("must have a rank of 3.", (1, 1, 1, 1), (3, 3)),
("Not all batch dimensions are identical.", (1, 1, 1, 1), (2, 3, 3)),
)
def test_perspective_transform_exception_raised(self, error_msg, *shape):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(transformer.perspective_transform,
error_msg, shape)
@parameterized.parameters(
(tf.float32, "NEAREST"),
(tf.float64, "NEAREST"),
(tf.float32, "BILINEAR"),
(tf.float64, "BILINEAR"),
)
def test_perspective_transform_half_integer_centers_preset(
self, dtype, interpolation):
"""Tests that we can reproduce the results of tf.image.resize."""
image = tf.constant(
((1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0), (10.0, 11.0, 12.0)),
dtype=dtype)
scale = 3
transformation = tf.constant(
((1.0 / scale, 0.0, 0.0), (0.0, 1.0 / scale, 0.0), (0.0, 0.0, 1.0)),
dtype=dtype)
image_shape = tf.shape(image)
image_resized_shape = image_shape * scale
image = image[tf.newaxis, ..., tf.newaxis]
transformation = transformation[tf.newaxis, ...]
image_resized = tf.image.resize(
image,
size=image_resized_shape,
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR
if interpolation == "NEAREST" else tf.image.ResizeMethod.BILINEAR)
image_transformed = transformer.perspective_transform(
image,
transformation,
resampling_type=transformer.ResamplingType.NEAREST
if interpolation == "NEAREST" else transformer.ResamplingType.BILINEAR,
border_type=transformer.BorderType.DUPLICATE,
output_shape=image_resized_shape)
self.assertAllClose(image_resized, image_transformed)
@parameterized.parameters(
(tf.float32, "NEAREST"),
(tf.float64, "NEAREST"),
(tf.float32, "BILINEAR"),
(tf.float64, "BILINEAR"),
)
def test_perspective_transform_integer_centers_preset(self, dtype,
interpolation):
"""Tests that we can reproduce the results of tfa_image.transform."""
image = tf.constant(
((1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0), (10.0, 11.0, 12.0)),
dtype=dtype)
scale = 3
transformation = tf.constant(
((1.0 / scale, 0.0, 0.0), (0.0, 1.0 / scale, 0.0), (0.0, 0.0, 1.0)),
dtype=dtype)
image_shape = tf.shape(image)
image_resized_shape = image_shape * scale
image = image[tf.newaxis, ..., tf.newaxis]
transformation = transformation[tf.newaxis, ...]
image_resized = tfa_image.transform(
tf.cast(image, tf.float32),
tf.cast(
tfa_image.transform_ops.matrices_to_flat_transforms(transformation),
tf.float32),
interpolation=interpolation,
output_shape=image_resized_shape)
image_transformed = transformer.perspective_transform(
image,
transformation,
resampling_type=transformer.ResamplingType.NEAREST
if interpolation == "NEAREST" else transformer.ResamplingType.BILINEAR,
pixel_type=transformer.PixelType.INTEGER,
output_shape=image_resized_shape)
self.assertAllClose(image_resized, image_transformed)
def test_perspective_transform_jacobian_random(self):
"""Tests the Jacobian of the transform function."""
tensor_shape = np.random.randint(2, 4, size=4)
image_init = np.random.uniform(0.0, 1.0, size=tensor_shape.tolist())
transformation_init = np.random.uniform(
0.0, 1.0, size=(tensor_shape[0], 3, 3))
self.assert_jacobian_is_correct_fn(
lambda x: transformer.perspective_transform(x, transformation_init),
[image_init])
self.assert_jacobian_is_correct_fn(
lambda x: transformer.perspective_transform(image_init, x),
[transformation_init])
@parameterized.parameters(
((None, 1, 2, None), (None, 2)),
((1, 3, 2, 4), (1, 2)),
)
def test_sample_exception_not_raised(self, *shape):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(transformer.sample, shape)
@parameterized.parameters(
("must have a rank of 4.", (1, 1, 1), (1, 2)),
("must have a rank greater than 1", (1, 1, 1, 1), (2,)),
("Not all batch dimensions are identical.", (1, 1, 1, 1), (2, 2)),
)
def test_sample_exception_raised(self, error_msg, *shape):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(transformer.sample, error_msg, shape)
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for image transformation functionalities."""
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_addons import image as tfa_image
from tensorflow_graphics.image import transformer
from tensorflow_graphics.util import test_case
class TransformerTest(test_case.TestCase, parameterized.TestCase):
@parameterized.parameters(
((None, 1, 2, None), (None, 3, 3)),
((1, 2, 3, 4), (1, 3, 3)),
)
def test_perspective_transform_exception_not_raised(self, *shape):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(transformer.perspective_transform,
shape)
@parameterized.parameters(
("must have a rank of 4.", (1, 1, 1), (1, 3, 3)),
("must have a rank of 3.", (1, 1, 1, 1), (3, 3)),
("Not all batch dimensions are identical.", (1, 1, 1, 1), (2, 3, 3)),
)
def test_perspective_transform_exception_raised(self, error_msg, *shape):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(transformer.perspective_transform,
error_msg, shape)
@parameterized.parameters(
(tf.float32, "NEAREST"),
(tf.float64, "NEAREST"),
(tf.float32, "BILINEAR"),
(tf.float64, "BILINEAR"),
)
def test_perspective_transform_half_integer_centers_preset(
self, dtype, interpolation):
"""Tests that we can reproduce the results of tf.image.resize."""
image = tf.constant(
((1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0), (10.0, 11.0, 12.0)),
dtype=dtype)
scale = 3
transformation = tf.constant(
((1.0 / scale, 0.0, 0.0), (0.0, 1.0 / scale, 0.0), (0.0, 0.0, 1.0)),
dtype=dtype)
image_shape = tf.shape(image)
image_resized_shape = image_shape * scale
image = image[tf.newaxis, ..., tf.newaxis]
transformation = transformation[tf.newaxis, ...]
image_resized = tf.image.resize(
image,
size=image_resized_shape,
method=tf.image.ResizeMethod.NEAREST_NEIGHBOR
if interpolation == "NEAREST" else tf.image.ResizeMethod.BILINEAR)
image_transformed = transformer.perspective_transform(
image,
transformation,
resampling_type=transformer.ResamplingType.NEAREST
if interpolation == "NEAREST" else transformer.ResamplingType.BILINEAR,
border_type=transformer.BorderType.DUPLICATE,
output_shape=image_resized_shape)
self.assertAllClose(image_resized, image_transformed)
@parameterized.parameters(
(tf.float32, "NEAREST"),
(tf.float64, "NEAREST"),
(tf.float32, "BILINEAR"),
(tf.float64, "BILINEAR"),
)
def test_perspective_transform_integer_centers_preset(self, dtype,
interpolation):
"""Tests that we can reproduce the results of tfa_image.transform."""
image = tf.constant(
((1.0, 2.0, 3.0), (4.0, 5.0, 6.0), (7.0, 8.0, 9.0), (10.0, 11.0, 12.0)),
dtype=dtype)
scale = 3
transformation = tf.constant(
((1.0 / scale, 0.0, 0.0), (0.0, 1.0 / scale, 0.0), (0.0, 0.0, 1.0)),
dtype=dtype)
image_shape = tf.shape(image)
image_resized_shape = image_shape * scale
image = image[tf.newaxis, ..., tf.newaxis]
transformation = transformation[tf.newaxis, ...]
image_resized = tfa_image.transform(
tf.cast(image, tf.float32),
tf.cast(
tfa_image.transform_ops.matrices_to_flat_transforms(transformation),
tf.float32),
interpolation=interpolation,
output_shape=image_resized_shape)
image_transformed = transformer.perspective_transform(
image,
transformation,
resampling_type=transformer.ResamplingType.NEAREST
if interpolation == "NEAREST" else transformer.ResamplingType.BILINEAR,
pixel_type=transformer.PixelType.INTEGER,
output_shape=image_resized_shape)
self.assertAllClose(image_resized, image_transformed)
def test_perspective_transform_jacobian_random(self):
"""Tests the Jacobian of the transform function."""
tensor_shape = np.random.randint(2, 4, size=4)
image_init = np.random.uniform(0.0, 1.0, size=tensor_shape.tolist())
transformation_init = np.random.uniform(
0.0, 1.0, size=(tensor_shape[0], 3, 3))
self.assert_jacobian_is_correct_fn(
lambda x: transformer.perspective_transform(x, transformation_init),
[image_init])
self.assert_jacobian_is_correct_fn(
lambda x: transformer.perspective_transform(image_init, x),
[transformation_init])
@parameterized.parameters(
((None, 1, 2, None), (None, 2)),
((1, 3, 2, 4), (1, 2)),
)
def test_sample_exception_not_raised(self, *shape):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(transformer.sample, shape)
@parameterized.parameters(
("must have a rank of 4.", (1, 1, 1), (1, 2)),
("must have a rank greater than 1", (1, 1, 1, 1), (2,)),
("Not all batch dimensions are identical.", (1, 1, 1, 1), (2, 2)),
)
def test_sample_exception_raised(self, error_msg, *shape):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(transformer.sample, error_msg, shape)
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/triangle_rasterizer.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements a differentiable rasterizer of triangular meshes.
The resulting rendering contains perspective-correct interpolation of attributes
defined at the vertices of the rasterized meshes. This rasterizer does not
provide gradients through visibility, but it does through visible geometry and
attributes.
"""
import tensorflow as tf
from tensorflow_graphics.rendering import rasterization_backend
from tensorflow_graphics.rendering.opengl import math as glm
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def _perspective_correct_barycentrics(vertices_per_pixel, model_to_eye_matrix,
perspective_matrix, image_size_float):
"""Creates the pixels grid and computes barycentrics."""
# Construct the pixel grid with half-integer pixel centers.
width = image_size_float[1]
height = image_size_float[0]
px = tf.linspace(0.5, width - 0.5, num=int(width))
py = tf.linspace(0.5, height - 0.5, num=int(height))
xv, yv = tf.meshgrid(px, py)
pixel_position = tf.stack((xv, yv), axis=-1)
return glm.perspective_correct_barycentrics(vertices_per_pixel,
pixel_position,
model_to_eye_matrix,
perspective_matrix,
(width, height))
def _perspective_correct_attributes(attribute, barycentrics, triangles,
triangle_index, len_batch_shape):
attribute = tf.gather(attribute, triangles, axis=-2)
attribute_per_pixel = tf.gather(
attribute, triangle_index, axis=-3, batch_dims=len_batch_shape)
return glm.interpolate_attributes(attribute_per_pixel, barycentrics)
def _dim_value(dim):
return 1 if dim is None else tf.compat.v1.dimension_value(dim)
def rasterize(vertices,
triangles,
attributes,
model_to_eye_matrix,
perspective_matrix,
image_size,
backend=rasterization_backend.RasterizationBackends.OPENGL,
name=None):
"""Rasterizes the scene.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
vertices: A tensor of shape `[A1, ..., An, V, 3]` containing batches of `V`
vertices, each defined by a 3D point.
triangles: A tensor of shape `[T, 3]` containing `T` triangles, each
associated with 3 vertices from `vertices`.
attributes: A dictionary of tensors, each of shape `[A1, ..., An, V, K_a]`
containing batches of `V` vertices, each associated with K-dimensional
attributes. K_a may vary by attribute.
model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]` containing
batches of matrices used to transform vertices from model to eye
coordinates.
perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]` containing
batches of matrices used to project vertices from eye to clip coordinates.
image_size: A tuple (height, width) containing the dimensions in pixels of
the rasterized image.
backend: A rasterization_backend.RasterizationBackends enum containing the
backend method to use for rasterization.
name: A name for this op. Defaults to 'triangle_rasterizer_rasterize'.
Returns:
A dictionary. The key "mask" is of shape `[A1, ..., An, height, width, 1]`
    and stores a value of `0` if the pixel is associated with the background,
and `1` with the foreground. The key "barycentrics" is of shape
`[A1, ..., An, height, width, 3]` and stores barycentric weights. Finally,
the dictionary contains perspective correct interpolated attributes of shape
`[A1, ..., An, height, width, K]` per entry in the `attributes` dictionary.
"""
with tf.compat.v1.name_scope(name, "triangle_rasterizer_rasterize",
(vertices, triangles, attributes,
model_to_eye_matrix, perspective_matrix)):
vertices = tf.convert_to_tensor(value=vertices)
triangles = tf.convert_to_tensor(value=triangles)
model_to_eye_matrix = tf.convert_to_tensor(value=model_to_eye_matrix)
perspective_matrix = tf.convert_to_tensor(value=perspective_matrix)
shape.check_static(
tensor=vertices,
tensor_name="vertices",
has_rank_greater_than=1,
has_dim_equals=((-1, 3)))
shape.check_static(
tensor=triangles,
tensor_name="triangles",
has_rank=2,
has_dim_equals=((-1, 3)))
shape.check_static(
tensor=model_to_eye_matrix,
tensor_name="model_to_eye_matrix",
has_dim_equals=(((-2, 4), (-1, 4))))
shape.check_static(
tensor=perspective_matrix,
tensor_name="perspective_matrix",
has_dim_equals=(((-2, 4), (-1, 4))))
image_size_float = (float(image_size[0]), float(image_size[1]))
image_size_backend = (int(image_size[1]), int(image_size[0]))
view_projection_matrix = tf.linalg.matmul(perspective_matrix,
model_to_eye_matrix)
rasterized = rasterization_backend.rasterize(vertices, triangles,
view_projection_matrix,
image_size_backend, backend)
outputs = {
"mask": rasterized.foreground_mask,
"triangle_indices": rasterized.triangle_id
}
    # Extract batch shape in order to make sure it is preserved after the
    # `gather` operation.
batch_shape = rasterized.triangle_id.shape[:-3]
batch_shape = [_dim_value(dim) for dim in batch_shape]
vertices_per_pixel = tf.gather(
vertices, rasterized.vertex_ids, batch_dims=len(batch_shape))
barycentrics = _perspective_correct_barycentrics(vertices_per_pixel,
model_to_eye_matrix,
perspective_matrix,
image_size_float)
mask_float = tf.cast(rasterized.foreground_mask, vertices.dtype)
outputs["barycentrics"] = mask_float * barycentrics
for key, attribute in attributes.items():
attribute = tf.convert_to_tensor(value=attribute)
outputs[key] = mask_float * _perspective_correct_attributes(
attribute, barycentrics, triangles, rasterized.triangle_id[..., 0],
len(batch_shape))
return outputs
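# Illustrative usage sketch of `rasterize`, never invoked by the module. The
# single triangle, the identity model-to-eye matrix and the hard-coded
# perspective matrix below are arbitrary assumptions chosen only to keep the
# sketch self-contained; a real pipeline would build the matrices from camera
# parameters.
def _example_rasterize_usage():
  """Minimal, hedged sketch of how `rasterize` above could be called."""
  vertices = tf.constant(
      [[[-0.5, -0.5, -2.0], [0.5, -0.5, -2.0], [0.0, 0.5, -2.0]]],
      dtype=tf.float32)
  triangles = tf.constant([[0, 1, 2]], dtype=tf.int32)
  vertex_colors = tf.constant(
      [[[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]], dtype=tf.float32)
  model_to_eye_matrix = tf.eye(4, batch_shape=[1])
  # An arbitrary perspective matrix chosen for this sketch; in practice it
  # would typically come from the perspective camera utilities.
  perspective_matrix = tf.constant(
      [[[2.0, 0.0, 0.0, 0.0], [0.0, 2.0, 0.0, 0.0], [0.0, 0.0, -1.2, -2.2],
        [0.0, 0.0, -1.0, 0.0]]],
      dtype=tf.float32)
  return rasterize(
      vertices,
      triangles,
      attributes={"color": vertex_colors},
      model_to_eye_matrix=model_to_eye_matrix,
      perspective_matrix=perspective_matrix,
      image_size=(64, 64))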
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements a differentiable rasterizer of triangular meshes.
The resulting rendering contains perspective-correct interpolation of attributes
defined at the vertices of the rasterized meshes. This rasterizer does not
provide gradients through visibility, but it does through visible geometry and
attributes.
"""
import tensorflow as tf
from tensorflow_graphics.rendering import rasterization_backend
from tensorflow_graphics.rendering.opengl import math as glm
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def _perspective_correct_barycentrics(vertices_per_pixel, model_to_eye_matrix,
perspective_matrix, image_size_float):
"""Creates the pixels grid and computes barycentrics."""
# Construct the pixel grid with half-integer pixel centers.
width = image_size_float[1]
height = image_size_float[0]
px = tf.linspace(0.5, width - 0.5, num=int(width))
py = tf.linspace(0.5, height - 0.5, num=int(height))
xv, yv = tf.meshgrid(px, py)
pixel_position = tf.stack((xv, yv), axis=-1)
return glm.perspective_correct_barycentrics(vertices_per_pixel,
pixel_position,
model_to_eye_matrix,
perspective_matrix,
(width, height))
def _perspective_correct_attributes(attribute, barycentrics, triangles,
triangle_index, len_batch_shape):
attribute = tf.gather(attribute, triangles, axis=-2)
attribute_per_pixel = tf.gather(
attribute, triangle_index, axis=-3, batch_dims=len_batch_shape)
return glm.interpolate_attributes(attribute_per_pixel, barycentrics)
def _dim_value(dim):
return 1 if dim is None else tf.compat.v1.dimension_value(dim)
def rasterize(vertices,
triangles,
attributes,
model_to_eye_matrix,
perspective_matrix,
image_size,
backend=rasterization_backend.RasterizationBackends.OPENGL,
name=None):
"""Rasterizes the scene.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
vertices: A tensor of shape `[A1, ..., An, V, 3]` containing batches of `V`
vertices, each defined by a 3D point.
triangles: A tensor of shape `[T, 3]` containing `T` triangles, each
associated with 3 vertices from `vertices`.
attributes: A dictionary of tensors, each of shape `[A1, ..., An, V, K_a]`
containing batches of `V` vertices, each associated with K-dimensional
attributes. K_a may vary by attribute.
model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]` containing
batches of matrices used to transform vertices from model to eye
coordinates.
perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]` containing
batches of matrices used to project vertices from eye to clip coordinates.
image_size: A tuple (height, width) containing the dimensions in pixels of
the rasterized image.
backend: A rasterization_backend.RasterizationBackends enum containing the
backend method to use for rasterization.
name: A name for this op. Defaults to 'triangle_rasterizer_rasterize'.
Returns:
A dictionary. The key "mask" is of shape `[A1, ..., An, height, width, 1]`
    and stores a value of `0` if the pixel is associated with the background,
and `1` with the foreground. The key "barycentrics" is of shape
`[A1, ..., An, height, width, 3]` and stores barycentric weights. Finally,
the dictionary contains perspective correct interpolated attributes of shape
`[A1, ..., An, height, width, K]` per entry in the `attributes` dictionary.
"""
with tf.compat.v1.name_scope(name, "triangle_rasterizer_rasterize",
(vertices, triangles, attributes,
model_to_eye_matrix, perspective_matrix)):
vertices = tf.convert_to_tensor(value=vertices)
triangles = tf.convert_to_tensor(value=triangles)
model_to_eye_matrix = tf.convert_to_tensor(value=model_to_eye_matrix)
perspective_matrix = tf.convert_to_tensor(value=perspective_matrix)
shape.check_static(
tensor=vertices,
tensor_name="vertices",
has_rank_greater_than=1,
has_dim_equals=((-1, 3)))
shape.check_static(
tensor=triangles,
tensor_name="triangles",
has_rank=2,
has_dim_equals=((-1, 3)))
shape.check_static(
tensor=model_to_eye_matrix,
tensor_name="model_to_eye_matrix",
has_dim_equals=(((-2, 4), (-1, 4))))
shape.check_static(
tensor=perspective_matrix,
tensor_name="perspective_matrix",
has_dim_equals=(((-2, 4), (-1, 4))))
image_size_float = (float(image_size[0]), float(image_size[1]))
image_size_backend = (int(image_size[1]), int(image_size[0]))
view_projection_matrix = tf.linalg.matmul(perspective_matrix,
model_to_eye_matrix)
rasterized = rasterization_backend.rasterize(vertices, triangles,
view_projection_matrix,
image_size_backend, backend)
outputs = {
"mask": rasterized.foreground_mask,
"triangle_indices": rasterized.triangle_id
}
    # Extract batch shape in order to make sure it is preserved after the
    # `gather` operation.
batch_shape = rasterized.triangle_id.shape[:-3]
batch_shape = [_dim_value(dim) for dim in batch_shape]
vertices_per_pixel = tf.gather(
vertices, rasterized.vertex_ids, batch_dims=len(batch_shape))
barycentrics = _perspective_correct_barycentrics(vertices_per_pixel,
model_to_eye_matrix,
perspective_matrix,
image_size_float)
mask_float = tf.cast(rasterized.foreground_mask, vertices.dtype)
outputs["barycentrics"] = mask_float * barycentrics
for key, attribute in attributes.items():
attribute = tf.convert_to_tensor(value=attribute)
outputs[key] = mask_float * _perspective_correct_attributes(
attribute, barycentrics, triangles, rasterized.triangle_id[..., 0],
len(batch_shape))
return outputs
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/geometry/representation/tests/grid_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for grid."""
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.representation import grid
from tensorflow_graphics.util import test_case
class GridTest(test_case.TestCase):
@parameterized.parameters(
(((1,), (1,), (1,)), (tf.float32, tf.float32, tf.int32)),
(((1, 1), (1, 1), (1,)), (tf.float32, tf.float32, tf.int32)),
)
def test_generate_exception_not_raised(self, shapes, dtypes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(grid.generate, shapes, dtypes)
@parameterized.parameters(
("starts must have a rank greater than 0", (), (None,), (None,)),
("stops must have a rank greater than 0", (None,), (), (None,)),
("nums must have a rank of 1", (None,), (None,), ()),
("Not all batch dimensions are identical.", (1,), (0,), (1,)),
("Not all batch dimensions are identical.", (0,), (1,), (1,)),
("must have the same number of dimensions", (1,), (1,), (0,)),
)
def test_generate_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_raised(grid.generate, error_msg, shapes)
@parameterized.parameters(
(((-1.,), (1.,), (3,)), (((-1.,), (0.,), (1.,)),)),
((((-1.,), (-1.,)), ((1.,), (1.,)), (1,)), ((((-1.,),), ((-1.,),)),)),
)
def test_generate_preset(self, test_inputs, test_outputs):
"""Test the uniform grid generation using fix test cases."""
self.assert_output_is_correct(
grid.generate, test_inputs, test_outputs, tile=False)
def test_generate_random(self):
"""Test the uniform grid generation."""
starts = np.array((0., 0.), dtype=np.float32)
stops = np.random.randint(1, 10, size=(2))
nums = stops + 1
stops = stops.astype(np.float32)
g = grid.generate(starts, stops, nums)
shape = nums.tolist() + [2]
xv, yv = np.meshgrid(range(shape[0]), range(shape[1]), indexing="ij")
gt = np.stack((xv, yv), axis=-1).astype(np.float32)
self.assertAllClose(g, gt)
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for grid."""
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.representation import grid
from tensorflow_graphics.util import test_case
class GridTest(test_case.TestCase):
@parameterized.parameters(
(((1,), (1,), (1,)), (tf.float32, tf.float32, tf.int32)),
(((1, 1), (1, 1), (1,)), (tf.float32, tf.float32, tf.int32)),
)
def test_generate_exception_not_raised(self, shapes, dtypes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(grid.generate, shapes, dtypes)
@parameterized.parameters(
("starts must have a rank greater than 0", (), (None,), (None,)),
("stops must have a rank greater than 0", (None,), (), (None,)),
("nums must have a rank of 1", (None,), (None,), ()),
("Not all batch dimensions are identical.", (1,), (0,), (1,)),
("Not all batch dimensions are identical.", (0,), (1,), (1,)),
("must have the same number of dimensions", (1,), (1,), (0,)),
)
def test_generate_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_raised(grid.generate, error_msg, shapes)
@parameterized.parameters(
(((-1.,), (1.,), (3,)), (((-1.,), (0.,), (1.,)),)),
((((-1.,), (-1.,)), ((1.,), (1.,)), (1,)), ((((-1.,),), ((-1.,),)),)),
)
def test_generate_preset(self, test_inputs, test_outputs):
"""Test the uniform grid generation using fix test cases."""
self.assert_output_is_correct(
grid.generate, test_inputs, test_outputs, tile=False)
def test_generate_random(self):
"""Test the uniform grid generation."""
starts = np.array((0., 0.), dtype=np.float32)
stops = np.random.randint(1, 10, size=(2))
nums = stops + 1
stops = stops.astype(np.float32)
g = grid.generate(starts, stops, nums)
shape = nums.tolist() + [2]
xv, yv = np.meshgrid(range(shape[0]), range(shape[1]), indexing="ij")
gt = np.stack((xv, yv), axis=-1).astype(np.float32)
self.assertAllClose(g, gt)
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/light/point_light.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the rendering equation for a point light."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import tensorflow as tf
from tensorflow_graphics.math import vector
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def estimate_radiance(point_light_radiance,
point_light_position,
surface_point_position,
surface_point_normal,
observation_point,
brdf,
name=None,
reflected_light_fall_off=False):
"""Estimates the spectral radiance of a point light reflected from the surface point towards the observation point.
Note:
In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
B1 to Bm are optional batch dimensions for the lights, which must be
broadcast compatible.
Note:
    In case the light or the observation point is located behind the surface,
    the function will return 0.
Note:
The gradient of this function is not smooth when the dot product of the
normal with the light-to-surface or surface-to-observation vectors is 0.
Args:
point_light_radiance: A tensor of shape '[B1, ..., Bm, K]', where the last
      axis represents the radiance of the point light at a specific wavelength.
point_light_position: A tensor of shape `[B1, ..., Bm, 3]`, where the last
axis represents the position of the point light.
surface_point_position: A tensor of shape `[A1, ..., An, 3]`, where the last
axis represents the position of the surface point.
surface_point_normal: A tensor of shape `[A1, ..., An, 3]`, where the last
axis represents the normalized surface normal at the given surface point.
observation_point: A tensor of shape `[A1, ..., An, 3]`, where the last axis
represents the observation point.
brdf: The BRDF of the surface as a function of:
incoming_light_direction - The incoming light direction as the last axis
of a tensor with shape `[A1, ..., An, 3]`.
outgoing_light_direction - The outgoing light direction as the last axis
of a tensor with shape `[A1, ..., An, 3]`.
surface_point_normal - The surface normal as the last axis of a tensor
with shape `[A1, ..., An, 3]`.
Note - The BRDF should return a tensor of size '[A1, ..., An, K]' where
the last axis represents the amount of reflected light in each wave
length.
name: A name for this op. Defaults to "estimate_radiance".
reflected_light_fall_off: A boolean specifying whether or not to include the
fall off of the light reflected from the surface towards the observation
point in the calculation. Defaults to False.
Returns:
A tensor of shape `[A1, ..., An, B1, ..., Bm, K]`, where the last
axis represents the amount of light received at the observation point
after being reflected from the given surface point.
Raises:
ValueError: if the shape of `point_light_position`,
`surface_point_position`, `surface_point_normal`, or `observation_point` is
not supported.
InvalidArgumentError: if 'surface_point_normal' is not normalized.
"""
with tf.compat.v1.name_scope(name, "estimate_radiance", [
point_light_radiance, point_light_position, surface_point_position,
surface_point_normal, observation_point, brdf
]):
point_light_radiance = tf.convert_to_tensor(value=point_light_radiance)
point_light_position = tf.convert_to_tensor(value=point_light_position)
surface_point_position = tf.convert_to_tensor(value=surface_point_position)
surface_point_normal = tf.convert_to_tensor(value=surface_point_normal)
observation_point = tf.convert_to_tensor(value=observation_point)
shape.check_static(
tensor=point_light_position,
tensor_name="point_light_position",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=surface_point_position,
tensor_name="surface_point_position",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=surface_point_normal,
tensor_name="surface_point_normal",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=observation_point,
tensor_name="observation_point",
has_dim_equals=(-1, 3))
shape.compare_batch_dimensions(
tensors=(surface_point_position, surface_point_normal,
observation_point),
tensor_names=("surface_point_position", "surface_point_normal",
"observation_point"),
last_axes=-2,
broadcast_compatible=True)
shape.compare_batch_dimensions(
tensors=(point_light_radiance, point_light_position),
tensor_names=("point_light_radiance", "point_light_position"),
last_axes=-2,
broadcast_compatible=True)
surface_point_normal = asserts.assert_normalized(surface_point_normal)
# Get the number of lights dimensions (B1,...,Bm).
lights_num_dimensions = max(
len(point_light_radiance.shape), len(point_light_position.shape)) - 1
# Reshape the other parameters so they can be broadcasted to the output of
# shape [A1,...,An, B1,...,Bm, K].
surface_point_position = tf.reshape(
surface_point_position,
surface_point_position.shape[:-1] + (1,) * lights_num_dimensions + (3,))
surface_point_normal = tf.reshape(
surface_point_normal,
surface_point_normal.shape[:-1] + (1,) * lights_num_dimensions + (3,))
observation_point = tf.reshape(
observation_point,
observation_point.shape[:-1] + (1,) * lights_num_dimensions + (3,))
light_to_surface_point = surface_point_position - point_light_position
distance_light_surface_point = tf.norm(
tensor=light_to_surface_point, axis=-1, keepdims=True)
incoming_light_direction = tf.math.l2_normalize(
light_to_surface_point, axis=-1)
surface_to_observation_point = observation_point - surface_point_position
outgoing_light_direction = tf.math.l2_normalize(
surface_to_observation_point, axis=-1)
brdf_value = brdf(incoming_light_direction, outgoing_light_direction,
surface_point_normal)
incoming_light_dot_surface_normal = vector.dot(-incoming_light_direction,
surface_point_normal)
outgoing_light_dot_surface_normal = vector.dot(outgoing_light_direction,
surface_point_normal)
estimated_radiance = (point_light_radiance * \
brdf_value * incoming_light_dot_surface_normal) / \
(4. * math.pi * tf.math.square(distance_light_surface_point))
if reflected_light_fall_off:
distance_surface_observation_point = tf.norm(
tensor=surface_to_observation_point, axis=-1, keepdims=True)
estimated_radiance = estimated_radiance / \
tf.math.square(distance_surface_observation_point)
# Create a condition for checking whether the light or observation point are
# behind the surface.
min_dot = tf.minimum(incoming_light_dot_surface_normal,
outgoing_light_dot_surface_normal)
common_shape = shape.get_broadcasted_shape(min_dot.shape,
estimated_radiance.shape)
d_val = lambda dim: 1 if dim is None else tf.compat.v1.dimension_value(dim)
common_shape = [d_val(dim) for dim in common_shape]
condition = tf.broadcast_to(tf.greater_equal(min_dot, 0.0), common_shape)
return tf.compat.v1.where(condition, estimated_radiance,
tf.zeros_like(estimated_radiance))
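# Illustrative usage sketch of `estimate_radiance`, never invoked by the
# module. The Lambertian BRDF, the albedo value and the scene geometry below
# are arbitrary assumptions chosen only to keep the sketch self-contained.
def _example_estimate_radiance_usage():
  """Minimal, hedged sketch of how `estimate_radiance` could be called."""
  def lambertian_brdf(incoming_light_direction, outgoing_light_direction,
                      surface_point_normal):
    # A constant-albedo Lambertian BRDF: albedo / pi, independent of the
    # incoming and outgoing directions.
    del incoming_light_direction, outgoing_light_direction
    albedo = tf.constant((0.8, 0.8, 0.8))
    return tf.ones_like(surface_point_normal) * albedo / math.pi
  point_light_radiance = tf.constant(((10.0, 10.0, 10.0),))
  point_light_position = tf.constant(((0.0, 2.0, 0.0),))
  surface_point_position = tf.constant(((0.0, 0.0, 0.0),))
  surface_point_normal = tf.constant(((0.0, 1.0, 0.0),))
  observation_point = tf.constant(((1.0, 1.0, 0.0),))
  return estimate_radiance(point_light_radiance, point_light_position,
                           surface_point_position, surface_point_normal,
                           observation_point, lambertian_brdf)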
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the rendering equation for a point light."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import tensorflow as tf
from tensorflow_graphics.math import vector
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def estimate_radiance(point_light_radiance,
point_light_position,
surface_point_position,
surface_point_normal,
observation_point,
brdf,
name=None,
reflected_light_fall_off=False):
"""Estimates the spectral radiance of a point light reflected from the surface point towards the observation point.
Note:
In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
B1 to Bm are optional batch dimensions for the lights, which must be
broadcast compatible.
Note:
    In case the light or the observation point is located behind the surface,
    the function will return 0.
Note:
The gradient of this function is not smooth when the dot product of the
normal with the light-to-surface or surface-to-observation vectors is 0.
Args:
point_light_radiance: A tensor of shape '[B1, ..., Bm, K]', where the last
      axis represents the radiance of the point light at a specific wavelength.
point_light_position: A tensor of shape `[B1, ..., Bm, 3]`, where the last
axis represents the position of the point light.
surface_point_position: A tensor of shape `[A1, ..., An, 3]`, where the last
axis represents the position of the surface point.
surface_point_normal: A tensor of shape `[A1, ..., An, 3]`, where the last
axis represents the normalized surface normal at the given surface point.
observation_point: A tensor of shape `[A1, ..., An, 3]`, where the last axis
represents the observation point.
brdf: The BRDF of the surface as a function of:
incoming_light_direction - The incoming light direction as the last axis
of a tensor with shape `[A1, ..., An, 3]`.
outgoing_light_direction - The outgoing light direction as the last axis
of a tensor with shape `[A1, ..., An, 3]`.
surface_point_normal - The surface normal as the last axis of a tensor
with shape `[A1, ..., An, 3]`.
Note - The BRDF should return a tensor of size '[A1, ..., An, K]' where
the last axis represents the amount of reflected light in each wave
length.
name: A name for this op. Defaults to "estimate_radiance".
reflected_light_fall_off: A boolean specifying whether or not to include the
fall off of the light reflected from the surface towards the observation
point in the calculation. Defaults to False.
Returns:
A tensor of shape `[A1, ..., An, B1, ..., Bm, K]`, where the last
axis represents the amount of light received at the observation point
after being reflected from the given surface point.
Raises:
ValueError: if the shape of `point_light_position`,
`surface_point_position`, `surface_point_normal`, or `observation_point` is
not supported.
InvalidArgumentError: if 'surface_point_normal' is not normalized.
"""
with tf.compat.v1.name_scope(name, "estimate_radiance", [
point_light_radiance, point_light_position, surface_point_position,
surface_point_normal, observation_point, brdf
]):
point_light_radiance = tf.convert_to_tensor(value=point_light_radiance)
point_light_position = tf.convert_to_tensor(value=point_light_position)
surface_point_position = tf.convert_to_tensor(value=surface_point_position)
surface_point_normal = tf.convert_to_tensor(value=surface_point_normal)
observation_point = tf.convert_to_tensor(value=observation_point)
shape.check_static(
tensor=point_light_position,
tensor_name="point_light_position",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=surface_point_position,
tensor_name="surface_point_position",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=surface_point_normal,
tensor_name="surface_point_normal",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=observation_point,
tensor_name="observation_point",
has_dim_equals=(-1, 3))
shape.compare_batch_dimensions(
tensors=(surface_point_position, surface_point_normal,
observation_point),
tensor_names=("surface_point_position", "surface_point_normal",
"observation_point"),
last_axes=-2,
broadcast_compatible=True)
shape.compare_batch_dimensions(
tensors=(point_light_radiance, point_light_position),
tensor_names=("point_light_radiance", "point_light_position"),
last_axes=-2,
broadcast_compatible=True)
surface_point_normal = asserts.assert_normalized(surface_point_normal)
# Get the number of lights dimensions (B1,...,Bm).
lights_num_dimensions = max(
len(point_light_radiance.shape), len(point_light_position.shape)) - 1
# Reshape the other parameters so they can be broadcasted to the output of
# shape [A1,...,An, B1,...,Bm, K].
surface_point_position = tf.reshape(
surface_point_position,
surface_point_position.shape[:-1] + (1,) * lights_num_dimensions + (3,))
surface_point_normal = tf.reshape(
surface_point_normal,
surface_point_normal.shape[:-1] + (1,) * lights_num_dimensions + (3,))
observation_point = tf.reshape(
observation_point,
observation_point.shape[:-1] + (1,) * lights_num_dimensions + (3,))
light_to_surface_point = surface_point_position - point_light_position
distance_light_surface_point = tf.norm(
tensor=light_to_surface_point, axis=-1, keepdims=True)
incoming_light_direction = tf.math.l2_normalize(
light_to_surface_point, axis=-1)
surface_to_observation_point = observation_point - surface_point_position
outgoing_light_direction = tf.math.l2_normalize(
surface_to_observation_point, axis=-1)
brdf_value = brdf(incoming_light_direction, outgoing_light_direction,
surface_point_normal)
incoming_light_dot_surface_normal = vector.dot(-incoming_light_direction,
surface_point_normal)
outgoing_light_dot_surface_normal = vector.dot(outgoing_light_direction,
surface_point_normal)
estimated_radiance = (point_light_radiance * \
brdf_value * incoming_light_dot_surface_normal) / \
(4. * math.pi * tf.math.square(distance_light_surface_point))
if reflected_light_fall_off:
distance_surface_observation_point = tf.norm(
tensor=surface_to_observation_point, axis=-1, keepdims=True)
estimated_radiance = estimated_radiance / \
tf.math.square(distance_surface_observation_point)
# Create a condition for checking whether the light or observation point are
# behind the surface.
min_dot = tf.minimum(incoming_light_dot_surface_normal,
outgoing_light_dot_surface_normal)
common_shape = shape.get_broadcasted_shape(min_dot.shape,
estimated_radiance.shape)
d_val = lambda dim: 1 if dim is None else tf.compat.v1.dimension_value(dim)
common_shape = [d_val(dim) for dim in common_shape]
condition = tf.broadcast_to(tf.greater_equal(min_dot, 0.0), common_shape)
return tf.compat.v1.where(condition, estimated_radiance,
tf.zeros_like(estimated_radiance))
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/math/interpolation/slerp.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tensorflow.graphics slerp interpolation module.
Spherical linear interpolation (slerp) is defined for both quaternions and for
regular M-D vectors, and the two cases act slightly differently because of the
inherent ambiguity of quaternions. This module has two functions returning the
interpolation weights for quaternions (quaternion_weights) and for vectors
(vector_weights), which can then be used in a weighted sum to calculate the
final interpolated quaternions and vectors. A helper interpolate function is
also provided.
The main differences between two methods are:
vector_weights:
can get any M-D tensor as input,
does not expect normalized vectors as input,
returns unnormalized outputs (in general) for unnormalized inputs.
quaternion_weights:
expects M-D tensors with a last dimension of 4,
assumes normalized input,
checks for ambiguity by looking at the angle between quaternions,
returns normalized quaternions naturally.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import enum
import tensorflow as tf
from tensorflow_graphics.math import vector
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import safe_ops
from tensorflow_graphics.util import shape
class InterpolationType(enum.Enum):
"""Defines interpolation methods for slerp module."""
VECTOR = 0
QUATERNION = 1
def _safe_dot(vector1, vector2, eps):
"""Calculates dot product while ensuring it is in the range [-1, 1]."""
dot_product = vector.dot(vector1, vector2)
# Safely shrink to make sure machine precision does not cause the dot
# product to be outside the [-1.0, 1.0] range.
return safe_ops.safe_shrink(
vector=dot_product, minval=-1.0, maxval=1.0, open_bounds=False, eps=eps)
def interpolate(vector1,
vector2,
percent,
method=InterpolationType.QUATERNION,
eps=None,
name=None):
"""Applies slerp to vectors or quaternions.
Args:
vector1: A tensor of shape `[A1, ... , An, M]`, which stores a normalized
vector in its last dimension.
vector2: A tensor of shape `[A1, ... , An, M]`, which stores a normalized
vector in its last dimension.
percent: A `float` or a tensor with shape broadcastable to the shape of
input vectors.
method: An enumerated constant from the class InterpolationType, which is
either InterpolationType.QUATERNION (default) if the input vectors are 4-D
quaternions, or InterpolationType.VECTOR if they are regular M-D vectors.
eps: A small float for operation safety. If left None, its value is
automatically selected using dtype of input vectors.
name: A name for this op. Defaults to "vector_weights" or
"quaternion_weights" depending on the method.
Returns:
    A tensor of shape `[A1, ... , An, M]` which stores the result of the
interpolation.
Raises:
ValueError: if method is not amongst enumerated constants defined in
InterpolationType.
"""
if method == InterpolationType.QUATERNION:
weight1, weight2 = quaternion_weights(
vector1, vector2, percent, eps=eps, name=name)
elif method == InterpolationType.VECTOR:
weight1, weight2 = vector_weights(
vector1, vector2, percent, eps=eps, name=name)
else:
raise ValueError("Unknown interpolation type supplied.")
return interpolate_with_weights(vector1, vector2, weight1, weight2)
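# Illustrative usage sketch of `interpolate`, never invoked by the module. The
# quaternion values (identity and a 90 degree rotation about the z axis) and
# the 3-D vectors below are arbitrary assumptions chosen only to keep the
# sketch self-contained.
def _example_interpolate_usage():
  """Minimal, hedged sketch of how `interpolate` could be called."""
  quaternion1 = tf.constant((0.0, 0.0, 0.0, 1.0))
  quaternion2 = tf.constant((0.0, 0.0, 0.5**0.5, 0.5**0.5))
  # Expected to be close to a 45 degree rotation about the z axis.
  rotation_halfway = interpolate(
      quaternion1,
      quaternion2,
      percent=0.5,
      method=InterpolationType.QUATERNION)
  vector1 = tf.constant((1.0, 0.0, 0.0))
  vector2 = tf.constant((0.0, 2.0, 0.0))
  # Unnormalized inputs are allowed for the VECTOR variant; the result is
  # unnormalized as well.
  vector_halfway = interpolate(
      vector1, vector2, percent=0.5, method=InterpolationType.VECTOR)
  return rotation_halfway, vector_halfway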
def interpolate_with_weights(vector1, vector2, weight1, weight2, name=None):
"""Interpolates vectors by taking their weighted sum.
Interpolation for all variants of slerp is a simple weighted sum over inputs.
Therefore this function simply returns weight1 * vector1 + weight2 * vector2.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
vector1: A tensor of shape `[A1, ... , An, M]`, which stores a normalized
vector in its last dimension.
vector2: A tensor of shape `[A1, ... , An, M]`, which stores a normalized
vector in its last dimension.
weight1: A `float` or a tensor describing weights for the `vector1` and with
a shape broadcastable to the shape of the input vectors.
weight2: A `float` or a tensor describing weights for the `vector2` and with
a shape broadcastable to the shape of the input vectors.
name: A name for this op. Defaults to "interpolate_with_weights".
Returns:
A tensor of shape `[A1, ... , An, M]` containing the result of the
interpolation.
"""
with tf.compat.v1.name_scope(name, "interpolate_with_weights",
[vector1, vector2, weight1, weight2]):
return weight1 * vector1 + weight2 * vector2
def quaternion_weights(quaternion1, quaternion2, percent, eps=None, name=None):
"""Calculates slerp weights for two normalized quaternions.
Given a percent and two normalized quaternions, this function returns the
slerp weights. It can also produce extrapolation weights when percent is
outside of the [0, 1] range. It reduces to lerp when input quaternions are
almost parallel or anti-parallel. Input quaternions are assumed to be
normalized. The tf.graphics debug flag TFG_ADD_ASSERTS_TO_GRAPH defined
in tfg_flags.py can be set to add assertions to the graph that check whether
the inputs are normalized, and whether Inf or Nan values are produced.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
quaternion1: A tensor of shape `[A1, ... , An, 4]` storing normalized
quaternions in its last dimension.
quaternion2: A tensor of shape `[A1, ... , An, 4]` storing normalized
quaternions in its last dimension.
percent: A `float` or a tensor with a shape broadcastable to the shape `[A1,
... , An]`.
eps: A `float` used to make operations safe. When left as None, the function
automatically picks the best epsilon based on the dtype and the operation.
name: A name for this op. Defaults to "quaternion_weights".
Raises:
ValueError: If the shapes of quaternions do not match, if the last
dimensions of quaternions are not 4, or if percent is neither a float, nor
a tensor with last dimension 1.
Returns:
Two tensors of shape `[A1, ... , An, 1]` each, which are the two slerp
weights for each quaternion.
"""
with tf.compat.v1.name_scope(name, "quaternion_weights",
[quaternion1, quaternion2, percent]):
quaternion1 = tf.convert_to_tensor(value=quaternion1)
quaternion2 = tf.convert_to_tensor(value=quaternion2)
percent = tf.convert_to_tensor(value=percent, dtype=quaternion1.dtype)
if percent.shape.ndims == 0:
percent = tf.expand_dims(percent, axis=0)
shape.check_static(
tensor=quaternion1, tensor_name="quaternion1", has_dim_equals=(-1, 4))
shape.check_static(
tensor=quaternion2, tensor_name="quaternion2", has_dim_equals=(-1, 4))
shape.compare_batch_dimensions(
tensors=(quaternion1, quaternion2, percent),
last_axes=(-2, -2, -1),
broadcast_compatible=True,
tensor_names=("quaternion1", "quaternion2", "percent"))
quaternion1 = asserts.assert_normalized(quaternion1)
quaternion2 = asserts.assert_normalized(quaternion2)
dot_product = _safe_dot(quaternion1, quaternion2, eps)
# Take the shorter path
theta = tf.acos(tf.abs(dot_product))
# safe_sinpx_div_sinx returns p for very small x, which means slerp reduces
# to lerp automatically.
scale1 = safe_ops.safe_sinpx_div_sinx(theta, 1.0 - percent, eps)
scale2 = safe_ops.safe_sinpx_div_sinx(theta, percent, eps)
# Flip the sign of scale1 if quaternions are in different hemispheres.
# tf.sign can make scale1 zero if quaternions are orthogonal.
scale1 *= safe_ops.nonzero_sign(dot_product)
return scale1, scale2
def vector_weights(vector1, vector2, percent, eps=None, name=None):
"""Spherical linear interpolation (slerp) between two unnormalized vectors.
This function applies geometric slerp to unnormalized vectors by first
normalizing them to return the interpolation weights. It reduces to lerp when
input vectors are exactly anti-parallel.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
    vector1: A tensor of shape `[A1, ... , An, M]`, which stores a vector in
      its last dimension; it does not need to be normalized.
    vector2: A tensor of shape `[A1, ... , An, M]`, which stores a vector in
      its last dimension; it does not need to be normalized.
percent: A `float` or tensor with shape broadcastable to the shape of input
vectors.
eps: A small float for operation safety. If left None, its value is
automatically selected using dtype of input vectors.
name: A name for this op. Defaults to "vector_weights".
Raises:
ValueError: if the shape of `vector1`, `vector2`, or `percent` is not
supported.
Returns:
Two tensors of shape `[A1, ... , An, 1]`, representing interpolation weights
for each input vector.
"""
with tf.compat.v1.name_scope(name, "vector_weights",
[vector1, vector2, percent]):
vector1 = tf.convert_to_tensor(value=vector1)
vector2 = tf.convert_to_tensor(value=vector2)
percent = tf.convert_to_tensor(value=percent, dtype=vector1.dtype)
if percent.shape.ndims == 0:
percent = tf.expand_dims(percent, axis=0)
shape.compare_dimensions(
tensors=(vector1, vector2),
axes=-1,
tensor_names=("vector1", "vector2"))
shape.compare_batch_dimensions(
tensors=(vector1, vector2, percent),
last_axes=(-2, -2, -1),
broadcast_compatible=True,
tensor_names=("vector1", "vector2", "percent"))
normalized1 = tf.nn.l2_normalize(vector1, axis=-1)
normalized2 = tf.nn.l2_normalize(vector2, axis=-1)
dot_product = _safe_dot(normalized1, normalized2, eps)
theta = tf.acos(dot_product)
scale1 = safe_ops.safe_sinpx_div_sinx(theta, 1.0 - percent, eps)
scale2 = safe_ops.safe_sinpx_div_sinx(theta, percent, eps)
return scale1, scale2
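# Illustrative usage sketch of the two-step weight API, never invoked by the
# module: the weights returned by `vector_weights` are combined with
# `interpolate_with_weights`, which is equivalent to calling `interpolate`
# with InterpolationType.VECTOR. The input vectors are arbitrary assumptions.
def _example_vector_weights_usage():
  """Hedged sketch of vector_weights plus interpolate_with_weights."""
  vector1 = tf.constant((1.0, 0.0, 0.0))
  vector2 = tf.constant((0.0, 1.0, 0.0))
  weight1, weight2 = vector_weights(vector1, vector2, percent=0.25)
  return interpolate_with_weights(vector1, vector2, weight1, weight2)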
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tensorflow.graphics slerp interpolation module.
Spherical linear interpolation (slerp) is defined for both quaternions and for
regular M-D vectors, and the two cases act slightly differently because of the
inherent ambiguity of quaternions. This module has two functions returning the
interpolation weights for quaternions (quaternion_weights) and for vectors
(vector_weights), which can then be used in a weighted sum to calculate the
final interpolated quaternions and vectors. A helper interpolate function is
also provided.
The main differences between two methods are:
vector_weights:
can get any M-D tensor as input,
does not expect normalized vectors as input,
returns unnormalized outputs (in general) for unnormalized inputs.
quaternion_weights:
expects M-D tensors with a last dimension of 4,
assumes normalized input,
checks for ambiguity by looking at the angle between quaternions,
returns normalized quaternions naturally.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import enum
import tensorflow as tf
from tensorflow_graphics.math import vector
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import safe_ops
from tensorflow_graphics.util import shape
class InterpolationType(enum.Enum):
"""Defines interpolation methods for slerp module."""
VECTOR = 0
QUATERNION = 1
def _safe_dot(vector1, vector2, eps):
"""Calculates dot product while ensuring it is in the range [-1, 1]."""
dot_product = vector.dot(vector1, vector2)
# Safely shrink to make sure machine precision does not cause the dot
# product to be outside the [-1.0, 1.0] range.
return safe_ops.safe_shrink(
vector=dot_product, minval=-1.0, maxval=1.0, open_bounds=False, eps=eps)
def interpolate(vector1,
vector2,
percent,
method=InterpolationType.QUATERNION,
eps=None,
name=None):
"""Applies slerp to vectors or quaternions.
Args:
vector1: A tensor of shape `[A1, ... , An, M]`, which stores a normalized
vector in its last dimension.
vector2: A tensor of shape `[A1, ... , An, M]`, which stores a normalized
vector in its last dimension.
percent: A `float` or a tensor with shape broadcastable to the shape of
input vectors.
method: An enumerated constant from the class InterpolationType, which is
either InterpolationType.QUATERNION (default) if the input vectors are 4-D
quaternions, or InterpolationType.VECTOR if they are regular M-D vectors.
eps: A small float for operation safety. If left None, its value is
automatically selected using dtype of input vectors.
name: A name for this op. Defaults to "vector_weights" or
"quaternion_weights" depending on the method.
Returns:
A tensor of shape `[A1, ... , An, M]`, which stores the result of the
interpolation.
Raises:
ValueError: if method is not amongst enumerated constants defined in
InterpolationType.
"""
if method == InterpolationType.QUATERNION:
weight1, weight2 = quaternion_weights(
vector1, vector2, percent, eps=eps, name=name)
elif method == InterpolationType.VECTOR:
weight1, weight2 = vector_weights(
vector1, vector2, percent, eps=eps, name=name)
else:
raise ValueError("Unknown interpolation type supplied.")
return interpolate_with_weights(vector1, vector2, weight1, weight2)
def interpolate_with_weights(vector1, vector2, weight1, weight2, name=None):
"""Interpolates vectors by taking their weighted sum.
Interpolation for all variants of slerp is a simple weighted sum over inputs.
Therefore this function simply returns weight1 * vector1 + weight2 * vector2.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
vector1: A tensor of shape `[A1, ... , An, M]`, which stores a normalized
vector in its last dimension.
vector2: A tensor of shape `[A1, ... , An, M]`, which stores a normalized
vector in its last dimension.
weight1: A `float` or a tensor of weights for `vector1`, with a shape
broadcastable to the shape of the input vectors.
weight2: A `float` or a tensor of weights for `vector2`, with a shape
broadcastable to the shape of the input vectors.
name: A name for this op. Defaults to "interpolate_with_weights".
Returns:
A tensor of shape `[A1, ... , An, M]` containing the result of the
interpolation.
"""
with tf.compat.v1.name_scope(name, "interpolate_with_weights",
[vector1, vector2, weight1, weight2]):
return weight1 * vector1 + weight2 * vector2
def quaternion_weights(quaternion1, quaternion2, percent, eps=None, name=None):
"""Calculates slerp weights for two normalized quaternions.
Given a percent and two normalized quaternions, this function returns the
slerp weights. It can also produce extrapolation weights when percent is
outside of the [0, 1] range. It reduces to lerp when input quaternions are
almost parallel or anti-parallel. Input quaternions are assumed to be
normalized. The tf.graphics debug flag TFG_ADD_ASSERTS_TO_GRAPH defined
in tfg_flags.py can be set to add assertions to the graph that check whether
the inputs are normalized, and whether Inf or Nan values are produced.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
quaternion1: A tensor of shape `[A1, ... , An, 4]` storing normalized
quaternions in its last dimension.
quaternion2: A tensor of shape `[A1, ... , An, 4]` storing normalized
quaternions in its last dimension.
percent: A `float` or a tensor with a shape broadcastable to the shape `[A1,
... , An]`.
eps: A `float` used to make operations safe. When left as None, the function
automatically picks the best epsilon based on the dtype and the operation.
name: A name for this op. Defaults to "quaternion_weights".
Raises:
ValueError: If the shapes of quaternions do not match, if the last
dimensions of quaternions are not 4, or if percent is neither a float, nor
a tensor with last dimension 1.
Returns:
Two tensors of shape `[A1, ... , An, 1]` each, which are the two slerp
weights for each quaternion.
"""
with tf.compat.v1.name_scope(name, "quaternion_weights",
[quaternion1, quaternion2, percent]):
quaternion1 = tf.convert_to_tensor(value=quaternion1)
quaternion2 = tf.convert_to_tensor(value=quaternion2)
percent = tf.convert_to_tensor(value=percent, dtype=quaternion1.dtype)
if percent.shape.ndims == 0:
percent = tf.expand_dims(percent, axis=0)
shape.check_static(
tensor=quaternion1, tensor_name="quaternion1", has_dim_equals=(-1, 4))
shape.check_static(
tensor=quaternion2, tensor_name="quaternion2", has_dim_equals=(-1, 4))
shape.compare_batch_dimensions(
tensors=(quaternion1, quaternion2, percent),
last_axes=(-2, -2, -1),
broadcast_compatible=True,
tensor_names=("quaternion1", "quaternion2", "percent"))
quaternion1 = asserts.assert_normalized(quaternion1)
quaternion2 = asserts.assert_normalized(quaternion2)
dot_product = _safe_dot(quaternion1, quaternion2, eps)
# Take the shorter path
theta = tf.acos(tf.abs(dot_product))
# safe_sinpx_div_sinx returns p for very small x, which means slerp reduces
# to lerp automatically.
scale1 = safe_ops.safe_sinpx_div_sinx(theta, 1.0 - percent, eps)
scale2 = safe_ops.safe_sinpx_div_sinx(theta, percent, eps)
# Flip the sign of scale1 if quaternions are in different hemispheres.
# tf.sign can make scale1 zero if quaternions are orthogonal.
scale1 *= safe_ops.nonzero_sign(dot_product)
return scale1, scale2
def vector_weights(vector1, vector2, percent, eps=None, name=None):
"""Spherical linear interpolation (slerp) between two unnormalized vectors.
This function applies geometric slerp to unnormalized vectors by first
normalizing them to return the interpolation weights. It reduces to lerp when
input vectors are exactly anti-parallel.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
vector1: A tensor of shape `[A1, ... , An, M]`, which stores a vector (not
necessarily normalized) in its last dimension.
vector2: A tensor of shape `[A1, ... , An, M]`, which stores a vector (not
necessarily normalized) in its last dimension.
percent: A `float` or tensor with shape broadcastable to the shape of input
vectors.
eps: A small float for operation safety. If left None, its value is
automatically selected using dtype of input vectors.
name: A name for this op. Defaults to "vector_weights".
Raises:
ValueError: if the shape of `vector1`, `vector2`, or `percent` is not
supported.
Returns:
Two tensors of shape `[A1, ... , An, 1]`, representing interpolation weights
for each input vector.
"""
with tf.compat.v1.name_scope(name, "vector_weights",
[vector1, vector2, percent]):
vector1 = tf.convert_to_tensor(value=vector1)
vector2 = tf.convert_to_tensor(value=vector2)
percent = tf.convert_to_tensor(value=percent, dtype=vector1.dtype)
if percent.shape.ndims == 0:
percent = tf.expand_dims(percent, axis=0)
shape.compare_dimensions(
tensors=(vector1, vector2),
axes=-1,
tensor_names=("vector1", "vector2"))
shape.compare_batch_dimensions(
tensors=(vector1, vector2, percent),
last_axes=(-2, -2, -1),
broadcast_compatible=True,
tensor_names=("vector1", "vector2", "percent"))
normalized1 = tf.nn.l2_normalize(vector1, axis=-1)
normalized2 = tf.nn.l2_normalize(vector2, axis=-1)
dot_product = _safe_dot(normalized1, normalized2, eps)
theta = tf.acos(dot_product)
scale1 = safe_ops.safe_sinpx_div_sinx(theta, 1.0 - percent, eps)
scale2 = safe_ops.safe_sinpx_div_sinx(theta, percent, eps)
return scale1, scale2
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| -1 |
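As a quick orientation to the slerp module reproduced above, the following is a minimal usage sketch. It assumes the module is importable as tensorflow_graphics.math.interpolation.slerp (its location in the upstream library); the tensor values are illustrative assumptions only.

import tensorflow as tf
from tensorflow_graphics.math.interpolation import slerp

# Two unit quaternions and an interpolation factor.
q1 = tf.constant([[0.0, 0.0, 0.0, 1.0]])
q2 = tf.constant([[0.0, 0.7071067811865475, 0.0, 0.7071067811865475]])
percent = 0.5

# High-level helper; quaternion slerp is the default method.
q_mid = slerp.interpolate(
    q1, q2, percent, method=slerp.InterpolationType.QUATERNION)

# Equivalent two-step form: compute weights, then take the weighted sum.
w1, w2 = slerp.quaternion_weights(q1, q2, percent)
q_mid_manual = slerp.interpolate_with_weights(q1, q2, w1, w2)

# Regular M-D vectors go through vector_weights instead.
v1 = tf.constant([[1.0, 0.0, 0.0]])
v2 = tf.constant([[0.0, 1.0, 0.0]])
s1, s2 = slerp.vector_weights(v1, v2, percent)
v_mid = slerp.interpolate_with_weights(v1, v2, s1, s2)

The two-step form mirrors the module docstring: the weight functions return per-input factors of shape [A1, ..., An, 1] that interpolate_with_weights combines as a weighted sum.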
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/projects/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Projects module."""
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Projects module."""
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/datasets/shapenet/shapenet.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Shapenet Core dataset."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import csv
import json
import os
import textwrap
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
from tensorflow_datasets import features as tfds_features
from tensorflow_graphics.datasets import features as tfg_features
_CITATION = """
@techreport{shapenet2015,
title = {{ShapeNet: An Information-Rich 3D Model Repository}},
author = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
number = {arXiv:1512.03012 [cs.GR]},
institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
year = {2015}
}
"""
_DESCRIPTION = """
ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object
categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked
to an appropriate synset in WordNet (version 3.0).
The synsets will be extracted from the taxonomy.json file in the ShapeNetCore.v2.zip
archive and the splits from http://shapenet.cs.stanford.edu/shapenet/obj-zip/SHREC16/all.csv
"""
_TAXONOMY_FILE_NAME = 'taxonomy.json'
_SPLIT_FILE_URL = \
'http://shapenet.cs.stanford.edu/shapenet/obj-zip/SHREC16/all.csv'
class ShapenetConfig(tfds.core.BuilderConfig):
"""Base class for Shapenet BuilderConfigs.
The Shapenet database builder delegates the implementation of info,
split_generators and generate_examples to the specified ShapenetConfig. This
is done to allow multiple versions of the dataset.
"""
def info(self, dataset_builder):
"""Delegated Shapenet._info."""
raise NotImplementedError('Abstract method')
def split_generators(self, dl_manager, dataset_builder):
"""Delegated Shapenet._split_generators."""
raise NotImplementedError('Abstract method')
def generate_examples(self, **kwargs):
"""Delegated Shapenet._generate_examples."""
raise NotImplementedError('Abstract method')
class MeshConfig(ShapenetConfig):
"""A Shapenet config for loading the original .obj files."""
_MODEL_SUBPATH = os.path.join('models', 'model_normalized.obj')
def __init__(self, model_subpath=_MODEL_SUBPATH):
super(MeshConfig, self).__init__(
name='shapenet_trimesh',
description=_DESCRIPTION,
version=tfds.core.Version('1.0.0'))
self.model_subpath = model_subpath
def info(self, dataset_builder):
return tfds.core.DatasetInfo(
builder=dataset_builder,
description=_DESCRIPTION,
features=tfds_features.FeaturesDict({
'trimesh': tfg_features.TriangleMesh(),
'label': tfds_features.ClassLabel(num_classes=353),
'model_id': tfds_features.Text(),
}),
supervised_keys=('trimesh', 'label'),
# Homepage of the dataset for documentation
homepage='https://shapenet.org/',
citation=_CITATION,
)
def split_generators(self, dl_manager, dataset_builder):
# Extract the synset ids from the taxonomy file and update the ClassLabel
# feature.
with tf.io.gfile.GFile(
os.path.join(dl_manager.manual_dir,
_TAXONOMY_FILE_NAME)) as taxonomy_file:
labels = [x['synsetId'] for x in json.loads(taxonomy_file.read())]
# Remove duplicate labels (the json file contains two identical entries
# for synset '04591713').
labels = list(collections.OrderedDict.fromkeys(labels))
dataset_builder.info.features['label'].names = labels
split_file = dl_manager.download(_SPLIT_FILE_URL)
fieldnames = ['id', 'synset', 'sub_synset', 'model_id', 'split']
model_items = collections.defaultdict(list)
with tf.io.gfile.GFile(split_file) as csvfile:
for row in csv.DictReader(csvfile, fieldnames):
model_items[row['split']].append(row)
return [
tfds.core.SplitGenerator(
name=tfds.Split.TRAIN,
gen_kwargs={
'base_dir': dl_manager.manual_dir,
'models': model_items['train']
},
),
tfds.core.SplitGenerator(
name=tfds.Split.TEST,
gen_kwargs={
'base_dir': dl_manager.manual_dir,
'models': model_items['test']
},
),
tfds.core.SplitGenerator(
name=tfds.Split.VALIDATION,
gen_kwargs={
'base_dir': dl_manager.manual_dir,
'models': model_items['val']
},
),
]
def generate_examples(self, base_dir, models):
"""Yields examples.
The structure of the examples:
{
'trimesh': tensorflow_graphics.datasets.features.TriangleMesh
'label': tensorflow_datasets.features.ClassLabel
'model_id': tensorflow_datasets.features.Text
}
Args:
base_dir: The base directory of shapenet.
models: The list of models in the split.
"""
for model in models:
synset = model['synset']
model_id = model['model_id']
model_filepath = os.path.join(base_dir, synset, model_id,
self.model_subpath)
# If the model doesn't exist, skip it.
if not tf.io.gfile.exists(model_filepath):
continue
yield model_id, {
'trimesh': model_filepath,
'label': synset,
'model_id': model_id,
}
class Shapenet(tfds.core.GeneratorBasedBuilder):
"""ShapeNetCore V2.
Example usage of the dataset:
import tensorflow_datasets as tfds
from tensorflow_graphics.datasets.shapenet import Shapenet
data_set = Shapenet.load(
split='train',
download_and_prepare_kwargs={
'download_config':
tfds.download.DownloadConfig(manual_dir='~/shapenet_base')
})
for example in data_set.take(1):
trimesh, label, model_id = example['trimesh'], example['label'],
example['model_id']
"""
BUILDER_CONFIGS = [MeshConfig()]
VERSION = tfds.core.Version('1.0.0')
@staticmethod
def load(*args, **kwargs):
return tfds.load('shapenet', *args, **kwargs) # pytype: disable=wrong-arg-count
MANUAL_DOWNLOAD_INSTRUCTIONS = textwrap.dedent("""\
manual_dir should contain the extracted ShapeNetCore.v2.zip archive.
You need to register on https://shapenet.org/download/shapenetcore in order
to get the link to download the dataset.
""")
def _info(self):
return self.builder_config.info(self)
def _split_generators(self, dl_manager):
"""Returns SplitGenerators."""
return self.builder_config.split_generators(dl_manager, self)
def _generate_examples(self, **kwargs):
"""Yields examples."""
return self.builder_config.generate_examples(**kwargs)
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Shapenet Core dataset."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import collections
import csv
import json
import os
import textwrap
import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
from tensorflow_datasets import features as tfds_features
from tensorflow_graphics.datasets import features as tfg_features
_CITATION = """
@techreport{shapenet2015,
title = {{ShapeNet: An Information-Rich 3D Model Repository}},
author = {Chang, Angel X. and Funkhouser, Thomas and Guibas, Leonidas and Hanrahan, Pat and Huang, Qixing and Li, Zimo and Savarese, Silvio and Savva, Manolis and Song, Shuran and Su, Hao and Xiao, Jianxiong and Yi, Li and Yu, Fisher},
number = {arXiv:1512.03012 [cs.GR]},
institution = {Stanford University --- Princeton University --- Toyota Technological Institute at Chicago},
year = {2015}
}
"""
_DESCRIPTION = """
ShapeNetCore is a densely annotated subset of ShapeNet covering 55 common object
categories with ~51,300 unique 3D models. Each model in ShapeNetCore is linked
to an appropriate synset in WordNet (version 3.0).
The synsets will be extracted from the taxonomy.json file in the ShapeNetCore.v2.zip
archive and the splits from http://shapenet.cs.stanford.edu/shapenet/obj-zip/SHREC16/all.csv
"""
_TAXONOMY_FILE_NAME = 'taxonomy.json'
_SPLIT_FILE_URL = \
'http://shapenet.cs.stanford.edu/shapenet/obj-zip/SHREC16/all.csv'
class ShapenetConfig(tfds.core.BuilderConfig):
"""Base class for Shapenet BuilderConfigs.
The Shapenet database builder delegates the implementation of info,
split_generators and generate_examples to the specified ShapenetConfig. This
is done to allow multiple versions of the dataset.
"""
def info(self, dataset_builder):
"""Delegated Shapenet._info."""
raise NotImplementedError('Abstract method')
def split_generators(self, dl_manager, dataset_builder):
"""Delegated Shapenet._split_generators."""
raise NotImplementedError('Abstract method')
def generate_examples(self, **kwargs):
"""Delegated Shapenet._generate_examples."""
raise NotImplementedError('Abstract method')
class MeshConfig(ShapenetConfig):
"""A Shapenet config for loading the original .obj files."""
_MODEL_SUBPATH = os.path.join('models', 'model_normalized.obj')
def __init__(self, model_subpath=_MODEL_SUBPATH):
super(MeshConfig, self).__init__(
name='shapenet_trimesh',
description=_DESCRIPTION,
version=tfds.core.Version('1.0.0'))
self.model_subpath = model_subpath
def info(self, dataset_builder):
return tfds.core.DatasetInfo(
builder=dataset_builder,
description=_DESCRIPTION,
features=tfds_features.FeaturesDict({
'trimesh': tfg_features.TriangleMesh(),
'label': tfds_features.ClassLabel(num_classes=353),
'model_id': tfds_features.Text(),
}),
supervised_keys=('trimesh', 'label'),
# Homepage of the dataset for documentation
homepage='https://shapenet.org/',
citation=_CITATION,
)
def split_generators(self, dl_manager, dataset_builder):
# Extract the synset ids from the taxonomy file and update the ClassLabel
# feature.
with tf.io.gfile.GFile(
os.path.join(dl_manager.manual_dir,
_TAXONOMY_FILE_NAME)) as taxonomy_file:
labels = [x['synsetId'] for x in json.loads(taxonomy_file.read())]
# Remove duplicate labels (the json file contains two identical entries
# for synset '04591713').
labels = list(collections.OrderedDict.fromkeys(labels))
dataset_builder.info.features['label'].names = labels
split_file = dl_manager.download(_SPLIT_FILE_URL)
fieldnames = ['id', 'synset', 'sub_synset', 'model_id', 'split']
model_items = collections.defaultdict(list)
with tf.io.gfile.GFile(split_file) as csvfile:
for row in csv.DictReader(csvfile, fieldnames):
model_items[row['split']].append(row)
return [
tfds.core.SplitGenerator(
name=tfds.Split.TRAIN,
gen_kwargs={
'base_dir': dl_manager.manual_dir,
'models': model_items['train']
},
),
tfds.core.SplitGenerator(
name=tfds.Split.TEST,
gen_kwargs={
'base_dir': dl_manager.manual_dir,
'models': model_items['test']
},
),
tfds.core.SplitGenerator(
name=tfds.Split.VALIDATION,
gen_kwargs={
'base_dir': dl_manager.manual_dir,
'models': model_items['val']
},
),
]
def generate_examples(self, base_dir, models):
"""Yields examples.
The structure of the examples:
{
'trimesh': tensorflow_graphics.datasets.features.TriangleMesh
'label': tensorflow_datasets.features.ClassLabel
'model_id': tensorflow_datasets.features.Text
}
Args:
base_dir: The base directory of shapenet.
models: The list of models in the split.
"""
for model in models:
synset = model['synset']
model_id = model['model_id']
model_filepath = os.path.join(base_dir, synset, model_id,
self.model_subpath)
# If the model doesn't exist, skip it.
if not tf.io.gfile.exists(model_filepath):
continue
yield model_id, {
'trimesh': model_filepath,
'label': synset,
'model_id': model_id,
}
class Shapenet(tfds.core.GeneratorBasedBuilder):
"""ShapeNetCore V2.
Example usage of the dataset:
import tensorflow_datasets as tfds
from tensorflow_graphics.datasets.shapenet import Shapenet
data_set = Shapenet.load(
split='train',
download_and_prepare_kwargs={
'download_config':
tfds.download.DownloadConfig(manual_dir='~/shapenet_base')
})
for example in data_set.take(1):
trimesh, label, model_id = example['trimesh'], example['label'],
example['model_id']
"""
BUILDER_CONFIGS = [MeshConfig()]
VERSION = tfds.core.Version('1.0.0')
@staticmethod
def load(*args, **kwargs):
return tfds.load('shapenet', *args, **kwargs) # pytype: disable=wrong-arg-count
MANUAL_DOWNLOAD_INSTRUCTIONS = textwrap.dedent("""\
manual_dir should contain the extracted ShapeNetCore.v2.zip archive.
You need to register on https://shapenet.org/download/shapenetcore in order
to get the link to download the dataset.
""")
def _info(self):
return self.builder_config.info(self)
def _split_generators(self, dl_manager):
"""Returns SplitGenerators."""
return self.builder_config.split_generators(dl_manager, self)
def _generate_examples(self, **kwargs):
"""Yields examples."""
return self.builder_config.generate_examples(**kwargs)
| -1 |
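The ShapenetConfig base class above exists so that alternative encodings of ShapeNet can reuse the same builder machinery. Below is a hedged sketch of that delegation pattern with a hypothetical VoxelConfig; the class name, the voxel feature, and its shape are invented for illustration and are not part of this dataset.

import tensorflow.compat.v2 as tf
import tensorflow_datasets as tfds
from tensorflow_datasets import features as tfds_features
from tensorflow_graphics.datasets.shapenet import shapenet


class VoxelConfig(shapenet.ShapenetConfig):
  """Hypothetical config reusing the delegation pattern of MeshConfig."""

  def __init__(self):
    super(VoxelConfig, self).__init__(
        name='shapenet_voxel',
        description='Voxelized ShapeNet models (illustrative only).',
        version=tfds.core.Version('1.0.0'))

  def info(self, dataset_builder):
    return tfds.core.DatasetInfo(
        builder=dataset_builder,
        description='Voxelized ShapeNet models (illustrative only).',
        features=tfds_features.FeaturesDict({
            'voxels': tfds_features.Tensor(shape=(32, 32, 32), dtype=tf.uint8),
            'label': tfds_features.ClassLabel(num_classes=353),
            'model_id': tfds_features.Text(),
        }),
        homepage='https://shapenet.org/',
    )

  # split_generators() and generate_examples() would mirror MeshConfig,
  # reading voxelized files instead of models/model_normalized.obj.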
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/projects/cvxnet/train.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Training Loop."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow.compat.v1 as tf
from tensorflow_graphics.projects.cvxnet.lib import datasets
from tensorflow_graphics.projects.cvxnet.lib import models
from tensorflow_graphics.projects.cvxnet.lib import utils
tf.disable_eager_execution()
flags = tf.app.flags
logging = tf.logging
tf.logging.set_verbosity(tf.logging.INFO)
utils.define_flags()
FLAGS = flags.FLAGS
def main(unused_argv):
tf.set_random_seed(2191997)
np.random.seed(6281996)
logging.info("=> Starting ...")
# Select dataset.
logging.info("=> Preparing datasets ...")
data = datasets.get_dataset(FLAGS.dataset, "train", FLAGS)
batch = tf.data.make_one_shot_iterator(data).get_next()
# Select model.
logging.info("=> Creating {} model".format(FLAGS.model))
model = models.get_model(FLAGS.model, FLAGS)
optimizer = tf.train.AdamOptimizer(FLAGS.lr)
# Set up the graph
train_loss, train_op, global_step = model.compute_loss(
batch, training=True, optimizer=optimizer)
# Training hooks
stop_hook = tf.train.StopAtStepHook(last_step=FLAGS.max_steps)
summary_writer = tf.summary.FileWriter(FLAGS.train_dir)
ops = tf.get_collection(tf.GraphKeys.SUMMARIES)
summary_hook = tf.train.SummarySaverHook(
save_steps=100, summary_writer=summary_writer, summary_op=ops)
step_counter_hook = tf.train.StepCounterHook(summary_writer=summary_writer)
hooks = [stop_hook, step_counter_hook, summary_hook]
logging.info("=> Start training loop ...")
with tf.train.MonitoredTrainingSession(
checkpoint_dir=FLAGS.train_dir,
hooks=hooks,
scaffold=None,
save_checkpoint_steps=FLAGS.save_every,
save_checkpoint_secs=None,
save_summaries_steps=None,
save_summaries_secs=None,
log_step_count_steps=None,
max_wait_secs=3600) as mon_sess:
while not mon_sess.should_stop():
mon_sess.run([batch, train_loss, global_step, train_op])
if __name__ == "__main__":
tf.app.run(main)
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Training Loop."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import tensorflow.compat.v1 as tf
from tensorflow_graphics.projects.cvxnet.lib import datasets
from tensorflow_graphics.projects.cvxnet.lib import models
from tensorflow_graphics.projects.cvxnet.lib import utils
tf.disable_eager_execution()
flags = tf.app.flags
logging = tf.logging
tf.logging.set_verbosity(tf.logging.INFO)
utils.define_flags()
FLAGS = flags.FLAGS
def main(unused_argv):
tf.set_random_seed(2191997)
np.random.seed(6281996)
logging.info("=> Starting ...")
# Select dataset.
logging.info("=> Preparing datasets ...")
data = datasets.get_dataset(FLAGS.dataset, "train", FLAGS)
batch = tf.data.make_one_shot_iterator(data).get_next()
# Select model.
logging.info("=> Creating {} model".format(FLAGS.model))
model = models.get_model(FLAGS.model, FLAGS)
optimizer = tf.train.AdamOptimizer(FLAGS.lr)
# Set up the graph
train_loss, train_op, global_step = model.compute_loss(
batch, training=True, optimizer=optimizer)
# Training hooks
stop_hook = tf.train.StopAtStepHook(last_step=FLAGS.max_steps)
summary_writer = tf.summary.FileWriter(FLAGS.train_dir)
ops = tf.get_collection(tf.GraphKeys.SUMMARIES)
summary_hook = tf.train.SummarySaverHook(
save_steps=100, summary_writer=summary_writer, summary_op=ops)
step_counter_hook = tf.train.StepCounterHook(summary_writer=summary_writer)
hooks = [stop_hook, step_counter_hook, summary_hook]
logging.info("=> Start training loop ...")
with tf.train.MonitoredTrainingSession(
checkpoint_dir=FLAGS.train_dir,
hooks=hooks,
scaffold=None,
save_checkpoint_steps=FLAGS.save_every,
save_checkpoint_secs=None,
save_summaries_steps=None,
save_summaries_secs=None,
log_step_count_steps=None,
max_wait_secs=3600) as mon_sess:
while not mon_sess.should_stop():
mon_sess.run([batch, train_loss, global_step, train_op])
if __name__ == "__main__":
tf.app.run(main)
| -1 |
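The training script above is driven entirely by absl flags. A hedged launch sketch follows; only flags actually referenced in this file are listed, the remaining flags come from lib/utils.define_flags(), and every value is a placeholder rather than a recommended setting.

# Example invocation (all values are placeholders):
#
#   python -m tensorflow_graphics.projects.cvxnet.train \
#       --train_dir=/tmp/cvxnet \
#       --dataset=<dataset_name> \
#       --model=<model_name> \
#       --lr=1e-4 \
#       --max_steps=100000 \
#       --save_every=5000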
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/g3doc/build_docs.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Script to generate external api_docs for tf-graphics."""
# flake8: noqa
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from absl import app
from absl import flags
from tensorflow_docs.api_generator import generate_lib
os.environ["TFG_DOC_IMPORTS"] = "1"
import tensorflow_graphics as tfg # pylint: disable=g-import-not-at-top
FLAGS = flags.FLAGS
flags.DEFINE_string("output_dir", "/tmp/graphics_api",
"Where to output the docs")
flags.DEFINE_string(
"code_url_prefix",
"https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics",
"The url prefix for links to code.")
flags.DEFINE_bool("search_hints", True,
"Include metadata search hints in the generated files")
flags.DEFINE_string("site_path", "graphics/api_docs/python",
"Path prefix in the _toc.yaml")
def main(_):
doc_generator = generate_lib.DocGenerator(
root_title="Tensorflow Graphics",
py_modules=[("tfg", tfg)],
base_dir=os.path.dirname(tfg.__file__),
search_hints=FLAGS.search_hints,
code_url_prefix=FLAGS.code_url_prefix,
site_path=FLAGS.site_path)
doc_generator.build(output_dir=FLAGS.output_dir)
if __name__ == "__main__":
app.run(main)
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Script to generate external api_docs for tf-graphics."""
# flake8: noqa
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import os
from absl import app
from absl import flags
from tensorflow_docs.api_generator import generate_lib
os.environ["TFG_DOC_IMPORTS"] = "1"
import tensorflow_graphics as tfg # pylint: disable=g-import-not-at-top
FLAGS = flags.FLAGS
flags.DEFINE_string("output_dir", "/tmp/graphics_api",
"Where to output the docs")
flags.DEFINE_string(
"code_url_prefix",
"https://github.com/tensorflow/graphics/blob/master/tensorflow_graphics",
"The url prefix for links to code.")
flags.DEFINE_bool("search_hints", True,
"Include metadata search hints in the generated files")
flags.DEFINE_string("site_path", "graphics/api_docs/python",
"Path prefix in the _toc.yaml")
def main(_):
doc_generator = generate_lib.DocGenerator(
root_title="Tensorflow Graphics",
py_modules=[("tfg", tfg)],
base_dir=os.path.dirname(tfg.__file__),
search_hints=FLAGS.search_hints,
code_url_prefix=FLAGS.code_url_prefix,
site_path=FLAGS.site_path)
doc_generator.build(output_dir=FLAGS.output_dir)
if __name__ == "__main__":
app.run(main)
| -1 |
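Since build_docs.py is meant to be run as a standalone script, a short, hedged invocation sketch follows; the output directory is arbitrary and the remaining flags simply restate the defaults defined above.

# Build the API reference locally:
#
#   python tensorflow_graphics/g3doc/build_docs.py \
#       --output_dir=/tmp/graphics_api \
#       --site_path=graphics/api_docs/python \
#       --search_hints=true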
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/util/tests/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/util/tfg_flags.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Global flags to be used by various modules."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl import flags
FLAGS = flags.FLAGS
TFG_ADD_ASSERTS_TO_GRAPH = 'tfg_add_asserts_to_graph'
flags.DEFINE_boolean(
TFG_ADD_ASSERTS_TO_GRAPH, False,
'If True, calling tensorflow_graphics functions may add assert '
'nodes to the graph where necessary.', short_name='tfg_debug')
# The util functions or classes are not exported.
__all__ = []
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Global flags to be used by various modules."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl import flags
FLAGS = flags.FLAGS
TFG_ADD_ASSERTS_TO_GRAPH = 'tfg_add_asserts_to_graph'
flags.DEFINE_boolean(
TFG_ADD_ASSERTS_TO_GRAPH, False,
'If True, calling tensorflow_graphics functions may add assert '
'nodes to the graph where necessary.', short_name='tfg_debug')
# The util functions or classes are not exported.
__all__ = []
| -1 |
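As a hedged illustration of how this flag is consumed, the sketch below reads it through the absl registry; the helper name is invented, and it assumes the flags have already been parsed by an absl entry point such as app.run.

from absl import flags

from tensorflow_graphics.util import tfg_flags

FLAGS = flags.FLAGS


def asserts_enabled():
  """Returns True when debug asserts should be added to the graph."""
  # On the command line the switch is --tfg_add_asserts_to_graph
  # (short form: --tfg_debug).
  return FLAGS[tfg_flags.TFG_ADD_ASSERTS_TO_GRAPH].value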
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/image/color_space/constants.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Constant parameters for color space conversion."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# Conversion constants following the naming convention from the 'theory of the
# transformation' section at https://en.wikipedia.org/wiki/SRGB.
srgb_gamma = {"A": 0.055, "PHI": 12.92, "K0": 0.04045, "GAMMA": 2.4}
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Constant parameters for color space conversion."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# Conversion constants following the naming convention from the 'theory of the
# transformation' section at https://en.wikipedia.org/wiki/SRGB.
srgb_gamma = {"A": 0.055, "PHI": 12.92, "K0": 0.04045, "GAMMA": 2.4}
| -1 |
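To make the role of these constants concrete, the sketch below evaluates the standard sRGB decoding curve they parameterize using plain NumPy; it is an illustration of the formula only, not the library's own conversion op.

import numpy as np

from tensorflow_graphics.image.color_space.constants import srgb_gamma

a, phi, k0, gamma = (srgb_gamma[k] for k in ("A", "PHI", "K0", "GAMMA"))


def srgb_to_linear(srgb):
  """Piecewise sRGB decoding: linear segment below K0, power law above it."""
  srgb = np.asarray(srgb, dtype=np.float64)
  return np.where(srgb <= k0, srgb / phi, ((srgb + a) / (1.0 + a)) ** gamma)


print(srgb_to_linear([0.0, 0.04045, 0.5, 1.0]))
# -> approximately [0.0, 0.00313, 0.21404, 1.0]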
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/reflectance/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Reflectance module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow_graphics.rendering.reflectance import blinn_phong
from tensorflow_graphics.rendering.reflectance import lambertian
from tensorflow_graphics.rendering.reflectance import phong
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.rendering.reflectance.
__all__ = _export_api.get_modules()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Reflectance module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow_graphics.rendering.reflectance import blinn_phong
from tensorflow_graphics.rendering.reflectance import lambertian
from tensorflow_graphics.rendering.reflectance import phong
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.rendering.reflectance.
__all__ = _export_api.get_modules()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Rendering module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# pylint: disable=g-import-not-at-top
from tensorflow_graphics.util.doc import _import_tfg_docs
if _import_tfg_docs():
from tensorflow_graphics.rendering import camera
from tensorflow_graphics.rendering import opengl
from tensorflow_graphics.rendering import reflectance
from tensorflow_graphics.rendering import voxels
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.rendering.
__all__ = _export_api.get_modules()
# pylint: enable=g-import-not-at-top
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Rendering module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
# pylint: disable=g-import-not-at-top
from tensorflow_graphics.util.doc import _import_tfg_docs
if _import_tfg_docs():
from tensorflow_graphics.rendering import camera
from tensorflow_graphics.rendering import opengl
from tensorflow_graphics.rendering import reflectance
from tensorflow_graphics.rendering import voxels
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.rendering.
__all__ = _export_api.get_modules()
# pylint: enable=g-import-not-at-top
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/math/interpolation/tests/weighted_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for google3.third_party.py.tensorflow_graphics.interpolation.weighted."""
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.math.interpolation import weighted
from tensorflow_graphics.util import test_case
class WeightedTest(test_case.TestCase):
def _get_tensors_from_shapes(self, num_points, dim_points, num_outputs,
num_pts_to_interpolate):
points = np.random.uniform(size=(num_points, dim_points))
weights = np.random.uniform(size=(num_outputs, num_pts_to_interpolate))
indices = np.asarray([
np.random.permutation(num_points)[:num_pts_to_interpolate].tolist()
for _ in range(num_outputs)
])
indices = np.expand_dims(indices, axis=-1)
return points, weights, indices
@parameterized.parameters(
(3, 4, 2, 3),
(5, 4, 5, 3),
(5, 6, 5, 5),
(2, 6, 5, 1),
)
def test_interpolate_exception_not_raised(self, dim_points, num_points,
num_outputs,
num_pts_to_interpolate):
"""Tests whether exceptions are not raised for compatible shapes."""
points, weights, indices = self._get_tensors_from_shapes(
num_points, dim_points, num_outputs, num_pts_to_interpolate)
self.assert_exception_is_not_raised(
weighted.interpolate,
shapes=[],
points=points,
weights=weights,
indices=indices,
normalize=True)
@parameterized.parameters(
("must have a rank greater than 1", ((3,), (None, 2), (None, 2, 0))),
("must have a rank greater than 1", ((None, 3), (None, 2), (1,))),
("must have exactly 1 dimensions in axis -1", ((None, 3), (None, 2),
(None, 2, 2))),
("must have the same number of dimensions", ((None, 3), (None, 2),
(None, 3, 1))),
("Not all batch dimensions are broadcast-compatible.",
((None, 3), (None, 5, 2), (None, 4, 2, 1))),
)
def test_interpolate_exception_raised(self, error_msg, shapes):
"""Tests whether exceptions are raised for incompatible shapes."""
self.assert_exception_is_raised(
weighted.interpolate, error_msg, shapes=shapes, normalize=False)
@parameterized.parameters(
(((-1.0, 1.0), (1.0, 1.0), (3.0, 1.0), (-1.0, -1.0), (1.0, -1.0),
(3.0, -1.0)), ((0.25, 0.25, 0.25, 0.25), (0.5, 0.5, 0.0, 0.0)),
(((0,), (1,), (3,), (4,)), ((1,), (2,), (4,),
(5,))), False, ((0.0, 0.0), (2.0, 1.0))),)
def test_interpolate_preset(self, points, weights, indices, _, out):
"""Tests whether interpolation results are correct."""
weights = tf.convert_to_tensor(value=weights)
result_unnormalized = weighted.interpolate(
points=points, weights=weights, indices=indices, normalize=False)
result_normalized = weighted.interpolate(
points=points, weights=2.0 * weights, indices=indices, normalize=True)
estimated_unnormalized = self.evaluate(result_unnormalized)
estimated_normalized = self.evaluate(result_normalized)
self.assertAllClose(estimated_unnormalized, out)
self.assertAllClose(estimated_normalized, out)
@parameterized.parameters(
(3, 4, 2, 3),
(5, 4, 5, 3),
(5, 6, 5, 5),
(2, 6, 5, 1),
)
def test_interpolate_negative_weights_raised(self, dim_points, num_points,
num_outputs,
num_pts_to_interpolate):
"""Tests whether exception is raised when weights are negative."""
points, weights, indices = self._get_tensors_from_shapes(
num_points, dim_points, num_outputs, num_pts_to_interpolate)
weights *= -1.0
with self.assertRaises(tf.errors.InvalidArgumentError):
result = weighted.interpolate(
points=points, weights=weights, indices=indices, normalize=True)
self.evaluate(result)
@parameterized.parameters(
(((-1.0, 1.0), (1.0, 1.0), (3.0, 1.0), (-1.0, -1.0), (1.0, -1.0),
(3.0, -1.0)), ((1.0, -1.0, 1.0, -1.0), (0.0, 0.0, 0.0, 0.0)),
(((0,), (1,), (3,), (4,)), ((1,), (2,), (4,), (5,))), ((0.0, 0.0),
(0.0, 0.0))))
def test_interp_unnormalizable_raised_(self, points, weights, indices, _):
"""Tests whether exception is raised when weights are unnormalizable."""
with self.assertRaises(tf.errors.InvalidArgumentError):
result = weighted.interpolate(
points=points,
weights=weights,
indices=indices,
normalize=True,
allow_negative_weights=True)
self.evaluate(result)
@parameterized.parameters(
(3, 4, 2, 3),
(5, 4, 5, 3),
(5, 6, 5, 5),
(2, 6, 5, 1),
)
def test_interpolate_jacobian_random(self, dim_points, num_points,
num_outputs, num_pts_to_interpolate):
"""Tests whether jacobian is correct."""
points_np, weights_np, indices_np = self._get_tensors_from_shapes(
num_points, dim_points, num_outputs, num_pts_to_interpolate)
def interpolate_fn(points, weights):
return weighted.interpolate(
points=points, weights=weights, indices=indices_np, normalize=True)
self.assert_jacobian_is_correct_fn(interpolate_fn, [points_np, weights_np])
@parameterized.parameters(
((3, 2), (2, 2)),
((None, 3, 2), (None, 1, 2)),
((10, 5, 3, 2), (10, 5, 2, 2)),
)
def test_get_barycentric_coordinates_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(weighted.get_barycentric_coordinates,
shapes)
@parameterized.parameters(
("triangle_vertices must have exactly 2 dimensions in axis -1", (3, 1),
(1, 2)),
("triangle_vertices must have exactly 3 dimensions in axis -2", (2, 2),
(1, 2)),
("pixels must have exactly 2 dimensions in axis -1", (3, 2), (1, 3)),
("Not all batch dimensions are broadcast-compatible", (5, 3, 2),
(2, 10, 2)),
)
def test_get_barycentric_coordinates_exception_raised(self, error_msg,
*shape):
"""Tests that the shape exceptions are raised."""
self.assert_exception_is_raised(weighted.get_barycentric_coordinates,
error_msg, shape)
def test_get_barycentric_coordinates_jacobian_random(self):
"""Tests the Jacobian of get_barycentric_coordinates."""
tensor_size = np.random.randint(2)
tensor_shape = np.random.randint(1, 2, size=(tensor_size)).tolist()
triangle_vertices_init = 0.4 * np.random.random(
tensor_shape + [3, 2]).astype(np.float64) - 0.2
triangle_vertices_init += np.array(
((0.25, 0.25), (0.5, 0.75), (0.75, 0.25)))
pixels_init = np.random.random(tensor_shape + [3, 2]).astype(np.float64)
barycentric_fn = weighted.get_barycentric_coordinates
self.assert_jacobian_is_correct_fn(
lambda vertices, pixels: barycentric_fn(vertices, pixels)[0],
[triangle_vertices_init, pixels_init])
def test_get_barycentric_coordinates_normalized(self):
"""Tests whether the barycentric coordinates are normalized."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
num_pixels = np.random.randint(1, 10)
pixels_shape = tensor_shape + [num_pixels]
triangle_vertices = np.random.random(tensor_shape + [3, 2])
pixels = np.random.random(pixels_shape + [2])
barycentric_coordinates, _ = weighted.get_barycentric_coordinates(
triangle_vertices, pixels)
barycentric_coordinates_sum = tf.reduce_sum(
input_tensor=barycentric_coordinates, axis=-1)
self.assertAllClose(barycentric_coordinates_sum, np.full(pixels_shape, 1.0))
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for google3.third_party.py.tensorflow_graphics.interpolation.weighted."""
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.math.interpolation import weighted
from tensorflow_graphics.util import test_case
class WeightedTest(test_case.TestCase):
def _get_tensors_from_shapes(self, num_points, dim_points, num_outputs,
num_pts_to_interpolate):
points = np.random.uniform(size=(num_points, dim_points))
weights = np.random.uniform(size=(num_outputs, num_pts_to_interpolate))
indices = np.asarray([
np.random.permutation(num_points)[:num_pts_to_interpolate].tolist()
for _ in range(num_outputs)
])
indices = np.expand_dims(indices, axis=-1)
return points, weights, indices
@parameterized.parameters(
(3, 4, 2, 3),
(5, 4, 5, 3),
(5, 6, 5, 5),
(2, 6, 5, 1),
)
def test_interpolate_exception_not_raised(self, dim_points, num_points,
num_outputs,
num_pts_to_interpolate):
"""Tests whether exceptions are not raised for compatible shapes."""
points, weights, indices = self._get_tensors_from_shapes(
num_points, dim_points, num_outputs, num_pts_to_interpolate)
self.assert_exception_is_not_raised(
weighted.interpolate,
shapes=[],
points=points,
weights=weights,
indices=indices,
normalize=True)
@parameterized.parameters(
("must have a rank greater than 1", ((3,), (None, 2), (None, 2, 0))),
("must have a rank greater than 1", ((None, 3), (None, 2), (1,))),
("must have exactly 1 dimensions in axis -1", ((None, 3), (None, 2),
(None, 2, 2))),
("must have the same number of dimensions", ((None, 3), (None, 2),
(None, 3, 1))),
("Not all batch dimensions are broadcast-compatible.",
((None, 3), (None, 5, 2), (None, 4, 2, 1))),
)
def test_interpolate_exception_raised(self, error_msg, shapes):
"""Tests whether exceptions are raised for incompatible shapes."""
self.assert_exception_is_raised(
weighted.interpolate, error_msg, shapes=shapes, normalize=False)
@parameterized.parameters(
(((-1.0, 1.0), (1.0, 1.0), (3.0, 1.0), (-1.0, -1.0), (1.0, -1.0),
(3.0, -1.0)), ((0.25, 0.25, 0.25, 0.25), (0.5, 0.5, 0.0, 0.0)),
(((0,), (1,), (3,), (4,)), ((1,), (2,), (4,),
(5,))), False, ((0.0, 0.0), (2.0, 1.0))),)
def test_interpolate_preset(self, points, weights, indices, _, out):
"""Tests whether interpolation results are correct."""
weights = tf.convert_to_tensor(value=weights)
result_unnormalized = weighted.interpolate(
points=points, weights=weights, indices=indices, normalize=False)
result_normalized = weighted.interpolate(
points=points, weights=2.0 * weights, indices=indices, normalize=True)
estimated_unnormalized = self.evaluate(result_unnormalized)
estimated_normalized = self.evaluate(result_normalized)
self.assertAllClose(estimated_unnormalized, out)
self.assertAllClose(estimated_normalized, out)
@parameterized.parameters(
(3, 4, 2, 3),
(5, 4, 5, 3),
(5, 6, 5, 5),
(2, 6, 5, 1),
)
def test_interpolate_negative_weights_raised(self, dim_points, num_points,
num_outputs,
num_pts_to_interpolate):
"""Tests whether exception is raised when weights are negative."""
points, weights, indices = self._get_tensors_from_shapes(
num_points, dim_points, num_outputs, num_pts_to_interpolate)
weights *= -1.0
with self.assertRaises(tf.errors.InvalidArgumentError):
result = weighted.interpolate(
points=points, weights=weights, indices=indices, normalize=True)
self.evaluate(result)
@parameterized.parameters(
(((-1.0, 1.0), (1.0, 1.0), (3.0, 1.0), (-1.0, -1.0), (1.0, -1.0),
(3.0, -1.0)), ((1.0, -1.0, 1.0, -1.0), (0.0, 0.0, 0.0, 0.0)),
(((0,), (1,), (3,), (4,)), ((1,), (2,), (4,), (5,))), ((0.0, 0.0),
(0.0, 0.0))))
def test_interp_unnormalizable_raised_(self, points, weights, indices, _):
"""Tests whether exception is raised when weights are unnormalizable."""
with self.assertRaises(tf.errors.InvalidArgumentError):
result = weighted.interpolate(
points=points,
weights=weights,
indices=indices,
normalize=True,
allow_negative_weights=True)
self.evaluate(result)
@parameterized.parameters(
(3, 4, 2, 3),
(5, 4, 5, 3),
(5, 6, 5, 5),
(2, 6, 5, 1),
)
def test_interpolate_jacobian_random(self, dim_points, num_points,
num_outputs, num_pts_to_interpolate):
"""Tests whether jacobian is correct."""
points_np, weights_np, indices_np = self._get_tensors_from_shapes(
num_points, dim_points, num_outputs, num_pts_to_interpolate)
def interpolate_fn(points, weights):
return weighted.interpolate(
points=points, weights=weights, indices=indices_np, normalize=True)
self.assert_jacobian_is_correct_fn(interpolate_fn, [points_np, weights_np])
@parameterized.parameters(
((3, 2), (2, 2)),
((None, 3, 2), (None, 1, 2)),
((10, 5, 3, 2), (10, 5, 2, 2)),
)
def test_get_barycentric_coordinates_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(weighted.get_barycentric_coordinates,
shapes)
@parameterized.parameters(
("triangle_vertices must have exactly 2 dimensions in axis -1", (3, 1),
(1, 2)),
("triangle_vertices must have exactly 3 dimensions in axis -2", (2, 2),
(1, 2)),
("pixels must have exactly 2 dimensions in axis -1", (3, 2), (1, 3)),
("Not all batch dimensions are broadcast-compatible", (5, 3, 2),
(2, 10, 2)),
)
def test_get_barycentric_coordinates_exception_raised(self, error_msg,
*shape):
"""Tests that the shape exceptions are raised."""
self.assert_exception_is_raised(weighted.get_barycentric_coordinates,
error_msg, shape)
def test_get_barycentric_coordinates_jacobian_random(self):
"""Tests the Jacobian of get_barycentric_coordinates."""
tensor_size = np.random.randint(2)
tensor_shape = np.random.randint(1, 2, size=(tensor_size)).tolist()
triangle_vertices_init = 0.4 * np.random.random(
tensor_shape + [3, 2]).astype(np.float64) - 0.2
triangle_vertices_init += np.array(
((0.25, 0.25), (0.5, 0.75), (0.75, 0.25)))
pixels_init = np.random.random(tensor_shape + [3, 2]).astype(np.float64)
barycentric_fn = weighted.get_barycentric_coordinates
self.assert_jacobian_is_correct_fn(
lambda vertices, pixels: barycentric_fn(vertices, pixels)[0],
[triangle_vertices_init, pixels_init])
def test_get_barycentric_coordinates_normalized(self):
"""Tests whether the barycentric coordinates are normalized."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
num_pixels = np.random.randint(1, 10)
pixels_shape = tensor_shape + [num_pixels]
triangle_vertices = np.random.random(tensor_shape + [3, 2])
pixels = np.random.random(pixels_shape + [2])
barycentric_coordinates, _ = weighted.get_barycentric_coordinates(
triangle_vertices, pixels)
barycentric_coordinates_sum = tf.reduce_sum(
input_tensor=barycentric_coordinates, axis=-1)
self.assertAllClose(barycentric_coordinates_sum, np.full(pixels_shape, 1.0))
if __name__ == "__main__":
test_case.main()
| -1 |
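The preset test in the file above pins down the expected behaviour of weighted.interpolate. As a standalone usage sketch, the same values can be fed directly in eager mode; the input values and the expected output are taken from test_interpolate_preset, and the import path is the one used by the test file.

```python
import numpy as np
from tensorflow_graphics.math.interpolation import weighted

# Six grid points, two outputs, each output a weighted sum of four of the points.
points = np.array([[-1.0, 1.0], [1.0, 1.0], [3.0, 1.0],
                   [-1.0, -1.0], [1.0, -1.0], [3.0, -1.0]])
weights = np.array([[0.25, 0.25, 0.25, 0.25],
                    [0.5, 0.5, 0.0, 0.0]])
indices = np.array([[[0], [1], [3], [4]],
                    [[1], [2], [4], [5]]])

result = weighted.interpolate(
    points=points, weights=weights, indices=indices, normalize=False)
print(result)  # approximately [[0., 0.], [2., 1.]]
```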
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/projects/cvxnet/eval.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Evaluation."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from os import path
import numpy as np
import tensorflow.compat.v1 as tf
from tensorflow_graphics.projects.cvxnet.lib import datasets
from tensorflow_graphics.projects.cvxnet.lib import models
from tensorflow_graphics.projects.cvxnet.lib import utils
tf.disable_eager_execution()
flags = tf.app.flags
logging = tf.logging
tf.logging.set_verbosity(tf.logging.INFO)
utils.define_flags()
FLAGS = flags.FLAGS
def main(unused_argv):
tf.set_random_seed(2191997)
np.random.seed(6281996)
logging.info('=> Starting ...')
eval_dir = path.join(FLAGS.train_dir, 'eval')
# Select dataset.
logging.info('=> Preparing datasets ...')
data = datasets.get_dataset(FLAGS.dataset, 'test', FLAGS)
batch = tf.data.make_one_shot_iterator(data).get_next()
# Select model.
logging.info('=> Creating {} model'.format(FLAGS.model))
model = models.get_model(FLAGS.model, FLAGS)
# Set up the graph
global_step = tf.train.get_or_create_global_step()
test_loss, test_iou = model.compute_loss(batch, training=False)
if FLAGS.extract_mesh or FLAGS.surface_metrics:
img_ch = 3 if FLAGS.image_input else FLAGS.depth_d
input_holder = tf.placeholder(tf.float32, [None, 224, 224, img_ch])
params = model.encode(input_holder, training=False)
params_holder = tf.placeholder(tf.float32, [None, model.n_params])
points_holder = tf.placeholder(tf.float32, [None, None, FLAGS.dims])
indicators, unused_var = model.decode(
params_holder, points_holder, training=False)
if (not FLAGS.extract_mesh) or (not FLAGS.surface_metrics):
summary_writer = tf.summary.FileWriter(eval_dir)
iou_holder = tf.placeholder(tf.float32)
iou_summary = tf.summary.scalar('test_iou', iou_holder)
logging.info('=> Evaluating ...')
last_step = -1
while True:
shapenet_stats = utils.init_stats()
with tf.train.MonitoredTrainingSession(
checkpoint_dir=FLAGS.train_dir,
hooks=[],
save_checkpoint_steps=None,
save_checkpoint_secs=None,
save_summaries_steps=None,
save_summaries_secs=None,
log_step_count_steps=None,
max_wait_secs=3600) as mon_sess:
step_val = mon_sess.run(global_step)
if step_val <= last_step:
continue
else:
last_step = step_val
while not mon_sess.should_stop():
batch_val, unused_var, test_iou_val = mon_sess.run(
[batch, test_loss, test_iou])
if FLAGS.extract_mesh or FLAGS.surface_metrics:
if FLAGS.image_input:
input_val = batch_val['image']
else:
input_val = batch_val['depth']
mesh = utils.extract_mesh(
input_val,
params,
indicators,
input_holder,
params_holder,
points_holder,
mon_sess,
FLAGS,
)
if FLAGS.trans_dir is not None:
utils.transform_mesh(mesh, batch_val['name'], FLAGS.trans_dir)
if FLAGS.extract_mesh:
utils.save_mesh(mesh, batch_val['name'], eval_dir)
if FLAGS.surface_metrics:
chamfer, fscore = utils.compute_surface_metrics(
mesh, batch_val['name'], FLAGS.mesh_dir)
else:
chamfer = fscore = 0.
example_stats = utils.Stats(
iou=test_iou_val[0], chamfer=chamfer, fscore=fscore)
utils.update_stats(example_stats, batch_val['name'], shapenet_stats)
utils.average_stats(shapenet_stats)
if (not FLAGS.extract_mesh) and (not FLAGS.surface_metrics):
with tf.Session() as sess:
iou_summary_val = sess.run(
iou_summary, feed_dict={iou_holder: shapenet_stats['all']['iou']})
summary_writer.add_summary(iou_summary_val, step_val)
summary_writer.flush()
if FLAGS.surface_metrics:
utils.write_stats(
shapenet_stats,
eval_dir,
step_val,
)
if FLAGS.eval_once or step_val >= FLAGS.max_steps:
break
if __name__ == '__main__':
tf.app.run(main)
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Evaluation."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from os import path
import numpy as np
import tensorflow.compat.v1 as tf
from tensorflow_graphics.projects.cvxnet.lib import datasets
from tensorflow_graphics.projects.cvxnet.lib import models
from tensorflow_graphics.projects.cvxnet.lib import utils
tf.disable_eager_execution()
flags = tf.app.flags
logging = tf.logging
tf.logging.set_verbosity(tf.logging.INFO)
utils.define_flags()
FLAGS = flags.FLAGS
def main(unused_argv):
tf.set_random_seed(2191997)
np.random.seed(6281996)
logging.info('=> Starting ...')
eval_dir = path.join(FLAGS.train_dir, 'eval')
# Select dataset.
logging.info('=> Preparing datasets ...')
data = datasets.get_dataset(FLAGS.dataset, 'test', FLAGS)
batch = tf.data.make_one_shot_iterator(data).get_next()
# Select model.
logging.info('=> Creating {} model'.format(FLAGS.model))
model = models.get_model(FLAGS.model, FLAGS)
# Set up the graph
global_step = tf.train.get_or_create_global_step()
test_loss, test_iou = model.compute_loss(batch, training=False)
if FLAGS.extract_mesh or FLAGS.surface_metrics:
img_ch = 3 if FLAGS.image_input else FLAGS.depth_d
input_holder = tf.placeholder(tf.float32, [None, 224, 224, img_ch])
params = model.encode(input_holder, training=False)
params_holder = tf.placeholder(tf.float32, [None, model.n_params])
points_holder = tf.placeholder(tf.float32, [None, None, FLAGS.dims])
indicators, unused_var = model.decode(
params_holder, points_holder, training=False)
if (not FLAGS.extract_mesh) or (not FLAGS.surface_metrics):
summary_writer = tf.summary.FileWriter(eval_dir)
iou_holder = tf.placeholder(tf.float32)
iou_summary = tf.summary.scalar('test_iou', iou_holder)
logging.info('=> Evaluating ...')
last_step = -1
while True:
shapenet_stats = utils.init_stats()
with tf.train.MonitoredTrainingSession(
checkpoint_dir=FLAGS.train_dir,
hooks=[],
save_checkpoint_steps=None,
save_checkpoint_secs=None,
save_summaries_steps=None,
save_summaries_secs=None,
log_step_count_steps=None,
max_wait_secs=3600) as mon_sess:
step_val = mon_sess.run(global_step)
if step_val <= last_step:
continue
else:
last_step = step_val
while not mon_sess.should_stop():
batch_val, unused_var, test_iou_val = mon_sess.run(
[batch, test_loss, test_iou])
if FLAGS.extract_mesh or FLAGS.surface_metrics:
if FLAGS.image_input:
input_val = batch_val['image']
else:
input_val = batch_val['depth']
mesh = utils.extract_mesh(
input_val,
params,
indicators,
input_holder,
params_holder,
points_holder,
mon_sess,
FLAGS,
)
if FLAGS.trans_dir is not None:
utils.transform_mesh(mesh, batch_val['name'], FLAGS.trans_dir)
if FLAGS.extract_mesh:
utils.save_mesh(mesh, batch_val['name'], eval_dir)
if FLAGS.surface_metrics:
chamfer, fscore = utils.compute_surface_metrics(
mesh, batch_val['name'], FLAGS.mesh_dir)
else:
chamfer = fscore = 0.
example_stats = utils.Stats(
iou=test_iou_val[0], chamfer=chamfer, fscore=fscore)
utils.update_stats(example_stats, batch_val['name'], shapenet_stats)
utils.average_stats(shapenet_stats)
if (not FLAGS.extract_mesh) and (not FLAGS.surface_metrics):
with tf.Session() as sess:
iou_summary_val = sess.run(
iou_summary, feed_dict={iou_holder: shapenet_stats['all']['iou']})
summary_writer.add_summary(iou_summary_val, step_val)
summary_writer.flush()
if FLAGS.surface_metrics:
utils.write_stats(
shapenet_stats,
eval_dir,
step_val,
)
if FLAGS.eval_once or step_val >= FLAGS.max_steps:
break
if __name__ == '__main__':
tf.app.run(main)
| -1 |
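The eval.py script above is TF1 graph-mode code: the dataset is consumed through tf.data.make_one_shot_iterator(...).get_next() and tensors are pulled out with session runs. Purely as a point of contrast with the tf.compat.v1 removals described in the row metadata, a minimal sketch of the eager-mode equivalent of that input pattern follows; the stand-in dataset and its shapes mirror the 224x224 image placeholder above and are not part of the project code.

```python
import tensorflow as tf

# Stand-in dataset with the same element structure as the image batches above.
dataset = tf.data.Dataset.from_tensor_slices(
    {"image": tf.zeros((8, 224, 224, 3), tf.float32)}).batch(2)

# In eager TF2, the make_one_shot_iterator().get_next() / sess.run(batch)
# pattern becomes plain Python iteration over the dataset.
for batch in dataset:
  print(batch["image"].shape)  # (2, 224, 224, 3)
```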
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/nn/layer/tests/graph_convolution_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the graph convolution layers."""
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
import tensorflow_graphics.nn.layer.graph_convolution as gc_layer
from tensorflow_graphics.util import test_case
def _dense_to_sparse(data):
"""Convert a numpy array to a tf.SparseTensor."""
indices = np.where(data)
return tf.SparseTensor(
np.stack(indices, axis=-1), data[indices], dense_shape=data.shape)
def _dummy_data(batch_size, num_vertices, num_channels):
"""Create inputs for feature_steered_convolution."""
if batch_size > 0:
data = np.zeros(
shape=(batch_size, num_vertices, num_channels), dtype=np.float32)
neighbors = _dense_to_sparse(
np.tile(np.eye(num_vertices, dtype=np.float32), (batch_size, 1, 1)))
else:
data = np.zeros(shape=(num_vertices, num_channels), dtype=np.float32)
neighbors = _dense_to_sparse(np.eye(num_vertices, dtype=np.float32))
return data, neighbors
class GraphConvolutionTestFeatureSteeredConvolutionLayerTests(
test_case.TestCase):
@parameterized.parameters(
(1, 1, 1, 1, 1, False),
(4, 2, 3, None, 5, False),
(1, 2, 3, 4, 5, True),
)
def test_feature_steered_convolution_layer_exception_not_raised_shapes(
self, batch_size, num_vertices, in_channels, out_channels,
num_weight_matrices, translation_invariant):
"""Check if the convolution parameters and output have correct shapes."""
data, neighbors = _dummy_data(batch_size, num_vertices, in_channels)
name_scope = "test"
if tf.executing_eagerly():
layer = gc_layer.FeatureSteeredConvolutionKerasLayer(
translation_invariant=translation_invariant,
num_weight_matrices=num_weight_matrices,
num_output_channels=out_channels,
name=name_scope)
def _run_convolution():
"""Run the appropriate feature steered convolution layer."""
if tf.executing_eagerly():
try:
output = layer(inputs=[data, neighbors], sizes=None)
except Exception as e: # pylint: disable=broad-except
self.fail("Exception raised: %s" % str(e))
else:
try:
output = gc_layer.feature_steered_convolution_layer(
data=data,
neighbors=neighbors,
sizes=None,
translation_invariant=translation_invariant,
num_weight_matrices=num_weight_matrices,
num_output_channels=out_channels,
name=None,
var_name=name_scope)
except Exception as e: # pylint: disable=broad-except
self.fail("Exception raised: %s" % str(e))
return output
output = _run_convolution()
output_shape = output.shape.as_list()
out_channels = in_channels if out_channels is None else out_channels
self.assertEqual(output_shape[-1], out_channels)
self.assertAllEqual(output_shape[:-1], data.shape[:-1])
def _get_var_shape(var_name):
"""Get the shape of a variable by name."""
if tf.executing_eagerly():
trainable_variables = layer.trainable_variables
for tv in trainable_variables:
if tv.name == name_scope + "/" + var_name + ":0":
return tv.shape.as_list()
raise ValueError("Variable not found.")
else:
with tf.compat.v1.variable_scope(name_scope, reuse=True):
variable = tf.compat.v1.get_variable(
var_name, initializer=tf.constant(0))
return variable.shape.as_list()
self.assertAllEqual(_get_var_shape("u"), [in_channels, num_weight_matrices])
self.assertAllEqual(_get_var_shape("c"), [num_weight_matrices])
self.assertAllEqual(_get_var_shape("b"), [out_channels])
self.assertAllEqual(
_get_var_shape("w"), [num_weight_matrices, in_channels, out_channels])
if not translation_invariant:
self.assertAllEqual(
_get_var_shape("v"), [in_channels, num_weight_matrices])
def test_feature_steered_convolution_layer_initializer(self):
"""Tests a custom variable initializer."""
data = np.array(((1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)))
neighbors_indices = np.array(((0, 0), (0, 1), (0, 3),
(1, 0), (1, 1), (1, 2),
(2, 1), (2, 2), (2, 3),
(3, 0), (3, 2), (3, 3)))
neighbors = tf.SparseTensor(
neighbors_indices, np.ones(shape=(12,)) / 3.0, dense_shape=(4, 4))
initializer = tf.compat.v1.keras.initializers.zeros()
if tf.executing_eagerly():
layer = gc_layer.FeatureSteeredConvolutionKerasLayer(
translation_invariant=False,
initializer=initializer)
output = layer(inputs=[data, neighbors], sizes=None)
else:
out = gc_layer.feature_steered_convolution_layer(
data=data,
neighbors=neighbors,
sizes=None,
translation_invariant=False,
initializer=initializer)
self.evaluate(tf.compat.v1.global_variables_initializer())
output = self.evaluate(out)
# All zeros initializer should result in all zeros output.
self.assertAllEqual(output, np.zeros_like(data))
def test_feature_steered_convolution_layer_training(self):
"""Test a simple training loop."""
# Generate a small valid input for a simple training task.
# Four corners of a square.
data = np.array(((1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)))
neighbors_indices = np.array(((0, 0), (0, 1), (0, 3),
(1, 0), (1, 1), (1, 2),
(2, 1), (2, 2), (2, 3),
(3, 0), (3, 2), (3, 3)))
neighbors = tf.SparseTensor(
neighbors_indices, np.ones(shape=(12,)) / 3.0, dense_shape=(4, 4))
# Desired output is arbitrary.
labels = np.reshape([-1.0, -0.5, 0.5, 1.0], (-1, 1))
num_training_iterations = 5
if tf.executing_eagerly():
with tf.GradientTape(persistent=True) as tape:
layer = gc_layer.FeatureSteeredConvolutionKerasLayer(
translation_invariant=False,
num_weight_matrices=1,
num_output_channels=1)
output = layer(inputs=[data, neighbors], sizes=None)
loss = tf.nn.l2_loss(output - labels)
trainable_variables = layer.trainable_variables
for _ in range(num_training_iterations):
grads = tape.gradient(loss, trainable_variables)
tf.compat.v1.train.GradientDescentOptimizer(1e-4).apply_gradients(
zip(grads, trainable_variables))
else:
output = gc_layer.feature_steered_convolution_layer(
data=data,
neighbors=neighbors,
sizes=None,
translation_invariant=False,
num_weight_matrices=1,
num_output_channels=1)
train_op = tf.compat.v1.train.GradientDescentOptimizer(1e-4).minimize(
tf.nn.l2_loss(output - labels))
with tf.compat.v1.Session() as sess:
sess.run(tf.compat.v1.initialize_all_variables())
for _ in range(num_training_iterations):
sess.run(train_op)
class GraphConvolutionTestDynamicGraphConvolutionKerasLayerTests(
test_case.TestCase):
@parameterized.parameters(
(1, 1, 1, 1, "weighted"),
(4, 2, 3, 12, "max"),
(1, 2, 3, 4, "max"),
)
def test_dynamic_graph_convolution_keras_layer_exception_not_raised_shapes(
self, batch_size, num_vertices, in_channels, out_channels, reduction):
"""Check if the convolution parameters and output have correct shapes."""
if not tf.executing_eagerly():
return
data, neighbors = _dummy_data(batch_size, num_vertices, in_channels)
layer = gc_layer.DynamicGraphConvolutionKerasLayer(
num_output_channels=out_channels,
reduction=reduction)
try:
output = layer(inputs=[data, neighbors], sizes=None)
except Exception as e: # pylint: disable=broad-except
self.fail("Exception raised: %s" % str(e))
self.assertAllEqual((batch_size, num_vertices, out_channels), output.shape)
@parameterized.parameters(
(1, 1, 1, 1, "weighted"),
(4, 2, 3, 12, "max"),
(1, 2, 3, 4, "max"),
)
def test_dynamic_graph_convolution_keras_layer_zero_kernel(
self, batch_size, num_vertices, in_channels, out_channels, reduction):
"""Tests convolution with an all-zeros kernel."""
if not tf.executing_eagerly():
return
data, neighbors = _dummy_data(batch_size, num_vertices, in_channels)
data = np.random.uniform(size=data.shape).astype(np.float32)
layer = gc_layer.DynamicGraphConvolutionKerasLayer(
num_output_channels=out_channels,
reduction=reduction,
use_bias=False,
kernel_initializer=tf.compat.v1.keras.initializers.zeros())
output = layer(inputs=[data, neighbors], sizes=None)
self.assertAllEqual(
output,
np.zeros(shape=(batch_size, num_vertices, out_channels),
dtype=np.float32))
@parameterized.parameters((1, 1, 1), (2, 3, 12), (2, 3, 4))
def test_dynamic_graph_convolution_keras_layer_duplicate_features(
self, num_vertices, in_channels, out_channels):
"""Tests convolution when all vertex features are identical."""
if not tf.executing_eagerly():
return
data = np.random.uniform(size=(1, in_channels))
data = np.tile(data, (num_vertices, 1))
# Results should be independent of 'neighbors'.
neighbors = np.maximum(np.random.randint(
0, 2, size=(num_vertices, num_vertices)), np.eye(num_vertices))
neighbors = _dense_to_sparse(neighbors)
layer = gc_layer.DynamicGraphConvolutionKerasLayer(
num_output_channels=out_channels,
reduction="max")
output = layer(inputs=[data, neighbors], sizes=None)
output_tile = tf.tile(output[:1, :], (num_vertices, 1))
self.assertAllEqual(output, output_tile)
@parameterized.parameters("weighted", "max")
def test_dynamic_graph_convolution_keras_layer_training(self, reduction):
"""Test a simple training loop."""
if not tf.executing_eagerly():
return
# Generate a small valid input for a simple training task.
# Four corners of a square.
data = np.array(((1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)))
neighbors_indices = np.array(((0, 0), (0, 1), (0, 3),
(1, 0), (1, 1), (1, 2),
(2, 1), (2, 2), (2, 3),
(3, 0), (3, 2), (3, 3)))
neighbors = tf.SparseTensor(
neighbors_indices, np.ones(shape=(12,)) / 3.0, dense_shape=(4, 4))
# Desired output is arbitrary.
labels = np.reshape([-1.0, -0.5, 0.5, 1.0], (-1, 1))
num_training_iterations = 5
with tf.GradientTape(persistent=True) as tape:
layer = gc_layer.DynamicGraphConvolutionKerasLayer(
num_output_channels=2,
reduction=reduction)
output = layer(inputs=[data, neighbors], sizes=None)
loss = tf.nn.l2_loss(output - labels)
trainable_variables = layer.trainable_variables
for _ in range(num_training_iterations):
grads = tape.gradient(loss, trainable_variables)
tf.compat.v1.train.GradientDescentOptimizer(1e-4).apply_gradients(
zip(grads, trainable_variables))
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the graph convolution layers."""
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
import tensorflow_graphics.nn.layer.graph_convolution as gc_layer
from tensorflow_graphics.util import test_case
def _dense_to_sparse(data):
"""Convert a numpy array to a tf.SparseTensor."""
indices = np.where(data)
return tf.SparseTensor(
np.stack(indices, axis=-1), data[indices], dense_shape=data.shape)
def _dummy_data(batch_size, num_vertices, num_channels):
"""Create inputs for feature_steered_convolution."""
if batch_size > 0:
data = np.zeros(
shape=(batch_size, num_vertices, num_channels), dtype=np.float32)
neighbors = _dense_to_sparse(
np.tile(np.eye(num_vertices, dtype=np.float32), (batch_size, 1, 1)))
else:
data = np.zeros(shape=(num_vertices, num_channels), dtype=np.float32)
neighbors = _dense_to_sparse(np.eye(num_vertices, dtype=np.float32))
return data, neighbors
class GraphConvolutionTestFeatureSteeredConvolutionLayerTests(
test_case.TestCase):
@parameterized.parameters(
(1, 1, 1, 1, 1, False),
(4, 2, 3, None, 5, False),
(1, 2, 3, 4, 5, True),
)
def test_feature_steered_convolution_layer_exception_not_raised_shapes(
self, batch_size, num_vertices, in_channels, out_channels,
num_weight_matrices, translation_invariant):
"""Check if the convolution parameters and output have correct shapes."""
data, neighbors = _dummy_data(batch_size, num_vertices, in_channels)
name_scope = "test"
if tf.executing_eagerly():
layer = gc_layer.FeatureSteeredConvolutionKerasLayer(
translation_invariant=translation_invariant,
num_weight_matrices=num_weight_matrices,
num_output_channels=out_channels,
name=name_scope)
def _run_convolution():
"""Run the appropriate feature steered convolution layer."""
if tf.executing_eagerly():
try:
output = layer(inputs=[data, neighbors], sizes=None)
except Exception as e: # pylint: disable=broad-except
self.fail("Exception raised: %s" % str(e))
else:
try:
output = gc_layer.feature_steered_convolution_layer(
data=data,
neighbors=neighbors,
sizes=None,
translation_invariant=translation_invariant,
num_weight_matrices=num_weight_matrices,
num_output_channels=out_channels,
name=None,
var_name=name_scope)
except Exception as e: # pylint: disable=broad-except
self.fail("Exception raised: %s" % str(e))
return output
output = _run_convolution()
output_shape = output.shape.as_list()
out_channels = in_channels if out_channels is None else out_channels
self.assertEqual(output_shape[-1], out_channels)
self.assertAllEqual(output_shape[:-1], data.shape[:-1])
def _get_var_shape(var_name):
"""Get the shape of a variable by name."""
if tf.executing_eagerly():
trainable_variables = layer.trainable_variables
for tv in trainable_variables:
if tv.name == name_scope + "/" + var_name + ":0":
return tv.shape.as_list()
raise ValueError("Variable not found.")
else:
with tf.compat.v1.variable_scope(name_scope, reuse=True):
variable = tf.compat.v1.get_variable(
var_name, initializer=tf.constant(0))
return variable.shape.as_list()
self.assertAllEqual(_get_var_shape("u"), [in_channels, num_weight_matrices])
self.assertAllEqual(_get_var_shape("c"), [num_weight_matrices])
self.assertAllEqual(_get_var_shape("b"), [out_channels])
self.assertAllEqual(
_get_var_shape("w"), [num_weight_matrices, in_channels, out_channels])
if not translation_invariant:
self.assertAllEqual(
_get_var_shape("v"), [in_channels, num_weight_matrices])
def test_feature_steered_convolution_layer_initializer(self):
"""Tests a custom variable initializer."""
data = np.array(((1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)))
neighbors_indices = np.array(((0, 0), (0, 1), (0, 3),
(1, 0), (1, 1), (1, 2),
(2, 1), (2, 2), (2, 3),
(3, 0), (3, 2), (3, 3)))
neighbors = tf.SparseTensor(
neighbors_indices, np.ones(shape=(12,)) / 3.0, dense_shape=(4, 4))
initializer = tf.compat.v1.keras.initializers.zeros()
if tf.executing_eagerly():
layer = gc_layer.FeatureSteeredConvolutionKerasLayer(
translation_invariant=False,
initializer=initializer)
output = layer(inputs=[data, neighbors], sizes=None)
else:
out = gc_layer.feature_steered_convolution_layer(
data=data,
neighbors=neighbors,
sizes=None,
translation_invariant=False,
initializer=initializer)
self.evaluate(tf.compat.v1.global_variables_initializer())
output = self.evaluate(out)
# All zeros initializer should result in all zeros output.
self.assertAllEqual(output, np.zeros_like(data))
def test_feature_steered_convolution_layer_training(self):
"""Test a simple training loop."""
# Generate a small valid input for a simple training task.
# Four corners of a square.
data = np.array(((1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)))
neighbors_indices = np.array(((0, 0), (0, 1), (0, 3),
(1, 0), (1, 1), (1, 2),
(2, 1), (2, 2), (2, 3),
(3, 0), (3, 2), (3, 3)))
neighbors = tf.SparseTensor(
neighbors_indices, np.ones(shape=(12,)) / 3.0, dense_shape=(4, 4))
# Desired output is arbitrary.
labels = np.reshape([-1.0, -0.5, 0.5, 1.0], (-1, 1))
num_training_iterations = 5
if tf.executing_eagerly():
with tf.GradientTape(persistent=True) as tape:
layer = gc_layer.FeatureSteeredConvolutionKerasLayer(
translation_invariant=False,
num_weight_matrices=1,
num_output_channels=1)
output = layer(inputs=[data, neighbors], sizes=None)
loss = tf.nn.l2_loss(output - labels)
trainable_variables = layer.trainable_variables
for _ in range(num_training_iterations):
grads = tape.gradient(loss, trainable_variables)
tf.compat.v1.train.GradientDescentOptimizer(1e-4).apply_gradients(
zip(grads, trainable_variables))
else:
output = gc_layer.feature_steered_convolution_layer(
data=data,
neighbors=neighbors,
sizes=None,
translation_invariant=False,
num_weight_matrices=1,
num_output_channels=1)
train_op = tf.compat.v1.train.GradientDescentOptimizer(1e-4).minimize(
tf.nn.l2_loss(output - labels))
with tf.compat.v1.Session() as sess:
sess.run(tf.compat.v1.initialize_all_variables())
for _ in range(num_training_iterations):
sess.run(train_op)
class GraphConvolutionTestDynamicGraphConvolutionKerasLayerTests(
test_case.TestCase):
@parameterized.parameters(
(1, 1, 1, 1, "weighted"),
(4, 2, 3, 12, "max"),
(1, 2, 3, 4, "max"),
)
def test_dynamic_graph_convolution_keras_layer_exception_not_raised_shapes(
self, batch_size, num_vertices, in_channels, out_channels, reduction):
"""Check if the convolution parameters and output have correct shapes."""
if not tf.executing_eagerly():
return
data, neighbors = _dummy_data(batch_size, num_vertices, in_channels)
layer = gc_layer.DynamicGraphConvolutionKerasLayer(
num_output_channels=out_channels,
reduction=reduction)
try:
output = layer(inputs=[data, neighbors], sizes=None)
except Exception as e: # pylint: disable=broad-except
self.fail("Exception raised: %s" % str(e))
self.assertAllEqual((batch_size, num_vertices, out_channels), output.shape)
@parameterized.parameters(
(1, 1, 1, 1, "weighted"),
(4, 2, 3, 12, "max"),
(1, 2, 3, 4, "max"),
)
def test_dynamic_graph_convolution_keras_layer_zero_kernel(
self, batch_size, num_vertices, in_channels, out_channels, reduction):
"""Tests convolution with an all-zeros kernel."""
if not tf.executing_eagerly():
return
data, neighbors = _dummy_data(batch_size, num_vertices, in_channels)
data = np.random.uniform(size=data.shape).astype(np.float32)
layer = gc_layer.DynamicGraphConvolutionKerasLayer(
num_output_channels=out_channels,
reduction=reduction,
use_bias=False,
kernel_initializer=tf.compat.v1.keras.initializers.zeros())
output = layer(inputs=[data, neighbors], sizes=None)
self.assertAllEqual(
output,
np.zeros(shape=(batch_size, num_vertices, out_channels),
dtype=np.float32))
@parameterized.parameters((1, 1, 1), (2, 3, 12), (2, 3, 4))
def test_dynamic_graph_convolution_keras_layer_duplicate_features(
self, num_vertices, in_channels, out_channels):
"""Tests convolution when all vertex features are identical."""
if not tf.executing_eagerly():
return
data = np.random.uniform(size=(1, in_channels))
data = np.tile(data, (num_vertices, 1))
# Results should be independent of 'neighbors'.
neighbors = np.maximum(np.random.randint(
0, 2, size=(num_vertices, num_vertices)), np.eye(num_vertices))
neighbors = _dense_to_sparse(neighbors)
layer = gc_layer.DynamicGraphConvolutionKerasLayer(
num_output_channels=out_channels,
reduction="max")
output = layer(inputs=[data, neighbors], sizes=None)
output_tile = tf.tile(output[:1, :], (num_vertices, 1))
self.assertAllEqual(output, output_tile)
@parameterized.parameters("weighted", "max")
def test_dynamic_graph_convolution_keras_layer_training(self, reduction):
"""Test a simple training loop."""
if not tf.executing_eagerly():
return
# Generate a small valid input for a simple training task.
# Four corners of a square.
data = np.array(((1.0, 1.0), (-1.0, 1.0), (-1.0, -1.0), (1.0, -1.0)))
neighbors_indices = np.array(((0, 0), (0, 1), (0, 3),
(1, 0), (1, 1), (1, 2),
(2, 1), (2, 2), (2, 3),
(3, 0), (3, 2), (3, 3)))
neighbors = tf.SparseTensor(
neighbors_indices, np.ones(shape=(12,)) / 3.0, dense_shape=(4, 4))
# Desired output is arbitrary.
labels = np.reshape([-1.0, -0.5, 0.5, 1.0], (-1, 1))
num_training_iterations = 5
with tf.GradientTape(persistent=True) as tape:
layer = gc_layer.DynamicGraphConvolutionKerasLayer(
num_output_channels=2,
reduction=reduction)
output = layer(inputs=[data, neighbors], sizes=None)
loss = tf.nn.l2_loss(output - labels)
trainable_variables = layer.trainable_variables
for _ in range(num_training_iterations):
grads = tape.gradient(loss, trainable_variables)
tf.compat.v1.train.GradientDescentOptimizer(1e-4).apply_gradients(
zip(grads, trainable_variables))
if __name__ == "__main__":
test_case.main()
| -1 |
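The helpers at the top of the test file above define the input format the graph-convolution layers expect: dense per-vertex features plus a sparse neighborhood matrix. A small standalone sketch of that calling convention, built the same way _dummy_data and _dense_to_sparse build it, is shown below; the layer parameters are chosen arbitrarily and the output values depend on the layer's random initialization.

```python
import numpy as np
import tensorflow as tf
import tensorflow_graphics.nn.layer.graph_convolution as gc_layer

num_vertices, in_channels = 4, 3
data = np.random.uniform(size=(num_vertices, in_channels)).astype(np.float32)

# Self-loop-only neighborhood, as in _dummy_data: a sparse identity matrix.
dense = np.eye(num_vertices, dtype=np.float32)
indices = np.stack(np.where(dense), axis=-1)
neighbors = tf.SparseTensor(indices, dense[np.where(dense)],
                            dense_shape=dense.shape)

layer = gc_layer.FeatureSteeredConvolutionKerasLayer(
    translation_invariant=False, num_weight_matrices=2, num_output_channels=5)
output = layer(inputs=[data, neighbors], sizes=None)
print(output.shape)  # (4, 5): one 5-channel feature per vertex
```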
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/datasets/modelnet40/modelnet40_makefakes.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Generates fake data for testing."""
import os
from absl import app
from absl import flags
import h5py
import numpy as np
flags.DEFINE_string("fakes_path", ".", "path where files will be generated")
FLAGS = flags.FLAGS
def main(argv):
"""Generates files with the internal structure.
Args:
argv: the path where to generate the fake files
Reference: f = h5py.File("modelnet40_ply_hdf5_2048/ply_data_train0.h5", "r")
print(f['data']) # <HDF5 dataset "data": shape(2048, 2048, 3), type "<f4">
print(f['label']) # <HDF5 dataset "label": shape(2048, 1), type "|u1">
"""
if len(argv) != 1:
raise app.UsageError("One argument required.")
for i in range(3):
fake_points = np.random.randn(8, 2048, 3).astype(np.float32)
fake_label = np.random.uniform(low=0, high=40, size=(8, 1)).astype(np.uint8)
path = os.path.join(FLAGS.fakes_path, "ply_data_train{}.h5".format(i))
with h5py.File(path, "w") as h5f:
h5f.create_dataset("data", data=fake_points)
h5f.create_dataset("label", data=fake_label)
for i in range(2):
fake_points = np.random.randn(8, 2048, 3).astype(np.float32)
fake_label = np.random.uniform(low=0, high=40, size=(8, 1)).astype(np.uint8)
path = os.path.join(FLAGS.fakes_path, "ply_data_test{}.h5".format(i))
with h5py.File(path, "w") as h5f:
h5f.create_dataset("data", data=fake_points)
h5f.create_dataset("label", data=fake_label)
if __name__ == "__main__":
app.run(main)
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Generates fake data for testing."""
import os
from absl import app
from absl import flags
import h5py
import numpy as np
flags.DEFINE_string("fakes_path", ".", "path where files will be generated")
FLAGS = flags.FLAGS
def main(argv):
"""Generates files with the internal structure.
Args:
argv: the path where to generate the fake files
Reference: f = h5py.File("modelnet40_ply_hdf5_2048/ply_data_train0.h5", "r")
print(f['data']) # <HDF5 dataset "data": shape(2048, 2048, 3), type "<f4">
print(f['label']) # <HDF5 dataset "label": shape(2048, 1), type "|u1">
"""
if len(argv) != 1:
raise app.UsageError("One argument required.")
for i in range(3):
fake_points = np.random.randn(8, 2048, 3).astype(np.float32)
fake_label = np.random.uniform(low=0, high=40, size=(8, 1)).astype(np.uint8)
path = os.path.join(FLAGS.fakes_path, "ply_data_train{}.h5".format(i))
with h5py.File(path, "w") as h5f:
h5f.create_dataset("data", data=fake_points)
h5f.create_dataset("label", data=fake_label)
for i in range(2):
fake_points = np.random.randn(8, 2048, 3).astype(np.float32)
fake_label = np.random.uniform(low=0, high=40, size=(8, 1)).astype(np.uint8)
path = os.path.join(FLAGS.fakes_path, "ply_data_test{}.h5".format(i))
with h5py.File(path, "w") as h5f:
h5f.create_dataset("data", data=fake_points)
h5f.create_dataset("label", data=fake_label)
if __name__ == "__main__":
app.run(main)
| -1 |
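The docstring in the generator above documents the layout the fake files must reproduce (data: float32 of shape (N, 2048, 3), label: uint8 of shape (N, 1)). A quick way to check a generated file against that layout is sketched below; it assumes the default --fakes_path of "." and the file names written by the generator.

```python
import h5py

with h5py.File("ply_data_train0.h5", "r") as f:
  print(f["data"])   # <HDF5 dataset "data": shape (8, 2048, 3), type "<f4">
  print(f["label"])  # <HDF5 dataset "label": shape (8, 1), type "|u1">
```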
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/datasets/features/trimesh_feature.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Triangle mesh feature."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import six
import tensorflow.compat.v2 as tf
from tensorflow_datasets import features
from tensorflow_graphics.io import triangle_mesh
class TriangleMesh(features.FeaturesDict):
"""`FeatureConnector` for triangle meshes.
During `_generate_examples`, the feature connector accepts as input any of:
* `str`: path to a {obj,stl,ply,glb} triangle mesh.
* `trimesh.Trimesh`: A triangle mesh object.
* `trimesh.Scene`: A scene object containing multiple TriangleMesh
objects.
* `dict:` A dictionary containing the vertices and faces of the mesh (see
output format below).
Output:
A dictionary containing:
# TODO(b/156112246): Add additional attributes (vertex normals, colors,
# texture coordinates).
* 'vertices': A `float32` tensor with shape `[N, 3]` denoting the vertex
coordinates, where N is the number of vertices in the mesh.
* 'faces': An `int64` tensor with shape `[F, 3]` denoting the face vertex
indices, where F is the number of faces in the mesh.
Note: In case the input specifies a Scene (with multiple meshes), the output
will be a single TriangleMesh which combines all the triangle meshes in the
scene.
"""
def __init__(self):
super(TriangleMesh, self).__init__({
'vertices': features.Tensor(shape=(None, 3), dtype=tf.float32),
'faces': features.Tensor(shape=(None, 3), dtype=tf.uint64),
})
def encode_example(self, path_or_trianglemesh):
"""Convert the given triangle mesh into a dict convertible to tf example."""
if isinstance(path_or_trianglemesh, six.string_types):
# The parameter is a path.
with tf.io.gfile.GFile(path_or_trianglemesh, 'rb') as tmesh_file:
features_dict = self._convert_to_trimesh_feature(
triangle_mesh.load(tmesh_file))
elif hasattr(path_or_trianglemesh, 'read') and hasattr(
path_or_trianglemesh, 'name') and hasattr(path_or_trianglemesh, 'seek'):
# The parameter is a file object.
path_or_trianglemesh.seek(0) # reset
features_dict = self._convert_to_trimesh_feature(
triangle_mesh.load(path_or_trianglemesh))
elif isinstance(path_or_trianglemesh, dict):
# The parameter is already a Trimesh dictionary.
features_dict = path_or_trianglemesh
else:
# The parameter is a Trimesh or a Scene.
features_dict = self._convert_to_trimesh_feature(path_or_trianglemesh)
return super(TriangleMesh, self).encode_example(features_dict)
def _convert_to_trimesh_feature(self, obj):
if isinstance(obj, triangle_mesh.Trimesh):
vertices = np.array(obj.vertices)
faces = np.array(obj.faces, dtype=np.uint64)
elif isinstance(obj, triangle_mesh.Scene):
# Concatenate all the vertices and faces of the triangle meshes in the
# scene.
# TODO(b/156117488): Change to a different merging algorithm to avoid
# duplicated vertices.
vertices_list = [
np.array(mesh.vertices) for mesh in obj.geometry.values()
]
faces_list = np.array([
np.array(mesh.faces, dtype=np.uint64)
for mesh in obj.geometry.values()
])
faces_offset = np.cumsum(
[vertices.shape[0] for vertices in vertices_list], dtype=np.uint64)
faces_list[1:] += faces_offset[:-1]
vertices = np.concatenate(vertices_list, axis=0)
faces = np.concatenate(faces_list, axis=0)
else:
raise ValueError('obj should be either a Trimesh or a Scene')
return {
'vertices': vertices.astype(np.float32),
'faces': faces,
}
@classmethod
def from_json_content(cls, value) -> 'TriangleMesh':
return cls()
def to_json_content(self):
return {}
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Triangle mesh feature."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
import six
import tensorflow.compat.v2 as tf
from tensorflow_datasets import features
from tensorflow_graphics.io import triangle_mesh
class TriangleMesh(features.FeaturesDict):
"""`FeatureConnector` for triangle meshes.
During `_generate_examples`, the feature connector accepts as input any of:
* `str`: path to a {obj,stl,ply,glb} triangle mesh.
* `trimesh.Trimesh`: A triangle mesh object.
* `trimesh.Scene`: A scene object containing multiple TriangleMesh
objects.
* `dict:` A dictionary containing the vertices and faces of the mesh (see
output format below).
Output:
A dictionary containing:
# TODO(b/156112246): Add additional attributes (vertex normals, colors,
# texture coordinates).
* 'vertices': A `float32` tensor with shape `[N, 3]` denoting the vertex
coordinates, where N is the number of vertices in the mesh.
    * 'faces': A `uint64` tensor with shape `[F, 3]` denoting the face vertex
indices, where F is the number of faces in the mesh.
Note: In case the input specifies a Scene (with multiple meshes), the output
will be a single TriangleMesh which combines all the triangle meshes in the
scene.
"""
def __init__(self):
super(TriangleMesh, self).__init__({
'vertices': features.Tensor(shape=(None, 3), dtype=tf.float32),
'faces': features.Tensor(shape=(None, 3), dtype=tf.uint64),
})
def encode_example(self, path_or_trianglemesh):
"""Convert the given triangle mesh into a dict convertible to tf example."""
if isinstance(path_or_trianglemesh, six.string_types):
# The parameter is a path.
with tf.io.gfile.GFile(path_or_trianglemesh, 'rb') as tmesh_file:
features_dict = self._convert_to_trimesh_feature(
triangle_mesh.load(tmesh_file))
elif hasattr(path_or_trianglemesh, 'read') and hasattr(
path_or_trianglemesh, 'name') and hasattr(path_or_trianglemesh, 'seek'):
# The parameter is a file object.
path_or_trianglemesh.seek(0) # reset
features_dict = self._convert_to_trimesh_feature(
triangle_mesh.load(path_or_trianglemesh))
elif isinstance(path_or_trianglemesh, dict):
# The parameter is already a Trimesh dictionary.
features_dict = path_or_trianglemesh
else:
# The parameter is a Trimesh or a Scene.
features_dict = self._convert_to_trimesh_feature(path_or_trianglemesh)
return super(TriangleMesh, self).encode_example(features_dict)
def _convert_to_trimesh_feature(self, obj):
if isinstance(obj, triangle_mesh.Trimesh):
vertices = np.array(obj.vertices)
faces = np.array(obj.faces, dtype=np.uint64)
elif isinstance(obj, triangle_mesh.Scene):
# Concatenate all the vertices and faces of the triangle meshes in the
# scene.
# TODO(b/156117488): Change to a different merging algorithm to avoid
# duplicated vertices.
vertices_list = [
np.array(mesh.vertices) for mesh in obj.geometry.values()
]
faces_list = np.array([
np.array(mesh.faces, dtype=np.uint64)
for mesh in obj.geometry.values()
])
faces_offset = np.cumsum(
[vertices.shape[0] for vertices in vertices_list], dtype=np.uint64)
faces_list[1:] += faces_offset[:-1]
vertices = np.concatenate(vertices_list, axis=0)
faces = np.concatenate(faces_list, axis=0)
else:
raise ValueError('obj should be either a Trimesh or a Scene')
return {
'vertices': vertices.astype(np.float32),
'faces': faces,
}
@classmethod
def from_json_content(cls, value) -> 'TriangleMesh':
return cls()
def to_json_content(self):
return {}
| -1 |
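Before the next row, a short usage sketch of the `TriangleMesh` feature connector recorded above. It is not part of the dataset row; it only exercises the `dict` input path described in the class docstring, with made-up triangle data, and it assumes `tensorflow_datasets` and `tensorflow_graphics` are installed so the module can be imported by the filepath shown in this row.

# Hedged sketch: encode a hand-written single-triangle mesh with TriangleMesh.
# The vertex/face values below are illustrative, not taken from the PR.
import numpy as np

from tensorflow_graphics.datasets.features import trimesh_feature

example_mesh = {
    'vertices': np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.]],
                         dtype=np.float32),
    'faces': np.array([[0, 1, 2]], dtype=np.uint64),
}

feature = trimesh_feature.TriangleMesh()
# encode_example() accepts a path, a file object, a trimesh object/scene, or a
# dict like the one above; a dict is passed straight through to FeaturesDict.
encoded = feature.encode_example(example_mesh)
print(sorted(encoded))  # expected to contain 'faces' and 'vertices'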
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/util/shape.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Shape utility functions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import itertools
import numpy as np
import six
import tensorflow as tf
def _broadcast_shape_helper(shape_x, shape_y):
"""Helper function for is_broadcast_compatible and broadcast_shape.
Args:
shape_x: A `TensorShape`.
shape_y: A `TensorShape`.
Returns:
Returns None if the shapes are not broadcast compatible, or a list
containing the broadcasted dimensions otherwise.
"""
# To compute the broadcasted dimensions, we zip together shape_x and shape_y,
# and pad with 1 to make them the same length.
broadcasted_dims = reversed(
list(
six.moves.zip_longest(
reversed(shape_x.dims),
reversed(shape_y.dims),
fillvalue=tf.compat.v1.Dimension(1))))
# Next we combine the dimensions according to the numpy broadcasting rules.
# http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html
return_dims = []
for (dim_x, dim_y) in broadcasted_dims:
if dim_x.value is None or dim_y.value is None:
# One or both dimensions is unknown. If either dimension is greater than
# 1, we assume that the program is correct, and the other dimension will
# be broadcast to match it.
if dim_x.value is not None and dim_x.value > 1:
return_dims.append(dim_x)
elif dim_y.value is not None and dim_y.value > 1:
return_dims.append(dim_y)
else:
return_dims.append(None)
elif dim_x.value == 1:
# We will broadcast dim_x to dim_y.
return_dims.append(dim_y)
elif dim_y.value == 1:
# We will broadcast dim_y to dim_x.
return_dims.append(dim_x)
elif dim_x.value == dim_y.value:
# The dimensions are compatible, so output is the same size in that
# dimension.
return_dims.append(dim_x.merge_with(dim_y))
else:
return None
return return_dims
def is_broadcast_compatible(shape_x, shape_y):
"""Returns True if `shape_x` and `shape_y` are broadcast compatible.
Args:
shape_x: A `TensorShape`.
shape_y: A `TensorShape`.
Returns:
True if a shape exists that both `shape_x` and `shape_y` can be broadcasted
to. False otherwise.
"""
if shape_x.ndims is None or shape_y.ndims is None:
return False
return _broadcast_shape_helper(shape_x, shape_y) is not None
def get_broadcasted_shape(shape_x, shape_y):
"""Returns the common shape for broadcast compatible shapes.
Args:
shape_x: A `TensorShape`.
shape_y: A `TensorShape`.
Returns:
Returns None if the shapes are not broadcast compatible, or a list
containing the broadcasted dimensions otherwise.
"""
if shape_x.ndims is None or shape_y.ndims is None:
return None
return _broadcast_shape_helper(shape_x, shape_y)
def _check_type(variable, variable_name, expected_type):
"""Helper function for checking that inputs are of expected types."""
if isinstance(expected_type, (list, tuple)):
expected_type_name = 'list or tuple'
else:
expected_type_name = expected_type.__name__
if not isinstance(variable, expected_type):
raise ValueError('{} must be of type {}, but it is {}'.format(
variable_name, expected_type_name,
type(variable).__name__))
def _fix_axis_dim_pairs(pairs, name):
"""Helper function to make `pairs` a list if needed."""
if isinstance(pairs[0], int):
pairs = [pairs]
for pair in pairs:
if len(pair) != 2:
raise ValueError(
'{} must consist of axis-value pairs, but found {}'.format(
name, pair))
return pairs
def _get_dim(tensor, axis):
"""Returns dimensionality of a tensor for a given axis."""
return tf.compat.v1.dimension_value(tensor.shape[axis])
def check_static(tensor,
has_rank=None,
has_rank_greater_than=None,
has_rank_less_than=None,
has_dim_equals=None,
has_dim_greater_than=None,
has_dim_less_than=None,
tensor_name='tensor'):
"""Checks static shapes for rank and dimension constraints.
This function can be used to check a tensor's shape for multiple rank and
dimension constraints at the same time.
Args:
tensor: Any tensor with a static shape.
has_rank: An int or `None`. If not `None`, the function checks if the rank
of the `tensor` equals to `has_rank`.
has_rank_greater_than: An int or `None`. If not `None`, the function checks
if the rank of the `tensor` is greater than `has_rank_greater_than`.
has_rank_less_than: An int or `None`. If not `None`, the function checks if
the rank of the `tensor` is less than `has_rank_less_than`.
has_dim_equals: Either a tuple or list containing a single pair of `int`s,
or a list or tuple containing multiple such pairs. Each pair is in the
form (`axis`, `dim`), which means the function should check if
`tensor.shape[axis] == dim`.
has_dim_greater_than: Either a tuple or list containing a single pair of
`int`s, or a list or tuple containing multiple such pairs. Each pair is in
the form (`axis`, `dim`), which means the function should check if
`tensor.shape[axis] > dim`.
has_dim_less_than: Either a tuple or list containing a single pair of
`int`s, or a list or tuple containing multiple such pairs. Each pair is in
the form (`axis`, `dim`), which means the function should check if
`tensor.shape[axis] < dim`.
tensor_name: A name for `tensor` to be used in the error message if one is
thrown.
Raises:
ValueError: If any input is not of the expected types, or if one of the
checks described above fails.
"""
rank = tensor.shape.ndims
def _raise_value_error_for_rank(variable, error_msg):
raise ValueError(
'{} must have a rank {} {}, but it has rank {} and shape {}'.format(
tensor_name, error_msg, variable, rank, tensor.shape.as_list()))
def _raise_value_error_for_dim(tensor_name, error_msg, axis, value):
raise ValueError(
'{} must have {} {} dimensions in axis {}, but it has shape {}'.format(
tensor_name, error_msg, value, axis, tensor.shape.as_list()))
if has_rank is not None:
_check_type(has_rank, 'has_rank', int)
if rank != has_rank:
_raise_value_error_for_rank(has_rank, 'of')
if has_rank_greater_than is not None:
_check_type(has_rank_greater_than, 'has_rank_greater_than', int)
if rank <= has_rank_greater_than:
_raise_value_error_for_rank(has_rank_greater_than, 'greater than')
if has_rank_less_than is not None:
_check_type(has_rank_less_than, 'has_rank_less_than', int)
if rank >= has_rank_less_than:
_raise_value_error_for_rank(has_rank_less_than, 'less than')
if has_dim_equals is not None:
_check_type(has_dim_equals, 'has_dim_equals', (list, tuple))
has_dim_equals = _fix_axis_dim_pairs(has_dim_equals, 'has_dim_equals')
for axis, value in has_dim_equals:
if _get_dim(tensor, axis) != value:
_raise_value_error_for_dim(tensor_name, 'exactly', axis, value)
if has_dim_greater_than is not None:
_check_type(has_dim_greater_than, 'has_dim_greater_than', (list, tuple))
has_dim_greater_than = _fix_axis_dim_pairs(has_dim_greater_than,
'has_dim_greater_than')
for axis, value in has_dim_greater_than:
if not _get_dim(tensor, axis) > value:
_raise_value_error_for_dim(tensor_name, 'greater than', axis, value)
if has_dim_less_than is not None:
_check_type(has_dim_less_than, 'has_dim_less_than', (list, tuple))
has_dim_less_than = _fix_axis_dim_pairs(has_dim_less_than,
'has_dim_less_than')
for axis, value in has_dim_less_than:
if not _get_dim(tensor, axis) < value:
_raise_value_error_for_dim(tensor_name, 'less than', axis, value)
def _check_tensors(tensors, tensors_name):
"""Helper function to check the type and length of tensors."""
_check_type(tensors, tensors_name, (list, tuple))
if len(tensors) < 2:
raise ValueError('At least 2 tensors are required.')
def _check_tensor_axis_lists(tensors, tensors_name, axes, axes_name):
"""Helper function to check that lengths of `tensors` and `axes` match."""
_check_type(axes, axes_name, (list, tuple))
if len(tensors) != len(axes):
raise ValueError(
'{} and {} must have the same length, but are {} and {}.'.format(
tensors_name, axes_name, len(tensors), len(axes)))
def _fix_axes(tensors, axes, allow_negative):
"""Makes all axes positive and checks for out of bound errors."""
axes = [
axis + tensor.shape.ndims if axis < 0 else axis
for tensor, axis in zip(tensors, axes)
]
if not all(
((allow_negative or
(not allow_negative and axis >= 0)) and axis < tensor.shape.ndims)
for tensor, axis in zip(tensors, axes)):
rank_axis_pairs = zip([tensor.shape.ndims for tensor in tensors], axes)
raise ValueError(
'Some axes are out of bounds. Given rank-axes pairs: {}'.format(
[pair for pair in rank_axis_pairs]))
return axes
def _give_default_names(list_of_objects, name):
"""Helper function to give default names to objects for error messages."""
return [name + '_' + str(index) for index in range(len(list_of_objects))]
def _all_are_equal(list_of_objects):
"""Helper function to check if all the items in a list are the same."""
if not list_of_objects:
return True
if isinstance(list_of_objects[0], list):
list_of_objects = [tuple(obj) for obj in list_of_objects]
return len(set(list_of_objects)) == 1
def _raise_error(tensor_names, batch_shapes):
formatted_list = [(name, batch_shape)
for name, batch_shape in zip(tensor_names, batch_shapes)]
raise ValueError(
'Not all batch dimensions are identical: {}'.format(formatted_list))
def compare_batch_dimensions(tensors,
last_axes,
broadcast_compatible,
initial_axes=0,
tensor_names=None):
"""Compares batch dimensions for tensors with static shapes.
Args:
tensors: A list or tuple of tensors with static shapes to compare.
last_axes: An `int` or a list or tuple of `int`s with the same length as
`tensors`. If an `int`, it is assumed to be the same for all the tensors.
Each entry should correspond to the last axis of the batch (with zero
based indices). For instance, if there is only a single batch dimension,
last axis should be `0`.
broadcast_compatible: A 'bool', whether the batch shapes can be broadcast
compatible in the numpy sense.
initial_axes: An `int` or a list or tuple of `int`s with the same length as
`tensors`. If an `int`, it is assumed to be the same for all the tensors.
Each entry should correspond to the first axis of the batch (with zero
based indices). Default value is `0`.
tensor_names: Names of `tensors` to be used in the error message if one is
thrown. If left as `None`, `tensor_i` is used.
Raises:
ValueError: If inputs have unexpected types, or if given axes are out of
bounds, or if the check fails.
"""
_check_tensors(tensors, 'tensors')
if isinstance(initial_axes, int):
initial_axes = [initial_axes] * len(tensors)
if isinstance(last_axes, int):
last_axes = [last_axes] * len(tensors)
_check_tensor_axis_lists(tensors, 'tensors', initial_axes, 'initial_axes')
_check_tensor_axis_lists(tensors, 'tensors', last_axes, 'last_axes')
initial_axes = _fix_axes(tensors, initial_axes, allow_negative=True)
last_axes = _fix_axes(tensors, last_axes, allow_negative=True)
batch_shapes = [
tensor.shape[init:last + 1]
for tensor, init, last in zip(tensors, initial_axes, last_axes)
]
if tensor_names is None:
tensor_names = _give_default_names(tensors, 'tensor')
if not broadcast_compatible:
batch_ndims = [batch_shape.ndims for batch_shape in batch_shapes]
batch_shapes = [batch_shape.as_list() for batch_shape in batch_shapes]
if not _all_are_equal(batch_ndims):
# If not all batch shapes have the same length, they cannot be identical.
_raise_error(tensor_names, batch_shapes)
for dims in zip(*batch_shapes):
if _all_are_equal(dims):
# Continue if all dimensions are None or have the same value.
continue
if None not in dims:
# If all dimensions are known at this point, they are not identical.
_raise_error(tensor_names, batch_shapes)
# At this point dims must consist of both None's and int's.
if len(set(dims)) != 2:
# set(dims) should return (None, some_int).
# Otherwise shapes are not identical.
_raise_error(tensor_names, batch_shapes)
else:
if not all(
is_broadcast_compatible(shape1, shape2)
for shape1, shape2 in itertools.combinations(batch_shapes, 2)):
raise ValueError(
'Not all batch dimensions are broadcast-compatible: {}'.format([
(name, batch_shape.as_list())
for name, batch_shape in zip(tensor_names, batch_shapes)
]))
def compare_dimensions(tensors, axes, tensor_names=None):
"""Compares dimensions of tensors with static or dynamic shapes.
Args:
tensors: A list or tuple of tensors to compare.
axes: An `int` or a list or tuple of `int`s with the same length as
`tensors`. If an `int`, it is assumed to be the same for all the tensors.
Each entry should correspond to the axis of the tensor being compared.
tensor_names: Names of `tensors` to be used in the error message if one is
thrown. If left as `None`, their `Tensor.name` fields are used instead.
Raises:
ValueError: If inputs have unexpected types, or if given axes are out of
bounds, or if the check fails.
"""
_check_tensors(tensors, 'tensors')
if isinstance(axes, int):
axes = [axes] * len(tensors)
_check_tensor_axis_lists(tensors, 'tensors', axes, 'axes')
axes = _fix_axes(tensors, axes, allow_negative=False)
if tensor_names is None:
tensor_names = _give_default_names(tensors, 'tensor')
dimensions = [_get_dim(tensor, axis) for tensor, axis in zip(tensors, axes)]
if not _all_are_equal(dimensions):
raise ValueError('Tensors {} must have the same number of dimensions in '
'axes {}, but they are {}.'.format(
list(tensor_names), list(axes), list(dimensions)))
def is_static(tensor_shape):
"""Checks if the given tensor shape is static."""
if isinstance(tensor_shape, (list, tuple)):
return None not in tensor_shape
else:
return None not in tensor_shape.as_list()
def add_batch_dimensions(tensor, tensor_name, batch_shape, last_axis=None):
"""Broadcasts tensor to match batch dimensions.
  It will either broadcast the tensor to all provided batch dimensions, thereby
  increasing the tensor shape by len(batch_shape) dimensions, or do nothing if
  the batch dimensions are already present and equal to the expected batch
  dimensions.
  Args:
    tensor: A tensor to broadcast of a shape [A1, ..., An, B1, ..., Bn], where
      [A1, ..., An] are batch dimensions (it is allowed to have no batch
      dimensions), and [B1, ..., Bn] are other tensor dimensions. If
      [A1, ..., An] are present but different from the values in `batch_shape`,
      an error will be thrown.
    tensor_name: Name of `tensor` to be used in the error message if one is
      thrown.
    batch_shape: A list of `int`s representing the desired batch dimensions.
    last_axis: An `int` corresponding to the last axis of the batch (with zero
      based indices). For instance, if there is only a single batch dimension,
      the last axis should be `0`. If there are no batch dimensions, it must be
      set to `None`.
Returns:
Tensor of a shape `batch_shape` + [B1, ..., Bn] or unmodified tensor if
`batch_shape` = [A1, ..., An].
Raises:
    ValueError: If the tensor already has batch dimensions that are different
      from the desired ones.
"""
if last_axis is not None:
last_axis = _fix_axes([tensor], [last_axis], allow_negative=True)[0]
tensor_batch_shape = tensor.shape.as_list()[:last_axis + 1]
if np.array_equal(tensor_batch_shape, batch_shape):
return tensor
elif tensor_batch_shape:
raise ValueError(
'Tensor {} has batch dimensions different from target '
'one. Found {}, but expected no batch dimensions or {}'.format(
tensor_name, tensor.shape[:last_axis + 1], batch_shape))
return tf.broadcast_to(tensor, batch_shape + list(tensor.shape))
# The util functions or classes are not exported.
__all__ = []
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Shape utility functions."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import itertools
import numpy as np
import six
import tensorflow as tf
def _broadcast_shape_helper(shape_x, shape_y):
"""Helper function for is_broadcast_compatible and broadcast_shape.
Args:
shape_x: A `TensorShape`.
shape_y: A `TensorShape`.
Returns:
Returns None if the shapes are not broadcast compatible, or a list
containing the broadcasted dimensions otherwise.
"""
# To compute the broadcasted dimensions, we zip together shape_x and shape_y,
# and pad with 1 to make them the same length.
broadcasted_dims = reversed(
list(
six.moves.zip_longest(
reversed(shape_x.dims),
reversed(shape_y.dims),
fillvalue=tf.compat.v1.Dimension(1))))
# Next we combine the dimensions according to the numpy broadcasting rules.
# http://docs.scipy.org/doc/numpy/user/basics.broadcasting.html
return_dims = []
for (dim_x, dim_y) in broadcasted_dims:
if dim_x.value is None or dim_y.value is None:
# One or both dimensions is unknown. If either dimension is greater than
# 1, we assume that the program is correct, and the other dimension will
# be broadcast to match it.
if dim_x.value is not None and dim_x.value > 1:
return_dims.append(dim_x)
elif dim_y.value is not None and dim_y.value > 1:
return_dims.append(dim_y)
else:
return_dims.append(None)
elif dim_x.value == 1:
# We will broadcast dim_x to dim_y.
return_dims.append(dim_y)
elif dim_y.value == 1:
# We will broadcast dim_y to dim_x.
return_dims.append(dim_x)
elif dim_x.value == dim_y.value:
# The dimensions are compatible, so output is the same size in that
# dimension.
return_dims.append(dim_x.merge_with(dim_y))
else:
return None
return return_dims
def is_broadcast_compatible(shape_x, shape_y):
"""Returns True if `shape_x` and `shape_y` are broadcast compatible.
Args:
shape_x: A `TensorShape`.
shape_y: A `TensorShape`.
Returns:
True if a shape exists that both `shape_x` and `shape_y` can be broadcasted
to. False otherwise.
"""
if shape_x.ndims is None or shape_y.ndims is None:
return False
return _broadcast_shape_helper(shape_x, shape_y) is not None
def get_broadcasted_shape(shape_x, shape_y):
"""Returns the common shape for broadcast compatible shapes.
Args:
shape_x: A `TensorShape`.
shape_y: A `TensorShape`.
Returns:
Returns None if the shapes are not broadcast compatible, or a list
containing the broadcasted dimensions otherwise.
"""
if shape_x.ndims is None or shape_y.ndims is None:
return None
return _broadcast_shape_helper(shape_x, shape_y)
def _check_type(variable, variable_name, expected_type):
"""Helper function for checking that inputs are of expected types."""
if isinstance(expected_type, (list, tuple)):
expected_type_name = 'list or tuple'
else:
expected_type_name = expected_type.__name__
if not isinstance(variable, expected_type):
raise ValueError('{} must be of type {}, but it is {}'.format(
variable_name, expected_type_name,
type(variable).__name__))
def _fix_axis_dim_pairs(pairs, name):
"""Helper function to make `pairs` a list if needed."""
if isinstance(pairs[0], int):
pairs = [pairs]
for pair in pairs:
if len(pair) != 2:
raise ValueError(
'{} must consist of axis-value pairs, but found {}'.format(
name, pair))
return pairs
def _get_dim(tensor, axis):
"""Returns dimensionality of a tensor for a given axis."""
return tf.compat.v1.dimension_value(tensor.shape[axis])
def check_static(tensor,
has_rank=None,
has_rank_greater_than=None,
has_rank_less_than=None,
has_dim_equals=None,
has_dim_greater_than=None,
has_dim_less_than=None,
tensor_name='tensor'):
"""Checks static shapes for rank and dimension constraints.
This function can be used to check a tensor's shape for multiple rank and
dimension constraints at the same time.
Args:
tensor: Any tensor with a static shape.
has_rank: An int or `None`. If not `None`, the function checks if the rank
of the `tensor` equals to `has_rank`.
has_rank_greater_than: An int or `None`. If not `None`, the function checks
if the rank of the `tensor` is greater than `has_rank_greater_than`.
has_rank_less_than: An int or `None`. If not `None`, the function checks if
the rank of the `tensor` is less than `has_rank_less_than`.
has_dim_equals: Either a tuple or list containing a single pair of `int`s,
or a list or tuple containing multiple such pairs. Each pair is in the
form (`axis`, `dim`), which means the function should check if
`tensor.shape[axis] == dim`.
has_dim_greater_than: Either a tuple or list containing a single pair of
`int`s, or a list or tuple containing multiple such pairs. Each pair is in
the form (`axis`, `dim`), which means the function should check if
`tensor.shape[axis] > dim`.
has_dim_less_than: Either a tuple or list containing a single pair of
`int`s, or a list or tuple containing multiple such pairs. Each pair is in
the form (`axis`, `dim`), which means the function should check if
`tensor.shape[axis] < dim`.
tensor_name: A name for `tensor` to be used in the error message if one is
thrown.
Raises:
ValueError: If any input is not of the expected types, or if one of the
checks described above fails.
"""
rank = tensor.shape.ndims
def _raise_value_error_for_rank(variable, error_msg):
raise ValueError(
'{} must have a rank {} {}, but it has rank {} and shape {}'.format(
tensor_name, error_msg, variable, rank, tensor.shape.as_list()))
def _raise_value_error_for_dim(tensor_name, error_msg, axis, value):
raise ValueError(
'{} must have {} {} dimensions in axis {}, but it has shape {}'.format(
tensor_name, error_msg, value, axis, tensor.shape.as_list()))
if has_rank is not None:
_check_type(has_rank, 'has_rank', int)
if rank != has_rank:
_raise_value_error_for_rank(has_rank, 'of')
if has_rank_greater_than is not None:
_check_type(has_rank_greater_than, 'has_rank_greater_than', int)
if rank <= has_rank_greater_than:
_raise_value_error_for_rank(has_rank_greater_than, 'greater than')
if has_rank_less_than is not None:
_check_type(has_rank_less_than, 'has_rank_less_than', int)
if rank >= has_rank_less_than:
_raise_value_error_for_rank(has_rank_less_than, 'less than')
if has_dim_equals is not None:
_check_type(has_dim_equals, 'has_dim_equals', (list, tuple))
has_dim_equals = _fix_axis_dim_pairs(has_dim_equals, 'has_dim_equals')
for axis, value in has_dim_equals:
if _get_dim(tensor, axis) != value:
_raise_value_error_for_dim(tensor_name, 'exactly', axis, value)
if has_dim_greater_than is not None:
_check_type(has_dim_greater_than, 'has_dim_greater_than', (list, tuple))
has_dim_greater_than = _fix_axis_dim_pairs(has_dim_greater_than,
'has_dim_greater_than')
for axis, value in has_dim_greater_than:
if not _get_dim(tensor, axis) > value:
_raise_value_error_for_dim(tensor_name, 'greater than', axis, value)
if has_dim_less_than is not None:
_check_type(has_dim_less_than, 'has_dim_less_than', (list, tuple))
has_dim_less_than = _fix_axis_dim_pairs(has_dim_less_than,
'has_dim_less_than')
for axis, value in has_dim_less_than:
if not _get_dim(tensor, axis) < value:
_raise_value_error_for_dim(tensor_name, 'less than', axis, value)
def _check_tensors(tensors, tensors_name):
"""Helper function to check the type and length of tensors."""
_check_type(tensors, tensors_name, (list, tuple))
if len(tensors) < 2:
raise ValueError('At least 2 tensors are required.')
def _check_tensor_axis_lists(tensors, tensors_name, axes, axes_name):
"""Helper function to check that lengths of `tensors` and `axes` match."""
_check_type(axes, axes_name, (list, tuple))
if len(tensors) != len(axes):
raise ValueError(
'{} and {} must have the same length, but are {} and {}.'.format(
tensors_name, axes_name, len(tensors), len(axes)))
def _fix_axes(tensors, axes, allow_negative):
"""Makes all axes positive and checks for out of bound errors."""
axes = [
axis + tensor.shape.ndims if axis < 0 else axis
for tensor, axis in zip(tensors, axes)
]
if not all(
((allow_negative or
(not allow_negative and axis >= 0)) and axis < tensor.shape.ndims)
for tensor, axis in zip(tensors, axes)):
rank_axis_pairs = zip([tensor.shape.ndims for tensor in tensors], axes)
raise ValueError(
'Some axes are out of bounds. Given rank-axes pairs: {}'.format(
[pair for pair in rank_axis_pairs]))
return axes
def _give_default_names(list_of_objects, name):
"""Helper function to give default names to objects for error messages."""
return [name + '_' + str(index) for index in range(len(list_of_objects))]
def _all_are_equal(list_of_objects):
"""Helper function to check if all the items in a list are the same."""
if not list_of_objects:
return True
if isinstance(list_of_objects[0], list):
list_of_objects = [tuple(obj) for obj in list_of_objects]
return len(set(list_of_objects)) == 1
def _raise_error(tensor_names, batch_shapes):
formatted_list = [(name, batch_shape)
for name, batch_shape in zip(tensor_names, batch_shapes)]
raise ValueError(
'Not all batch dimensions are identical: {}'.format(formatted_list))
def compare_batch_dimensions(tensors,
last_axes,
broadcast_compatible,
initial_axes=0,
tensor_names=None):
"""Compares batch dimensions for tensors with static shapes.
Args:
tensors: A list or tuple of tensors with static shapes to compare.
last_axes: An `int` or a list or tuple of `int`s with the same length as
`tensors`. If an `int`, it is assumed to be the same for all the tensors.
Each entry should correspond to the last axis of the batch (with zero
based indices). For instance, if there is only a single batch dimension,
last axis should be `0`.
broadcast_compatible: A 'bool', whether the batch shapes can be broadcast
compatible in the numpy sense.
initial_axes: An `int` or a list or tuple of `int`s with the same length as
`tensors`. If an `int`, it is assumed to be the same for all the tensors.
Each entry should correspond to the first axis of the batch (with zero
based indices). Default value is `0`.
tensor_names: Names of `tensors` to be used in the error message if one is
thrown. If left as `None`, `tensor_i` is used.
Raises:
ValueError: If inputs have unexpected types, or if given axes are out of
bounds, or if the check fails.
"""
_check_tensors(tensors, 'tensors')
if isinstance(initial_axes, int):
initial_axes = [initial_axes] * len(tensors)
if isinstance(last_axes, int):
last_axes = [last_axes] * len(tensors)
_check_tensor_axis_lists(tensors, 'tensors', initial_axes, 'initial_axes')
_check_tensor_axis_lists(tensors, 'tensors', last_axes, 'last_axes')
initial_axes = _fix_axes(tensors, initial_axes, allow_negative=True)
last_axes = _fix_axes(tensors, last_axes, allow_negative=True)
batch_shapes = [
tensor.shape[init:last + 1]
for tensor, init, last in zip(tensors, initial_axes, last_axes)
]
if tensor_names is None:
tensor_names = _give_default_names(tensors, 'tensor')
if not broadcast_compatible:
batch_ndims = [batch_shape.ndims for batch_shape in batch_shapes]
batch_shapes = [batch_shape.as_list() for batch_shape in batch_shapes]
if not _all_are_equal(batch_ndims):
# If not all batch shapes have the same length, they cannot be identical.
_raise_error(tensor_names, batch_shapes)
for dims in zip(*batch_shapes):
if _all_are_equal(dims):
# Continue if all dimensions are None or have the same value.
continue
if None not in dims:
# If all dimensions are known at this point, they are not identical.
_raise_error(tensor_names, batch_shapes)
# At this point dims must consist of both None's and int's.
if len(set(dims)) != 2:
# set(dims) should return (None, some_int).
# Otherwise shapes are not identical.
_raise_error(tensor_names, batch_shapes)
else:
if not all(
is_broadcast_compatible(shape1, shape2)
for shape1, shape2 in itertools.combinations(batch_shapes, 2)):
raise ValueError(
'Not all batch dimensions are broadcast-compatible: {}'.format([
(name, batch_shape.as_list())
for name, batch_shape in zip(tensor_names, batch_shapes)
]))
def compare_dimensions(tensors, axes, tensor_names=None):
"""Compares dimensions of tensors with static or dynamic shapes.
Args:
tensors: A list or tuple of tensors to compare.
axes: An `int` or a list or tuple of `int`s with the same length as
`tensors`. If an `int`, it is assumed to be the same for all the tensors.
Each entry should correspond to the axis of the tensor being compared.
tensor_names: Names of `tensors` to be used in the error message if one is
thrown. If left as `None`, their `Tensor.name` fields are used instead.
Raises:
ValueError: If inputs have unexpected types, or if given axes are out of
bounds, or if the check fails.
"""
_check_tensors(tensors, 'tensors')
if isinstance(axes, int):
axes = [axes] * len(tensors)
_check_tensor_axis_lists(tensors, 'tensors', axes, 'axes')
axes = _fix_axes(tensors, axes, allow_negative=False)
if tensor_names is None:
tensor_names = _give_default_names(tensors, 'tensor')
dimensions = [_get_dim(tensor, axis) for tensor, axis in zip(tensors, axes)]
if not _all_are_equal(dimensions):
raise ValueError('Tensors {} must have the same number of dimensions in '
'axes {}, but they are {}.'.format(
list(tensor_names), list(axes), list(dimensions)))
def is_static(tensor_shape):
"""Checks if the given tensor shape is static."""
if isinstance(tensor_shape, (list, tuple)):
return None not in tensor_shape
else:
return None not in tensor_shape.as_list()
def add_batch_dimensions(tensor, tensor_name, batch_shape, last_axis=None):
"""Broadcasts tensor to match batch dimensions.
  It will either broadcast the tensor to all provided batch dimensions, thereby
  increasing the tensor shape by len(batch_shape) dimensions, or do nothing if
  the batch dimensions are already present and equal to the expected batch
  dimensions.
  Args:
    tensor: A tensor to broadcast of a shape [A1, ..., An, B1, ..., Bn], where
      [A1, ..., An] are batch dimensions (it is allowed to have no batch
      dimensions), and [B1, ..., Bn] are other tensor dimensions. If
      [A1, ..., An] are present but different from the values in `batch_shape`,
      an error will be thrown.
    tensor_name: Name of `tensor` to be used in the error message if one is
      thrown.
    batch_shape: A list of `int`s representing the desired batch dimensions.
    last_axis: An `int` corresponding to the last axis of the batch (with zero
      based indices). For instance, if there is only a single batch dimension,
      the last axis should be `0`. If there are no batch dimensions, it must be
      set to `None`.
Returns:
Tensor of a shape `batch_shape` + [B1, ..., Bn] or unmodified tensor if
`batch_shape` = [A1, ..., An].
Raises:
    ValueError: If the tensor already has batch dimensions that are different
      from the desired ones.
"""
if last_axis is not None:
last_axis = _fix_axes([tensor], [last_axis], allow_negative=True)[0]
tensor_batch_shape = tensor.shape.as_list()[:last_axis + 1]
if np.array_equal(tensor_batch_shape, batch_shape):
return tensor
elif tensor_batch_shape:
raise ValueError(
'Tensor {} has batch dimensions different from target '
'one. Found {}, but expected no batch dimensions or {}'.format(
tensor_name, tensor.shape[:last_axis + 1], batch_shape))
return tf.broadcast_to(tensor, batch_shape + list(tensor.shape))
# The util functions or classes are not exported.
__all__ = []
| -1 |
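A brief sketch of how the shape utilities in the file above are typically called; the toy tensors here are illustrative and are not drawn from the PR itself.

# Sketch: validate ranks, per-axis dimensions and batch dimensions with shape.py.
import tensorflow as tf

from tensorflow_graphics.util import shape

points = tf.zeros((4, 10, 3))   # 4 batches of 10 three-dimensional points
normals = tf.zeros((4, 10, 3))

# Static rank and last-axis checks on a single tensor.
shape.check_static(points, has_rank=3, has_dim_equals=(-1, 3),
                   tensor_name='points')

# Batch dimension (axis 0) must be identical across both tensors.
shape.compare_batch_dimensions(
    tensors=(points, normals), last_axes=0, broadcast_compatible=False)

# The point count (axis 1) must also agree.
shape.compare_dimensions(tensors=(points, normals), axes=1)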
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/geometry/representation/mesh/tests/mesh_test_utils.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Helper routines for mesh unit tests.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
def create_single_triangle_mesh():
r"""Creates a single-triangle mesh, in the z=0 plane and facing +z.
(0,1) 2
|\
| \
| \
(0,0) 0---1 (1,0)
Returns:
vertices: A [3, 3] float array
faces: A [1, 3] int array
"""
vertices = np.array(
((0, 0, 0), (1, 0, 0), (0, 1, 0)), dtype=np.float32)
faces = np.array(((0, 1, 2),), dtype=np.int32)
return vertices, faces
def create_square_triangle_mesh():
r"""Creates a square mesh, in the z=0 planse and facing +z.
# (0,1) 2---3 (1,1)
# |\ /|
# | 4 |
# |/ \|
# (0,0) 0---1 (1,0)
Returns:
vertices: A [5, 3] float array
faces: A [4, 3] int array
"""
vertices = np.array(
((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0.5, 0.5, 0)),
dtype=np.float32)
faces = np.array(
((0, 1, 4), (1, 3, 4), (3, 2, 4), (2, 0, 4)), dtype=np.int32)
return vertices, faces
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Helper routines for mesh unit tests.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import numpy as np
def create_single_triangle_mesh():
r"""Creates a single-triangle mesh, in the z=0 plane and facing +z.
(0,1) 2
|\
| \
| \
(0,0) 0---1 (1,0)
Returns:
vertices: A [3, 3] float array
faces: A [1, 3] int array
"""
vertices = np.array(
((0, 0, 0), (1, 0, 0), (0, 1, 0)), dtype=np.float32)
faces = np.array(((0, 1, 2),), dtype=np.int32)
return vertices, faces
def create_square_triangle_mesh():
r"""Creates a square mesh, in the z=0 planse and facing +z.
# (0,1) 2---3 (1,1)
# |\ /|
# | 4 |
# |/ \|
# (0,0) 0---1 (1,0)
Returns:
vertices: A [5, 3] float array
faces: A [4, 3] int array
"""
vertices = np.array(
((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0.5, 0.5, 0)),
dtype=np.float32)
faces = np.array(
((0, 1, 4), (1, 3, 4), (3, 2, 4), (2, 0, 4)), dtype=np.int32)
return vertices, faces
| -1 |
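A quick sketch using the two helpers above; the shape assertions simply restate what their docstrings promise, and the import path mirrors the filepath recorded in this row.

# Sketch: build the toy meshes and confirm the documented shapes.
import numpy as np

from tensorflow_graphics.geometry.representation.mesh.tests import mesh_test_utils

tri_vertices, tri_faces = mesh_test_utils.create_single_triangle_mesh()
assert tri_vertices.shape == (3, 3) and tri_faces.shape == (1, 3)

sq_vertices, sq_faces = mesh_test_utils.create_square_triangle_mesh()
assert sq_vertices.shape == (5, 3) and sq_faces.shape == (4, 3)

# Both meshes live in the z=0 plane, as the docstrings state.
assert np.all(tri_vertices[:, 2] == 0.0) and np.all(sq_vertices[:, 2] == 0.0)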
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/image/color_space/tests/srgb_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for srgb."""
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
from tensorflow_graphics.image.color_space import linear_rgb
from tensorflow_graphics.image.color_space import srgb
from tensorflow_graphics.util import test_case
class SrgbTest(test_case.TestCase):
def test_cycle_linear_rgb_srgb_linear_rgb_for_random_input(self):
"""Tests loop from linear RGB to sRGB and back for random inputs."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
linear_input = np.random.uniform(size=tensor_shape + [3])
srgb_output = srgb.from_linear_rgb(linear_input)
linear_reverse = linear_rgb.from_srgb(srgb_output)
self.assertAllClose(linear_input, linear_reverse)
@parameterized.parameters(
(((0., 0.5, 1.), (0.00312, 0.0031308, 0.00314)),
((0., 0.735357, 1.), (0.04031, 0.04045, 0.040567))),)
def test_from_linear_rgb_preset(self, test_inputs, test_outputs):
"""Tests conversion from linear to sRGB color space for preset inputs."""
self.assert_output_is_correct(srgb.from_linear_rgb, (test_inputs,),
(test_outputs,))
def test_from_linear_rgb_jacobian_random(self):
"""Tests the Jacobian of the from_linear_rgb function for random inputs."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
linear_random_init = np.random.uniform(size=tensor_shape + [3])
self.assert_jacobian_is_correct_fn(srgb.from_linear_rgb,
[linear_random_init])
@parameterized.parameters((np.array((0., 0.001, 0.002)),), (np.array(
(0.004, 0.005, 1.)),), (np.array((0.00312, 0.004, 0.00314)),))
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_from_linear_rgb_jacobian_preset(self, inputs_init):
"""Tests the Jacobian of the from_linear_rgb function for preset inputs."""
self.assert_jacobian_is_correct_fn(srgb.from_linear_rgb, [inputs_init])
@parameterized.parameters(
((3,),),
((None, None, None, 3),),
)
def test_from_linear_rgb_exception_not_raised(self, *shape):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(srgb.from_linear_rgb, shape)
@parameterized.parameters(
("must have a rank greater than 0", ()),
("must have exactly 3 dimensions in axis -1", (2, 3, 4)),
)
def test_from_linear_rgb_exception_raised(self, error_msg, *shape):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(srgb.from_linear_rgb, error_msg, shape)
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for srgb."""
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
from tensorflow_graphics.image.color_space import linear_rgb
from tensorflow_graphics.image.color_space import srgb
from tensorflow_graphics.util import test_case
class SrgbTest(test_case.TestCase):
def test_cycle_linear_rgb_srgb_linear_rgb_for_random_input(self):
"""Tests loop from linear RGB to sRGB and back for random inputs."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
linear_input = np.random.uniform(size=tensor_shape + [3])
srgb_output = srgb.from_linear_rgb(linear_input)
linear_reverse = linear_rgb.from_srgb(srgb_output)
self.assertAllClose(linear_input, linear_reverse)
@parameterized.parameters(
(((0., 0.5, 1.), (0.00312, 0.0031308, 0.00314)),
((0., 0.735357, 1.), (0.04031, 0.04045, 0.040567))),)
def test_from_linear_rgb_preset(self, test_inputs, test_outputs):
"""Tests conversion from linear to sRGB color space for preset inputs."""
self.assert_output_is_correct(srgb.from_linear_rgb, (test_inputs,),
(test_outputs,))
def test_from_linear_rgb_jacobian_random(self):
"""Tests the Jacobian of the from_linear_rgb function for random inputs."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
linear_random_init = np.random.uniform(size=tensor_shape + [3])
self.assert_jacobian_is_correct_fn(srgb.from_linear_rgb,
[linear_random_init])
@parameterized.parameters((np.array((0., 0.001, 0.002)),), (np.array(
(0.004, 0.005, 1.)),), (np.array((0.00312, 0.004, 0.00314)),))
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_from_linear_rgb_jacobian_preset(self, inputs_init):
"""Tests the Jacobian of the from_linear_rgb function for preset inputs."""
self.assert_jacobian_is_correct_fn(srgb.from_linear_rgb, [inputs_init])
@parameterized.parameters(
((3,),),
((None, None, None, 3),),
)
def test_from_linear_rgb_exception_not_raised(self, *shape):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(srgb.from_linear_rgb, shape)
@parameterized.parameters(
("must have a rank greater than 0", ()),
("must have exactly 3 dimensions in axis -1", (2, 3, 4)),
)
def test_from_linear_rgb_exception_raised(self, error_msg, *shape):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(srgb.from_linear_rgb, error_msg, shape)
if __name__ == "__main__":
test_case.main()
| -1 |
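The round trip that the cycle test above checks can be reproduced in a few lines; the preset pixel values are borrowed from the parameterized test, and running this assumes TensorFlow 2's default eager mode so the result can be converted to NumPy directly.

# Sketch: linear RGB -> sRGB -> linear RGB round trip.
import numpy as np

from tensorflow_graphics.image.color_space import linear_rgb
from tensorflow_graphics.image.color_space import srgb

linear_pixels = np.array([[0.0, 0.5, 1.0],
                          [0.00312, 0.0031308, 0.00314]], dtype=np.float32)

srgb_pixels = srgb.from_linear_rgb(linear_pixels)       # gamma encode
recovered = linear_rgb.from_srgb(srgb_pixels).numpy()   # decode back

np.testing.assert_allclose(recovered, linear_pixels, atol=1e-5)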
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/image/color_space/tests/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| -1 |
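The PR description repeated in these rows reduces to a handful of one-to-one renames from the TF1 compatibility API to TF2. Below is a minimal before/after sketch of those renames; `clamp_positive` is a hypothetical example function written for illustration, not code taken from the migrated module.

# Hedged sketch of the tf.compat.v1 -> TF2 renames listed in the PR description.
import tensorflow as tf

def clamp_positive_v1_style(vector):
  # Old style, as found before the migration.
  with tf.compat.v1.name_scope('clamp_positive'):
    vector = tf.convert_to_tensor(vector)
    tf.compat.v1.assert_equal(tf.rank(vector), 1)
    return tf.compat.v1.where(vector > 0.0, vector, tf.zeros_like(vector))

def clamp_positive_v2_style(vector):
  # New style, after applying the renames from the description.
  with tf.name_scope('clamp_positive'):
    vector = tf.convert_to_tensor(vector)
    tf.debugging.assert_equal(tf.rank(vector), 1)
    return tf.where(vector > 0.0, vector, tf.zeros_like(vector))

# The remaining rename in the list is equally mechanical:
# tf.compat.v1.dimension_value(shape[0]) -> tf.compat.dimension_value(shape[0])

print(clamp_positive_v2_style([-1.0, 2.0, 3.0]))  # tf.Tensor([0. 2. 3.], ...)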
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/nn/loss/tests/chamfer_distance_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the chamfer distance loss."""
from absl.testing import parameterized
import numpy as np
from tensorflow_graphics.nn.loss import chamfer_distance
from tensorflow_graphics.util import test_case
def _random_tensor(tensor_shape):
return np.random.uniform(low=0.0, high=1.0, size=tensor_shape)
def _random_tensor_shape():
tensor_size = np.random.randint(3) + 1
return np.random.randint(1, 10, size=(tensor_size)).tolist()
def _random_point_sets():
space_dimensions = np.random.randint(3) + 1
batch_shape = _random_tensor_shape()
point_set_a_size = np.random.randint(10) + 1
point_set_b_size = np.random.randint(10) + 1
point_set_a_init = np.random.uniform(
low=-100.0,
high=100.0,
size=batch_shape + [point_set_a_size, space_dimensions])
point_set_b_init = np.random.uniform(
low=-100.0,
high=100.0,
size=batch_shape + [point_set_b_size, space_dimensions])
return (point_set_a_init, point_set_b_init)
class ChamferDistanceTest(test_case.TestCase):
@parameterized.parameters(
(((0., 0), (0, 1), (1, 0), (-1, 0)),
((0., 0), (0, 2), (0.7, 0.4), (-0.5, -0.5)),
# a[0] -> b[0]
(0 + \
# a[1] -> b[2]
0.7**2 + 0.6**2 + \
# a[2] -> b[2]
0.3**2 + 0.4**2 + \
# a[3] -> b[3]
0.5) / 4 + \
# b[0] -> a[0]
(0 + \
# b[1] -> a[1]
1 + \
# b[2] -> a[2]
0.3**2 + 0.4**2 + \
# b[3] -> a[3]
0.5) / 4),
(((0., 1, 4), (3, 4, 2)),
((2., 2, 2), (2, 3, 4), (3, 2, 2)),
# a[0] -> b[1]
(8 + \
# a[1] -> b[2]
4) / 2 + \
# b[0] -> a[1]
(5 + \
# b[1] -> a[1]
6 + \
# b[2] -> a[1]
4) / 3),
)
def test_evaluate_preset(self, point_set_a, point_set_b, expected_distance):
tensor_shape = _random_tensor_shape()
point_set_a = np.tile(point_set_a, tensor_shape + [1, 1])
point_set_b = np.tile(point_set_b, tensor_shape + [1, 1])
expected = np.tile(expected_distance, tensor_shape)
result = chamfer_distance.evaluate(point_set_a, point_set_b)
self.assertAllClose(expected, result)
def test_chamfer_distance_evaluate_jacobian(self):
"""Tests the Jacobian of the Chamfer distance loss."""
point_set_a, point_set_b = _random_point_sets()
with self.subTest(name="jacobian_wrt_point_set_a"):
self.assert_jacobian_is_correct_fn(
lambda x: chamfer_distance.evaluate(x, point_set_b), [point_set_a],
atol=1e-5)
with self.subTest(name="jacobian_wrt_point_set_b"):
self.assert_jacobian_is_correct_fn(
lambda x: chamfer_distance.evaluate(point_set_a, x), [point_set_b],
atol=1e-5)
@parameterized.parameters(
("Not all batch dimensions are broadcast-compatible.", (1, 3, 5, 3),
(2, 4, 3)),
("Not all batch dimensions are broadcast-compatible.", (3, 3, 5),
(2, 4, 5)),
("point_set_b must have exactly 3 dimensions in axis -1,.", (2, 4, 3),
(2, 4, 2)),
("point_set_b must have exactly 2 dimensions in axis -1,.", (2, 4, 2),
(2, 4, 3)),
)
def test_evaluate_shape_exception_raised(self, error_msg, *shape):
"""Tests that the shape exception is raised."""
self.assert_exception_is_raised(chamfer_distance.evaluate, error_msg, shape)
@parameterized.parameters(
((1, 5, 6, 3), (2, 5, 9, 3)),
((None, 2, 6, 2), (4, 2, None, 4, 2)),
((3, 5, 8, 7), (3, 1, 1, 7)),
)
def test_evaluate_shape_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(chamfer_distance.evaluate, shapes)
if __name__ == "__main__":
test_case.main()
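As a sanity check on the first preset above, the same symmetric Chamfer value can be reproduced with a few lines of NumPy (a sketch of the quantity the test expects, not the library implementation):

import numpy as np

a = np.array([[0., 0.], [0., 1.], [1., 0.], [-1., 0.]])
b = np.array([[0., 0.], [0., 2.], [0.7, 0.4], [-0.5, -0.5]])
# Pairwise squared Euclidean distances, shape [len(a), len(b)].
d2 = np.sum((a[:, None, :] - b[None, :, :])**2, axis=-1)
# Mean nearest-neighbor squared distance from a to b, plus the same from b to a.
chamfer = d2.min(axis=1).mean() + d2.min(axis=0).mean()
print(chamfer)  # 0.8375, matching the hand-computed expectation in the preset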
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the chamfer distance loss."""
from absl.testing import parameterized
import numpy as np
from tensorflow_graphics.nn.loss import chamfer_distance
from tensorflow_graphics.util import test_case
def _random_tensor(tensor_shape):
return np.random.uniform(low=0.0, high=1.0, size=tensor_shape)
def _random_tensor_shape():
tensor_size = np.random.randint(3) + 1
return np.random.randint(1, 10, size=(tensor_size)).tolist()
def _random_point_sets():
space_dimensions = np.random.randint(3) + 1
batch_shape = _random_tensor_shape()
point_set_a_size = np.random.randint(10) + 1
point_set_b_size = np.random.randint(10) + 1
point_set_a_init = np.random.uniform(
low=-100.0,
high=100.0,
size=batch_shape + [point_set_a_size, space_dimensions])
point_set_b_init = np.random.uniform(
low=-100.0,
high=100.0,
size=batch_shape + [point_set_b_size, space_dimensions])
return (point_set_a_init, point_set_b_init)
class ChamferDistanceTest(test_case.TestCase):
@parameterized.parameters(
(((0., 0), (0, 1), (1, 0), (-1, 0)),
((0., 0), (0, 2), (0.7, 0.4), (-0.5, -0.5)),
# a[0] -> b[0]
(0 + \
# a[1] -> b[2]
0.7**2 + 0.6**2 + \
# a[2] -> b[2]
0.3**2 + 0.4**2 + \
# a[3] -> b[3]
0.5) / 4 + \
# b[0] -> a[0]
(0 + \
# b[1] -> a[1]
1 + \
# b[2] -> a[2]
0.3**2 + 0.4**2 + \
# b[3] -> a[3]
0.5) / 4),
(((0., 1, 4), (3, 4, 2)),
((2., 2, 2), (2, 3, 4), (3, 2, 2)),
# a[0] -> b[1]
(8 + \
# a[1] -> b[2]
4) / 2 + \
# b[0] -> a[1]
(5 + \
# b[1] -> a[1]
6 + \
# b[2] -> a[1]
4) / 3),
)
def test_evaluate_preset(self, point_set_a, point_set_b, expected_distance):
tensor_shape = _random_tensor_shape()
point_set_a = np.tile(point_set_a, tensor_shape + [1, 1])
point_set_b = np.tile(point_set_b, tensor_shape + [1, 1])
expected = np.tile(expected_distance, tensor_shape)
result = chamfer_distance.evaluate(point_set_a, point_set_b)
self.assertAllClose(expected, result)
def test_chamfer_distance_evaluate_jacobian(self):
"""Tests the Jacobian of the Chamfer distance loss."""
point_set_a, point_set_b = _random_point_sets()
with self.subTest(name="jacobian_wrt_point_set_a"):
self.assert_jacobian_is_correct_fn(
lambda x: chamfer_distance.evaluate(x, point_set_b), [point_set_a],
atol=1e-5)
with self.subTest(name="jacobian_wrt_point_set_b"):
self.assert_jacobian_is_correct_fn(
lambda x: chamfer_distance.evaluate(point_set_a, x), [point_set_b],
atol=1e-5)
@parameterized.parameters(
("Not all batch dimensions are broadcast-compatible.", (1, 3, 5, 3),
(2, 4, 3)),
("Not all batch dimensions are broadcast-compatible.", (3, 3, 5),
(2, 4, 5)),
("point_set_b must have exactly 3 dimensions in axis -1,.", (2, 4, 3),
(2, 4, 2)),
("point_set_b must have exactly 2 dimensions in axis -1,.", (2, 4, 2),
(2, 4, 3)),
)
def test_evaluate_shape_exception_raised(self, error_msg, *shape):
"""Tests that the shape exception is raised."""
self.assert_exception_is_raised(chamfer_distance.evaluate, error_msg, shape)
@parameterized.parameters(
((1, 5, 6, 3), (2, 5, 9, 3)),
((None, 2, 6, 2), (4, 2, None, 4, 2)),
((3, 5, 8, 7), (3, 1, 1, 7)),
)
def test_evaluate_shape_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(chamfer_distance.evaluate, shapes)
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/opengl/tests/rasterizer_op_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the opengl rasterizer op."""
from absl.testing import parameterized
import numpy as np
import six
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import look_at
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.rendering.opengl import rasterization_backend
from tensorflow_graphics.util import test_case
# Empty vertex shader
test_vertex_shader = """
#version 450
void main() { }
"""
# Geometry shader that projects the vertices of visible triangles onto the image
# plane.
test_geometry_shader = """
#version 450
uniform mat4 view_projection_matrix;
layout(points) in;
layout(triangle_strip, max_vertices=3) out;
out layout(location = 0) vec3 position;
out layout(location = 1) vec3 normal;
out layout(location = 2) vec2 bar_coord;
out layout(location = 3) float tri_id;
layout(binding=0) buffer triangular_mesh { float mesh_buffer[]; };
vec3 get_vertex_position(int i) {
int o = gl_PrimitiveIDIn * 9 + i * 3;
return vec3(mesh_buffer[o + 0], mesh_buffer[o + 1], mesh_buffer[o + 2]);
}
bool is_back_facing(vec3 v0, vec3 v1, vec3 v2) {
vec4 tv0 = view_projection_matrix * vec4(v0, 1.0);
vec4 tv1 = view_projection_matrix * vec4(v1, 1.0);
vec4 tv2 = view_projection_matrix * vec4(v2, 1.0);
tv0 /= tv0.w;
tv1 /= tv1.w;
tv2 /= tv2.w;
vec2 a = (tv1.xy - tv0.xy);
vec2 b = (tv2.xy - tv0.xy);
return (a.x * b.y - b.x * a.y) <= 0;
}
void main() {
vec3 v0 = get_vertex_position(0);
vec3 v1 = get_vertex_position(1);
vec3 v2 = get_vertex_position(2);
// Cull back-facing triangles.
if (is_back_facing(v0, v1, v2)) {
return;
}
normal = normalize(cross(v1 - v0, v2 - v0));
vec3 positions[3] = {v0, v1, v2};
for (int i = 0; i < 3; ++i) {
// gl_Position is a pre-defined size 4 output variable
gl_Position = view_projection_matrix * vec4(positions[i], 1);
bar_coord = vec2(i==0 ? 1 : 0, i==1 ? 1 : 0);
tri_id = gl_PrimitiveIDIn;
position = positions[i];
EmitVertex();
}
EndPrimitive();
}
"""
# Fragment shader that packs barycentric coordinates, triangle index, and depth
# map in a resulting vec4 per pixel.
test_fragment_shader = """
#version 450
in layout(location = 0) vec3 position;
in layout(location = 1) vec3 normal;
in layout(location = 2) vec2 bar_coord;
in layout(location = 3) float tri_id;
out vec4 output_color;
void main() {
output_color = vec4(bar_coord, tri_id, position.z);
}
"""
class RasterizerOPTest(test_case.TestCase):
def test_rasterize(self):
max_depth = 10
min_depth = 2
height = 480
width = 640
camera_origin = (0.0, 0.0, 0.0)
camera_up = (0.0, 1.0, 0.0)
look_at_point = (0.0, 0.0, 1.0)
fov = (60.0 * np.math.pi / 180,)
near_plane = (1.0,)
far_plane = (10.0,)
batch_shape = tf.convert_to_tensor(
value=(2, (max_depth - min_depth) // 2), dtype=tf.int32)
world_to_camera = look_at.right_handed(camera_origin, look_at_point,
camera_up)
perspective_matrix = perspective.right_handed(
fov, (float(width) / float(height),), near_plane, far_plane)
view_projection_matrix = tf.matmul(perspective_matrix, world_to_camera)
view_projection_matrix = tf.squeeze(view_projection_matrix)
# Generate triangles at different depths and associated ground truth.
tris = np.zeros((max_depth - min_depth, 9), dtype=np.float32)
gt = np.zeros((max_depth - min_depth, height, width, 2), dtype=np.float32)
for idx in range(max_depth - min_depth):
tris[idx, :] = (-100.0, 100.0, idx + min_depth, 100.0, 100.0,
idx + min_depth, 0.0, -100.0, idx + min_depth)
gt[idx, :, :, :] = (0, idx + min_depth)
# Broadcast the variables.
render_parameters = {
"view_projection_matrix":
("mat",
tf.broadcast_to(
input=view_projection_matrix,
shape=tf.concat(
values=(batch_shape,
tf.shape(input=view_projection_matrix)[-2:]),
axis=0))),
"triangular_mesh":
("buffer",
tf.reshape(
tris, shape=tf.concat(values=(batch_shape, (9,)), axis=0)))
}
# Reshape the ground truth.
gt = tf.reshape(
gt, shape=tf.concat(values=(batch_shape, (height, width, 2)), axis=0))
render_parameters = list(six.iteritems(render_parameters))
variable_names = [v[0] for v in render_parameters]
variable_kinds = [v[1][0] for v in render_parameters]
variable_values = [v[1][1] for v in render_parameters]
def rasterize():
return rasterization_backend.render_ops.rasterize(
num_points=3,
variable_names=variable_names,
variable_kinds=variable_kinds,
variable_values=variable_values,
output_resolution=(width, height),
vertex_shader=test_vertex_shader,
geometry_shader=test_geometry_shader,
fragment_shader=test_fragment_shader,
)
result = rasterize()
self.assertAllClose(result[..., 2:4], gt)
@tf.function
def check_lazy_shape():
# Within @tf.function, the tensor shape is determined by SetShapeFn
      # callback. Ensure that the shape of non-batch axes matches that of
# the actual tensor evaluated in eager mode above.
lazy_shape = rasterize().shape
self.assertEqual(lazy_shape[-3:], list(result.shape)[-3:])
check_lazy_shape()
@parameterized.parameters(
("The variable names, kinds, and values must have the same size.",
["var1"], ["buffer", "buffer"], [[1.0], [1.0]],
tf.errors.InvalidArgumentError, ValueError),
("The variable names, kinds, and values must have the same size.",
["var1", "var2"], ["buffer"], [[1.0], [1.0]],
tf.errors.InvalidArgumentError, ValueError),
("The variable names, kinds, and values must have the same size.",
["var1", "var2"], ["buffer", "buffer"], [[1.0]],
tf.errors.InvalidArgumentError, ValueError),
("has an invalid batch", ["var1", "var2"], ["buffer", "buffer"],
[[1.0], [[1.0]]], tf.errors.InvalidArgumentError, ValueError),
("has an invalid", ["var1"], ["mat"], [[1.0]],
tf.errors.InvalidArgumentError, ValueError),
("has an invalid", ["var1"], ["buffer"], [1.0],
tf.errors.InvalidArgumentError, ValueError),
)
def test_invalid_variable_inputs(self, error_msg, variable_names,
variable_kinds, variable_values, error_eager,
error_graph_mode):
height = 1
width = 1
empty_shader_code = "#version 450\n void main() { }\n"
if tf.executing_eagerly():
error = error_eager
else:
error = error_graph_mode
with self.assertRaisesRegexp(error, error_msg):
self.evaluate(
rasterization_backend.render_ops.rasterize(
num_points=0,
variable_names=variable_names,
variable_kinds=variable_kinds,
variable_values=variable_values,
output_resolution=(width, height),
vertex_shader=empty_shader_code,
geometry_shader=empty_shader_code,
fragment_shader=empty_shader_code))
if __name__ == "__main__":
test_case.main()
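The back-face test in the geometry shader above boils down to the sign of a 2D cross product after the perspective divide; a NumPy transcription for intuition (the helper name is made up, and the inputs are assumed to be already divided by w):

import numpy as np

def is_back_facing_ndc(v0, v1, v2):
  """Mirrors the shader test on vertices already in normalized device coords."""
  a = v1[:2] - v0[:2]
  b = v2[:2] - v0[:2]
  # Non-positive signed area means clockwise winding, i.e. back-facing.
  return a[0] * b[1] - b[0] * a[1] <= 0.0

print(is_back_facing_ndc(np.array([0., 0.]), np.array([1., 0.]), np.array([0., 1.])))  # False: counter-clockwise, front-facing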
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for the opengl rasterizer op."""
from absl.testing import parameterized
import numpy as np
import six
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import look_at
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.rendering.opengl import rasterization_backend
from tensorflow_graphics.util import test_case
# Empty vertex shader
test_vertex_shader = """
#version 450
void main() { }
"""
# Geometry shader that projects the vertices of visible triangles onto the image
# plane.
test_geometry_shader = """
#version 450
uniform mat4 view_projection_matrix;
layout(points) in;
layout(triangle_strip, max_vertices=3) out;
out layout(location = 0) vec3 position;
out layout(location = 1) vec3 normal;
out layout(location = 2) vec2 bar_coord;
out layout(location = 3) float tri_id;
layout(binding=0) buffer triangular_mesh { float mesh_buffer[]; };
vec3 get_vertex_position(int i) {
int o = gl_PrimitiveIDIn * 9 + i * 3;
return vec3(mesh_buffer[o + 0], mesh_buffer[o + 1], mesh_buffer[o + 2]);
}
bool is_back_facing(vec3 v0, vec3 v1, vec3 v2) {
vec4 tv0 = view_projection_matrix * vec4(v0, 1.0);
vec4 tv1 = view_projection_matrix * vec4(v1, 1.0);
vec4 tv2 = view_projection_matrix * vec4(v2, 1.0);
tv0 /= tv0.w;
tv1 /= tv1.w;
tv2 /= tv2.w;
vec2 a = (tv1.xy - tv0.xy);
vec2 b = (tv2.xy - tv0.xy);
return (a.x * b.y - b.x * a.y) <= 0;
}
void main() {
vec3 v0 = get_vertex_position(0);
vec3 v1 = get_vertex_position(1);
vec3 v2 = get_vertex_position(2);
// Cull back-facing triangles.
if (is_back_facing(v0, v1, v2)) {
return;
}
normal = normalize(cross(v1 - v0, v2 - v0));
vec3 positions[3] = {v0, v1, v2};
for (int i = 0; i < 3; ++i) {
// gl_Position is a pre-defined size 4 output variable
gl_Position = view_projection_matrix * vec4(positions[i], 1);
bar_coord = vec2(i==0 ? 1 : 0, i==1 ? 1 : 0);
tri_id = gl_PrimitiveIDIn;
position = positions[i];
EmitVertex();
}
EndPrimitive();
}
"""
# Fragment shader that packs barycentric coordinates, triangle index, and depth
# map in a resulting vec4 per pixel.
test_fragment_shader = """
#version 450
in layout(location = 0) vec3 position;
in layout(location = 1) vec3 normal;
in layout(location = 2) vec2 bar_coord;
in layout(location = 3) float tri_id;
out vec4 output_color;
void main() {
output_color = vec4(bar_coord, tri_id, position.z);
}
"""
class RasterizerOPTest(test_case.TestCase):
def test_rasterize(self):
max_depth = 10
min_depth = 2
height = 480
width = 640
camera_origin = (0.0, 0.0, 0.0)
camera_up = (0.0, 1.0, 0.0)
look_at_point = (0.0, 0.0, 1.0)
fov = (60.0 * np.math.pi / 180,)
near_plane = (1.0,)
far_plane = (10.0,)
batch_shape = tf.convert_to_tensor(
value=(2, (max_depth - min_depth) // 2), dtype=tf.int32)
world_to_camera = look_at.right_handed(camera_origin, look_at_point,
camera_up)
perspective_matrix = perspective.right_handed(
fov, (float(width) / float(height),), near_plane, far_plane)
view_projection_matrix = tf.matmul(perspective_matrix, world_to_camera)
view_projection_matrix = tf.squeeze(view_projection_matrix)
# Generate triangles at different depths and associated ground truth.
tris = np.zeros((max_depth - min_depth, 9), dtype=np.float32)
gt = np.zeros((max_depth - min_depth, height, width, 2), dtype=np.float32)
for idx in range(max_depth - min_depth):
tris[idx, :] = (-100.0, 100.0, idx + min_depth, 100.0, 100.0,
idx + min_depth, 0.0, -100.0, idx + min_depth)
gt[idx, :, :, :] = (0, idx + min_depth)
# Broadcast the variables.
render_parameters = {
"view_projection_matrix":
("mat",
tf.broadcast_to(
input=view_projection_matrix,
shape=tf.concat(
values=(batch_shape,
tf.shape(input=view_projection_matrix)[-2:]),
axis=0))),
"triangular_mesh":
("buffer",
tf.reshape(
tris, shape=tf.concat(values=(batch_shape, (9,)), axis=0)))
}
# Reshape the ground truth.
gt = tf.reshape(
gt, shape=tf.concat(values=(batch_shape, (height, width, 2)), axis=0))
render_parameters = list(six.iteritems(render_parameters))
variable_names = [v[0] for v in render_parameters]
variable_kinds = [v[1][0] for v in render_parameters]
variable_values = [v[1][1] for v in render_parameters]
def rasterize():
return rasterization_backend.render_ops.rasterize(
num_points=3,
variable_names=variable_names,
variable_kinds=variable_kinds,
variable_values=variable_values,
output_resolution=(width, height),
vertex_shader=test_vertex_shader,
geometry_shader=test_geometry_shader,
fragment_shader=test_fragment_shader,
)
result = rasterize()
self.assertAllClose(result[..., 2:4], gt)
@tf.function
def check_lazy_shape():
# Within @tf.function, the tensor shape is determined by SetShapeFn
      # callback. Ensure that the shape of non-batch axes matches that of
# the actual tensor evaluated in eager mode above.
lazy_shape = rasterize().shape
self.assertEqual(lazy_shape[-3:], list(result.shape)[-3:])
check_lazy_shape()
@parameterized.parameters(
("The variable names, kinds, and values must have the same size.",
["var1"], ["buffer", "buffer"], [[1.0], [1.0]],
tf.errors.InvalidArgumentError, ValueError),
("The variable names, kinds, and values must have the same size.",
["var1", "var2"], ["buffer"], [[1.0], [1.0]],
tf.errors.InvalidArgumentError, ValueError),
("The variable names, kinds, and values must have the same size.",
["var1", "var2"], ["buffer", "buffer"], [[1.0]],
tf.errors.InvalidArgumentError, ValueError),
("has an invalid batch", ["var1", "var2"], ["buffer", "buffer"],
[[1.0], [[1.0]]], tf.errors.InvalidArgumentError, ValueError),
("has an invalid", ["var1"], ["mat"], [[1.0]],
tf.errors.InvalidArgumentError, ValueError),
("has an invalid", ["var1"], ["buffer"], [1.0],
tf.errors.InvalidArgumentError, ValueError),
)
def test_invalid_variable_inputs(self, error_msg, variable_names,
variable_kinds, variable_values, error_eager,
error_graph_mode):
height = 1
width = 1
empty_shader_code = "#version 450\n void main() { }\n"
if tf.executing_eagerly():
error = error_eager
else:
error = error_graph_mode
with self.assertRaisesRegexp(error, error_msg):
self.evaluate(
rasterization_backend.render_ops.rasterize(
num_points=0,
variable_names=variable_names,
variable_kinds=variable_kinds,
variable_values=variable_values,
output_resolution=(width, height),
vertex_shader=empty_shader_code,
geometry_shader=empty_shader_code,
fragment_shader=empty_shader_code))
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/nn/metric/fscore.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the fscore metric."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.nn.metric import precision as precision_module
from tensorflow_graphics.nn.metric import recall as recall_module
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import safe_ops
from tensorflow_graphics.util import shape
def evaluate(ground_truth,
prediction,
precision_function=precision_module.evaluate,
recall_function=recall_module.evaluate,
name=None):
"""Computes the fscore metric for the given ground truth and predicted labels.
The fscore is calculated as 2 * (precision * recall) / (precision + recall)
where the precision and recall are evaluated by the given function parameters.
The precision and recall functions default to their definition for boolean
labels (see https://en.wikipedia.org/wiki/Precision_and_recall for more
details).
Note:
In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
Args:
ground_truth: A tensor of shape `[A1, ..., An, N]`, where the last axis
represents the ground truth values.
prediction: A tensor of shape `[A1, ..., An, N]`, where the last axis
represents the predicted values.
precision_function: The function to use for evaluating the precision.
Defaults to the precision evaluation for binary ground-truth and
predictions.
recall_function: The function to use for evaluating the recall. Defaults to
the recall evaluation for binary ground-truth and prediction.
name: A name for this op. Defaults to "fscore_evaluate".
Returns:
A tensor of shape `[A1, ..., An]` that stores the fscore metric for the
given ground truth labels and predictions.
Raises:
ValueError: if the shape of `ground_truth`, `prediction` is
not supported.
"""
with tf.compat.v1.name_scope(name, "fscore_evaluate",
[ground_truth, prediction]):
ground_truth = tf.convert_to_tensor(value=ground_truth)
prediction = tf.convert_to_tensor(value=prediction)
shape.compare_batch_dimensions(
tensors=(ground_truth, prediction),
tensor_names=("ground_truth", "prediction"),
last_axes=-1,
broadcast_compatible=True)
recall = recall_function(ground_truth, prediction)
precision = precision_function(ground_truth, prediction)
return safe_ops.safe_signed_div(2 * precision * recall, precision + recall)
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
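A small NumPy sketch of the boolean-label case described in the docstring above (illustration only; for these inputs, calling evaluate(ground_truth, prediction) with the default precision and recall functions should yield the same value):

import numpy as np

ground_truth = np.array([1., 1., 0., 1., 0., 0.])
prediction = np.array([1., 0., 0., 1., 1., 0.])
true_positives = np.sum((ground_truth == 1) & (prediction == 1))  # 2
precision = true_positives / np.sum(prediction == 1)              # 2/3
recall = true_positives / np.sum(ground_truth == 1)               # 2/3
fscore = 2 * precision * recall / (precision + recall)            # 2/3 ~= 0.667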
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the fscore metric."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.nn.metric import precision as precision_module
from tensorflow_graphics.nn.metric import recall as recall_module
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import safe_ops
from tensorflow_graphics.util import shape
def evaluate(ground_truth,
prediction,
precision_function=precision_module.evaluate,
recall_function=recall_module.evaluate,
name=None):
"""Computes the fscore metric for the given ground truth and predicted labels.
The fscore is calculated as 2 * (precision * recall) / (precision + recall)
where the precision and recall are evaluated by the given function parameters.
The precision and recall functions default to their definition for boolean
labels (see https://en.wikipedia.org/wiki/Precision_and_recall for more
details).
Note:
In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
Args:
ground_truth: A tensor of shape `[A1, ..., An, N]`, where the last axis
represents the ground truth values.
prediction: A tensor of shape `[A1, ..., An, N]`, where the last axis
represents the predicted values.
precision_function: The function to use for evaluating the precision.
Defaults to the precision evaluation for binary ground-truth and
predictions.
recall_function: The function to use for evaluating the recall. Defaults to
the recall evaluation for binary ground-truth and prediction.
name: A name for this op. Defaults to "fscore_evaluate".
Returns:
A tensor of shape `[A1, ..., An]` that stores the fscore metric for the
given ground truth labels and predictions.
Raises:
ValueError: if the shape of `ground_truth`, `prediction` is
not supported.
"""
with tf.compat.v1.name_scope(name, "fscore_evaluate",
[ground_truth, prediction]):
ground_truth = tf.convert_to_tensor(value=ground_truth)
prediction = tf.convert_to_tensor(value=prediction)
shape.compare_batch_dimensions(
tensors=(ground_truth, prediction),
tensor_names=("ground_truth", "prediction"),
last_axes=-1,
broadcast_compatible=True)
recall = recall_function(ground_truth, prediction)
precision = precision_function(ground_truth, prediction)
return safe_ops.safe_signed_div(2 * precision * recall, precision + recall)
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/projects/local_implicit_grid/core/evaluator.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Utility modules for evaluating model from checkpoint.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import ast
import numpy as np
import tensorflow.compat.v1 as tf
from tensorflow.compat.v1.io import gfile
from tensorflow_graphics.projects.local_implicit_grid.core import implicit_nets as im
from tensorflow_graphics.projects.local_implicit_grid.core import local_implicit_grid_layer as lig
from tensorflow_graphics.projects.local_implicit_grid.core import model_g2g as g2g
from tensorflow_graphics.projects.local_implicit_grid.core import model_g2v as g2v
tf.logging.set_verbosity(tf.logging.ERROR)
def parse_param_file(param_file):
"""Parse parameter file for parameters."""
with gfile.GFile(param_file, 'r') as fh:
lines = fh.readlines()
d = {}
for l in lines:
l = l.rstrip('\n')
splits = l.split(':')
key = splits[0]
val_ = splits[1].strip()
if not val_:
val = ''
else:
try:
val = ast.literal_eval(val_)
except (ValueError, SyntaxError):
val = str(val_)
d[key] = val
return d
class RefinerEvaluator(object):
"""Load pretrained refiner and evaluate for a given code.
"""
def __init__(self, ckpt, codelen, dim=3, out_features=1, num_filters=128,
point_batch=20000):
self.ckpt = ckpt
self.codelen = codelen
self.dim = dim
self.out_features = out_features
self.num_filters = num_filters
self.point_batch = point_batch
self.graph = tf.Graph()
self._init_graph()
self.global_step_ = self.global_step.eval(session=self.sess)
def _init_graph(self):
"""Initialize computation graph for tensorflow.
"""
with self.graph.as_default():
self.refiner = im.ImNet(dim=self.dim,
in_features=self.codelen,
out_features=self.out_features,
num_filters=self.num_filters)
self.global_step = tf.get_variable('global_step', shape=[],
dtype=tf.int64)
self.pts_ph = tf.placeholder(tf.float32, shape=[self.point_batch, 3])
self.lat_ph = tf.placeholder(tf.float32, shape=[self.codelen])
lat = tf.broadcast_to(self.lat_ph[tf.newaxis],
[self.point_batch, self.codelen])
code = tf.concat((self.pts_ph, lat), axis=-1) # [pb, 3+c]
vals = self.refiner(code, training=False) # [pb, 1]
self.vals = tf.squeeze(vals, axis=1) # [pb]
self.saver = tf.train.Saver()
self.sess = tf.Session()
self.saver.restore(self.sess, self.ckpt)
def _get_grid_points(self, xmin, xmax, res):
x = np.linspace(xmin, xmax, res)
xyz = np.meshgrid(*tuple([x] * self.dim), indexing='ij')
xyz = np.stack(xyz, axis=-1)
xyz = xyz.reshape([-1, self.dim])
return xyz
def eval_points(self, lat, points):
"""Evaluate network at locations specified by points.
Args:
lat: [self.codelen,] np array, latent code.
points: [#v, self.dim] np array, point locations to evaluate.
Returns:
all_vals: [#v] np array, function values at locations.
"""
npt = points.shape[0]
npb = int(np.ceil(float(npt)/self.point_batch))
all_vals = np.zeros([npt], dtype=np.float32)
for idx in range(npb):
sid = int(idx * self.point_batch)
eid = int(min(npt, sid+self.point_batch))
pts = points[sid:eid]
pad_w = self.point_batch - (eid - sid)
pts = np.pad(pts, ((0, pad_w), (0, 0)), mode='constant')
with self.graph.as_default():
val = self.sess.run(self.vals, feed_dict={self.pts_ph: pts,
self.lat_ph: lat})
all_vals[sid:eid] = val[:(eid-sid)]
return all_vals
def eval_grid(self, lat, xmin=-1.0, xmax=1.0, res=64):
"""Evaluate network on a grid.
Args:
lat: [self.codelen,] np array, latent code.
xmin: float, minimum coordinate value for grid.
xmax: float, maximum coordinate value for grid.
res: int, resolution (per dimension) of grid.
Returns:
grid_val: [res, res, res] np.float32 array, grid of values from query.
"""
grid_points = self._get_grid_points(xmin=xmin, xmax=xmax, res=res)
point_val = self.eval_points(lat, grid_points)
grid_val = point_val.reshape([res, res, res])
return grid_val
class EncoderEvaluator(object):
"""Load pretrained grid encoder and evaluate single crops."""
def __init__(self,
ckpt,
in_grid_res=32,
encoder_nf=32,
codelen=32,
grid_batch=128):
"""Initialization function.
Args:
ckpt: str, path to checkpoint.
in_grid_res: int, resolution of grid to feed to encoder.
encoder_nf: int, number of base filters for encoder.
codelen: int, length of output latent code.
grid_batch: int, batch size of cut-out grid to evaluate at a time.
"""
self.ckpt = ckpt
self.codelen = codelen
self.grid_batch = grid_batch
self.in_grid_res = in_grid_res
self.encoder_nf = encoder_nf
self.graph = tf.Graph()
self._init_graph() # creates self.sess
def _init_graph(self):
"""Initialize computation graph for tensorflow.
"""
with self.graph.as_default():
self.encoder = g2v.GridEncoder(in_grid_res=self.in_grid_res,
num_filters=self.encoder_nf,
codelen=self.codelen,
name='g2v')
self.grid_ph = tf.placeholder(
tf.float32,
shape=[None, self.in_grid_res, self.in_grid_res, self.in_grid_res, 1])
self.lats = self.encoder(self.grid_ph, training=False) # [gb, codelen]
self.saver = tf.train.Saver()
self.sess = tf.Session()
self.saver.restore(self.sess, self.ckpt)
def eval_grid(self, grid):
"""Strided evaluation of full grid into feature grid.
Args:
grid: [batch, gres, gres, gres, 1] input feature grid.
Returns:
      codes: [batch, codelen] output feature grid.
"""
# initialize output feature grid
niters = int(np.ceil(grid.shape[0] / self.grid_batch))
codes = []
for idx in range(niters):
sid = idx * self.grid_batch
eid = min(sid+self.grid_batch, grid.shape[0])
c = self.sess.run(self.lats,
feed_dict={self.grid_ph: grid[sid:eid]})
codes.append(c)
codes = np.concatenate(codes, axis=0)
return codes.astype(np.float32)
class FullGridEncoderEvaluator(object):
"""Load pretrained grid encoder and evaluate a full input grid.
Performs windowed encoding and outputs an encoded feature grid.
"""
def __init__(self,
ckpt,
in_grid_res=32,
num_filters=32,
codelen=128,
grid_batch=128,
gres=256,
overlap=True):
"""Initialization function.
Args:
ckpt: str, path to checkpoint.
in_grid_res: int, resolution of grid to feed to encoder.
num_filters: int, number of base filters for encoder.
codelen: int, length of output latent code.
grid_batch: int, batch size of cut-out grid to evaluate at a time.
gres: int, resolution of the full grid.
overlap: bool, whether to do overlapping or non-overlapping cutout
evaluations.
"""
self.ckpt = ckpt
self.codelen = codelen
self.grid_batch = grid_batch
self.in_grid_res = in_grid_res
self.gres = gres
self.num_filters = num_filters
self.graph = tf.Graph()
self._init_graph()
self.global_step_ = self.global_step.eval(session=self.sess)
if overlap:
ijk = np.arange(0, gres-int(in_grid_res/2), int(in_grid_res/2))
self.out_grid_res = ijk.shape[0]
else:
ijk = np.arange(0, gres, in_grid_res)
self.out_grid_res = ijk.shape[0]
self.ijk = np.meshgrid(ijk, ijk, ijk, indexing='ij')
self.ijk = np.stack(self.ijk, axis=-1).reshape([-1, 3])
def _init_graph(self):
"""Initialize computation graph for tensorflow."""
with self.graph.as_default():
self.encoder = g2v.GridEncoder(
in_grid_res=self.in_grid_res,
num_filters=self.num_filters,
codelen=self.codelen,
name='g2v')
self.global_step = tf.get_variable(
'global_step', shape=[], dtype=tf.int64)
self.grid_ph = tf.placeholder(
tf.float32, shape=[self.gres, self.gres, self.gres])
self.start_ph = tf.placeholder(tf.int32, shape=[self.grid_batch, 3])
self.ingrid = self._batch_slice(self.grid_ph, self.start_ph,
self.in_grid_res, self.grid_batch)
self.ingrid = self.ingrid[..., tf.newaxis]
self.lats = self.encoder(self.ingrid, training=False) # [gb, codelen]
self.saver = tf.train.Saver()
self.sess = tf.Session()
self.saver.restore(self.sess, self.ckpt)
def _batch_slice(self, ary, start_ijk, w, batch_size):
"""Batched slicing of original grid.
Args:
ary: tensor, rank = 3.
start_ijk: [batch_size, 3] tensor, starting index.
w: width of cube to extract.
batch_size: int, batch size.
Returns:
batched_slices: [batch_size, w, w, w] tensor, batched slices of ary.
"""
batch_size = start_ijk.shape[0]
ijk = tf.range(w, dtype=tf.int32)
slice_idx = tf.meshgrid(ijk, ijk, ijk, indexing='ij')
slice_idx = tf.stack(
slice_idx, axis=-1) # [in_grid_res, in_grid_res, in_grid_res, 3]
slice_idx = tf.broadcast_to(slice_idx[tf.newaxis], [batch_size, w, w, w, 3])
offset = tf.broadcast_to(
start_ijk[:, tf.newaxis, tf.newaxis, tf.newaxis, :],
[batch_size, w, w, w, 3])
slice_idx += offset
# [batch_size, in_grid_res, in_grid_res, in_grid_res, 3]
batched_slices = tf.gather_nd(ary, slice_idx)
# [batch_size, in_grid_res, in_grid_res, in_grid_res]
return batched_slices
def eval_grid(self, grid):
"""Strided evaluation of full grid into feature grid.
Args:
grid: [gres, gres, gres] input feature grid.
Returns:
ogrid: [out_grid_res, out_grid_res, out_grid_res, codelen] output feature
        grid.
"""
# initialize output feature grid
ogrid = np.zeros([self.ijk.shape[0], self.codelen])
niters = np.ceil(self.ijk.shape[0] / self.grid_batch).astype(np.int)
for idx in range(niters):
sid = idx * self.grid_batch
eid = min(sid + self.grid_batch, self.ijk.shape[0])
start_ijk = self.ijk[sid:eid]
# pad if last iteration does not have a full batch
pad_w = self.grid_batch - start_ijk.shape[0]
start_ijk = np.pad(start_ijk, ((0, pad_w), (0, 0)), mode='constant')
lats = self.sess.run(
self.lats, feed_dict={
self.grid_ph: grid,
self.start_ph: start_ijk
})
ogrid[sid:eid] = lats[:eid - sid]
ogrid = ogrid.reshape(
[self.out_grid_res, self.out_grid_res, self.out_grid_res, self.codelen])
return ogrid.astype(np.float32)
class LIGEvaluator(object):
"""Load pretrained grid refiner and evaluate a feature grid.
"""
def __init__(self,
ckpt,
size=(15, 15, 15),
in_features=32,
out_features=1,
x_location_max=1,
num_filters=32,
min_grid_value=(0., 0., 0.),
max_grid_value=(1., 1., 1.),
net_type='imnet',
method='linear',
point_batch=20000,
scope=''):
"""Initialization function.
Args:
ckpt: str, path to checkpoint.
size: list or tuple of ints, grid dimension in each dimension.
in_features: int, number of input channels.
out_features: int, number of output channels.
x_location_max: float, relative coordinate range for one voxel.
num_filters: int, number of filters for refiner.
min_grid_value: tuple, lower bound of query points.
max_grid_value: tuple, upper bound of query points.
net_type: str, one of occnet/deepsdf.
method: str, one of linear/nn.
point_batch: int, pseudo batch size for evaluating points.
scope: str, scope of imnet layer.
"""
self.dim = 3 # hardcode for dim = 3
self.ckpt = ckpt
self.size = size
self.x_location_max = x_location_max
self.num_filters = num_filters
self.in_features = in_features
self.out_features = out_features
self.net_type = net_type
self.method = method
self.point_batch = point_batch
self.scope = scope
self.min_grid_value = min_grid_value
self.max_grid_value = max_grid_value
self.graph = tf.Graph()
self._init_graph()
def _init_graph(self):
"""Initialize computation graph for tensorflow.
"""
with self.graph.as_default():
self.lig = lig.LocalImplicitGrid(size=self.size,
in_features=self.in_features,
out_features=self.out_features,
num_filters=self.num_filters,
net_type=self.net_type,
method=self.method,
x_location_max=self.x_location_max,
min_grid_value=self.min_grid_value,
max_grid_value=self.max_grid_value,
name='lig')
self.pts_ph = tf.placeholder(tf.float32, shape=[self.point_batch, 3])
self.latgrid_ph = tf.placeholder(tf.float32,
shape=[self.size[0],
self.size[1],
self.size[2],
self.in_features])
self.latgrid = self.latgrid_ph[tf.newaxis]
self.points = self.pts_ph[tf.newaxis]
vals = self.lig(self.latgrid, self.points, training=False) # [1,npts,1]
self.vals = tf.squeeze(vals, axis=[0, 2]) # [npts]
self.map_dict = self._get_var_mapping(model=self.lig)
self.saver = tf.train.Saver(self.map_dict)
self.sess = tf.Session()
self.saver.restore(self.sess, self.ckpt)
def _get_grid_points(self, xmin, xmax, res):
x = np.linspace(xmin, xmax, res)
xyz = np.meshgrid(*tuple([x] * self.dim), indexing='ij')
xyz = np.stack(xyz, axis=-1)
xyz = xyz.reshape([-1, self.dim])
return xyz
def eval_points(self, latgrid, points):
"""Evaluate network at locations specified by points.
Args:
latgrid: [size0, size1, size2, self.codelen] np array, latent code.
points: [#v, self.dim] np array, point locations to evaluate.
Returns:
all_vals: [#v] np array, function values at locations.
"""
npt = points.shape[0]
npb = int(np.ceil(float(npt)/self.point_batch))
all_vals = np.zeros([npt], dtype=np.float32)
for idx in range(npb):
sid = int(idx * self.point_batch)
eid = int(min(npt, sid+self.point_batch))
pts = points[sid:eid]
pad_w = self.point_batch - (eid - sid)
if pts.shape[0] < self.point_batch:
pts_pad = np.tile(pts[0:1], (pad_w, 1))
# repeat the first point in the batch
pts = np.concatenate([pts, pts_pad], axis=0)
with self.graph.as_default():
val = self.sess.run(self.vals, feed_dict={self.pts_ph: pts,
self.latgrid_ph: latgrid})
all_vals[sid:eid] = val[:(eid-sid)]
return all_vals
def eval_grid(self, latgrid, xmin=0.0, xmax=1.0, res=128):
"""Evaluate network on a grid.
Args:
latgrid: [size0, size1, size2, self.codelen] np array, latent code.
xmin: float, minimum coordinate value for grid.
xmax: float, maximum coordinate value for grid.
res: int, resolution (per dimension) of grid.
Returns:
grid_val: [res, res, res] np.float32 array, grid of values from query.
"""
grid_points = self._get_grid_points(xmin=xmin, xmax=xmax, res=res)
point_val = self.eval_points(latgrid, grid_points)
grid_val = point_val.reshape([res, res, res])
return grid_val
def _get_var_mapping(self, model):
vars_ = model.trainable_variables
varnames = [v.name for v in vars_] # .split(':')[0]
varnames = [self.scope+v.replace('lig/', '').strip(':0') for v in varnames]
map_dict = dict(zip(varnames, vars_))
return map_dict
class UNetEvaluator(object):
"""Load pretrained UNet for generating feature grid for coarse voxel inputs."""
def __init__(self,
ckpt,
in_grid_res,
out_grid_res,
num_filters,
max_filters,
out_features,
sph_norm=0.):
self.ckpt = ckpt
self.in_grid_res = in_grid_res
self.out_grid_res = out_grid_res
self.num_filters = num_filters
self.max_filters = max_filters
self.out_features = out_features
self.sph_norm = sph_norm
self.graph = tf.Graph()
self._init_graph()
def _init_graph(self):
"""Initialize computation graph for tensorflow."""
with self.graph.as_default():
self.unet = g2g.UNet3D(in_grid_res=self.in_grid_res,
out_grid_res=self.out_grid_res,
num_filters=self.num_filters,
max_filters=self.max_filters,
out_features=self.out_features)
self.input_grid_ph = tf.placeholder(
tf.float32,
[None, None, None])
self.input_grid = self.input_grid_ph[tf.newaxis, ..., tf.newaxis]
self.feat_grid = self.unet(self.input_grid)
self.saver = tf.train.Saver()
self.sess = tf.Session()
self.saver.restore(self.sess, self.ckpt)
def eval_grid(self, input_grid):
"""Evaluate input grid (no batching).
Args:
input_grid: [in_grid_res, in_grid_res, in_grid_res] tensor.
Returns:
[out_grid_res, out_grid_res, out_grid_res, out_features]
"""
with self.graph.as_default():
feat_grid = self.sess.run(self.feat_grid,
feed_dict={self.input_grid_ph: input_grid})
feat_grid = feat_grid[0]
if self.sph_norm > 0:
feat_grid = (feat_grid /
np.linalg.norm(feat_grid, axis=-1, keepdims=True) *
self.sph_norm)
return feat_grid
class SparseLIGEvaluator(object):
"""Evaluate sparse encoded feature grids."""
def __init__(self, ckpt, num_filters, codelen, origin, grid_shape,
part_size, overlap=True, scope=''):
self.scope = scope
self.overlap = overlap
self.ckpt = ckpt
self.num_filters = num_filters
self.codelen = codelen
if overlap:
self.res = (np.array(grid_shape) - 1) / 2.0
else:
self.res = np.array(grid_shape) - 1
self.res = self.res.astype(np.int32)
self.xmin = np.array(origin)
self.xmax = self.xmin + self.res * part_size
self.part_size = part_size
self.lvg = LIGEvaluator(ckpt=ckpt,
size=grid_shape,
in_features=codelen,
out_features=1,
x_location_max=2-float(overlap),
num_filters=num_filters,
min_grid_value=self.xmin,
max_grid_value=self.xmax,
net_type='imnet',
method='linear' if overlap else 'nn',
scope=scope)
def evaluate_feature_grid(self, feature_grid, mask, res_per_part=4,
conservative=False):
"""Evaluate feature grid.
Args:
feature_grid: [*grid_size, codelen] np.array, feature grid to evaluate.
mask: [*grid_size] bool np.array, mask for feature locations.
res_per_part: int, resolution of output evaluation per part.
conservative: bool, whether to do conservative evaluations.
        If true, evaluates a cell if either neighbor is masked. Else, evaluates a
cell if all neighbors are masked.
Returns:
output grid.
"""
# setup grid
eps = 1e-6
s = self.res
l = [np.linspace(self.xmin[i]+eps, self.xmax[i]-eps, res_per_part*s[i])
for i in range(3)]
xyz = np.stack(np.meshgrid(l[0], l[1], l[2],
indexing='ij'), axis=-1).reshape(-1, 3)
output_grid = np.ones([res_per_part*s[0],
res_per_part*s[1],
res_per_part*s[2]], dtype=np.float32).reshape(-1)
mask = mask.astype(np.bool)
if self.overlap:
mask = np.stack([mask[:-1, :-1, :-1],
mask[:-1, :-1, 1:],
mask[:-1, 1:, :-1],
mask[:-1, 1:, 1:],
mask[1:, :-1, :-1],
mask[1:, :-1, 1:],
mask[1:, 1:, :-1],
mask[1:, 1:, 1:]], axis=-1)
if conservative:
mask = np.any(mask, axis=-1)
else:
mask = np.all(mask, axis=-1)
g = np.stack(np.meshgrid(np.arange(mask.shape[0]),
np.arange(mask.shape[1]),
np.arange(mask.shape[2]),
indexing='ij'), axis=-1).reshape(-1, 3)
g = g[:, 0]*(mask.shape[1]*mask.shape[2]) + g[:, 1]*mask.shape[2] + g[:, 2]
g_valid = g[mask.ravel()]
if self.overlap:
ijk = np.floor((xyz - self.xmin) / self.part_size * 2).astype(np.int32)
else:
ijk = np.floor((xyz - self.xmin +
0.5 * self.part_size) / self.part_size).astype(np.int32)
ijk_idx = (ijk[:, 0]*(mask.shape[1] * mask.shape[2]) +
ijk[:, 1]*mask.shape[2] + ijk[:, 2])
pt_mask = np.isin(ijk_idx, g_valid)
output_grid[pt_mask] = self.lvg.eval_points(feature_grid, xyz[pt_mask])
output_grid = output_grid.reshape(res_per_part*s[0], # pylint: disable=too-many-function-args
res_per_part*s[1],
res_per_part*s[2])
return output_grid
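To make the windowing in FullGridEncoderEvaluator above concrete, here is a small NumPy sketch of how the crop start indices relate to the output feature-grid resolution for the defaults gres=256 and in_grid_res=32 (illustration only):

import numpy as np

gres, in_grid_res = 256, 32
# Overlapping crops use a stride of half a window.
overlapping_starts = np.arange(0, gres - in_grid_res // 2, in_grid_res // 2)
print(overlapping_starts.shape[0])  # 15, i.e. a 15x15x15 feature grid,
                                    # matching the default `size` of LIGEvaluator.
# Non-overlapping crops use a stride of a full window.
non_overlapping_starts = np.arange(0, gres, in_grid_res)
print(non_overlapping_starts.shape[0])  # 8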
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""Utility modules for evaluating model from checkpoint.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import ast
import numpy as np
import tensorflow.compat.v1 as tf
from tensorflow.compat.v1.io import gfile
from tensorflow_graphics.projects.local_implicit_grid.core import implicit_nets as im
from tensorflow_graphics.projects.local_implicit_grid.core import local_implicit_grid_layer as lig
from tensorflow_graphics.projects.local_implicit_grid.core import model_g2g as g2g
from tensorflow_graphics.projects.local_implicit_grid.core import model_g2v as g2v
tf.logging.set_verbosity(tf.logging.ERROR)
def parse_param_file(param_file):
"""Parse parameter file for parameters."""
with gfile.GFile(param_file, 'r') as fh:
lines = fh.readlines()
d = {}
for l in lines:
l = l.rstrip('\n')
splits = l.split(':')
key = splits[0]
val_ = splits[1].strip()
if not val_:
val = ''
else:
try:
val = ast.literal_eval(val_)
except (ValueError, SyntaxError):
val = str(val_)
d[key] = val
return d
class RefinerEvaluator(object):
"""Load pretrained refiner and evaluate for a given code.
"""
def __init__(self, ckpt, codelen, dim=3, out_features=1, num_filters=128,
point_batch=20000):
self.ckpt = ckpt
self.codelen = codelen
self.dim = dim
self.out_features = out_features
self.num_filters = num_filters
self.point_batch = point_batch
self.graph = tf.Graph()
self._init_graph()
self.global_step_ = self.global_step.eval(session=self.sess)
def _init_graph(self):
"""Initialize computation graph for tensorflow.
"""
with self.graph.as_default():
self.refiner = im.ImNet(dim=self.dim,
in_features=self.codelen,
out_features=self.out_features,
num_filters=self.num_filters)
self.global_step = tf.get_variable('global_step', shape=[],
dtype=tf.int64)
self.pts_ph = tf.placeholder(tf.float32, shape=[self.point_batch, 3])
self.lat_ph = tf.placeholder(tf.float32, shape=[self.codelen])
lat = tf.broadcast_to(self.lat_ph[tf.newaxis],
[self.point_batch, self.codelen])
code = tf.concat((self.pts_ph, lat), axis=-1) # [pb, 3+c]
vals = self.refiner(code, training=False) # [pb, 1]
self.vals = tf.squeeze(vals, axis=1) # [pb]
self.saver = tf.train.Saver()
self.sess = tf.Session()
self.saver.restore(self.sess, self.ckpt)
def _get_grid_points(self, xmin, xmax, res):
x = np.linspace(xmin, xmax, res)
xyz = np.meshgrid(*tuple([x] * self.dim), indexing='ij')
xyz = np.stack(xyz, axis=-1)
xyz = xyz.reshape([-1, self.dim])
return xyz
def eval_points(self, lat, points):
"""Evaluate network at locations specified by points.
Args:
lat: [self.codelen,] np array, latent code.
points: [#v, self.dim] np array, point locations to evaluate.
Returns:
all_vals: [#v] np array, function values at locations.
"""
npt = points.shape[0]
npb = int(np.ceil(float(npt)/self.point_batch))
all_vals = np.zeros([npt], dtype=np.float32)
for idx in range(npb):
sid = int(idx * self.point_batch)
eid = int(min(npt, sid+self.point_batch))
pts = points[sid:eid]
pad_w = self.point_batch - (eid - sid)
pts = np.pad(pts, ((0, pad_w), (0, 0)), mode='constant')
with self.graph.as_default():
val = self.sess.run(self.vals, feed_dict={self.pts_ph: pts,
self.lat_ph: lat})
all_vals[sid:eid] = val[:(eid-sid)]
return all_vals
def eval_grid(self, lat, xmin=-1.0, xmax=1.0, res=64):
"""Evaluate network on a grid.
Args:
lat: [self.codelen,] np array, latent code.
xmin: float, minimum coordinate value for grid.
xmax: float, maximum coordinate value for grid.
res: int, resolution (per dimension) of grid.
Returns:
grid_val: [res, res, res] np.float32 array, grid of values from query.
"""
grid_points = self._get_grid_points(xmin=xmin, xmax=xmax, res=res)
point_val = self.eval_points(lat, grid_points)
grid_val = point_val.reshape([res, res, res])
return grid_val
class EncoderEvaluator(object):
"""Load pretrained grid encoder and evaluate single crops."""
def __init__(self,
ckpt,
in_grid_res=32,
encoder_nf=32,
codelen=32,
grid_batch=128):
"""Initialization function.
Args:
ckpt: str, path to checkpoint.
in_grid_res: int, resolution of grid to feed to encoder.
encoder_nf: int, number of base filters for encoder.
codelen: int, length of output latent code.
grid_batch: int, batch size of cut-out grid to evaluate at a time.
"""
self.ckpt = ckpt
self.codelen = codelen
self.grid_batch = grid_batch
self.in_grid_res = in_grid_res
self.encoder_nf = encoder_nf
self.graph = tf.Graph()
self._init_graph() # creates self.sess
def _init_graph(self):
"""Initialize computation graph for tensorflow.
"""
with self.graph.as_default():
self.encoder = g2v.GridEncoder(in_grid_res=self.in_grid_res,
num_filters=self.encoder_nf,
codelen=self.codelen,
name='g2v')
self.grid_ph = tf.placeholder(
tf.float32,
shape=[None, self.in_grid_res, self.in_grid_res, self.in_grid_res, 1])
self.lats = self.encoder(self.grid_ph, training=False) # [gb, codelen]
self.saver = tf.train.Saver()
self.sess = tf.Session()
self.saver.restore(self.sess, self.ckpt)
def eval_grid(self, grid):
"""Strided evaluation of full grid into feature grid.
Args:
grid: [batch, gres, gres, gres, 1] input feature grid.
Returns:
      codes: [batch, codelen] output feature grid.
"""
# initialize output feature grid
niters = int(np.ceil(grid.shape[0] / self.grid_batch))
codes = []
for idx in range(niters):
sid = idx * self.grid_batch
eid = min(sid+self.grid_batch, grid.shape[0])
c = self.sess.run(self.lats,
feed_dict={self.grid_ph: grid[sid:eid]})
codes.append(c)
codes = np.concatenate(codes, axis=0)
return codes.astype(np.float32)
class FullGridEncoderEvaluator(object):
"""Load pretrained grid encoder and evaluate a full input grid.
Performs windowed encoding and outputs an encoded feature grid.
"""
def __init__(self,
ckpt,
in_grid_res=32,
num_filters=32,
codelen=128,
grid_batch=128,
gres=256,
overlap=True):
"""Initialization function.
Args:
ckpt: str, path to checkpoint.
in_grid_res: int, resolution of grid to feed to encoder.
num_filters: int, number of base filters for encoder.
codelen: int, length of output latent code.
grid_batch: int, batch size of cut-out grid to evaluate at a time.
gres: int, resolution of the full grid.
overlap: bool, whether to do overlapping or non-overlapping cutout
evaluations.
"""
self.ckpt = ckpt
self.codelen = codelen
self.grid_batch = grid_batch
self.in_grid_res = in_grid_res
self.gres = gres
self.num_filters = num_filters
self.graph = tf.Graph()
self._init_graph()
self.global_step_ = self.global_step.eval(session=self.sess)
if overlap:
ijk = np.arange(0, gres-int(in_grid_res/2), int(in_grid_res/2))
self.out_grid_res = ijk.shape[0]
else:
ijk = np.arange(0, gres, in_grid_res)
self.out_grid_res = ijk.shape[0]
self.ijk = np.meshgrid(ijk, ijk, ijk, indexing='ij')
self.ijk = np.stack(self.ijk, axis=-1).reshape([-1, 3])
def _init_graph(self):
"""Initialize computation graph for tensorflow."""
with self.graph.as_default():
self.encoder = g2v.GridEncoder(
in_grid_res=self.in_grid_res,
num_filters=self.num_filters,
codelen=self.codelen,
name='g2v')
self.global_step = tf.get_variable(
'global_step', shape=[], dtype=tf.int64)
self.grid_ph = tf.placeholder(
tf.float32, shape=[self.gres, self.gres, self.gres])
self.start_ph = tf.placeholder(tf.int32, shape=[self.grid_batch, 3])
self.ingrid = self._batch_slice(self.grid_ph, self.start_ph,
self.in_grid_res, self.grid_batch)
self.ingrid = self.ingrid[..., tf.newaxis]
self.lats = self.encoder(self.ingrid, training=False) # [gb, codelen]
self.saver = tf.train.Saver()
self.sess = tf.Session()
self.saver.restore(self.sess, self.ckpt)
def _batch_slice(self, ary, start_ijk, w, batch_size):
"""Batched slicing of original grid.
Args:
ary: tensor, rank = 3.
start_ijk: [batch_size, 3] tensor, starting index.
w: width of cube to extract.
batch_size: int, batch size.
Returns:
batched_slices: [batch_size, w, w, w] tensor, batched slices of ary.
"""
batch_size = start_ijk.shape[0]
ijk = tf.range(w, dtype=tf.int32)
slice_idx = tf.meshgrid(ijk, ijk, ijk, indexing='ij')
slice_idx = tf.stack(
slice_idx, axis=-1) # [in_grid_res, in_grid_res, in_grid_res, 3]
slice_idx = tf.broadcast_to(slice_idx[tf.newaxis], [batch_size, w, w, w, 3])
offset = tf.broadcast_to(
start_ijk[:, tf.newaxis, tf.newaxis, tf.newaxis, :],
[batch_size, w, w, w, 3])
slice_idx += offset
# [batch_size, in_grid_res, in_grid_res, in_grid_res, 3]
batched_slices = tf.gather_nd(ary, slice_idx)
# [batch_size, in_grid_res, in_grid_res, in_grid_res]
return batched_slices
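# Illustrative note (comment only): with w=2 and start_ijk=[[0, 0, 0], [3, 5, 7]],
# slice_idx enumerates every offset of a 2x2x2 window shifted to each start
# corner, so the single tf.gather_nd call above returns the stacked sub-cubes
# ary[0:2, 0:2, 0:2] and ary[3:5, 5:7, 7:9].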
def eval_grid(self, grid):
"""Strided evaluation of full grid into feature grid.
Args:
grid: [gres, gres, gres] input feature grid.
Returns:
ogrid: [out_grid_res, out_grid_res, out_grid_res, codelen] output feature
grid.
"""
# initialize output feature grid
ogrid = np.zeros([self.ijk.shape[0], self.codelen])
niters = int(np.ceil(self.ijk.shape[0] / self.grid_batch))
for idx in range(niters):
sid = idx * self.grid_batch
eid = min(sid + self.grid_batch, self.ijk.shape[0])
start_ijk = self.ijk[sid:eid]
# pad if last iteration does not have a full batch
pad_w = self.grid_batch - start_ijk.shape[0]
start_ijk = np.pad(start_ijk, ((0, pad_w), (0, 0)), mode='constant')
lats = self.sess.run(
self.lats, feed_dict={
self.grid_ph: grid,
self.start_ph: start_ijk
})
ogrid[sid:eid] = lats[:eid - sid]
ogrid = ogrid.reshape(
[self.out_grid_res, self.out_grid_res, self.out_grid_res, self.codelen])
return ogrid.astype(np.float32)
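# --- Hedged usage sketch (added for illustration; not part of the original file).
# Shows how the full-grid encoder above could be driven end to end. The
# checkpoint path and grid values are placeholders, not real defaults.
def _example_encode_full_grid():
  """Minimal sketch: dense [256]^3 grid -> [15, 15, 15, 128] feature grid."""
  encoder = FullGridEncoderEvaluator(ckpt='/path/to/encoder_ckpt',  # placeholder path
                                     in_grid_res=32, codelen=128,
                                     gres=256, overlap=True)
  dense_grid = np.zeros([256, 256, 256], dtype=np.float32)  # e.g. an occupancy grid
  # With overlap=True, out_grid_res = ceil((256 - 16) / 16) = 15.
  return encoder.eval_grid(dense_grid)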
class LIGEvaluator(object):
"""Load pretrained grid refiner and evaluate a feature grid.
"""
def __init__(self,
ckpt,
size=(15, 15, 15),
in_features=32,
out_features=1,
x_location_max=1,
num_filters=32,
min_grid_value=(0., 0., 0.),
max_grid_value=(1., 1., 1.),
net_type='imnet',
method='linear',
point_batch=20000,
scope=''):
"""Initialization function.
Args:
ckpt: str, path to checkpoint.
size: list or tuple of ints, grid dimension in each dimension.
in_features: int, number of input channels.
out_features: int, number of output channels.
x_location_max: float, relative coordinate range for one voxel.
num_filters: int, number of filters for refiner.
min_grid_value: tuple, lower bound of query points.
max_grid_value: tuple, upper bound of query points.
net_type: str, one of occnet/deepsdf.
method: str, one of linear/nn.
point_batch: int, pseudo batch size for evaluating points.
scope: str, scope of imnet layer.
"""
self.dim = 3 # hardcode for dim = 3
self.ckpt = ckpt
self.size = size
self.x_location_max = x_location_max
self.num_filters = num_filters
self.in_features = in_features
self.out_features = out_features
self.net_type = net_type
self.method = method
self.point_batch = point_batch
self.scope = scope
self.min_grid_value = min_grid_value
self.max_grid_value = max_grid_value
self.graph = tf.Graph()
self._init_graph()
def _init_graph(self):
"""Initialize computation graph for tensorflow.
"""
with self.graph.as_default():
self.lig = lig.LocalImplicitGrid(size=self.size,
in_features=self.in_features,
out_features=self.out_features,
num_filters=self.num_filters,
net_type=self.net_type,
method=self.method,
x_location_max=self.x_location_max,
min_grid_value=self.min_grid_value,
max_grid_value=self.max_grid_value,
name='lig')
self.pts_ph = tf.placeholder(tf.float32, shape=[self.point_batch, 3])
self.latgrid_ph = tf.placeholder(tf.float32,
shape=[self.size[0],
self.size[1],
self.size[2],
self.in_features])
self.latgrid = self.latgrid_ph[tf.newaxis]
self.points = self.pts_ph[tf.newaxis]
vals = self.lig(self.latgrid, self.points, training=False) # [1,npts,1]
self.vals = tf.squeeze(vals, axis=[0, 2]) # [npts]
self.map_dict = self._get_var_mapping(model=self.lig)
self.saver = tf.train.Saver(self.map_dict)
self.sess = tf.Session()
self.saver.restore(self.sess, self.ckpt)
def _get_grid_points(self, xmin, xmax, res):
x = np.linspace(xmin, xmax, res)
xyz = np.meshgrid(*tuple([x] * self.dim), indexing='ij')
xyz = np.stack(xyz, axis=-1)
xyz = xyz.reshape([-1, self.dim])
return xyz
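# Illustrative note (comment only): _get_grid_points(0.0, 1.0, res=4) returns a
# [64, 3] array listing the 4 x 4 x 4 lattice of query locations in 'ij' order.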
def eval_points(self, latgrid, points):
"""Evaluate network at locations specified by points.
Args:
latgrid: [size0, size1, size2, self.codelen] np array, latent code.
points: [#v, self.dim] np array, point locations to evaluate.
Returns:
all_vals: [#v] np array, function values at locations.
"""
npt = points.shape[0]
npb = int(np.ceil(float(npt)/self.point_batch))
all_vals = np.zeros([npt], dtype=np.float32)
for idx in range(npb):
sid = int(idx * self.point_batch)
eid = int(min(npt, sid+self.point_batch))
pts = points[sid:eid]
pad_w = self.point_batch - (eid - sid)
if pts.shape[0] < self.point_batch:
pts_pad = np.tile(pts[0:1], (pad_w, 1))
# repeat the first point in the batch
pts = np.concatenate([pts, pts_pad], axis=0)
with self.graph.as_default():
val = self.sess.run(self.vals, feed_dict={self.pts_ph: pts,
self.latgrid_ph: latgrid})
all_vals[sid:eid] = val[:(eid-sid)]
return all_vals
def eval_grid(self, latgrid, xmin=0.0, xmax=1.0, res=128):
"""Evaluate network on a grid.
Args:
latgrid: [size0, size1, size2, self.codelen] np array, latent code.
xmin: float, minimum coordinate value for grid.
xmax: float, maximum coordinate value for grid.
res: int, resolution (per dimension) of grid.
Returns:
grid_val: [res, res, res] np.float32 array, grid of values from query.
"""
grid_points = self._get_grid_points(xmin=xmin, xmax=xmax, res=res)
point_val = self.eval_points(latgrid, grid_points)
grid_val = point_val.reshape([res, res, res])
return grid_val
def _get_var_mapping(self, model):
vars_ = model.trainable_variables
varnames = [v.name for v in vars_]
# Strip the 'lig/' name prefix and the ':0' tensor suffix so the names line up
# with the variable names stored in the checkpoint.
varnames = [self.scope+v.replace('lig/', '').strip(':0') for v in varnames]
map_dict = dict(zip(varnames, vars_))
return map_dict
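# --- Hedged usage sketch (added for illustration; not part of the original file).
# Decodes a latent grid back into a dense scalar field with the class above.
# The checkpoint path, grid size and channel count are illustrative values.
def _example_decode_latent_grid(latent_grid):
  """Minimal sketch: latent_grid [15, 15, 15, 32] -> [128, 128, 128] values."""
  decoder = LIGEvaluator(ckpt='/path/to/refiner_ckpt',  # placeholder path
                         size=(15, 15, 15), in_features=32, out_features=1,
                         net_type='imnet', method='linear')
  return decoder.eval_grid(latent_grid, xmin=0.0, xmax=1.0, res=128)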
class UNetEvaluator(object):
"""Load pretrained UNet for generating feature grid for coarse voxel inputs."""
def __init__(self,
ckpt,
in_grid_res,
out_grid_res,
num_filters,
max_filters,
out_features,
sph_norm=0.):
self.ckpt = ckpt
self.in_grid_res = in_grid_res
self.out_grid_res = out_grid_res
self.num_filters = num_filters
self.max_filters = max_filters
self.out_features = out_features
self.sph_norm = sph_norm
self.graph = tf.Graph()
self._init_graph()
def _init_graph(self):
"""Initialize computation graph for tensorflow."""
with self.graph.as_default():
self.unet = g2g.UNet3D(in_grid_res=self.in_grid_res,
out_grid_res=self.out_grid_res,
num_filters=self.num_filters,
max_filters=self.max_filters,
out_features=self.out_features)
self.input_grid_ph = tf.placeholder(
tf.float32,
[None, None, None])
self.input_grid = self.input_grid_ph[tf.newaxis, ..., tf.newaxis]
self.feat_grid = self.unet(self.input_grid)
self.saver = tf.train.Saver()
self.sess = tf.Session()
self.saver.restore(self.sess, self.ckpt)
def eval_grid(self, input_grid):
"""Evaluate input grid (no batching).
Args:
input_grid: [in_grid_res, in_grid_res, in_grid_res] tensor.
Returns:
[out_grid_res, out_grid_res, out_grid_res, out_features]
"""
with self.graph.as_default():
feat_grid = self.sess.run(self.feat_grid,
feed_dict={self.input_grid_ph: input_grid})
feat_grid = feat_grid[0]
if self.sph_norm > 0:
feat_grid = (feat_grid /
np.linalg.norm(feat_grid, axis=-1, keepdims=True) *
self.sph_norm)
return feat_grid
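# --- Hedged usage sketch (added for illustration; not part of the original file).
# Refines a coarse voxel grid into a feature grid with the UNet above. All
# constructor values are illustrative and must match those used in training.
def _example_refine_coarse_voxels(coarse_grid):
  """Minimal sketch: [in_grid_res]^3 voxels -> [out_grid_res]^3 x out_features."""
  refiner = UNetEvaluator(ckpt='/path/to/unet_ckpt',  # placeholder path
                          in_grid_res=32, out_grid_res=32,
                          num_filters=32, max_filters=512,
                          out_features=32, sph_norm=0.)
  return refiner.eval_grid(coarse_grid)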
class SparseLIGEvaluator(object):
"""Evaluate sparse encoded feature grids."""
def __init__(self, ckpt, num_filters, codelen, origin, grid_shape,
part_size, overlap=True, scope=''):
self.scope = scope
self.overlap = overlap
self.ckpt = ckpt
self.num_filters = num_filters
self.codelen = codelen
if overlap:
self.res = (np.array(grid_shape) - 1) / 2.0
else:
self.res = np.array(grid_shape) - 1
self.res = self.res.astype(np.int32)
self.xmin = np.array(origin)
self.xmax = self.xmin + self.res * part_size
self.part_size = part_size
self.lvg = LIGEvaluator(ckpt=ckpt,
size=grid_shape,
in_features=codelen,
out_features=1,
x_location_max=2-float(overlap),
num_filters=num_filters,
min_grid_value=self.xmin,
max_grid_value=self.xmax,
net_type='imnet',
method='linear' if overlap else 'nn',
scope=scope)
def evaluate_feature_grid(self, feature_grid, mask, res_per_part=4,
conservative=False):
"""Evaluate feature grid.
Args:
feature_grid: [*grid_size, codelen] np.array, feature grid to evaluate.
mask: [*grid_size] bool np.array, mask for feature locations.
res_per_part: int, resolution of output evaluation per part.
conservative: bool, whether to do conservative evaluations.
If true, evaluates a cell if either neighbor is masked. Else, evaluates a
cell if all neighbors are masked.
Returns:
output grid.
"""
# setup grid
eps = 1e-6
s = self.res
l = [np.linspace(self.xmin[i]+eps, self.xmax[i]-eps, res_per_part*s[i])
for i in range(3)]
xyz = np.stack(np.meshgrid(l[0], l[1], l[2],
indexing='ij'), axis=-1).reshape(-1, 3)
output_grid = np.ones([res_per_part*s[0],
res_per_part*s[1],
res_per_part*s[2]], dtype=np.float32).reshape(-1)
mask = mask.astype(bool)
if self.overlap:
mask = np.stack([mask[:-1, :-1, :-1],
mask[:-1, :-1, 1:],
mask[:-1, 1:, :-1],
mask[:-1, 1:, 1:],
mask[1:, :-1, :-1],
mask[1:, :-1, 1:],
mask[1:, 1:, :-1],
mask[1:, 1:, 1:]], axis=-1)
if conservative:
mask = np.any(mask, axis=-1)
else:
mask = np.all(mask, axis=-1)
g = np.stack(np.meshgrid(np.arange(mask.shape[0]),
np.arange(mask.shape[1]),
np.arange(mask.shape[2]),
indexing='ij'), axis=-1).reshape(-1, 3)
g = g[:, 0]*(mask.shape[1]*mask.shape[2]) + g[:, 1]*mask.shape[2] + g[:, 2]
g_valid = g[mask.ravel()]
if self.overlap:
ijk = np.floor((xyz - self.xmin) / self.part_size * 2).astype(np.int32)
else:
ijk = np.floor((xyz - self.xmin +
0.5 * self.part_size) / self.part_size).astype(np.int32)
ijk_idx = (ijk[:, 0]*(mask.shape[1] * mask.shape[2]) +
ijk[:, 1]*mask.shape[2] + ijk[:, 2])
pt_mask = np.isin(ijk_idx, g_valid)
output_grid[pt_mask] = self.lvg.eval_points(feature_grid, xyz[pt_mask])
output_grid = output_grid.reshape(res_per_part*s[0], # pylint: disable=too-many-function-args
res_per_part*s[1],
res_per_part*s[2])
return output_grid
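# --- Hedged usage sketch (added for illustration; not part of the original file).
# Puts the sparse decoder above to work: only cells whose neighbouring latent
# codes are present in `occupancy_mask` are evaluated. Origin, part size and
# the checkpoint path are placeholder assumptions for this sketch.
def _example_eval_sparse_feature_grid(feature_grid, occupancy_mask):
  """Minimal sketch: sparse [*grid_shape, codelen] codes -> dense value grid."""
  evaluator = SparseLIGEvaluator(ckpt='/path/to/refiner_ckpt',  # placeholder path
                                 num_filters=32, codelen=feature_grid.shape[-1],
                                 origin=(0., 0., 0.),
                                 grid_shape=feature_grid.shape[:3],
                                 part_size=0.25, overlap=True)
  return evaluator.evaluate_feature_grid(feature_grid, occupancy_mask,
                                         res_per_part=4)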
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/nn/metric/tests/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/util/test_case.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit test base class.
This class is intended to be used as the unit test base class in TensorFlow
Graphics. It implements new methods on top of the TensorFlow TestCase class
that are used to simplify the code and check for various kinds of failure.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import warnings
from absl import flags
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.util import tfg_flags
FLAGS = flags.FLAGS
def _max_error(arrays1, arrays2):
"""Computes maximum elementwise gap between two lists of ndarrays.
Computes the maximum elementwise gap between two lists with the same length,
of arrays with the same shape.
Args:
arrays1: a lists of np.ndarrays.
arrays2: a lists of np.ndarrays of the same shape as arrays1.
Returns:
The maximum elementwise absolute difference between the two lists of arrays.
"""
error = 0
for array1, array2 in zip(arrays1, arrays2):
if array1.size or array2.size: # Handle zero size ndarrays correctly
error = np.maximum(error, np.fabs(array1 - array2).max())
return error
class TestCase(parameterized.TestCase, tf.test.TestCase):
"""Test case class implementing extra test functionalities."""
def setUp(self): # pylint: disable=invalid-name
"""Sets the seed for tensorflow and numpy."""
super(TestCase, self).setUp()
try:
seed = flags.FLAGS.test_random_seed
except flags.UnparsedFlagAccessError:
seed = 301 # Default seed in case test_random_seed is not defined.
tf.compat.v1.set_random_seed(seed)
np.random.seed(seed)
FLAGS[tfg_flags.TFG_ADD_ASSERTS_TO_GRAPH].value = True
def _remove_dynamic_shapes(self, shapes):
for s in shapes:
if None in s:
return None
return shapes
def _compute_gradient_error(self, x, y, x_init_value, delta=1e-6):
"""Computes the gradient error.
Args:
x: a tensor or list of tensors.
y: a tensor.
x_init_value: a numpy array of the same shape as "x" representing the
initial value of x.
delta: (optional) the amount of perturbation.
Returns:
A tuple (max_error, row, column), with max_error the maximum error between
the two Jacobians, and row/column the position of said maximum error.
"""
x_shape = x.shape.as_list()
y_shape = y.shape.as_list()
with self.cached_session():
grad = tf.compat.v1.test.compute_gradient(x, x_shape, y, y_shape,
x_init_value, delta)
if isinstance(grad, tuple):
grad = [grad]
error = 0
row_max_error = 0
column_max_error = 0
for j_t, j_n in grad:
if j_t.size or j_n.size: # Handle zero size tensors correctly
diff = np.fabs(j_t - j_n)
max_error = np.maximum(error, diff.max())
row_max_error, column_max_error = np.unravel_index(
diff.argmax(), diff.shape)
return max_error, row_max_error, column_max_error
def _create_placeholders_from_shapes(self,
shapes,
dtypes=None,
sparse_tensors=None):
"""Creates a list of placeholders based on a list of shapes.
Args:
shapes: A tuple or list of the input shapes.
dtypes: A list of input types.
sparse_tensors: A `bool` list denoting if placeholder is a SparseTensor.
This is ignored in eager mode - in eager execution, only dense
placeholders will be created.
Returns:
A list of placeholders.
"""
if dtypes is None:
dtypes = [tf.float32] * len(shapes)
if sparse_tensors is None:
sparse_tensors = [False] * len(shapes)
if tf.executing_eagerly():
placeholders = [
tf.compat.v1.placeholder_with_default(
tf.zeros(shape=shape, dtype=dtype), shape=shape)
for shape, dtype in zip(shapes, dtypes)
]
else:
placeholders = [
tf.compat.v1.sparse.placeholder(dtype, shape=shape)
if is_sparse else tf.compat.v1.placeholder(shape=shape, dtype=dtype)
for shape, dtype, is_sparse in zip(shapes, dtypes, sparse_tensors)
]
return placeholders
def _tile_tensors(self, tiling, tensors):
"""Tiles a set of tensors using the tiling information.
Args:
tiling: A list of integers defining how to tile the tensors.
tensors: A list of tensors to tile.
Returns:
A list of tiled tensors.
"""
tensors = [
np.tile(tensor, tiling + [1] * len(np.array(tensor).shape))
for tensor in tensors
]
return tensors
def assert_exception_is_not_raised(self,
func,
shapes,
dtypes=None,
sparse_tensors=None,
**kwargs):
"""Runs the function to make sure an exception is not raised.
Args:
func: A function to execute.
shapes: A tuple or list of the input shapes.
dtypes: A list of input types.
sparse_tensors: A list of `bool` indicating if the inputs are
SparseTensors. Defaults to all `False`. This is used for creating
SparseTensor placeholders in graph mode.
**kwargs: A dict of keyword arguments to be passed to the function.
"""
if tf.executing_eagerly() and shapes:
# If a shape is given in eager mode, the tensor will be initialized with
# zeros, which can make some range checks fail for certain functions.
# But if only kwargs are passed and shapes is empty, this function
# still should run correctly.
return
placeholders = self._create_placeholders_from_shapes(
shapes=shapes, dtypes=dtypes, sparse_tensors=sparse_tensors)
try:
func(*placeholders, **kwargs)
except Exception as e: # pylint: disable=broad-except
self.fail("Exception raised: %s" % str(e))
def assert_exception_is_raised(self,
func,
error_msg,
shapes,
dtypes=None,
sparse_tensors=None,
**kwargs):
"""Runs the function to make sure an exception is raised.
Args:
func: A function to execute.
error_msg: The error message of the exception.
shapes: A tuple or list of the input shapes.
dtypes: A list of input types.
sparse_tensors: A list of `bool` indicating if the inputs are
SparseTensors. Defaults to all `False`. This is used for creating
SparseTensor placeholders in graph mode.
**kwargs: A dict of keyword arguments to be passed to the function.
"""
if tf.executing_eagerly():
# If shapes is an empty list, we can continue with the test. If shapes
# has None values, we should return.
shapes = self._remove_dynamic_shapes(shapes)
if shapes is None:
return
placeholders = self._create_placeholders_from_shapes(
shapes=shapes, dtypes=dtypes, sparse_tensors=sparse_tensors)
with self.assertRaisesRegexp(ValueError, error_msg):
func(*placeholders, **kwargs)
def assert_jacobian_is_correct(self, x, x_init, y, atol=1e-6, delta=1e-6):
"""Tests that the gradient error of y=f(x) is small.
Args:
x: A tensor.
x_init: A numpy array containing the values at which to estimate the
gradients of y.
y: A tensor.
atol: Maximum absolute tolerance in gradient error.
delta: The amount of perturbation.
"""
warnings.warn((
"assert_jacobian_is_correct is deprecated and might get "
"removed in a future version please use assert_jacobian_is_correct_fn"),
DeprecationWarning)
if tf.executing_eagerly():
self.skipTest(reason="Graph mode only test")
max_error, _, _ = self._compute_gradient_error(x, y, x_init, delta)
self.assertLessEqual(max_error, atol)
def assert_jacobian_is_correct_fn(self, f, x, atol=1e-6, delta=1e-6):
"""Tests that the gradient error of y=f(x) is small.
Args:
f: the function.
x: A list of arguments for the function
atol: Maximum absolute tolerance in gradient error.
delta: The amount of perturbation.
"""
# pylint: disable=no-value-for-parameter
if tf.executing_eagerly():
max_error = _max_error(*tf.test.compute_gradient(f, x, delta))
else:
with self.cached_session():
max_error = _max_error(*tf.test.compute_gradient(f, x, delta))
# pylint: enable=no-value-for-parameter
self.assertLessEqual(max_error, atol)
def assert_jacobian_is_finite(self, x, x_init, y):
"""Tests that the Jacobian only contains valid values.
The analytical gradients and numerical ones are expected to differ at points
where y is not smooth. This function can be used to check that the
analytical gradient is not NaN nor Inf.
Args:
x: A tensor.
x_init: A numpy array containing the values at which to estimate the
gradients of y.
y: A tensor.
"""
warnings.warn((
"assert_jacobian_is_finite is deprecated and might get "
"removed in a future version please use assert_jacobian_is_finite_fn"),
DeprecationWarning)
if tf.executing_eagerly():
self.skipTest(reason="Graph mode only test")
x_shape = x.shape.as_list()
y_shape = y.shape.as_list()
with tf.compat.v1.Session():
gradient = tf.compat.v1.test.compute_gradient(
x, x_shape, y, y_shape, x_init_value=x_init)
theoretical_gradient = gradient[0][0]
self.assertFalse(
np.isnan(theoretical_gradient).any() or
np.isinf(theoretical_gradient).any())
def assert_jacobian_is_finite_fn(self, f, x):
"""Tests that the Jacobian only contains valid values.
The analytical gradients and numerical ones are expected to differ at points
where f(x) is not smooth. This function can be used to check that the
analytical gradient is not 'NaN' nor 'Inf'.
Args:
f: the function.
x: A list of arguments for the function
"""
if tf.executing_eagerly():
theoretical_gradient, _ = tf.compat.v2.test.compute_gradient(f, x)
else:
with self.cached_session():
theoretical_gradient, _ = tf.compat.v2.test.compute_gradient(f, x)
self.assertNotIn(
True, [
np.isnan(element).any() or np.isinf(element).any()
for element in theoretical_gradient
],
msg="nan or inf elements found in theoretical jacobian.")
def assert_output_is_correct(self,
func,
test_inputs,
test_outputs,
rtol=1e-3,
atol=1e-6,
tile=True):
"""Tests that the function gives the correct result.
Args:
func: A function to execute.
test_inputs: A tuple or list of test inputs.
test_outputs: A tuple or list of test outputs against which the result of
calling `func` on `test_inputs` will be compared to.
rtol: The relative tolerance used during the comparison.
atol: The absolute tolerance used during the comparison.
tile: A `bool` indicating whether or not to automatically tile the test
inputs and outputs.
"""
if tile:
# Creates a rank 4 list of values between 1 and 10.
tensor_tile = np.random.randint(1, 10, size=np.random.randint(4)).tolist()
test_inputs = self._tile_tensors(tensor_tile, test_inputs)
test_outputs = self._tile_tensors(tensor_tile, test_outputs)
test_outputs = [
tf.convert_to_tensor(value=output) for output in test_outputs
]
test_outputs = test_outputs[0] if len(test_outputs) == 1 else test_outputs
self.assertAllClose(test_outputs, func(*test_inputs), rtol=rtol, atol=atol)
def assert_tf_lite_convertible(self,
func,
shapes,
dtypes=None,
test_inputs=None):
"""Runs the tf-lite converter to make sure the function can be exported.
Args:
func: A function to execute with tf-lite.
shapes: A tuple or list of input shapes.
dtypes: A list of input types.
test_inputs: A tuple or list of inputs. If not provided the test inputs
will be randomly generated.
"""
if tf.executing_eagerly():
# Currently TFLite conversion is not supported in eager mode.
self.skipTest(reason="Graph mode only test")
# Generate graph with the function given as input.
in_tensors = self._create_placeholders_from_shapes(shapes, dtypes)
out_tensors = func(*in_tensors)
if not isinstance(out_tensors, (list, tuple)):
out_tensors = [out_tensors]
with tf.compat.v1.Session() as sess:
try:
sess.run(tf.compat.v1.global_variables_initializer())
# Convert to a TFLite model.
converter = tf.compat.v1.lite.TFLiteConverter.from_session(
sess, in_tensors, out_tensors)
tflite_model = converter.convert()
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
# If no test inputs provided then randomly generate inputs.
if test_inputs is None:
test_inputs = [
np.array(np.random.sample(shape), dtype=np.float32)
for shape in shapes
]
else:
test_inputs = [
np.array(test, dtype=np.float32) for test in test_inputs
]
# Evaluate function using TensorFlow.
feed_dict = dict(zip(in_tensors, test_inputs))
test_outputs = sess.run(out_tensors, feed_dict)
# Set tensors for the TFLite model.
input_details = interpreter.get_input_details()
for i, test_input in enumerate(test_inputs):
index = input_details[i]["index"]
interpreter.set_tensor(index, test_input)
# Run TFLite model.
interpreter.invoke()
# Get tensors from the TFLite model and compare with TensorFlow.
output_details = interpreter.get_output_details()
for o, test_output in enumerate(test_outputs):
index = output_details[o]["index"]
self.assertAllClose(test_output, interpreter.get_tensor(index))
except Exception as e: # pylint: disable=broad-except
self.fail("Exception raised: %s" % str(e))
def main(argv=None):
"""Main function."""
tf.test.main(argv)
# The util functions or classes are not exported.
__all__ = []
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Unit test base class.
This class is intended to be used as the unit test base class in TensorFlow
Graphics. It implements new methods on top of the TensorFlow TestCase class
that are used to simplify the code and check for various kinds of failure.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import warnings
from absl import flags
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.util import tfg_flags
FLAGS = flags.FLAGS
def _max_error(arrays1, arrays2):
"""Computes maximum elementwise gap between two lists of ndarrays.
Computes the maximum elementwise gap between two lists with the same length,
of arrays with the same shape.
Args:
arrays1: a lists of np.ndarrays.
arrays2: a lists of np.ndarrays of the same shape as arrays1.
Returns:
The maximum elementwise absolute difference between the two lists of arrays.
"""
error = 0
for array1, array2 in zip(arrays1, arrays2):
if array1.size or array2.size: # Handle zero size ndarrays correctly
error = np.maximum(error, np.fabs(array1 - array2).max())
return error
class TestCase(parameterized.TestCase, tf.test.TestCase):
"""Test case class implementing extra test functionalities."""
def setUp(self): # pylint: disable=invalid-name
"""Sets the seed for tensorflow and numpy."""
super(TestCase, self).setUp()
try:
seed = flags.FLAGS.test_random_seed
except flags.UnparsedFlagAccessError:
seed = 301 # Default seed in case test_random_seed is not defined.
tf.compat.v1.set_random_seed(seed)
np.random.seed(seed)
FLAGS[tfg_flags.TFG_ADD_ASSERTS_TO_GRAPH].value = True
def _remove_dynamic_shapes(self, shapes):
for s in shapes:
if None in s:
return None
return shapes
def _compute_gradient_error(self, x, y, x_init_value, delta=1e-6):
"""Computes the gradient error.
Args:
x: a tensor or list of tensors.
y: a tensor.
x_init_value: a numpy array of the same shape as "x" representing the
initial value of x.
delta: (optional) the amount of perturbation.
Returns:
A tuple (max_error, row, column), with max_error the maximum error between
the two Jacobians, and row/column the position of said maximum error.
"""
x_shape = x.shape.as_list()
y_shape = y.shape.as_list()
with self.cached_session():
grad = tf.compat.v1.test.compute_gradient(x, x_shape, y, y_shape,
x_init_value, delta)
if isinstance(grad, tuple):
grad = [grad]
error = 0
row_max_error = 0
column_max_error = 0
for j_t, j_n in grad:
if j_t.size or j_n.size: # Handle zero size tensors correctly
diff = np.fabs(j_t - j_n)
max_error = np.maximum(error, diff.max())
row_max_error, column_max_error = np.unravel_index(
diff.argmax(), diff.shape)
return max_error, row_max_error, column_max_error
def _create_placeholders_from_shapes(self,
shapes,
dtypes=None,
sparse_tensors=None):
"""Creates a list of placeholders based on a list of shapes.
Args:
shapes: A tuple or list of the input shapes.
dtypes: A list of input types.
sparse_tensors: A `bool` list denoting if placeholder is a SparseTensor.
This is ignored in eager mode - in eager execution, only dense
placeholders will be created.
Returns:
A list of placeholders.
"""
if dtypes is None:
dtypes = [tf.float32] * len(shapes)
if sparse_tensors is None:
sparse_tensors = [False] * len(shapes)
if tf.executing_eagerly():
placeholders = [
tf.compat.v1.placeholder_with_default(
tf.zeros(shape=shape, dtype=dtype), shape=shape)
for shape, dtype in zip(shapes, dtypes)
]
else:
placeholders = [
tf.compat.v1.sparse.placeholder(dtype, shape=shape)
if is_sparse else tf.compat.v1.placeholder(shape=shape, dtype=dtype)
for shape, dtype, is_sparse in zip(shapes, dtypes, sparse_tensors)
]
return placeholders
def _tile_tensors(self, tiling, tensors):
"""Tiles a set of tensors using the tiling information.
Args:
tiling: A list of integers defining how to tile the tensors.
tensors: A list of tensors to tile.
Returns:
A list of tiled tensors.
"""
tensors = [
np.tile(tensor, tiling + [1] * len(np.array(tensor).shape))
for tensor in tensors
]
return tensors
def assert_exception_is_not_raised(self,
func,
shapes,
dtypes=None,
sparse_tensors=None,
**kwargs):
"""Runs the function to make sure an exception is not raised.
Args:
func: A function to execute.
shapes: A tuple or list of the input shapes.
dtypes: A list of input types.
sparse_tensors: A list of `bool` indicating if the inputs are
SparseTensors. Defaults to all `False`. This is used for creating
SparseTensor placeholders in graph mode.
**kwargs: A dict of keyword arguments to be passed to the function.
"""
if tf.executing_eagerly() and shapes:
# If a shape is given in eager mode, the tensor will be initialized with
# zeros, which can make some range checks fail for certain functions.
# But if only kwargs are passed and shapes is empty, this function
# still should run correctly.
return
placeholders = self._create_placeholders_from_shapes(
shapes=shapes, dtypes=dtypes, sparse_tensors=sparse_tensors)
try:
func(*placeholders, **kwargs)
except Exception as e: # pylint: disable=broad-except
self.fail("Exception raised: %s" % str(e))
def assert_exception_is_raised(self,
func,
error_msg,
shapes,
dtypes=None,
sparse_tensors=None,
**kwargs):
"""Runs the function to make sure an exception is raised.
Args:
func: A function to execute.
error_msg: The error message of the exception.
shapes: A tuple or list of the input shapes.
dtypes: A list of input types.
sparse_tensors: A list of `bool` indicating if the inputs are
SparseTensors. Defaults to all `False`. This is used for creating
SparseTensor placeholders in graph mode.
**kwargs: A dict of keyword arguments to be passed to the function.
"""
if tf.executing_eagerly():
# If shapes is an empty list, we can continue with the test. If shapes
# has None values, we should return.
shapes = self._remove_dynamic_shapes(shapes)
if shapes is None:
return
placeholders = self._create_placeholders_from_shapes(
shapes=shapes, dtypes=dtypes, sparse_tensors=sparse_tensors)
with self.assertRaisesRegexp(ValueError, error_msg):
func(*placeholders, **kwargs)
def assert_jacobian_is_correct(self, x, x_init, y, atol=1e-6, delta=1e-6):
"""Tests that the gradient error of y=f(x) is small.
Args:
x: A tensor.
x_init: A numpy array containing the values at which to estimate the
gradients of y.
y: A tensor.
atol: Maximum absolute tolerance in gradient error.
delta: The amount of perturbation.
"""
warnings.warn((
"assert_jacobian_is_correct is deprecated and might get "
"removed in a future version please use assert_jacobian_is_correct_fn"),
DeprecationWarning)
if tf.executing_eagerly():
self.skipTest(reason="Graph mode only test")
max_error, _, _ = self._compute_gradient_error(x, y, x_init, delta)
self.assertLessEqual(max_error, atol)
def assert_jacobian_is_correct_fn(self, f, x, atol=1e-6, delta=1e-6):
"""Tests that the gradient error of y=f(x) is small.
Args:
f: the function.
x: A list of arguments for the function
atol: Maximum absolute tolerance in gradient error.
delta: The amount of perturbation.
"""
# pylint: disable=no-value-for-parameter
if tf.executing_eagerly():
max_error = _max_error(*tf.test.compute_gradient(f, x, delta))
else:
with self.cached_session():
max_error = _max_error(*tf.test.compute_gradient(f, x, delta))
# pylint: enable=no-value-for-parameter
self.assertLessEqual(max_error, atol)
def assert_jacobian_is_finite(self, x, x_init, y):
"""Tests that the Jacobian only contains valid values.
The analytical gradients and numerical ones are expected to differ at points
where y is not smooth. This function can be used to check that the
analytical gradient is not NaN nor Inf.
Args:
x: A tensor.
x_init: A numpy array containing the values at which to estimate the
gradients of y.
y: A tensor.
"""
warnings.warn((
"assert_jacobian_is_finite is deprecated and might get "
"removed in a future version please use assert_jacobian_is_finite_fn"),
DeprecationWarning)
if tf.executing_eagerly():
self.skipTest(reason="Graph mode only test")
x_shape = x.shape.as_list()
y_shape = y.shape.as_list()
with tf.compat.v1.Session():
gradient = tf.compat.v1.test.compute_gradient(
x, x_shape, y, y_shape, x_init_value=x_init)
theoretical_gradient = gradient[0][0]
self.assertFalse(
np.isnan(theoretical_gradient).any() or
np.isinf(theoretical_gradient).any())
def assert_jacobian_is_finite_fn(self, f, x):
"""Tests that the Jacobian only contains valid values.
The analytical gradients and numerical ones are expected to differ at points
where f(x) is not smooth. This function can be used to check that the
analytical gradient is not 'NaN' nor 'Inf'.
Args:
f: the function.
x: A list of arguments for the function
"""
if tf.executing_eagerly():
theoretical_gradient, _ = tf.compat.v2.test.compute_gradient(f, x)
else:
with self.cached_session():
theoretical_gradient, _ = tf.compat.v2.test.compute_gradient(f, x)
self.assertNotIn(
True, [
np.isnan(element).any() or np.isinf(element).any()
for element in theoretical_gradient
],
msg="nan or inf elements found in theoretical jacobian.")
def assert_output_is_correct(self,
func,
test_inputs,
test_outputs,
rtol=1e-3,
atol=1e-6,
tile=True):
"""Tests that the function gives the correct result.
Args:
func: A function to execute.
test_inputs: A tuple or list of test inputs.
test_outputs: A tuple or list of test outputs against which the result of
calling `func` on `test_inputs` will be compared to.
rtol: The relative tolerance used during the comparison.
atol: The absolute tolerance used during the comparison.
tile: A `bool` indicating whether or not to automatically tile the test
inputs and outputs.
"""
if tile:
# Creates a rank 4 list of values between 1 and 10.
tensor_tile = np.random.randint(1, 10, size=np.random.randint(4)).tolist()
test_inputs = self._tile_tensors(tensor_tile, test_inputs)
test_outputs = self._tile_tensors(tensor_tile, test_outputs)
test_outputs = [
tf.convert_to_tensor(value=output) for output in test_outputs
]
test_outputs = test_outputs[0] if len(test_outputs) == 1 else test_outputs
self.assertAllClose(test_outputs, func(*test_inputs), rtol=rtol, atol=atol)
def assert_tf_lite_convertible(self,
func,
shapes,
dtypes=None,
test_inputs=None):
"""Runs the tf-lite converter to make sure the function can be exported.
Args:
func: A function to execute with tf-lite.
shapes: A tuple or list of input shapes.
dtypes: A list of input types.
test_inputs: A tuple or list of inputs. If not provided the test inputs
will be randomly generated.
"""
if tf.executing_eagerly():
# Currently TFLite conversion is not supported in eager mode.
self.skipTest(reason="Graph mode only test")
# Generate graph with the function given as input.
in_tensors = self._create_placeholders_from_shapes(shapes, dtypes)
out_tensors = func(*in_tensors)
if not isinstance(out_tensors, (list, tuple)):
out_tensors = [out_tensors]
with tf.compat.v1.Session() as sess:
try:
sess.run(tf.compat.v1.global_variables_initializer())
# Convert to a TFLite model.
converter = tf.compat.v1.lite.TFLiteConverter.from_session(
sess, in_tensors, out_tensors)
tflite_model = converter.convert()
# Load TFLite model and allocate tensors.
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
# If no test inputs provided then randomly generate inputs.
if test_inputs is None:
test_inputs = [
np.array(np.random.sample(shape), dtype=np.float32)
for shape in shapes
]
else:
test_inputs = [
np.array(test, dtype=np.float32) for test in test_inputs
]
# Evaluate function using TensorFlow.
feed_dict = dict(zip(in_tensors, test_inputs))
test_outputs = sess.run(out_tensors, feed_dict)
# Set tensors for the TFLite model.
input_details = interpreter.get_input_details()
for i, test_input in enumerate(test_inputs):
index = input_details[i]["index"]
interpreter.set_tensor(index, test_input)
# Run TFLite model.
interpreter.invoke()
# Get tensors from the TFLite model and compare with TensorFlow.
output_details = interpreter.get_output_details()
for o, test_output in enumerate(test_outputs):
index = output_details[o]["index"]
self.assertAllClose(test_output, interpreter.get_tensor(index))
except Exception as e: # pylint: disable=broad-except
self.fail("Exception raised: %s" % str(e))
def main(argv=None):
"""Main function."""
tf.test.main(argv)
# The util functions or classes are not exported.
__all__ = []
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/geometry/representation/mesh/tests/utils_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for utils."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import parameterized
import numpy as np
from tensorflow_graphics.geometry.representation.mesh import utils
from tensorflow_graphics.util import test_case
class UtilsTest(test_case.TestCase):
@parameterized.parameters(
(np.array(((0, 1, 2),)), [[0, 1], [0, 2], [1, 2]]),
(np.array(
((0, 1, 2), (0, 1, 3))), [[0, 1], [0, 2], [0, 3], [1, 2], [1, 3]]),
)
def test_extract_undirected_edges_from_triangular_mesh_preset(
self, test_inputs, test_outputs):
"""Tests that the output contain the expected edges."""
edges = utils.extract_unique_edges_from_triangular_mesh(
test_inputs, directed_edges=False)
edges.sort(axis=1) # Ensure edge tuple ordered by first vertex.
self.assertEqual(sorted(edges.tolist()), test_outputs)
@parameterized.parameters(
(np.array(
((0, 1, 2),)), [[0, 1], [0, 2], [1, 0], [1, 2], [2, 0], [2, 1]]),
(np.array(
((0, 1, 2), (0, 1, 3))), [[0, 1], [0, 2], [0, 3], [1, 0], [1, 2],
[1, 3], [2, 0], [2, 1], [3, 0], [3, 1]]),
)
def test_extract_directed_edges_from_triangular_mesh_preset(
self, test_inputs, test_outputs):
"""Tests that the output contain the expected edges."""
edges = utils.extract_unique_edges_from_triangular_mesh(
test_inputs, directed_edges=True)
self.assertEqual(sorted(edges.tolist()), test_outputs)
@parameterized.parameters(
(1, "'faces' must be a numpy.ndarray."),
(np.array((1,)), "must have a rank equal to 2"),
(np.array((((1,),),)), "must have a rank equal to 2"),
(np.array(((1,),)), "must have exactly 3 dimensions in the last axis"),
(np.array(((1, 1),)), "must have exactly 3 dimensions in the last axis"),
(np.array(
((1, 1, 1, 1),)), "must have exactly 3 dimensions in the last axis"),
)
def test_extract_edges_from_triangular_mesh_raised(
self, invalid_input, error_msg):
"""Tests that the shape exceptions are properly raised."""
with self.assertRaisesRegexp(ValueError, error_msg):
utils.extract_unique_edges_from_triangular_mesh(invalid_input)
@parameterized.parameters(
(np.array(((0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1))),
np.float16,
[0.5, 0.5, 0.5, 0.5, 0.5, 0.5]),
(np.array(((0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1))),
np.float32,
[0.5, 0.5, 0.5, 0.5, 0.5, 0.5]),
(np.array(((0, 1), (0, 2), (0, 3), (1, 0), (1, 2), (1, 3),
(2, 0), (2, 1), (3, 0), (3, 1))),
np.float64,
[1.0 / 3, 1.0 / 3, 1.0 / 3, 1.0 / 3, 1.0 / 3, 1.0 / 3,
0.5, 0.5, 0.5, 0.5]),
)
def test_get_degree_based_edge_weights_preset(
self, test_inputs, test_dtype, test_outputs):
"""Tests that the output contain the expected edges."""
weights = utils.get_degree_based_edge_weights(test_inputs, test_dtype)
self.assertAllClose(weights.tolist(), test_outputs)
@parameterized.parameters(
(1, "'edges' must be a numpy.ndarray."),
(np.array((1,)), "must have a rank equal to 2"),
(np.array((((1,),),)), "must have a rank equal to 2"),
(np.array(((1,),)), "must have exactly 2 dimensions in the last axis"),
(np.array(
((1, 1, 1),)), "must have exactly 2 dimensions in the last axis"),
)
def test_get_degree_based_edge_weights_invalid_edges_raised(
self, invalid_input, error_msg):
"""Tests that the shape exceptions are properly raised."""
with self.assertRaisesRegexp(ValueError, error_msg):
utils.get_degree_based_edge_weights(invalid_input)
@parameterized.parameters(
(np.bool, "must be a numpy float type"),
(np.int, "must be a numpy float type"),
(np.complex, "must be a numpy float type"),
(np.uint, "must be a numpy float type"),
(np.int16, "must be a numpy float type"),
)
def test_get_degree_based_edge_weights_dtype_raised(
self, invalid_type, error_msg):
"""Tests that the shape exceptions are properly raised."""
with self.assertRaisesRegexp(ValueError, error_msg):
utils.get_degree_based_edge_weights(np.array(((1, 1),)), invalid_type)
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for utils."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import parameterized
import numpy as np
from tensorflow_graphics.geometry.representation.mesh import utils
from tensorflow_graphics.util import test_case
class UtilsTest(test_case.TestCase):
@parameterized.parameters(
(np.array(((0, 1, 2),)), [[0, 1], [0, 2], [1, 2]]),
(np.array(
((0, 1, 2), (0, 1, 3))), [[0, 1], [0, 2], [0, 3], [1, 2], [1, 3]]),
)
def test_extract_undirected_edges_from_triangular_mesh_preset(
self, test_inputs, test_outputs):
"""Tests that the output contain the expected edges."""
edges = utils.extract_unique_edges_from_triangular_mesh(
test_inputs, directed_edges=False)
edges.sort(axis=1) # Ensure edge tuple ordered by first vertex.
self.assertEqual(sorted(edges.tolist()), test_outputs)
@parameterized.parameters(
(np.array(
((0, 1, 2),)), [[0, 1], [0, 2], [1, 0], [1, 2], [2, 0], [2, 1]]),
(np.array(
((0, 1, 2), (0, 1, 3))), [[0, 1], [0, 2], [0, 3], [1, 0], [1, 2],
[1, 3], [2, 0], [2, 1], [3, 0], [3, 1]]),
)
def test_extract_directed_edges_from_triangular_mesh_preset(
self, test_inputs, test_outputs):
"""Tests that the output contain the expected edges."""
edges = utils.extract_unique_edges_from_triangular_mesh(
test_inputs, directed_edges=True)
self.assertEqual(sorted(edges.tolist()), test_outputs)
@parameterized.parameters(
(1, "'faces' must be a numpy.ndarray."),
(np.array((1,)), "must have a rank equal to 2"),
(np.array((((1,),),)), "must have a rank equal to 2"),
(np.array(((1,),)), "must have exactly 3 dimensions in the last axis"),
(np.array(((1, 1),)), "must have exactly 3 dimensions in the last axis"),
(np.array(
((1, 1, 1, 1),)), "must have exactly 3 dimensions in the last axis"),
)
def test_extract_edges_from_triangular_mesh_raised(
self, invalid_input, error_msg):
"""Tests that the shape exceptions are properly raised."""
with self.assertRaisesRegexp(ValueError, error_msg):
utils.extract_unique_edges_from_triangular_mesh(invalid_input)
@parameterized.parameters(
(np.array(((0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1))),
np.float16,
[0.5, 0.5, 0.5, 0.5, 0.5, 0.5]),
(np.array(((0, 1), (0, 2), (1, 0), (1, 2), (2, 0), (2, 1))),
np.float32,
[0.5, 0.5, 0.5, 0.5, 0.5, 0.5]),
(np.array(((0, 1), (0, 2), (0, 3), (1, 0), (1, 2), (1, 3),
(2, 0), (2, 1), (3, 0), (3, 1))),
np.float64,
[1.0 / 3, 1.0 / 3, 1.0 / 3, 1.0 / 3, 1.0 / 3, 1.0 / 3,
0.5, 0.5, 0.5, 0.5]),
)
def test_get_degree_based_edge_weights_preset(
self, test_inputs, test_dtype, test_outputs):
"""Tests that the output contain the expected edges."""
weights = utils.get_degree_based_edge_weights(test_inputs, test_dtype)
self.assertAllClose(weights.tolist(), test_outputs)
@parameterized.parameters(
(1, "'edges' must be a numpy.ndarray."),
(np.array((1,)), "must have a rank equal to 2"),
(np.array((((1,),),)), "must have a rank equal to 2"),
(np.array(((1,),)), "must have exactly 2 dimensions in the last axis"),
(np.array(
((1, 1, 1),)), "must have exactly 2 dimensions in the last axis"),
)
def test_get_degree_based_edge_weights_invalid_edges_raised(
self, invalid_input, error_msg):
"""Tests that the shape exceptions are properly raised."""
with self.assertRaisesRegexp(ValueError, error_msg):
utils.get_degree_based_edge_weights(invalid_input)
@parameterized.parameters(
(np.bool, "must be a numpy float type"),
(np.int, "must be a numpy float type"),
(np.complex, "must be a numpy float type"),
(np.uint, "must be a numpy float type"),
(np.int16, "must be a numpy float type"),
)
def test_get_degree_based_edge_weights_dtype_raised(
self, invalid_type, error_msg):
"""Tests that the shape exceptions are properly raised."""
with self.assertRaisesRegexp(ValueError, error_msg):
utils.get_degree_based_edge_weights(np.array(((1, 1),)), invalid_type)
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/voxels/tests/absorption_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for absoprtion voxel rendering."""
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.rendering.voxels import absorption
from tensorflow_graphics.rendering.voxels.tests import test_helpers
from tensorflow_graphics.util import test_case
class AbsorptionTest(test_case.TestCase):
@parameterized.parameters(
(0, (8, 16, 6, 1)),
(1, (12, 8, 16, 6, 3)),
)
def test_render_shape_exception_not_raised(self, axis, *shape):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(absorption.render, shape, axis=axis)
@parameterized.parameters(
("must have a rank greater than 3", 2, (3,)),
("must have a rank greater than 3", 2, (16, 6, 3)),
("'axis' needs to be 0, 1 or 2", 5, (8, 16, 6, 1)),
)
def test_render_shape_exception_raised(self, error_msg, axis, *shape):
"""Tests that the shape exception is raised."""
self.assert_exception_is_raised(absorption.render,
error_msg,
shape,
axis=axis)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_render_jacobian_random(self):
"""Tests the Jacobian of render."""
voxels_init = test_helpers.generate_random_test_voxels_render()
absorption_factor_init = np.float64(np.random.uniform(low=0.1, high=2.0))
cell_size_init = np.float64(np.random.uniform(low=0.1, high=2.0))
self.assert_jacobian_is_correct_fn(
absorption.render,
[voxels_init, absorption_factor_init, cell_size_init])
def test_render_preset(self):
"""Checks that render returns the expected value."""
x_voxels_init, y_images_init = test_helpers.generate_preset_test_voxels_absorption_render(
)
voxels = tf.convert_to_tensor(value=x_voxels_init)
y_images = tf.convert_to_tensor(value=y_images_init)
y = absorption.render(voxels, absorption_factor=0.1, cell_size=0.2)
self.assertAllClose(y_images, y)
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for absoprtion voxel rendering."""
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.rendering.voxels import absorption
from tensorflow_graphics.rendering.voxels.tests import test_helpers
from tensorflow_graphics.util import test_case
class AbsorptionTest(test_case.TestCase):
@parameterized.parameters(
(0, (8, 16, 6, 1)),
(1, (12, 8, 16, 6, 3)),
)
def test_render_shape_exception_not_raised(self, axis, *shape):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(absorption.render, shape, axis=axis)
@parameterized.parameters(
("must have a rank greater than 3", 2, (3,)),
("must have a rank greater than 3", 2, (16, 6, 3)),
("'axis' needs to be 0, 1 or 2", 5, (8, 16, 6, 1)),
)
def test_render_shape_exception_raised(self, error_msg, axis, *shape):
"""Tests that the shape exception is raised."""
self.assert_exception_is_raised(absorption.render,
error_msg,
shape,
axis=axis)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_render_jacobian_random(self):
"""Tests the Jacobian of render."""
voxels_init = test_helpers.generate_random_test_voxels_render()
absorption_factor_init = np.float64(np.random.uniform(low=0.1, high=2.0))
cell_size_init = np.float64(np.random.uniform(low=0.1, high=2.0))
self.assert_jacobian_is_correct_fn(
absorption.render,
[voxels_init, absorption_factor_init, cell_size_init])
def test_render_preset(self):
"""Checks that render returns the expected value."""
x_voxels_init, y_images_init = test_helpers.generate_preset_test_voxels_absorption_render(
)
voxels = tf.convert_to_tensor(value=x_voxels_init)
y_images = tf.convert_to_tensor(value=y_images_init)
y = absorption.render(voxels, absorption_factor=0.1, cell_size=0.2)
self.assertAllClose(y_images, y)
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
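As a rough illustration of the library-code side of this list (a hedged sketch only — `scale_point` is a made-up helper, not a function touched by this PR):

import tensorflow as tf

def scale_point(point, factor, name=None):
  """Scales a point, written in the TF2 style described above."""
  with tf.name_scope(name or "scale_point"):  # was: tf.compat.v1.name_scope(name, "scale_point", [point])
    point = tf.convert_to_tensor(value=point, dtype=tf.float32)
    scaled = point * factor
    # was: tf.compat.v1.where(...); keep the input wherever scaling is not finite.
    return tf.where(tf.math.is_finite(scaled), scaled, point)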
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/math/tests/vector_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for vector."""
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.transformation.tests import test_data as td
from tensorflow_graphics.math import vector
from tensorflow_graphics.util import test_case
class VectorTest(test_case.TestCase):
@parameterized.parameters(
((None, 3), (None, 3)),)
def test_cross_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(vector.cross, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis", (1,), (3,)),
("must have exactly 3 dimensions in axis", (3,), (2,)),
("Not all batch dimensions are broadcast-compatible.", (2, 3), (3, 3)),
)
def test_cross_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(vector.cross, error_msg, shapes)
@parameterized.parameters(
(td.AXIS_3D_0, td.AXIS_3D_0),
(td.AXIS_3D_0, td.AXIS_3D_X),
(td.AXIS_3D_0, td.AXIS_3D_Y),
(td.AXIS_3D_0, td.AXIS_3D_Z),
(td.AXIS_3D_X, td.AXIS_3D_X),
(td.AXIS_3D_X, td.AXIS_3D_Y),
(td.AXIS_3D_X, td.AXIS_3D_Z),
(td.AXIS_3D_Y, td.AXIS_3D_X),
(td.AXIS_3D_Y, td.AXIS_3D_Y),
(td.AXIS_3D_Y, td.AXIS_3D_Z),
(td.AXIS_3D_Z, td.AXIS_3D_X),
(td.AXIS_3D_Z, td.AXIS_3D_Y),
(td.AXIS_3D_Z, td.AXIS_3D_Z),
)
def test_cross_jacobian_preset(self, u_init, v_init):
"""Tests the Jacobian of the dot product."""
self.assert_jacobian_is_correct_fn(vector.cross, [u_init, v_init])
def test_cross_jacobian_random(self):
"""Test the Jacobian of the dot product."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
u_init = np.random.random(size=tensor_shape + [3])
v_init = np.random.random(size=tensor_shape + [3])
self.assert_jacobian_is_correct_fn(vector.cross, [u_init, v_init])
@parameterized.parameters(
((td.AXIS_3D_0, td.AXIS_3D_0), (td.AXIS_3D_0,)),
((td.AXIS_3D_0, td.AXIS_3D_X), (td.AXIS_3D_0,)),
((td.AXIS_3D_0, td.AXIS_3D_Y), (td.AXIS_3D_0,)),
((td.AXIS_3D_0, td.AXIS_3D_Z), (td.AXIS_3D_0,)),
((td.AXIS_3D_X, td.AXIS_3D_X), (td.AXIS_3D_0,)),
((td.AXIS_3D_X, td.AXIS_3D_Y), (td.AXIS_3D_Z,)),
((td.AXIS_3D_X, td.AXIS_3D_Z), (-td.AXIS_3D_Y,)),
((td.AXIS_3D_Y, td.AXIS_3D_X), (-td.AXIS_3D_Z,)),
((td.AXIS_3D_Y, td.AXIS_3D_Y), (td.AXIS_3D_0,)),
((td.AXIS_3D_Y, td.AXIS_3D_Z), (td.AXIS_3D_X,)),
((td.AXIS_3D_Z, td.AXIS_3D_X), (td.AXIS_3D_Y,)),
((td.AXIS_3D_Z, td.AXIS_3D_Y), (-td.AXIS_3D_X,)),
((td.AXIS_3D_Z, td.AXIS_3D_Z), (td.AXIS_3D_0,)),
)
def test_cross_preset(self, test_inputs, test_outputs):
"""Tests the cross product of predefined axes."""
self.assert_output_is_correct(vector.cross, test_inputs, test_outputs)
def test_cross_random(self):
"""Tests the cross product function."""
tensor_size = np.random.randint(1, 4)
tensor_shape = np.random.randint(1, 10, size=tensor_size).tolist()
axis = np.random.randint(tensor_size)
tensor_shape[axis] = 3 # pylint: disable=invalid-sequence-index
u = np.random.random(size=tensor_shape)
v = np.random.random(size=tensor_shape)
self.assertAllClose(
vector.cross(u, v, axis=axis), np.cross(u, v, axis=axis))
@parameterized.parameters(
((None,), (None,)),
((None, None), (None, None)),
)
def test_dot_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(vector.dot, shapes)
@parameterized.parameters(
("must have the same number of dimensions", (None, 1), (None, 2)),
("Not all batch dimensions are broadcast-compatible.", (2, 3), (3, 3)),
)
def test_dot_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(vector.dot, error_msg, shapes)
@parameterized.parameters(
(td.AXIS_3D_0, td.AXIS_3D_0),
(td.AXIS_3D_0, td.AXIS_3D_X),
(td.AXIS_3D_0, td.AXIS_3D_Y),
(td.AXIS_3D_0, td.AXIS_3D_Z),
(td.AXIS_3D_X, td.AXIS_3D_X),
(td.AXIS_3D_X, td.AXIS_3D_Y),
(td.AXIS_3D_X, td.AXIS_3D_Z),
(td.AXIS_3D_Y, td.AXIS_3D_X),
(td.AXIS_3D_Y, td.AXIS_3D_Y),
(td.AXIS_3D_Y, td.AXIS_3D_Z),
(td.AXIS_3D_Z, td.AXIS_3D_X),
(td.AXIS_3D_Z, td.AXIS_3D_Y),
(td.AXIS_3D_Z, td.AXIS_3D_Z),
)
def test_dot_jacobian_preset(self, u_init, v_init):
"""Tests the Jacobian of the dot product."""
self.assert_jacobian_is_correct_fn(vector.dot, [u_init, v_init])
def test_dot_jacobian_random(self):
"""Tests the Jacobian of the dot product."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
u_init = np.random.random(size=tensor_shape + [3])
v_init = np.random.random(size=tensor_shape + [3])
self.assert_jacobian_is_correct_fn(vector.dot, [u_init, v_init])
@parameterized.parameters(
((td.AXIS_3D_0, td.AXIS_3D_0), (0.,)),
((td.AXIS_3D_0, td.AXIS_3D_X), (0.,)),
((td.AXIS_3D_0, td.AXIS_3D_Y), (0.,)),
((td.AXIS_3D_0, td.AXIS_3D_Z), (0.,)),
((td.AXIS_3D_X, td.AXIS_3D_X), (1.,)),
((td.AXIS_3D_X, td.AXIS_3D_Y), (0.,)),
((td.AXIS_3D_X, td.AXIS_3D_Z), (0.,)),
((td.AXIS_3D_Y, td.AXIS_3D_X), (0.,)),
((td.AXIS_3D_Y, td.AXIS_3D_Y), (1.,)),
((td.AXIS_3D_Y, td.AXIS_3D_Z), (0.,)),
((td.AXIS_3D_Z, td.AXIS_3D_X), (0.,)),
((td.AXIS_3D_Z, td.AXIS_3D_Y), (0.,)),
((td.AXIS_3D_Z, td.AXIS_3D_Z), (1.,)),
)
def test_dot_preset(self, test_inputs, test_outputs):
"""Tests the dot product of predefined axes."""
def func(u, v):
return tf.squeeze(vector.dot(u, v), axis=-1)
self.assert_output_is_correct(func, test_inputs, test_outputs)
def test_dot_random(self):
"""Tests the dot product function."""
tensor_size = np.random.randint(2, 4)
tensor_shape = np.random.randint(1, 10, size=tensor_size).tolist()
axis = np.random.randint(tensor_size)
u = np.random.random(size=tensor_shape)
v = np.random.random(size=tensor_shape)
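# Reference value: tensordot contracts u and v over `axis`, producing every
# cross-batch product; tensor_diag_part then keeps only the entries whose
# remaining u- and v-indices match, i.e. the per-example dot products.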
dot = tf.linalg.tensor_diag_part(tf.tensordot(u, v, axes=[[axis], [axis]]))
dot = tf.expand_dims(dot, axis=axis)
self.assertAllClose(vector.dot(u, v, axis=axis), dot)
@parameterized.parameters(
((None,), (None,)),
((None, None), (None, None)),
((1,), (1,)),
((1, 1), (1, 1)),
)
def test_reflect_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(vector.reflect, shapes)
@parameterized.parameters(
("must have the same number of dimensions", (None, 1), (None, 2)),
("Not all batch dimensions are broadcast-compatible.", (2, 2), (3, 2)),
)
def test_reflect_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(vector.reflect, error_msg, shapes)
@parameterized.parameters(
(td.AXIS_3D_0, td.AXIS_3D_0),
(td.AXIS_3D_0, td.AXIS_3D_X),
(td.AXIS_3D_0, td.AXIS_3D_Y),
(td.AXIS_3D_0, td.AXIS_3D_Z),
(td.AXIS_3D_X, td.AXIS_3D_X),
(td.AXIS_3D_X, td.AXIS_3D_Y),
(td.AXIS_3D_X, td.AXIS_3D_Z),
(td.AXIS_3D_Y, td.AXIS_3D_X),
(td.AXIS_3D_Y, td.AXIS_3D_Y),
(td.AXIS_3D_Y, td.AXIS_3D_Z),
(td.AXIS_3D_Z, td.AXIS_3D_X),
(td.AXIS_3D_Z, td.AXIS_3D_Y),
(td.AXIS_3D_Z, td.AXIS_3D_Z),
)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_reflect_jacobian_preset(self, u_init, v_init):
"""Tests the Jacobian of the reflect function."""
self.assert_jacobian_is_correct_fn(vector.reflect, [u_init, v_init])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_reflect_jacobian_random(self):
"""Tests the Jacobian of the reflect function."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
u_init = np.random.random(size=tensor_shape + [3])
v_init = np.random.random(size=tensor_shape + [3])
self.assert_jacobian_is_correct_fn(vector.reflect, [u_init, v_init])
@parameterized.parameters(
((td.AXIS_3D_0, td.AXIS_3D_X), (td.AXIS_3D_0,)),
((td.AXIS_3D_0, td.AXIS_3D_Y), (td.AXIS_3D_0,)),
((td.AXIS_3D_0, td.AXIS_3D_Z), (td.AXIS_3D_0,)),
((td.AXIS_3D_X, td.AXIS_3D_X), (-td.AXIS_3D_X,)),
((td.AXIS_3D_X, td.AXIS_3D_Y), (td.AXIS_3D_X,)),
((td.AXIS_3D_X, td.AXIS_3D_Z), (td.AXIS_3D_X,)),
((td.AXIS_3D_Y, td.AXIS_3D_X), (td.AXIS_3D_Y,)),
((td.AXIS_3D_Y, td.AXIS_3D_Y), (-td.AXIS_3D_Y,)),
((td.AXIS_3D_Y, td.AXIS_3D_Z), (td.AXIS_3D_Y,)),
((td.AXIS_3D_Z, td.AXIS_3D_X), (td.AXIS_3D_Z,)),
((td.AXIS_3D_Z, td.AXIS_3D_Y), (td.AXIS_3D_Z,)),
((td.AXIS_3D_Z, td.AXIS_3D_Z), (-td.AXIS_3D_Z,)),
)
def test_reflect_preset(self, test_inputs, test_outputs):
"""Tests the reflect function of predefined axes."""
self.assert_output_is_correct(vector.reflect, test_inputs, test_outputs)
def test_reflect_random(self):
"""Tests that calling reflect twice give an identity transform."""
tensor_size = np.random.randint(2, 4)
tensor_shape = np.random.randint(2, 3, size=tensor_size).tolist()
axis = np.random.randint(tensor_size)
u = np.random.random(size=tensor_shape)
v = np.random.random(size=tensor_shape)
v /= np.linalg.norm(v, axis=axis, keepdims=True)
u_new = vector.reflect(u, v, axis=axis)
u_new = vector.reflect(u_new, v, axis=axis)
self.assertAllClose(u_new, u)
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for vector."""
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.transformation.tests import test_data as td
from tensorflow_graphics.math import vector
from tensorflow_graphics.util import test_case
class VectorTest(test_case.TestCase):
@parameterized.parameters(
((None, 3), (None, 3)),)
def test_cross_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(vector.cross, shapes)
@parameterized.parameters(
("must have exactly 3 dimensions in axis", (1,), (3,)),
("must have exactly 3 dimensions in axis", (3,), (2,)),
("Not all batch dimensions are broadcast-compatible.", (2, 3), (3, 3)),
)
def test_cross_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(vector.cross, error_msg, shapes)
@parameterized.parameters(
(td.AXIS_3D_0, td.AXIS_3D_0),
(td.AXIS_3D_0, td.AXIS_3D_X),
(td.AXIS_3D_0, td.AXIS_3D_Y),
(td.AXIS_3D_0, td.AXIS_3D_Z),
(td.AXIS_3D_X, td.AXIS_3D_X),
(td.AXIS_3D_X, td.AXIS_3D_Y),
(td.AXIS_3D_X, td.AXIS_3D_Z),
(td.AXIS_3D_Y, td.AXIS_3D_X),
(td.AXIS_3D_Y, td.AXIS_3D_Y),
(td.AXIS_3D_Y, td.AXIS_3D_Z),
(td.AXIS_3D_Z, td.AXIS_3D_X),
(td.AXIS_3D_Z, td.AXIS_3D_Y),
(td.AXIS_3D_Z, td.AXIS_3D_Z),
)
def test_cross_jacobian_preset(self, u_init, v_init):
"""Tests the Jacobian of the dot product."""
self.assert_jacobian_is_correct_fn(vector.cross, [u_init, v_init])
def test_cross_jacobian_random(self):
"""Test the Jacobian of the dot product."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
u_init = np.random.random(size=tensor_shape + [3])
v_init = np.random.random(size=tensor_shape + [3])
self.assert_jacobian_is_correct_fn(vector.cross, [u_init, v_init])
@parameterized.parameters(
((td.AXIS_3D_0, td.AXIS_3D_0), (td.AXIS_3D_0,)),
((td.AXIS_3D_0, td.AXIS_3D_X), (td.AXIS_3D_0,)),
((td.AXIS_3D_0, td.AXIS_3D_Y), (td.AXIS_3D_0,)),
((td.AXIS_3D_0, td.AXIS_3D_Z), (td.AXIS_3D_0,)),
((td.AXIS_3D_X, td.AXIS_3D_X), (td.AXIS_3D_0,)),
((td.AXIS_3D_X, td.AXIS_3D_Y), (td.AXIS_3D_Z,)),
((td.AXIS_3D_X, td.AXIS_3D_Z), (-td.AXIS_3D_Y,)),
((td.AXIS_3D_Y, td.AXIS_3D_X), (-td.AXIS_3D_Z,)),
((td.AXIS_3D_Y, td.AXIS_3D_Y), (td.AXIS_3D_0,)),
((td.AXIS_3D_Y, td.AXIS_3D_Z), (td.AXIS_3D_X,)),
((td.AXIS_3D_Z, td.AXIS_3D_X), (td.AXIS_3D_Y,)),
((td.AXIS_3D_Z, td.AXIS_3D_Y), (-td.AXIS_3D_X,)),
((td.AXIS_3D_Z, td.AXIS_3D_Z), (td.AXIS_3D_0,)),
)
def test_cross_preset(self, test_inputs, test_outputs):
"""Tests the cross product of predefined axes."""
self.assert_output_is_correct(vector.cross, test_inputs, test_outputs)
def test_cross_random(self):
"""Tests the cross product function."""
tensor_size = np.random.randint(1, 4)
tensor_shape = np.random.randint(1, 10, size=tensor_size).tolist()
axis = np.random.randint(tensor_size)
tensor_shape[axis] = 3 # pylint: disable=invalid-sequence-index
u = np.random.random(size=tensor_shape)
v = np.random.random(size=tensor_shape)
self.assertAllClose(
vector.cross(u, v, axis=axis), np.cross(u, v, axis=axis))
@parameterized.parameters(
((None,), (None,)),
((None, None), (None, None)),
)
def test_dot_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(vector.dot, shapes)
@parameterized.parameters(
("must have the same number of dimensions", (None, 1), (None, 2)),
("Not all batch dimensions are broadcast-compatible.", (2, 3), (3, 3)),
)
def test_dot_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(vector.dot, error_msg, shapes)
@parameterized.parameters(
(td.AXIS_3D_0, td.AXIS_3D_0),
(td.AXIS_3D_0, td.AXIS_3D_X),
(td.AXIS_3D_0, td.AXIS_3D_Y),
(td.AXIS_3D_0, td.AXIS_3D_Z),
(td.AXIS_3D_X, td.AXIS_3D_X),
(td.AXIS_3D_X, td.AXIS_3D_Y),
(td.AXIS_3D_X, td.AXIS_3D_Z),
(td.AXIS_3D_Y, td.AXIS_3D_X),
(td.AXIS_3D_Y, td.AXIS_3D_Y),
(td.AXIS_3D_Y, td.AXIS_3D_Z),
(td.AXIS_3D_Z, td.AXIS_3D_X),
(td.AXIS_3D_Z, td.AXIS_3D_Y),
(td.AXIS_3D_Z, td.AXIS_3D_Z),
)
def test_dot_jacobian_preset(self, u_init, v_init):
"""Tests the Jacobian of the dot product."""
self.assert_jacobian_is_correct_fn(vector.dot, [u_init, v_init])
def test_dot_jacobian_random(self):
"""Tests the Jacobian of the dot product."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
u_init = np.random.random(size=tensor_shape + [3])
v_init = np.random.random(size=tensor_shape + [3])
self.assert_jacobian_is_correct_fn(vector.dot, [u_init, v_init])
@parameterized.parameters(
((td.AXIS_3D_0, td.AXIS_3D_0), (0.,)),
((td.AXIS_3D_0, td.AXIS_3D_X), (0.,)),
((td.AXIS_3D_0, td.AXIS_3D_Y), (0.,)),
((td.AXIS_3D_0, td.AXIS_3D_Z), (0.,)),
((td.AXIS_3D_X, td.AXIS_3D_X), (1.,)),
((td.AXIS_3D_X, td.AXIS_3D_Y), (0.,)),
((td.AXIS_3D_X, td.AXIS_3D_Z), (0.,)),
((td.AXIS_3D_Y, td.AXIS_3D_X), (0.,)),
((td.AXIS_3D_Y, td.AXIS_3D_Y), (1.,)),
((td.AXIS_3D_Y, td.AXIS_3D_Z), (0.,)),
((td.AXIS_3D_Z, td.AXIS_3D_X), (0.,)),
((td.AXIS_3D_Z, td.AXIS_3D_Y), (0.,)),
((td.AXIS_3D_Z, td.AXIS_3D_Z), (1.,)),
)
def test_dot_preset(self, test_inputs, test_outputs):
"""Tests the dot product of predefined axes."""
def func(u, v):
return tf.squeeze(vector.dot(u, v), axis=-1)
self.assert_output_is_correct(func, test_inputs, test_outputs)
def test_dot_random(self):
"""Tests the dot product function."""
tensor_size = np.random.randint(2, 4)
tensor_shape = np.random.randint(1, 10, size=tensor_size).tolist()
axis = np.random.randint(tensor_size)
u = np.random.random(size=tensor_shape)
v = np.random.random(size=tensor_shape)
dot = tf.linalg.tensor_diag_part(tf.tensordot(u, v, axes=[[axis], [axis]]))
dot = tf.expand_dims(dot, axis=axis)
self.assertAllClose(vector.dot(u, v, axis=axis), dot)
@parameterized.parameters(
((None,), (None,)),
((None, None), (None, None)),
((1,), (1,)),
((1, 1), (1, 1)),
)
def test_reflect_exception_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(vector.reflect, shapes)
@parameterized.parameters(
("must have the same number of dimensions", (None, 1), (None, 2)),
("Not all batch dimensions are broadcast-compatible.", (2, 2), (3, 2)),
)
def test_reflect_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(vector.reflect, error_msg, shapes)
@parameterized.parameters(
(td.AXIS_3D_0, td.AXIS_3D_0),
(td.AXIS_3D_0, td.AXIS_3D_X),
(td.AXIS_3D_0, td.AXIS_3D_Y),
(td.AXIS_3D_0, td.AXIS_3D_Z),
(td.AXIS_3D_X, td.AXIS_3D_X),
(td.AXIS_3D_X, td.AXIS_3D_Y),
(td.AXIS_3D_X, td.AXIS_3D_Z),
(td.AXIS_3D_Y, td.AXIS_3D_X),
(td.AXIS_3D_Y, td.AXIS_3D_Y),
(td.AXIS_3D_Y, td.AXIS_3D_Z),
(td.AXIS_3D_Z, td.AXIS_3D_X),
(td.AXIS_3D_Z, td.AXIS_3D_Y),
(td.AXIS_3D_Z, td.AXIS_3D_Z),
)
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_reflect_jacobian_preset(self, u_init, v_init):
"""Tests the Jacobian of the reflect function."""
self.assert_jacobian_is_correct_fn(vector.reflect, [u_init, v_init])
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_reflect_jacobian_random(self):
"""Tests the Jacobian of the reflect function."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
u_init = np.random.random(size=tensor_shape + [3])
v_init = np.random.random(size=tensor_shape + [3])
self.assert_jacobian_is_correct_fn(vector.reflect, [u_init, v_init])
@parameterized.parameters(
((td.AXIS_3D_0, td.AXIS_3D_X), (td.AXIS_3D_0,)),
((td.AXIS_3D_0, td.AXIS_3D_Y), (td.AXIS_3D_0,)),
((td.AXIS_3D_0, td.AXIS_3D_Z), (td.AXIS_3D_0,)),
((td.AXIS_3D_X, td.AXIS_3D_X), (-td.AXIS_3D_X,)),
((td.AXIS_3D_X, td.AXIS_3D_Y), (td.AXIS_3D_X,)),
((td.AXIS_3D_X, td.AXIS_3D_Z), (td.AXIS_3D_X,)),
((td.AXIS_3D_Y, td.AXIS_3D_X), (td.AXIS_3D_Y,)),
((td.AXIS_3D_Y, td.AXIS_3D_Y), (-td.AXIS_3D_Y,)),
((td.AXIS_3D_Y, td.AXIS_3D_Z), (td.AXIS_3D_Y,)),
((td.AXIS_3D_Z, td.AXIS_3D_X), (td.AXIS_3D_Z,)),
((td.AXIS_3D_Z, td.AXIS_3D_Y), (td.AXIS_3D_Z,)),
((td.AXIS_3D_Z, td.AXIS_3D_Z), (-td.AXIS_3D_Z,)),
)
def test_reflect_preset(self, test_inputs, test_outputs):
"""Tests the reflect function of predefined axes."""
self.assert_output_is_correct(vector.reflect, test_inputs, test_outputs)
def test_reflect_random(self):
"""Tests that calling reflect twice give an identity transform."""
tensor_size = np.random.randint(2, 4)
tensor_shape = np.random.randint(2, 3, size=tensor_size).tolist()
axis = np.random.randint(tensor_size)
u = np.random.random(size=tensor_shape)
v = np.random.random(size=tensor_shape)
v /= np.linalg.norm(v, axis=axis, keepdims=True)
u_new = vector.reflect(u, v, axis=axis)
u_new = vector.reflect(u_new, v, axis=axis)
self.assertAllClose(u_new, u)
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/nn/metric/precision.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the precision metric."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import safe_ops
from tensorflow_graphics.util import shape
def _cast_to_int(prediction):
return tf.cast(x=prediction, dtype=tf.int32)
def evaluate(ground_truth,
prediction,
classes=None,
reduce_average=True,
prediction_to_category_function=_cast_to_int,
name=None):
"""Computes the precision metric for the given ground truth and predictions.
Note:
In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
Args:
ground_truth: A tensor of shape `[A1, ..., An, N]`, where the last axis
represents the ground truth labels. Will be cast to int32.
prediction: A tensor of shape `[A1, ..., An, N]`, where the last axis
represents the predictions (which can be continuous).
classes: An integer or a list/tuple of integers representing the classes for
which the precision will be evaluated. In case 'classes' is 'None', the
number of classes will be inferred from the given labels and the precision
will be calculated for each of the classes. Defaults to 'None'.
reduce_average: Whether to calculate the average of the precision for each
class and return a single precision value. Defaults to true.
prediction_to_category_function: A function to associate a `prediction` to a
category. Defaults to rounding down the value of the prediction to the
nearest integer value.
name: A name for this op. Defaults to "precision_evaluate".
Returns:
A tensor of shape `[A1, ..., An, C]`, where the last axis represents the
precision calculated for each of the requested classes.
Raises:
ValueError: if the shape of `ground_truth` or `prediction` is not supported.
"""
with tf.compat.v1.name_scope(name, "precision_evaluate",
[ground_truth, prediction]):
ground_truth = tf.cast(
x=tf.convert_to_tensor(value=ground_truth), dtype=tf.int32)
prediction = tf.convert_to_tensor(value=prediction)
shape.compare_batch_dimensions(
tensors=(ground_truth, prediction),
tensor_names=("ground_truth", "prediction"),
last_axes=-1,
broadcast_compatible=True)
prediction = prediction_to_category_function(prediction)
if classes is None:
num_classes = tf.math.maximum(
tf.math.reduce_max(input_tensor=ground_truth),
tf.math.reduce_max(input_tensor=prediction)) + 1
classes = tf.range(num_classes)
else:
classes = tf.convert_to_tensor(value=classes)
# Make sure classes is a tensor of rank 1.
classes = tf.reshape(classes, [1]) if tf.rank(classes) == 0 else classes
# Create a confusion matrix for each of the classes (with dimensions
# [A1, ..., An, C, N]).
classes = tf.expand_dims(classes, -1)
ground_truth_per_class = tf.equal(tf.expand_dims(ground_truth, -2), classes)
prediction_per_class = tf.equal(tf.expand_dims(prediction, -2), classes)
# Calculate the precision for each of the classes.
true_positives = tf.math.reduce_sum(
input_tensor=tf.cast(
x=tf.math.logical_and(ground_truth_per_class, prediction_per_class),
dtype=tf.float32),
axis=-1)
total_predicted_positives = tf.math.reduce_sum(
input_tensor=tf.cast(x=prediction_per_class, dtype=tf.float32), axis=-1)
precision_per_class = safe_ops.safe_signed_div(true_positives,
total_predicted_positives)
if reduce_average:
return tf.math.reduce_mean(input_tensor=precision_per_class, axis=-1)
else:
return precision_per_class
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
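# Illustrative usage sketch (not part of this module; the tensors and numbers
# below are made up for the example):
#
#   ground_truth = [[0, 1, 1, 0]]
#   prediction = [[0, 1, 0, 0]]
#   evaluate(ground_truth, prediction, classes=[0, 1], reduce_average=False)
#   # -> [[2/3, 1.0]]: three samples were predicted as class 0 and two of them
#   #    are truly class 0; the single class-1 prediction is correct.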
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the precision metric."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import safe_ops
from tensorflow_graphics.util import shape
def _cast_to_int(prediction):
return tf.cast(x=prediction, dtype=tf.int32)
def evaluate(ground_truth,
prediction,
classes=None,
reduce_average=True,
prediction_to_category_function=_cast_to_int,
name=None):
"""Computes the precision metric for the given ground truth and predictions.
Note:
In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible.
Args:
ground_truth: A tensor of shape `[A1, ..., An, N]`, where the last axis
represents the ground truth labels. Will be cast to int32.
prediction: A tensor of shape `[A1, ..., An, N]`, where the last axis
represents the predictions (which can be continuous).
classes: An integer or a list/tuple of integers representing the classes for
which the precision will be evaluated. In case 'classes' is 'None', the
number of classes will be inferred from the given labels and the precision
will be calculated for each of the classes. Defaults to 'None'.
reduce_average: Whether to calculate the average of the precision for each
class and return a single precision value. Defaults to true.
prediction_to_category_function: A function to associate a `prediction` to a
category. Defaults to rounding down the value of the prediction to the
nearest integer value.
name: A name for this op. Defaults to "precision_evaluate".
Returns:
A tensor of shape `[A1, ..., An, C]`, where the last axis represents the
precision calculated for each of the requested classes.
Raises:
ValueError: if the shape of `ground_truth` or `prediction` is not supported.
"""
with tf.compat.v1.name_scope(name, "precision_evaluate",
[ground_truth, prediction]):
ground_truth = tf.cast(
x=tf.convert_to_tensor(value=ground_truth), dtype=tf.int32)
prediction = tf.convert_to_tensor(value=prediction)
shape.compare_batch_dimensions(
tensors=(ground_truth, prediction),
tensor_names=("ground_truth", "prediction"),
last_axes=-1,
broadcast_compatible=True)
prediction = prediction_to_category_function(prediction)
if classes is None:
num_classes = tf.math.maximum(
tf.math.reduce_max(input_tensor=ground_truth),
tf.math.reduce_max(input_tensor=prediction)) + 1
classes = tf.range(num_classes)
else:
classes = tf.convert_to_tensor(value=classes)
# Make sure classes is a tensor of rank 1.
classes = tf.reshape(classes, [1]) if tf.rank(classes) == 0 else classes
# Create a confusion matrix for each of the classes (with dimensions
# [A1, ..., An, C, N]).
classes = tf.expand_dims(classes, -1)
ground_truth_per_class = tf.equal(tf.expand_dims(ground_truth, -2), classes)
prediction_per_class = tf.equal(tf.expand_dims(prediction, -2), classes)
# Calculate the precision for each of the classes.
true_positives = tf.math.reduce_sum(
input_tensor=tf.cast(
x=tf.math.logical_and(ground_truth_per_class, prediction_per_class),
dtype=tf.float32),
axis=-1)
total_predicted_positives = tf.math.reduce_sum(
input_tensor=tf.cast(x=prediction_per_class, dtype=tf.float32), axis=-1)
precision_per_class = safe_ops.safe_signed_div(true_positives,
total_predicted_positives)
if reduce_average:
return tf.math.reduce_mean(input_tensor=precision_per_class, axis=-1)
else:
return precision_per_class
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
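A hedged sketch of the test-code side of this list (the test class and test below are invented for illustration and are not part of this PR's diff): with TF2 eager execution, the Session and variable-initialization boilerplate simply disappears.

import numpy as np
import tensorflow as tf

class EagerStyleTest(tf.test.TestCase):

  def test_add_is_commutative(self):
    # No tf.compat.v1.Session(), tf.compat.v1.get_variable() or
    # tf.compat.v1.global_variables_initializer(): tensors are built and
    # compared directly.
    a = tf.convert_to_tensor(value=np.random.random(size=(3,)))
    b = tf.convert_to_tensor(value=np.random.random(size=(3,)))
    self.assertAllClose(a + b, b + a)

if __name__ == "__main__":
  tf.test.main()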
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/geometry/representation/mesh/tests/sampler_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for uniform mesh sampler."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.representation.mesh import sampler
from tensorflow_graphics.geometry.representation.mesh.tests import mesh_test_utils
from tensorflow_graphics.util import test_case
class MeshSamplerTest(test_case.TestCase):
def setUp(self):
"""Sets up default parameters."""
super(MeshSamplerTest, self).setUp()
self._test_sigma_compare_tolerance = 4.0
def compare_poisson_equivalence(self, expected, actual):
"""Performs equivalence check on Poisson-distributed random variables."""
delta = np.sqrt(expected) * self._test_sigma_compare_tolerance
self.assertAllClose(expected, actual, atol=delta)
# Tests for generate_random_face_indices
@parameterized.parameters(
(((), (2,)), (tf.int32, tf.float32)),
(((), (None, 1)), (tf.int64, tf.float32)),
(((), (None, 3, 4)), (tf.int64, tf.float64)),
)
def test_random_face_indices_exception_not_raised(self, shapes, dtypes):
"""Tests that shape exceptions are not raised for random_face_indices."""
self.assert_exception_is_not_raised(sampler.generate_random_face_indices,
shapes, dtypes)
@parameterized.parameters(
("face_weights must have a rank greater than 0.", (), ()),
("num_samples must have a rank of 0.", (None,), (1, 2)),
("num_samples must have a rank of 0.", (4, 2), (1,)),
)
def test_random_face_indices_shape_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised for random_face_indices."""
self.assert_exception_is_raised(
sampler.generate_random_face_indices, error_msg, shapes)
def test_negative_weights_random_face_indices_exception(self):
"""Test for exception for random_face_indices with negative weights."""
face_wts = np.array([0.1, -0.1], dtype=np.float32)
num_samples = 10
error_msg = "Condition x >= y did not hold."
with self.assertRaisesRegexp(tf.errors.InvalidArgumentError, error_msg):
sampler.generate_random_face_indices(num_samples, face_weights=face_wts)
@parameterized.parameters(
((0., 0.), 10, (5, 5)),
((0., 0.0, 0.001), 100, (0, 0, 100)),
((0.1, 0.2, 0.3), 1000, (167, 333, 500)),
)
def test_random_face_indices(self, face_weights, num_samples, expected):
"""Test for generate_random_face_indices."""
face_weights = np.array(face_weights, dtype=np.float32)
expected = np.array(expected, dtype=np.intp)
sample_faces = sampler.generate_random_face_indices(
num_samples, face_weights)
self.assertEqual(sample_faces.shape[0], num_samples)
self.compare_poisson_equivalence(expected, tf.math.bincount(sample_faces))
# Tests for generate_random_barycentric_coordinates
@parameterized.parameters(
((1,), (tf.int32)),
((None,), (tf.int64)),
)
def test_random_barycentric_coordinates_exception_not_raised(
self, shapes, dtypes):
"""Tests that shape exceptions are not raised for random_barycentric_coordinates."""
self.assert_exception_is_not_raised(
sampler.generate_random_barycentric_coordinates, shapes, dtypes)
@parameterized.parameters(
("sample_shape must have a rank of 1.", ()),
("sample_shape must have a rank of 1.", (4, None)),
)
def test_random_barycentric_coordinates_shape_exception_raised(
self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised for random_barycentric_coordinates."""
self.assert_exception_is_raised(
sampler.generate_random_barycentric_coordinates, error_msg, shapes)
@parameterized.parameters(
((5,),),
((10, 1, 3),),
)
def test_random_barycentric_coordinates(self, sample_shape):
"""Test for generate_random_barycentric_coordinates."""
sample_shape = np.array(sample_shape, dtype=np.intp)
random_coordinates = sampler.generate_random_barycentric_coordinates(
sample_shape=sample_shape)
coordinate_sum = tf.reduce_sum(input_tensor=random_coordinates, axis=-1)
expected_coordinate_sum = np.ones(shape=sample_shape)
self.assertAllClose(expected_coordinate_sum, coordinate_sum)
# Tests for weighted_random_sample_triangle_mesh
@parameterized.parameters(
(((4, 3), (5, 3), (), (5,)),
(tf.float32, tf.int32, tf.int32, tf.float32)),
(((None, 3), (None, 3), (), (None,)),
(tf.float32, tf.int32, tf.int32, tf.float32)),
(((3, None, 3), (3, None, 3), (), (3, None)),
(tf.float32, tf.int64, tf.int64, tf.float64)),
(((3, 6, 5), (3, 5, 3), (), (3, 5)),
(tf.float64, tf.int32, tf.int32, tf.float32)),
)
def test_weighted_sampler_exception_not_raised(self, shapes, dtypes):
"""Tests that the shape exceptions are not raised for weighted sampler."""
self.assert_exception_is_not_raised(
sampler.weighted_random_sample_triangle_mesh, shapes, dtypes)
@parameterized.parameters(
("vertex_attributes must have a rank greater than 1.", (3,), (None, 3),
(), (None, 3)),
("faces must have a rank greater than 1.", (5, 2), (None,), (), (None,)),
("face_weights must have a rank greater than 0.", (1, None, 3), (None, 3),
(), ()),
("Not all batch dimensions are identical", (4, 4, 2), (3, 5, 3), (),
(3, 5)),
("Not all batch dimensions are identical", (4, 2), (5, 3), (), (4,)),
)
def test_weighted_sampler_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised for weighted sampler."""
self.assert_exception_is_raised(
sampler.weighted_random_sample_triangle_mesh, error_msg, shapes)
def test_weighted_sampler_negative_weights(self):
"""Test for exception with negative weights."""
vertices, faces = mesh_test_utils.create_square_triangle_mesh()
face_wts = np.array([-0.3, 0.1, 0.5, 0.6], dtype=np.float32)
num_samples = 10
error_msg = "Condition x >= y did not hold."
with self.assertRaisesRegexp(tf.errors.InvalidArgumentError, error_msg):
sampler.weighted_random_sample_triangle_mesh(
vertices, faces, num_samples, face_weights=face_wts)
def test_weighted_random_sample(self):
"""Test for provided face weights."""
faces = np.array([[0, 1, 2], [2, 1, 3]], dtype=np.int32)
vertex_attributes = np.array([[0.], [0.], [1.], [1.]], dtype=np.float32)
# Equal face weights, mean of sampled attributes = 0.5.
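# (Uniform barycentric sampling weights each vertex by 1/3 in expectation,
# so face [0, 1, 2] has mean attribute 1/3 and face [2, 1, 3] has 2/3;
# equally weighted faces therefore average to 0.5.)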
expected_mean = np.array([0.5], dtype=np.float32)
sample_pts, _ = sampler.weighted_random_sample_triangle_mesh(
vertex_attributes,
faces,
num_samples=1000000,
face_weights=(0.5, 0.5))
self.assertAllClose(
expected_mean,
tf.reduce_mean(input_tensor=sample_pts, axis=-2),
atol=1e-3)
# Face weights biased towards second face, mean > 0.5
sample_pts, _ = sampler.weighted_random_sample_triangle_mesh(
vertex_attributes,
faces,
num_samples=1000000,
face_weights=(0.2, 0.8))
self.assertGreater(
tf.reduce_mean(input_tensor=sample_pts, axis=-2), expected_mean)
def test_weighted_sampler_jacobian_random(self):
"""Test the Jacobian of weighted triangle random sampler."""
tensor_vertex_size = np.random.randint(1, 3)
tensor_out_shape = np.random.randint(1, 5, size=tensor_vertex_size)
tensor_out_shape = tensor_out_shape.tolist()
vertex_axis = np.array(((0., 0., 1), (1., 0., 0.), (0., 1., 0.),
(0., 0., -1.), (-1., 0., 0.), (0., -1., 0.)),
dtype=np.float32)
vertex_axis = vertex_axis.reshape([1] * tensor_vertex_size + [6, 3])
faces = np.array(((0, 1, 2), (0, 2, 4), (0, 4, 5), (0, 5, 1), (3, 2, 1),
(3, 4, 2), (3, 5, 4), (3, 1, 5)),
dtype=np.int32)
faces = faces.reshape([1] * tensor_vertex_size + [8, 3])
index_init = np.tile(faces, tensor_out_shape + [1, 1])
vertex_scale = np.random.uniform(0.5, 5., tensor_out_shape + [1] * 2)
vertex_init = vertex_axis * vertex_scale
index_tensor = tf.convert_to_tensor(value=index_init)
face_weights = np.random.uniform(
size=index_init.shape[:index_init.ndim - 1])
weights_tensor = tf.convert_to_tensor(value=face_weights)
num_samples = np.random.randint(10, 100)
def sampler_fn(vertices):
sample_pts, _ = sampler.weighted_random_sample_triangle_mesh(
vertices,
index_tensor,
num_samples,
weights_tensor,
seed=[0, 1],
stateless=True)
return sample_pts
self.assert_jacobian_is_correct_fn(
sampler_fn, [vertex_init], atol=1e-4, delta=1e-4)
# Tests for area_weighted_random_sample_triangle_mesh
@parameterized.parameters(
(((4, 3), (5, 3), ()), (tf.float32, tf.int32, tf.int32)),
(((None, 3), (None, 3), ()), (tf.float32, tf.int32, tf.int32)),
(((3, None, 3), (3, None, 3), ()), (tf.float32, tf.int64, tf.int64)),
# Test for vertex attributes + positions
(((3, 6, 5), (3, 5, 3), (), (3, 6, 3)),
(tf.float64, tf.int32, tf.int32, tf.float32)),
)
def test_area_sampler_exception_not_raised(self, shapes, dtypes):
"""Tests that the shape exceptions are not raised for area weighted sampler."""
self.assert_exception_is_not_raised(
sampler.area_weighted_random_sample_triangle_mesh, shapes, dtypes)
@parameterized.parameters(
("vertices must have a rank greater than 1.", (3,), (None, 3), ()),
("vertices must have greater than 2 dimensions in axis -1.", (5, 2),
(None, 3), ()),
("vertex_positions must have exactly 3 dimensions in axis -1.", (5, 3),
(None, 3), (), (3, 2)),
)
def test_area_sampler_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised for area weighted sampler."""
self.assert_exception_is_raised(
sampler.area_weighted_random_sample_triangle_mesh, error_msg, shapes)
def test_area_sampler_distribution(self):
"""Test for area weighted sampler distribution."""
vertices, faces = mesh_test_utils.create_single_triangle_mesh()
vertices = np.repeat(np.expand_dims(vertices, axis=0), 3, axis=0)
faces = np.repeat(np.expand_dims(faces, axis=0), 3, axis=0)
num_samples = 5000
sample_pts, _ = sampler.area_weighted_random_sample_triangle_mesh(
vertices, faces, num_samples)
for i in range(3):
samples = sample_pts[i, ...]
self.assertEqual(samples.shape[-2], num_samples)
# Test distribution in 4 quadrants of [0,1]x[0,1]
v = samples[:, :2] < [0.5, 0.5]
not_v = tf.logical_not(v)
quad00 = tf.math.count_nonzero(tf.reduce_all(input_tensor=v, axis=-1))
quad11 = tf.math.count_nonzero(tf.reduce_all(input_tensor=not_v, axis=-1))
quad01 = tf.math.count_nonzero(tf.reduce_all(
input_tensor=tf.stack((v[:, 0], not_v[:, 1]), axis=1), axis=-1))
quad10 = tf.math.count_nonzero(tf.reduce_all(
input_tensor=tf.stack((not_v[:, 0], v[:, 1]), axis=1), axis=-1))
counts = tf.stack((quad00, quad01, quad10, quad11), axis=0)
expected = np.array(
[num_samples / 2, num_samples / 4, num_samples / 4, 0],
dtype=np.float32)
self.compare_poisson_equivalence(expected, counts)
def test_face_distribution(self):
"""Test for distribution of face indices with area weighted sampler."""
vertices, faces = mesh_test_utils.create_square_triangle_mesh()
num_samples = 1000
_, sample_faces = sampler.area_weighted_random_sample_triangle_mesh(
vertices, faces, num_samples)
# All points should be approx poisson distributed among the 4 faces.
self.assertEqual(sample_faces.shape[0], num_samples)
num_faces = faces.shape[0]
expected = np.array([num_samples / num_faces] * num_faces, dtype=np.intp)
self.compare_poisson_equivalence(expected, tf.math.bincount(sample_faces))
def test_area_sampler_jacobian_random(self):
"""Test the Jacobian of area weighted triangle random sampler."""
tensor_vertex_size = np.random.randint(1, 3)
tensor_out_shape = np.random.randint(1, 5, size=tensor_vertex_size)
tensor_out_shape = tensor_out_shape.tolist()
vertex_axis = np.array(((0., 0., 1), (1., 0., 0.), (0., 1., 0.),
(0., 0., -1.), (-1., 0., 0.), (0., -1., 0.)),
dtype=np.float32)
vertex_axis = vertex_axis.reshape([1] * tensor_vertex_size + [6, 3])
faces = np.array(((0, 1, 2), (0, 2, 4), (0, 4, 5), (0, 5, 1), (3, 2, 1),
(3, 4, 2), (3, 5, 4), (3, 1, 5)),
dtype=np.int32)
faces = faces.reshape([1] * tensor_vertex_size + [8, 3])
index_init = np.tile(faces, tensor_out_shape + [1, 1])
vertex_scale = np.random.uniform(0.5, 5., tensor_out_shape + [1] * 2)
vertex_init = vertex_axis * vertex_scale
index_tensor = tf.convert_to_tensor(value=index_init)
num_samples = np.random.randint(10, 100)
def sampler_fn(vertices):
sample_pts, _ = sampler.area_weighted_random_sample_triangle_mesh(
vertices, index_tensor, num_samples, seed=[0, 1], stateless=True)
return sample_pts
self.assert_jacobian_is_correct_fn(
sampler_fn, [vertex_init], atol=1e-4, delta=1e-4)
if __name__ == "__main__":
test_case.main()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for uniform mesh sampler."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.representation.mesh import sampler
from tensorflow_graphics.geometry.representation.mesh.tests import mesh_test_utils
from tensorflow_graphics.util import test_case
class MeshSamplerTest(test_case.TestCase):
def setUp(self):
"""Sets up default parameters."""
super(MeshSamplerTest, self).setUp()
self._test_sigma_compare_tolerance = 4.0
def compare_poisson_equivalence(self, expected, actual):
"""Performs equivalence check on Poisson-distributed random variables."""
delta = np.sqrt(expected) * self._test_sigma_compare_tolerance
self.assertAllClose(expected, actual, atol=delta)
# Tests for generate_random_face_indices
@parameterized.parameters(
(((), (2,)), (tf.int32, tf.float32)),
(((), (None, 1)), (tf.int64, tf.float32)),
(((), (None, 3, 4)), (tf.int64, tf.float64)),
)
def test_random_face_indices_exception_not_raised(self, shapes, dtypes):
"""Tests that shape exceptions are not raised for random_face_indices."""
self.assert_exception_is_not_raised(sampler.generate_random_face_indices,
shapes, dtypes)
@parameterized.parameters(
("face_weights must have a rank greater than 0.", (), ()),
("num_samples must have a rank of 0.", (None,), (1, 2)),
("num_samples must have a rank of 0.", (4, 2), (1,)),
)
def test_random_face_indices_shape_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised for random_face_indices."""
self.assert_exception_is_raised(
sampler.generate_random_face_indices, error_msg, shapes)
def test_negative_weights_random_face_indices_exception(self):
"""Test for exception for random_face_indices with negative weights."""
face_wts = np.array([0.1, -0.1], dtype=np.float32)
num_samples = 10
error_msg = "Condition x >= y did not hold."
with self.assertRaisesRegexp(tf.errors.InvalidArgumentError, error_msg):
sampler.generate_random_face_indices(num_samples, face_weights=face_wts)
@parameterized.parameters(
((0., 0.), 10, (5, 5)),
((0., 0.0, 0.001), 100, (0, 0, 100)),
((0.1, 0.2, 0.3), 1000, (167, 333, 500)),
)
def test_random_face_indices(self, face_weights, num_samples, expected):
"""Test for generate_random_face_indices."""
face_weights = np.array(face_weights, dtype=np.float32)
expected = np.array(expected, dtype=np.intp)
sample_faces = sampler.generate_random_face_indices(
num_samples, face_weights)
self.assertEqual(sample_faces.shape[0], num_samples)
self.compare_poisson_equivalence(expected, tf.math.bincount(sample_faces))
# Tests for generate_random_barycentric_coordinates
@parameterized.parameters(
((1,), (tf.int32)),
((None,), (tf.int64)),
)
def test_random_barycentric_coordinates_exception_not_raised(
self, shapes, dtypes):
"""Tests that shape exceptions are not raised for random_barycentric_coordinates."""
self.assert_exception_is_not_raised(
sampler.generate_random_barycentric_coordinates, shapes, dtypes)
@parameterized.parameters(
("sample_shape must have a rank of 1.", ()),
("sample_shape must have a rank of 1.", (4, None)),
)
def test_random_barycentric_coordinates_shape_exception_raised(
self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised for random_barycentric_coordinates."""
self.assert_exception_is_raised(
sampler.generate_random_barycentric_coordinates, error_msg, shapes)
@parameterized.parameters(
((5,),),
((10, 1, 3),),
)
def test_random_barycentric_coordinates(self, sample_shape):
"""Test for generate_random_barycentric_coordinates."""
sample_shape = np.array(sample_shape, dtype=np.intp)
random_coordinates = sampler.generate_random_barycentric_coordinates(
sample_shape=sample_shape)
coordinate_sum = tf.reduce_sum(input_tensor=random_coordinates, axis=-1)
expected_coordinate_sum = np.ones(shape=sample_shape)
self.assertAllClose(expected_coordinate_sum, coordinate_sum)
# Tests for weighted_random_sample_triangle_mesh
@parameterized.parameters(
(((4, 3), (5, 3), (), (5,)),
(tf.float32, tf.int32, tf.int32, tf.float32)),
(((None, 3), (None, 3), (), (None,)),
(tf.float32, tf.int32, tf.int32, tf.float32)),
(((3, None, 3), (3, None, 3), (), (3, None)),
(tf.float32, tf.int64, tf.int64, tf.float64)),
(((3, 6, 5), (3, 5, 3), (), (3, 5)),
(tf.float64, tf.int32, tf.int32, tf.float32)),
)
def test_weighted_sampler_exception_not_raised(self, shapes, dtypes):
"""Tests that the shape exceptions are not raised for weighted sampler."""
self.assert_exception_is_not_raised(
sampler.weighted_random_sample_triangle_mesh, shapes, dtypes)
@parameterized.parameters(
("vertex_attributes must have a rank greater than 1.", (3,), (None, 3),
(), (None, 3)),
("faces must have a rank greater than 1.", (5, 2), (None,), (), (None,)),
("face_weights must have a rank greater than 0.", (1, None, 3), (None, 3),
(), ()),
("Not all batch dimensions are identical", (4, 4, 2), (3, 5, 3), (),
(3, 5)),
("Not all batch dimensions are identical", (4, 2), (5, 3), (), (4,)),
)
def test_weighted_sampler_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised for weighted sampler."""
self.assert_exception_is_raised(
sampler.weighted_random_sample_triangle_mesh, error_msg, shapes)
def test_weighted_sampler_negative_weights(self):
"""Test for exception with negative weights."""
vertices, faces = mesh_test_utils.create_square_triangle_mesh()
face_wts = np.array([-0.3, 0.1, 0.5, 0.6], dtype=np.float32)
num_samples = 10
error_msg = "Condition x >= y did not hold."
with self.assertRaisesRegexp(tf.errors.InvalidArgumentError, error_msg):
sampler.weighted_random_sample_triangle_mesh(
vertices, faces, num_samples, face_weights=face_wts)
def test_weighted_random_sample(self):
"""Test for provided face weights."""
faces = np.array([[0, 1, 2], [2, 1, 3]], dtype=np.int32)
vertex_attributes = np.array([[0.], [0.], [1.], [1.]], dtype=np.float32)
# Equal face weights, mean of sampled attributes = 0.5.
expected_mean = np.array([0.5], dtype=np.float32)
sample_pts, _ = sampler.weighted_random_sample_triangle_mesh(
vertex_attributes,
faces,
num_samples=1000000,
face_weights=(0.5, 0.5))
self.assertAllClose(
expected_mean,
tf.reduce_mean(input_tensor=sample_pts, axis=-2),
atol=1e-3)
# Face weights biased towards second face, mean > 0.5
sample_pts, _ = sampler.weighted_random_sample_triangle_mesh(
vertex_attributes,
faces,
num_samples=1000000,
face_weights=(0.2, 0.8))
self.assertGreater(
tf.reduce_mean(input_tensor=sample_pts, axis=-2), expected_mean)
def test_weighted_sampler_jacobian_random(self):
"""Test the Jacobian of weighted triangle random sampler."""
tensor_vertex_size = np.random.randint(1, 3)
tensor_out_shape = np.random.randint(1, 5, size=tensor_vertex_size)
tensor_out_shape = tensor_out_shape.tolist()
vertex_axis = np.array(((0., 0., 1), (1., 0., 0.), (0., 1., 0.),
(0., 0., -1.), (-1., 0., 0.), (0., -1., 0.)),
dtype=np.float32)
vertex_axis = vertex_axis.reshape([1] * tensor_vertex_size + [6, 3])
faces = np.array(((0, 1, 2), (0, 2, 4), (0, 4, 5), (0, 5, 1), (3, 2, 1),
(3, 4, 2), (3, 5, 4), (3, 1, 5)),
dtype=np.int32)
faces = faces.reshape([1] * tensor_vertex_size + [8, 3])
index_init = np.tile(faces, tensor_out_shape + [1, 1])
vertex_scale = np.random.uniform(0.5, 5., tensor_out_shape + [1] * 2)
vertex_init = vertex_axis * vertex_scale
index_tensor = tf.convert_to_tensor(value=index_init)
face_weights = np.random.uniform(
size=index_init.shape[:index_init.ndim - 1])
weights_tensor = tf.convert_to_tensor(value=face_weights)
num_samples = np.random.randint(10, 100)
def sampler_fn(vertices):
sample_pts, _ = sampler.weighted_random_sample_triangle_mesh(
vertices,
index_tensor,
num_samples,
weights_tensor,
seed=[0, 1],
stateless=True)
return sample_pts
self.assert_jacobian_is_correct_fn(
sampler_fn, [vertex_init], atol=1e-4, delta=1e-4)
# Tests for area_weighted_random_sample_triangle_mesh
@parameterized.parameters(
(((4, 3), (5, 3), ()), (tf.float32, tf.int32, tf.int32)),
(((None, 3), (None, 3), ()), (tf.float32, tf.int32, tf.int32)),
(((3, None, 3), (3, None, 3), ()), (tf.float32, tf.int64, tf.int64)),
# Test for vertex attributes + positions
(((3, 6, 5), (3, 5, 3), (), (3, 6, 3)),
(tf.float64, tf.int32, tf.int32, tf.float32)),
)
def test_area_sampler_exception_not_raised(self, shapes, dtypes):
"""Tests that the shape exceptions are not raised for area weighted sampler."""
self.assert_exception_is_not_raised(
sampler.area_weighted_random_sample_triangle_mesh, shapes, dtypes)
@parameterized.parameters(
("vertices must have a rank greater than 1.", (3,), (None, 3), ()),
("vertices must have greater than 2 dimensions in axis -1.", (5, 2),
(None, 3), ()),
("vertex_positions must have exactly 3 dimensions in axis -1.", (5, 3),
(None, 3), (), (3, 2)),
)
def test_area_sampler_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised for area weighted sampler."""
self.assert_exception_is_raised(
sampler.area_weighted_random_sample_triangle_mesh, error_msg, shapes)
def test_area_sampler_distribution(self):
"""Test for area weighted sampler distribution."""
vertices, faces = mesh_test_utils.create_single_triangle_mesh()
vertices = np.repeat(np.expand_dims(vertices, axis=0), 3, axis=0)
faces = np.repeat(np.expand_dims(faces, axis=0), 3, axis=0)
num_samples = 5000
sample_pts, _ = sampler.area_weighted_random_sample_triangle_mesh(
vertices, faces, num_samples)
for i in range(3):
samples = sample_pts[i, ...]
self.assertEqual(samples.shape[-2], num_samples)
# Test distribution in 4 quadrants of [0,1]x[0,1]
v = samples[:, :2] < [0.5, 0.5]
not_v = tf.logical_not(v)
quad00 = tf.math.count_nonzero(tf.reduce_all(input_tensor=v, axis=-1))
quad11 = tf.math.count_nonzero(tf.reduce_all(input_tensor=not_v, axis=-1))
quad01 = tf.math.count_nonzero(tf.reduce_all(
input_tensor=tf.stack((v[:, 0], not_v[:, 1]), axis=1), axis=-1))
quad10 = tf.math.count_nonzero(tf.reduce_all(
input_tensor=tf.stack((not_v[:, 0], v[:, 1]), axis=1), axis=-1))
counts = tf.stack((quad00, quad01, quad10, quad11), axis=0)
expected = np.array(
[num_samples / 2, num_samples / 4, num_samples / 4, 0],
dtype=np.float32)
self.compare_poisson_equivalence(expected, counts)
def test_face_distribution(self):
"""Test for distribution of face indices with area weighted sampler."""
vertices, faces = mesh_test_utils.create_square_triangle_mesh()
num_samples = 1000
_, sample_faces = sampler.area_weighted_random_sample_triangle_mesh(
vertices, faces, num_samples)
# All points should be approx poisson distributed among the 4 faces.
self.assertEqual(sample_faces.shape[0], num_samples)
num_faces = faces.shape[0]
expected = np.array([num_samples / num_faces] * num_faces, dtype=np.intp)
self.compare_poisson_equivalence(expected, tf.math.bincount(sample_faces))
def test_area_sampler_jacobian_random(self):
"""Test the Jacobian of area weighted triangle random sampler."""
tensor_vertex_size = np.random.randint(1, 3)
tensor_out_shape = np.random.randint(1, 5, size=tensor_vertex_size)
tensor_out_shape = tensor_out_shape.tolist()
vertex_axis = np.array(((0., 0., 1), (1., 0., 0.), (0., 1., 0.),
(0., 0., -1.), (-1., 0., 0.), (0., -1., 0.)),
dtype=np.float32)
vertex_axis = vertex_axis.reshape([1] * tensor_vertex_size + [6, 3])
faces = np.array(((0, 1, 2), (0, 2, 4), (0, 4, 5), (0, 5, 1), (3, 2, 1),
(3, 4, 2), (3, 5, 4), (3, 1, 5)),
dtype=np.int32)
faces = faces.reshape([1] * tensor_vertex_size + [8, 3])
index_init = np.tile(faces, tensor_out_shape + [1, 1])
vertex_scale = np.random.uniform(0.5, 5., tensor_out_shape + [1] * 2)
vertex_init = vertex_axis * vertex_scale
index_tensor = tf.convert_to_tensor(value=index_init)
num_samples = np.random.randint(10, 100)
def sampler_fn(vertices):
sample_pts, _ = sampler.area_weighted_random_sample_triangle_mesh(
vertices, index_tensor, num_samples, seed=[0, 1], stateless=True)
return sample_pts
self.assert_jacobian_is_correct_fn(
sampler_fn, [vertex_init], atol=1e-4, delta=1e-4)
if __name__ == "__main__":
test_case.main()
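# --- Illustrative usage sketch (added for exposition; not part of the original test file) ---
# A minimal, hedged example of how the sampler exercised above is typically called,
# assuming the `sampler` and `mesh_test_utils` imports from the top of this file.
def _example_area_weighted_sampling():
  """Draws 100 area-weighted samples from a small square mesh."""
  vertices, faces = mesh_test_utils.create_square_triangle_mesh()
  sample_points, sample_face_indices = (
      sampler.area_weighted_random_sample_triangle_mesh(vertices, faces, 100))
  # Expected shapes (per the assertions above): sample_points ~ [100, 3],
  # sample_face_indices ~ [100].
  return sample_points, sample_face_indices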
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
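As a rough illustration of these rewrites, a hedged sketch with placeholder names (not lines taken from the actual diff; the real change may also move default op names into the function signatures):

# TensorFlow 1 style
with tf.compat.v1.name_scope(name, "transform_point", [point, matrix]):
  point = tf.convert_to_tensor(value=point)
  point = tf.compat.v1.where(condition, point, -point)

# TensorFlow 2 style
with tf.name_scope(name):
  point = tf.convert_to_tensor(value=point)
  point = tf.where(condition, point, -point)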
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/util/doc.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Query environment variable for documentation building."""
import os
def _import_tfg_docs():
"""Checks if __init__.py imports should be executed (for buildling docs)."""
return os.getenv("TFG_DOC_IMPORTS", "0") == "1"
def enable_tfg_doc_imports():
"""Re-enables the imports in the __init__.py so that docs can be built."""
os.environ["TFG_DOC_IMPORTS"] = "1"
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Query environment variable for documentation building."""
import os
def _import_tfg_docs():
"""Checks if __init__.py imports should be executed (for buildling docs)."""
return os.getenv("TFG_DOC_IMPORTS", "0") == "1"
def enable_tfg_doc_imports():
"""Re-enables the imports in the __init__.py so that docs can be built."""
os.environ["TFG_DOC_IMPORTS"] = "1"
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/tests/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/projects/neural_voxel_renderer/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Neural voxel renderer module."""
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Neural voxel renderer module."""
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/camera/perspective.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""This module implements perspective camera functionalities.
The perspective camera model, also referred to as pinhole camera model, is
defined using a focal length \\((f_x, f_y)\\) and a principal point
\\((c_x, c_y)\\). The perspective camera model can be written as a calibration
matrix
$$
\mathbf{C} =
\begin{bmatrix}
f_x & 0 & c_x \\
0 & f_y & c_y \\
0 & 0 & 1 \\
\end{bmatrix},
$$
also referred to as the intrinsic parameter matrix. The camera focal length
\\((f_x, f_y)\\), defined in pixels, is the physical focal length divided by the
physical size of a camera pixel. The physical focal length is the distance
between the camera center and the image plane. The principal point is the
intersection of the camera axis with the image plane. The camera axis is the
line perpendicular to the image plane starting at the optical center.
More details about perspective cameras can be found on [this page.]
(http://ksimek.github.io/2013/08/13/intrinsic/)
Note: The current implementation does not take into account distortion or
skew parameters.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import tensorflow as tf
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import safe_ops
from tensorflow_graphics.util import shape
def parameters_from_right_handed(projection_matrix, name=None):
"""Recovers the parameters used to contruct a right handed projection matrix.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
projection_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, containing
matrices of right handed perspective-view frustum.
name: A name for this op. Defaults to
'perspective_parameters_from_right_handed'.
Raises:
InvalidArgumentError: if `projection_matrix` is not of the expected shape.
Returns:
Tuple of 4 tensors of shape `[A1, ..., An, 1]`, where the first tensor
represents the vertical field of view used to construct `projection_matrix`,
the second tensor represents the aspect ratio used to construct
`projection_matrix`, and the third and fourth parameters respectively
represent the near and far clipping planes used to construct
`projection_matrix`.
"""
with tf.compat.v1.name_scope(name, "perspective_parameters_from_right_handed",
[projection_matrix]):
projection_matrix = tf.convert_to_tensor(value=projection_matrix)
shape.check_static(
tensor=projection_matrix,
tensor_name="projection_matrix",
has_rank_greater_than=1,
has_dim_equals=((-2, 4), (-1, 4)))
inverse_tan_half_vertical_field_of_view = projection_matrix[..., 1, 1:2]
vertical_field_of_view = 2.0 * tf.atan(
1.0 / inverse_tan_half_vertical_field_of_view)
aspect_ratio = inverse_tan_half_vertical_field_of_view / projection_matrix[
..., 0, 0:1]
a = projection_matrix[..., 2, 2:3]
b = projection_matrix[..., 2, 3:4]
far = b / (a + 1.0)
near = (a + 1.0) / (a - 1.0) * far
return vertical_field_of_view, aspect_ratio, near, far
def right_handed(vertical_field_of_view, aspect_ratio, near, far, name=None):
"""Generates the matrix for a right handed perspective projection.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
vertical_field_of_view: A tensor of shape `[A1, ..., An, 1]`, where the last
dimension represents the vertical field of view of the frustum expressed
in radians. Note that values for `vertical_field_of_view` must be in the
range (0,pi).
aspect_ratio: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
stores the width over height ratio of the frustum. Note that values for
`aspect_ratio` must be non-negative.
near: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
captures the distance between the viewer and the near clipping plane. Note
that values for `near` must be non-negative.
far: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
captures the distance between the viewer and the far clipping plane. Note
that values for `far` must be greater than those of `near`.
name: A name for this op. Defaults to 'perspective_right_handed'.
Raises:
InvalidArgumentError: if any input contains data not in the specified range
of valid values.
ValueError: if all the inputs are not of the same shape.
Returns:
A tensor of shape `[A1, ..., An, 4, 4]`, containing matrices of right
handed perspective-view frustum.
"""
with tf.compat.v1.name_scope(
name, "perspective_right_handed",
[vertical_field_of_view, aspect_ratio, near, far]):
vertical_field_of_view = tf.convert_to_tensor(value=vertical_field_of_view)
aspect_ratio = tf.convert_to_tensor(value=aspect_ratio)
near = tf.convert_to_tensor(value=near)
far = tf.convert_to_tensor(value=far)
shape.check_static(
tensor=vertical_field_of_view,
tensor_name="vertical_field_of_view",
has_dim_equals=(-1, 1))
shape.check_static(
tensor=aspect_ratio, tensor_name="aspect_ratio", has_dim_equals=(-1, 1))
shape.check_static(tensor=near, tensor_name="near", has_dim_equals=(-1, 1))
shape.check_static(tensor=far, tensor_name="far", has_dim_equals=(-1, 1))
shape.compare_batch_dimensions(
tensors=(vertical_field_of_view, aspect_ratio, near, far),
last_axes=-2,
tensor_names=("vertical_field_of_view", "aspect_ratio", "near", "far"),
broadcast_compatible=False)
vertical_field_of_view = asserts.assert_all_in_range(
vertical_field_of_view, 0.0, math.pi, open_bounds=True)
aspect_ratio = asserts.assert_all_above(aspect_ratio, 0.0, open_bound=True)
near = asserts.assert_all_above(near, 0.0, open_bound=True)
far = asserts.assert_all_above(far, near, open_bound=True)
inverse_tan_half_vertical_field_of_view = 1.0 / tf.tan(
vertical_field_of_view * 0.5)
zero = tf.zeros_like(inverse_tan_half_vertical_field_of_view)
one = tf.ones_like(inverse_tan_half_vertical_field_of_view)
near_minus_far = near - far
matrix = tf.concat(
(inverse_tan_half_vertical_field_of_view / aspect_ratio, zero, zero,
zero, zero, inverse_tan_half_vertical_field_of_view, zero, zero, zero,
zero, (far + near) / near_minus_far, 2.0 * far * near / near_minus_far,
zero, zero, -one, zero),
axis=-1)
matrix_shape = tf.shape(input=matrix)
output_shape = tf.concat((matrix_shape[:-1], (4, 4)), axis=-1)
return tf.reshape(matrix, shape=output_shape)
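# --- Illustrative usage sketch (added for exposition; not part of the original module;
# the numbers are made up) ---
# `right_handed` and `parameters_from_right_handed` are inverses of each other, so a
# projection matrix built from known parameters should give those parameters back.
def _example_right_handed_round_trip():
  vertical_field_of_view = tf.constant([[1.0]])  # radians, in (0, pi)
  aspect_ratio = tf.constant([[1.5]])
  near = tf.constant([[0.1]])
  far = tf.constant([[100.0]])
  projection_matrix = right_handed(vertical_field_of_view, aspect_ratio, near, far)
  # Recovers (vertical_field_of_view, aspect_ratio, near, far) up to numerical precision.
  return parameters_from_right_handed(projection_matrix)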
def intrinsics_from_matrix(matrix, name=None):
r"""Extracts intrinsic parameters from a calibration matrix.
Extracts the focal length \\((f_x, f_y)\\) and the principal point
\\((c_x, c_y)\\) from a camera calibration matrix
$$
\mathbf{C} =
\begin{bmatrix}
f_x & 0 & c_x \\
0 & f_y & c_y \\
0 & 0 & 1 \\
\end{bmatrix}.
$$
Note:
In the following, A1 to An are optional batch dimensions.
Args:
matrix: A tensor of shape `[A1, ..., An, 3, 3]`, where the last two
dimensions represent a camera calibration matrix.
name: A name for this op that defaults to
"perspective_intrinsics_from_matrix".
Returns:
Tuple of two tensors, each one of shape `[A1, ..., An, 2]`. The first
tensor represents the focal length, and the second one the principal point.
Raises:
ValueError: If the shape of `matrix` is not supported.
"""
with tf.compat.v1.name_scope(name, "perspective_intrinsics_from_matrix",
[matrix]):
matrix = tf.convert_to_tensor(value=matrix)
shape.check_static(
tensor=matrix,
tensor_name="matrix",
has_rank_greater_than=1,
has_dim_equals=((-1, 3), (-2, 3)))
fx = matrix[..., 0, 0]
fy = matrix[..., 1, 1]
cx = matrix[..., 0, 2]
cy = matrix[..., 1, 2]
focal = tf.stack((fx, fy), axis=-1)
principal_point = tf.stack((cx, cy), axis=-1)
return focal, principal_point
def matrix_from_intrinsics(focal, principal_point, name=None):
r"""Builds calibration matrix from intrinsic parameters.
Builds the camera calibration matrix as
$$
\mathbf{C} =
\begin{bmatrix}
f_x & 0 & c_x \\
0 & f_y & c_y \\
0 & 0 & 1 \\
\end{bmatrix}
$$
from the focal length \\((f_x, f_y)\\) and the principal point
\\((c_x, c_y)\\).
Note:
In the following, A1 to An are optional batch dimensions.
Args:
focal: A tensor of shape `[A1, ..., An, 2]`, where the last dimension
represents a camera focal length.
principal_point: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension represents a camera principal point.
name: A name for this op that defaults to
"perspective_matrix_from_intrinsics".
Returns:
A tensor of shape `[A1, ..., An, 3, 3]`, where the last two dimensions
represent a camera calibration matrix.
Raises:
ValueError: If the shape of `focal`, or `principal_point` is not
supported.
"""
with tf.compat.v1.name_scope(name, "perspective_matrix_from_intrinsics",
[focal, principal_point]):
focal = tf.convert_to_tensor(value=focal)
principal_point = tf.convert_to_tensor(value=principal_point)
shape.check_static(
tensor=focal, tensor_name="focal", has_dim_equals=(-1, 2))
shape.check_static(
tensor=principal_point,
tensor_name="principal_point",
has_dim_equals=(-1, 2))
shape.compare_batch_dimensions(
tensors=(focal, principal_point),
tensor_names=("focal", "principal_point"),
last_axes=-2,
broadcast_compatible=False)
fx, fy = tf.unstack(focal, axis=-1)
cx, cy = tf.unstack(principal_point, axis=-1)
zero = tf.zeros_like(fx)
one = tf.ones_like(fx)
matrix = tf.stack((fx, zero, cx,
zero, fy, cy,
zero, zero, one),
axis=-1) # pyformat: disable
matrix_shape = tf.shape(input=matrix)
output_shape = tf.concat((matrix_shape[:-1], (3, 3)), axis=-1)
return tf.reshape(matrix, shape=output_shape)
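# --- Illustrative usage sketch (added for exposition; not part of the original module;
# the numbers are made up) ---
# `matrix_from_intrinsics` and `intrinsics_from_matrix` round-trip the focal length and
# principal point through the 3x3 calibration matrix described above.
def _example_calibration_round_trip():
  focal = tf.constant([[500.0, 500.0]])            # (f_x, f_y) in pixels
  principal_point = tf.constant([[320.0, 240.0]])  # (c_x, c_y) in pixels
  calibration_matrix = matrix_from_intrinsics(focal, principal_point)  # shape [1, 3, 3]
  return intrinsics_from_matrix(calibration_matrix)  # gives back (focal, principal_point)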
def project(point_3d, focal, principal_point, name=None):
r"""Projects a 3d point onto the 2d camera plane.
Projects a 3d point \\((x, y, z)\\) to a 2d point \\((x', y')\\) onto the
image plane with
$$
\begin{matrix}
x' = \frac{f_x}{z}x + c_x, & y' = \frac{f_y}{z}y + c_y,
\end{matrix}
$$
where \\((f_x, f_y)\\) is the focal length and \\((c_x, c_y)\\) the principal
point.
Note:
In the following, A1 to An are optional batch dimensions that must be
broadcast compatible.
Args:
point_3d: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
represents a 3d point to project.
focal: A tensor of shape `[A1, ..., An, 2]`, where the last dimension
represents a camera focal length.
principal_point: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension represents a camera principal point.
name: A name for this op that defaults to "perspective_project".
Returns:
A tensor of shape `[A1, ..., An, 2]`, where the last dimension represents
a 2d point.
Raises:
ValueError: If the shape of `point_3d`, `focal`, or `principal_point` is not
supported.
"""
with tf.compat.v1.name_scope(name, "perspective_project",
[point_3d, focal, principal_point]):
point_3d = tf.convert_to_tensor(value=point_3d)
focal = tf.convert_to_tensor(value=focal)
principal_point = tf.convert_to_tensor(value=principal_point)
shape.check_static(
tensor=point_3d, tensor_name="point_3d", has_dim_equals=(-1, 3))
shape.check_static(
tensor=focal, tensor_name="focal", has_dim_equals=(-1, 2))
shape.check_static(
tensor=principal_point,
tensor_name="principal_point",
has_dim_equals=(-1, 2))
shape.compare_batch_dimensions(
tensors=(point_3d, focal, principal_point),
tensor_names=("point_3d", "focal", "principal_point"),
last_axes=-2,
broadcast_compatible=True)
point_2d, depth = tf.split(point_3d, (2, 1), axis=-1)
point_2d *= safe_ops.safe_signed_div(focal, depth)
point_2d += principal_point
return point_2d
def ray(point_2d, focal, principal_point, name=None):
r"""Computes the 3d ray for a 2d point (the z component of the ray is 1).
Computes the 3d ray \\((r_x, r_y, 1)\\) from the camera center to a 2d point
\\((x', y')\\) on the image plane with
$$
\begin{matrix}
r_x = \frac{(x' - c_x)}{f_x}, & r_y = \frac{(y' - c_y)}{f_y}, & z = 1,
\end{matrix}
$$
where \\((f_x, f_y)\\) is the focal length and \\((c_x, c_y)\\) the principal
point. The camera optical center is assumed to be at \\((0, 0, 0)\\).
Note:
In the following, A1 to An are optional batch dimensions that must be
broadcast compatible.
Args:
point_2d: A tensor of shape `[A1, ..., An, 2]`, where the last dimension
represents a 2d point.
focal: A tensor of shape `[A1, ..., An, 2]`, where the last dimension
represents a camera focal length.
principal_point: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension represents a camera principal point.
name: A name for this op that defaults to "perspective_ray".
Returns:
A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents
a 3d ray.
Raises:
ValueError: If the shape of `point_2d`, `focal`, or `principal_point` is not
supported.
"""
with tf.compat.v1.name_scope(name, "perspective_ray",
[point_2d, focal, principal_point]):
point_2d = tf.convert_to_tensor(value=point_2d)
focal = tf.convert_to_tensor(value=focal)
principal_point = tf.convert_to_tensor(value=principal_point)
shape.check_static(
tensor=point_2d, tensor_name="point_2d", has_dim_equals=(-1, 2))
shape.check_static(
tensor=focal, tensor_name="focal", has_dim_equals=(-1, 2))
shape.check_static(
tensor=principal_point,
tensor_name="principal_point",
has_dim_equals=(-1, 2))
shape.compare_batch_dimensions(
tensors=(point_2d, focal, principal_point),
tensor_names=("point_2d", "focal", "principal_point"),
last_axes=-2,
broadcast_compatible=True)
point_2d -= principal_point
point_2d = safe_ops.safe_signed_div(point_2d, focal)
padding = [[0, 0] for _ in point_2d.shape]
padding[-1][-1] = 1
return tf.pad(
tensor=point_2d, paddings=padding, mode="CONSTANT", constant_values=1.0)
def unproject(point_2d, depth, focal, principal_point, name=None):
r"""Unprojects a 2d point in 3d.
Unprojects a 2d point \\((x', y')\\) to a 3d point \\((x, y, z)\\) knowing the
depth \\(z\\) with
$$
\begin{matrix}
x = \frac{z (x' - c_x)}{f_x}, & y = \frac{z(y' - c_y)}{f_y}, & z = z,
\end{matrix}
$$
where \\((f_x, f_y)\\) is the focal length and \\((c_x, c_y)\\) the principal
point.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
point_2d: A tensor of shape `[A1, ..., An, 2]`, where the last dimension
represents a 2d point to unproject.
depth: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
represents the depth of a 2d point.
focal: A tensor of shape `[A1, ..., An, 2]`, where the last dimension
represents a camera focal length.
principal_point: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension represents a camera principal point.
name: A name for this op that defaults to "perspective_unproject".
Returns:
A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents
a 3d point.
Raises:
ValueError: If the shape of `point_2d`, `depth`, `focal`, or
`principal_point` is not supported.
"""
with tf.compat.v1.name_scope(name, "perspective_unproject",
[point_2d, depth, focal, principal_point]):
point_2d = tf.convert_to_tensor(value=point_2d)
depth = tf.convert_to_tensor(value=depth)
focal = tf.convert_to_tensor(value=focal)
principal_point = tf.convert_to_tensor(value=principal_point)
shape.check_static(
tensor=point_2d, tensor_name="point_2d", has_dim_equals=(-1, 2))
shape.check_static(
tensor=depth, tensor_name="depth", has_dim_equals=(-1, 1))
shape.check_static(
tensor=focal, tensor_name="focal", has_dim_equals=(-1, 2))
shape.check_static(
tensor=principal_point,
tensor_name="principal_point",
has_dim_equals=(-1, 2))
shape.compare_batch_dimensions(
tensors=(point_2d, depth, focal, principal_point),
tensor_names=("point_2d", "depth", "focal", "principal_point"),
last_axes=-2,
broadcast_compatible=False)
point_2d -= principal_point
point_2d *= safe_ops.safe_signed_div(depth, focal)
return tf.concat((point_2d, depth), axis=-1)
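# --- Illustrative usage sketch (added for exposition; not part of the original module;
# the numbers are made up) ---
# Projecting a 3d point and unprojecting the result at the same depth recovers the
# original point, tying together `project` and `unproject` above.
def _example_project_unproject_round_trip():
  point_3d = tf.constant([[0.2, -0.1, 2.0]])
  focal = tf.constant([[500.0, 500.0]])
  principal_point = tf.constant([[320.0, 240.0]])
  point_2d = project(point_3d, focal, principal_point)
  depth = point_3d[..., 2:3]
  return unproject(point_2d, depth, focal, principal_point)  # ~[[0.2, -0.1, 2.0]]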
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""This module implements perspective camera functionalities.
The perspective camera model, also referred to as pinhole camera model, is
defined using a focal length \\((f_x, f_y)\\) and a principal point
\\((c_x, c_y)\\). The perspective camera model can be written as a calibration
matrix
$$
\mathbf{C} =
\begin{bmatrix}
f_x & 0 & c_x \\
0 & f_y & c_y \\
0 & 0 & 1 \\
\end{bmatrix},
$$
also referred to as the intrinsic parameter matrix. The camera focal length
\\((f_x, f_y)\\), defined in pixels, is the physical focal length divided by the
physical size of a camera pixel. The physical focal length is the distance
between the camera center and the image plane. The principal point is the
intersection of the camera axis with the image plane. The camera axis is the
line perpendicular to the image plane starting at the optical center.
More details about perspective cameras can be found on [this page.]
(http://ksimek.github.io/2013/08/13/intrinsic/)
Note: The current implementation does not take into account distortion or
skew parameters.
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import math
import tensorflow as tf
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import safe_ops
from tensorflow_graphics.util import shape
def parameters_from_right_handed(projection_matrix, name=None):
"""Recovers the parameters used to contruct a right handed projection matrix.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
projection_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, containing
matrices of right handed perspective-view frustum.
name: A name for this op. Defaults to
'perspective_parameters_from_right_handed'.
Raises:
InvalidArgumentError: if `projection_matrix` is not of the expected shape.
Returns:
Tuple of 4 tensors of shape `[A1, ..., An, 1]`, where the first tensor
represents the vertical field of view used to construct `projection_matrix`,
the second tensor represents the aspect ratio used to construct
`projection_matrix`, and the third and fourth parameters respectively
represent the near and far clipping planes used to construct
`projection_matrix`.
"""
with tf.compat.v1.name_scope(name, "perspective_parameters_from_right_handed",
[projection_matrix]):
projection_matrix = tf.convert_to_tensor(value=projection_matrix)
shape.check_static(
tensor=projection_matrix,
tensor_name="projection_matrix",
has_rank_greater_than=1,
has_dim_equals=((-2, 4), (-1, 4)))
inverse_tan_half_vertical_field_of_view = projection_matrix[..., 1, 1:2]
vertical_field_of_view = 2.0 * tf.atan(
1.0 / inverse_tan_half_vertical_field_of_view)
aspect_ratio = inverse_tan_half_vertical_field_of_view / projection_matrix[
..., 0, 0:1]
a = projection_matrix[..., 2, 2:3]
b = projection_matrix[..., 2, 3:4]
far = b / (a + 1.0)
near = (a + 1.0) / (a - 1.0) * far
return vertical_field_of_view, aspect_ratio, near, far
def right_handed(vertical_field_of_view, aspect_ratio, near, far, name=None):
"""Generates the matrix for a right handed perspective projection.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
vertical_field_of_view: A tensor of shape `[A1, ..., An, 1]`, where the last
dimension represents the vertical field of view of the frustum expressed
in radians. Note that values for `vertical_field_of_view` must be in the
range (0,pi).
aspect_ratio: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
stores the width over height ratio of the frustum. Note that values for
`aspect_ratio` must be non-negative.
near: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
captures the distance between the viewer and the near clipping plane. Note
that values for `near` must be non-negative.
far: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
captures the distance between the viewer and the far clipping plane. Note
that values for `far` must be greater than those of `near`.
name: A name for this op. Defaults to 'perspective_right_handed'.
Raises:
InvalidArgumentError: if any input contains data not in the specified range
of valid values.
ValueError: if all the inputs are not of the same shape.
Returns:
A tensor of shape `[A1, ..., An, 4, 4]`, containing matrices of right
handed perspective-view frustum.
"""
with tf.compat.v1.name_scope(
name, "perspective_right_handed",
[vertical_field_of_view, aspect_ratio, near, far]):
vertical_field_of_view = tf.convert_to_tensor(value=vertical_field_of_view)
aspect_ratio = tf.convert_to_tensor(value=aspect_ratio)
near = tf.convert_to_tensor(value=near)
far = tf.convert_to_tensor(value=far)
shape.check_static(
tensor=vertical_field_of_view,
tensor_name="vertical_field_of_view",
has_dim_equals=(-1, 1))
shape.check_static(
tensor=aspect_ratio, tensor_name="aspect_ratio", has_dim_equals=(-1, 1))
shape.check_static(tensor=near, tensor_name="near", has_dim_equals=(-1, 1))
shape.check_static(tensor=far, tensor_name="far", has_dim_equals=(-1, 1))
shape.compare_batch_dimensions(
tensors=(vertical_field_of_view, aspect_ratio, near, far),
last_axes=-2,
tensor_names=("vertical_field_of_view", "aspect_ratio", "near", "far"),
broadcast_compatible=False)
vertical_field_of_view = asserts.assert_all_in_range(
vertical_field_of_view, 0.0, math.pi, open_bounds=True)
aspect_ratio = asserts.assert_all_above(aspect_ratio, 0.0, open_bound=True)
near = asserts.assert_all_above(near, 0.0, open_bound=True)
far = asserts.assert_all_above(far, near, open_bound=True)
inverse_tan_half_vertical_field_of_view = 1.0 / tf.tan(
vertical_field_of_view * 0.5)
zero = tf.zeros_like(inverse_tan_half_vertical_field_of_view)
one = tf.ones_like(inverse_tan_half_vertical_field_of_view)
near_minus_far = near - far
matrix = tf.concat(
(inverse_tan_half_vertical_field_of_view / aspect_ratio, zero, zero,
zero, zero, inverse_tan_half_vertical_field_of_view, zero, zero, zero,
zero, (far + near) / near_minus_far, 2.0 * far * near / near_minus_far,
zero, zero, -one, zero),
axis=-1)
matrix_shape = tf.shape(input=matrix)
output_shape = tf.concat((matrix_shape[:-1], (4, 4)), axis=-1)
return tf.reshape(matrix, shape=output_shape)
def intrinsics_from_matrix(matrix, name=None):
r"""Extracts intrinsic parameters from a calibration matrix.
Extracts the focal length \\((f_x, f_y)\\) and the principal point
\\((c_x, c_y)\\) from a camera calibration matrix
$$
\mathbf{C} =
\begin{bmatrix}
f_x & 0 & c_x \\
0 & f_y & c_y \\
0 & 0 & 1 \\
\end{bmatrix}.
$$
Note:
In the following, A1 to An are optional batch dimensions.
Args:
matrix: A tensor of shape `[A1, ..., An, 3, 3]`, where the last two
dimensions represent a camera calibration matrix.
name: A name for this op that defaults to
"perspective_intrinsics_from_matrix".
Returns:
Tuple of two tensors, each one of shape `[A1, ..., An, 2]`. The first
tensor represents the focal length, and the second one the principal point.
Raises:
ValueError: If the shape of `matrix` is not supported.
"""
with tf.compat.v1.name_scope(name, "perspective_intrinsics_from_matrix",
[matrix]):
matrix = tf.convert_to_tensor(value=matrix)
shape.check_static(
tensor=matrix,
tensor_name="matrix",
has_rank_greater_than=1,
has_dim_equals=((-1, 3), (-2, 3)))
fx = matrix[..., 0, 0]
fy = matrix[..., 1, 1]
cx = matrix[..., 0, 2]
cy = matrix[..., 1, 2]
focal = tf.stack((fx, fy), axis=-1)
principal_point = tf.stack((cx, cy), axis=-1)
return focal, principal_point
def matrix_from_intrinsics(focal, principal_point, name=None):
r"""Builds calibration matrix from intrinsic parameters.
Builds the camera calibration matrix as
$$
\mathbf{C} =
\begin{bmatrix}
f_x & 0 & c_x \\
0 & f_y & c_y \\
0 & 0 & 1 \\
\end{bmatrix}
$$
from the focal length \\((f_x, f_y)\\) and the principal point
\\((c_x, c_y)\\).
Note:
In the following, A1 to An are optional batch dimensions.
Args:
focal: A tensor of shape `[A1, ..., An, 2]`, where the last dimension
represents a camera focal length.
principal_point: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension represents a camera principal point.
name: A name for this op that defaults to
"perspective_matrix_from_intrinsics".
Returns:
A tensor of shape `[A1, ..., An, 3, 3]`, where the last two dimensions
represent a camera calibration matrix.
Raises:
ValueError: If the shape of `focal`, or `principal_point` is not
supported.
"""
with tf.compat.v1.name_scope(name, "perspective_matrix_from_intrinsics",
[focal, principal_point]):
focal = tf.convert_to_tensor(value=focal)
principal_point = tf.convert_to_tensor(value=principal_point)
shape.check_static(
tensor=focal, tensor_name="focal", has_dim_equals=(-1, 2))
shape.check_static(
tensor=principal_point,
tensor_name="principal_point",
has_dim_equals=(-1, 2))
shape.compare_batch_dimensions(
tensors=(focal, principal_point),
tensor_names=("focal", "principal_point"),
last_axes=-2,
broadcast_compatible=False)
fx, fy = tf.unstack(focal, axis=-1)
cx, cy = tf.unstack(principal_point, axis=-1)
zero = tf.zeros_like(fx)
one = tf.ones_like(fx)
matrix = tf.stack((fx, zero, cx,
zero, fy, cy,
zero, zero, one),
axis=-1) # pyformat: disable
matrix_shape = tf.shape(input=matrix)
output_shape = tf.concat((matrix_shape[:-1], (3, 3)), axis=-1)
return tf.reshape(matrix, shape=output_shape)
def project(point_3d, focal, principal_point, name=None):
r"""Projects a 3d point onto the 2d camera plane.
Projects a 3d point \\((x, y, z)\\) to a 2d point \\((x', y')\\) onto the
image plane with
$$
\begin{matrix}
x' = \frac{f_x}{z}x + c_x, & y' = \frac{f_y}{z}y + c_y,
\end{matrix}
$$
where \\((f_x, f_y)\\) is the focal length and \\((c_x, c_y)\\) the principal
point.
Note:
In the following, A1 to An are optional batch dimensions that must be
broadcast compatible.
Args:
point_3d: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
represents a 3d point to project.
focal: A tensor of shape `[A1, ..., An, 2]`, where the last dimension
represents a camera focal length.
principal_point: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension represents a camera principal point.
name: A name for this op that defaults to "perspective_project".
Returns:
A tensor of shape `[A1, ..., An, 2]`, where the last dimension represents
a 2d point.
Raises:
ValueError: If the shape of `point_3d`, `focal`, or `principal_point` is not
supported.
"""
with tf.compat.v1.name_scope(name, "perspective_project",
[point_3d, focal, principal_point]):
point_3d = tf.convert_to_tensor(value=point_3d)
focal = tf.convert_to_tensor(value=focal)
principal_point = tf.convert_to_tensor(value=principal_point)
shape.check_static(
tensor=point_3d, tensor_name="point_3d", has_dim_equals=(-1, 3))
shape.check_static(
tensor=focal, tensor_name="focal", has_dim_equals=(-1, 2))
shape.check_static(
tensor=principal_point,
tensor_name="principal_point",
has_dim_equals=(-1, 2))
shape.compare_batch_dimensions(
tensors=(point_3d, focal, principal_point),
tensor_names=("point_3d", "focal", "principal_point"),
last_axes=-2,
broadcast_compatible=True)
point_2d, depth = tf.split(point_3d, (2, 1), axis=-1)
point_2d *= safe_ops.safe_signed_div(focal, depth)
point_2d += principal_point
return point_2d
def ray(point_2d, focal, principal_point, name=None):
r"""Computes the 3d ray for a 2d point (the z component of the ray is 1).
Computes the 3d ray \\((r_x, r_y, 1)\\) from the camera center to a 2d point
\\((x', y')\\) on the image plane with
$$
\begin{matrix}
r_x = \frac{(x' - c_x)}{f_x}, & r_y = \frac{(y' - c_y)}{f_y}, & z = 1,
\end{matrix}
$$
where \\((f_x, f_y)\\) is the focal length and \\((c_x, c_y)\\) the principal
point. The camera optical center is assumed to be at \\((0, 0, 0)\\).
Note:
In the following, A1 to An are optional batch dimensions that must be
broadcast compatible.
Args:
point_2d: A tensor of shape `[A1, ..., An, 2]`, where the last dimension
represents a 2d point.
focal: A tensor of shape `[A1, ..., An, 2]`, where the last dimension
represents a camera focal length.
principal_point: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension represents a camera principal point.
name: A name for this op that defaults to "perspective_ray".
Returns:
A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents
a 3d ray.
Raises:
ValueError: If the shape of `point_2d`, `focal`, or `principal_point` is not
supported.
"""
with tf.compat.v1.name_scope(name, "perspective_ray",
[point_2d, focal, principal_point]):
point_2d = tf.convert_to_tensor(value=point_2d)
focal = tf.convert_to_tensor(value=focal)
principal_point = tf.convert_to_tensor(value=principal_point)
shape.check_static(
tensor=point_2d, tensor_name="point_2d", has_dim_equals=(-1, 2))
shape.check_static(
tensor=focal, tensor_name="focal", has_dim_equals=(-1, 2))
shape.check_static(
tensor=principal_point,
tensor_name="principal_point",
has_dim_equals=(-1, 2))
shape.compare_batch_dimensions(
tensors=(point_2d, focal, principal_point),
tensor_names=("point_2d", "focal", "principal_point"),
last_axes=-2,
broadcast_compatible=True)
point_2d -= principal_point
point_2d = safe_ops.safe_signed_div(point_2d, focal)
padding = [[0, 0] for _ in point_2d.shape]
padding[-1][-1] = 1
return tf.pad(
tensor=point_2d, paddings=padding, mode="CONSTANT", constant_values=1.0)
def unproject(point_2d, depth, focal, principal_point, name=None):
r"""Unprojects a 2d point in 3d.
Unprojects a 2d point \\((x', y')\\) to a 3d point \\((x, y, z)\\) knowing the
depth \\(z\\) with
$$
\begin{matrix}
x = \frac{z (x' - c_x)}{f_x}, & y = \frac{z(y' - c_y)}{f_y}, & z = z,
\end{matrix}
$$
where \\((f_x, f_y)\\) is the focal length and \\((c_x, c_y)\\) the principal
point.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
point_2d: A tensor of shape `[A1, ..., An, 2]`, where the last dimension
represents a 2d point to unproject.
depth: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
represents the depth of a 2d point.
focal: A tensor of shape `[A1, ..., An, 2]`, where the last dimension
represents a camera focal length.
principal_point: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension represents a camera principal point.
name: A name for this op that defaults to "perspective_unproject".
Returns:
A tensor of shape `[A1, ..., An, 3]`, where the last dimension represents
a 3d point.
Raises:
ValueError: If the shape of `point_2d`, `depth`, `focal`, or
`principal_point` is not supported.
"""
with tf.compat.v1.name_scope(name, "perspective_unproject",
[point_2d, depth, focal, principal_point]):
point_2d = tf.convert_to_tensor(value=point_2d)
depth = tf.convert_to_tensor(value=depth)
focal = tf.convert_to_tensor(value=focal)
principal_point = tf.convert_to_tensor(value=principal_point)
shape.check_static(
tensor=point_2d, tensor_name="point_2d", has_dim_equals=(-1, 2))
shape.check_static(
tensor=depth, tensor_name="depth", has_dim_equals=(-1, 1))
shape.check_static(
tensor=focal, tensor_name="focal", has_dim_equals=(-1, 2))
shape.check_static(
tensor=principal_point,
tensor_name="principal_point",
has_dim_equals=(-1, 2))
shape.compare_batch_dimensions(
tensors=(point_2d, depth, focal, principal_point),
tensor_names=("point_2d", "depth", "focal", "principal_point"),
last_axes=-2,
broadcast_compatible=False)
point_2d -= principal_point
point_2d *= safe_ops.safe_signed_div(depth, focal)
return tf.concat((point_2d, depth), axis=-1)
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/opengl/math.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements math routines used by OpenGL."""
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import look_at
from tensorflow_graphics.math.interpolation import weighted
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def model_to_eye(point_model_space,
camera_position,
look_at_point,
up_vector,
name=None):
"""Transforms points from model to eye coordinates.
Note:
In the following, A1 to An are optional batch dimensions which must be
broadcast compatible.
Args:
point_model_space: A tensor of shape `[A1, ..., An, 3]`, where the last
dimension represents the 3D points in model space.
camera_position: A tensor of shape `[A1, ..., An, 3]`, where the last
dimension represents the 3D position of the camera.
look_at_point: A tensor of shape `[A1, ..., An, 3]`, with the last dimension
storing the position where the camera is looking at.
up_vector: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
defines the up vector of the camera.
name: A name for this op. Defaults to 'model_to_eye'.
Raises:
ValueError: if all the inputs are not of the same shape, or if any input
is of an unsupported shape.
Returns:
A tensor of shape `[A1, ..., An, 3]`, containing `point_model_space` in eye
coordinates.
"""
with tf.compat.v1.name_scope(
name, "model_to_eye",
[point_model_space, camera_position, look_at_point, up_vector]):
point_model_space = tf.convert_to_tensor(value=point_model_space)
camera_position = tf.convert_to_tensor(value=camera_position)
look_at_point = tf.convert_to_tensor(value=look_at_point)
up_vector = tf.convert_to_tensor(value=up_vector)
shape.check_static(
tensor=point_model_space,
tensor_name="point_model_space",
has_dim_equals=(-1, 3))
shape.compare_batch_dimensions(
tensors=(point_model_space, camera_position),
last_axes=-2,
tensor_names=("point_model_space", "camera_position"),
broadcast_compatible=True)
model_to_eye_matrix = look_at.right_handed(camera_position, look_at_point,
up_vector)
batch_shape = tf.shape(input=point_model_space)[:-1]
one = tf.ones(
shape=tf.concat((batch_shape, (1,)), axis=-1),
dtype=point_model_space.dtype)
point_model_space = tf.concat((point_model_space, one), axis=-1)
point_model_space = tf.expand_dims(point_model_space, axis=-1)
res = tf.squeeze(tf.matmul(model_to_eye_matrix, point_model_space), axis=-1)
return res[..., :-1]
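# --- Illustrative usage sketch (added for exposition; not part of the original module;
# the numbers are made up) ---
# With the camera placed on the +z axis looking at the origin, the origin ends up in
# front of the camera in eye space, i.e. close to (0, 0, -5) here.
def _example_model_to_eye():
  point_model_space = tf.constant([[0.0, 0.0, 0.0]])
  camera_position = tf.constant([[0.0, 0.0, 5.0]])
  look_at_point = tf.constant([[0.0, 0.0, 0.0]])
  up_vector = tf.constant([[0.0, 1.0, 0.0]])
  return model_to_eye(point_model_space, camera_position, look_at_point, up_vector)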
def eye_to_clip(point_eye_space,
vertical_field_of_view,
aspect_ratio,
near,
far,
name=None):
"""Transforms points from eye to clip space.
Note:
In the following, A1 to An are optional batch dimensions which must be
broadcast compatible.
Args:
point_eye_space: A tensor of shape `[A1, ..., An, 3]`, where the last
dimension represents the 3D points in eye coordinates.
vertical_field_of_view: A tensor of shape `[A1, ..., An, 1]`, where the last
dimension represents the vertical field of view of the frustum. Note that
values for `vertical_field_of_view` must be in the range (0,pi).
aspect_ratio: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
stores the width over height ratio of the frustum. Note that values for
`aspect_ratio` must be non-negative.
near: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
captures the distance between the viewer and the near clipping plane. Note
that values for `near` must be non-negative.
far: A tensor of shape `[A1, ..., An, 1]`, where the last dimension captures
the distance between the viewer and the far clipping plane. Note that
values for `far` must be non-negative.
name: A name for this op. Defaults to 'eye_to_clip'.
Raises:
ValueError: If any input is of an unsupported shape.
Returns:
A tensor of shape `[A1, ..., An, 4]`, containing `point_eye_space` in
homogeneous clip coordinates.
"""
with tf.compat.v1.name_scope(
name, "eye_to_clip",
[point_eye_space, vertical_field_of_view, aspect_ratio, near, far]):
point_eye_space = tf.convert_to_tensor(value=point_eye_space)
vertical_field_of_view = tf.convert_to_tensor(value=vertical_field_of_view)
aspect_ratio = tf.convert_to_tensor(value=aspect_ratio)
near = tf.convert_to_tensor(value=near)
far = tf.convert_to_tensor(value=far)
shape.check_static(
tensor=point_eye_space,
tensor_name="point_eye_space",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=vertical_field_of_view,
tensor_name="vertical_field_of_view",
has_dim_equals=(-1, 1))
shape.check_static(
tensor=aspect_ratio, tensor_name="aspect_ratio", has_dim_equals=(-1, 1))
shape.check_static(tensor=near, tensor_name="near", has_dim_equals=(-1, 1))
shape.check_static(tensor=far, tensor_name="far", has_dim_equals=(-1, 1))
shape.compare_batch_dimensions(
tensors=(point_eye_space, vertical_field_of_view, aspect_ratio, near,
far),
last_axes=-2,
tensor_names=("point_eye_space", "vertical_field_of_view",
"aspect_ratio", "near", "far"),
broadcast_compatible=True)
perspective_matrix = perspective.right_handed(vertical_field_of_view,
aspect_ratio, near, far)
batch_shape = tf.shape(input=point_eye_space)[:-1]
one = tf.ones(
shape=tf.concat((batch_shape, (1,)), axis=-1),
dtype=point_eye_space.dtype)
point_eye_space = tf.concat((point_eye_space, one), axis=-1)
point_eye_space = tf.expand_dims(point_eye_space, axis=-1)
return tf.squeeze(tf.matmul(perspective_matrix, point_eye_space), axis=-1)
def clip_to_ndc(point_clip_space, name=None):
"""Transforms points from clip to normalized device coordinates (ndc).
Note:
In the following, A1 to An are optional batch dimensions.
Args:
point_clip_space: A tensor of shape `[A1, ..., An, 4]`, where the last
dimension represents points in clip space.
name: A name for this op. Defaults to 'clip_to_ndc'.
Raises:
ValueError: If `point_clip_space` is not of size 4 in its last dimension.
Returns:
A tensor of shape `[A1, ..., An, 3]`, containing `point_clip_space` in
normalized device coordinates.
"""
with tf.compat.v1.name_scope(name, "clip_to_ndc", [point_clip_space]):
point_clip_space = tf.convert_to_tensor(value=point_clip_space)
shape.check_static(
tensor=point_clip_space,
tensor_name="point_clip_space",
has_dim_equals=(-1, 4))
w = point_clip_space[..., -1:]
return point_clip_space[..., :3] / w
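# --- Illustrative usage sketch (added for exposition; not part of the original module;
# the numbers are made up) ---
# The perspective divide above maps a homogeneous clip-space point onto the ndc cube.
def _example_clip_to_ndc():
  point_clip_space = tf.constant([[2.0, -2.0, 1.0, 2.0]])
  return clip_to_ndc(point_clip_space)  # [[1.0, -1.0, 0.5]]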
def ndc_to_screen(point_ndc_space,
lower_left_corner,
screen_dimensions,
near,
far,
name=None):
"""Transforms points from normalized device coordinates to screen coordinates.
Note:
In the following, A1 to An are optional batch dimensions which must be
broadcast compatible between `point_ndc_space` and the other variables.
Args:
point_ndc_space: A tensor of shape `[A1, ..., An, 3]`, where the last
dimension represents points in normalized device coordinates.
lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension captures the position (in pixels) of the lower left corner of
the screen.
screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension is expressed in pixels and captures the width and the height (in
pixels) of the screen.
near: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
captures the distance between the viewer and the near clipping plane. Note
that values for `near` must be non-negative.
far: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
captures the distance between the viewer and the far clipping plane. Note
that values for `far` must be greater than those of `near`.
name: A name for this op. Defaults to 'ndc_to_screen'.
Raises:
InvalidArgumentError: if any input contains data not in the specified range
of valid values.
ValueError: If any input is of an unsupported shape.
Returns:
A tensor of shape `[A1, ..., An, 3]`, containing `point_ndc_space` in
screen coordinates.
"""
with tf.compat.v1.name_scope(
name, "ndc_to_screen",
[point_ndc_space, lower_left_corner, screen_dimensions, near, far]):
point_ndc_space = tf.convert_to_tensor(value=point_ndc_space)
lower_left_corner = tf.convert_to_tensor(value=lower_left_corner)
screen_dimensions = tf.convert_to_tensor(value=screen_dimensions)
near = tf.convert_to_tensor(value=near)
far = tf.convert_to_tensor(value=far)
shape.check_static(
tensor=point_ndc_space,
tensor_name="point_ndc_space",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=lower_left_corner,
tensor_name="lower_left_corner",
has_dim_equals=(-1, 2))
shape.check_static(
tensor=screen_dimensions,
tensor_name="screen_dimensions",
has_dim_equals=(-1, 2))
shape.check_static(tensor=near, tensor_name="near", has_dim_equals=(-1, 1))
shape.check_static(tensor=far, tensor_name="far", has_dim_equals=(-1, 1))
shape.compare_batch_dimensions(
tensors=(lower_left_corner, screen_dimensions, near, far),
last_axes=-2,
tensor_names=("lower_left_corner", "screen_dimensions", "near", "far"),
broadcast_compatible=False)
shape.compare_batch_dimensions(
tensors=(point_ndc_space, near),
last_axes=-2,
tensor_names=("point_ndc_space", "near"),
broadcast_compatible=True)
screen_dimensions = asserts.assert_all_above(
screen_dimensions, 0.0, open_bound=True)
near = asserts.assert_all_above(near, 0.0, open_bound=True)
far = asserts.assert_all_above(far, near, open_bound=True)
ndc_to_screen_factor = tf.concat(
(screen_dimensions, far - near), axis=-1) / 2.0
screen_center = tf.concat(
(lower_left_corner + screen_dimensions / 2.0, (near + far) / 2.0),
axis=-1)
return ndc_to_screen_factor * point_ndc_space + screen_center
def model_to_screen(point_model_space,
model_to_eye_matrix,
perspective_matrix,
screen_dimensions,
lower_left_corner=(0.0, 0.0),
name=None):
"""Transforms points from model to screen coordinates.
Note:
Please refer to http://www.songho.ca/opengl/gl_transform.html for an
in-depth review of this pipeline.
Note:
In the following, A1 to An are optional batch dimensions which must be
broadcast compatible.
Args:
point_model_space: A tensor of shape `[A1, ..., An, 3]`, where the last
dimension represents the 3D points in model space.
model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
two dimension represent matrices to transform points from model to eye
coordinates.
perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
two dimension represent matrices to transform points from eye to clip
coordinates.
screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension is expressed in pixels and captures the width and the height (in
pixels) of the screen.
lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension captures the position (in pixels) of the lower left corner of
the screen.
name: A name for this op. Defaults to 'model_to_screen'.
Raises:
InvalidArgumentError: if any input contains data not in the specified range
of valid values.
ValueError: If any input is of an unsupported shape.
Returns:
A tuple of two tensors, respectively of shape `[A1, ..., An, 3]` and
`[A1, ..., An, 1]`, where the first tensor containing the projection of
`point_model_space` in screen coordinates, and the second represents the 'w'
component of `point_model_space` in clip space.
"""
with tf.compat.v1.name_scope(name, "model_to_screen", [
point_model_space, model_to_eye_matrix, perspective_matrix,
screen_dimensions, lower_left_corner
]):
point_model_space = tf.convert_to_tensor(value=point_model_space)
model_to_eye_matrix = tf.convert_to_tensor(value=model_to_eye_matrix)
perspective_matrix = tf.convert_to_tensor(value=perspective_matrix)
shape.check_static(
tensor=point_model_space,
tensor_name="point_model_space",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=model_to_eye_matrix,
tensor_name="model_to_eye_matrix",
has_dim_equals=((-1, 4), (-2, 4)))
shape.check_static(
tensor=perspective_matrix,
tensor_name="perspective_matrix",
has_dim_equals=((-1, 4), (-2, 4)))
shape.compare_batch_dimensions(
tensors=(point_model_space, model_to_eye_matrix, perspective_matrix),
last_axes=(-2, -3, -3),
tensor_names=("point_model_space", "model_to_eye_matrix",
"perspective_matrix"),
broadcast_compatible=True)
batch_shape = tf.shape(input=point_model_space)[:-1]
one = tf.ones(
shape=tf.concat((batch_shape, (1,)), axis=-1),
dtype=point_model_space.dtype)
point_model_space = tf.concat((point_model_space, one), axis=-1)
point_model_space = tf.expand_dims(point_model_space, axis=-1)
view_projection_matrix = tf.linalg.matmul(perspective_matrix,
model_to_eye_matrix)
_, _, near, far = perspective.parameters_from_right_handed(
perspective_matrix)
point_clip_space = tf.squeeze(
tf.matmul(view_projection_matrix, point_model_space), axis=-1)
point_ndc_space = clip_to_ndc(point_clip_space)
point_screen_space = ndc_to_screen(point_ndc_space, lower_left_corner,
screen_dimensions, near, far)
return point_screen_space, point_clip_space[..., 3:4]
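# Illustrative usage sketch, not part of the original module: a full
# model-to-screen projection of the origin as seen by a hypothetical camera.
# The camera pose, field of view and the 400x300 window are made-up
# assumptions; only the module's own imports (`tf`, `look_at`, `perspective`)
# are used.
def _example_model_to_screen():
  model_to_eye_matrix = look_at.right_handed(
      tf.constant([[0.0, 0.0, 3.0]]),   # camera position
      tf.constant([[0.0, 0.0, 0.0]]),   # look-at point
      tf.constant([[0.0, 1.0, 0.0]]))   # up vector
  perspective_matrix = perspective.right_handed(
      tf.constant([[0.8]]),             # vertical field of view (radians)
      tf.constant([[4.0 / 3.0]]),       # aspect ratio
      tf.constant([[1.0]]),             # near plane
      tf.constant([[10.0]]))            # far plane
  point_model_space = tf.constant([[0.0, 0.0, 0.0]])   # shape (1, 3)
  screen_dimensions = tf.constant([[400.0, 300.0]])    # shape (1, 2)
  lower_left_corner = tf.constant([[0.0, 0.0]])        # shape (1, 2)
  # The point projects to the window centre (200, 150, ...) and its clip-space
  # 'w' equals the eye-space depth, here 3.
  return model_to_screen(point_model_space, model_to_eye_matrix,
                         perspective_matrix, screen_dimensions,
                         lower_left_corner)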
def perspective_correct_barycentrics(triangle_vertices_model_space,
pixel_position,
model_to_eye_matrix,
perspective_matrix,
screen_dimensions,
lower_left_corner=(0.0, 0.0),
name=None):
"""Computes perspective correct barycentrics.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
triangle_vertices_model_space: A tensor of shape `[A1, ..., An, 3, 3]`,
where the last dimension represents the vertices of a triangle in model
space.
pixel_position: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension stores the position (in pixels) where the interpolation is
requested.
model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
two dimensions represent matrices to transform points from model to eye
coordinates.
perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
two dimensions represent matrices to transform points from eye to clip
coordinates.
screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension is expressed in pixels and captures the width and the height (in
pixels) of the screen.
lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension captures the position (in pixels) of the lower left corner of
the screen.
name: A name for this op. Defaults to 'perspective_correct_barycentrics'.
Raises:
InvalidArgumentError: if any input contains data not in the specified range
of valid values.
ValueError: If any input is of an unsupported shape.
Returns:
A tensor of shape `[A1, ..., An, 3]`, containing perspective correct
barycentric coordinates.
"""
with tf.compat.v1.name_scope(name, "perspective_correct_barycentrics", [
triangle_vertices_model_space, pixel_position, model_to_eye_matrix,
perspective_matrix, screen_dimensions, lower_left_corner
]):
pixel_position = tf.convert_to_tensor(value=pixel_position)
triangle_vertices_model_space = tf.convert_to_tensor(
value=triangle_vertices_model_space)
shape.check_static(
tensor=pixel_position,
tensor_name="pixel_position",
has_dim_equals=(-1, 2))
shape.check_static(
tensor=triangle_vertices_model_space,
tensor_name="triangle_vertices_model_space",
has_dim_equals=((-2, 3), (-1, 3)))
lower_left_corner = tf.convert_to_tensor(value=lower_left_corner)
screen_dimensions = tf.convert_to_tensor(value=screen_dimensions)
lower_left_corner = shape.add_batch_dimensions(
lower_left_corner,
"lower_left_corner",
model_to_eye_matrix.shape[:-2],
last_axis=-2)
screen_dimensions = shape.add_batch_dimensions(
screen_dimensions,
"screen_dimensions",
model_to_eye_matrix.shape[:-2],
last_axis=-2)
vertices_screen, vertices_w = model_to_screen(triangle_vertices_model_space,
model_to_eye_matrix,
perspective_matrix,
screen_dimensions,
lower_left_corner)
vertices_w = tf.squeeze(vertices_w, axis=-1)
pixel_position = tf.expand_dims(pixel_position, axis=-2)
barycentric_coordinates, _ = weighted.get_barycentric_coordinates(
vertices_screen[..., :2], pixel_position)
barycentric_coordinates = tf.squeeze(barycentric_coordinates, axis=-2)
coeffs = barycentric_coordinates / vertices_w
return tf.linalg.normalize(coeffs, ord=1, axis=-1)[0]
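# Illustrative usage sketch, not part of the original module: barycentrics of
# the window-centre pixel inside a single triangle. Camera, window size and
# triangle are made-up assumptions; because all three vertices sit at the same
# depth here, the perspective-corrected weights match the screen-space ones.
def _example_perspective_correct_barycentrics():
  model_to_eye_matrix = look_at.right_handed(
      tf.constant([[0.0, 0.0, 3.0]]),   # camera position
      tf.constant([[0.0, 0.0, 0.0]]),   # look-at point
      tf.constant([[0.0, 1.0, 0.0]]))   # up vector
  perspective_matrix = perspective.right_handed(
      tf.constant([[0.8]]), tf.constant([[4.0 / 3.0]]),
      tf.constant([[1.0]]), tf.constant([[10.0]]))
  triangle_vertices_model_space = tf.constant([[[-1.0, -1.0, 0.0],
                                                [1.0, -1.0, 0.0],
                                                [0.0, 1.0, 0.0]]])  # (1, 3, 3)
  pixel_position = tf.constant([[200.0, 150.0]])  # centre of a 400x300 window
  # Returns a (1, 3) tensor of non-negative weights that sum to one.
  return perspective_correct_barycentrics(
      triangle_vertices_model_space, pixel_position, model_to_eye_matrix,
      perspective_matrix, screen_dimensions=(400.0, 300.0))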
def interpolate_attributes(attribute, barycentric, name=None):
"""Interpolates attributes using barycentric weights.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
attribute: A tensor of shape `[A1, ..., An, 3, B]`, where the last dimension
stores a per-vertex `B`-dimensional attribute.
barycentric: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
contains barycentric coordinates.
name: A name for this op. Defaults to 'interpolate_attributes'.
Returns:
A tensor of shape `[A1, ..., An, B]`, containing interpolated attributes.
"""
with tf.compat.v1.name_scope(name, "interpolate_attributes",
(attribute, barycentric)):
attribute = tf.convert_to_tensor(value=attribute)
barycentric = tf.convert_to_tensor(value=barycentric)
shape.check_static(
tensor=attribute, tensor_name="attribute", has_dim_equals=(-2, 3))
shape.check_static(
tensor=barycentric, tensor_name="barycentric", has_dim_equals=(-1, 3))
shape.compare_batch_dimensions(
tensors=(attribute, barycentric),
last_axes=(-2, -1),
tensor_names=("attribute", "barycentric"),
broadcast_compatible=True)
barycentric = asserts.assert_normalized(barycentric, order=1)
return tf.reduce_sum(
input_tensor=tf.expand_dims(barycentric, axis=-1) * attribute, axis=-2)
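# Illustrative usage sketch, not part of the original module: blending three
# made-up per-vertex RGB colours with hand-picked barycentric weights.
def _example_interpolate_attributes():
  attribute = tf.constant([[[1.0, 0.0, 0.0],
                            [0.0, 1.0, 0.0],
                            [0.0, 0.0, 1.0]]])    # shape (1, 3, 3): RGB per vertex
  barycentric = tf.constant([[0.25, 0.25, 0.5]])  # shape (1, 3), sums to one
  # Expected output: [[0.25, 0.25, 0.5]].
  return interpolate_attributes(attribute, barycentric)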
def perspective_correct_interpolation(triangle_vertices_model_space,
attribute,
pixel_position,
model_to_eye_matrix,
perspective_matrix,
screen_dimensions,
lower_left_corner=(0.0, 0.0),
name=None):
"""Returns perspective corrected interpolation of attributes over triangles.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
triangle_vertices_model_space: A tensor of shape `[A1, ..., An, 3, 3]`,
where the last dimension represents the vertices of a triangle in model
space.
attribute: A tensor of shape `[A1, ..., An, 3, B]`, where the last dimension
stores a per-vertex `B`-dimensional attribute.
pixel_position: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension stores the position (in pixels) where the interpolation is
requested.
model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
two dimensions represent matrices to transform points from model to eye
coordinates.
perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
two dimensions represent matrices to transform points from eye to clip
coordinates.
screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension is expressed in pixels and captures the width and the height (in
pixels) of the screen.
lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension captures the position (in pixels) of the lower left corner of
the screen.
name: A name for this op. Defaults to 'perspective_correct_interpolation'.
Raises:
tf.errors.InvalidArgumentError: if any input contains data not in the
specified range of valid values.
ValueError: If any input is of an unsupported shape.
Returns:
A tensor of shape `[A1, ..., An, B]`, containing interpolated attributes.
"""
with tf.compat.v1.name_scope(name, "perspective_correct_interpolation", [
triangle_vertices_model_space, attribute, pixel_position,
model_to_eye_matrix, perspective_matrix, screen_dimensions,
lower_left_corner
]):
barycentric = perspective_correct_barycentrics(
triangle_vertices_model_space, pixel_position, model_to_eye_matrix,
perspective_matrix, screen_dimensions, lower_left_corner)
return interpolate_attributes(attribute, barycentric)
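# Illustrative usage sketch, not part of the original module: the composition
# of the two previous sketches, returning the attribute value seen at a pixel.
# Camera, window, triangle and colours are all made-up assumptions.
def _example_perspective_correct_interpolation():
  model_to_eye_matrix = look_at.right_handed(
      tf.constant([[0.0, 0.0, 3.0]]),   # camera position
      tf.constant([[0.0, 0.0, 0.0]]),   # look-at point
      tf.constant([[0.0, 1.0, 0.0]]))   # up vector
  perspective_matrix = perspective.right_handed(
      tf.constant([[0.8]]), tf.constant([[4.0 / 3.0]]),
      tf.constant([[1.0]]), tf.constant([[10.0]]))
  triangle = tf.constant([[[-1.0, -1.0, 0.0],
                           [1.0, -1.0, 0.0],
                           [0.0, 1.0, 0.0]]])     # (1, 3, 3)
  vertex_rgb = tf.constant([[[1.0, 0.0, 0.0],
                             [0.0, 1.0, 0.0],
                             [0.0, 0.0, 1.0]]])   # (1, 3, 3)
  pixel = tf.constant([[200.0, 150.0]])           # (1, 2)
  # Returns the (1, 3) colour interpolated at the pixel.
  return perspective_correct_interpolation(
      triangle, vertex_rgb, pixel, model_to_eye_matrix, perspective_matrix,
      screen_dimensions=(400.0, 300.0))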
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements math routines used by OpenGL."""
import tensorflow as tf
from tensorflow_graphics.geometry.transformation import look_at
from tensorflow_graphics.math.interpolation import weighted
from tensorflow_graphics.rendering.camera import perspective
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def model_to_eye(point_model_space,
camera_position,
look_at_point,
up_vector,
name=None):
"""Transforms points from model to eye coordinates.
Note:
In the following, A1 to An are optional batch dimensions which must be
broadcast compatible.
Args:
point_model_space: A tensor of shape `[A1, ..., An, 3]`, where the last
dimension represents the 3D points in model space.
camera_position: A tensor of shape `[A1, ..., An, 3]`, where the last
dimension represents the 3D position of the camera.
look_at_point: A tensor of shape `[A1, ..., An, 3]`, with the last dimension
storing the position the camera is looking at.
up_vector: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
defines the up vector of the camera.
name: A name for this op. Defaults to 'model_to_eye'.
Raises:
ValueError: if all the inputs are not of the same shape, or if any input is
of an unsupported shape.
Returns:
A tensor of shape `[A1, ..., An, 3]`, containing `point_model_space` in eye
coordinates.
"""
with tf.compat.v1.name_scope(
name, "model_to_eye",
[point_model_space, camera_position, look_at_point, up_vector]):
point_model_space = tf.convert_to_tensor(value=point_model_space)
camera_position = tf.convert_to_tensor(value=camera_position)
look_at_point = tf.convert_to_tensor(value=look_at_point)
up_vector = tf.convert_to_tensor(value=up_vector)
shape.check_static(
tensor=point_model_space,
tensor_name="point_model_space",
has_dim_equals=(-1, 3))
shape.compare_batch_dimensions(
tensors=(point_model_space, camera_position),
last_axes=-2,
tensor_names=("point_model_space", "camera_position"),
broadcast_compatible=True)
model_to_eye_matrix = look_at.right_handed(camera_position, look_at_point,
up_vector)
batch_shape = tf.shape(input=point_model_space)[:-1]
one = tf.ones(
shape=tf.concat((batch_shape, (1,)), axis=-1),
dtype=point_model_space.dtype)
point_model_space = tf.concat((point_model_space, one), axis=-1)
point_model_space = tf.expand_dims(point_model_space, axis=-1)
res = tf.squeeze(tf.matmul(model_to_eye_matrix, point_model_space), axis=-1)
return res[..., :-1]
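# Illustrative usage sketch, not part of the original module: the camera pose
# below is a made-up assumption and only the module's own imports are used.
def _example_model_to_eye():
  point_model_space = tf.constant([[0.0, 0.0, 0.0]])  # the origin, shape (1, 3)
  camera_position = tf.constant([[0.0, 0.0, 3.0]])
  look_at_point = tf.constant([[0.0, 0.0, 0.0]])
  up_vector = tf.constant([[0.0, 1.0, 0.0]])
  # The camera looks down its own negative z axis, so the origin ends up three
  # units in front of it: approximately (0, 0, -3).
  return model_to_eye(point_model_space, camera_position, look_at_point,
                      up_vector)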
def eye_to_clip(point_eye_space,
vertical_field_of_view,
aspect_ratio,
near,
far,
name=None):
"""Transforms points from eye to clip space.
Note:
In the following, A1 to An are optional batch dimensions which must be
broadcast compatible.
Args:
point_eye_space: A tensor of shape `[A1, ..., An, 3]`, where the last
dimension represents the 3D points in eye coordinates.
vertical_field_of_view: A tensor of shape `[A1, ..., An, 1]`, where the last
dimension represents the vertical field of view of the frustum. Note that
values for `vertical_field_of_view` must be in the range ]0,pi[.
aspect_ratio: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
stores the width over height ratio of the frustum. Note that values for
`aspect_ratio` must be non-negative.
near: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
captures the distance between the viewer and the near clipping plane. Note
that values for `near` must be non-negative.
far: A tensor of shape `[A1, ..., An, 1]`, where the last dimension captures
the distance between the viewer and the far clipping plane. Note that
values for `far` must be non-negative.
name: A name for this op. Defaults to 'eye_to_clip'.
Raises:
ValueError: If any input is of an unsupported shape.
Returns:
A tensor of shape `[A1, ..., An, 4]`, containing `point_eye_space` in
homogeneous clip coordinates.
"""
with tf.compat.v1.name_scope(
name, "eye_to_clip",
[point_eye_space, vertical_field_of_view, aspect_ratio, near, far]):
point_eye_space = tf.convert_to_tensor(value=point_eye_space)
vertical_field_of_view = tf.convert_to_tensor(value=vertical_field_of_view)
aspect_ratio = tf.convert_to_tensor(value=aspect_ratio)
near = tf.convert_to_tensor(value=near)
far = tf.convert_to_tensor(value=far)
shape.check_static(
tensor=point_eye_space,
tensor_name="point_eye_space",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=vertical_field_of_view,
tensor_name="vertical_field_of_view",
has_dim_equals=(-1, 1))
shape.check_static(
tensor=aspect_ratio, tensor_name="aspect_ratio", has_dim_equals=(-1, 1))
shape.check_static(tensor=near, tensor_name="near", has_dim_equals=(-1, 1))
shape.check_static(tensor=far, tensor_name="far", has_dim_equals=(-1, 1))
shape.compare_batch_dimensions(
tensors=(point_eye_space, vertical_field_of_view, aspect_ratio, near,
far),
last_axes=-2,
tensor_names=("point_eye_space", "vertical_field_of_view",
"aspect_ratio", "near", "far"),
broadcast_compatible=True)
perspective_matrix = perspective.right_handed(vertical_field_of_view,
aspect_ratio, near, far)
batch_shape = tf.shape(input=point_eye_space)[:-1]
one = tf.ones(
shape=tf.concat((batch_shape, (1,)), axis=-1),
dtype=point_eye_space.dtype)
point_eye_space = tf.concat((point_eye_space, one), axis=-1)
point_eye_space = tf.expand_dims(point_eye_space, axis=-1)
return tf.squeeze(tf.matmul(perspective_matrix, point_eye_space), axis=-1)
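# Illustrative usage sketch, not part of the original module: the frustum
# parameters below are made-up assumptions.
def _example_eye_to_clip():
  point_eye_space = tf.constant([[0.0, 0.0, -3.0]])  # three units ahead, (1, 3)
  vertical_field_of_view = tf.constant([[0.8]])      # radians, shape (1, 1)
  aspect_ratio = tf.constant([[4.0 / 3.0]])          # shape (1, 1)
  near = tf.constant([[1.0]])                        # shape (1, 1)
  far = tf.constant([[10.0]])                        # shape (1, 1)
  # Returns homogeneous clip coordinates of shape (1, 4); for this right-handed
  # projection the 'w' component equals the positive eye-space depth, here 3.
  return eye_to_clip(point_eye_space, vertical_field_of_view, aspect_ratio,
                     near, far)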
def clip_to_ndc(point_clip_space, name=None):
"""Transforms points from clip to normalized device coordinates (ndc).
Note:
In the following, A1 to An are optional batch dimensions.
Args:
point_clip_space: A tensor of shape `[A1, ..., An, 4]`, where the last
dimension represents points in clip space.
name: A name for this op. Defaults to 'clip_to_ndc'.
Raises:
ValueError: If `point_clip_space` is not of size 4 in its last dimension.
Returns:
A tensor of shape `[A1, ..., An, 3]`, containing `point_clip_space` in
normalized device coordinates.
"""
with tf.compat.v1.name_scope(name, "clip_to_ndc", [point_clip_space]):
point_clip_space = tf.convert_to_tensor(value=point_clip_space)
shape.check_static(
tensor=point_clip_space,
tensor_name="point_clip_space",
has_dim_equals=(-1, 4))
w = point_clip_space[..., -1:]
return point_clip_space[..., :3] / w
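# Illustrative usage sketch, not part of the original module: the perspective
# divide by the 'w' component, using made-up clip-space values.
def _example_clip_to_ndc():
  point_clip_space = tf.constant([[2.0, -1.0, 0.5, 2.0]])  # shape (1, 4)
  # Expected output: [[1.0, -0.5, 0.25]] after division by w = 2.
  return clip_to_ndc(point_clip_space)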
def ndc_to_screen(point_ndc_space,
lower_left_corner,
screen_dimensions,
near,
far,
name=None):
"""Transforms points from normalized device coordinates to screen coordinates.
Note:
In the following, A1 to An are optional batch dimensions which must be
broadcast compatible between `point_ndc_space` and the other variables.
Args:
point_ndc_space: A tensor of shape `[A1, ..., An, 3]`, where the last
dimension represents points in normalized device coordinates.
lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension captures the position (in pixels) of the lower left corner of
the screen.
screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension is expressed in pixels and captures the width and the height (in
pixels) of the screen.
near: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
captures the distance between the viewer and the near clipping plane. Note
that values for `near` must be non-negative.
far: A tensor of shape `[A1, ..., An, 1]`, where the last dimension
captures the distance between the viewer and the far clipping plane. Note
that values for `far` must be greater than those of `near`.
name: A name for this op. Defaults to 'ndc_to_screen'.
Raises:
InvalidArgumentError: if any input contains data not in the specified range
of valid values.
ValueError: If any input is of an unsupported shape.
Returns:
A tensor of shape `[A1, ..., An, 3]`, containing `point_ndc_space` in
screen coordinates.
"""
with tf.compat.v1.name_scope(
name, "ndc_to_screen",
[point_ndc_space, lower_left_corner, screen_dimensions, near, far]):
point_ndc_space = tf.convert_to_tensor(value=point_ndc_space)
lower_left_corner = tf.convert_to_tensor(value=lower_left_corner)
screen_dimensions = tf.convert_to_tensor(value=screen_dimensions)
near = tf.convert_to_tensor(value=near)
far = tf.convert_to_tensor(value=far)
shape.check_static(
tensor=point_ndc_space,
tensor_name="point_ndc_space",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=lower_left_corner,
tensor_name="lower_left_corner",
has_dim_equals=(-1, 2))
shape.check_static(
tensor=screen_dimensions,
tensor_name="screen_dimensions",
has_dim_equals=(-1, 2))
shape.check_static(tensor=near, tensor_name="near", has_dim_equals=(-1, 1))
shape.check_static(tensor=far, tensor_name="far", has_dim_equals=(-1, 1))
shape.compare_batch_dimensions(
tensors=(lower_left_corner, screen_dimensions, near, far),
last_axes=-2,
tensor_names=("lower_left_corner", "screen_dimensions", "near", "far"),
broadcast_compatible=False)
shape.compare_batch_dimensions(
tensors=(point_ndc_space, near),
last_axes=-2,
tensor_names=("point_ndc_space", "near"),
broadcast_compatible=True)
screen_dimensions = asserts.assert_all_above(
screen_dimensions, 0.0, open_bound=True)
near = asserts.assert_all_above(near, 0.0, open_bound=True)
far = asserts.assert_all_above(far, near, open_bound=True)
ndc_to_screen_factor = tf.concat(
(screen_dimensions, far - near), axis=-1) / 2.0
screen_center = tf.concat(
(lower_left_corner + screen_dimensions / 2.0, (near + far) / 2.0),
axis=-1)
return ndc_to_screen_factor * point_ndc_space + screen_center
def model_to_screen(point_model_space,
model_to_eye_matrix,
perspective_matrix,
screen_dimensions,
lower_left_corner=(0.0, 0.0),
name=None):
"""Transforms points from model to screen coordinates.
Note:
Please refer to http://www.songho.ca/opengl/gl_transform.html for an
in-depth review of this pipeline.
Note:
In the following, A1 to An are optional batch dimensions which must be
broadcast compatible.
Args:
point_model_space: A tensor of shape `[A1, ..., An, 3]`, where the last
dimension represents the 3D points in model space.
model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
two dimensions represent matrices to transform points from model to eye
coordinates.
perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
two dimensions represent matrices to transform points from eye to clip
coordinates.
screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension is expressed in pixels and captures the width and the height (in
pixels) of the screen.
lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension captures the position (in pixels) of the lower left corner of
the screen.
name: A name for this op. Defaults to 'model_to_screen'.
Raises:
InvalidArgumentError: if any input contains data not in the specified range
of valid values.
ValueError: If any input is of an unsupported shape.
Returns:
A tuple of two tensors, respectively of shape `[A1, ..., An, 3]` and
`[A1, ..., An, 1]`, where the first tensor contains the projection of
`point_model_space` in screen coordinates, and the second represents the 'w'
component of `point_model_space` in clip space.
"""
with tf.compat.v1.name_scope(name, "model_to_screen", [
point_model_space, model_to_eye_matrix, perspective_matrix,
screen_dimensions, lower_left_corner
]):
point_model_space = tf.convert_to_tensor(value=point_model_space)
model_to_eye_matrix = tf.convert_to_tensor(value=model_to_eye_matrix)
perspective_matrix = tf.convert_to_tensor(value=perspective_matrix)
shape.check_static(
tensor=point_model_space,
tensor_name="point_model_space",
has_dim_equals=(-1, 3))
shape.check_static(
tensor=model_to_eye_matrix,
tensor_name="model_to_eye_matrix",
has_dim_equals=((-1, 4), (-2, 4)))
shape.check_static(
tensor=perspective_matrix,
tensor_name="perspective_matrix",
has_dim_equals=((-1, 4), (-2, 4)))
shape.compare_batch_dimensions(
tensors=(point_model_space, model_to_eye_matrix, perspective_matrix),
last_axes=(-2, -3, -3),
tensor_names=("point_model_space", "model_to_eye_matrix",
"perspective_matrix"),
broadcast_compatible=True)
batch_shape = tf.shape(input=point_model_space)[:-1]
one = tf.ones(
shape=tf.concat((batch_shape, (1,)), axis=-1),
dtype=point_model_space.dtype)
point_model_space = tf.concat((point_model_space, one), axis=-1)
point_model_space = tf.expand_dims(point_model_space, axis=-1)
view_projection_matrix = tf.linalg.matmul(perspective_matrix,
model_to_eye_matrix)
_, _, near, far = perspective.parameters_from_right_handed(
perspective_matrix)
point_clip_space = tf.squeeze(
tf.matmul(view_projection_matrix, point_model_space), axis=-1)
point_ndc_space = clip_to_ndc(point_clip_space)
point_screen_space = ndc_to_screen(point_ndc_space, lower_left_corner,
screen_dimensions, near, far)
return point_screen_space, point_clip_space[..., 3:4]
def perspective_correct_barycentrics(triangle_vertices_model_space,
pixel_position,
model_to_eye_matrix,
perspective_matrix,
screen_dimensions,
lower_left_corner=(0.0, 0.0),
name=None):
"""Computes perspective correct barycentrics.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
triangle_vertices_model_space: A tensor of shape `[A1, ..., An, 3, 3]`,
where the last dimension represents the vertices of a triangle in model
space.
pixel_position: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension stores the position (in pixels) where the interpolation is
requested.
model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
two dimensions represent matrices to transform points from model to eye
coordinates.
perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
two dimensions represent matrices to transform points from eye to clip
coordinates.
screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension is expressed in pixels and captures the width and the height (in
pixels) of the screen.
lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension captures the position (in pixels) of the lower left corner of
the screen.
name: A name for this op. Defaults to 'perspective_correct_barycentrics'.
Raises:
InvalidArgumentError: if any input contains data not in the specified range
of valid values.
ValueError: If any input is of an unsupported shape.
Returns:
A tensor of shape `[A1, ..., An, 3]`, containing perspective correct
barycentric coordinates.
"""
with tf.compat.v1.name_scope(name, "perspective_correct_barycentrics", [
triangle_vertices_model_space, pixel_position, model_to_eye_matrix,
perspective_matrix, screen_dimensions, lower_left_corner
]):
pixel_position = tf.convert_to_tensor(value=pixel_position)
triangle_vertices_model_space = tf.convert_to_tensor(
value=triangle_vertices_model_space)
shape.check_static(
tensor=pixel_position,
tensor_name="pixel_position",
has_dim_equals=(-1, 2))
shape.check_static(
tensor=triangle_vertices_model_space,
tensor_name="triangle_vertices_model_space",
has_dim_equals=((-2, 3), (-1, 3)))
lower_left_corner = tf.convert_to_tensor(value=lower_left_corner)
screen_dimensions = tf.convert_to_tensor(value=screen_dimensions)
lower_left_corner = shape.add_batch_dimensions(
lower_left_corner,
"lower_left_corner",
model_to_eye_matrix.shape[:-2],
last_axis=-2)
screen_dimensions = shape.add_batch_dimensions(
screen_dimensions,
"screen_dimensions",
model_to_eye_matrix.shape[:-2],
last_axis=-2)
vertices_screen, vertices_w = model_to_screen(triangle_vertices_model_space,
model_to_eye_matrix,
perspective_matrix,
screen_dimensions,
lower_left_corner)
vertices_w = tf.squeeze(vertices_w, axis=-1)
pixel_position = tf.expand_dims(pixel_position, axis=-2)
barycentric_coordinates, _ = weighted.get_barycentric_coordinates(
vertices_screen[..., :2], pixel_position)
barycentric_coordinates = tf.squeeze(barycentric_coordinates, axis=-2)
coeffs = barycentric_coordinates / vertices_w
return tf.linalg.normalize(coeffs, ord=1, axis=-1)[0]
def interpolate_attributes(attribute, barycentric, name=None):
"""Interpolates attributes using barycentric weights.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
attribute: A tensor of shape `[A1, ..., An, 3, B]`, where the last dimension
stores a per-vertex `B`-dimensional attribute.
barycentric: A tensor of shape `[A1, ..., An, 3]`, where the last dimension
contains barycentric coordinates.
name: A name for this op. Defaults to 'interpolate_attributes'.
Returns:
A tensor of shape `[A1, ..., An, B]`, containing interpolated attributes.
"""
with tf.compat.v1.name_scope(name, "interpolate_attributes",
(attribute, barycentric)):
attribute = tf.convert_to_tensor(value=attribute)
barycentric = tf.convert_to_tensor(value=barycentric)
shape.check_static(
tensor=attribute, tensor_name="attribute", has_dim_equals=(-2, 3))
shape.check_static(
tensor=barycentric, tensor_name="barycentric", has_dim_equals=(-1, 3))
shape.compare_batch_dimensions(
tensors=(attribute, barycentric),
last_axes=(-2, -1),
tensor_names=("attribute", "barycentric"),
broadcast_compatible=True)
barycentric = asserts.assert_normalized(barycentric, order=1)
return tf.reduce_sum(
input_tensor=tf.expand_dims(barycentric, axis=-1) * attribute, axis=-2)
def perspective_correct_interpolation(triangle_vertices_model_space,
attribute,
pixel_position,
model_to_eye_matrix,
perspective_matrix,
screen_dimensions,
lower_left_corner=(0.0, 0.0),
name=None):
"""Returns perspective corrected interpolation of attributes over triangles.
Note:
In the following, A1 to An are optional batch dimensions.
Args:
triangle_vertices_model_space: A tensor of shape `[A1, ..., An, 3, 3]`,
where the last dimension represents the vertices of a triangle in model
space.
attribute: A tensor of shape `[A1, ..., An, 3, B]`, where the last dimension
stores a per-vertex `B`-dimensional attribute.
pixel_position: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension stores the position (in pixels) where the interpolation is
requested.
model_to_eye_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
two dimensions represent matrices to transform points from model to eye
coordinates.
perspective_matrix: A tensor of shape `[A1, ..., An, 4, 4]`, where the last
two dimensions represent matrices to transform points from eye to clip
coordinates.
screen_dimensions: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension is expressed in pixels and captures the width and the height (in
pixels) of the screen.
lower_left_corner: A tensor of shape `[A1, ..., An, 2]`, where the last
dimension captures the position (in pixels) of the lower left corner of
the screen.
name: A name for this op. Defaults to 'perspective_correct_interpolation'.
Raises:
tf.errors.InvalidArgumentError: if any input contains data not in the
specified range of valid values.
ValueError: If any input is of an unsupported shape.
Returns:
A tensor of shape `[A1, ..., An, B]`, containing interpolated attributes.
"""
with tf.compat.v1.name_scope(name, "perspective_correct_interpolation", [
triangle_vertices_model_space, attribute, pixel_position,
model_to_eye_matrix, perspective_matrix, screen_dimensions,
lower_left_corner
]):
barycentric = perspective_correct_barycentrics(
triangle_vertices_model_space, pixel_position, model_to_eye_matrix,
perspective_matrix, screen_dimensions, lower_left_corner)
return interpolate_attributes(attribute, barycentric)
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/geometry/representation/mesh/tests/normals_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for normals."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.representation.mesh import normals
from tensorflow_graphics.util import test_case
class MeshTest(test_case.TestCase):
@parameterized.parameters(
(((None, 3), (None, 3)), (tf.float32, tf.int32)),
(((3, 6, 3), (3, 5, 4)), (tf.float32, tf.int32)),
)
def test_gather_faces_exception_not_raised(self, shapes, dtypes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(normals.gather_faces, shapes, dtypes)
@parameterized.parameters(
("Not all batch dimensions are identical", (3, 5, 4, 4), (1, 2, 4, 4)),
("Not all batch dimensions are identical", (5, 4, 4), (1, 2, 4, 4)),
("Not all batch dimensions are identical", (3, 5, 4, 4), (2, 4, 4)),
("vertices must have a rank greater than 1", (4,), (1, 2, 4, 4)),
("indices must have a rank greater than 1", (3, 5, 4, 4), (4,)),
)
def test_gather_faces_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(normals.gather_faces, error_msg, shapes)
def test_gather_faces_jacobian_random(self):
"""Test the Jacobian of the face extraction function."""
tensor_size = np.random.randint(2, 5)
tensor_shape = np.random.randint(1, 5, size=tensor_size).tolist()
vertex_init = np.random.random(size=tensor_shape)
indices_init = np.random.randint(0, tensor_shape[-2], size=tensor_shape)
indices_tensor = tf.convert_to_tensor(value=indices_init)
def gather_faces(vertex_tensor):
return normals.gather_faces(vertex_tensor, indices_tensor)
self.assert_jacobian_is_correct_fn(gather_faces, [vertex_init])
@parameterized.parameters(
((((0.,), (1.,)), ((1, 0),)), ((((1.,), (0.,)),),)),
((((0., 1.), (2., 3.)), ((1, 0),)), ((((2., 3.), (0., 1.)),),)),
((((0., 1., 2.), (3., 4., 5.)), ((1, 0),)), ((((3., 4., 5.),
(0., 1., 2.)),),)),
)
def test_gather_faces_preset(self, test_inputs, test_outputs):
"""Tests the extraction of mesh faces."""
self.assert_output_is_correct(
normals.gather_faces, test_inputs, test_outputs, tile=False)
def test_gather_faces_random(self):
"""Tests the extraction of mesh faces."""
tensor_size = np.random.randint(3, 5)
tensor_shape = np.random.randint(1, 5, size=tensor_size).tolist()
vertices = np.random.random(size=tensor_shape)
indices = np.arange(tensor_shape[-2])
indices = indices.reshape([1] * (tensor_size - 1) + [-1])
indices = np.tile(indices, tensor_shape[:-2] + [1, 1])
expected = np.expand_dims(vertices, -3)
self.assertAllClose(
normals.gather_faces(vertices, indices), expected, rtol=1e-3)
@parameterized.parameters(
(((None, 4, 3),), (tf.float32,)),
(((4, 3),), (tf.float32,)),
(((3, 4, 3),), (tf.float32,)),
)
def test_face_normals_exception_not_raised(self, shapes, dtypes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(normals.face_normals, shapes, dtypes)
@parameterized.parameters(
("faces must have a rank greater than 1.", (3,)),
("faces must have greater than 2 dimensions in axis -2", (2, 3)),
("faces must have exactly 3 dimensions in axis -1.", (5, 2)),
)
def test_face_normals_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(normals.face_normals, error_msg, shapes)
def test_face_normals_jacobian_random(self):
"""Test the Jacobian of the face normals function."""
tensor_vertex_size = np.random.randint(1, 3)
tensor_out_shape = np.random.randint(1, 5, size=tensor_vertex_size)
tensor_out_shape = tensor_out_shape.tolist()
tensor_vertex_shape = list(tensor_out_shape)
tensor_vertex_shape[-1] *= 3
tensor_index_shape = tensor_out_shape[-1]
vertex_init = np.random.random(size=tensor_vertex_shape + [3])
index_init = np.arange(tensor_vertex_shape[-1])
np.random.shuffle(index_init)
index_init = np.reshape(index_init, newshape=[1] * \
(tensor_vertex_size - 1) + \
[tensor_index_shape, 3])
index_init = np.tile(index_init, tensor_vertex_shape[:-1] + [1, 1])
index_tensor = tf.convert_to_tensor(value=index_init)
def face_normals(vertex_tensor):
face_tensor = normals.gather_faces(vertex_tensor, index_tensor)
return normals.face_normals(face_tensor)
self.assert_jacobian_is_correct_fn(
face_normals, [vertex_init], atol=1e-4, delta=1e-9)
@parameterized.parameters(
((((0., 0., 0.), (1., 0., 0.), (0., 1., 0.)), ((0, 1, 2),)),
(((0., 0., 1.),),)),
((((0., 0., 0.), (0., 0., 1.), (1., 0., 0.)), ((0, 1, 2),)),
(((0., 1., 0.),),)),
((((0., 0., 0.), (0., 1., 0.), (0., 0., 1.)), ((0, 1, 2),)),
(((1., 0., 0.),),)),
((((0., -2., -2.), (0, -2., 2.), (0., 2., 2.), (0., 2., -2.)),
((0, 1, 2, 3),)), (((-1., 0., 0.),),)),
)
def test_face_normals_preset(self, test_inputs, test_outputs):
"""Tests the computation of mesh face normals."""
faces = normals.gather_faces(*test_inputs[:2])
test_inputs = [faces] + list(test_inputs[2:])
self.assert_output_is_correct(
normals.face_normals, test_inputs, test_outputs, tile=False)
def test_face_normals_random(self):
"""Tests the computation of mesh face normals in each axis."""
tensor_vertex_size = np.random.randint(1, 3)
tensor_out_shape = np.random.randint(1, 5, size=tensor_vertex_size)
tensor_out_shape = tensor_out_shape.tolist()
tensor_vertex_shape = list(tensor_out_shape)
tensor_vertex_shape[-1] *= 3
tensor_index_shape = tensor_out_shape[-1]
for i in range(3):
vertices = np.random.random(size=tensor_vertex_shape + [3])
indices = np.arange(tensor_vertex_shape[-1])
np.random.shuffle(indices)
indices = np.reshape(indices,
newshape=[1] * (tensor_vertex_size - 1) \
+ [tensor_index_shape, 3])
indices = np.tile(indices, tensor_vertex_shape[:-1] + [1, 1])
vertices[..., i] = 0.
expected = np.zeros(shape=tensor_out_shape + [3], dtype=vertices.dtype)
expected[..., i] = 1.
faces = normals.gather_faces(vertices, indices)
self.assertAllClose(
tf.abs(normals.face_normals(faces)), expected, rtol=1e-3)
@parameterized.parameters(
(((4, 3), (5, 3)), (tf.float32, tf.int32)),
(((None, 3), (None, 3)), (tf.float32, tf.int32)),
(((3, None, 3), (3, None, 5)), (tf.float32, tf.int32)),
(((3, 6, 3), (3, 5, 5)), (tf.float32, tf.int32)),
)
def test_vertex_normals_exception_not_raised(self, shapes, dtypes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(normals.vertex_normals, shapes, dtypes)
@parameterized.parameters(
("Not all batch dimensions are broadcast-compatible.", (3, 5, 4, 3),
(1, 2, 4, 3)),
("Not all batch dimensions are broadcast-compatible.", (2, 200, 3),
(4, 100, 3)),
("Not all batch dimensions are broadcast-compatible.", (5, 4, 3),
(1, 2, 4, 3)),
("Not all batch dimensions are broadcast-compatible.", (3, 5, 4, 3),
(2, 4, 3)),
("vertices must have a rank greater than 1.", (3,), (1, 2, 4, 3)),
("indices must have a rank greater than 1.", (3, 5, 4, 3), (3,)),
("vertices must have exactly 3 dimensions in axis -1.", (3, 5, 4, 2),
(3, 5, 4, 3)),
("indices must have greater than 2 dimensions in axis -1.", (3, 5, 4, 3),
(3, 5, 4, 2)),
("'indices' must have specified batch dimensions.", (None, 6, 3),
(None, 5, 5)),
)
def test_vertex_normals_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(normals.vertex_normals, error_msg, shapes)
def test_vertex_normals_jacobian_random(self):
"""Test the Jacobian of the vertex normals function."""
tensor_vertex_size = np.random.randint(1, 3)
tensor_out_shape = np.random.randint(1, 5, size=tensor_vertex_size)
tensor_out_shape = tensor_out_shape.tolist()
vertex_axis = np.array(((0., 0., 1), (1., 0., 0.), (0., 1., 0.),
(0., 0., -1.), (-1., 0., 0.), (0., -1., 0.)),
dtype=np.float32)
vertex_axis = vertex_axis.reshape([1] * tensor_vertex_size + [6, 3])
faces = np.array(((0, 1, 2), (0, 2, 4), (0, 4, 5), (0, 5, 1), (3, 2, 1),
(3, 4, 2), (3, 5, 4), (3, 1, 5)),
dtype=np.int32)
faces = faces.reshape([1] * tensor_vertex_size + [8, 3])
index_init = np.tile(faces, tensor_out_shape + [1, 1])
vertex_scale = np.random.uniform(0.5, 5., tensor_out_shape + [1] * 2)
vertex_init = vertex_axis * vertex_scale
index_tensor = tf.convert_to_tensor(value=index_init)
def vertex_normals(vertex_tensor):
return normals.vertex_normals(vertex_tensor, index_tensor)
self.assert_jacobian_is_correct_fn(vertex_normals, [vertex_init])
@parameterized.parameters(
(((((-1., -1., 1.), (-1., 1., 1.), (-1., -1., -1.), (-1., 1., -1.),
(1., -1., 1.), (1., 1., 1.), (1., -1., -1.), (1., 1., -1.)),),
(((1, 2, 0), (3, 6, 2), (7, 4, 6), (5, 0, 4), (6, 0, 2), (3, 5, 7),
(1, 3, 2), (3, 7, 6), (7, 5, 4), (5, 1, 0), (6, 4, 0), (3, 1, 5)),)),
((((-0.3333333134651184, -0.6666666269302368, 0.6666666269302368),
(-0.8164965510368347, 0.40824827551841736, 0.40824827551841736),
(-0.8164965510368347, -0.40824827551841736, -0.40824827551841736),
(-0.3333333134651184, 0.6666666269302368, -0.6666666269302368),
(0.8164965510368347, -0.40824827551841736, 0.40824827551841736),
(0.3333333134651184, 0.6666666269302368, 0.6666666269302368),
(0.3333333134651184, -0.6666666269302368, -0.6666666269302368),
(0.8164965510368347, 0.40824827551841736, -0.40824827551841736)),),)),
)
def test_vertex_normals_preset(self, test_inputs, test_outputs):
"""Tests the computation of vertex normals."""
self.assert_output_is_correct(
normals.vertex_normals, test_inputs, test_outputs, tile=False)
def test_vertex_normals_random(self):
"""Tests the computation of vertex normals for a regular octahedral."""
tensor_vertex_size = np.random.randint(1, 3)
tensor_out_shape = np.random.randint(1, 5, size=tensor_vertex_size)
tensor_out_shape = tensor_out_shape.tolist()
with self.subTest(name="triangular_faces"):
vertex_on_axes = np.array(((0., 0., 1), (1., 0., 0.), (0., 1., 0.),
(0., 0., -1.), (-1., 0., 0.), (0., -1., 0.)),
dtype=np.float32)
vertex_on_axes = vertex_on_axes.reshape([1] * tensor_vertex_size + [6, 3])
index_init = np.array(((0, 1, 2), (0, 2, 4), (0, 4, 5), (0, 5, 1),
(3, 2, 1), (3, 4, 2), (3, 5, 4), (3, 1, 5)),
dtype=np.int32)
index_init = index_init.reshape([1] * tensor_vertex_size + [8, 3])
index_init = np.tile(index_init, tensor_out_shape + [1, 1])
vertex_scale = np.random.uniform(0.5, 5., tensor_out_shape + [1] * 2)
vertex_init = vertex_on_axes * vertex_scale
expected = vertex_on_axes * (vertex_scale * 0. + 1.)
vertex_tensor = tf.convert_to_tensor(value=vertex_init)
index_tensor = tf.convert_to_tensor(value=index_init)
self.assertAllClose(
normals.vertex_normals(vertex_tensor, index_tensor), expected)
with self.subTest(name="polygon_faces"):
num_vertices = np.random.randint(4, 8)
poly_vertices = []
rad_step = np.pi * 2. / num_vertices
for i in range(num_vertices):
poly_vertices.append([np.cos(i * rad_step), np.sin(i * rad_step), 0])
vertex_init = np.array(poly_vertices, dtype=np.float32)
vertex_init = vertex_init.reshape([1] * tensor_vertex_size + [-1, 3])
vertex_init = vertex_init * vertex_scale
index_init = np.arange(num_vertices, dtype=np.int32)
index_init = index_init.reshape([1] * tensor_vertex_size + [1, -1])
index_init = np.tile(index_init, tensor_out_shape + [1, 1])
expected = np.array((0., 0., 1.), dtype=np.float32)
expected = expected.reshape([1] * tensor_vertex_size + [1, 3])
expected = np.tile(expected, tensor_out_shape + [num_vertices, 1])
vertex_tensor = tf.convert_to_tensor(value=vertex_init)
index_tensor = tf.convert_to_tensor(value=index_init)
self.assertAllClose(
normals.vertex_normals(vertex_tensor, index_tensor), expected)
if __name__ == "__main__":
test_case.main()
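# Illustrative note, not part of the original test file: outside the test
# harness the module under test can also be exercised directly, e.g. (all
# values below are made-up assumptions):
#
#   vertices = tf.constant([[0.0, 0.0, 0.0],
#                           [1.0, 0.0, 0.0],
#                           [0.0, 1.0, 0.0]])            # shape (3, 3)
#   indices = tf.constant([[0, 1, 2]], dtype=tf.int32)   # shape (1, 3)
#   normals.vertex_normals(vertices, indices)
#   # every vertex normal of this flat triangle is (0, 0, 1) up to the sign
#   # fixed by the winding convention.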
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Tests for normals."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.representation.mesh import normals
from tensorflow_graphics.util import test_case
class MeshTest(test_case.TestCase):
@parameterized.parameters(
(((None, 3), (None, 3)), (tf.float32, tf.int32)),
(((3, 6, 3), (3, 5, 4)), (tf.float32, tf.int32)),
)
def test_gather_faces_exception_not_raised(self, shapes, dtypes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(normals.gather_faces, shapes, dtypes)
@parameterized.parameters(
("Not all batch dimensions are identical", (3, 5, 4, 4), (1, 2, 4, 4)),
("Not all batch dimensions are identical", (5, 4, 4), (1, 2, 4, 4)),
("Not all batch dimensions are identical", (3, 5, 4, 4), (2, 4, 4)),
("vertices must have a rank greater than 1", (4,), (1, 2, 4, 4)),
("indices must have a rank greater than 1", (3, 5, 4, 4), (4,)),
)
def test_gather_faces_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(normals.gather_faces, error_msg, shapes)
def test_gather_faces_jacobian_random(self):
"""Test the Jacobian of the face extraction function."""
tensor_size = np.random.randint(2, 5)
tensor_shape = np.random.randint(1, 5, size=tensor_size).tolist()
vertex_init = np.random.random(size=tensor_shape)
indices_init = np.random.randint(0, tensor_shape[-2], size=tensor_shape)
indices_tensor = tf.convert_to_tensor(value=indices_init)
def gather_faces(vertex_tensor):
return normals.gather_faces(vertex_tensor, indices_tensor)
self.assert_jacobian_is_correct_fn(gather_faces, [vertex_init])
@parameterized.parameters(
((((0.,), (1.,)), ((1, 0),)), ((((1.,), (0.,)),),)),
((((0., 1.), (2., 3.)), ((1, 0),)), ((((2., 3.), (0., 1.)),),)),
((((0., 1., 2.), (3., 4., 5.)), ((1, 0),)), ((((3., 4., 5.),
(0., 1., 2.)),),)),
)
def test_gather_faces_preset(self, test_inputs, test_outputs):
"""Tests the extraction of mesh faces."""
self.assert_output_is_correct(
normals.gather_faces, test_inputs, test_outputs, tile=False)
def test_gather_faces_random(self):
"""Tests the extraction of mesh faces."""
tensor_size = np.random.randint(3, 5)
tensor_shape = np.random.randint(1, 5, size=tensor_size).tolist()
vertices = np.random.random(size=tensor_shape)
indices = np.arange(tensor_shape[-2])
indices = indices.reshape([1] * (tensor_size - 1) + [-1])
indices = np.tile(indices, tensor_shape[:-2] + [1, 1])
expected = np.expand_dims(vertices, -3)
self.assertAllClose(
normals.gather_faces(vertices, indices), expected, rtol=1e-3)
@parameterized.parameters(
(((None, 4, 3),), (tf.float32,)),
(((4, 3),), (tf.float32,)),
(((3, 4, 3),), (tf.float32,)),
)
def test_face_normals_exception_not_raised(self, shapes, dtypes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(normals.face_normals, shapes, dtypes)
@parameterized.parameters(
("faces must have a rank greater than 1.", (3,)),
("faces must have greater than 2 dimensions in axis -2", (2, 3)),
("faces must have exactly 3 dimensions in axis -1.", (5, 2)),
)
def test_face_normals_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(normals.face_normals, error_msg, shapes)
def test_face_normals_jacobian_random(self):
"""Test the Jacobian of the face normals function."""
tensor_vertex_size = np.random.randint(1, 3)
tensor_out_shape = np.random.randint(1, 5, size=tensor_vertex_size)
tensor_out_shape = tensor_out_shape.tolist()
tensor_vertex_shape = list(tensor_out_shape)
tensor_vertex_shape[-1] *= 3
tensor_index_shape = tensor_out_shape[-1]
vertex_init = np.random.random(size=tensor_vertex_shape + [3])
index_init = np.arange(tensor_vertex_shape[-1])
np.random.shuffle(index_init)
index_init = np.reshape(index_init, newshape=[1] * \
(tensor_vertex_size - 1) + \
[tensor_index_shape, 3])
index_init = np.tile(index_init, tensor_vertex_shape[:-1] + [1, 1])
index_tensor = tf.convert_to_tensor(value=index_init)
def face_normals(vertex_tensor):
face_tensor = normals.gather_faces(vertex_tensor, index_tensor)
return normals.face_normals(face_tensor)
self.assert_jacobian_is_correct_fn(
face_normals, [vertex_init], atol=1e-4, delta=1e-9)
@parameterized.parameters(
((((0., 0., 0.), (1., 0., 0.), (0., 1., 0.)), ((0, 1, 2),)),
(((0., 0., 1.),),)),
((((0., 0., 0.), (0., 0., 1.), (1., 0., 0.)), ((0, 1, 2),)),
(((0., 1., 0.),),)),
((((0., 0., 0.), (0., 1., 0.), (0., 0., 1.)), ((0, 1, 2),)),
(((1., 0., 0.),),)),
((((0., -2., -2.), (0, -2., 2.), (0., 2., 2.), (0., 2., -2.)),
((0, 1, 2, 3),)), (((-1., 0., 0.),),)),
)
def test_face_normals_preset(self, test_inputs, test_outputs):
"""Tests the computation of mesh face normals."""
faces = normals.gather_faces(*test_inputs[:2])
test_inputs = [faces] + list(test_inputs[2:])
self.assert_output_is_correct(
normals.face_normals, test_inputs, test_outputs, tile=False)
def test_face_normals_random(self):
"""Tests the computation of mesh face normals in each axis."""
tensor_vertex_size = np.random.randint(1, 3)
tensor_out_shape = np.random.randint(1, 5, size=tensor_vertex_size)
tensor_out_shape = tensor_out_shape.tolist()
tensor_vertex_shape = list(tensor_out_shape)
tensor_vertex_shape[-1] *= 3
tensor_index_shape = tensor_out_shape[-1]
for i in range(3):
vertices = np.random.random(size=tensor_vertex_shape + [3])
indices = np.arange(tensor_vertex_shape[-1])
np.random.shuffle(indices)
indices = np.reshape(indices,
newshape=[1] * (tensor_vertex_size - 1) \
+ [tensor_index_shape, 3])
indices = np.tile(indices, tensor_vertex_shape[:-1] + [1, 1])
vertices[..., i] = 0.
expected = np.zeros(shape=tensor_out_shape + [3], dtype=vertices.dtype)
expected[..., i] = 1.
faces = normals.gather_faces(vertices, indices)
self.assertAllClose(
tf.abs(normals.face_normals(faces)), expected, rtol=1e-3)
@parameterized.parameters(
(((4, 3), (5, 3)), (tf.float32, tf.int32)),
(((None, 3), (None, 3)), (tf.float32, tf.int32)),
(((3, None, 3), (3, None, 5)), (tf.float32, tf.int32)),
(((3, 6, 3), (3, 5, 5)), (tf.float32, tf.int32)),
)
def test_vertex_normals_exception_not_raised(self, shapes, dtypes):
"""Tests that the shape exceptions are not raised."""
self.assert_exception_is_not_raised(normals.vertex_normals, shapes, dtypes)
@parameterized.parameters(
("Not all batch dimensions are broadcast-compatible.", (3, 5, 4, 3),
(1, 2, 4, 3)),
("Not all batch dimensions are broadcast-compatible.", (2, 200, 3),
(4, 100, 3)),
("Not all batch dimensions are broadcast-compatible.", (5, 4, 3),
(1, 2, 4, 3)),
("Not all batch dimensions are broadcast-compatible.", (3, 5, 4, 3),
(2, 4, 3)),
("vertices must have a rank greater than 1.", (3,), (1, 2, 4, 3)),
("indices must have a rank greater than 1.", (3, 5, 4, 3), (3,)),
("vertices must have exactly 3 dimensions in axis -1.", (3, 5, 4, 2),
(3, 5, 4, 3)),
("indices must have greater than 2 dimensions in axis -1.", (3, 5, 4, 3),
(3, 5, 4, 2)),
("'indices' must have specified batch dimensions.", (None, 6, 3),
(None, 5, 5)),
)
def test_vertex_normals_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(normals.vertex_normals, error_msg, shapes)
def test_vertex_normals_jacobian_random(self):
"""Test the Jacobian of the vertex normals function."""
tensor_vertex_size = np.random.randint(1, 3)
tensor_out_shape = np.random.randint(1, 5, size=tensor_vertex_size)
tensor_out_shape = tensor_out_shape.tolist()
vertex_axis = np.array(((0., 0., 1), (1., 0., 0.), (0., 1., 0.),
(0., 0., -1.), (-1., 0., 0.), (0., -1., 0.)),
dtype=np.float32)
vertex_axis = vertex_axis.reshape([1] * tensor_vertex_size + [6, 3])
faces = np.array(((0, 1, 2), (0, 2, 4), (0, 4, 5), (0, 5, 1), (3, 2, 1),
(3, 4, 2), (3, 5, 4), (3, 1, 5)),
dtype=np.int32)
faces = faces.reshape([1] * tensor_vertex_size + [8, 3])
index_init = np.tile(faces, tensor_out_shape + [1, 1])
vertex_scale = np.random.uniform(0.5, 5., tensor_out_shape + [1] * 2)
vertex_init = vertex_axis * vertex_scale
index_tensor = tf.convert_to_tensor(value=index_init)
def vertex_normals(vertex_tensor):
return normals.vertex_normals(vertex_tensor, index_tensor)
self.assert_jacobian_is_correct_fn(vertex_normals, [vertex_init])
@parameterized.parameters(
(((((-1., -1., 1.), (-1., 1., 1.), (-1., -1., -1.), (-1., 1., -1.),
(1., -1., 1.), (1., 1., 1.), (1., -1., -1.), (1., 1., -1.)),),
(((1, 2, 0), (3, 6, 2), (7, 4, 6), (5, 0, 4), (6, 0, 2), (3, 5, 7),
(1, 3, 2), (3, 7, 6), (7, 5, 4), (5, 1, 0), (6, 4, 0), (3, 1, 5)),)),
((((-0.3333333134651184, -0.6666666269302368, 0.6666666269302368),
(-0.8164965510368347, 0.40824827551841736, 0.40824827551841736),
(-0.8164965510368347, -0.40824827551841736, -0.40824827551841736),
(-0.3333333134651184, 0.6666666269302368, -0.6666666269302368),
(0.8164965510368347, -0.40824827551841736, 0.40824827551841736),
(0.3333333134651184, 0.6666666269302368, 0.6666666269302368),
(0.3333333134651184, -0.6666666269302368, -0.6666666269302368),
(0.8164965510368347, 0.40824827551841736, -0.40824827551841736)),),)),
)
def test_vertex_normals_preset(self, test_inputs, test_outputs):
"""Tests the computation of vertex normals."""
self.assert_output_is_correct(
normals.vertex_normals, test_inputs, test_outputs, tile=False)
def test_vertex_normals_random(self):
"""Tests the computation of vertex normals for a regular octahedral."""
tensor_vertex_size = np.random.randint(1, 3)
tensor_out_shape = np.random.randint(1, 5, size=tensor_vertex_size)
tensor_out_shape = tensor_out_shape.tolist()
with self.subTest(name="triangular_faces"):
vertex_on_axes = np.array(((0., 0., 1), (1., 0., 0.), (0., 1., 0.),
(0., 0., -1.), (-1., 0., 0.), (0., -1., 0.)),
dtype=np.float32)
vertex_on_axes = vertex_on_axes.reshape([1] * tensor_vertex_size + [6, 3])
index_init = np.array(((0, 1, 2), (0, 2, 4), (0, 4, 5), (0, 5, 1),
(3, 2, 1), (3, 4, 2), (3, 5, 4), (3, 1, 5)),
dtype=np.int32)
index_init = index_init.reshape([1] * tensor_vertex_size + [8, 3])
index_init = np.tile(index_init, tensor_out_shape + [1, 1])
vertex_scale = np.random.uniform(0.5, 5., tensor_out_shape + [1] * 2)
vertex_init = vertex_on_axes * vertex_scale
expected = vertex_on_axes * (vertex_scale * 0. + 1.)
vertex_tensor = tf.convert_to_tensor(value=vertex_init)
index_tensor = tf.convert_to_tensor(value=index_init)
self.assertAllClose(
normals.vertex_normals(vertex_tensor, index_tensor), expected)
with self.subTest(name="polygon_faces"):
num_vertices = np.random.randint(4, 8)
poly_vertices = []
rad_step = np.pi * 2. / num_vertices
for i in range(num_vertices):
poly_vertices.append([np.cos(i * rad_step), np.sin(i * rad_step), 0])
vertex_init = np.array(poly_vertices, dtype=np.float32)
vertex_init = vertex_init.reshape([1] * tensor_vertex_size + [-1, 3])
vertex_init = vertex_init * vertex_scale
index_init = np.arange(num_vertices, dtype=np.int32)
index_init = index_init.reshape([1] * tensor_vertex_size + [1, -1])
index_init = np.tile(index_init, tensor_out_shape + [1, 1])
expected = np.array((0., 0., 1.), dtype=np.float32)
expected = expected.reshape([1] * tensor_vertex_size + [1, 3])
expected = np.tile(expected, tensor_out_shape + [num_vertices, 1])
vertex_tensor = tf.convert_to_tensor(value=vertex_init)
index_tensor = tf.convert_to_tensor(value=index_init)
self.assertAllClose(
normals.vertex_normals(vertex_tensor, index_tensor), expected)
if __name__ == "__main__":
test_case.main()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/notebooks/resources/triangulated_stripe.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Mesh of a flat rectangular surface."""
import numpy as np
vertices = (
(-1.8, 0.0, 0.0),
(-1.6, 0.0, 0.0),
(-1.4, 0.0, 0.0),
(-1.2, 0.0, 0.0),
(-1.0, 0.0, 0.0),
(-0.8, 0.0, 0.0),
(-0.6, 0.0, 0.0),
(-0.4, 0.0, 0.0),
(-0.2, 0.0, 0.0),
(0.0, 0.0, 0.0),
(0.2, 0.0, 0.0),
(0.4, 0.0, 0.0),
(0.6, 0.0, 0.0),
(0.8, 0.0, 0.0),
(1.0, 0.0, 0.0),
(1.2, 0.0, 0.0),
(1.4, 0.0, 0.0),
(1.6, 0.0, 0.0),
(1.8, 0.0, 0.0),
(-1.8, 1.0, 0.0),
(-1.6, 1.0, 0.0),
(-1.4, 1.0, 0.0),
(-1.2, 1.0, 0.0),
(-1.0, 1.0, 0.0),
(-0.8, 1.0, 0.0),
(-0.6, 1.0, 0.0),
(-0.4, 1.0, 0.0),
(-0.2, 1.0, 0.0),
(0.0, 1.0, 0.0),
(0.2, 1.0, 0.0),
(0.4, 1.0, 0.0),
(0.6, 1.0, 0.0),
(0.8, 1.0, 0.0),
(1.0, 1.0, 0.0),
(1.2, 1.0, 0.0),
(1.4, 1.0, 0.0),
(1.6, 1.0, 0.0),
(1.8, 1.0, 0.0),
)
vertices = np.array(vertices)
faces = (
(0, 1, 19),
(1, 20, 19),
(1, 2, 20),
(2, 21, 20),
(2, 3, 21),
(3, 22, 21),
(3, 4, 22),
(4, 23, 22),
(4, 5, 23),
(5, 24, 23),
(5, 6, 24),
(6, 25, 24),
(6, 7, 25),
(7, 26, 25),
(7, 8, 26),
(8, 27, 26),
(8, 9, 27),
(9, 28, 27),
(9, 10, 28),
(10, 29, 28),
(10, 11, 29),
(11, 30, 29),
(11, 12, 30),
(12, 31, 30),
(12, 13, 31),
(13, 32, 31),
(13, 14, 32),
(14, 33, 32),
(14, 15, 33),
(15, 34, 33),
(15, 16, 34),
(16, 35, 34),
(16, 17, 35),
(17, 36, 35),
(17, 18, 36),
(18, 37, 36),
)
faces = np.array(faces)
mesh = {'vertices': vertices, 'faces': faces}
| -1 |
| ./tensorflow_graphics/math/optimizer/tests/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
| -1 |
| ./tensorflow_graphics/geometry/representation/tests/ray_test.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
r"""Tests for ray."""
import sys
from absl import flags
from absl.testing import flagsaver
from absl.testing import parameterized
import numpy as np
import tensorflow as tf
from tensorflow_graphics.geometry.representation import ray
from tensorflow_graphics.util import test_case
FLAGS = flags.FLAGS
class RayTest(test_case.TestCase):
def _generate_random_example(self):
num_cameras = 4
num_keypoints = 3
batch_size = 2
self.points_values = np.random.random_sample((batch_size, num_keypoints, 3))
points_expanded_values = np.expand_dims(self.points_values, axis=-2)
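# The start and end points constructed below lie 0.5 units on either side of
# each ground-truth point along a random per-camera direction, so every ray
# passes exactly through the point and triangulation should recover it.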
startpoints_values = np.random.random_sample(
(batch_size, num_keypoints, num_cameras, 3))
difference = points_expanded_values - startpoints_values
difference_norm = np.sqrt((difference * difference).sum(axis=-1))
direction = difference / np.expand_dims(difference_norm, axis=-1)
self.startpoints_values = points_expanded_values - 0.5 * direction
self.endpoints_values = points_expanded_values + 0.5 * direction
self.weights_values = np.ones((batch_size, num_keypoints, num_cameras))
# Wrap these with identities because some assert_* ops look at the constant
# tensor values and mark these as unfeedable.
self.points = tf.identity(tf.convert_to_tensor(value=self.points_values))
self.startpoints = tf.identity(
tf.convert_to_tensor(value=self.startpoints_values))
self.endpoints = tf.identity(
tf.convert_to_tensor(value=self.endpoints_values))
self.weights = tf.identity(tf.convert_to_tensor(value=self.weights_values))
@parameterized.parameters(
("Not all batch dimensions are identical.", (4, 3), (5, 3), (4,)),
("must have exactly 3 dimensions in axis", (4, 2), (4, 2), (4,)),
("must have a rank greater than 1", (3,), (3,), (None,)),
("must have greater than 1 dimensions in axis -2", (1, 3), (1, 3), (1,)),
("Not all batch dimensions are identical.", (2, 4, 3), (2, 4, 3), (2, 5)),
)
def test_triangulate_exception_raised(self, error_msg, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_raised(ray.triangulate, error_msg, shapes)
@parameterized.parameters(
((4, 3), (4, 3), (4,)),
((5, 4, 3), (5, 4, 3), (5, 4)),
((6, 5, 4, 3), (6, 5, 4, 3), (6, 5, 4)),
)
def test_triangulate_exception_is_not_raised(self, *shapes):
"""Tests that the shape exceptions are properly raised."""
self.assert_exception_is_not_raised(ray.triangulate, shapes)
def test_triangulate_jacobian_is_correct(self):
"""Tests that Jacobian is correct."""
self._generate_random_example()
self.assert_jacobian_is_correct_fn(
lambda x: ray.triangulate(x, self.endpoints, self.weights),
[self.startpoints_values])
self.assert_jacobian_is_correct_fn(
lambda x: ray.triangulate(self.startpoints, x, self.weights),
[self.endpoints_values])
self.assert_jacobian_is_correct_fn(
lambda x: ray.triangulate(self.startpoints, self.endpoints, x),
[self.weights_values])
def test_triangulate_jacobian_is_finite(self):
"""Tests that Jacobian is finite."""
self._generate_random_example()
self.assert_jacobian_is_finite_fn(
lambda x: ray.triangulate(x, self.endpoints, self.weights),
[self.startpoints_values])
self.assert_jacobian_is_finite_fn(
lambda x: ray.triangulate(self.startpoints, x, self.weights),
[self.endpoints_values])
self.assert_jacobian_is_finite_fn(
lambda x: ray.triangulate(self.startpoints, self.endpoints, x),
[self.weights_values])
def test_triangulate_random(self):
"""Tests that original points are recovered by triangualtion."""
self._generate_random_example()
test_inputs = (self.startpoints, self.endpoints, self.weights)
test_outputs = (self.points_values,)
self.assert_output_is_correct(
ray.triangulate,
test_inputs,
test_outputs,
rtol=1e-05,
atol=1e-08,
tile=False)
def test_negative_weights_exception_raised(self):
"""Tests that exceptions are properly raised."""
self._generate_random_example()
self.weights = -1.0 * tf.ones_like(self.weights, dtype=tf.float64)
with self.assertRaises(tf.errors.InvalidArgumentError):
points = ray.triangulate(self.startpoints, self.endpoints, self.weights)
self.evaluate(points)
def test_less_that_two_nonzero_weights_exception_raised(self):
"""Tests that exceptions are properly raised."""
self._generate_random_example()
self.weights = tf.convert_to_tensor(
value=np.array([[[1., 1., 0., 0.], [1., 1., 0., 0.], [1., 1., 0., 0.]],
[[1., 1., 0., 0.], [1., 1., 0., 0.], [1., 0., 0., 0.]]],
dtype=np.float64))
with self.assertRaises(tf.errors.InvalidArgumentError):
points = ray.triangulate(self.startpoints, self.endpoints, self.weights)
self.evaluate(points)
@parameterized.parameters(
("must have exactly 3 dimensions in axis 0", (2,), (1,), (3,), (3,)),
("must have a rank of 1", (2, 3), (1,), (3,), (3,)),
("must have exactly 1 dimensions in axis 0", (3,), (2,), (3,), (3,)),
("must have a rank of 1", (3,), (2, 1), (3,), (3,)),
("must have exactly 3 dimensions in axis -1", (3,), (1,), (2,), (3,)),
("must have exactly 3 dimensions in axis -1", (3,), (1,), (3,), (2,)),
("Not all batch dimensions are identical.", (3,), (1,), (3,), (2, 3)),
)
def test_intersection_ray_sphere_shape_raised(self, error_msg, *shapes):
"""tests that exceptions are raised when shapes are not supported."""
self.assert_exception_is_raised(ray.intersection_ray_sphere, error_msg,
shapes)
@parameterized.parameters(
((3,), (1,), (3,), (3,)),
((3), (1), (None, 3), (None, 3)),
)
def test_intersection_ray_sphere_shape_not_raised(self, *shapes):
"""Tests that the shape exceptions are not raised on supported shapes."""
self.assert_exception_is_not_raised(ray.intersection_ray_sphere, shapes)
def test_intersection_ray_sphere_exception_raised(self):
"""Tests that exceptions are properly raised."""
sphere_center = np.random.uniform(size=(3,))
point_on_ray = np.random.uniform(size=(3,))
sample_ray = np.random.uniform(size=(3,))
normalized_sample_ray = sample_ray / np.linalg.norm(sample_ray, axis=-1)
positive_sphere_radius = np.random.uniform(
sys.float_info.epsilon, 1.0, size=(1,))
negative_sphere_radius = np.random.uniform(-1.0, 0.0, size=(1,))
with self.subTest(name="positive_radius"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
ray.intersection_ray_sphere(sphere_center, negative_sphere_radius,
normalized_sample_ray, point_on_ray))
with self.subTest(name="normalized_ray"):
with self.assertRaises(tf.errors.InvalidArgumentError):
self.evaluate(
ray.intersection_ray_sphere(sphere_center, positive_sphere_radius,
sample_ray, point_on_ray))
@flagsaver.flagsaver(tfg_add_asserts_to_graph=False)
def test_intersection_ray_sphere_jacobian_random(self):
"""Test the Jacobian of the intersection_ray_sphere function."""
tensor_size = np.random.randint(3)
tensor_shape = np.random.randint(1, 10, size=(tensor_size)).tolist()
sphere_center_init = np.random.uniform(0.0, 1.0, size=(3,))
sphere_radius_init = np.random.uniform(10.0, 11.0, size=(1,))
ray_init = np.random.uniform(size=tensor_shape + [3])
ray_init /= np.linalg.norm(ray_init, axis=-1, keepdims=True)
point_on_ray_init = np.random.uniform(0.0, 1.0, size=tensor_shape + [3])
def intersection_ray_sphere_position(sphere_center, sphere_radius,
input_ray, point_on_ray):
y_p, _ = ray.intersection_ray_sphere(sphere_center, sphere_radius,
input_ray, point_on_ray)
return y_p
def intersection_ray_sphere_normal(sphere_center, sphere_radius, input_ray,
point_on_ray):
_, y_n = ray.intersection_ray_sphere(sphere_center, sphere_radius,
input_ray, point_on_ray)
return y_n
self.assert_jacobian_is_correct_fn(
intersection_ray_sphere_position,
[sphere_center_init, sphere_radius_init, ray_init, point_on_ray_init])
self.assert_jacobian_is_correct_fn(
intersection_ray_sphere_normal,
[sphere_center_init, sphere_radius_init, ray_init, point_on_ray_init])
@parameterized.parameters(
(((0.0, 0.0, 3.0), (1.0,), (0.0, 0.0, 1.0), (0.0, 0.0, 0.0)),
(((0.0, 0.0, 2.0), (0.0, 0.0, 4.0)), ((0.0, 0.0, -1.0),
(0.0, 0.0, 1.0)))),
(((0.0, 0.0, 3.0), (1.0,), (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)),
(((1.0, 0.0, 3.0), (1.0, 0.0, 3.0)), ((1.0, 0.0, 0.0),
(1.0, 0.0, 0.0)))),
)
def test_intersection_ray_sphere_preset(self, test_inputs, test_outputs):
self.assert_output_is_correct(
ray.intersection_ray_sphere, test_inputs, test_outputs, tile=False)
if __name__ == "__main__":
test_case.main()
| -1 |
| ./tensorflow_graphics/tensorboard/mesh_visualizer/tf_mesh_dashboard/mesh-viewer.js | /* Copyright 2020 The TensorFlow Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
/**
* @fileoverview MeshViewer aims to provide 3D rendering capabilities.
*/
var vz_mesh;
(function(vz_mesh) {
class MeshViewer extends THREE.EventDispatcher {
/**
* MeshViewer constructor. Initializes the component and underlying objects.
* @param {string} runColor Run color to use when colors are absent.
*/
constructor(runColor) {
super();
/** @type {!THREE.Mesh} Last rendered mesh. */
this._lastMesh = null;
this._clock = new THREE.Clock();
/** @type {!Object} Contains width and height of the canvas. */
this._canvasSize = null;
this._runColor = runColor;
/** @type {!Object} Describes what layers must be rendered in addition
to a mesh or a point cloud layers. */
this._layersConfig = null;
}
// TODO(b/130030314) replace with some thirdparty library call.
/**
* Returns true if the specified value is an object.
* @param {?} val Variable to test.
* @private
* @return {boolean} Whether variable is an object.
*/
_isObject(val) {
var type = typeof val;
// We're interested in objects representing dictionaries only. Everything
// else is "not mergeable", so we consider it as primitive types.
return type == 'object' && val != null && !Array.isArray(val);
}
/**
* Merges two configs together.
* @param {!Object} userConfig User configuration has higher priority.
* @param {!Object} defaultConfig Default configuration has lower priority and
* will be overridden by any conflicting keys from userConfig.
* @private
* @return {!Object} Merged dictionary from two configuration dictionaries.
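* Example (illustrative values): _applyDefaults({size: 1}, {size: 2, color: 'red'})
* returns {size: 1, color: 'red'}.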
*/
_applyDefaults(userConfig, defaultConfig) {
let mergedConfig = {};
const configs = [userConfig, defaultConfig];
for (let i = 0; i < configs.length; i++) {
const config = configs[i];
for (let key in config) {
const is_key_present = key in mergedConfig;
if (this._isObject(config[key])) {
mergedConfig[key] =
this._applyDefaults(mergedConfig[key] || {}, config[key]);
} else if (!is_key_present) {
mergedConfig[key] = config[key];
}
}
}
return mergedConfig;
}
/**
* Creates additional layers to render on top of a mesh or a point cloud
* layers.
* @private
*/
_createLayers() {
if (!this._layersConfig || !this._scene || !this._lastMesh) return;
if (this._layersConfig.showBoundingBox) {
var box = new THREE.BoxHelper(this._lastMesh, "rgb(0, 0, 255)");
this._scene.add(box);
}
if (this._layersConfig.showAxes) {
var axesHelper = new THREE.AxesHelper(5);
this._scene.add(axesHelper);
}
}
/**
* Sets layers config.
* @param {!Object} layersConfig Config object describing what layers should
* be rendered.
*/
setLayersConfig(layersConfig) {
this._layersConfig = this._applyDefaults(
layersConfig, this._layersConfig || {});
}
/**
* Creates scene, camera and renderer.
* @param {!Object} config Scene rendering configuration.
* @param {!HTMLDOMElement} domElement The HTML element used for event listeners.
* @private
*/
_createWorld(config, domElement) {
if (this.isReady()) { // keep world objects as singleton objects.
return;
}
this._scene = new THREE.Scene();
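// Instantiate the camera class named in the config string, e.g. the default
// 'PerspectiveCamera' resolves to THREE.PerspectiveCamera.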
var camera = new THREE[config.camera.cls](
config.camera.fov, this._canvasSize.width / this._canvasSize.height,
config.camera.near, config.camera.far);
this._camera = camera;
var camControls = new THREE.OrbitControls(camera, domElement);
camControls.lookSpeed = 0.4;
camControls.movementSpeed = 20;
camControls.noFly = true;
camControls.lookVertical = true;
camControls.constrainVertical = true;
camControls.verticalMin = 1.0;
camControls.verticalMax = 2.0;
camControls.addEventListener(
'change', this._onCameraPositionChange.bind(this));
this._cameraControls = camControls;
this._renderer = new THREE.WebGLRenderer({antialias: true});
this._renderer.setPixelRatio(window.devicePixelRatio);
this._renderer.setSize(this._canvasSize.width, this._canvasSize.height);
this._renderer.setClearColor(0xffffff, 1);
}
/**
* Clears scene from any 3D geometry.
*/
_clearScene() {
while (this._scene.children.length > 0) {
this._scene.remove(this._scene.children[0]);
}
}
/**
* Returns underlying renderer.
* @public
*/
getRenderer() {
return this._renderer;
}
/**
* Returns underlying camera controls.
* @public
*/
getCameraControls() {
return this._cameraControls;
}
/**
* Returns true when all underlying components were initialized.
* @public
*/
isReady() {
return !!this._camera && !!this._cameraControls;
}
/**
* Returns current camera position.
* @public
*/
getCameraPosition() {
return {
far: this._camera.far,
position: this._camera.position.clone(),
target: this._cameraControls.target.clone()
};
}
/**
* Sets new canvas size.
* @param {!Object} canvasSize Contains current canvas width and height.
* @public
*/
setCanvasSize(canvasSize) {
this._canvasSize = canvasSize;
}
/**
* Renders component into the browser.
* @public
*/
draw() {
// Cancel any previous requests to perform redraw.
if (this._animationFrameIndex) {
cancelAnimationFrame(this._animationFrameIndex);
}
this._camera.aspect = this._canvasSize.width / this._canvasSize.height;
this._camera.updateProjectionMatrix();
this._renderer.setSize(this._canvasSize.width, this._canvasSize.height);
const animate = function () {
var delta = this._clock.getDelta();
this._cameraControls.update(delta);
this._animationFrameIndex = requestAnimationFrame(animate);
this._renderer.render(this._scene, this._camera);
}.bind(this);
animate();
}
/**
* Updates the scene.
* @param {!Object} currentStep Step datum.
* @param {!HTMLDOMElement} domElement The HTML element used for event listeners.
* @public
*/
updateScene(currentStep, domElement) {
let config = {};
if ('config' in currentStep && currentStep.config) {
config = JSON.parse(currentStep.config);
}
// This event is an opportunity for UI-responsible component (parent) to set
// proper canvas size.
this.dispatchEvent({type:'beforeUpdateScene'});
const default_config = {
camera: {cls: 'PerspectiveCamera', fov: 75, near: 0.1, far: 1000},
lights: [
{cls: 'AmbientLight', color: '#ffffff', intensity: 0.75}, {
cls: 'DirectionalLight',
color: '#ffffff',
intensity: 0.75,
position: [0, -1, 2]
}
]
};
config = this._applyDefaults(config, default_config);
this._createWorld(config, domElement);
this._clearScene();
this._createLights(this._scene, config);
this._createGeometry(currentStep, config);
this._createLayers();
this.draw();
}
/**
* Sets camera to default position and zoom.
* @param {?THREE.Mesh} mesh Mesh to fit into viewport.
* @public
*/
resetView(mesh) {
if (!this.isReady()) return;
this._cameraControls.reset();
if (!mesh && this._lastMesh) {
mesh = this._lastMesh;
}
if (mesh) {
this._fitObjectToViewport(mesh);
// Store the last mesh in case resetView is called later due to some event.
this._lastMesh = mesh;
}
this._cameraControls.update();
}
/**
* Creates geometry for current step data.
* @param {!Object} currentStep Step datum.
* @param {!Object} config Scene rendering configuration.
* @private
*/
_createGeometry(currentStep, config) {
const mesh = currentStep.mesh;
if (mesh.vertices && mesh.faces && mesh.faces.length) {
this._createMesh(mesh, config);
} else {
this._createPointCloud(mesh, config);
}
}
/**
* Creates point cloud geometry for current step data.
* @param {!Object} pointCloudData Object with point cloud data.
* @param {!Object} config Scene rendering configuration.
* @private
*/
_createPointCloud(pointCloudData, config) {
const points = pointCloudData.vertices;
const colors = pointCloudData.colors;
let defaultConfig = {
material: {
cls: 'PointsMaterial', size: 0.005
}
};
// Determine what colors will be used.
if (colors && colors.length == points.length) {
defaultConfig.material['vertexColors'] = THREE.VertexColors;
} else {
defaultConfig.material['color'] = this._runColor;
}
const pc_config = this._applyDefaults(config, defaultConfig);
var geometry = new THREE.Geometry();
points.forEach(function(point) {
var p = new THREE.Vector3(point[0], point[1], point[2]);
const scale = 1.;
p.x = point[0] * scale;
p.y = point[1] * scale;
p.z = point[2] * scale;
geometry.vertices.push(p);
});
if (colors && colors.length == points.length) {
colors.forEach(function (color) {
const c = new THREE.Color(
color[0] / 255., color[1] / 255., color[2] / 255.);
geometry.colors.push(c);
});
}
var material = new THREE[pc_config.material.cls](pc_config.material);
var mesh = new THREE.Points(geometry, material);
this._scene.add(mesh);
this._lastMesh = mesh;
}
/**
* Sets the camera position, far plane and look-at target.
* @param {!THREE.Vector3} position Position of the camera.
* @param {number} far Camera frustum far plane.
* @param {!THREE.Vector3} target Point in space for camera to look at.
* @public
*/
setCameraViewpoint(position, far, target) {
this._silent = true;
this._camera.far = far;
this._camera.position.set(position.x, position.y, position.z);
this._camera.lookAt(target.clone());
this._camera.updateProjectionMatrix();
this._cameraControls.target = target.clone();
this._cameraControls.update();
this._silent = false;
}
/**
* Triggered when camera position changed.
* @private
*/
_onCameraPositionChange(event) {
if (this._silent) return;
this.dispatchEvent({type:'cameraPositionChange', event: event});
}
/**
* Positions the camera at a distance from the object such that the whole
* object is visible.
* @param {!THREE.Mesh} mesh Mesh to fit into viewport.
* @private
*/
_fitObjectToViewport(mesh) {
// Small offset multiplier to avoid the edges of the mesh touching the edges
// of the viewport.
const offset = 1.25;
const boundingBox = new THREE.Box3();
boundingBox.setFromObject(mesh);
const center = boundingBox.center();
const size = boundingBox.size();
const max_dim = Math.max(size.x, size.y, size.z);
const fov = this._camera.fov * (Math.PI / 180);
let camera_z = Math.abs(max_dim / (2 * Math.tan(fov / 2))) * offset;
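// For example, with the default fov of 75 degrees and max_dim = 2:
// camera_z ≈ 2 / (2 * tan(37.5°)) * 1.25 ≈ 1.63.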
const min_z = boundingBox.min.z;
// Make sure that even after arbitrary rotation mesh won't be clipped.
const camera_to_far_edge =
(min_z < 0) ? -min_z + camera_z : camera_z - min_z;
// Set camera position and orientation.
this.setCameraViewpoint(
{x: center.x, y: center.y, z: camera_z}, camera_to_far_edge * 3,
center);
}
/**
* Creates mesh geometry for current step data.
* @param {!Object} meshData Object with mesh data.
* @param {!Object} config Scene rendering configuration.
* @private
*/
_createMesh(meshData, config) {
const vertices = meshData.vertices;
const faces = meshData.faces;
const colors = meshData.colors;
const mesh_config = this._applyDefaults(config, {
material: {
cls: 'MeshStandardMaterial',
color: '#a0a0a0',
roughness: 1,
metalness: 0,
}
});
let geometry = new THREE.Geometry();
vertices.forEach(function(point) {
let p = new THREE.Vector3(point[0], point[1], point[2]);
const scale = 1.;
p.x = point[0] * scale;
p.y = point[1] * scale;
p.z = point[2] * scale;
geometry.vertices.push(p);
});
faces.forEach(function(face_indices) {
let face =
new THREE.Face3(face_indices[0], face_indices[1], face_indices[2]);
if (colors && colors.length) {
const face_colors = [
colors[face_indices[0]], colors[face_indices[1]],
colors[face_indices[2]]
];
for (let i = 0; i < face_colors.length; i++) {
const vertex_color = face_colors[i];
let color = new THREE.Color(
vertex_color[0] / 255., vertex_color[1] / 255.,
vertex_color[2] / 255.);
face.vertexColors.push(color);
}
}
geometry.faces.push(face);
});
if (colors && colors.length) {
mesh_config.material = mesh_config.material || {};
mesh_config.material.vertexColors = THREE.VertexColors;
}
geometry.center();
geometry.computeBoundingSphere();
geometry.computeVertexNormals();
let material = new THREE[mesh_config.material.cls](mesh_config.material);
let mesh = new THREE.Mesh(geometry, material);
mesh.castShadow = true;
mesh.receiveShadow = true;
this._scene.add(mesh);
this._lastMesh = mesh;
}
/**
* Creates lights for a given scene based on passed configuration.
* @param {!Scene} scene Scene object to add lights to.
* @param {!Object} config Scene rendering configuration.
* @private
*/
_createLights(scene, config) {
for (let i = 0; i < config.lights.length; i++) {
const light_config = config.lights[i];
let light = new THREE[light_config.cls](
light_config.color, light_config.intensity);
if (light_config.position) {
light.position.set(
light_config.position[0], light_config.position[1],
light_config.position[2]);
}
scene.add(light);
}
}
} // end of MeshViewer class.
vz_mesh.MeshViewer = MeshViewer;
})(vz_mesh || (vz_mesh = {}));
| -1 |
| ./tensorflow_graphics/datasets/modelnet40/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Lint as: python3
"""`tensorflow_graphics.datasets.modelnet40` module."""
from tensorflow_graphics.datasets.modelnet40.modelnet40 import ModelNet40
__all__ = [
"ModelNet40",
]
| -1 |
| ./tensorflow_graphics/notebooks/resources/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""resources module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
from tensorflow_graphics.notebooks.resources import tfg_simplified_logo
from tensorflow_graphics.notebooks.resources import triangulated_stripe
# The resources module is not exported.
__all__ = []
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""resources module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import sys
from tensorflow_graphics.notebooks.resources import tfg_simplified_logo
from tensorflow_graphics.notebooks.resources import triangulated_stripe
# The resources module is not exported.
__all__ = []
| -1 |
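Note on the migration listed in the PR description and query fields above: each tf.compat.v1 endpoint is swapped for its stable TF2 equivalent. A minimal, hypothetical sketch of what library code looks like after such a rewrite (the helper below and its name are illustrative only, not taken from this diff; it assumes a float tensor input):

import tensorflow as tf

def safe_reciprocal(x, name="safe_reciprocal"):
  # TF1 style removed by this kind of migration:
  #   with tf.compat.v1.name_scope(name, "safe_reciprocal", [x]):
  # TF2 style used instead:
  with tf.name_scope(name):
    x = tf.convert_to_tensor(value=x)
    # tf.compat.v1.where -> tf.where: return 1 wherever x is zero, 1/x elsewhere.
    return tf.where(tf.math.equal(x, 0.0), tf.ones_like(x), 1.0 / x)

tf.debugging.assert_equal and tf.compat.dimension_value follow the same pattern: the compat.v1 alias is replaced by its stable TF2 endpoint.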
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/voxels/visual_hull.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the visual hull voxel rendering."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def render(voxels, axis=2, name=None):
"""Renders the visual hull of a voxel grid, as described in ["Escaping Plato's Cave: 3D Shape From Adversarial Rendering" (Henzler 2019)](https://github.com/henzler/platonicgan).
Note:
In the following, A1 to An are optional batch dimensions.
Args:
voxels: A tensor of shape `[A1, ..., An, Vx, Vy, Vz, Vd]`, where Vx, Vy, Vz
are the dimensions of the voxel grid and Vd the dimension of the
information stored in each voxel (e.g. 3 for RGB color).
axis: An index to the projection axis (0 for X, 1 for Y or 2 for Z).
name: A name for this op. Defaults to "visual_hull_render".
Returns:
A tensor of shape `[A1, ..., An, Vx, Vy, Vd]` representing images of size
(Vx,Vy).
Raises:
ValueError: If the shape of the input tensors is not supported.
"""
with tf.compat.v1.name_scope(name, "visual_hull_render", [voxels]):
voxels = tf.convert_to_tensor(value=voxels)
shape.check_static(
tensor=voxels, tensor_name="voxels", has_rank_greater_than=3)
if axis not in [0, 1, 2]:
raise ValueError("'axis' needs to be 0, 1 or 2")
image = tf.reduce_sum(input_tensor=voxels, axis=axis - 4)
image = tf.ones_like(image) - tf.math.exp(-image)
return image
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the visual hull voxel rendering."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def render(voxels, axis=2, name=None):
"""Renders the visual hull of a voxel grid, as described in ["Escaping Plato's Cave: 3D Shape From Adversarial Rendering" (Henzler 2019)](https://github.com/henzler/platonicgan).
Note:
In the following, A1 to An are optional batch dimensions.
Args:
voxels: A tensor of shape `[A1, ..., An, Vx, Vy, Vz, Vd]`, where Vx, Vy, Vz
are the dimensions of the voxel grid and Vd the dimension of the
information stored in each voxel (e.g. 3 for RGB color).
axis: An index to the projection axis (0 for X, 1 for Y or 2 for Z).
name: A name for this op. Defaults to "visual_hull_render".
Returns:
A tensor of shape `[A1, ..., An, Vx, Vy, Vd]` representing images of size
(Vx,Vy).
Raises:
ValueError: If the shape of the input tensors is not supported.
"""
with tf.compat.v1.name_scope(name, "visual_hull_render", [voxels]):
voxels = tf.convert_to_tensor(value=voxels)
shape.check_static(
tensor=voxels, tensor_name="voxels", has_rank_greater_than=3)
if axis not in [0, 1, 2]:
raise ValueError("'axis' needs to be 0, 1 or 2")
image = tf.reduce_sum(input_tensor=voxels, axis=axis - 4)
image = tf.ones_like(image) - tf.math.exp(-image)
return image
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| -1 |
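The render() implementation duplicated in the row above projects the voxel grid along one axis and squashes the accumulated density with 1 - exp(-x). A small NumPy sketch of the same arithmetic, on toy data with names of our choosing:

import numpy as np

# One batch element, a 4x4x4 voxel grid with 3 channels.
voxels = np.zeros((1, 4, 4, 4, 3), dtype=np.float32)
voxels[0, 1:3, 1:3, 1:3, :] = 1.0   # a small occupied cube

axis = 2                             # project along Z; the code reduces over axis - 4, i.e. -2
density = voxels.sum(axis=axis - 4)
image = 1.0 - np.exp(-density)       # bounded in [0, 1), brighter where more voxels are occupied
print(image.shape)                   # (1, 4, 4, 3)

The exponential saturation keeps the projection smooth rather than a hard 0/1 silhouette, which is why the rendering stays usable inside a differentiable pipeline.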
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/rendering/opengl/rasterizer.cc | /* Copyright 2020 The TensorFlow Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include "rasterizer.h"
Rasterizer::Rasterizer(
std::unique_ptr<gl_utils::Program>&& program,
std::unique_ptr<gl_utils::RenderTargets>&& render_targets, float clear_red,
float clear_green, float clear_blue, float clear_alpha, float clear_depth,
bool enable_cull_face)
: program_(std::move(program)),
render_targets_(std::move(render_targets)),
clear_red_(clear_red),
clear_green_(clear_green),
clear_blue_(clear_blue),
clear_alpha_(clear_alpha),
clear_depth_(clear_depth),
enable_cull_face_(enable_cull_face) {}
Rasterizer::~Rasterizer() {}
void Rasterizer::Reset() {
program_.reset();
render_targets_.reset();
for (auto&& buffer : shader_storage_buffers_) buffer.second.reset();
}
tensorflow::Status Rasterizer::Render(int num_points,
absl::Span<float> result) {
return RenderImpl(num_points, result);
}
tensorflow::Status Rasterizer::Render(int num_points,
absl::Span<unsigned char> result) {
return RenderImpl(num_points, result);
}
tensorflow::Status Rasterizer::SetUniformMatrix(
const std::string& name, int num_columns, int num_rows, bool transpose,
absl::Span<const float> matrix) {
if (size_t(num_rows * num_columns) != matrix.size())
return TFG_INTERNAL_ERROR("num_rows * num_columns != matrix.size()");
typedef void (*setter_fn)(GLint location, GLsizei count, GLboolean transpose,
const GLfloat* value);
static const auto type_mapping =
std::unordered_map<int, std::tuple<int, int, setter_fn>>({
{GL_FLOAT_MAT2, std::make_tuple(2, 2, glUniformMatrix2fv)},
{GL_FLOAT_MAT3, std::make_tuple(3, 3, glUniformMatrix3fv)},
{GL_FLOAT_MAT4, std::make_tuple(4, 4, glUniformMatrix4fv)},
{GL_FLOAT_MAT2x3, std::make_tuple(2, 3, glUniformMatrix2x3fv)},
{GL_FLOAT_MAT2x4, std::make_tuple(2, 4, glUniformMatrix2x4fv)},
{GL_FLOAT_MAT3x2, std::make_tuple(3, 2, glUniformMatrix3x2fv)},
{GL_FLOAT_MAT3x4, std::make_tuple(3, 4, glUniformMatrix3x4fv)},
{GL_FLOAT_MAT4x2, std::make_tuple(4, 2, glUniformMatrix4x2fv)},
{GL_FLOAT_MAT4x3, std::make_tuple(4, 3, glUniformMatrix4x3fv)},
});
GLint uniform_type;
GLenum property = GL_TYPE;
TF_RETURN_IF_ERROR(program_->GetResourceProperty(
name, GL_UNIFORM, 1, &property, 1, &uniform_type));
// Is a resource active under that name?
if (uniform_type == GLint(GL_INVALID_INDEX))
return TFG_INTERNAL_ERROR("GL_INVALID_INDEX");
auto type_info = type_mapping.find(uniform_type);
if (type_info == type_mapping.end())
return TFG_INTERNAL_ERROR("Unsupported type");
if (std::get<0>(type_info->second) != num_columns ||
std::get<1>(type_info->second) != num_rows)
return TFG_INTERNAL_ERROR("Invalid dimensions");
GLint uniform_location;
property = GL_LOCATION;
TF_RETURN_IF_ERROR(program_->GetResourceProperty(
name, GL_UNIFORM, 1, &property, 1, &uniform_location));
TF_RETURN_IF_ERROR(program_->Use());
auto program_cleanup = MakeCleanup([this]() { return program_->Detach(); });
// Specify the value of the uniform in the current program.
TFG_RETURN_IF_GL_ERROR(std::get<2>(type_info->second)(
uniform_location, 1, transpose ? GL_TRUE : GL_FALSE, matrix.data()));
// Cleanup the program; no program is active at this point.
return tensorflow::Status::OK();
}
| /* Copyright 2020 The TensorFlow Authors
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
==============================================================================*/
#include "rasterizer.h"
Rasterizer::Rasterizer(
std::unique_ptr<gl_utils::Program>&& program,
std::unique_ptr<gl_utils::RenderTargets>&& render_targets, float clear_red,
float clear_green, float clear_blue, float clear_alpha, float clear_depth,
bool enable_cull_face)
: program_(std::move(program)),
render_targets_(std::move(render_targets)),
clear_red_(clear_red),
clear_green_(clear_green),
clear_blue_(clear_blue),
clear_alpha_(clear_alpha),
clear_depth_(clear_depth),
enable_cull_face_(enable_cull_face) {}
Rasterizer::~Rasterizer() {}
void Rasterizer::Reset() {
program_.reset();
render_targets_.reset();
for (auto&& buffer : shader_storage_buffers_) buffer.second.reset();
}
tensorflow::Status Rasterizer::Render(int num_points,
absl::Span<float> result) {
return RenderImpl(num_points, result);
}
tensorflow::Status Rasterizer::Render(int num_points,
absl::Span<unsigned char> result) {
return RenderImpl(num_points, result);
}
tensorflow::Status Rasterizer::SetUniformMatrix(
const std::string& name, int num_columns, int num_rows, bool transpose,
absl::Span<const float> matrix) {
if (size_t(num_rows * num_columns) != matrix.size())
return TFG_INTERNAL_ERROR("num_rows * num_columns != matrix.size()");
typedef void (*setter_fn)(GLint location, GLsizei count, GLboolean transpose,
const GLfloat* value);
static const auto type_mapping =
std::unordered_map<int, std::tuple<int, int, setter_fn>>({
{GL_FLOAT_MAT2, std::make_tuple(2, 2, glUniformMatrix2fv)},
{GL_FLOAT_MAT3, std::make_tuple(3, 3, glUniformMatrix3fv)},
{GL_FLOAT_MAT4, std::make_tuple(4, 4, glUniformMatrix4fv)},
{GL_FLOAT_MAT2x3, std::make_tuple(2, 3, glUniformMatrix2x3fv)},
{GL_FLOAT_MAT2x4, std::make_tuple(2, 4, glUniformMatrix2x4fv)},
{GL_FLOAT_MAT3x2, std::make_tuple(3, 2, glUniformMatrix3x2fv)},
{GL_FLOAT_MAT3x4, std::make_tuple(3, 4, glUniformMatrix3x4fv)},
{GL_FLOAT_MAT4x2, std::make_tuple(4, 2, glUniformMatrix4x2fv)},
{GL_FLOAT_MAT4x3, std::make_tuple(4, 3, glUniformMatrix4x3fv)},
});
GLint uniform_type;
GLenum property = GL_TYPE;
TF_RETURN_IF_ERROR(program_->GetResourceProperty(
name, GL_UNIFORM, 1, &property, 1, &uniform_type));
// Is a resource active under that name?
if (uniform_type == GLint(GL_INVALID_INDEX))
return TFG_INTERNAL_ERROR("GL_INVALID_INDEX");
auto type_info = type_mapping.find(uniform_type);
if (type_info == type_mapping.end())
return TFG_INTERNAL_ERROR("Unsupported type");
if (std::get<0>(type_info->second) != num_columns ||
std::get<1>(type_info->second) != num_rows)
return TFG_INTERNAL_ERROR("Invalid dimensions");
GLint uniform_location;
property = GL_LOCATION;
TF_RETURN_IF_ERROR(program_->GetResourceProperty(
name, GL_UNIFORM, 1, &property, 1, &uniform_location));
TF_RETURN_IF_ERROR(program_->Use());
auto program_cleanup = MakeCleanup([this]() { return program_->Detach(); });
// Specify the value of the uniform in the current program.
TFG_RETURN_IF_GL_ERROR(std::get<2>(type_info->second)(
uniform_location, 1, transpose ? GL_TRUE : GL_FALSE, matrix.data()));
// Cleanup the program; no program is active at this point.
return tensorflow::Status::OK();
}
| -1 |
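Rasterizer::SetUniformMatrix above hands a flat float span plus a transpose flag to the glUniformMatrix*fv family, which consumes column-major data unless the flag is set. A small NumPy illustration of that layout detail (variable names are ours, not the repo's):

import numpy as np

mvp = np.eye(4, dtype=np.float32)
mvp[0, 3] = 2.0  # x-translation lives in the last column of a row-major matrix

flat_row_major = mvp.flatten(order="C")  # pair with transpose = true
flat_col_major = mvp.flatten(order="F")  # pair with transpose = false (OpenGL's default layout)
assert np.allclose(flat_row_major.reshape(4, 4).T.flatten(), flat_col_major)

Callers holding row-major matrices can therefore either reorder the data themselves or pass transpose = true and let GL do it.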
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/math/interpolation/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Interpolation module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow_graphics.math.interpolation import bspline
from tensorflow_graphics.math.interpolation import slerp
from tensorflow_graphics.math.interpolation import trilinear
from tensorflow_graphics.math.interpolation import weighted
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.math.
__all__ = _export_api.get_modules()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Interpolation module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow_graphics.math.interpolation import bspline
from tensorflow_graphics.math.interpolation import slerp
from tensorflow_graphics.math.interpolation import trilinear
from tensorflow_graphics.math.interpolation import weighted
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.math.
__all__ = _export_api.get_modules()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/nn/metric/intersection_over_union.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the intersection-over-union metric."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def evaluate(ground_truth_labels, predicted_labels, grid_size=1, name=None):
"""Computes the Intersection-Over-Union metric for the given ground truth and predicted labels.
Note:
In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible, and G1 to Gm are the grid dimensions.
Args:
ground_truth_labels: A tensor of shape `[A1, ..., An, G1, ..., Gm]`, where
the last m axes represent a grid of ground truth attributes. Each
attribute can either be 0 or 1.
predicted_labels: A tensor of shape `[A1, ..., An, G1, ..., Gm]`, where the
last m axes represent a grid of predicted attributes. Each attribute can
either be 0 or 1.
grid_size: The number of grid dimensions. Defaults to 1.
name: A name for this op. Defaults to "intersection_over_union_evaluate".
Returns:
A tensor of shape `[A1, ..., An]` that stores the intersection-over-union
metric of the given ground truth labels and predictions.
Raises:
ValueError: if the shape of `ground_truth_labels`, `predicted_labels` is
not supported.
"""
with tf.compat.v1.name_scope(name, "intersection_over_union_evaluate",
[ground_truth_labels, predicted_labels]):
ground_truth_labels = tf.convert_to_tensor(value=ground_truth_labels)
predicted_labels = tf.convert_to_tensor(value=predicted_labels)
shape.compare_batch_dimensions(
tensors=(ground_truth_labels, predicted_labels),
tensor_names=("ground_truth_labels", "predicted_labels"),
last_axes=-grid_size,
broadcast_compatible=True)
ground_truth_labels = asserts.assert_binary(ground_truth_labels)
predicted_labels = asserts.assert_binary(predicted_labels)
sum_ground_truth = tf.math.reduce_sum(
input_tensor=ground_truth_labels, axis=range(-grid_size, 0))
sum_predictions = tf.math.reduce_sum(
input_tensor=predicted_labels, axis=range(-grid_size, 0))
intersection = tf.math.reduce_sum(
input_tensor=ground_truth_labels * predicted_labels,
axis=range(-grid_size, 0))
union = sum_ground_truth + sum_predictions - intersection
return tf.compat.v1.where(
tf.math.equal(union, 0), tf.ones_like(union), intersection / union)
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This module implements the intersection-over-union metric."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
from tensorflow_graphics.util import asserts
from tensorflow_graphics.util import export_api
from tensorflow_graphics.util import shape
def evaluate(ground_truth_labels, predicted_labels, grid_size=1, name=None):
"""Computes the Intersection-Over-Union metric for the given ground truth and predicted labels.
Note:
In the following, A1 to An are optional batch dimensions, which must be
broadcast compatible, and G1 to Gm are the grid dimensions.
Args:
ground_truth_labels: A tensor of shape `[A1, ..., An, G1, ..., Gm]`, where
the last m axes represent a grid of ground truth attributes. Each
attribute can either be 0 or 1.
predicted_labels: A tensor of shape `[A1, ..., An, G1, ..., Gm]`, where the
last m axes represent a grid of predicted attributes. Each attribute can
either be 0 or 1.
grid_size: The number of grid dimensions. Defaults to 1.
name: A name for this op. Defaults to "intersection_over_union_evaluate".
Returns:
A tensor of shape `[A1, ..., An]` that stores the intersection-over-union
metric of the given ground truth labels and predictions.
Raises:
ValueError: if the shape of `ground_truth_labels`, `predicted_labels` is
not supported.
"""
with tf.compat.v1.name_scope(name, "intersection_over_union_evaluate",
[ground_truth_labels, predicted_labels]):
ground_truth_labels = tf.convert_to_tensor(value=ground_truth_labels)
predicted_labels = tf.convert_to_tensor(value=predicted_labels)
shape.compare_batch_dimensions(
tensors=(ground_truth_labels, predicted_labels),
tensor_names=("ground_truth_labels", "predicted_labels"),
last_axes=-grid_size,
broadcast_compatible=True)
ground_truth_labels = asserts.assert_binary(ground_truth_labels)
predicted_labels = asserts.assert_binary(predicted_labels)
sum_ground_truth = tf.math.reduce_sum(
input_tensor=ground_truth_labels, axis=range(-grid_size, 0))
sum_predictions = tf.math.reduce_sum(
input_tensor=predicted_labels, axis=range(-grid_size, 0))
intersection = tf.math.reduce_sum(
input_tensor=ground_truth_labels * predicted_labels,
axis=range(-grid_size, 0))
union = sum_ground_truth + sum_predictions - intersection
return tf.compat.v1.where(
tf.math.equal(union, 0), tf.ones_like(union), intersection / union)
# API contains all public functions and classes.
__all__ = export_api.get_functions_and_classes()
| -1 |
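On binary grids, evaluate() above is the usual |intersection| / |union|, with a guard that reports 1.0 when both masks are empty (union == 0). A quick NumPy check of the same arithmetic on toy labels (names ours):

import numpy as np

ground_truth = np.array([[0, 1, 1, 0],
                         [1, 1, 0, 0]], dtype=np.float32)
predicted = np.array([[0, 1, 0, 0],
                      [1, 1, 1, 0]], dtype=np.float32)

intersection = (ground_truth * predicted).sum(axis=-1)
union = ground_truth.sum(axis=-1) + predicted.sum(axis=-1) - intersection
# Note: like tf.where, both branches are evaluated; that is fine here since union has no zeros.
iou = np.where(union == 0, np.ones_like(union), intersection / union)
print(iou)  # approximately [0.5, 0.667]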
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/geometry/convolution/__init__.py | # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Convolution module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow_graphics.geometry.convolution import graph_convolution
from tensorflow_graphics.geometry.convolution import graph_pooling
from tensorflow_graphics.geometry.convolution import utils
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.geometry.convolution.
__all__ = _export_api.get_modules()
| # Copyright 2020 The TensorFlow Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Convolution module."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from tensorflow_graphics.geometry.convolution import graph_convolution
from tensorflow_graphics.geometry.convolution import graph_pooling
from tensorflow_graphics.geometry.convolution import utils
from tensorflow_graphics.util import export_api as _export_api
# API contains submodules of tensorflow_graphics.geometry.convolution.
__all__ = _export_api.get_modules()
| -1 |
tensorflow/graphics | 486 | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2. | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| copybara-service[bot] | "2021-01-29T04:02:31Z" | "2021-02-07T22:38:58Z" | 9d257ad4a72ccf65e4349910b9fff7c0a5648073 | f683a9a5794bade30ede447339394e84b44acc0b | Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.. Migrate tensorflow_graphics/geometry/transformation to using TensorFlow 2.
Following changes are made to the library code:
- tf.compat.v1.name_scope -> tf.name_scope
- tf.compat.v1.where. -> tf.where
- tf.compat.v1.assert_equal -> tf.debugging.assert_equal
- tf.compat.v1.dimension_value -> tf.compat.dimension_value
Following changes are made to the test code:
- Remove tf.compat.v1.get_variable()
- Remove tf.compat.v1.global_variables_initializer()
- Remove tf.compat.v1.Session() except for a couple of places using assert_jacobian_is_finite() that depends on TensorFlow v1 libraries
| ./tensorflow_graphics/datasets/modelnet40/fakes/modelnet40_ply_hdf5_2048/ply_data_train1.h5 | (binary HDF5 test fixture; raw byte content omitted as it is not human-readable)