Retrieval Sources
Retrieval sources for retrieval-augmented code generation.
tf.AggregationMethod View source on GitHub A class listing aggregation methods used to combine gradients. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.AggregationMethod Computing partial derivatives can require aggregating gradient contributions. This class lists the various methods that can be used to combine gradients in the graph. The following aggregation methods are part of the stable API for aggregating gradients:
ADD_N: All of the gradient terms are summed as part of one operation using the "AddN" op (see tf.add_n). This method has the property that all gradients must be ready and buffered separately in memory before any aggregation is performed.
DEFAULT: The system-chosen default aggregation method. The following aggregation methods are experimental and may not be supported in future releases:
EXPERIMENTAL_TREE: Gradient terms are summed in pairs using the "AddN" op. This method of summing gradients may reduce performance, but it can improve memory utilization because the gradients can be released earlier.
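For illustration, a minimal sketch of selecting an aggregation method through the TF 1.x gradient API, which exposes an aggregation_method argument (the tensors here are illustrative):
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
x = tf.compat.v1.placeholder(tf.float32, shape=[None])
y = x * x + 2.0 * x  # x receives two gradient contributions that must be aggregated
grads = tf.compat.v1.gradients(
    y, [x], aggregation_method=tf.AggregationMethod.EXPERIMENTAL_TREE)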
Class Variables
ADD_N 0
DEFAULT 0
EXPERIMENTAL_ACCUMULATE_N 2
EXPERIMENTAL_TREE 1 | tensorflow.aggregationmethod |
tf.argsort View source on GitHub Returns the indices of a tensor that give its sorted order along an axis. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.argsort
tf.argsort(
values, axis=-1, direction='ASCENDING', stable=False, name=None
)
For a 1D tensor, tf.gather(values, tf.argsort(values)) is equivalent to tf.sort(values). For higher dimensions, the output has the same shape as values, but along the given axis, values represent the index of the sorted element in that slice of the tensor at the given position. Usage: import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.argsort(a, axis=-1, direction='ASCENDING', stable=False, name=None)
c = tf.keras.backend.eval(b)
# Here, c = [0 3 1 2 5 4]
Args
values 1-D or higher numeric Tensor.
axis The axis along which to sort. The default is -1, which sorts the last axis.
direction The direction in which to sort the values ('ASCENDING' or 'DESCENDING').
stable If True, equal elements in the original tensor will not be re-ordered in the returned order. Unstable sort is not yet implemented, but will eventually be the default for performance reasons. If you require a stable order, pass stable=True for forwards compatibility.
name Optional name for the operation.
Returns An int32 Tensor with the same shape as values. The indices that would sort each slice of the given values along the given axis.
Raises
ValueError If axis is not a constant scalar, or the direction is invalid. | tensorflow.argsort |
Module: tf.audio Public API for tf.audio namespace. Functions decode_wav(...): Decode a 16-bit PCM WAV file to a float tensor. encode_wav(...): Encode audio data using the WAV file format. | tensorflow.audio |
tf.audio.decode_wav Decode a 16-bit PCM WAV file to a float tensor. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.audio.decode_wav
tf.audio.decode_wav(
contents, desired_channels=-1, desired_samples=-1, name=None
)
The -32768 to 32767 signed 16-bit values will be scaled to -1.0 to 1.0 in float. When desired_channels is set, if the input contains fewer channels than this then the last channel will be duplicated to give the requested number, else if the input has more channels than requested then the additional channels will be ignored. If desired_samples is set, then the audio will be cropped or padded with zeroes to the requested length. The first output contains a Tensor with the content of the audio samples. The first dimension is the number of samples and the last is the number of channels; for example, a ten-sample-long stereo WAV file gives an output shape of [10, 2].
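A minimal usage sketch (the path 'speech.wav' is hypothetical and assumed to hold 16-bit PCM data):
import tensorflow as tf
contents = tf.io.read_file('speech.wav')  # hypothetical input file
audio, sample_rate = tf.audio.decode_wav(contents, desired_channels=1)
# audio has shape [num_samples, 1], with values scaled to [-1.0, 1.0]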
Args
contents A Tensor of type string. The WAV-encoded audio, usually from a file.
desired_channels An optional int. Defaults to -1. Number of sample channels wanted.
desired_samples An optional int. Defaults to -1. Length of audio requested.
name A name for the operation (optional).
Returns A tuple of Tensor objects (audio, sample_rate). audio A Tensor of type float32.
sample_rate A Tensor of type int32. | tensorflow.audio.decode_wav |
tf.audio.encode_wav Encode audio data using the WAV file format. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.audio.encode_wav
tf.audio.encode_wav(
audio, sample_rate, name=None
)
This operation will generate a string suitable to be saved out to create a .wav audio file. It will be encoded in the 16-bit PCM format. It takes in float values in the range -1.0f to 1.0f, and any outside that value will be clamped to that range. audio is a 2-D float Tensor of shape [length, channels]. sample_rate is a scalar Tensor holding the rate to use (e.g. 44100).
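A minimal sketch that writes a generated tone to disk (the output path 'tone.wav' is illustrative):
import math
import tensorflow as tf
sample_rate = 44100
t = tf.linspace(0.0, 1.0, sample_rate)
audio = tf.sin(2.0 * math.pi * 440.0 * t)[:, tf.newaxis]  # shape [length, 1]
wav_bytes = tf.audio.encode_wav(audio, sample_rate)
tf.io.write_file('tone.wav', wav_bytes)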
Args
audio A Tensor of type float32. 2-D with shape [length, channels].
sample_rate A Tensor of type int32. Scalar containing the sample frequency.
name A name for the operation (optional).
Returns A Tensor of type string. | tensorflow.audio.encode_wav |
Module: tf.autodiff Public API for tf.autodiff namespace. Classes class ForwardAccumulator: Computes Jacobian-vector products ("JVP"s) using forward-mode autodiff. class GradientTape: Record operations for automatic differentiation. | tensorflow.autodiff |
tf.autodiff.ForwardAccumulator Computes Jacobian-vector products ("JVP"s) using forward-mode autodiff.
tf.autodiff.ForwardAccumulator(
primals, tangents
)
Compare to tf.GradientTape which computes vector-Jacobian products ("VJP"s) using reverse-mode autodiff (backprop). Reverse mode is more attractive when computing gradients of a scalar-valued function with respect to many inputs (e.g. a neural network with many parameters and a scalar loss). Forward mode works best on functions with many outputs and few inputs. Since it does not hold on to intermediate activations, it is much more memory efficient than backprop where it is applicable. Consider a simple linear regression:
x = tf.constant([[2.0, 3.0], [1.0, 4.0]])
dense = tf.keras.layers.Dense(1)
dense.build([None, 2])
with tf.autodiff.ForwardAccumulator(
primals=dense.kernel,
tangents=tf.constant([[1.], [0.]])) as acc:
loss = tf.reduce_sum((dense(x) - tf.constant([1., -1.])) ** 2.)
acc.jvp(loss)
<tf.Tensor: shape=(), dtype=float32, numpy=...>
The example has two variables containing parameters, dense.kernel (2 parameters) and dense.bias (1 parameter). Considering the training data x as a constant, this means the Jacobian matrix for the function mapping from parameters to loss has one row and three columns. With forwardprop, we specify a length-three vector in advance which multiplies the Jacobian. The primals constructor argument is the parameter (a tf.Tensor or tf.Variable) we're specifying a vector for, and the tangents argument is the "vector" in Jacobian-vector product. If our goal is to compute the entire Jacobian matrix, forwardprop computes one column at a time while backprop computes one row at a time. Since the Jacobian in the linear regression example has only one row, backprop requires fewer invocations:
import numpy as np
x = tf.constant([[2.0, 3.0], [1.0, 4.0]])
dense = tf.keras.layers.Dense(1)
dense.build([None, 2])
loss_fn = lambda: tf.reduce_sum((dense(x) - tf.constant([1., -1.])) ** 2.)
kernel_fprop = []
with tf.autodiff.ForwardAccumulator(
dense.kernel, tf.constant([[1.], [0.]])) as acc:
kernel_fprop.append(acc.jvp(loss_fn()))
with tf.autodiff.ForwardAccumulator(
dense.kernel, tf.constant([[0.], [1.]])) as acc:
kernel_fprop.append(acc.jvp(loss_fn()))
with tf.autodiff.ForwardAccumulator(dense.bias, tf.constant([1.])) as acc:
bias_fprop = acc.jvp(loss_fn())
with tf.GradientTape() as tape:
loss = loss_fn()
kernel_grad, bias_grad = tape.gradient(loss, (dense.kernel, dense.bias))
np.testing.assert_allclose(
kernel_grad, tf.stack(kernel_fprop)[:, tf.newaxis])
np.testing.assert_allclose(bias_grad, bias_fprop[tf.newaxis])
Implicit in the tape.gradient call is a length-one vector which left-multiplies the Jacobian, a vector-Jacobian product. ForwardAccumulator maintains JVPs corresponding to the primal tensors it is watching, derived from the original primals specified in the constructor. As soon as a primal tensor is deleted, ForwardAccumulator deletes the corresponding JVP. acc.jvp(x) retrieves acc's JVP corresponding to the primal tensor x. It does not perform any computation. acc.jvp calls can be repeated as long as acc is accessible, whether the context manager is active or not. New JVPs are only computed while the context manager is active. Note that ForwardAccumulators are always applied in the order their context managers were entered, so inner accumulators will not see JVP computation from outer accumulators. Take higher-order JVPs from outer accumulators:
primal = tf.constant(1.1)
with tf.autodiff.ForwardAccumulator(primal, tf.constant(1.)) as outer:
with tf.autodiff.ForwardAccumulator(primal, tf.constant(1.)) as inner:
primal_out = primal ** tf.constant(3.5)
inner_jvp = inner.jvp(primal_out)
inner_jvp # 3.5 * 1.1 ** 2.5
<tf.Tensor: shape=(), dtype=float32, numpy=4.4417057>
outer.jvp(inner_jvp) # 3.5 * 2.5 * 1.1 ** 1.5
<tf.Tensor: shape=(), dtype=float32, numpy=10.094786>
Reversing the collection in the last line to instead retrieve inner.jvp(outer.jvp(primal_out)) will not work. Strict nesting also applies to combinations of ForwardAccumulator and tf.GradientTape. More deeply nested GradientTape objects will ignore the products of outer ForwardAccumulator objects. This allows (for example) memory-efficient forward-over-backward computation of Hessian-vector products, where the inner GradientTape would otherwise hold on to all intermediate JVPs:
v = tf.Variable([1., 2.])
with tf.autodiff.ForwardAccumulator(
v,
# The "vector" in Hessian-vector product.
tf.constant([1., 0.])) as acc:
with tf.GradientTape() as tape:
y = tf.reduce_sum(v ** 3.)
backward = tape.gradient(y, v)
backward # gradient from backprop
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([ 3., 12.], dtype=float32)>
acc.jvp(backward) # forward-over-backward Hessian-vector product
<tf.Tensor: shape=(2,), dtype=float32, numpy=array([6., 0.], dtype=float32)>
Args
primals A tensor or nested structure of tensors to watch.
tangents A tensor or nested structure of tensors, with the same nesting structure as primals, with each element being a vector with the same size as the corresponding primal element.
Raises
ValueError If the same tensor or variable is specified multiple times in primals. Methods jvp View source
jvp(
primals, unconnected_gradients=tf.UnconnectedGradients.NONE
)
Fetches the Jacobian-vector product computed for primals. Note that this method performs no computation, and simply looks up a JVP that was already computed (unlike backprop using a tf.GradientTape, where the computation happens on the call to tape.gradient).
Args
primals A watched Tensor or structure of Tensors to fetch the JVPs for.
unconnected_gradients A value which can either hold 'none' or 'zero' and alters the value which will be returned if no JVP was computed for primals. The possible values and effects are detailed in 'tf.UnconnectedGradients' and it defaults to 'none'.
Returns Tensors with the same shapes and dtypes as primals, or None if no JVP is available.
__enter__ View source
__enter__()
__exit__ View source
__exit__(
typ, value, traceback
) | tensorflow.autodiff.forwardaccumulator |
Module: tf.autograph Conversion of plain Python into TensorFlow graph code.
Note: In TensorFlow 2.0, AutoGraph is automatically applied when using tf.function. This module contains lower-level APIs for advanced use.
For more information, see the AutoGraph guide. By equivalent graph code we mean code that generates a TensorFlow graph when run. The generated graph has the same effects as the original code when executed (for example with tf.function or tf.compat.v1.Session.run). In other words, using AutoGraph can be thought of as running Python in TensorFlow. Modules experimental module: Public API for tf.autograph.experimental namespace. Functions set_verbosity(...): Sets the AutoGraph verbosity level. to_code(...): Returns the source code generated by AutoGraph, as a string. to_graph(...): Converts a Python entity into a TensorFlow graph. trace(...): Traces argument information at compilation time. | tensorflow.autograph |
Module: tf.autograph.experimental Public API for tf.autograph.experimental namespace. Classes class Feature: This enumeration represents optional conversion options. Functions do_not_convert(...): Decorator that suppresses the conversion of a function. set_loop_options(...): Specifies additional arguments to be passed to the enclosing while_loop. | tensorflow.autograph.experimental |
tf.autograph.experimental.do_not_convert View source on GitHub Decorator that suppresses the conversion of a function. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.autograph.experimental.do_not_convert
tf.autograph.experimental.do_not_convert(
func=None
)
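A minimal usage sketch (the helper name is illustrative):
import tensorflow as tf
@tf.autograph.experimental.do_not_convert
def log_value(x):
  print('tracing with:', x)  # plain Python; AutoGraph will not rewrite this
  return x
@tf.function
def f(x):
  return log_value(x) + 1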
Args
func function to decorate.
Returns If func is not None, returns a Callable which is equivalent to func, but is not converted by AutoGraph. If func is None, returns a decorator that, when invoked with a single func argument, returns a Callable equivalent to the above case. | tensorflow.autograph.experimental.do_not_convert |
tf.autograph.experimental.Feature View source on GitHub This enumeration represents optional conversion options. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.autograph.experimental.Feature These conversion options are experimental. They are subject to change without notice and offer no guarantees. Example usage: optionals = tf.autograph.experimental.Feature.EQUALITY_OPERATORS
@tf.function(experimental_autograph_options=optionals)
def f(i):
if i == 0: # EQUALITY_OPERATORS allows the use of == here.
tf.print('i is zero')
Attributes
ALL Enable all features.
AUTO_CONTROL_DEPS Insertion of control dependencies in the generated code.
ASSERT_STATEMENTS Convert Tensor-dependent assert statements to tf.Assert.
BUILTIN_FUNCTIONS Convert builtin functions applied to Tensors to their TF counterparts.
EQUALITY_OPERATORS Whether to convert the comparison operators, like equality. This is soon to be deprecated as support is being added to the Tensor class.
LISTS Convert list idioms, like initializers, slices, append, etc.
NAME_SCOPES Insert name scopes that name ops according to context, like the function they were defined in.
Class Variables
ALL tf.autograph.experimental.Feature
ASSERT_STATEMENTS tf.autograph.experimental.Feature
AUTO_CONTROL_DEPS tf.autograph.experimental.Feature
BUILTIN_FUNCTIONS tf.autograph.experimental.Feature
EQUALITY_OPERATORS tf.autograph.experimental.Feature
LISTS tf.autograph.experimental.Feature
NAME_SCOPES tf.autograph.experimental.Feature | tensorflow.autograph.experimental.feature |
tf.autograph.experimental.set_loop_options Specifies additional arguments to be passed to the enclosing while_loop. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.autograph.experimental.set_loop_options
tf.autograph.experimental.set_loop_options(
parallel_iterations=UNSPECIFIED, swap_memory=UNSPECIFIED,
maximum_iterations=UNSPECIFIED, shape_invariants=UNSPECIFIED
)
The parameters apply only to the immediately enclosing loop, and only if the loop is staged as a TF while_loop; otherwise they have no effect. Usage:
@tf.function(autograph=True)
def f():
n = 0
for i in tf.range(10):
tf.autograph.experimental.set_loop_options(maximum_iterations=3)
n += 1
return n
@tf.function(autograph=True)
def f():
v = tf.constant((0,))
for i in tf.range(3):
tf.autograph.experimental.set_loop_options(
shape_invariants=[(v, tf.TensorShape([None]))]
)
v = tf.concat((v, [i]), 0)
return v
Also see tf.while_loop.
Args
parallel_iterations The maximum number of iterations allowed to run in parallel at any given time. Note that this does not guarantee parallel execution.
swap_memory Whether to store intermediate values needed for gradients on the CPU instead of GPU.
maximum_iterations Allows limiting the total number of iterations executed by the loop.
shape_invariants Allows controlling the argument with the same name passed to tf.while_loop. Unlike tf.while_loop, this is a list of (tensor, shape) pairs. | tensorflow.autograph.experimental.set_loop_options |
tf.autograph.set_verbosity View source on GitHub Sets the AutoGraph verbosity level. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.autograph.set_verbosity
tf.autograph.set_verbosity(
level, alsologtostdout=False
)
Debug logging in AutoGraph More verbose logging is useful to enable when filing bug reports or doing more in-depth debugging. There are two means to control the logging verbosity: the set_verbosity function and the AUTOGRAPH_VERBOSITY environment variable; set_verbosity takes precedence over the environment variable. For example: import os
import tensorflow as tf
os.environ['AUTOGRAPH_VERBOSITY'] = '5'
# Verbosity is now 5
tf.autograph.set_verbosity(0)
# Verbosity is now 0
os.environ['AUTOGRAPH_VERBOSITY'] = '1'
# No effect, because set_verbosity was already called.
Log entries are output to absl's default output, with INFO level. Logs can be mirrored to stdout by using the alsologtostdout argument. Mirroring is enabled by default when Python runs in interactive mode.
Args
level int, the verbosity level; larger values specify increased verbosity; 0 means no logging. When reporting bugs, it is recommended to set this value to a larger number, like 10.
alsologtostdout bool, whether to also output log messages to sys.stdout. | tensorflow.autograph.set_verbosity |
tf.autograph.to_code View source on GitHub Returns the source code generated by AutoGraph, as a string.
tf.autograph.to_code(
entity, recursive=True, experimental_optional_features=None
)
Example usage:
def f(x):
if x < 0:
x = -x
return x
tf.autograph.to_code(f)
"...def tf__f(x):..."
Also see: tf.autograph.to_graph.
Note: If a function has been decorated with tf.function, pass its underlying Python function, rather than the callable that tf.function creates:
@tf.function
def f(x):
if x < 0:
x = -x
return x
tf.autograph.to_code(f.python_function)
"...def tf__f(x):..."
Args
entity Python callable or class to convert.
recursive Whether to recursively convert any functions that the converted function may call.
experimental_optional_features None, a tuple of, or a single tf.autograph.experimental.Feature value.
Returns The converted code as string. | tensorflow.autograph.to_code |
tf.autograph.to_graph View source on GitHub Converts a Python entity into a TensorFlow graph.
tf.autograph.to_graph(
entity, recursive=True, experimental_optional_features=None
)
Also see: tf.autograph.to_code, tf.function. Unlike tf.function, to_graph is a low-level transpiler that converts Python code to TensorFlow graph code. It does not implement any caching, variable management or create any actual ops, and is best used where greater control over the generated TensorFlow graph is desired. Another difference from tf.function is that to_graph will not wrap the graph into a TensorFlow function or a Python callable. Internally, tf.function uses to_graph. Example usage:
def f(x):
if x > 0:
y = x * x
else:
y = -x
return y
converted_f = to_graph(f)
x = tf.constant(2)
converted_f(x) # converted_f behaves like a TensorFlow Op.
<tf.Tensor: shape=(), dtype=int32, numpy=4>
Supported Python entities include: functions, classes, and object methods. Functions are converted into new functions with converted code. Classes are converted by generating a new class whose methods use converted code. Methods are converted into unbound functions that have an additional first argument called self. For a tutorial, see the tf.function and AutoGraph guide. For more detailed information, see the AutoGraph reference documentation.
Args
entity Python callable or class to convert.
recursive Whether to recursively convert any functions that the converted function may call.
experimental_optional_features None, a tuple of, or a single tf.autograph.experimental.Feature value.
Returns Same as entity, the converted Python function or class.
Raises
ValueError If the entity could not be converted. | tensorflow.autograph.to_graph |
tf.autograph.trace View source on GitHub Traces argument information at compilation time. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.autograph.trace
tf.autograph.trace(
*args
)
trace is useful when debugging, and it always executes during the tracing phase, that is, when the TF graph is constructed. Example usage import tensorflow as tf
for i in tf.range(10):
tf.autograph.trace(i)
# Output: <Tensor ...>
Args
*args Arguments to print to sys.stdout. | tensorflow.autograph.trace |
tf.batch_to_space View source on GitHub BatchToSpace for N-D tensors of type T.
tf.batch_to_space(
input, block_shape, crops, name=None
)
This operation reshapes the "batch" dimension 0 into M + 1 dimensions of shape block_shape + [batch], interleaves these blocks back into the grid defined by the spatial dimensions [1, ..., M], to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to crops to produce the output. This is the reverse of SpaceToBatch (see tf.space_to_batch).
Args
input A N-D Tensor with shape input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions.
block_shape A 1-D Tensor with shape [M]. Must be one of the following types: int32, int64. All values must be >= 1. For backwards compatibility with TF 1.0, this parameter may be an int, in which case it is converted to numpy.array([block_shape, block_shape], dtype=numpy.int64).
crops A 2-D Tensor with shape [M, 2]. Must be one of the following types: int32, int64. All values must be >= 0. crops[i] = [crop_start, crop_end] specifies the amount to crop from input dimension i + 1, which corresponds to spatial dimension i. It is required that crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]. This operation is equivalent to the following steps:
1. Reshape input to reshaped of shape: [block_shape[0], ..., block_shape[M-1], batch / prod(block_shape), input_shape[1], ..., input_shape[N-1]]
2. Permute dimensions of reshaped to produce permuted of shape [batch / prod(block_shape), input_shape[1], block_shape[0], ..., input_shape[M], block_shape[M-1], input_shape[M+1], ..., input_shape[N-1]]
3. Reshape permuted to produce reshaped_permuted of shape [batch / prod(block_shape), input_shape[1] * block_shape[0], ..., input_shape[M] * block_shape[M-1], input_shape[M+1], ..., input_shape[N-1]]
4. Crop the start and end of dimensions [1, ..., M] of reshaped_permuted according to crops to produce the output of shape: [batch / prod(block_shape), input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1], ..., input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1], input_shape[M+1], ..., input_shape[N-1]]
name A name for the operation (optional). Examples: (1) For the following input of shape [4, 1, 1, 1], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]: [[[[1]]],
[[[2]]],
[[[3]]],
[[[4]]]]
The output tensor has shape [1, 2, 2, 1] and value: x = [[[[1], [2]],
[[3], [4]]]]
(2) For the following input of shape [4, 1, 1, 3], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]: [[[1, 2, 3]],
[[4, 5, 6]],
[[7, 8, 9]],
[[10, 11, 12]]]
The output tensor has shape [1, 2, 2, 3] and value: x = [[[[1, 2, 3], [4, 5, 6 ]],
[[7, 8, 9], [10, 11, 12]]]]
(3) For the following input of shape [4, 2, 2, 1], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]: x = [[[[1], [3]], [[ 9], [11]]],
[[[2], [4]], [[10], [12]]],
[[[5], [7]], [[13], [15]]],
[[[6], [8]], [[14], [16]]]]
The output tensor has shape [1, 4, 4, 1] and value: x = [[[1], [2], [ 3], [ 4]],
[[5], [6], [ 7], [ 8]],
[[9], [10], [11], [12]],
[[13], [14], [15], [16]]]
(4) For the following input of shape [8, 1, 3, 1], block_shape = [2, 2], and crops = [[0, 0], [2, 0]]: x = [[[[0], [ 1], [ 3]]],
[[[0], [ 9], [11]]],
[[[0], [ 2], [ 4]]],
[[[0], [10], [12]]],
[[[0], [ 5], [ 7]]],
[[[0], [13], [15]]],
[[[0], [ 6], [ 8]]],
[[[0], [14], [16]]]]
The output tensor has shape [2, 2, 4, 1] and value: x = [[[[ 1], [ 2], [ 3], [ 4]],
[[ 5], [ 6], [ 7], [ 8]]],
[[[ 9], [10], [11], [12]],
[[13], [14], [15], [16]]]]
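Example (1) can be reproduced directly; a small runnable sketch:
import tensorflow as tf
x = tf.reshape(tf.range(1, 5), [4, 1, 1, 1])  # the input of example (1)
y = tf.batch_to_space(x, block_shape=[2, 2], crops=[[0, 0], [0, 0]])
print(y.shape)  # (1, 2, 2, 1)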
Returns A Tensor. Has the same type as input. | tensorflow.batch_to_space |
tf.bitcast Bitcasts a tensor from one type to another without copying data. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.bitcast
tf.bitcast(
input, type, name=None
)
Given a tensor input, this operation returns a tensor that has the same buffer data as input with datatype type. If the input datatype T is larger than the output datatype type then the shape changes from [...] to [..., sizeof(T)/sizeof(type)]. If T is smaller than type, the operator requires that the rightmost dimension be equal to sizeof(type)/sizeof(T). The shape then goes from [..., sizeof(type)/sizeof(T)] to [...]. tf.bitcast() and tf.cast() work differently when a real dtype is cast to a complex dtype (e.g. tf.complex64 or tf.complex128): tf.cast() sets the imaginary part to 0, while tf.bitcast() raises an InvalidArgumentError because the buffer sizes do not match. For example, Example 1:
a = [1., 2., 3.]
equality_bitcast = tf.bitcast(a, tf.complex128)
Traceback (most recent call last):
InvalidArgumentError: Cannot bitcast from 1 to 18 [Op:Bitcast]
equality_cast = tf.cast(a, tf.complex128)
print(equality_cast)
tf.Tensor([1.+0.j 2.+0.j 3.+0.j], shape=(3,), dtype=complex128)
Example 2:
tf.bitcast(tf.constant(0xffffffff, dtype=tf.uint32), tf.uint8)
<tf.Tensor: shape=(4,), dtype=uint8, numpy=array([255, 255, 255, 255], dtype=uint8)>
Example 3:
x = [1., 2., 3.]
y = [0., 2., 3.]
equality = tf.equal(x, y)
equality_cast = tf.cast(equality, tf.float32)
equality_bitcast = tf.bitcast(equality_cast, tf.uint8)
print(equality)
tf.Tensor([False True True], shape=(3,), dtype=bool)
print(equality_cast)
tf.Tensor([0. 1. 1.], shape=(3,), dtype=float32)
print(equality_bitcast)
tf.Tensor(
[[ 0 0 0 0]
[ 0 0 128 63]
[ 0 0 128 63]], shape=(3, 4), dtype=uint8)
Note: Bitcast is implemented as a low-level cast, so machines with different endian orderings will give different results.
Args
input A Tensor. Must be one of the following types: bfloat16, half, float32, float64, int64, int32, uint8, uint16, uint32, uint64, int8, int16, complex64, complex128, qint8, quint8, qint16, quint16, qint32.
type A tf.DType from: tf.bfloat16, tf.half, tf.float32, tf.float64, tf.int64, tf.int32, tf.uint8, tf.uint16, tf.uint32, tf.uint64, tf.int8, tf.int16, tf.complex64, tf.complex128, tf.qint8, tf.quint8, tf.qint16, tf.quint16, tf.qint32.
name A name for the operation (optional).
Returns A Tensor of type type. | tensorflow.bitcast |
Module: tf.bitwise Operations for manipulating the binary representations of integers. Functions bitwise_and(...): Elementwise computes the bitwise AND of x and y. bitwise_or(...): Elementwise computes the bitwise OR of x and y. bitwise_xor(...): Elementwise computes the bitwise XOR of x and y. invert(...): Invert (flip) each bit of supported types; for example, type uint8 value 01010101 becomes 10101010. left_shift(...): Elementwise computes the bitwise left-shift of x and y. right_shift(...): Elementwise computes the bitwise right-shift of x and y. | tensorflow.bitwise |
tf.bitwise.bitwise_and Elementwise computes the bitwise AND of x and y. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.bitwise.bitwise_and
tf.bitwise.bitwise_and(
x, y, name=None
)
The result will have those bits set, that are set in both x and y. The computation is performed on the underlying representations of x and y. For example: import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
tf.uint8, tf.uint16, tf.uint32, tf.uint64]
for dtype in dtype_list:
lhs = tf.constant([0, 5, 3, 14], dtype=dtype)
rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
exp = tf.constant([0, 0, 3, 10], dtype=tf.float32)
res = bitwise_ops.bitwise_and(lhs, rhs)
tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE
Args
x A Tensor. Must be one of the following types: int8, int16, int32, int64, uint8, uint16, uint32, uint64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.bitwise.bitwise_and |
tf.bitwise.bitwise_or Elementwise computes the bitwise OR of x and y. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.bitwise.bitwise_or
tf.bitwise.bitwise_or(
x, y, name=None
)
The result will have those bits set, that are set in x, y or both. The computation is performed on the underlying representations of x and y. For example: import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
tf.uint8, tf.uint16, tf.uint32, tf.uint64]
for dtype in dtype_list:
lhs = tf.constant([0, 5, 3, 14], dtype=dtype)
rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
exp = tf.constant([5, 5, 7, 15], dtype=tf.float32)
res = bitwise_ops.bitwise_or(lhs, rhs)
tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE
Args
x A Tensor. Must be one of the following types: int8, int16, int32, int64, uint8, uint16, uint32, uint64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.bitwise.bitwise_or |
tf.bitwise.bitwise_xor Elementwise computes the bitwise XOR of x and y. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.bitwise.bitwise_xor
tf.bitwise.bitwise_xor(
x, y, name=None
)
The result will have those bits set, that are different in x and y. The computation is performed on the underlying representations of x and y. For example: import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
tf.uint8, tf.uint16, tf.uint32, tf.uint64]
for dtype in dtype_list:
lhs = tf.constant([0, 5, 3, 14], dtype=dtype)
rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
exp = tf.constant([5, 5, 4, 5], dtype=tf.float32)
res = bitwise_ops.bitwise_xor(lhs, rhs)
tf.assert_equal(tf.cast(res, tf.float32), exp) # TRUE
Args
x A Tensor. Must be one of the following types: int8, int16, int32, int64, uint8, uint16, uint32, uint64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.bitwise.bitwise_xor |
tf.bitwise.invert Invert (flip) each bit of supported types; for example, type uint8 value 01010101 becomes 10101010. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.bitwise.invert
tf.bitwise.invert(
x, name=None
)
Flip each bit of supported types. For example, type int8 (decimal 2) binary 00000010 becomes (decimal -3) binary 11111101. This operation is performed on each element of the tensor argument x. Example: import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
# flip 2 (00000010) to -3 (11111101)
tf.assert_equal(-3, bitwise_ops.invert(2))
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64,
tf.uint8, tf.uint16, tf.uint32, tf.uint64]
inputs = [0, 5, 3, 14]
for dtype in dtype_list:
# Because of issues with negative numbers, let's test this indirectly.
# 1. invert(a) and a = 0
# 2. invert(a) or a = invert(0)
input_tensor = tf.constant([0, 5, 3, 14], dtype=dtype)
not_a_and_a, not_a_or_a, not_0 = [bitwise_ops.bitwise_and(
input_tensor, bitwise_ops.invert(input_tensor)),
bitwise_ops.bitwise_or(
input_tensor, bitwise_ops.invert(input_tensor)),
bitwise_ops.invert(
tf.constant(0, dtype=dtype))]
expected = tf.constant([0, 0, 0, 0], dtype=tf.float32)
tf.assert_equal(tf.cast(not_a_and_a, tf.float32), expected)
expected = tf.cast([not_0] * 4, tf.float32)
tf.assert_equal(tf.cast(not_a_or_a, tf.float32), expected)
# For unsigned dtypes let's also check the result directly.
if dtype.is_unsigned:
inverted = bitwise_ops.invert(input_tensor)
expected = tf.constant([dtype.max - x for x in inputs], dtype=tf.float32)
tf.assert_equal(tf.cast(inverted, tf.float32), tf.cast(expected, tf.float32))
Args
x A Tensor. Must be one of the following types: int8, int16, int32, int64, uint8, uint16, uint32, uint64.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.bitwise.invert |
tf.bitwise.left_shift Elementwise computes the bitwise left-shift of x and y. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.bitwise.left_shift
tf.bitwise.left_shift(
x, y, name=None
)
If y is negative, or greater than or equal to the width of x in bits the result is implementation defined. Example: import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
import numpy as np
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]
for dtype in dtype_list:
lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)
rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
left_shift_result = bitwise_ops.left_shift(lhs, rhs)
print(left_shift_result)
# This will print:
# tf.Tensor([ -32 -5 -128 0], shape=(4,), dtype=int8)
# tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int16)
# tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int32)
# tf.Tensor([ -32 -5 -384 -28672], shape=(4,), dtype=int64)
lhs = np.array([-2, 64, 101, 32], dtype=np.int8)
rhs = np.array([-1, -5, -3, -14], dtype=np.int8)
bitwise_ops.left_shift(lhs, rhs)
# <tf.Tensor: shape=(4,), dtype=int8, numpy=array([ -2, 64, 101, 32], dtype=int8)>
Args
x A Tensor. Must be one of the following types: int8, int16, int32, int64, uint8, uint16, uint32, uint64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.bitwise.left_shift |
tf.bitwise.right_shift Elementwise computes the bitwise right-shift of x and y. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.bitwise.right_shift
tf.bitwise.right_shift(
x, y, name=None
)
Performs a logical shift for unsigned integer types, and an arithmetic shift for signed integer types. If y is negative, or greater than or equal to the width of x in bits, the result is implementation defined. Example: import tensorflow as tf
from tensorflow.python.ops import bitwise_ops
import numpy as np
dtype_list = [tf.int8, tf.int16, tf.int32, tf.int64]
for dtype in dtype_list:
lhs = tf.constant([-1, -5, -3, -14], dtype=dtype)
rhs = tf.constant([5, 0, 7, 11], dtype=dtype)
right_shift_result = bitwise_ops.right_shift(lhs, rhs)
print(right_shift_result)
# This will print:
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int8)
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int16)
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int32)
# tf.Tensor([-1 -5 -1 -1], shape=(4,), dtype=int64)
lhs = np.array([-2, 64, 101, 32], dtype=np.int8)
rhs = np.array([-1, -5, -3, -14], dtype=np.int8)
bitwise_ops.right_shift(lhs, rhs)
# <tf.Tensor: shape=(4,), dtype=int8, numpy=array([ -2, 64, 101, 32], dtype=int8)>
Args
x A Tensor. Must be one of the following types: int8, int16, int32, int64, uint8, uint16, uint32, uint64.
y A Tensor. Must have the same type as x.
name A name for the operation (optional).
Returns A Tensor. Has the same type as x. | tensorflow.bitwise.right_shift |
tf.boolean_mask View source on GitHub Apply boolean mask to tensor.
tf.boolean_mask(
tensor, mask, axis=None, name='boolean_mask'
)
Numpy equivalent is tensor[mask]. In general, 0 < dim(mask) = K <= dim(tensor), and mask's shape must match the first K dimensions of tensor's shape. We then have: boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd] where (i1,...,iK) is the ith True entry of mask (row-major order). The axis could be used with mask to indicate the axis to mask from. In that case, axis + dim(mask) <= dim(tensor) and mask's shape must match the first axis + dim(mask) dimensions of tensor's shape. See also: tf.ragged.boolean_mask, which can be applied to both dense and ragged tensors, and can be used if you need to preserve the masked dimensions of tensor (rather than flattening them, as tf.boolean_mask does). Examples:
tensor = [0, 1, 2, 3] # 1-D example
mask = np.array([True, False, True, False])
tf.boolean_mask(tensor, mask)
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([0, 2], dtype=int32)>
tensor = [[1, 2], [3, 4], [5, 6]] # 2-D example
mask = np.array([True, False, True])
tf.boolean_mask(tensor, mask)
<tf.Tensor: shape=(2, 2), dtype=int32, numpy=
array([[1, 2],
[5, 6]], dtype=int32)>
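The axis argument works the same way; a small sketch (assuming tf and np are imported as in the examples above), masking dimension 1 of a [2, 3, 2] tensor:
tensor = tf.reshape(tf.range(12), [2, 3, 2])
mask = np.array([True, False, True])  # length matches dimension 1
tf.boolean_mask(tensor, mask, axis=1)  # result shape: [2, 2, 2]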
Args
tensor N-D Tensor.
mask K-D boolean Tensor, K <= N and K must be known statically.
axis A 0-D int Tensor representing the axis in tensor to mask from. By default, axis is 0 which will mask from the first dimension. Otherwise K + axis <= N.
name A name for this operation (optional).
Returns (N-K+1)-dimensional tensor populated by entries in tensor corresponding to True values in mask.
Raises
ValueError If shapes do not conform. | tensorflow.boolean_mask |
tf.broadcast_dynamic_shape View source on GitHub Computes the shape of a broadcast given symbolic shapes. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.broadcast_dynamic_shape
tf.broadcast_dynamic_shape(
shape_x, shape_y
)
When shape_x and shape_y are Tensors representing shapes (i.e. the result of calling tf.shape on another Tensor), this computes a Tensor which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y. This is useful when validating the result of a broadcasting operation when the tensors do not have statically known shapes. Example:
shape_x = (1, 2, 3)
shape_y = (5, 1, 3)
tf.broadcast_dynamic_shape(shape_x, shape_y)
<tf.Tensor: shape=(3,), dtype=int32, numpy=array([5, 2, 3], ...>
Args
shape_x A rank 1 integer Tensor, representing the shape of x.
shape_y A rank 1 integer Tensor, representing the shape of y.
Returns A rank 1 integer Tensor representing the broadcasted shape.
Raises
InvalidArgumentError If the two shapes are incompatible for broadcasting. | tensorflow.broadcast_dynamic_shape |
tf.broadcast_static_shape View source on GitHub Computes the shape of a broadcast given known shapes. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.broadcast_static_shape
tf.broadcast_static_shape(
shape_x, shape_y
)
When shape_x and shape_y are fully known TensorShapes, this computes a TensorShape which is the shape of the result of a broadcasting op applied to tensors of shapes shape_x and shape_y. For example, if shape_x is TensorShape([1, 2, 3]) and shape_y is TensorShape([5, 1, 3]), the result is a TensorShape whose value is TensorShape([5, 2, 3]). This is useful when validating the result of a broadcasting operation when the tensors have statically known shapes. Example:
shape_x = tf.TensorShape([1, 2, 3])
shape_y = tf.TensorShape([5, 1, 3])
tf.broadcast_static_shape(shape_x, shape_y)
TensorShape([5, 2, 3])
Args
shape_x A TensorShape
shape_y A TensorShape
Returns A TensorShape representing the broadcasted shape.
Raises
ValueError If the two shapes cannot be broadcast. | tensorflow.broadcast_static_shape |
tf.broadcast_to Broadcast an array for a compatible shape. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.broadcast_to
tf.broadcast_to(
input, shape, name=None
)
Broadcasting is the process of making arrays have compatible shapes for arithmetic operations. Two shapes are compatible if for each dimension pair they are either equal or one of them is one. When trying to broadcast a Tensor to a shape, it starts with the trailing dimensions, and works its way forward. For example,
x = tf.constant([1, 2, 3])
y = tf.broadcast_to(x, [3, 3])
print(y)
tf.Tensor(
[[1 2 3]
[1 2 3]
[1 2 3]], shape=(3, 3), dtype=int32)
In the above example, the input Tensor of shape [3] is broadcast to an output Tensor of shape [3, 3]. When doing broadcasted operations such as multiplying a tensor by a scalar, broadcasting (usually) confers some time or space benefit, as the broadcasted tensor is never materialized. However, broadcast_to does not carry with it any such benefits. The newly-created tensor takes the full memory of the broadcasted shape. (In a graph context, broadcast_to might be fused into a subsequent operation and then be optimized away, however.)
Args
input A Tensor. A Tensor to broadcast.
shape A Tensor. Must be one of the following types: int32, int64. An 1-D int Tensor. The shape of the desired output.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.broadcast_to |
tf.case View source on GitHub Create a case operation.
tf.case(
pred_fn_pairs, default=None, exclusive=False, strict=False,
name='case'
)
See also tf.switch_case. The pred_fn_pairs parameter is a list of pairs of size N. Each pair contains a boolean scalar tensor and a python callable that creates the tensors to be returned if the boolean evaluates to True. default is a callable generating a list of tensors. All the callables in pred_fn_pairs as well as default (if provided) should return the same number and types of tensors. If exclusive==True, all predicates are evaluated, and an exception is thrown if more than one of the predicates evaluates to True. If exclusive==False, execution stops at the first predicate which evaluates to True, and the tensors generated by the corresponding function are returned immediately. If none of the predicates evaluate to True, this operation returns the tensors generated by default. tf.case supports nested structures as implemented in tf.contrib.framework.nest. All of the callables must return the same (possibly nested) value structure of lists, tuples, and/or named tuples. Singleton lists and tuples form the only exceptions to this: when returned by a callable, they are implicitly unpacked to single values. This behavior is disabled by passing strict=True. Example 1: Pseudocode: if (x < y) return 17;
else return 23;
Expressions: f1 = lambda: tf.constant(17)
f2 = lambda: tf.constant(23)
r = tf.case([(tf.less(x, y), f1)], default=f2)
Example 2: Pseudocode: if (x < y && x > z) raise OpError("Only one predicate may evaluate to True");
if (x < y) return 17;
else if (x > z) return 23;
else return -1;
Expressions: def f1(): return tf.constant(17)
def f2(): return tf.constant(23)
def f3(): return tf.constant(-1)
r = tf.case([(tf.less(x, y), f1), (tf.greater(x, z), f2)],
default=f3, exclusive=True)
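A runnable sketch of Example 1 with concrete tensors:
import tensorflow as tf
x, y = tf.constant(1), tf.constant(2)
f1 = lambda: tf.constant(17)
f2 = lambda: tf.constant(23)
r = tf.case([(tf.less(x, y), f1)], default=f2)
print(r)  # tf.Tensor(17, shape=(), dtype=int32)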
Args
pred_fn_pairs List of pairs of a boolean scalar tensor and a callable which returns a list of tensors.
default Optional callable that returns a list of tensors.
exclusive True iff at most one predicate is allowed to evaluate to True.
strict A boolean that enables/disables 'strict' mode; see above.
name A name for this operation (optional).
Returns The tensors returned by the first pair whose predicate evaluated to True, or those returned by default if none does.
Raises
TypeError If pred_fn_pairs is not a list/tuple.
TypeError If pred_fn_pairs is a list but does not contain 2-tuples.
TypeError If fns[i] is not callable for any i, or default is not callable. V2 Compatibility pred_fn_pairs could be a dictionary in v1. However, tf.Tensor and tf.Variable are no longer hashable in v2, so cannot be used as a key for a dictionary. Please use a list or a tuple instead. | tensorflow.case |
tf.cast View source on GitHub Casts a tensor to a new type. View aliases Main aliases
tf.dtypes.cast Compat aliases for migration See Migration guide for more details. tf.compat.v1.cast, tf.compat.v1.dtypes.cast
tf.cast(
x, dtype, name=None
)
The operation casts x (in case of Tensor) or x.values (in case of SparseTensor or IndexedSlices) to dtype. For example:
x = tf.constant([1.8, 2.2], dtype=tf.float32)
tf.dtypes.cast(x, tf.int32)
<tf.Tensor: shape=(2,), dtype=int32, numpy=array([1, 2], dtype=int32)>
The operation supports data types (for x and dtype) of uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64, complex64, complex128, bfloat16. In case of casting from complex types (complex64, complex128) to real types, only the real part of x is returned. In case of casting from real types to complex types (complex64, complex128), the imaginary part of the returned value is set to 0. The handling of complex types here matches the behavior of numpy. Note casting nan and inf values to integral types has undefined behavior.
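A small sketch of the complex-type behavior described above (assuming tf is imported):
z = tf.constant([1 + 2j, 3 - 4j], dtype=tf.complex64)
tf.cast(z, tf.float32)  # [1., 3.] -- the imaginary part is dropped
tf.cast(tf.constant([1.5, 2.5]), tf.complex64)  # [1.5+0j, 2.5+0j]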
Args
x A Tensor or SparseTensor or IndexedSlices of numeric type. It could be uint8, uint16, uint32, uint64, int8, int16, int32, int64, float16, float32, float64, complex64, complex128, bfloat16.
dtype The destination type. The list of supported dtypes is the same as x.
name A name for the operation (optional).
Returns A Tensor or SparseTensor or IndexedSlices with same shape as x and same type as dtype.
Raises
TypeError If x cannot be cast to the dtype. | tensorflow.cast |
tf.clip_by_global_norm View source on GitHub Clips values of multiple tensors by the ratio of the sum of their norms. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.clip_by_global_norm
tf.clip_by_global_norm(
t_list, clip_norm, use_norm=None, name=None
)
Given a tuple or list of tensors t_list, and a clipping ratio clip_norm, this operation returns a list of clipped tensors list_clipped and the global norm (global_norm) of all tensors in t_list. Optionally, if you've already computed the global norm for t_list, you can specify the global norm with use_norm. To perform the clipping, the values t_list[i] are set to: t_list[i] * clip_norm / max(global_norm, clip_norm)
where: global_norm = sqrt(sum([l2norm(t)**2 for t in t_list]))
If clip_norm > global_norm then the entries in t_list remain as they are, otherwise they're all shrunk by the global ratio. If global_norm == infinity then the entries in t_list are all set to NaN to signal that an error occurred. Any of the entries of t_list that are of type None are ignored. This is the correct way to perform gradient clipping (Pascanu et al., 2012). However, it is slower than clip_by_norm() because all the parameters must be ready before the clipping operation can be performed.
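A small numeric sketch with illustrative values:
import tensorflow as tf
grads = [tf.constant([3.0, 4.0]), tf.constant([6.0, 8.0])]
# global_norm = sqrt(5.0**2 + 10.0**2) = sqrt(125.) ~= 11.18
clipped, global_norm = tf.clip_by_global_norm(grads, clip_norm=5.0)
# Each tensor is scaled by clip_norm / global_norm ~= 0.447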
Args
t_list A tuple or list of mixed Tensors, IndexedSlices, or None.
clip_norm A 0-D (scalar) Tensor > 0. The clipping ratio.
use_norm A 0-D (scalar) Tensor of type float (optional). The global norm to use. If not provided, global_norm() is used to compute the norm.
name A name for the operation (optional).
Returns
list_clipped A list of Tensors of the same type as t_list.
global_norm A 0-D (scalar) Tensor representing the global norm.
Raises
TypeError If t_list is not a sequence. References: On the difficulty of training Recurrent Neural Networks: Pascanu et al., 2012 (pdf) | tensorflow.clip_by_global_norm |
tf.clip_by_norm View source on GitHub Clips tensor values to a maximum L2-norm. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.clip_by_norm
tf.clip_by_norm(
t, clip_norm, axes=None, name=None
)
Given a tensor t, and a maximum clip value clip_norm, this operation normalizes t so that its L2-norm is less than or equal to clip_norm, along the dimensions given in axes. Specifically, in the default case where all dimensions are used for calculation, if the L2-norm of t is already less than or equal to clip_norm, then t is not modified. If the L2-norm is greater than clip_norm, then this operation returns a tensor of the same type and shape as t with its values set to: t * clip_norm / l2norm(t) In this case, the L2-norm of the output tensor is clip_norm. As another example, if t is a matrix and axes == [1], then each row of the output will have L2-norm less than or equal to clip_norm. If axes == [0] instead, each column of the output will be clipped. Code example:
some_nums = tf.constant([[1, 2, 3, 4, 5]], dtype=tf.float32)
tf.clip_by_norm(some_nums, 2.0).numpy()
array([[0.26967996, 0.5393599 , 0.80903983, 1.0787199 , 1.3483998 ]],
dtype=float32)
This operation is typically used to clip gradients before applying them with an optimizer. Most gradient data is a collection of different shaped tensors for different parts of the model. Thus, this is a common usage: # Get your gradients after training
loss_value, grads = grad(model, features, labels)
# Apply some clipping
grads = [tf.clip_by_norm(g, norm)
for g in grads]
# Continue on with training
optimizer.apply_gradients(zip(grads, model.trainable_variables))
Args
t A Tensor or IndexedSlices. This must be a floating point type.
clip_norm A 0-D (scalar) Tensor > 0. A maximum clipping value, also of floating point type.
axes A 1-D (vector) Tensor of type int32 containing the dimensions to use for computing the L2-norm. If None (the default), uses all dimensions.
name A name for the operation (optional).
Returns A clipped Tensor or IndexedSlices.
Raises
ValueError If the clip_norm tensor is not a 0-D scalar tensor.
TypeError If dtype of the input is not a floating point or complex type. | tensorflow.clip_by_norm |
tf.clip_by_value View source on GitHub Clips tensor values to a specified min and max. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.clip_by_value
tf.clip_by_value(
t, clip_value_min, clip_value_max, name=None
)
Given a tensor t, this operation returns a tensor of the same type and shape as t with its values clipped to clip_value_min and clip_value_max. Any values less than clip_value_min are set to clip_value_min. Any values greater than clip_value_max are set to clip_value_max.
Note: clip_value_min needs to be smaller or equal to clip_value_max for correct results.
For example: Basic usage passes a scalar as the min and max value.
t = tf.constant([[-10., -1., 0.], [0., 2., 10.]])
t2 = tf.clip_by_value(t, clip_value_min=-1, clip_value_max=1)
t2.numpy()
array([[-1., -1., 0.],
[ 0., 1., 1.]], dtype=float32)
The min and max can be the same size as t, or broadcastable to that size.
t = tf.constant([[-1, 0., 10.], [-1, 0, 10]])
clip_min = [[2],[1]]
t3 = tf.clip_by_value(t, clip_value_min=clip_min, clip_value_max=100)
t3.numpy()
array([[ 2., 2., 10.],
[ 1., 1., 10.]], dtype=float32)
Broadcasting fails, intentionally, if you would expand the dimensions of t
t = tf.constant([[-1, 0., 10.], [-1, 0, 10]])
clip_min = [[[2, 1]]] # Has a third axis
t4 = tf.clip_by_value(t, clip_value_min=clip_min, clip_value_max=100)
Traceback (most recent call last):
InvalidArgumentError: Incompatible shapes: [2,3] vs. [1,1,2]
It throws a TypeError if you try to clip an int to a float value (tf.cast the input to float first).
t = tf.constant([[1, 2], [3, 4]], dtype=tf.int32)
t5 = tf.clip_by_value(t, clip_value_min=-3.1, clip_value_max=3.1)
Traceback (most recent call last):
TypeError: Cannot convert ...
Args
t A Tensor or IndexedSlices.
clip_value_min The minimum value to clip to. A scalar Tensor or one that is broadcastable to the shape of t.
clip_value_max The maximum value to clip to. A scalar Tensor or one that is broadcastable to the shape of t.
name A name for the operation (optional).
Returns A clipped Tensor or IndexedSlices.
Raises tf.errors.InvalidArgumentError: If the clip tensors would trigger array broadcasting that would make the returned tensor larger than the input. TypeError If dtype of the input is int32 and dtype of the clip_value_min or clip_value_max is float32 | tensorflow.clip_by_value |
Module: tf.compat Compatibility functions. The tf.compat module contains two sets of compatibility functions. Tensorflow 1.x and 2.x APIs The compat.v1 and compat.v2 submodules provide a complete copy of both the v1 and v2 APIs for backwards and forwards compatibility across TensorFlow versions 1.x and 2.x. See the migration guide for details. Utilities for writing compatible code Aside from the compat.v1 and compat.v2 submodules, tf.compat also contains a set of helper functions for writing code that works in both: TensorFlow 1.x and 2.x Python 2 and 3 Type collections The compatibility module also provides the following aliases for common sets of python types: bytes_or_text_types complex_types integral_types real_types Modules v1 module: Bring in all of the public TensorFlow interface into this module. Functions as_bytes(...): Converts bytearray, bytes, or unicode python input types to bytes. as_str(...) as_str_any(...): Converts input to str type. as_text(...): Converts any string-like python input types to unicode. dimension_at_index(...): Compatibility utility required to allow for both V1 and V2 behavior in TF. dimension_value(...): Compatibility utility required to allow for both V1 and V2 behavior in TF. forward_compatibility_horizon(...): Context manager for testing forward compatibility of generated graphs. forward_compatible(...): Return true if the forward compatibility window has expired. path_to_str(...): Converts input which is a PathLike object to str type.
Other Members
bytes_or_text_types
complex_types
integral_types
real_types | tensorflow.compat |
tf.compat.as_bytes View source on GitHub Converts bytearray, bytes, or unicode python input types to bytes. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.compat.as_bytes
tf.compat.as_bytes(
bytes_or_text, encoding='utf-8'
)
Uses utf-8 encoding for text by default.
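For example:
import tensorflow as tf
tf.compat.as_bytes('hello')   # b'hello'
tf.compat.as_bytes(b'hello')  # b'hello'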
Args
bytes_or_text A bytearray, bytes, str, or unicode object.
encoding A string indicating the charset for encoding unicode.
Returns A bytes object.
Raises
TypeError If bytes_or_text is not a binary or unicode string. | tensorflow.compat.as_bytes |
tf.compat.as_str View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.compat.as_str
tf.compat.as_str(
bytes_or_text, encoding='utf-8'
) | tensorflow.compat.as_str |
tf.compat.as_str_any View source on GitHub Converts input to str type. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.compat.as_str_any
tf.compat.as_str_any(
value
)
Uses str(value), except for bytes typed inputs, which are converted using as_str.
Args
value An object that can be converted to str.
Returns A str object. | tensorflow.compat.as_str_any |
tf.compat.as_text View source on GitHub Converts any string-like python input types to unicode. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.compat.as_text
tf.compat.as_text(
bytes_or_text, encoding='utf-8'
)
Returns the input as a unicode string. Uses utf-8 encoding for text by default.
Args
bytes_or_text A bytes, str, or unicode object.
encoding A string indicating the charset for decoding unicode.
Returns A unicode (Python 2) or str (Python 3) object.
Raises
TypeError If bytes_or_text is not a binary or unicode string. | tensorflow.compat.as_text |
tf.compat.dimension_at_index View source on GitHub Compatibility utility required to allow for both V1 and V2 behavior in TF. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.compat.dimension_at_index, tf.compat.v1.dimension_at_index
tf.compat.dimension_at_index(
shape, index
)
Until the release of TF 2.0, we need the legacy behavior of TensorShape to coexist with the new behavior. This utility is a bridge between the two. If you want to retrieve the Dimension instance corresponding to a certain index in a TensorShape instance, use this utility, like this: # If you had this in your V1 code:
dim = tensor_shape[i]
# Use `dimension_at_index` as direct replacement compatible with both V1 & V2:
dim = dimension_at_index(tensor_shape, i)
# Another possibility would be this, but WARNING: it only works if the
# tensor_shape instance has a defined rank.
dim = tensor_shape.dims[i] # `dims` may be None if the rank is undefined!
# In native V2 code, we recommend instead being more explicit:
if tensor_shape.rank is None:
dim = Dimension(None)
else:
dim = tensor_shape.dims[i]
# Being more explicit will save you from the following trap (present in V1):
# you might do in-place modifications to `dim` and expect them to be reflected
# in `tensor_shape[i]`, but they would not be (as the Dimension object was
# instantiated on the fly).
Arguments
shape A TensorShape instance.
index An integer index.
Returns A dimension object. | tensorflow.compat.dimension_at_index |
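A minimal runnable sketch for dimension_at_index (assumes TF 2.x; the reprs shown are illustrative): the utility works whether or not every dimension is defined.
import tensorflow as tf
shape = tf.TensorShape([2, None, 3])
print(tf.compat.dimension_at_index(shape, 0))  # Dimension(2)
print(tf.compat.dimension_at_index(shape, 1))  # Dimension(None)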
tf.compat.dimension_value View source on GitHub Compatibility utility required to allow for both V1 and V2 behavior in TF. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.compat.dimension_value, tf.compat.v1.dimension_value
tf.compat.dimension_value(
dimension
)
Until the release of TF 2.0, we need the legacy behavior of TensorShape to coexist with the new behavior. This utility is a bridge between the two. When accessing the value of a TensorShape dimension, use this utility, like this: # If you had this in your V1 code:
value = tensor_shape[i].value
# Use `dimension_value` as direct replacement compatible with both V1 & V2:
value = dimension_value(tensor_shape[i])
# This would be the V2 equivalent:
value = tensor_shape[i] # Warning: this will return the dim value in V2!
Arguments
dimension Either a Dimension instance, an integer, or None.
Returns A plain value, i.e. an integer or None. | tensorflow.compat.dimension_value |
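A minimal runnable sketch for dimension_value (assumes TF 2.x, where indexing a TensorShape already yields a plain value):
import tensorflow as tf
shape = tf.TensorShape([2, None])
print(tf.compat.dimension_value(shape[0]))  # 2
print(tf.compat.dimension_value(shape[1]))  # None (unknown dimension)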
tf.compat.forward_compatibility_horizon View source on GitHub Context manager for testing forward compatibility of generated graphs. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.compat.forward_compatibility_horizon
@tf_contextlib.contextmanager
tf.compat.forward_compatibility_horizon(
year, month, day
)
See Version compatibility. To ensure forward compatibility of generated graphs (see forward_compatible) with older binaries, new features can be gated with: if compat.forward_compatible(year=2018, month=8, day=1):
generate_graph_with_new_features()
else:
generate_graph_so_older_binaries_can_consume_it()
However, when adding new features, one may want to unit test them before the forward compatibility window expires. This context manager enables such tests. For example: from tensorflow.python.compat import compat
def testMyNewFeature(self):
with compat.forward_compatibility_horizon(2018, 8, 2):
# Test that generate_graph_with_new_features() has an effect
Args
year A year (e.g., 2018). Must be an int.
month A month (1 <= month <= 12) in year. Must be an int.
day A day (1 <= day <= 28, 29, 30, or 31, depending on the month) in month. Must be an int.
Yields Nothing. | tensorflow.compat.forward_compatibility_horizon |
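A minimal self-contained sketch (the dates are arbitrary): inside the context the compatibility horizon is moved to the given date, so a forward_compatible check against any earlier date passes.
import tensorflow as tf
with tf.compat.forward_compatibility_horizon(2018, 8, 2):
    assert tf.compat.forward_compatible(2018, 8, 1)  # holds inside the context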
tf.compat.forward_compatible View source on GitHub Return true if the forward compatibility window has expired. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.compat.forward_compatible
tf.compat.forward_compatible(
year, month, day
)
See Version compatibility. Forward-compatibility refers to scenarios where the producer of a TensorFlow model (a GraphDef or SavedModel) is compiled against a version of the TensorFlow library newer than what the consumer was compiled against. The "producer" is typically a Python program that constructs and trains a model, while the "consumer" is typically another program that loads and serves the model. TensorFlow supports a 3-week forward-compatibility window for programs compiled from source at HEAD. For example, consider the case where a new operation MyNewAwesomeAdd is created with the intent of replacing the implementation of an existing Python wrapper, tf.add. The Python wrapper implementation should change from something like: def add(inputs, name=None):
return gen_math_ops.add(inputs, name)
to: from tensorflow.python.compat import compat
def add(inputs, name=None):
if compat.forward_compatible(year, month, day):
# Can use the awesome new implementation.
return gen_math_ops.my_new_awesome_add(inputs, name)
# To maintain forward compatibility, use the old implementation.
return gen_math_ops.add(inputs, name)
Where year, month, and day specify the date beyond which binaries that consume a model are expected to have been updated to include the new operations. This date is typically at least 3 weeks beyond the date the code that adds the new operation is committed.
Args
year A year (e.g., 2018). Must be an int.
month A month (1 <= month <= 12) in year. Must be an int.
day A day (1 <= day <= 28, 29, 30, or 31, depending on the month) in month. Must be an int.
Returns True if the caller can expect that serialized TensorFlow graphs produced can be consumed by programs that are compiled with the TensorFlow library source code after (year, month, day). | tensorflow.compat.forward_compatible |
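A minimal gating sketch (the date and both branches are illustrative placeholders, not real ops):
import tensorflow as tf
if tf.compat.forward_compatible(2019, 6, 1):
    print('window expired: safe to emit the new op')
else:
    print('window still open: keep emitting the old op')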
tf.compat.path_to_str View source on GitHub Converts input which is a PathLike object to str type. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.compat.path_to_str
tf.compat.path_to_str(
path
)
Converts from any python constant representation of a PathLike object to a string. If the input is not a PathLike object, simply returns the input.
Args
path An object that can be converted to path representation.
Returns A str object.
Usage: in case a simplified str version of the path is needed from an os.PathLike object. Examples: $ tf.compat.path_to_str('C:\XYZ\tensorflow\./.././tensorflow')
'C:\XYZ\tensorflow\./.././tensorflow' # Windows OS
$ tf.compat.path_to_str(Path('C:\XYZ\tensorflow\./.././tensorflow'))
'C:\XYZ\tensorflow\..\tensorflow' # Windows OS
$ tf.compat.path_to_str(Path('./corpus'))
'corpus' # Linux OS
$ tf.compat.path_to_str('./.././Corpus')
'./.././Corpus' # Linux OS
$ tf.compat.path_to_str(Path('./.././Corpus'))
'../Corpus' # Linux OS
$ tf.compat.path_to_str(Path('./..////../'))
'../..' # Linux OS | tensorflow.compat.path_to_str |
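A minimal runnable sketch for path_to_str on a POSIX system (mirrors the Linux examples above):
import pathlib
import tensorflow as tf
print(tf.compat.path_to_str(pathlib.Path('./.././Corpus')))  # '../Corpus'
print(tf.compat.path_to_str('./.././Corpus'))  # non-PathLike input returned as-is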
Module: tf.compat.v1 View source on GitHub Bring in all of the public TensorFlow interface into this module. Modules app module: Generic entry point script. audio module: Public API for tf.audio namespace. autograph module: Conversion of plain Python into TensorFlow graph code. bitwise module: Operations for manipulating the binary representations of integers. compat module: Compatibility functions. config module: Public API for tf.config namespace. data module: tf.data.Dataset API for input pipelines. debugging module: Public API for tf.debugging namespace. distribute module: Library for running a computation across multiple devices. distributions module: Core module for TensorFlow distribution objects and helpers. dtypes module: Public API for tf.dtypes namespace. errors module: Exception types for TensorFlow errors. estimator module: Estimator: High level tools for working with models. experimental module: Public API for tf.experimental namespace. feature_column module: Public API for tf.feature_column namespace. flags module: Import router for absl.flags. See https://github.com/abseil/abseil-py gfile module: Import router for file_io. graph_util module: Helpers to manipulate a tensor graph in python. image module: Image ops. initializers module: Public API for tf.initializers namespace. io module: Public API for tf.io namespace. keras module: Implementation of the Keras API meant to be a high-level API for TensorFlow. layers module: Public API for tf.layers namespace. linalg module: Operations for linear algebra. lite module: Public API for tf.lite namespace. logging module: Logging and Summary Operations. lookup module: Public API for tf.lookup namespace. losses module: Loss operations for use in neural networks. manip module: Operators for manipulating tensors. math module: Math Operations. metrics module: Evaluation-related metrics. mixed_precision module: Public API for tf.mixed_precision namespace. mlir module: Public API for tf.mlir namespace. nest module: Public API for tf.nest namespace. nn module: Wrappers for primitive Neural Net (NN) Operations. profiler module: Public API for tf.profiler namespace. python_io module: Python functions for directly manipulating TFRecord-formatted files. quantization module: Public API for tf.quantization namespace. queue module: Public API for tf.queue namespace. ragged module: Ragged Tensors. random module: Public API for tf.random namespace. raw_ops module: Public API for tf.raw_ops namespace. resource_loader module: Resource management library. saved_model module: Public API for tf.saved_model namespace. sets module: Tensorflow set operations. signal module: Signal processing operations. sparse module: Sparse Tensor Representation. spectral module: Public API for tf.spectral namespace. strings module: Operations for working with string Tensors. summary module: Operations for writing summary data, for use in analysis and visualization. sysconfig module: System configuration library. test module: Testing. tpu module: Ops related to Tensor Processing Units. train module: Support for training models. types module: Public TensorFlow type definitions. user_ops module: Public API for tf.user_ops namespace. version module: Public API for tf.version namespace. xla module: Public API for tf.xla namespace. Classes class AggregationMethod: A class listing aggregation methods used to combine gradients. class AttrValue: A ProtocolMessage class ConditionalAccumulator: A conditional accumulator for aggregating gradients. 
class ConditionalAccumulatorBase: A conditional accumulator for aggregating gradients. class ConfigProto: A ProtocolMessage class CriticalSection: Critical section. class DType: Represents the type of the elements in a Tensor. class DeviceSpec: Represents a (possibly partial) specification for a TensorFlow device. class Dimension: Represents the value of one dimension in a TensorShape. class Event: A ProtocolMessage class FIFOQueue: A queue implementation that dequeues elements in first-in first-out order. class FixedLenFeature: Configuration for parsing a fixed-length input feature. class FixedLenSequenceFeature: Configuration for parsing a variable-length input feature into a Tensor. class FixedLengthRecordReader: A Reader that outputs fixed-length records from a file. class GPUOptions: A ProtocolMessage class GradientTape: Record operations for automatic differentiation. class Graph: A TensorFlow computation, represented as a dataflow graph. class GraphDef: A ProtocolMessage class GraphKeys: Standard names to use for graph collections. class GraphOptions: A ProtocolMessage class HistogramProto: A ProtocolMessage class IdentityReader: A Reader that outputs the queued work as both the key and value. class IndexedSlices: A sparse representation of a set of tensor slices at given indices. class IndexedSlicesSpec: Type specification for a tf.IndexedSlices. class InteractiveSession: A TensorFlow Session for use in interactive contexts, such as a shell. class LMDBReader: A Reader that outputs the records from a LMDB file. class LogMessage: A ProtocolMessage class MetaGraphDef: A ProtocolMessage class Module: Base neural network module class. class NameAttrList: A ProtocolMessage class NodeDef: A ProtocolMessage class OpError: A generic error that is raised when TensorFlow execution fails. class Operation: Represents a graph node that performs computation on tensors. class OptimizerOptions: A ProtocolMessage class OptionalSpec: Type specification for tf.experimental.Optional. class PaddingFIFOQueue: A FIFOQueue that supports batching variable-sized tensors by padding. class PriorityQueue: A queue implementation that dequeues elements in prioritized order. class QueueBase: Base class for queue implementations. class RaggedTensor: Represents a ragged tensor. class RaggedTensorSpec: Type specification for a tf.RaggedTensor. class RandomShuffleQueue: A queue implementation that dequeues elements in a random order. class ReaderBase: Base class for different Reader types, that produce a record every step. class RegisterGradient: A decorator for registering the gradient function for an op type. class RunMetadata: A ProtocolMessage class RunOptions: A ProtocolMessage class Session: A class for running TensorFlow operations. class SessionLog: A ProtocolMessage class SparseConditionalAccumulator: A conditional accumulator for aggregating sparse gradients. class SparseFeature: Configuration for parsing a sparse input feature from an Example. class SparseTensor: Represents a sparse tensor. class SparseTensorSpec: Type specification for a tf.sparse.SparseTensor. class SparseTensorValue: SparseTensorValue(indices, values, dense_shape) class Summary: A ProtocolMessage class SummaryMetadata: A ProtocolMessage class TFRecordReader: A Reader that outputs the records from a TFRecords file. class Tensor: A tensor is a multidimensional array of elements represented by a class TensorArray: Class wrapping dynamic-sized, per-time-step, write-once Tensor arrays. 
class TensorArraySpec: Type specification for a tf.TensorArray. class TensorInfo: A ProtocolMessage class TensorShape: Represents the shape of a Tensor. class TensorSpec: Describes a tf.Tensor. class TextLineReader: A Reader that outputs the lines of a file delimited by newlines. class TypeSpec: Specifies a TensorFlow value type. class UnconnectedGradients: Controls how gradient computation behaves when y does not depend on x. class VarLenFeature: Configuration for parsing a variable-length input feature. class Variable: See the Variables Guide. class VariableAggregation: Indicates how a distributed variable will be aggregated. class VariableScope: Variable scope object to carry defaults to provide to get_variable. class VariableSynchronization: Indicates when a distributed variable will be synced. class WholeFileReader: A Reader that outputs the entire contents of a file as a value. class constant_initializer: Initializer that generates tensors with constant values. class glorot_normal_initializer: The Glorot normal initializer, also called Xavier normal initializer. class glorot_uniform_initializer: The Glorot uniform initializer, also called Xavier uniform initializer. class name_scope: A context manager for use when defining a Python op. class ones_initializer: Initializer that generates tensors initialized to 1. class orthogonal_initializer: Initializer that generates an orthogonal matrix. class random_normal_initializer: Initializer that generates tensors with a normal distribution. class random_uniform_initializer: Initializer that generates tensors with a uniform distribution. class truncated_normal_initializer: Initializer that generates a truncated normal distribution. class uniform_unit_scaling_initializer: Initializer that generates tensors without scaling variance. class variable_scope: A context manager for defining ops that creates variables (layers). class variance_scaling_initializer: Initializer capable of adapting its scale to the shape of weights tensors. class zeros_initializer: Initializer that generates tensors initialized to 0. Functions Assert(...): Asserts that the given condition is true. NoGradient(...): Specifies that ops of type op_type is not differentiable. NotDifferentiable(...): Specifies that ops of type op_type is not differentiable. Print(...): Prints a list of tensors. (deprecated) abs(...): Computes the absolute value of a tensor. accumulate_n(...): Returns the element-wise sum of a list of tensors. acos(...): Computes acos of x element-wise. acosh(...): Computes inverse hyperbolic cosine of x element-wise. add(...): Returns x + y element-wise. add_check_numerics_ops(...): Connect a tf.debugging.check_numerics to every floating point tensor. add_n(...): Adds all input tensors element-wise. add_to_collection(...): Wrapper for Graph.add_to_collection() using the default graph. add_to_collections(...): Wrapper for Graph.add_to_collections() using the default graph. all_variables(...): Use tf.compat.v1.global_variables instead. (deprecated) angle(...): Returns the element-wise argument of a complex (or real) tensor. arg_max(...): Returns the index with the largest value across dimensions of a tensor. arg_min(...): Returns the index with the smallest value across dimensions of a tensor. argmax(...): Returns the index with the largest value across axes of a tensor. (deprecated arguments) argmin(...): Returns the index with the smallest value across axes of a tensor. 
(deprecated arguments) argsort(...): Returns the indices of a tensor that give its sorted order along an axis. as_dtype(...): Converts the given type_value to a DType. as_string(...): Converts each entry in the given tensor to strings. asin(...): Computes the trignometric inverse sine of x element-wise. asinh(...): Computes inverse hyperbolic sine of x element-wise. assert_equal(...): Assert the condition x == y holds element-wise. assert_greater(...): Assert the condition x > y holds element-wise. assert_greater_equal(...): Assert the condition x >= y holds element-wise. assert_integer(...): Assert that x is of integer dtype. assert_less(...): Assert the condition x < y holds element-wise. assert_less_equal(...): Assert the condition x <= y holds element-wise. assert_near(...): Assert the condition x and y are close element-wise. assert_negative(...): Assert the condition x < 0 holds element-wise. assert_non_negative(...): Assert the condition x >= 0 holds element-wise. assert_non_positive(...): Assert the condition x <= 0 holds element-wise. assert_none_equal(...): Assert the condition x != y holds element-wise. assert_positive(...): Assert the condition x > 0 holds element-wise. assert_proper_iterable(...): Static assert that values is a "proper" iterable. assert_rank(...): Assert x has rank equal to rank. assert_rank_at_least(...): Assert x has rank equal to rank or higher. assert_rank_in(...): Assert x has rank in ranks. assert_same_float_dtype(...): Validate and return float type based on tensors and dtype. assert_scalar(...): Asserts that the given tensor is a scalar (i.e. zero-dimensional). assert_type(...): Statically asserts that the given Tensor is of the specified type. assert_variables_initialized(...): Returns an Op to check if variables are initialized. assign(...): Update ref by assigning value to it. assign_add(...): Update ref by adding value to it. assign_sub(...): Update ref by subtracting value from it. atan(...): Computes the trignometric inverse tangent of x element-wise. atan2(...): Computes arctangent of y/x element-wise, respecting signs of the arguments. atanh(...): Computes inverse hyperbolic tangent of x element-wise. batch_gather(...): Gather slices from params according to indices with leading batch dims. (deprecated) batch_scatter_update(...): Generalization of tf.compat.v1.scatter_update to axis different than 0. (deprecated) batch_to_space(...): BatchToSpace for 4-D tensors of type T. batch_to_space_nd(...): BatchToSpace for N-D tensors of type T. betainc(...): Compute the regularized incomplete beta integral \(I_x(a, b)\). bincount(...): Counts the number of occurrences of each value in an integer array. bitcast(...): Bitcasts a tensor from one type to another without copying data. boolean_mask(...): Apply boolean mask to tensor. broadcast_dynamic_shape(...): Computes the shape of a broadcast given symbolic shapes. broadcast_static_shape(...): Computes the shape of a broadcast given known shapes. broadcast_to(...): Broadcast an array for a compatible shape. case(...): Create a case operation. cast(...): Casts a tensor to a new type. ceil(...): Return the ceiling of the input, element-wise. check_numerics(...): Checks a tensor for NaN and Inf values. cholesky(...): Computes the Cholesky decomposition of one or more square matrices. cholesky_solve(...): Solves systems of linear eqns A X = RHS, given Cholesky factorizations. clip_by_average_norm(...): Clips tensor values to a maximum average L2-norm. 
(deprecated) clip_by_global_norm(...): Clips values of multiple tensors by the ratio of the sum of their norms. clip_by_norm(...): Clips tensor values to a maximum L2-norm. clip_by_value(...): Clips tensor values to a specified min and max. colocate_with(...): DEPRECATED FUNCTION complex(...): Converts two real numbers to a complex number. concat(...): Concatenates tensors along one dimension. cond(...): Return true_fn() if the predicate pred is true else false_fn(). (deprecated arguments) confusion_matrix(...): Computes the confusion matrix from predictions and labels. conj(...): Returns the complex conjugate of a complex number. constant(...): Creates a constant tensor. container(...): Wrapper for Graph.container() using the default graph. control_dependencies(...): Wrapper for Graph.control_dependencies() using the default graph. control_flow_v2_enabled(...): Returns True if v2 control flow is enabled. convert_to_tensor(...): Converts the given value to a Tensor. convert_to_tensor_or_indexed_slices(...): Converts the given object to a Tensor or an IndexedSlices. convert_to_tensor_or_sparse_tensor(...): Converts value to a SparseTensor or Tensor. cos(...): Computes cos of x element-wise. cosh(...): Computes hyperbolic cosine of x element-wise. count_nonzero(...): Computes number of nonzero elements across dimensions of a tensor. (deprecated arguments) (deprecated arguments) count_up_to(...): Increments 'ref' until it reaches 'limit'. (deprecated) create_partitioned_variables(...): Create a list of partitioned variables according to the given slicing. (deprecated) cross(...): Compute the pairwise cross product. cumprod(...): Compute the cumulative product of the tensor x along axis. cumsum(...): Compute the cumulative sum of the tensor x along axis. custom_gradient(...): Decorator to define a function with a custom gradient. decode_base64(...): Decode web-safe base64-encoded strings. decode_compressed(...): Decompress strings. decode_csv(...): Convert CSV records to tensors. Each column maps to one tensor. decode_json_example(...): Convert JSON-encoded Example records to binary protocol buffer strings. decode_raw(...): Convert raw byte strings into tensors. (deprecated arguments) delete_session_tensor(...): Delete the tensor for the given tensor handle. depth_to_space(...): DepthToSpace for tensors of type T. dequantize(...): Dequantize the 'input' tensor into a float or bfloat16 Tensor. deserialize_many_sparse(...): Deserialize and concatenate SparseTensors from a serialized minibatch. device(...): Wrapper for Graph.device() using the default graph. diag(...): Returns a diagonal tensor with a given diagonal values. diag_part(...): Returns the diagonal part of the tensor. digamma(...): Computes Psi, the derivative of Lgamma (the log of the absolute value of dimension_at_index(...): Compatibility utility required to allow for both V1 and V2 behavior in TF. dimension_value(...): Compatibility utility required to allow for both V1 and V2 behavior in TF. disable_control_flow_v2(...): Opts out of control flow v2. disable_eager_execution(...): Disables eager execution. disable_resource_variables(...): Opts out of resource variables. (deprecated) disable_tensor_equality(...): Compare Tensors by their id and be hashable. disable_v2_behavior(...): Disables TensorFlow 2.x behaviors. disable_v2_tensorshape(...): Disables the V2 TensorShape behavior and reverts to V1 behavior. div(...): Divides x / y elementwise (using Python 2 division operator semantics). 
(deprecated) div_no_nan(...): Computes a safe divide which returns 0 if the y is zero. divide(...): Computes Python style division of x by y. dynamic_partition(...): Partitions data into num_partitions tensors using indices from partitions. dynamic_stitch(...): Interleave the values from the data tensors into a single tensor. edit_distance(...): Computes the Levenshtein distance between sequences. einsum(...): Tensor contraction over specified indices and outer product. enable_control_flow_v2(...): Use control flow v2. enable_eager_execution(...): Enables eager execution for the lifetime of this program. enable_resource_variables(...): Creates resource variables by default. enable_tensor_equality(...): Compare Tensors with element-wise comparison and thus be unhashable. enable_v2_behavior(...): Enables TensorFlow 2.x behaviors. enable_v2_tensorshape(...): In TensorFlow 2.0, iterating over a TensorShape instance returns values. encode_base64(...): Encode strings into web-safe base64 format. ensure_shape(...): Updates the shape of a tensor and checks at runtime that the shape holds. equal(...): Returns the truth value of (x == y) element-wise. erf(...): Computes the Gauss error function of x element-wise. erfc(...): Computes the complementary error function of x element-wise. executing_eagerly(...): Checks whether the current thread has eager execution enabled. executing_eagerly_outside_functions(...): Returns True if executing eagerly, even if inside a graph function. exp(...): Computes exponential of x element-wise. \(y = e^x\). expand_dims(...): Returns a tensor with a length 1 axis inserted at index axis. (deprecated arguments) expm1(...): Computes exp(x) - 1 element-wise. extract_image_patches(...): Extract patches from images and put them in the "depth" output dimension. extract_volume_patches(...): Extract patches from input and put them in the "depth" output dimension. 3D extension of extract_image_patches. eye(...): Construct an identity matrix, or a batch of matrices. fake_quant_with_min_max_args(...): Fake-quantize the 'inputs' tensor, type float to 'outputs' tensor of same type. fake_quant_with_min_max_args_gradient(...): Compute gradients for a FakeQuantWithMinMaxArgs operation. fake_quant_with_min_max_vars(...): Fake-quantize the 'inputs' tensor of type float via global float scalars fake_quant_with_min_max_vars_gradient(...): Compute gradients for a FakeQuantWithMinMaxVars operation. fake_quant_with_min_max_vars_per_channel(...): Fake-quantize the 'inputs' tensor of type float via per-channel floats fake_quant_with_min_max_vars_per_channel_gradient(...): Compute gradients for a FakeQuantWithMinMaxVarsPerChannel operation. fft(...): Fast Fourier transform. fft2d(...): 2D fast Fourier transform. fft3d(...): 3D fast Fourier transform. fill(...): Creates a tensor filled with a scalar value. fingerprint(...): Generates fingerprint values. fixed_size_partitioner(...): Partitioner to specify a fixed number of shards along given axis. floor(...): Returns element-wise largest integer not greater than x. floor_div(...): Returns x // y element-wise. floordiv(...): Divides x / y elementwise, rounding toward the most negative integer. floormod(...): Returns element-wise remainder of division. When x < 0 xor y < 0 is foldl(...): foldl on the list of tensors unpacked from elems on dimension 0. foldr(...): foldr on the list of tensors unpacked from elems on dimension 0. function(...): Compiles a function into a callable TensorFlow graph. 
gather(...): Gather slices from params axis axis according to indices. gather_nd(...): Gather slices from params into a Tensor with shape specified by indices. get_collection(...): Wrapper for Graph.get_collection() using the default graph. get_collection_ref(...): Wrapper for Graph.get_collection_ref() using the default graph. get_default_graph(...): Returns the default graph for the current thread. get_default_session(...): Returns the default session for the current thread. get_local_variable(...): Gets an existing local variable or creates a new one. get_logger(...): Return TF logger instance. get_seed(...): Returns the local seeds an operation should use given an op-specific seed. get_session_handle(...): Return the handle of data. get_session_tensor(...): Get the tensor of type dtype by feeding a tensor handle. get_static_value(...): Returns the constant value of the given tensor, if efficiently calculable. get_variable(...): Gets an existing variable with these parameters or create a new one. get_variable_scope(...): Returns the current variable scope. global_norm(...): Computes the global norm of multiple tensors. global_variables(...): Returns global variables. global_variables_initializer(...): Returns an Op that initializes global variables. grad_pass_through(...): Creates a grad-pass-through op with the forward behavior provided in f. gradients(...): Constructs symbolic derivatives of sum of ys w.r.t. x in xs. greater(...): Returns the truth value of (x > y) element-wise. greater_equal(...): Returns the truth value of (x >= y) element-wise. group(...): Create an op that groups multiple operations. guarantee_const(...): Gives a guarantee to the TF runtime that the input tensor is a constant. hessians(...): Constructs the Hessian of sum of ys with respect to x in xs. histogram_fixed_width(...): Return histogram of values. histogram_fixed_width_bins(...): Bins the given values for use in a histogram. identity(...): Return a Tensor with the same shape and contents as input. identity_n(...): Returns a list of tensors with the same shapes and contents as the input ifft(...): Inverse fast Fourier transform. ifft2d(...): Inverse 2D fast Fourier transform. ifft3d(...): Inverse 3D fast Fourier transform. igamma(...): Compute the lower regularized incomplete Gamma function P(a, x). igammac(...): Compute the upper regularized incomplete Gamma function Q(a, x). imag(...): Returns the imaginary part of a complex (or real) tensor. import_graph_def(...): Imports the graph from graph_def into the current default Graph. (deprecated arguments) init_scope(...): A context manager that lifts ops out of control-flow scopes and function-building graphs. initialize_all_tables(...): Returns an Op that initializes all tables of the default graph. (deprecated) initialize_all_variables(...): See tf.compat.v1.global_variables_initializer. (deprecated) initialize_local_variables(...): See tf.compat.v1.local_variables_initializer. (deprecated) initialize_variables(...): See tf.compat.v1.variables_initializer. (deprecated) invert_permutation(...): Computes the inverse permutation of a tensor. is_finite(...): Returns which elements of x are finite. is_inf(...): Returns which elements of x are Inf. is_nan(...): Returns which elements of x are NaN. is_non_decreasing(...): Returns True if x is non-decreasing. is_numeric_tensor(...): Returns True if the elements of tensor are numbers. is_strictly_increasing(...): Returns True if x is strictly increasing. 
is_tensor(...): Checks whether x is a TF-native type that can be passed to many TF ops. is_variable_initialized(...): Tests if a variable has been initialized. lbeta(...): Computes \(ln(|Beta(x)|)\), reducing along the last dimension. less(...): Returns the truth value of (x < y) element-wise. less_equal(...): Returns the truth value of (x <= y) element-wise. lgamma(...): Computes the log of the absolute value of Gamma(x) element-wise. lin_space(...): Generates evenly-spaced values in an interval along a given axis. linspace(...): Generates evenly-spaced values in an interval along a given axis. load_file_system_library(...): Loads a TensorFlow plugin, containing file system implementation. (deprecated) load_library(...): Loads a TensorFlow plugin. load_op_library(...): Loads a TensorFlow plugin, containing custom ops and kernels. local_variables(...): Returns local variables. local_variables_initializer(...): Returns an Op that initializes all local variables. log(...): Computes natural logarithm of x element-wise. log1p(...): Computes natural logarithm of (1 + x) element-wise. log_sigmoid(...): Computes log sigmoid of x element-wise. logical_and(...): Logical AND function. logical_not(...): Returns the truth value of NOT x element-wise. logical_or(...): Returns the truth value of x OR y element-wise. logical_xor(...): Logical XOR function. make_ndarray(...): Create a numpy ndarray from a tensor. make_template(...): Given an arbitrary function, wrap it so that it does variable sharing. make_tensor_proto(...): Create a TensorProto. map_fn(...): Transforms elems by applying fn to each element unstacked on axis 0. (deprecated arguments) matching_files(...): Returns the set of files matching one or more glob patterns. matmul(...): Multiplies matrix a by matrix b, producing a * b. matrix_band_part(...): Copy a tensor setting everything outside a central band in each innermost matrix to zero. matrix_determinant(...): Computes the determinant of one or more square matrices. matrix_diag(...): Returns a batched diagonal tensor with given batched diagonal values. matrix_diag_part(...): Returns the batched diagonal part of a batched tensor. matrix_inverse(...): Computes the inverse of one or more square invertible matrices or their adjoints (conjugate transposes). matrix_set_diag(...): Returns a batched matrix tensor with new batched diagonal values. matrix_solve(...): Solves systems of linear equations. matrix_solve_ls(...): Solves one or more linear least-squares problems. matrix_square_root(...): Computes the matrix square root of one or more square matrices: matrix_transpose(...): Transposes last two dimensions of tensor a. matrix_triangular_solve(...): Solve systems of linear equations with upper or lower triangular matrices. maximum(...): Returns the max of x and y (i.e. x > y ? x : y) element-wise. meshgrid(...): Broadcasts parameters for evaluation on an N-D grid. min_max_variable_partitioner(...): Partitioner to allocate minimum size per slice. minimum(...): Returns the min of x and y (i.e. x < y ? x : y) element-wise. mod(...): Returns element-wise remainder of division. When x < 0 xor y < 0 is model_variables(...): Returns all variables in the MODEL_VARIABLES collection. moving_average_variables(...): Returns all variables that maintain their moving averages. multinomial(...): Draws samples from a multinomial distribution. (deprecated) multiply(...): Returns an element-wise x * y. negative(...): Computes numerical negative value element-wise. 
no_gradient(...): Specifies that ops of type op_type is not differentiable. no_op(...): Does nothing. Only useful as a placeholder for control edges. no_regularizer(...): Use this function to prevent regularization of variables. nondifferentiable_batch_function(...): Batches the computation done by the decorated function. norm(...): Computes the norm of vectors, matrices, and tensors. (deprecated arguments) not_equal(...): Returns the truth value of (x != y) element-wise. numpy_function(...): Wraps a python function and uses it as a TensorFlow op. one_hot(...): Returns a one-hot tensor. ones(...): Creates a tensor with all elements set to one (1). ones_like(...): Creates a tensor with all elements set to 1. op_scope(...): DEPRECATED. Same as name_scope above, just different argument order. pad(...): Pads a tensor. parallel_stack(...): Stacks a list of rank-R tensors into one rank-(R+1) tensor in parallel. parse_example(...): Parses Example protos into a dict of tensors. parse_single_example(...): Parses a single Example proto. parse_single_sequence_example(...): Parses a single SequenceExample proto. parse_tensor(...): Transforms a serialized tensorflow.TensorProto proto into a Tensor. placeholder(...): Inserts a placeholder for a tensor that will be always fed. placeholder_with_default(...): A placeholder op that passes through input when its output is not fed. polygamma(...): Compute the polygamma function \(\psi^{(n)}(x)\). pow(...): Computes the power of one value to another. print(...): Print the specified inputs. py_func(...): Wraps a python function and uses it as a TensorFlow op. py_function(...): Wraps a python function into a TensorFlow op that executes it eagerly. qr(...): Computes the QR decompositions of one or more matrices. quantize(...): Quantize the 'input' tensor of type float to 'output' tensor of type 'T'. quantize_and_dequantize_v4(...): Returns the gradient of QuantizeAndDequantizeV4. quantize_v2(...): Please use tf.quantization.quantize instead. quantized_concat(...): Concatenates quantized tensors along one dimension. random_crop(...): Randomly crops a tensor to a given size. random_gamma(...): Draws shape samples from each of the given Gamma distribution(s). random_normal(...): Outputs random values from a normal distribution. random_poisson(...): Draws shape samples from each of the given Poisson distribution(s). random_shuffle(...): Randomly shuffles a tensor along its first dimension. random_uniform(...): Outputs random values from a uniform distribution. range(...): Creates a sequence of numbers. rank(...): Returns the rank of a tensor. read_file(...): Reads and outputs the entire contents of the input filename. real(...): Returns the real part of a complex (or real) tensor. realdiv(...): Returns x / y element-wise for real types. reciprocal(...): Computes the reciprocal of x element-wise. recompute_grad(...): An eager-compatible version of recompute_grad. reduce_all(...): Computes the "logical and" of elements across dimensions of a tensor. (deprecated arguments) reduce_any(...): Computes the "logical or" of elements across dimensions of a tensor. (deprecated arguments) reduce_join(...): Joins all strings into a single string, or joins along an axis. reduce_logsumexp(...): Computes log(sum(exp(elements across dimensions of a tensor))). (deprecated arguments) reduce_max(...): Computes the maximum of elements across dimensions of a tensor. (deprecated arguments) reduce_mean(...): Computes the mean of elements across dimensions of a tensor. 
reduce_min(...): Computes the minimum of elements across dimensions of a tensor. (deprecated arguments) reduce_prod(...): Computes the product of elements across dimensions of a tensor. (deprecated arguments) reduce_sum(...): Computes the sum of elements across dimensions of a tensor. (deprecated arguments) regex_replace(...): Replace elements of input matching regex pattern with rewrite. register_tensor_conversion_function(...): Registers a function for converting objects of base_type to Tensor. repeat(...): Repeat elements of input. report_uninitialized_variables(...): Adds ops to list the names of uninitialized variables. required_space_to_batch_paddings(...): Calculate padding required to make block_shape divide input_shape. reset_default_graph(...): Clears the default graph stack and resets the global default graph. reshape(...): Reshapes a tensor. resource_variables_enabled(...): Returns True if resource variables are enabled. reverse(...): Reverses specific dimensions of a tensor. reverse_sequence(...): Reverses variable length slices. (deprecated arguments) (deprecated arguments) reverse_v2(...): Reverses specific dimensions of a tensor. rint(...): Returns element-wise integer closest to x. roll(...): Rolls the elements of a tensor along an axis. round(...): Rounds the values of a tensor to the nearest integer, element-wise. rsqrt(...): Computes reciprocal of square root of x element-wise. saturate_cast(...): Performs a safe saturating cast of value to dtype. scalar_mul(...): Multiplies a scalar times a Tensor or IndexedSlices object. scan(...): scan on the list of tensors unpacked from elems on dimension 0. scatter_add(...): Adds sparse updates to the variable referenced by resource. scatter_div(...): Divides a variable reference by sparse updates. scatter_max(...): Reduces sparse updates into a variable reference using the max operation. scatter_min(...): Reduces sparse updates into a variable reference using the min operation. scatter_mul(...): Multiplies sparse updates into a variable reference. scatter_nd(...): Scatter updates into a new tensor according to indices. scatter_nd_add(...): Applies sparse addition to individual values or slices in a Variable. scatter_nd_sub(...): Applies sparse subtraction to individual values or slices in a Variable. scatter_nd_update(...): Applies sparse updates to individual values or slices in a Variable. scatter_sub(...): Subtracts sparse updates to a variable reference. scatter_update(...): Applies sparse updates to a variable reference. searchsorted(...): Searches input tensor for values on the innermost dimension. segment_max(...): Computes the maximum along segments of a tensor. segment_mean(...): Computes the mean along segments of a tensor. segment_min(...): Computes the minimum along segments of a tensor. segment_prod(...): Computes the product along segments of a tensor. segment_sum(...): Computes the sum along segments of a tensor. self_adjoint_eig(...): Computes the eigen decomposition of a batch of self-adjoint matrices. self_adjoint_eigvals(...): Computes the eigenvalues of one or more self-adjoint matrices. sequence_mask(...): Returns a mask tensor representing the first N positions of each cell. serialize_many_sparse(...): Serialize N-minibatch SparseTensor into an [N, 3] Tensor. serialize_sparse(...): Serialize a SparseTensor into a 3-vector (1-D Tensor) object. serialize_tensor(...): Transforms a Tensor into a serialized TensorProto proto. set_random_seed(...): Sets the graph-level random seed for the default graph. 
setdiff1d(...): Computes the difference between two lists of numbers or strings. shape(...): Returns the shape of a tensor. shape_n(...): Returns shape of tensors. sigmoid(...): Computes sigmoid of x element-wise. sign(...): Returns an element-wise indication of the sign of a number. sin(...): Computes sine of x element-wise. sinh(...): Computes hyperbolic sine of x element-wise. size(...): Returns the size of a tensor. slice(...): Extracts a slice from a tensor. sort(...): Sorts a tensor. space_to_batch(...): SpaceToBatch for 4-D tensors of type T. space_to_batch_nd(...): SpaceToBatch for N-D tensors of type T. space_to_depth(...): SpaceToDepth for tensors of type T. sparse_add(...): Adds two tensors, at least one of each is a SparseTensor. (deprecated arguments) sparse_concat(...): Concatenates a list of SparseTensor along the specified dimension. (deprecated arguments) sparse_fill_empty_rows(...): Fills empty rows in the input 2-D SparseTensor with a default value. sparse_mask(...): Masks elements of IndexedSlices. sparse_matmul(...): Multiply matrix "a" by matrix "b". sparse_maximum(...): Returns the element-wise max of two SparseTensors. sparse_merge(...): Combines a batch of feature ids and values into a single SparseTensor. (deprecated) sparse_minimum(...): Returns the element-wise min of two SparseTensors. sparse_placeholder(...): Inserts a placeholder for a sparse tensor that will be always fed. sparse_reduce_max(...): Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments) sparse_reduce_max_sparse(...): Computes the max of elements across dimensions of a SparseTensor. (deprecated arguments) sparse_reduce_sum(...): Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments) (deprecated arguments) sparse_reduce_sum_sparse(...): Computes the sum of elements across dimensions of a SparseTensor. (deprecated arguments) sparse_reorder(...): Reorders a SparseTensor into the canonical, row-major ordering. sparse_reset_shape(...): Resets the shape of a SparseTensor with indices and values unchanged. sparse_reshape(...): Reshapes a SparseTensor to represent values in a new dense shape. sparse_retain(...): Retains specified non-empty values within a SparseTensor. sparse_segment_mean(...): Computes the mean along sparse segments of a tensor. sparse_segment_sqrt_n(...): Computes the sum along sparse segments of a tensor divided by the sqrt(N). sparse_segment_sum(...): Computes the sum along sparse segments of a tensor. sparse_slice(...): Slice a SparseTensor based on the start and `size. sparse_softmax(...): Applies softmax to a batched N-D SparseTensor. sparse_split(...): Split a SparseTensor into num_split tensors along axis. (deprecated arguments) sparse_tensor_dense_matmul(...): Multiply SparseTensor (or dense Matrix) (of rank 2) "A" by dense matrix sparse_tensor_to_dense(...): Converts a SparseTensor into a dense tensor. sparse_to_dense(...): Converts a sparse representation into a dense tensor. (deprecated) sparse_to_indicator(...): Converts a SparseTensor of ids into a dense bool indicator tensor. sparse_transpose(...): Transposes a SparseTensor split(...): Splits a tensor value into a list of sub tensors. sqrt(...): Computes element-wise square root of the input tensor. square(...): Computes square of x element-wise. squared_difference(...): Returns conj(x - y)(x - y) element-wise. squeeze(...): Removes dimensions of size 1 from the shape of a tensor. 
(deprecated arguments) stack(...): Stacks a list of rank-R tensors into one rank-(R+1) tensor. stop_gradient(...): Stops gradient computation. strided_slice(...): Extracts a strided slice of a tensor (generalized Python array indexing). string_join(...): Perform element-wise concatenation of a list of string tensors. string_split(...): Split elements of source based on delimiter. (deprecated arguments) string_strip(...): Strip leading and trailing whitespaces from the Tensor. string_to_hash_bucket(...): Converts each string in the input Tensor to its hash mod by a number of buckets. string_to_hash_bucket_fast(...): Converts each string in the input Tensor to its hash mod by a number of buckets. string_to_hash_bucket_strong(...): Converts each string in the input Tensor to its hash mod by a number of buckets. string_to_number(...): Converts each string in the input Tensor to the specified numeric type. substr(...): Return substrings from Tensor of strings. subtract(...): Returns x - y element-wise. svd(...): Computes the singular value decompositions of one or more matrices. switch_case(...): Create a switch/case operation, i.e. an integer-indexed conditional. tables_initializer(...): Returns an Op that initializes all tables of the default graph. tan(...): Computes tan of x element-wise. tanh(...): Computes hyperbolic tangent of x element-wise. tensor_scatter_add(...): Adds sparse updates to an existing tensor according to indices. tensor_scatter_nd_add(...): Adds sparse updates to an existing tensor according to indices. tensor_scatter_nd_max(...) tensor_scatter_nd_min(...) tensor_scatter_nd_sub(...): Subtracts sparse updates from an existing tensor according to indices. tensor_scatter_nd_update(...): "Scatter updates into an existing tensor according to indices. tensor_scatter_sub(...): Subtracts sparse updates from an existing tensor according to indices. tensor_scatter_update(...): "Scatter updates into an existing tensor according to indices. tensordot(...): Tensor contraction of a and b along specified axes and outer product. tile(...): Constructs a tensor by tiling a given tensor. timestamp(...): Provides the time since epoch in seconds. to_bfloat16(...): Casts a tensor to type bfloat16. (deprecated) to_complex128(...): Casts a tensor to type complex128. (deprecated) to_complex64(...): Casts a tensor to type complex64. (deprecated) to_double(...): Casts a tensor to type float64. (deprecated) to_float(...): Casts a tensor to type float32. (deprecated) to_int32(...): Casts a tensor to type int32. (deprecated) to_int64(...): Casts a tensor to type int64. (deprecated) trace(...): Compute the trace of a tensor x. trainable_variables(...): Returns all variables created with trainable=True. transpose(...): Transposes a. truediv(...): Divides x / y elementwise (using Python 3 division operator semantics). truncated_normal(...): Outputs random values from a truncated normal distribution. truncatediv(...): Returns x / y element-wise for integer types. truncatemod(...): Returns element-wise remainder of division. This emulates C semantics in that tuple(...): Group tensors together. type_spec_from_value(...): Returns a tf.TypeSpec that represents the given value. unique(...): Finds unique elements in a 1-D tensor. unique_with_counts(...): Finds unique elements in a 1-D tensor. unravel_index(...): Converts an array of flat indices into a tuple of coordinate arrays. unsorted_segment_max(...): Computes the maximum along segments of a tensor. 
unsorted_segment_mean(...): Computes the mean along segments of a tensor. unsorted_segment_min(...): Computes the minimum along segments of a tensor. unsorted_segment_prod(...): Computes the product along segments of a tensor. unsorted_segment_sqrt_n(...): Computes the sum along segments of a tensor divided by the sqrt(N). unsorted_segment_sum(...): Computes the sum along segments of a tensor. unstack(...): Unpacks the given dimension of a rank-R tensor into rank-(R-1) tensors. variable_axis_size_partitioner(...): Get a partitioner for VariableScope to keep shards below max_shard_bytes. variable_creator_scope(...): Scope which defines a variable creation function to be used by variable(). variable_op_scope(...): Deprecated: context manager for defining an op that creates variables. variables_initializer(...): Returns an Op that initializes a list of variables. vectorized_map(...): Parallel map on the list of tensors unpacked from elems on dimension 0. verify_tensor_all_finite(...): Assert that the tensor does not contain any NaN's or Inf's. where(...): Return the elements, either from x or y, depending on the condition. where_v2(...): Return the elements where condition is True (multiplexing x and y). while_loop(...): Repeat body while the condition cond is true. wrap_function(...): Wraps the TF 1.x function fn into a graph function. write_file(...): Writes contents to the file at input filename. Creates file and recursively zeros(...): Creates a tensor with all elements set to zero. zeros_like(...): Creates a tensor with all elements set to zero. zeta(...): Compute the Hurwitz zeta function \(\zeta(x, q)\).
Other Members
AUTO_REUSE
COMPILER_VERSION '7.3.1 20180303'
CXX11_ABI_FLAG 0
GIT_VERSION 'v2.4.0-rc4-71-g582c8d236cb'
GRAPH_DEF_VERSION 561
GRAPH_DEF_VERSION_MIN_CONSUMER 0
GRAPH_DEF_VERSION_MIN_PRODUCER 0
MONOLITHIC_BUILD 0
QUANTIZED_DTYPES
VERSION '2.4.0'
version '2.4.0'
bfloat16 tf.dtypes.DType
bool tf.dtypes.DType
complex128 tf.dtypes.DType
complex64 tf.dtypes.DType
double tf.dtypes.DType
float16 tf.dtypes.DType
float32 tf.dtypes.DType
float64 tf.dtypes.DType
half tf.dtypes.DType
int16 tf.dtypes.DType
int32 tf.dtypes.DType
int64 tf.dtypes.DType
int8 tf.dtypes.DType
newaxis None
qint16 tf.dtypes.DType
qint32 tf.dtypes.DType
qint8 tf.dtypes.DType
quint16 tf.dtypes.DType
quint8 tf.dtypes.DType
resource tf.dtypes.DType
string tf.dtypes.DType
uint16 tf.dtypes.DType
uint32 tf.dtypes.DType
uint64 tf.dtypes.DType
uint8 tf.dtypes.DType
variant tf.dtypes.DType | tensorflow.compat.v1 |
tf.compat.v1.add_check_numerics_ops Connect a tf.debugging.check_numerics to every floating point tensor.
tf.compat.v1.add_check_numerics_ops()
check_numerics operations themselves are added for each half, float, or double tensor in the current default graph. For all ops in the graph, the check_numerics op for all of its (half, float, or double) inputs is guaranteed to run before the check_numerics op on any of its outputs.
Note: This API is not compatible with the use of tf.cond or tf.while_loop, and will raise a ValueError if you attempt to call it in such a graph.
Returns A group op depending on all check_numerics ops added.
Raises
ValueError If the graph contains any numeric operations in a control flow structure.
RuntimeError If called with eager execution enabled. Eager Compatibility Not compatible with eager execution. To check for Infs and NaNs under eager execution, call tf.debugging.enable_check_numerics() once before executing the checked operations. | tensorflow.compat.v1.add_check_numerics_ops |
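A minimal graph-mode sketch for add_check_numerics_ops (assumes TF 2.x with eager disabled; the values are arbitrary):
import tensorflow as tf
tf.compat.v1.disable_eager_execution()
x = tf.compat.v1.placeholder(tf.float32, shape=[2])
y = tf.math.log(x)  # log of a non-positive value would produce -inf or nan
check_op = tf.compat.v1.add_check_numerics_ops()
with tf.compat.v1.Session() as sess:
    # Passes for strictly positive inputs; feeding 0. would trigger the check.
    print(sess.run([y, check_op], feed_dict={x: [1.0, 2.0]}))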
tf.compat.v1.add_to_collection Wrapper for Graph.add_to_collection() using the default graph.
tf.compat.v1.add_to_collection(
name, value
)
See tf.Graph.add_to_collection for more details.
Args
name The key for the collection. For example, the GraphKeys class contains many standard names for collections.
value The value to add to the collection. Eager Compatibility Collections are only supported in eager when variables are created inside an EagerVariableStore (e.g. as part of a layer or template). | tensorflow.compat.v1.add_to_collection |
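A minimal graph-mode sketch for add_to_collection (the collection name is arbitrary):
import tensorflow as tf
g = tf.Graph()
with g.as_default():
    c = tf.constant(1.0, name='c')
    tf.compat.v1.add_to_collection('my_things', c)
    print(tf.compat.v1.get_collection('my_things'))  # [<tf.Tensor 'c:0' ...>]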
tf.compat.v1.add_to_collections Wrapper for Graph.add_to_collections() using the default graph.
tf.compat.v1.add_to_collections(
names, value
)
See tf.Graph.add_to_collections for more details.
Args
names The key for the collections. The GraphKeys class contains many standard names for collections.
value The value to add to the collections. Eager Compatibility Collections are only supported in eager when variables are created inside an EagerVariableStore (e.g. as part of a layer or template). | tensorflow.compat.v1.add_to_collections |
tf.compat.v1.all_variables Use tf.compat.v1.global_variables instead. (deprecated)
tf.compat.v1.all_variables()
Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-03-02. Instructions for updating: Please use tf.global_variables instead. | tensorflow.compat.v1.all_variables |
Module: tf.compat.v1.app Generic entry point script. Modules flags module: Import router for absl.flags. See https://github.com/abseil/abseil-py Functions run(...): Runs the program with an optional 'main' function and 'argv' list. | tensorflow.compat.v1.app |
tf.compat.v1.app.run Runs the program with an optional 'main' function and 'argv' list.
tf.compat.v1.app.run(
main=None, argv=None
) | tensorflow.compat.v1.app.run |
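A minimal script sketch (the main function here is a hypothetical placeholder): app.run parses known flags, calls main with the remaining argv, and then exits via sys.exit.
import sys
import tensorflow as tf

def main(argv):
    print('argv after flag parsing:', argv)

if __name__ == '__main__':
    tf.compat.v1.app.run(main=main, argv=sys.argv)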
tf.compat.v1.argmax Returns the index with the largest value across axes of a tensor. (deprecated arguments) View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.argmax
tf.compat.v1.argmax(
input, axis=None, name=None, dimension=None, output_type=tf.dtypes.int64
)
Warning: SOME ARGUMENTS ARE DEPRECATED: (dimension). They will be removed in a future version. Instructions for updating: Use the axis argument instead. Note that in case of ties the identity of the return value is not guaranteed. Usage: import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input = a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64, bool.
axis A Tensor. Must be one of the following types: int32, int64. int32 or int64, must be in the range [-rank(input), rank(input)). Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
output_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64.
name A name for the operation (optional).
Returns A Tensor of type output_type. | tensorflow.compat.v1.argmax |
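A 2-D sketch showing the effect of axis (assumes eager execution, the TF 2.x default; values are arbitrary):
import tensorflow as tf
m = tf.constant([[1.0, 5.0], [7.0, 2.0]])
print(tf.compat.v1.argmax(m, axis=0).numpy())  # [1 0], per-column argmax
print(tf.compat.v1.argmax(m, axis=1).numpy())  # [1 0], per-row argmax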
tf.compat.v1.argmin Returns the index with the smallest value across axes of a tensor. (deprecated arguments) View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.argmin
tf.compat.v1.argmin(
input, axis=None, name=None, dimension=None, output_type=tf.dtypes.int64
)
Warning: SOME ARGUMENTS ARE DEPRECATED: (dimension). They will be removed in a future version. Instructions for updating: Use the axis argument instead. Note that in case of ties the identity of the return value is not guaranteed. Usage: import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input = a)
c = tf.keras.backend.eval(b)
# c = 0
# here a[0] = 1 which is the smallest element of a across axis 0
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64, bool.
axis A Tensor. Must be one of the following types: int32, int64. int32 or int64, must be in the range [-rank(input), rank(input)). Describes which axis of the input Tensor to reduce across. For vectors, use axis = 0.
output_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64.
name A name for the operation (optional).
Returns A Tensor of type output_type. | tensorflow.compat.v1.argmin |
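The same 2-D sketch for argmin (assumes eager execution; values are arbitrary):
import tensorflow as tf
m = tf.constant([[1.0, 5.0], [7.0, 2.0]])
print(tf.compat.v1.argmin(m, axis=0).numpy())  # [0 1], per-column argmin
print(tf.compat.v1.argmin(m, axis=1).numpy())  # [0 1], per-row argmin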
tf.compat.v1.arg_max Returns the index with the largest value across dimensions of a tensor.
tf.compat.v1.arg_max(
input, dimension, output_type=tf.dtypes.int64, name=None
)
Note that in case of ties the identity of the return value is not guaranteed. Usage: import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmax(input = a)
c = tf.keras.backend.eval(b)
# c = 4
# here a[4] = 166.32 which is the largest element of a across axis 0
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64, bool.
dimension A Tensor. Must be one of the following types: int32, int64. int32 or int64, must be in the range [-rank(input), rank(input)). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
output_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64.
name A name for the operation (optional).
Returns A Tensor of type output_type. | tensorflow.compat.v1.arg_max |
tf.compat.v1.arg_min Returns the index with the smallest value across dimensions of a tensor.
tf.compat.v1.arg_min(
input, dimension, output_type=tf.dtypes.int64, name=None
)
Note that in case of ties the identity of the return value is not guaranteed. Usage: import tensorflow as tf
a = [1, 10, 26.9, 2.8, 166.32, 62.3]
b = tf.math.argmin(input = a)
c = tf.keras.backend.eval(b)
# c = 0
# here a[0] = 1 which is the smallest element of a across axis 0
Args
input A Tensor. Must be one of the following types: float32, float64, int32, uint8, int16, int8, complex64, int64, qint8, quint8, qint32, bfloat16, uint16, complex128, half, uint32, uint64, bool.
dimension A Tensor. Must be one of the following types: int32, int64. int32 or int64, must be in the range [-rank(input), rank(input)). Describes which dimension of the input Tensor to reduce across. For vectors, use dimension = 0.
output_type An optional tf.DType from: tf.int32, tf.int64. Defaults to tf.int64.
name A name for the operation (optional).
Returns A Tensor of type output_type. | tensorflow.compat.v1.arg_min |
tf.compat.v1.assert_equal Assert the condition x == y holds element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_equal
tf.compat.v1.assert_equal(
x, y, data=None, summarize=None, message=None, name=None
)
This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] == y[i]. If both x and y are empty, this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_equal(x, y)]):
output = tf.reduce_sum(x)
Args
x Numeric Tensor.
y Numeric Tensor, same dtype as and broadcastable to x.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_equal".
Returns Op that raises InvalidArgumentError if x == y is False.
Raises
InvalidArgumentError if the check can be performed immediately and x == y is False. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None | tensorflow.compat.v1.assert_equal |
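A minimal eager-mode sketch of tf.compat.v1.assert_equal (the tensor values here are illustrative, not from the API reference): import tensorflow as tf
x = tf.constant([1, 2])
y = tf.constant([1, 2])
tf.compat.v1.assert_equal(x, y)  # holds element-wise; returns None under eager execution
# tf.compat.v1.assert_equal(x, tf.constant([1, 3]))  # would raise InvalidArgumentError immediately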
tf.compat.v1.assert_greater Assert the condition x > y holds element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_greater
tf.compat.v1.assert_greater(
x, y, data=None, summarize=None, message=None, name=None
)
This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] > y[i]. If both x and y are empty, this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_greater(x, y)]):
output = tf.reduce_sum(x)
Args
x Numeric Tensor.
y Numeric Tensor, same dtype as and broadcastable to x.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_greater".
Returns Op that raises InvalidArgumentError if x > y is False.
Raises
InvalidArgumentError if the check can be performed immediately and x > y is False. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None | tensorflow.compat.v1.assert_greater |
tf.compat.v1.assert_greater_equal Assert the condition x >= y holds element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_greater_equal
tf.compat.v1.assert_greater_equal(
x, y, data=None, summarize=None, message=None, name=None
)
This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] >= y[i]. If both x and y are empty, this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_greater_equal(x, y)]):
output = tf.reduce_sum(x)
Args
x Numeric Tensor.
y Numeric Tensor, same dtype as and broadcastable to x.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_greater_equal".
Returns Op that raises InvalidArgumentError if x >= y is False.
Raises
InvalidArgumentError if the check can be performed immediately and x >= y is False. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None | tensorflow.compat.v1.assert_greater_equal |
tf.compat.v1.assert_integer Assert that x is of integer dtype. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_integer
tf.compat.v1.assert_integer(
x, message=None, name=None
)
Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_integer(x)]):
output = tf.reduce_sum(x)
Args
x Tensor whose basetype is integer and is not quantized.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_integer".
Raises
TypeError If x.dtype is anything other than non-quantized integer.
Returns A no_op that does nothing. Type can be determined statically. | tensorflow.compat.v1.assert_integer |
tf.compat.v1.assert_less Assert the condition x < y holds element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_less
tf.compat.v1.assert_less(
x, y, data=None, summarize=None, message=None, name=None
)
This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] < y[i]. If both x and y are empty, this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_less(x, y)]):
output = tf.reduce_sum(x)
Args
x Numeric Tensor.
y Numeric Tensor, same dtype as and broadcastable to x.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_less".
Returns Op that raises InvalidArgumentError if x < y is False.
Raises
InvalidArgumentError if the check can be performed immediately and x < y is False. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None | tensorflow.compat.v1.assert_less |
tf.compat.v1.assert_less_equal Assert the condition x <= y holds element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_less_equal
tf.compat.v1.assert_less_equal(
x, y, data=None, summarize=None, message=None, name=None
)
This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] <= y[i]. If both x and y are empty, this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_less_equal(x, y)]):
output = tf.reduce_sum(x)
Args
x Numeric Tensor.
y Numeric Tensor, same dtype as and broadcastable to x.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_less_equal".
Returns Op that raises InvalidArgumentError if x <= y is False.
Raises
InvalidArgumentError if the check can be performed immediately and x <= y is False. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None | tensorflow.compat.v1.assert_less_equal |
tf.compat.v1.assert_near Assert the condition x and y are close element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_near
tf.compat.v1.assert_near(
x, y, rtol=None, atol=None, data=None, summarize=None, message=None, name=None
)
Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_near(x, y)]):
output = tf.reduce_sum(x)
This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have tf.abs(x[i] - y[i]) <= atol + rtol * tf.abs(y[i]). If both x and y are empty, this is trivially satisfied. The default atol and rtol are 10 * eps, where eps is the smallest representable positive number such that 1 + eps != 1. This is about 1.2e-6 in 32bit, 2.22e-15 in 64bit, and 0.00977 in 16bit. See numpy.finfo.
Args
x Float or complex Tensor.
y Float or complex Tensor, same dtype as, and broadcastable to, x.
rtol Tensor. Same dtype as, and broadcastable to, x. The relative tolerance. Default is 10 * eps.
atol Tensor. Same dtype as, and broadcastable to, x. The absolute tolerance. Default is 10 * eps.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_near".
Returns Op that raises InvalidArgumentError if x and y are not close enough.
Numpy Compatibility Similar to numpy.testing.assert_allclose, except tolerance depends on data type. This is due to the fact that TensorFlow is often used with 32bit, 64bit, and even 16bit data. | tensorflow.compat.v1.assert_near |
tf.compat.v1.assert_negative Assert the condition x < 0 holds element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_negative
tf.compat.v1.assert_negative(
x, data=None, summarize=None, message=None, name=None
)
When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_negative(x)]):
output = tf.reduce_sum(x)
Negative means, for every element x[i] of x, we have x[i] < 0. If x is empty this is trivially satisfied.
Args
x Numeric Tensor.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_negative".
Returns Op that raises InvalidArgumentError if x < 0 is False.
Raises
InvalidArgumentError if the check can be performed immediately and x < 0 is False. The check can be performed immediately during eager execution or if x is statically known. Eager Compatibility returns None | tensorflow.compat.v1.assert_negative |
tf.compat.v1.assert_none_equal Assert the condition x != y holds element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_none_equal
tf.compat.v1.assert_none_equal(
x, y, data=None, summarize=None, message=None, name=None
)
This condition holds if for every pair of (possibly broadcast) elements x[i], y[i], we have x[i] != y[i]. If both x and y are empty, this is trivially satisfied. When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_none_equal(x, y)]):
output = tf.reduce_sum(x)
Args
x Numeric Tensor.
y Numeric Tensor, same dtype as and broadcastable to x.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x, y.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_none_equal".
Returns Op that raises InvalidArgumentError if x != y is False.
Raises
InvalidArgumentError if the check can be performed immediately and x != y is False. The check can be performed immediately during eager execution or if x and y are statically known. Eager Compatibility returns None | tensorflow.compat.v1.assert_none_equal |
tf.compat.v1.assert_non_negative Assert the condition x >= 0 holds element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_non_negative
tf.compat.v1.assert_non_negative(
x, data=None, summarize=None, message=None, name=None
)
When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_non_negative(x)]):
output = tf.reduce_sum(x)
Non-negative means, for every element x[i] of x, we have x[i] >= 0. If x is empty this is trivially satisfied.
Args
x Numeric Tensor.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_non_negative".
Returns Op that raises InvalidArgumentError if x >= 0 is False.
Raises
InvalidArgumentError if the check can be performed immediately and x >= 0 is False. The check can be performed immediately during eager execution or if x is statically known. Eager Compatibility returns None | tensorflow.compat.v1.assert_non_negative |
tf.compat.v1.assert_non_positive Assert the condition x <= 0 holds element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_non_positive
tf.compat.v1.assert_non_positive(
x, data=None, summarize=None, message=None, name=None
)
When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_non_positive(x)]):
output = tf.reduce_sum(x)
Non-positive means, for every element x[i] of x, we have x[i] <= 0. If x is empty this is trivially satisfied.
Args
x Numeric Tensor.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_non_positive".
Returns Op that raises InvalidArgumentError if x <= 0 is False.
Raises
InvalidArgumentError if the check can be performed immediately and x <= 0 is False. The check can be performed immediately during eager execution or if x is statically known. Eager Compatibility returns None | tensorflow.compat.v1.assert_non_positive |
tf.compat.v1.assert_positive Assert the condition x > 0 holds element-wise. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_positive
tf.compat.v1.assert_positive(
x, data=None, summarize=None, message=None, name=None
)
When running in graph mode, you should add a dependency on this operation to ensure that it runs. Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_positive(x)]):
output = tf.reduce_sum(x)
Positive means, for every element x[i] of x, we have x[i] > 0. If x is empty this is trivially satisfied.
Args
x Numeric Tensor.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_positive".
Returns Op that raises InvalidArgumentError if x > 0 is False.
Raises
InvalidArgumentError if the check can be performed immediately and x > 0 is False. The check can be performed immediately during eager execution or if x is statically known. Eager Compatibility returns None | tensorflow.compat.v1.assert_positive |
tf.compat.v1.assert_rank Assert x has rank equal to rank. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_rank
tf.compat.v1.assert_rank(
x, rank, data=None, summarize=None, message=None, name=None
)
Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_rank(x, 2)]):
output = tf.reduce_sum(x)
Args
x Numeric Tensor.
rank Scalar integer Tensor.
data The tensors to print out if the condition is False. Defaults to error message and the shape of x.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_rank".
Returns Op raising InvalidArgumentError unless x has specified rank. If static checks determine x has correct rank, a no_op is returned.
Raises
ValueError If static checks determine x has wrong rank. | tensorflow.compat.v1.assert_rank |
tf.compat.v1.assert_rank_at_least Assert x has rank equal to rank or higher. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_rank_at_least
tf.compat.v1.assert_rank_at_least(
x, rank, data=None, summarize=None, message=None, name=None
)
Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_rank_at_least(x, 2)]):
output = tf.reduce_sum(x)
Args
x Numeric Tensor.
rank Scalar Tensor.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_rank_at_least".
Returns Op raising InvalidArgumentError unless x has specified rank or higher. If static checks determine x has correct rank, a no_op is returned.
Raises
ValueError If static checks determine x has wrong rank. | tensorflow.compat.v1.assert_rank_at_least |
tf.compat.v1.assert_rank_in Assert x has rank in ranks. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_rank_in
tf.compat.v1.assert_rank_in(
x, ranks, data=None, summarize=None, message=None, name=None
)
Example of adding a dependency to an operation: with tf.control_dependencies([tf.compat.v1.assert_rank_in(x, (2, 4))]):
output = tf.reduce_sum(x)
Args
x Numeric Tensor.
ranks Iterable of scalar Tensor objects.
data The tensors to print out if the condition is False. Defaults to error message and first few entries of x.
summarize Print this many entries of each tensor.
message A string to prefix to the default message.
name A name for this operation (optional). Defaults to "assert_rank_in".
Returns Op raising InvalidArgumentError unless rank of x is in ranks. If static checks determine x has matching rank, a no_op is returned.
Raises
ValueError If static checks determine x has mismatched rank. | tensorflow.compat.v1.assert_rank_in |
tf.compat.v1.assert_scalar Asserts that the given tensor is a scalar (i.e. zero-dimensional). View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_scalar
tf.compat.v1.assert_scalar(
tensor, name=None, message=None
)
This function raises ValueError unless it can be certain that the given tensor is a scalar. ValueError is also raised if the shape of tensor is unknown.
Args
tensor A Tensor.
name A name for this operation. Defaults to "assert_scalar".
message A string to prefix to the default message.
Returns The input tensor (potentially converted to a Tensor).
Raises
ValueError If the tensor is not scalar (rank 0), or if its shape is unknown. | tensorflow.compat.v1.assert_scalar |
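A minimal sketch of the static check performed by tf.compat.v1.assert_scalar (values illustrative): import tensorflow as tf
tf.compat.v1.assert_scalar(tf.constant(3.0))      # rank 0: passes and returns the tensor
# tf.compat.v1.assert_scalar(tf.constant([3.0]))  # rank 1: raises ValueError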
tf.compat.v1.assert_type Statically asserts that the given Tensor is of the specified type. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.debugging.assert_type
tf.compat.v1.assert_type(
tensor, tf_type, message=None, name=None
)
Args
tensor A Tensor or SparseTensor.
tf_type A tensorflow type (dtypes.float32, tf.int64, dtypes.bool, etc).
message A string to prefix to the default message.
name A name to give this Op. Defaults to "assert_type".
Raises
TypeError If the tensor's data type doesn't match tf_type.
Returns A no_op that does nothing. Type can be determined statically. | tensorflow.compat.v1.assert_type |
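A minimal sketch of tf.compat.v1.assert_type (values illustrative): import tensorflow as tf
x = tf.constant([1.0, 2.0])
tf.compat.v1.assert_type(x, tf.float32)  # matching dtype: passes
# tf.compat.v1.assert_type(x, tf.int32)  # mismatched dtype: raises TypeError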
tf.compat.v1.assert_variables_initialized Returns an Op to check if variables are initialized.
tf.compat.v1.assert_variables_initialized(
var_list=None
)
Note: This function is obsolete and will be removed in 6 months. Please change your implementation to use report_uninitialized_variables().
When run, the returned Op will raise the exception FailedPreconditionError if any of the variables has not yet been initialized.
Note: This function is implemented by trying to fetch the values of the variables. If one of the variables is not initialized a message may be logged by the C++ runtime. This is expected.
Args
var_list List of Variable objects to check. Defaults to the value of global_variables().
Returns An Op, or None if there are no variables.
Note: The output of this function should be used. If it is not, a warning will be logged or an error may be raised. To mark the output as used, call its .mark_used() method. | tensorflow.compat.v1.assert_variables_initialized |
tf.compat.v1.assign Update ref by assigning value to it.
tf.compat.v1.assign(
ref, value, validate_shape=None, use_locking=None, name=None
)
This operation outputs a Tensor that holds the new value of ref after the value has been assigned. This makes it easier to chain operations that need to use the reset value.
Args
ref A mutable Tensor. Should be from a Variable node. May be uninitialized.
value A Tensor. Must have the same shape and dtype as ref. The value to be assigned to the variable.
validate_shape An optional bool. Defaults to True. If true, the operation will validate that the shape of 'value' matches the shape of the Tensor being assigned to. If false, 'ref' will take on the shape of 'value'.
use_locking An optional bool. Defaults to True. If True, the assignment will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns A Tensor that will hold the new value of ref after the assignment has completed. | tensorflow.compat.v1.assign |
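A minimal graph-mode sketch of chaining on the value assigned by tf.compat.v1.assign (assumes TF 1.x-style sessions via tf.compat.v1; values illustrative): import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
v = tf.Variable(0.0)
assigned = tf.assign(v, 5.0)  # a Tensor holding the new value of v
doubled = assigned * 2.0      # downstream op sees the value after the assignment
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(doubled))  # 10.0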
tf.compat.v1.assign_add Update ref by adding value to it.
tf.compat.v1.assign_add(
ref, value, use_locking=None, name=None
)
This operation outputs "ref" after the update is done. This makes it easier to chain operations that need to use the reset value. Unlike tf.math.add, this op does not broadcast. ref and value must have the same shape.
Args
ref A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
value A Tensor. Must have the same shape and dtype as ref. The value to be added to the variable.
use_locking An optional bool. Defaults to False. If True, the addition will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated. | tensorflow.compat.v1.assign_add |
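A minimal counter sketch for tf.compat.v1.assign_add under the same graph-mode assumptions (values illustrative): import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
counter = tf.Variable(0)
increment = tf.assign_add(counter, 1)  # outputs the updated value of counter
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(increment)
    print(sess.run(counter))  # 3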
tf.compat.v1.assign_sub Update ref by subtracting value from it.
tf.compat.v1.assign_sub(
ref, value, use_locking=None, name=None
)
This operation outputs ref after the update is done. This makes it easier to chain operations that need to use the reset value. Unlike tf.math.subtract, this op does not broadcast. ref and value must have the same shape.
Args
ref A mutable Tensor. Must be one of the following types: float32, float64, int64, int32, uint8, uint16, int16, int8, complex64, complex128, qint8, quint8, qint32, half. Should be from a Variable node.
value A Tensor. Must have the same shape and dtype as ref. The value to be subtracted from the variable.
use_locking An optional bool. Defaults to False. If True, the subtraction will be protected by a lock; otherwise the behavior is undefined, but may exhibit less contention.
name A name for the operation (optional).
Returns Same as "ref". Returned as a convenience for operations that want to use the new value after the variable has been updated. | tensorflow.compat.v1.assign_sub |
tf.compat.v1.AttrValue A ProtocolMessage
Attributes
b bool b
f float f
func NameAttrList func
i int64 i
list ListValue list
placeholder string placeholder
s bytes s
shape TensorShapeProto shape
tensor TensorProto tensor
type DataType type Child Classes class ListValue | tensorflow.compat.v1.attrvalue |
tf.compat.v1.AttrValue.ListValue A ProtocolMessage
Attributes
b repeated bool b
f repeated float f
func repeated NameAttrList func
i repeated int64 i
s repeated bytes s
shape repeated TensorShapeProto shape
tensor repeated TensorProto tensor
type repeated DataType type | tensorflow.compat.v1.attrvalue.listvalue |
Module: tf.compat.v1.audio Public API for tf.audio namespace. Functions decode_wav(...): Decode a 16-bit PCM WAV file to a float tensor. encode_wav(...): Encode audio data using the WAV file format. | tensorflow.compat.v1.audio |
Module: tf.compat.v1.autograph Conversion of plain Python into TensorFlow graph code.
Note: In TensorFlow 2.0, AutoGraph is automatically applied when using tf.function. This module contains lower-level APIs for advanced use.
For more information, see the AutoGraph guide. By equivalent graph code we mean code that generates a TensorFlow graph when run. The generated graph has the same effects as the original code when executed (for example with tf.function or tf.compat.v1.Session.run). In other words, using AutoGraph can be thought of as running Python in TensorFlow. Modules experimental module: Public API for tf.autograph.experimental namespace. Functions set_verbosity(...): Sets the AutoGraph verbosity level. to_code(...): Returns the source code generated by AutoGraph, as a string. to_graph(...): Converts a Python entity into a TensorFlow graph. trace(...): Traces argument information at compilation time. | tensorflow.compat.v1.autograph |
Module: tf.compat.v1.autograph.experimental Public API for tf.autograph.experimental namespace. Classes class Feature: This enumeration represents optional conversion options. Functions do_not_convert(...): Decorator that suppresses the conversion of a function. set_loop_options(...): Specifies additional arguments to be passed to the enclosing while_loop. | tensorflow.compat.v1.autograph.experimental |
tf.compat.v1.autograph.to_code Returns the source code generated by AutoGraph, as a string.
tf.compat.v1.autograph.to_code(
entity, recursive=True, arg_values=None, arg_types=None, indentation='  ', experimental_optional_features=None
)
Example usage:
def f(x):
if x < 0:
x = -x
return x
tf.autograph.to_code(f)
"...def tf__f(x):..."
Also see: tf.autograph.to_graph.
Note: If a function has been decorated with tf.function, pass its underlying Python function, rather than the callable that tf.function creates:
@tf.function
def f(x):
if x < 0:
x = -x
return x
tf.autograph.to_code(f.python_function)
"...def tf__f(x):..."
Args
entity Python callable or class.
recursive Whether to recursively convert any functions that the converted function may call.
arg_values Deprecated.
arg_types Deprecated.
indentation Deprecated.
experimental_optional_features None, a tuple of, or a single tf.autograph.experimental.Feature value.
Returns The converted code as string. | tensorflow.compat.v1.autograph.to_code |
tf.compat.v1.autograph.to_graph Converts a Python entity into a TensorFlow graph.
tf.compat.v1.autograph.to_graph(
entity, recursive=True, arg_values=None, arg_types=None,
experimental_optional_features=None
)
Also see: tf.autograph.to_code, tf.function. Unlike tf.function, to_graph is a low-level transpiler that converts Python code to TensorFlow graph code. It does not implement any caching, variable management or create any actual ops, and is best used where greater control over the generated TensorFlow graph is desired. Another difference from tf.function is that to_graph will not wrap the graph into a TensorFlow function or a Python callable. Internally, tf.function uses to_graph. Example Usage def foo(x):
if x > 0:
y = x * x
else:
y = -x
return y
converted_foo = to_graph(foo)
x = tf.constant(1)
y = converted_foo(x) # converted_foo is a TensorFlow Op-like.
assert is_tensor(y)
Supported Python entities include: functions, classes, and object methods. Functions are converted into new functions with converted code. Classes are converted by generating a new class whose methods use converted code. Methods are converted into unbound functions that have an additional first argument called self.
Args
entity Python callable or class to convert.
recursive Whether to recursively convert any functions that the converted function may call.
arg_values Deprecated.
arg_types Deprecated.
experimental_optional_features None, a tuple of, or a single tf.autograph.experimental.Feature value.
Returns Same as entity, the converted Python function or class.
Raises
ValueError If the entity could not be converted. | tensorflow.compat.v1.autograph.to_graph |
tf.compat.v1.batch_gather Gather slices from params according to indices with leading batch dims. (deprecated)
tf.compat.v1.batch_gather(
params, indices, name=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2017-10-25. Instructions for updating: tf.batch_gather is deprecated, please use tf.gather with batch_dims=-1 instead. | tensorflow.compat.v1.batch_gather |
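A sketch of the replacement named in the deprecation notice (values illustrative): import tensorflow as tf
params = tf.constant([[10, 11, 12], [20, 21, 22]])
indices = tf.constant([[2, 0], [1, 1]])
tf.gather(params, indices, batch_dims=-1)  # [[12, 10], [21, 21]]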
tf.compat.v1.batch_scatter_update Generalization of tf.compat.v1.scatter_update to axis different than 0. (deprecated)
tf.compat.v1.batch_scatter_update(
ref, indices, updates, use_locking=True, name=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed after 2018-11-29. Instructions for updating: Use the batch_scatter_update method of Variable instead. Analogous to batch_gather. This assumes that ref, indices and updates have a series of leading dimensions that are the same for all of them, and the updates are performed on the last dimension of indices. In other words, the dimensions should be the following:
num_prefix_dims = indices.ndims - 1
batch_dim = num_prefix_dims + 1
updates.shape = indices.shape + var.shape[batch_dim:]
where updates.shape[:num_prefix_dims] == indices.shape[:num_prefix_dims] == var.shape[:num_prefix_dims]
And the operation performed can be expressed as:
var[i_1, ..., i_n, indices[i_1, ..., i_n, j]] = updates[i_1, ..., i_n, j]
When indices is a 1D tensor, this operation is equivalent to tf.compat.v1.scatter_update. To avoid this operation there would be 2 alternatives: 1) Reshaping the variable by merging the first ndims dimensions. However, this is not possible because tf.reshape returns a Tensor, which we cannot use tf.compat.v1.scatter_update on. 2) Looping over the first ndims of the variable and using tf.compat.v1.scatter_update on the subtensors that result from slicing the first dimension. This is a valid option for ndims = 1, but less efficient than this implementation. See also tf.compat.v1.scatter_update and tf.compat.v1.scatter_nd_update.
Args
ref Variable to scatter onto.
indices Tensor containing indices as described above.
updates Tensor of updates to apply to ref.
use_locking Boolean indicating whether to lock the writing operation.
name Optional scope name string.
Returns Ref to variable after it has been modified.
Raises
ValueError If the initial ndims of ref, indices, and updates are not the same. | tensorflow.compat.v1.batch_scatter_update |
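A minimal graph-mode sketch of the 1-D case, where the doc above says the op matches tf.compat.v1.scatter_update (assumes TF 1.x-style sessions; values illustrative): import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
v = tf.Variable([1.0, 2.0, 3.0])
update = tf.batch_scatter_update(v, indices=[0, 2], updates=[10.0, 30.0])
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(update))  # [10.  2. 30.]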
tf.compat.v1.batch_to_space BatchToSpace for 4-D tensors of type T.
tf.compat.v1.batch_to_space(
input, crops, block_size, name=None, block_shape=None
)
This is a legacy version of the more general BatchToSpaceND. Rearranges (permutes) data from batch into blocks of spatial data, followed by cropping. This is the reverse transformation of SpaceToBatch. More specifically, this op outputs a copy of the input tensor where values from the batch dimension are moved in spatial blocks to the height and width dimensions, followed by cropping along the height and width dimensions.
Args
input A Tensor. 4-D tensor with shape [batch*block_size*block_size, height_pad/block_size, width_pad/block_size, depth]. Note that the batch size of the input tensor must be divisible by block_size * block_size.
crops A Tensor. Must be one of the following types: int32, int64. 2-D tensor of non-negative integers with shape [2, 2]. It specifies how many elements to crop from the intermediate result across the spatial dimensions as follows: crops = [[crop_top, crop_bottom], [crop_left, crop_right]]
block_size An int that is >= 2.
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.compat.v1.batch_to_space |
tf.compat.v1.batch_to_space_nd BatchToSpace for N-D tensors of type T. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.manip.batch_to_space_nd
tf.compat.v1.batch_to_space_nd(
input, block_shape, crops, name=None
)
This operation reshapes the "batch" dimension 0 into M + 1 dimensions of shape block_shape + [batch], interleaves these blocks back into the grid defined by the spatial dimensions [1, ..., M], to obtain a result with the same rank as the input. The spatial dimensions of this intermediate result are then optionally cropped according to crops to produce the output. This is the reverse of SpaceToBatch. See below for a precise description.
Args
input A Tensor. N-D with shape input_shape = [batch] + spatial_shape + remaining_shape, where spatial_shape has M dimensions.
block_shape A Tensor. Must be one of the following types: int32, int64. 1-D with shape [M], all values must be >= 1.
crops A Tensor. Must be one of the following types: int32, int64. 2-D with shape [M, 2], all values must be >= 0. crops[i] = [crop_start, crop_end] specifies the amount to crop from input dimension i + 1, which corresponds to spatial dimension i. It is required that crop_start[i] + crop_end[i] <= block_shape[i] * input_shape[i + 1]. This operation is equivalent to the following steps:
1. Reshape input to reshaped of shape: [block_shape[0], ..., block_shape[M-1], batch / prod(block_shape), input_shape[1], ..., input_shape[N-1]]
2. Permute dimensions of reshaped to produce permuted of shape [batch / prod(block_shape), input_shape[1], block_shape[0], ..., input_shape[M], block_shape[M-1], input_shape[M+1], ..., input_shape[N-1]]
3. Reshape permuted to produce reshaped_permuted of shape [batch / prod(block_shape), input_shape[1] * block_shape[0], ..., input_shape[M] * block_shape[M-1], input_shape[M+1], ..., input_shape[N-1]]
4. Crop the start and end of dimensions [1, ..., M] of reshaped_permuted according to crops to produce the output of shape: [batch / prod(block_shape), input_shape[1] * block_shape[0] - crops[0,0] - crops[0,1], ..., input_shape[M] * block_shape[M-1] - crops[M-1,0] - crops[M-1,1], input_shape[M+1], ..., input_shape[N-1]]
Some examples:
(1) For the following input of shape [4, 1, 1, 1], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]: [[[[1]]], [[[2]]], [[[3]]], [[[4]]]]
The output tensor has shape [1, 2, 2, 1] and value: x = [[[[1], [2]], [[3], [4]]]]
(2) For the following input of shape [4, 1, 1, 3], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]: [[[[1, 2, 3]]], [[[4, 5, 6]]], [[[7, 8, 9]]], [[[10, 11, 12]]]]
The output tensor has shape [1, 2, 2, 3] and value: x = [[[[1, 2, 3], [4, 5, 6]],
[[7, 8, 9], [10, 11, 12]]]]
(3) For the following input of shape [4, 2, 2, 1], block_shape = [2, 2], and crops = [[0, 0], [0, 0]]: x = [[[[1], [3]], [[9], [11]]],
[[[2], [4]], [[10], [12]]],
[[[5], [7]], [[13], [15]]],
[[[6], [8]], [[14], [16]]]]
The output tensor has shape [1, 4, 4, 1] and value: x = [[[[1], [2], [3], [4]],
[[5], [6], [7], [8]],
[[9], [10], [11], [12]],
[[13], [14], [15], [16]]]]
(4) For the following input of shape [8, 1, 3, 1], block_shape = [2, 2], and crops = [[0, 0], [2, 0]]: x = [[[[0], [1], [3]]], [[[0], [9], [11]]],
[[[0], [2], [4]]], [[[0], [10], [12]]],
[[[0], [5], [7]]], [[[0], [13], [15]]],
[[[0], [6], [8]]], [[[0], [14], [16]]]]
The output tensor has shape [2, 2, 4, 1] and value: x = [[[[1], [2], [3], [4]],
[[5], [6], [7], [8]]],
[[[9], [10], [11], [12]],
[[13], [14], [15], [16]]]]
name A name for the operation (optional).
Returns A Tensor. Has the same type as input. | tensorflow.compat.v1.batch_to_space_nd |
tf.compat.v1.bincount Counts the number of occurrences of each value in an integer array. View aliases Compat aliases for migration
See Migration guide for more details. tf.compat.v1.math.bincount
tf.compat.v1.bincount(
arr, weights=None, minlength=None, maxlength=None, dtype=tf.dtypes.int32
)
If minlength and maxlength are not given, returns a vector with length tf.reduce_max(arr) + 1 if arr is non-empty, and length 0 otherwise. If weights are non-None, then index i of the output stores the sum of the value in weights at each index where the corresponding value in arr is i.
Args
arr An int32 tensor of non-negative values.
weights If non-None, must be the same shape as arr. For each value in arr, the bin will be incremented by the corresponding weight instead of 1.
minlength If given, ensures the output has length at least minlength, padding with zeros at the end if necessary.
maxlength If given, skips values in arr that are equal or greater than maxlength, ensuring that the output has length at most maxlength.
dtype If weights is None, determines the type of the output bins.
Returns A vector with the same dtype as weights or the given dtype. The bin values. | tensorflow.compat.v1.bincount |
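A minimal sketch of plain and weighted counting with tf.compat.v1.bincount (values illustrative): import tensorflow as tf
values = tf.constant([1, 1, 2, 3, 2, 4, 4, 5])
tf.compat.v1.bincount(values)  # [0, 2, 2, 1, 2, 1]
weights = tf.constant([1.0, 5.0, 0.0, 1.0, 0.0, 5.0, 4.0, 5.0])
tf.compat.v1.bincount(values, weights=weights)  # [0., 6., 0., 1., 9., 5.]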
Module: tf.compat.v1.bitwise Operations for manipulating the binary representations of integers. Functions bitwise_and(...): Elementwise computes the bitwise AND of x and y. bitwise_or(...): Elementwise computes the bitwise OR of x and y. bitwise_xor(...): Elementwise computes the bitwise XOR of x and y. invert(...): Invert (flip) each bit of supported types; for example, type uint8 value 01010101 becomes 10101010. left_shift(...): Elementwise computes the bitwise left-shift of x and y. right_shift(...): Elementwise computes the bitwise right-shift of x and y. | tensorflow.compat.v1.bitwise |
tf.compat.v1.boolean_mask Apply boolean mask to tensor.
tf.compat.v1.boolean_mask(
tensor, mask, name='boolean_mask', axis=None
)
Numpy equivalent is tensor[mask]. In general, 0 < dim(mask) = K <= dim(tensor), and mask's shape must match the first K dimensions of tensor's shape. We then have: boolean_mask(tensor, mask)[i, j1,...,jd] = tensor[i1,...,iK,j1,...,jd] where (i1,...,iK) is the ith True entry of mask (row-major order). The axis could be used with mask to indicate the axis to mask from. In that case, axis + dim(mask) <= dim(tensor) and mask's shape must match the first axis + dim(mask) dimensions of tensor's shape. See also: tf.ragged.boolean_mask, which can be applied to both dense and ragged tensors, and can be used if you need to preserve the masked dimensions of tensor (rather than flattening them, as tf.boolean_mask does). Examples: # 1-D example
tensor = [0, 1, 2, 3]
mask = np.array([True, False, True, False])
tf.boolean_mask(tensor, mask) # [0, 2]
# 2-D example
tensor = [[1, 2], [3, 4], [5, 6]]
mask = np.array([True, False, True])
tf.boolean_mask(tensor, mask) # [[1, 2], [5, 6]]
Args
tensor N-D Tensor.
mask K-D boolean Tensor, K <= N and K must be known statically.
name A name for this operation (optional).
axis A 0-D int Tensor representing the axis in tensor to mask from. By default, axis is 0 which will mask from the first dimension. Otherwise K + axis <= N.
Returns (N-K+1)-dimensional tensor populated by entries in tensor corresponding to True values in mask.
Raises
ValueError If shapes do not conform. | tensorflow.compat.v1.boolean_mask |
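A sketch of masking along a non-default axis, complementing the examples above (values illustrative): import numpy as np
import tensorflow as tf
tensor = [[1, 2], [3, 4], [5, 6]]
mask = np.array([True, False])
tf.compat.v1.boolean_mask(tensor, mask, axis=1)  # keeps column 0: [[1], [3], [5]]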
tf.compat.v1.case Create a case operation.
tf.compat.v1.case(
pred_fn_pairs, default=None, exclusive=False, strict=False,
name='case'
)
See also tf.switch_case. The pred_fn_pairs parameter is a dict or list of pairs of size N. Each pair contains a boolean scalar tensor and a python callable that creates the tensors to be returned if the boolean evaluates to True. default is a callable generating a list of tensors. All the callables in pred_fn_pairs as well as default (if provided) should return the same number and types of tensors. If exclusive==True, all predicates are evaluated, and an exception is thrown if more than one of the predicates evaluates to True. If exclusive==False, execution stops at the first predicate which evaluates to True, and the tensors generated by the corresponding function are returned immediately. If none of the predicates evaluate to True, this operation returns the tensors generated by default. tf.case supports nested structures as implemented in tf.contrib.framework.nest. All of the callables must return the same (possibly nested) value structure of lists, tuples, and/or named tuples. Singleton lists and tuples form the only exceptions to this: when returned by a callable, they are implicitly unpacked to single values. This behavior is disabled by passing strict=True. If an unordered dictionary is used for pred_fn_pairs, the order of the conditional tests is not guaranteed. However, the order is guaranteed to be deterministic, so that variables created in conditional branches are created in fixed order across runs. Example 1: Pseudocode: if (x < y) return 17;
else return 23;
Expressions: f1 = lambda: tf.constant(17)
f2 = lambda: tf.constant(23)
r = tf.case([(tf.less(x, y), f1)], default=f2)
Example 2: Pseudocode: if (x < y && x > z) raise OpError("Only one predicate may evaluate to True");
if (x < y) return 17;
else if (x > z) return 23;
else return -1;
Expressions: def f1(): return tf.constant(17)
def f2(): return tf.constant(23)
def f3(): return tf.constant(-1)
r = tf.case({tf.less(x, y): f1, tf.greater(x, z): f2},
default=f3, exclusive=True)
Args
pred_fn_pairs Dict or list of pairs of a boolean scalar tensor and a callable which returns a list of tensors.
default Optional callable that returns a list of tensors.
exclusive True iff at most one predicate is allowed to evaluate to True.
strict A boolean that enables/disables 'strict' mode; see above.
name A name for this operation (optional).
Returns The tensors returned by the first pair whose predicate evaluated to True, or those returned by default if none does.
Raises
TypeError If pred_fn_pairs is not a list/dictionary.
TypeError If pred_fn_pairs is a list but does not contain 2-tuples.
TypeError If fns[i] is not callable for any i, or default is not callable. Eager Compatibility Unordered dictionaries are not supported in eager mode when exclusive=False. Use a list of tuples instead. | tensorflow.compat.v1.case |
tf.compat.v1.clip_by_average_norm Clips tensor values to a maximum average L2-norm. (deprecated)
tf.compat.v1.clip_by_average_norm(
t, clip_norm, name=None
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: clip_by_average_norm is deprecated in TensorFlow 2.0. Please use clip_by_norm(t, clip_norm * tf.cast(tf.size(t), tf.float32), name) instead. Given a tensor t, and a maximum clip value clip_norm, this operation normalizes t so that its average L2-norm is less than or equal to clip_norm. Specifically, if the average L2-norm is already less than or equal to clip_norm, then t is not modified. If the average L2-norm is greater than clip_norm, then this operation returns a tensor of the same type and shape as t with its values set to: t * clip_norm / l2norm_avg(t) In this case, the average L2-norm of the output tensor is clip_norm. This operation is typically used to clip gradients before applying them with an optimizer.
Args
t A Tensor.
clip_norm A 0-D (scalar) Tensor > 0. A maximum clipping value.
name A name for the operation (optional).
Returns A clipped Tensor. | tensorflow.compat.v1.clip_by_average_norm |
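A sketch of the replacement given in the deprecation notice (values illustrative): import tensorflow as tf
t = tf.constant([[3.0, 4.0]])
clip_norm = 1.0
clipped = tf.clip_by_norm(t, clip_norm * tf.cast(tf.size(t), tf.float32))  # average L2-norm <= clip_norm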
tf.compat.v1.colocate_with DEPRECATED FUNCTION
tf.compat.v1.colocate_with(
op, ignore_existing=False
)
Warning: THIS FUNCTION IS DEPRECATED. It will be removed in a future version. Instructions for updating: Colocations handled automatically by placer. | tensorflow.compat.v1.colocate_with |
Module: tf.compat.v1.compat Compatibility functions. The tf.compat module contains two sets of compatibility functions. Tensorflow 1.x and 2.x APIs The compat.v1 and compat.v2 submodules provide a complete copy of both the v1 and v2 APIs for backwards and forwards compatibility across TensorFlow versions 1.x and 2.x. See the migration guide for details. Utilities for writing compatible code Aside from the compat.v1 and compat.v2 submodules, tf.compat also contains a set of helper functions for writing code that works in both TensorFlow 1.x and 2.x, and in both Python 2 and 3. Type collections The compatibility module also provides the following aliases for common sets of Python types: bytes_or_text_types complex_types integral_types real_types Functions as_bytes(...): Converts bytearray, bytes, or unicode python input types to bytes. as_str(...) as_str_any(...): Converts input to str type. as_text(...): Converts any string-like python input types to unicode. dimension_at_index(...): Compatibility utility required to allow for both V1 and V2 behavior in TF. dimension_value(...): Compatibility utility required to allow for both V1 and V2 behavior in TF. forward_compatibility_horizon(...): Context manager for testing forward compatibility of generated graphs. forward_compatible(...): Return true if the forward compatibility window has expired. path_to_str(...): Converts input which is a PathLike object to str type.
Other Members
bytes_or_text_types
complex_types
integral_types
real_types | tensorflow.compat.v1.compat |
tf.compat.v1.cond Return true_fn() if the predicate pred is true else false_fn(). (deprecated arguments)
tf.compat.v1.cond(
pred, true_fn=None, false_fn=None, strict=False, name=None, fn1=None, fn2=None
)
Warning: SOME ARGUMENTS ARE DEPRECATED: (fn1, fn2). They will be removed in a future version. Instructions for updating: fn1/fn2 are deprecated in favor of the true_fn/false_fn arguments. true_fn and false_fn both return lists of output tensors. true_fn and false_fn must have the same non-zero number and type of outputs. Warning: Any Tensors or Operations created outside of true_fn and false_fn will be executed regardless of which branch is selected at runtime. Although this behavior is consistent with the dataflow model of TensorFlow, it has frequently surprised users who expected a lazier semantics. Consider the following simple program: z = tf.multiply(a, b)
result = tf.cond(x < y, lambda: tf.add(x, z), lambda: tf.square(y))
If x < y, the tf.add operation will be executed and tf.square operation will not be executed. Since z is needed for at least one branch of the cond, the tf.multiply operation is always executed, unconditionally. Note that cond calls true_fn and false_fn exactly once (inside the call to cond, and not at all during Session.run()). cond stitches together the graph fragments created during the true_fn and false_fn calls with some additional graph nodes to ensure that the right branch gets executed depending on the value of pred. tf.cond supports nested structures as implemented in tensorflow.python.util.nest. Both true_fn and false_fn must return the same (possibly nested) value structure of lists, tuples, and/or named tuples. Singleton lists and tuples form the only exceptions to this: when returned by true_fn and/or false_fn, they are implicitly unpacked to single values. This behavior is disabled by passing strict=True.
Args
pred A scalar determining whether to return the result of true_fn or false_fn.
true_fn The callable to be performed if pred is true.
false_fn The callable to be performed if pred is false.
strict A boolean that enables/disables 'strict' mode; see above.
name Optional name prefix for the returned tensors.
Returns Tensors returned by the call to either true_fn or false_fn. If the callables return a singleton list, the element is extracted from the list.
Raises
TypeError if true_fn or false_fn is not callable.
ValueError if true_fn and false_fn do not return the same number of tensors, or return tensors of different types. Example: x = tf.constant(2)
y = tf.constant(5)
def f1(): return tf.multiply(x, 17)
def f2(): return tf.add(y, 23)
r = tf.cond(tf.less(x, y), f1, f2)
# r is set to f1().
# Operations in f2 (e.g., tf.add) are not executed. | tensorflow.compat.v1.cond |
tf.compat.v1.ConditionalAccumulator A conditional accumulator for aggregating gradients. Inherits From: ConditionalAccumulatorBase
tf.compat.v1.ConditionalAccumulator(
dtype, shape=None, shared_name=None, name='conditional_accumulator',
reduction_type='MEAN'
)
Up-to-date gradients (i.e., time step at which gradient was computed is equal to the accumulator's time step) are added to the accumulator. Extraction of the average gradient is blocked until the required number of gradients has been accumulated.
Args
dtype Datatype of the accumulated gradients.
shape Shape of the accumulated gradients.
shared_name Optional. If non-empty, this accumulator will be shared under the given name across multiple sessions.
name Optional name for the accumulator.
reduction_type Reduction type to use when taking the gradient.
Attributes
accumulator_ref The underlying accumulator reference.
dtype The datatype of the gradients accumulated by this accumulator.
name The name of the underlying accumulator. Methods apply_grad View source
apply_grad(
grad, local_step=0, name=None
)
Attempts to apply a gradient to the accumulator. The attempt is silently dropped if the gradient is stale, i.e., local_step is less than the accumulator's global time step.
Args
grad The gradient tensor to be applied.
local_step Time step at which the gradient was computed.
name Optional name for the operation.
Returns The operation that (conditionally) applies a gradient to the accumulator.
Raises
ValueError If grad is of the wrong shape num_accumulated View source
num_accumulated(
name=None
)
Number of gradients that have currently been aggregated in accumulator.
Args
name Optional name for the operation.
Returns Number of accumulated gradients currently in accumulator.
set_global_step View source
set_global_step(
new_global_step, name=None
)
Sets the global time step of the accumulator. The operation logs a warning if we attempt to set to a time step that is lower than the accumulator's own time step.
Args
new_global_step Value of new time step. Can be a variable or a constant.
name Optional name for the operation.
Returns Operation that sets the accumulator's time step.
take_grad View source
take_grad(
num_required, name=None
)
Attempts to extract the average gradient from the accumulator. The operation blocks until sufficient number of gradients have been successfully applied to the accumulator. Once successful, the following actions are also triggered: Counter of accumulated gradients is reset to 0. Aggregated gradient is reset to 0 tensor. Accumulator's internal time step is incremented by 1.
Args
num_required Number of gradients that need to have been aggregated.
name Optional name for the operation.
Returns A tensor holding the value of the average gradient.
Raises
InvalidArgumentError If num_required < 1 | tensorflow.compat.v1.conditionalaccumulator |
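A minimal graph-mode sketch of the accumulate/extract cycle (assumes TF 1.x-style sessions; values illustrative): import tensorflow.compat.v1 as tf
tf.disable_eager_execution()
acc = tf.ConditionalAccumulator(dtype=tf.float32, shape=())
apply_op = acc.apply_grad(tf.constant(4.0), local_step=0)
avg = acc.take_grad(num_required=1)  # blocks until one gradient has been applied
with tf.Session() as sess:
    sess.run(apply_op)
    print(sess.run(avg))  # 4.0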
tf.compat.v1.ConditionalAccumulatorBase A conditional accumulator for aggregating gradients.
tf.compat.v1.ConditionalAccumulatorBase(
dtype, shape, accumulator_ref
)
Up-to-date gradients (i.e., time step at which gradient was computed is equal to the accumulator's time step) are added to the accumulator. Extraction of the average gradient is blocked until the required number of gradients has been accumulated.
Args
dtype Datatype of the accumulated gradients.
shape Shape of the accumulated gradients.
accumulator_ref A handle to the conditional accumulator, created by sub- classes
Attributes
accumulator_ref The underlying accumulator reference.
dtype The datatype of the gradients accumulated by this accumulator.
name The name of the underlying accumulator. Methods num_accumulated View source
num_accumulated(
name=None
)
Number of gradients that have currently been aggregated in accumulator.
Args
name Optional name for the operation.
Returns Number of accumulated gradients currently in accumulator.
set_global_step View source
set_global_step(
new_global_step, name=None
)
Sets the global time step of the accumulator. The operation logs a warning if we attempt to set to a time step that is lower than the accumulator's own time step.
Args
new_global_step Value of new time step. Can be a variable or a constant.
name Optional name for the operation.
Returns Operation that sets the accumulator's time step. | tensorflow.compat.v1.conditionalaccumulatorbase |
Module: tf.compat.v1.config Public API for tf.config namespace. Modules experimental module: Public API for tf.config.experimental namespace. optimizer module: Public API for tf.config.optimizer namespace. threading module: Public API for tf.config.threading namespace. Classes class LogicalDevice: Abstraction for a logical device initialized by the runtime. class LogicalDeviceConfiguration: Configuration class for a logical devices. class PhysicalDevice: Abstraction for a locally visible physical device. Functions experimental_connect_to_cluster(...): Connects to the given cluster. experimental_connect_to_host(...): Connects to a single machine to enable remote execution on it. experimental_functions_run_eagerly(...): Returns the value of the experimental_run_functions_eagerly setting. (deprecated) experimental_run_functions_eagerly(...): Enables / disables eager execution of tf.functions. (deprecated) functions_run_eagerly(...): Returns the value of the run_functions_eagerly setting. get_logical_device_configuration(...): Get the virtual device configuration for a tf.config.PhysicalDevice. get_soft_device_placement(...): Get if soft device placement is enabled. get_visible_devices(...): Get the list of visible physical devices. list_logical_devices(...): Return a list of logical devices created by runtime. list_physical_devices(...): Return a list of physical devices visible to the host runtime. run_functions_eagerly(...): Enables / disables eager execution of tf.functions. set_logical_device_configuration(...): Set the logical device configuration for a tf.config.PhysicalDevice. set_soft_device_placement(...): Set if soft device placement is enabled. set_visible_devices(...): Set the list of visible devices. | tensorflow.compat.v1.config |
Module: tf.compat.v1.config.experimental Public API for tf.config.experimental namespace. Classes class ClusterDeviceFilters: Represent a collection of device filters for the remote workers in cluster. class VirtualDeviceConfiguration: Configuration class for a logical devices. Functions disable_mlir_bridge(...): Disables experimental MLIR-Based TensorFlow Compiler Bridge. disable_mlir_graph_optimization(...): Disables experimental MLIR-Based TensorFlow Compiler Optimizations. enable_mlir_bridge(...): Enables experimental MLIR-Based TensorFlow Compiler Bridge. enable_mlir_graph_optimization(...): Enables experimental MLIR-Based TensorFlow Compiler Optimizations. enable_tensor_float_32_execution(...): Enable or disable the use of TensorFloat-32 on supported hardware. get_device_details(...): Returns details about a physical devices. get_device_policy(...): Gets the current device policy. get_memory_growth(...): Get if memory growth is enabled for a PhysicalDevice. get_memory_usage(...): Get the memory usage, in bytes, for the chosen device. get_synchronous_execution(...): Gets whether operations are executed synchronously or asynchronously. get_virtual_device_configuration(...): Get the virtual device configuration for a tf.config.PhysicalDevice. get_visible_devices(...): Get the list of visible physical devices. list_logical_devices(...): Return a list of logical devices created by runtime. list_physical_devices(...): Return a list of physical devices visible to the host runtime. set_device_policy(...): Sets the current thread device policy. set_memory_growth(...): Set if memory growth should be enabled for a PhysicalDevice. set_synchronous_execution(...): Specifies whether operations are executed synchronously or asynchronously. set_virtual_device_configuration(...): Set the logical device configuration for a tf.config.PhysicalDevice. set_visible_devices(...): Set the list of visible devices. tensor_float_32_execution_enabled(...): Returns whether TensorFloat-32 is enabled. | tensorflow.compat.v1.config.experimental |
Module: tf.compat.v1.config.optimizer Public API for tf.config.optimizer namespace. Functions get_experimental_options(...): Get experimental optimizer options. get_jit(...): Get if JIT compilation is enabled. set_experimental_options(...): Set experimental optimizer options. set_jit(...): Set if JIT compilation is enabled. | tensorflow.compat.v1.config.optimizer |
The library documentation retrieval source for code-rag-bench; it contains all documentation for Python libraries available on devdocs.io.