>>> # wait for worker 0 to finish work, and then shutdown.
>>> rpc.shutdown()
class torch.distributed.rpc.WorkerInfo
A structure that encapsulates information of a worker in the
system. Contains the name and ID of the worker. This class is not
meant to be constructed directly, rather, an instance can be
retrieved through "get_worker_info()" and the result can be passed
in to functions such as "rpc_sync()", "rpc_async()", "remote()" to
avoid copying a string on every invocation.
property id
Globally unique id to identify the worker.
property name
The name of the worker.
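A minimal usage sketch (assuming RPC has already been initialized with two workers named "worker0" and "worker1"): look the handle up once and reuse it instead of passing the worker name string on every call.
    import torch
    import torch.distributed.rpc as rpc

    worker1_info = rpc.get_worker_info("worker1")
    print(worker1_info.name, worker1_info.id)

    # A WorkerInfo is accepted anywhere a worker name string would be.
    fut = rpc.rpc_async(worker1_info, torch.add, args=(torch.ones(2), 1))
    rref = rpc.remote(worker1_info, torch.add, args=(torch.ones(2), 2))
    print(fut.wait(), rref.to_here())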
The RPC package also provides decorators which allow applications to
specify how a given function should be treated on the callee side.
torch.distributed.rpc.functions.async_execution(fn)
A decorator for a function indicating that the return value of the
function is guaranteed to be a "Future" object and this function
can run asynchronously on the RPC callee. More specifically, the
callee extracts the "Future" returned by the wrapped function and
installs subsequent processing steps as a callback to that
"Future". The installed callback will read the value from the
"Future" when completed and send the value back as the RPC
response. That also means the returned "Future" only exists on the
callee side and is never sent through RPC. This decorator is useful
when the wrapped function's ("fn") execution needs to pause and
resume due to, e.g., containing "rpc_async()" or waiting for other
signals.
Note:
To enable asynchronous execution, applications must pass the
function object returned by this decorator to RPC APIs. If RPC
detects attributes installed by this decorator, it knows that
this function returns a "Future" object and will handle that
accordingly. However, this does not mean this decorator has to be the
outermost one when defining a function. For example, when combined
with "@staticmethod" or "@classmethod",
"@rpc.functions.async_execution" needs to be the inner decorator
to allow the target function to be recognized as a static or class
function. This target function can still execute asynchronously
because, when accessed, the static or class method preserves
attributes installed by "@rpc.functions.async_execution".
Example::
The returned "Future" object can come from "rpc_async()",
"then()", or "Future" constructor. The example below shows
directly using the "Future" returned by "then()".
>>> from torch.distributed import rpc
>>>
>>> # omitting setup and shutdown RPC
>>>
>>> # On all workers
>>> @rpc.functions.async_execution
>>> def async_add_chained(to, x, y, z):
>>> # This function runs on "worker1" and returns immediately when
>>> # the callback is installed through the `then(cb)` API. In the
>>> # mean time, the `rpc_async` to "worker2" can run concurrently.
>>> # When the return value of that `rpc_async` arrives at
>>> # "worker1", "worker1" will run the lambda function accordingly
>>> # and set the value for the previously returned `Future`, which
>>> # will then trigger RPC to send the result back to "worker0".
>>> return rpc.rpc_async(to, torch.add, args=(x, y)).then(
>>> lambda fut: fut.wait() + z
>>> )
>>>
>>> # On worker0
>>> ret = rpc.rpc_sync(
>>> "worker1",
>>> async_add_chained,
>>> args=("worker2", torch.ones(2), 1, 1)
>>> )
>>> print(ret) # prints tensor([3., 3.])
When combined with TorchScript decorators, this decorator must
be the outermost one.
>>> from torch import Tensor
>>> from torch.futures import Future
>>> from torch.distributed import rpc
>>>
>>> # omitting setup and shutdown RPC
>>>
>>> # On all workers
>>> @torch.jit.script
>>> def script_add(x: Tensor, y: Tensor) -> Tensor:
>>> return x + y
>>>
>>> @rpc.functions.async_execution
>>> @torch.jit.script
>>> def async_add(to: str, x: Tensor, y: Tensor) -> Future[Tensor]:
>>> return rpc.rpc_async(to, script_add, (x, y))
>>>
>>> # On worker0
>>> ret = rpc.rpc_sync(
>>> "worker1",
>>> async_add,
>>> args=("worker2", torch.ones(2), 1)
>>> )
>>> print(ret) # prints tensor([2., 2.])
When combined with static or class method, this decorator must
be the inner one.
>>> from torch.distributed import rpc
>>>
>>> # omitting setup and shutdown RPC
>>>
>>> # On all workers
>>> class AsyncExecutionClass:
>>>
>>> @staticmethod
>>> @rpc.functions.async_execution
>>> def static_async_add(to, x, y, z):
>>> return rpc.rpc_async(to, torch.add, args=(x, y)).then(
>>> lambda fut: fut.wait() + z
>>> )
>>>
>>> @classmethod
>>> @rpc.functions.async_execution
>>> def class_async_add(cls, to, x, y, z):
>>> ret_fut = torch.futures.Future()
>>> rpc.rpc_async(to, torch.add, args=(x, y)).then(
>>> lambda fut: ret_fut.set_result(fut.wait() + z)
>>> )
>>> return ret_fut
>>>
>>> @rpc.functions.async_execution
>>> def bound_async_add(self, to, x, y, z):
>>> return rpc.rpc_async(to, torch.add, args=(x, y)).then(
>>> lambda fut: fut.wait() + z
>>> )
>>>
>>> # On worker0
>>> ret = rpc.rpc_sync(
>>> "worker1",
>>> AsyncExecutionClass.static_async_add,
>>> args=("worker2", torch.ones(2), 1, 2)
>>> )
>>> print(ret) # prints tensor([4., 4.])
>>>
>>> ret = rpc.rpc_sync(
>>> "worker1",
>>> AsyncExecutionClass.class_async_add,
>>> args=("worker2", torch.ones(2), 1, 2)
>>> )
>>> print(ret) # prints tensor([4., 4.])
This decorator also works with RRef helpers, i.e.,
"torch.distributed.rpc.RRef.rpc_sync()",
"torch.distributed.rpc.RRef.rpc_async()", and
"torch.distributed.rpc.RRef.remote()".
>>> from torch.distributed import rpc
>>>
>>> # reuse the AsyncExecutionClass class above
>>> rref = rpc.remote("worker1", AsyncExecutionClass)
>>> ret = rref.rpc_sync().static_async_add("worker2", torch.ones(2), 1, 2)
>>> print(ret) # prints tensor([4., 4.])
>>>
>>> rref = rpc.remote("worker1", AsyncExecutionClass)
>>> ret = rref.rpc_async().static_async_add("worker2", torch.ones(2), 1, 2).wait()
>>> print(ret) # prints tensor([4., 4.])
>>>
>>> rref = rpc.remote("worker1", AsyncExecutionClass)
>>> ret = rref.remote().static_async_add("worker2", torch.ones(2), 1, 2).to_here()
>>> print(ret) # prints tensor([4., 4.])
Backends
The RPC module can leverage different backends to perform the
communication between the nodes. The backend to be used can be
specified in the "init_rpc()" function, by passing a certain value of
the "BackendType" enum. Regardless of what backend is used, the rest
of the RPC API won't change. Each backend also defines its own
subclass of the "RpcBackendOptions" class, an instance of which can
also be passed to "init_rpc()" to configure the backend's behavior.
class torch.distributed.rpc.BackendType(value)
An enum class of available backends.
PyTorch ships with a builtin "BackendType.TENSORPIPE" backend.
Additional ones can be registered using the "register_backend()"
function.
class torch.distributed.rpc.RpcBackendOptions
An abstract structure encapsulating the options passed into the RPC
backend. An instance of this class can be passed in to "init_rpc()"
in order to initialize RPC with specific configurations, such as
the RPC timeout and "init_method" to be used.
property init_method
URL specifying how to initialize the process group. Default is
"env://"
property rpc_timeout
A float indicating the timeout to use for all RPCs. If an RPC
does not complete in this timeframe, it will complete with an
exception indicating that it has timed out.
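A short sketch of how these options are typically supplied (the worker name and rendezvous address are placeholders): construct the concrete subclass, set the fields described above, and hand it to "init_rpc()".
    import torch.distributed.rpc as rpc

    options = rpc.TensorPipeRpcBackendOptions(
        init_method="tcp://localhost:29500",  # rendezvous URL instead of the env:// default
        rpc_timeout=120,                      # seconds, applies to all RPCs by default
    )
    print(options.init_method, options.rpc_timeout)

    rpc.init_rpc("worker0", rank=0, world_size=2,
                 backend=rpc.BackendType.TENSORPIPE,
                 rpc_backend_options=options)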
TensorPipe Backend
~~~~~~~~~~~~~~~~~~
The TensorPipe agent, which is the default, leverages the TensorPipe
library, which provides a natively point-to-point communication
primitive specifically suited for machine learning that fundamentally
addresses some of the limitations of Gloo. Compared to Gloo, it has
the advantage of being asynchronous, which allows a large number of
transfers to occur simultaneously, each at their own speed, without
blocking each other. It will only open pipes between pairs of nodes
when needed, on demand, and when one node fails only its incident
pipes will be closed, while all other ones will keep working as
normal. In addition, it is able to support multiple different
transports (TCP, of course, but also shared memory, NVLink,
InfiniBand, ...) and can automatically detect their availability and
negotiate the best transport to use for each pipe.
The TensorPipe backend has been introduced in PyTorch v1.6 and is
being actively developed. At the moment, it only supports CPU tensors,
with GPU support coming soon. It comes with a TCP-based transport,
just like Gloo. It is also able to automatically chunk and multiplex
large tensors over multiple sockets and threads in order to achieve
very high bandwidths. The agent will be able to pick the best
transport on its own, with no intervention required.
Example:
import os
from torch.distributed import rpc
os.environ['MASTER_ADDR'] = 'localhost'
os.environ['MASTER_PORT'] = '29500'
rpc.init_rpc(
"worker1",
rank=0,
world_size=2,
rpc_backend_options=rpc.TensorPipeRpcBackendOptions(
num_worker_threads=8,
rpc_timeout=20 # 20 second timeout
)
)
# omitting init_rpc invocation on worker2
class torch.distributed.rpc.TensorPipeRpcBackendOptions(*, num_worker_threads=16, rpc_timeout=60.0, init_method='env://', device_maps=None, devices=None, _transports=None, _channels=None)
The backend options for "TensorPipeAgent", derived from
"RpcBackendOptions".
Parameters:
* num_worker_threads (int, optional) -- The number of
threads in the thread-pool used by "TensorPipeAgent" to
execute requests (default: 16).
* **rpc_timeout** (*float**, **optional*) -- The default
timeout, in seconds, for RPC requests (default: 60 seconds).
If the RPC has not completed in this timeframe, an exception
indicating so will be raised. Callers can override this
timeout for individual RPCs in "rpc_sync()" and "rpc_async()"
if necessary.
* **init_method** (*str**, **optional*) -- The URL to initialize
the distributed store used for rendezvous. It takes any value
accepted for the same argument of "init_process_group()"
(default: "env://").
* **device_maps** (*Dict**[**str**, **Dict**]**, **optional*) --
Device placement mappings from this worker to the callee. Key
is the callee worker name and value the dictionary ("Dict" of
"int", "str", or "torch.device") that maps this worker's
devices to the callee worker's devices. (default: "None")
* **devices** (*List**[**int**, **str**, or **torch.device**]**, **optional*) --
all local CUDA devices used by the RPC agent. By default, it will
be initialized to all local devices from its own "device_maps"
and corresponding devices from its peers' "device_maps". When
processing CUDA RPC requests, the agent will properly
synchronize CUDA streams for all devices in this "List".
property device_maps
The device map locations.
property devices
All devices used by the local agent.
property init_method
URL specifying how to initialize the process group. Default is
"env://"
property num_worker_threads
The number of threads in the thread-pool used by
"TensorPipeAgent" to execute requests.
property rpc_timeout
A float indicating the timeout to use for all RPCs. If an RPC
does not complete in this timeframe, it will complete with an
exception indicating that it has timed out.
set_device_map(to, device_map)
Set device mapping between each RPC caller and callee pair. This
function can be called multiple times to incrementally add
device placement configurations.
Parameters:
* **to** (*str*) -- Callee name.
* **device_map** (*Dict of python:int**, **str**, or
**torch.device*) -- Device placement mappings from this
worker to the callee. This map must be invertible.
-[ Example ]-
>>> # both workers
>>> def add(x, y):
>>> print(x) # tensor([1., 1.], device='cuda:1')
>>> return x + y, (x + y).to(2)
>>>
>>> # on worker 0
>>> options = TensorPipeRpcBackendOptions(
>>> num_worker_threads=8,
>>> device_maps={"worker1": {0: 1}}
>>> # maps worker0's cuda:0 to worker1's cuda:1
>>> )
>>> options.set_device_map("worker1", {1: 2})
>>> # maps worker0's cuda:1 to worker1's cuda:2
>>>
>>> rpc.init_rpc(
>>> "worker0",
>>> rank=0,
>>> world_size=2,
>>> backend=rpc.BackendType.TENSORPIPE,
>>> rpc_backend_options=options
>>> )
>>>
>>> x = torch.ones(2)
>>> rets = rpc.rpc_sync("worker1", add, args=(x.to(0), 1))
>>> # The first argument will be moved to cuda:1 on worker1. When
>>> # sending the return value back, it will follow the invert of
>>> # the device map, and hence will be moved back to cuda:0 and
>>> # cuda:1 on worker0
>>> print(rets[0]) # tensor([2., 2.], device='cuda:0')
>>> print(rets[1]) # tensor([2., 2.], device='cuda:1')
set_devices(devices)
Set local devices used by the TensorPipe RPC agent. When
processing CUDA RPC requests, the TensorPipe RPC agent will
properly synchronize CUDA streams for all devices in this
"List".
Parameters:
**devices** (*List of python:int**, **str**, or
**torch.device*) -- local devices used by the TensorPipe RPC
agent.
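A minimal sketch (assuming at least one local CUDA device and a peer named "worker1"): restrict the agent to an explicit device list before calling "init_rpc()".
    options = rpc.TensorPipeRpcBackendOptions(num_worker_threads=8)
    options.set_device_map("worker1", {0: 0})  # cuda:0 here maps to cuda:0 on worker1
    options.set_devices(["cuda:0"])            # only this device is managed by the agent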
Note:
The RPC framework does not automatically retry any "rpc_sync()",
"rpc_async()" and "remote()" calls. The reason being that there is
no way the RPC framework can determine whether an operation is
idempotent or not and whether it is safe to retry. As a result, it
is the application's responsibility to deal with failures and retry
if necessary. RPC communication is based on TCP and as a result
failures could happen due to network failures or intermittent
network connectivity issues. In such scenarios, the application
needs to retry appropriately with reasonable backoffs to ensure the
network isn't overwhelmed by aggressive retries.
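The note above leaves retries entirely to the application; the following is one possible sketch of such a wrapper (the helper name and backoff policy are illustrative, not part of the RPC API), to be used only when the caller knows the target function is idempotent.
    import time
    import torch.distributed.rpc as rpc

    def rpc_sync_with_retry(to, func, args=(), retries=3, base_delay=0.5):
        for attempt in range(retries):
            try:
                return rpc.rpc_sync(to, func, args=args)
            except RuntimeError:
                if attempt == retries - 1:
                    raise
                time.sleep(base_delay * (2 ** attempt))  # exponential backoff between attempts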
RRef
Warning:
RRefs are not currently supported when using CUDA tensors
An "RRef" (Remote REFerence) is a reference to a value of some type
"T" (e.g. "Tensor") on a remote worker. This handle keeps the
referenced remote value alive on the owner, but there is no
implication that the value will be transferred to the local worker in
the future. RRefs can be used in multi-machine training by holding
references to nn.Modules that exist on other workers, and calling the
appropriate functions to retrieve or modify their parameters during
training. See Remote Reference Protocol for more details.
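A minimal sketch (assuming RPC is initialized): create a value remotely, hold only the reference locally, and fetch a copy when it is actually needed.
    import torch
    import torch.distributed.rpc as rpc

    rref = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1))  # value lives on worker1
    print(rref.owner().name)   # "worker1"
    print(rref.to_here())      # copies the value to the caller: tensor([2., 2.])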
class torch.distributed.rpc.RRef
More Information about RRef
^^^^^^^^^^^^^^^^^^^^^^^^^^^
Remote Reference Protocol
Background
Assumptions
RRef Lifetime
Design Reasoning
Implementation
Protocol Scenarios
User Share RRef with Owner as Return Value
User Share RRef with Owner as Argument
Owner Share RRef with User
User Share RRef with User
RemoteModule
Warning:
RemoteModule is not currently supported when using CUDA tensors
"RemoteModule" is an easy way to create an nn.Module remotely on a
different process. The actual module resides on a remote host, but the
local host has a handle to this module and can invoke this module similar
to a regular nn.Module. The invocation however incurs RPC calls to the
remote end and can be performed asynchronously if needed via
additional APIs supported by RemoteModule.
class torch.distributed.nn.api.remote_module.RemoteModule(*args, **kwargs)
A RemoteModule instance can only be created after RPC
initialization. It creates a user-specified module on a
specified remote node. It behaves like a regular "nn.Module"
except that the "forward" method is executed on the remote node.
It takes care of autograd recording to ensure the backward pass
propagates gradients back to the corresponding remote module.
It generates two methods "forward_async" and "forward" based on
the signature of the "forward" method of "module_cls".
"forward_async" runs asynchronously and returns a Future. The
arguments of "forward_async" and "forward" are the same as the
"forward" method of the module returned by the "module_cls".
For example, if "module_cls" returns an instance of "nn.Linear",
that has "forward" method signature: "def forward(input: Tensor)
-> Tensor:", the generated "RemoteModule" will have 2 methods
with the signatures:
"def forward(input: Tensor) -> Tensor:"
"def forward_async(input: Tensor) -> Future[Tensor]:"
Parameters:
* remote_device (str) -- Device on the destination worker
where we'd like to place this module. The format should be
"/", where the device field can be parsed
as torch.device type. E.g., "trainer0/cpu", "trainer0",
"ps0/cuda:0". In addition, the device field can be optional
and the default value is "cpu".
* **module_cls** (*nn.Module*) --
Class for the module to be created remotely. For example,
>>> class MyModule(nn.Module):
>>> def forward(input):
>>> return input + 1
>>>
>>> module_cls = MyModule
* **args** (*Sequence**, **optional*) -- args to be passed to
"module_cls".
* **kwargs** (*Dict**, **optional*) -- kwargs to be passed to
"module_cls".
Returns:
A remote module instance which wraps the "Module" created by the
user-provided "module_cls", it has a blocking "forward" method
and an asynchronous "forward_async" method that returns a future
of the "forward" call on the user-provided module on the remote
side.
Example::
Run the following code in two different processes:
>>> # On worker 0:
>>> import torch
>>> import torch.distributed.rpc as rpc
>>> from torch import nn, Tensor
>>> from torch.distributed.nn.api.remote_module import RemoteModule
>>>
>>> rpc.init_rpc("worker0", rank=0, world_size=2)
>>> remote_linear_module = RemoteModule(
>>> "worker1/cpu", nn.Linear, args=(20, 30),
>>> )
>>> input = torch.randn(128, 20)
>>> ret_fut = remote_linear_module.forward_async(input)
>>> ret = ret_fut.wait()
>>> rpc.shutdown()
>>> # On worker 1:
>>> import torch
>>> import torch.distributed.rpc as rpc
>>>
>>> rpc.init_rpc("worker1", rank=1, world_size=2)
>>> rpc.shutdown()
Furthermore, a more practical example that is combined with
DistributedDataParallel (DDP) can be found in this tutorial.
get_module_rref()
Returns an "RRef" ("RRef[nn.Module]") pointing to the remote
module.
Return type:
*RRef*[*Module*]
remote_parameters(recurse=True)
Returns a list of "RRef" pointing to the remote module's
parameters. This can typically be used in conjunction with
"DistributedOptimizer".
Parameters:
**recurse** (*bool*) -- if True, then returns parameters of
the remote module and all submodules of the remote module.
Otherwise, returns only parameters that are direct members of
the remote module.
Returns:
A list of "RRef" ("List[RRef[nn.Parameter]]") to remote
module's parameters.
Return type:
*List*[*RRef*[*Parameter*]]
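A sketch of the typical pairing with "DistributedOptimizer" (it reuses the "remote_linear_module" from the example above and assumes RPC is initialized):
    import torch
    import torch.distributed.autograd as dist_autograd
    from torch.distributed.optim import DistributedOptimizer

    param_rrefs = remote_linear_module.remote_parameters()
    opt = DistributedOptimizer(torch.optim.SGD, param_rrefs, lr=0.05)

    with dist_autograd.context() as context_id:
        out = remote_linear_module.forward(torch.randn(16, 20))
        dist_autograd.backward(context_id, [out.sum()])
        opt.step(context_id)   # updates the remote parameters through RPC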
Distributed Autograd Framework
Warning:
Distributed autograd is not currently supported when using CUDA
tensors
This module provides an RPC-based distributed autograd framework that
can be used for applications such as model parallel training. In
short, applications may send and receive gradient recording tensors
over RPC. In the forward pass, we record when gradient recording
tensors are sent over RPC and during the backward pass we use this
information to perform a distributed backward pass using RPC. For more
details see Distributed Autograd Design.
torch.distributed.autograd.backward(context_id: int, roots: List[Tensor], retain_graph=False) -> None
Kicks off the distributed backward pass using the provided roots.
This currently implements the FAST mode algorithm which assumes all
RPC messages sent in the same distributed autograd context across
workers would be part of the autograd graph during the backward
pass.
We use the provided roots to discover the autograd graph and
compute appropriate dependencies. This method blocks until the
entire autograd computation is done.
We accumulate the gradients in the appropriate
"torch.distributed.autograd.context" on each of the nodes. The
autograd context to be used is looked up given the "context_id"
that is passed in when "torch.distributed.autograd.backward()" is
called. If there is no valid autograd context corresponding to the
given ID, we throw an error. You can retrieve the accumulated
gradients using the "get_gradients()" API.
Parameters:
* context_id (int) -- The autograd context id for which we
should retrieve the gradients.
* **roots** (*list*) -- Tensors which represent the roots of the
autograd computation. All the tensors should be scalars.
* **retain_graph** (*bool**, **optional*) -- If False, the graph
used to compute the grad will be freed. Note that in nearly
all cases setting this option to True is not needed and often
can be worked around in a much more efficient way. Usually,
you need to set this to True to run backward multiple times.
Example::
>>> import torch.distributed.autograd as dist_autograd
>>> with dist_autograd.context() as context_id:
>>> pred = model.forward()
>>>     loss = loss_func(pred, target)
>>>     dist_autograd.backward(context_id, [loss])
class torch.distributed.autograd.context
Context object to wrap forward and backward passes when using
distributed autograd. The "context_id" generated in the "with"
statement is required to uniquely identify a distributed backward
pass on all workers. Each worker stores metadata associated with
this "context_id", which is required to correctly execute a
distributed autograd pass.
Example::
>>> import torch.distributed.autograd as dist_autograd
>>> with dist_autograd.context() as context_id:
>>> t1 = torch.rand((3, 3), requires_grad=True)
>>> t2 = torch.rand((3, 3), requires_grad=True)
>>> loss = rpc.rpc_sync("worker1", torch.add, args=(t1, t2)).sum()
>>> dist_autograd.backward(context_id, [loss])
torch.distributed.autograd.get_gradients(context_id: int) -> Dict[Tensor, Tensor]
Retrieves a map from Tensor to the appropriate gradient for that
Tensor accumulated in the provided context corresponding to the
given "context_id" as part of the distributed autograd backward
pass.
Parameters:
context_id (int) -- The autograd context id for which we
should retrieve the gradients.
Returns:
A map where the key is the Tensor and the value is the
associated gradient for that Tensor.
Example::
>>> import torch.distributed.autograd as dist_autograd
>>> with dist_autograd.context() as context_id:
>>> t1 = torch.rand((3, 3), requires_grad=True)
>>> t2 = torch.rand((3, 3), requires_grad=True)
>>> loss = t1 + t2
>>> dist_autograd.backward(context_id, [loss.sum()])
>>> grads = dist_autograd.get_gradients(context_id)
>>> print(grads[t1])
>>> print(grads[t2])
More Information about RPC Autograd
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Distributed Autograd Design
Background
Autograd recording during the forward pass
Distributed Autograd Context
Distributed Backward Pass
Computing dependencies
FAST mode algorithm
SMART mode algorithm
Distributed Optimizer
Simple end to end example
Distributed Optimizer
See the torch.distributed.optim page for documentation on distributed
optimizers.
Design Notes
The distributed autograd design note covers the design of the RPC-
based distributed autograd framework that is useful for applications
such as model parallel training.
Distributed Autograd Design
The RRef design note covers the design of the RRef (Remote REFerence)
protocol used to refer to values on remote workers by the framework.
Remote Reference Protocol
Tutorials
The RPC tutorials introduce users to the RPC framework, provide
several example applications using torch.distributed.rpc APIs, and
demonstrate how to use the profiler to profile RPC-based workloads.
Getting started with Distributed RPC Framework
Implementing a Parameter Server using Distributed RPC Framework
Combining Distributed DataParallel with Distributed RPC Framework
(covers RemoteModule as well)
Profiling RPC-based Workloads
Implementing batch RPC processing
Distributed Pipeline Parallel
torch.special
The torch.special module, modeled after SciPy's special module.
Functions
torch.special.airy_ai(input, *, out=None) -> Tensor
Airy function \text{Ai}\left(\text{input}\right).
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
torch.special.bessel_j0(input, *, out=None) -> Tensor
Bessel function of the first kind of order 0.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
torch.special.bessel_j1(input, *, out=None) -> Tensor
Bessel function of the first kind of order 1.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
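The three functions above follow the same elementwise calling convention as the rest of this module; a small sketch (outputs shown approximately):
    >>> x = torch.tensor([0.0, 1.0, 2.0])
    >>> torch.special.airy_ai(x)     # approx. tensor([0.3550, 0.1353, 0.0349])
    >>> torch.special.bessel_j0(x)   # approx. tensor([1.0000, 0.7652, 0.2239])
    >>> torch.special.bessel_j1(x)   # approx. tensor([0.0000, 0.4401, 0.5767])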
torch.special.digamma(input, *, out=None) -> Tensor
Computes the logarithmic derivative of the gamma function on
input.
\digamma(x) = \frac{d}{dx} \ln\left(\Gamma\left(x\right)\right)
= \frac{\Gamma'(x)}{\Gamma(x)}
Parameters:
input (Tensor) -- the tensor to compute the digamma
function on
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Note:
This function is similar to SciPy's *scipy.special.digamma*.
Note:
From PyTorch 1.8 onwards, the digamma function returns *-Inf* for
*0*. Previously it returned *NaN* for *0*.
Example:
>>> a = torch.tensor([1, 0.5])
>>> torch.special.digamma(a)
tensor([-0.5772, -1.9635])
torch.special.entr(input, *, out=None) -> Tensor
Computes the entropy on "input" (as defined below), elementwise.
\begin{align} \text{entr(x)} = \begin{cases} -x * \ln(x) &
x > 0 \\ 0 & x = 0.0 \\ -\infty & x < 0 \end{cases}
\end{align}
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example::
>>> a = torch.arange(-0.5, 1, 0.5)
>>> a
tensor([-0.5000, 0.0000, 0.5000])
>>> torch.special.entr(a)
tensor([ -inf, 0.0000, 0.3466])
torch.special.erf(input, *, out=None) -> Tensor
Computes the error function of "input". The error function is
defined as follows:
\mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^2} dt
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.special.erf(torch.tensor([0, -1., 10.]))
tensor([ 0.0000, -0.8427, 1.0000])
torch.special.erfc(input, *, out=None) -> Tensor
Computes the complementary error function of "input". The
complementary error function is defined as follows:
\mathrm{erfc}(x) = 1 - \frac{2}{\sqrt{\pi}} \int_{0}^{x}
e^{-t^2} dt
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.special.erfc(torch.tensor([0, -1., 10.]))
tensor([ 1.0000, 1.8427, 0.0000])
torch.special.erfcx(input, *, out=None) -> Tensor
Computes the scaled complementary error function for each element
of "input". The scaled complementary error function is defined as
follows:
\mathrm{erfcx}(x) = e^{x^2} \mathrm{erfc}(x)
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.special.erfcx(torch.tensor([0, -1., 10.]))
tensor([ 1.0000, 5.0090, 0.0561])
torch.special.erfinv(input, *, out=None) -> Tensor
Computes the inverse error function of "input". The inverse error
function is defined in the range (-1, 1) as:
\mathrm{erfinv}(\mathrm{erf}(x)) = x
Parameters: | https://pytorch.org/docs/stable/special.html | pytorch docs |
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.special.erfinv(torch.tensor([0, 0.5, -1.]))
tensor([ 0.0000, 0.4769, -inf])
torch.special.exp2(input, *, out=None) -> Tensor
Computes the base two exponential function of "input".
y_{i} = 2^{x_{i}}
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.special.exp2(torch.tensor([0, math.log2(2.), 3, 4]))
tensor([ 1., 2., 8., 16.])
torch.special.expit(input, *, out=None) -> Tensor
Computes the expit (also known as the logistic sigmoid function) of
the elements of "input".
\text{out}_{i} = \frac{1}{1 + e^{-\text{input}_{i}}}
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> t = torch.randn(4)
>>> t
tensor([ 0.9213, 1.0887, -0.8858, -1.7683])
>>> torch.special.expit(t)
tensor([ 0.7153, 0.7481, 0.2920, 0.1458])
torch.special.expm1(input, *, out=None) -> Tensor
Computes the exponential of the elements minus 1 of "input".
y_{i} = e^{x_{i}} - 1
Note:
This function provides greater precision than exp(x) - 1 for
small values of x.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.special.expm1(torch.tensor([0, math.log(2.)]))
tensor([ 0., 1.])
torch.special.gammainc(input, other, *, out=None) -> Tensor
Computes the regularized lower incomplete gamma function:
\text{out}_{i} = \frac{1}{\Gamma(\text{input}_i)}
\int_0^{\text{other}_i} t^{\text{input}_i-1} e^{-t} dt
where both \text{input}_i and \text{other}_i are weakly positive | https://pytorch.org/docs/stable/special.html | pytorch docs |
and at least one is strictly positive. If both are zero or either
is negative then \text{out}_i=\text{nan}. \Gamma(\cdot) in the
equation above is the gamma function,
\Gamma(\text{input}_i) = \int_0^\infty t^{(\text{input}_i-1)}
e^{-t} dt.
See "torch.special.gammaincc()" and "torch.special.gammaln()" for
related functions.
Supports broadcasting to a common shape and float inputs.
Note:
The backward pass with respect to "input" is not yet supported.
Please open an issue on PyTorch's Github to request it.
Parameters:
* input (Tensor) -- the first non-negative input tensor
* **other** (*Tensor*) -- the second non-negative input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a1 = torch.tensor([4.0])
>>> a2 = torch.tensor([3.0, 4.0, 5.0])
>>> a = torch.special.gammainc(a1, a2)
tensor([0.3528, 0.5665, 0.7350])
>>> b = torch.special.gammainc(a1, a2) + torch.special.gammaincc(a1, a2)
tensor([1., 1., 1.])
torch.special.gammaincc(input, other, *, out=None) -> Tensor
Computes the regularized upper incomplete gamma function:
\text{out}_{i} = \frac{1}{\Gamma(\text{input}_i)}
\int_{\text{other}_i}^{\infty} t^{\text{input}_i-1} e^{-t} dt
where both \text{input}_i and \text{other}_i are weakly positive
and at least one is strictly positive. If both are zero or either
is negative then \text{out}_i=\text{nan}. \Gamma(\cdot) in the
equation above is the gamma function,
\Gamma(\text{input}_i) = \int_0^\infty t^{(\text{input}_i-1)}
e^{-t} dt.
See "torch.special.gammainc()" and "torch.special.gammaln()" for
related functions.
Supports broadcasting to a common shape and float inputs.
Note:
The backward pass with respect to "input" is not yet supported.
Please open an issue on PyTorch's Github to request it.
Parameters:
* input (Tensor) -- the first non-negative input tensor
* **other** (*Tensor*) -- the second non-negative input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a1 = torch.tensor([4.0])
>>> a2 = torch.tensor([3.0, 4.0, 5.0])
>>> a = torch.special.gammaincc(a1, a2)
tensor([0.6472, 0.4335, 0.2650])
>>> b = torch.special.gammainc(a1, a2) + torch.special.gammaincc(a1, a2)
tensor([1., 1., 1.])
torch.special.gammaln(input, *, out=None) -> Tensor
Computes the natural logarithm of the absolute value of the gamma
function on "input".
\text{out}_{i} = \ln \Gamma(|\text{input}_{i}|)
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.arange(0.5, 2, 0.5)
>>> torch.special.gammaln(a)
tensor([ 0.5724, 0.0000, -0.1208])
torch.special.i0(input, *, out=None) -> Tensor
Computes the zeroth order modified Bessel function of the first
kind for each element of "input".
\text{out}_{i} = I_0(\text{input}_{i}) = \sum_{k=0}^{\infty}
\frac{(\text{input}_{i}^2/4)^k}{(k!)^2}
Parameters:
input (Tensor) -- the input tensor
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> torch.i0(torch.arange(5, dtype=torch.float32))
tensor([ 1.0000, 1.2661, 2.2796, 4.8808, 11.3019])
torch.special.i0e(input, *, out=None) -> Tensor
Computes the exponentially scaled zeroth order modified Bessel
function of the first kind (as defined below) for each element of
"input".
\text{out}_{i} = \exp(-|x|) * i0(x) = \exp(-|x|) *
\sum_{k=0}^{\infty} \frac{(\text{input}_{i}^2/4)^k}{(k!)^2}
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example::
>>> torch.special.i0e(torch.arange(5, dtype=torch.float32))
tensor([1.0000, 0.4658, 0.3085, 0.2430, 0.2070])
torch.special.i1(input, *, out=None) -> Tensor
Computes the first order modified Bessel function of the first kind
(as defined below) for each element of "input".
\text{out}_{i} = \frac{(\text{input}_{i})}{2} *
\sum_{k=0}^{\infty} \frac{(\text{input}_{i}^2/4)^k}{(k!) *
(k+1)!}
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example::
>>> torch.special.i1(torch.arange(5, dtype=torch.float32))
tensor([0.0000, 0.5652, 1.5906, 3.9534, 9.7595])
torch.special.i1e(input, *, out=None) -> Tensor
Computes the exponentially scaled first order modified Bessel
function of the first kind (as defined below) for each element of
"input". | https://pytorch.org/docs/stable/special.html | pytorch docs |
"input".
\text{out}_{i} = \exp(-|x|) * i1(x) = \exp(-|x|) *
\frac{(\text{input}_{i})}{2} * \sum_{k=0}^{\infty}
\frac{(\text{input}_{i}^2/4)^k}{(k!) * (k+1)!}
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example::
>>> torch.special.i1e(torch.arange(5, dtype=torch.float32))
tensor([0.0000, 0.2079, 0.2153, 0.1968, 0.1788])
torch.special.log1p(input, *, out=None) -> Tensor
Alias for "torch.log1p()".
torch.special.log_ndtr(input, *, out=None) -> Tensor
Computes the log of the area under the standard Gaussian
probability density function, integrated from minus infinity to
"input", elementwise.
\text{log\_ndtr}(x) = \log\left(\frac{1}{\sqrt{2
\pi}}\int_{-\infty}^{x} e^{-\frac{1}{2}t^2} dt \right)
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example::
>>> torch.special.log_ndtr(torch.tensor([-3., -2, -1, 0, 1, 2, 3]))
tensor([-6.6077 -3.7832 -1.841 -0.6931 -0.1728 -0.023 -0.0014])
torch.special.log_softmax(input, dim, *, dtype=None) -> Tensor
Computes softmax followed by a logarithm.
While mathematically equivalent to log(softmax(x)), doing these two
operations separately is slower and numerically unstable. This
function is computed as:
\text{log\_softmax}(x_{i}) = \log\left(\frac{\exp(x_i) }{ \sum_j
\exp(x_j)} \right)
Parameters:
* input (Tensor) -- input
* **dim** (*int*) -- A dimension along which log_softmax will be
computed.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. If specified, the input tensor is cast to
"dtype" before the operation is performed. This is useful for
preventing data type overflows. Default: None.
Example::
>>> t = torch.ones(2, 2)
>>> torch.special.log_softmax(t, 0)
tensor([[-0.6931, -0.6931],
[-0.6931, -0.6931]])
torch.special.logit(input, eps=None, *, out=None) -> Tensor
Returns a new tensor with the logit of the elements of "input".
"input" is clamped to [eps, 1 - eps] when eps is not None. When eps
is None and "input" < 0 or "input" > 1, the function will yield
NaN.
\begin{align} y_{i} &= \ln(\frac{z_{i}}{1 - z_{i}}) \\ z_{i} &=
\begin{cases} x_{i} & \text{if eps is None} \\
\text{eps} & \text{if } x_{i} < \text{eps} \\ x_{i} &
\text{if } \text{eps} \leq x_{i} \leq 1 - \text{eps} \\ 1 -
\text{eps} & \text{if } x_{i} > 1 - \text{eps} \end{cases}
\end{align}
Parameters:
* input (Tensor) -- the input tensor.
* **eps** (*float**, **optional*) -- the epsilon for input clamp
bound. Default: "None"
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.rand(5)
>>> a
tensor([0.2796, 0.9331, 0.6486, 0.1523, 0.6516])
>>> torch.special.logit(a, eps=1e-6)
tensor([-0.9466, 2.6352, 0.6131, -1.7169, 0.6261])
torch.special.logsumexp(input, dim, keepdim=False, *, out=None)
Alias for "torch.logsumexp()".
torch.special.multigammaln(input, p, *, out=None) -> Tensor
Computes the multivariate log-gamma function with dimension p
element-wise, given by
\log(\Gamma_{p}(a)) = C + \displaystyle \sum_{i=1}^{p}
\log\left(\Gamma\left(a - \frac{i - 1}{2}\right)\right)
where C = \log(\pi) \cdot \frac{p (p - 1)}{4} and \Gamma(-) is the
Gamma function.
All elements must be greater than \frac{p - 1}{2}, otherwise the
behavior is undefined.
Parameters:
* input (Tensor) -- the tensor to compute the multivariate
log-gamma function
* **p** (*int*) -- the number of dimensions
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> a = torch.empty(2, 3).uniform_(1, 2)
>>> a
tensor([[1.6835, 1.8474, 1.1929],
[1.0475, 1.7162, 1.4180]])
>>> torch.special.multigammaln(a, 2)
tensor([[0.3928, 0.4007, 0.7586],
[1.0311, 0.3901, 0.5049]])
torch.special.ndtr(input, *, out=None) -> Tensor
Computes the area under the standard Gaussian probability density
function, integrated from minus infinity to "input", elementwise.
\text{ndtr}(x) = \frac{1}{\sqrt{2 \pi}}\int_{-\infty}^{x}
e^{-\frac{1}{2}t^2} dt
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example::
>>> torch.special.ndtr(torch.tensor([-3., -2, -1, 0, 1, 2, 3]))
tensor([0.0013, 0.0228, 0.1587, 0.5000, 0.8413, 0.9772, 0.9987])
torch.special.ndtri(input, *, out=None) -> Tensor
Computes the argument, x, for which the area under the Gaussian
probability density function (integrated from minus infinity to x)
is equal to "input", elementwise.
\text{ndtri}(p) = \sqrt{2}\text{erf}^{-1}(2p - 1)
Note:
Also known as quantile function for Normal Distribution.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example::
>>> torch.special.ndtri(torch.tensor([0, 0.25, 0.5, 0.75, 1]))
tensor([ -inf, -0.6745, 0.0000, 0.6745, inf])
torch.special.polygamma(n, input, *, out=None) -> Tensor
Computes the n^{th} derivative of the digamma function on "input".
n \geq 0 is called the order of the polygamma function.
\psi^{(n)}(x) = \frac{d^{(n)}}{dx^{(n)}} \psi(x)
Note:
This function is implemented only for nonnegative integers n \geq
0.
Parameters:
* n (int) -- the order of the polygamma function
* **input** (*Tensor*) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example::
>>> a = torch.tensor([1, 0.5])
>>> torch.special.polygamma(1, a)
tensor([1.64493, 4.9348])
>>> torch.special.polygamma(2, a)
tensor([ -2.4041, -16.8288])
>>> torch.special.polygamma(3, a)
tensor([ 6.4939, 97.4091])
>>> torch.special.polygamma(4, a)
tensor([ -24.8863, -771.4742])
torch.special.psi(input, *, out=None) -> Tensor
Alias for "torch.special.digamma()".
torch.special.round(input, *, out=None) -> Tensor
Alias for "torch.round()".
torch.special.scaled_modified_bessel_k0(input, *, out=None) -> Tensor
Scaled modified Bessel function of the second kind of order 0.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
torch.special.scaled_modified_bessel_k1(input, *, out=None) -> Tensor
Scaled modified Bessel function of the second kind of order 1.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
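Assuming these follow the SciPy convention for the exponentially scaled functions, i.e. \exp(x) K_\nu(x) as in scipy.special.k0e and k1e, a small sketch (outputs shown approximately):
    >>> x = torch.tensor([1.0, 2.0])
    >>> torch.special.scaled_modified_bessel_k0(x)  # approx. tensor([1.1445, 0.8416])
    >>> torch.special.scaled_modified_bessel_k1(x)  # approx. tensor([1.6362, 1.0335])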
torch.special.sinc(input, *, out=None) -> Tensor
Computes the normalized sinc of "input."
\text{out}_{i} = \begin{cases} 1, & \text{if}\
\text{input}_{i}=0 \\ \sin(\pi \text{input}_{i}) / (\pi
\text{input}_{i}), & \text{otherwise} \end{cases}
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example::
>>> t = torch.randn(4)
>>> t
tensor([ 0.2252, -0.2948, 1.0267, -1.1566])
>>> torch.special.sinc(t)
tensor([ 0.9186, 0.8631, -0.0259, -0.1300])
torch.special.softmax(input, dim, *, dtype=None) -> Tensor
Computes the softmax function.
Softmax is defined as:
\text{Softmax}(x_{i}) = \frac{\exp(x_i)}{\sum_j \exp(x_j)}
It is applied to all slices along dim, and will re-scale them so
that the elements lie in the range [0, 1] and sum to 1.
Parameters:
* input (Tensor) -- input
* **dim** (*int*) -- A dimension along which softmax will be
computed.
* **dtype** ("torch.dtype", optional) -- the desired data type
of returned tensor. If specified, the input tensor is cast to
"dtype" before the operation is performed. This is useful for
preventing data type overflows. Default: None.
Examples::
>>> t = torch.ones(2, 2)
>>> torch.special.softmax(t, 0)
tensor([[0.5000, 0.5000],
[0.5000, 0.5000]])
torch.special.spherical_bessel_j0(input, *, out=None) -> Tensor
Spherical Bessel function of the first kind of order 0.
Parameters:
input (Tensor) -- the input tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
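The spherical Bessel function of the first kind of order 0 is \sin(x)/x; a small sketch (outputs shown approximately):
    >>> x = torch.tensor([1.0, 2.0, 3.0])
    >>> torch.special.spherical_bessel_j0(x)  # approx. tensor([0.8415, 0.4546, 0.0470])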
torch.special.xlog1py(input, other, *, out=None) -> Tensor
Computes "input * log1p(other)" with the following cases.
\text{out}_{i} = \begin{cases} \text{NaN} & \text{if }
\text{other}_{i} = \text{NaN} \\ 0 & \text{if }
\text{input}_{i} = 0.0 \text{ and } \text{other}_{i} !=
\text{NaN} \\ \text{input}_{i} *
\text{log1p}(\text{other}_{i})& \text{otherwise} \end{cases}
Similar to SciPy's scipy.special.xlog1py.
Parameters:
* input (Number or Tensor) -- Multiplier
* **other** (*Number** or **Tensor*) -- Argument
Note:
At least one of "input" or "other" must be a tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> x = torch.zeros(5,)
>>> y = torch.tensor([-1, 0, 1, float('inf'), float('nan')])
>>> torch.special.xlog1py(x, y)
tensor([0., 0., 0., 0., nan])
>>> x = torch.tensor([1, 2, 3])
>>> y = torch.tensor([3, 2, 1])
>>> torch.special.xlog1py(x, y)
tensor([1.3863, 2.1972, 2.0794])
>>> torch.special.xlog1py(x, 4)
tensor([1.6094, 3.2189, 4.8283])
>>> torch.special.xlog1py(2, y)
tensor([2.7726, 2.1972, 1.3863])
torch.special.xlogy(input, other, *, out=None) -> Tensor
Computes "input * log(other)" with the following cases.
\text{out}_{i} = \begin{cases} \text{NaN} & \text{if }
\text{other}_{i} = \text{NaN} \\ 0 & \text{if }
\text{input}_{i} = 0.0 \\ \text{input}_{i} *
\log{(\text{other}_{i})} & \text{otherwise} \end{cases}
Similar to SciPy's scipy.special.xlogy.
Parameters:
* input (Number or Tensor) -- Multiplier
* **other** (*Number** or **Tensor*) -- Argument
Note:
At least one of "input" or "other" must be a tensor.
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example:
>>> x = torch.zeros(5,)
>>> y = torch.tensor([-1, 0, 1, float('inf'), float('nan')])
>>> torch.special.xlogy(x, y)
tensor([0., 0., 0., 0., nan])
>>> x = torch.tensor([1, 2, 3])
>>> y = torch.tensor([3, 2, 1])
>>> torch.special.xlogy(x, y)
tensor([1.0986, 1.3863, 0.0000])
>>> torch.special.xlogy(x, 4)
tensor([1.3863, 2.7726, 4.1589])
>>> torch.special.xlogy(2, y)
tensor([2.1972, 1.3863, 0.0000])
torch.special.zeta(input, other, *, out=None) -> Tensor
Computes the Hurwitz zeta function, elementwise.
\zeta(x, q) = \sum_{k=0}^{\infty} \frac{1}{(k + q)^x}
Parameters:
* input (Tensor) -- the input tensor corresponding to x.
* **other** (*Tensor*) -- the input tensor corresponding to *q*.
Note:
The Riemann zeta function corresponds to the case when *q = 1*
Keyword Arguments:
out (Tensor, optional) -- the output tensor.
Example::
>>> x = torch.tensor([2., 4.])
>>> torch.special.zeta(x, 1)
tensor([1.6449, 1.0823])
>>> torch.special.zeta(x, torch.tensor([1., 2.]))
tensor([1.6449, 0.0823])
>>> torch.special.zeta(2, torch.tensor([1., 2.]))
tensor([1.6449, 0.6449])
torch.utils.bottleneck
torch.utils.bottleneck is a tool that can be used as an initial step
for debugging bottlenecks in your program. It summarizes runs of your
script with the Python profiler and PyTorch's autograd profiler.
Run it on the command line with
python -m torch.utils.bottleneck /path/to/source/script.py [args]
where [args] are any number of arguments to script.py, or run
"python -m torch.utils.bottleneck -h" for more usage instructions.
Warning:
Because your script will be profiled, please ensure that it exits in
a finite amount of time.
Warning:
Due to the asynchronous nature of CUDA kernels, when running against
CUDA code, the cProfile output and CPU-mode autograd profilers may
not show correct timings: the reported CPU time reports the amount
of time used to launch the kernels but does not include the time the
kernel spent executing on a GPU unless the operation does a
synchronize. Ops that do synchronize appear to be extremely
expensive under regular CPU-mode profilers. In these cases where
timings are incorrect, the CUDA-mode autograd profiler may be
helpful.
Note:
To decide which (CPU-only-mode or CUDA-mode) autograd profiler
output to look at, you should first check if your script is CPU-
bound ("CPU total time is much greater than CUDA total time"). If it
is CPU-bound, looking at the results of the CPU-mode autograd
profiler will help. If on the other hand your script spends most of
its time executing on the GPU, then it makes sense to start looking
for responsible CUDA operators in the output of the CUDA-mode
autograd profiler.Of course the reality is much more complicated and
your script might not be in one of those two extremes depending on
the part of the model you're evaluating. If the profiler outputs
don't help, you could try looking at the result of
"torch.autograd.profiler.emit_nvtx()" with "nvprof". However, please
take into account that the NVTX overhead is very high and often
gives a heavily skewed timeline. Similarly, "Intel® VTune™ Profiler"
helps to analyze performance on Intel platforms further with
"torch.autograd.profiler.emit_itt()".
Warning:
If you are profiling CUDA code, the first profiler that "bottleneck"
runs (cProfile) will include the CUDA startup time (CUDA buffer
allocation cost) in its time reporting. This should not matter if
your bottlenecks result in code much slower than the CUDA startup
time.
For more complicated uses of the profilers (like in a multi-GPU case),
please see https://docs.python.org/3/library/profile.html or
"torch.autograd.profiler.profile()" for more information. | https://pytorch.org/docs/stable/bottleneck.html | pytorch docs |
Frequently Asked Questions
My model reports "cuda runtime error(2): out of memory"
As the error message suggests, you have run out of memory on your GPU.
Since we often deal with large amounts of data in PyTorch, small
mistakes can rapidly cause your program to use up all of your GPU;
fortunately, the fixes in these cases are often simple. Here are a few
common things to check:
Don't accumulate history across your training loop. By default,
computations involving variables that require gradients will keep
history. This means that you should avoid using such variables in
computations which will live beyond your training loops, e.g., when
tracking statistics. Instead, you should detach the variable or access
its underlying data.
Sometimes, it can be non-obvious when differentiable variables can
occur. Consider the following training loop (abridged from source):
total_loss = 0
for i in range(10000):
optimizer.zero_grad()
output = model(input)
loss = criterion(output)
loss.backward()
optimizer.step()
total_loss += loss
Here, "total_loss" is accumulating history across your training loop,
since "loss" is a differentiable variable with autograd history. You
can fix this by writing total_loss += float(loss) instead.
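For reference, the same loop with that fix applied (a sketch of the corrected pattern, not additional API):
    total_loss = 0.0
    for i in range(10000):
        optimizer.zero_grad()
        output = model(input)
        loss = criterion(output)
        loss.backward()
        optimizer.step()
        total_loss += float(loss)  # only a Python number is kept across iterations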
Other instances of this problem: 1.
Don't hold onto tensors and variables you don't need. If you
assign a Tensor or Variable to a local, Python will not deallocate
until the local goes out of scope. You can free this reference by
using "del x". Similarly, if you assign a Tensor or Variable to a
member variable of an object, it will not deallocate until the object
goes out of scope. You will get the best memory usage if you don't
hold onto temporaries you don't need.
The scopes of locals can be larger than you expect. For example:
for i in range(5):
intermediate = f(input[i])
result += g(intermediate)
output = h(result)
return output
Here, "intermediate" remains live even while "h" is executing, because
its scope extrudes past the end of the loop. To free it earlier, you
should "del intermediate" when you are done with it.
Avoid running RNNs on sequences that are too large. The amount of
memory required to backpropagate through an RNN scales linearly with
the length of the RNN input; thus, you will run out of memory if you
try to feed an RNN a sequence that is too long.
The technical term for this phenomenon is backpropagation through
time, and there are plenty of references for how to implement
truncated BPTT, including in the word language model example;
truncation is handled by the "repackage" function as described in this
forum post.
Don't use linear layers that are too large. A linear layer
"nn.Linear(m, n)" uses O(nm) memory: that is to say, the memory | https://pytorch.org/docs/stable/notes/faq.html | pytorch docs |
requirements of the weights scales quadratically with the number of
features. It is very easy to blow through your memory this way (and
remember that you will need at least twice the size of the weights,
since you also need to store the gradients.)
Consider checkpointing. You can trade-off memory for compute by
using checkpoint.
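A minimal sketch of checkpointing with "torch.utils.checkpoint" (module sizes are illustrative): activations inside the checkpointed segment are recomputed during the backward pass instead of being stored.
    import torch
    from torch.utils.checkpoint import checkpoint

    seg1 = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU())
    seg2 = torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.ReLU())

    x = torch.randn(32, 512, requires_grad=True)
    y = checkpoint(seg1, x)  # seg1's activations are not kept in memory
    y = seg2(y)
    y.sum().backward()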
My GPU memory isn't freed properly
PyTorch uses a caching memory allocator to speed up memory
allocations. As a result, the values shown in "nvidia-smi" usually
don't reflect the true memory usage. See Memory management for more
details about GPU memory management.
If your GPU memory isn't freed even after Python quits, it is very
likely that some Python subprocesses are still alive. You may find
them via "ps -elf | grep python" and manually kill them with "kill -9
[pid]".
My out of memory exception handler can't allocate memory | https://pytorch.org/docs/stable/notes/faq.html | pytorch docs |
You may have some code that tries to recover from out of memory
errors.
try:
run_model(batch_size)
except RuntimeError: # Out of memory
for _ in range(batch_size):
run_model(1)
But you may find that when you do run out of memory, your recovery code can't
allocate either. That's because the python exception object holds a
reference to the stack frame where the error was raised. Which
prevents the original tensor objects from being freed. The solution is
to move your OOM recovery code outside of the "except" clause.
oom = False
try:
run_model(batch_size)
except RuntimeError: # Out of memory
oom = True
if oom:
for _ in range(batch_size):
run_model(1)
My data loader workers return identical random numbers
You are likely using other libraries to generate random numbers in the
dataset and worker subprocesses are started via "fork". See
"torch.utils.data.DataLoader"'s documentation for how to properly set
up random seeds in workers with its "worker_init_fn" option.
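A sketch of such a "worker_init_fn" (NumPy stands in for the "other library" and is assumed to be installed; "dataset" is whatever dataset you already use):
    import numpy as np
    import torch
    from torch.utils.data import DataLoader

    def worker_init_fn(worker_id):
        # torch.initial_seed() already differs per worker and per epoch.
        np.random.seed(torch.initial_seed() % 2**32)

    loader = DataLoader(dataset, batch_size=32, num_workers=4,
                        worker_init_fn=worker_init_fn)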
My recurrent network doesn't work with data parallelism
There is a subtlety in using the "pack sequence -> recurrent network
-> unpack sequence" pattern in a "Module" with "DataParallel" or
"data_parallel()". Input to each the "forward()" on each device will
only be part of the entire input. Because the unpack operation
"torch.nn.utils.rnn.pad_packed_sequence()" by default only pads up to
the longest input it sees, i.e., the longest on that particular
device, size mismatches will happen when results are gathered
together. Therefore, you can instead take advantage of the
"total_length" argument of "pad_packed_sequence()" to make sure that
the "forward()" calls return sequences of same length. For example,
you can write:
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence
class MyModule(nn.Module):
# ... init, other methods, etc.
# padded_input is of shape [B x T x *] (batch_first mode) and contains
# the sequences sorted by lengths
# B is the batch size
# T is max sequence length
def forward(self, padded_input, input_lengths):
total_length = padded_input.size(1) # get the max sequence length
packed_input = pack_padded_sequence(padded_input, input_lengths,
batch_first=True)
packed_output, _ = self.my_lstm(packed_input)
output, _ = pad_packed_sequence(packed_output, batch_first=True,
total_length=total_length)
return output
m = MyModule().cuda()
dp_m = nn.DataParallel(m)
Additionally, extra care needs to be taken when batch dimension is dim
"1" (i.e., "batch_first=False") with data parallelism. In this case, | https://pytorch.org/docs/stable/notes/faq.html | pytorch docs |
the first argument of pack_padded_sequence "padding_input" will be of
shape "[T x B x *]" and should be scattered along dim "1", but the
second argument "input_lengths" will be of shape "[B]" and should be
scattered along dim "0". Extra code to manipulate the tensor shapes
will be needed.
MPS backend
"mps" device enables high-performance training on GPU for MacOS
devices with Metal programming framework. It introduces a new device
to map Machine Learning computational graphs and primitives on highly
efficient Metal Performance Shaders Graph framework and tuned kernels
provided by Metal Performance Shaders framework respectively.
The new MPS backend extends the PyTorch ecosystem and provides
existing scripts the capability to set up and run operations on the GPU.
To get started, simply move your Tensor and Module to the "mps"
device:
# Check that MPS is available
if not torch.backends.mps.is_available():
    if not torch.backends.mps.is_built():
        print("MPS not available because the current PyTorch install was not "
              "built with MPS enabled.")
    else:
        print("MPS not available because the current MacOS version is not 12.3+ "
              "and/or you do not have an MPS-enabled device on this machine.")
else: | https://pytorch.org/docs/stable/notes/mps.html | pytorch docs |
    mps_device = torch.device("mps")

    # Create a Tensor directly on the mps device
    x = torch.ones(5, device=mps_device)
    # Or
    x = torch.ones(5, device="mps")

    # Any operation happens on the GPU
    y = x * 2

    # Move your model to mps just like any other device
    model = YourFavoriteNet()
    model.to(mps_device)

    # Now every call runs on the GPU
    pred = model(x)
| https://pytorch.org/docs/stable/notes/mps.html | pytorch docs |
Distributed Data Parallel
Warning:
The implementation of "torch.nn.parallel.DistributedDataParallel"
evolves over time. This design note is written based on the state as
of v1.4.
"torch.nn.parallel.DistributedDataParallel" (DDP) transparently
performs distributed data parallel training. This page describes how
it works and reveals implementation details.
Example
Let us start with a simple "torch.nn.parallel.DistributedDataParallel"
example. This example uses a "torch.nn.Linear" as the local model,
wraps it with DDP, and then runs one forward pass, one backward pass,
and an optimizer step on the DDP model. After that, parameters on the
local model will be updated, and all models on different processes
should be exactly the same.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp
import torch.nn as nn
import torch.optim as optim
from torch.nn.parallel import DistributedDataParallel as DDP | https://pytorch.org/docs/stable/notes/ddp.html | pytorch docs |
def example(rank, world_size):
    # create default process group
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    # create local model
    model = nn.Linear(10, 10).to(rank)
    # construct DDP model
    ddp_model = DDP(model, device_ids=[rank])
    # define loss function and optimizer
    loss_fn = nn.MSELoss()
    optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)

    # forward pass
    outputs = ddp_model(torch.randn(20, 10).to(rank))
    labels = torch.randn(20, 10).to(rank)
    # backward pass
    loss_fn(outputs, labels).backward()
    # update parameters
    optimizer.step()

def main():
    world_size = 2
    mp.spawn(example,
             args=(world_size,),
             nprocs=world_size,
             join=True)
if __name__ == "__main__":
    # Environment variables which need to be
    # set when using c10d's default "env"
    # initialization mode.
os.environ["MASTER_ADDR"] = "localhost" | https://pytorch.org/docs/stable/notes/ddp.html | pytorch docs |
os.environ["MASTER_ADDR"] = "localhost"
os.environ["MASTER_PORT"] = "29500"
main()
DDP works with TorchDynamo. When used with TorchDynamo, apply the DDP
model wrapper before compiling the model, such that torchdynamo can
apply "DDPOptimizer" (graph-break optimizations) based on DDP bucket
sizes. (See TorchDynamo DDPOptimizer for more information.)
TorchDynamo support for DDP currently requires setting
static_graph=False, due to interactions between the graph tracing
process and DDP's mechanism for observing operations happening on its
module, but this is expected to be fixed eventually.
ddp_model = DDP(model, device_ids=[rank])
ddp_model = torch.compile(ddp_model)
Internal Design
This section reveals how it works under the hood of
"torch.nn.parallel.DistributedDataParallel" by diving into details of
every step in one iteration.
Prerequisite: DDP relies on c10d "ProcessGroup" for
communications. Hence, applications must create "ProcessGroup"
| https://pytorch.org/docs/stable/notes/ddp.html | pytorch docs |
instances before constructing DDP.
Construction: The DDP constructor takes a reference to the local
module, and broadcasts "state_dict()" from the process with rank 0
to all other processes in the group to make sure that all model
replicas start from the exact same state. Then, each DDP process
creates a local "Reducer", which later will take care of the
gradients synchronization during the backward pass. To improve
communication efficiency, the "Reducer" organizes parameter
gradients into buckets, and reduces one bucket at a time. Bucket
size can be configured by setting the bucket_cap_mb argument in
DDP constructor. The mapping from parameter gradients to buckets is
determined at the construction time, based on the bucket size limit
and parameter sizes. Model parameters are allocated into buckets in
(roughly) the reverse order of "Model.parameters()" from the given
model. The reverse order is used because DDP expects
| https://pytorch.org/docs/stable/notes/ddp.html | pytorch docs |
gradients to become ready during the backward pass in approximately
that order. The figure below shows an example. Note that, the
"grad0" and "grad1" are in "bucket1", and the other two gradients
are in "bucket0". Of course, this assumption might not always be
true, and when that happens it could hurt DDP backward speed as the
"Reducer" cannot kick off the communication at the earliest possible
time. Besides bucketing, the "Reducer" also registers autograd hooks
during construction, one hook per parameter. These hooks will be
triggered during the backward pass when the gradient becomes ready.
Forward Pass: The DDP takes the input and passes it to the local
model, and then analyzes the output from the local model if
"find_unused_parameters" is set to "True". This mode allows running
backward on a subgraph of the model, and DDP finds out which
parameters are involved in the backward pass by traversing the
autograd graph from the model output and marking all unused
| https://pytorch.org/docs/stable/notes/ddp.html | pytorch docs |
parameters as ready for reduction. During the backward pass, the
"Reducer" would only wait for unready parameters, but it would still
reduce all buckets. Marking a parameter gradient as ready does not
help DDP skip buckets for now, but it will prevent DDP from
waiting for absent gradients forever during the backward pass. Note
that traversing the autograd graph introduces extra overheads, so
applications should only set "find_unused_parameters" to "True" when
necessary.
Backward Pass: The "backward()" function is directly invoked on
the loss "Tensor", which is out of DDP's control, and DDP uses
autograd hooks registered at construction time to trigger gradients
synchronizations. When one gradient becomes ready, its corresponding
DDP hook on that grad accumulator will fire, and DDP will then mark
that parameter gradient as ready for reduction. When gradients in
one bucket are all ready, the "Reducer" kicks off an asynchronous
| https://pytorch.org/docs/stable/notes/ddp.html | pytorch docs |
"allreduce" on that bucket to calculate mean of gradients across all
processes. When all buckets are ready, the "Reducer" will block
waiting for all "allreduce" operations to finish. When this is done,
averaged gradients are written to the "param.grad" field of all
parameters. So after the backward pass, the grad field on the same
corresponding parameter across different DDP processes should be the
same.
Optimizer Step: From the optimizer's perspective, it is
optimizing a local model. Model replicas on all DDP processes can
keep in sync because they all start from the same state and they
have the same averaged gradients in every iteration.
[image: ddp_grad_sync.png]
Note:
DDP requires "Reducer" instances on all processes to invoke
"allreduce" in exactly the same order, which is done by always
running "allreduce" in the bucket index order instead of actual
bucket ready order. Mismatched "allreduce" order across processes | https://pytorch.org/docs/stable/notes/ddp.html | pytorch docs |
can lead to wrong results or DDP backward hang.
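As a rough, unofficial sketch tying the walkthrough above back to code:
the construction-time knobs appear in the DDP constructor, and the claim
that replicas stay in sync can be checked by comparing parameters
against a copy broadcast from rank 0 (this assumes an initialized
process group whose backend supports broadcasting these tensors):

import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Construction-time knobs discussed above (the defaults are usually fine):
#   ddp_model = DDP(model, device_ids=[rank], bucket_cap_mb=25,
#                   find_unused_parameters=False)

def check_replicas_in_sync(ddp_model: DDP) -> None:
    # After backward() and optimizer.step(), every rank should hold
    # identical parameters, since all replicas started from the same
    # state and applied the same averaged gradients.
    for p in ddp_model.module.parameters():
        ref = p.detach().clone()
        dist.broadcast(ref, src=0)
        assert torch.equal(p.detach(), ref)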
Implementation
Below are pointers to the DDP implementation components. The stacked
graph shows the structure of the code.
ProcessGroup
ProcessGroup.hpp: contains the abstract API of all process group
implementations. The "c10d" library provides 3 implementations out
of the box, namely, ProcessGroupGloo, ProcessGroupNCCL, and
ProcessGroupMPI. "DistributedDataParallel" uses
"ProcessGroup::broadcast()" to send model states from the process
with rank 0 to others during initialization and
"ProcessGroup::allreduce()" to sum gradients.
Store.hpp: assists the rendezvous service for process group
instances to find each other.
DistributedDataParallel
distributed.py: is the Python entry point for DDP. It implements the
initialization steps and the "forward" function for the
"nn.parallel.DistributedDataParallel" module which call into C++
| https://pytorch.org/docs/stable/notes/ddp.html | pytorch docs |
libraries. Its "_sync_param" function performs intra-process
parameter synchronization when one DDP process works on multiple
devices, and it also broadcasts model buffers from the process with
rank 0 to all other processes. The inter-process parameter
synchronization happens in "Reducer.cpp".
comm.h: implements the coalesced broadcast helper function which is
invoked to broadcast model states during initialization and
synchronize model buffers before the forward pass.
reducer.h: provides the core implementation for gradient
synchronization in the backward pass. It has three entry point
functions:
"Reducer": The constructor is called in "distributed.py" which
registers "Reducer::autograd_hook()" to gradient accumulators.
"autograd_hook()" function will be invoked by the autograd engine
when a gradient becomes ready.
"prepare_for_backward()" is called at the end of DDP forward pass
in "distributed.py". It traverses the autograd graph to find
| https://pytorch.org/docs/stable/notes/ddp.html | pytorch docs |
unused parameters when "find_unused_parameters" is set to "True"
in DDP constructor.
[image: ddp_code.png]
TorchDynamo DDPOptimizer
DDP's performance advantage comes from overlapping allreduce
collectives with computations during backwards. AotAutograd prevents
this overlap when used with TorchDynamo for compiling a whole forward
and whole backward graph, because allreduce ops are launched by
autograd hooks after the whole optimized backwards computation
finishes.
TorchDynamo's DDPOptimizer helps by breaking the forward graph at the
logical boundaries of DDP's allreduce buckets during backwards. Note:
the goal is to break the graph during backwards, and the simplest
implementation is to break the forward graphs and then call
AotAutograd and compilation on each section. This allows DDP's
allreduce hooks to fire in-between sections of backwards, and schedule
communications to overlap with compute. | https://pytorch.org/docs/stable/notes/ddp.html | pytorch docs |
See this blog post for a more in-depth explanation and experimental
results, or read the docs and code at
torch/_dynamo/optimizations/distributed.py
To Debug DDPOptimizer, set torch._dynamo.config.log_level to DEBUG
(for full graph dumps) or INFO (for basic info about bucket
boundaries). To disable DDPOptimizer, set
torch._dynamo.config.optimize_ddp=False. DDP and TorchDynamo should
still work correctly without DDPOptimizer, but with performance
degradation. | https://pytorch.org/docs/stable/notes/ddp.html | pytorch docs |
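A minimal sketch of the debugging workflow described above, using the
config attribute names given in this note (these are internal settings
and may move or be renamed between releases):

import logging

import torch
import torch._dynamo

torch._dynamo.config.log_level = logging.INFO  # INFO: bucket boundaries; DEBUG: full graph dumps
torch._dynamo.config.optimize_ddp = True       # False disables DDPOptimizer entirely

# Then compile the already-wrapped model, e.g.:
#   ddp_model = DDP(model, device_ids=[rank])
#   ddp_model = torch.compile(ddp_model)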
Features for large-scale deployments
* Fleet-wide operator profiling
API usage logging
Attaching metadata to saved TorchScript models
Build environment considerations
Common extension points
This note talks about several extension points and tricks that might
be useful when running PyTorch within a larger system or operating
multiple systems using PyTorch in a larger organization.
It doesn't cover topics of deploying models to production. Check
"torch.jit" or one of the corresponding tutorials.
The note assumes that you either build PyTorch from source in your
organization or have an ability to statically link additional code to
be loaded when PyTorch is used. Therefore, many of the hooks are
exposed as C++ APIs that can be triggered once in a centralized place,
e.g. in static initialization code.
Fleet-wide operator profiling
PyTorch comes with "torch.autograd.profiler" capable of measuring time | https://pytorch.org/docs/stable/notes/large_scale_deployments.html | pytorch docs |
taken by individual operators on demand. One can use the same
mechanism to do "always ON" measurements for any process running
PyTorch. It might be useful for gathering information about PyTorch
workloads running in a given process or across the entire set of
machines.
New callbacks for any operator invocation can be added with
"torch::addGlobalCallback". Hooks will be called with
"torch::RecordFunction" struct that describes invocation context (e.g.
name). If enabled, "RecordFunction::inputs()" contains arguments of
the function represented as the "torch::IValue" variant type. Note that
inputs logging is relatively expensive and thus has to be enabled
explicitly.
The operator callbacks also have access to
"c10::ThreadLocalDebugInfo::get()" interface that returns a pointer to
the struct holding the debug information. This debug information can
be set earlier by using "at::DebugInfoGuard" object. Debug information
is propagated through the forward (including async "fork" tasks) and | https://pytorch.org/docs/stable/notes/large_scale_deployments.html | pytorch docs |
backward passes and can be useful for passing some extra information
about execution environment (e.g. model id) from the higher layers of
the application down to the operator callbacks.
Invoking callbacks adds some overhead, so usually it's useful to just
randomly sample operator invocations. This can be enabled on a per-
callback basis with an optional sampling rate passed into
"torch::addGlobalCallback".
Note, that "addGlobalCallback" is not thread-safe and can be called
only when no PyTorch operator is running. Usually, it's a good idea to
call them once during initialization.
Here's an example:
// Called somewhere in the program beginning
void init() {
  // Sample one in a hundred operator runs randomly
  addGlobalCallback(
      RecordFunctionCallback(
          &onFunctionEnter,
          &onFunctionExit)
      .needsInputs(true)
      .samplingProb(0.01)
  );
  // Note, to enable observers in the model calling thread, | https://pytorch.org/docs/stable/notes/large_scale_deployments.html | pytorch docs |
  // call enableRecordFunction() in the thread before running a model
}

void onFunctionEnter(const RecordFunction& fn) {
  std::cerr << "Before function " << fn.name()
            << " with " << fn.inputs().size() << " inputs" << std::endl;
}

void onFunctionExit(const RecordFunction& fn) {
  std::cerr << "After function " << fn.name();
}
API usage logging
When running in a broader ecosystem, for example in a managed job
scheduler, it's often useful to track which binaries invoke particular
PyTorch APIs. There exists simple instrumentation injected at several
important API points that triggers a given callback. Because usually
PyTorch is invoked in one-off python scripts, the callback fires only
once for a given process for each of the APIs.
"c10::SetAPIUsageHandler" can be used to register API usage
instrumentation handler. The passed argument is going to be an "api
key" identifying the used point, for example "python.import" for PyTorch | https://pytorch.org/docs/stable/notes/large_scale_deployments.html | pytorch docs |
extension import or "torch.script.compile" if TorchScript compilation
was triggered.
SetAPIUsageLogger([](const std::string& event_name) {
  std::cerr << "API was used: " << event_name << std::endl;
});
Note for developers: new API trigger points can be added in code with
"C10_LOG_API_USAGE_ONCE("my_api")" in C++ or
"torch._C._log_api_usage_once("my.api")" in Python.
Attaching metadata to saved TorchScript models
TorchScript modules can be saved as an archive file that bundles
serialized parameters and module code as TorchScript (see
"torch.jit.save()"). It's often convenient to bundle additional
information together with the model, for example, a description of the model
producer or auxiliary artifacts.
It can be achieved by passing the "_extra_files" argument to
"torch.jit.save()" and "torch::jit::load" to store and retrieve
arbitrary binary blobs during saving process. Since TorchScript files | https://pytorch.org/docs/stable/notes/large_scale_deployments.html | pytorch docs |
are regular ZIP archives, extra information gets stored as regular
files inside archive's "extra/" directory.
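A minimal sketch of this round trip from Python (the module, file name,
and contents below are placeholders):

import torch

class TinyModule(torch.nn.Module):  # placeholder module for illustration
    def forward(self, x):
        return x + 1

scripted = torch.jit.script(TinyModule())

# Store an arbitrary blob next to the model; it ends up under "extra/"
# inside the archive.
torch.jit.save(scripted, "model.pt",
               _extra_files={"producer_info.json": '{"user": "example"}'})

# To read it back, pre-populate the dict with the expected file names;
# torch.jit.load fills in the contents.
extra = {"producer_info.json": ""}
loaded = torch.jit.load("model.pt", _extra_files=extra)
print(extra["producer_info.json"])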
There's also a global hook that allows attaching extra files to any
TorchScript archive produced in the current process. It might be
useful to tag models with producer metadata, akin to JPEG metadata
produced by digital cameras. Example usage might look like:
SetExportModuleExtraFilesHook([](const Module&) {
  ExtraFilesMap files;
  files["producer_info.json"] =
      std::string("{\"user\": \"") + getenv("USER") + "\"}";
  return files;
});
Build environment considerations
TorchScript's compilation needs to have access to the original python
files as it uses python's "inspect.getsource" call. In certain
production environments it might require explicitly deploying ".py"
files along with precompiled ".pyc".
Common extension points
PyTorch APIs are generally loosely coupled and it's easy to replace a | https://pytorch.org/docs/stable/notes/large_scale_deployments.html | pytorch docs |
component with a specialized version. Common extension points include:
Custom operators implemented in C++ - see tutorial for more details.
Custom data reading can often be integrated directly by invoking the
corresponding Python library. Existing functionality of
"torch.utils.data" can be utilized by extending "Dataset" or
"IterableDataset".
| https://pytorch.org/docs/stable/notes/large_scale_deployments.html | pytorch docs |
Numerical accuracy
In modern computers, floating point numbers are represented using IEEE
754 standard. For more details on floating point arithmetics and IEEE
754 standard, please see Floating point arithmetic In particular, note
that floating point provides limited accuracy (about 7 decimal digits
for single precision floating point numbers, about 16 decimal digits
for double precision floating point numbers) and that floating point
addition and multiplication are not associative, so the order of the
operations affects the results. Because of this, PyTorch is not
guaranteed to produce bitwise identical results for floating point
computations that are mathematically identical. Similarly, bitwise
identical results are not guaranteed across PyTorch releases,
individual commits, or different platforms. In particular, CPU and GPU
results can be different even for bitwise-identical inputs and even
after controlling for the sources of randomness.
Batched computations or slice computations | https://pytorch.org/docs/stable/notes/numerical_accuracy.html | pytorch docs |
Many operations in PyTorch support batched computation, where the same
operation is performed for the elements of the batches of inputs. An
example of this is "torch.mm()" and "torch.bmm()". It is possible to
implement batched computation as a loop over batch elements and apply
the necessary math operations to each batch element individually, but
for efficiency reasons we do not do that and typically perform the
computation for the whole batch. The mathematical libraries that we
call, and PyTorch's internal implementations of operations, can
produce slightly different results in this case compared to non-
batched computations. In particular, let "A" and "B" be 3D tensors
with the dimensions suitable for batched matrix multiplication. Then
"(A@B)[0]" (the first element of the batched result) is not guaranteed
to be bitwise identical to "A[0]@B[0]" (the matrix product of the | https://pytorch.org/docs/stable/notes/numerical_accuracy.html | pytorch docs |
first elements of the input batches) even though mathematically it's
an identical computation.
Similarly, an operation applied to a tensor slice is not guaranteed to
produce results that are identical to the slice of the result of the
same operation applied to the full tensor. E.g. let "A" be a
2-dimensional tensor. "A.sum(-1)[0]" is not guaranteed to be bitwise
equal to "A[:,0].sum()".
Extremal values
When inputs contain large values such that intermediate results may
overflow the range of the used datatype, the end result may overflow
too, even though it is representable in the original datatype. E.g.:
import torch
a=torch.tensor([1e20, 1e20]) # fp32 type by default
a.norm() # produces tensor(inf)
a.double().norm() # produces tensor(1.4142e+20, dtype=torch.float64), representable in fp32
Linear algebra ("torch.linalg")
Non-finite values
The external libraries (backends) that "torch.linalg" uses provide no | https://pytorch.org/docs/stable/notes/numerical_accuracy.html | pytorch docs |
guarantees on their behaviour when the inputs have non-finite values
like "inf" or "NaN". As such, neither does PyTorch. The operations may
return a tensor with non-finite values, or raise an exception, or even
segfault.
Consider using "torch.isfinite()" before calling these functions to
detect this situation.
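For example, a guard along these lines (the solve call is just an
illustration) avoids handing non-finite inputs to the backend:

import torch

A = torch.randn(4, 4)
b = torch.randn(4)

if torch.isfinite(A).all() and torch.isfinite(b).all():
    x = torch.linalg.solve(A, b)
else:
    raise ValueError("non-finite input; torch.linalg behavior is backend-dependent here")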
Extremal values in linalg
Functions within "torch.linalg" have more Extremal Values than other
PyTorch functions.
Solvers and Inverses assume that the input matrix "A" is invertible.
If it is close to being non-invertible (for example, if it has a very
small singular value), then these algorithms may silently return
incorrect results. These matrices are said to be ill-conditioned. If
provided with ill-conditioned inputs, the results of these functions
may vary when using the same inputs on different devices or when
using different backends via the keyword "driver".
Spectral operations like "svd", "eig", and "eigh" may also return | https://pytorch.org/docs/stable/notes/numerical_accuracy.html | pytorch docs |
incorrect results (and their gradients may be infinite) when their
inputs have singular values that are close to each other. This is
because the algorithms used to compute these decompositions struggle
to converge for these inputs.
Running the computation in "float64" (as NumPy does by default) often
helps, but it does not solve these issues in all cases. Analyzing the
spectrum of the inputs via "torch.linalg.svdvals()" or their condition
number via "torch.linalg.cond()" may help to detect these issues.
TensorFloat-32(TF32) on Nvidia Ampere devices
On Ampere Nvidia GPUs, PyTorch can use TensorFloat32 (TF32) to speed
up mathematically intensive operations, in particular matrix
multiplications and convolutions. When an operation is performed using
TF32 tensor cores, only the first 10 bits of the input mantissa are
read. This may reduce accuracy and produce surprising results (e.g.,
multiplying a matrix by the identity matrix may produce results that | https://pytorch.org/docs/stable/notes/numerical_accuracy.html | pytorch docs |
are different from the input). By default, TF32 tensor cores are
disabled for matrix multiplications and enabled for convolutions,
although most neural network workloads have the same convergence
behavior when using TF32 as they have with fp32. We recommend enabling
TF32 tensor cores for matrix multiplications with
"torch.backends.cuda.matmul.allow_tf32 = True" if your network does
not need full float32 precision. If your network needs full float32
precision for both matrix multiplications and convolutions, then TF32
tensor cores can also be disabled for convolutions with
"torch.backends.cudnn.allow_tf32 = False".
For more information see TensorFloat32.
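For reference, toggling both flags looks like this (a sketch; pick the
settings appropriate for your workload):

import torch

torch.backends.cuda.matmul.allow_tf32 = True   # opt matmuls into TF32 (off by default)
torch.backends.cudnn.allow_tf32 = False        # opt convolutions out of TF32 (on by default)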
Reduced Precision Reduction for FP16 and BF16 GEMMs
Half-precision GEMM operations are typically done with intermediate
accumulations (reduction) in single-precision for numerical accuracy
and improved resilience to overflow. For performance, certain GPU | https://pytorch.org/docs/stable/notes/numerical_accuracy.html | pytorch docs |
architectures, especially more recent ones, allow a few truncations of
the intermediate accumulation results to the reduced precision (e.g.,
half-precision). This change is often benign from the perspective of
model convergence, though it may lead to unexpected results (e.g.,
"inf" values when the final result should be be representable in half-
precision). If reduced-precision reductions are problematic, they can
be turned off with
"torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction =
False"
A similar flag exists for BF16 GEMM operations and is turned off by
default. If BF16 reduced-precision reductions are problematic, they
can be turned off with
"torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction =
False"
For more information see allow_fp16_reduced_precision_reduction and
allow_bf16_reduced_precision_reduction
Reduced Precision FP16 and BF16 GEMMs and Convolutions on AMD Instinct MI200 devices | https://pytorch.org/docs/stable/notes/numerical_accuracy.html | pytorch docs |
On AMD Instinct MI200 GPUs, the FP16 and BF16 V_DOT2 and MFMA matrix
instructions flush input and output denormal values to zero. FP32 and
FP64 MFMA matrix instructions do not flush input and output denormal
values to zero. The affected instructions are only used by rocBLAS
(GEMM) and MIOpen (convolution) kernels; all other PyTorch operations
will not encounter this behavior. All other supported AMD GPUs will
not encounter this behavior.
rocBLAS and MIOpen provide alternate implementations for affected FP16
operations. Alternate implementations for BF16 operations are not
provided; BF16 numbers have a larger dynamic range than FP16 numbers
and are less likely to encounter denormal values. For the FP16
alternate implementations, FP16 input values are cast to an
intermediate BF16 value and then cast back to FP16 output after the
accumulate FP32 operations. In this way, the input and output types
are unchanged. | https://pytorch.org/docs/stable/notes/numerical_accuracy.html | pytorch docs |
When training using FP16 precision, some models may fail to converge
with FP16 denorms flushed to zero. Denormal values more frequently
occur in the backward pass of training during gradient calculation.
PyTorch by default will use the rocBLAS and MIOpen alternate
implementations during the backward pass. The default behavior can be
overridden using environment variables, ROCBLAS_INTERNAL_FP16_ALT_IMPL
and MIOPEN_DEBUG_CONVOLUTION_ATTRIB_FP16_ALT_IMPL. The behavior of
these environment variables is as follows:
+-----------------+-------------+-------------+
| | forward | backward |
|=================|=============|=============|
| Env unset | original | alternate |
+-----------------+-------------+-------------+
| Env set to 1 | alternate | alternate |
+-----------------+-------------+-------------+
| Env set to 0 | original | original |
+-----------------+-------------+-------------+ | https://pytorch.org/docs/stable/notes/numerical_accuracy.html | pytorch docs |
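A minimal sketch of overriding the default behavior via these
environment variables; setting them before "import torch" is the safer
option, since whether later in-process changes are picked up depends on
when rocBLAS/MIOpen read them:

import os

# Force the original (non-alternate) implementations in both passes.
os.environ["ROCBLAS_INTERNAL_FP16_ALT_IMPL"] = "0"
os.environ["MIOPEN_DEBUG_CONVOLUTION_ATTRIB_FP16_ALT_IMPL"] = "0"

import torch  # noqa: E402  (imported after setting the variables on purpose)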
The following is the list of operations where rocBLAS may be used:
torch.addbmm
torch.addmm
torch.baddbmm
torch.bmm
torch.mm
torch.nn.GRUCell
torch.nn.LSTMCell
torch.nn.Linear
torch.sparse.addmm
the following torch._C._ConvBackend implementations:
slowNd
slowNd_transposed
slowNd_dilated
slowNd_dilated_transposed
The following is the list of operations where MIOpen may be used:
torch.nn.Conv[Transpose]Nd
the following torch._C._ConvBackend implementations:
ConvBackend::Miopen
ConvBackend::MiopenDepthwise
ConvBackend::MiopenTranspose
| https://pytorch.org/docs/stable/notes/numerical_accuracy.html | pytorch docs |
Broadcasting semantics
Many PyTorch operations support NumPy's broadcasting semantics. See
https://numpy.org/doc/stable/user/basics.broadcasting.html for
details.
In short, if a PyTorch operation supports broadcast, then its Tensor
arguments can be automatically expanded to be of equal sizes (without
making copies of the data).
General semantics
Two tensors are "broadcastable" if the following rules hold:
Each tensor has at least one dimension.
When iterating over the dimension sizes, starting at the trailing
dimension, the dimension sizes must either be equal, one of them must
be 1, or one of them must not exist.
For Example:
x=torch.empty(5,7,3)
y=torch.empty(5,7,3)
# same shapes are always broadcastable (i.e. the above rules always hold)
x=torch.empty((0,))
y=torch.empty(2,2)
# x and y are not broadcastable, because x does not have at least 1 dimension
# can line up trailing dimensions
x=torch.empty(5,3,4,1)
| https://pytorch.org/docs/stable/notes/broadcasting.html | pytorch docs |
y=torch.empty( 3,1,1)
# x and y are broadcastable.
# 1st trailing dimension: both have size 1
# 2nd trailing dimension: y has size 1
# 3rd trailing dimension: x size == y size
# 4th trailing dimension: y dimension doesn't exist
# but:
x=torch.empty(5,2,4,1)
y=torch.empty( 3,1,1)
# x and y are not broadcastable, because in the 3rd trailing dimension 2 != 3
If two tensors "x", "y" are "broadcastable", the resulting tensor size
is calculated as follows:
If "x" and "y" do not have the same number of dimensions, prepend 1
to the dimensions of the tensor with fewer dimensions to make them
equal length.
Then, for each dimension size, the resulting dimension size is the
max of the sizes of "x" and "y" along that dimension.
For Example:
# can line up trailing dimensions to make reading easier
x=torch.empty(5,1,4,1)
y=torch.empty( 3,1,1)
(x+y).size()
torch.Size([5, 3, 4, 1])
# but not necessary: | https://pytorch.org/docs/stable/notes/broadcasting.html | pytorch docs |
x=torch.empty(1)
y=torch.empty(3,1,7)
(x+y).size()
torch.Size([3, 1, 7])
x=torch.empty(5,2,4,1)
y=torch.empty(3,1,1)
(x+y).size()
RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 1
In-place semantics
One complication is that in-place operations do not allow the in-place
tensor to change shape as a result of the broadcast.
For Example:
x=torch.empty(5,3,4,1)
y=torch.empty(3,1,1)
(x.add_(y)).size()
torch.Size([5, 3, 4, 1])
# but:
x=torch.empty(1,3,1)
y=torch.empty(3,1,7)
(x.add_(y)).size()
RuntimeError: The expanded size of the tensor (1) must match the existing size (7) at non-singleton dimension 2.
Backwards compatibility
Prior versions of PyTorch allowed certain pointwise functions to
execute on tensors with different shapes, as long as the number of | https://pytorch.org/docs/stable/notes/broadcasting.html | pytorch docs |
elements in each tensor was equal. The pointwise operation would then
be carried out by viewing each tensor as 1-dimensional. PyTorch now
supports broadcasting and the "1-dimensional" pointwise behavior is
considered deprecated and will generate a Python warning in cases
where tensors are not broadcastable, but have the same number of
elements.
Note that the introduction of broadcasting can cause backwards
incompatible changes in the case where two tensors do not have the
same shape, but are broadcastable and have the same number of
elements. For Example:
torch.add(torch.ones(4,1), torch.randn(4))
would previously produce a Tensor with size: torch.Size([4,1]), but
now produces a Tensor with size: torch.Size([4,4]). In order to help
identify cases in your code where backwards incompatibilities
introduced by broadcasting may exist, you may set
torch.utils.backcompat.broadcast_warning.enabled to True, which
will generate a python warning in such cases.
For Example: | https://pytorch.org/docs/stable/notes/broadcasting.html | pytorch docs |
torch.utils.backcompat.broadcast_warning.enabled=True
torch.add(torch.ones(4,1), torch.ones(4))
__main__:1: UserWarning: self and other do not have the same shape, but are broadcastable, and have the same number of elements.
Changing behavior in a backwards incompatible manner to broadcasting rather than viewing as 1-dimensional.
| https://pytorch.org/docs/stable/notes/broadcasting.html | pytorch docs |
HIP (ROCm) semantics
ROCm™ is AMD's open source software platform for GPU-accelerated high
performance computing and machine learning. HIP is ROCm's C++ dialect
designed to ease conversion of CUDA applications to portable C++ code.
HIP is used when converting existing CUDA applications like PyTorch to
portable C++ and for new projects that require portability between AMD
and NVIDIA.
HIP Interfaces Reuse the CUDA Interfaces
PyTorch for HIP intentionally reuses the existing "torch.cuda"
interfaces. This helps to accelerate the porting of existing PyTorch
code and models because very few code changes are necessary, if any.
The example from CUDA semantics will work exactly the same for HIP:
cuda = torch.device('cuda') # Default HIP device
cuda0 = torch.device('cuda:0') # 'rocm' or 'hip' are not valid, use 'cuda'
cuda2 = torch.device('cuda:2') # GPU 2 (these are 0-indexed)
x = torch.tensor([1., 2.], device=cuda0) | https://pytorch.org/docs/stable/notes/hip.html | pytorch docs |
# x.device is device(type='cuda', index=0)
y = torch.tensor([1., 2.]).cuda()
# y.device is device(type='cuda', index=0)
with torch.cuda.device(1):
    # allocates a tensor on GPU 1
    a = torch.tensor([1., 2.], device=cuda)
    # transfers a tensor from CPU to GPU 1
    b = torch.tensor([1., 2.]).cuda()
    # a.device and b.device are device(type='cuda', index=1)

    # You can also use ``Tensor.to`` to transfer a tensor:
    b2 = torch.tensor([1., 2.]).to(device=cuda)
    # b.device and b2.device are device(type='cuda', index=1)

    c = a + b
    # c.device is device(type='cuda', index=1)

    z = x + y
    # z.device is device(type='cuda', index=0)

    # even within a context, you can specify the device
    # (or give a GPU index to the .cuda call)
    d = torch.randn(2, device=cuda2)
    e = torch.randn(2).to(cuda2)
    f = torch.randn(2).cuda(cuda2)
| https://pytorch.org/docs/stable/notes/hip.html | pytorch docs |