Today, we are excited to share our latest work for PyTorch/XLA 2.0. The release of PyTorch 2.0 is yet another major milestone for this storied community and we are excited to continue to be part of it. When the PyTorch/XLA project started in 2018 between Google and Meta, the focus was on bringing cutting edge Cloud TPUs to help support the PyTorch community. Along the way, others in the community such as Amazon joined the project and very quickly the community expanded. We are excited about XLA's direction and the benefits this project continues to bring to the PyTorch community. In this blog we’d like to showcase some key features that have been in development, show code snippets, and illustrate the benefit through some benchmarks.
https://pytorch.org/blog/pytorch-2.0-xla/
TorchDynamo / torch.compile (Experimental) TorchDynamo (Dynamo) is a Python-level JIT compiler designed to make unmodified PyTorch programs faster. It provides a clean API for compiler backends to hook in; its biggest feature is to dynamically modify Python bytecode just before execution. In the PyTorch/XLA 2.0 release, an experimental backend for Dynamo is provided for both inference and training. Dynamo provides a Torch FX (FX) graph when it recognizes a model pattern and PyTorch/XLA uses a Lazy Tensor approach to compile the FX graph and return the compiled function. To get more insight regarding the technical details about PyTorch/XLA’s dynamo implementation, check out this dev-discuss post and dynamo doc.
Here is a small code example of running ResNet18 with torch.compile:

```python
import torch
import torchvision
import torch_xla.core.xla_model as xm

def eval_model(loader):
  device = xm.xla_device()
  xla_resnet18 = torchvision.models.resnet18().to(device)
  xla_resnet18.eval()
  dynamo_resnet18 = torch.compile(
      xla_resnet18, backend='torchxla_trace_once')
  for data, _ in loader:
    output = dynamo_resnet18(data)
```

With torch.compile, PyTorch/XLA only traces the ResNet18 model once at init time and executes the compiled binary every time dynamo_resnet18 is invoked, instead of tracing the model on every step. To illustrate the benefits of Dynamo+XLA, below is an inference speedup analysis comparing Dynamo and LazyTensor (without Dynamo) using TorchBench on a Cloud TPU v4-8, where the y-axis is the speedup multiplier.
Dynamo for training is in the development stage, with its implementation at an earlier stage than inference. Developers are welcome to test this early feature; however, in the 2.0 release, PyTorch/XLA supports the forward and backward pass graphs but not the optimizer graph. The optimizer graph is available in the nightly builds and will land in the PyTorch/XLA 2.1 release. Below is an example of what training looks like using the ResNet18 example with torch.compile:

```python
import torch
import torchvision
import torch_xla.core.xla_model as xm

def train_model(model, data, target):
  loss_fn = torch.nn.CrossEntropyLoss()
  pred = model(data)
  loss = loss_fn(pred, target)
  loss.backward()
  return pred

def train_model_main(loader):
  device = xm.xla_device()
  xla_resnet18 = torchvision.models.resnet18().to(device)
  xla_resnet18.train()
  dynamo_train_model = torch.compile(
      train_model, backend='aot_torchxla_trace_once')
  for data, target in loader:
    output = dynamo_train_model(xla_resnet18, data, target)
```
Note that the backend for training is aot_torchxla_trace_once (the API will be updated for the stable release), whereas the inference backend is torchxla_trace_once (name subject to change). With Dynamo we expect to extract and execute three graphs per training step, instead of one graph per training step with Lazy Tensor. Below is a training speedup analysis comparing Dynamo and Lazy using TorchBench on a Cloud TPU v4-8.
PJRT Runtime (Beta)
PyTorch/XLA is migrating from XRT to the new PJRT runtime. PJRT is a better-maintained stack with demonstrated performance advantages, including an average 35% performance improvement for training on TorchBench 2.0 models. It also supports a richer set of features, enabling technologies like SPMD. In the PyTorch/XLA 2.0 release, PJRT is the default runtime for TPU and CPU; GPU support is in an experimental state. The PJRT features included in the PyTorch/XLA 2.0 release are:
* TPU runtime implementation in libtpu using the PJRT Plugin API, which improves performance by up to 30%
* torch.distributed support for TPU v2 and v3, including pjrt:// init_method (Experimental)
* Single-host GPU support; multi-host support coming soon (Experimental)
Switching to PJRT requires no change (or minimal change for GPUs) to user code (see pjrt.md for more details). Runtime configuration is as simple as setting the PJRT_DEVICE environment variable to the local device type (i.e. TPU, GPU, CPU). Below are examples of using PJRT runtimes on different devices.

```
# TPU Device
PJRT_DEVICE=TPU python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=256 --num_epochs=1

# TPU Pod Device
gcloud alpha compute tpus tpu-vm ssh $USER-pjrt --zone=us-central2-b --project=$PROJECT --worker=all --command="git clone --depth=1 --branch r2.0 https://github.com/pytorch/xla.git"

gcloud alpha compute tpus tpu-vm ssh $USER-pjrt --zone=us-central2-b --project=$PROJECT --worker=all --command="PJRT_DEVICE=TPU python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=256 --num_epochs=1"
```
GPU Device (Experimental)

```
PJRT_DEVICE=GPU GPU_NUM_DEVICES=4 python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=128 --num_epochs=1
```

Below is a performance comparison between XRT and PJRT by task on TorchBench 2.0 on a v4-8 TPU. To learn more about PJRT vs. XRT, please review the documentation.
Parallelization

GSPMD (Experimental)
We are delighted to introduce General and Scalable Parallelization for ML Computation Graphs (GSPMD) in PyTorch as a new experimental data & model sharding solution. GSPMD provides automatic parallelization for common ML workloads, allowing developers to write PyTorch programs as if on a single large device, without custom sharded computation ops and/or collective communication ops. The XLA compiler transforms the single-device program into a partitioned one with the proper collectives, based on the user-provided sharding hints. The API (RFC) will be available in the PyTorch/XLA 2.0 release as an experimental feature on a single TPU VM host.
Next Steps for GSPMD
GSPMD is experimental in the 2.0 release. To bring it to Stable status, we plan to address a number of feature gaps and known issues in the following releases, including multi-host support, DTensor integration, partial replication sharding, asynchronous data loading, and checkpointing.

FSDP (Beta)
PyTorch/XLA introduced fully sharded data parallel (FSDP) experimental support in version 1.12. This feature is a parallel representation of PyTorch FSDP, and there are subtle differences in how XLA and upstream CUDA kernels are set up. auto_wrap_policy is a new argument that enables developers to automatically specify conditions for propagating partitioning specifications to neural network submodules. An auto_wrap_policy may simply be passed in as an argument when wrapping a model with FSDP. Two auto_wrap_policy callables worth noting are: size_based_auto_wrap_policy and transformer_auto_wrap_policy.
size_based_auto_wrap_policy enables users to wrap submodules with a minimum number of parameters. The example below wraps model submodules having at least 10M parameters.

```python
auto_wrap_policy = partial(size_based_auto_wrap_policy, min_num_params=1e7)
```

transformer_auto_wrap_policy enables users to wrap all submodules that match a specific layer type. The example below wraps model submodules of type torch.nn.Conv2d. To learn more, review this ResNet example by Ronghang Hu.

```python
auto_wrap_policy = partial(transformer_auto_wrap_policy, transformer_layer_cls={torch.nn.Conv2d})
```
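Putting the pieces together, here is a minimal sketch of wrapping a model with PyTorch/XLA FSDP and an auto_wrap_policy. The import paths (torch_xla.distributed.fsdp and its wrap submodule) and the choice of torchvision's ResNet50 are assumptions for illustration and may differ across releases.

```python
from functools import partial

import torch
import torchvision
import torch_xla.core.xla_model as xm
# Assumed import paths for the XLA FSDP wrapper and the size-based policy.
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP
from torch_xla.distributed.fsdp.wrap import size_based_auto_wrap_policy

device = xm.xla_device()
model = torchvision.models.resnet50().to(device)

# Automatically wrap any submodule with at least 10M parameters.
auto_wrap_policy = partial(size_based_auto_wrap_policy, min_num_params=1e7)
fsdp_model = FSDP(model, auto_wrap_policy=auto_wrap_policy)

optimizer = torch.optim.SGD(fsdp_model.parameters(), lr=0.01)
```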
PyTorch/XLA FSDP is now integrated in the HuggingFace trainer class (PR), enabling users to train much larger models on PyTorch/XLA (official Hugging Face documentation). A 16B-parameter GPT2 model trained on Cloud TPU v4-64 with this FSDP configuration achieved 39% hardware utilization.

| Configuration | Value |
| --- | --- |
| TPU Accelerator - Num Devices | v4-64 |
| GPT2 Parameter Count | 16B |
| Layers Wrapped with FSDP | GPT2Block |
| TFLOPs / Chip | 275 |
| PFLOPs / Step | 50 |
| Hardware Utilization | 39% |

Differences Between FSDP & GSPMD
FSDP is a data parallelism technique that reduces device memory footprint by storing model parameters, optimizer states, and gradients all sharded. Note that the actual computation is still local to the device and requires all-gathering the sharded model parameters for both forward and backward passes, hence the name “data parallel”. FSDP is one of the newest additions to PyTorch/XLA to scale large model training.
GSPMD, on the other hand, is a general parallelization system that enables various types of parallelism, including both data and model parallelism. PyTorch/XLA provides a sharding annotation API and an XLAShardedTensor abstraction, so a user can annotate any tensor with sharding specs in the PyTorch program. Developers don’t need to manually implement sharded computations or inject collective communication ops to get it right. The XLA compiler does the work so that each computation can run in a distributed manner on multiple devices.

Examples & Preliminary Results
To learn about the PyTorch/XLA parallelism sharding API, visit our RFC and see the Sample Code references. Below is a simple example to enable data and model parallelism.

```python
model = SimpleLinear().to(xm.xla_device())
# Sharding annotate the linear layer weights.
xs.mark_sharding(model.fc1.weight, mesh, partition_spec)

# Training loop
model.train()
for step, (data, target) in enumerate(loader):
  optimizer.zero_grad()
  data = data.to(xm.xla_device())
  target = target.to(xm.xla_device())
  # Sharding annotate input data. We can shard any input
  # dimensions: sharding the batch dimension enables
  # data parallelism, sharding the feature dimension enables
  # spatial partitioning.
  xs.mark_sharding(data, mesh, partition_spec)
  output = model(data)
  loss = loss_fn(output, target)
  loss.backward()
  optimizer.step()
  xm.mark_step()
```

The following graph highlights the memory efficiency benefits of PyTorch/XLA FSDP and SPMD on Cloud TPU v4-8 running ResNet50.
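The snippet above assumes that mesh and partition_spec have already been constructed. Below is a hypothetical sketch of how they might be built with the experimental xla_sharding module referenced in the RFC; the module path, the Mesh constructor signature, and the integer-based partition spec are assumptions that may differ across releases.

```python
import numpy as np
import torch_xla.core.xla_model as xm
import torch_xla.experimental.xla_sharding as xs  # assumed module path

# Arrange all local devices into a 2D (data, model) mesh, e.g. 4x2 on a v4-8.
num_devices = len(xm.get_xla_supported_devices())
device_ids = np.arange(num_devices)
mesh_shape = (num_devices // 2, 2)
mesh = xs.Mesh(device_ids, mesh_shape, ('data', 'model'))

# Shard tensor dim 0 along the first mesh axis and dim 1 along the second.
partition_spec = (0, 1)
```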
Closing Thoughts… We are excited to bring these features to the PyTorch community, and this is really just the beginning. Areas like dynamic shapes, deeper support for OpenXLA and many others are in development and we plan to put out more blogs to dive into the details. PyTorch/XLA is developed fully open source and we invite you to join the community of developers by filing issues, submitting pull requests, and sending RFCs on GitHub. You can try PyTorch/XLA on a variety of XLA devices including TPUs and GPUs. Here is how to get started. Congratulations again to the PyTorch community on this milestone! Cheers, The PyTorch Team at Google
layout: blog_detail
title: "Compromised PyTorch-nightly dependency chain between December 25th and December 30th, 2022."
author: The PyTorch Team

If you installed PyTorch-nightly on Linux via pip between December 25, 2022 and December 30, 2022, please uninstall it and torchtriton immediately, and use the latest nightly binaries (newer than Dec 30th 2022).

```
$ pip3 uninstall -y torch torchvision torchaudio torchtriton
$ pip3 cache purge
```

PyTorch-nightly Linux packages installed via pip during that time installed a dependency, torchtriton, which was compromised on the Python Package Index (PyPI) code repository and ran a malicious binary. This is what is known as a supply chain attack and directly affects dependencies for packages that are hosted on public package indices.

NOTE: Users of the PyTorch stable packages are not affected by this issue.

How to check if your Python environment is affected
https://pytorch.org/blog/compromised-nightly-dependency/
The following command searches for the malicious binary in the torchtriton package (PYTHON_SITE_PACKAGES/triton/runtime/triton) and prints out whether your current Python environment is affected or not:

```
python3 -c "import pathlib;import importlib.util;s=importlib.util.find_spec('triton'); affected=any(x.name == 'triton' for x in (pathlib.Path(s.submodule_search_locations[0] if s is not None else '/' ) / 'runtime').glob('*'));print('You are {}affected'.format('' if affected else 'not '))"
```

The malicious binary is executed when the triton package is imported, which requires explicit code to do and is not PyTorch’s default behavior.
The Background
At around 4:40pm GMT on December 30 (Friday), we learned about a malicious dependency package (torchtriton) that was uploaded to the Python Package Index (PyPI) code repository with the same package name as the one we ship on the PyTorch nightly package index. Since the PyPI index takes precedence, this malicious package was being installed instead of the version from our official repository. This design enables somebody to register a package by the same name as one that exists in a third party index, and pip will install their version by default.

This malicious package has the same name torchtriton but added in code that uploads sensitive data from the machine.

What we know
torchtriton on PyPI contains a malicious triton binary which is installed at PYTHON_SITE_PACKAGES/triton/runtime/triton. Its SHA256 hash is listed below.
SHA256(triton)= 2385b29489cd9e35f92c072780f903ae2e517ed422eae67246ae50a5cc738a0e

The binary’s main function does the following:
* Get system information
  * nameservers from /etc/resolv.conf
  * hostname from gethostname()
  * current username from getlogin()
  * current working directory name from getcwd()
  * environment variables
* Read the following files
  * /etc/hosts
  * /etc/passwd
  * The first 1,000 files in $HOME/*
  * $HOME/.gitconfig
  * $HOME/.ssh/*
* Upload all of this information, including file contents, via encrypted DNS queries to the domain *.h4ck[.]cfd, using the DNS server wheezy[.]io

The binary’s file upload functionality is limited to files less than 99,999 bytes in size. It also uploads only the first 1,000 files in $HOME (but all files < 99,999 bytes in the .ssh directory).
Steps taken towards mitigation
* torchtriton has been removed as a dependency for our nightly packages and replaced with pytorch-triton (pytorch/pytorch#91539) and a dummy package registered on PyPI (so that this issue doesn’t repeat)
* All nightly packages that depend on torchtriton have been removed from our package indices at https://download.pytorch.org until further notice
* We have reached out to the PyPI security team to get proper ownership of the torchtriton package on PyPI and to delete the malicious version
layout: blog_detail
title: 'Running PyTorch Models on Jetson Nano'
author: Jeff Tang, Hamid Shojanazeri, Geeta Chauhan
featured-img: 'assets/images/pytorch-logo.jpg'

Overview
NVIDIA Jetson Nano, part of the Jetson family of products or Jetson modules, is a small yet powerful Linux (Ubuntu) based embedded computer with a 2/4GB GPU. With it, you can run many PyTorch models efficiently. This document summarizes our experience of running different deep learning models using 3 different mechanisms on Jetson Nano:

Jetson Inference, the higher-level NVIDIA API that has built-in support for running most common computer vision models, which can be transfer-learned with PyTorch on the Jetson platform.
https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/
TensorRT, an SDK for high-performance inference from NVIDIA that requires the conversion of a PyTorch model to ONNX, and then to the TensorRT engine file that the TensorRT runtime can run.

PyTorch with the direct PyTorch API torch.nn for inference.

Setting up Jetson Nano
After purchasing a Jetson Nano here, simply follow the clear step-by-step instructions to download and write the Jetson Nano Developer Kit SD Card Image to a microSD card, and complete the setup. After the setup is done and the Nano is booted, you’ll see the standard Linux prompt along with the username and the Nano name used in the setup. To check the GPU status on Nano, run the following commands:

```
sudo pip3 install jetson-stats
sudo jtop
```

You’ll see information, including:
You can also see the installed CUDA version:

```
$ ls -lt /usr/local
lrwxrwxrwx  1 root root   22 Aug  2 01:47 cuda -> /etc/alternatives/cuda
lrwxrwxrwx  1 root root   25 Aug  2 01:47 cuda-10 -> /etc/alternatives/cuda-10
drwxr-xr-x 12 root root 4096 Aug  2 01:47 cuda-10.2
```

To use a camera on Jetson Nano, for example, Arducam 8MP IMX219, follow the instructions here or run the commands below after installing a camera module:

```
cd ~
wget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh
chmod +x install_full.sh
./install_full.sh -m arducam
```
Another way to do this is to use the original Jetson Nano camera driver:

```
sudo dpkg -r arducam-nvidia-l4t-kernel
sudo shutdown -r now
```

Then, use `ls /dev/video0` to confirm the camera is found:

```
$ ls /dev/video0
/dev/video0
```

And finally, the following command to see the camera in action:

```
nvgstcapture-1.0 --orientation=2
```

### Using Jetson Inference
NVIDIA [Jetson Inference](https://github.com/dusty-nv/jetson-inference) API offers the easiest way to run image recognition, object detection, semantic segmentation, and pose estimation models on Jetson Nano. Jetson Inference has TensorRT built-in, so it’s very fast.

To test run Jetson Inference, first clone the repo and download the models:

```
git clone --recursive https://github.com/dusty-nv/jetson-inference
cd jetson-inference
```
Then use the pre-built [Docker Container](https://github.com/dusty-nv/jetson-inference/blob/master/docs/jetpack-setup-2.md) that already has PyTorch installed to test run the models:

```
docker/run.sh --volume ~/jetson_inference:/jetson_inference
```

To run image recognition, object detection, semantic segmentation, and pose estimation models on test images, use the following:

```
cd build/aarch64/bin
./imagenet.py images/jellyfish.jpg /jetson_inference/jellyfish.jpg
./segnet.py images/dog.jpg /jetson_inference/dog.jpeg
./detectnet.py images/peds_0.jpg /jetson_inference/peds_0.jpg
./posenet.py images/humans_0.jpg /jetson_inference/pose_humans_0.jpg
```

Four result images from running the four different models will be generated. Exit the docker image to see them:

```
$ ls -lt ~/jetson_inference/
-rw-r--r-- 1 root root  68834 Oct 15 21:30 pose_humans_0.jpg
-rw-r--r-- 1 root root 914058 Oct 15 21:30 peds_0.jpg
-rw-r--r-- 1 root root 666239 Oct 15 21:30 dog.jpeg
-rw-r--r-- 1 root root 179760 Oct 15 21:29 jellyfish.jpg
```

<div style="display: flex; justify-content: space-between;">
<img src="/assets/images/blog-2022-3-10-using-jetson-interface-1.jpeg" alt="Using Jetson Inference example 1" width="40%">
<img src="/assets/images/blog-2022-3-10-using-jetson-interface-2.jpeg" alt="Using Jetson Inference example 2" width="60%">
</div>

<div style="display: flex; justify-content: space-between;">
<img src="/assets/images/blog-2022-3-10-using-jetson-interface-3.jpeg" alt="Using Jetson Inference example 3" width="60%">
<img src="/assets/images/blog-2022-3-10-using-jetson-interface-4.jpeg" alt="Using Jetson Inference example 4" width="40%">
</div>

You can also use the docker image to run PyTorch models because the image has PyTorch, torchvision and torchaudio installed:

```
pip list|grep torch
torch (1.9.0)
torchaudio (0.9.0a0+33b2469)
torchvision (0.10.0a0+300a8a4)
```
Although Jetson Inference includes models already converted to the TensorRT engine file format, you can fine-tune the models by following the steps in Transfer Learning with PyTorch (for Jetson Inference) here.

Using TensorRT
TensorRT is an SDK for high-performance inference from NVIDIA. Jetson Nano supports TensorRT via the Jetpack SDK, included in the SD Card image used to set up Jetson Nano. To confirm that TensorRT is already installed on the Nano, run `dpkg -l|grep -i tensorrt`:
Theoretically, TensorRT can be used to “take a trained PyTorch model and optimize it to run more efficiently during inference on an NVIDIA GPU.” Follow the instructions and code in the notebook to see how to use PyTorch with TensorRT through ONNX on a torchvision Resnet50 model:
* How to convert the model from PyTorch to ONNX (a minimal sketch of this step is shown below);
* How to convert the ONNX model to a TensorRT engine file;
* How to run the engine file with the TensorRT runtime for performance improvement: inference time improved from the original 31.5ms/19.4ms (FP32/FP16 precision) to 6.28ms (TensorRT).
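As an illustration of the first step (PyTorch to ONNX), here is a minimal sketch; the input shape, output file name, and opset version are assumptions for a standard torchvision Resnet50, not the notebook's exact code.

```python
import torch
import torchvision

# Load a pretrained Resnet50 and switch to inference mode.
model = torchvision.models.resnet50(pretrained=True).eval()

# Dummy input matching the usual ImageNet input shape (assumption).
dummy_input = torch.randn(1, 3, 224, 224)

# Export to ONNX; opset 11 mirrors the workaround mentioned later in this post.
torch.onnx.export(model, dummy_input, "resnet50_pytorch.onnx",
                  opset_version=11, verbose=False)
```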
You can replace the Resnet50 model in the notebook code with another PyTorch model, go through the conversion process above, and run the finally converted TensorRT engine file with the TensorRT runtime to see the optimized performance. But be aware that due to the Nano GPU memory size, models larger than 100MB are likely to fail to run, with the following error information:

```
Error Code 1: Cuda Runtime (all CUDA-capable devices are busy or unavailable)
```

You may also see an error when converting a PyTorch model to an ONNX model, which may be fixed by replacing:

```python
torch.onnx.export(resnet50, dummy_input, "resnet50_pytorch.onnx", verbose=False)
```

with:

```python
torch.onnx.export(model, dummy_input, "deeplabv3_pytorch.onnx", opset_version=11, verbose=False)
```

Using PyTorch
First, to download and install PyTorch 1.9 on Nano, run the following commands (see here for more information):
```
wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.9.0-cp36-cp36m-linux_aarch64.whl
sudo apt-get install python3-pip libopenblas-base libopenmpi-dev
pip3 install Cython
pip3 install numpy torch-1.9.0-cp36-cp36m-linux_aarch64.whl
```

To download and install torchvision 0.10 on Nano, run the commands below:

```
# torchvision wheel available at https://drive.google.com/uc?id=1tU6YlPjrP605j4z8PMnqwCSoP6sSC91Z
pip3 install torchvision-0.10.0a0+300a8a4-cp36-cp36m-linux_aarch64.whl
```

After the steps above, run this to confirm:

```
$ pip3 list|grep torch
torch (1.9.0)
torchvision (0.10.0)
```

You can also use the docker image described in the section Using Jetson Inference (which also has PyTorch and torchvision installed), to skip the manual steps above.

The official YOLOv5 repo is used to run the PyTorch YOLOv5 model on Jetson Nano. After logging in to Jetson Nano, follow the steps below:
Get the repo and install what’s required:

```
git clone https://github.com/ultralytics/yolov5
cd yolov5
pip install -r requirements.txt
```

Run `python3 detect.py`, which by default uses the PyTorch yolov5s.pt model. You should see something like:

```
detect: weights=yolov5s.pt, source=data/images, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False
YOLOv5 🚀 v5.0-499-g48b00db torch 1.9.0 CUDA:0 (NVIDIA Tegra X1, 3956.1015625MB)

Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients
image 1/5 /home/jeff/repos/yolov5-new/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, 1 fire hydrant, Done. (0.142s)
...
```
**The inference time on Jetson Nano GPU is about 140ms, more than twice as fast as the inference time on iOS or Android (about 330ms).**

If you get an error `“ImportError: The _imagingft C module is not installed.”` then you need to reinstall pillow:

```
sudo apt-get install libpng-dev
sudo apt-get install libfreetype6-dev
pip3 uninstall pillow
pip3 install --no-cache-dir pillow
```

After successfully completing the `python3 detect.py` run, the object detection results of the test images located in `data/images` will be in the `runs/detect/exp` directory. To test the detection with a live webcam instead of local images, use the `--source 0` parameter when running `python3 detect.py`:

```
~/repos/yolov5$ ls -lt runs/detect/exp10
total 1456
-rw-rw-r-- 1 jeff jeff 254895 Oct 15 16:12 zidane.jpg
-rw-rw-r-- 1 jeff jeff 202674 Oct 15 16:12 test3.png
-rw-rw-r-- 1 jeff jeff 217117 Oct 15 16:12 test2.jpg
-rw-rw-r-- 1 jeff jeff 305826 Oct 15 16:12 test1.png
-rw-rw-r-- 1 jeff jeff 495760 Oct 15 16:12 bus.jpg
```

Using the same test files used in the PyTorch iOS YOLOv5 demo app or Android YOLOv5 demo app, you can compare the results generated by running the YOLOv5 PyTorch model on mobile devices and Jetson Nano:

Figure 1. PyTorch YOLOv5 on Jetson Nano.
Figure 2. PyTorch YOLOv5 on iOS.

Figure 3. PyTorch YOLOv5 on Android.

Summary
Based on our experience of running different PyTorch models for potential demo apps on Jetson Nano, we see that even Jetson Nano, a lower-end member of the Jetson family of products, provides a powerful GPU and embedded system that can directly run some of the latest PyTorch models, pre-trained or transfer-learned, efficiently. Building PyTorch demo apps on Jetson Nano can be similar to building PyTorch apps on Linux, but you can also choose to use TensorRT after converting the PyTorch models to the TensorRT engine file format.
But if you just need to run some common computer vision models on Jetson Nano using NVIDIA’s Jetson Inference, which supports image recognition, object detection, semantic segmentation, and pose estimation models, then this is the easiest way.

References
* Torch-TensorRT, a compiler for PyTorch via TensorRT: https://github.com/NVIDIA/Torch-TensorRT/
* Jetson Inference docker image details: https://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md
* A guide to using TensorRT on the NVIDIA Jetson Nano: https://docs.donkeycar.com/guide/robot_sbc/tensorrt_jetson_nano/ including:
* Use Jetson as a portable GPU device to run an NN chess engine model:
https://medium.com/@ezchess/jetson-lc0-running-leela-chess-zero-on-nvidia-jetson-a-portable-gpu-device-a213afc9c018
* A MaskEraser app using PyTorch and torchvision, installed directly with pip: https://github.com/INTEC-ATI/MaskEraser#install-pytorch
layout: blog_detail
title: 'PyTorch 1.9 Release, including torch.linalg and Mobile Interpreter'
author: Team PyTorch

We are excited to announce the release of PyTorch 1.9. The release is composed of more than 3,400 commits since 1.8, made by 398 contributors. The release notes are available here. Highlights include:
1. Major improvements to support scientific computing, including torch.linalg, torch.special, and Complex Autograd
2. Major improvements in on-device binary size with Mobile Interpreter
3. Native support for elastic-fault tolerance training through the upstreaming of TorchElastic into PyTorch Core
4. Major updates to the PyTorch RPC framework to support large scale distributed training with GPU support
5. New APIs to optimize performance and packaging for model inference deployment
6. Support for Distributed training, GPU utilization and SM efficiency in the PyTorch Profiler
https://pytorch.org/blog/pytorch-1.9-released/
Along with 1.9, we are also releasing major updates to the PyTorch libraries, which you can read about in this blog post. We’d like to thank the community for their support and work on this latest release. We’d especially like to thank Quansight and Microsoft for their contributions. Features in PyTorch releases are classified as Stable, Beta, and Prototype. You can learn more about the definitions in this blog post.
Frontend APIs

(Stable) torch.linalg
In 1.9, the torch.linalg module is moving to a stable release. Linear algebra is essential to deep learning and scientific computing, and the torch.linalg module extends PyTorch’s support for it with implementations of every function from NumPy’s linear algebra module (now with support for accelerators and autograd) and more, like torch.linalg.matrix_norm and torch.linalg.householder_product. This makes the module immediately familiar to users who have worked with NumPy. Refer to the documentation here.
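As a quick, hedged illustration of the NumPy-style API (not taken from the release notes), a few torch.linalg calls look like this:

```python
import torch

A = torch.randn(3, 3)
b = torch.randn(3)

frob = torch.linalg.matrix_norm(A)   # Frobenius norm by default
x = torch.linalg.solve(A, b)         # solve Ax = b
U, S, Vh = torch.linalg.svd(A)       # singular value decomposition
```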
We plan to publish another blog post with more details on the torch.linalg module next week!

(Stable) Complex Autograd
The Complex Autograd feature, released as a beta in PyTorch 1.8, is now stable. Since the beta release, we have extended support for Complex Autograd to over 98% of operators in PyTorch 1.9, improved testing for complex operators by adding more OpInfos, and added greater validation through the TorchAudio migration to native complex tensors (refer to this issue). This feature provides users the functionality to calculate complex gradients and optimize real-valued loss functions with complex variables. This is a required feature for multiple current and downstream prospective users of complex numbers in PyTorch like TorchAudio, ESPNet, Asteroid, and FastMRI. Refer to the documentation for more details.

(Stable) torch.use_deterministic_algorithms()
To help with debugging and writing reproducible programs, PyTorch 1.9 includes a torch.use_deterministic_algorithms option. When this setting is enabled, operations will behave deterministically, if possible, or throw a runtime error if they might behave nondeterministically. Here are a couple of examples:

```python
>>> a = torch.randn(100, 100, 100, device='cuda').to_sparse()
>>> b = torch.randn(100, 100, 100, device='cuda')

# Sparse-dense CUDA bmm is usually nondeterministic
>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()
False

>>> torch.use_deterministic_algorithms(True)

# Now torch.bmm gives the same result each time, but with reduced performance
>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()
True

# CUDA kthvalue has no deterministic algorithm, so it throws a runtime error
>>> torch.zeros(10000, device='cuda').kthvalue(1)
RuntimeError: kthvalue CUDA does not have a deterministic implementation...
```
PyTorch 1.9 adds deterministic implementations for a number of indexing operations, too, including index_add, index_copy, and index_put with accum=False. For more details, refer to the documentation and reproducibility note.

(Beta) torch.special
A torch.special module, analogous to SciPy’s special module, is now available in beta. This module contains many functions useful for scientific computing and working with distributions such as iv, ive, erfcx, logerfc, and logerfcx. Refer to the documentation for more details.
(Beta) nn.Module parameterization
nn.Module parameterization allows users to parametrize any parameter or buffer of an nn.Module without modifying the nn.Module itself. It allows you to constrain the space in which your parameters live without the need for special optimization methods. This also contains a new implementation of the spectral_norm parametrization for PyTorch 1.9. More parametrizations will be added to this feature (weight_norm, matrix constraints and part of pruning) for the feature to become stable in 1.10. For more details, refer to the documentation and tutorial.
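As a hedged illustration of the idea (not an example from the release notes), a simple parametrization can be registered with torch.nn.utils.parametrize; the symmetric-weight constraint below is a commonly used toy case:

```python
import torch
import torch.nn as nn
import torch.nn.utils.parametrize as parametrize

class Symmetric(nn.Module):
    # Constrain a square weight to be symmetric without modifying Linear itself.
    def forward(self, X):
        return X.triu() + X.triu(1).transpose(-1, -2)

layer = nn.Linear(4, 4)
parametrize.register_parametrization(layer, "weight", Symmetric())

print(torch.allclose(layer.weight, layer.weight.T))  # True
```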
PyTorch Mobile

(Beta) Mobile Interpreter
We are releasing Mobile Interpreter, a streamlined version of the PyTorch runtime, in beta. The Interpreter will execute PyTorch programs on edge devices, with a reduced binary size footprint. Mobile Interpreter is one of the top requested features for PyTorch Mobile. This new release will significantly reduce binary size compared with the current on-device runtime. In order for you to get the binary size improvements with our interpreter (which can reduce the binary size by up to ~75% for a typical application), follow these instructions. As an example, using Mobile Interpreter, we can reach 2.6 MB compressed with MobileNetV2 in arm64-v7a Android. With this latest release we are making it much simpler to integrate the interpreter by providing pre-built libraries for iOS and Android.
TorchVision Library
Starting from 1.9, users can use the TorchVision library in their iOS/Android apps. The TorchVision library contains the C++ TorchVision ops and needs to be linked together with the main PyTorch library for iOS; for Android it can be added as a Gradle dependency. This allows using TorchVision prebuilt MaskRCNN operators for object detection and segmentation. To learn more about the library, please refer to our tutorials and demo apps.
Demo apps
We are releasing a new video app based on the PyTorch Video library and an updated speech recognition app based on the latest torchaudio wav2vec model. Both are available on iOS and Android. In addition, we have updated the seven Computer Vision and three Natural Language Processing demo apps, including the HuggingFace DistilBERT and the DeiT vision transformer models, with PyTorch Mobile v1.9. With the addition of these two apps, we now offer a full suite of demo apps covering image, text, audio, and video. To get started check out our iOS demo apps and Android demo apps.

Distributed Training
(Beta) TorchElastic is now part of core
TorchElastic, which was open sourced over a year ago in the pytorch/elastic github repository, is a runner and coordinator for PyTorch worker processes. Since then, it has been adopted by various distributed torch use-cases: 1) deepspeech.pytorch 2) pytorch-lightning 3) Kubernetes CRD. Now, it is part of PyTorch core.
As its name suggests, the core function of TorchElastic is to gracefully handle scaling events. A notable corollary of elasticity is that peer discovery and rank assignment are built into TorchElastic, enabling users to run distributed training on preemptible instances without requiring a gang scheduler. As a side note, etcd used to be a hard dependency of TorchElastic. With the upstream, this is no longer the case since we have added a “standalone” rendezvous based on c10d::Store. For more details, refer to the documentation.

(Beta) Distributed Training Updates
In addition to TorchElastic, there are a number of beta features available in the distributed package:
(Beta) CUDA support is available in RPC: Compared to CPU RPC and general-purpose RPC frameworks, CUDA RPC is a much more efficient way for P2P Tensor communication. It is built on top of TensorPipe which can automatically choose a communication channel for each Tensor based on Tensor device type and channel availability on both the caller and the callee. Existing TensorPipe channels cover NVLink, InfiniBand, SHM, CMA, TCP, etc. See this recipe for how CUDA RPC helps to attain 34x speedup compared to CPU RPC.
(Beta) ZeroRedundancyOptimizer: ZeroRedundancyOptimizer can be used in conjunction with DistributedDataParallel to reduce the size of per-process optimizer states. The idea of ZeroRedundancyOptimizer comes from DeepSpeed/ZeRO project and Marian, where the optimizer in each process owns a shard of model parameters and their corresponding optimizer states. When running step(), each optimizer only updates its own parameters, and then uses collective communication to synchronize updated parameters across all processes. Refer to this documentation and this tutorial to learn more.
(Beta) Support for profiling distributed collectives: PyTorch’s profiler tools, torch.profiler and torch.autograd.profiler, are able to profile distributed collectives and point-to-point communication primitives including allreduce, alltoall, allgather, send/recv, etc. This is enabled for all backends supported natively by PyTorch: gloo, mpi, and nccl. This can be used to debug performance issues, analyze traces that contain distributed communication, and gain insight into the performance of applications that use distributed training. To learn more, refer to this documentation.

Performance Optimization and Tooling
(Stable) Freezing API
Module Freezing is the process of inlining module parameters and attribute values as constants into the TorchScript internal representation. This allows further optimization and specialization of your program, both for TorchScript optimizations and lowering to other backends. It is used by the optimize_for_mobile API, ONNX, and others.

Freezing is recommended for model deployment. It helps TorchScript JIT optimizations optimize away overhead and bookkeeping that is necessary for training, tuning, or debugging PyTorch models. It enables graph fusions that are not semantically valid on non-frozen graphs, such as fusing Conv-BN. For more details, refer to the documentation.
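As a hedged sketch (not from the original post), freezing a scripted module typically looks like this:

```python
import torch
import torchvision

# Script the model in eval mode, then inline parameters/attributes as constants.
model = torchvision.models.resnet18().eval()
scripted = torch.jit.script(model)
frozen = torch.jit.freeze(scripted)

# The frozen module is used like any other TorchScript module.
out = frozen(torch.randn(1, 3, 224, 224))
```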
(Beta) PyTorch Profiler
The new PyTorch Profiler graduates to beta and leverages Kineto for GPU profiling and TensorBoard for visualization, and is now the standard across our tutorials and documentation. PyTorch 1.9 extends support for the new torch.profiler API to more builds, including Windows and Mac, and is recommended in most cases instead of the previous torch.autograd.profiler API. The new API supports existing profiler features, integrates with the CUPTI library (Linux-only) to trace on-device CUDA kernels, and provides support for long-running jobs, e.g.:

```python
def trace_handler(p):
    output = p.key_averages().table(sort_by="self_cuda_time_total", row_limit=10)
    print(output)
    p.export_chrome_trace("/tmp/trace_" + str(p.step_num) + ".json")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    # schedule argument specifies the iterations on which the profiler is active
    schedule=torch.profiler.schedule(
        wait=1,
        warmup=1,
        active=2),
    # on_trace_ready argument specifies the handler for the traces
    on_trace_ready=trace_handler
) as p:
    for idx in range(8):
        model(inputs)
        # profiler will trace iterations 2 and 3, and then 6 and 7 (counting from zero)
        p.step()
```

More usage examples can be found on the profiler recipe page. The PyTorch Profiler TensorBoard plugin has new features for:
* Distributed Training summary view with communications overview for NCCL
* GPU Utilization and SM Efficiency in Trace view and GPU operators view
* Memory Profiling view
* Jump to source when launched from Microsoft VSCode
* Ability to load traces from cloud object storage systems
(Beta) Inference Mode API
Inference Mode API allows significant speed-up for inference workloads while remaining safe and ensuring no incorrect gradients can ever be computed. It offers the best possible performance when no autograd is required. For more details, refer to the documentation for inference mode itself and the documentation explaining when to use it and the difference with no_grad mode.
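A hedged sketch of typical usage (not from the original post); inference mode is a context manager, similar in spirit to no_grad:

```python
import torch
import torchvision

model = torchvision.models.resnet18().eval()
x = torch.randn(1, 3, 224, 224)

# Tensors created here are inference tensors: no autograd tracking or version counting.
with torch.inference_mode():
    y = model(x)

print(y.requires_grad)  # False
```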
(Beta) torch.package
torch.package is a new way to package PyTorch models in a self-contained, stable format. A package will include both the model’s data (e.g. parameters, buffers) and its code (model architecture). Packaging a model with its full set of Python dependencies, combined with a description of a conda environment with pinned versions, can be used to easily reproduce training. Representing a model in a self-contained artifact will also allow it to be published and transferred throughout a production ML pipeline while retaining the flexibility of a pure-Python representation. For more details, refer to the documentation.
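A hedged sketch of the packaging flow; the archive and resource names below are illustrative assumptions, not the documented example:

```python
import torch
from torch.package import PackageExporter, PackageImporter

model = torch.nn.Linear(4, 2)

# Save the model's code and data into a self-contained archive.
with PackageExporter("my_model.pt") as exporter:
    # Rely on the target environment's torch install instead of packaging it.
    exporter.extern(["torch", "torch.**"])
    exporter.save_pickle("model", "model.pkl", model)

# Later (possibly in another environment), load it back.
importer = PackageImporter("my_model.pt")
loaded = importer.load_pickle("model", "model.pkl")
```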
(Prototype) prepare_for_inference
prepare_for_inference is a new prototype feature that takes in a module and performs graph-level optimizations to improve inference performance, depending on the device. It is meant to be a PyTorch-native option that requires minimal changes to user’s workflows. For more details, see the documentation for the Torchscript version here or the FX version here.

(Prototype) Profile-directed typing in TorchScript
TorchScript has a hard requirement for source code to have type annotations in order for compilation to be successful. For a long time, it was only possible to add missing or incorrect type annotations through trial and error (i.e., by fixing the type-checking errors generated by torch.jit.script one by one), which was inefficient and time consuming. Now, we have enabled profile directed typing for torch.jit.script by leveraging existing tools like MonkeyType, which makes the process much easier, faster, and more efficient. For more details, refer to the documentation.
Thanks for reading. If you’re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Facebook, Twitter, Medium, YouTube, or LinkedIn. Cheers! Team PyTorch
layout: blog_detail
title: 'Announcing PyTorch Developer Day 2020'
author: Team PyTorch

Starting this year, we plan to host two separate events for PyTorch: one for developers and users to discuss core technical development, ideas and roadmaps called “Developer Day”, and another for the PyTorch ecosystem and industry communities to showcase their work and discover opportunities to collaborate called “Ecosystem Day” (scheduled for early 2021).

The PyTorch Developer Day (#PTD2) is kicking off on November 12, 2020, 8AM PST with a full day of technical talks on a variety of topics, including updates to the core framework, new tools and libraries to support development across a variety of domains. You'll also see talks covering the latest research around systems and tooling in ML.
https://pytorch.org/blog/pytorch-developer-day-2020/
For Developer Day, we have an online networking event limited to PyTorch maintainers and contributors, long-time stakeholders, and experts in areas relevant to PyTorch’s future. Conversations from the networking event will strongly shape the future of PyTorch. Hence, invitations are required to attend the networking event. All talks will be livestreamed and available to the public.
* Livestream event page
* Apply for an invitation to the networking event

Visit the event website to learn more. We look forward to welcoming you to PyTorch Developer Day on November 12th!

Thank you,
The PyTorch team
layout: blog_detail
title: 'PyTorch 1.7 released w/ CUDA 11, New APIs for FFTs, Windows support for Distributed training and more'
author: Team PyTorch

Today, we’re announcing the availability of PyTorch 1.7, along with updated domain libraries. The PyTorch 1.7 release includes a number of new APIs including support for NumPy-Compatible FFT operations, profiling tools and major updates to both distributed data parallel (DDP) and remote procedure call (RPC) based distributed training. In addition, several features moved to stable including custom C++ Classes, the memory profiler, extensions via custom tensor-like objects, user async functions in RPC and a number of other features in torch.distributed such as Per-RPC timeout, DDP dynamic bucketing and RRef helper.

A few of the highlights include:
* CUDA 11 is now officially supported with binaries available at PyTorch.org
https://pytorch.org/blog/pytorch-1.7-released/
* Updates and additions to profiling and performance for RPC, TorchScript and Stack traces in the autograd profiler
* (Beta) Support for NumPy compatible Fast Fourier transforms (FFT) via torch.fft
* (Prototype) Support for Nvidia A100 generation GPUs and native TF32 format
* (Prototype) Distributed training on Windows now supported
* torchvision
  * (Stable) Transforms now support Tensor inputs, batch computation, GPU, and TorchScript
  * (Stable) Native image I/O for JPEG and PNG formats
  * (Beta) New Video Reader API
* torchaudio
  * (Stable) Added support for speech rec (wav2letter), text to speech (WaveRNN) and source separation (ConvTasNet)

To reiterate, starting PyTorch 1.6, features are now classified as stable, beta and prototype. You can see the detailed announcement here. Note that the prototype features listed in this blog are available as part of this release.
Find the full release notes here.

Front End APIs

[Beta] NumPy Compatible torch.fft module
FFT-related functionality is commonly used in a variety of scientific fields like signal processing. While PyTorch has historically supported a few FFT-related functions, the 1.7 release adds a new torch.fft module that implements FFT-related functions with the same API as NumPy. This new module must be imported to be used in the 1.7 release, since its name conflicts with the historic (and now deprecated) torch.fft function.

Example usage:

```python
>>> import torch.fft
>>> t = torch.arange(4)
>>> t
tensor([0, 1, 2, 3])

>>> torch.fft.fft(t)
tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])

>>> t = torch.tensor([0.+1.j, 2.+3.j, 4.+5.j, 6.+7.j])
>>> torch.fft.fft(t)
tensor([12.+16.j, -8.+0.j, -4.-4.j,  0.-8.j])
```

* Documentation
[Beta] C++ Support for Transformer NN Modules
Since PyTorch 1.5, we’ve continued to maintain parity between the python and C++ frontend APIs. This update allows developers to use the nn.transformer module abstraction from the C++ Frontend. Moreover, developers no longer need to save a module from python/JIT and load it into C++, as it can now be used in C++ directly.
* Documentation
[Beta] torch.set_deterministic
Reproducibility (bit-for-bit determinism) may help identify errors when debugging or testing a program. To facilitate reproducibility, PyTorch 1.7 adds the torch.set_deterministic(bool) function that can direct PyTorch operators to select deterministic algorithms when available, and to throw a runtime error if an operation may result in nondeterministic behavior. By default, the flag this function controls is false and there is no change in behavior, meaning PyTorch may implement its operations nondeterministically by default.

More precisely, when this flag is true:
* Operations known to not have a deterministic implementation throw a runtime error;
* Operations with deterministic variants use those variants (usually with a performance penalty versus the non-deterministic version); and
* torch.backends.cudnn.deterministic = True is set.
Note that this is necessary, but not sufficient, for determinism within a single run of a PyTorch program. Other sources of randomness like random number generators, unknown operations, or asynchronous or distributed computation may still cause nondeterministic behavior. See the documentation for torch.set_deterministic(bool) for the list of affected operations.
* RFC
* Documentation

Performance & Profiling
[Beta] Stack traces added to profiler
Users can now see not only operator name/inputs in the profiler output table but also where the operator is in the code. The workflow requires very little change to take advantage of this capability. The user uses the autograd profiler as before but with optional new parameters: with_stack and group_by_stack_n. Caution: regular profiling runs should not use this feature as it adds significant overhead.
* Detail
* Documentation
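A hedged sketch of the new parameters in use (illustrative only; see the linked documentation for the exact recipe):

```python
import torch
import torchvision
from torch.autograd import profiler

model = torchvision.models.resnet18()
x = torch.randn(1, 3, 224, 224)

# with_stack records source locations; group_by_stack_n groups rows by the top-n stack frames.
with profiler.profile(with_stack=True) as prof:
    model(x)

print(prof.key_averages(group_by_stack_n=5).table(sort_by="self_cpu_time_total", row_limit=10))
```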
Distributed Training & RPC

[Stable] TorchElastic now bundled into PyTorch docker image
TorchElastic offers a strict superset of the current torch.distributed.launch CLI with added features for fault-tolerance and elasticity. If the user is not interested in fault-tolerance, they can get the exact functionality/behavior parity by setting max_restarts=0, with the added convenience of auto-assigned RANK and MASTER_ADDR|PORT (versus manually specified in torch.distributed.launch).

By bundling torchelastic in the same docker image as PyTorch, users can start experimenting with TorchElastic right away without having to separately install torchelastic. In addition to convenience, this work is a nice-to-have when adding support for elastic parameters in the existing Kubeflow’s distributed PyTorch operators.
* Usage examples and how to get started
[Beta] Support for uneven dataset inputs in DDP
PyTorch 1.7 introduces a new context manager to be used in conjunction with models trained using torch.nn.parallel.DistributedDataParallel to enable training with uneven dataset sizes across different processes. This feature enables greater flexibility when using DDP and prevents the user from having to manually ensure dataset sizes are the same across different processes. With this context manager, DDP will handle uneven dataset sizes automatically, which can prevent errors or hangs at the end of training.
* RFC
* Documentation
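A hedged sketch of the join() context manager on a DDP-wrapped model; process group setup is omitted and the training function below is an assumption for illustration:

```python
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def train(rank: int, world_size: int, model: torch.nn.Module, loader):
    # Assumes the process group was initialized elsewhere, e.g.:
    # dist.init_process_group("gloo", rank=rank, world_size=world_size)
    ddp_model = DDP(model)

    # join() shadows collective communications for ranks that run out of
    # batches early, so ranks with more data do not hang waiting for them.
    with ddp_model.join():
        for data, target in loader:
            loss = torch.nn.functional.cross_entropy(ddp_model(data), target)
            loss.backward()
```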
[Beta] NCCL Reliability - Async Error/Timeout Handling
In the past, NCCL training runs would hang indefinitely due to stuck collectives, leading to a very unpleasant experience for users. This feature will abort stuck collectives and throw an exception/crash the process if a potential hang is detected. When used with something like torchelastic (which can recover the training process from the last checkpoint), users can have much greater reliability for distributed training. This feature is completely opt-in and sits behind an environment variable that needs to be explicitly set in order to enable this functionality (otherwise users will see the same behavior as before).
* RFC
* Documentation

[Beta] TorchScript rpc_remote and rpc_sync
torch.distributed.rpc.rpc_async has been available in TorchScript in prior releases. For PyTorch 1.7, this functionality is extended to the remaining two core RPC APIs, torch.distributed.rpc.rpc_sync and torch.distributed.rpc.remote. This completes the major RPC APIs targeted for support in TorchScript; it allows users to use the existing python RPC APIs within TorchScript (in a script function or script method, which releases the python Global Interpreter Lock) and could possibly improve application performance in multithreaded environments.
* Documentation
* Usage examples

[Beta] Distributed optimizer with TorchScript support
PyTorch provides a broad set of optimizers for training algorithms, and these have been used repeatedly as part of the python API. However, users often want to use multithreaded training instead of multiprocess training, as it provides better resource utilization and efficiency in the context of large scale distributed training (e.g. Distributed Model Parallel) or any RPC-based training application. Users couldn’t do this with the distributed optimizer before because we needed to get rid of the python Global Interpreter Lock (GIL) limitation to achieve this.
In PyTorch 1.7, we are enabling TorchScript support in the distributed optimizer to remove the GIL and make it possible to run the optimizer in multithreaded applications. The new distributed optimizer has the exact same interface as before, but it automatically converts optimizers within each worker into TorchScript to make each of them GIL-free. This is done by leveraging a functional optimizer concept and allowing the distributed optimizer to convert the computational portion of the optimizer into TorchScript. This will help use cases like distributed model parallel training and improve performance using multithreading.
Currently, the only optimizer that supports automatic conversion with TorchScript is Adagrad, and all other optimizers will still work as before without TorchScript support. We are working on expanding the coverage to all PyTorch optimizers and expect more to come in future releases. The usage to enable TorchScript support is automatic and exactly the same as the existing python APIs; here is an example of how to use this:

```python
import torch.distributed.autograd as dist_autograd
import torch.distributed.rpc as rpc
from torch import optim
from torch.distributed.optim import DistributedOptimizer

with dist_autograd.context() as context_id:
  # Forward pass.
  rref1 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 3))
  rref2 = rpc.remote("worker1", torch.add, args=(torch.ones(2), 1))
  loss = rref1.to_here() + rref2.to_here()

  # Backward pass.
  dist_autograd.backward(context_id, [loss.sum()])

  # Optimizer, pass in optim.Adagrad, DistributedOptimizer will
  # automatically convert/compile it to TorchScript (GIL-free)
  dist_optim = DistributedOptimizer(
     optim.Adagrad,
     [rref1, rref2],
     lr=0.05,
  )
  dist_optim.step(context_id)
```

* RFC
* Documentation

[Beta] Enhancements to RPC-based Profiling
Support for using the PyTorch profiler in conjunction with the RPC framework was first introduced in PyTorch 1.6. In PyTorch 1.7, the following enhancements have been made:
* Implemented better support for profiling TorchScript functions over RPC
* Achieved parity in terms of profiler features that work with RPC
* Added support for asynchronous RPC functions on the server-side (functions decorated with rpc.functions.async_execution).
Users are now able to use familiar profiling tools such as with torch.autograd.profiler.profile() and with torch.autograd.profiler.record_function; this works transparently with the RPC framework with full feature support, and profiles asynchronous functions as well as TorchScript functions.
* Design doc
* Usage examples
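As a rough illustration of these enhancements (not from the release notes; it assumes RPC has already been initialized and that a peer named "worker1" exists):
```python
import torch
import torch.distributed.rpc as rpc
from torch.autograd import profiler

# Profile a remote call much like a local one; RPC events appear in the trace.
with profiler.profile() as prof:
    with profiler.record_function("remote_add"):
        fut = rpc.rpc_async("worker1", torch.add, args=(torch.ones(2), 1))
        result = fut.wait()

print(prof.key_averages().table(sort_by="cpu_time_total"))
```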
[Prototype] Windows support for Distributed Training
PyTorch 1.7 brings prototype support for DistributedDataParallel and collective communications on the Windows platform. In this release, the support only covers Gloo-based ProcessGroup and FileStore. To use this feature across multiple machines, please provide a file from a shared file system in init_process_group.
```python
# initialize the process group
dist.init_process_group(
    "gloo",
    # multi-machine example:
    # init_method = "file://////{machine}/{share_folder}/file"
    init_method="file:///{your local file path}",
    rank=rank,
    world_size=world_size
)

model = DistributedDataParallel(local_model, device_ids=[rank])
```
* Design doc
* Documentation
* Acknowledgement (gunandrose4u)
Mobile
PyTorch Mobile supports both iOS and Android with binary packages available in Cocoapods and JCenter respectively. You can learn more about PyTorch Mobile here.
[Beta] PyTorch Mobile Caching allocator for performance improvements
On some mobile platforms, such as Pixel, we observed that memory is returned to the system more aggressively. This results in frequent page faults, as PyTorch, being a functional framework, does not maintain state for the operators; thus outputs are allocated dynamically on each execution of an op, for most ops. To ameliorate the performance penalties due to this, PyTorch 1.7 provides a simple caching allocator for CPU. The allocator caches allocations by tensor size and is currently available only via the PyTorch C++ API. The caching allocator itself is owned by the client, and thus the lifetime of the allocator is also maintained by client code. Such a client-owned caching allocator can then be used with a scoped guard, c10::WithCPUCachingAllocatorGuard, to enable the use of cached allocations within that scope. Example usage:
```cpp
#include .....
.....
c10::CPUCachingAllocator caching_allocator;
// Owned by client code. Can be a member of some client class so as to tie
// the lifetime of the caching allocator to that of the class.
.....
{
  c10::optional<c10::WithCPUCachingAllocatorGuard> caching_allocator_guard;
  if (FLAGS_use_caching_allocator) {
    caching_allocator_guard.emplace(&caching_allocator);
  }
  ....
  model.forward(..);
}
...
```
NOTE: The caching allocator is only available in mobile builds, thus using the caching allocator outside of mobile builds won't be effective.
* Documentation
* Usage examples
torchvision
[Stable] Transforms now support Tensor inputs, batch computation, GPU, and TorchScript
torchvision transforms now inherit from nn.Module and can be torchscripted and applied on torch Tensor inputs as well as on PIL images. They also support Tensors with batch dimensions and work seamlessly on CPU/GPU devices:
```python
import torch
import torchvision.transforms as T

# to fix random seed, use torch.manual_seed instead of random.seed
torch.manual_seed(12)

transforms = torch.nn.Sequential(
    T.RandomCrop(224),
    T.RandomHorizontalFlip(p=0.3),
    T.ConvertImageDtype(torch.float),
    T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
)
scripted_transforms = torch.jit.script(transforms)
# Note: we can similarly use T.Compose to define transforms
# transforms = T.Compose([...]) and
# scripted_transforms = torch.jit.script(torch.nn.Sequential(*transforms.transforms))

tensor_image = torch.randint(0, 256, size=(3, 256, 256), dtype=torch.uint8)
# works directly on Tensors
out_image1 = transforms(tensor_image)
# on the GPU
out_image1_cuda = transforms(tensor_image.cuda())

# with batches
batched_image = torch.randint(0, 256, size=(4, 3, 256, 256), dtype=torch.uint8)
out_image_batched = transforms(batched_image)

# and has torchscript support
out_image2 = scripted_transforms(tensor_image)
```
These improvements enable the following new features:
* support for GPU acceleration
* batched transformations e.g. as needed for videos
* transform multi-band torch tensor images (with more than 3-4 channels)
* torchscript transforms together with your model for deployment
**Note:** Exceptions for TorchScript support include `Compose`, `RandomChoice`, `RandomOrder`, `Lambda` and those applied on PIL images, such as `ToPILImage`.
[Stable] Native image IO for JPEG and PNG formats
torchvision 0.8.0 introduces native image reading and writing operations for JPEG and PNG formats. Those operators support TorchScript and return CxHxW tensors in uint8 format, and can thus now be part of your model for deployment in C++ environments.
```python
from torchvision.io import read_image

# tensor_image is a CxHxW uint8 Tensor
tensor_image = read_image('path_to_image.jpeg')

# or equivalently
from torchvision.io import read_file, decode_image
# raw_data is a 1d uint8 Tensor with the raw bytes
raw_data = read_file('path_to_image.jpeg')
tensor_image = decode_image(raw_data)

# all operators are torchscriptable and can be
# serialized together with your model torchscript code
scripted_read_image = torch.jit.script(read_image)
```
[Stable] RetinaNet detection model
This release adds pretrained models for RetinaNet with a ResNet50 backbone from Focal Loss for Dense Object Detection.
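As a quick, illustrative sketch of loading the new detection model (not from the release notes; the dummy input size is an arbitrary placeholder and pretrained=True triggers a weight download):
```python
import torch
import torchvision

# Load the pretrained RetinaNet with a ResNet50-FPN backbone
model = torchvision.models.detection.retinanet_resnet50_fpn(pretrained=True)
model.eval()

# Run inference on a dummy image; output is a list of dicts with
# 'boxes', 'scores' and 'labels'
images = [torch.rand(3, 512, 512)]
with torch.no_grad():
    predictions = model(images)
```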
[Beta] New Video Reader API
This release introduces a new video reading abstraction, which gives more fine-grained control of iteration over videos. It supports image and audio, and implements an iterator interface so that it is interoperable with other python libraries such as itertools.
```python
from torchvision.io import VideoReader

# stream indicates if reading from audio or video
reader = VideoReader('path_to_video.mp4', stream='video')
# can change the stream after construction
# via reader.set_current_stream

# to read all frames in a video starting at 2 seconds
for frame in reader.seek(2):
    # frame is a dict with "data" and "pts" metadata
    print(frame["data"], frame["pts"])

# because reader is an iterator you can combine it with itertools
from itertools import takewhile, islice
# read 10 frames starting from 2 seconds
for frame in islice(reader.seek(2), 10):
    pass

# or to return all frames between 2 and 5 seconds
for frame in takewhile(lambda x: x["pts"] < 5, reader):
    pass
```
Notes:
* In order to use the Video Reader API beta, you must compile torchvision from source and have ffmpeg installed in your system.
* The VideoReader API is currently released as beta and its API may change following user feedback.
torchaudio
With this release, torchaudio is expanding its support for models and end-to-end applications, adding a wav2letter training pipeline and end-to-end text-to-speech and source-separation pipelines. Please file an issue on GitHub to provide feedback on them.
[Stable] Speech Recognition
Building on the addition of the wav2letter model for speech recognition in the last release, we've now added an example wav2letter training pipeline with the LibriSpeech dataset.
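For reference, the wav2letter model itself can be instantiated directly from torchaudio's model zoo; the sketch below is illustrative only, and the class size and input shape are made-up placeholders rather than values from the training pipeline:
```python
import torch
from torchaudio.models import Wav2Letter

# A small acoustic model operating directly on waveforms
model = Wav2Letter(num_classes=40)

waveform = torch.randn(1, 1, 16000)   # (batch, channel, time) placeholder input
log_probs = model(waveform)           # per-frame class log-probabilities
```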
[Stable] Text-to-speech
With the goal of supporting text-to-speech applications, we added a vocoder based on the WaveRNN model, following the implementation from this repository. The original implementation was introduced in "Efficient Neural Audio Synthesis". We also provide an example WaveRNN training pipeline that uses the LibriTTS dataset added to torchaudio in this release.
[Stable] Source Separation
With the addition of the ConvTasNet model, based on the paper "Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation," torchaudio now also supports source separation. An example ConvTasNet training pipeline is provided with the wsj-mix dataset.
Cheers!
Team PyTorch
layout: blog_detail
title: 'Adding a Contributor License Agreement for PyTorch'
author: Team PyTorch

To ensure the ongoing growth and success of the framework, we're introducing the use of the Apache Contributor License Agreement (CLA) for PyTorch. We care deeply about the broad community of contributors who make PyTorch such a great framework, so we want to take a moment to explain why we are adding a CLA.
Why Does PyTorch Need a CLA?
CLAs help clarify that users and maintainers have the relevant rights to use and maintain code contributed to an open source project, while allowing contributors to retain ownership rights to their code.
PyTorch has grown from a small group of enthusiasts to what is now a global community with over 1,600 contributors from dozens of countries, each bringing their own diverse perspectives, values and approaches to collaboration. Looking forward, clarity about how this collaboration is happening is an important milestone for the framework as we continue to build a stronger, safer and more scalable community around PyTorch.
The text of the Apache CLA can be found here, together with an accompanying FAQ. The language in the PyTorch CLA is identical to the Apache template. Although CLAs have been the subject of significant discussion in the open source community, we are seeing that using a CLA, and particularly the Apache CLA, is now standard practice when projects and communities reach a certain scale. Popular projects that have adopted some type of CLA include: Visual Studio Code, Flutter, TensorFlow, kubernetes, Ubuntu, Django, Python, Go, Android and many others. What is Not Changing
PyTorch's BSD license is not changing. There is no impact to PyTorch users. CLAs will only be required for new contributions to the project. For past contributions, no action is necessary. Everything else stays the same, whether it's IP ownership, workflows, contributor roles or anything else that you've come to expect from PyTorch.
How the New CLA will Work
Moving forward, all contributors to projects under the PyTorch GitHub organization will need to sign a CLA to merge their contributions. If you've contributed to other Facebook Open Source projects, you may have already signed the CLA, and no action is required. If you have not signed the CLA, a GitHub check will prompt you to sign it before your pull requests can be merged. You can reach the CLA from this link.
If you're contributing as an individual, meaning the code is not something you worked on as part of your job, you should sign the individual contributor agreement. This agreement associates your GitHub username with future contributions and only needs to be signed once. If you're contributing as part of your employment, you may need to sign the corporate contributor agreement. Check with your legal team on filling this out; you will also need to include a list of GitHub IDs from your company.
As always, we continue to be humbled and grateful for all your support, and we look forward to scaling PyTorch together to even greater heights in the years to come.
Thank you!
Team PyTorch
layout: blog_detail
title: 'Feature Extraction in TorchVision using Torch FX'
author: Alexander Soare and Francisco Massa
featured-img: 'assets/images/fx-image2.png'

Introduction
FX based feature extraction is a new TorchVision utility that lets us access intermediate transformations of an input during the forward pass of a PyTorch Module. It does so by symbolically tracing the forward method to produce a graph where each node represents a single operation. Nodes are named in a human-readable manner such that one may easily specify which nodes they want to access.
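To get a feel for what "nodes with human-readable names" means, here is a small sketch using torch.fx directly; this is background illustration only, not the TorchVision utility described in this article, and the model choice and the sample node names are just examples:
```python
import torchvision
from torch import fx

# Symbolically trace a model; every operation in the forward pass becomes a graph node
model = torchvision.models.resnet18()
traced = fx.symbolic_trace(model)

# Each node has a readable name (e.g. "layer1_0_conv1") that identifies
# the intermediate result it produces
for node in list(traced.graph.nodes)[:8]:
    print(node.name, node.op)
```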
Did that all sound a little complicated? Not to worry, as there's a little in this article for everyone. Whether you're a beginner or an advanced deep-vision practitioner, chances are you will want to know about FX feature extraction. If you still want more background on feature extraction in general, read on. If you're already comfortable with that and want to know how to do it in PyTorch, skim ahead to Existing Methods in PyTorch: Pros and Cons. And if you already know about the challenges of doing feature extraction in PyTorch, feel free to skim forward to FX to The Rescue.
A Recap On Feature Extraction
We're all used to the idea of having a deep neural network (DNN) that takes inputs and produces outputs, and we don't necessarily think of what happens in between. Let's just consider a ResNet-50 classification model as an example:
Figure 1: ResNet-50 takes an image of a bird and transforms that into the abstract concept "bird". Source: Bird image from ImageNet.
We know, though, that there are many sequential "layers" within the ResNet-50 architecture that transform the input step-by-step. In Figure 2 below, we peek under the hood to show the layers within ResNet-50, and we also show the intermediate transformations of the input as it passes through those layers.
Figure 2: ResNet-50 transforms the input image in multiple steps. Conceptually, we may access the intermediate transformation of the image after each one of these steps. Source: Bird image from ImageNet.
Existing Methods In PyTorch: Pros and Cons
There were already a few ways of doing feature extraction in PyTorch prior to FX based feature extraction being introduced. To illustrate these, let's consider a simple convolutional neural network that does the following:
* Applies several "blocks", each with several convolution layers within.
* After several blocks, it uses a global average pool and flatten operation.
* Finally it uses a single output classification layer.
```python
import torch
from torch import nn


class ConvBlock(nn.Module):
    """
    Applies `num_layers` 3x3 convolutions each followed by ReLU
    then downsamples via 2x2 max pool.
    """

    def __init__(self, num_layers, in_channels, out_channels):
        super().__init__()
        self.convs = nn.ModuleList(
            [nn.Sequential(
                nn.Conv2d(in_channels if i==0 else out_channels, out_channels, 3, padding=1),
                nn.ReLU()
             ) for i in range(num_layers)]
        )
        self.downsample = nn.MaxPool2d(kernel_size=2, stride=2)

    def forward(self, x):
        for conv in self.convs:
            x = conv(x)
        x = self.downsample(x)
        return x


class CNN(nn.Module):
    """
    Applies several ConvBlocks each doubling the number of channels, and
    halving the feature map size, before taking a global average and classifying.
    """

    def __init__(self, in_channels, num_blocks, num_classes):
        super().__init__()
        first_channels = 64
        self.blocks = nn.ModuleList(
            [ConvBlock(
                2 if i==0 else 3,
                in_channels=(in_channels if i == 0 else first_channels*(2**(i-1))),
                out_channels=first_channels*(2**i))
             for i in range(num_blocks)]
        )
        self.global_pool = nn.AdaptiveAvgPool2d((1, 1))
        self.cls = nn.Linear(first_channels*(2**(num_blocks-1)), num_classes)

    def forward(self, x):
        for block in self.blocks:
            x = block(x)
        x = self.global_pool(x)
        x = x.flatten(1)
        x = self.cls(x)
        return x


model = CNN(3, 4, 10)
out = model(torch.zeros(1, 3, 32, 32))  # This will be the final logits over classes
```
Let's say we want to get the final feature map before global average pooling. We could do the following:
### Modify the forward method
```python
def forward(self, x):
    for block in self.blocks:
        x = block(x)
    self.final_feature_map = x
    x = self.global_pool(x)
    x = x.flatten(1)
    x = self.cls(x)
    return x
```
Or return it directly:
```python
def forward(self, x):
    for block in self.blocks:
        x = block(x)
    final_feature_map = x
    x = self.global_pool(x)
    x = x.flatten(1)
    x = self.cls(x)
    return x, final_feature_map
```
That looks pretty easy. But there are some downsides here, which all stem from the same underlying issue: modifying the source code is not ideal:
* It's not always easy to access and change given the practical considerations of a project.
* If we want flexibility (switching feature extraction on or off, or having variations on it), we need to further adapt the source code to support that.
* It's not always just a question of inserting a single line of code. Think about how you would go about getting the feature map from one of the intermediate blocks with the way I've written this module.
* Overall, we'd rather avoid the overhead of maintaining source code for a model, when we actually don't need to change anything about how it works.
One can see how this downside can start to get a lot more thorny when dealing with larger, more complicated models, and trying to get at features from within nested submodules.
Write a new module using the parameters from the original one
Following on from the example above, say we want to get a feature map from each block. We could write a new module like so:
```python
class CNNFeatures(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.blocks = backbone.blocks

    def forward(self, x):
        feature_maps = []
        for block in self.blocks:
            x = block(x)
            feature_maps.append(x)
        return feature_maps


backbone = CNN(3, 4, 10)
model = CNNFeatures(backbone)
out = model(torch.zeros(1, 3, 32, 32))  # This is now a list of Tensors, each representing a feature map
```
In fact, this is much like the method that TorchVision used internally to make many of its detection models. Although this approach solves some of the issues with modifying the source code directly, there are still some major downsides: