Speech extensions to fairseq

Language translation and audio processing are critical components in systems and applications such as search, translation, speech, and assistants. There has been tremendous progress in these fields recently thanks to the development of new architectures like transformers, as well as large-scale pretraining methods. We’ve extended fairseq, a framework for sequence-to-sequence applications such as language translation, to include support for end-to-end learning for speech and audio recognition tasks. These extensions to fairseq enable faster exploration and prototyping of new speech research ideas while offering a clear path to production. Get started with fairseq here.
Cloud provider and hardware ecosystem support

Cloud providers such as Amazon Web Services, Microsoft Azure, and Google Cloud provide extensive support for anyone looking to develop ML on PyTorch and deploy in production. We’re excited to share the general availability of Google Cloud TPU support and a newly launched integration with Alibaba Cloud. We’re also expanding hardware ecosystem support.
Google Cloud TPU support now broadly available. To accelerate the largest-scale machine learning (ML) applications deployed today and enable rapid development of the ML applications of tomorrow, Google created custom silicon chips called Tensor Processing Units (TPUs). When assembled into multi-rack ML supercomputers called Cloud TPU Pods, these TPUs can complete ML workloads in minutes or hours that previously took days or weeks on other systems. Engineers from Facebook, Google, and Salesforce worked together to enable and pilot Cloud TPU support in PyTorch, including experimental support for Cloud TPU Pods. PyTorch support for Cloud TPUs is also available in Colab. Learn more about how to get started with PyTorch on Cloud TPUs here.
Alibaba adds support for PyTorch in Alibaba Cloud. The initial integration involves a one-click solution for PyTorch 1.x, Data Science Workshop notebook service, distributed training with Gloo/NCCL, as well as seamless integration with Alibaba IaaS such as OSS, ODPS, and NAS. Together with the toolchain provided by Alibaba, we look forward to significantly reducing the overhead necessary for adoption, as well as helping Alibaba Cloud’s global customer base leverage PyTorch to develop new AI applications. ML hardware ecosystem expands. In addition to key GPU and CPU partners, the PyTorch ecosystem has also enabled support for dedicated ML accelerators. Updates from Intel and Habana showcase how PyTorch, connected to the Glow optimizing compiler, enables developers to utilize these market-specific solutions.
Growth in the PyTorch community

As an open source, community-driven project, PyTorch benefits from a wide range of contributors bringing new capabilities to the ecosystem. Here are some recent examples:

Mila SpeechBrain aims to provide an open source, all-in-one speech toolkit based on PyTorch. The goal is to develop a single, flexible, user-friendly toolkit that can be used to easily develop state-of-the-art systems for speech recognition (both end-to-end and HMM-DNN), speaker recognition, speech separation, multi-microphone signal processing (e.g., beamforming), self-supervised learning, and many others. Learn more
SpaCy is a new wrapping library with consistent and easy-to-use interfaces to several models, in order to extract features to power NLP pipelines. Support is provided via spaCy’s standard training API. The library also calculates an alignment so the transformer features can be related back to actual words instead of just wordpieces. Learn more

HuggingFace PyTorch-Transformers (formerly known as pytorch-pretrained-bert) is a library of state-of-the-art pretrained models for Natural Language Processing (NLP). The library currently contains PyTorch implementations, pretrained model weights, usage scripts, and conversion utilities for models such as BERT, GPT-2, RoBERTa, and DistilBERT. It has also grown quickly, with more than 13,000 GitHub stars and a broad set of users. Learn more
PyTorch Lightning is a Keras-like ML library for PyTorch. It leaves core training and validation logic to you and automates the rest. Reproducibility is a crucial requirement for many fields of research, including those based on ML techniques. As the number of research papers submitted to arXiv and conferences skyrockets into the tens of thousands, scaling reproducibility becomes difficult. Learn more. We recently held the first online Global PyTorch Summer Hackathon, where researchers and developers around the world were invited to build innovative new projects with PyTorch. Nearly 1,500 developers participated, submitting projects ranging from livestock disease detection to AI-powered financial assistants. The winning projects were:
Torchmeta, which provides extensions for PyTorch to simplify the development of meta-learning algorithms in PyTorch. It features a unified interface inspired by TorchVision for both few-shot classification and regression problems, to allow easy benchmarking on multiple data sets to aid with reproducibility. Open-Unmix, a system for end-to-end music demixing with PyTorch. Demixing separates the individual instruments or vocal track from any stereo recording. Endless AI-Generated Tees, a store featuring AI-generated T-shirt designs that can be purchased and delivered worldwide. The system uses a state-of-the-art generative model (StyleGAN) that was built with PyTorch and then trained on modern art. Visit pytorch.org to learn more and get started with PyTorch 1.3 and the latest libraries and ecosystem projects. We look forward to the contributions, exciting research advancements, and real-world applications that the community builds with PyTorch.
We’d like to thank the entire PyTorch team and the community for all their contributions to this work.
https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/
layout: blog_detail
title: "New library updates in PyTorch 1.12"
author: Team PyTorch
featured-img: ''

We are bringing a number of improvements to the current PyTorch libraries, alongside the PyTorch 1.12 release. These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch.

Summary:
- TorchVision - Added multi-weight support API, new architectures, model variants, and pretrained weights. See the release notes here.
- TorchAudio - Introduced beta features including a streaming API, a CTC beam search decoder, and new beamforming modules and methods. See the release notes here.
- TorchText - Extended support for scriptable BERT tokenizer and added datasets for the GLUE benchmark. See the release notes here.
- TorchRec - Added EmbeddingModule benchmarks, examples for TwoTower Retrieval, inference and sequential embeddings, metrics, an improved planner, and demonstrated integration with production components. See the release notes here.
- TorchX - Launch PyTorch trainers developed on local workspaces onto five different types of schedulers. See the release notes here.
- FBGemm - Added and improved kernels for Recommendation Systems inference workloads, including table batched embedding bag, jagged tensor operations, and other special-case optimizations.

TorchVision v0.13
Multi-weight support API

TorchVision v0.13 offers a new Multi-weight support API for loading different weights to the existing model builder methods:

```python
from torchvision.models import *

# Old weights with accuracy 76.130%
resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)

# New weights with accuracy 80.858%
resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)

# Best available weights (currently alias for IMAGENET1K_V2)
# Note that these weights may change across versions
resnet50(weights=ResNet50_Weights.DEFAULT)

# Strings are also supported
resnet50(weights="IMAGENET1K_V2")

# No weights - random initialization
resnet50(weights=None)
```

The new API bundles along with the weights important details such as the preprocessing transforms and meta-data such as labels. Here is how to make the most out of it:

```python
from torchvision.io import read_image
from torchvision.models import resnet50, ResNet50_Weights
img = read_image("test/assets/encode_jpeg/grace_hopper_517x606.jpg")

# Step 1: Initialize model with the best available weights
weights = ResNet50_Weights.DEFAULT
model = resnet50(weights=weights)
model.eval()

# Step 2: Initialize the inference transforms
preprocess = weights.transforms()

# Step 3: Apply inference preprocessing transforms
batch = preprocess(img).unsqueeze(0)

# Step 4: Use the model and print the predicted category
prediction = model(batch).squeeze(0).softmax(0)
class_id = prediction.argmax().item()
score = prediction[class_id].item()
category_name = weights.meta["categories"][class_id]
print(f"{category_name}: {100 * score:.1f}%")
```

You can read more about the new API in the docs. To provide your feedback, please use this dedicated Github issue.

New architectures and model variants
Classification

The Swin Transformer and EfficientNetV2 are two popular classification models which are often used for downstream vision tasks. This release includes 6 pre-trained weights for their classification variants. Here is how to use the new models:

```python
import torch
from torchvision.models import *

image = torch.rand(1, 3, 224, 224)
model = swin_t(weights="DEFAULT").eval()
prediction = model(image)

image = torch.rand(1, 3, 384, 384)
model = efficientnet_v2_s(weights="DEFAULT").eval()
prediction = model(image)
```

In addition to the above, we also provide new variants for existing architectures such as ShuffleNetV2, ResNeXt and MNASNet. The accuracies of all the new pre-trained models obtained on ImageNet-1K are seen below:

| Model | Acc@1 | Acc@5 |
|---|---|---|
| swin_t | 81.474 | 95.776 |
| swin_s | 83.196 | 96.36 |
| swin_b | 83.582 | 96.64 |
| efficientnet_v2_s | 84.228 | 96.878 |
| efficientnet_v2_m | 85.112 | 97.156 |
| efficientnet_v2_l | 85.808 | 97.788 |
| resnext101_64x4d | 83.246 | 96.454 |
| resnext101_64x4d (quantized) | 82.898 | 96.326 |
| shufflenet_v2_x1_5 | 72.996 | 91.086 |
| shufflenet_v2_x1_5 (quantized) | 72.052 | 90.700 |
| shufflenet_v2_x2_0 | 76.230 | 93.006 |
| shufflenet_v2_x2_0 (quantized) | 75.354 | 92.488 |
| mnasnet0_75 | 71.180 | 90.496 |
| mnasnet1_3 | 76.506 | 93.522 |

We would like to thank Hu Ye for contributing to TorchVision the Swin Transformer implementation.

(BETA) Object Detection and Instance Segmentation
We have introduced 3 new model variants for RetinaNet, FasterRCNN and MaskRCNN that include several post-paper architectural optimizations and improved training recipes. All models can be used similarly:

```python
import torch
from torchvision.models.detection import *

images = [torch.rand(3, 800, 600)]
model = retinanet_resnet50_fpn_v2(weights="DEFAULT")
# model = fasterrcnn_resnet50_fpn_v2(weights="DEFAULT")
# model = maskrcnn_resnet50_fpn_v2(weights="DEFAULT")
model.eval()
prediction = model(images)
```

Below we present the metrics of the new variants on COCO val2017. In parentheses we denote the improvement over the old variants:

| Model | Box mAP | Mask mAP |
|---|---|---|
| retinanet_resnet50_fpn_v2 | 41.5 (+5.1) | - |
| fasterrcnn_resnet50_fpn_v2 | 46.7 (+9.7) | - |
| maskrcnn_resnet50_fpn_v2 | 47.4 (+9.5) | 41.8 (+7.2) |
We would like to thank Ross Girshick, Piotr Dollar, Vaibhav Aggarwal, Francisco Massa and Hu Ye for their past research and contributions to this work.

New pre-trained weights

SWAG weights

The ViT and RegNet model variants offer new pre-trained SWAG (Supervised Weakly from hashtAGs) weights. One of the biggest of these models achieves a whopping 88.6% accuracy on ImageNet-1K. We currently offer two versions of the weights: 1) fine-tuned end-to-end weights on ImageNet-1K (highest accuracy) and 2) frozen trunk weights with a linear classifier fit on ImageNet-1K (great for transfer learning). Below we see the detailed accuracies of each model variant, followed by a short loading sketch:

| Model Weights | Acc@1 | Acc@5 |
|---|---|---|
| RegNet_Y_16GF_Weights.IMAGENET1K_SWAG_E2E_V1 | 86.012 | 98.054 |
| RegNet_Y_16GF_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 83.976 | 97.244 |
| RegNet_Y_32GF_Weights.IMAGENET1K_SWAG_E2E_V1 | 86.838 | 98.362 | | RegNet_Y_32GF_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 84.622 | 97.48 | | RegNet_Y_128GF_Weights.IMAGENET1K_SWAG_E2E_V1 | 88.228 | 98.682 | | RegNet_Y_128GF_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 86.068 | 97.844 | | ViT_B_16_Weights.IMAGENET1K_SWAG_E2E_V1 | 85.304 | 97.65 | | ViT_B_16_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 81.886 | 96.18 | | ViT_L_16_Weights.IMAGENET1K_SWAG_E2E_V1 | 88.064 | 98.512 | | ViT_L_16_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 85.146 | 97.422 | | ViT_H_14_Weights.IMAGENET1K_SWAG_E2E_V1 | 88.552 | 98.694 | | ViT_H_14_Weights.IMAGENET1K_SWAG_LINEAR_V1 | 85.708 | 97.73 |
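As a quick loading sketch (assuming only the new multi-weight API described above; the weight enum is one of those listed in the table):

```python
from torchvision.models import vit_b_16, ViT_B_16_Weights

# End-to-end fine-tuned SWAG weights (highest ImageNet-1K accuracy)
weights = ViT_B_16_Weights.IMAGENET1K_SWAG_E2E_V1
model = vit_b_16(weights=weights).eval()

# The bundled transforms carry the expected resolution and normalization
preprocess = weights.transforms()
```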
The SWAG weights are released under the Attribution-NonCommercial 4.0 International license. We would like to thank Laura Gustafson, Mannat Singh and Aaron Adcock for their work and support in making the weights available to TorchVision.

Model Refresh

The release of the Multi-weight support API enabled us to refresh the most popular models and offer more accurate weights. We improved each model by ~3 points on average. The new recipe was learned on top of ResNet50 and its details were covered in a previous blog post.

| Model | Old weights | New weights |
|---|---|---|
| efficientnet_b1 | 78.642 | 79.838 |
| mobilenet_v2 | 71.878 | 72.154 |
| mobilenet_v3_large | 74.042 | 75.274 | | regnet_y_400mf | 74.046 | 75.804 | | regnet_y_800mf | 76.42 | 78.828 | | regnet_y_1_6gf | 77.95 | 80.876 | | regnet_y_3_2gf | 78.948 | 81.982 | | regnet_y_8gf | 80.032 | 82.828 | | regnet_y_16gf | 80.424 | 82.886 | | regnet_y_32gf | 80.878 | 83.368 | | regnet_x_400mf | 72.834 | 74.864 | | regnet_x_800mf | 75.212 | 77.522 | | regnet_x_1_6gf | 77.04 | 79.668 | | regnet_x_3_2gf | 78.364 | 81.196 | | regnet_x_8gf | 79.344 | 81.682 | | regnet_x_16gf | 80.058 | 82.716 |
| regnet_x_32gf | 80.622 | 83.014 |
| resnet50 | 76.13 | 80.858 |
| resnet50 (quantized) | 75.92 | 80.282 |
| resnet101 | 77.374 | 81.886 |
| resnet152 | 78.312 | 82.284 |
| resnext50_32x4d | 77.618 | 81.198 |
| resnext101_32x8d | 79.312 | 82.834 |
| resnext101_32x8d (quantized) | 78.986 | 82.574 |
| wide_resnet50_2 | 78.468 | 81.602 |
| wide_resnet101_2 | 78.848 | 82.51 |

We would like to thank Piotr Dollar, Mannat Singh and Hugo Touvron for their past research and contributions to this work.
New Augmentations, Layers and Losses

This release brings a bunch of new primitives which can be used to produce SOTA models. Some highlights include the addition of the AugMix data-augmentation method, the DropBlock layer, the cIoU/dIoU losses and many more. We would like to thank Aditya Oke, Abhijit Deo, Yassine Alouini and Hu Ye for contributing to the project and for helping us keep TorchVision relevant and fresh.
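A rough usage sketch of a few of these primitives, assuming the torchvision 0.13 locations of AugMix under transforms and of DropBlock and the dIoU loss under ops:

```python
import torch
from torchvision.transforms import AugMix
from torchvision.ops import drop_block2d, distance_box_iou_loss

img = torch.randint(0, 256, (3, 224, 224), dtype=torch.uint8)
augmented = AugMix()(img)                       # AugMix data augmentation on a uint8 image tensor

feat = torch.randn(2, 8, 32, 32)
feat = drop_block2d(feat, p=0.3, block_size=5)  # DropBlock regularization on a feature map

boxes1 = torch.tensor([[0.0, 0.0, 10.0, 10.0]])
boxes2 = torch.tensor([[1.0, 1.0, 11.0, 11.0]])
loss = distance_box_iou_loss(boxes1, boxes2)    # dIoU loss between box pairs
```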
Documentation

We completely revamped our models documentation to make it easier to browse, and added key information such as supported image sizes and the image pre-processing steps of pre-trained weights. We now have a main model page with various summary tables of available weights, and each model has a dedicated page. Each model builder is also documented in its own page, with more details about the available weights, including accuracy, minimal image size, link to training recipes, and other valuable info. For comparison, our previous models docs are here. To provide feedback on the new documentation, please use the dedicated Github issue.
TorchAudio v0.12

(BETA) Streaming API

StreamReader is TorchAudio’s new I/O API. It is backed by FFmpeg†, and allows users to:
- Decode audio and video formats, including MP4 and AAC
- Handle input forms, such as local files, network protocols, microphones, webcams, screen captures and file-like objects
- Iterate over and decode chunk-by-chunk, while changing the sample rate or frame rate
- Apply audio and video filters, such as low-pass filter and image scaling
- Decode video with Nvidia's hardware-based decoder (NVDEC)

For usage details, please check out the documentation and tutorials:
- Media Stream API - Pt.1
- Media Stream API - Pt.2
- Online ASR with Emformer RNN-T
- Device ASR with Emformer RNN-T
- Accelerated Video Decoding with NVDEC

† To use StreamReader, FFmpeg libraries are required. Please install FFmpeg. The coverage of codecs depends on how these libraries are configured. TorchAudio official binaries are compiled to work with FFmpeg 4 libraries; FFmpeg 5 can be used if TorchAudio is built from source.

(BETA) CTC Beam Search Decoder

TorchAudio integrates the wav2letter CTC beam search decoder from Flashlight (GitHub). The addition of this inference-time decoder enables running end-to-end CTC ASR evaluation using TorchAudio utils.
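A rough sketch of the decoder workflow, modeled on the ASR inference tutorial mentioned below (the acoustic model choice and beam settings here are illustrative, not prescribed by this release):

```python
import torch
import torchaudio
from torchaudio.models.decoder import ctc_decoder, download_pretrained_files

# Acoustic model whose label set matches the LibriSpeech decoder files below
bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_10M
acoustic_model = bundle.get_model()

files = download_pretrained_files("librispeech-4-gram")  # tokens, lexicon, KenLM LM
decoder = ctc_decoder(
    lexicon=files.lexicon,   # pass None for a lexicon-free decoder
    tokens=files.tokens,
    lm=files.lm,             # optional KenLM n-gram language model
    nbest=3,
    beam_size=50,
)

waveform = torch.zeros(1, int(bundle.sample_rate))  # placeholder audio; use a real recording
emission, _ = acoustic_model(waveform)
best_hypothesis = decoder(emission)[0][0]
print(" ".join(best_hypothesis.words))
```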
Customizable lexicon and lexicon-free decoders are supported, and both can be run with or without a KenLM n-gram language model. TorchAudio additionally supports downloading token, lexicon, and pretrained KenLM files for the LibriSpeech dataset. For usage details, please check out the documentation and ASR inference tutorial.

(BETA) New Beamforming Modules and Methods

To improve flexibility in usage, the release adds two new beamforming modules under torchaudio.transforms: SoudenMVDR and RTFMVDR. The main differences from MVDR are:
- They use power spectral density (PSD) and relative transfer function (RTF) matrices as inputs instead of time-frequency masks. The module can be integrated with neural networks that directly predict complex-valued STFT coefficients of speech and noise.
- They add 'reference_channel' as an input argument of the forward method, to allow users to select the reference channel in model training or dynamically change the reference channel in inference.

Besides the two modules, new function-level beamforming methods are added under torchaudio.functional. These include:
- psd
- mvdr_weights_souden
- mvdr_weights_rtf
- rtf_evd
- rtf_power
- apply_beamforming

For usage details, please check out the documentation at torchaudio.transforms and torchaudio.functional and the Speech Enhancement with MVDR Beamforming tutorial.

TorchText v0.13

GLUE Datasets

We increased the number of datasets in TorchText from 22 to 30 by adding the remaining 8 datasets from the GLUE benchmark (SST-2 was already supported). The complete list of GLUE datasets is as follows:
- CoLA (paper): Single sentence binary classification acceptability task
- SST-2 (paper): Single sentence binary classification sentiment task
- MRPC (paper): Dual sentence binary classification paraphrase task
- QQP: Dual sentence binary classification paraphrase task
- STS-B (paper): Dual sentence to float regression sentence similarity task
- MNLI (paper): Sentence ternary classification NLI task
- QNLI (paper): Sentence binary classification QA and NLI tasks
- RTE (paper): Dual sentence binary classification NLI task
- WNLI (paper): Dual sentence binary classification coreference and NLI tasks

Scriptable BERT Tokenizer

TorchText has extended support for scriptable tokenizers by adding the WordPiece tokenizer used in BERT. It is one of the commonly used algorithms for splitting input text into sub-word units and was introduced in Japanese and Korean Voice Search (Schuster et al., 2012). TorchScriptability support allows users to embed the BERT text-preprocessing natively in C++ without needing a Python runtime. As TorchText now supports the CMAKE build system to natively link torchtext binaries with application code, users can easily integrate BERT tokenizers for deployment needs.
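A minimal sketch of the new tokenizer (the vocabulary URL below is an assumed example location, not something this release ships):

```python
import torch
from torchtext.transforms import BERTTokenizer

VOCAB_PATH = "https://huggingface.co/bert-base-uncased/resolve/main/vocab.txt"  # assumed vocab location
tokenizer = BERTTokenizer(vocab_path=VOCAB_PATH, do_lower_case=True, return_tokens=True)
print(tokenizer("Hello World, How are you!"))  # e.g. ['hello', 'world', ',', 'how', 'are', 'you', '!']

# Because the transform is TorchScriptable, it can also be scripted and exported
scripted = torch.jit.script(tokenizer)
```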
For usage details, please refer to the corresponding documentation.

TorchRec v0.2.0

EmbeddingModule + DLRM benchmarks

We added a set of benchmarking tests showing the performance characteristics of TorchRec’s base modules and of research models built out of TorchRec.

TwoTower Retrieval Example, with FAISS

We provide an example demonstrating training a distributed TwoTower (i.e. User-Item) Retrieval model that is sharded using TorchRec. The projected item embeddings are added to an IVFPQ FAISS index for candidate generation. The retrieval model and KNN lookup are bundled in a PyTorch model for efficient end-to-end retrieval.

Integrations

We demonstrate that TorchRec works out of the box with many components commonly used alongside PyTorch models in production-like systems, such as:
- Training a TorchRec model on Ray Clusters utilizing the TorchX Ray scheduler
- Preprocessing and DataLoading with NVTabular on DLRM
- Training a TorchRec model with on-the-fly preprocessing with TorchArrow, showcasing RecSys domain UDFs

Sequential Embeddings Example: Bert4Rec

We provide an example, using TorchRec, that reimplements the BERT4Rec paper, showcasing EmbeddingCollection for non-pooled embeddings. Using DistributedModelParallel we see a 35% QPS gain over conventional data parallelism.
(Beta) Planner

The TorchRec library includes a built-in planner that selects a near-optimal sharding plan for a given model. The planner attempts to identify the best sharding plan by evaluating a series of proposals which are statically analyzed and fed into an integer partitioner. The planner is able to automatically adjust plans for a wide range of hardware setups, allowing users to scale performance seamlessly from a local development environment to large-scale production hardware. See this notebook for a more detailed tutorial.
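A rough sketch of invoking the planner on a toy embedding module (the table sizes and topology are made up, and the module paths are assumed from the TorchRec distributed planner package):

```python
from torchrec.modules.embedding_configs import EmbeddingBagConfig
from torchrec.modules.embedding_modules import EmbeddingBagCollection
from torchrec.distributed.embeddingbag import EmbeddingBagCollectionSharder
from torchrec.distributed.planner import EmbeddingShardingPlanner, Topology

# A toy collection to plan for (names and sizes are illustrative)
ebc = EmbeddingBagCollection(tables=[
    EmbeddingBagConfig(name="t0", embedding_dim=64, num_embeddings=100_000, feature_names=["f0"]),
])

planner = EmbeddingShardingPlanner(topology=Topology(world_size=8, compute_device="cuda"))
plan = planner.plan(module=ebc, sharders=[EmbeddingBagCollectionSharder()])
print(plan)  # the per-table sharding decisions chosen by the partitioner
```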
(Beta) Inference

TorchRec Inference is a C++ library that supports multi-GPU inference. The TorchRec library is used to shard models written and packaged in Python via torch.package (an alternative to TorchScript). The torch.deploy library is used to serve inference from C++ by launching multiple Python interpreters carrying the packaged model, thus bypassing the GIL. Two models are provided as examples: DLRM multi-GPU (sharded via TorchRec) and DLRM single-GPU.

(Beta) RecMetrics

RecMetrics is a metrics library that collects common utilities and optimizations for Recommendation models. It extends torchmetrics, and includes:
- A centralized metrics module that allows users to add new metrics
- Commonly used metrics, including AUC, Calibration, CTR, MSE/RMSE, NE & Throughput
- Optimization of metrics-related operations to reduce the overhead of metric computation
- Checkpointing

(Prototype) Single process Batched + Fused Embeddings

Previously TorchRec’s abstractions (EmbeddingBagCollection/EmbeddingCollection) over FBGEMM kernels, which provide benefits such as table batching, optimizer fusion, and UVM placement, could only be used in conjunction with DistributedModelParallel. We’ve decoupled these notions from sharding, and introduced the FusedEmbeddingBagCollection, which can be used as a standalone module, with all of the above features, and can also be sharded.
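For context, a minimal sketch of the (un-fused) EmbeddingBagCollection interface that the fused variant is described as sharing (module paths, table names and sizes are illustrative, not part of this release's announcement):

```python
import torch
from torchrec.modules.embedding_configs import EmbeddingBagConfig
from torchrec.modules.embedding_modules import EmbeddingBagCollection
from torchrec.sparse.jagged_tensor import KeyedJaggedTensor

ebc = EmbeddingBagCollection(tables=[
    EmbeddingBagConfig(name="t0", embedding_dim=8, num_embeddings=100, feature_names=["f0"]),
])

# Two samples for feature "f0": ids [5, 7] and [42]
features = KeyedJaggedTensor.from_lengths_sync(
    keys=["f0"],
    values=torch.tensor([5, 7, 42]),
    lengths=torch.tensor([2, 1]),
)
pooled = ebc(features)     # KeyedTensor of pooled embeddings
print(pooled["f0"].shape)  # torch.Size([2, 8])
```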
TorchX v0.2.0

TorchX is a job launcher that makes it easier to run PyTorch in distributed training clusters with many scheduler integrations including Kubernetes and Slurm. We're excited to release TorchX 0.2.0 with a number of improvements. TorchX is currently being used in production in both on-premise and cloud environments. Check out the quickstart to start launching local and remote jobs.

Workspaces

TorchX now supports workspaces, which allow users to easily launch training jobs using their local workspace. TorchX can automatically build a patch with your local training code on top of a base image to minimize iteration time and time to training.
.torchxconfig

Specifying options in .torchxconfig saves you from having to type long CLI commands each time you launch a job. You can also define project-level generic configs and drop a config file in your home directory for user-level overrides.

Expanded Scheduler Support

TorchX now supports AWS Batch and Ray (experimental) schedulers in addition to our existing integrations.

Distributed Training On All Schedulers

The TorchX dist.ddp component now works on all schedulers without any configuration. Distributed training workers will automatically discover each other when using torchelastic via the builtin dist.ddp component.
Hyper Parameter Optimization

TorchX integrates with Ax to let you scale hyper-parameter optimizations (HPO) by launching the search trials onto remote clusters.

File and Device Mounts

TorchX now supports remote filesystem mounts and custom devices. This enables your PyTorch jobs to efficiently access cloud storage such as NFS or Lustre. The device mounts enable usage of network accelerators like InfiniBand and custom inference/training accelerators.

FBGemm v0.2.0

The FBGEMM library contains optimized kernels meant to improve the performance of PyTorch workloads. We’ve added a number of new features and optimizations over the last few months that we are excited to report.
Inference Table Batched Embedding (TBE)

The table batched embedding bag (TBE) operator is an important base operation for embedding lookup for recommendation system inference on GPU. We added the following enhancements for performance and flexibility:
- Alignment restriction removed: embedding dimension * data type size previously had to be a multiple of 4B; now it only needs to be a multiple of 1B.
- Unified Virtual Memory (UVM) caching kernel optimizations: UVM caching kernels now scale linearly with the number of tables using UVM caching (previously the overhead was similar to having all tables use UVM caching), and the UVM caching kernel overhead is much smaller than before.
Inference FP8 Table Batched Embedding (TBE)

The table batched embedding bag (TBE) previously supported FP32, FP16, INT8, INT4, and INT2 embedding weight types. While these weight types work well in many models, we have integrated FP8 weight types (in both GPU and CPU operations) to allow for numerical and performance evaluations of FP8 in our models. Compared to INT8, FP8 does not require the additional bias and scale storage and calculations. Additionally, the next generation of H100 GPUs has FP8 support on Tensor Cores (mainly matmul ops).
Jagged Tensor Kernels

We added optimized kernels to speed up TorchRec JaggedTensor. The purpose of JaggedTensor is to handle the case where one dimension of the input data is “jagged”, meaning that each consecutive row in a given dimension may be a different length, which is often the case with sparse feature inputs in recommendation systems. The internal representation is shown below:
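The original figure does not carry over here; as a rough stand-in, the values/lengths/offsets idea can be sketched in plain PyTorch (the tensor contents are made up):

```python
import torch

# Three "rows" of different lengths (the jagged dimension), stored flat:
values = torch.tensor([1, 2, 3, 4, 5, 6])                    # all elements, concatenated
lengths = torch.tensor([3, 1, 2])                            # length of each row
offsets = torch.cat([torch.tensor([0]), lengths.cumsum(0)])  # [0, 3, 4, 6]

# A dense (padded) view, which the new FBGEMM ops convert to and from efficiently
rows = [values[offsets[i]:offsets[i + 1]] for i in range(len(lengths))]
dense = torch.nn.utils.rnn.pad_sequence(rows, batch_first=True)
print(dense)  # tensor([[1, 2, 3], [4, 0, 0], [5, 6, 0]])
```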
We added ops for converting jagged tensors from sparse to dense formats and back, performing matrix multiplications with jagged tensors, and elementwise ops.
Optimized permute102-baddbmm-permute102

It is difficult to fuse various matrix multiplications where the batch size is not the batch size of the model; switching the batch dimension is a quick workaround. We created the permute102_baddbmm_permute102 operation that switches the first and the second dimension, performs the batched matrix multiplication and then switches back. Currently we only support the forward pass with the FP16 data type and will support FP32 and the backward pass in the future.
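A rough, unfused reference in plain PyTorch of the computation described above (shapes are illustrative, and FP32 is used here for simplicity even though the fused kernel currently targets FP16):

```python
import torch

M, B, K, N = 32, 8, 64, 16  # arbitrary sizes; B is the "other" batch dimension
A = torch.randn(M, B, K)    # batch dimension sits in dim 1, not dim 0
W = torch.randn(B, K, N)
bias = torch.randn(B, 1, N)

# permute (1, 0, 2) -> batched matmul with bias -> permute back
out = torch.baddbmm(bias, A.permute(1, 0, 2), W).permute(1, 0, 2)  # shape (M, B, N)
```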
Optimized index_select for dim 0 index selection

index_select is normally used as part of a sparse operation. While PyTorch supports a generic index_select for an arbitrary-dimension index selection, its performance for a special case like dim 0 index selection is suboptimal. For this reason, we implemented a specialized index_select for dim 0. In some cases, we have observed a 1.4x performance gain from FBGEMM’s index_select compared to the one from PyTorch (using a uniform index distribution). More details about the implementation can be found on our GitHub page and tutorials.
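The specialized case is the plain dim-0 gather shown below (a trivial sketch with made-up sizes):

```python
import torch

src = torch.randn(1_000_000, 64)
idx = torch.randint(0, src.shape[0], (4096,))
out = torch.index_select(src, 0, idx)  # the dim-0 selection pattern that FBGEMM specializes
```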
Thanks for reading. If you’re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Twitter, Medium, YouTube, and LinkedIn.

Cheers!
Team PyTorch
https://pytorch.org/blog/pytorch-1.12-new-library-releases/
layout: blog_detail
title: 'The torch.linalg module: Accelerated Linear Algebra with Autograd in PyTorch'
author: Mike Ruberry, Ivan Yashchuk, Xiao Wang, Mario Lezcano and Natalia Gimelshein
featured-img: 'assets/images/cholesky-decomposition.png'

Linear algebra is essential to deep learning and scientific computing, and it’s always been a core part of PyTorch. PyTorch 1.9 extends PyTorch’s support for linear algebra operations with the torch.linalg module. This module, documented here, has 26 operators, including faster and easier to use versions of older PyTorch operators, every function from NumPy’s linear algebra module extended with accelerator and autograd support, and a few operators that are completely new. This makes the torch.linalg module immediately familiar to NumPy users and an exciting update to PyTorch’s linear algebra support.
NumPy-like linear algebra in PyTorch

If you’re familiar with NumPy’s linear algebra module then it’ll be easy to start using torch.linalg. In most cases it’s a drop-in replacement. Let’s look at drawing samples from a multivariate normal distribution using the Cholesky decomposition as a motivating example to demonstrate this:

```python
import numpy as np

# Creates inputs
np.random.seed(0)
mu_np = np.random.rand(4)
L = np.random.rand(4, 4)
# Covariance matrix sigma is positive-definite
sigma_np = L @ L.T + np.eye(4)
normal_noise_np = np.random.standard_normal(mu_np.size)

def multivariate_normal_sample_np(mu, sigma, normal_noise):
    return mu + np.linalg.cholesky(sigma) @ normal_noise

print("Random sample: ", multivariate_normal_sample_np(mu_np, sigma_np, normal_noise_np))
: Random sample:  [2.9502426 1.78518077 1.83168697 0.90798228]
```
Now let’s see the same sampler implemented in PyTorch:

```python
import torch

def multivariate_normal_sample_torch(mu, sigma, normal_noise):
    return mu + torch.linalg.cholesky(sigma) @ normal_noise
```

The two functions are identical, and we can validate their behavior by calling the function with the same arguments wrapped as PyTorch tensors:

```python
# NumPy arrays are wrapped as tensors and share their memory
mu_torch = torch.from_numpy(mu_np)
sigma_torch = torch.from_numpy(sigma_np)
normal_noise_torch = torch.from_numpy(normal_noise_np)

multivariate_normal_sample_torch(mu_torch, sigma_torch, normal_noise_torch)
: tensor([2.9502, 1.7852, 1.8317, 0.9080], dtype=torch.float64)
```

The only difference is in how PyTorch prints tensors by default.
The Cholesky decomposition can also help us quickly compute the probability density function of the non-degenerate multivariate normal distribution. One of the expensive terms in that computation is the square root of the determinant of the covariance matrix. Using properties of the determinant and the Cholesky decomposition, however, we can calculate the same result faster than the naive computation. Here’s the NumPy program that demonstrates this:

```python
sqrt_sigma_det_np = np.sqrt(np.linalg.det(sigma_np))
sqrt_L_det_np = np.prod(np.diag(np.linalg.cholesky(sigma_np)))

print("|sigma|^0.5 = ", sqrt_sigma_det_np)
: |sigma|^0.5 =  4.237127491242027

print("|L| = ", sqrt_L_det_np)
: |L| =  4.237127491242028
```

And here’s the same validation in PyTorch:

```python
sqrt_sigma_det_torch = torch.sqrt(torch.linalg.det(sigma_torch))
sqrt_L_det_torch = torch.prod(torch.diag(torch.linalg.cholesky(sigma_torch)))
print("|sigma|^0.5 = ", sqrt_sigma_det_torch)
|sigma|^0.5 =  tensor(4.2371, dtype=torch.float64)

print("|L| = ", sqrt_L_det_torch)
|L| =  tensor(4.2371, dtype=torch.float64)
```

We can measure the difference in run time using PyTorch’s built-in benchmark utility:

```python
import torch.utils.benchmark as benchmark

t0 = benchmark.Timer(
    stmt='torch.sqrt(torch.linalg.det(sigma))',
    globals={'sigma': sigma_torch})

t1 = benchmark.Timer(
    stmt='torch.prod(torch.diag(torch.linalg.cholesky(sigma)))',
    globals={'sigma': sigma_torch})

print(t0.timeit(100))
torch.sqrt(torch.linalg.det(sigma))
  80.80 us
  1 measurement, 100 runs , 1 thread

print(t1.timeit(100))
torch.prod(torch.diag(torch.linalg.cholesky(sigma)))
  11.56 us
  1 measurement, 100 runs , 1 thread
```
This demonstrates that the approach using the Cholesky decomposition can be significantly faster. Behind the scenes, PyTorch’s linear algebra module uses OpenBLAS or MKL implementations of the LAPACK standard to maximize its CPU performance.

Autograd Support

PyTorch’s linear algebra module doesn’t just implement the same functions as NumPy’s linear algebra module (and a few more), it also extends them with autograd and CUDA support. Let’s look at a very simple program that just computes an inverse and the gradient of that operation to show how autograd works:

```python
t = torch.tensor(((1, 2), (3, 4)), dtype=torch.float32, requires_grad=True)
inv = torch.linalg.inv(t)
inv.backward(torch.ones_like(inv))
print(t.grad)
: tensor([[-0.5000,  0.5000],
          [ 0.5000, -0.5000]])
```

We can mimic the same computation in NumPy by defining the autograd formula ourselves:

```python
a = np.array(((1, 2), (3, 4)), dtype=np.float32)
inv_np = np.linalg.inv(a)
def inv_backward(result, grad):
    return -(result.transpose(-2, -1) @ (grad @ result.transpose(-2, -1)))

grad_np = inv_backward(inv_np, np.ones_like(inv_np))
print(grad_np)
[[-0.5  0.5]
 [ 0.5 -0.5]]
```

Of course, as programs become more complicated it’s convenient to have builtin autograd support, and PyTorch’s linear algebra module supports both real and complex autograd.

CUDA Support

Support for autograd and accelerators, like CUDA devices, is a core part of PyTorch. The torch.linalg module was developed with NVIDIA’s PyTorch and cuSOLVER teams, who helped optimize its performance on CUDA devices with the cuSOLVER, cuBLAS, and MAGMA libraries. These improvements make PyTorch’s CUDA linear algebra operations faster than ever. For example, let’s look at the performance of PyTorch 1.9’s torch.linalg.cholesky vs. PyTorch 1.8’s (now deprecated) torch.cholesky:
(The above charts were created using an Ampere A100 GPU with CUDA 11.3, cuSOLVER 11.1.1.58, and MAGMA 2.5.2. Matrices are in double precision.) These charts show that performance has increased significantly on larger matrices, and that batched performance is better across the board. Other linear algebra operations, including torch.linalg.qr and torch.linalg.lstsq, have also had their CUDA performance improved.
Beyond NumPy

In addition to offering all the functions in NumPy’s linear algebra module with support for autograd and accelerators, torch.linalg has a few new functions of its own. NumPy’s linalg.norm does not allow users to compute vector norms over arbitrary subsets of dimensions, so to enable this functionality we added torch.linalg.vector_norm. We’ve also started modernizing other linear algebra functionality in PyTorch, so we created torch.linalg.householder_product to replace the older torch.orgqr, and we plan to continue adding more linear algebra functionality in the future, too.
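A small sketch of the arbitrary-subset reduction that motivated torch.linalg.vector_norm:

```python
import torch

t = torch.randn(2, 3, 4)
# 2-norm reduced over dims 0 and 2 -- a subset np.linalg.norm cannot express directly
norms = torch.linalg.vector_norm(t, ord=2, dim=(0, 2))
print(norms.shape)  # torch.Size([3])
```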
The Future of Linear Algebra in PyTorch

The torch.linalg module is fast and familiar, with great support for autograd and accelerators. It’s already being used in libraries like botorch, too. But we’re not stopping here. We plan to continue updating more of PyTorch’s existing linear algebra functionality (like torch.lobpcg) and offering more support for low-rank and sparse linear algebra. We also want to hear your feedback on how we can improve, so start a conversation on the forum or file an issue on our GitHub and share your thoughts. We look forward to hearing from you and seeing what the community does with PyTorch’s new linear algebra functionality!
https://pytorch.org/blog/torch-linalg-autograd/
layout: blog_detail
title: "Democratizing AI with PyTorch Foundation and ROCm™ support for PyTorch"
author: AMD

Last year, Meta announced that PyTorch joined the Linux Foundation as a neutral home for growing the machine learning project and community, with AMD representation as a part of the founding membership and governing board. PyTorch Foundation’s mission is to drive AI adoption by democratizing its software ecosystem through open source principles, aligning with the AMD core principle of an open software ecosystem. AMD strives to foster innovation through support for the latest generations of hardware, tools, libraries, and other components to simplify and accelerate adoption of AI across a broad range of scientific discoveries.
AMD, along with key PyTorch codebase developers (including those at Meta AI), delivered a set of updates to the ROCm™ open software ecosystem that brings stable support for AMD Instinct™ accelerators as well as many Radeon™ GPUs. This now gives PyTorch developers the ability to build their next great AI solutions leveraging AMD GPU accelerators & ROCm. The support from the PyTorch community in identifying gaps, prioritizing key updates, providing feedback on performance optimization and supporting our journey from “Beta” to “Stable” was immensely helpful, and we deeply appreciate the strong collaboration between the two teams at AMD and PyTorch. The move of ROCm support from “Beta” to “Stable” came in the PyTorch 1.12 release (June 2022) and brings added support to easily run PyTorch in a native environment without having to configure custom Docker images. This is a sign of confidence in the quality of support and performance of PyTorch using AMD Instinct and ROCm. The results of these collaborative efforts are evident in the performance measured on key industry benchmarks like Microsoft’s SuperBench shown below in Graph 1.
“We are excited to see the significant impact of developers at AMD to contribute to and extend features within PyTorch to make AI models run in a more performant, efficient, and scalable way. A great example of this is the thought-leadership around unified memory approaches between the framework and future hardware systems, and we look forward to seeing that feature progress.” - Soumith Chintala, PyTorch lead-maintainer and Director of Engineering, Meta AI
The progressive improvements on both the AMD CDNA™ architecture as well as ROCm and PyTorch show a single-GPU model throughput increase from the AMD Instinct MI100 to the latest generation AMD Instinct MI200 family GPUs, going from ROCm 4.2 to ROCm 5.3 and from PyTorch 1.7 to PyTorch 1.12.

Graph 1: ML model performance over generation using Microsoft Superbench Suite 1, 2, 3

Below are a few of the key updates for ROCm support since the PyTorch 1.12 release.

Full Continuous Integration (CI) for ROCm on PyTorch
With the move of ROCm support for PyTorch from “Beta” to “Stable,” all function and feature commits are now verified through a full Continuous Integration (CI) process. The CI process helps ensure the proper build and test process ahead of an expected Docker and PIP wheel release, with stable commits forthcoming.

Support for Kineto Profiler

The addition of Kineto profiler support to ROCm now helps developers and users understand performance bottlenecks through effective diagnosis and profiling tools. The tool also provides recommendations to improve known issues and visualization through the TensorBoard UI.
Key PyTorch Libraries support added

PyTorch ecosystem libraries like TorchText (text classification), TorchRec (libraries for recommender systems - RecSys), TorchVision (computer vision), and TorchAudio (audio and signal processing) are fully supported since ROCm 5.1 and upstreamed with PyTorch 1.12. Key libraries provided with the ROCm software stack, including MIOpen (convolution models), RCCL (ROCm Collective Communications) and rocBLAS (BLAS for transformers), were further optimized to offer new potential efficiencies and higher performance.
MIOpen innovates on several fronts, such as implementing fusion to optimize for memory bandwidth and GPU launch overheads, providing an auto-tuning infrastructure to overcome the large design space of problem configurations, and implementing different algorithms to optimize convolutions for different filter and input sizes. MIOpen is one of the first libraries to publicly support the bfloat16 data-type for convolutions, allowing efficient training at lower precision while maintaining expected accuracy.
RCCL (pronounced "Rickle") is a stand-alone library of standard collective communication routines for GPUs, implementing all-reduce, all-gather, reduce, broadcast, reduce-scatter, gather, scatter, and all-to-all. There is support for direct GPU-to-GPU send and receive operations. It has been optimized to achieve high bandwidth on platforms using PCIe®, Infinity Fabric™ (GPU to GPU) as well as networking using InfiniBand Verbs or TCP/IP sockets. RCCL supports an arbitrary number of GPUs installed in single or multiple nodes and can be used in either single- or multi-process (e.g., MPI) applications.
Along with the above key highlights, over 50 feature and functionality improvements were completed jointly between AMD and PyTorch to add stable support for ROCm. These include improvements to tools, compilers, runtime, graph optimizations through TorchScript, INT8 quant path usage, and ONNX runtime integration, including support for the Navi 21 based Radeon™ PRO datacenter graphics card, to name a few.

AITemplate Inference Engine
MetaAI recently published a blog announcing the release of its open source AITemplate (link) for a unified inference system supporting AMD Instinct GPU accelerators using the AMD ROCm stack. This Python-based framework can help significantly improve performance through increased utilization of AMD matrix cores for transformer blocks. This is achieved through the AMD Composable Kernel (CK) library, which provides performance-critical kernels for ML AI workloads across multiple architectures including GPUs and CPUs through HIP & C++. Moreover, AITemplate also provides out-of-the-box support for widely used AI models like BERT, ResNet, Vision Transformer, Stable Diffusion etc., simplifying the deployment process through these pretrained models.

What’s coming with future ROCm releases?
Unified memory models for CPU + GPU

As system architecture evolves to address the complexity of large problem sizes and data sets, memory management becomes a key performance bottleneck that needs a cohesive strategy, addressed through innovations at both the hardware and software levels. AMD is uniquely positioned to address this problem with its effective data center solutions integrating AMD EPYC™ CPU cores with its AMD Instinct GPU compute units in a truly unified datacenter APU (Accelerated Processing Unit) form factor, set to be launched in 2H 2023. The software work to leverage the unified CPU + GPU memory has already started in collaboration with the PyTorch team, to enable the usage of a fast, low-latency, synchronized memory model that enables not only AMD but also other AI accelerators to address the complex memory management problem of today. We are looking forward to this joint effort and announcement soon.
Acknowledgement

The content in this blog highlights the joint work between AMD and key PyTorch contributors, including Meta, working on many of the core features, as well as Microsoft enabling ONNX Runtime support. We are looking forward to working with the other founding members at the PyTorch Foundation on the next steps and improvements to democratize and grow adoption of PyTorch across the industry.

CAUTIONARY STATEMENT
This blog contains forward-looking statements concerning Advanced Micro Devices, Inc. (AMD) such as the availability, timing and expected benefits of an AMD datacenter APU form factor, which are made pursuant to the Safe Harbor provisions of the Private Securities Litigation Reform Act of 1995. Forward-looking statements are commonly identified by words such as "would," "may," "expects," "believes," "plans," "intends," "projects" and other terms with similar meaning. Investors are cautioned that the forward-looking statements in this blog are based on current beliefs, assumptions and expectations, speak only as of the date of this blog and involve risks and uncertainties that could cause actual results to differ materially from current expectations. Such statements are subject to certain known and unknown risks and uncertainties, many of which are difficult to predict and generally beyond AMD's control, that could cause actual results and other future events to differ materially from those expressed in, or implied or projected by, the forward-looking information and statements. Investors are urged to review in detail the risks and uncertainties in AMD’s Securities and Exchange Commission filings, including but not limited to AMD’s most recent reports on Forms 10-K and 10-Q. AMD does not assume, and hereby disclaims, any obligation to update forward-looking statements made in this blog, except as may be required by law.
Endnotes

MI100D-01 SuperBench v0.5 model training results based on AMD internal testing as of 11/09/2022 measuring the total training throughput, at half precision, using a 2P AMD EPYC™ 7763 CPU server tested with 1x AMD Instinct™ MI100 (32GB HBM2e) 300W GPU, SBIOS 2.2, Ubuntu® 20.04.5 LTS, host ROCm™ 5.2.0, guest ROCm 4.2, PyTorch 1.7.0. Server manufacturers may vary configurations, yielding different results. Performance may vary based on factors including use of latest drivers and optimizations.
MI200D-01 SuperBench v0.6 model training results based on AMD internal testing as of 11/09/2022 measuring the total training throughput, at half precision, using a 2P AMD EPYC™ 7763 CPU server tested with 1x AMD Instinct™ MI210 (64GB HBM2e) 300W GPU, SBIOS 2.2, Ubuntu 20.04.5 LTS, host ROCm 5.3.0, guest ROCm 5.3, PyTorch 1.12. Server manufacturers may vary configurations, yielding different results. Performance may vary based on factors including use of latest drivers and optimizations.

MI200D-02: SuperBench v0.6 model training results based on AMD internal testing as of 11/09/2022 measuring the total training throughput, at half precision, using a 2P AMD EPYC™️ 7763 CPU server tested with 1x AMD Instinct™️ MI250 (128GB HBM2e) 560W GPU, SBIOS M12, Ubuntu 20.04 LTS, host ROCm 5.3.0, guest ROCm 5.3, PyTorch 1.12. Server manufacturers may vary configurations, yielding different results. Performance may vary based on factors including use of latest drivers and optimizations.
https://pytorch.org/blog/democratizing-ai-with-pytorch/
layout: blog_detail
title: "Introducing PyTorch Fully Sharded Data Parallel (FSDP) API"
author: Yanli Zhao, Rohan Varma, Chien-Chin Huang, Shen Li, Min Xu, Alban Desmaison
featured-img: "assets/images/pytorch-logo.jpg"

Recent studies have shown that large model training will be beneficial for improving model quality. During the last 3 years, model size grew 10,000 times, from BERT with 110M parameters to Megatron-2 with one trillion. However, training large AI models is not easy—aside from the need for large amounts of computing resources, software engineering complexity is also challenging. PyTorch has been working on building tools and infrastructure to make it easier.
PyTorch Distributed data parallelism is a staple of scalable deep learning because of its robustness and simplicity. It, however, requires the model to fit on one GPU. Recent approaches like DeepSpeed ZeRO and FairScale’s Fully Sharded Data Parallel allow us to break this barrier by sharding a model’s parameters, gradients and optimizer states across data parallel workers while still maintaining the simplicity of data parallelism. With PyTorch 1.11 we’re adding native support for Fully Sharded Data Parallel (FSDP), currently available as a prototype feature. Its implementation heavily borrows from FairScale’s version while bringing more streamlined APIs and additional performance improvements.
Scaling tests of PyTorch FSDP on AWS show it can scale up to train dense models with 1T parameters. Realized performance in our experiments reached 84 TFLOPS per A100 GPU for the GPT 1T model and 159 TFLOPS per A100 GPU for the GPT 175B model on an AWS cluster. The native FSDP implementation also dramatically improved model initialization time compared to FairScale’s original when CPU offloading was enabled. In future PyTorch versions, we’re going to enable users to seamlessly switch between DDP, ZeRO-1, ZeRO-2 and FSDP flavors of data parallelism, so that users can train different scales of models with simple configurations in the unified API.

How FSDP Works

FSDP is a type of data-parallel training, but unlike traditional data-parallel, which maintains a per-GPU copy of a model’s parameters, gradients and optimizer states, it shards all of these states across data-parallel workers and can optionally offload the sharded model parameters to CPUs. The figure below shows how FSDP works for 2 data-parallel processes:
Figure 1. FSDP workflow

Usually, model layers are wrapped with FSDP in a nested way, so that only layers in a single FSDP instance need to gather the full parameters to a single device during forward or backward computation. The gathered full parameters will be freed immediately after computation, and the freed memory can be used for the next layer’s computation. In this way, peak GPU memory could be saved and thus training can be scaled to use a larger model size or larger batch size. To further maximize memory efficiency, FSDP can offload the parameters, gradients and optimizer states to CPUs when the instance is not active in the computation.

Using FSDP in PyTorch

There are two ways to wrap a model with PyTorch FSDP. Auto wrapping is a drop-in replacement for DDP; manual wrapping needs minimal changes of model definition code with the ability to explore complex sharding strategies.
Auto Wrapping

Model layers should be wrapped in FSDP in a nested way to save peak memory and enable communication and computation overlapping. The simplest way to do it is auto wrapping, which can serve as a drop-in replacement for DDP without changing the rest of the code.

The fsdp_auto_wrap_policy argument allows specifying a callable function to recursively wrap layers with FSDP. The default_auto_wrap_policy function provided by PyTorch FSDP recursively wraps layers with more than 100M parameters. You can supply your own wrapping policy as needed. An example of writing a customized wrapping policy is shown in the FSDP API doc.

In addition, cpu_offload can optionally be configured to offload wrapped parameters to CPUs when these parameters are not used in computation. This can further improve memory efficiency at the cost of data transfer overhead between host and device.

The example below shows how FSDP is wrapped using auto wrapping.
```python
from torch.distributed.fsdp import (
    FullyShardedDataParallel,
    CPUOffload,
)
from torch.distributed.fsdp.wrap import (
    default_auto_wrap_policy,
)
from torch.nn.parallel import DistributedDataParallel
import torch.nn as nn

class model(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = nn.Linear(8, 4)
        self.layer2 = nn.Linear(4, 16)
        self.layer3 = nn.Linear(16, 4)

# DDP wrapping, shown for comparison
ddp_model = DistributedDataParallel(model())

# FSDP auto wrapping, with CPU offloading of the wrapped parameters
fsdp_model = FullyShardedDataParallel(
    model(),
    fsdp_auto_wrap_policy=default_auto_wrap_policy,
    cpu_offload=CPUOffload(offload_params=True),
)
```

Manual Wrapping

Manual wrapping can be useful to explore complex sharding strategies by applying wrap selectively to some parts of the model. Overall settings can be passed to the enable_wrap() context manager.

```python
from torch.distributed.fsdp import (
    FullyShardedDataParallel,
    CPUOffload,
)
from torch.distributed.fsdp.wrap import (
    enable_wrap,
    wrap,
)
import torch.nn as nn
https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/
pytorch blogs
```python
import torch.nn as nn

class model(nn.Module):
    def __init__(self):
        super().__init__()
        self.layer1 = wrap(nn.Linear(8, 4))
        self.layer2 = nn.Linear(4, 16)
        self.layer3 = wrap(nn.Linear(16, 4))

wrapper_kwargs = dict(cpu_offload=CPUOffload(offload_params=True))
with enable_wrap(wrapper_cls=FullyShardedDataParallel, **wrapper_kwargs):
    fsdp_model = wrap(model())
```

After wrapping the model with FSDP using one of the two above approaches, the model can be trained in a similar way as local training, like this:

```python
optim = torch.optim.Adam(fsdp_model.parameters(), lr=0.0001)
for sample, label in next_batch():
    out = fsdp_model(sample)
    loss = criterion(out, label)
    loss.backward()
    optim.step()
```

Benchmark Results
https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/
pytorch blogs
    optim.step()
```

Benchmark Results
We ran extensive scaling tests for 175B and 1T GPT models on AWS clusters using PyTorch FSDP. Each cluster node is an instance with 8 NVIDIA A100-SXM4-40GB GPUs, and nodes are interconnected via AWS Elastic Fabric Adapter (EFA) with 400 Gbps network bandwidth. GPT models are implemented using minGPT. A randomly generated input dataset is used for benchmarking purposes. All experiments ran with a 50K vocabulary size, fp16 precision and the SGD optimizer.

| Model | Number of layers | Hidden size | Attention heads | Model size, billions of parameters |
| --- | --- | --- | --- | --- |
https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/
pytorch blogs
| GPT 175B | 96 | 12288 | 96 | 175 |
| GPT 1T | 128 | 25600 | 160 | 1008 |

In addition to using FSDP with parameter CPU offloading in the experiments, the activation checkpointing feature in PyTorch is also applied in the tests. The maximum per-GPU throughput of 159 teraFLOP/s (51% of the NVIDIA A100 peak theoretical performance of 312 teraFLOP/s per GPU) is achieved with batch size 20 and sequence length 512 on 128 GPUs for the GPT 175B model; further increasing the number of GPUs degrades per-GPU throughput because of growing communication between the nodes.
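As a quick back-of-the-envelope check of the utilization figures quoted in this section (a reader-side calculation, not part of the benchmark code):

```python
# Peak A100 tensor-core throughput assumed to be 312 TFLOP/s per GPU, as
# stated above; the achieved TFLOP/s numbers come from the text.
A100_PEAK_TFLOPS = 312

print(f"GPT 175B: {159 / A100_PEAK_TFLOPS:.0%} of peak")  # ~51%
print(f"GPT 1T:   {84 / A100_PEAK_TFLOPS:.0%} of peak")   # ~27%
```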
https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/
pytorch blogs
For the GPT 1T model, the maximum per-GPU throughput of 84 teraFLOP/s (27% of peak) is achieved with batch size 4 and sequence length 2048 on 128 GPUs. Further increasing the number of GPUs, however, doesn't affect the per-GPU throughput much, because we observed that the largest bottleneck in 1T model training is not communication but the slow CUDA caching allocator when peak GPU memory approaches the limit. Using A100 80GB GPUs with larger memory capacity will mostly resolve this issue and also help scale the batch size to achieve much higher throughput.
Future Work
https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/
pytorch blogs
Future Work
In the next beta release, we are planning to add efficient distributed model/state checkpointing APIs, meta device support for large model materialization, and mixed-precision support inside FSDP computation and communication. We're also going to make it easier to switch between DDP, ZeRO-1, ZeRO-2 and FSDP flavors of data parallelism in the new API. To further improve FSDP performance, reductions in memory fragmentation and improvements in communication efficiency are also planned.
A Bit of History of 2 Versions of FSDP
https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/
pytorch blogs
A Bit of History of 2 Versions of FSDP
FairScale FSDP was released in early 2021 as part of the FairScale library. We then started the effort to upstream FairScale FSDP to PyTorch in PT 1.11, making it production-ready. We have selectively upstreamed and refactored key features from FairScale FSDP, redesigned the user interfaces and made performance improvements. In the near future, FairScale FSDP will stay in the FairScale repository for research projects, while generic and widely adopted features will be upstreamed to PyTorch incrementally and hardened accordingly. Meanwhile, PyTorch FSDP will focus more on production readiness and long-term support. This includes better integration with the ecosystem and improvements in performance, usability, reliability, debuggability and composability.
Acknowledgments
https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/
pytorch blogs
Acknowledgments
We would like to thank the authors of FairScale FSDP: Myle Ott, Sam Shleifer, Min Xu, Priya Goyal, Quentin Duval, Vittorio Caggiano, Tingting Markstrum, and Anjali Sridhar. Thanks to the Microsoft DeepSpeed ZeRO team for developing and popularizing sharded data parallel techniques. Thanks to Pavel Belevich, Jessica Choi, and Sisil Mehta for running experiments using PyTorch FSDP on different clusters. Thanks to Geeta Chauhan, Mahesh Yadav, Pritam Damania, and Dmytro Dzhulgakov for supporting this effort and for insightful discussions.
https://pytorch.org/blog/introducing-pytorch-fully-sharded-data-parallel-api/
pytorch blogs
layout: blog_detail
title: 'PyTorch library updates including new model serving library'
author: Team PyTorch

Along with the PyTorch 1.5 release, we are announcing new libraries for high-performance PyTorch model serving and tight integration with TorchElastic and Kubernetes. Additionally, we are releasing updated packages for torch_xla (Google Cloud TPUs), torchaudio, torchvision, and torchtext. All of these new libraries and enhanced capabilities are available today and accompany all of the core features released in PyTorch 1.5.
TorchServe (Experimental)
https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/
pytorch blogs
TorchServe (Experimental)
TorchServe is a flexible and easy-to-use library for serving PyTorch models in production, performantly and at scale. It is cloud and environment agnostic and supports features such as multi-model serving, logging, metrics, and the creation of RESTful endpoints for application integration. TorchServe was jointly developed by engineers from Facebook and AWS with feedback and engagement from the broader PyTorch community. The experimental release of TorchServe is available today. Some of the highlights include:

* Support for both Python-based and TorchScript-based models
* Default handlers for common use cases (e.g., image segmentation, text classification) as well as the ability to write custom handlers for other use cases
* Model versioning, the ability to run multiple versions of a model at the same time, and the ability to roll back to an earlier version
https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/
pytorch blogs
* The ability to package a model, learned weights, and supporting files (e.g., class mappings, vocabularies) into a single, persistent artifact (a.k.a. the “model archive”)
* Robust management capability, allowing full configuration of models, versions, and individual worker threads via command line, config file, or run-time API
* Automatic batching of individual inferences across HTTP requests
* Logging, including common metrics, and the ability to incorporate custom metrics
* Ready-made Dockerfile for easy deployment
* HTTPS support for secure deployment

To learn more about the APIs and the design of this feature, see the links below:
* See the full multi-node deployment reference architecture.
* The full documentation can be found here.
TorchElastic integration with Kubernetes (Experimental)
https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/
pytorch blogs
TorchElastic is a proven library for training large-scale deep neural networks within companies like Facebook, where having the ability to dynamically adapt to server availability and scale as new compute resources come online is critical. Kubernetes enables customers using machine learning frameworks like PyTorch to run training jobs distributed across fleets of powerful GPU instances like the Amazon EC2 P3. Distributed training jobs, however, are not fault-tolerant, and a job cannot continue if a node failure or reclamation interrupts training. Further, jobs cannot start without acquiring all required resources, or scale up and down without being restarted. This lack of resiliency and flexibility results in increased training time and costs from idle resources. TorchElastic addresses these limitations by enabling distributed training jobs to be executed in a fault-tolerant and elastic manner. Until today, Kubernetes users needed to manage the Pods and Services required for TorchElastic training jobs manually.
https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/
pytorch blogs
Through a joint collaboration between engineers at Facebook and AWS, TorchElastic, which adds elasticity and fault tolerance, is now supported on vanilla Kubernetes and through the managed EKS service from AWS. To learn more, see the TorchElastic repo for the controller implementation and docs on how to use it.
torch_xla 1.5 now available
https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/
pytorch blogs
torch_xla 1.5 now available
torch_xla is a Python package that uses the XLA linear algebra compiler to accelerate the PyTorch deep learning framework on Cloud TPUs and Cloud TPU Pods. torch_xla aims to give PyTorch users the ability to do everything they can do on GPUs on Cloud TPUs as well, while minimizing changes to the user experience. The project began with a conversation at NeurIPS 2017 and gathered momentum in 2018 when teams from Facebook and Google came together to create a proof of concept. We announced this collaboration at PTDC 2018 and made the PyTorch/XLA integration broadly available at PTDC 2019. The project already has 28 contributors, nearly 2k commits, and a repo that has been forked more than 100 times.
https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/
pytorch blogs
This release of torch_xla is aligned and tested with PyTorch 1.5 to reduce friction for developers and to provide a stable and mature PyTorch/XLA stack for training models using Cloud TPU hardware. You can try it for free in your browser on an 8-core Cloud TPU device with Google Colab, and you can use it at a much larger scale on Google Cloud. See the full torch_xla release notes here. Full docs and tutorials can be found here and here.
PyTorch Domain Libraries
https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/
pytorch blogs
PyTorch Domain Libraries
torchaudio, torchvision, and torchtext complement PyTorch with common datasets, models, and transforms in each domain area. We're excited to share new releases for all three domain libraries alongside PyTorch 1.5 and the rest of the library updates. For this release, all three domain libraries are removing support for Python 2 and will support Python 3 only.
torchaudio 0.5
The torchaudio 0.5 release includes new transforms, functionals, and datasets. Highlights for the release include:

* Added the Griffin-Lim functional and transform, the InverseMelScale and Vol transforms, and DB_to_amplitude.
* Added support for allpass, fade, bandpass, bandreject, band, treble, deemph, and riaa filters and transformations.
* New datasets added, including the LJSpeech and SpeechCommands datasets.

See the full release notes here and the full docs here.
torchvision 0.6
https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/
pytorch blogs
torchvision 0.6
The torchvision 0.6 release includes updates to datasets and models and a significant number of bug fixes. Highlights include:

* Faster R-CNN now supports negative samples, which allows feeding images without annotations at training time.
* Added an aligned flag to RoIAlign to match Detectron2.
* Refactored abstractions for the C++ video decoder.

See the full release notes here and the full docs here.
torchtext 0.6
The torchtext 0.6 release includes a number of bug fixes and improvements to documentation. Based on user feedback, the dataset abstractions are also currently being redesigned. Highlights for the release include:

* Fixed an issue related to the SentencePiece dependency in the conda package.
* Added support for the experimental IMDB dataset to allow a custom vocab.
https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/
pytorch blogs
* A number of documentation updates, including adding a code of conduct and deduplicating the docs on the torchtext site.

Your feedback and discussions on the experimental datasets API are welcome; you can send them to issue #664. We would also like to highlight the pull request here where the latest dataset abstraction is applied to the text classification datasets; feedback there will help finalize this abstraction. See the full release notes here and the full docs here.

We'd like to thank the entire PyTorch team, the Amazon team and the community for all their contributions to this work.

Cheers!

Team PyTorch
https://pytorch.org/blog/pytorch-library-updates-new-model-serving-library/
pytorch blogs
layout: blog_detail
title: "Understanding LazyTensor System Performance with PyTorch/XLA on Cloud TPU"
author: Vaibhav Singh
featured-img: ""

Introduction
Ease of use, expressivity, and debuggability are among the core principles of PyTorch. One of the key drivers of the ease of use is that PyTorch execution is "eager" by default, i.e. op-by-op execution preserves the imperative nature of the program. However, eager execution does not offer compiler-based optimizations, for example, those that become possible when the computation can be expressed as a graph. LazyTensor [[1]], first introduced with PyTorch/XLA, helps combine these seemingly disparate approaches. While PyTorch eager execution is widely used, intuitive, and well understood, lazy execution is not as prevalent yet.
https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/
pytorch blogs
In this post we will explore some of the basic concepts of the LazyTensor system, with the goal of applying them to understand and debug the performance of LazyTensor-based implementations in PyTorch. Although we will use PyTorch/XLA on Cloud TPU as the vehicle for exploring these concepts, we hope these ideas will also be useful for understanding other systems built on LazyTensors.
LazyTensor
Any operation performed on a PyTorch tensor is by default dispatched as a kernel, or a composition of kernels, to the underlying hardware. These kernels are executed asynchronously on that hardware, and program execution is not blocked until the value of a tensor is fetched. This approach scales extremely well with massively parallel hardware such as GPUs.
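As a small aside illustrating this asynchronous dispatch in eager mode (a sketch that assumes a CUDA GPU is available; it is not taken from the original post):

```python
import torch

if torch.cuda.is_available():
    a = torch.randn(4096, 4096, device="cuda")
    b = torch.randn(4096, 4096, device="cuda")

    c = a @ b               # the matmul kernel is enqueued; Python is not blocked here
    value = c.sum().item()  # fetching a value forces synchronization with the device
    print(value)
```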
https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/
pytorch blogs
The starting point of a LazyTensor system is a custom tensor type. In PyTorch/XLA, this type is called XLA tensor. In contrast to PyTorch's native tensor type, operations performed on XLA tensors are recorded into an IR graph. Let's examine an example that sums the product of two tensors:

```python
import torch
import torch_xla
import torch_xla.core.xla_model as xm

dev = xm.xla_device()

x1 = torch.rand((3, 3)).to(dev)
x2 = torch.rand((3, 8)).to(dev)

y1 = torch.einsum('bs,st->bt', x1, x2)
print(torch_xla._XLAC._get_xla_tensors_text([y1]))
```

You can execute this colab notebook to examine the resulting graph for y1. Notice that no computation has been performed yet.

```python
y1 = y1 + x2
print(torch_xla._XLAC._get_xla_tensors_text([y1]))
```
https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/
pytorch blogs
The operations will continue to be recorded until PyTorch/XLA encounters a barrier. This barrier can either be a [mark_step()](https://github.com/pytorch/xla/blob/ff079bb48744e5aa6696201ccf34057f15fc7cac/torch_xla/core/xla_model.py#L751) API call or any other event which forces the execution of the graph recorded so far.

```python
xm.mark_step()
print(torch_xla._XLAC._get_xla_tensors_text([y1]))
```

Once mark_step() is called, the graph is compiled and then executed on the TPU, i.e. the tensors have been materialized. The graph is therefore now reduced to a single node, the y1 tensor, which holds the result of the computation.
Compile Once, Execute Often
https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/
pytorch blogs
Compile Once, Execute Often
XLA compilation passes offer optimizations (e.g. op fusion, which reduces HBM pressure by using scratch-pad memory for multiple ops, ref) and leverage lower-level XLA infrastructure to optimally use the underlying hardware. However, there is one caveat: compilation passes are expensive, i.e. they can add to the training step time. Therefore, this approach scales well if and only if we can compile once and execute often (the compilation cache helps, so that the same graph is not compiled more than once). In the following example, we create a small computation graph and time the execution:

```python
y1 = torch.rand((3, 8)).to(dev)

def dummy_step():
    y1 = torch.einsum('bs,st->bt', y1, x)
    xm.mark_step()
    return y1
```

```python
%timeit dummy_step
```

```
The slowest run took 29.74 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 5: 34.2 ns per loop
```
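One practical way to check whether the "compile once, execute often" assumption holds is to inspect the PyTorch/XLA metrics report after a few steps; if input shapes stay fixed, the compile-related counters should stop growing after warm-up. The snippet below is a self-contained sketch with assumed shapes, not part of the original experiment:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.debug.metrics as met

dev = xm.xla_device()
w = torch.rand((8, 8)).to(dev)

for step in range(5):
    x = torch.rand((3, 8)).to(dev)       # fixed shapes -> the recorded graph is reused
    y = torch.einsum('bs,st->bt', x, w)  # (3, 8) x (8, 8) -> (3, 8)
    xm.mark_step()

print(met.metrics_report())  # compare compile counts vs. execute counts
```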
https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/
pytorch blogs
10000000 loops, best of 5: 34.2 ns per loop
```

You will notice that the slowest run is considerably longer than the fastest. This is because of the graph compilation overhead, which is incurred only once for a given graph shape, input shape, and output shape. Subsequent steps are faster because no graph compilation is necessary. This also implies that we expect to see performance cliffs when the "compile once and execute often" assumption breaks. Understanding when this assumption breaks is the key to understanding and optimizing the performance of a LazyTensor system. Let's examine what triggers the compilation.
Graph Compilation and Execution and LazyTensor Barrier
https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/
pytorch blogs