Training Large Models - Large language models (LLMs) and diffusion models have quickly risen in popularity, and many cutting-edge applications today are built on them. Further to this, training these models requires scale, and more specifically the ability to train across thousands of accelerators. To achieve this we are investing in features such as AMP for mixed precision training, PjRt for increased runtime performance, SPMD / FSDP for efficient model sharding, Dynamic Shapes to enable new research approaches, faster data loading through Ray and tf.data, and a toolchain that packages all of these features together into a seamless workflow. Some of these features are already available in experimental or beta stages, and others are coming up this year with many heavily leveraging the underlying OpenXLA compiler stack.
https://pytorch.org/blog/pytorch-2.0-xla-path-forward/
pytorch blogs
Model Inference - With large models continuing to grow in size and computational cost, deployment becomes the next challenge as these models continue to find their way into applications. With the introduction of Dynamo in the PyTorch 2.0 release, PyTorch/XLA delivers performance-competitive inference. We are, however, incorporating additional inference-oriented features, including model serving support, Dynamo for sharded large models, and quantization via Torch.Export and StableHLO. Ecosystem integration - We are expanding integration with Hugging Face and PyTorch Lightning so users can take advantage of upcoming PyTorch/XLA cutting-edge features (e.g. FSDP support in Hugging Face) and the downstream OpenXLA features (e.g. Quantization) through familiar APIs.
https://pytorch.org/blog/pytorch-2.0-xla-path-forward/
pytorch blogs
Additionally, PyTorch/XLA is set to migrate to the open source OpenXLA as its default downstream compiler, allowing the PyTorch community to gain access to a leading, framework-agnostic compiler stack that enjoys industry-wide contribution and innovation. To achieve this, we will begin supporting StableHLO. As a result, OpenXLA will replace the existing TF:XLA dependency, overall streamlining the dependencies and creating leverage from the broader compiler ecosystem. PyTorch/XLA will also sunset the XRT runtime after migration. You can see the resulting high-level stack below with the TensorFlow dependency stricken out:
Figure: the upcoming PyTorch/XLA features and integrations are illustrated here
https://pytorch.org/blog/pytorch-2.0-xla-path-forward/
pytorch blogs
We cannot be more excited about what’s ahead for PyTorch/XLA and invite the community to join us. PyTorch/XLA is developed fully in open source so please file issues, submit pull requests, and send RFCs to GitHub such that we can openly collaborate. You can also try out PyTorch/XLA for yourself on various XLA devices including TPUs and GPUs. Cheers, The PyTorch/XLA Team at Google
https://pytorch.org/blog/pytorch-2.0-xla-path-forward/
pytorch blogs
layout: blog_detail title: "Scaling Multimodal Foundation Models in TorchMultimodal with Pytorch Distributed" author: Ankita De, Edward Wang (EcoF), Rohan Varma, Anjali Sridhar, Kartikay Khandelwal featured-img: "/assets/images/scaling-multimodal-image1-diagram-of-multimodal-flava-new.png" Introduction
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
Introduction In recent years, scaling model sizes has become a promising area of research. In the field of NLP, language models have gone from hundreds of millions of parameters (BERT) to hundreds of billions of parameters (GPT-3) demonstrating significant improvements on downstream tasks. The scaling laws for large scale language models have also been studied extensively in the industry. A similar trend can be observed in the vision field, with the community moving to transformer based models (like Vision Transformer, Masked Auto Encoders) as well. It is clear that individual modalities - text, image, video - have benefited massively from recent advancements in scale, and frameworks have quickly adapted to accommodate larger models.
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
At the same time, multimodality is becoming increasingly important in research with tasks like image-text retrieval, visual question-answering, visual dialog and text to image generation gaining traction in real world applications. Training large scale multimodal models is the natural next step and we already see several efforts in this area like CLIP from OpenAI, Parti from Google and CM3 from Meta.
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
In this blog, we present a case study demonstrating the scaling of FLAVA to 10B params using techniques from PyTorch Distributed. FLAVA is a vision and language foundation model, available in TorchMultimodal, which has shown competitive performance on both unimodal and multimodal benchmarks. We also give the relevant code pointers in this blog. The instructions for running an example script to scale FLAVA can be found here. Scaling FLAVA Overview
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
Scaling FLAVA Overview FLAVA is a foundation multimodal model which consists of transformer based image and text encoders followed by a transformer-based multimodal fusion module. It is pretrained on both unimodal and multimodal data with a diverse set of losses. This includes masked language, image and multimodal modeling losses that require the model to reconstruct the original input from its context (self-supervised learning). It also uses image text matching loss over positive and negative examples of aligned image-text pairs as well as CLIP style contrastive loss. In addition to multimodal tasks (like image-text retrieval), FLAVA demonstrated competitive performance on unimodal benchmarks as well (GLUE tasks for NLP and image classification for vision).
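To make the contrastive piece concrete, here is a generic sketch of a CLIP-style contrastive loss over in-batch negatives (an illustrative approximation, not the exact TorchMultimodal implementation; the function name and temperature value are our own):
```python
import torch
import torch.nn.functional as F

def clip_style_contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor, temperature: float = 0.07) -> torch.Tensor:
    # Normalize embeddings so the dot product is a cosine similarity
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    # Similarity of every image with every text in the batch; the diagonal holds
    # the aligned (positive) pairs, everything else acts as a negative
    logits = image_emb @ text_emb.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    # Symmetric cross-entropy over image-to-text and text-to-image directions
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))
```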
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
The original FLAVA model has ~350M parameters and uses ViT-B16 configurations (from the Vision Transformer paper) for image and text encoders. The multimodal fusion transformer follows the unimodal encoders but with half the number of layers. We explore increasing the size of each encoder to larger ViT variants. Another aspect of scaling is adding the ability to increase the batch size. FLAVA makes use of contrastive loss over in-batch negatives, which typically benefits from large batch size (as studied here). The largest training efficiency or throughput is also generally achieved when operating near maximum possible batch sizes as determined by the amount of GPU memory available (also see the experiments section).
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
The following table displays the different model configurations we experimented with. We also determine the maximum batch size that was able to fit in memory for each configuration in the experiments section.

| Approx Model params | Hidden size | MLP size | Heads | Unimodal layers | Multimodal layers | Model size (fp32) |
| --- | --- | --- | --- | --- | --- | --- |
| 350M (original) | 768 | 3072 | 12 | 12 | 6 | 1.33GB |
| 900M | 1024 | 4096 | 16 | 24 | 12 | 3.48GB |
| 1.8B | 1280 | 5120 | 16 | 32 | 16 | 6.66GB |
| 2.7B | 1408 | 6144 | 16 | 40 | 20 | 10.3GB |
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
| 4.8B | 1664 | 8192 | 16 | 48 | 24 | 18.1GB |
| 10B | 2048 | 10240 | 16 | 64 | 40 | 38GB |

Optimization overview
PyTorch offers several native techniques to efficiently scale models. In the following sections, we go over some of these techniques and show how they can be applied to scale up a FLAVA model to 10 billion parameters.
Distributed Data Parallel
A common starting point for distributed training is data parallelism. Data parallelism replicates the model across each worker (GPU), and partitions the dataset across the workers. Different workers process different data partitions in parallel and synchronize their gradients (via all reduce) before model weights are updated. The figure below showcases the flow (forward, backward, and weight update steps) for processing a single example for data parallelism:
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
Source: https://engineering.fb.com/2021/07/15/open-source/fsdp/
PyTorch provides a native API, DistributedDataParallel (DDP), to enable data parallelism, which can be used as a module wrapper as showcased below. Please see the PyTorch Distributed documentation for more details.
```Python
from torchmultimodal.models.flava.model import flava_model_for_pretraining
import torch
import torch.distributed as dist

model = flava_model_for_pretraining().cuda()

# Initialize PyTorch Distributed process groups
# Please see https://pytorch.org/tutorials/intermediate/dist_tuto.html for details
dist.init_process_group(backend="nccl")

# Wrap model in DDP
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
model = torch.nn.parallel.DistributedDataParallel(
    model,
    device_ids=[torch.cuda.current_device()],
)
```
Fully Sharded Data Parallel
GPU memory usage of a training application can roughly be broken down into model inputs, intermediate activations (needed for gradient computation), model parameters, gradients, and optimizer states. Scaling a model will typically increase each of these elements. Scaling a model with DDP can eventually result in out-of-memory issues when a single GPU's memory becomes insufficient, since DDP replicates the parameters, gradients, and optimizer states on all workers.
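To see why this matters at these scales, here is a rough back-of-envelope sketch (not from the original post) assuming fp32 training with the Adam optimizer, which keeps two extra states per parameter:
```python
# Rough per-GPU memory needed by DDP just for parameters, gradients, and
# Adam optimizer states (activations and inputs come on top of this).
# Assumes fp32 (4 bytes) everywhere; mixed-precision setups differ.
def ddp_state_memory_gb(num_params: float) -> float:
    bytes_per_param = 4   # parameters
    bytes_per_grad = 4    # gradients
    bytes_per_optim = 8   # Adam: exp_avg + exp_avg_sq
    return num_params * (bytes_per_param + bytes_per_grad + bytes_per_optim) / 1e9

for n in [350e6, 2.7e9, 10e9]:
    print(f"{n / 1e9:.2f}B params -> ~{ddp_state_memory_gb(n):.0f} GB per GPU")
# ~6 GB for 0.35B, ~43 GB for 2.7B, ~160 GB for 10B: the larger configurations
# exceed a 40GB A100 on their own, which is why sharding these states (FSDP) helps.
```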
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
To reduce this replication and save GPU memory, we can shard the model parameters, gradients, and optimizer states across all workers with each worker only managing a single shard. This technique was popularized by the ZeRO-3 approach developed by Microsoft. A PyTorch-native implementation of this approach is available as FullyShardedDataParallel (FSDP) API, released as a beta feature in PyTorch 1.12. During a module’s forward and backward passes, FSDP unshards the model parameters as needed for computation (using all-gather) and reshards them after computation. It synchronizes gradients using the reduce-scatter collective to ensure sharded gradients are globally averaged. The forward and backward pass flow of a model wrapped in FSDP are detailed below:
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
Source: https://engineering.fb.com/2021/07/15/open-source/fsdp/ To use FSDP, the submodules of a model need to be wrapped with the API to control when specific submodules are sharded or unsharded. FSDP provides an auto-wrapping API (see the auto_wrap_policy argument) that can be used out of the box as well as several wrapping policies and the ability to write your own policy.
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
The following example demonstrates wrapping the FLAVA model with FSDP. We specify the auto-wrapping policy as transformer_auto_wrap_policy. This will wrap individual transformer layers (TransformerEncoderLayer), the image transformer (ImageTransformer), text encoder (BertTextEncoder) and multimodal encoder (FLAVATransformerWithoutEmbeddings) as individual FSDP units. This uses a recursive wrapping approach for efficient memory management. For example, after an individual transformer layer's forward or backward pass is finished, its parameters are discarded, freeing up memory and thereby reducing peak memory usage.
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
FSDP also provides a number of configurable options to tune the performance of applications. For example, in our use case, we illustrate the use of the new limit_all_gathers flag, which prevents all-gathering model parameters too early, thereby alleviating memory pressure on the application. We encourage users to experiment with this flag, which can potentially improve the performance of applications with high active memory usage.
```Python
from functools import partial

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from torchmultimodal.models.flava.model import flava_model_for_pretraining
from torchmultimodal.models.flava.text_encoder import BertTextEncoder
from torchmultimodal.models.flava.image_encoder import ImageTransformer
from torchmultimodal.models.flava.transformer import FLAVATransformerWithoutEmbeddings
from torchmultimodal.modules.layers.transformer import TransformerEncoderLayer
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
model = flava_model_for_pretraining().cuda()
dist.init_process_group(backend="nccl")

model = FSDP(
    model,
    device_id=torch.cuda.current_device(),
    auto_wrap_policy=partial(
        transformer_auto_wrap_policy,
        transformer_layer_cls={
            TransformerEncoderLayer,
            ImageTransformer,
            BertTextEncoder,
            FLAVATransformerWithoutEmbeddings,
        },
    ),
    limit_all_gathers=True,
)
```
Activation Checkpointing
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
Activation Checkpointing
As discussed above, intermediate activations, model parameters, gradients, and optimizer states contribute to the overall GPU memory usage. FSDP can reduce memory consumption due to the latter three but does not reduce memory consumed by activations. Memory used by activations increases with batch size or the number of hidden layers. Activation checkpointing is a technique to decrease this memory usage by recomputing the activations during the backward pass instead of holding them in memory for a specific checkpointed module. For example, we observed a ~4x reduction in the peak active memory after the forward pass by applying activation checkpointing to the 2.7B parameter model.
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
PyTorch offers a wrapper based activation checkpointing API. In particular, checkpoint_wrapper allows users to wrap an individual module with checkpointing, and apply_activation_checkpointing allows users to specify a policy with which to wrap modules within an overall module with checkpointing. Both these APIs can be applied to most models as they do not require any modifications to the model definition code. However, if more granular control over checkpointed segments, such as checkpointing specific functions within a module, is required, the functional torch.utils.checkpoint API can be leveraged, although this requires modification to the model code. The application of the activation checkpointing wrapper to individual FLAVA transformer layers (denoted by TransformerEncoderLayer) is shown below. For a thorough description of activation checkpointing, please see the description in the PyTorch documentation.
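As an illustration of the functional API mentioned above, here is a minimal sketch on a hypothetical toy module (not FLAVA code) that checkpoints only part of a forward pass with torch.utils.checkpoint; the use_reentrant flag is available in recent PyTorch releases:
```python
import torch
import torch.nn as nn
from torch.utils.checkpoint import checkpoint

class ToyBlock(nn.Module):
    """Hypothetical module that checkpoints only its expensive sub-computation."""
    def __init__(self, dim: int = 1024):
        super().__init__()
        self.expensive = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
        self.cheap = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Activations inside `self.expensive` are not stored; they are
        # recomputed during the backward pass, trading compute for memory.
        x = checkpoint(self.expensive, x, use_reentrant=False)
        return self.cheap(x)

block = ToyBlock()
out = block(torch.randn(8, 1024, requires_grad=True))
out.sum().backward()
```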
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
```Python
from torchmultimodal.models.flava.model import flava_model_for_pretraining
from torch.distributed.algorithms._checkpoint.checkpoint_wrapper import apply_activation_checkpointing, checkpoint_wrapper, CheckpointImpl
from torchmultimodal.modules.layers.transformer import TransformerEncoderLayer

model = flava_model_for_pretraining()

checkpoint_tformer_layers_policy = lambda submodule: isinstance(submodule, TransformerEncoderLayer)

apply_activation_checkpointing(
    model,
    checkpoint_wrapper_fn=checkpoint_wrapper,
    check_fn=checkpoint_tformer_layers_policy,
)
```
Used together, wrapping FLAVA transformer layers with activation checkpointing and wrapping the overall model with FSDP as demonstrated above, we are able to scale FLAVA to 10B parameters.
Experiments
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
Experiments We conduct an empirical study about the impact of the different optimizations from the previous section on system performance. For all our experiments, we use a single node with 8 A100 40GB GPUs and run the pretraining for 1000 iterations. All runs also used PyTorch’s automatic mixed precision with the bfloat16 data type. TensorFloat32 format is also enabled to improve matmul performance on the A100. We define throughput as the average number of items (text or image) processed per second (we ignore the first 100 iterations while measuring throughput to account for warmup). We leave training to convergence and its impact on downstream task metrics as an area for future study.
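For reference, the mixed precision and TensorFloat32 settings described above can be enabled with standard PyTorch APIs; a minimal sketch follows (the training step itself is a placeholder, not the actual FLAVA pretraining code, and assumes the model returns a loss):
```python
import torch

# Allow TensorFloat32 for matmuls and cuDNN convolutions on A100-class GPUs
torch.backends.cuda.matmul.allow_tf32 = True
torch.backends.cudnn.allow_tf32 = True

def train_step(model, batch, optimizer):
    # Run the forward pass under bfloat16 autocast; bf16 does not require
    # the gradient scaling that fp16 mixed precision needs.
    with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
        loss = model(**batch)  # placeholder: assumes the model returns a scalar loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad(set_to_none=True)
    return loss.detach()
```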
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
Figure 1 plots the throughput for each model configuration and optimization, both with a local batch size of 8 and then with the maximum batch size possible on 1 node. Absence of a data point for a model variant for an optimization indicates that the model could not be trained on a single node. Figure 2 plots the maximum possible batch size per worker for each optimization. We observe a few things: Scaling model size: DDP is only able to fit the 350M and 900M model on a node. With FSDP, due to memory savings, we are able to train ~3x bigger models compared to DDP (i.e. the 1.8B and 2.7B variants). Combining activation checkpointing (AC) with FSDP enables training even bigger models, on the order of ~10x compared to DDP (i.e. 4.8B and 10B variants) Throughput:
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
Throughput: For smaller model sizes, at a constant batch size of 8, the throughput for DDP is slightly higher than or equal to FSDP, explainable by the additional communication required by FSDP. It is lowest for FSDP and AC combined together. This is because AC re-runs checkpointed forward passes during the backwards pass, trading off additional computation for memory savings. However, in the case of the 2.7B model, FSDP + AC actually has higher throughput compared to FSDP alone. This is because the 2.7B model with FSDP is operating close to the memory limit even at batch size 8 triggering CUDA malloc retries which tend to slow down training. AC helps with reducing the memory pressure and leads to no retries.
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
For DDP and FSDP + AC, the throughput increases with an increase in batch size for each model. For FSDP alone, this is true for smaller variants. However, with the 1.8B and 2.7B parameter models, we observe throughput degradation when increasing batch size. A potential reason for this, as noted above also, is that at the memory limit, PyTorch’s CUDA memory management may have to retry cudaMalloc calls and/or run expensive defragmentation steps to find free memory blocks to handle the workload’s memory requirements which can result in training slowdown.
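One hedged way to check whether a run is in this regime is to inspect the caching allocator's counters exposed by torch.cuda.memory_stats; the sketch below simply reports them, and the interpretation of the counts is an approximation:
```python
import torch

def report_alloc_retries(device: int = 0) -> None:
    stats = torch.cuda.memory_stats(device)
    # "num_alloc_retries" counts cudaMalloc retries after cache flushes;
    # a steadily growing value usually means training is memory-bound.
    retries = stats.get("num_alloc_retries", 0)
    ooms = stats.get("num_ooms", 0)
    peak_gb = stats.get("allocated_bytes.all.peak", 0) / 1e9
    print(f"alloc retries={retries}, ooms={ooms}, peak allocated={peak_gb:.1f} GB")
```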
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
For larger models that can only be trained with FSDP (1.8B, 2.7B, 4.8B), the highest throughput is achieved with FSDP + AC scaled to the maximum batch size. For 10B, we observe nearly equal throughput for smaller and maximum batch size. This might be counterintuitive as AC results in increased computation and maxing out batch size potentially leads to expensive defragmentation operations due to operating at the CUDA memory limit. However, for these large models, the increase in batch size is large enough to mask this overhead.
Figure 1: Training throughput for different configurations
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
Batch size:
FSDP alone enables slightly higher batch sizes compared to DDP. Using FSDP + AC enables ~3x the batch size compared to DDP for the 350M param model and ~5.5x for the 900M param model. Even for 10B, a max batch size of ~20 is possible, which is fairly decent. This essentially enables a larger global batch size using fewer GPUs, which is especially useful for contrastive learning tasks.
Figure 2: Max local batch size possible for different configurations
Conclusion
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
Conclusion As the world moves towards multimodal foundation models, scaling model parameters and efficient training is becoming an area of focus. The PyTorch ecosystem aims to accelerate innovation in this field by providing different tools to the research community, both for training and scaling multimodal models. With FLAVA, we laid out an example of scaling a model for multimodal understanding. In the future, we plan to add support for other kinds of models like the ones for multimodal generation and demonstrate their scaling factors. We also hope to automate many of these scaling and memory saving techniques (such as sharding and activation checkpointing) to reduce the amount of user experimentation needed to achieve the desired scale and maximum training throughput. References Introducing TorchMultimodal - a library for accelerating exploration in Multimodal AI
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
FLAVA paper
Introducing PyTorch FSDP
https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/
pytorch blogs
layout: blog_detail title: "A BetterTransformer for Fast Transformer Inference" author: Michael Gschwind, Eric Han, Scott Wolchok, Rui Zhu, Christian Puhrsch featured-img: "/assets/images/2022-7-12-a-better-transformer-for-fast-transformer-encoder-inference-3.png" tl;dr Transformers achieve state-of-the-art performance for NLP, and are becoming popular for a myriad of other tasks. They are computationally expensive which has been a blocker to their widespread productionisation. Launching with PyTorch 1.12, BetterTransformer implements a backwards-compatible fast path of torch.nn.TransformerEncoder for Transformer Encoder Inference and does not require model authors to modify their models. BetterTransformer improvements can exceed 2x in speedup and throughput for many common execution scenarios. To use BetterTransformer, install PyTorch 1.12 and start using high-quality, high-performance Transformer models with the PyTorch API today.
https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
pytorch blogs
Diagram of the Transformer Encoder Architecture (from "Attention Is All You Need"). During Inference, the entire module will execute as a single PyTorch-native function. In this blog post, we share the following topics — Performance Improvements, Backwards compatibility, and Taking advantage of the FastPath. Learn more about these topics below. Performance Improvements
https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
pytorch blogs
Performance Improvements BetterTransformer launches with accelerated native implementations of MultiHeadAttention and TransformerEncoderLayer for CPUs and GPUs. These fast paths are integrated in the standard PyTorch Transformer APIs, and will accelerate TransformerEncoder, TransformerEncoderLayer and MultiHeadAttention nn.modules. These new modules implement two types of optimizations: (1) fused kernels combine multiple individual operators normally used to implement Transformers to provide a more efficient implementation, and (2) take advantage of sparsity in the inputs to avoid performing unnecessary operations on padding tokens. Padding tokens frequently account for a large fraction of input batches in many Transformer models used for Natural Language Processing.
https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
pytorch blogs
Backwards compatibility Advantageously, no model changes are necessary to benefit from the performance boost offered by BetterTransformer. To benefit from fast path execution, inputs and operating conditions must satisfy some access conditions (see below). While the internal implementation of Transformer APIs has changed, PyTorch 1.12 maintains strict compatibility with Transformer modules shipped in previous versions, enabling PyTorch users to use models created and trained with previous PyTorch releases while benefiting from BetterTransformer improvements. In addition to enabling the PyTorch nn.Modules, BetterTransformer provides improvements for PyTorch libraries. Performance benefits will become available through two different enablement paths:
https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
pytorch blogs
Transparent acceleration: Current users of PyTorch nn.Modules such as MultiHeadAttention as well as higher-level Transformer components will benefit from the improved performance of the new nn.Modules automatically. An example of this is the visual transformer (ViT) implementation used in the torchvision library (code link).
https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
pytorch blogs
Torchtext library acceleration: As part of this project, we have optimized Torchtext to build on the PyTorch core API to benefit from BetterTransformer enhancements while maintaining strict and transparent compatibility with previous library versions and models trained with previous Torchtext versions. Using PyTorch Transformers in Torchtext also ensures that Torchtext will benefit from expected future enhancements to the PyTorch Transformer implementation. Taking advantage of the Fastpath BetterTransformer is a fastpath for the PyTorch Transformer API. The fastpath is a native, specialized implementation of key Transformer functions for CPU and GPU that applies to common Transformer use cases.
https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
pytorch blogs
To take advantage of input sparsity (i.e. padding) in accelerating your model (see Figure 2), set the keyword argument enable_nested_tensor=True when instantiating a TransformerEncoder and pass in the src_key_padding_mask argument (which denotes padding tokens) during inference. This requires the padding mask to be contiguous, which is the typical case.
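A minimal sketch of what that looks like follows (toy dimensions and padding mask chosen purely for illustration, not taken from the original post):
```python
import torch
import torch.nn as nn

layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=6, enable_nested_tensor=True)
encoder.eval()

src = torch.rand(4, 32, 256)                      # (batch, seq, features)
# True marks padding positions; this sparsity is what the fastpath exploits
src_key_padding_mask = torch.zeros(4, 32, dtype=torch.bool)
src_key_padding_mask[:, 20:] = True               # last 12 tokens of each sequence are padding

with torch.inference_mode():                      # fastpath requires no gradient tracking
    out = encoder(src, src_key_padding_mask=src_key_padding_mask)
```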
https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
pytorch blogs
Currently, the BetterTransformer speedup only applies to transformer encoder models used in inference. To benefit from fastpath execution, models must be composed of any of the following components: TransformerEncoder, TransformerEncoderLayer or MultiheadAttention (MHA). Fastpath execution is also subject to some criteria. Most importantly, the model must be executed in inference mode and operate on input tensors that do not collect gradient tape information (e.g., running with torch.no_grad). The full list of conditions can be found at these links for nn.MultiHeadAttention and nn.TransformerEncoder, respectively. If the criteria are not met, control flows to the legacy PyTorch 1.11 Transformer implementation which has the same API, but lacks the fastpath performance boost.
https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
pytorch blogs
Other transformer models (such as decoder models) which use the PyTorch MultiheadAttention module will benefit from the BetterTransformer fastpath. Planned future work is to expand the end-to-end BetterTransformer fastpath to models based on TransformerDecoder to support popular seq2seq and decoder-only (e.g., OPT) model architectures, and to training. Speedups The following graphs show the performance achieved for the BERT-base model with small and large-scale inputs: Figure 1: PyTorch 1.12 Improvements with BetterTransformer fastpath execution
https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
pytorch blogs
Figure 2: PyTorch 1.12 Improvements with BetterTransformer fastpath execution with sparsity optimization enabled by enable_nested_tensor=True
https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
pytorch blogs
BetterTransformer includes two types of optimization: (1) fused kernels implementing multiple operations more efficiently in a single kernel, and (2) exploiting sparsity by avoiding unnecessary processing on padding tokens. Enhanced performance for small input sizes benefits primarily from the fused kernel implementations, and shows a constant performance improvement regardless of padding amount. While large inputs still benefit from fused kernels, the computation heavy processing limits the benefits that may be obtained by the fused kernels as baseline performance is already closer to the theoretical peak. However, as we increase the amount of padding, performance increases dramatically as increasingly large amounts of computation can be avoided by exploiting the sparsity introduced by padding in NLP workloads. Future Work
https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
pytorch blogs
Future Work As part of our ongoing work on PyTorch BetterTransformer, we are working on extending BetterTransformer improvements to Transformer Decoders. We aim to expand beyond inference to training as well. We are partnering to enable BetterTransformer on additional libraries such as FairSeq, MetaSeq, and HuggingFace to benefit all Transformer-based PyTorch models. We’ll provide future updates on the progress of BetterTransformer accelerations for the larger PyTorch ecosystem as part of this blog series. Acknowledgements: The authors would like to thank Lin Qiao, Ajit Mathews, Andrew Tulloch, Dmytro Dzhulgakov, Natalia Gimelshein, Emad El-Haraty, Mark Saroufim, Adnan Aziz, Geeta Chauhan, and Hamid Shojanazeri for their support, contributions and many helpful suggestions throughout the course of this project, and in the preparation of this blog.
https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/
pytorch blogs
layout: blog_detail title: "Experience the power of PyTorch 2.0 on AMD Solutions" author: AMD PyTorch 2.0 represents a significant step forward for the PyTorch machine learning framework. The stable release of PyTorch 2.0 brings new features that unlock even higher performance, while remaining backward compatible with prior releases and retaining the Pythonic focus which has helped to make PyTorch so enthusiastically adopted by the AI/ML community. AMD has long been a strong proponent of PyTorch, and we are delighted that the PyTorch 2.0 stable release includes support for AMD Instinct™ and Radeon™ GPUs that are supported by the ROCm™ software platform.
https://pytorch.org/blog/experience-power-pytorch-2.0/
pytorch blogs
The stable PyTorch 2.0 release introduces torch.compile as a beta feature underpinned by TorchInductor, with support for AMD Instinct and Radeon GPUs through the OpenAI Triton deep learning compiler. Through TorchInductor, developers can now use Triton to generate low-level kernels that are portable and comparable in performance to hand-written kernels built on native, hardware-centric kernel programming models. OpenAI Triton is a language and compiler for blocked algorithms, which aims to provide an abstraction layer between CUDA/HIP and Torch at which developers can write efficient kernels more productively. We have written a new backend which interfaces Triton's custom MLIR dialects with our ROCm compiler stack.
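Opting in is the same one-line change on ROCm builds as on other backends; a minimal sketch (the model and input below are placeholders, not from the original post):
```python
import torch
import torchvision.models as models

model = models.resnet50().cuda()       # ROCm builds expose AMD GPUs through the same "cuda" device API
compiled_model = torch.compile(model)  # TorchInductor + Triton generate the kernels under the hood

x = torch.randn(16, 3, 224, 224, device="cuda")
out = compiled_model(x)                # first call triggers compilation; later calls reuse the compiled kernels
```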
https://pytorch.org/blog/experience-power-pytorch-2.0/
pytorch blogs
Triton can automatically optimize kernels generated by machine learning compilers such as TorchInductor for multiple AI accelerators including AMD Instinct GPU accelerator by leveraging hardware-specific features of the AMD CDNA™ GPU architecture. This makes it easy for developers and users to switch seamlessly from any HW to AMD Instinct GPU accelerators and get great out of the box performance. In addition, compilers like Triton can also enable developers to use high-level programming languages, such as Python, to write machine learning code that can be efficiently compiled and executed on specialized hardware. This can help greatly improve the productivity of machine learning developers, as they can focus on the algorithmic aspects of their models and rely on the compiler to generate efficient code.
https://pytorch.org/blog/experience-power-pytorch-2.0/
pytorch blogs
By design, PyTorch 2.0 is backward compatible to earlier PyTorch releases. This holds true for the ROCm build of PyTorch 2.0 as well. Developers using PyTorch with AMD GPUs can migrate to PyTorch 2.0 with the confidence that their existing code will continue to work without any required changes, so there is no penalty to access the improvements that come with this release. On the other hand, using PyTorch 2.0 and TorchInductor can result in significant performance improvement over the default eager-mode as shown below.
https://pytorch.org/blog/experience-power-pytorch-2.0/
pytorch blogs
The initial results using AMD Instinct MI250 GPUs already shows strong performance improvement with minimal optimization on TorchInductor compared to the default eager-mode. We see an average performance increase of up to 1.54X on 44 out of the 45 models on HuggingFace benchmarks suite with CamemBert, DistillGPT2 and T5Small being a few of the standout models with up to 1.5X or more performance improvement over eager-mode. We are looking forward to continued engagement with members of the PyTorch team at Meta to enable further optimization on ROCm software stack and the additional performance improvement for future PyTorch releases. Image 1: AMD MI250 GPU performance improvement for TorchInductor vs eager-mode using HuggingFace MI200-89.
https://pytorch.org/blog/experience-power-pytorch-2.0/
pytorch blogs
PyTorch 2.0 follows the same set of install options as before to build and install for supporting AMD GPUs. These include an installable Python package hosted at pytorch.org, AMD's public PyTorch docker image, and of course the option to build from source using the upstream PyTorch repository. As with PyTorch builds for other platforms, the specific command line to be run for a pip-based install is provided by the configurator at https://pytorch.org/get-started/locally/. The GPUs supported by the ROCm software platform, which forms the basis for PyTorch support on AMD GPUs, are documented at https://docs.amd.com/bundle/Hardware_and_Software_Reference_Guide/page/Hardware_and_Software_Support.html
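For example, a pip-based install of a ROCm build typically looks like the command below; treat the exact ROCm tag as an assumption and copy the current command from the configurator linked above:
```sh
# Illustrative only: the ROCm version tag changes between releases
pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.4.2
```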
https://pytorch.org/blog/experience-power-pytorch-2.0/
pytorch blogs
Conclusion PyTorch 2.0 represents a major step in continuing to broaden support for ML developers by increasing performance while maintaining a simple, Pythonic interface. This performance uplift is made possible in large part by the new TorchInductor infrastructure, which in turn harnesses the Triton ML programming language and just-in-time compiler. AMD’s support for these technologies allows users to realize the full promise of the new PyTorch architecture. Our GPU support in PyTorch 2.0 is just one manifestation of a larger vision around AI and machine learning. AI/ML plays an important role in multiple AMD product lines, including Instinct and Radeon GPUs, Alveo™ data center accelerators, and both Ryzen™ and EPYC processors. These hardware and software initiatives are all part of AMD’s Pervasive AI vision, and we look forward to addressing the many new challenges and opportunities of this dynamic space.
https://pytorch.org/blog/experience-power-pytorch-2.0/
pytorch blogs
MI200-89 – PyTorch Inductor mode HuggingFace Transformers training speedup, running the standard PyTorch 2.0 test suite, over PyTorch eager-mode comparison based on AMD internal testing on a single GCD as of 3/10/2023 using a 2P AMD EPYC™ 7763 production server with 4x AMD Instinct™ MI250 (128GB HBM2e) 560W GPUs with Infinity Fabric™ technology; host ROCm™ 5.3, guest ROCm™ 5.4.4, PyTorch 2.0.0, Triton 2.0. Server manufacturers may vary configurations, yielding different results. Performance may vary based on factors including use of latest drivers and optimizations. © 2023 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, AMD CDNA, AMD Instinct, EPYC, Radeon, ROCm, Ryzen, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective owners.
https://pytorch.org/blog/experience-power-pytorch-2.0/
pytorch blogs
layout: blog_detail title: 'Announcing PyTorch Developer Day 2021' author: Team PyTorch featured-img: 'assets/images/ptdevday21.gif' We are excited to announce PyTorch Developer Day (#PTD2), taking place virtually on December 1 & 2, 2021. Developer Day is designed for developers and users to discuss core technical developments, ideas, and roadmaps. Event Details Technical Talks Live Stream - December 1, 2021 Join us for technical talks on a variety of topics, including updates to the core framework, new tools and libraries to support development across a variety of domains, responsible AI and industry use cases. All talks will take place on December 1 and will be live streamed on PyTorch channels.
https://pytorch.org/blog/pytorch-developer-day-2021/
pytorch blogs
Stay up to date by following us on our social channels: Twitter, Facebook, or LinkedIn. Poster Exhibition & Networking - December 2, 2021 On the second day, we’ll be hosting an online poster exhibition on Gather.Town. There will be opportunities to meet the authors and learn more about their PyTorch projects as well as network with the community. This poster and networking event is limited to PyTorch maintainers and contributors, long-time stakeholders, and experts in areas relevant to PyTorch’s future. Conversations from the networking event will strongly shape the future of PyTorch. As such, invitations are required to attend the networking event. Apply for an invitation to the networking event by clicking here. Call for Content Now Open
https://pytorch.org/blog/pytorch-developer-day-2021/
pytorch blogs
Call for Content Now Open Submit your poster abstracts today! Please send us the title and brief summary of your project, tools and libraries that could benefit PyTorch researchers in academia and industry, application developers, and ML engineers for consideration. The focus must be on academic papers, machine learning research, or open-source projects related to PyTorch development, Responsible AI or Mobile. Please no sales pitches. Deadline for submission is September 24, 2021. You can submit your poster abstract during your application & registration process here. Visit the event website for more information and we look forward to having you at PyTorch Developer Day. For any questions about the event, contact pytorch@fbreg.com.
https://pytorch.org/blog/pytorch-developer-day-2021/
pytorch blogs
layout: blog_detail title: 'Efficient PyTorch: Tensor Memory Format Matters' author: 'Dhruv Matani, Suraj Subramanian' featured-img: '' Ensuring the right memory format for your inputs can significantly impact the running time of your PyTorch vision models. When in doubt, choose a Channels Last memory format. When dealing with vision models in PyTorch that accept multimedia (for example image Tensors) as input, the Tensor’s memory format can significantly impact the inference execution speed of your model on mobile platforms when using the CPU backend along with XNNPACK. This holds true for training and inference on server platforms as well, but latency is particularly critical for mobile devices and users. Outline of this article
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
Outline of this article
Deep Dive into matrix storage/memory representation in C++. Introduction to Row and Column major order.
Impact of looping over a matrix in the same or different order as the storage representation, along with an example.
Introduction to Cachegrind, a tool to inspect the cache friendliness of your code.
Memory formats supported by PyTorch Operators.
Best practices example to ensure efficient model execution with XNNPACK optimizations.
Matrix Storage Representation in C++
Images are fed into PyTorch ML models as multi-dimensional Tensors. These Tensors have specific memory formats. To understand this concept better, let’s take a look at how a 2-d matrix may be stored in memory.
Broadly speaking, there are 2 main ways of efficiently storing multi-dimensional data in memory.
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
Row Major Order: In this format, the matrix is stored in row order, with each row stored before the next row in memory. I.e. row N comes before row N+1.
Column Major Order: In this format, the matrix is stored in column order, with each column stored before the next column in memory. I.e. column N comes before column N+1.
You can see the differences graphically below. C++ stores multi-dimensional data in row-major format.
Efficiently accessing elements of a 2d matrix
Similar to the storage format, there are 2 ways to access data in a 2d matrix.
Loop Over Rows first: All elements of a row are processed before any element of the next row.
Loop Over Columns first: All elements of a column are processed before any element of the next column.
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
For maximum efficiency, one should always access data in the same format in which it is stored. I.e. if the data is stored in row-major order, then one should try to access it in that order. The code below (main.cpp) shows 2 ways of accessing all the elements of a 2d 4000x4000 matrix.
```cpp
#include <chrono>
#include <cstdlib>   // for rand()
#include <iostream>

// loop1 accesses data in matrix 'a' in row major order,
// since i is the outer loop variable, and j is the
// inner loop variable.
int loop1(int a[4000][4000]) {
  int s = 0;
  for (int i = 0; i < 4000; ++i) {
    for (int j = 0; j < 4000; ++j) {
      s += a[i][j];
    }
  }
  return s;
}

// loop2 accesses data in matrix 'a' in column major order
// since j is the outer loop variable, and i is the
// inner loop variable.
int loop2(int a[4000][4000]) {
  int s = 0;
  for (int j = 0; j < 4000; ++j) {
    for (int i = 0; i < 4000; ++i) {
      s += a[i][j];
    }
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
  }
  return s;
}

int main() {
  static int a[4000][4000] = {0};
  for (int i = 0; i < 100; ++i) {
    int x = rand() % 4000;
    int y = rand() % 4000;
    a[x][y] = rand() % 1000;
  }

  auto start = std::chrono::high_resolution_clock::now();
  auto end = start;
  int s = 0;

#if defined RUN_LOOP1
  start = std::chrono::high_resolution_clock::now();
  s = 0;
  for (int i = 0; i < 10; ++i) {
    s += loop1(a);
    s = s % 100;
  }
  end = std::chrono::high_resolution_clock::now();
  std::cout << "s = " << s << std::endl;
  std::cout << "Time for loop1: "
    << std::chrono::duration<double, std::milli>(end - start).count()
    << "ms" << std::endl;
#endif

#if defined RUN_LOOP2
  start = std::chrono::high_resolution_clock::now();
  s = 0;
  for (int i = 0; i < 10; ++i) {
    s += loop2(a);
    s = s % 100;
  }
  end = std::chrono::high_resolution_clock::now();
  std::cout << "s = " << s << std::endl;
  std::cout << "Time for loop2: "
    << std::chrono::duration<double, std::milli>(end - start).count()
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
    << "ms" << std::endl;
#endif
}
```
Let’s build and run this program and see what it prints.
```sh
g++ -O2 main.cpp -DRUN_LOOP1 -DRUN_LOOP2
./a.out
```
Prints the following:
```
s = 70
Time for loop1: 77.0687ms
s = 70
Time for loop2: 1219.49ms
```
loop1() is **15x faster** than loop2(). Why is that? Let’s find out below!
## Measure cache misses using Cachegrind
[Cachegrind](https://courses.cs.washington.edu/courses/cse326/05wi/valgrind-doc/cg_main.html) is a cache profiling tool used to see how many I1 (first level instruction), D1 (first level data), and LL (last level) cache misses your program caused.
Let’s build our program with just loop1() and just loop2() to see how cache friendly each of these functions is.
### Build and run/profile just loop1()
```sh
g++ -O2 main.cpp -DRUN_LOOP1
valgrind --tool=cachegrind ./a.out
```
Prints:
```
==3299700==
==3299700== I refs:        643,156,721
==3299700== I1  misses:          2,077
==3299700== LLi misses:          2,021
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
==3299700== I1  miss rate:       0.00%
==3299700== LLi miss rate:       0.00%
==3299700==
==3299700== D refs:        160,952,192  (160,695,444 rd + 256,748 wr)
==3299700== D1  misses:     10,021,300  ( 10,018,723 rd +   2,577 wr)
==3299700== LLd misses:     10,010,916  ( 10,009,147 rd +   1,769 wr)
==3299700== D1  miss rate:         6.2% (        6.2%   +     1.0%  )
==3299700== LLd miss rate:         6.2% (        6.2%   +     0.7%  )
==3299700==
==3299700== LL refs:        10,023,377  ( 10,020,800 rd +   2,577 wr)
==3299700== LL misses:      10,012,937  ( 10,011,168 rd +   1,769 wr)
==3299700== LL miss rate:          1.2% (        1.2%   +     0.7%  )
```
### Build and run/profile just loop2()
```sh
g++ -O2 main.cpp -DRUN_LOOP2
valgrind --tool=cachegrind ./a.out
```
Prints:
```
==3300389==
==3300389== I refs:        643,156,726
==3300389== I1  misses:          2,075
==3300389== LLi misses:          2,018
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
==3300389== I1  miss rate:       0.00%
==3300389== LLi miss rate:       0.00%
==3300389==
==3300389== D refs:        160,952,196  (160,695,447 rd + 256,749 wr)
==3300389== D1  misses:    160,021,290  (160,018,713 rd +   2,577 wr)
==3300389== LLd misses:     10,014,907  ( 10,013,138 rd +   1,769 wr)
==3300389== D1  miss rate:        99.4% (       99.6%   +     1.0%  )
==3300389== LLd miss rate:         6.2% (        6.2%   +     0.7%  )
==3300389==
==3300389== LL refs:       160,023,365  (160,020,788 rd +   2,577 wr)
==3300389== LL misses:      10,016,925  ( 10,015,156 rd +   1,769 wr)
==3300389== LL miss rate:          1.2% (        1.2%   +     0.7%  )
```
The main differences between the 2 runs are:
1. D1 misses: 10M v/s 160M
2. D1 miss rate: 6.2% v/s 99.4%
As you can see, loop2() causes many many more (~16x more) L1 data cache misses than loop1(). This is why loop1() is ~15x faster than loop2().
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
Memory Formats supported by PyTorch Operators While PyTorch operators expect all tensors to be in Channels First (NCHW) dimension format, PyTorch operators support 3 output memory formats. Contiguous: Tensor memory is in the same order as the tensor’s dimensions. ChannelsLast: Irrespective of the dimension order, the 2d (image) tensor is laid out as an HWC or NHWC (N: batch, H: height, W: width, C: channels) tensor in memory. The dimensions could be permuted in any order. ChannelsLast3d: For 3d tensors (video tensors), the memory is laid out in THWC (Time, Height, Width, Channels) or NTHWC (N: batch, T: time, H: height, W: width, C: channels) format. The dimensions could be permuted in any order.
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
The reason that ChannelsLast is preferred for vision models is because XNNPACK (kernel acceleration library) used by PyTorch expects all inputs to be in Channels Last format, so if the input to the model isn’t channels last, then it must first be converted to channels last, which is an additional operation. Additionally, most PyTorch operators preserve the input tensor’s memory format, so if the input is Channels First, then the operator needs to first convert to Channels Last, then perform the operation, and then convert back to Channels First. When you combine it with the fact that accelerated operators work better with a channels last memory format, you’ll notice that having the operator return back a channels-last memory format is better for subsequent operator calls or you’ll end up having every operator convert to channels-last (should it be more efficient for that specific operator). From the XNNPACK home page:
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
From the XNNPACK home page: “All operators in XNNPACK support NHWC layout, but additionally allow custom stride along the Channel dimension". PyTorch Best Practice The best way to get the most performance from your PyTorch vision models is to ensure that your input tensor is in a Channels Last memory format before it is fed into the model. You can get even more speedups by optimizing your model to use the XNNPACK backend (by simply calling optimize_for_mobile() on your torchscripted model). Note that XNNPACK models will run slower if the inputs are contiguous, so definitely make sure it is in Channels-Last format. Working example showing speedup
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
Working example showing speedup
Run this example on Google Colab - note that runtimes on colab CPUs might not reflect accurate performance; it is recommended to run this code on your local machine.
```python
import torch
from torch.utils.mobile_optimizer import optimize_for_mobile
import torch.backends.xnnpack
import time

print("XNNPACK is enabled: ", torch.backends.xnnpack.enabled, "\n")

N, C, H, W = 1, 3, 200, 200
x = torch.rand(N, C, H, W)
print("Contiguous shape: ", x.shape)
print("Contiguous stride: ", x.stride())
print()

xcl = x.to(memory_format=torch.channels_last)
print("Channels-Last shape: ", xcl.shape)
print("Channels-Last stride: ", xcl.stride())

Outputs:

XNNPACK is enabled:  True

Contiguous shape:  torch.Size([1, 3, 200, 200])
Contiguous stride:  (120000, 40000, 200, 1)

Channels-Last shape:  torch.Size([1, 3, 200, 200])
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
Channels-Last stride:  (120000, 1, 600, 3)
```
The input shape stays the same for contiguous and channels-last formats. Internally however, the tensor's layout has changed as you can see in the strides. Now, the number of jumps required to go across channels is only 1 (instead of 40000 in the contiguous tensor). This better data locality means convolution layers can access all the channels for a given pixel much faster. Let's see now how the memory format affects runtime:
```python
from torchvision.models import resnet34, resnet50, resnet101

m = resnet34(pretrained=False)
m = resnet50(pretrained=False)
m = resnet101(pretrained=False)

def get_optimized_model(mm):
    mm = mm.eval()
    scripted = torch.jit.script(mm)
    optimized = optimize_for_mobile(scripted)  # explicitly call the xnnpack rewrite
    return scripted, optimized

def compare_contiguous_CL(mm):
    # inference on contiguous
    start = time.perf_counter()
    for i in range(20):
        mm(x)
    end = time.perf_counter()
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
    print("Contiguous: ", end-start)

    # inference on channels-last
    start = time.perf_counter()
    for i in range(20):
        mm(xcl)
    end = time.perf_counter()
    print("Channels-Last: ", end-start)

with torch.inference_mode():
    scripted, optimized = get_optimized_model(m)

    print("Runtimes for torchscripted model: ")
    compare_contiguous_CL(scripted.eval())
    print()
    print("Runtimes for mobile-optimized model: ")
    compare_contiguous_CL(optimized.eval())

Outputs (on an Intel Core i9 CPU):

Runtimes for torchscripted model:
Contiguous:  1.6711160129999598
Channels-Last:  1.6678222839999535

Runtimes for mobile-optimized model:
Contiguous:  0.5712863490000473
Channels-Last:  0.46113000699995155
```
Conclusion
The Memory Layout of an input tensor can significantly impact a model’s running time. For Vision Models, prefer a Channels Last memory format to get the most out of your PyTorch models.
References
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
References
Row/Column Major matrix storage order
Loop order impact on performance
Cachegrind: a cache-miss profiler
NHWC format explained
Why does PyTorch prefer NCHW?
XNNPACK
PyTorch memory format tutorial
Supported operators
https://pytorch.org/blog/tensor-memory-format-matters/
pytorch blogs
layout: blog_detail title: 'PyTorch framework for cryptographically secure random number generation, torchcsprng, now available' author: Team PyTorch One of the key components of modern cryptography is the pseudorandom number generator. Katz and Lindell stated, "The use of badly designed or inappropriate random number generators can often leave a good cryptosystem vulnerable to attack. Particular care must be taken to use a random number generator that is designed for cryptographic use, rather than a 'general-purpose' random number generator which may be fine for some applications but not ones that are required to be cryptographically secure."[1] Additionally, most pseudorandom number generators scale poorly to massively parallel high-performance computation because of their sequential nature. Others don’t satisfy cryptographically secure properties.
https://pytorch.org/blog/torchcsprng-release-blog/
pytorch blogs
torchcsprng is a PyTorch C++/CUDA extension that provides cryptographically secure pseudorandom number generators for PyTorch. torchcsprng overview Historically, PyTorch had only two pseudorandom number generator implementations: Mersenne Twister for CPU and Nvidia’s cuRAND Philox for CUDA. Despite good performance properties, neither of them are suitable for cryptographic applications. Over the course of the past several months, the PyTorch team developed the torchcsprng extension API. Based on PyTorch dispatch mechanism and operator registration, it allows the users to extend c10::GeneratorImpl and implement their own custom pseudorandom number generator.
https://pytorch.org/blog/torchcsprng-release-blog/
pytorch blogs
torchcsprng generates a random 128-bit key on the CPU using one of its generators and then runs AES128 in CTR mode either on CPU or GPU using CUDA. This then generates a random 128-bit state and applies a transformation function to map it to target tensor values. This approach is based on Parallel Random Numbers: As Easy as 1, 2, 3 (John K. Salmon, Mark A. Moraes, Ron O. Dror, and David E. Shaw, D. E. Shaw Research). It makes torchcsprng both crypto-secure and parallel on both CPU and CUDA. Since torchcsprng is a PyTorch extension, it is available on the platforms where PyTorch is available (support for Windows-CUDA will be available in the coming months). Using torchcsprng
https://pytorch.org/blog/torchcsprng-release-blog/
pytorch blogs
Using torchcsprng
The torchcsprng API is very simple to use and is fully compatible with the PyTorch random infrastructure:
Step 1: Install via binary distribution
Anaconda:
```sh
conda install torchcsprng -c pytorch
```
pip:
```sh
pip install torchcsprng
```
Step 2: import packages as usual but add csprng
```python
import torch
import torchcsprng as csprng
```
Step 3: Create a cryptographically secure pseudorandom number generator from /dev/urandom:
```python
urandom_gen = csprng.create_random_device_generator('/dev/urandom')
```
and simply use it with the existing PyTorch methods:
```python
torch.randn(10, device='cpu', generator=urandom_gen)
```
Step 4: Test with Cuda
One of the advantages of torchcsprng generators is that they can be used with both CPU and CUDA tensors:
```python
torch.randn(10, device='cuda', generator=urandom_gen)
```
Another advantage of torchcsprng generators is that they are parallel on CPU unlike the default PyTorch CPU generator.
https://pytorch.org/blog/torchcsprng-release-blog/
pytorch blogs
Getting Started The easiest way to get started with torchcsprng is by visiting the GitHub page where you can find installation and build instructions, and more how-to examples. Cheers, The PyTorch Team [1] Introduction to Modern Cryptography: Principles and Protocols (Chapman & Hall/CRC Cryptography and Network Security Series) by Jonathan Katz and Yehuda Lindell
https://pytorch.org/blog/torchcsprng-release-blog/
pytorch blogs
layout: blog_detail title: "Optimizing Production PyTorch Models’ Performance with Graph Transformations" author: Jade Nie, CK Luk, Xiaodong Wang, Jackie (Jiaqi) Xu featured-img: "assets/images/blog1-3b.png" 1. Introduction PyTorch supports two execution modes [1]: eager mode and graph mode. In eager mode, operators in a model are immediately executed as they are encountered. In contrast, in graph mode, operators are first synthesized into a graph, which will then be compiled and executed as a whole. Eager mode is easier to use, more suitable for ML researchers, and hence is the default mode of execution. On the other hand, graph mode typically delivers higher performance and hence is heavily used in production.
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
Specifically, graph mode enables operator fusion [2], wherein one operator is merged with another to reduce/localize memory reads as well as total kernel launch overhead. Fusion can be horizontal—taking a single operation (e.g., BatchNorm) that is independently applied to many operands and merging those operands into an array; and vertical—merging a kernel with another kernel that consumes the output of the first kernel (e.g., Convolution followed by ReLU). Torch.FX [3, 4] (abbreviated as FX) is a publicly available toolkit as part of the PyTorch package that supports graph mode execution. In particular, it (1) captures the graph from a PyTorch program and (2) allows developers to write transformations on the captured graph. It is used inside Meta to optimize the training throughput of production models. By introducing a number of FX-based optimizations developed at Meta, we demonstrate the approach of using graph transformation to optimize PyTorch’s performance for production.
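To make the FX workflow concrete before the background section, here is a minimal, self-contained sketch that uses the public torch.fx API to capture a toy module's graph, swap one op for another, and regenerate the code. It mirrors the RELU-to-GELU example discussed in Section 2.1 below; the ToyModel module is hypothetical and not one of the production transformations described later.

```python
import torch
import torch.fx as fx
import torch.nn as nn
import torch.nn.functional as F

class ToyModel(nn.Module):
    def forward(self, x):
        return F.relu(x) + 1.0

# (1) Capture the graph from the program.
gm = fx.symbolic_trace(ToyModel())

# (2) Modify the graph: replace every call to relu with gelu.
for node in gm.graph.nodes:
    if node.op == 'call_function' and node.target is F.relu:
        node.target = F.gelu
gm.graph.lint()

# (3) Generate a new program from the modified graph.
gm.recompile()
print(gm.code)               # the regenerated forward() now calls gelu
out = gm(torch.randn(4))
```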
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
2. Background
Embedding tables are ubiquitous in recommendation systems. Section 3 will discuss three FX transformations that optimize accesses to embedding tables. In this section, we provide some background on FX (Section 2.1) and embedding tables (Section 2.2).
2.1 FX
Figure 1 is a simple example adapted from [3] which illustrates using FX to transform a PyTorch program. It contains three steps: (1) capturing the graph from a program, (2) modifying the graph (in this example, all uses of RELU are replaced by GELU), and (3) generating a new program from the modified graph.
Figure 1: An FX example which replaces all uses of RELU by GELU in a PyTorch module.
The FX API [4] provides many more functionalities for inspecting and transforming PyTorch program graphs.
2.2 Embedding Tables
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
Figure 2: Illustration of an embedding table for a sparse feature with batch size = 1 In a recommendation system, sparse features (e.g., User ID, Story ID) are represented by embedding tables. An embedding table E is an HxD matrix, where H is the hash size, D is the embedding dimension. Each row of E is a vector of floats. Feature hashing [5] is used to map a sparse feature to a list of indices to E, say [S1,S2, …, Sk], where 0<=Si<H. Its output value is computed as f(E[S1], E[S2], …, E[Sk]), where E[Si] is the vector at row Si, and f is called the pooling function, which is typically one of the following functions: sum, average, maximum. See Figure 2 for an illustration.
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
To fully utilize the GPU, sparse features are usually processed in a batch. Each entity in a batch has its own list of indices. If a batch has B entities, a naive representation has B lists of indices. A more compact representation is to combine the B lists of indices into a single list of indices and add a list of the lengths of indices (one length for each entity in the batch). For example, if a batch has 3 entities whose lists of indices are as follows:
Entity 1: indices = [10, 20]
Entity 2: indices = [5, 9, 77, 81]
Entity 3: indices = [15, 20, 45]
Then the indices and lengths for the entire batch will be:
Indices = [10, 20, 5, 9, 77, 81, 15, 20, 45]
Lengths = [2, 4, 3]
And the output of the embedding table lookup for the whole batch is a BxD matrix.
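As an aside, PyTorch's built-in nn.EmbeddingBag already expresses this batched lookup-plus-pooling pattern, except that it takes per-entity start offsets rather than lengths. A minimal sketch with the numbers from the example above (the hash size and embedding dimension are made-up illustrative values):

```python
import torch
import torch.nn as nn

H, D = 1000, 4                      # hash size and embedding dimension (illustrative values)
table = nn.EmbeddingBag(H, D, mode='sum')

indices = torch.tensor([10, 20, 5, 9, 77, 81, 15, 20, 45])
lengths = torch.tensor([2, 4, 3])

# EmbeddingBag expects offsets (the start position of each entity), which are
# the exclusive prefix sum of the lengths: [0, 2, 6].
offsets = torch.cat([torch.zeros(1, dtype=torch.long), lengths.cumsum(0)[:-1]])

pooled = table(indices, offsets)    # shape: (B, D) = (3, 4)
print(pooled.shape)
```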
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
3. Three FX Transformations
We have developed three FX transformations that accelerate accesses to embedding tables. Section 3.1 discusses a transformation that combines multiple small input tensors into a single big tensor; Section 3.2, a transformation that fuses multiple, parallel compute chains into a single compute chain; and Section 3.3, a transformation that overlaps communication with computation.
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
3.1 Combining Input Sparse Features
Recall that an input sparse feature in a batch is represented by two lists: a list of indices and a list of B lengths, where B is the batch size. In PyTorch, these two lists are implemented as two tensors. When a PyTorch model is run on a GPU, embedding tables are commonly stored in the GPU memory (which is closer to the GPU and has much higher read/write bandwidth than the CPU memory). To use an input sparse feature, its two tensors need to be first copied from CPU to GPU. However, each host-to-device memory copy requires a kernel launch, which is relatively expensive compared to the actual data transfer time. If a model uses many input sparse features, this copying could become a performance bottleneck (e.g., 1000 input sparse features would require copying 2000 tensors from host to device).
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
An optimization that reduces the number of host-to-device memory copies is to combine multiple input sparse features before sending them to the device. For instance, given the following three input features:
Feature_A: indices = [106, 211, 7], lengths = [2, 1]
Feature_B: indices = [52, 498, 616, 870, 1013], lengths = [3, 2]
Feature_C: indices = [2011, 19, 351, 790], lengths = [1, 3]
The combined form is:
Features_A_B_C: indices = [106, 211, 7, 52, 498, 616, 870, 1013, 2011, 19, 351, 790], lengths = [2, 1, 3, 2, 1, 3]
So, instead of copying 3x2=6 tensors from host to device, we only need to copy 2 tensors. Figure 3(b) describes an implementation of this optimization, which has two components:
On the CPU side: The input pipeline is modified to combine all the indices of sparse features into a single tensor and similarly all the lengths into another tensor. Then the two tensors are copied to the GPU, as sketched below.
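A rough sketch of the CPU-side combination (hypothetical tensor names; it assumes a CUDA device is available, and the production version lives in the model's input pipeline rather than in user code):

```python
import torch

# Per-feature tensors produced by the input pipeline on the CPU.
features = {
    'Feature_A': (torch.tensor([106, 211, 7]), torch.tensor([2, 1])),
    'Feature_B': (torch.tensor([52, 498, 616, 870, 1013]), torch.tensor([3, 2])),
    'Feature_C': (torch.tensor([2011, 19, 351, 790]), torch.tensor([1, 3])),
}

# Combine all indices into one tensor and all lengths into another,
# so only two host-to-device copies are needed instead of six.
combined_indices = torch.cat([idx for idx, _ in features.values()])
combined_lengths = torch.cat([lens for _, lens in features.values()])

combined_indices = combined_indices.to('cuda', non_blocking=True)
combined_lengths = combined_lengths.to('cuda', non_blocking=True)
```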
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
On the GPU side: Using FX, we insert a Permute_and_Split op into the model graph to recover the indices and lengths tensors of individual features from the combined tensors, and route them to the corresponding nodes downstream.
(a) Without the optimization. (b) With the optimization.
Figure 3: Combining input sparse features
3.2 Horizontal fusion of computation chains started with accesses to embedding tables
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
In a production model, it is fairly common to have tens of embedding tables residing on each GPU. For performance reasons, lookups to these tables are grouped together so that their outputs are concatenated into a single big tensor (see the red part in Figure 4(a)). To apply computations to individual feature outputs, a Split op is used to divide the big tensor into N smaller tensors (where N is the number of features), and then the desired computations are applied to each tensor. This is shown in Figure 4(a), where the computation applied to each feature output O is Tanh(LayerNorm(O)). All the computation results are concatenated back into a big tensor, which is then passed to downstream ops (Op1 in Figure 4(a)).
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
The main runtime cost here is the GPU kernel launch overhead. For instance, the number of GPU kernel launches in Figure 4(a) is 2*N + 3 (each oval in the figure is a GPU kernel). This could become a performance issue because execution times of LayerNorm and Tanh on the GPU are short compared to their kernel launch times. In addition, the Split op may create an extra copy of the embedding output tensor, consuming additional GPU memory.
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
We use FX to implement an optimization called horizontal fusion, which dramatically reduces the number of GPU kernel launches (in this example, the optimized number of GPU kernel launches is 5; see Figure 4(b)). Instead of doing an explicit Split, we use the Add_middle_dim op to reshape the 2D embedding tensor of shape (B, NxD) into a 3D tensor of shape (B, N, D). A single LayerNorm is then applied to the last dimension of this tensor, followed by a single Tanh applied to the LayerNorm's result. Finally, we use the Remove_middle_dim op to reshape the Tanh's result back to a 2D tensor. In addition, since Add_middle_dim and Remove_middle_dim only reshape the tensor without creating an extra copy, the amount of GPU memory consumed can be reduced as well.
(a) Without the optimization. (b) With the optimization.
Figure 4: Horizontal fusion
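A minimal functional sketch of the fused pattern, using plain view/reshape ops in place of the internal Add_middle_dim and Remove_middle_dim ops and made-up sizes:

```python
import torch
import torch.nn.functional as F

B, N, D = 512, 40, 64               # illustrative batch size, number of features, embedding dim
pooled = torch.randn(B, N * D)      # concatenated embedding outputs, shape (B, N*D)
ln_weight = torch.ones(D)
ln_bias = torch.zeros(D)

# Fused path: reshape to (B, N, D), apply one LayerNorm over the last dim,
# one Tanh, then reshape back to (B, N*D). No per-feature Split is needed,
# and the two views do not copy the tensor.
x = pooled.view(B, N, D)
x = F.layer_norm(x, normalized_shape=(D,), weight=ln_weight, bias=ln_bias)
x = torch.tanh(x)
fused_out = x.view(B, N * D)
```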
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
3.3 Overlapping Computation with Communication
Training of a production recommendation model is typically done on a distributed GPU system. Since the device memory on a single GPU is not large enough to hold all the embedding tables in the model, they need to be distributed among the GPUs. Within a training step, a GPU needs to read/write feature values from/to the embedding tables on the other GPUs. This is known as all-to-all communication [6] and can be a major performance bottleneck. We use FX to implement a transformation that can overlap computation with all-to-all communication. Figure 5(a) shows an example of a model graph which has embedding table accesses (EmbeddingAllToAll) and other ops. Without any optimization, they are executed sequentially on a GPU stream, as shown in Figure 5(b). Using FX, we break EmbeddingAllToAll into EmbeddingAllToAll_Request and EmbeddingAllToAll_Wait, and schedule independent ops in between them.
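The same request/wait split can be expressed with PyTorch's asynchronous collectives. The sketch below is a generic illustration using torch.distributed.all_to_all_single with async_op=True; it assumes a process group has already been initialized, the tensor and module names are placeholders, and EmbeddingAllToAll_Request/Wait themselves are internal ops rather than public APIs:

```python
import torch
import torch.distributed as dist

def overlapped_step(embedding_shard, dense_input, dense_layer):
    # "Request": launch the all-to-all asynchronously and keep the work handle.
    exchanged = torch.empty_like(embedding_shard)
    work = dist.all_to_all_single(exchanged, embedding_shard, async_op=True)

    # Independent computation that does not depend on the exchanged embeddings
    # runs while the communication is in flight.
    dense_out = dense_layer(dense_input)

    # "Wait": block only when the exchanged embeddings are actually needed.
    work.wait()
    return exchanged, dense_out
```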
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
(a) Model graph. (b) Original execution order. (c) Optimized execution order.
Figure 5: Overlapping Computation with Communication
3.4 Summary
Table 1 summarizes the optimizations discussed in this section and the corresponding performance bottlenecks addressed.

Optimization | Performance Bottleneck Addressed
Combining Input Sparse Features | Host-to-device memory copy
Horizontal fusion | GPU kernel launch overhead
Overlapping Computation with Communication | Embedding all-to-all access time
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
Table 1: Summary of the optimizations and the performance bottlenecks addressed
We have also developed other FX transformations which are not discussed in this section due to space limitations. To discover which models would benefit from these transformations, we analyzed the performance data collected by MAIProf [7] from the models that run at Meta's data centers. Altogether, these transformations provide up to 2-3x speedups compared to eager mode on a set of production models.
4. Concluding Remarks
The graph mode in PyTorch is preferred over the eager mode for production use for performance reasons. FX is a powerful tool for capturing and optimizing the graph of a PyTorch program. We demonstrate three FX transformations that are used to optimize production recommendation models inside Meta. We hope that this blog can motivate other PyTorch model developers to use graph transformations to boost their models' performance.
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
References
[1] End-to-end Machine Learning Framework
[2] DNNFusion: Accelerating Deep Neural Networks Execution with Advanced Operator Fusion
[3] Torch.FX: Practical Program Capture and Transformation for Deep Learning in Python, MLSys 2022
[4] Torch.fx - PyTorch 1.12 documentation
[5] Feature Hashing for Large Scale Multitask Learning
[6] NVIDIA Collective Communication Library Documentation
[7] Performance Debugging of Production PyTorch Models at Meta
https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/
pytorch blogs
layout: blog_detail title: 'PyTorch 1.3 adds mobile, privacy, quantization, and named tensors' author: Team PyTorch PyTorch continues to gain momentum because of its focus on meeting the needs of researchers, its streamlined workflow for production use, and most of all because of the enthusiastic support it has received from the AI community. PyTorch citations in papers on ArXiv grew 194 percent in the first half of 2019 alone, as noted by O’Reilly, and the number of contributors to the platform has grown more than 50 percent over the last year, to nearly 1,200. Facebook, Microsoft, Uber, and other organizations across industries are increasingly using it as the foundation for their most important machine learning (ML) research and production workloads.
https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/
pytorch blogs
We are now advancing the platform further with the release of PyTorch 1.3, which includes experimental support for features such as seamless model deployment to mobile devices, model quantization for better performance at inference time, and front-end improvements, like the ability to name tensors and create clearer code with less need for inline comments. We're also launching a number of additional tools and libraries to support model interpretability and bring multimodal research to production.
https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/
pytorch blogs
Additionally, we've collaborated with Google and Salesforce to add broad support for Cloud Tensor Processing Units, providing a significantly accelerated option for training large-scale deep neural networks. Alibaba Cloud also joins Amazon Web Services, Microsoft Azure, and Google Cloud as supported cloud platforms for PyTorch users. You can get started now at pytorch.org.
PyTorch 1.3
The 1.3 release of PyTorch brings significant new features, including experimental support for mobile device deployment, 8-bit integer quantization in eager mode, and the ability to name tensors. With each of these enhancements, we look forward to additional contributions and improvements from the PyTorch community.
https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/
pytorch blogs
Named tensors (experimental)
Cornell University's Sasha Rush has argued that, despite its ubiquity in deep learning, the traditional implementation of tensors has significant shortcomings, such as exposing private dimensions, broadcasting based on absolute position, and keeping type information in documentation. He proposed named tensors as an alternative approach.
Today, we name and access dimensions by comment:
# Tensor[N, C, H, W]
images = torch.randn(32, 3, 56, 56)
images.sum(dim=1)
images.select(dim=1, index=0)
But naming explicitly leads to more readable and maintainable code:
NCHW = ['N', 'C', 'H', 'W']
images = torch.randn(32, 3, 56, 56, names=NCHW)
images.sum('C')
images.select('C', index=0)
https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/
pytorch blogs
Quantization (experimental)
It's important to make efficient use of both server-side and on-device compute resources when developing ML applications. To support more efficient deployment on servers and edge devices, PyTorch 1.3 now supports 8-bit model quantization using the familiar eager mode Python API. Quantization refers to techniques used to perform computation and storage at reduced precision, such as 8-bit integer. This currently experimental feature includes support for post-training quantization, dynamic quantization, and quantization-aware training. It leverages the FBGEMM and QNNPACK state-of-the-art quantized kernel back ends, for x86 and ARM CPUs, respectively, which are integrated with PyTorch and now share a common API.
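As a concrete illustration, here is a minimal sketch of post-training dynamic quantization with the eager mode API. The TwoLayerNet module is a hypothetical toy model; torch.quantization.quantize_dynamic is the entry point shown in the PyTorch quantization docs:

```python
import torch
import torch.nn as nn

class TwoLayerNet(nn.Module):
    # A hypothetical toy model used only to illustrate the API.
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(128, 64)
        self.fc2 = nn.Linear(64, 10)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

model = TwoLayerNet().eval()

# Dynamic quantization: nn.Linear weights are stored as int8 and
# activations are quantized on the fly at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

out = quantized_model(torch.randn(1, 128))
print(out.shape)
```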
https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/
pytorch blogs
To learn more about the design and architecture, check out the API docs here, and get started with any of the supported techniques using the tutorials available here.
PyTorch mobile (experimental)
Running ML on edge devices is growing in importance as applications continue to demand lower latency. It is also a foundational element for privacy-preserving techniques such as federated learning. To enable more efficient on-device ML, PyTorch 1.3 now supports an end-to-end workflow from Python to deployment on iOS and Android.
This is an early, experimental release, optimized for end-to-end development. Coming releases will focus on:
Optimization for size: Build level optimization and selective compilation depending on the operators needed for user applications (i.e., you pay binary size for only the operators you need)
Performance: Further improvements to performance and coverage on mobile CPUs and GPUs
https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/
pytorch blogs
High level API: Extend mobile native APIs to cover the common preprocessing and integration tasks needed for incorporating ML in mobile applications (e.g., computer vision and NLP)
Learn more or get started on Android or iOS here.
New tools for model interpretability and privacy
Captum
As models become ever more complex, it is increasingly important to develop new methods for model interpretability. To help address this need, we're launching Captum, a tool to help developers working in PyTorch understand why their model generates a specific output. Captum provides state-of-the-art tools to understand how specific neurons and layers affect the predictions made by the models. Captum's algorithms include integrated gradients, conductance, SmoothGrad and VarGrad, and DeepLift. The example below shows how to apply model interpretability algorithms on a pretrained ResNet model and then visualize the attributions for each pixel by overlaying them on the image.
https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/
pytorch blogs
noise_tunnel = NoiseTunnel(integrated_gradients)
attributions_ig_nt, delta = noise_tunnel.attribute(input, n_samples=10, nt_type='smoothgrad_sq', target=pred_label_idx)
_ = viz.visualize_image_attr_multiple(["original_image", "heat_map"],
                                      ["all", "positive"],
                                      np.transpose(attributions_ig_nt.squeeze().cpu().detach().numpy(), (1,2,0)),
                                      np.transpose(transformed_img.squeeze().cpu().detach().numpy(), (1,2,0)),
                                      cmap=default_cmap,
                                      show_colorbar=True)
Learn more about Captum at captum.ai.
https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/
pytorch blogs
CrypTen Practical applications of ML via cloud-based or machine-learning-as-a-service (MLaaS) platforms pose a range of security and privacy challenges. In particular, users of these platforms may not want or be able to share unencrypted data, which prevents them from taking full advantage of ML tools. To address these challenges, the ML community is exploring a number of technical approaches, at various levels of maturity. These include homomorphic encryption, secure multiparty computation, trusted execution environments, on-device computation, and differential privacy. To provide a better understanding of how some of these technologies can be applied, we are releasing CrypTen, a new community-based research platform for taking the field of privacy-preserving ML forward. Learn more about CrypTen here. It is available on GitHub here.
https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/
pytorch blogs
Tools for multimodal AI systems
Digital content is often made up of several modalities, such as text, images, audio, and video. For example, a single public post might contain an image, body text, a title, a video, and a landing page. Even one particular component may have more than one modality, such as a video that contains both visual and audio signals, or a landing page that is composed of images, text, and HTML sources. The ecosystem of tools and libraries that work with PyTorch offers enhanced ways to address the challenges of building multimodal ML systems. Here are some of the latest libraries launching today:
https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/
pytorch blogs
Detectron2 Object detection and segmentation are used for tasks ranging from autonomous vehicles to content understanding for platform integrity. To advance this work, Facebook AI Research (FAIR) is releasing Detectron2, an object detection library now implemented in PyTorch. Detectron2 provides support for the latest models and tasks, increased flexibility to aid computer vision research, and improvements in maintainability and scalability to support production use cases. Detectron2 is available here and you can learn more here. Speech extensions to fairseq
https://pytorch.org/blog/pytorch-1-dot-3-adds-mobile-privacy-quantization-and-named-tensors/
pytorch blogs