We saw that the computation graph is compiled and executed when a LazyTensor barrier is encountered. There are three scenarios in which the LazyTensor barrier is automatically or manually introduced. The first is an explicit call to the mark_step() API, as shown in the preceding example. mark_step() is also called implicitly at every step when you wrap your dataloader with MpDeviceLoader (highly recommended to overlap compute and data upload to the TPU device). The optimizer_step method of xla_model can also call mark_step implicitly (when you set barrier=True).
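To make these three entry points concrete, here is a minimal sketch of a training loop; the model, loss function, and underlying CPU dataloader are hypothetical placeholders, not part of the original post:

```python
import torch
import torch_xla.core.xla_model as xm
import torch_xla.distributed.parallel_loader as pl

device = xm.xla_device()
model = MyModel().to(device)                      # hypothetical model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# MpDeviceLoader calls mark_step() for you at the end of every step.
train_loader = pl.MpDeviceLoader(cpu_loader, device)  # cpu_loader: your regular DataLoader

for data, target in train_loader:
    optimizer.zero_grad()
    loss = loss_fn(model(data), target)           # hypothetical loss_fn
    loss.backward()
    # optimizer_step can also introduce the barrier when barrier=True;
    # alternatively, call xm.mark_step() explicitly.
    xm.optimizer_step(optimizer, barrier=True)
```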
The second scenario in which a barrier is introduced is when PyTorch/XLA finds an op with no mapping (lowering) to equivalent XLA HLO ops. PyTorch has 2000+ operations. Although most of these operations are composite (i.e. they can be expressed in terms of other fundamental operations), some of them do not have a corresponding lowering in XLA. What happens when an op with no XLA lowering is used? PyTorch/XLA stops the operation recording and cuts the graph(s) leading to the input(s) of the unlowered op. This cut graph is then compiled and dispatched for execution. The results (materialized tensors) of the execution are sent back from device to host, the unlowered op is executed on the host (CPU), and the downstream LazyTensor operations then continue, creating new graph(s) until a barrier is encountered again.
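One way to spot such fallbacks, as a rough sketch, is to inspect the counters that PyTorch/XLA keeps; counters prefixed with aten:: correspond to ops that were executed on the host instead of being lowered to XLA:

```python
import torch_xla.debug.metrics as met

# Full human-readable report of metrics and counters collected so far.
print(met.metrics_report())

# Or look specifically for host fallbacks: aten::-prefixed counters
# indicate ops that had no XLA lowering.
for name in met.counter_names():
    if name.startswith("aten::"):
        print(name, met.counter_value(name))
```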
The third and final scenario that results in a LazyTensor barrier is when there is a control structure/statement or another method that requires the value of a tensor. Such a statement would, at a minimum, cause the execution of the computation graph leading to that tensor (if the graph has already been seen), or cause compilation and execution of both. Other examples of such methods include .item() and isEqual(). In general, any operation that maps Tensor -> Scalar causes this behavior.

Dynamic Graph

As illustrated in the preceding section, the graph compilation cost is amortized if the same graph shape is executed many times. This is because the compiled graph is cached with a hash derived from the graph shape, the input shapes, and the output shapes. If these shapes change, compilation is triggered again, and too frequent compilation results in training time degradation. Let's consider the following example:

```python
def dummy_step(x, y, loss, acc=False):
    z = torch.einsum('bs,st->bt', y, x)
    step_loss = z.sum().view(1,)
    if acc:
        loss = torch.cat((loss, step_loss))
    else:
        loss = step_loss
    xm.mark_step()
    return loss


import time

def measure_time(acc=False):
    exec_times = []
    iter_count = 100
    x = torch.rand((512, 8)).to(dev)
    y = torch.rand((512, 512)).to(dev)
    loss = torch.zeros(1).to(dev)
    for i in range(iter_count):
        tic = time.time()
        loss = dummy_step(x, y, loss, acc=acc)
        toc = time.time()
        exec_times.append(toc - tic)
    return exec_times

dyn = measure_time(acc=True)   # acc=True results in a dynamic graph
st = measure_time(acc=False)   # static graph: computation, input and output shapes don't change

import matplotlib.pyplot as plt
plt.plot(st, label='static graph')
plt.plot(dyn, label='dynamic graph')
plt.legend()
plt.title('Execution time in seconds')
```
Note that the static and dynamic cases perform the same computation, but the dynamic graph compiles every time, leading to a higher overall run-time. In practice, a training step with recompilation can sometimes be an order of magnitude slower or worse. In the next section we discuss some of the PyTorch/XLA tools for debugging training degradation.

Profiling Training Performance with PyTorch/XLA

PyTorch/XLA profiling consists of two major components. The first is client-side profiling. This feature is turned on by simply setting the environment variable PT_XLA_DEBUG to 1. Client-side profiling points to unlowered ops or device-to-host transfers in your source code. It also reports if compilations are happening too frequently during training. You can explore some metrics and counters provided by PyTorch/XLA in conjunction with the profiler in this notebook.
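For reference, a minimal sketch of enabling the client-side profiler from Python; setting the variable in the shell before launching the script works just as well, and setting it before torch_xla is imported is an assumption made here to be safe:

```python
import os

# Enable PyTorch/XLA client-side profiling before torch_xla is imported.
os.environ["PT_XLA_DEBUG"] = "1"

import torch_xla.core.xla_model as xm
```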
The second component offered by the PyTorch/XLA profiler is the inline trace annotation. For example:

```python
import torch_xla.debug.profiler as xp

def train_imagenet():
  print('==> Preparing data..')
  img_dim = get_model_property('img_dim')
  ....
  server = xp.start_server(3294)

  def train_loop_fn(loader, epoch):
    ....
    model.train()
    for step, (data, target) in enumerate(loader):
      with xp.StepTrace('Train_Step', step_num=step):
        ....
        if FLAGS.amp:
          ....
        else:
          with xp.Trace('build_graph'):
            output = model(data)
            loss = loss_fn(output, target)
            loss.backward()
          xm.optimizer_step(optimizer)
```

Notice the start_server API call. The port number that you use here is the same port number you will use with the TensorBoard profiler in order to view the op trace.
The op trace along with the client-side debugging function is a powerful set of tools to debug and optimize your training performance with PyTorch/XLA. For more detailed instructions on profiler usage, the reader is encouraged to explore part-1, part-2, and part-3 of the blog series on PyTorch/XLA performance debugging.
Summary

In this article we have reviewed the fundamentals of the LazyTensor system. We built on those fundamentals with PyTorch/XLA to understand the potential causes of training performance degradation. We discussed why "compile once and execute often" helps to get the best performance on LazyTensor systems, and why training slows down when this assumption breaks. We hope that PyTorch users will find these insights helpful for their novel works with LazyTensor systems.

Acknowledgements

A big thank you to my outstanding colleagues Jack Cao, Milad Mohammedi, Karl Weinmeister, Rajesh Thallam, Jordan Tottan (Google) and Geeta Chauhan (Meta) for their meticulous reviews and feedback. And thanks to the extended PyTorch/XLA development team from Google, Meta, and the open source community for making PyTorch possible on TPUs. And finally, thanks to the authors of the LazyTensor paper not only for developing LazyTensor but also for writing such an accessible paper.
References

[1] LazyTensor: combining eager execution with domain-specific compilers
layout: blog_detail
title: "Extending TorchVision’s Transforms to Object Detection, Segmentation & Video tasks"
author: Philip Meier, Victor Fomin, Vasilis Vryniotis, Nicolas Hug
featured-img: "assets/images/Transforms-v2-feature-image.png"

Note: A previous version of this post was published in November 2022. We have updated this post with the most up-to-date info, in view of the upcoming 0.15 release of torchvision in March 2023, jointly with PyTorch 2.0.

TorchVision is extending its Transforms API! Here is what’s new:

- You can use them not only for Image Classification but also for Object Detection, Instance & Semantic Segmentation and Video Classification.
- You can use new functional transforms for transforming Videos, Bounding Boxes and Segmentation Masks.
The API is completely backward compatible with the previous one, and it remains the same to assist with migration and adoption. We are now releasing this new API as Beta in the torchvision.transforms.v2 namespace, and we would love to get early feedback from you to improve its functionality. Please reach out to us if you have any questions or suggestions.

Limitations of current Transforms

The existing Transforms API of TorchVision (aka V1) only supports single images. As a result it can only be used for classification tasks:

```python
from torchvision import transforms

trans = transforms.Compose([
    transforms.ColorJitter(contrast=0.5),
    transforms.RandomRotation(30),
    transforms.CenterCrop(480),
])
imgs = trans(imgs)
```

The above approach doesn’t support Object Detection or Segmentation. This limitation made any non-classification Computer Vision tasks second-class citizens, as one couldn’t use the Transforms API to perform the necessary augmentations. Historically this made it difficult to train high-accuracy models using TorchVision’s primitives, and thus our Model Zoo lagged several points behind SoTA.
To circumvent this limitation, TorchVision offered custom implementations in its reference scripts that showcased how one could perform augmentations for each task. Though this practice enabled us to train high-accuracy classification, object detection & segmentation models, it was a hacky approach which made those transforms impossible to import from the TorchVision binary.

The new Transforms API

The Transforms V2 API supports videos, bounding boxes, and segmentation masks, meaning that it offers native support for many Computer Vision tasks. The new solution is a drop-in replacement:

```python
import torchvision.transforms.v2 as transforms

# Exactly the same interface as V1:
trans = transforms.Compose([
    transforms.ColorJitter(contrast=0.5),
    transforms.RandomRotation(30),
    transforms.CenterCrop(480),
])
imgs, bboxes, labels = trans(imgs, bboxes, labels)
```

The new Transform Classes can receive any arbitrary number of inputs without enforcing specific order or structure:

```python
# Already supported:
trans(imgs)                           # Image Classification
trans(videos)                         # Video Tasks
trans(imgs, bboxes, labels)           # Object Detection
trans(imgs, bboxes, masks, labels)    # Instance Segmentation
trans(imgs, masks)                    # Semantic Segmentation
trans({"image": imgs, "box": bboxes, "tag": labels})  # Arbitrary Structure

# Future support:
trans(imgs, bboxes, labels, keypoints)        # Keypoint Detection
trans(stereo_images, disparities, masks)      # Depth Perception
trans(image1, image2, optical_flows, masks)   # Optical Flow
trans(imgs_or_videos, labels)                 # MixUp/CutMix-style Transforms
```
The Transform Classes make sure that they apply the same random transforms to all the inputs to ensure consistent results.

The functional API has been updated to support all necessary signal processing kernels (resizing, cropping, affine transforms, padding etc) for all inputs:

```python
from torchvision.transforms.v2 import functional as F


# High-level dispatcher, accepts any supported input type, fully BC
F.resize(inpt, size=[224, 224])

# Image tensor kernel
F.resize_image_tensor(img_tensor, size=[224, 224], antialias=True)

# PIL image kernel
F.resize_image_pil(img_pil, size=[224, 224], interpolation=BILINEAR)

# Video kernel
F.resize_video(video, size=[224, 224], antialias=True)

# Mask kernel
F.resize_mask(mask, size=[224, 224])

# Bounding box kernel
F.resize_bounding_box(bbox, size=[224, 224], spatial_size=[256, 256])
```
Under the hood, the API uses Tensor subclassing to wrap the input, attach useful metadata and dispatch to the right kernel. For your data to be compatible with these new transforms, you can either use the provided dataset wrapper, which should work with most torchvision built-in datasets, or you can wrap your data manually into Datapoints:

```python
from torchvision.datasets import wrap_dataset_for_transforms_v2

ds = CocoDetection(..., transforms=v2_transforms)
ds = wrap_dataset_for_transforms_v2(ds)  # data is now compatible with transforms v2!

# Or wrap your data manually using the lower-level Datapoint classes:
from torchvision import datapoints

imgs = datapoints.Image(images)
vids = datapoints.Video(videos)
masks = datapoints.Mask(target["masks"])
bboxes = datapoints.BoundingBox(target["boxes"], format="XYXY", spatial_size=imgs.shape)
```

In addition to the new API, we now provide importable implementations for several data augmentations that are used in SoTA research, such as Large Scale Jitter, AutoAugmentation methods and several new Geometric, Color and Type Conversion transforms.

The API continues to support both PIL and Tensor backends for Images, single or batched input, and maintains JIT-scriptability on both the functional and class APIs. The new API has been verified to achieve the same accuracy as the previous implementation.
An end-to-end example

Here is an example of the new API using the following image. It works both with PIL images and Tensors. For more examples and tutorials, take a look at our gallery!

```python
from torchvision import io, utils
from torchvision import datapoints
from torchvision.transforms import v2 as T
from torchvision.transforms.v2 import functional as F

# Defining and wrapping input to appropriate Tensor Subclasses
path = "COCO_val2014_000000418825.jpg"
img = datapoints.Image(io.read_image(path))
# img = PIL.Image.open(path)
bboxes = datapoints.BoundingBox(
    [[2, 0, 206, 253], [396, 92, 479, 241], [328, 253, 417, 332],
     [148, 68, 256, 182], [93, 158, 170, 260], [432, 0, 438, 26],
     [422, 0, 480, 25], [419, 39, 424, 52], [448, 37, 456, 62],
     [435, 43, 437, 50], [461, 36, 469, 63], [461, 75, 469, 94],
     [469, 36, 480, 64], [440, 37, 446, 56], [398, 233, 480, 304],
     [452, 39, 463, 63], [424, 38, 429, 50]],
    format=datapoints.BoundingBoxFormat.XYXY,
    spatial_size=F.get_spatial_size(img),
)
labels = [59, 58, 50, 64, 76, 74, 74, 74, 74, 74, 74, 74, 74, 74, 50, 74, 74]

# Defining and applying Transforms V2
trans = T.Compose(
    [
        T.ColorJitter(contrast=0.5),
        T.RandomRotation(30),
        T.CenterCrop(480),
    ]
)
img, bboxes, labels = trans(img, bboxes, labels)

# Visualizing results
viz = utils.draw_bounding_boxes(F.to_image_tensor(img), boxes=bboxes)
F.to_pil_image(viz).show()
```

Development milestones and future work

Here is where we are in development:

- [x] Design API
- [x] Write Kernels for transforming Videos, Bounding Boxes, Masks and Labels
- [x] Rewrite all existing Transform Classes (stable + references) on the new API:
  - [x] Image Classification
  - [x] Video Classification
  - [x] Object Detection
  - [x] Instance Segmentation
  - [x] Semantic Segmentation
- [x] Verify the accuracy of the new API for all supported Tasks and Backends
- [x] Speed Benchmarks and Performance Optimizations (in progress - planned for Dec)
- [x] Graduate from Prototype (planned for Q1)
- [ ] Add support of Depth Perception, Keypoint Detection, Optical Flow and more (future)
- [ ] Add smooth support for batch-wise transforms like MixUp and CutMix

We would love to get feedback from you to improve its functionality. Please reach out to us if you have any questions or suggestions.
layout: blog_detail
title: "New Library Updates in PyTorch 1.13"
author: Team PyTorch
featured-img: "assets/images/new-library-updates-in-pytorch-1.13-2.jpg"

Summary

We are bringing a number of improvements to the current PyTorch libraries, alongside the PyTorch 1.13 release. These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch. Along with 1.13, we are releasing updates to the PyTorch Libraries, please find them below.

TorchAudio

(Beta) Hybrid Demucs Model and Pipeline

Hybrid Demucs is a music source separation model that uses both spectrogram and time domain features. It has demonstrated state-of-the-art performance in the Sony® Music DeMixing Challenge. (citation: https://arxiv.org/abs/2111.03600)

The TorchAudio v0.13 release includes the following features:
- MUSDB_HQ Dataset, which is used in Hybrid Demucs training (docs)
- Hybrid Demucs model architecture (docs)
- Three factory functions suitable for different sample rate ranges
- Pre-trained pipelines (docs)
- SDR Results of pre-trained pipelines on MUSDB_HQ test set
- Tutorial that steps through music source separation using the pretrained pipeline (docs)

| Pipeline | All | Drums | Bass | Other | Vocals |
|---|---|---|---|---|---|
| HDEMUCS_HIGH_MUSDB* | 6.42 | 7.76 | 6.51 | 4.47 | 6.93 |
| HDEMUCS_HIGH_MUSDB_PLUS** | 9.37 | 11.38 | 10.53 | 7.24 | 8.32 |

* Trained on the training data of the MUSDB-HQ dataset.
** Trained on both training and test sets of MUSDB-HQ and 150 extra songs from an internal database that were specifically produced for Meta.

```python
import torchaudio
from torchaudio.pipelines import HDEMUCS_HIGH_MUSDB_PLUS

bundle = HDEMUCS_HIGH_MUSDB_PLUS
model = bundle.get_model()
sources_list = model.sources

mixture, samplerate = torchaudio.load("song.wav")
sources = model(mixture)
audios = dict(zip(sources_list, sources))
```

Special thanks to Alexandre Defossez for the guidance.

(Beta) Datasets and Metadata Mode for SUPERB Benchmark
TorchAudio adds support for various audio-related datasets used in downstream tasks for benchmarking self-supervised learning models. With the addition of several new datasets, there is now support for the downstream tasks in version 1 of the SUPERB benchmark, which can be found in the s3prl repository.

For these datasets, we also add metadata support through a get_metadata function, enabling faster dataset iteration or preprocessing without the need to load waveforms. The function returns the same features as __getitem__, except it returns the relative waveform path rather than the loaded waveform (a small sketch follows the list below).

Datasets with metadata functionality:

- LIBRISPEECH (docs)
- LibriMix (docs)
- QUESST14 (docs)
- SPEECHCOMMANDS (docs)
- (new) FluentSpeechCommands (docs)
- (new) Snips (docs)
- (new) IEMOCAP (docs)
- (new) VoxCeleb1 (Identification, Verification)
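As a rough sketch of the metadata mode, shown here for LIBRISPEECH and hedged on the exact return signature (assumed to mirror __getitem__, with a relative path instead of the waveform):

```python
import torchaudio

# Hypothetical local root directory.
dataset = torchaudio.datasets.LIBRISPEECH("./data", url="test-clean", download=True)

# get_metadata(i) mirrors __getitem__(i) but returns the relative audio path
# instead of a decoded waveform, so no audio is loaded here.
relpath, sample_rate, transcript, speaker_id, chapter_id, utterance_id = dataset.get_metadata(0)
print(relpath, sample_rate, transcript)
```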
(Beta) Custom Language Model support in CTC Beam Search Decoding

TorchAudio released a CTC beam search decoder in release 0.12, with KenLM language model support. This release adds functionality for creating custom Python language models that are compatible with the decoder, using the torchaudio.models.decoder.CTCDecoderLM wrapper. For more information on using a custom language model, please refer to the documentation and tutorial.

(Beta) StreamWriter

torchaudio.io.StreamWriter is a class for encoding media, including audio and video. It can handle a wide variety of codecs, chunk-by-chunk encoding and GPU encoding.

```python
writer = StreamWriter("example.mp4")
writer.add_audio_stream(
    sample_rate=16_000,
    num_channels=2,
)
writer.add_video_stream(
    frame_rate=30,
    height=96,
    width=128,
    format="rgb24",
)

with writer.open():
    writer.write_audio_chunk(0, audio)
    writer.write_video_chunk(1, video)
```

For more information, refer to the documentation and the following tutorials:
- StreamWriter Basic Usage
- StreamWriter Advanced Usage
- Hardware-Accelerated Video Decoding and Encoding

TorchData

For a complete list of changes and new features, please visit our repository’s 0.5.0 release note.
(Prototype) DataLoader2

DataLoader2 was introduced in the last release to execute DataPipe graphs, with support for dynamic sharding for multi-process/distributed data loading, multiple backend ReadingServices, and DataPipe graph in-place modification (e.g. shuffle control). In this release, we further consolidated the API for DataLoader2, and detailed documentation is now available here. We continue to welcome early adopters and feedback, as well as potential contributors. If you are interested in trying it out, we encourage you to install the nightly version of TorchData.
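A minimal sketch of the DataLoader2 + ReadingService pattern described above; the toy DataPipe and the two-worker setting are illustrative, not from the original post:

```python
from torchdata.datapipes.iter import IterableWrapper
from torchdata.dataloader2 import DataLoader2, MultiProcessingReadingService

# A toy DataPipe graph: wrap an in-memory sequence, shuffle it, and batch it.
datapipe = IterableWrapper(range(100)).shuffle().batch(8)

# The ReadingService decides how the graph is executed (here: 2 worker processes).
rs = MultiProcessingReadingService(num_workers=2)
dl = DataLoader2(datapipe, reading_service=rs)

for batch in dl:
    pass  # training / processing step goes here

dl.shutdown()
```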
(Beta) Data Loading from Cloud Service Providers

We extended our support to load data from additional cloud storage providers via DataPipes, now covering AWS, Google Cloud Storage, and Azure. A tutorial is also available. We are open to feedback and feature requests.

We also performed a simple benchmark, comparing the performance of data loading from AWS S3 and an attached volume on an AWS EC2 instance. The results are visible here.

torch::deploy (Beta)

torch::deploy is now in Beta! torch::deploy is a C++ library for Linux-based operating systems that allows you to run multiple Python interpreters in a single process. You can run your existing eager PyTorch models without any changes for production inference use cases. Highlights include:

- Existing models work out of the box - no need to modify your Python code to support tracing.
- Full support for your existing Python environment, including C extensions.
- No need to cross process boundaries to load balance in multi-GPU serving environments.
- Model weights can be shared between multiple Python interpreters.
- A vastly improved installation and setup process.

```cpp
torch::deploy::InterpreterManager manager(4);

// access one of the 4 interpreters
auto I = manager.acquireOne();

// run infer from your_model.py
I.global("your_model", "infer")({at::randn({10, 240, 320})});
```

Learn more here.

(Beta) CUDA/ROCm/CPU Backends

torch::deploy now links against standard PyTorch Python distributions, so all accelerators that PyTorch core supports, such as CUDA and AMD/HIP, work out of the box. You can install any device variant of PyTorch via pip/conda as usual: https://pytorch.org/get-started/locally/

(Prototype) aarch64/arm64 support

torch::deploy now has basic support for aarch64 Linux systems. We're looking to gather feedback on it and learn more about arm use cases for eager PyTorch models. Learn more / share your use case at https://github.com/pytorch/multipy/issues/64
TorchEval

(Prototype) Introducing Native Metrics Support for PyTorch

TorchEval is a library built for users who want highly performant implementations of common metrics to evaluate machine learning models. It also provides an easy-to-use interface for building custom metrics with the same toolkit. Building your metrics with TorchEval makes running distributed training loops with torch.distributed a breeze. Learn more with our docs, see our examples, or check out our GitHub repo.
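To give a feel for the interface, here is a minimal sketch using one of the built-in metrics; the random tensors are placeholders for real predictions and labels:

```python
import torch
from torcheval.metrics import MulticlassAccuracy

metric = MulticlassAccuracy()

# Accumulate state batch by batch, then compute the final value.
for _ in range(3):
    preds = torch.randint(0, 10, (32,))    # predicted class indices
    targets = torch.randint(0, 10, (32,))  # ground-truth class indices
    metric.update(preds, targets)

print(metric.compute())
```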
TorchMultimodal Release (Beta)

Please watch for upcoming blogs in early November that will introduce TorchMultimodal, a PyTorch domain library for training SoTA multi-task multimodal models at scale, in more detail; in the meantime, play around with the library and models through our tutorial.

TorchRec
(Prototype) Simplified Optimizer Fusion APIs

We’ve provided a simplified and more intuitive API for setting fused optimizer settings via apply_optimizer_in_backward. This new approach enables the ability to specify optimizer settings on a per-parameter basis, and sharded modules will configure FBGEMM’s TableBatchedEmbedding modules accordingly. Additionally, this now lets TorchRec’s planner account for optimizer memory usage. This should alleviate reports of sharding jobs OOMing after using Adam with a plan generated by the planner.
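For orientation only, here is a sketch of what attaching a fused optimizer to the sparse parameters might look like; the exact import path and argument names are assumptions and may differ between TorchRec versions:

```python
import torch
from torchrec.optim.apply_optimizer_in_backward import apply_optimizer_in_backward

# Hypothetical model whose sparse arch holds the embedding tables.
# The optimizer update for these parameters is fused into the backward pass,
# and settings can be chosen per parameter group.
apply_optimizer_in_backward(
    optimizer_class=torch.optim.Adagrad,
    params=model.sparse_arch.parameters(),
    optimizer_kwargs={"lr": 0.02},
)
```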
(Prototype) Simplified Sharding APIs

We’re introducing the shard API, which allows you to shard only the embedding modules within a model, and provides an alternative to the current main entry point, DistributedModelParallel. This gives you finer-grained control over the rest of the model, which can be useful for customized parallelization logic and inference use cases (which may not require any parallelization on the dense layers). We’re also introducing construct_module_sharding_plan, providing a simpler interface to the TorchRec sharder.
(Beta) Quantized Comms

Applying quantization or mixed precision to tensors in a collective call during model-parallel training greatly improves training efficiency, with little to no effect on model quality. TorchRec now integrates with the quantized comms library provided by FBGEMM GPU and provides an interface to construct encoders and decoders (codecs) that surround the all_to_all and reduce_scatter collective calls in the output_dist of a sharded module. We also allow you to construct your own codecs to apply to your sharded module. The codecs provided by FBGEMM allow FP16, BF16, FP8, and INT8 compression, and you may use different quantizations for the forward pass and backward pass.
TorchSnapshot (Beta)

Along with PyTorch 1.13, we are releasing the beta version of TorchSnapshot, which is a performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind. Highlights include:

- Performance: TorchSnapshot provides a fast checkpointing implementation employing various optimizations, including zero-copy serialization for most tensor types, overlapped device-to-host copy and storage I/O, and parallelized storage I/O.
- Memory usage: TorchSnapshot's memory usage adapts to the host's available resources, greatly reducing the chance of out-of-memory issues when saving and loading checkpoints.
- Usability: Simple APIs that are consistent between distributed and non-distributed workloads.

Learn more with our tutorial.
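A small sketch of the basic save/restore flow; the paths and the toy model are placeholders:

```python
import torch
from torchsnapshot import Snapshot

model = torch.nn.Linear(128, 10)                 # stand-in for your model
optimizer = torch.optim.Adam(model.parameters())

# app_state maps names to stateful objects (anything with state_dict/load_state_dict).
app_state = {"model": model, "optimizer": optimizer}

# Take a snapshot...
snapshot = Snapshot.take(path="/tmp/my_snapshot", app_state=app_state)

# ...and restore it in place later (e.g. in a new process).
snapshot = Snapshot(path="/tmp/my_snapshot")
snapshot.restore(app_state=app_state)
```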
TorchVision

We are happy to introduce torchvision v0.14 (release note). This version introduces a new model registration API to help users retrieve and list models and weights. It also includes new image and video classification models such as MViT, S3D, Swin Transformer V2, and MaxViT. Last but not least, we also have new primitives and augmentations such as the PolynomialLR scheduler and SimpleCopyPaste.
(Beta) Model Registration API

Following up on the multi-weight support API that was released in the previous version, we have added a new model registration API to help users retrieve models and weights. There are now 4 new methods under the torchvision.models module: get_model, get_model_weights, get_weight, and list_models. Here are examples of how we can use them:

```python
import torchvision
from torchvision.models import get_model, get_model_weights, list_models


max_params = 5000000

tiny_models = []
for model_name in list_models(module=torchvision.models):
    weights_enum = get_model_weights(model_name)
    if len([w for w in weights_enum if w.meta["num_params"] <= max_params]) > 0:
        tiny_models.append(model_name)

print(tiny_models)
# ['mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mobilenet_v2', ...]
model = get_model(tiny_models[0], weights="DEFAULT")
print(sum(x.numel() for x in model.state_dict().values()))
# 2239188
```

#### (Beta) New Video Classification Models

We added two new video classification models, MViT and S3D. MViT is a state-of-the-art video classification transformer model which has 80.757% accuracy on the Kinetics400 dataset, while S3D is a relatively small model with good accuracy for its size. These models can be used as follows:

```python
import torch
from torchvision.models.video import *

video = torch.rand(3, 32, 800, 600)
model = mvit_v2_s(weights="DEFAULT")
# model = s3d(weights="DEFAULT")
model.eval()
prediction = model(video)
```

Here is the table showing the accuracy of the new video classification models tested on the Kinetics400 dataset.

| Model | Acc@1 | Acc@5 |
|---|---|---|
| mvit_v1_b | 81.474 | 95.776 |
| mvit_v2_s | 83.196 | 96.36 |
| s3d | 83.582 | 96.64 |

We would like to thank Haoqi Fan, Yanghao Li, Christoph Feichtenhofer and Wan-Yen Lo for their work on PyTorchVideo and their support during the development of the MViT model. We would like to thank Sophia Zhi for her contribution implementing the S3D model in torchvision.

(Stable) New Architecture and Model Variants

For Classification Models, we’ve added the Swin Transformer V2 architecture along with pre-trained weights for its tiny/small/base variants. In addition, we have added support for the MaxViT transformer. Here is an example of how to use the models:

```python
import torch
from torchvision.models import *

image = torch.rand(1, 3, 224, 224)
model = swin_v2_t(weights="DEFAULT").eval()
# model = maxvit_t(weights="DEFAULT").eval()
prediction = model(image)
```
Here is the table showing the accuracy of the models tested on the ImageNet1K dataset.

| Model | Acc@1 | Acc@1 change over V1 | Acc@5 | Acc@5 change over V1 |
|---|---|---|---|---|
| swin_v2_t | 82.072 | + 0.598 | 96.132 | + 0.356 |
| swin_v2_s | 83.712 | + 0.516 | 96.816 | + 0.456 |
| swin_v2_b | 84.112 | + 0.530 | 96.864 | + 0.224 |
| maxvit_t | 83.700 | - | 96.722 | - |

We would like to thank Ren Pang and Teodor Poncu for contributing the 2 models to torchvision.
(Stable) New Primitives & Augmentations

In this release we’ve added the SimpleCopyPaste augmentation in our reference scripts and we upstreamed the PolynomialLR scheduler to PyTorch Core. We would like to thank Lezwon Castelino and Federico Pozzi for their contributions. We are continuing our efforts to modernize TorchVision by adding more SoTA primitives, augmentations and architectures with the help of our community. If you are interested in contributing, have a look at the following issue.

Torch-TensorRT

(Prototype) TensorRT with FX2TRT frontend

Torch-TensorRT is the PyTorch integration for TensorRT, providing high-performance inference on NVIDIA GPUs. Torch-TRT allows for optimizing models directly in PyTorch for deployment, providing up to 6x performance improvement.
Torch-TRT is an AoT compiler which ingests an nn.Module or TorchScript module, optimizes compatible subgraphs in TensorRT and leaves the rest to run in PyTorch. This gives users the performance of TensorRT, but the usability and familiarity of Torch.

Torch-TensorRT is part of the PyTorch ecosystem, and was released as v1.0 in November ‘21. There are currently two distinct front-ends: TorchScript & FX. Each provides the same value proposition and underlying operation, with the primary difference being the input & output formats (TS vs FX / Python). The TorchScript front-end was included in v1.0 and should be considered stable. The FX front-end was first released in v1.2 and should be considered a Beta.

Relevant Links:

- Github
- Documentation
- Generic (TS) getting started guide
- FX getting started guide

(Stable) Introducing Torch-TensorRT

Torch-TensorRT is an integration for PyTorch that leverages the inference optimizations of TensorRT on NVIDIA GPUs. It takes advantage of TensorRT optimizations, such as FP16 and INT8 reduced precision, graph optimization, operation fusion, etc., while offering a fallback to native PyTorch when TensorRT does not support the model subgraphs. Currently, the library provides two frontend paths that help convert a PyTorch model to a TensorRT engine: one through TorchScript (TS) and the other through the FX frontend. The model is traced by either TS or FX into its IR graph and then converted to TensorRT from it. Learn more with our tutorial.
TorchX

TorchX 0.3 updates include a new list API, experiment tracking, elastic training and improved scheduler support. There’s also a new Multi-Objective NAS tutorial using TorchX + Ax.

(Prototype) List

The newly added list command and API allow you to list recently launched jobs and their statuses for a given scheduler directly from within TorchX. This removes the need for using secondary tools to list the jobs, and gives full programmatic access to recent jobs for integration with custom tools.

```
$ torchx list -s kubernetes
APP HANDLE                                        APP STATUS
-----------------------------------------------   -----------------
kubernetes://torchx/default:train-f2nx4459p5crr   SUCCEEDED
```

Learn more with our documentation.
(Prototype) Tracker

TorchX Tracker is a new prototype library that provides a flexible and customizable experiment and artifact tracking interface. This allows you to track inputs and outputs for jobs across multiple steps to make it easier to use TorchX with pipelines and other external systems.

```python
from torchx import tracker

app_run = tracker.app_run_from_env()
app_run.add_metadata(lr=lr, gamma=gamma)                      # hyperparameters
app_run.add_artifact("model", "storage://path/mnist_cnn.pt")  # logs / checkpoints
app_run.add_source(parent_run_id, "model")                    # lineage
```

Example: https://github.com/pytorch/torchx/tree/main/torchx/examples/apps/tracker
https://pytorch.org/torchx/main/tracker.html
(Prototype) Elastic Training and Autoscaling

Elasticity on Ray and Kubernetes - automatic scale-up of distributed training jobs when using a supported scheduler. Learn more with our documentation.

(Prototype) Scheduler Improvements: IBM® Spectrum LSF

Added prototype support for the IBM Spectrum LSF scheduler.

(Beta) AWS Batch Scheduler

The AWS Batch scheduler integration is now in beta.

- Log fetching and listing jobs is now supported.
- Added configs for job priorities and queue policies.
- Easily access the job UI via ui_url.

https://pytorch.org/torchx/main/schedulers/aws_batch.html

(Prototype) AnyPrecision Optimizer

A drop-in replacement for the AdamW optimizer that reduces GPU memory and enables two main features:

- Ability to successfully train the entire model pipeline in full BFloat16.
  Kahan summation ensures precision; this can improve training throughput, especially on huge models, through reduced memory and increased computation speed.
- Ability to change the variance state to BFloat16. This can reduce the overall memory required for model training with additional speed improvements.

Find more information here.
layout: blog_detail
title: "PyTorch 1.11, TorchData, and functorch are now available"
author: Team PyTorch
featured-img: "assets/images/pytorch-logo.jpg"

We are excited to announce the release of PyTorch 1.11 (release notes). This release is composed of over 3,300 commits since 1.10, made by 434 contributors. Along with 1.11, we are releasing beta versions of TorchData and functorch.

Summary:

- TorchData is a new library for common modular data loading primitives for easily constructing flexible and performant data pipelines. View it on GitHub.
- functorch, a library that adds composable function transforms to PyTorch, is now available in beta. View it on GitHub.
- Distributed Data Parallel (DDP) static graph optimizations available in stable.
Introducing TorchData

We are delighted to present the Beta release of TorchData. This is a library of common modular data loading primitives for easily constructing flexible and performant data pipelines. Based on community feedback, we have found that the existing DataLoader bundled too many features together and can be difficult to extend. Moreover, different use cases often have to rewrite the same data loading utilities over and over again. The goal here is to enable composable data loading through Iterable-style and Map-style building blocks called “DataPipes” that work well out of the box with PyTorch’s DataLoader.
A DataPipe takes in some access function over Python data structures, __iter__ for IterDataPipe and __getitem__ for MapDataPipe, and returns a new access function with a slight transformation applied. You can chain multiple DataPipes together to form a data pipeline that performs all the necessary data transformation. We have implemented over 50 DataPipes that provide different core functionalities, such as opening files, parsing texts, transforming samples, caching, shuffling, and batching. For users who are interested in connecting to cloud providers (such as Google Drive or AWS S3), the fsspec and iopath DataPipes will allow you to do so. The documentation provides detailed explanations and usage examples of each IterDataPipe and MapDataPipe.
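As a small illustration of the chaining described above (the toy data and transformations are placeholders):

```python
from torchdata.datapipes.iter import IterableWrapper

# Wrap an in-memory sequence, then chain transformations; each call returns a
# new IterDataPipe, so the pipeline composes like a lazy iterator.
dp = IterableWrapper(["a", "b", "c", "d", "e", "f"])
dp = dp.map(str.upper)   # transform each sample
dp = dp.shuffle()        # shuffle the stream
dp = dp.batch(2)         # group samples into batches of 2

for batch in dp:
    print(batch)
```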
In this release, some of the PyTorch domain libraries have migrated their datasets to use DataPipes. In TorchText, the popular datasets provided by the library are implemented using DataPipes and a section of its SST-2 binary text classification tutorial demonstrates how you can use DataPipes to preprocess data for your model. There also are other prototype implementations of datasets with DataPipes in TorchVision (available in nightly releases) and in TorchRec. You can find more specific examples here.
The documentation for TorchData is now live. It contains a tutorial that covers how to use DataPipes, use them with DataLoader, and implement custom ones. FAQs and future plans related to DataLoader are described in our project’s README file.

Introducing functorch

We’re excited to announce the first beta release of functorch. Heavily inspired by Google JAX, functorch is a library that adds composable function transforms to PyTorch. It aims to provide composable vmap (vectorization) and autodiff transforms that work with PyTorch modules and PyTorch autograd with good eager-mode performance.
Composable function transforms can help with a number of use cases that are tricky to do in PyTorch today:

- computing per-sample-gradients (or other per-sample quantities)
- running ensembles of models on a single machine
- efficiently batching together tasks in the inner-loop of MAML
- efficiently computing Jacobians and Hessians as well as batched ones

Composing vmap (vectorization), vjp (reverse-mode AD), and jvp (forward-mode AD) transforms allows us to effortlessly express the above without designing a separate library for each. For more details, please see our documentation, tutorials, and installation instructions.
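For instance, a minimal sketch of per-sample gradients with vmap and grad (the toy loss function is illustrative):

```python
import torch
from functorch import grad, vmap

weights = torch.randn(3)
samples = torch.randn(8, 3)  # a batch of 8 samples

def loss_fn(w, sample):
    # A toy scalar loss for a single sample.
    return ((sample * w).sum() - 1.0) ** 2

# grad() differentiates w.r.t. the first argument for one sample;
# vmap() maps that over the batch dimension of `samples`.
per_sample_grads = vmap(grad(loss_fn), in_dims=(None, 0))(weights, samples)
print(per_sample_grads.shape)  # torch.Size([8, 3])
```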
Distributed Training

(Stable) DDP static graph

DDP static graph assumes that your model employs the same set of used/unused parameters in every iteration, so that it can deterministically know states such as which hooks will fire, how many times the hooks will fire, and the gradient computation ready order after the first iteration. Static graph caches these states in the first iteration, and thus it can support features that DDP could not support in previous releases, e.g., multiple activation checkpoints on the same parameters regardless of whether there are unused parameters or not. The static graph feature also applies performance optimizations when there are unused parameters, e.g., it avoids traversing graphs to search for unused parameters every iteration, and enables dynamic bucketing order. These optimizations in the DDP static graph brought a 10% QPS gain for some recommendation models. To enable static graph, simply set static_graph=True in the DDP API like this:

```python
ddp_model = DistributedDataParallel(model, static_graph=True)
```

For more details, please see our documentation and tutorials.

Thanks for reading. If you’re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Twitter, Medium, YouTube, and LinkedIn.

Cheers!
Team PyTorch
layout: blog_detail
title: "Torchserve Performance Tuning, Animated Drawings Case-Study"
author: Hamid Shojanazeri, Geeta Chauhan, Mark Saroufim, Jesse Smith
featured-img: "assets/images/sketch_animator.png"
In this post we discuss performance tuning of Torchserve for serving your models in production. One of the biggest challenges in the life cycle of an ML project is deploying models in production. This requires a reliable serving solution along with solutions that address the MLOps needs. A robust serving solution needs to provide support for multi-model serving, model versioning, metric logging, monitoring, and scaling to serve peak traffic. In this post, we give an overview of Torchserve and how to tune its performance for production use cases. We discuss the Animated Drawings app from Meta that can turn your human figure sketches into animations and how it serves peak traffic with Torchserve. The Animated Drawings workflow is shown below.
https://ai.facebook.com/blog/using-ai-to-bring-childrens-drawings-to-life/

Many AI systems and tools are designed to handle realistic images of humans; children's drawings add a level of complexity and unpredictability as they are often constructed in abstract, fanciful ways. These types of morphological and stylistic variations can confuse even state-of-the-art AI systems that excel at spotting objects in photorealistic images and drawings. Meta AI researchers are working to overcome this challenge so that AI systems will be better able to recognize drawings of human figures in the wildly varied ways that children create them. This great blog post provides more details about Animated Drawings and the approach taken.

Torchserve

Fig 1. Overall flow of Torchserve performance tuning
Once you have trained your model, it needs to be integrated into a larger system to form a full-fledged application; we use the term "model serving" to refer to this integration. Basically, model serving is making your trained model available to run inferences and for subsequent use of the model.

Torchserve is the PyTorch-preferred solution for serving models in production. It is a performant and scalable tool that wraps your model in an HTTP or HTTPS API. It has a frontend implemented in Java that handles multiple tasks, from assigning workers for serving models to handling the connection between client and server. Torchserve has a Python backend that is responsible for handling the inference service.
Torchserve supports multi-model serving and versioning for A/B testing, dynamic batching, logging and metrics. It exposes four APIs for inference, explanations, management and metrics.

The Inference API listens on port 8080 and is accessible through localhost by default; this can be changed in the Torchserve configuration. It enables getting predictions from the model.
The Explanation API uses Captum under the hood to provide explanations of the model that is being served, and listens on port 8080 as well.

The Management API allows you to register or unregister and describe a model. It also enables users to scale up or down the number of workers that serve the model.

The Metrics API by default listens on port 8082 and enables us to monitor the model that is being served.
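As a quick illustration of the Inference API described above, here is a hedged sketch of requesting a prediction; the model name and input file are placeholders and assume a model has already been registered:

```python
import requests

# Hypothetical model name and input; Torchserve is assumed to be running
# with the default inference address (port 8080).
with open("sample_input.jpg", "rb") as f:
    response = requests.post("http://127.0.0.1:8080/predictions/my_model", data=f)

print(response.status_code)
print(response.text)
```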
Torchserve lets you scale your model serving and handle peak traffic by supporting batch inference and multiple workers that serve your model. Scaling can be done through the Management API and settings in a configuration file. Also, the Metrics API helps you monitor your model serving through default and customizable metrics.

Other advanced settings, such as the length of the queue for received requests, the maximum wait time for a batch of inputs, and many other properties, are configurable through a config file that can be passed to Torchserve when it is started.
Steps to serve your model with Torchserve

1. Install Torchserve, the model archiver and its requirements.
2. Choose a default handler that fits your task (e.g. image classification, etc) or author a custom handler.
3. Package your model artifacts (trained model checkpoint and all other necessary files for loading and running your model) and the handler into a ".mar" file using torch-model-archiver and place it in the model store.
4. Start serving your model.
5. Run inference.

We will discuss model handlers and metrics in more detail here.

Model handlers

Torchserve uses a handler in the backend to load the models, preprocess the received data, run inference and post-process the response. A handler in Torchserve is a Python script into which all the model initialization, preprocessing, inference and post-processing logic goes.

Torchserve provides out-of-the-box handlers for a number of applications, such as image classification, segmentation, object detection and text classification. It also supports custom handlers, in case your use case is not supported by the default handlers. Custom handlers provide great flexibility, which potentially makes Torchserve a multi-framework serving tool. Custom handlers let you define your own logic to initialize a model, which can also be used to load models from other frameworks such as ONNX.
A Torchserve handler is made of four main functions - initialize, preprocess, inference and postprocess - that each return a list. The code snippet below shows an example of a custom handler. Custom handlers inherit from BaseHandler in Torchserve and can override any of the main functions. Here is an example of the handler used for loading the Detectron2 model for figure detection; this model has been exported to Torchscript and uses model.half() to run the inference with FP16. Details are explained in another section of this post.

```python
class MyModelHandler(BaseHandler):
    def initialize(self, ctx):
        self.manifest = ctx.manifest
        properties = ctx.system_properties
        model_dir = properties.get("model_dir")
        serialized_file = self.manifest["model"]["serializedFile"]
        model_pt_path = os.path.join(model_dir, serialized_file)
        self.device = torch.device(
            "cuda:" + str(properties.get("gpu_id"))
            if torch.cuda.is_available() and properties.get("gpu_id") is not None
            else "cpu"
        )
        self.model = torch.jit.load(model_pt_path, map_location=self.device)
        self.model = self.model.half()

    def preprocess(self, data):
        inputs = []
        for request in data:
            request_body = request.get("body")
            input_ = io.BytesIO(request_body)
            image = cv2.imdecode(np.fromstring(input_.read(), np.uint8), 1)
            input = torch.Tensor(image).permute(2, 0, 1)
            input = input.to(self.device)
            input = input.half()
            inputs.append({"image": input})
        return inputs

    def inference(self, inputs):
        predictions = self.model(inputs)
        return predictions

    def postprocess(self, inference_outputs):
        responses = []
        for inference_output in inference_outputs:
            responses_json = {
                'classes': inference_output['pred_classes'].tolist(),
                'scores': inference_output['scores'].tolist(),
                'boxes': inference_output['pred_boxes'].tolist()
            }
            responses.append(json.dumps(responses_json))
        return responses
```

Metrics

An essential component of serving models in production is the ability to monitor them. Torchserve collects system-level metrics regularly and also allows adding custom metrics.
System-level metrics consist of CPU utilization, available and used disk space and memory on the host machine, along with the number of requests with different response codes (e.g. 200-300, 400-500 and above 500). Custom metrics can be added as explained here. TorchServe logs these two sets of metrics to different log files. Metrics are collected by default at:

- System metrics: log_directory/ts_metrics.log
- Custom metrics: log_directory/model_metrics.log
As mentioned before, Torchserve also exposes the Metrics API, which by default listens on port 8082 and enables users to query and monitor the collected metrics. The default metrics endpoint returns Prometheus-formatted metrics. You can query metrics using curl requests or point a Prometheus server to the endpoint and use Grafana for dashboards.

While serving a model you can query metrics using a curl request as follows:

```
curl http://127.0.0.1:8082/metrics
```
In case you are looking to export the logged metrics, please refer to this example that uses mtail to export metrics to Prometheus. Tracking these metrics in a dashboard allows you to monitor performance regressions that may have been sporadic or hard to spot during an offline benchmark run.

What to consider for tuning performance of a model in production

The workflow suggested in Fig 1 is the general idea of how to approach model deployment in production with Torchserve. In many cases, serving models in production is optimized based on throughput or latency service level agreements (SLAs). Usually real-time applications are more concerned about latency, whereas offline applications may care more about higher throughput.
There are a number of main factors contributing to the performance of a serving model in production. In particular, we are focusing on serving PyTorch models with Torchserve here; however, most of these factors generalize to models from other frameworks as well.

Model optimizations: this is a pre-step for deploying models into production. This is a very broad discussion that we will get into in a series of future blogs. It includes techniques like quantization and pruning to decrease the size of the model, using intermediate representations (IR graphs) such as Torchscript in PyTorch, fusing kernels and many others. Currently torchprep provides many of these techniques as a CLI tool.
Batch inference: this refers to feeding multiple inputs into a model; while essential during training, it can be very helpful for managing cost at inference time as well. Hardware accelerators are optimized for parallelism, and batching helps to saturate the compute capacity, which often leads to higher throughput. The main difference at inference time is that you can't wait too long for a batch to be filled by clients, something we call dynamic batching.

Number of workers: Torchserve uses workers to serve models. Torchserve workers are Python processes that hold a copy of the model weights for running inference. Too few workers means you're not benefitting from enough parallelism, but too many can cause worker contention and degrade end-to-end performance.
Hardware: choosing the appropriate hardware based on the model, the application, and the latency and throughput budget. This could be one of the hardware options supported by Torchserve: CPU, GPU, or AWS Inferentia. Some hardware configurations are intended for best-in-class performance and others are better suited for cost-effective inference. From our experiments we've found that GPUs shine best at larger batch sizes, whereas the right CPUs and AWS Inferentia can be far more cost-effective for lower batch sizes and low latency.

Best Practices for Performance tuning on Torchserve

To get the best performance out of your model while serving it with Torchserve, we are sharing some of the best practices here. Torchserve provides a benchmark suite that provides helpful insights for making informed decisions on the different choices detailed below.
Optimize your model as the first step; see the PyTorch model optimization tutorials. Model optimization choices are also closely tied to the hardware of choice. We will discuss this in more detail in another blog post.

Deciding the hardware for model deployment can be closely related to the latency and throughput budget and the cost per inference. Depending on the size of the model and the application it can vary: for some models, like computer vision models, it has historically not been affordable to run them in production on CPU. However, with optimizations such as IPEX recently added to Torchserve, this has become much more affordable and cost-beneficial, and you can learn more in this investigative case study.
Workers in Torchserve are Python processes that provide parallelism; setting the number of workers should be done carefully. By default Torchserve launches a number of workers equal to the VCPUs or available GPUs on the host, which can add a considerable amount of time to the Torchserve start-up.

Torchserve exposes a config property to set the number of workers. To provide efficient parallelism through multiple workers while avoiding them competing over resources, as a baseline we recommend the following settings on CPU and GPU:

CPU: In the handler, call torch.set_num_threads(1), then set the number of workers to num physical cores / 2. The best threading configurations can be achieved by leveraging the Intel CPU launcher script.
https://pytorch.org/blog/torchserve-performance-tuning/
pytorch blogs
GPU: the number of available GPUs can be set through number_gpus in config.properties. Torchserve uses round robin to assign workers to GPUs. We recommend setting the number of workers as follows: number of workers = (number of available GPUs) / (number of unique models). Note that GPUs that are pre-Ampere do not provide any resource isolation with Multi-Instance GPUs.
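Workers can also be scaled per model at runtime through the Management API; the sketch below assumes the default management port (8081) and a placeholder model name.

```python
import requests

MANAGEMENT = "http://localhost:8081"
MODEL = "mask_rcnn"  # placeholder model name

# Scale the already-registered model to exactly one worker.
resp = requests.put(f"{MANAGEMENT}/models/{MODEL}", params={"min_worker": 1, "max_worker": 1})
print(resp.status_code, resp.text)
```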
https://pytorch.org/blog/torchserve-performance-tuning/
pytorch blogs
Batch size can directly affect latency and throughput. To better utilize the compute resources, the batch size needs to be increased; however, there is a tradeoff between latency and throughput. Larger batch sizes can increase throughput but also result in higher latency. Batch size can be set in Torchserve in two ways: through the model config in config.properties, or while registering the model using the Management API (a register call along these lines is sketched below). In the next section, we are going to use the Torchserve benchmark suite to decide the best combination of model optimization, hardware, workers, and batch size.
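For the Management API route, a register call could look roughly like the following; the archive name, batch size, and delay are placeholder values, and the parameter names follow the Torchserve register-model API.

```python
import requests

MANAGEMENT = "http://localhost:8081"

# Register a .mar file with server-side (dynamic) batching enabled.
resp = requests.post(
    f"{MANAGEMENT}/models",
    params={
        "url": "animated_drawings.mar",  # placeholder archive name
        "batch_size": 8,                 # max requests aggregated into one batch
        "max_batch_delay": 50,           # ms to wait for the batch to fill
        "initial_workers": 1,
    },
)
print(resp.status_code, resp.text)
```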
https://pytorch.org/blog/torchserve-performance-tuning/
pytorch blogs
Animated Drawings Performance Tuning
To use the Torchserve benchmark suite, first we need to have an archived ".mar" file, as discussed above, that contains the model, handler, and all other artifacts needed to load and run inference. Animated Drawings uses Detectron2's implementation of Mask-RCNN for its object detection model.
How to run the benchmark suite
The automated benchmark suite in Torchserve lets you benchmark multiple models with different settings, including batch size and number of workers, and generates a report for you. To get started:
git clone https://github.com/pytorch/serve.git
cd serve/benchmarks
pip install -r requirements-ab.txt
apt-get install apache2-utils
Model level settings can be configured in a yaml file similar to
```yaml
Model_name:
  eager_mode:
    benchmark_engine: "ab"
    url: "Path to .mar file"
    workers:
      - 1
      - 4
```
https://pytorch.org/blog/torchserve-performance-tuning/
pytorch blogs
```yaml
    batch_delay: 100
    batch_size:
      - 1
      - 2
      - 4
      - 8
    requests: 10000
    concurrency: 10
    input: "Path to model input"
    backend_profiling: False
    exec_env: "local"
    processors:
      - "cpu"
      - "gpus": "all"
```
This yaml file will be referenced in the [benchmark_config_template.yaml](https://github.com/pytorch/serve/blob/master/benchmarks/benchmark_config_template.yaml#L12) file, which includes other settings for generating reports; it can optionally work with AWS CloudWatch for logs as well.
```
python benchmarks/auto_benchmark.py --input benchmark_config_template.yaml
```
https://pytorch.org/blog/torchserve-performance-tuning/
pytorch blogs
Running the benchmarks, the results will be written to a CSV file at "/tmp/benchmark/ab_report.csv" and a full report at "/tmp/ts_benchmark/report.md". The report includes items such as Torchserve average latency, model P99 latency, throughput, concurrency, number of requests, handler time, and some other metrics. Here we focus on some of the important ones that we track to tune performance: concurrency, model P99 latency, and throughput. We look at these numbers specifically in combination with batch size, the device used, the number of workers, and whether any model optimization has been done.
The latency SLA for this model has been set to 100 ms. This is a real-time application, and as we discussed earlier, latency is the main concern; throughput should ideally be as high as possible while not violating the latency SLA.
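The CSV report can also be filtered programmatically against the SLA. The column names below are assumptions for illustration; check the header of your own ab_report.csv before using them.

```python
import pandas as pd

SLA_MS = 100  # latency SLA for this model

df = pd.read_csv("/tmp/benchmark/ab_report.csv")

# Keep configurations that meet the P99 latency SLA, highest throughput first.
# "model_p99_latency_ms" and "throughput" are placeholder column names.
ok = df[df["model_p99_latency_ms"] <= SLA_MS].sort_values("throughput", ascending=False)
print(ok.head())
```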
https://pytorch.org/blog/torchserve-performance-tuning/
pytorch blogs
Through searching the space over different batch sizes (1-32), numbers of workers (1-16), and devices (CPU, GPU), we ran a set of experiments; the best ones are summarized in the table below.

| Device | Concurrency | # Requests | # Workers | Batch size | Payload/image | Optimization | Throughput | Latency P99 |
|---|---|---|---|---|---|---|---|---|
| CPU | 10 | 1000 | 1 | 1 | small | N/A | 3.45 | 305.3 ms |
| CPU | 1 | 1000 | 1 | 1 | small | N/A | 3.45 | 291.8 ms |
| GPU | 10 | 1000 | 1 | 1 | small | N/A | 41.05 | 25.48 ms |
https://pytorch.org/blog/torchserve-performance-tuning/
pytorch blogs
| Device | Concurrency | # Requests | # Workers | Batch size | Payload/image | Optimization | Throughput | Latency P99 |
|---|---|---|---|---|---|---|---|---|
| GPU | 1 | 1000 | 1 | 1 | small | N/A | 42.21 | 23.6 ms |
| GPU | 10 | 1000 | 1 | 4 | small | N/A | 54.78 | 73.62 ms |
| GPU | 10 | 1000 | 1 | 4 | small | model.half() | 78.62 | 50.69 ms |
| GPU | 10 | 1000 | 1 | 8 | small | model.half() | 85.29 | 94.4 ms |
https://pytorch.org/blog/torchserve-performance-tuning/
pytorch blogs
The latency of this model on CPU, with all of the settings we tried for batch size, concurrency, and number of workers, did not meet the SLA; in fact, it was ~13x higher. Moving the model serving to GPU immediately improved the latency ~13x, from 305 ms down to 23.6 ms.
One of the simplest optimizations we could do for the model was lowering its precision to fp16. It is a one-liner (model.half()) and reduced the model P99 latency by 32% while increasing the throughput by almost the same amount. Other optimizations, such as Torchscripting the model and using optimize_for_inference, or other tricks including onnx or tensorrt runtime optimizations which leverage aggressive fusions, are out of the scope of this post. We will discuss model optimizations in a separate post.
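The fp16 one-liner amounts to something like the sketch below (shown with a generic torchvision model rather than the Mask-RCNN used in the app); note that the inputs must be cast to half precision as well.

```python
import torch
import torchvision

model = torchvision.models.resnet50().eval().cuda()
model = model.half()  # cast weights to fp16

batch = torch.randn(8, 3, 224, 224, device="cuda").half()  # inputs must match the model dtype
with torch.no_grad():
    out = model(batch)
print(out.dtype)  # torch.float16
```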
https://pytorch.org/blog/torchserve-performance-tuning/
pytorch blogs
We found that on both CPU and GPU, setting number of workers = 1 worked best in this case.
Moving the model to GPU, using number of workers = 1 and batch size = 1, increased throughput ~12x compared to CPU and improved latency ~13x.
Moving the model to GPU, using model.half(), number of workers = 1, and batch size = 8 yielded the best results in terms of throughput with tolerable latency: throughput increased ~25x compared to CPU, with latency still meeting the SLA (94.4 ms).
Note: if you are running the benchmark suite, make sure you set a proper batch_delay and set the concurrency of the requests to a number proportional to your batch size. Concurrency here means the number of concurrent requests being sent to the server.
https://pytorch.org/blog/torchserve-performance-tuning/
pytorch blogs
Conclusion
In this post, we discussed the considerations and knobs that Torchserve exposes to tune performance in production. We discussed the Torchserve benchmark suite as a means to tune performance and get insights into possible choices for model optimization, hardware choice, and cost in general. We used the Animated Drawings app, which uses Detectron2's Mask-RCNN model, as a case study to showcase performance tuning with the benchmark suite.
For more details on performance tuning in Torchserve please refer to our documentation here. Also feel free to open a ticket on the Torchserve repo for any further questions and feedback.
https://pytorch.org/blog/torchserve-performance-tuning/
pytorch blogs
Acknowledgement
We would like to thank Somya Jain (Meta) and Christopher Gustave (Meta) for their great support and guidance throughout many steps of this blog and for providing insights into the Sketch Animator workflow. Also, special thanks to Li Ning from AWS for the great effort to make performance tuning much easier on Torchserve with the automated benchmark suite.
https://pytorch.org/blog/torchserve-performance-tuning/
pytorch blogs
layout: blog_detail
title: "Scaling Vision Model Training Platforms with PyTorch"
author: Vaibhav Aggarwal, Mannat Singh, Anjali Sridhar, Yanghao Li, Shoubhik Debnath, Ronghang Hu, Will Feng, Xinlei Chen, Tingting Markstrum, Diana Liskovich, Anupam Bhatnagar, Chay Ryali, Haoqi Fan, Tete Xiao, Min Xu, Rahul Iyer, Christoph Feichtenhofer, Ross Girshick, Piotr Dollar, Aaron Adcock, Wan-Yen Lo, CK Luk
featured-img: "/assets/images/scaling-vision-figure_1-solutions-to-the-challenges.png"
TL;DR: We demonstrate the use of PyTorch with FairScale’s FullyShardedDataParallel (FSDP) API in writing large vision transformer models. We discuss our techniques for scaling and optimizing these models on a GPU cluster. The goal of this platform scaling effort is to enable research at scale. This blog does not discuss model accuracy, new model architectures, or new training recipes.
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs
1. Introduction
Latest vision research [1, 2] demonstrates model scaling as a promising research direction. In this project, we aim to enable our platforms to train massive vision transformer (ViT) [3] models. We present our work on scaling the largest trainable ViT from 1B to 120B parameters in FAIR vision platforms. We wrote ViT in PyTorch and leveraged its support for large-scale, distributed training on a GPU cluster.
In the rest of this blog, we will first discuss the main challenges, namely scalability, optimization, and numerical stability. Then we will discuss how we tackle them with techniques including data and model parallelism, automatic mixed precision, kernel fusion, and bfloat16. Finally, we present our results and conclude.
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs
2. Main Challenges
2.1 Scalability
The key scalability challenge is to efficiently shard a model’s operations and state across multiple GPUs. A 100B-parameter model requires ~200GB of RAM just for parameters, assuming fp16 representation. So, it is impossible to fit the model on a single GPU (an A100 has at most 80GB of RAM). Therefore, we need some way to efficiently shard a model’s data (input, parameters, activations, and optimizer state) across multiple GPUs.
Another aspect of this problem is to scale without significantly changing the training recipe. For example, certain representation learning recipes use a global batch size of up to 4096, beyond which we start to see accuracy degradation. We cannot scale to more than 4096 GPUs without using some form of tensor or pipeline parallelism.
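Returning to the memory estimate above, the ~200GB figure follows directly from two bytes per fp16 parameter:

```python
params = 100e9       # 100B parameters
bytes_per_param = 2  # fp16
print(params * bytes_per_param / 1e9, "GB")  # 200.0 GB, well beyond a single 80GB A100
```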
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs
2.2 Optimization
The key optimization challenge is to maintain high GPU utilization even as we scale the number of model parameters and flops. When we scale models to teraflops and beyond, we start to hit major bottlenecks in our software stack that super-linearly increase training time and reduce accelerator utilization. We require hundreds or thousands of GPUs to run just a single experiment. Improvements in accelerator utilization can lead to significant reductions in cost and improve fleet utilization. This enables us to fund more projects and run more experiments in parallel.
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs
2.3 Numerical Stability The key stability challenge is to avoid numerical instability and divergence at large scale. We empirically observed in our experiments that the training instability gets severe and hard to deal with when we scale up model sizes, data, batch sizes, learning rate, etc. Vision Transformers particularly face training instability even at a lower parameter threshold. E.g., we find it challenging to train even ViT-H (with just 630M parameters) in mixed-precision mode without using strong data augmentation. We need to study the model properties and training recipes to make sure that the models train stably and converge. 3. Our Solutions Figure 1 depicts our solutions to each of the challenges. 3.1 Addressing scaling challenges with data parallelism and model parallelism
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs
We apply various forms of data and model parallelism to enable fitting very large models in GPU memory. We use FairScale’s FullyShardedDataParallel (FSDP) API [4], based on PyTorch, to shard parameters, gradients, and optimizer state across multiple GPUs, thereby reducing the memory footprint per GPU. This process consists of the following three steps: Step 1: We wrapped the entire model in a single FSDP instance. This shards the model parameters at the end of a forward pass and gathers parameters at the beginning of a forward pass. This enabled us to scale ~3x from 1.5B to 4.5B parameters. Step 2: We experimented with wrapping individual model layers in separate FSDP instances. This nested wrapping further reduced the memory footprint by sharding and gathering parameters of individual model layers instead of an entire model. The peak memory is then determined by an individually wrapped transformer block in GPU memory in this mode instead of the entire model.
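A hedged sketch of steps 1 and 2 using FairScale’s wrap utilities; the toy block below stands in for a ViT transformer block and is not the actual FAIR model code, and a distributed process group (e.g. via torchrun) is assumed to be initialized before FSDP instances are constructed.

```python
import torch.nn as nn
from fairscale.nn import FullyShardedDataParallel as FSDP
from fairscale.nn.wrap import enable_wrap, wrap


class ToyBlock(nn.Module):
    """Stand-in for a ViT transformer block."""

    def __init__(self, dim=1024):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        return x + self.mlp(x)


def build_model(num_blocks=12, dim=1024):
    # Requires torch.distributed to be initialized (e.g. torchrun / init_process_group).
    with enable_wrap(wrapper_cls=FSDP, flatten_parameters=True):
        # Step 2: each block gets its own (nested) FSDP instance,
        # so only one block's parameters need to be gathered at a time.
        blocks = [wrap(ToyBlock(dim)) for _ in range(num_blocks)]
        # Step 1: the whole model is wrapped in an outer FSDP instance.
        return wrap(nn.Sequential(*blocks))
```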
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs
Step 3: We used activation checkpointing to reduce the memory consumed by activations. It saves the input tensors and discards the intermediate activation tensors during the forward pass; these are recomputed during the backward pass.
In addition, we experimented with model-parallelism techniques such as pipeline parallelism [5], which allow us to scale to more GPUs without increasing the batch size.
3.2 Addressing optimization challenges with advanced AMP and kernel fusion
Advanced AMP
Automatic Mixed Precision (AMP) [6] training refers to training models using fewer bits of precision than the default FP32 while still maintaining accuracy. We experimented with three levels of AMP, as described below:
AMP O1: This refers to training in mixed precision where weights are in FP32 and some operations are in FP16. With AMP O1, the ops that might impact accuracy remain in FP32 and are not autocast to FP16.
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs
AMP O2: This refers to training in mixed precision but with more weights and ops in FP16 than in O1. Weights do not implicitly remain in FP32 and are cast to FP16. A copy of the master weights is maintained in the FP32 precision that is used by the optimizer. If we want the normalization layer weights in FP32 then we need to explicitly use layer wrapping to ensure that. Full FP16: This refers to training in full FP16 where weights and operations are in FP16. FP16 is challenging to enable for training due to convergence issues. We found that AMP O2 with LayerNorm wrapping in FP32 leads to the best performance without sacrificing accuracy. Kernel Fusion To reduce GPU kernel launch overhead and increase GPU work granularity, we experimented with kernel fusions, including fused dropout and fused layer-norm, using the xformers library [7]. 3.3 Addressing stability challenges by studying ops numerical stability and training recipes
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs
BFloat16 in general but with LayerNorm in FP32
The bfloat16 (BF16) [8] floating-point format provides the same dynamic range as FP32 with a memory footprint identical to FP16. We found that we could train models in the BF16 format using the same set of hyperparameters as in FP32, without special parameter tuning. Nevertheless, we found that we need to keep LayerNorm in FP32 mode in order for the training to converge.
3.4 Final training recipe
A summary of the final training recipe:
- Wrap the outer model in an FSDP instance. Enable parameter sharding after the forward pass.
- Wrap individual ViT blocks with activation checkpointing, nested FSDP wrapping, and parameter flattening.
- Enable mixed precision mode (AMP O2) with bfloat16 representation. Maintain the optimizer state in FP32 precision to enhance numerical stability.
- Wrap normalization layers like LayerNorm in FP32 for better numerical stability.
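As a rough illustration of the bfloat16-plus-FP32-LayerNorm point using stock PyTorch autocast (an approximation, not the exact FairScale/AMP-O2 configuration used in this work; the final-recipe list continues after this sketch):

```python
import torch
import torch.nn as nn

# Assumes an Ampere-class GPU for bfloat16 support.
model = nn.Sequential(
    nn.Linear(1024, 4096), nn.GELU(), nn.Linear(4096, 1024), nn.LayerNorm(1024)
).cuda()

# Cast the model to bf16, then put normalization layers back in FP32.
model = model.to(torch.bfloat16)
for m in model.modules():
    if isinstance(m, nn.LayerNorm):
        m.float()

x = torch.randn(8, 1024, device="cuda", dtype=torch.bfloat16)
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    y = model(x)  # autocast handles dtype transitions at the FP32 LayerNorm
```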
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs
- Maximize Nvidia TensorCore utilization by keeping matrix dimensions multiples of 8. For more details, check the Nvidia Tensor Core Performance Guide.
4. Results
In this section, we show the scaling results of ViT on three types of tasks: (1) image classification, (2) object detection, and (3) video understanding. Our key result is that we are able to train massive ViT backbones across these vision tasks after applying the discussed scaling and optimization techniques. This enables vision research at a much larger scale. We trained the models to convergence to verify that we maintain the current baselines even with all the optimizations.
A common trend in Figures 2, 3, and 4 is that we are able to train up to 25B-param models with an epoch time of less than 4 hours on 128 A100 GPUs. The 60B and 120B models are relatively slower to train.
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs
Figure 2 shows the image-classification scaling result. It plots the epoch time for training ViTs on ImageNet using 128 A100-80GB GPUs with different model sizes. Figure 2: Image-classification scaling result. Figure 3 shows the object-detection scaling result. It plots the epoch time for training ViTDet [9] with different ViT backbones on COCO using 128 A100-80GB GPUs. Figure 3: Object-detection scaling result.
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs
Figure 4 shows the video-understanding scaling result. It plots the epoch time for training MViTv2 [10] models on Kinetics 400 [11] using 128 V100 (32 GB) GPUs in FP32. Figure 4: Video-understanding scaling result. Figure 5 shows the optimization result with the ViT-H model in Figure 2 on 8 A100-40GB GPUs. Three versions are used: (1) the baseline uses PyTorch’s DDP [12] with AMP O1, (2) FSDP + AMP-O2 + other optimizations, and (3) FSDP + FP16 + other optimizations. These optimizations altogether speed up the training by up to 2.2x. Figure 5: Training speedups from various optimizations.
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs
5. Concluding Remarks
We have demonstrated the use of PyTorch with FairScale’s FullyShardedDataParallel (FSDP) API in writing large vision transformer models. We discussed our techniques for scaling and optimizing these models on a GPU cluster. We hope that this article can motivate others to develop large-scale ML models with PyTorch and its ecosystem.
References
[1] Masked Autoencoders Are Scalable Vision Learners
[2] Revisiting Weakly Supervised Pre-Training of Visual Perception Models
[3] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
[4] fairscale.nn.FullyShardedDataParallel
[5] Pipeline parallelism in PyTorch
[6] Automatic Mixed Precision (AMP) in PyTorch
https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/
pytorch blogs