The first screenshot shows that a process has a very high allreduce cost in both the first and the third steps, because this process reaches the synchronization phase earlier than the straggler(s) and spends more time waiting. On the other hand, the allreduce cost is relatively small in the second step, which suggests that either 1) there is no straggler at this step, or 2) this process is itself the straggler among all the processes, so it does not need to wait for any other process.

Both the 1st and the 3rd Steps Are Slowed Down by Stragglers

The second screenshot shows a normal case without stragglers. In this case, all the gradient synchronizations are relatively short.
Normal Case Without Stragglers

Hierarchical SGD in PyTorch

Hierarchical SGD has recently been proposed to reduce communication costs, mainly by cutting the total amount of data transferred in large-scale distributed training, and multiple convergence analyses have been provided (example). As a main novelty of this post, at Cruise we were able to leverage hierarchical SGD to mitigate stragglers, which may also occur when training relatively small models. Our implementation was upstreamed by Cruise to PyTorch in early 2022.

How Does Hierarchical SGD Work?

As the name implies, hierarchical SGD organizes all the processes into groups at different levels of a hierarchy and runs synchronization by following the rules below:
* All the groups at the same level have the same number of processes, and the processes in these groups synchronize at the same frequency concurrently, where the synchronization period is pre-defined by the user.
* The higher the level of a group, the larger the synchronization period used, as the synchronization becomes more expensive.
* When multiple overlapping groups are supposed to synchronize according to their periods, only the highest-level group runs synchronization, to reduce redundant synchronization and avoid data races across groups.

The following figure illustrates an example of 4-level hierarchical SGD among 16 processes on 8 machines, each of which has 2 GPUs:

* Level 1: Each process runs mini-batch SGD locally;
* Level 2: Each 4-process group across 2 machines runs synchronization every 2 steps;
* Level 3: Each 8-process group across 4 machines runs synchronization every 4 steps;
* Level 4: The global process group of all 16 processes over 8 machines runs synchronization every 8 steps.

In particular, when the step number is divisible by 8, only the global (level-4) synchronization is executed, and when the step number is divisible by 4 but not by 8, only the level-3 synchronization is executed.

Intuitively, hierarchical SGD can be viewed as an extension of local SGD, which only has a two-level hierarchy: every process runs mini-batch SGD locally and then synchronizes globally at a certain frequency. This also helps explain why, just like local SGD, hierarchical SGD synchronizes model parameters instead of gradients; otherwise, gradient descent would be mathematically incorrect when the synchronization period is greater than 1.
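As an illustration (not part of the original post), the 4-level, 16-process example above can be written with the same period-to-group-size convention that the PyTorch implementation shown later in this post uses for its period_group_size_dict argument:

```
from collections import OrderedDict

# Synchronization period -> process group size for the 16-process example above.
# Level 1 (purely local mini-batch SGD) needs no entry.
period_group_size_dict = OrderedDict([
    (2, 4),    # Level 2: 4-process groups (2 machines) synchronize every 2 steps
    (4, 8),    # Level 3: 8-process groups (4 machines) synchronize every 4 steps
    (8, 16),   # Level 4: all 16 processes synchronize every 8 steps
])
```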
Why Can Hierarchical SGD Mitigate Stragglers?

The key insight here is that, when there is a random straggler, it only directly slows down a relatively small group of processes rather than all of them. The next time, another random straggler is very likely to slow down a different small group, so a hierarchy can help smooth out the straggler effect.

The example below assumes that there is one random straggler among a total of 8 processes at every step. After 4 steps, vanilla DDP that runs synchronous SGD will be slowed down by a straggler 4 times, because it runs global synchronization at every step. In contrast, hierarchical SGD runs synchronization within the groups of 4 processes after the first two steps, and then a global synchronization after another two steps. We can see that the delays caused by the first two stragglers largely overlap, as do those caused by the last two, and hence the performance loss can be mitigated.
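The intuition can be checked with a toy wall-clock simulation. This is not from the original post; the step time, stall duration, and barrier model are simplifying assumptions (each worker advances its own clock, and a synchronization aligns every member of a group to its slowest member):

```
import random

STEP, STALL, WORKERS, STEPS = 0.055, 1.0, 8, 4  # ~55 ms step, 1 s stall, 8 workers, 4 steps

def run(schedule):
    """schedule: list of (period, group_size); groups are contiguous ranks."""
    clock = [0.0] * WORKERS
    for step in range(1, STEPS + 1):
        straggler = random.randrange(WORKERS)
        for rank in range(WORKERS):
            clock[rank] += STEP + (STALL if rank == straggler else 0.0)
        # Only the highest-level group whose period divides the step synchronizes.
        for period, size in sorted(schedule, reverse=True):
            if step % period == 0:
                for start in range(0, WORKERS, size):
                    slowest = max(clock[start:start + size])
                    for rank in range(start, start + size):
                        clock[rank] = slowest
                break
    return max(clock)

random.seed(0)
print("synchronous SGD :", round(run([(1, 8)]), 3))          # global barrier every step
random.seed(0)  # same straggler sequence for a fair comparison
print("hierarchical SGD:", round(run([(2, 4), (4, 8)]), 3))  # 4-proc groups every 2 steps, global every 4
```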
Essentially, the mitigation effect of this hierarchical SGD example lies between that of local SGD with a synchronization period of 2 steps and that of local SGD with a period of 4 steps. The main advantage of hierarchical SGD over local SGD is better convergence efficiency at the same global synchronization frequency, because hierarchical SGD allows more low-level synchronization. Moreover, hierarchical SGD can afford a lower global synchronization frequency than local SGD while maintaining model parity, leading to higher training performance, especially in large-scale distributed training.

Ease of Use
Straggler mitigation is not a novel topic in distributed training. Multiple approaches have been proposed, such as gossip SGD, data encoding, and gradient coding, as well as some designed specifically for the parameter-server architecture, including backup workers and stale synchronous parallel. However, to the best of our knowledge, before this effort we had not found a good open-source PyTorch implementation of straggler mitigation that could work like a plugin to our training system at Cruise. In contrast, our implementation requires only minimal changes: no need to modify the existing code or tune any existing hyperparameters. This is a very appealing advantage for industry users.
As the code example below shows, only a few lines need to be added to the setup of the DDP model, and the training loop code can remain untouched. As explained previously, hierarchical SGD is an extended form of local SGD, so enabling it is quite similar to enabling local SGD (see the PyTorch docs of PostLocalSGDOptimizer):
1. Register a post-local SGD communication hook to run a warmup stage of fully synchronous SGD and defer hierarchical SGD.
2. Create a post-local SGD optimizer that wraps an existing local optimizer and a hierarchical SGD configuration.

```
import torch.distributed.algorithms.model_averaging.hierarchical_model_averager as hierarchicalSGD
from torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import (
    PostLocalSGDState,
    post_localSGD_hook,
)
from torch.distributed.optim import PostLocalSGDOptimizer

ddp_model = nn.parallel.DistributedDataParallel(
    module=model,
    device_ids=[rank],
)

# Register a post-local SGD communication hook for the warmup.
subgroup, _ = torch.distributed.new_subgroups()
state = PostLocalSGDState(subgroup=subgroup, start_localSGD_iter=1_000)
ddp_model.register_comm_hook(state, post_localSGD_hook)

# Wraps the existing (local) optimizer to run hierarchical model averaging.
optim = PostLocalSGDOptimizer(
    optim=optim,
    averager=hierarchicalSGD.HierarchicalModelAverager(
        # The config runs a 4-level hierarchical SGD among 128 processes:
        # 1) Each process runs mini-batch SGD locally;
        # 2) Each 8-process group synchronizes every 2 steps;
        # 3) Each 32-process group synchronizes every 4 steps;
        # 4) All 128 processes synchronize every 8 steps.
        period_group_size_dict=OrderedDict([(2, 8), (4, 32), (8, 128)]),
        # Do not run hierarchical SGD until 1K steps for model parity.
        warmup_steps=1_000)
)
```
Algorithm Hyperparameters

Hierarchical SGD has two major hyperparameters: period_group_size_dict and warmup_steps.

* period_group_size_dict is an ordered dictionary mapping from synchronization period to process group size, used for initializing process groups of different sizes in a hierarchy to synchronize parameters concurrently. A larger group is expected to use a larger synchronization period.
* warmup_steps specifies the number of steps of the warmup stage, which runs synchronous SGD before hierarchical SGD. Similar to the post-local SGD algorithm, a warmup stage is usually recommended to achieve higher accuracy. The value should be the same as the start_localSGD_iter arg used in PostLocalSGDState when post_localSGD_hook is registered. Typically the warmup stage should at least cover the beginning of training, when the loss decreases drastically.
A subtle difference between the PyTorch implementation and the initial design proposed by the relevant papers is that, after the warmup stage, by default the processes within each host still run intra-host gradient synchronization at every step. This is because:
1. The intra-host communication is relatively cheap, and it can usually significantly accelerate convergence;
2. The intra-host group (of size 4 or 8 for most industry users) is usually a good choice for the smallest group of processes, which synchronizes most frequently in hierarchical SGD. If the synchronization period is 1, then gradient synchronization is faster than model parameter synchronization (a.k.a. model averaging), because DDP automatically overlaps gradient synchronization and the backward pass.

Such intra-host gradient synchronization can be disabled by unsetting the post_local_gradient_allreduce arg in PostLocalSGDState, as shown in the sketch below.
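Here is a minimal sketch of that option, reusing the PostLocalSGDState setup from the earlier code example; the flag value is the only addition on top of that example:

```
# Keep the warmup behavior, but skip intra-host gradient allreduce after the
# warmup stage so that only hierarchical model averaging runs afterwards.
state = PostLocalSGDState(
    subgroup=subgroup,
    start_localSGD_iter=1_000,
    post_local_gradient_allreduce=False,
)
ddp_model.register_comm_hook(state, post_localSGD_hook)
```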
Demonstration

Now we demonstrate that hierarchical SGD can accelerate distributed training by mitigating stragglers.

Experimental Setup

We compared the performance of hierarchical SGD against local SGD and synchronous SGD on ResNet18 (model size: 45MB). Since the model is so small, the training is not bottlenecked by data transfer cost during synchronization. To avoid the noise incurred by data loading from remote storage, the input data was randomly simulated in memory. We varied the number of GPUs used for training from 64 to 256. The batch size per worker is 32, and the number of training iterations is 1,000. Since we don’t evaluate convergence efficiency in this set of experiments, warmup is not enabled.
We also emulated stragglers at a rate of 1% on 128 and 256 GPUs, and 2% on 64 GPUs, to make sure there is at least one straggler at every step on average. These stragglers randomly appear on different CUDA devices. Each straggler stalls for 1 second in addition to the normal per-step training time (~55ms in our setup). This can be perceived as a practical scenario where 1% or 2% of the input data are outliers in terms of data pre-processing cost (I/O and/or data transformation on the fly) during training, and such cost is 20X+ larger than the average.

The code snippet below shows how a straggler can be emulated in the training loop. We applied it to a ResNet model, and it can easily be applied to other models as well.

```
import random
import time

loss = loss_fn(y_pred, y)
# Emulate a straggler that lags for 1 second at a rate of 1%.
if random.randint(1, 100) == 1:
    time.sleep(1)
loss.backward()
optimizer.step()
```
The experiments are conducted on a us-central1 GCP cluster. Each machine has 4 NVIDIA Tesla T4 GPUs with 16 GB memory per GPU, connected through a 32 Gbit/s Ethernet network. Each instance also features 96 vCPUs and 360 GB RAM.

| Setting | Value |
|---------|-------|
| Architecture | ResNet18 (45MB) |
| Workers | 64, 128, 256 |
| Backend | NCCL |
| GPU | Tesla T4, 16 GB memory |
| Batch size | 32 x # of workers |
| Straggler Duration | 1 sec |
| Straggler Rate | 1% on 128 and 256 GPUs, 2% on 64 GPUs |

We used multiple configurations for both local SGD and hierarchical SGD. Local SGD runs global synchronization every 2, 4, and 8 steps, respectively.
We ran hierarchical SGD with the following configurations:

1. On 64 GPUs:
   * Each 8-process group, each 32-process group, and the global 64-process group synchronize every 2, 4, and 8 steps, respectively. Denoted as "HSGD 2-8,4-32,8-64".
   * Each 32-process group and the global 64-process group synchronize every 4 and 8 steps, respectively. Denoted as "HSGD 4-32,8-64".
2. On 128 GPUs:
   * Each 8-process group, each 32-process group, and the global 128-process group synchronize every 2, 4, and 8 steps, respectively. Denoted as "HSGD 2-8,4-32,8-128".
   * Each 32-process group and the global 128-process group synchronize every 4 and 8 steps, respectively. Denoted as "HSGD 4-32,8-128".
3. On 256 GPUs:
   * Each 4-process group, each 16-process group, each 64-process group, and the global 256-process group synchronize every 1, 2, 4, and 8 steps, respectively. Denoted as "HSGD 1-4,2-16,4-64,8-256".
   * Each 8-process group, each 64-process group, and the global 256-process group synchronize every 2, 4, and 8 steps, respectively. Denoted as "HSGD 2-8,4-64,8-256".
   * Each 16-process group and the global 256-process group synchronize every 4 and 8 steps, respectively. Denoted as "HSGD 4-16,8-256".

Each "HSGD" label maps directly onto a period_group_size_dict configuration, as sketched below.
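As an illustration (not from the original post), two of the configurations above translate into the following period_group_size_dict values, using the same period-to-group-size convention as the earlier code example:

```
from collections import OrderedDict

# "HSGD 2-8,4-32,8-64" on 64 GPUs
hsgd_64 = OrderedDict([(2, 8), (4, 32), (8, 64)])

# "HSGD 2-8,4-64,8-256" on 256 GPUs
hsgd_256 = OrderedDict([(2, 8), (4, 64), (8, 256)])
```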
Experimental Results

The figures below show the speedups of different communication schemes against the baseline of synchronous SGD, with the emulated stragglers. We can make the following observations:

1. As expected, both hierarchical SGD and local SGD achieve a higher speedup with a lower synchronization frequency.
2. The speedups of the hierarchical SGD schemes are 2.08X-2.45X on 64 GPUs, 2.57X-2.68X on 128 GPUs, and 2.63X-3.25X on 256 GPUs, respectively. This shows that hierarchical SGD can significantly mitigate stragglers, and that such mitigation can be more effective at a larger scale.
3. The performance of local SGD with a synchronization period of 2 steps and of 8 steps can be perceived as the lower bound and the upper bound of the experimented hierarchical SGD schemes, respectively. This is because the hierarchical SGD schemes synchronize less frequently than every 2 steps globally, but their low-level synchronizations within small groups are extra overhead in comparison with global synchronization every 8 steps.

Overall, hierarchical SGD can provide a finer-grained trade-off between communication cost and model quality than local SGD. Therefore, when local SGD at a relatively large synchronization period like 8 or 4 cannot give satisfactory convergence efficiency, hierarchical SGD has a much better chance to achieve both a good speedup and model parity.

Since only simulated data is used in the experiments, we did not demonstrate model parity here, which in practice can be achieved in two ways:
1. Tuning the hyperparameters, including both the hierarchy and the warmup steps;
2. In some cases, hierarchical SGD could lead to a slightly lower quality than the original model for the same number of training steps (i.e., a lower convergence rate), but with a 2X+ speedup per training step, it is still possible to achieve model parity with more steps yet less total training time.

Limitations

Before applying hierarchical SGD for straggler mitigation, the user should be aware of a few limitations of this approach:
* This approach can only mitigate non-persistent stragglers, which affect different workers at different times. For persistent stragglers, which can be caused by hardware degradation or a network issue on a specific host, the same low-level subgroup is slowed down every time, leading to nearly no straggler mitigation.
* This approach can only mitigate low-frequency stragglers. E.g., if 30% of the workers can randomly become stragglers at every step, then most low-level synchronizations will still be slowed down by stragglers. As a result, hierarchical SGD may not show an obvious performance advantage over synchronous SGD.
* Since hierarchical SGD applies model averaging, which does not overlap with the backward pass like the gradient averaging used by vanilla DDP, its performance gain from straggler mitigation must outweigh the performance loss from not overlapping communication with the backward pass. Therefore, if stragglers only slow down training by less than 10%, hierarchical SGD may not be able to bring much speedup. This limitation can be addressed by overlapping the optimizer step and the backward pass in the future.
* Since hierarchical SGD is less well-studied than local SGD, there is no guarantee that hierarchical SGD with a finer-grained synchronization granularity can converge faster than certain advanced forms of local SGD, such as SlowMo, which can improve convergence efficiency with slow momentum. However, to the best of our knowledge, these advanced algorithms cannot yet be natively supported as a PyTorch DDP plugin like hierarchical SGD.

Acknowledgements

We would like to thank Cruise teammates Bo Tian, Sergei Vorobev, Eugene Selivonchyk, Tsugn-Hsien Lee, Dan Ring, Ian Ackerman, Lei Chen, Maegan Chew, Viet Anh To, Xiaohui Long, Zeyu Chen, Alexander Sidorov, Igor Tsvetkov, Xin Hu, Manav Kataria, Marina Rubtsova, and Mohamed Fawzy, as well as Meta teammates Shen Li, Yanli Zhao, Suraj Subramanian, Hamid Shojanzeri, Anjali Sridhar, and Bernard Nguyen for their support.
layout: blog_detail
title: 'Mapillary Research: Seamless Scene Segmentation and In-Place Activated BatchNorm'
author: Lorenzo Porzi, Mapillary
redirect_from: /2019/07/23/mapillary-research.html

With roads in developed countries like the US changing up to 15% annually, Mapillary addresses a growing demand for keeping maps updated by combining images from any camera into a 3D visualization of the world. Mapillary's independent and collaborative approach enables anyone to collect, share, and use street-level images for improving maps, developing cities, and advancing the automotive industry. Today, people and organizations all over the world have contributed more than 600 million images toward Mapillary's mission of helping people understand the world's places through images and making this data available, with clients and partners including the World Bank, HERE, and Toyota Research Institute.
Mapillary’s computer vision technology brings intelligence to maps in an unprecedented way, increasing our overall understanding of the world. Mapillary runs state-of-the-art semantic image analysis and image-based 3D modeling at scale on all of its images. In this post we discuss two recent works from Mapillary Research and their implementations in PyTorch, Seamless Scene Segmentation [1] and In-Place Activated BatchNorm [2], generating panoptic segmentation results and saving up to 50% of GPU memory during training, respectively.

Seamless Scene Segmentation

GitHub project page: https://github.com/mapillary/seamseg/
The objective of Seamless Scene Segmentation is to predict a “panoptic” segmentation [3] from an image, that is, a complete labeling where each pixel is assigned a class id and, where possible, an instance id. Like many modern CNNs dealing with instance detection and segmentation, we adopt the Mask R-CNN framework [4], using ResNet50 + FPN [5] as a backbone. This architecture works in two stages: first, the “Proposal Head” selects a set of candidate bounding boxes on the image (i.e. the proposals) that could contain an object; then, the “Mask Head” focuses on each proposal, predicting its class and segmentation mask. The output of this process is a “sparse” instance segmentation, covering only the parts of the image that contain countable objects (e.g. cars and pedestrians).
To complete our panoptic approach, coined Seamless Scene Segmentation, we add a third stage to Mask R-CNN. Stemming from the same backbone, the “Semantic Head” predicts a dense semantic segmentation over the whole image, also accounting for the uncountable or amorphous classes (e.g. road and sky). The outputs of the Mask and Semantic heads are finally fused using a simple non-maximum suppression algorithm to generate the final panoptic prediction. All details about the actual network architecture, the losses used, and the underlying math can be found on the project website for our CVPR 2019 paper [1].
While several versions of Mask R-CNN are publicly available, including an official implementation written in Caffe2, at Mapillary we decided to build Seamless Scene Segmentation from scratch using PyTorch, in order to have full control of and insight into the whole pipeline. While doing so we encountered a couple of major stumbling blocks, and had to come up with some creative workarounds, which we describe next.
Dealing with variable-sized tensors

Something that sets panoptic segmentation networks apart from traditional CNNs is the prevalence of variable-sized data. In fact, many of the quantities we are dealing with cannot be easily represented with fixed-sized tensors: each image contains a different number of objects, the Proposal head can produce a different number of proposals for each image, and the images themselves can have different sizes. While this is not a problem per se -- one could just process images one at a time -- we would still like to exploit batch-level parallelism as much as possible. Furthermore, when performing distributed training with multiple GPUs, DistributedDataParallel expects its inputs to be batched, uniformly-sized tensors.
Our solution to these issues is to wrap each batch of variable-sized tensors in a PackedSequence. PackedSequence is little more than a glorified list class for tensors, tagging its contents as “related”, ensuring that they all share the same type, and providing useful methods like moving all the tensors to a particular device, etc. When performing light-weight operations that wouldn’t be much faster with batch-level parallelism, we simply iterate over the contents of the PackedSequence in a for loop. When performance is crucial, e.g. in the body of the network, we simply concatenate the contents of the PackedSequence, adding zero padding as required (like in RNNs with variable-length inputs), and keeping track of the original dimensions of each tensor.
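As a rough illustration of that padding strategy (this is a generic sketch, not the actual seamseg PackedSequence API; pad_and_stack is a hypothetical helper), variable-sized image tensors can be copied into one zero-padded batch tensor while the original sizes are kept on the side:

```python
import torch

def pad_and_stack(tensors):
    """Stack CxHxW tensors of different H/W into one zero-padded BxCxHxW batch."""
    sizes = [t.shape[-2:] for t in tensors]
    max_h = max(h for h, _ in sizes)
    max_w = max(w for _, w in sizes)
    batch = tensors[0].new_zeros(len(tensors), tensors[0].shape[0], max_h, max_w)
    for i, t in enumerate(tensors):
        h, w = t.shape[-2:]
        batch[i, :, :h, :w] = t
    return batch, sizes  # sizes let us undo the padding later
```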
PackedSequences also help us deal with the second problem highlighted above. We slightly modify DistributedDataParallel to recognize PackedSequence inputs, splitting them into equally sized chunks and distributing their contents across the GPUs.

Asymmetric computational graphs with Distributed Data Parallel

Another, perhaps more subtle, peculiarity of our network is that it can generate asymmetric computational graphs across GPUs. In fact, some of the modules that compose the network are “optional”, in the sense that they are not always computed for all images. As an example, when the Proposal head doesn’t output any proposal, the Mask head is not traversed at all. If we are training on multiple GPUs with DistributedDataParallel, this results in one of the replicas not computing gradients for the Mask head parameters.
Prior to PyTorch 1.1, this resulted in a crash, so we had to develop a workaround. Our simple but effective solution was to compute a “fake forward pass” when no actual forward is required, i.e. something like this:

```python
def fake_forward():
    fake_input = get_correctly_shaped_fake_input()
    fake_output = mask_head(fake_input)
    fake_loss = fake_output.sum() * 0
    return fake_loss
```

Here, we generate a batch of bogus data, pass it through the Mask head, and return a loss that always back-propagates zeros to all parameters. Starting from PyTorch 1.1 this workaround is no longer required: by setting find_unused_parameters=True in the constructor, DistributedDataParallel is told to identify parameters whose gradients have not been computed by all replicas and correctly handle them. This leads to some substantial simplifications in our code base!

In-place Activated BatchNorm

GitHub project page: https://github.com/mapillary/inplace_abn/
Most researchers would probably agree that there are always constraints in terms of available GPU resources, regardless of whether their research lab has access to only a few or to many thousands of GPUs. At a time when Mapillary still worked with rather few, mostly 12GB Titan X-style prosumer GPUs, we were searching for a solution that virtually enhances the usable memory during training, so we would be able to obtain and push state-of-the-art results on dense labeling tasks like semantic segmentation. In-place activated BatchNorm enables us to use up to 50% more memory (at little computational overhead) and is therefore deeply integrated in all our current projects (including Seamless Scene Segmentation described above).
When processing a BN-Activation-Convolution sequence in the forward pass, most deep learning frameworks (including PyTorch) need to store two big buffers, i.e. the input x of BN and the input z of Conv. This is necessary because the standard implementations of the backward passes of BN and Conv depend on their inputs to calculate the gradients. Using InPlace-ABN to replace the BN-Activation sequence, we can safely discard x, thus saving up to 50% GPU memory at training time. To achieve this, we rewrite the backward pass of BN in terms of its output y, which is in turn reconstructed from z by inverting the activation function.
The only limitation of InPlace-ABN is that it requires using an invertible activation function, such as leaky ReLU or ELU. Apart from this, it can be used as a direct, drop-in replacement for BN+activation modules in any network, as sketched below. Our native CUDA implementation offers minimal computational overhead compared to PyTorch’s standard BN, and is available for anyone to use from here: https://github.com/mapillary/inplace_abn/.
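A quick illustration of the drop-in usage (a sketch assuming the inplace_abn package is installed; the layer sizes are arbitrary): a Conv-BN-LeakyReLU block simply swaps its BatchNorm2d + activation pair for a single InPlaceABN module.

```python
import torch.nn as nn
from inplace_abn import InPlaceABN

block = nn.Sequential(
    nn.Conv2d(64, 128, kernel_size=3, padding=1, bias=False),
    # Replaces nn.BatchNorm2d(128) followed by nn.LeakyReLU(0.01)
    InPlaceABN(128, activation="leaky_relu", activation_param=0.01),
    nn.Conv2d(128, 128, kernel_size=3, padding=1, bias=False),
)
```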
Synchronized BN with asymmetric graphs and unbalanced batches

When training networks with synchronized SGD over multiple GPUs and/or multiple nodes, it’s common practice to compute BatchNorm statistics separately on each device. However, in our experience working with semantic and panoptic segmentation networks, we found that accumulating mean and variance across all workers can bring a substantial boost in accuracy. This is particularly true when dealing with small batches, like in Seamless Scene Segmentation where we train with a single, super-high resolution image per GPU. InPlace-ABN supports synchronized operation over multiple GPUs and multiple nodes, and, since version 1.1, this can also be achieved in the standard PyTorch library using SyncBatchNorm. Compared to SyncBatchNorm, however, we support some additional functionality which is particularly important for Seamless Scene Segmentation: unbalanced batches and asymmetric graphs.
As mentioned before, Mask R-CNN-like networks naturally give rise to variable-sized tensors. Thus, in InPlace-ABN we calculate synchronized statistics using a variant of the parallel algorithm described here, which properly takes into account the fact that each GPU can hold a different number of samples. PyTorch’s SyncBatchNorm is currently being revised to support this, and the improved functionality will be available in a future release.
Asymmetric graphs (in the sense mentioned above) are another complicating factor one has to deal with when creating a synchronized BatchNorm implementation. Luckily, PyTorch’s distributed group functionality allows us to restrict distributed communication to a subset of workers, easily excluding those that are currently inactive. The only missing piece is that, in order to create a distributed group, each process needs to know the ids of all processes that will participate in the group, and even processes that are not part of the group need to call the new_group() function. In InPlace-ABN we handle it with a function like this:

```python
import torch
import torch.distributed as distributed

def active_group(active):
    """Initialize a distributed group where each process can independently
    decide whether to participate or not"""
    world_size = distributed.get_world_size()
    rank = distributed.get_rank()

    # Gather active status from all workers
    active = torch.tensor(rank if active else -1, dtype=torch.long, device=torch.cuda.current_device())
    active_workers = torch.empty(world_size, dtype=torch.long, device=torch.cuda.current_device())
    distributed.all_gather(list(active_workers.unbind(0)), active)

    # Create group
    active_workers = [int(i) for i in active_workers.tolist() if i != -1]
    group = distributed.new_group(active_workers)
    return group
```

First each process, including inactive ones, communicates its status to all others through an `all_gather` call, then it creates the distributed group with the shared information. In the actual implementation we also include a caching mechanism for groups, since `new_group()` is usually too expensive to call at each batch.

References

[1] Seamless Scene Segmentation; Lorenzo Porzi, Samuel Rota Bulò, Aleksander Colovic, Peter Kontschieder; Computer Vision and Pattern Recognition (CVPR), 2019
[2] In-place Activated BatchNorm for Memory-Optimized Training of DNNs; Samuel Rota Bulò, Lorenzo Porzi, Peter Kontschieder; Computer Vision and Pattern Recognition (CVPR), 2018 [3] Panoptic Segmentation; Alexander Kirillov, Kaiming He, Ross Girshick, Carsten Rother, Piotr Dollar; Computer Vision and Pattern Recognition (CVPR), 2019 [4] Mask R-CNN; Kaiming He, Georgia Gkioxari, Piotr Dollar, Ross Girshick; International Conference on Computer Vision (ICCV), 2017 [5] Feature Pyramid Networks for Object Detection; Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie; Computer Vision and Pattern Recognition (CVPR), 2017
layout: blog_detail
title: "New Library Updates in PyTorch 2.0"

Summary

We are bringing a number of improvements to the current PyTorch libraries, alongside the PyTorch 2.0 release. These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch. Along with 2.0, we are also releasing a series of beta updates to the PyTorch domain libraries, including those that are in-tree, and separate libraries including TorchAudio, TorchVision, and TorchText. An update for TorchX is also being released as it moves to community supported mode. Please find the list of the latest stable versions and updates below.

Latest Stable Library Versions (Full List)

| Library | Version |
|---------|---------|
| TorchArrow | 0.1.0 |
| TorchRec | 0.4.0 |
| TorchVision | 0.15 |
| TorchAudio | 2.0 |
| TorchServe | 0.7.1 |
| TorchX | 0.4.0 |
| TorchData | 0.6.0 |
| TorchText | 0.15.0 |
| PyTorch on XLA Devices | 1.14 |

*To see prior versions or (unstable) nightlies, click on versions in the top left menu above ‘Search Docs’.

TorchAudio

[Beta] Data augmentation operators

The release adds several data augmentation operators under torchaudio.functional and torchaudio.transforms:
* torchaudio.functional.add_noise
* torchaudio.functional.convolve
* torchaudio.functional.deemphasis
* torchaudio.functional.fftconvolve
* torchaudio.functional.preemphasis
* torchaudio.functional.speed
* torchaudio.transforms.AddNoise
* torchaudio.transforms.Convolve
* torchaudio.transforms.Deemphasis
* torchaudio.transforms.FFTConvolve
* torchaudio.transforms.Preemphasis
* torchaudio.transforms.Speed
* torchaudio.transforms.SpeedPerturbation

The operators can be used to synthetically diversify training data to improve the generalizability of downstream models. For usage details, please refer to the functional and transform documentation and the Audio Data Augmentation tutorial.

[Beta] WavLM and XLS-R models

The release adds two self-supervised learning models for speech and audio.
* WavLM, which is robust to noise and reverberation.
* XLS-R, which is trained on cross-lingual datasets.

Besides the model architectures, torchaudio also supports corresponding pre-trained pipelines:
* torchaudio.pipelines.WAVLM_BASE
* torchaudio.pipelines.WAVLM_BASE_PLUS
* torchaudio.pipelines.WAVLM_LARGE
* torchaudio.pipelines.WAV2VEC_XLSR_300M
* torchaudio.pipelines.WAV2VEC_XLSR_1B
* torchaudio.pipelines.WAV2VEC_XLSR_2B

For usage details, please refer to the factory function and pre-trained pipelines documentation.

TorchRL

The initial release of torchrl includes several features that span the entire RL domain. TorchRL can already be used in online, offline, multi-agent, multi-task and distributed RL settings, among others. See below:

[Beta] Environment wrappers and transforms

torchrl.envs includes several wrappers around common environment libraries. This allows users to swap one library for another without effort. These wrappers build an interface between these simulators and torchrl:
* dm_control
* Gym
* Brax
* EnvPool
* Jumanji
* Habitat
It also comes with many commonly used transforms and vectorized environment utilities that allow for fast execution across simulation libraries. Please refer to the documentation for more detail.

[Beta] Datacollectors

Data collection in RL is made easy via the usage of single-process or multiprocessed/distributed data collectors that execute the policy in the environment over a desired duration and deliver samples according to the user’s needs. These can be found in torchrl.collectors and are documented here.

[Beta] Objective modules

Several objective functions are included in torchrl.objectives, among which:
* A generic PPOLoss class and derived ClipPPOLoss and KLPPOLoss
* SACLoss and DiscreteSACLoss
* DDPGLoss
* DQNLoss
* REDQLoss
* A2CLoss
* TD3Loss
* ReinforceLoss
* Dreamer
Vectorized value function operators also appear in the library. Check the documentation here.

[Beta] Models and exploration strategies

We provide multiple models, modules and exploration strategies. Get a detailed description in the doc.

[Beta] Composable replay buffer

A composable replay buffer class is provided that can be used to store data in multiple contexts, including single- and multi-agent, on- and off-policy, and many more. Components include:
* Storages (list, physical or memory-based contiguous storages)
* Samplers (Prioritized, sampler without repetition)
* Writers
* Possibility to add transforms

Replay buffers and other data utilities are documented here.

[Beta] Logging tools and trainer

We support multiple logging tools including tensorboard, wandb and mlflow.
We provide a generic Trainer class that allows for easy code recycling and checkpointing. These features are documented here.

TensorDict

TensorDict is a new data carrier for PyTorch.

[Beta] TensorDict: specialized dictionary for PyTorch

TensorDict allows you to execute many common operations across batches of tensors carried by a single container. TensorDict supports many shape and device or storage operations, and can readily be used in distributed settings. Check the documentation to learn more.

[Beta] @tensorclass: a dataclass for PyTorch

Like TensorDict, tensorclass provides the opportunity to write dataclasses with built-in torch features such as shape or device operations.

[Beta] tensordict.nn: specialized modules for TensorDict
The tensordict.nn module provides specialized nn.Module subclasses that make it easy to build arbitrarily complex graphs that can be executed with TensorDict inputs. It is compatible with the latest PyTorch features such as functorch, torch.fx and torch.compile.

TorchRec

[Beta] KeyedJaggedTensor All-to-All Redesign and Input Dist Fusion

We observed a performance regression due to a bottleneck in sparse data distribution for models that have multiple, large KJTs to redistribute. To combat this we altered the comms pattern to transport the minimum data required in the initial collective to support the collective calls for the actual KJT tensor data. Sending this data, the ‘splits’, in the initial collective means more data is transmitted over the comms stream overall, but the CPU is blocked for significantly shorter amounts of time, leading to better overall QPS.
Furthermore, we altered the TorchRec train pipeline to group the initial collective calls for the splits together before launching the more expensive KJT tensor collective calls. This fusion minimizes the CPU blocked time, as launching each subsequent input distribution is no longer dependent on the previous input distribution. With this feature, variable batch sizes are now natively supported across ranks. These features are documented here.

TorchVision

[Beta] Extending TorchVision’s Transforms to Object Detection, Segmentation & Video tasks

TorchVision is extending its Transforms API! Here is what’s new:
* You can use them not only for Image Classification but also for Object Detection, Instance & Semantic Segmentation and Video Classification.
* You can use new functional transforms for transforming Videos, Bounding Boxes and Segmentation Masks.
Learn more about these new transforms from our docs, and submit any feedback in our dedicated issue.

TorchText

[Beta] Adding scriptable T5 and Flan-T5 to the TorchText library with incremental decoding support!

TorchText has added the T5 model architecture with pre-trained weights for both the original T5 paper and Flan-T5. The model is fully torchscriptable and features an optimized multiheaded attention implementation. We include several examples of how to utilize the model, including summarization, classification, and translation. For more details, please refer to our docs.
TorchX

TorchX is moving to community supported mode. More details will be coming at a later time.
layout: blog_detail
title: "Fast Beam Search Decoding in PyTorch with TorchAudio and Flashlight Text"
author: Caroline Chen, Jacob Kahn (@jacob_d_kahn)
featured-img: "/assets/images/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text-6.png"

Beam search decoding with industry-leading speed from Flashlight Text (part of the Flashlight ML framework) is now available with official support in TorchAudio, bringing high-performance beam search and text utilities for speech and text applications built on top of PyTorch. The current integration supports CTC-style decoding, but it can be used for any modeling setting that outputs token-level probability distributions over time steps.
A brief beam search refresher

In speech and language settings, beam search is an efficient, greedy algorithm that can convert sequences of continuous values (i.e. probabilities or scores) into graphs or sequences (i.e. tokens, word-pieces, words) using optional constraints on valid sequences (i.e. a lexicon), optional external scoring (i.e. an LM which scores valid sequences), and other score adjustments for particular sequences. In the example that follows, we'll consider a token set of {ϵ, a, b}, where ϵ is a special token that we can imagine denotes a space between words or a pause in speech. Graphics here and below are taken from Awni Hannun's excellent distill.pub writeup on CTC and beam search.
With a greedy-like approach, beam search considers the next viable token given an existing sequence of tokens — in the example above, a, b, b is a valid sequence, but a, b, a is not. We rank each possible next token at each step of the beam search according to a scoring function. Scoring functions (s) typically look something like:
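The scoring function itself appears as an image in the original post. A common form, consistent with the description that follows and with the FAIR end-to-end ASR work cited there (the exact notation here is an assumption), is:

s(ŷ, x) = log P(ŷ | x) + α · log P_LM(ŷ) + β · |ŷ|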
where ŷ is a potential path/sequence of tokens, x is the input (P(ŷ|x) represents the model's predictions over time), and 𝛼 is a weight on the language model probability (the probability of the sequence under the language model). Some scoring functions add 𝜷, which adjusts the score based on the length of the predicted sequence |ŷ|. This particular scoring function is used in FAIR's prior work on end-to-end ASR, and there are many variations on scoring functions, which can vary across application areas.
Given a particular sequence, to assess the next viable token in that sequence (perhaps constrained by a set of allowed words or sequences, such as a lexicon of words), the beam search algorithm scores the sequence with each candidate token added, and sorts token candidates based on those scores. For efficiency and since the number of paths is exponential in the token set size, the top-k highest-scoring candidates are kept — k represents the beam size. There are many other nuances with how beam search can progress: similar hypothesis sequences can be “merged”, for instance.
The scoring function can be further augmented to up/down-weight token insertion or long or short words. Scoring with stronger external language models, while incurring computational cost, can also significantly improve performance; this is frequently referred to as LM fusion. There are many other knobs to tune for decoding — these are documented in TorchAudio’s documentation and explored further in TorchAudio’s ASR Inference tutorial. Since decoding is quite efficient, parameters can be easily swept and tuned. Beam search has been used in ASR extensively over the years in far too many works to cite, and in strong, recent results and systems including wav2vec 2.0 and NVIDIA's NeMo.
Why beam search?

Beam search remains a fast competitor to heavier-weight decoding approaches such as RNN-Transducer, which Google has invested in putting on-device and which has shown strong results on common benchmarks. Autoregressive text models at scale can benefit from beam search as well. Among other things, beam search gives:
- A flexible performance/latency tradeoff — by adjusting beam size and the external LM, users can sacrifice latency for accuracy or pay for more accurate results with a small latency cost. Decoding with no external LM can improve results at very little performance cost.
- Portability without retraining — existing neural models can benefit from multiple decoding setups and plug-and-play with external LMs without training or fine-tuning.
- A compelling complexity/accuracy tradeoff — adding beam search to an existing modeling pipeline incurs little additional complexity and can improve performance.

Performance Benchmarks

Today's most commonly-used beam search decoding libraries that support external language model integration include Kensho's pyctcdecode and NVIDIA's NeMo toolkit. We benchmark the TorchAudio + Flashlight decoder against them with a wav2vec 2.0 base model trained on 100 hours of audio, evaluated on LibriSpeech dev-other with the official KenLM 3-gram LM. Benchmarks were run on Intel E5-2698 CPUs on a single thread. All computation was in-memory — KenLM memory mapping was disabled as it wasn't widely supported.
When benchmarking, we measure the time-to-WER (word error rate) — because of subtle differences in the implementation of decoding algorithms and the complex relationships between parameters and decoding speed, some hyperparameters differed across runs. To fairly assess performance, we first sweep for parameters that achieve a baseline WER, minimizing beam size if possible.

Decoding performance on Librispeech dev-other of a pretrained wav2vec 2.0 model. TorchAudio + Flashlight decoding outperforms by an order of magnitude at low WERs.
Time-to-WER results, deferring to smaller beam size, across decoders. The TorchAudio + Flashlight decoder scales far better with larger beam sizes and at lower WERs.

TorchAudio API and Usage

TorchAudio provides a Python API for CTC beam search decoding, with support for the following:
- lexicon and lexicon-free decoding
- KenLM n-gram language model integration
- character and word-piece decoding
- sample pretrained LibriSpeech KenLM models and corresponding lexicon and token files
- various customizable beam search parameters (beam size, pruning threshold, LM weight...)

To set up the decoder, use the factory function torchaudio.models.decoder.ctc_decoder

```python
from torchaudio.models.decoder import ctc_decoder, download_pretrained_files

files = download_pretrained_files("librispeech-4-gram")

decoder = ctc_decoder(
    lexicon=files.lexicon,
    tokens=files.tokens,
    lm=files.lm,
    nbest=1,
    # ... additional optional customizable args ...
)
```
Given emissions of shape *(batch, time, num_tokens)*, the decoder will compute and return a List of batch Lists, each consisting of the nbest hypotheses corresponding to the emissions. Each hypothesis can be further broken down into tokens, words (if a lexicon is provided), score, and timesteps components.

```python
emissions = acoustic_model(waveforms)  # (B, T, N)
batch_hypotheses = decoder(emissions)  # List[List[CTCHypothesis]]

# transcript for a lexicon decoder
transcripts = [" ".join(hypo[0].words) for hypo in batch_hypotheses]

# transcript for a lexicon free decoder, splitting by sil token
batch_tokens = [decoder.idxs_to_tokens(hypo[0].tokens) for hypo in batch_hypotheses]
transcripts = ["".join(tokens) for tokens in batch_tokens]
```

Please refer to the documentation for more API details, and the tutorial (ASR Inference Decoding) or sample inference script for more usage examples.

Upcoming Improvements

Full NNLM support — decoding with large neural language models (e.g. transformers) remains somewhat unexplored at scale. Already supported in Flashlight, we plan to add support in TorchAudio, allowing users to use custom decoder-compatible LMs. Custom word-level language models are already available in the nightly TorchAudio build, and are slated to be released in TorchAudio 0.13.
Autoregressive/seq2seq decoding — Flashlight Text also supports sequence-to-sequence (seq2seq) decoding for autoregressive models, which we hope to add bindings for and add to TorchAudio and TorchText with efficient GPU implementations as well.

Better build support — to benefit from improvements in Flashlight Text, TorchAudio will directly submodule Flashlight Text to make upstreaming modifications and improvements easier. This is already in effect in the nightly TorchAudio build, and is slated to be released in TorchAudio 0.13.

Citation

To cite the decoder, please use the following:

```bibtex
@inproceedings{kahn2022flashlight,
  title={Flashlight: Enabling innovation in tools for machine learning},
  author={Kahn, Jacob D and Pratap, Vineel and Likhomanenko, Tatiana and Xu, Qiantong and Hannun, Awni and Cai, Jeff and Tomasello, Paden and Lee, Ann and Grave, Edouard and Avidov, Gilad and others},
  booktitle={International Conference on Machine Learning},
  pages={10557--10574},
  year={2022},
  organization={PMLR}
}
```

```bibtex
@inproceedings{yang2022torchaudio,
  title={Torchaudio: Building blocks for audio and speech processing},
  author={Yang, Yao-Yuan and Hira, Moto and Ni, Zhaoheng and Astafurov, Artyom and Chen, Caroline and Puhrsch, Christian and Pollack, David and Genzel, Dmitriy and Greenberg, Donny and Yang, Edward Z and others},
  booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  pages={6982--6986},
  year={2022},
  organization={IEEE}
}
```
layout: blog_detail
title: "New Library Updates in PyTorch 1.13"
author: Team PyTorch
featured-img: "assets/images/new-library-updates-in-pytorch-1.13-2.jpg"

Summary

We are bringing a number of improvements to the current PyTorch libraries, alongside the PyTorch 1.13 release. These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch. Along with 1.13, we are releasing updates to the PyTorch libraries; please find them below.

TorchAudio

(Beta) Hybrid Demucs Model and Pipeline

Hybrid Demucs is a music source separation model that uses both spectrogram and time domain features. It has demonstrated state-of-the-art performance in the Sony® Music DeMixing Challenge. (citation: https://arxiv.org/abs/2111.03600)

The TorchAudio v0.13 release includes the following features:
- MUSDB_HQ Dataset, which is used in Hybrid Demucs training (docs)
- Hybrid Demucs model architecture (docs)
- Three factory functions suitable for different sample rate ranges
- Pre-trained pipelines (docs)
- SDR Results of pre-trained pipelines on MUSDB_HQ test set
- Tutorial that steps through music source separation using the pretrained pipeline (docs)

| Pipeline | All | Drums | Bass | Other | Vocals |
|----------|-----|-------|------|-------|--------|
| HDEMUCS_HIGH_MUSDB* | 6.42 | 7.76 | 6.51 | 4.47 | 6.93 |
| HDEMUCS_HIGH_MUSDB_PLUS** | 9.37 | 11.38 | 10.53 | 7.24 | 8.32 |

\* Trained on the training data of the MUSDB-HQ dataset. ** Trained on both training and test sets of MUSDB-HQ and 150 extra songs from an internal database that were specifically produced for Meta.

```python
import torchaudio
from torchaudio.pipelines import HDEMUCS_HIGH_MUSDB_PLUS

bundle = HDEMUCS_HIGH_MUSDB_PLUS
model = bundle.get_model()
sources_list = model.sources

mixture, samplerate = torchaudio.load("song.wav")
sources = model(mixture)
audios = dict(zip(sources_list, sources))
```

Special thanks to Alexandre Defossez for the guidance.

(Beta) Datasets and Metadata Mode for SUPERB Benchmark
TorchAudio adds support for various audio-related datasets used in downstream tasks for benchmarking self-supervised learning models. With the addition of several new datasets, there is now support for the downstream tasks in version 1 of the SUPERB benchmark, which can be found in the s3prl repository. For these datasets, we also add metadata support through a get_metadata function, enabling faster dataset iteration or preprocessing without the need to load waveforms. The function returns the same features as __getitem__, except it returns the relative waveform path rather than the loaded waveform.

Datasets with metadata functionality:
- LIBRISPEECH (docs)
- LibriMix (docs)
- QUESST14 (docs)
- SPEECHCOMMANDS (docs)
- (new) FluentSpeechCommands (docs)
- (new) Snips (docs)
- (new) IEMOCAP (docs)
- (new) VoxCeleb1 (Identification, Verification)
(Beta) Custom Language Model support in CTC Beam Search Decoding

TorchAudio released a CTC beam search decoder in release 0.12, with KenLM language model support. This release adds functionality for creating custom Python language models that are compatible with the decoder, using the torchaudio.models.decoder.CTCDecoderLM wrapper. For more information on using a custom language model, please refer to the documentation and tutorial.

(Beta) StreamWriter

torchaudio.io.StreamWriter is a class for encoding media including audio and video. It can handle a wide variety of codecs, chunk-by-chunk encoding and GPU encoding.

```python
from torchaudio.io import StreamWriter

writer = StreamWriter("example.mp4")
writer.add_audio_stream(
    sample_rate=16_000,
    num_channels=2,
)
writer.add_video_stream(
    frame_rate=30,
    height=96,
    width=128,
    format="rgb24",
)

with writer.open():
    writer.write_audio_chunk(0, audio)
    writer.write_video_chunk(1, video)
```

For more information, refer to the documentation and the following tutorials:
- StreamWriter Basic Usage
- StreamWriter Advanced Usage
- Hardware-Accelerated Video Decoding and Encoding

TorchData

For a complete list of changes and new features, please visit our repository’s 0.5.0 release note.
(Prototype) DataLoader2

DataLoader2 was introduced in the last release to execute the DataPipe graph, with support for dynamic sharding for multi-process/distributed data loading, multiple backend ReadingServices, and DataPipe graph in-place modification (e.g. shuffle control). In this release, we further consolidated the API for DataLoader2, and detailed documentation is now available here. We continue to welcome early adopters and feedback, as well as potential contributors. If you are interested in trying it out, we encourage you to install the nightly version of TorchData.

(Beta) Data Loading from Cloud Service Providers

We extended our support to load data from additional cloud storage providers via DataPipes, now covering AWS, Google Cloud Storage, and Azure. A tutorial is also available. We are open to feedback and feature requests.
We also performed a simple benchmark, comparing the performance of data loading from AWS S3 and an attached volume on an AWS EC2 instance. The results are visible here.

torch::deploy (Beta)

torch::deploy is now in Beta! torch::deploy is a C++ library for Linux-based operating systems that allows you to run multiple Python interpreters in a single process. You can run your existing eager PyTorch models without any changes for production inference use cases. Highlights include:
- Existing models work out of the box – no need to modify your Python code to support tracing.
- Full support for your existing Python environment, including C extensions.
- No need to cross process boundaries to load balance in multi-GPU serving environments.
- Model weights can be shared between multiple Python interpreters.
- A vastly improved installation and setup process.

```cpp
torch::deploy::InterpreterManager manager(4);
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
torch::deploy::InterpreterManager manager(4);

// access one of the 4 interpreters
auto I = manager.acquireOne();

// run infer from your_model.py
I.global("your_model", "infer")({at::randn({10, 240, 320})});
```
Learn more here.
(Beta) CUDA/ROCm/CPU Backends
torch::deploy now links against standard PyTorch Python distributions, so all accelerators that PyTorch core supports, such as CUDA and AMD/HIP, work out of the box.
- Can install any device variant of PyTorch via pip/conda like normal.
- https://pytorch.org/get-started/locally/
(Prototype) aarch64/arm64 support
torch::deploy now has basic support for aarch64 Linux systems.
- We're looking to gather feedback on it and learn more about arm use cases for eager PyTorch models.
- Learn more / share your use case at https://github.com/pytorch/multipy/issues/64
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
TorchEval
(Prototype) Introducing Native Metrics Support for PyTorch
TorchEval is a library built for users who want highly performant implementations of common metrics to evaluate machine learning models. It also provides an easy-to-use interface for building custom metrics with the same toolkit. Building your metrics with TorchEval makes running distributed training loops with torch.distributed a breeze. Learn more with our docs, see our examples, or check out our GitHub repo.
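As a small, hedged example of the metric interface (the values are made up for illustration):
```python
import torch
from torcheval.metrics import MulticlassAccuracy

metric = MulticlassAccuracy()

# accumulate predictions and targets batch by batch
preds = torch.tensor([0, 2, 1, 3])
target = torch.tensor([0, 1, 2, 3])
metric.update(preds, target)

# compute() returns the metric over everything seen so far
print(metric.compute())  # tensor(0.5000)
```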
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
TorchMultimodal Release (Beta)
Please watch for upcoming blogs in early November that will introduce TorchMultimodal, a PyTorch domain library for training SoTA multi-task multimodal models at scale, in more detail; in the meantime, play around with the library and models through our tutorial.
TorchRec
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
(Prototype) Simplified Optimizer Fusion APIs
We’ve provided a simplified and more intuitive API for setting fused optimizer settings via apply_optimizer_in_backward. This new approach enables the ability to specify optimizer settings on a per-parameter basis, and sharded modules will configure FBGEMM’s TableBatchedEmbedding modules accordingly. Additionally, this now lets TorchRec’s planner account for optimizer memory usage. This should alleviate reports of sharding jobs OOMing when using Adam with a plan generated by the planner.
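A hedged sketch of what this can look like; the exact import path of apply_optimizer_in_backward has moved between releases, and the embedding module name below is a placeholder:
```python
import torch
# the import path is an assumption and has varied across TorchRec releases
from torchrec.optim.apply_optimizer_in_backward import apply_optimizer_in_backward

# attach optimizer settings to a specific group of parameters; sharded
# modules then configure FBGEMM's fused TableBatchedEmbedding accordingly
apply_optimizer_in_backward(
    optimizer_class=torch.optim.SGD,
    params=model.sparse_arch.parameters(),  # placeholder module name
    optimizer_kwargs={"lr": 0.02},
)
```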
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
(Prototype) Simplified Sharding APIs
We’re introducing the shard API, which now allows you to shard only the embedding modules within a model, and provides an alternative to the current main entry point - DistributedModelParallel. This gives you finer-grained control over the rest of the model, which can be useful for customized parallelization logic and inference use cases (which may not require any parallelization on the dense layers). We’re also introducing construct_module_sharding_plan, providing a simpler interface to the TorchRec sharder.
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
(Beta) Quantized Comms
Applying quantization or mixed precision to tensors in a collective call during model parallel training greatly improves training efficiency, with little to no effect on model quality. TorchRec now integrates with the quantized comms library provided by FBGEMM GPU and provides an interface to construct encoders and decoders (codecs) that surround the all_to_all and reduce_scatter collective calls in the output_dist of a sharded module. We also allow you to construct your own codecs to apply to your sharded module. The codecs provided by FBGEMM allow FP16, BF16, FP8, and INT8 compressions, and you may use different quantizations for the forward pass and backward pass.
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
TorchSnapshot (Beta)
Along with PyTorch 1.13, we are releasing the beta version of TorchSnapshot, which is a performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind. Highlights include:
- Performance: TorchSnapshot provides a fast checkpointing implementation employing various optimizations, including zero-copy serialization for most tensor types, overlapped device-to-host copy and storage I/O, and parallelized storage I/O
- Memory Use: TorchSnapshot's memory usage adapts to the host's available resources, greatly reducing the chance of out-of-memory issues when saving and loading checkpoints
- Usability: Simple APIs that are consistent between distributed and non-distributed workloads
Learn more with our tutorial.
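A minimal, hedged sketch of the checkpointing flow (the path and toy model are placeholders):
```python
import torch
from torchsnapshot import Snapshot

model = torch.nn.Linear(128, 10)
optim = torch.optim.Adagrad(model.parameters())

# app_state maps keys to stateful objects (anything exposing state_dict/load_state_dict)
app_state = {"model": model, "optim": optim}

# take a snapshot; the same call works for distributed and non-distributed jobs
snapshot = Snapshot.take(path="/tmp/example_snapshot", app_state=app_state)

# later, restore the state in place
Snapshot(path="/tmp/example_snapshot").restore(app_state=app_state)
```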
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
TorchVision
We are happy to introduce torchvision v0.14 (release note). This version introduces a new model registration API to help users retrieve and list models and weights. It also includes new image and video classification models such as MViT, S3D, Swin Transformer V2, and MaxViT. Last but not least, we also have new primitives and augmentations such as the PolynomialLR scheduler and SimpleCopyPaste.
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
(Beta) Model Registration API
Following up on the multi-weight support API that was released on the previous version, we have added a new model registration API to help users retrieve models and weights. There are now 4 new methods under the torchvision.models module: get_model, get_model_weights, get_weight, and list_models. Here are examples of how we can use them:
```Python
import torchvision
from torchvision.models import get_model, get_model_weights, list_models

max_params = 5000000

tiny_models = []
for model_name in list_models(module=torchvision.models):
    weights_enum = get_model_weights(model_name)
    if len([w for w in weights_enum if w.meta["num_params"] <= max_params]) > 0:
        tiny_models.append(model_name)

print(tiny_models)
# ['mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mobilenet_v2', ...]
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
model = get_model(tiny_models[0], weights="DEFAULT")
print(sum(x.numel() for x in model.state_dict().values()))
# 2239188
```
#### (Beta) New Video Classification Models
We added two new video classification models, MViT and S3D. MViT is a state-of-the-art video classification transformer model which has 80.757% accuracy on the Kinetics400 dataset, while S3D is a relatively small model with good accuracy for its size. These models can be used as follows:
```Python
import torch
from torchvision.models.video import *

video = torch.rand(3, 32, 800, 600)
model = mvit_v2_s(weights="DEFAULT")
# model = s3d(weights="DEFAULT")
model.eval()
prediction = model(video)
```
Here is the table showing the accuracy of the new video classification models tested on the Kinetics400 dataset.

| Model | Acc@1 | Acc@5 |
|--------------------------------|-----------|-----------|
| mvit_v1_b | 81.474 | 95.776 |
| mvit_v2_s | 83.196 | 96.36 |
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
| s3d | 83.582 | 96.64 |

We would like to thank Haoqi Fan, Yanghao Li, Christoph Feichtenhofer and Wan-Yen Lo for their work on PyTorchVideo and their support during the development of the MViT model. We would like to thank Sophia Zhi for her contribution implementing the S3D model in torchvision.
(Stable) New Architecture and Model Variants
For Classification Models, we’ve added the Swin Transformer V2 architecture along with pre-trained weights for its tiny/small/base variants. In addition, we have added support for the MaxViT transformer. Here is an example of how to use the models:
```Python
import torch
from torchvision.models import *

image = torch.rand(1, 3, 224, 224)
model = swin_v2_t(weights="DEFAULT").eval()
# model = maxvit_t(weights="DEFAULT").eval()
prediction = model(image)
```
Here is the table showing the accuracy of the models tested on the ImageNet1K dataset.
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
| Model | Acc@1 | Acc@1 change over V1 | Acc@5 | Acc@5 change over V1 |
|-----------|--------|----------------------|--------|----------------------|
| swin_v2_t | 82.072 | + 0.598 | 96.132 | + 0.356 |
| swin_v2_s | 83.712 | + 0.516 | 96.816 | + 0.456 |
| swin_v2_b | 84.112 | + 0.530 | 96.864 | + 0.224 |
| maxvit_t | 83.700 | - | 96.722 | - |

We would like to thank Ren Pang and Teodor Poncu for contributing the 2 models to torchvision.
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
(Stable) New Primitives & Augmentations
In this release we’ve added the SimpleCopyPaste augmentation in our reference scripts, and we upstreamed the PolynomialLR scheduler to PyTorch Core. We would like to thank Lezwon Castelino and Federico Pozzi for their contributions. We are continuing our efforts to modernize TorchVision by adding more SoTA primitives, augmentations and architectures with the help of our community. If you are interested in contributing, have a look at the following issue.
Torch-TensorRT
(Prototype) TensorRT with FX2TRT frontend
Torch-TensorRT is the PyTorch integration for TensorRT, providing high performance inference on NVIDIA GPUs. Torch-TRT allows for optimizing models directly in PyTorch for deployment, providing up to a 6x performance improvement.
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
Torch-TRT is an AoT compiler which ingests an nn.Module or TorchScript module, optimizes compatible subgraphs with TensorRT, and leaves the rest to run in PyTorch. This gives users the performance of TensorRT, with the usability and familiarity of Torch.
Torch-TensorRT is part of the PyTorch ecosystem and was released as v1.0 in November ‘21. There are currently two distinct front-ends: TorchScript & FX. Each provides the same value proposition and underlying operation, with the primary difference being the input & output formats (TS vs FX / Python).
The TorchScript front-end was included in v1.0 and should be considered stable. The FX front-end was first released in v1.2 and should be considered Beta.
Relevant Links:
- Github
- Documentation
- Generic (TS) getting started guide
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
- FX getting started guide
(Stable) Introducing Torch-TensorRT
Torch-TensorRT is an integration for PyTorch that leverages inference optimizations of TensorRT on NVIDIA GPUs. It takes advantage of TensorRT optimizations, such as FP16 and INT8 reduced precision, graph optimization, operation fusion, etc., while offering a fallback to native PyTorch when TensorRT does not support the model subgraphs. Currently, there are two frontend paths in the library that help convert a PyTorch model into a TensorRT engine: one path is through TorchScript (TS) and the other is through the FX frontend. In either case, the model is traced by TS or FX into an IR graph and then converted to TensorRT from it. Learn more with our tutorial.
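As a rough sketch of the compile workflow (not taken from the release notes; the model and precision choices are illustrative), assuming a CUDA-capable machine with torch_tensorrt installed:
```python
import torch
import torch_tensorrt
import torchvision

# an ordinary eager-mode model to optimize
model = torchvision.models.resnet18(weights="DEFAULT").eval().cuda()

# compile: supported subgraphs run in TensorRT, the rest falls back to PyTorch
trt_model = torch_tensorrt.compile(
    model,
    inputs=[torch_tensorrt.Input((1, 3, 224, 224))],
    enabled_precisions={torch.half},  # allow FP16 kernels
)

x = torch.randn(1, 3, 224, 224, device="cuda")
with torch.no_grad():
    out = trt_model(x)
```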
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
TorchX
TorchX 0.3 updates include a new list API, experiment tracking, elastic training and improved scheduler support. There’s also a new Multi-Objective NAS tutorial using TorchX + Ax.
(Prototype) List
The newly added list command and API allow you to list recently launched jobs and their statuses for a given scheduler directly from within TorchX.
- This removes the need for using secondary tools to list the jobs.
- Full programmatic access to recent jobs for integration with custom tools.
$ torchx list -s kubernetes
APP HANDLE                                        APP STATUS
-----------------------------------------------   -----------------
kubernetes://torchx/default:train-f2nx4459p5crr   SUCCEEDED
Learn more with our documentation.
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
(Prototype) Tracker
TorchX Tracker is a new prototype library that provides a flexible and customizable experiment and artifact tracking interface. This allows you to track inputs and outputs for jobs across multiple steps to make it easier to use TorchX with pipelines and other external systems.
from torchx import tracker

app_run = tracker.app_run_from_env()
app_run.add_metadata(lr=lr, gamma=gamma) # hyper parameters
app_run.add_artifact("model", "storage://path/mnist_cnn.pt") # logs / checkpoints
app_run.add_source(parent_run_id, "model") # lineage
Example:
- https://github.com/pytorch/torchx/tree/main/torchx/examples/apps/tracker
- https://pytorch.org/torchx/main/tracker.html
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
(Prototype) Elastic Training and Autoscaling
Elasticity on Ray and Kubernetes: automatic scale-up of distributed training jobs when using a supported scheduler. Learn more with our documentation.
(Prototype) Scheduler Improvements: IBM® Spectrum LSF
Added prototype support for the IBM Spectrum LSF scheduler.
(Beta) AWS Batch Scheduler
The AWS Batch scheduler integration is now in beta.
- Log fetching and listing jobs are now supported.
- Added configs for job priorities and queue policies
- Easily access the job UI via ui_url https://pytorch.org/torchx/main/schedulers/aws_batch.html
(Prototype) AnyPrecision Optimizer
Drop-in replacement for the AdamW optimizer that reduces GPU memory and enables two main features:
- Ability to successfully train the entire model pipeline in full BFloat16.
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
Kahan summation ensures precision. This can improve training throughput, especially on huge models, through reduced memory use and increased computation speed.
- Ability to change the variance state to BFloat16. This can reduce the overall memory required for model training, with additional speed improvements.
Find more information here.
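As a hedged sketch of how such an optimizer might be configured (the import path is an assumption and has varied between packages; the option names reflect common usage and may differ in your install):
```python
import torch
# the import path is an assumption; AnyPrecisionAdamW has shipped in
# prototype form under different packages (e.g. torchdistx.optimizers)
from torchdistx.optimizers import AnyPrecisionAdamW

model = torch.nn.Linear(1024, 1024).bfloat16().cuda()

optimizer = AnyPrecisionAdamW(
    model.parameters(),
    lr=1e-3,
    use_kahan_summation=True,       # compensated summation for BF16 weight updates
    momentum_dtype=torch.bfloat16,
    variance_dtype=torch.bfloat16,  # variance state kept in BF16 to save memory
)
```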
https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/
pytorch blogs
layout: blog_detail
title: "Scaling PyTorch models on Cloud TPUs with FSDP"
author: Ronghang Hu, Vaibhav Singh, Jack Cao, Milad Mohammadi, Yeounoh Chung, Shauheen Zahirazami, Ross Girshick
featured-img: "/assets/images/scaling-pytorch-models-on-cloud-tpus-with-fsdp.jpg"
Introduction
The research community has witnessed many successes with large models across NLP, computer vision, and other domains in recent years. Many of these successes were enabled by Cloud TPUs, which are powerful hardware for distributed training. To support TPUs in PyTorch, the PyTorch/XLA library provides a backend for XLA devices (most notably TPUs) and lays the groundwork for scaling large PyTorch models on TPUs. However, most existing model scaling tools in the PyTorch ecosystem assume GPU (or CPU) devices, often depend on specific features in CUDA, and do not work directly on TPUs. The lack of scaling tools makes it challenging to build large models that cannot fit into the memory of a single TPU chip.
https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/
pytorch blogs
To support model scaling on TPUs, we implemented the widely-adopted Fully Sharded Data Parallel (FSDP) algorithm for XLA devices as part of the PyTorch/XLA 1.12 release. We provide an FSDP interface with a similar high-level design to the CUDA-based PyTorch FSDP class while also handling several restrictions in XLA (see Design Notes below for more details). This FSDP interface allowed us to easily build models with e.g. 10B+ parameters on TPUs and has enabled many research explorations.
Using Fully Sharded Data Parallel (FSDP) in PyTorch/XLA
We provide a wrapper class XlaFullyShardedDataParallel over a given PyTorch model to shard its parameters across data-parallel workers. An example usage is as follows:
```python
import torch
import torch_xla.core.xla_model as xm
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP

model = FSDP(my_module)
optim = torch.optim.Adam(model.parameters(), lr=0.0001)
output = model(x, y)
https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/
pytorch blogs
loss = output.sum()
loss.backward()
optim.step()
```
Wrapping an `nn.Module` instance with `XlaFullyShardedDataParallel` enables the ZeRO-2 algorithm on it, where its gradients and the optimizer states are sharded for the entire training process. During its forward and backward passes, the full parameters of the wrapped module are first reconstructed from their corresponding shards for computation.
https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/
pytorch blogs
Nested FSDP wrapping can be used to further save memory. For nested FSDP, one should first wrap the individual submodules with an inner FSDP before wrapping the base model with an outer FSDP. This allows the model to store only the full parameters of one individual layer at any given time, and the outer wrapper ensures that any leftover parameters are handled, corresponding to the ZeRO-3 algorithm. Nested FSDP wrapping can be applied at any depth of submodules, and there can be more than 2 layers of nesting.
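A minimal sketch of nested wrapping on a toy model (the layer sizes are arbitrary):
```python
import torch.nn as nn
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP

# wrap each submodule with an inner FSDP first, then wrap the base model
# with an outer FSDP; only one inner module's full parameters need to be
# materialized at a time during forward/backward
inner1 = FSDP(nn.Linear(1024, 1024))
inner2 = FSDP(nn.Linear(1024, 10))
model = FSDP(nn.Sequential(inner1, inner2))
```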
https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/
pytorch blogs
Model checkpoint saving and loading for models and optimizers can be done as before by saving and loading their .state_dict(). Meanwhile, each training process should save its own checkpoint file of the sharded model parameters and optimizer states, and load the checkpoint file for the corresponding rank when resuming (regardless of ZeRO-2 or ZeRO-3, i.e. nested wrapping or not). A command line tool and a Python interface are provided to consolidate the sharded model checkpoint files into a full/unsharded model checkpoint file.
Gradient checkpointing (also referred to as "activation checkpointing" or "rematerialization") is another common technique for model scaling and can be used in conjunction with FSDP. We provide checkpoint_module, a wrapper function over a given nn.Module instance for gradient checkpointing (based on torch_xla.utils.checkpoint.checkpoint).
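Roughly, per-rank checkpoint saving and gradient checkpointing can be combined as in the sketch below (this is not the exact code from the examples; the file naming and the toy module are illustrative):
```python
import torch
import torch_xla.core.xla_model as xm
from torch_xla.distributed.fsdp import (
    XlaFullyShardedDataParallel as FSDP,
    checkpoint_module,
)

# gradient (activation) checkpointing: wrap the submodule before FSDP wrapping
block = checkpoint_module(torch.nn.Linear(1024, 1024))
model = FSDP(block)
optim = torch.optim.Adam(model.parameters(), lr=1e-4)

# each rank saves its own shard of the model and optimizer state
ckpt = {"model": model.state_dict(), "optim": optim.state_dict()}
rank, world_size = xm.get_ordinal(), xm.xrt_world_size()
xm.save(ckpt, f"/tmp/final_ckpt_rank-{rank}-of-{world_size}.pth", master_only=False)
```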
https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/
pytorch blogs
The MNIST and ImageNet examples below provide illustrative usages of (plain or nested) FSDP, saving and consolidation of model checkpoints, as well as gradient checkpointing. Starting examples of FSDP in PyTorch/XLA Training MNIST and ImageNet with FSDP MNIST and ImageNet classification can often be used as starting points to build more complicated deep learning models. We provide the following FSDP examples on these two datasets: - MNIST: test/test_train_mp_mnist_fsdp_with_ckpt.py (it also illustrates checkpoint saving and consolidation) - ImageNet: test/test_train_mp_imagenet_fsdp.py
https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/
pytorch blogs
A comparison of them with the vanilla data-parallel examples of MNIST and ImageNet illustrates how to adapt a training script to use FSDP. A major distinction to keep in mind is that when stepping the optimizer on an FSDP-wrapped model, one should directly call optimizer.step() instead of xm.optimizer_step(optimizer). The latter reduces the gradients across ranks, which is not what we need in FSDP, where the gradients are already reduced and sharded (from a reduce-scatter op in its backward pass).
Installation
FSDP is available from the PyTorch/XLA 1.12 and newer nightly releases. Please refer to https://github.com/pytorch/xla#-available-images-and-wheels for a guide on installation as well as Cloud TPU allocation. Then clone the PyTorch/XLA repo on a TPU VM as follows:
```bash
https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/
pytorch blogs
mkdir -p ~/pytorch && cd ~/pytorch
git clone --recursive https://github.com/pytorch/xla.git
cd ~/
```
Train MNIST on v3-8 TPU
It gets around 98.9% accuracy for 2 epochs:
```bash
python3 ~/pytorch/xla/test/test_train_mp_mnist_fsdp_with_ckpt.py \
  --batch_size 16 --drop_last --num_epochs 2 \
  --use_nested_fsdp
```
The script above automatically tests consolidation of the sharded model checkpoints at the end. You can also manually consolidate the sharded checkpoint files via
```bash
python3 -m torch_xla.distributed.fsdp.consolidate_sharded_ckpts \
  --ckpt_prefix /tmp/mnist-fsdp/final_ckpt \
  --ckpt_suffix "_rank-*-of-*.pth"
```
Train ImageNet with ResNet-50 on v3-8 TPU
It gets around 75.9% accuracy for 100 epochs, the same as what one would get without using FSDP; download and preprocess the ImageNet-1k dataset to /datasets/imagenet-1k:
```bash
python3 ~/pytorch/xla/test/test_train_mp_imagenet_fsdp.py \
https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/
pytorch blogs
  --datadir /datasets/imagenet-1k --drop_last \
  --model resnet50 --test_set_batch_size 64 --eval_interval 10 \
  --lr 0.4 --batch_size 128 --num_warmup_epochs 5 \
  --lr_scheduler_divide_every_n_epochs 30 --lr_scheduler_divisor 10 \
  --num_epochs 100 \
  --use_nested_fsdp
```
You can also explore other options in these two examples, such as `--use_gradient_checkpointing` to apply gradient checkpointing (i.e. activation checkpointing) on the ResNet blocks, or `--compute_dtype bfloat16` to perform forward and backward passes in bfloat16 precision.
Examples on large-scale models
When building large models on TPUs, we often need to be aware of the memory constraints (e.g. 16 GB per core in TPU v3 and 32 GB per chip in TPU v4). For large models that cannot fit into a single TPU's memory or the host CPU memory, one should use nested FSDP to implement the ZeRO-3 algorithm and interleave submodule construction with inner FSDP wrapping, so that the full model never needs to be stored in memory during construction.
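A minimal sketch of this interleaved-construction pattern, assuming a simple stack of blocks (the block definition and sizes are placeholders):
```python
import torch.nn as nn
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP

def build_block(hidden=1024):
    # hypothetical helper that constructs one submodule at a time
    return nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())

blocks = []
for _ in range(8):
    # wrap each block with an inner FSDP right after it is constructed,
    # so only one block's full parameters are materialized at a time
    blocks.append(FSDP(build_block()))

# the outer wrapper shards any remaining parameters (ZeRO-3-style nesting)
model = FSDP(nn.Sequential(*blocks))
```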
https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/
pytorch blogs