We illustrate these cases in https://github.com/ronghanghu/ptxla_scaling_examples, which provides examples of training a Vision Transformer (ViT) model with 10B+ parameters on a TPU v3 pod (with 128 cores), as well as other cases.

Design Notes

One might wonder why we need to develop a separate FSDP class in PyTorch/XLA instead of directly reusing PyTorch's FSDP class or extending it to the XLA backend. The main motivation is that the native PyTorch FSDP class heavily relies on CUDA features that are not supported by XLA devices, while XLA also has several unique characteristics that need special handling. These distinctions make a separate FSDP implementation much easier to build and maintain.
Changes in API calls

One prominent distinction is that the native PyTorch FSDP is built upon separate CUDA streams for asynchronous execution in eager mode, while PyTorch/XLA runs in lazy mode and also does not support streams. In addition, TPU requires that all devices homogeneously run the same program. As a result, in the PyTorch/XLA FSDP implementation, CUDA calls and per-process heterogeneity need to be replaced by XLA APIs and alternative homogeneous implementations.
Tensor Storage Handling

Another prominent distinction is how to free a tensor's storage, which is much harder in XLA than in CUDA. To implement ZeRO-3, one needs to free the storage of full parameters after a module's forward pass, so that the next module can reuse this memory buffer for subsequent computation. PyTorch's FSDP accomplishes this on CUDA by freeing the actual storage of a parameter p via p.data.storage().resize_(0). However, XLA tensors do not have this .storage() handle, given that the XLA HLO IRs are completely functional and do not provide any ops to deallocate a tensor or resize its storage. Below the PyTorch interface, only the XLA compiler can decide when to free the TPU device memory corresponding to an XLA tensor, and a prerequisite is that the memory can only be released when the tensor object gets deallocated in Python -- which cannot happen in FSDP because these parameter tensors are referenced as module attributes and also saved by PyTorch autograd for the backward pass.
Our solution to this issue is to split a tensor's value properties from its autograd Variable properties, and to free a nn.Parameter tensor by setting its .data attribute to a dummy scalar of size 1. This way the actual data tensor for the full parameter gets dereferenced in Python so that XLA can recycle its memory for other computation, while autograd can still trace the base nn.Parameter as a weak reference to the parameter data. To get this to work, one also needs to handle views over the parameters, as views in PyTorch also hold references to their actual data (this required fixing a shape-related issue with views in PyTorch/XLA).
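As a rough sketch of the trick (this is not the actual PyTorch/XLA FSDP code; the function names below are made up purely for illustration):

```python
import torch
import torch.nn as nn

def free_full_param(p: nn.Parameter) -> None:
    # Point .data at a dummy size-1 tensor so the real full-parameter tensor is
    # dereferenced in Python and XLA can recycle its device memory, while the
    # nn.Parameter object (and autograd's reference to it) stays alive.
    p.data = torch.zeros(1, device=p.data.device)

def rebuild_full_param(p: nn.Parameter, all_gathered: torch.Tensor) -> None:
    # Re-attach the freshly all-gathered full parameter before the next use.
    p.data = all_gathered
```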
Working with XLA compiler

The solution above should be enough to free full parameters if the XLA compiler faithfully preserves the operations and their execution order in our PyTorch program. But there is another problem -- XLA attempts to optimize the program to speed up its execution by applying common subexpression elimination (CSE) to the HLO IRs. In a naive implementation of FSDP, the XLA compiler typically eliminates the 2nd all-gather in the backward pass to reconstruct the full parameters when it sees that it is a repeated computation from the forward pass, and directly holds and reuses the full parameters we want to free up after the forward pass. To guard against this undesired compiler behavior, we introduced the optimization barrier op into PyTorch/XLA and used it to stop the 2nd all-gather from being eliminated. This optimization barrier is also applied to a similar case of gradient checkpointing to prevent CSE between forward and backward passes that could eliminate the rematerialization.
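A hedged sketch of applying the barrier from Python (assuming a torch_xla build that exposes xm.optimization_barrier_; this is not the FSDP-internal code):

```python
import torch
import torch_xla.core.xla_model as xm

device = xm.xla_device()
full_params = [torch.randn(1024, 1024, device=device)]

# In-place barrier over the listed tensors: the XLA compiler may not merge or
# CSE computations across this point, so a later all-gather that recomputes
# these tensors is preserved instead of being folded into the forward pass.
xm.optimization_barrier_(full_params)
```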
In the future, if the distinctions between CUDA and XLA become less prominent, it could be worth considering merging the PyTorch/XLA FSDP with the native PyTorch FSDP to have a unified interface.

Acknowledgments

Thanks to Junmin Hao from AWS for reviewing the PyTorch/XLA FSDP pull request. Thanks to Brian Hirsh from the Meta PyTorch team for support on the PyTorch core issues. Thanks to Isaack Karanja, Will Cromar, and Blake Hechtman from Google for support on GCP, XLA, and TPU issues. Thanks to Piotr Dollar, Wan-Yen Lo, Alex Berg, Ryan Mark, Kaiming He, Xinlei Chen, Saining Xie, Shoubhik Debnath, Min Xu, and Vaibhav Aggarwal from Meta FAIR for various TPU-related discussions.
layout: blog_detail
title: 'PyTorch library updates including new model serving library'
author: Team PyTorch

Along with the PyTorch 1.5 release, we are announcing new libraries for high-performance PyTorch model serving and tight integration with TorchElastic and Kubernetes. Additionally, we are releasing updated packages for torch_xla (Google Cloud TPUs), torchaudio, torchvision, and torchtext. All of these new libraries and enhanced capabilities are available today and accompany all of the core features released in PyTorch 1.5.
TorchServe (Experimental)

TorchServe is a flexible and easy-to-use library for serving PyTorch models in production performantly at scale. It is cloud and environment agnostic and supports features such as multi-model serving, logging, metrics, and the creation of RESTful endpoints for application integration. TorchServe was jointly developed by engineers from Facebook and AWS with feedback and engagement from the broader PyTorch community. The experimental release of TorchServe is available today. Some of the highlights include:
* Support for both Python-based and TorchScript-based models
* Default handlers for common use cases (e.g., image segmentation, text classification) as well as the ability to write custom handlers for other use cases
* Model versioning, the ability to run multiple versions of a model at the same time, and the ability to roll back to an earlier version
* The ability to package a model, learned weights, and supporting files (e.g., class mappings, vocabularies) into a single, persistent artifact (a.k.a. the “model archive”)
* Robust management capability, allowing full configuration of models, versions, and individual worker threads via command line, config file, or run-time API
* Automatic batching of individual inferences across HTTP requests
* Logging including common metrics, and the ability to incorporate custom metrics
* Ready-made Dockerfile for easy deployment
* HTTPS support for secure deployment

To learn more about the APIs and the design of this feature, see the links below. See the reference architecture for a full multi-node deployment. The full documentation can be found here.

TorchElastic integration with Kubernetes (Experimental)
TorchElastic is a proven library for training large-scale deep neural networks within companies like Facebook, where having the ability to dynamically adapt to server availability and scale as new compute resources come online is critical. Kubernetes enables customers using machine learning frameworks like PyTorch to run training jobs distributed across fleets of powerful GPU instances like the Amazon EC2 P3. Distributed training jobs, however, are not fault-tolerant, and a job cannot continue if a node failure or reclamation interrupts training. Further, jobs cannot start without acquiring all required resources, or scale up and down without being restarted. This lack of resiliency and flexibility results in increased training time and costs from idle resources. TorchElastic addresses these limitations by enabling distributed training jobs to be executed in a fault-tolerant and elastic manner. Until today, Kubernetes users needed to manage the Pods and Services required for TorchElastic training jobs manually.
Through a joint collaboration of engineers at Facebook and AWS, TorchElastic, which adds elasticity and fault tolerance, is now supported using vanilla Kubernetes and through the managed EKS service from AWS. To learn more, see the TorchElastic repo for the controller implementation and docs on how to use it.
torch_xla 1.5 now available

torch_xla is a Python package that uses the XLA linear algebra compiler to accelerate the PyTorch deep learning framework on Cloud TPUs and Cloud TPU Pods. torch_xla aims to give PyTorch users the ability to do everything they can do on GPUs on Cloud TPUs as well, while minimizing changes to the user experience. The project began with a conversation at NeurIPS 2017 and gathered momentum in 2018 when teams from Facebook and Google came together to create a proof of concept. We announced this collaboration at PTDC 2018 and made the PyTorch/XLA integration broadly available at PTDC 2019. The project already has 28 contributors, nearly 2k commits, and a repo that has been forked more than 100 times.
This release of torch_xla is aligned and tested with PyTorch 1.5 to reduce friction for developers and to provide a stable and mature PyTorch/XLA stack for training models using Cloud TPU hardware. You can try it for free in your browser on an 8-core Cloud TPU device with Google Colab, and you can use it at a much larger scale on Google Cloud. See the full torch_xla release notes here. Full docs and tutorials can be found here and here.
PyTorch Domain Libraries

torchaudio, torchvision, and torchtext complement PyTorch with common datasets, models, and transforms in each domain area. We’re excited to share new releases for all three domain libraries alongside PyTorch 1.5 and the rest of the library updates. For this release, all three domain libraries are removing support for Python2 and will support Python3 only.

torchaudio 0.5

The torchaudio 0.5 release includes new transforms, functionals, and datasets. Highlights for the release include:
* Added the Griffin-Lim functional and transform, InverseMelScale and Vol transforms, and DB_to_amplitude.
* Added support for allpass, fade, bandpass, bandreject, band, treble, deemph, and riaa filters and transformations.
* Added new datasets, including the LJSpeech and SpeechCommands datasets.

See the full release notes here; full docs can be found here.
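As a quick, hedged sketch (the synthetic waveform and parameter values are made up for illustration; only the Spectrogram and GriffinLim transforms come from the release), here is one way the new Griffin-Lim transform can be used:

```python
import math
import torch
import torchaudio

# Synthetic 1-second, 440 Hz tone at 16 kHz as a stand-in for a real audio file.
sample_rate = 16000
t = torch.arange(sample_rate) / sample_rate
waveform = torch.sin(2 * math.pi * 440 * t).unsqueeze(0)

# Power spectrogram, then Griffin-Lim phase reconstruction back to a waveform.
n_fft = 400
spec = torchaudio.transforms.Spectrogram(n_fft=n_fft, power=2.0)(waveform)
reconstructed = torchaudio.transforms.GriffinLim(n_fft=n_fft, power=2.0)(spec)
print(waveform.shape, reconstructed.shape)
```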
torchvision 0.6

The torchvision 0.6 release includes updates to datasets and models, and a significant number of bug fixes. Highlights include:
* Faster R-CNN now supports negative samples, which allows feeding images without annotations at training time.
* Added an aligned flag to RoIAlign to match Detectron2.
* Refactored abstractions for the C++ video decoder.

See the full release notes here; full docs can be found here.

torchtext 0.6

The torchtext 0.6 release includes a number of bug fixes and improvements to documentation. Based on user feedback, the dataset abstractions are also currently being redesigned. Highlights for the release include:
* Fixed an issue related to the SentencePiece dependency in the conda package.
* Added support for the experimental IMDB dataset to allow a custom vocab.
* A number of documentation updates, including adding a code of conduct and deduplicating the docs on the torchtext site.

Your feedback and discussions on the experimental datasets API are welcome. You can send them to issue #664. We would also like to highlight the pull request here where the latest dataset abstraction is applied to the text classification datasets; feedback there will be beneficial to finalizing this abstraction.

See the full release notes here; full docs can be found here.

We’d like to thank the entire PyTorch team, the Amazon team, and the community for all their contributions to this work.

Cheers!
Team PyTorch
layout: blog_detail
title: 'Announcing the PyTorch Enterprise Support Program'
author: Team PyTorch

Today, we are excited to announce the PyTorch Enterprise Support Program, a participatory program that enables service providers to develop and offer tailored enterprise-grade support to their customers. This new offering, built in collaboration between Facebook and Microsoft, was created in direct response to feedback from PyTorch enterprise users who are developing models in production at scale for mission-critical applications. The PyTorch Enterprise Support Program is available to any service provider. It is designed to mutually benefit all program Participants by sharing and improving PyTorch long-term support (LTS), including contributions of hotfixes and other improvements found while working closely with customers and on their systems.
To benefit the open source community, all hotfixes developed by Participants will be tested and fed back to the LTS releases of PyTorch regularly through PyTorch’s standard pull request process. To participate in the program, a service provider must apply and meet a set of program terms and certification requirements. Once accepted, the service provider becomes a program Participant and can offer a packaged PyTorch Enterprise support service with LTS, prioritized troubleshooting, useful integrations, and more.
As a founding and inaugural member of the PyTorch Enterprise Support Program, Microsoft is launching PyTorch Enterprise on Microsoft Azure to deliver a reliable production experience for PyTorch users. Microsoft will support each PyTorch release for as long as it is current. In addition, it will support selected releases for two years, enabling a stable production experience. Microsoft Premier and Unified Support customers can access prioritized troubleshooting for hotfixes, bugs, and security patches at no additional cost. Microsoft will extensively test PyTorch releases for performance regressions. The latest release of PyTorch will be integrated with Azure Machine Learning and other PyTorch add-ons, including ONNX Runtime for faster inference.
PyTorch Enterprise on Microsoft Azure not only benefits its customers, but also the broader PyTorch community. All improvements will be tested and fed back to future releases of PyTorch so everyone in the community can use them. For organizations and individual PyTorch users, the standard way of researching and deploying with different release versions of PyTorch does not change. If your organization is looking for managed long-term support, prioritized patches, bug fixes, and additional enterprise-grade support, you should reach out to service providers participating in the program. To learn more and participate in the program as a service provider, visit the PyTorch Enterprise Support Program. If you want to learn more about Microsoft’s offering, visit PyTorch Enterprise on Microsoft Azure.

Thank you,
Team PyTorch
layout: blog_detail
title: 'PyTorch framework for cryptographically secure random number generation, torchcsprng, now available'
author: Team PyTorch

One of the key components of modern cryptography is the pseudorandom number generator. Katz and Lindell stated, "The use of badly designed or inappropriate random number generators can often leave a good cryptosystem vulnerable to attack. Particular care must be taken to use a random number generator that is designed for cryptographic use, rather than a 'general-purpose' random number generator which may be fine for some applications but not ones that are required to be cryptographically secure."[1] Additionally, most pseudorandom number generators scale poorly to massively parallel high-performance computation because of their sequential nature. Others don’t satisfy cryptographically secure properties.
torchcsprng is a PyTorch C++/CUDA extension that provides cryptographically secure pseudorandom number generators for PyTorch.

torchcsprng overview

Historically, PyTorch had only two pseudorandom number generator implementations: Mersenne Twister for CPU and Nvidia’s cuRAND Philox for CUDA. Despite good performance properties, neither of them is suitable for cryptographic applications. Over the course of the past several months, the PyTorch team developed the torchcsprng extension API. Based on the PyTorch dispatch mechanism and operator registration, it allows users to extend c10::GeneratorImpl and implement their own custom pseudorandom number generators.
torchcsprng generates a random 128-bit key on the CPU using one of its generators and then runs AES128 in CTR mode either on CPU or on GPU using CUDA. This generates a random 128-bit state, to which a transformation function is applied to map it to target tensor values. This approach is based on Parallel Random Numbers: As Easy as 1, 2, 3 (John K. Salmon, Mark A. Moraes, Ron O. Dror, and David E. Shaw, D. E. Shaw Research). It makes torchcsprng both crypto-secure and parallel on both CPU and CUDA. Since torchcsprng is a PyTorch extension, it is available on the platforms where PyTorch is available (support for Windows-CUDA will be available in the coming months).
Using torchcsprng

The torchcsprng API is very simple to use and is fully compatible with the PyTorch random infrastructure.

Step 1: Install via binary distribution.

Anaconda:
```
conda install torchcsprng -c pytorch
```
pip:
```
pip install torchcsprng
```
Step 2: Import packages as usual, but add csprng:
```python
import torch
import torchcsprng as csprng
```
Step 3: Create a cryptographically secure pseudorandom number generator from /dev/urandom:
```python
urandom_gen = csprng.create_random_device_generator('/dev/urandom')
```
and simply use it with the existing PyTorch methods:
```python
torch.randn(10, device='cpu', generator=urandom_gen)
```
Step 4: Test with CUDA. One of the advantages of torchcsprng generators is that they can be used with both CPU and CUDA tensors:
```python
torch.randn(10, device='cuda', generator=urandom_gen)
```
Another advantage of torchcsprng generators is that they are parallel on CPU, unlike the default PyTorch CPU generator.
Getting Started

The easiest way to get started with torchcsprng is by visiting the GitHub page, where you can find installation and build instructions, and more how-to examples.

Cheers,
The PyTorch Team

[1] Introduction to Modern Cryptography: Principles and Protocols (Chapman & Hall/CRC Cryptography and Network Security Series) by Jonathan Katz and Yehuda Lindell
layout: blog_detail
title: 'Overview of PyTorch Autograd Engine'
author: Preferred Networks, Inc.

This blog post is based on PyTorch version 1.8, although it should apply for older versions too, since most of the mechanics have remained constant. To help understand the concepts explained here, it is recommended that you read the awesome blog post by @ezyang: PyTorch internals, if you are not familiar with PyTorch architecture components such as ATen or c10d.
What is autograd?

Background

PyTorch computes the gradient of a function with respect to the inputs by using automatic differentiation. Automatic differentiation is a technique that, given a computational graph, calculates the gradients of the inputs. Automatic differentiation can be performed in two different ways: forward and reverse mode. Forward mode means that we calculate the gradients along with the result of the function, while reverse mode requires us to evaluate the function first, and then we calculate the gradients starting from the output. While both modes have their pros and cons, the reverse mode is the de-facto choice since in deep learning the number of outputs is typically smaller than the number of inputs, which allows a much more efficient computation. Check [3] to learn more about this.

Automatic differentiation relies on a classic calculus formula known as the chain rule. The chain rule allows us to calculate very complex derivatives by splitting them and recombining them later.
Formally speaking, given a composite function $h(x) = f(g(x))$, we can calculate its derivative as $h'(x) = f'(g(x)) \cdot g'(x)$. This result is what makes automatic differentiation work. By combining the derivatives of the simpler functions that compose a larger one, such as a neural network, it is possible to compute the exact value of the gradient at a given point rather than relying on the numerical approximation, which would require multiple perturbations in the input to obtain a value.
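To make that concrete, here is a small check (not from the original post) that reverse-mode autograd returns the exact derivative of a composite function, while a finite-difference estimate needs extra evaluations and is only approximate:

```python
import torch

# Composite function: y = log(g(x)) with g(x) = x * x, so dy/dx = 2 / x.
x = torch.tensor(1.5, requires_grad=True)
y = torch.log(x * x)
y.backward()
print(x.grad)            # exact: 2 / 1.5 = 1.3333...

# Finite differences: one extra evaluation per perturbed input, approximate result.
eps = 1e-6
with torch.no_grad():
    approx = (torch.log((x + eps) * (x + eps)) - torch.log(x * x)) / eps
print(approx)            # close to 1.3333, but not exact
```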
To get the intuition of how the reverse mode works, let’s look at a simple function $f(x, y) = \log(x \cdot y)$. Figure 1 shows its computational graph, where the inputs x, y on the left flow through a series of operations to generate the output z.

Figure 1: Computational graph of f(x, y) = log(x*y)

The automatic differentiation engine will normally execute this graph. It will also extend it to calculate the derivatives of w with respect to the inputs x, y, and the intermediate result v.
The example function can be decomposed into f and g, where $w = f(v) = \log(v)$ and $v = g(x, y) = x \cdot y$. Every time the engine executes an operation in the graph, the derivative of that operation is added to the graph to be executed later in the backward pass. Note that the engine knows the derivatives of the basic functions.
In the example above, when multiplying x and y to obtain v, the engine will extend the graph to calculate the partial derivatives of the multiplication by using the multiplication derivative definition that it already knows: $\frac{\partial v}{\partial x} = y$ and $\frac{\partial v}{\partial y} = x$. The resulting extended graph is shown in Figure 2, where the MultDerivative node also calculates the product of the resulting gradients by an input gradient to apply the chain rule; this will be explicitly seen in the following operations. Note that the backward graph (green nodes) will not be executed until all the forward steps are completed.
Figure 2: Computational graph extended after executing the multiplication
Continuing, the engine now calculates the $w = \log(v)$ operation and extends the graph again with the log derivative, which it knows to be $\frac{\partial w}{\partial v} = \frac{1}{v}$. This is shown in Figure 3. This operation generates the result $\frac{\partial w}{\partial v}$ that, when propagated backward and multiplied by the multiplication derivative as the chain rule states, generates the derivatives $\frac{\partial w}{\partial x}$ and $\frac{\partial w}{\partial y}$.
Figure 3: Computational graph extended after executing the logarithm

The original computation graph is extended with a new dummy variable z that is the same as w. The derivative of z with respect to w is 1, as they are the same variable; this trick allows us to apply the chain rule to calculate the derivatives of the inputs. After the forward pass is complete, we start the backward pass by supplying the initial value of 1.0 for $\frac{\partial z}{\partial w}$. This is shown in Figure 4.
Figure 4: Computational graph extended for reverse auto differentiation
Then, following the green graph, we execute the LogDerivative operation that the auto differentiation engine introduced, and multiply its result by $\frac{\partial z}{\partial w}$ to obtain the gradient $\frac{\partial z}{\partial v}$, as the chain rule states. Next, the multiplication derivative is executed in the same way, and the desired derivatives $\frac{\partial z}{\partial x}$ and $\frac{\partial z}{\partial y}$ are finally obtained.
Formally, what we are doing here, and what the PyTorch autograd engine also does, is computing a Jacobian-vector product (Jvp) to calculate the gradients of the model parameters, since the model parameters and inputs are vectors.

The Jacobian-vector product

When we calculate the gradient of a vector-valued function (a function whose inputs and outputs are vectors), we are essentially constructing a Jacobian matrix.
Thanks to the chain rule, multiplying the Jacobian matrix of a function by a vector with the previously calculated gradients of a scalar function results in the gradients of the scalar output with respect to the vector-valued function inputs.
As an example, let’s look at some functions in Python notation to show how the chain rule applies:

```python
from math import log, sin

def f(x1, x2):
    a = x1 * x2
    y1 = log(a)
    y2 = sin(x2)
    return (y1, y2)

def g(y1, y2):
    return y1 * y2
```
Now, if we derive this by hand using the chain rule and the definition of the derivatives, we obtain the following set of identities that we can directly plug into the Jacobian matrix of $f$:
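The identities were rendered as images in the original post and did not survive extraction; worked out directly from the definition of f above, they are:

$$
\frac{\partial y_1}{\partial x_1} = \frac{1}{x_1}, \qquad
\frac{\partial y_1}{\partial x_2} = \frac{1}{x_2}, \qquad
\frac{\partial y_2}{\partial x_1} = 0, \qquad
\frac{\partial y_2}{\partial x_2} = \cos(x_2),
$$

so that

$$
J_f = \begin{pmatrix} \dfrac{1}{x_1} & \dfrac{1}{x_2} \\ 0 & \cos(x_2) \end{pmatrix}.
$$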
Next, let’s consider the gradients for the scalar function $g$, which are $\frac{\partial g}{\partial y_1} = y_2$ and $\frac{\partial g}{\partial y_2} = y_1$.
If we now calculate the transpose-Jacobian vector product obeying the chain rule, we obtain the following expression:
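The expression was also an image in the original post; reconstructed from the Jacobian of f and the gradient of g above, it reads:

$$
J_f^\top \nabla g =
\begin{pmatrix} \dfrac{1}{x_1} & 0 \\ \dfrac{1}{x_2} & \cos(x_2) \end{pmatrix}
\begin{pmatrix} y_2 \\ y_1 \end{pmatrix}
=
\begin{pmatrix} \dfrac{\sin(x_2)}{x_1} \\ \dfrac{\sin(x_2)}{x_2} + \log(x_1 x_2)\cos(x_2) \end{pmatrix}
$$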
Evaluating the Jvp for $x_1 = 0.5$ and $x_2 = 0.75$ yields approximately $(1.3633, 0.1912)$. We can execute the same expression in PyTorch and calculate the gradient of the input:

```python
>>> import torch
>>> x = torch.tensor([0.5, 0.75], requires_grad=True)
>>> y = torch.log(x[0] * x[1]) * torch.sin(x[1])
>>> y.backward(torch.tensor(1.0))
>>> x.grad
tensor([1.3633, 0.1912])
```
The result is the same as our hand-calculated Jacobian-vector product! However, PyTorch never constructed the matrix, as it could grow prohibitively large; instead, it created a graph of operations that is traversed backward while applying the Jacobian-vector products defined in tools/autograd/derivatives.yaml.

Going through the graph

Every time PyTorch executes an operation, the autograd engine constructs the graph to be traversed backward.
The reverse mode auto differentiation starts by adding a scalar variable $z = w$ at the end so that $\frac{\partial z}{\partial w} = 1$, as we saw in the introduction. This is the initial gradient value that is supplied to the Jvp engine calculation as we saw in the section above. In PyTorch, the initial gradient is explicitly set by the user when calling the backward method.
Then, the Jvp calculation starts but it never constructs the matrix. Instead, when PyTorch records the computational graph, the derivatives of the executed forward operations are added (Backward Nodes). Figure 5 shows a backward graph generated by the execution of the functions $f$ and $g$ seen before.

Figure 5: Computational Graph extended with the backward pass
Once the forward pass is done, the results are used in the backward pass where the derivatives in the computational graph are executed. The basic derivatives are stored in the tools/autograd/derivatives.yaml file and they are not regular derivatives but the Jvp versions of them [3]. They take their primitive function inputs and outputs as parameters along with the gradient of the function outputs with respect to the final outputs. By repeatedly multiplying the resulting gradients by the next Jvp derivatives in the graph, the gradients up to the inputs will be generated following the chain rule.

Figure 6: How the chain rule is applied in backward differentiation
Figure 6 represents the process by showing the chain rule. We start with a value of 1.0, as detailed before, which is the already calculated gradient highlighted in green, and we move to the next node in the graph. The backward function registered in derivatives.yaml will calculate the associated
value, highlighted in red, and multiply it by the already calculated gradient (the green value). By the chain rule this produces the gradient for the preceding node, which will in turn be the already calculated (green) gradient when we process the next backward node in the graph.
You may also have noticed that in Figure 5 there is a gradient that is generated from two different sources: when two different functions share an input (in our example, $x_2$ is used by both the multiplication and the sin), the gradients with respect to that input are aggregated, and calculations using that gradient can’t proceed unless all the paths have been aggregated together.

Let’s see an example of how the derivatives are stored in PyTorch.
Suppose that we are currently processing the backward propagation of the $\log$ function, in the LogBackward node in Figure 2. The derivative of $\log$ in derivatives.yaml is specified as grad.div(self.conj()). grad is the already calculated gradient and self.conj() is the complex conjugate of the input vector. For complex numbers PyTorch calculates a special derivative called the conjugate Wirtinger derivative [6]. This derivative takes the complex number and its conjugate and, by operating some magic that is described in [6], yields the direction of steepest descent when plugged into optimizers.
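As a quick sanity check (not from the original post) of what grad.div(self.conj()) means for a real-valued input, the following reproduces the log backward by hand and compares it with autograd:

```python
import torch

# The backward of log is grad.div(self.conj()); for real tensors self.conj() == self,
# so this is simply the upstream gradient divided by the input.
x = torch.tensor([2.0, 4.0], requires_grad=True)
y = torch.log(x)

upstream = torch.tensor([1.0, 1.0])  # gradient flowing in from later nodes
y.backward(upstream)

print(x.grad)                           # tensor([0.5000, 0.2500])
print(upstream.div(x.detach().conj()))  # the same values, computed by hand
```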
This code translates to $\frac{1}{v} \cdot \frac{\partial z}{\partial w}$, the corresponding red and green squares in Figure 3. Continuing, the autograd engine will execute the next operation, the backward of the multiplication. As before, the inputs are the original function’s inputs and the gradient calculated from the backward step. This step will keep repeating until we reach the gradient with respect to the inputs and the computation will be finished. The gradient of $x_2$ is only completed once the multiplication and sin gradients are added together. As you can see, we computed the equivalent of the Jvp, but without constructing the matrix.
In the next post we will dive inside PyTorch code to see how this graph is constructed and where the relevant pieces are, should you want to experiment with it!

References

1. https://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html
2. https://web.stanford.edu/class/cs224n/readings/gradient-notes.pdf
3. https://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/slides/lec10.pdf
4. https://mustafaghali11.medium.com/how-pytorch-backward-function-works-55669b3b7c62
5. https://indico.cern.ch/event/708041/contributions/3308814/attachments/1813852/2963725/automatic_differentiation_and_deep_learning.pdf
6. https://pytorch.org/docs/stable/notes/autograd.html#complex-autograd-doc
7. https://cs.ubc.ca/~fwood/CS340/lectures/AD1.pdf (recommended: shows why backprop is formally expressed with the Jacobian)
layout: blog_detail
title: 'PyTorch 1.8 Release, including Compiler and Distributed Training updates, and New Mobile Tutorials'
author: Team PyTorch

We are excited to announce the availability of PyTorch 1.8. This release is composed of more than 3,000 commits since 1.7. It includes major updates and new features for compilation, code optimization, frontend APIs for scientific computing, and AMD ROCm support through binaries that are available via pytorch.org. It also provides improved features for large-scale training for pipeline and model parallelism, and gradient compression. A few of the highlights include:

1. Support for doing python to python functional transformations via torch.fx;
2. Added or stabilized APIs to support FFTs (torch.fft), Linear Algebra functions (torch.linalg), added support for autograd for complex tensors and updates to improve performance for calculating hessians and jacobians; and
3. Significant updates and improvements to distributed training, including: improved NCCL reliability; pipeline parallelism support; RPC profiling; and support for communication hooks adding gradient compression.

See the full release notes here. Along with 1.8, we are also releasing major updates to PyTorch libraries including TorchCSPRNG, TorchVision, TorchText and TorchAudio. For more on the library releases, see the post here. As previously noted, features in PyTorch releases are classified as Stable, Beta and Prototype. You can learn more about the definitions in the post here.
New and Updated APIs

The PyTorch 1.8 release brings a host of new and updated API surfaces, ranging from additional APIs for NumPy compatibility to ways to improve and scale your code for performance at both inference and training time. Here is a brief summary of the major features coming in this release:

[Stable] Torch.fft support for high performance NumPy style FFTs

As part of PyTorch’s goal to support scientific computing, we have invested in improving our FFT support, and with PyTorch 1.8 we are releasing the torch.fft module. This module implements the same functions as NumPy’s np.fft module, but with support for hardware acceleration and autograd.
* See this blog post for more details
* Documentation
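A small usage sketch (not from the release notes) showing a round trip through the new module:

```python
import torch

x = torch.randn(1024)
spectrum = torch.fft.fft(x)                   # complex-valued output
x_back = torch.fft.ifft(spectrum).real        # inverse transform
print(torch.allclose(x, x_back, atol=1e-5))   # True, up to numerical error
```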
[Beta] Support for NumPy style linear algebra functions via torch.linalg

The torch.linalg module, modeled after NumPy’s np.linalg module, brings NumPy-style support for common linear algebra operations including Cholesky decompositions, determinants, eigenvalues and many others.
* Documentation

[Beta] Python code Transformations with FX

FX allows you to write transformations of the form transform(input_module : nn.Module) -> nn.Module, where you can feed in a Module instance and get a transformed Module instance out of it.
This kind of functionality is applicable in many scenarios. For example, the FX-based Graph Mode Quantization product is being released as a prototype contemporaneously with FX. Graph Mode Quantization automates the process of quantizing a neural net and does so by leveraging FX’s program capture, analysis, and transformation facilities. We are also developing many other transformation products with FX and we are excited to share this powerful toolkit with the community. Because FX transforms consume and produce nn.Module instances, they can be used within many existing PyTorch workflows. This includes workflows that, for example, train in Python then deploy via TorchScript.
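As a hedged illustration of the transform(nn.Module) -> nn.Module shape described above (the module and the relu-to-gelu rewrite are made up for this example):

```python
import torch
import torch.fx

class MyModule(torch.nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

def transform(m: torch.nn.Module) -> torch.nn.Module:
    # Capture the module as a graph, rewrite it, and return a new module.
    gm = torch.fx.symbolic_trace(m)
    for node in gm.graph.nodes:
        if node.op == "call_function" and node.target is torch.relu:
            node.target = torch.nn.functional.gelu  # illustrative rewrite
    gm.recompile()
    return gm

print(transform(MyModule())(torch.randn(4)))
```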
You can read more about FX in the official documentation. You can also find several examples of program transformations implemented using torch.fx here. We are constantly improving FX and invite you to share any feedback you have about the toolkit on the forums or issue tracker. We’d like to acknowledge TorchScript tracing, Apache MXNet hybridize, and more recently JAX as influences for program acquisition via tracing. We’d also like to acknowledge Caffe2, JAX, and TensorFlow as inspiration for the value of simple, directed dataflow graph program representations and transformations over those representations.
Distributed Training

The PyTorch 1.8 release added a number of new features as well as improvements to reliability and usability. Concretely: stable-level async error/timeout handling was added to improve NCCL reliability, and RPC-based profiling is now stable. Additionally, we have added support for pipeline parallelism as well as gradient compression through the use of communication hooks in DDP. Details are below:

[Beta] Pipeline Parallelism

As machine learning models continue to grow in size, traditional Distributed DataParallel (DDP) training no longer scales as these models don’t fit on a single GPU device. The new pipeline parallelism feature provides an easy to use PyTorch API to leverage pipeline parallelism as part of your training loop.
* RFC
* Documentation

[Beta] DDP Communication Hook

The DDP communication hook is a generic interface to control how to communicate gradients across workers by overriding the vanilla allreduce in DistributedDataParallel. A few built-in communication hooks are provided, including PowerSGD, and users can easily apply any of these hooks to optimize communication. Additionally, the communication hook interface can also support user-defined communication strategies for more advanced use cases.
* RFC
* Documentation
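A hedged sketch of registering one of the built-in hooks (assumes a torchrun launch with one GPU per process; the model is a placeholder):

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks

# Process group and device setup, using the environment variables set by torchrun.
dist.init_process_group("nccl")
local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 1024).cuda(local_rank)
ddp_model = DDP(model, device_ids=[local_rank])

# Compress gradients to fp16 before the allreduce; PowerSGD or user-defined hooks
# can be registered through the same interface.
ddp_model.register_comm_hook(state=None, hook=default_hooks.fp16_compress_hook)
```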
Additional Prototype Features for Distributed Training

In addition to the major stable and beta distributed training features in this release, we also have a number of prototype features available in our nightlies to try out and provide feedback. We have linked in the draft docs below for reference:

(Prototype) ZeroRedundancyOptimizer - Based on and in partnership with the Microsoft DeepSpeed team, this feature helps reduce per-process memory footprint by sharding optimizer states across all participating processes in the ProcessGroup gang. Refer to this documentation for more details.

(Prototype) Process Group NCCL Send/Recv - The NCCL send/recv API was introduced in v2.7, and this feature adds support for it in NCCL process groups. This feature gives users an option to implement collective operations at the Python layer instead of the C++ layer. Refer to this documentation and code examples to learn more.
(Prototype) CUDA support in RPC using TensorPipe - This feature should bring significant speed improvements for users of PyTorch RPC with multi-GPU machines, as TensorPipe will automatically leverage NVLink when available and avoid costly copies to and from host memory when exchanging GPU tensors between processes. When not on the same machine, TensorPipe will fall back to copying the tensor to host memory and sending it as a regular CPU tensor. This will also improve the user experience, as users will be able to treat GPU tensors like regular CPU tensors in their code. Refer to this documentation for more details.
(Prototype) Remote Module - This feature allows users to operate a module on a remote worker as if using a local module, where the RPCs are transparent to the user. In the past, this functionality was implemented in an ad-hoc way; overall, this feature will improve the usability of model parallelism on PyTorch. Refer to this documentation for more details.

PyTorch Mobile

Support for PyTorch Mobile is expanding with a new set of tutorials to help new users launch models on-device quicker and give existing users a tool to get more out of our framework. These include:
* Image segmentation DeepLabV3 on iOS
* Image segmentation DeepLabV3 on Android
Our new demo apps also include examples of image segmentation, object detection, neural machine translation, question answering, and vision transformers. They are available on both iOS and Android:
* iOS demo app
* Android demo app

In addition to performance improvements on CPU for MobileNetV3 and other models, we also revamped our Android GPU backend prototype for broader model coverage and faster inferencing:
* Android tutorial

Lastly, we are launching the PyTorch Mobile Lite Interpreter as a prototype feature in this release. The Lite Interpreter allows users to reduce the runtime binary size. Please try these out and send us your feedback on the PyTorch Forums. All our latest updates can be found on the PyTorch Mobile page.
[Prototype] PyTorch Mobile Lite Interpreter

PyTorch Lite Interpreter is a streamlined version of the PyTorch runtime that can execute PyTorch programs on resource-constrained devices, with reduced binary size footprint. This prototype feature reduces binary sizes by up to 70% compared to the current on-device runtime in the current release.
* iOS/Android Tutorial

Performance Optimization

In 1.8, we are releasing support for benchmark utils to enable users to better monitor performance. We are also opening up a new automated quantization API. See the details below:

(Beta) Benchmark utils

Benchmark utils allows users to take accurate performance measurements, and provides composable tools to help with both benchmark formulation and post processing. This is expected to be helpful for contributors to PyTorch to quickly understand how their contributions are impacting PyTorch performance.
Example:

```python
from torch.utils.benchmark import Timer

results = []
for num_threads in [1, 2, 4]:
    timer = Timer(
        stmt="torch.add(x, y, out=out)",
        setup="""
            n = 1024
            x = torch.ones((n, n))
            y = torch.ones((n, 1))
            out = torch.empty((n, n))
        """,
        num_threads=num_threads,
    )
    results.append(timer.blocked_autorange(min_run_time=5))
    print(
        f"{num_threads} thread{'s' if num_threads > 1 else ' ':<4}"
        f"{results[-1].median * 1e6:>4.0f} us " +
        (f"({results[0].median / results[-1].median:.1f}x)" if num_threads > 1 else '')
    )
```

```
1 thread     376 us
2 threads    189 us (2.0x)
4 threads     99 us (3.8x)
```

* Documentation
* Tutorial
(Prototype) FX Graph Mode Quantization

FX Graph Mode Quantization is the new automated quantization API in PyTorch. It improves upon Eager Mode Quantization by adding support for functionals and automating the quantization process, although people might need to refactor the model to make it compatible with FX Graph Mode Quantization (symbolically traceable with torch.fx).
* Documentation
* Tutorials:
  * (Prototype) FX Graph Mode Post Training Dynamic Quantization
  * (Prototype) FX Graph Mode Post Training Static Quantization
  * (Prototype) FX Graph Mode Quantization User Guide

Hardware Support

[Beta] Ability to Extend the PyTorch Dispatcher for a new backend in C++
In PyTorch 1.8, you can now create new out-of-tree devices that live outside the pytorch/pytorch repo. The tutorial linked below shows how to register your device and keep it in sync with native PyTorch devices.
* Tutorial

[Beta] AMD GPU Binaries Now Available

Starting in PyTorch 1.8, we have added support for ROCm wheels, providing an easy onboarding to using AMD GPUs. You can simply go to the standard PyTorch installation selector, choose ROCm as an installation option, and execute the provided command.

Thanks for reading, and if you are excited about these updates and want to participate in the future of PyTorch, we encourage you to join the discussion forums and open GitHub issues.

Cheers!
Team PyTorch
layout: blog_detail
title: "Introducing Accelerated PyTorch Training on Mac"
author: PyTorch
featured-img: "/assets/images/METAPT-002-BarGraph-02-static.png"

In collaboration with the Metal engineering team at Apple, we are excited to announce support for GPU-accelerated PyTorch training on Mac. Until now, PyTorch training on Mac only leveraged the CPU, but with the upcoming PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac.
Metal Acceleration

Accelerated GPU training is enabled using Apple’s Metal Performance Shaders (MPS) as a backend for PyTorch. The MPS backend extends the PyTorch framework, providing scripts and capabilities to set up and run operations on Mac. MPS optimizes compute performance with kernels that are fine-tuned for the unique characteristics of each Metal GPU family. The new device maps machine learning computational graphs and primitives onto the MPS Graph framework and tuned kernels provided by MPS.

Training Benefits on Apple Silicon

Every Apple silicon Mac has a unified memory architecture, providing the GPU with direct access to the full memory store. This makes Mac a great platform for machine learning, enabling users to train larger networks or batch sizes locally. This reduces costs associated with cloud-based development or the need for additional local GPUs. The unified memory architecture also reduces data retrieval latency, improving end-to-end performance.
In the graphs below, you can see the performance speedup from accelerated GPU training and evaluation compared to the CPU baseline:

[Chart: Accelerated GPU training and evaluation speedups over CPU-only (times faster)]

Getting Started

To get started, just install the latest Preview (Nightly) build on your Apple silicon Mac running macOS 12.3 or later with a native version (arm64) of Python. You can also learn more about Metal and MPS on Apple’s Metal page.
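A minimal sketch of opting into the new backend (assuming a PyTorch build with MPS support; the model and tensor shapes are arbitrary):

```python
import torch

# Use the Apple silicon GPU when available, otherwise fall back to the CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)
x = torch.randn(32, 128, device=device)
print(model(x).shape)  # torch.Size([32, 10])
```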
* Testing conducted by Apple in April 2022 using production Mac Studio systems with Apple M1 Ultra, 20-core CPU, 64-core GPU, 128GB of RAM, and 2TB SSD. Tested with macOS Monterey 12.3, prerelease PyTorch 1.12, ResNet50 (batch size=128), HuggingFace BERT (batch size=64), and VGG16 (batch size=64). Performance tests are conducted using specific computer systems and reflect the approximate performance of Mac Studio.
layout: blog_detail
title: 'How to Train State-Of-The-Art Models Using TorchVision’s Latest Primitives'
author: Vasilis Vryniotis
featured-img: 'assets/images/fx-image2.png'
A few weeks ago, TorchVision v0.11 was released, packed with numerous new primitives, models, and training recipe improvements which allowed achieving state-of-the-art (SOTA) results. The project was dubbed “TorchVision with Batteries Included” and aimed to modernize our library. We wanted to enable researchers to reproduce papers and conduct research more easily by using common building blocks. Moreover, we aspired to provide the necessary tools to Applied ML practitioners to train their models on their own data using the same SOTA techniques as in research. Finally, we wanted to refresh our pre-trained weights and offer better off-the-shelf models to our users, hoping that they would build better applications.
Though there is still much work to be done, we wanted to share with you some exciting results from the above work. We will showcase how one can use the new tools included in TorchVision to achieve state-of-the-art results on a highly competitive and well-studied architecture such as ResNet50 [1]. We will share the exact recipe used to improve our baseline by over 4.7 accuracy points, reaching a final top-1 accuracy of 80.9%, and share the journey for deriving the new training process. Moreover, we will show that this recipe generalizes well to other model variants and families. We hope that the above will influence future research for developing stronger generalizable training methodologies and will inspire the community to adopt and contribute to our efforts.

The Results

Using our new training recipe found on ResNet50, we’ve refreshed the pre-trained weights of the following models:
| Model | Accuracy@1 | Accuracy@5 |
|----------|:--------:|:----------:|
| ResNet50 | 80.858 | 95.434 |
| ResNet101 | 81.886 | 95.780 |
| ResNet152 | 82.284 | 96.002 |
| ResNeXt50-32x4d | 81.198 | 95.340 |

Note that the accuracy of all models except ResNet50 can be further improved by adjusting their training parameters slightly, but our focus was to have a single robust recipe which performs well for all.

UPDATE: We have refreshed the majority of popular classification models of TorchVision; you can find the details on this blog post.

There are currently two ways to use the latest weights of the model.
Using the Multi-pretrained weight API

We are currently working on a new prototype mechanism which will extend the model builder methods of TorchVision to support multiple weights. Along with the weights, we store useful meta-data (such as the labels, the accuracy, links to the recipe, etc.) and the preprocessing transforms necessary for using the models. Example:

```python
from PIL import Image
from torchvision import prototype as P
img = Image.open("test/assets/encode_jpeg/grace_hopper_517x606.jpg")

# Initialize model
weights = P.models.ResNet50_Weights.IMAGENET1K_V2
model = P.models.resnet50(weights=weights)
model.eval()

# Initialize inference transforms
preprocess = weights.transforms()

# Apply inference preprocessing transforms
batch = preprocess(img).unsqueeze(0)
prediction = model(batch).squeeze(0).softmax(0)

# Make predictions
label = prediction.argmax().item()
score = prediction[label].item()

# Use meta to get the labels
category_name = weights.meta['categories'][label]
print(f"{category_name}: {100 * score}%")
```

## Using the legacy API

Those who don’t want to use a prototype API have the option of accessing the new weights via the legacy API using the following approach:

```python
from torchvision.models import resnet

# Overwrite the URL of the previous weights
resnet.model_urls["resnet50"] = "https://download.pytorch.org/models/resnet50-11ad3fa6.pth"

# Initialize the model using the legacy API
model = resnet.resnet50(pretrained=True)

# TODO: Apply preprocessing + call the model
# ...
```
The Training Recipe

Our goal was to use the newly introduced primitives of TorchVision to derive a new strong training recipe which achieves state-of-the-art results for the vanilla ResNet50 architecture when trained from scratch on ImageNet with no additional external data. Though by using architecture-specific tricks [2] one could further improve the accuracy, we’ve decided not to include them so that the recipe can be used for other architectures. Our recipe heavily focuses on simplicity and builds upon work by FAIR [3], [4], [5], [6], [7]. Our findings align with the parallel study of Wightman et al. [7], who also report major accuracy improvements by focusing on the training recipes.
Without further ado, here are the main parameters of our recipe:

```python
# Optimizer & LR scheme
ngpus=8,
batch_size=128,  # per GPU
epochs=600,
opt='sgd',
momentum=0.9,
lr=0.5,
lr_scheduler='cosineannealinglr',
lr_warmup_epochs=5,
lr_warmup_method='linear',
lr_warmup_decay=0.01,

# Regularization and Augmentation
weight_decay=2e-05,
norm_weight_decay=0.0,
label_smoothing=0.1,
mixup_alpha=0.2,
cutmix_alpha=1.0,
auto_augment='ta_wide',
random_erase=0.1,
ra_sampler=True,
ra_reps=4,

# EMA configuration
model_ema=True,
model_ema_steps=32,
model_ema_decay=0.99998,

# Resizing
interpolation='bilinear',
val_resize_size=232,
val_crop_size=224,
train_crop_size=176,
```

Using our standard training reference script, we can train a ResNet50 using the following command:

```
torchrun --nproc_per_node=8 train.py --model resnet50 --batch-size 128 --lr 0.5 \
--lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear \
--auto-augment ta_wide --epochs 600 --random-erase 0.1 --weight-decay 0.00002 \
--norm-weight-decay 0.0 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0 \
--train-crop-size 176 --model-ema --val-resize-size 232 --ra-sampler --ra-reps 4
```

Methodology

There are a few principles we kept in mind during our explorations:

1. Training is a stochastic process and the validation metric we try to optimize is a random variable. This is due to the random weight initialization scheme employed and the existence of random effects during the training process. This means that we can’t do a single run to assess the effect of a recipe change. The standard practice is doing multiple runs (usually 3 to 5) and studying the summarization stats (such as mean, std, median, max, etc.).
2. There is usually a significant interaction between different parameters, especially for techniques that focus on regularization and reducing overfitting. Thus, changing the value of one can have effects on the optimal configurations of others. To account for that, one can either adopt a greedy search approach (which often leads to suboptimal results but tractable experiments) or apply grid search (which leads to better results but is computationally expensive). In this work, we used a mixture of both.
3. Techniques that are non-deterministic or introduce noise usually require longer training cycles to improve model performance. To keep things tractable, we initially used short training cycles (a small number of epochs) to decide which paths can be eliminated early and which should be explored using longer training.
4. There is a risk of overfitting the validation dataset [8] because of the repeated experiments. To mitigate some of the risk, we apply only training optimizations that provide significant accuracy improvements and use K-fold cross-validation to verify optimizations done on the validation set. Moreover, we confirm that our recipe ingredients generalize well to other models for which we didn't optimize the hyper-parameters.
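As an illustration of the first point, here is a minimal sketch of aggregating the validation metric over repeated runs; the accuracy values below are hypothetical placeholders, not measured results:

```python
from statistics import mean, median, stdev

# Hypothetical Acc@1 values from 3 independent runs of the same recipe.
run_acc1 = [80.36, 80.45, 80.41]

print(f"mean={mean(run_acc1):.3f} std={stdev(run_acc1):.3f} "
      f"median={median(run_acc1):.3f} max={max(run_acc1):.3f}")
```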
## Break down of key accuracy improvements

As discussed in earlier blog posts, training models is not a journey of monotonically increasing accuracies and the process involves a lot of backtracking. To quantify the effect of each optimization, below we attempt to showcase an idealized linear journey of deriving the final recipe starting from the original recipe of TorchVision. We would like to clarify that this is an oversimplification of the actual path we followed and thus it should be taken with a grain of salt.

In the table below, we provide a summary of the performance of stacked incremental improvements on top of the Baseline. Unless denoted otherwise, we report the model with the best Acc@1 out of 3 runs:
| | Accuracy@1 | Accuracy@5 | Incremental Diff | Absolute Diff |
| ---------- | :--------: | :--------: | :--------: | :--------: |
| ResNet50 Baseline | 76.130 | 92.862 | 0.000 | 0.000 |
| + LR optimizations | 76.494 | 93.198 | 0.364 | 0.364 |
| + TrivialAugment | 76.806 | 93.272 | 0.312 | 0.676 |
| + Long Training | 78.606 | 94.052 | 1.800 | 2.476 |
| + Random Erasing | 78.796 | 94.094 | 0.190 | 2.666 |
| + Label Smoothing | 79.114 | 94.374 | 0.318 | 2.984 |
| + Mixup | 79.232 | 94.536 | 0.118 | 3.102 |
| + Cutmix | 79.510 | 94.642 | 0.278 | 3.380 |
| + Weight Decay tuning | 80.036 | 94.746 | 0.526 | 3.906 |
| + FixRes mitigations | 80.196 | 94.672 | 0.160 | 4.066 |
| + EMA | 80.450 | 94.908 | 0.254 | 4.320 |
| + Inference Resize tuning * | 80.674 | 95.166 | 0.224 | 4.544 |
| + Repeated Augmentation ** | 80.858 | 95.434 | 0.184 | 4.728 |

*The tuning of the inference size was done on top of the last model. See below for details.

** Community contribution done after the release of the article. See below for details.

## Baseline

Our baseline is the previously released ResNet50 model of TorchVision. It was trained with the following recipe:

```python
# Optimizer & LR scheme
ngpus=8,
batch_size=32,  # per GPU
epochs=90,
opt='sgd',
momentum=0.9,
lr=0.1,
lr_scheduler='steplr',
lr_step_size=30,
lr_gamma=0.1,

# Regularization
weight_decay=1e-4,

# Resizing
interpolation='bilinear',
val_resize_size=256,
val_crop_size=224,
train_crop_size=224,
```

Most of the above parameters are the defaults of our training scripts. We will start building on top of this baseline by introducing optimizations until we gradually arrive at the final recipe.
## LR optimizations

There are a few parameter updates we can apply to improve both the accuracy and the speed of our training. This can be achieved by increasing the batch size and tuning the LR. Another common method is to apply warmup and gradually increase the learning rate. This is especially beneficial when we use very high learning rates and helps with the stability of the training in the early epochs. Finally, another optimization is to apply a cosine schedule to adjust the LR during the epochs. A big advantage of cosine is that there are no hyper-parameters to optimize, which cuts down our search space.

Here are the additional optimizations applied on top of the baseline recipe. Note that we've run multiple experiments to determine the optimal configuration of the parameters:

```python
batch_size=128,  # per GPU
lr=0.5,
lr_scheduler='cosineannealinglr',
lr_warmup_epochs=5,
lr_warmup_method='linear',
lr_warmup_decay=0.01,
```

The above optimizations increase our top-1 Accuracy by 0.364 points compared to the baseline. Note that in order to combine the different LR strategies we use the newly introduced [SequentialLR](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.SequentialLR.html#torch.optim.lr_scheduler.SequentialLR) scheduler.
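To make the scheduler combination concrete, here is a minimal sketch of chaining a linear warmup with a cosine schedule via SequentialLR. The model, optimizer setup and loop are illustrative stand-ins, not the exact reference-script code:

```python
import torch
from torch import nn, optim
from torch.optim.lr_scheduler import CosineAnnealingLR, LinearLR, SequentialLR

model = nn.Linear(10, 10)  # stand-in for the actual ResNet50
epochs, warmup_epochs = 600, 5

optimizer = optim.SGD(model.parameters(), lr=0.5, momentum=0.9, weight_decay=2e-05)

# Warm up linearly from 0.01 * lr to lr over the first 5 epochs (lr_warmup_decay=0.01), ...
warmup = LinearLR(optimizer, start_factor=0.01, total_iters=warmup_epochs)
# ... then decay with a cosine schedule for the remaining epochs.
cosine = CosineAnnealingLR(optimizer, T_max=epochs - warmup_epochs)
scheduler = SequentialLR(optimizer, schedulers=[warmup, cosine], milestones=[warmup_epochs])

for epoch in range(epochs):
    ...  # run one training epoch
    scheduler.step()
```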
## TrivialAugment

The original model was trained using basic augmentation transforms such as Random resized crops and horizontal flips. An easy way to improve our accuracy is to apply more complex "Automatic-Augmentation" techniques. The one that performed best for us is TrivialAugment [[9]](https://arxiv.org/abs/2103.10158), which is extremely simple and can be considered "parameter free", which means it can help us cut down our search space further. Here is the update applied on top of the previous step:

```python
auto_augment='ta_wide',
```

The use of TrivialAugment increased our top-1 Accuracy by 0.312 points compared to the previous step.
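As a rough sketch of where this fits in a training transform pipeline (the surrounding transforms mirror the recipe values above but are illustrative rather than the verbatim reference-script pipeline; TrivialAugmentWide is available in recent torchvision versions):

```python
import torch
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(176),
    transforms.RandomHorizontalFlip(),
    transforms.TrivialAugmentWide(),  # corresponds to auto_augment='ta_wide'
    transforms.PILToTensor(),
    transforms.ConvertImageDtype(torch.float),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
```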
## Long Training

Longer training cycles are beneficial when our recipe contains ingredients that behave randomly. More specifically, as we start adding more and more techniques that introduce noise, increasing the number of epochs becomes crucial. Note that at the early stages of our exploration we used relatively short cycles of roughly 200 epochs, which was later increased to 400 as we started narrowing down most of the parameters, and finally increased to 600 epochs in the final versions of the recipe. Below we see the update applied on top of the earlier steps:

```python
epochs=600,
```
This further increases our top-1 Accuracy by 1.8 points on top of the previous step. This is the biggest increase we will observe in this iterative process. It's worth noting that the effect of this single optimization is overstated and somewhat misleading. Just increasing the number of epochs on top of the old baseline won't yield such significant improvements. Nevertheless, the combination of the LR optimizations with strong augmentation strategies helps the model benefit from longer cycles. It's also worth mentioning that the reason we introduce the lengthy training cycles so early in the process is because in the next steps we will introduce techniques that require significantly more epochs to provide good results.
## Random Erasing

Another data augmentation technique known to help classification accuracy is Random Erasing [10], [11]. Often paired with Automatic Augmentation methods, it usually yields additional improvements in accuracy due to its regularization effect. In our experiments we tuned only the probability of applying the method via a grid search and found that it's beneficial to keep its probability at low levels, typically around 10%. Here is the extra parameter introduced on top of the previous:

```python
random_erase=0.1,
```

Applying Random Erasing increases our Acc@1 by a further 0.190 points.
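Since RandomErasing operates on tensor images, it would typically be appended after the tensor-conversion steps of the pipeline sketched earlier; a minimal, illustrative example (not the verbatim reference-script pipeline):

```python
import torch
from torchvision import transforms

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(176),
    transforms.RandomHorizontalFlip(),
    transforms.TrivialAugmentWide(),
    transforms.PILToTensor(),
    transforms.ConvertImageDtype(torch.float),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    transforms.RandomErasing(p=0.1),  # corresponds to random_erase=0.1
])
```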
## Label Smoothing

A good technique to reduce overfitting is to stop the model from becoming overconfident. This can be achieved by softening the ground truth using Label Smoothing [12]. There is a single parameter which controls the degree of smoothing (the higher the stronger) that we need to specify. Though optimizing it via grid search is possible, we found that values around 0.05-0.15 yield similar results, so to avoid overfitting it we used the same value as in the paper that introduced it. Below we can find the extra config added at this step:

```python
label_smoothing=0.1,
```

We use PyTorch's newly introduced CrossEntropyLoss label_smoothing parameter, which increases our accuracy by an additional 0.318 points.
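A minimal, illustrative sketch of wiring this into the loss (the batch contents are placeholders):

```python
import torch
from torch import nn

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)  # corresponds to label_smoothing=0.1

logits = torch.randn(8, 1000)            # a batch of 8 predictions over 1000 classes
targets = torch.randint(0, 1000, (8,))   # hard ground-truth labels
loss = criterion(logits, targets)        # the labels are softened inside the loss
```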
## Mixup and Cutmix

Two data augmentation techniques often used to produce SOTA results are Mixup and Cutmix [13], [14]. They both provide strong regularization effects by softening not only the labels but also the images. In our setup we found it beneficial to apply one of them randomly with equal probability. Each is parameterized with a hyperparameter alpha, which controls the shape of the Beta distribution from which the smoothing probability is sampled. We did a very limited grid search, focusing primarily on the common values proposed in the papers. Below you will find the optimal values for the alpha parameters of the two techniques:

```python
mixup_alpha=0.2,
cutmix_alpha=1.0,
```

Applying mixup increases our accuracy by 0.118 points and combining it with cutmix improves it by an additional 0.278 points.
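The reference scripts implement these as batch-level transforms; below is a condensed, illustrative sketch (not the exact reference implementation) of applying one of the two at random with equal probability:

```python
import math
import torch
import torch.nn.functional as F

def mixup_or_cutmix(images, targets, num_classes=1000, mixup_alpha=0.2, cutmix_alpha=1.0):
    """Mix a batch with a rolled copy of itself using either MixUp or CutMix."""
    images = images.clone()
    targets = F.one_hot(targets, num_classes).float()  # soften the labels as well as the images
    images2, targets2 = images.roll(1, 0), targets.roll(1, 0)

    if torch.rand(1).item() < 0.5:
        # MixUp: convex combination of the pixels, lam ~ Beta(0.2, 0.2).
        lam = torch.distributions.Beta(mixup_alpha, mixup_alpha).sample().item()
        images = lam * images + (1.0 - lam) * images2
    else:
        # CutMix: paste a random box from the rolled batch, lam ~ Beta(1.0, 1.0).
        lam = torch.distributions.Beta(cutmix_alpha, cutmix_alpha).sample().item()
        h, w = images.shape[-2:]
        cut_h, cut_w = int(h * math.sqrt(1.0 - lam)), int(w * math.sqrt(1.0 - lam))
        cy, cx = torch.randint(h, (1,)).item(), torch.randint(w, (1,)).item()
        y1, y2 = max(cy - cut_h // 2, 0), min(cy + cut_h // 2, h)
        x1, x2 = max(cx - cut_w // 2, 0), min(cx + cut_w // 2, w)
        images[..., y1:y2, x1:x2] = images2[..., y1:y2, x1:x2]
        lam = 1.0 - (y2 - y1) * (x2 - x1) / (h * w)  # re-weight by the pasted area

    targets = lam * targets + (1.0 - lam) * targets2
    return images, targets
```

Note that the softened targets then need a loss that accepts class probabilities rather than hard labels.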
## Weight Decay tuning

Our standard recipe uses L2 regularization to reduce overfitting. The Weight Decay parameter controls the degree of the regularization (the larger the stronger) and is applied universally to all learned parameters of the model by default. In this recipe, we apply two optimizations to the standard approach. First, we perform a grid search to tune the weight decay parameter, and second, we disable weight decay for the parameters of the normalization layers. Below you can find the optimal configuration of weight decay for our recipe:

```python
weight_decay=2e-05,
norm_weight_decay=0.0,
```

The above update improves our accuracy by a further 0.526 points, providing additional experimental evidence for the known fact that tuning weight decay has significant effects on the performance of the model. Our approach for separating the normalization parameters from the rest was inspired by ClassyVision's approach.
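A rough, illustrative sketch of excluding normalization parameters from weight decay by placing them in a separate parameter group (the grouping logic below is a simplified stand-in for the reference scripts' own utility):

```python
from torch import nn, optim
from torchvision.models import resnet50

model = resnet50()
norm_classes = (nn.BatchNorm2d, nn.GroupNorm, nn.LayerNorm)  # extend as needed

norm_params, other_params = [], []
for module in model.modules():
    # Only look at parameters owned directly by each module to avoid double-counting.
    for p in module.parameters(recurse=False):
        if p.requires_grad:
            (norm_params if isinstance(module, norm_classes) else other_params).append(p)

optimizer = optim.SGD(
    [
        {"params": other_params, "weight_decay": 2e-05},  # weight_decay=2e-05
        {"params": norm_params, "weight_decay": 0.0},     # norm_weight_decay=0.0
    ],
    lr=0.5,
    momentum=0.9,
)
```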
## FixRes mitigations