{"text": "\nlayout: blog_detail\ntitle: \"PyTorch Trace Analysis for the Masses\"\nauthor: Anupam Bhatnagar, Xizhou Feng, Brian Coutinho, Yifan Liu, Sung-Han Lin, Louis Feng, and Yuzhen Huang\n\nWe are excited to announce the public release of Holistic Trace Analysis (HTA), an open source performance analysis and visualization Python library for PyTorch users. HTA takes as input Kineto traces collected by the PyTorch profiler, which are complex and challenging to interpret, and up-levels the performance information contained in these traces. It was initially developed internally at Meta to understand and debug performance problems for large-scale distributed training jobs on GPUs. The multidisciplinary team has made a number of enhancements to HTA\u2019s features and scaled them to support state-of-the-art ML workloads.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "ML researchers and systems engineers often struggle to computationally scale up their models because they are not aware of the performance bottlenecks in their workloads. The resources requested for a job (e.g. GPUs, memory) are often misaligned with the resources actually required due to lack of visibility \u201cunder the hood\u201d. To achieve the best performance from the hardware stack, it is imperative to understand the resource utilization and bottlenecks for distributed training workloads.\nThe initial HTA implementation was specifically targeted at Deep Learning Based Recommendation Models (DLRM). To make the features in HTA generic and applicable to use cases such as analyzing Vision and NLP models, we decided to refactor the HTA codebase and make the library available to the larger community. This new codebase has implemented several important ideas which lead to significant efficiency and performance improvements.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "In this blog, we present several features implemented in the open source version of HTA, which can be used as a Python script as well as interactively in a Jupyter notebook. HTA provides the following features:\n\nBreakdown by Dimensions\nTemporal: Breakdown of GPU time in terms of time spent in computation, communication, memory events, and idle time on a single node and across all ranks.\nIdle Time: Breakdown of GPU idle time into waiting for the host, waiting for another kernel or attributed to an unknown cause.\nKernel: Find kernels with the longest duration on each rank.\nCommunication Computation Overlap: Calculate the percentage of time when communication overlaps computation.\n\n\nStatistical Analysis\nKernel Duration Distribution: Distribution of average time taken by longest kernels across different ranks.\nCUDA Kernel Launch: Distributions of GPU kernels with very small duration, large duration, and excessive launch time.\n\n\n", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "\nAugmented Counters (Memory bandwidth, Queue length): Augmented trace files which provide insights into memory copy bandwidth and number of outstanding operations on each CUDA stream.\nPatterns\nFrequent CUDA Kernels: Find the CUDA kernels most frequently launched by any given PyTorch or user defined operator.\n\n\nTrace Comparison\nTrace Diff: A trace comparison tool to identify and visualize the differences between traces.\n\n\n\nHTA source code is available to users via Github. 
Users can request new features or build their own analysis using the core libraries and data structures provided in the codebase in addition to the features mentioned above.\nGPU Training Performance Debugging 101\nTo understand the GPU performance in distributed training jobs, we consider how the model operators interact with the GPU devices and how such interactions are reflected in certain measurable metrics.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "At a high level, we can break down the GPU operations in a model execution into three broad categories, henceforth referred to as kernel types: \n1. Computation (COMP) - Compute kernels execute compiled routines for matrix multiplication and similar numeric calculations. They are responsible for all of the number-crunching necessary for model execution. \n1. Communication (COMM) - Communication kernels are routines which are responsible for exchanging and synchronizing data between different GPU devices in a distributed training job. The NVIDIA Collective Communication Library (NCCL) is a widely used communication library and all its kernels have the prefix \u201cnccl\u201d. Example NCCL kernels include NCCL_AllGather, NCCL_ReduceScatter, NCCL_AllReduce, etc.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "\nMemory (MEM) - Memory kernels manage the memory allocations/deallocations on the GPU devices and data movement between the memory space on the host and the GPUs. The memory kernels include Memcpy_H2D, Memcpy_D2H, Memcpy_D2D, Memset, etc. Here, H represents the Host and D represents the GPU Device. Thus, H2D, D2H, D2D stands for Host to Device, Device to Host and Device to Device respectively. \n\nBecause a modern GPU device like the NVIDIA A100 GPU is a massively parallel device which is capable of running multiple kernels simultaneously, it is possible to overlap the computation, communication, and memory kernels to reduce the model execution time. One common technique to achieve the overlap is to utilize multiple CUDA streams. A CUDA stream is a sequence of operations that execute on a GPU device in the order in which they are issued by the host code. Different CUDA streams can be interleaved and even run concurrently, thus achieving the effect of kernel overlap.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "To help understand the above concepts, Figure 1 provides a timeline of the GPU kernels in a sample distributed training job on 8 GPUs for one iteration. In the figure below, each rank represents one GPU and the kernels on each GPU run on 6 CUDA streams. In the right column of the figure, you can see names of the GPU kernels used. In the middle of the figure, you see the overlap between compute and communicate kernels. This figure is created using the plot_timeline example notebook available in HTA.\n\nFigure 1. An example of the execution timeline of GPU Kernels across multiple ranks", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "\nThe performance of multiple GPU training jobs is affected by multiple factors. Among these factors, how does a model execution create and orchestrate the GPU kernels plays a critical role. 
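One concrete way to reason about this orchestration is to bucket every kernel into one of the three types introduced above. As a rough illustration only (this is not HTA's implementation), a minimal classifier based on the kernel-name conventions mentioned in this post could look like:

```python
def classify_kernel(name: str) -> str:
    """Rough kernel-type bucketing based on the naming conventions described above."""
    if name.lower().startswith("nccl"):        # communication collectives, e.g. NCCL_AllReduce
        return "COMM"
    if name.startswith(("Memcpy", "Memset")):  # H2D / D2H / D2D copies and memsets
        return "MEM"
    return "COMP"                              # everything else is treated as computation

# classify_kernel("NCCL_AllGather") -> "COMM"; classify_kernel("Memcpy_H2D") -> "MEM"
```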
HTA provides insights on how the model execution interacts with the GPU devices and highlights the opportunities for performance improvement.\nWith the features we built in HTA, we aim to provide users insights into \u201cwhat is happening under the hood in a distributed GPU training?\u201d We briefly describe these features in the next few paragraphs.\nFeatures in Holistic Trace Analysis\nFor most users, understanding the performance of GPU training jobs is nontrivial. Thus, we built this library to simplify the task of trace analysis and provide the user useful insights by examining the model execution traces. As the first step, we developed features which are important and generic enough so that most users can benefit from this library.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "Temporal Breakdown: We begin by asking whether the GPU is spending time on computation, communication, memory events, or is it idle? To answer this question, the temporal breakdown feature presents a breakdown in terms of these categories. To achieve high training efficiency the code should maximize time used by computation kernels and minimize idle time and non-compute time (time used by communication or memory kernels). This is accomplished by implementing concurrent execution of computation kernels with communication or memory kernels. Note that, during concurrent execution of computation kernels with communication/memory kernels the time spent by communication/memory kernels is accounted for under compute time.\n\nFigure 2: Temporal Breakdown across 8 GPUs", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "\nKernel Breakdown: It is natural to ask which kernels are taking the most amount of time. The next feature breaks down the time spent within each kernel type (COMM, COMP, MEM) and sorts them by duration. We present this information for each kernel type and for each rank as a pie chart. See figure 3 below. \n\nFigure 3: Pie chart of top computation and communication kernels", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "\nKernel Duration Distribution: Subsequently, one can also ask - for any given kernel, what is the distribution of the time spent across the ranks? To answer this, HTA generates bar graphs for the average duration of a given kernel across all ranks. Additionally, the error bars in the bar graphs show the minimum and maximum amount of time taken by a given kernel on a given rank. Figure 4 below shows a discrepancy between average duration on rank 0 as compared to other ranks. This anomalous behavior on rank 0 guides the user on where to look for possible bugs.\n\nFigure 4: Average duration of NCCL AllReduce Kernel across 8 ranks", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "\nCommunication Computation Overlap: In distributed training, a significant amount of time is spent in communication and synchronization events among multiple GPU devices. To achieve high GPU efficiency (i.e. TFLOPS/GPU) it is vital to keep the GPU doing actual computation work. In other words, a GPU should not be blocked because of waiting for data from other GPUs. One way to measure the extent to which computation is blocked by data dependencies is to calculate the computation-communication overlap. Higher GPU efficiency is observed if communication events overlap computation events. 
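Before looking at the exact definition below, a toy sketch helps show what such an overlap metric captures. This is a simplified illustration over (start, end) kernel intervals, not HTA's implementation, and it assumes the compute intervals do not overlap one another:

```python
def overlap_ratio(comp_intervals, comm_intervals):
    """Fraction of communication time that is hidden under computation.
    Each interval is a (start, end) pair in the same time unit."""
    overlapped = 0.0
    for comm_start, comm_end in comm_intervals:
        for comp_start, comp_end in comp_intervals:
            overlapped += max(0.0, min(comm_end, comp_end) - max(comm_start, comp_start))
    total_comm = sum(end - start for start, end in comm_intervals)
    return overlapped / total_comm

# Toy example: 10 ms of communication, 6 ms of it hidden under compute -> 0.6
print(overlap_ratio(comp_intervals=[(0, 8)], comm_intervals=[(2, 8), (9, 13)]))
```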
Lack of communication and computation overlap will lead to the GPU being idle, thus the efficiency would be low. Thus, the communication computation overlap feature calculates the percentage of time communication and computation overlap in a job for each rank and generates a bar graph representation. See figure below. More precisely, we measure the following ratio", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "(time spent in computation while communicating) / (time spent in communication)\n\nFigure 5: Communication computation overlap\nAugmented Counters (Queue length, Memory bandwidth): To aid in debugging, HTA calculates the memory bandwidth statistics for D2H, H2D and D2D memory copy (memcpy) and memory set (memset) events. Additionally, HTA also computes the number of outstanding CUDA operations on each CUDA stream. We refer to this as queue length. When the queue length on a stream is 1024 or larger new events cannot be scheduled on that stream and the CPU will stall until the GPU events have processed. Additionally, HTA generates a new trace file containing tracks with the memory bandwidth and queue length time series. See Figure 6 below.\n", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "Figure 6: Memory Bandwidth and Queue Length\nThese primary features give us a peek into the system performance and help answer \u201cwhat is happening in the system?\u201d. As HTA evolves, we hope to address \u201cwhy is X happening?\u201d and also suggest possible solutions to overcome the bottlenecks.\nInstallation and Usage\nInstallation\nFor installing the HTA please refer to the README. In brief, the user is required to clone the repo and install the necessary Python packages via pip.\nUsage", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "Usage\nThis version of Holistic Trace Analysis is currently in beta and we recommend using HTA in a Jupyter notebook. A demo notebook is provided for your convenience. To get started, import the hta package in a Jupyter notebook, create a TraceAnalysis object and off we go in exactly two lines of code.\nfrom hta.trace_analysis import TraceAnalysis\nanalyzer = TraceAnalysis(trace_dir = \u201c/trace/folder/path\u201d)\n\nRequirements\n\nAll trace files for a training or inference job must be stored in a unique folder.\nTrace files are in json or gzipped json format.\n\nFAQ\nQ. How can I install HTA?\nPlease see the README in the root directory of the repository.\nQ. Is there any documentation on the features and API in HTA?", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "The documentation and detailed API is available here.\nQ. Can you implement feature X?\nDepending on how widely the feature is needed and the level of effort required to implement it we would consider developing the feature. Please open a Github Issue and tag it with the feature-request label.\nQ. Can I modify the code?\nPlease do and send a PR along the way, if you think it would be useful for others.\nQ. How can I collect traces in PyTorch?\nPlease refer to this tutorial here.\nQ. 
Can HTA be used at production scale?\nYes, please see a use case study here.", "source": "https://pytorch.org/blog/trace-analysis-for-masses/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch adds new dev tools as it hits production scale'\nauthor: The PyTorch Team\n\nThis is a partial re-post of the original blog post on the Facebook AI Blog. The full post can be viewed here\nSince its release just a few months ago, PyTorch 1.0 has been rapidly adopted as a powerful, flexible deep learning platform that enables engineers and researchers to move quickly from research to production. We are highlighting some of the ways the AI engineering and research community is using PyTorch 1.0. We\u2019re also sharing new details about the latest release, PyTorch 1.1, and showcasing some of the new development tools created by the community.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "Building on the initial launch of PyTorch in 2017, we partnered with the AI community to ship the stable release of PyTorch 1.0 last December. Along with enhanced production-oriented capabilities and deep integration with leading cloud platforms, PyTorch 1.0 expands on the open source library\u2019s core features, with the addition of PyTorch JIT (Just in time compilation) that seamlessly transitions between eager mode and graph mode to provide both flexibility and speed.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "Leading businesses across industries are beginning to use PyTorch to both facilitate their research and then also deploy at large scale for applications such as translation, computer vision, conversational interfaces, pharmaceutical research, factory optimization, and automated driving research. Community adoption of PyTorch has also continued to expand. Stanford, UC Berkeley, Caltech, and other universities are using PyTorch as a fundamental tool for their machine learning (ML) courses; new ecosystem projects have launched to support development on PyTorch; and major cloud platforms have expanded their integration with PyTorch.\nUsing PyTorch across industries\nMany leading businesses are moving to PyTorch 1.0 to accelerate development and deployment of new AI systems. Here are some examples:\n\nAirbnb leveraged PyTorch's rich libraries and APIs for conversational AI and deployed a Smart Reply to help the company\u2019s service agents respond more effectively to customers.\n", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "\nATOM is building a platform to generate and optimize new drug candidates significantly faster and with greater success than conventional processes. Using machine learning frameworks such as PyTorch, ATOM was able to design a variational autoencoder for representing diverse chemical structures and designing new drug candidates.\nGenentech is utilizing PyTorch\u2019s flexible control structures and dynamic graphs to train deep learning models that will aid in the development of individualized cancer therapy.\nMicrosoft is using PyTorch across its organization to develop ML models at scale and deploy them via the ONNX Runtime. 
Using PyTorch, Microsoft Cognition has built distributed language models that scale to billions of words and are now in production in offerings such as Cognitive Services.\n", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "\nToyota Research Institute (TRI) is developing a two-pronged approach toward automated driving with Toyota Guardian and Toyota Chauffeur technologies. The Machine Learning Team at TRI is creating new deep learning algorithms to leverage Toyota's 10 million sales per year data advantage. The flexibility of PyTorch has vastly accelerated their pace of exploration and its new production features will enable faster deployment towards their safety critical applications.\n\nFollowing the release of PyTorch 1.0 in December 2018, we\u2019re now announcing the availability of v1.1, which improves performance, adds new model understanding and visualization tools to improve usability, and provides new APIs.\nKey features of PyTorch v1.1 include:", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "Key features of PyTorch v1.1 include:\n\nTensorBoard: First-class and native support for visualization and model debugging with TensorBoard, a web application suite for inspecting and understanding training runs and graphs. PyTorch now natively supports TensorBoard with a simple \u201cfrom torch.utils.tensorboard import SummaryWriter\u201d command.\nJIT compiler: Improvements to just-in-time (JIT) compilation. These include various bug fixes as well as expanded capabilities in TorchScript, such as support for dictionaries, user classes, and attributes.\nNew APIs: Support for Boolean tensors and better support for custom recurrent neural networks.\n", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "\nDistributed Training: Improved performance for common models such as CNNs, added support for multi device modules including the ability to split models across GPUs while still using Distributed Data Parallel (DDP) and support for modules where not all parameters are used in every iteration (e.g. control flow, like adaptive softmax, etc). See the latest tutorials here.\n\nWe\u2019ve also continued to partner with the community to foster projects and tools aimed at supporting ML engineers for needs ranging from improved model understanding to auto-tuning using AutoML methods. With the release of Ax and BoTorch (below), we will be sharing some of our core algorithms, including meta-learning for efficiently optimizing hyperparameters from based on historical tasks. We are excited to see this work open-sourced for the community to build on.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "This ecosystem includes open source projects and tools that have been deployed at production scale, as well as products and services from our partnership with industry leaders who share our vision of an open and collaborative AI community. Here are a few of the latest tools:\n\nBoTorch: BoTorch is a research framework built on top of PyTorch to provide Bayesian optimization, a sample-efficient technique for sequential optimization of costly-to-evaluate black-box functions.\nAx: Ax is an ML platform for managing adaptive experiments. 
It enables researchers and engineers to systematically explore large configuration spaces in order to optimize machine learning models, infrastructure, and products.\n", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "\nPyTorch-BigGraph: PBG is a distributed system for creating embeddings of very large graphs with billions of entities and trillions of edges. It includes support for sharding and negative sampling and it offers sample use cases based on Wikidata embeddings.\nGoogle AI Platform Notebooks: AI Platform Notebooks is a new, hosted JupyterLab service from Google Cloud Platform. Data scientists can quickly create virtual machines running JupyterLab with the latest version of PyTorch preinstalled. It is also tightly integrated with GCP services such as BigQuery, Cloud Dataproc, Cloud Dataflow, and AI Factory, making it easy to execute the full ML cycle without ever leaving JupyterLab.\n\nWe\u2019re also excited to see many interesting new projects from the broader PyTorch community. Highlights include:", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "\nBigGAN-PyTorch:This is a full PyTorch reimplementation that uses gradient accumulation to provide the benefits of big batches on as few as four GPUs.\nGeomLoss: A Python API that defines PyTorch layers for geometric loss functions between sampled measures, images, and volumes. It includes MMD, Wasserstein, Sinkhorn, and more.\n\n\n\n\n\nPyTorch Geometric: A deep learning extension library for PyTorch that offers several methods for deep learning on graphs and other irregular structures (also known as geometric deep learning) from a variety of published papers.\n", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "\nCurve-GCN: A real-time, interactive image annotation approach that uses an end-to-end-trained graph convolutional network (GCN). It supports object annotation by either polygons or splines, facilitating labeling efficiency for both line-based and curved objects. Curve-GCN runs 10x faster than traditional methods, such as Polygon-RNN++.\n\nUdacity, fast.ai, and others develop new PyTorch resources\nPyTorch is ideal for teaching ML development because it enables rapid experimentation through its flexible, dynamic programming environment and user-friendly Pythonic interface. In addition, Google Colab now offers an interactive Jupyter Notebook environment that natively supports PyTorch, allowing developers to run any PyTorch tutorial immediately with free CPU and GPU resources.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "University-level classes \u2014 including Stanford NLP, UC Berkeley Computer Vision, and Caltech Robotics courses \u2014 are now being taught on PyTorch. In addition, massive open online courses (MOOCs) are training thousands of new PyTorch developers.\nToday, we\u2019re announcing a new Udacity course, building upon the Intro to Deep Learning course launched last year. This new course, led by Andrew Trask of Oxford University and OpenMined, covers important concepts around privacy in AI, including methods such as differential privacy and federated learning. 
Facebook will also be providing scholarships to support students as they continue their ML education in Udacity\u2019s full Nanodegree programs.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "The fast.ai community is also continuing to invest energy and resources in PyTorch. In June, fast.ai will launch a new course called Deep Learning from the Foundations, which will show developers how to go all the way from writing matrix multiplication from scratch to how to train and implement a state-of-the-art ImageNet model. The course will include deep dives into the underlying implementation of methods in the PyTorch and fast.ai libraries, and will use the code to explain and illustrate the academic papers that underlie these methods.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "As part of the course, fast.ai will also release new software modules, including fastai.audio, which brings the power of fast.ai\u2019s deep abstractions and curated algorithms to the new PyTorch.audio module, and show how fastai.vision can be used to create stunning high-resolution videos from material such as old classic movies, and from cutting-edge microscopy sequences through a collaboration with the Salk Institute. In addition, fast.ai is contributing its new X-ResNet module, including a suite of models pretrained on ImageNet.\nGetting started with PyTorch", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "Getting started with PyTorch\nEveryone in the AI community \u2014 including those new to ML development as well as researchers and engineers looking for ways to accelerate their end-to-end workflows \u2014 can experiment with PyTorch instantly by visiting pytorch.org and launching a tutorial in Colab. There are also many easy ways to get started both locally and on popular cloud platforms.", "source": "https://pytorch.org/blog/pytorch-adds-new-dev-tools/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Introducing Accelerated PyTorch Training on Mac\"\nauthor: PyTorch\nfeatured-img: \"/assets/images/METAPT-002-BarGraph-02-static.png\"\n\nIn collaboration with the Metal engineering team at Apple, we are excited to announce support for GPU-accelerated PyTorch training on Mac. Until now, PyTorch training on Mac only leveraged the CPU, but with the upcoming PyTorch v1.12 release, developers and researchers can take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac.\n\n\n\nMetal Acceleration", "source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"} {"text": "\nMetal Acceleration\nAccelerated GPU training is enabled using Apple\u2019s Metal Performance Shaders (MPS) as a backend for PyTorch. The MPS backend extends the PyTorch framework, providing scripts and capabilities to set up and run operations on Mac. MPS optimizes compute performance with kernels that are fine-tuned for the unique characteristics of each Metal GPU family. The new device maps machine learning computational graphs and primitives on the MPS Graph framework and tuned kernels provided by MPS. \nTraining Benefits on Apple Silicon\nEvery Apple silicon Mac has a unified memory architecture, providing the GPU with direct access to the full memory store. 
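In code, the new backend shows up as just another device; a minimal sketch, assuming a PyTorch build with MPS support (the upcoming v1.12 or a nightly), looks like this:

```python
import torch

# Use the Apple silicon GPU when the MPS backend is available, otherwise fall back to the CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")

model = torch.nn.Linear(128, 10).to(device)   # parameters are placed on the Metal device
x = torch.randn(64, 128, device=device)
y = model(x)                                  # the forward pass runs on the Apple silicon GPU
```

Because of the unified memory architecture, tensors on the "mps" device are not copied into a separate GPU memory pool; the GPU works directly against the same memory store.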
This makes Mac a great platform for machine learning, enabling users to train larger networks or batch sizes locally. This reduces costs associated with cloud-based development or the need for additional local GPUs. The Unified Memory architecture also reduces data retrieval latency, improving end-to-end performance.", "source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"} {"text": "In the graphs below, you can see the performance speedup from accelerated GPU training and evaluation compared to the CPU baseline:\n\n\n\n\nAccelerated GPU training and evaluation speedups over CPU-only (times faster)\n\nGetting Started\nTo get started, just install the latest Preview (Nightly) build on your Apple silicon Mac running macOS 12.3 or later with a native version (arm64) of Python.\nYou can also learn more about Metal and MPS on Apple\u2019s Metal page.", "source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"} {"text": "* Testing conducted by Apple in April 2022 using production Mac Studio systems with Apple M1 Ultra, 20-core CPU, 64-core GPU 128GB of RAM, and 2TB SSD. Tested with macOS Monterey 12.3, prerelease PyTorch 1.12, ResNet50 (batch size=128), HuggingFace BERT (batch size=64), and VGG16 (batch size=64). Performance tests are conducted using specific computer systems and reflect the approximate performance of Mac Studio.", "source": "https://pytorch.org/blog/introducing-accelerated-pytorch-training-on-mac/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Accelerating Hugging Face and TIMM models with PyTorch 2.0\"\nauthor: Mark Saroufim\nfeatured-img: \"assets/images/pytorch-2.0-feature-img.png\"\n\ntorch.compile() makes it easy to experiment with different compiler backends to make PyTorch code faster with a single line decorator torch.compile(). It works either directly over an nn.Module as a drop-in replacement for torch.jit.script() but without requiring you to make any source code changes. We expect this one line code change to provide you with between 30%-2x training time speedups on the vast majority of models that you\u2019re already running.\n\nopt_module = torch.compile(module)\n\n\ntorch.compile supports arbitrary PyTorch code, control flow, mutation and comes with experimental support for dynamic shapes. We\u2019re so excited about this development that we call it PyTorch 2.0.", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} {"text": "What makes this announcement different for us is we\u2019ve already benchmarked some of the most popular open source PyTorch models and gotten substantial speedups ranging from 30% to 2x https://github.com/pytorch/torchdynamo/issues/681.\nThere are no tricks here, we\u2019ve pip installed popular libraries like https://github.com/huggingface/transformers, https://github.com/huggingface/accelerate and https://github.com/rwightman/pytorch-image-models and then ran torch.compile() on them and that\u2019s it.\nIt\u2019s rare to get both performance and convenience, but this is why the core team finds PyTorch 2.0 so exciting. 
The Hugging Face team is also excited, in their words:\nRoss Wightman the primary maintainer of TIMM: \u201cPT 2.0 works out of the box with majority of timm models for inference and train workloads and no code changes\u201d", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} {"text": "Sylvain Gugger the primary maintainer of transformers and accelerate: \"With just one line of code to add, PyTorch 2.0 gives a speedup between 1.5x and 2.x in training Transformers models. This is the most exciting thing since mixed precision training was introduced!\"\nThis tutorial will show you exactly how to replicate those speedups so you can be as excited as to PyTorch 2.0 as we are.\nRequirements and Setup\nFor GPU (newer generation GPUs will see drastically better performance)\npip3 install numpy --pre torch --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117\n\n\nFor CPU\npip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu\n\n\nOptional: Verify Installation\ngit clone https://github.com/pytorch/pytorch\ncd tools/dynamo\npython verify_dynamo.py\n\nOptional: Docker installation\nWe also provide all the required dependencies in the PyTorch nightly\nbinaries which you can download with\n```", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} {"text": "binaries which you can download with\ndocker pull ghcr.io/pytorch/pytorch-nightly\n\n\nAnd for ad hoc experiments just make sure that your container has access\nto all your GPUs\ndocker run --gpus all -it ghcr.io/pytorch/pytorch-nightly:latest /bin/bash\n\n\nGetting started\na toy exmaple\nLet\u2019s start with a simple example and make things more complicated step\nby step. Please note that you\u2019re likely to see more significant speedups the newer your GPU is.\nimport torch\ndef fn(x, y):\n a = torch.sin(x).cuda()\n b = torch.sin(y).cuda()\n return a + b\nnew_fn = torch.compile(fn, backend=\"inductor\")\ninput_tensor = torch.randn(10000).to(device=\"cuda:0\")\na = new_fn(input_tensor, input_tensor)\n\nThis example won\u2019t actually run faster but it\u2019s educational.", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} {"text": "example that features torch.cos() and torch.sin() which are examples of pointwise ops as in they operate element by element on a vector. 
A more famous pointwise op you might actually want to use would be something like torch.relu().\nPointwise ops in eager mode are suboptimal because each one would need to read a tensor from memory, make some changes and then write back those changes.\nThe single most important optimization that PyTorch 2.0 does for you is fusion.\nSo back to our example we can turn 2 reads and 2 writes into 1 read and 1 write which is crucial especially for newer GPUs where the bottleneck is memory bandwidth (how quickly you can send data to a GPU) instead of compute (how quickly your GPU can crunch floating point operations)\nThe second most important optimization that PyTorch 2.0 does for you is CUDA graphs\nCUDA graphs help eliminate the overhead from launching individual kernels from a python program.", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} {"text": "torch.compile() supports many different backends but one that we\u2019re particularly excited about is Inductor which generates Triton kernels https://github.com/openai/triton which are written in Python yet outperform the vast majority of handwritten CUDA kernels. Suppose our example above was called trig.py we can actually inspect the code generated triton kernels by running.\nTORCH_COMPILE_DEBUG=1 python trig.py\n\n```python\n@pointwise(size_hints=[16384], filename=file, meta={'signature': {0: 'fp32', 1: 'fp32', 2: 'i32'}, 'device': 0, 'constants': {}, 'configs': [instance_descriptor(divisible_by_16=(0, 1, 2), equal_to_1=())]})\n@triton.jit\ndef kernel(in_ptr0, out_ptr0, xnumel, XBLOCK : tl.constexpr):\n xnumel = 10000\n xoffset = tl.program_id(0) * XBLOCK\n xindex = xoffset + tl.reshape(tl.arange(0, XBLOCK), [XBLOCK])\n xmask = xindex < xnumel\n x0 = xindex\n tmp0 = tl.load(in_ptr0 + (x0), xmask)\n tmp1 = tl.sin(tmp0)\n tmp2 = tl.sin(tmp1)", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} {"text": "tmp1 = tl.sin(tmp0)\n tmp2 = tl.sin(tmp1)\n tl.store(out_ptr0 + (x0 + tl.zeros([XBLOCK], tl.int32)), tmp2, xmask)\n\nAnd you can verify that fusing the two `sins` did actually occur because the two `sin` operations occur within a single Triton kernel and the temporary variables are held in registers with very fast access.\n\n### a real model\n\nAs a next step let\u2019s try a real model like resnet50 from the PyTorch hub.\n\n```python\nimport torch\nmodel = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True)\nopt_model = torch.compile(model, backend=\"inductor\")\nmodel(torch.randn(1,3,64,64))\n\n\nIf you actually run you may be surprised that the first run is slow and that\u2019s because the model is being compiled. Subsequent runs will be faster so it's common practice to warm up your model before you start benchmarking it.", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} {"text": "You may have noticed how we also passed in the name of a compiler explicitly here with \u201cinductor\u201d but it\u2019s not the only available backend, you can run in a REPL torch._dynamo.list_backends() to see the full list of available backends. 
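Putting the warm-up advice and the backend listing together, the sketch below shows one way you might time a compiled model; it assumes a CUDA-capable machine with a PyTorch nightly installed, and the exact timings will of course vary:

```python
import time
import torch
import torch._dynamo

print(torch._dynamo.list_backends())  # "inductor" is one of the entries

model = torch.hub.load('pytorch/vision:v0.10.0', 'resnet18', pretrained=True).cuda()
opt_model = torch.compile(model, backend="inductor")
x = torch.randn(16, 3, 224, 224, device="cuda")

for i in range(3):
    torch.cuda.synchronize()
    start = time.perf_counter()
    opt_model(x)
    torch.cuda.synchronize()
    print(f"iteration {i}: {time.perf_counter() - start:.3f}s")  # iteration 0 pays the compilation cost
```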
For fun you should try out aot_cudagraphs or nvfuser.\nHugging Face models\nLet\u2019s do something a bit more interesting now, our community frequently\nuses pretrained models from transformers https://github.com/huggingface/transformers or TIMM https://github.com/rwightman/pytorch-image-models and one of our design goals for PyTorch 2.0 was that any new compiler stack needs to work out of the box with the vast majority of models people actually run.\nSo we\u2019re going to directly download a pretrained model from the Hugging Face hub and optimize it\n```python\nimport torch\nfrom transformers import BertTokenizer, BertModel", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} {"text": "from transformers import BertTokenizer, BertModel\nCopy pasted from here https://huggingface.co/bert-base-uncased\ntokenizer = BertTokenizer.from_pretrained('bert-base-uncased')\nmodel = BertModel.from_pretrained(\"bert-base-uncased\").to(device=\"cuda:0\")\nmodel = torch.compile(model) # This is the only line of code that we changed\ntext = \"Replace me by any text you'd like.\"\nencoded_input = tokenizer(text, return_tensors='pt').to(device=\"cuda:0\")\noutput = model(**encoded_input)\n```\nIf you remove the to(device=\"cuda:0\") from the model and encoded_input then PyTorch 2.0 will generate C++ kernels that will be optimized for running on your CPU. You can inspect both Triton or C++ kernels for BERT, they\u2019re obviously more complex than the trigonometry example we had above but you can similarly skim it and understand if you understand PyTorch.\nThe same code also works just fine if used with https://github.com/huggingface/accelerate and DDP", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} {"text": "Similarly let\u2019s try out a TIMM example\nimport timm\nimport torch\nmodel = timm.create_model('resnext101_32x8d', pretrained=True, num_classes=2)\nopt_model = torch.compile(model, backend=\"inductor\")\nopt_model(torch.randn(64,3,7,7))\n\nOur goal with PyTorch was to build a breadth-first compiler that would speed up the vast majority of actual models people run in open source. The Hugging Face Hub ended up being an extremely valuable benchmarking tool for us, ensuring that any optimization we work on actually helps accelerate models people want to run.\nSo please try out PyTorch 2.0, enjoy the free perf and if you\u2019re not seeing it then please open an issue and we will make sure your model is supported https://github.com/pytorch/torchdynamo/issues\nAfter all, we can\u2019t claim we\u2019re created a breadth-first unless YOUR models actually run faster.", "source": "https://pytorch.org/blog/Accelerating-Hugging-Face-and-TIMM-models/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch 1.6 now includes Stochastic Weight Averaging'\nauthor: Pavel Izmailov, Andrew Gordon Wilson and Vincent Queneneville-Belair\n\nDo you use stochastic gradient descent (SGD) or Adam? Regardless of the procedure you use to train your neural network, you can likely achieve significantly better generalization at virtually no additional cost with a simple new technique now natively supported in PyTorch 1.6, Stochastic Weight Averaging (SWA) [1]. Even if you have already trained your model, it\u2019s easy to realize the benefits of SWA by running SWA for a small number of epochs starting with a pre-trained model. 
Again and again, researchers are discovering that SWA improves the performance of well-tuned models in a wide array of practical applications with little cost or effort!\nSWA has a wide range of applications and features:", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "SWA has a wide range of applications and features:\n* SWA significantly improves performance compared to standard training techniques in computer vision (e.g., VGG, ResNets, Wide ResNets and DenseNets on ImageNet and CIFAR benchmarks [1, 2]).\n* SWA provides state-of-the-art performance on key benchmarks in semi-supervised learning and domain adaptation [2].\n* SWA was shown to improve performance in language modeling (e.g., AWD-LSTM on WikiText-2 [4]) and policy-gradient methods in deep reinforcement learning [3].\n* SWAG, an extension of SWA, can approximate Bayesian model averaging in Bayesian deep learning and achieves state-of-the-art uncertainty calibration results in various settings. Moreover, its recent generalization MultiSWAG provides significant additional performance gains and mitigates double-descent [4, 10]. Another approach, Subspace Inference, approximates the Bayesian posterior in a small subspace of the parameter space around the SWA solution [5].", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "\nSWA for low precision training, SWALP, can match the performance of full-precision SGD training, even with all numbers quantized down to 8 bits, including gradient accumulators [6].\nSWA in parallel, SWAP, was shown to greatly speed up the training of neural networks by using large batch sizes and, in particular, set a record by training a neural network to 94% accuracy on CIFAR-10 in 27 seconds [11].\n\n\n\n\nFigure 1. Illustrations of SWA and SGD with a Preactivation ResNet-164 on CIFAR-100 [1]. Left: test error surface for three FGE samples and the corresponding SWA solution (averaging in weight space). Middle and Right: test error and train loss surfaces showing the weights proposed by SGD (at convergence) and SWA, starting from the same initialization of SGD after 125 training epochs. Please see [1] for details on how these figures were constructed.", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "In short, SWA performs an equal average of the weights traversed by SGD (or any stochastic optimizer) with a modified learning rate schedule (see the left panel of Figure 1.). SWA solutions end up in the center of a wide flat region of loss, while SGD tends to converge to the boundary of the low-loss region, making it susceptible to the shift between train and test error surfaces (see the middle and right panels of Figure 1). We emphasize that SWA can be used with any optimizer, such as Adam, and is not specific to SGD.\nPreviously, SWA was in PyTorch contrib. In PyTorch 1.6, we provide a new convenient implementation of SWA in torch.optim.swa_utils.\nIs this just Averaged SGD?", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "Is this just Averaged SGD?\nAt a high level, averaging SGD iterates dates back several decades in convex optimization [7, 8], where it is sometimes referred to as Polyak-Ruppert averaging, or averaged SGD. But the details matter. 
Averaged SGD is often used in conjunction with a decaying learning rate, and an exponential moving average (EMA), typically for convex optimization. In convex optimization, the focus has been on improved rates of convergence. In deep learning, this form of averaged SGD smooths the trajectory of SGD iterates but does not perform very differently.\nBy contrast, SWA uses an equal average of SGD iterates with a modified cyclical or high constant learning rate and exploits the flatness of training objectives [8] specific to deep learning for improved generalization. \nHow does Stochastic Weight Averaging Work?", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "How does Stochastic Weight Averaging Work?\nThere are two important ingredients that make SWA work. First, SWA uses a modified learning rate schedule so that SGD (or other optimizers such as Adam) continues to bounce around the optimum and explore diverse models instead of simply converging to a single solution. For example, we can use the standard decaying learning rate strategy for the first 75% of training time and then set the learning rate to a reasonably high constant value for the remaining 25% of the time (see Figure 2 below). The second ingredient is to take an average of the weights (typically an equal average) of the networks traversed by SGD. For example, we can maintain a running average of the weights obtained at the end of every epoch within the last 25% of training time (see Figure 2). After training is complete, we then set the weights of the network to the computed SWA averages.\n\n", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "\nFigure 2. Illustration of the learning rate schedule adopted by SWA. Standard decaying schedule is used for the first 75% of the training and then a high constant value is used for the remaining 25%. The SWA averages are formed during the last 25% of training.\nOne important detail is the batch normalization. Batch normalization layers compute running statistics of activations during training. Note that the SWA averages of the weights are never used to make predictions during training. So the batch normalization layers do not have the activation statistics computed at the end of training. We can compute these statistics by doing a single forward pass on the train data with the SWA model.\nWhile we focus on SGD for simplicity in the description above, SWA can be combined with any optimizer. You can also use cyclical learning rates instead of a high constant value (see e.g., [2]).\nHow to use SWA in PyTorch?", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "How to use SWA in PyTorch?\nIn torch.optim.swa_utils we implement all the SWA ingredients to make it convenient to use SWA with any model. In particular, we implement AveragedModel class for SWA models, SWALR learning rate scheduler, and update_bn utility function to update SWA batch normalization statistics at the end of training. \nIn the example below, swa_model is the SWA model that accumulates the averages of the weights. We train the model for a total of 300 epochs, and we switch to the SWA learning rate schedule and start to collect SWA averages of the parameters at epoch 160. 
\n```python\nfrom torch.optim.swa_utils import AveragedModel, SWALR\nfrom torch.optim.lr_scheduler import CosineAnnealingLR\nloader, optimizer, model, loss_fn = ...\nswa_model = AveragedModel(model)\nscheduler = CosineAnnealingLR(optimizer, T_max=100)\nswa_start = 5\nswa_scheduler = SWALR(optimizer, swa_lr=0.05)\nfor epoch in range(100):\n for input, target in loader:\n optimizer.zero_grad()", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "optimizer.zero_grad()\n loss_fn(model(input), target).backward()\n optimizer.step()\n if epoch > swa_start:\n swa_model.update_parameters(model)\n swa_scheduler.step()\n else:\n scheduler.step()\nUpdate bn statistics for the swa_model at the end\ntorch.optim.swa_utils.update_bn(loader, swa_model)\nUse swa_model to make predictions on test data\npreds = swa_model(test_input)\n```\nNext, we explain each component of torch.optim.swa_utils in detail.\nAveragedModel class serves to compute the weights of the SWA model. You can create an averaged model by running swa_model = AveragedModel(model). You can then update the parameters of the averaged model by swa_model.update_parameters(model). By default, AveragedModel computes a running equal average of the parameters that you provide, but you can also use custom averaging functions with the avg_fn parameter. In the following example, ema_model computes an exponential moving average.\n```python", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "ema_avg = lambda averaged_model_parameter, model_parameter, num_averaged:\\\n0.1 * averaged_model_parameter + 0.9 * model_parameter\nema_model = torch.optim.swa_utils.AveragedModel(model, avg_fn=ema_avg)\n\nIn practice, we find an equal average with the modified learning rate schedule in Figure 2 provides the best performance.\nSWALR is a learning rate scheduler that anneals the learning rate to a fixed value, and then keeps it constant. For example, the following code creates a scheduler that linearly anneals the learning rate from its initial value to 0.05 in 5 epochs within each parameter group.\nswa_scheduler = torch.optim.swa_utils.SWALR(optimizer, \nanneal_strategy=\"linear\", anneal_epochs=5, swa_lr=0.05)\n\n\nWe also implement cosine annealing to a fixed value (anneal_strategy=\"cos\"). In practice, we typically switch to SWALR at epoch swa_start (e.g. 
after 75% of the training epochs), and simultaneously start to compute the running averages of the weights:\n```python", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)\nswa_start = 75\nfor epoch in range(100):\n # \n if i > swa_start:\n swa_model.update_parameters(model)\n swa_scheduler.step()\n else:\n scheduler.step()\n\nFinally, update_bn is a utility function that computes the batchnorm statistics for the SWA model on a given dataloader loader:\ntorch.optim.swa_utils.update_bn(loader, swa_model) \n\nupdate_bn applies the swa_model to every element in the dataloader and computes the activation statistics for each batch normalization layer in the model.\nOnce you computed the SWA averages and updated the batch normalization layers, you can apply swa_model to make predictions on test data.\nWhy does it work?", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "Why does it work?\nThere are large flat regions of the loss surface [9]. In Figure 3 below, we show a visualization of the loss surface in a subspace of the parameter space containing a path connecting two independently trained SGD solutions, such that the loss is similarly low at every point along the path. SGD converges near the boundary of these regions because there isn\u2019t much gradient signal to move inside, as the points in the region all have similarly low values of loss. By increasing the learning rate, SWA spins around this flat region, and then by averaging the iterates, moves towards the center of the flat region.\n\n\n", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "\nFigure 3: visualization of mode connectivity for ResNet-20 with no skip connections on CIFAR-10 dataset. The visualization is created in collaboration with Javier Ideami (https://losslandscape.com/). For more details, see this blogpost.\nWe expect solutions that are centered in the flat region of the loss to generalize better than those near the boundary. Indeed, train and test error surfaces are not perfectly aligned in the weight space. Solutions that are centered in the flat region are not as susceptible to the shifts between train and test error surfaces as those near the boundary. In Figure 4 below, we show the train loss and test error surfaces along the direction connecting the SWA and SGD solutions. As you can see, while the SWA solution has a higher train loss compared to the SGD solution, it is centered in a region of low loss and has a substantially better test error.\n", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "\n\n\nFigure 4. Train loss and test error along the line connecting the SWA solution (circle) and SGD solution (square). The SWA solution is centered in a wide region of low train loss, while the SGD solution lies near the boundary. Because of the shift between train loss and test error surfaces, the SWA solution leads to much better generalization.\nWhat are the results achieved with SWA?\nWe release a GitHub repo with examples using the PyTorch implementation of SWA for training DNNs. 
For example, these examples can be used to achieve the following results on CIFAR-100:\n{:.table.table-striped.table-bordered}\n | | VGG-16 | ResNet-164 | WideResNet-28x10 | \n| ------------- | ------------- | ------------- | ------------- |\n| SGD | 72.8 \u00b1 0.3 | 78.4 \u00b1 0.3 | 81.0 \u00b1 0.3 | \n| SWA | 74.4 \u00b1 0.3 | 79.8 \u00b1 0.4 | 82.5 \u00b1 0.2 |", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "| SWA | 74.4 \u00b1 0.3 | 79.8 \u00b1 0.4 | 82.5 \u00b1 0.2 |\nSemi-Supervised Learning\nIn a follow-up paper SWA was applied to semi-supervised learning, where it improved the best reported results in multiple settings [2]. For example, with SWA you can get 95% accuracy on CIFAR-10 if you only have the training labels for 4k training data points (the previous best reported result on this problem was 93.7%). This paper also explores averaging multiple times within epochs, which can accelerate convergence and find still flatter solutions in a given time.\n\n\n\nFigure 5. Performance of fast-SWA on semi-supervised learning with CIFAR-10. fast-SWA achieves record results in every setting considered.\nReinforcement Learning", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "Reinforcement Learning\nIn another follow-up paper SWA was shown to improve the performance of policy gradient methods A2C and DDPG on several Atari games and MuJoCo environments [3]. This application is also an instance of where SWA is used with Adam. Recall that SWA is not specific to SGD and can benefit essentially any optimizer.\n{:.table.table-striped.table-bordered}\n | Environment Name | A2C | A2C + SWA | \n| ------------- | ------------- | ------------- |\n| Breakout | 522 \u00b1 34 | 703 \u00b1 60 |\n| Qbert | 18777 \u00b1 778 | 21272 \u00b1 655 |\n| SpaceInvaders | 7727 \u00b1 1121 | 21676 \u00b1 8897 |\n| Seaquest | 1779 \u00b1 4 | 1795 \u00b1 4 |\n| BeamRider | 9999 \u00b1 402 | 11321 \u00b1 1065 |\n| CrazyClimber | 147030 \u00b1 10239 | 139752 \u00b1 11618 |\nLow Precision Training", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "Low Precision Training\nWe can filter through quantization noise by combining weights that have been rounded down with weights that have been rounded up. Moreover, by averaging weights to find a flat region of the loss surface, large perturbations of the weights will not affect the quality of the solution (Figures 9 and 10). Recent work shows that by adapting SWA to the low precision setting, in a method called SWALP, one can match the performance of full-precision SGD even with all training in 8 bits [5]. This is quite a practically important result, given that (1) SGD training in 8 bits performs notably worse than full precision SGD, and (2) low precision training is significantly harder than predictions in low precision after training (the usual setting). For example, a ResNet-164 trained on CIFAR-100 with float (16-bit) SGD achieves 22.2% error, while 8-bit SGD achieves 24.0% error. By contrast, SWALP with 8 bit training achieves 21.8% error.", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "\n\n\nFigure 9. 
Quantizing a solution leads to a perturbation of the weights which has a greater effect on the quality of the sharp solution (left) compared to wide solution (right). \n\n\n\nFigure 10. The difference between standard low precision training and SWALP.\nAnother work, SQWA, presents an approach for quantization and fine-tuning of neural networks in low precision [12]. In particular, SQWA achieved state-of-the-art results for DNNs quantized to 2 bits on CIFAR-100 and ImageNet.\nCalibration and Uncertainty Estimates\nBy finding a centred solution in the loss, SWA can also improve calibration and uncertainty representation. Indeed, SWA can be viewed as an approximation to an ensemble, resembling a Bayesian model average, but with a single model [1].", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "SWA can be viewed as taking the first moment of SGD iterates with a modified learning rate schedule. We can directly generalize SWA by also taking the second moment of iterates to form a Gaussian approximate posterior over the weights, further characterizing the loss geometry with SGD iterates. This approach,SWA-Gaussian (SWAG) is a simple, scalable and convenient approach to uncertainty estimation and calibration in Bayesian deep learning [4]. The SWAG distribution approximates the shape of the true posterior: Figure 6 below shows the SWAG distribution and the posterior log-density for ResNet-20 on CIFAR-10.\n\n\n", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "\nFigure 6. SWAG posterior approximation and the loss surface for a ResNet-20 without skip-connections trained on CIFAR-10 in the subspace formed by the two largest eigenvalues of the SWAG covariance matrix. The shape of SWAG distribution is aligned with the posterior: the peaks of the two distributions coincide, and both distributions are wider in one direction than in the orthogonal direction. Visualization created in collaboration with Javier Ideami.\nEmpirically, SWAG performs on par or better than popular alternatives including MC dropout, KFAC Laplace, and temperature scaling on uncertainty quantification, out-of-distribution detection, calibration and transfer learning in computer vision tasks. Code for SWAG is available here. \n\n\n", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "\nFigure 7. MultiSWAG generalizes SWAG and deep ensembles, to perform Bayesian model averaging over multiple basins of attraction, leading to significantly improved performance. By contrast, as shown here, deep ensembles select different modes, while standard variational inference (VI) marginalizes (model averages) within a single basin.\nMultiSWAG [9] uses multiple independent SWAG models to form a mixture of Gaussians as an approximate posterior distribution. Different basins of attraction contain highly complementary explanations of the data. Accordingly, marginalizing over these multiple basins provides a significant boost in accuracy and uncertainty representation. 
MultiSWAG can be viewed as a generalization of deep ensembles, but with performance improvements.", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "Indeed, we see in Figure 8 that MultiSWAG entirely mitigates double descent -- more flexible models have monotonically improving performance -- and provides significantly improved generalization over SGD. For example, when the ResNet-18 has layers of width 20, Multi-SWAG achieves under 30% error whereas SGD achieves over 45%, more than a 15% gap! \n\n\n\nFigure 8. SGD, SWAG, and Multi-SWAG on CIFAR-100 for a ResNet-18 with varying widths. We see Multi-SWAG in particular mitigates double descent and provides significant accuracy improvements over SGD.\nReference [10] also considers Multi-SWA, which uses multiple independently trained SWA solutions in an ensemble, providing performance improvements over deep ensembles without any additional computational cost. Code for MultiSWA and MultiSWAG is available here.", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "Another method, Subspace Inference, constructs a low-dimensional subspace around the SWA solution and marginalizes the weights in this subspace to approximate the Bayesian model average [5]. Subspace Inference uses the statistics from the SGD iterates to construct both the SWA solution and the subspace. The method achieves strong performance in terms of prediction accuracy and uncertainty calibration both in classification and regression problems. Code is available here.\nTry it Out!", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "Try it Out!\nOne of the greatest open questions in deep learning is why SGD manages to find good solutions, given that the training objectives are highly multimodal, and there are many settings of parameters that achieve no training loss but poor generalization. By understanding geometric features such as flatness, which relate to generalization, we can begin to resolve these questions and build optimizers that provide even better generalization, and many other useful features, such as uncertainty representation. We have presented SWA, a simple drop-in replacement for standard optimizers such as SGD and Adam, which can in principle, benefit anyone training a deep neural network. SWA has been demonstrated to have a strong performance in several areas, including computer vision, semi-supervised learning, reinforcement learning, uncertainty representation, calibration, Bayesian model averaging, and low precision training.", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "We encourage you to try out SWA! SWA is now as easy as any standard training in PyTorch. And even if you have already trained your model, you can use SWA to significantly improve performance by running it for a small number of epochs from a pre-trained model. 
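To make this concrete, here is a minimal sketch of the `torch.optim.swa_utils` workflow. The toy model, random data, epoch counts, and learning rates below are illustrative placeholders rather than a recommended recipe:

```python
import torch
from torch import nn
from torch.optim.swa_utils import AveragedModel, SWALR, update_bn

# Toy model and random data so the sketch is self-contained; substitute your own.
model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 2))
loader = [(torch.randn(32, 10), torch.randint(0, 2, (32,))) for _ in range(8)]
loss_fn = nn.CrossEntropyLoss()

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=20)
swa_model = AveragedModel(model)               # maintains the running average of the weights
swa_scheduler = SWALR(optimizer, swa_lr=0.05)  # SWA learning rate schedule
swa_start = 10                                 # illustrative epoch at which averaging begins

for epoch in range(20):
    for data, target in loader:
        optimizer.zero_grad()
        loss_fn(model(data), target).backward()
        optimizer.step()
    if epoch >= swa_start:
        swa_model.update_parameters(model)  # add the current weights to the running average
        swa_scheduler.step()
    else:
        scheduler.step()

# Recompute BatchNorm statistics for the averaged model before evaluation.
update_bn(loader, swa_model)
```

At evaluation time, `swa_model` is used in place of `model`.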
\n[1] Averaging Weights Leads to Wider Optima and Better Generalization; Pavel Izmailov, Dmitry Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson; Uncertainty in Artificial Intelligence (UAI), 2018.\n[2] There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average; Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson; \nInternational Conference on Learning Representations (ICLR), 2019.\n[3] Improving Stability in Deep Reinforcement Learning with Weight Averaging; Evgenii Nikishin, Pavel Izmailov, Ben Athiwaratkun, Dmitrii Podoprikhin, \nTimur Garipov, Pavel Shvechikov, Dmitry Vetrov, Andrew Gordon Wilson; UAI 2018 Workshop: Uncertainty in Deep Learning, 2018.", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "[4] A Simple Baseline for Bayesian Uncertainty in Deep Learning\nWesley Maddox, Timur Garipov, Pavel Izmailov, Andrew Gordon Wilson; Neural Information Processing Systems (NeurIPS), 2019.\n[5] Subspace Inference for Bayesian Deep Learning\nPavel Izmailov, Wesley Maddox, Polina Kirichenko, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson\nUncertainty in Artificial Intelligence (UAI), 2019.\n[6] SWALP : Stochastic Weight Averaging in Low Precision Training\nGuandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, \nAndrew Gordon Wilson, Christopher De Sa; International Conference on Machine Learning (ICML), 2019.\n[7] David Ruppert. Efficient estimations from a slowly convergent Robbins-Monro process; Technical report, Cornell University Operations Research and Industrial Engineering, 1988.\n[8] Acceleration of stochastic approximation by averaging. Boris T Polyak and Anatoli B Juditsky; SIAM Journal on Control and Optimization, 30(4):838\u2013855, 1992.", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "[9] Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs\nTimur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, \nAndrew Gordon Wilson. Neural Information Processing Systems (NeurIPS), 2018.\n[10] Bayesian Deep Learning and a Probabilistic Perspective of Generalization\nAndrew Gordon Wilson, Pavel Izmailov. ArXiv preprint, 2020.\n[11] Stochastic Weight Averaging in Parallel: Large-Batch Training That Generalizes Well\nGupta, Vipul, Santiago Akle Serrano, and Dennis DeCoste; International Conference on Learning Representations (ICLR). 2019.\n[12] SQWA: Stochastic Quantized Weight Averaging for Improving the Generalization Capability of Low-Precision Deep Neural Networks\nShin, Sungho, Yoonho Boo, and Wonyong Sung; arXiv preprint 2020.", "source": "https://pytorch.org/blog/pytorch-1.6-now-includes-stochastic-weight-averaging/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Introducing TorchRec, and other domain library updates in PyTorch 1.11\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/pytorch-logo.jpg\"\n\nWe are introducing the beta release of TorchRec and a number of improvements to the current PyTorch domain libraries, alongside the PyTorch 1.11 release. These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch. Highlights include:\n\nTorchRec, a PyTorch domain library for Recommendation Systems, is available in beta. 
View it on GitHub.\nTorchAudio - Added Enformer- and RNN-T-based models and recipes to support the full development lifecycle of a streaming ASR model. See the release notes here.\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "\nTorchText - Added beta support for RoBERTa and XLM-R models, byte-level BPE tokenizer, and text datasets backed by TorchData. See the release notes here.\nTorchVision - Added 4 new model families and 14 new classification datasets such as CLEVR, GTSRB, FER2013. See the release notes here.\n\nTorchRec 0.1\nWe announced TorchRec a few weeks ago and we are excited to release the beta version today. To recap, TorchRec is a PyTorch domain library for Recommendation Systems. This new library provides common sparsity and parallelism primitives, enabling researchers to build state-of-the-art personalization models and deploy them in production. TorchRec was used to train a 1.25 trillion parameter model, pushed to production in January 2022.\nIn particular, the library includes:", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "In particular, the library includes:\n\nModeling primitives, such as embedding bags and jagged tensors, that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism and model-parallelism.\nOptimized RecSys kernels powered by FBGEMM, including support for sparse and quantized operations.\nA sharder which can partition embedding tables with a variety of different strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding.\nA planner which can automatically generate optimized sharding plans for models.\nPipelining to overlap dataloading device transfer (copy to GPU), inter-device communications (input_dist), and computation (forward, backward) for increased performance.\nGPU inference support.\nCommon modules for RecSys, such as models and public datasets (Criteo & Movielens).\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "Please check the TorchRec announcement post here, video tutorial, install instructions here, test drive the feature through this tutorial here, and refer to the reference document here.\nTorchAudio 0.11\nTorchAudio: Building Blocks for Audio and Speech Processing\nWe published a paper, TorchAudio: Building Blocks for Audio and Speech Processing, describing the overview of the TorchAudio library. 
If you find TorchAudio useful for your research, please help us share with the community by citing our paper.\n(Beta) RNN-T & (Prototype) Emformer Models and Recipes\n\n\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "\nEmformer is an efficient memory-transformer-based streaming acoustic model that has demonstrated state-of-the-art streaming automatic speech recognition (ASR) performance in low-latency, resource-constrained scenarios, such as on-device applications (citation: https://arxiv.org/abs/2010.10759).\nThe TorchAudio v0.11 release includes the following beta features:\n\nImplementation of Emformer (docs)\nRecurrent neural network transducer (RNN-T) streaming ASR model that uses Emformer for its transcription network (docs)\nRNN-T beam search decoder with TorchScript support (docs)\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "\nLibriSpeech Emformer RNN-T training recipe (GitHub) and corresponding pre-trained streaming ASR inference pipeline (docs)\n\nAlso there are prototype features that are available from nightly builds or the main branch.\n\nTraining recipes trained on MuST-C and TED-LIUM3 datasets. (GitHub)\nPre-trained pipelines corresponding to the recipes. (docs)\nTutorial that steps through performing online speech recognition with RNN-T Emformer model. (docs)\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "Collectively, these features cover the full development lifecycle of a streaming ASR model, from definition through training and inference, and enable users to easily develop their own Emformer- and RNN-T-based models.\nSpecial thanks to Yangyang Shi, Jay Mahadeokar, and Gil Keren for their code contributions and guidance.\n(Beta) HuBERT Pretrain Model", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "(Beta) HuBERT Pretrain Model\nThe masked prediction training of HuBERT model requires the masked logits, unmasked logits, and feature norm as the outputs. The logits are for cross-entropy losses and the feature norm is for penalty loss. The release adds HuBERTPretrainModel and corresponding factory functions (hubert_pretrain_base, hubert_pretrain_large, and hubert_pretrain_xlarge) to enable training from scratch.\n(Prototype) CTC Beam Search Decoder", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "(Prototype) CTC Beam Search Decoder\nIn recent releases, TorchAudio has added support for ASR models fine-tuned on CTC loss. The addition of an inference time CTC beam search decoder enables running end-to-end ASR evaluation using TorchAudio utils.\nThe CTC decoder in TorchAudio supports customizable beam search decoding with lexicon constraint. It also has optional KenLM language model support.\nFor more details, please check out the API tutorial and documentation. This prototype feature is available through nightly builds.\n(Prototype) Streaming API\nTorchAudio started as simple audio I/O APIs that supplement PyTorch. With the recent addition of ASR models and training recipes, the project has received requests to support high-level application development.", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "Streaming API makes it easy to develop and test the model in online inference. 
It utilizes ffmpeg under the hood, and enables reading media from online services and hardware devices, decoding media in an incremental manner, and applying filters and preprocessing.\nPlease checkout the API tutorial and the documentation. There are also the streaming ASR tutorial and the device streaming ASR tutorial. This feature is available from nightly releases. Please refer to pytorch.org for how to install nightly builds.\nTorchText 0.12\n(Beta) RoBERTa and XLM-R Models", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "(Beta) RoBERTa and XLM-R Models\nTorchText has added support for pre-trained RoBERTa and XLM-R models. It would allow users to train end-2-end Transformer Encoder based models on standard NLP tasks using TorchText.\nMore specifically:\n\nThe models are torchscriptable and hence can be employed for production use-cases.\nThe model APIs let users to easily attach custom task-specific heads with pre-trained encoders.\nThe API also comes equipped with data pre-processing transforms to match the pre-trained weights and model configuration.\n\nWe have added a tutorial to demonstrate SST-2 binary text classification task with pre-trained XLM-R base architecture.\nFor additional details on model APIs and usage examples, please refer to the documentation.\n(Beta) byte-level BPE tokenizer", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "(Beta) byte-level BPE tokenizer\nTorchText has added support for a Byte-Level BPE tokenizer, as used in GPT-2. This tokenizer is also used for tokenizing inputs to the pre-trained RoBERTa models described previously. In addition to the RoBERTa vocab, users can also load their own custom BPE vocab to use the tokenizer. Furthermore, the tokenizer is fully torchscriptable and hence can be employed for production use-cases. For additional details on model APIs and usage examples, please refer to the documentation.\n(Beta) Text datasets backed by TorchData\nTorchText has modernized its datasets by migrating from older-style Iterable Datasets to TorchData\u2019s DataPipes. TorchData is a library that provides modular/composable primitives, allowing users to load and transform data in performant data pipelines.", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "These DataPipes work out-of-the-box with PyTorch DataLoader and would enable new functionalities like auto-sharding. Users can now easily do data manipulation and pre-processing using user-defined functions and transformations in a functional style programming. Datasets backed by DataPipes also enable standard flow-control like batching, collation, shuffling and bucketizing.\nCollectively, DataPipes provides a comprehensive experience for data preprocessing and tensorization needs in a pythonic and flexible way for model training. We have added a tutorial to demonstrate data-processing pipelining using the modernized dataset for binary text-classification.\nYou can learn more about TorchData DataPipe APIs in its official documentation.\nTorchVision 0.12\nNew Models\nFour new model families have been released in the latest version along with pre-trained weights for their variants.", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "#1 Object Detection\nFCOS is a popular, fully convolutional, anchor-free model for object detection. 
In this release we include a community-contributed model implementation as well as pre-trained weights. The model was trained on COCO train2017 and can be used as follows:\nimport torch\nfrom torchvision import models\n\nx = [torch.rand(3, 224, 224)]\nfcos = models.detection.fcos_resnet50_fpn(pretrained=True).eval()\npredictions = fcos(x)\n\nThe box AP of the pre-trained model on COCO val2017 is 39.2 (see #4961 for more details).", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "We would like to thank Hu Ye and Zhiqiang Wang for contributing to the model implementation and initial training. This was the first community-contributed model in a long while, and given its success, we decided to use the learnings from this process and create a new model contribution guidelines.\n#2 Optical Flow support and RAFT model\nTorchVision now supports optical flow! Optical Flow models try to predict movement in a video: given two consecutive frames, the model predicts where each pixel of the first frame ends up in the second frame. Check out our new tutorial on Optical Flow!", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "We implemented a torchscript-compatible RAFT model with pre-trained weights (both normal and \u201csmall\u201d versions), and added support for training and evaluating optical flow models. Our training scripts support distributed training across processes and nodes, leading to much faster training time than the original implementation. We also added 5 new optical flow datasets: Flying Chairs, Flying Things, Sintel, Kitti, and HD1K.\n\n\n\n#3. Image Classification", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "\n#3. Image Classification\nVision Transformer (ViT) and ConvNeXt are two popular architectures which can be used as image classifiers or as backbones for downstream vision tasks. In this release we include 8 pre-trained weights for their classification variants. The models were trained on ImageNet and can be used as follows:\nimport torch\nfrom torchvision import models\n\nx = torch.rand(1, 3, 224, 224)\nvit = models.vit_b_16(pretrained=True).eval()\nconvnext = models.convnext_tiny(pretrained=True).eval()\npredictions1 = vit(x)\npredictions2 = convnext(x)\n\nThe accuracies of the pre-trained models obtained on ImageNet val are seen below:\n\n\n\nModel\nAcc@1\nAcc@5\n\n\n\n\nvit_b_16\n81.072\n95.318\n\n\nvit_b_32\n75.912\n92.466\n\n\nvit_l_16\n79.662\n94.638\n\n\nvit_l_32\n76.972\n93.07\n\n\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "| vit_l_32 | 76.972 | 93.07 |\n| convnext_tiny | 82.52 | 96.146 |\n| convnext_small | 83.616 | 96.65 |\n| convnext_base | 84.062 | 96.87 |\n| convnext_large | 84.414 | 96.976 |\nThe above models have been trained using an adjusted version of our new training recipe and this allows us to offer models with accuracies significantly higher than the ones on the original papers.\n#4. GPU Video Decoding\nIn this release, we add support for GPU video decoding in the video reading API. 
To use hardware-accelerated decoding, we just need to pass a cuda device to the video reading API as shown below:\nimport torchvision\n\nreader = torchvision.io.VideoReader(file_name, device=\"cuda:0\")\nfor frame in reader:\n print(frame)\n\nWe also support seeking to anyframe or a keyframe in the video before reading, as shown below:\nreader.seek(seek_time)\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "reader.seek(seek_time)\n\nNew Datasets\nWe have implemented 14 new classification datasets: CLEVR, GTSRB, FER2013, SUN397, Country211, Flowers102, fvgc_aircraft, OxfordIIITPet, DTD, Food 101, Rendered SST2, Stanford cars, PCAM, and EuroSAT.\nAs part of our work on Optical Flow support (see above for more details), we also added 5 new optical flow datasets: Flying Chairs, Flying Things, Sintel, Kitti, and HD1K.\nOther Updates\n\nNew documentation layout: Each function / class is now documented in a separate page, clearing up some space in the per-module pages, and easing the discovery of the proposed APIs. Compare e.g. our previous docs vs the new ones. Please let us know if you have any feedback!\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "\nNew model contribution guidelines have been published following the success of the FCOS model which was contributed by the community. These guidelines aim to be an overview of the model contribution process for anyone who would like to suggest, implement and train a new model.\nUpcoming Prototype API - We are currently working on a prototype API which adds Multi-weight support on all of our model builder methods. This will enable us to offer multiple pre-trained weights, associated with their meta-data and inference transforms. The API is still under review and thus was not included in the release but you can read more about it on our blogpost and provide your feedback on the dedicated Github issue.\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "\nChanges in our deprecation policy - Up until now, torchvision would almost never remove deprecated APIs. In order to be more aligned and consistent with pytorch core, we are updating our deprecation policy. We are now following a 2-release deprecation cycle: deprecated APIs will raise a warning for 2 versions, and will be removed after that. To reflect these changes and to smooth the transition, we have decided to:\nRemove all APIs that had been deprecated before or on v0.8, released 1.5 years ago.\nUpdate the removal timeline of all other deprecated APIs to v0.14, to reflect the new 2-cycle policy starting now in v0.12.\n\nCaptum 0.5", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "Captum 0.5\nCaptum is a PyTorch library for model interpretability. For this release, we expanded Captum with influential instances and added support for both similarity based influences and novel algorithms, TracIn and its variants. TracIn variants offer faster approximation of influence scores based on random projections for fully connected layers.\nMore specifically the new, influence, subsection of Captum includes:\n\nSimilarityInfluence computes similarity scores between test and training examples using default (cosine or euclidean) or custom user definite metrics w.r.t. 
given input model layers.\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "\nTracInCP approximates the influential score of each training example on a given test example based on the dot-product similarity between loss gradients w.r.t. model parameters for test and training examples. Note that if we use training examples as test examples then we compute self influence. This method and its variants described below also return top-k proponents and opponents which are the top-k largest positive and negative influential examples respectively.\nTracInCPFast is an approximation of TracInCP that avoids computing the gradients w.r.t. large parameter matrices. It approximates influence score based on the dot products between last fully connected layer activations and loss gradients w.r.t. that layer for training and test examples.\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "\nTracInCPFastRandProj uses a nearest neighbor approximation library such as annoy to compute the dot product between the training and test quantities. In order to reduce the dimensionality of layer activations and corresponding gradients this method, in addition, allows to project those vectors into a lower dimensional space using random projection matrices.\n\nMore about the implementation of influential instances can be found on our GitHub page and tutorials.", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "Thanks for reading, If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Twitter, Medium, YouTube, and LinkedIn.\nCheers!\nTeam PyTorch\n\n\n\n\n\nTorchRec 0.1\n\n\nTorchAudio 0.11\n\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "\nTorchText 0.12\n\n
  • \n \n \n\n\n\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "color: #6c6c6d;\n font-weight: 400;\n }\n", "source": "https://pytorch.org/blog/pytorch-1.11-new-library-releases/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"PyTorch 2.0 & XLA\u2014The Latest Cutting Edge Features\"\nauthor: Jack Cao, Milad Mohammadi, Alex Wertheim, Yeounoh Chung, Joe Spisak, Will Cromar, Shauheen Zahirazami\n", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "\nToday, we are excited to share our latest work for PyTorch/XLA 2.0. The release of PyTorch 2.0 is yet another major milestone for this storied community and we are excited to continue to be part of it. When the PyTorch/XLA project started in 2018 between Google and Meta, the focus was on bringing cutting edge Cloud TPUs to help support the PyTorch community. Along the way, others in the community such as Amazon joined the project and very quickly the community expanded. We are excited about XLA's direction and the benefits this project continues to bring to the PyTorch community. In this blog we\u2019d like to showcase some key features that have been in development, show code snippets, and illustrate the benefit through some benchmarks.", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "TorchDynamo / torch.compile (Experimental)\nTorchDynamo (Dynamo) is a Python-level JIT compiler designed to make unmodified PyTorch programs faster. It provides a clean API for compiler backends to hook in; its biggest feature is to dynamically modify Python bytecode just before execution. In the PyTorch/XLA 2.0 release, an experimental backend for Dynamo is provided for both inference and training. \nDynamo provides a Torch FX (FX) graph when it recognizes a model pattern and PyTorch/XLA uses a Lazy Tensor approach to compile the FX graph and return the compiled function. To get more insight regarding the technical details about PyTorch/XLA\u2019s dynamo implementation, check out this dev-discuss post and dynamo doc.", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "Here is a small code example of running ResNet18 with torch.compile:\nimport torch\nimport torchvision\nimport torch_xla.core.xla_model as xm\n\ndef eval_model(loader):\n device = xm.xla_device()\n xla_resnet18 = torchvision.models.resnet18().to(device)\n xla_resnet18.eval()\n dynamo_resnet18 = torch.compile(\n xla_resnet18, backend='torchxla_trace_once')\n for data, _ in loader:\n output = dynamo_resnet18(data)\n\nWith torch.compile PyTorch/XLA only traces the ResNet18 model once during the init time and executes the compiled binary everytime dynamo_resnet18 is invoked, instead of tracing the model every step. To illustrate the benefits of Dynamo+XLA, below is an inference speedup analysis to compare Dynamo and LazyTensor (without Dynamo) using TorchBench on a Cloud TPU v4-8 where the y-axis is the speedup multiplier.\n", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "Dynamo for training is in the development stage with its implementation being at an earlier stage than inference. 
Developers are welcome to test this early feature, however, in the 2.0 release, PyTorch/XLA supports the forward and backward pass graphs and not the optimizer graph; the optimizer graph is available in the nightly builds and will land in the PyTorch/XLA 2.1 release. Below is an example of what training looks like using the ResNet18 example with torch.compile:\n```\nimport torch\nimport torchvision\nimport torch_xla.core.xla_model as xm\ndef train_model(model, data, target):\n loss_fn = torch.nn.CrossEntropyLoss()\n pred = model(data)\n loss = loss_fn(pred, target)\n loss.backward()\n return pred\ndef train_model_main(loader):\n device = xm.xla_device()\n xla_resnet18 = torchvision.models.resnet18().to(device)\n xla_resnet18.train()\n dynamo_train_model = torch.compile(\n train_model, backend='aot_torchxla_trace_once')\n for data, target in loader:", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "for data, target in loader:\n output = dynamo_train_model(xla_resnet18, data, target)\n```\nNote that the backend for training is aot_torchxla_trace_once (API will be updated for stable release) whereas the inference backend is torchxla_trace_once (name subject to change). We expect to extract and execute 3 graphs per training step instead of 1 training step if you use the Lazy tensor. Below is a training speedup analysis to compare Dynamo and Lazy using the TorchBench on Cloud TPU v4-8.\n\nPJRT Runtime (Beta)", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "PJRT Runtime (Beta)\nPyTorch/XLA is migrating from XRT to the new PJRT runtime. PJRT is a better-maintained stack, with demonstrated performance advantages, including, on average, a 35% performance for training on TorchBench 2.0 models. It also supports a richer set of features enabling technologies like SPMD. In the PyTorch/XLA 2.0 release, PJRT is the default runtime for TPU and CPU; GPU support is in experimental state. The PJRT features included in the PyTorch/XLA 2.0 release are:\n\nTPU runtime implementation in libtpu using the PJRT Plugin API improves performance by up to 30%\ntorch.distributed support for TPU v2 and v3, including pjrt:// init_method (Experimental)\nSingle-host GPU support. Multi-host support coming soon. (Experimental)\n", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "Switching to PJRT requires no change (or minimal change for GPUs) to user code (see pjrt.md for more details). Runtime configuration is as simple as setting the PJRT_DEVICE environment variable to the local device type (i.e. TPU, GPU, CPU). Below are examples of using PJRT runtimes on different devices. 
\n# TPU Device\nPJRT_DEVICE=TPU python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=256 --num_epochs=1\n\n# TPU Pod Device\ngcloud alpha compute tpus tpu-vm ssh $USER-pjrt --zone=us-central2-b --project=$PROJECT --worker=all --command=\"git clone --depth=1 --branch r2.0 https://github.com/pytorch/xla.git\"\n\ngcloud alpha compute tpus tpu-vm ssh $USER-pjrt --zone=us-central2-b --project=$PROJECT --worker=all --command=\"PJRT_DEVICE=TPU python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=256 --num_epochs=1\"\n\n```\nGPU Device (Experimental)", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "\n\nGPU Device (Experimental)\nPJRT_DEVICE=GPU GPU_NUM_DEVICES=4 python3 xla/test/test_train_mp_imagenet.py --fake_data --batch_size=128 --num_epochs=1\n```\nBelow is a performance comparison between XRT and PJRT by task on TorchBench 2.0 on v4-8 TPU. To learn more about PJRT vs. XRT please review the documentation.\n\nParallelization\nGSPMD (Experimental)", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "Parallelization\nGSPMD (Experimental)\nWe are delighted to introduce General and Scalable Parallelization for ML Computation Graphs (GSPMD) in PyTorch as a new experimental data & model sharding solution. GSPMD provides automatic parallelization for common ML workloads, allowing developers to write PyTorch programs as if on a single large device and without custom sharded computation ops and/or collective communication ops. The XLA compiler transforms the single device program into a partitioned one with proper collectives, based on the user provided sharding hints. The API (RFC) will be available in the PyTorch/XLA 2.0 release as an experimental feature on a single TPU VM host. \nNext Steps for GSPMD", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "Next Steps for GSPMD\nGSPMD is experimental in 2.0 release. To bring it to Stable status, we plan to address a number of feature gaps and known issues in the following releases, including multi-host support, DTensor integration, partial replication sharding, asynchronous data loading, and checkpointing. \nFSDP (Beta)\nPyTorch/XLA introduced fully sharded data parallel (FSDP) experimental support in version 1.12. This feature is a parallel representation of PyTorch FSDP and there are subtle differences in how XLA and upstream CUDA kernels are set up. auto_wrap_policy is a new argument that enables developers to automatically specify conditions for propagating partitioning specifications to neural network submodules. auto_wrap_policys may be simply passed in as an argument when wrapping a model with FSDP. Two auto_wrap_policy callables worth noting are: size_based_auto_wrap_policy, transformer_auto_wrap_policy.", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "size_based_auto_wrap_policy enables users to wrap submodules with a minimum number of parameters. The example below wraps model submodules having at least 10M parameters.\nauto_wrap_policy = partial(size_based_auto_wrap_policy, min_num_params=1e7)\n\ntransformer_auto_wrap_policy enables users to wrap all submodules that match a specific layer type. The example below wraps model submodules named torch.nn.Conv2d. 
To learn more, review this ResNet example by Ronghang Hu.\nauto_wrap_policy = partial(transformer_auto_wrap_policy, transformer_layer_cls={torch.nn.Conv2d})\n", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "```\nPyTorch/XLA FSDP is now integrated in HuggingFace trainer class (PR) enabling users to train much larger models on PyTorch/XLA (official Hugging Face documentation). A 16B parameters GPT2 model trained on Cloud TPU v4-64 with this FSDP configuration achieved 39% hardware utilization.\n\n\nTPU Accelerator - Num Devices\n\nv4-64\n \n\n\nGPT2 Parameter Count\n\n16B\n \n\n\nLayers Wrapped with FSDP\n\nGPT2Block\n \n\n\nTFLOPs / Chip\n\n275\n \n\n\nPFLOPs / Step\n\n50\n \n\n", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "\n\n50\n \n\n\n\nHardware Utilization\n\n39%\n \n\n\nDifferences Between FSDP & GSPMD\nFSDP is a data parallelism technique that reduces device memory footprint by storing model parameters, optimizer states, and gradients all sharded. Note that the actual computation is still local to the device and requires all-gathering the sharded model parameters for both forward and backward passes, hence the name \u201cdata parallel\u201d. FSDP is one of the newest additions to PyTorch/XLA to scale large model training.", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "GSPMD on the other hand, is a general parallelization system that enables various types of parallelisms, including both data and model parallelisms. PyTorch/XLA provides a sharding annotation API and XLAShardedTensor abstraction, so a user can annotate any tensor with sharding specs in the PyTorch program. Developers don\u2019t need to manually implement sharded computations or inject collective communications ops to get it right. The XLA compiler does the work so that each computation can run in a distributed manner on multiple devices.\nExamples & Preliminary Results\nTo learn about PyTorch/XLA parallelism sharding API, visit our RFC and see the Sample Code references. Below is a simple example to enable data and model parallelism.\n```\nmodel = SimpleLinear().to(xm.xla_device())\nSharding annotate the linear layer weights.\nxs.mark_sharding(model.fc1.weight, mesh, partition_spec)\nTraining loop", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "Training loop\nmodel.train()\nfor step, (data, target) in enumerate(loader):\n optimizer.zero_grad()\n data = data.to(xm.xla_device())\n target = target.to(xm.xla_device())\n # Sharding annotate input data, we can shard any input\n # dimensions. Sharidng the batch dimension enables \n # data parallelism, sharding the feature dimension enables\n # spatial partitioning.\n xs.mark_sharding(data, mesh, partition_spec)\n ouput = model(data)\n loss = loss_fn(output, target)\n optimizer.step()\n xm.mark_step()\n```\nThe following graph highlights the memory efficiency benefits of PyTorch/XLA FSDP and SPMD on Cloud TPU v4-8 running ResNet50.\n\nClosing Thoughts\u2026", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "Closing Thoughts\u2026\nWe are excited to bring these features to the PyTorch community, and this is really just the beginning. Areas like dynamic shapes, deeper support for OpenXLA and many others are in development and we plan to put out more blogs to dive into the details. 
PyTorch/XLA is developed fully open source and we invite you to join the community of developers by filing issues, submitting pull requests, and sending RFCs on GitHub. You can try PyTorch/XLA on a variety of XLA devices including TPUs and GPUs. Here is how to get started.\nCongratulations again to the PyTorch community on this milestone!\nCheers,\nThe PyTorch Team at Google", "source": "https://pytorch.org/blog/pytorch-2.0-xla/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Compromised PyTorch-nightly dependency chain between December 25th and December 30th, 2022.\"\nauthor: The PyTorch Team\n\nIf you installed PyTorch-nightly on Linux via pip between December 25, 2022 and December 30, 2022, please uninstall it and torchtriton immediately, and use the latest nightly binaries (newer than Dec 30th 2022).\n$ pip3 uninstall -y torch torchvision torchaudio torchtriton\n$ pip3 cache purge\n\nPyTorch-nightly Linux packages installed via pip during that time installed a dependency, torchtriton, which was compromised on the Python Package Index (PyPI) code repository and ran a malicious binary. This is what is known as a supply chain attack and directly affects dependencies for packages that are hosted on public package indices.\nNOTE: Users of the PyTorch stable packages are not affected by this issue.\nHow to check if your Python environment is affected", "source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"} {"text": "The following command searches for the malicious binary in the torchtriton package (PYTHON_SITE_PACKAGES/triton/runtime/triton) and prints out whether your current Python environment is affected or not.\npython3 -c \"import pathlib;import importlib.util;s=importlib.util.find_spec('triton'); affected=any(x.name == 'triton' for x in (pathlib.Path(s.submodule_search_locations[0] if s is not None else '/' ) / 'runtime').glob('*'));print('You are {}affected'.format('' if affected else 'not '))\"\n\nThe malicious binary is executed when the triton package is imported, which requires explicit code to do and is not PyTorch\u2019s default behavior.\nThe Background", "source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"} {"text": "The Background\nAt around 4:40pm GMT on December 30 (Friday), we learned about a malicious dependency package (torchtriton) that was uploaded to the Python Package Index (PyPI) code repository with the same package name as the one we ship on the PyTorch nightly package index. Since the PyPI index takes precedence, this malicious package was being installed instead of the version from our official repository. This design enables somebody to register a package by the same name as one that exists in a third party index, and pip will install their version by default.\nThis malicious package has the same name torchtriton but added in code that uploads sensitive data from the machine.\nWhat we know\ntorchtriton on PyPI contains a malicious triton binary which is installed at PYTHON_SITE_PACKAGES/triton/runtime/triton. 
Its SHA256 hash is listed below.", "source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"} {"text": "SHA256(triton)= 2385b29489cd9e35f92c072780f903ae2e517ed422eae67246ae50a5cc738a0e\nThe binary\u2019s main function does the following:\n\nGet system information\nnameservers from /etc/resolv.conf\nhostname from gethostname()\ncurrent username from getlogin()\ncurrent working directory name from getcwd()\nenvironment variables\nRead the following files\n/etc/hosts\n/etc/passwd\nThe first 1,000 files in $HOME/*\n$HOME/.gitconfig\n$HOME/.ssh/*\nUpload all of this information, including file contents, via encrypted DNS queries to the domain *.h4ck[.]cfd, using the DNS server wheezy[.]io\n\nThe binary\u2019s file upload functionality is limited to files less than 99,999 bytes in size. It also uploads only the first 1,000 files in $HOME (but all files < 99,999 bytes in the .ssh directory).\nSteps taken towards mitigation", "source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"} {"text": "Steps taken towards mitigation\n\ntorchtriton has been removed as a dependency for our nightly packages and replaced with pytorch-triton (pytorch/pytorch#91539) and a dummy package registered on PyPI (so that this issue doesn\u2019t repeat)\nAll nightly packages that depend on torchtriton have been removed from our package indices at https://download.pytorch.org until further notice\nWe have reached out to the PyPI security team to get proper ownership of the torchtriton package on PyPI and to delete the malicious version\n", "source": "https://pytorch.org/blog/compromised-nightly-dependency/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Running PyTorch Models on Jetson Nano'\nauthor: Jeff Tang, Hamid Shojanazeri, Geeta Chauhan\nfeatured-img: 'assets/images/pytorch-logo.jpg'\n\nOverview\nNVIDIA Jetson Nano, part of the Jetson family of products or Jetson modules, is a small yet powerful Linux (Ubuntu) based embedded computer with 2/4GB GPU. With it, you can run many PyTorch models efficiently. This document summarizes our experience of running different deep learning models using 3 different mechanisms on Jetson Nano:\n\nJetson Inference the higher-level NVIDIA API that has built-in support for running most common computer vision models which can be transfer-learned with PyTorch on the Jetson platform.\n", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "\n\nTensorRT, an SDK for high-performance inference from NVIDIA that requires the conversion of a PyTorch model to ONNX, and then to the TensorRT engine file that the TensorRT runtime can run.\n\n\nPyTorch with the direct PyTorch API torch.nn for inference.\n\n\nSetting up Jetson Nano\nAfter purchasing a Jetson Nano here, simply follow the clear step-by-step instructions to download and write the Jetson Nano Developer Kit SD Card Image to a microSD card, and complete the setup. 
After the setup is done and the Nano is booted, you\u2019ll see the standard Linux prompt along with the username and the Nano name used in the setup.\nTo check the GPU status on Nano, run the following commands:\nsudo pip3 install jetson-stats\nsudo jtop\n\nYou\u2019ll see information, including:\n", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "\n\n\nYou can also see the installed CUDA version:\n$ ls -lt /usr/local\nlrwxrwxrwx 1 root root 22 Aug 2 01:47 cuda -> /etc/alternatives/cuda\nlrwxrwxrwx 1 root root 25 Aug 2 01:47 cuda-10 -> /etc/alternatives/cuda-10\ndrwxr-xr-x 12 root root 4096 Aug 2 01:47 cuda-10.2\n\nTo use a camera on Jetson Nano, for example, Arducam 8MP IMX219, follow the instructions here or run the commands below after installing a camera module:\ncd ~\nwget https://github.com/ArduCAM/MIPI_Camera/releases/download/v0.0.3/install_full.sh\nchmod +x install_full.sh\n./install_full.sh -m arducam\n", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "./install_full.sh -m arducam\n\nAnother way to do this is to use the original Jetson Nano camera driver:\n\n\nsudo dpkg -r arducam-nvidia-l4t-kernel\nsudo shutdown -r now\n\nThen, use ls /dev/video0 to confirm the camera is found:\n\n\n$ ls /dev/video0\n/dev/video0\n\nAnd finally, the following command to see the camera in action:\n\n\nnvgstcapture-1.0 --orientation=2\n\n### Using Jetson Inference\nNVIDIA [Jetson Inference](https://github.com/dusty-nv/jetson-inference) API offers the easiest way to run image recognition, object detection, semantic segmentation, and pose estimation models on Jetson Nano. Jetson Inference has TensorRT built-in, so it\u2019s very fast. \n\nTo test run Jetson Inference, first clone the repo and download the models:\n\n\ngit clone --recursive https://github.com/dusty-nv/jetson-inference\ncd jetson-inference\n```", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "cd jetson-inference\n\nThen use the pre-built [Docker Container](https://github.com/dusty-nv/jetson-inference/blob/master/docs/jetpack-setup-2.md) that already has PyTorch installed to test run the models:\n\n\ndocker/run.sh --volume ~/jetson_inference:/jetson_inference\n\nTo run image recognition, object detection, semantic segmentation, and pose estimation models on test images, use the following:\n\n\ncd build/aarch64/bin\n./imagenet.py images/jellyfish.jpg /jetson_inference/jellyfish.jpg\n./segnet.py images/dog.jpg /jetson_inference/dog.jpeg\n./detectnet.py images/peds_0.jpg /jetson_inference/peds_0.jpg\n./posenet.py images/humans_0.jpg /jetson_inference/pose_humans_0.jpg\n\nFour result images from running the four different models will be generated. Exit the docker image to see them:\n\n\n$ ls -lt ~/jetson_inference/\n-rw-r--r-- 1 root root 68834 Oct 15 21:30 pose_humans_0.jpg\n-rw-r--r-- 1 root root 914058 Oct 15 21:30 peds_0.jpg\n-rw-r--r-- 1 root root 666239 Oct 15 21:30 dog.jpeg", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "-rw-r--r-- 1 root root 179760 Oct 15 21:29 jellyfish.jpg\n\n\n
(Figures: the four result images produced above by imagenet.py, segnet.py, detectnet.py, and posenet.py.)
    \n\nYou can also use the docker image to run PyTorch models because the image has PyTorch, torchvision and torchaudio installed:\n\n\npip list|grep torch\ntorch (1.9.0)\ntorchaudio (0.9.0a0+33b2469)\ntorchvision (0.10.0a0+300a8a4)\n```", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "torchvision (0.10.0a0+300a8a4)\n```\nAlthough Jetson Inference includes models already converted to the TensorRT engine file format, you can fine-tune the models by following the steps in Transfer Learning with PyTorch (for Jetson Inference) here.\nUsing TensorRT\nTensorRT is an SDK for high-performance inference from NVIDIA. Jetson Nano supports TensorRT via the Jetpack SDK, included in the SD Card image used to set up Jetson Nano. To confirm that TensorRT is already installed in Nano, run dpkg -l|grep -i tensorrt:\n\n\n", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "\nTheoretically, TensorRT can be used to \u201ctake a trained PyTorch model and optimize it to run more efficiently during inference on an NVIDIA GPU.\u201d Follow the instructions and code in the notebook to see how to use PyTorch with TensorRT through ONNX on a torchvision Resnet50 model:\n\n\nHow to convert the model from PyTorch to ONNX;\n\n\nHow to convert the ONNX model to a TensorRT engine file; \n\n\nHow to run the engine file with the TensorRT runtime for performance improvement: inference time improved from the original 31.5ms/19.4ms (FP32/FP16 precision) to 6.28ms (TensorRT).\n\n", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "You can replace the Resnet50 model in the notebook code with another PyTorch model, go through the conversion process above, and run the finally converted model TensorRT engine file with the TensorRT runtime to see the optimized performance. 
But be aware that due to the Nano GPU memory size, models larger than 100MB are likely to fail to run, with the following error information:\nError Code 1: Cuda Runtime (all CUDA-capable devices are busy or unavailable)\nYou may also see an error when converting a PyTorch model to ONNX model, which may be fixed by replacing: \ntorch.onnx.export(resnet50, dummy_input, \"resnet50_pytorch.onnx\", verbose=False)\nwith:\ntorch.onnx.export(model, dummy_input, \"deeplabv3_pytorch.onnx\", opset_version=11, verbose=False)\nUsing PyTorch\nFirst, to download and install PyTorch 1.9 on Nano, run the following commands (see here for more information):\n```", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "wget https://nvidia.box.com/shared/static/p57jwntv436lfrd78inwl7iml6p13fzh.whl -O torch-1.8.0-cp36-cp36m-linux_aarch64.whl -O torch-1.9.0-cp36-cp36m-linux_aarch64.whl\nsudo apt-get install python3-pip libopenblas-base libopenmpi-dev \npip3 install Cython\npip3 install numpy torch-1.9.0-cp36-cp36m-linux_aarch64.whl\n\nTo download and install torchvision 0.10 on Nano, run the commands below:\nhttps://drive.google.com/uc?id=1tU6YlPjrP605j4z8PMnqwCSoP6sSC91Z\npip3 install torchvision-0.10.0a0+300a8a4-cp36-cp36m-linux_aarch64.whl\n\nAfter the steps above, run this to confirm:\n$ pip3 list|grep torch\ntorch (1.9.0)\ntorchvision (0.10.0)\n\nYou can also use the docker image described in the section Using Jetson Inference (which also has PyTorch and torchvision installed), to skip the manual steps above.\nThe official YOLOv5 repo is used to run the PyTorch YOLOv5 model on Jetson Nano. After logging in to Jetson Nano, follow the steps below:", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "\nGet the repo and install what\u2019s required:\n\ngit clone https://github.com/ultralytics/yolov5\ncd yolov5\npip install -r requirements.txt\n\n\nRun python3 detect.py, which by default uses the PyTorch yolov5s.pt model. You should see something like:\n\ndetect: weights=yolov5s.pt, source=data/images, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False\nYOLOv5 \ud83d\ude80 v5.0-499-g48b00db torch 1.9.0 CUDA:0 (NVIDIA Tegra X1, 3956.1015625MB)\n\nFusing layers... \nModel Summary: 224 layers, 7266973 parameters, 0 gradients\nimage 1/5 /home/jeff/repos/yolov5-new/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, 1 fire hydrant, Done. (0.142s)\n...\n", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "...\n\n**The inference time on Jetson Nano GPU is about 140ms, more than twice as fast as the inference time on iOS or Android (about 330ms).**\n\nIf you get an error `\u201cImportError: The _imagingft C module is not installed.\u201d` then you need to reinstall pillow:\n\nsudo apt-get install libpng-dev\nsudo apt-get install libfreetype6-dev\npip3 uninstall pillow\npip3 install --no-cache-dir pillow\n\nAfter successfully completing the `python3 detect.py` run, the object detection results of the test images located in `data/images` will be in the `runs/detect/exp` directory. 
To test the detection with a live webcam instead of local images, use the `--source 0` parameter when running `python3 detect.py`):\n\n\n~/repos/yolov5$ ls -lt runs/detect/exp10\ntotal 1456\n-rw-rw-r-- 1 jeff jeff 254895 Oct 15 16:12 zidane.jpg\n-rw-rw-r-- 1 jeff jeff 202674 Oct 15 16:12 test3.png\n-rw-rw-r-- 1 jeff jeff 217117 Oct 15 16:12 test2.jpg\n-rw-rw-r-- 1 jeff jeff 305826 Oct 15 16:12 test1.png", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "-rw-rw-r-- 1 jeff jeff 495760 Oct 15 16:12 bus.jpg\n```\nUsing the same test files used in the PyTorch iOS YOLOv5 demo app or Android YOLOv5 demo app, you can compare the results generated with running the YOLOv5 PyTorch model on mobile devices and Jetson Nano:\n\n\n\n\nFigure 1. PyTorch YOLOv5 on Jetson Nano. \n\n\n\n\nFigure 2. PyTorch YOLOv5 on iOS.", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "\nFigure 2. PyTorch YOLOv5 on iOS. \n\n\n\n\nFigure 3. PyTorch YOLOv5 on Android. \nSummary\nBased on our experience of running different PyTorch models for potential demo apps on Jetson Nano, we see that even Jetson Nano, a lower-end of the Jetson family of products, provides a powerful GPU and embedded system that can directly run some of the latest PyTorch models, pre-trained or transfer learned, efficiently.\nBuilding PyTorch demo apps on Jetson Nano can be similar to building PyTorch apps on Linux, but you can also choose to use TensorRT after converting the PyTorch models to the TensorRT engine file format.", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "But if you just need to run some common computer vision models on Jetson Nano using NVIDIA\u2019s Jetson Inference which supports image recognition, object detection, semantic segmentation, and pose estimation models, then this is the easiest way.\nReferences\nTorch-TensorRT, a compiler for PyTorch via TensorRT:\nhttps://github.com/NVIDIA/Torch-TensorRT/\nJetson Inference docker image details:\nhttps://github.com/dusty-nv/jetson-inference/blob/master/docs/aux-docker.md\nA guide to using TensorRT on the NVIDIA Jetson Nano:\nhttps://docs.donkeycar.com/guide/robot_sbc/tensorrt_jetson_nano/ \nincluding:\n\nUse Jetson as a portable GPU device to run an NN chess engine model:\n", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "https://medium.com/@ezchess/jetson-lc0-running-leela-chess-zero-on-nvidia-jetson-a-portable-gpu-device-a213afc9c018\n\nA MaskEraser app using PyTorch and torchvision, installed directly with pip:\nhttps://github.com/INTEC-ATI/MaskEraser#install-pytorch\n", "source": "https://pytorch.org/blog/running-pytorch-models-on-jetson-nano/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch 1.9 Release, including torch.linalg and Mobile Interpreter'\nauthor: Team PyTorch \n\nWe are excited to announce the release of PyTorch 1.9. The release is composed of more than 3,400 commits since 1.8, made by 398 contributors. The release notes are available here. Highlights include:\n1. Major improvements to support scientific computing, including torch.linalg, torch.special, and Complex Autograd\n2. Major improvements in on-device binary size with Mobile Interpreter\n3. Native support for elastic-fault tolerance training through the upstreaming of TorchElastic into PyTorch Core\n4. 
Major updates to the PyTorch RPC framework to support large scale distributed training with GPU support\n5. New APIs to optimize performance and packaging for model inference deployment \n6. Support for Distributed training, GPU utilization and SM efficiency in the PyTorch Profiler", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "Along with 1.9, we are also releasing major updates to the PyTorch libraries, which you can read about in this blog post. \nWe\u2019d like to thank the community for their support and work on this latest release. We\u2019d especially like to thank Quansight and Microsoft for their contributions.\nFeatures in PyTorch releases are classified as Stable, Beta, and Prototype. You can learn more about the definitions in this blog post. \nFrontend APIs\n(Stable) torch.linalg", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "Frontend APIs\n(Stable) torch.linalg\nIn 1.9, the torch.linalg module is moving to a stable release. Linear algebra is essential to deep learning and scientific computing, and the torch.linalg module extends PyTorch\u2019s support for it with implementations of every function from NumPy\u2019s linear algebra module (now with support for accelerators and autograd) and more, like torch.linalg.matrix_norm and torch.linalg.householder_product. This makes the module immediately familiar to users who have worked with NumPy. Refer to the documentation here.", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "We plan to publish another blog post with more details on the torch.linalg module next week!\n(Stable) Complex Autograd\nThe Complex Autograd feature, released as a beta in PyTorch 1.8, is now stable. Since the beta release, we have extended support for Complex Autograd for over 98% operators in PyTorch 1.9, improved testing for complex operators by adding more OpInfos, and added greater validation through TorchAudio migration to native complex tensors (refer to this issue). \nThis feature provides users the functionality to calculate complex gradients and optimize real valued loss functions with complex variables. This is a required feature for multiple current and downstream prospective users of complex numbers in PyTorch like TorchAudio, ESPNet, Asteroid, and FastMRI. Refer to the documentation for more details. \n(Stable) torch.use_deterministic_algorithms()", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "To help with debugging and writing reproducible programs, PyTorch 1.9 includes a torch.use_determinstic_algorithms option. When this setting is enabled, operations will behave deterministically, if possible, or throw a runtime error if they might behave nondeterministically. 
Here are a couple examples:\n>>> a = torch.randn(100, 100, 100, device='cuda').to_sparse()\n>>> b = torch.randn(100, 100, 100, device='cuda')\n\n# Sparse-dense CUDA bmm is usually nondeterministic\n>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()\nFalse\n\n>>> torch.use_deterministic_algorithms(True)\n\n# Now torch.bmm gives the same result each time, but with reduced performance\n>>> torch.bmm(a, b).eq(torch.bmm(a, b)).all().item()\nTrue\n\n# CUDA kthvalue has no deterministic algorithm, so it throws a runtime error\n>>> torch.zeros(10000, device='cuda').kthvalue(1)\nRuntimeError: kthvalue CUDA does not have a deterministic implementation...\n", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "```\nPyTorch 1.9 adds deterministic implementations for a number of indexing operations, too, including index_add, index_copy, and index_put with accum=False. For more details, refer to the documentation and reproducibility note.\n(Beta) torch.special\nA torch.special module, analogous to SciPy\u2019s special module, is now available in beta. This module contains many functions useful for scientific computing and working with distributions such as iv, ive, erfcx, logerfc, and logerfcx. Refer to the documentation for more details. \n(Beta) nn.Module parameterization", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "(Beta) nn.Module parameterization\nnn.Module parameterization allows users to parametrize any parameter or buffer of an nn.Module without modifying the nn.Module itself. It allows you to constrain the space in which your parameters live without the need for special optimization methods.\nThis also contains a new implementation of the spectral_norm parametrization for PyTorch 1.9. More parametrization will be added to this feature (weight_norm, matrix constraints and part of pruning) for the feature to become stable in 1.10. For more details, refer to the documentation and tutorial.\nPyTorch Mobile\n(Beta) Mobile Interpreter", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "PyTorch Mobile\n(Beta) Mobile Interpreter\nWe are releasing Mobile Interpreter, a streamlined version of the PyTorch runtime, in beta. The Interpreter will execute PyTorch programs in edge devices, with reduced binary size footprint. \nMobile Interpreter is one of the top requested features for PyTorch Mobile. This new release will significantly reduce binary size compared with the current on-device runtime. In order for you to get the binary size improvements with our interpreter (which can reduce the binary size up to ~75% for a typical application) follow these instructions. As an example, using Mobile Interpreter, we can reach 2.6 MB compressed with MobileNetV2 in arm64-v7a Android. With this latest release we are making it much simpler to integrate the interpreter by providing pre-built libraries for iOS and Android.\nTorchVision Library", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "TorchVision Library\nStarting from 1.9, users can use the TorchVision library on their iOS/Android apps. The Torchvision library contains the C++ TorchVision ops and needs to be linked together with the main PyTorch library for iOS, for Android it can be added as a gradle dependency. This allows using TorchVision prebuilt MaskRCNN operators for object detections and segmentation. 
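As a rough, hedged sketch of the Python side of this workflow (the model choice and output file name here are placeholders, and the native library linking described above still happens in the app project itself), exporting a detection model for the mobile runtime might look like this:

```python
import torch
from torchvision.models.detection import maskrcnn_resnet50_fpn

# Placeholder model and file name; detection models generally need scripting
# rather than tracing because their forward pass has data-dependent control flow.
model = maskrcnn_resnet50_fpn(pretrained=True).eval()
scripted = torch.jit.script(model)
scripted._save_for_lite_interpreter("maskrcnn.ptl")  # consumed by the iOS/Android app code
```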
To learn more about the library, please refer to our tutorials and demo apps. \nDemo apps", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "Demo apps\nWe are releasing a new video app based on PyTorch Video library and an updated speech recognition app based on the latest torchaudio, wave2vec model. Both are available on iOS and Android. In addition, we have updated the seven Computer Vision and three Natural Language Processing demo apps, including the HuggingFace DistilBERT, and the DeiT vision transformer models, with PyTorch Mobile v1.9. With the addition of these two apps, we now offer a full suite of demo apps covering image, text, audio, and video. To get started check out our iOS demo apps and Android demo apps.\n\n\n\nDistributed Training\n(Beta) TorchElastic is now part of core", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "(Beta) TorchElastic is now part of core\nTorchElastic, which was open sourced over a year ago in the pytorch/elastic github repository, is a runner and coordinator for PyTorch worker processes. Since then, it has been adopted by various distributed torch use-cases: 1) deepspeech.pytorch 2) pytorch-lightning 3) Kubernetes CRD. Now, it is part of PyTorch core.", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "As its name suggests, the core function of TorcheElastic is to gracefully handle scaling events. A notable corollary of elasticity is that peer discovery and rank assignment are built into TorchElastic enabling users to run distributed training on preemptible instances without requiring a gang scheduler. As a side note, etcd used to be a hard dependency of TorchElastic. With the upstream, this is no longer the case since we have added a \u201cstandalone\u201d rendezvous based on c10d::Store. For more details, refer to the documentation.\n(Beta) Distributed Training Updates\nIn addition to TorchElastic, there are a number of beta features available in the distributed package:", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "\n(Beta) CUDA support is available in RPC: Compared to CPU RPC and general-purpose RPC frameworks, CUDA RPC is a much more efficient way for P2P Tensor communication. It is built on top of TensorPipe which can automatically choose a communication channel for each Tensor based on Tensor device type and channel availability on both the caller and the callee. Existing TensorPipe channels cover NVLink, InfiniBand, SHM, CMA, TCP, etc. See this recipe for how CUDA RPC helps to attain 34x speedup compared to CPU RPC.\n", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "\n(Beta) ZeroRedundancyOptimizer: ZeroRedundancyOptimizer can be used in conjunction with DistributedDataParallel to reduce the size of per-process optimizer states. The idea of ZeroRedundancyOptimizer comes from DeepSpeed/ZeRO project and Marian, where the optimizer in each process owns a shard of model parameters and their corresponding optimizer states. When running step(), each optimizer only updates its own parameters, and then uses collective communication to synchronize updated parameters across all processes. 
Refer to this documentation and this tutorial to learn more.\n", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "\n(Beta) Support for profiling distributed collectives: PyTorch\u2019s profiler tools, torch.profiler and torch.autograd.profiler, are able to profile distributed collectives and point to point communication primitives including allreduce, alltoall, allgather, send/recv, etc. This is enabled for all backends supported natively by PyTorch: gloo, mpi, and nccl. This can be used to debug performance issues, analyze traces that contain distributed communication, and gain insight into performance of applications that use distributed training. To learn more, refer to this documentation. \n\nPerformance Optimization and Tooling\n(Stable) Freezing API", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "(Stable) Freezing API\nModule Freezing is the process of inlining module parameters and attributes values as constants into the TorchScript internal representation. This allows further optimization and specialization of your program, both for TorchScript optimizations and lowering to other backends. It is used by optimize_for_mobile API, ONNX, and others. \nFreezing is recommended for model deployment. It helps TorchScript JIT optimizations optimize away overhead and bookkeeping that is necessary for training, tuning, or debugging PyTorch models. It enables graph fusions that are not semantically valid on non-frozen graphs - such as fusing Conv-BN. For more details, refer to the documentation.\n(Beta) PyTorch Profiler\n\n\n", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "\nThe new PyTorch Profiler graduates to beta and leverages Kineto for GPU profiling, TensorBoard for visualization and is now the standard across our tutorials and documentation. \nPyTorch 1.9 extends support for the new torch.profiler API to more builds, including Windows and Mac and is recommended in most cases instead of the previous torch.autograd.profiler API. The new API supports existing profiler features, integrates with CUPTI library (Linux-only) to trace on-device CUDA kernels and provides support for long-running jobs, e.g.:\n```python\ndef trace_handler(p):\n output = p.key_averages().table(sort_by=\"self_cuda_time_total\", row_limit=10)\n print(output)\n p.export_chrome_trace(\"/tmp/trace_\" + str(p.step_num) + \".json\")\nwith profile(\n activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],\n # schedule argument specifies the iterations on which the profiler is active\n schedule=torch.profiler.schedule(\n wait=1,\n warmup=1,", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "wait=1,\n warmup=1,\n active=2),\n # on_trace_ready argument specifies the handler for the traces\n on_trace_ready=trace_handler\n) as p:\n for idx in range(8):\n model(inputs)\n # profiler will trace iterations 2 and 3, and then 6 and 7 (counting from zero)\n p.step()\n```\nMore usage examples can be found on the profiler recipe page. 
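For reference, here is a self-contained variant of the snippet above with the imports spelled out; the toy model and inputs are placeholders chosen only so the sketch runs on its own:

```python
import torch
import torch.nn as nn
from torch.profiler import profile, ProfilerActivity, schedule

# Placeholder model and inputs, just to make the sketch runnable end to end.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10)).cuda()
inputs = torch.randn(64, 512, device="cuda")

def trace_handler(p):
    print(p.key_averages().table(sort_by="self_cuda_time_total", row_limit=10))
    p.export_chrome_trace("/tmp/trace_" + str(p.step_num) + ".json")

with profile(
    activities=[ProfilerActivity.CPU, ProfilerActivity.CUDA],
    # profiler is active on iterations 2-3 and 6-7 (counting from zero)
    schedule=schedule(wait=1, warmup=1, active=2),
    on_trace_ready=trace_handler,
) as p:
    for _ in range(8):
        model(inputs)
        p.step()
```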
\nThe PyTorch Profiler Tensorboard plugin has new features for:\n* Distributed Training summary view with communications overview for NCCL\n* GPU Utilization and SM Efficiency in Trace view and GPU operators view\n* Memory Profiling view\n* Jump to source when launched from Microsoft VSCode\n* Ability for load traces from cloud object storage systems \n(Beta) Inference Mode API", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "(Beta) Inference Mode API\nInference Mode API allows significant speed-up for inference workloads while remaining safe and ensuring no incorrect gradients can ever be computed. It offers the best possible performance when no autograd is required. For more details, refer to the documentation for inference mode itself and the documentation explaining when to use it and the difference with no_grad mode.\n(Beta) torch.package", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "(Beta) torch.package\ntorch.package is a new way to package PyTorch models in a self-contained, stable format. A package will include both the model\u2019s data (e.g. parameters, buffers) and its code (model architecture). Packaging a model with its full set of Python dependencies, combined with a description of a conda environment with pinned versions, can be used to easily reproduce training. Representing a model in a self-contained artifact will also allow it to be published and transferred throughout a production ML pipeline while retaining the flexibility of a pure-Python representation. For more details, refer to the documentation.\n(Prototype) prepare_for_inference", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "(Prototype) prepare_for_inference\nprepare_for_inference is a new prototype feature that takes in a module and performs graph-level optimizations to improve inference performance, depending on the device. It is meant to be a PyTorch-native option that requires minimal changes to user\u2019s workflows. For more details, see the documentation for the Torchscript version here or the FX version here.\n(Prototype) Profile-directed typing in TorchScript", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "TorchScript has a hard requirement for source code to have type annotations in order for compilation to be successful. For a long time, it was only possible to add missing or incorrect type annotations through trial and error (i.e., by fixing the type-checking errors generated by torch.jit.script one by one), which was inefficient and time consuming. Now, we have enabled profile directed typing for torch.jit.script by leveraging existing tools like MonkeyType, which makes the process much easier, faster, and more efficient. For more details, refer to the documentation.", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "Thanks for reading. If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Facebook, Twitter, Medium, YouTube, or LinkedIn. 
\nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-1.9-released/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Announcing PyTorch Developer Day 2020'\nauthor: Team PyTorch\n\nStarting this year, we plan to host two separate events for PyTorch: one for developers and users to discuss core technical development, ideas and roadmaps called \u201cDeveloper Day\u201d, and another for the PyTorch ecosystem and industry communities to showcase their work and discover opportunities to collaborate called \u201cEcosystem Day\u201d (scheduled for early 2021).\n\n\n\nThe PyTorch Developer Day (#PTD2) is kicking off on November 12, 2020, 8AM PST with a full day of technical talks on a variety of topics, including updates to the core framework, new tools and libraries to support development across a variety of domains. You'll also see talks covering the latest research around systems and tooling in ML.", "source": "https://pytorch.org/blog/pytorch-developer-day-2020/", "category": "pytorch blogs"} {"text": "For Developer Day, we have an online networking event limited to people composed of PyTorch maintainers and contributors, long-time stakeholders and experts in areas relevant to PyTorch\u2019s future. Conversations from the networking event will strongly shape the future of PyTorch. Hence, invitations are required to attend the networking event.\nAll talks will be livestreamed and available to the public.\n* Livestream event page\n* Apply for an invitation to the networking event\nVisit the event website to learn more. We look forward to welcoming you to PyTorch Developer Day on November 12th! \nThank you,\nThe PyTorch team", "source": "https://pytorch.org/blog/pytorch-developer-day-2020/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch 1.7 released w/ CUDA 11, New APIs for FFTs, Windows support for Distributed training and more'\nauthor: Team PyTorch\n\nToday, we\u2019re announcing the availability of PyTorch 1.7, along with updated domain libraries. The PyTorch 1.7 release includes a number of new APIs including support for NumPy-Compatible FFT operations, profiling tools and major updates to both distributed data parallel (DDP) and remote procedure call (RPC) based distributed training. In addition, several features moved to stable including custom C++ Classes, the memory profiler, extensions via custom tensor-like objects, user async functions in RPC and a number of other features in torch.distributed such as Per-RPC timeout, DDP dynamic bucketing and RRef helper. \nA few of the highlights include:\n* CUDA 11 is now officially supported with binaries available at PyTorch.org", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "\nUpdates and additions to profiling and performance for RPC, TorchScript and Stack traces in the autograd profiler\n(Beta) Support for NumPy compatible Fast Fourier transforms (FFT) via torch.fft\n(Prototype) Support for Nvidia A100 generation GPUs and native TF32 format \n(Prototype) Distributed training on Windows now supported\ntorchvision\n(Stable) Transforms now support Tensor inputs, batch computation, GPU, and TorchScript\n(Stable) Native image I/O for JPEG and PNG formats\n(Beta) New Video Reader API\ntorchaudio\n(Stable) Added support for speech rec (wav2letter), text to speech (WaveRNN) and source separation (ConvTasNet)\n\nTo reiterate, starting PyTorch 1.6, features are now classified as stable, beta and prototype. 
You can see the detailed announcement here. Note that the prototype features listed in this blog are available as part of this release.", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "Find the full release notes here. \nFront End APIs\n[Beta] NumPy Compatible torch.fft module\nFFT-related functionality is commonly used in a variety of scientific fields like signal processing. While PyTorch has historically supported a few FFT-related functions, the 1.7 release adds a new torch.fft module that implements FFT-related functions with the same API as NumPy.\nThis new module must be imported to be used in the 1.7 release, since its name conflicts with the historic (and now deprecated) torch.fft function.\nExample usage:\n```python\n\n\n\nimport torch.fft\nt = torch.arange(4)\nt\ntensor([0, 1, 2, 3])\ntorch.fft.fft(t)\ntensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])\nt = tensor([0.+1.j, 2.+3.j, 4.+5.j, 6.+7.j])\ntorch.fft.fft(t)\ntensor([12.+16.j, -8.+0.j, -4.-4.j, 0.-8.j])\n ```\n\n\n\n\nDocumentation\n\n[Beta] C++ Support for Transformer NN Modules", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "[Beta] C++ Support for Transformer NN Modules\nSince PyTorch 1.5, we\u2019ve continued to maintain parity between the python and C++ frontend APIs. This update allows developers to use the nn.transformer module abstraction from the C++ Frontend. And moreover, developers no longer need to save a module from python/JIT and load into C++ as it can now be used it in C++ directly.\n* Documentation\n[Beta] torch.set_deterministic", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "[Beta] torch.set_deterministic\nReproducibility (bit-for-bit determinism) may help identify errors when debugging or testing a program. To facilitate reproducibility, PyTorch 1.7 adds the torch.set_deterministic(bool) function that can direct PyTorch operators to select deterministic algorithms when available, and to throw a runtime error if an operation may result in nondeterministic behavior. By default, the flag this function controls is false and there is no change in behavior, meaning PyTorch may implement its operations nondeterministically by default. \nMore precisely, when this flag is true:\n* Operations known to not have a deterministic implementation throw a runtime error;\n* Operations with deterministic variants use those variants (usually with a performance penalty versus the non-deterministic version); and\n* torch.backends.cudnn.deterministic = True is set.", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "Note that this is necessary, but not sufficient, for determinism within a single run of a PyTorch program. Other sources of randomness like random number generators, unknown operations, or asynchronous or distributed computation may still cause nondeterministic behavior.\nSee the documentation for torch.set_deterministic(bool) for the list of affected operations.\n* RFC\n* Documentation\nPerformance & Profiling\n[Beta] Stack traces added to profiler", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "[Beta] Stack traces added to profiler\nUsers can now see not only operator name/inputs in the profiler output table but also where the operator is in the code. The workflow requires very little change to take advantage of this capability. 
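A minimal, hedged sketch of that workflow is shown here; the tensors are placeholders, and the two new parameters it passes are described just below:

```python
import torch
from torch.autograd import profiler

x = torch.randn(128, 128)
# with_stack=True records the source location from which each operator was called.
with profiler.profile(with_stack=True) as prof:
    y = torch.matmul(x, x).relu()

# group_by_stack_n groups the averaged results by the top-n frames of each stack trace.
print(prof.key_averages(group_by_stack_n=5).table(sort_by="self_cpu_time_total"))
```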
The user uses the autograd profiler as before but with optional new parameters: with_stack and group_by_stack_n. Caution: regular profiling runs should not use this feature as it adds significant overhead.\n* Detail\n* Documentation\nDistributed Training & RPC\n[Stable] TorchElastic now bundled into PyTorch docker image", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "Torchelastic offers a strict superset of the current torch.distributed.launch CLI with the added features for fault-tolerance and elasticity. If the user is not be interested in fault-tolerance, they can get the exact functionality/behavior parity by setting max_restarts=0 with the added convenience of auto-assigned RANK and MASTER_ADDR|PORT (versus manually specified in torch.distributed.launch).\nBy bundling torchelastic in the same docker image as PyTorch, users can start experimenting with TorchElastic right-away without having to separately install torchelastic. In addition to convenience, this work is a nice-to-have when adding support for elastic parameters in the existing Kubeflow\u2019s distributed PyTorch operators.\n* Usage examples and how to get started\n[Beta] Support for uneven dataset inputs in DDP", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "[Beta] Support for uneven dataset inputs in DDP\nPyTorch 1.7 introduces a new context manager to be used in conjunction with models trained using torch.nn.parallel.DistributedDataParallel to enable training with uneven dataset size across different processes. This feature enables greater flexibility when using DDP and prevents the user from having to manually ensure dataset sizes are the same across different process. With this context manager, DDP will handle uneven dataset sizes automatically, which can prevent errors or hangs at the end of training.\n* RFC\n* Documentation\n[Beta] NCCL Reliability - Async Error/Timeout Handling", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "In the past, NCCL training runs would hang indefinitely due to stuck collectives, leading to a very unpleasant experience for users. This feature will abort stuck collectives and throw an exception/crash the process if a potential hang is detected. When used with something like torchelastic (which can recover the training process from the last checkpoint), users can have much greater reliability for distributed training. This feature is completely opt-in and sits behind an environment variable that needs to be explicitly set in order to enable this functionality (otherwise users will see the same behavior as before).\n* RFC\n* Documentation\n[Beta] TorchScript rpc_remote and rpc_sync", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "torch.distributed.rpc.rpc_async has been available in TorchScript in prior releases. For PyTorch 1.7, this functionality will be extended the remaining two core RPC APIs, torch.distributed.rpc.rpc_sync and torch.distributed.rpc.remote. 
This will complete the major RPC APIs targeted for support in TorchScript, it allows users to use the existing python RPC APIs within TorchScript (in a script function or script method, which releases the python Global Interpreter Lock) and could possibly improve application performance in multithreaded environment.\n* Documentation\n* Usage examples\n[Beta] Distributed optimizer with TorchScript support", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "PyTorch provides a broad set of optimizers for training algorithms, and these have been used repeatedly as part of the python API. However, users often want to use multithreaded training instead of multiprocess training as it provides better resource utilization and efficiency in the context of large scale distributed training (e.g. Distributed Model Parallel) or any RPC-based training application). Users couldn\u2019t do this with with distributed optimizer before because we need to get rid of the python Global Interpreter Lock (GIL) limitation to achieve this.", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "In PyTorch 1.7, we are enabling the TorchScript support in distributed optimizer to remove the GIL, and make it possible to run optimizer in multithreaded applications. The new distributed optimizer has the exact same interface as before but it automatically converts optimizers within each worker into TorchScript to make each GIL free. This is done by leveraging a functional optimizer concept and allowing the distributed optimizer to convert the computational portion of the optimizer into TorchScript. This will help use cases like distributed model parallel training and improve performance using multithreading.", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "Currently, the only optimizer that supports automatic conversion with TorchScript is Adagrad and all other optimizers will still work as before without TorchScript support. We are working on expanding the coverage to all PyTorch optimizers and expect more to come in future releases. The usage to enable TorchScript support is automatic and exactly the same with existing python APIs, here is an example of how to use this:\n```python\nimport torch.distributed.autograd as dist_autograd\nimport torch.distributed.rpc as rpc\nfrom torch import optim\nfrom torch.distributed.optim import DistributedOptimizer\nwith dist_autograd.context() as context_id:\n # Forward pass.\n rref1 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 3))\n rref2 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 1))\n loss = rref1.to_here() + rref2.to_here()\n# Backward pass.\n dist_autograd.backward(context_id, [loss.sum()])\n# Optimizer, pass in optim.Adagrad, DistributedOptimizer will", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "automatically convert/compile it to TorchScript (GIL-free)\ndist_optim = DistributedOptimizer(\n optim.Adagrad,\n [rref1, rref2],\n lr=0.05,\n )\n dist_optim.step(context_id)\n ```\n* RFC\n* Documentation\n[Beta] Enhancements to RPC-based Profiling\nSupport for using the PyTorch profiler in conjunction with the RPC framework was first introduced in PyTorch 1.6. 
In PyTorch 1.7, the following enhancements have been made:\n* Implemented better support for profiling TorchScript functions over RPC\n* Achieved parity in terms of profiler features that work with RPC\n* Added support for asynchronous RPC functions on the server-side (functions decorated with rpc.functions.async_execution).", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "Users are now able to use familiar profiling tools such as with torch.autograd.profiler.profile() and with torch.autograd.profiler.record_function, and this works transparently with the RPC framework with full feature support, profiles asynchronous functions, and TorchScript functions.\n* Design doc\n* Usage examples\n[Prototype] Windows support for Distributed Training\nPyTorch 1.7 brings prototype support for DistributedDataParallel and collective communications on the Windows platform. In this release, the support only covers Gloo-based ProcessGroup and FileStore.\nTo use this feature across multiple machines, please provide a file from a shared file system in init_process_group. \n```python\ninitialize the process group\ndist.init_process_group(\n \"gloo\",\n # multi-machine example:\n # init_method = \"file://////{machine}/{share_folder}/file\"", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "init_method=\"file:///{your local file path}\",\n rank=rank,\n world_size=world_size\n)\nmodel = DistributedDataParallel(local_model, device_ids=[rank])\n```\n* Design doc\n* Documentation\n* Acknowledgement (gunandrose4u)\nMobile\nPyTorch Mobile supports both iOS and Android with binary packages available in Cocoapods and JCenter respectively. You can learn more about PyTorch Mobile here. \n[Beta] PyTorch Mobile Caching allocator for performance improvements", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "On some mobile platforms, such as Pixel, we observed that memory is returned to the system more aggressively. This results in frequent page faults as PyTorch being a functional framework does not maintain state for the operators. Thus outputs are allocated dynamically on each execution of the op, for the most ops. To ameliorate performance penalties due to this, PyTorch 1.7 provides a simple caching allocator for CPU. The allocator caches allocations by tensor sizes and, is currently, available only via the PyTorch C++ API. The caching allocator itself is owned by client and thus the lifetime of the allocator is also maintained by client code. Such a client owned caching allocator can then be used with scoped guard, c10::WithCPUCachingAllocatorGuard, to enable the use of cached allocation within that scope.\nExample usage:\n```python\ninclude \n.....\nc10::CPUCachingAllocator caching_allocator;", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": ".....\nc10::CPUCachingAllocator caching_allocator;\n // Owned by client code. 
Can be a member of some client class so as to tie the\n // the lifetime of caching allocator to that of the class.\n.....\n{\n c10::optional caching_allocator_guard;\n if (FLAGS_use_caching_allocator) {\n caching_allocator_guard.emplace(&caching_allocator);\n }\n ....\n model.forward(..);\n}\n...\n```\nNOTE: Caching allocator is only available on mobile builds, thus the use of caching allocator outside of mobile builds won\u2019t be effective.\n* Documentation\n* Usage examples\ntorchvision\n[Stable] Transforms now support Tensor inputs, batch computation, GPU, and TorchScript", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "torchvision transforms are now inherited from nn.Module and can be torchscripted and applied on torch Tensor inputs as well as on PIL images. They also support Tensors with batch dimensions and work seamlessly on CPU/GPU devices:\n```python\nimport torch\nimport torchvision.transforms as T\nto fix random seed, use torch.manual_seed\ninstead of random.seed\ntorch.manual_seed(12)\ntransforms = torch.nn.Sequential(\n T.RandomCrop(224),\n T.RandomHorizontalFlip(p=0.3),\n T.ConvertImageDtype(torch.float),\n T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n)\nscripted_transforms = torch.jit.script(transforms)\nNote: we can similarly use T.Compose to define transforms\ntransforms = T.Compose([...]) and\nscripted_transforms = torch.jit.script(torch.nn.Sequential(*transforms.transforms))\ntensor_image = torch.randint(0, 256, size=(3, 256, 256), dtype=torch.uint8)\nworks directly on Tensors\nout_image1 = transforms(tensor_image)\non the GPU\nout_image1_cuda = transforms(tensor_image.cuda())", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "out_image1_cuda = transforms(tensor_image.cuda())\nwith batches\nbatched_image = torch.randint(0, 256, size=(4, 3, 256, 256), dtype=torch.uint8)\nout_image_batched = transforms(batched_image)\nand has torchscript support\nout_image2 = scripted_transforms(tensor_image)\nThese improvements enable the following new features:\n* support for GPU acceleration\n* batched transformations e.g. as needed for videos\n* transform multi-band torch tensor images (with more than 3-4 channels)\n* torchscript transforms together with your model for deployment\n**Note:** Exceptions for TorchScript support includesCompose,RandomChoice,RandomOrder,Lambdaand those applied on PIL images, such asToPILImage```.\n[Stable] Native image IO for JPEG and PNG formats", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "torchvision 0.8.0 introduces native image reading and writing operations for JPEG and PNG formats. 
Those operators support TorchScript and return CxHxW tensors in uint8 format, and can thus be now part of your model for deployment in C++ environments.\nfrom torchvision.io import read_image\n\n# tensor_image is a CxHxW uint8 Tensor\ntensor_image = read_image('path_to_image.jpeg')\n\n# or equivalently\nfrom torchvision.io import read_file, decode_image\n# raw_data is a 1d uint8 Tensor with the raw bytes\nraw_data = read_file('path_to_image.jpeg')\ntensor_image = decode_image(raw_data)\n\n# all operators are torchscriptable and can be\n# serialized together with your model torchscript code\nscripted_read_image = torch.jit.script(read_image)\n\n[Stable] RetinaNet detection model\nThis release adds pretrained models for RetinaNet with a ResNet50 backbone from Focal Loss for Dense Object Detection.\n[Beta] New Video Reader API", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "[Beta] New Video Reader API\nThis release introduces a new video reading abstraction, which gives more fine-grained control of iteration over videos. It supports image and audio, and implements an iterator interface so that it is interoperable with other the python libraries such as itertools.\n```python\nfrom torchvision.io import VideoReader\nstream indicates if reading from audio or video\nreader = VideoReader('path_to_video.mp4', stream='video')\ncan change the stream after construction\nvia reader.set_current_stream\nto read all frames in a video starting at 2 seconds\nfor frame in reader.seek(2):\n # frame is a dict with \"data\" and \"pts\" metadata\n print(frame[\"data\"], frame[\"pts\"])\nbecause reader is an iterator you can combine it with\nitertools\nfrom itertools import takewhile, islice\nread 10 frames starting from 2 seconds\nfor frame in islice(reader.seek(2), 10):\n pass\nor to return all frames between 2 and 5 seconds\nfor frame in takewhile(lambda x: x[\"pts\"] < 5, reader):", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "pass\n```\nNotes:\n* In order to use the Video Reader API beta, you must compile torchvision from source and have ffmpeg installed in your system.\n* The VideoReader API is currently released as beta and its API may change following user feedback.\ntorchaudio\nWith this release, torchaudio is expanding its support for models and end-to-end applications, adding a wav2letter training pipeline and end-to-end text-to-speech and source separation pipelines. Please file an issue on github to provide feedback on them.\n[Stable] Speech Recognition\nBuilding on the addition of the wav2letter model for speech recognition in the last release, we\u2019ve now added an example wav2letter training pipeline with the LibriSpeech dataset.\n[Stable] Text-to-speech", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "[Stable] Text-to-speech\nWith the goal of supporting text-to-speech applications, we added a vocoder based on the WaveRNN model, based on the implementation from this repository. The original implementation was introduced in \"Efficient Neural Audio Synthesis\". We also provide an example WaveRNN training pipeline that uses the LibriTTS dataset added to torchaudio in this release.\n[Stable] Source Separation\nWith the addition of the ConvTasNet model, based on the paper \"Conv-TasNet: Surpassing Ideal Time-Frequency Magnitude Masking for Speech Separation,\" torchaudio now also supports source separation. 
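As a hedged sketch of what inference with this model might look like (the shapes and 8 kHz sample rate are illustrative assumptions, separate from the training pipeline mentioned next):

```python
import torch
from torchaudio.models import ConvTasNet

# Assumed input: a one-second, single-channel mixture of two speakers at 8 kHz.
model = ConvTasNet(num_sources=2).eval()
mixture = torch.randn(1, 1, 8000)      # (batch, channel, time)
with torch.no_grad():
    separated = model(mixture)         # (batch, num_sources, time)
print(separated.shape)
```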
An example ConvTasNet training pipeline is provided with the wsj-mix dataset.\nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-1.7-released/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Adding a Contributor License Agreement for PyTorch'\nauthor: Team PyTorch\n\nTo ensure the ongoing growth and success of the framework, we're introducing the use of the Apache Contributor License Agreement (CLA) for PyTorch. We care deeply about the broad community of contributors who make PyTorch such a great framework, so we want to take a moment to explain why we are adding a CLA.\nWhy Does PyTorch Need a CLA?\nCLAs help clarify that users and maintainers have the relevant rights to use and maintain code contributed to an open source project, while allowing contributors to retain ownership rights to their code.", "source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"} {"text": "PyTorch has grown from a small group of enthusiasts to a now global community with over 1,600 contributors from dozens of countries, each bringing their own diverse perspectives, values and approaches to collaboration. Looking forward, clarity about how this collaboration is happening is an important milestone for the framework as we continue to build a stronger, safer and more scalable community around PyTorch.", "source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"} {"text": "The text of the Apache CLA can be found here, together with an accompanying FAQ. The language in the PyTorch CLA is identical to the Apache template. Although CLAs have been the subject of significant discussion in the open source community, we are seeing that using a CLA, and particularly the Apache CLA, is now standard practice when projects and communities reach a certain scale. Popular projects that have adopted some type of CLA include: Visual Studio Code, Flutter, TensorFlow, kubernetes, Ubuntu, Django, Python, Go, Android and many others.\nWhat is Not Changing", "source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"} {"text": "What is Not Changing\nPyTorch\u2019s BSD license is not changing. There is no impact to PyTorch users. CLAs will only be required for new contributions to the project. For past contributions, no action is necessary. Everything else stays the same, whether it\u2019s IP ownership, workflows, contributor roles or anything else that you\u2019ve come to expect from PyTorch. \nHow the New CLA will Work\nMoving forward, all contributors to projects under the PyTorch GitHub organization will need to sign a CLA to merge their contributions. \n\n\n\nIf you've contributed to other Facebook Open Source projects, you may have already signed the CLA, and no action is required. If you have not signed the CLA, a GitHub check will prompt you to sign it before your pull requests can be merged. You can reach the CLA from this link.\n", "source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\nIf you're contributing as an individual, meaning the code is not something you worked on as part of your job, you should sign the individual contributor agreement. This agreement associates your GitHub username with future contributions and only needs to be signed once.\nIf you're contributing as part of your employment, you may need to sign the corporate contributor agreement. 
Check with your legal team on filling this out. Also you will include a list of github ids from your company.\nAs always, we continue to be humbled and grateful for all your support, and we look forward to scaling PyTorch together to even greater heights in the years to come.\nThank you!\nTeam PyTorch", "source": "https://pytorch.org/blog/a-contributor-license-agreement-for-pytorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Feature Extraction in TorchVision using Torch FX'\nauthor: Alexander Soare and Francisco Massa\nfeatured-img: 'assets/images/fx-image2.png'\n\n\nIntroduction\nFX based feature extraction is a new TorchVision utility that lets us access intermediate transformations of an input during the forward pass of a PyTorch Module. It does so by symbolically tracing the forward method to produce a graph where each node represents a single operation. Nodes are named in a human-readable manner such that one may easily specify which nodes they want to access.", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "Did that all sound a little complicated? Not to worry as there\u2019s a little in this article for everyone. Whether you\u2019re a beginner or an advanced deep-vision practitioner, chances are you will want to know about FX feature extraction. If you still want more background on feature extraction in general, read on. If you\u2019re already comfortable with that and want to know how to do it in PyTorch, skim ahead to Existing Methods in PyTorch: Pros and Cons. And if you already know about the challenges of doing feature extraction in PyTorch, feel free to skim forward to FX to The Rescue.\nA Recap On Feature Extraction\nWe\u2019re all used to the idea of having a deep neural network (DNN) that takes inputs and produces outputs, and we don\u2019t necessarily think of what happens in between. Let\u2019s just consider a ResNet-50 classification model as an example:\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\n\n\n Figure 1: ResNet-50 takes an image of a bird and transforms that into the abstract concept \"bird\". Source: Bird image from ImageNet.\n\nWe know though, that there are many sequential \u201clayers\u201d within the ResNet-50 architecture that transform the input step-by-step. In Figure 2 below, we peek under the hood to show the layers within ResNet-50, and we also show the intermediate transformations of the input as it passes through those layers.\n\n\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\n Figure 2: ResNet-50 transforms the input image in multiple steps. Conceptually, we may access the intermediate transformation of the image after each one of these steps. 
Source: Bird image from ImageNet.\n\nExisting Methods In PyTorch: Pros and Cons\nThere were already a few ways of doing feature extraction in PyTorch prior to FX based feature extraction being introduced.\nTo illustrate these, let\u2019s consider a simple convolutional neural network that does the following\n\nApplies several \u201cblocks\u201d each with several convolution layers within.\nAfter several blocks, it uses a global average pool and flatten operation.\nFinally it uses a single output classification layer.\n\n```python\nimport torch\nfrom torch import nn\nclass ConvBlock(nn.Module):\n \"\"\"\n Applies num_layers 3x3 convolutions each followed by ReLU then downsamples\n via 2x2 max pool.\n \"\"\"\ndef init(self, num_layers, in_channels, out_channels):\n super().init()\n self.convs = nn.ModuleList(", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "self.convs = nn.ModuleList(\n [nn.Sequential(\n nn.Conv2d(in_channels if i==0 else out_channels, out_channels, 3, padding=1),\n nn.ReLU()\n )\n for i in range(num_layers)]\n )\n self.downsample = nn.MaxPool2d(kernel_size=2, stride=2)\ndef forward(self, x):\n for conv in self.convs:\n x = conv(x)\n x = self.downsample(x)\n return x\nclass CNN(nn.Module):\n \"\"\"\n Applies several ConvBlocks each doubling the number of channels, and\n halving the feature map size, before taking a global average and classifying.\n \"\"\"\ndef init(self, in_channels, num_blocks, num_classes):\n super().init()\n first_channels = 64\n self.blocks = nn.ModuleList(\n [ConvBlock(\n 2 if i==0 else 3,\n in_channels=(in_channels if i == 0 else first_channels(2(i-1))),\n out_channels=first_channels(2**i))\n for i in range(num_blocks)]\n )", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "for i in range(num_blocks)]\n )\n self.global_pool = nn.AdaptiveAvgPool2d((1, 1))\n self.cls = nn.Linear(first_channels(2*(num_blocks-1)), num_classes)\ndef forward(self, x):\n for block in self.blocks:\n x = block(x)\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x\nmodel = CNN(3, 4, 10)\nout = model(torch.zeros(1, 3, 32, 32)) # This will be the final logits over classes\n\nLet\u2019s say we want to get the final feature map before global average pooling. We could do the following:\n\n### Modify the forward method\n\n```python\ndef forward(self, x):\n for block in self.blocks:\n x = block(x)\n self.final_feature_map = x\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)\n return x\n\nOr return it directly:\n```python\ndef forward(self, x):\n for block in self.blocks:\n x = block(x)\n final_feature_map = x\n x = self.global_pool(x)\n x = x.flatten(1)\n x = self.cls(x)", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "x = x.flatten(1)\n x = self.cls(x)\n return x, final_feature_map\n```\nThat looks pretty easy. But there are some downsides here which all stem from the same underlying issue: that is, modifying the source code is not ideal:\n\nIt\u2019s not always easy to access and change given the practical considerations of a project.\nIf we want flexibility (switching feature extraction on or off, or having variations on it), we need to further adapt the source code to support that.\nIt\u2019s not always just a question of inserting a single line of code. 
Think about how you would go about getting the feature map from one of the intermediate blocks with the way I\u2019ve written this module.\nOverall, we\u2019d rather avoid the overhead of maintaining source code for a model, when we actually don\u2019t need to change anything about how it works.\n\nOne can see how this downside can start to get a lot more thorny when dealing with larger, more complicated models, and trying to get at features from within nested submodules.", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "Write a new module using the parameters from the original one\nFollowing on the example from above, say we want to get a feature map from each block. We could write a new module like so:\nclass CNNFeatures(nn.Module):\n def __init__(self, backbone):\n super().__init__()\n self.blocks = backbone.blocks\n\n def forward(self, x):\n feature_maps = []\n for block in self.blocks:\n x = block(x)\n feature_maps.append(x)\n return feature_maps\n\n\nbackbone = CNN(3, 4, 10)\nmodel = CNNFeatures(backbone)\nout = model(torch.zeros(1, 3, 32, 32)) # This is now a list of Tensors, each representing a feature map\n\nIn fact, this is much like the method that TorchVision used internally to make many of its detection models. \nAlthough this approach solves some of the issues with modifying the source code directly, there are still some major downsides:", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\nIt\u2019s only really straight-forward to access the outputs of top-level submodules. Dealing with nested submodules rapidly becomes complicated.\nWe have to be careful not to miss any important operations in between the input and the output. We introduce potential for errors in transcribing the exact functionality of the original module to the new module.\n\nOverall, this method and the last both have the complication of tying in feature extraction with the model\u2019s source code itself. Indeed, if we examine the source code for TorchVision models we might suspect that some of the design choices were influenced by the desire to use them in this way for downstream tasks.\nUse hooks\nHooks move us away from the paradigm of writing source code, towards one of specifying outputs. Considering our toy CNN example above, and the goal of getting feature maps for each layer, we could use hooks like this:\n```python\nmodel = CNN(3, 4, 10)", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "model = CNN(3, 4, 10)\nfeature_maps = [] # This will be a list of Tensors, each representing a feature map\n\ndef hook_feat_map(mod, inp, out):\n feature_maps.append(out)\n\nfor block in model.blocks:\n block.register_forward_hook(hook_feat_map)\n\nout = model(torch.zeros(1, 3, 32, 32)) # This will be the final logits over classes\n\nNow we have full flexibility in terms of accessing nested submodules, and we free ourselves of the responsibilities of fiddling with the source code. But this approach comes with its own downsides:\n\nWe can only apply hooks to modules. If we have functional operations (reshape, view, functional non-linearities, etc) for which we want the outputs, hooks won\u2019t work directly on them.\nWe have not modified anything about the source code, so the whole forward pass is executed, regardless of the hooks. 
If we only need to access early features without any need for the final output, this could result in a lot of useless computation.\nHooks are not TorchScript friendly.\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\nHooks are not TorchScript friendly.\n\nHere\u2019s a summary of the different methods and their pros/cons:\n\n\n\n\nCan use source code as is without any modifications or rewriting\nFull flexibility in accessing features\nDrops unnecessary computational steps\nTorchScript friendly\n\n\n\n\nModify forward method\nNO\nTechnically yes. Depends on how much code you\u2019re willing to write. So in practice, NO.\nYES\nYES\n\n\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|\n| New module that reuses submodules / parameters of original module | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES |\n|-------------------------------------------------------------------|:-----------------------------------------------------------------:|:--------------------------------------------------------------------------------------:|:--------------------------------------:|:--------------------:|", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\n\n\nHooks\nYES\nMostly YES. Only outputs of submodules\nNO\nNO\n\n\n\n\n\n\n\n\n\n\n\n\nTable 1: The pros (or cons) of some of the existing methods for feature extraction with PyTorch\nIn the next section of this article, let\u2019s see how we can get YES across the board.\nFX to The Rescue", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "FX to The Rescue\nThe natural question for some new-starters in Python and coding at this point might be: \u201cCan\u2019t we just point to a line of code and tell Python or PyTorch that we want the result of that line?\u201d For those who have spent more time coding, the reason this can\u2019t be done is clear: multiple operations can happen in one line of code, whether they are explicitly written there, or they are implicit as sub-operations. Just take this simple module as an example:\nclass MyModule(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.param = torch.nn.Parameter(torch.rand(3, 4))\n self.submodule = MySubModule()\n\n def forward(self, x):\n return self.submodule(x + self.param).clamp(min=0.0, max=1.0)\n\nThe forward method has a single line of code which we can unravel as:\n\nAdd self.param to x\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\nAdd self.param to x\nPass x through self.submodule. Here we would need to consider the steps happening in that submodule. I\u2019m just going to use dummy operation names for illustration:\n I. submodule.op_1\n II. submodule.op_2\nApply the clamp operation\n\nSo even if we point at this one line, the question then is: \u201cFor which step do we want to extract the output?\u201d.\nFX is a core PyTorch toolkit that (oversimplifying) does the unravelling I just mentioned. 
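To make that concrete, here is a small, self-contained sketch of asking FX to unravel a module very much like the one above (a real nn.Linear stands in for the placeholder MySubModule):

```python
import torch
from torch import fx, nn

class TinyModule(nn.Module):
    """Stand-in for the example above, with nn.Linear in place of MySubModule."""
    def __init__(self):
        super().__init__()
        self.param = nn.Parameter(torch.rand(3, 4))
        self.submodule = nn.Linear(4, 4)

    def forward(self, x):
        return self.submodule(x + self.param).clamp(min=0.0, max=1.0)

traced = fx.symbolic_trace(TinyModule())
# Prints one node per operation: placeholder, get_attr, add, call_module, clamp, output.
print(traced.graph)
```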
It does something called \u201csymbolic tracing\u201d, which means the Python code is interpreted and stepped through, operation-by-operation, using some dummy proxy for a real input. Introducing some nomenclature, each step as described above is considered a \u201cnode\u201d, and consecutive nodes are connected to one another to form a \u201cgraph\u201d (not unlike the common mathematical notion of a graph). Here are the \u201csteps\u201d above translated to this concept of a graph.\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\n\n\n Figure 3: Graphical representation of the result of symbolically tracing our example of a simple forward method.\n\nNote that we call this a graph, and not just a set of steps, because it\u2019s possible for the graph to branch off and recombine. Think of the skip connection in a residual block. This would look something like:\n\n\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\n Figure 4: Graphical representation of a residual skip connection. The middle node is like the main branch of a residual block, and the final node represents the sum of the input and output of the main branch.\n\nNow, TorchVision\u2019s get_graph_node_names function applies FX as described above, and in the process of doing so, tags each node with a human readable name. Let\u2019s try this with our toy CNN model from the previous section:\nmodel = CNN(3, 4, 10)\nfrom torchvision.models.feature_extraction import get_graph_node_names\nnodes, _ = get_graph_node_names(model)\nprint(nodes)\n\nwhich will result in:\n```python", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "print(nodes)\nwhich will result in:\n```python\n['x', 'blocks.0.convs.0.0', 'blocks.0.convs.0.1', 'blocks.0.convs.1.0', 'blocks.0.convs.1.1', 'blocks.0.downsample', 'blocks.1.convs.0.0', 'blocks.1.convs.0.1', 'blocks.1.convs.1.0', 'blocks.1.convs.1.1', 'blocks.1.convs.2.0', 'blocks.1.convs.2.1', 'blocks.1.downsample', 'blocks.2.convs.0.0', 'blocks.2.convs.0.1', 'blocks.2.convs.1.0', 'blocks.2.convs.1.1', 'blocks.2.convs.2.0', 'blocks.2.convs.2.1', 'blocks.2.downsample', 'blocks.3.convs.0.0', 'blocks.3.convs.0.1', 'blocks.3.convs.1.0', 'blocks.3.convs.1.1', 'blocks.3.convs.2.0', 'blocks.3.convs.2.1', 'blocks.3.downsample', 'global_pool', 'flatten', 'cls']\n\nWe can read these node names as hierarchically organised \u201caddresses\u201d for the operations of interest. For example 'blocks.1.downsample' refers to the MaxPool2d layer in the second ConvBlock.", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "create_feature_extractor, which is where all the magic happens, goes a few steps further than get_graph_node_names. It takes desired node names as one of the input arguments, and then uses more FX core functionality to:\n\nAssign the desired nodes as outputs.\nPrune unnecessary downstream nodes and their associated parameters.\nTranslate the resulting graph back into Python code.\nReturn another PyTorch Module to the user. 
This has the Python code from step 3 as the forward method.\n\nAs a demonstration, here\u2019s how we would apply create_feature_extractor to get the 4 feature maps from our toy CNN model:\n```python\nimport torch\nfrom torchvision.models.feature_extraction import create_feature_extractor\n# Confused about the node specification here?\n# We are allowed to provide truncated node names, and create_feature_extractor\n# will choose the last node with that prefix.", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "feature_extractor = create_feature_extractor(\n    model, return_nodes=['blocks.0', 'blocks.1', 'blocks.2', 'blocks.3'])\n# out will be a dict of Tensors, each representing a feature map\nout = feature_extractor(torch.zeros(1, 3, 32, 32))\n```\nIt\u2019s as simple as that. When it comes down to it, FX feature extraction is just a way of making it possible to do what some of us would have naively hoped for when we first started programming: \u201cjust give me the output of this code (points finger at screen)\u201d*.\n\n[x] \u2026 does not require us to fiddle with source code.\n[x] \u2026 provides full flexibility in terms of accessing any intermediate transformation of our inputs, whether they are the results of a module or a functional operation\n[x] \u2026 does drop unnecessary computation steps once features have been extracted\n[x] \u2026 and I didn\u2019t mention this before, but it\u2019s also TorchScript friendly!\n\nHere\u2019s that table again with another row added for FX feature extraction", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\n|  | Can use source code as is without any modifications or rewriting | Full flexibility in accessing features | Drops unnecessary computational steps | TorchScript friendly |\n|---|:---:|:---:|:---:|:---:|\n| Modify forward method | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES |\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "| New module that reuses submodules / parameters of original module | NO | Technically yes. Depends on how much code you\u2019re willing to write. So in practice, NO. | YES | YES |\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "| Hooks | YES | Mostly YES. Only outputs of submodules | NO | NO |\n| FX | YES | YES | YES | YES |\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\nTable 2: A copy of Table 1 with an added row for FX feature extraction. FX feature extraction gets YES across the board!\nCurrent FX Limitations\nAlthough I would have loved to end the post there, FX does have some of its own limitations which boil down to:\n\nThere may be some Python code that isn\u2019t yet handled by FX when it comes to the step of interpretation and translation into a graph.\nDynamic control flow can\u2019t be represented in terms of a static graph.\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "The easiest thing to do when these problems crop up is to bundle the underlying code into a \u201cleaf node\u201d. Recall the example graph from Figure 3? Conceptually, we may agree that the submodule should be treated as a node in itself rather than a set of nodes representing the underlying operations. If we do so, we can redraw the graph as:\n\n\n\n Figure 5: The individual operations within `submodule` (left - within red box) may be consolidated into one node (right - node #2) if we consider the `submodule` as a "leaf" node.\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\nWe would want to do so if there is some problematic code within the submodule, but we don\u2019t have any need for extracting any intermediate transformations from within it. In practice, this is easily achievable by providing a keyword argument to create_feature_extractor or get_graph_node_names.\nmodel = CNN(3, 4, 10)\nnodes, _ = get_graph_node_names(model, tracer_kwargs={'leaf_modules': [ConvBlock]})\nprint(nodes)\n\nfor which the output will be:\n['x', 'blocks.0', 'blocks.1', 'blocks.2', 'blocks.3', 'global_pool', 'flatten', 'cls']\n\nNotice how, as compared to previously, all the nodes for any given ConvBlock are consolidated into a single node.\nWe could do something similar with functions. For example, Python\u2019s inbuilt len needs to be wrapped and the result should be treated as a leaf node. 
Here\u2019s how you can do that with core FX functionality:\n```python\ntorch.fx.wrap('len')\nclass MyModule(nn.Module):\n def forward(self, x):\n x += 1\n len(x)", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "x += 1\n len(x)\nmodel = MyModule()\nfeature_extractor = create_feature_extractor(model, return_nodes=['add'])\n\nFor functions you define, you may instead use another keyword argument to `create_feature_extractor` (minor detail: here\u2019s[ why you might want to do it this way instead](https://github.com/pytorch/pytorch/issues/62021#issue-950458396)):\n\n\n```python\ndef myfunc(x):\n return len(x)\n\nclass MyModule(nn.Module):\n def forward(self, x):\n x += 1\n myfunc(x)\n\nmodel = MyModule()\nfeature_extractor = create_feature_extractor(\n model, return_nodes=['add'], tracer_kwargs={'autowrap_functions': [myfunc]})\n\nNotice that none of the fixes above involved modifying source code.", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "Of course, there may be times when the very intermediate transformation one is trying to get access to is within the same forward method or function that is causing problems. Here, we can\u2019t just treat that module or function as a leaf node, because then we can\u2019t access the intermediate transformations within. In these cases, some rewriting of the source code will be needed. Here are some examples (not exhaustive)\n\nFX will raise an error when trying to trace through code with an assert statement. In this case you may need to remove that assertion or switch it with torch._assert (this is not a public function - so consider it a bandaid and use with caution).\nSymbolically tracing in-place changes to slices of tensors is not supported. You will need to make a new variable for the slice, apply the operation, then reconstruct the original tensor using concatenation or stacking.\n", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\nRepresenting dynamic control flow in a static graph is just not logically possible. See if you can distill the coded logic down to something that is not dynamic - see FX documentation for tips.\n\nIn general, you may consult the FX documentation for more detail on the limitations of symbolic tracing and the possible workarounds.\nConclusion", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "Conclusion\nWe did a quick recap on feature extraction and why one might want to do it. Although there are existing methods for doing feature extraction in PyTorch they all have rather significant shortcomings. We learned how TorchVision\u2019s FX feature extraction utility works and what makes it so versatile compared to the existing methods. While there are still some minor kinks to iron out for the latter, we understand the limitations, and can trade them off against the limitations of other methods depending on our use case. 
Hopefully by adding this new utility to your PyTorch toolkit, you\u2019re now equipped to handle the vast majority of feature extraction requirements you may come across.\nHappy coding!", "source": "https://pytorch.org/blog/FX-feature-extraction-torchvision/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch Ecosystem Day 2021 Recap and New Contributor Resources'\nauthor: Team PyTorch\n\nThank you to our incredible community for making the first ever PyTorch Ecosystem Day a success! The day was filled with discussions on new developments, trends and challenges showcased through 71 posters, 32 breakout sessions and 6 keynote speakers. \n\n\n\nSpecial thanks to our keynote speakers: Piotr Bialecki, Ritchie Ng, Miquel Farr\u00e9, Joe Spisak, Geeta Chauhan, and Suraj Subramanian who shared updates from the latest release of PyTorch, exciting work being done with partners, use case example from Disney, the growth and development of the PyTorch community in Asia Pacific, and latest contributor highlights.\nIf you missed the opening talks, you rewatch them here:\n* Morning/EMEA Opening Talks", "source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"} {"text": "\nEvening/APAC Opening Talks\n\nIn addition to the talks, we had 71 posters covering various topics such as multimodal, NLP, compiler, distributed training, researcher productivity tools, AI accelerators, and more. From the event, it was clear that an underlying thread that ties all of these different projects together is the cross-collaboration of the PyTorch community. Thank you for continuing to push the state of the art with PyTorch! \nTo view the full catalogue of poster, please visit PyTorch Ecosystem Day 2021 Event Page. \nNew Contributor Resources\nToday, we are also sharing new contributor resources that we are trying out to give you the most access to up-to-date news, networking opportunities and more.", "source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"} {"text": "\nContributor Newsletter - Includes curated news including RFCs, feature roadmaps, notable PRs, editorials from developers, and more to support keeping track of everything that\u2019s happening in our community. \nContributors Discussion Forum - Designed for contributors to learn and collaborate on the latest development across PyTorch. \nPyTorch Developer Podcast (Beta) - Edward Yang, PyTorch Research Scientist, at Facebook AI shares bite-sized (10 to 20 mins) podcast episodes discussing topics about all sorts of internal development topics in PyTorch.\n\nThank you,\nTeam PyTorch", "source": "https://pytorch.org/blog/ecosystem-day-2021-recap/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Geospatial deep learning with TorchGeo\"\nauthor: Adam Stewart (University of Illinois at Urbana-Champaign), Caleb Robinson (Microsoft AI for Good Research Lab), Isaac Corley (University of Texas at San Antonio)\nfeatured-img: 'assets/images/torchgeo-hurricane.jpg'\n\nTorchGeo is a PyTorch domain library providing datasets, samplers, transforms, and pre-trained models specific to geospatial data.\n\n\n\n\nhttps://github.com/microsoft/torchgeo\n", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "\nFor decades, Earth observation satellites, aircraft, and more recently UAV platforms have been collecting increasing amounts of imagery of the Earth\u2019s surface. 
With information about seasonal and long-term trends, remotely sensed imagery can be invaluable for solving some of the greatest challenges to humanity, including climate change adaptation, natural disaster monitoring, water resource management, and food security for a growing global population. From a computer vision perspective, this includes applications like land cover mapping (semantic segmentation), deforestation and flood monitoring (change detection), glacial flow (pixel tracking), hurricane tracking and intensity estimation (regression), and building and road detection (object detection, instance segmentation). By leveraging recent advancements in deep learning architectures, cheaper and more powerful GPUs, and petabytes of freely available satellite imagery datasets, we can come closer to solving these important problems.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "\n\n\n\nNational Oceanic and Atmospheric Administration satellite image of Hurricane Katrina, taken on August 28, 2005 (source). Geospatial machine learning libraries like TorchGeo can be used to detect, track, and predict future trajectories of hurricanes and other natural disasters.\n\nThe challenges", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "In traditional computer vision datasets, such as ImageNet, the image files themselves tend to be rather simple and easy to work with. Most images have 3 spectral bands (RGB), are stored in common file formats like PNG or JPEG, and can be easily loaded with popular software libraries like PIL or OpenCV. Each image in these datasets is usually small enough to pass directly into a neural network. Furthermore, most of these datasets contain a finite number of well-curated images that are assumed to be independent and identically distributed, making train-val-test splits straightforward. As a result of this relative homogeneity, the same pre-trained models (e.g., CNNs pretrained on ImageNet) have shown to be effective across a wide range of vision tasks using transfer learning methods. Existing libraries, such as torchvision, handle these simple cases well, and have been used to make large advances in vision tasks over the past decade.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "Remote sensing imagery is not so uniform. Instead of simple RGB images, satellites tend to capture images that are multispectral (Landsat 8 has 11 spectral bands) or even hyperspectral (Hyperion has 242 spectral bands). These images capture information at a wider range of wavelengths (400 nm\u201315 \u00b5m), far outside of the visible spectrum. Different satellites also have very different spatial resolutions\u2014GOES has a resolution of 4 km/px, Maxar imagery is 30 cm/px, and drone imagery resolution can be as high as 7 mm/px. These datasets almost always have a temporal component, with satellite revisists that are daily, weekly, or biweekly. Images often have overlap with other images in the dataset, and need to be stitched together based on geographic metadata. These images tend to be very large (e.g., 10K x 10K pixels), so it isn't possible to pass an entire image through a neural network. 
This data is distributed in hundreds of different raster and vector file formats like GeoTIFF and ESRI Shapefile, requiring specialty libraries like GDAL to load.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "\n\n\n\nFrom left to right: Mercator, Albers Equal Area, and Interrupted Goode Homolosine projections (source). Geospatial data is associated with one of many different types of reference systems that project the 3D Earth onto a 2D representation. Combining data from different sources often involves re-projecting to a common reference system in order to ensure that all layers are aligned.\n", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "\nAlthough each image is 2D, the Earth itself is 3D. In order to stitch together images, they first need to be projected onto a 2D representation of the Earth, called a coordinate reference system (CRS). Most people are familiar with equal angle representations like Mercator that distort the size of regions (Greenland looks larger than Africa even though Africa is 15x larger), but there are many other CRSs that are commonly used. Each dataset may use a different CRS, and each image within a single dataset may also be in a unique CRS. In order to use data from multiple layers, they must all share a common CRS, otherwise the data won't be properly aligned. For those who aren't familiar with remote sensing data, this can be a daunting task.\n\n\n\n", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "\n\nEven if you correctly georeference images during indexing, if you don't project them to a common CRS, you'll end up with rotated images with nodata values around them, and the images won't be pixel-aligned.\n\nThe solution\nAt the moment, it can be quite challenging to work with both deep learning models and geospatial data without having expertise in both of these very different fields. To address these challenges, we've built TorchGeo, a PyTorch domain library for working with geospatial data. TorchGeo is designed to make it simple:\n\nfor machine learning experts to work with geospatial data, and\nfor remote sensing experts to explore machine learning solutions.\n", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "TorchGeo is not just a research project, but a production-quality library that uses continuous integration to test every commit with a range of Python versions on a range of platforms (Linux, macOS, Windows). It can be easily installed with any of your favorite package managers, including pip, conda, and spack:\n$ pip install torchgeo\n\nTorchGeo is designed to have the same API as other PyTorch domain libraries like torchvision, torchtext, and torchaudio. If you already use torchvision in your workflow for computer vision datasets, you can switch to TorchGeo by changing only a few lines of code. All TorchGeo datasets and samplers are compatible with the PyTorch DataLoader class, meaning that you can take advantage of wrapper libraries like PyTorch Lightning for distributed training. 
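To make the torchvision-style claim concrete, here is a minimal sketch of dropping a TorchGeo benchmark dataset into a standard PyTorch DataLoader (EuroSAT is used purely for illustration, and the root path is a placeholder):
```python
from torch.utils.data import DataLoader
from torchgeo.datasets import EuroSAT

# Instantiated just like a torchvision dataset; samples are dictionaries of Tensors
dataset = EuroSAT(root="...", download=True)
dataloader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)

for batch in dataloader:
    image, label = batch["image"], batch["label"]
    # train a model, or make predictions using a pre-trained model
```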
In the following sections, we'll explore possible use cases for TorchGeo to show how simple it is to use.\nGeospatial datasets and samplers\n", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "\n\n\n\nExample application in which we combine A) a scene from Landsat 8 and B) Cropland Data Layer labels, even though these files are in different EPSG projections. We want to sample patches C) and D) from these datasets using a geospatial bounding box as an index.\n\nMany remote sensing applications involve working with geospatial datasets \u2014datasets with geographic metadata. In TorchGeo, we define a GeoDataset class to represent these kinds of datasets. Instead of being indexed by an integer, each GeoDataset is indexed by a spatiotemporal bounding box, meaning that two or more datasets covering a different geographic extent can be intelligently combined.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "In this example, we show how easy it is to work with geospatial data and to sample small image patches from a combination of Landsat and Cropland Data Layer (CDL) data using TorchGeo. First, we assume that the user has Landsat 7 and 8 imagery downloaded. Since Landsat 8 has more spectral bands than Landsat 7, we'll only use the bands that both satellites have in common. We'll create a single dataset including all images from both Landsat 7 and 8 data by taking the union between these two datasets.\nfrom torch.utils.data import DataLoader\nfrom torchgeo.datasets import CDL, Landsat7, Landsat8, stack_samples\nfrom torchgeo.samplers import RandomGeoSampler\n\nlandsat7 = Landsat7(root=\"...\")\nlandsat8 = Landsat8(root=\"...\", bands=Landsat8.all_bands[1:-2])\nlandsat = landsat7 | landsat8\n", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "landsat = landsat7 | landsat8\n\nNext, we take the intersection between this dataset and the CDL dataset. We want to take the intersection instead of the union to ensure that we only sample from regions where we have both Landsat and CDL data. Note that we can automatically download and checksum CDL data. Also note that each of these datasets may contain files in different CRSs or resolutions, but TorchGeo automatically ensures that a matching CRS and resolution is used.\n\n```c++\ncdl = CDL(root=\"...\", download=True, checksum=True)\ndataset = landsat & cdl\n", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "dataset = landsat & cdl\n\nThis dataset can now be used with a PyTorch data loader. Unlike benchmark datasets, geospatial datasets often include very large images. For example, the CDL dataset consists of a single image covering the entire contiguous United States. In order to sample from these datasets using geospatial coordinates, TorchGeo defines a number of [*samplers*](https://torchgeo.readthedocs.io/en/latest/api/samplers.html). In this example, we'll use a random sampler that returns 256 x 256 pixel images and 10,000 samples per epoch. 
We'll also use a custom collation function to combine each sample dictionary into a mini-batch of samples.\n\n```c++\nsampler = RandomGeoSampler(dataset, size=256, length=10000)\ndataloader = DataLoader(dataset, batch_size=128, sampler=sampler, collate_fn=stack_samples)\n\nThis data loader can now be used in your normal training/evaluation pipeline.\n```c++\nfor batch in dataloader:\n image = batch[\"image\"]\n mask = batch[\"mask\"]", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "mask = batch[\"mask\"]\n# train a model, or make predictions using a pre-trained model\n\n```\nMany applications involve intelligently composing datasets based on geospatial metadata like this. For example, users may want to:\n\nCombine datasets for multiple image sources and treat them as equivalent (e.g., Landsat 7 and 8)\nCombine datasets for disparate geospatial locations (e.g., Chesapeake NY and PA)\n\nThese combinations require that all queries are present in at least one dataset, and can be created using a UnionDataset. Similarly, users may want to:\n\nCombine image and target labels and sample from both simultaneously (e.g., Landsat and CDL)\nCombine datasets for multiple image sources for multimodal learning or data fusion (e.g., Landsat and Sentinel)\n", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "These combinations require that all queries are present in both datasets, and can be created using an IntersectionDataset. TorchGeo automatically composes these datasets for you when you use the intersection (&) and union (|) operators.\nMultispectral and geospatial transforms\nIn deep learning, it's common to augment and transform the data so that models are robust to variations in the input space. Geospatial data can have variations such as seasonal changes and warping effects, as well as image processing and capture issues like cloud cover and atmospheric distortion. TorchGeo utilizes augmentations and transforms from the Kornia library, which supports GPU acceleration and supports multispectral imagery with more than 3 channels.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "Traditional geospatial analyses compute and visualize spectral indices which are combinations of multispectral bands. Spectral indices are designed to highlight areas of interest in a multispectral image relevant to some application, such as vegetation health, areas of man-made change or increasing urbanization, or snow cover. TorchGeo supports numerous transforms, which can compute common spectral indices and append them as additional bands to a multispectral image tensor.\nBelow, we show a simple example where we compute the Normalized Difference Vegetation Index (NDVI) on a Sentinel-2 image. NDVI measures the presence of vegetation and vegetation health and is computed as the normalized difference between the red and near-infrared (NIR) spectral bands. Spectral index transforms operate on sample dictionaries returned from TorchGeo datasets and append the resulting spectral index to the image channel dimension.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "First, we instantiate a Sentinel-2 dataset and load a sample image. 
Then, we plot the true color (RGB) representation of this data to see the region we are looking at.\nimport matplotlib.pyplot as plt\nfrom torchgeo.datasets import Sentinel2\nfrom torchgeo.transforms import AppendNDVI\n\ndataset = Sentinel2(root=\"...\")\nsample = dataset[...]\nfig = dataset.plot(sample)\nplt.show()\n\nNext, we instantiate and compute an NDVI transform, appending this new channel to the end of the image. Sentinel-2 imagery uses index 0 for its red band and index 3 for its NIR band. In order to visualize the data, we also normalize the image. NDVI values can range from -1 to 1, but we want to use the range 0 to 1 for plotting.\ntransform = AppendNDVI(index_red=0, index_nir=3)\nsample = transform(sample)\nsample[\"image\"][-1] = (sample[\"image\"][-1] + 1) / 2\nplt.imshow(sample[\"image\"][-1], cmap=\"RdYlGn_r\")\nplt.show()\n\n\n\n", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "\n\nTrue color (left) and NDVI (right) of the Texas Hill Region, taken on November 16, 2018 by the Sentinel-2 satellite. In the NDVI image, red indicates water bodies, yellow indicates barren soil, light green indicates unhealthy vegetation, and dark green indicates healthy vegetation.\n\nBenchmark datasets\nOne of the driving factors behind progress in computer vision is the existence of standardized benchmark datasets like ImageNet and MNIST. Using these datasets, researchers can directly compare the performance of different models and training procedures to determine which perform the best. In the remote sensing domain, there are many such datasets, but due to the aforementioned difficulties of working with this data and the lack of existing libraries for loading these datasets, many researchers opt to use their own custom datasets.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "One of the goals of TorchGeo is to provide easy-to-use data loaders for these existing datasets. TorchGeo includes a number of benchmark datasets \u2014datasets that include both input images and target labels. This includes datasets for tasks like image classification, regression, semantic segmentation, object detection, instance segmentation, change detection, and more.\nIf you've used torchvision before, these types of datasets should be familiar. In this example, we'll create a dataset for the Northwestern Polytechnical University (NWPU) very-high-resolution ten-class (VHR-10) geospatial object detection dataset. This dataset can be automatically downloaded, checksummed, and extracted, just like with torchvision.\n```c++\nfrom torch.utils.data import DataLoader\nfrom torchgeo.datasets import VHR10\ndataset = VHR10(root=\"...\", download=True, checksum=True)", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "dataloader = DataLoader(dataset, batch_size=128, shuffle=True, num_workers=4)\nfor batch in dataloader:\n image = batch[\"image\"]\n label = batch[\"label\"]\n# train a model, or make predictions using a pre-trained model\n\n```\nAll TorchGeo datasets are compatible with PyTorch data loaders, making them easy to integrate into existing training workflows. 
The only difference between a benchmark dataset in TorchGeo and a similar dataset in torchvision is that each dataset returns a dictionary with keys for each PyTorch Tensor.\n\n\n\n\nExample predictions from a Mask R-CNN model trained on the NWPU VHR-10 dataset. The model predicts sharp bounding boxes and masks for all objects with high confidence scores.\n\nReproducibility with PyTorch Lightning", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "\nReproducibility with PyTorch Lightning\nAnother key goal of TorchGeo is reproducibility. For many of these benchmark datasets, there is no predefined train-val-test split, or the predefined split has issues with class imbalance or geographic distribution. As a result, the performance metrics reported in the literature either can't be reproduced, or aren't indicative of how well a pre-trained model would work in a different geographic location.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "In order to facilitate direct comparisons between results published in the literature and further reduce the boilerplate code needed to run experiments with datasets in TorchGeo, we have created PyTorch Lightning datamodules with well-defined train-val-test splits and trainers for various tasks like classification, regression, and semantic segmentation. These datamodules show how to incorporate augmentations from the kornia library, include preprocessing transforms (with pre-calculated channel statistics), and let users easily experiment with hyperparameters related to the data itself (as opposed to the modeling process). Training a semantic segmentation model on the Inria Aerial Image Labeling dataset is as easy as a few imports and four lines of code.\n```c++\nfrom pytorch_lightning import Trainer\nfrom torchgeo.datamodules import InriaAerialImageLabelingDataModule", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "from torchgeo.trainers import SemanticSegmentationTask\ndatamodule = InriaAerialImageLabelingDataModule(root_dir=\"...\", batch_size=64, num_workers=6)\ntask = SemanticSegmentationTask(segmentation_model=\"unet\", encoder_weights=\"imagenet\", learning_rate=0.1)\ntrainer = Trainer(gpus=1, default_root_dir=\"...\")\ntrainer.fit(model=task, datamodule=datamodule)\n```\n\n\n\n\nBuilding segmentations produced by a U-Net model trained on the Inria Aerial Image Labeling dataset. Reproducing these results is as simple as a few imports and four lines of code, making comparison of different models and training techniques simple and easy.\n", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "\nIn our preprint we show a set of results that use the aforementioned datamodules and trainers to benchmark simple modeling approaches for several of the datasets in TorchGeo. For example, we find that a simple ResNet-50 can achieve state-of-the-art performance on the So2Sat dataset. These types of baseline results are important for evaluating the contribution of different modeling choices when tackling problems with remotely sensed data.\nFuture work and contributing\nThere is still a lot of remaining work to be done in order to make TorchGeo as easy to use as possible, especially for users without prior deep learning experience. 
One of the ways in which we plan to achieve this is by expanding our tutorials to include subjects like \"writing a custom dataset\" and \"transfer learning\", or tasks like \"land cover mapping\" and \"object detection\".", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "Another important project we are working on is pre-training models. Most remote sensing researchers work with very small labeled datasets, and could benefit from pre-trained models and transfer learning approaches. TorchGeo is the first deep learning library to provide models pre-trained on multispectral imagery. Our goal is to provide models for different image modalities (optical, SAR, multispectral) and specific platforms (Landsat, Sentinel, MODIS) as well as benchmark results showing their performance with different amounts of training data. Self-supervised learning is a promising method for training such models. Satellite imagery datasets often contain petabytes of imagery, but accurately labeled datasets are much harder to come by. Self-supervised learning methods will allow us to train directly on the raw imagery without needing large labeled datasets.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "Aside from these larger projects, we're always looking to add new datasets, data augmentation transforms, and sampling strategies. If you're Python savvy and interested in contributing to TorchGeo, we would love to see contributions! TorchGeo is open source under an MIT license, so you can use it in almost any project.\nExternal links:\n\nHomepage: https://github.com/microsoft/torchgeo\nDocumentation: https://torchgeo.readthedocs.io/\nPyPI: https://pypi.org/project/torchgeo/\nPaper: https://arxiv.org/abs/2111.08872\n\nIf you like TorchGeo, give us a star on GitHub! And if you use TorchGeo in your work, please cite our paper.\nAcknowledgments", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "Acknowledgments\nWe would like to thank all TorchGeo contributors for their efforts in creating the library, the Microsoft AI for Good program for support, and the PyTorch Team for their guidance. This research is part of the Blue Waters sustained-petascale computing project, which is supported by the National Science Foundation (awards OCI-0725070 and ACI-1238993), the State of Illinois, and as of December, 2019, the National Geospatial-Intelligence Agency. Blue Waters is a joint effort of the University of Illinois at Urbana-Champaign and its National Center for Supercomputing Applications. The research was supported in part by NSF grants IIS-1908104, OAC-1934634, and DBI-2021898.", "source": "https://pytorch.org/blog/geospatial-deep-learning-with-torchgeo/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'What\u2019s New in PyTorch Profiler 1.9?'\nauthor: Sabrina Smai, Program Manager on the AI Framework team at Microsoft\n\nPyTorch Profiler v1.9 has been released! The goal of this new release (previous PyTorch Profiler release) is to provide you with new state-of-the-art tools to help diagnose and fix machine learning performance issues regardless of whether you are working on one or numerous machines. The objective is to target the execution steps that are the most costly in time and/or memory, and visualize the work load distribution between GPUs and CPUs. 
\nHere is a summary of the five major features being released:", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "\nDistributed Training View: This helps you understand how much time and memory is consumed in your distributed training job. Many issues occur when you take a training model and split the load into worker nodes to be run in parallel, as it can be a black box. The overall model goal is to speed up model training. This distributed training view will help you diagnose and debug issues within individual nodes. \nMemory View: This view allows you to understand your memory usage better. This tool will help you avoid the famously pesky Out of Memory error by showing active memory allocations at various points of your program run. \nGPU Utilization Visualization: This tool helps you make sure that your GPU is being fully utilized. \nCloud Storage Support: The Tensorboard plugin can now read profiling data from Azure Blob Storage, Amazon S3, and Google Cloud Platform.\n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "\nJump to Source Code: This feature allows you to visualize stack tracing information and jump directly into the source code. This helps you quickly optimize and iterate on your code based on your profiling results. \n\nGetting Started with PyTorch Profiling Tool\nPyTorch includes a profiling functionality called \u00ab PyTorch Profiler \u00bb. The PyTorch Profiler tutorial can be found here.\nTo instrument your PyTorch code for profiling, you must:\n$ pip install torch-tb-profiler\nimport torch.profiler as profiler\nwith profiler.profile(XXXX)\n\nComments:\n\u2022 For CUDA and CPU profiling, see below: \n```python\nwith torch.profiler.profile( \n    activities=[ \n        torch.profiler.ProfilerActivity.CPU, \n        torch.profiler.ProfilerActivity.CUDA], \n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": ")\n```\n\u2022 With profiler.record_function(\u201c$NAME\u201d): allows putting a decorator (a tag associated with a name) around a block of code\n\u2022 The profile_memory=True parameter under profiler.profile allows you to profile CPU and GPU memory footprint\nVisualizing PyTorch Model Performance using PyTorch Profiler\nDistributed Training\nRecent advances in deep learning argue for the value of large datasets and large models, which requires you to scale out model training to more computational resources. Distributed Data Parallel (DDP) and NVIDIA Collective Communications Library (NCCL) are the widely adopted paradigms in PyTorch for accelerating your deep learning training. \nIn this release of PyTorch Profiler, DDP with NCCL backend is now supported.\n\n\n\nComputation/Communication Overview", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "\nIn the Computation/Communication overview under the Distributed training view, you can observe the computation-to-communication ratio of each worker and [load balancer](https://en.wikipedia.org/wiki/Load_balancing_(computing)) nodes between all workers, as measured by granularity. \nScenario 1:\nIf the computation and overlapping time of one worker is much larger than the others, this may suggest an issue in the workload balance or the worker being a straggler. Computation is the sum of kernel time on GPU minus the overlapping time. 
The overlapping time is the time saved by interleaving communications during computation. More overlapping time indicates better parallelism between computation and communication. Ideally, the computation and communication completely overlap with each other. Communication is the total communication time minus the overlapping time. The example image below displays how this scenario appears on Tensorboard. \n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "\n\nFigure: A straggler example\n\nScenario 2:\nIf there is a small batch size (i.e. less computation on each worker) or the data to be transferred is large, the computation-to-communication ratio may also be small and be seen in the profiler with low GPU utilization and long waiting times. This computation/communication view will allow you to diagnose your code to reduce communication by adopting gradient accumulation, or to decrease the communication proportion by increasing batch size. DDP communication time depends on model size. Batch size has no relationship with model size. So increasing batch size could make computation time longer and make the computation-to-communication ratio bigger. \nSynchronizing/Communication Overview", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "In the Synchronizing/Communication view, you can observe the efficiency of communication. This is done by taking the step time minus the computation and communication time. Synchronizing time is the part of the total communication time spent waiting for and synchronizing with other workers. The Synchronizing/Communication view includes initialization, data loader, CPU computation, and so on. Insights such as how much of the total communication time is really used for exchanging data, and how much is idle time spent waiting for data from other workers, can be drawn from this view. \n\n\n\nFor example, if there is an inefficient workload balance or straggler issue, you\u2019ll be able to identify it in this Synchronizing/Communication view. This view will show several workers\u2019 waiting times being longer than others\u2019. \n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "\n\n\nThis table view above allows you to see the detailed statistics of all communication ops in each node. This allows you to see what operation types are being called, how many times each op is called, what is the size of the data being transferred by each op, etc. \nMemory View:\nThis memory view tool helps you understand the hardware resource consumption of the operators in your model. Understanding the time and memory consumption on the operator level allows you to resolve performance bottlenecks and, in turn, allows your model to execute faster. Given limited GPU memory size, optimizing the memory usage can: \n\nAllow a bigger model, which can potentially generalize better on end-level tasks.\nAllow a bigger batch size. Bigger batch sizes increase the training speed.\n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "The profiler records all the memory allocation during the profiler interval. Selecting the \u201cDevice\u201d will allow you to see each operator\u2019s memory usage on the GPU side or host side. You must enable profile_memory=True to generate the below memory data as shown here. 
\nwith torch.profiler.profile(\n    profile_memory=True  # this will take 1 \u2013 2 minutes to complete. \n)\n\nImportant Definitions:\n\u2022 \u201cSize Increase\u201d displays the sum of all allocation bytes minus all the memory release bytes.\n\u2022 \u201cAllocation Size\u201d shows the sum of all allocation bytes without considering the memory release.\n\u2022 \u201cSelf\u201d means the allocated memory is not from any child operators, instead by the operator itself.\n\n\n\nGPU Metric on Timeline:", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "\nThis feature will help you debug performance issues when one or more GPUs are underutilized. Ideally, your program should have high GPU utilization (aiming for 100% GPU utilization), minimal CPU to GPU communication, and no overhead. \nOverview:\nThe overview page highlights the results of three important GPU usage metrics at different levels (i.e. GPU Utilization, Est. SM Efficiency, and Est. Achieved Occupancy). Essentially, each GPU has a number of SMs, and each SM has a number of warps that can execute many threads concurrently; the exact counts depend on the GPU. At a high level, this GPU Metric on Timeline tool allows you to see the whole stack, which is useful. \nIf the GPU utilization result is low, this suggests a potential bottleneck is present in your model. Common reasons: \n\u2022 Insufficient parallelism in kernels (i.e., low batch size) \n\u2022 Small kernels called in a loop. This is to say the launch overheads are not amortized", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "\u2022 CPU or I/O bottlenecks lead to the GPU not receiving enough work to keep busy \nLook at the performance recommendation section of the overview page, where you\u2019ll find potential suggestions on how to increase GPU utilization. In this example, GPU utilization is low, so the performance recommendation was to increase batch size. Increasing the batch size from 4 to 32, as per the performance recommendation, increased the GPU Utilization by 60.68%.", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "GPU Utilization: the step interval time in the profiler when a GPU engine was executing a workload. The higher the utilization %, the better. The drawback of using GPU utilization solely to diagnose performance bottlenecks is that it is too high-level and coarse. It won\u2019t be able to tell you how many Streaming Multiprocessors are in use. Note that while this metric is useful for detecting periods of idleness, a high value does not indicate efficient use of the GPU, only that it is doing anything at all. For instance, a kernel with a single thread running continuously will get a GPU Utilization of 100%.", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "Estimated Stream Multiprocessor Efficiency (Est. SM Efficiency) is a finer-grained metric; it indicates what percentage of SMs are in use at any point in the trace. This metric reports the percentage of time where there is at least one active warp on an SM, including warps that are stalled (NVIDIA doc). Est. SM Efficiency also has its limitations. For instance, a kernel with only one thread per block can\u2019t fully use each SM. 
SM Efficiency does not tell us how busy each SM is, only that they are doing anything at all, which can include stalling while waiting on the result of a memory load. To keep an SM busy, it is necessary to have a sufficient number of ready warps that can be run whenever a stall occurs.", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "Estimated Achieved Occupancy (Est. Achieved Occupancy) is a layer deeper than Est. SM Efficiency and GPU Utilization for diagnosing performance issues. Estimated Achieved Occupancy indicates how many warps can be active at once per SM. Having a sufficient number of active warps is usually key to achieving good throughput. Unlike GPU Utilization and SM Efficiency, it is not a goal to make this value as high as possible. As a rule of thumb, good throughput gains can be had by improving this metric to 15% and above. But at some point you will hit diminishing returns. If the value is already at 30%, for example, further gains will be uncertain. This metric reports the average values of all warp schedulers for the kernel execution period (NVIDIA doc). The larger the Est. Achieved Occupancy value is, the better. \n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "\n\nOverview details: Resnet50_batchsize4\n\n\n\nOverview details: Resnet50_batchsize32\n\nKernel View\nThe kernel view has \u201cBlocks per SM\u201d and \u201cEst. Achieved Occupancy\u201d, which are great tools to compare model runs. \n\n\n\nMean Blocks per SM:\nBlocks per SM = Blocks of this kernel / SM number of this GPU. If this number is less than 1, it indicates the GPU multiprocessors are not fully utilized. \u201cMean Blocks per SM\u201d is the weighted average of all runs of this kernel name, using each run\u2019s duration as weight. \nMean Est. Achieved Occupancy:", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "\nEst. Achieved Occupancy is defined as above in the overview. \u201cMean Est. Achieved Occupancy\u201d is the weighted average of all runs of this kernel name, using each run\u2019s duration as weight. \nTrace View\nThis trace view displays a timeline that shows the duration of operators in your model and which system executed the operation. This view can help you identify whether the high consumption and long execution is because of input or model training. Currently, this trace view shows GPU Utilization and Est. SM Efficiency on a timeline. \n\n\n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "\nGPU utilization is calculated independently and divided into multiple 10 millisecond buckets. The buckets\u2019 GPU utilization values are drawn alongside the timeline between 0 \u2013 100%. In the above example, the \u201cProfilerStep5\u201d GPU utilization during thread 28022\u2019s busy time is higher than the following one during \u201cOptimizer.step\u201d. This is where you can zoom in to investigate why that is. \n\n\n\nFrom above, we can see the former\u2019s kernels are longer than the latter\u2019s kernels. The latter\u2019s kernels are too short in execution, which results in lower GPU utilization. \nEst. SM Efficiency: Each kernel has a calculated est. SM efficiency between 0 \u2013 100%. For example, the below kernel has only 64 blocks, while this GPU has 80 SMs. Then its \u201cEst. 
SM Efficiency\u201d is 64/80, which is 0.8. \n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "\n\n\nCloud Storage Support\nAfter running pip install tensorboard, to have data be read through these cloud providers, you can now run: \ntorch-tb-profiler[blob] \ntorch-tb-profiler[gs] \ntorch-tb-profiler[s3] \n\npip install torch-tb-profiler[blob], pip install torch-tb-profiler[gs], or pip install torch-tb-profiler[S3] to have data be read through these cloud providers. For more information, please refer to this README. \nJump to Source Code:", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "Jump to Source Code:\nOne of the great benefits of having both TensorBoard and the PyTorch Profiler being integrated directly in Visual Studio Code (VS Code) is the ability to directly jump to the source code (file and line) from the profiler stack traces. VS Code Python Extension now supports TensorBoard Integration.\nJump to source is ONLY available when Tensorboard is launched within VS Code. Stack tracing will appear on the plugin UI if the profiling with_stack=True. When you click on a stack trace from the PyTorch Profiler, VS Code will automatically open the corresponding file side by side and jump directly to the line of code of interest for you to debug. This allows you to quickly make actionable optimizations and changes to your code based on the profiling results and suggestions. \n\n", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "Gify: Jump to Source using Visual Studio Code Plug In UI \n\nFor how to optimize batch size performance, check out the step-by-step tutorial here. PyTorch Profiler is also integrated with PyTorch Lightning and you can simply launch your lightning training jobs with --trainer.profiler=pytorch flag to generate the traces. Check out an example here. \nWhat\u2019s Next for the PyTorch Profiler?\nYou just saw how PyTorch Profiler can help optimize a model. You can now try the Profiler by pip install torch-tb-profiler to optimize your PyTorch model.", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "Look out for an advanced version of this tutorial in the future. We are also thrilled to continue to bring state-of-the-art tool to PyTorch users to improve ML performance. We'd love to hear from you. Feel free to open an issue here. \nFor new and exciting features coming up with PyTorch Profiler, follow @PyTorch on Twitter and check us out on pytorch.org. \nAcknowledgements\nThe author would like to thank the contributions of the following individuals to this piece. From the Facebook side: Geeta Chauhan, Gisle Dankel, Woo Kim, Sam Farahzad, and Mark Saroufim. On the Microsoft side: AI Framework engineers (Teng Gao, Mike Guo, and Yang Gu), Guoliang Hua, and Thuy Nguyen.", "source": "https://pytorch.org/blog/pytorch-profiler-1.9-released/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Announcing the Winners of the 2020 Global PyTorch Summer Hackathon'\nauthor: Team PyTorch\n\nMore than 2,500 participants in this year\u2019s Global PyTorch Summer Hackathon pushed the envelope to create unique new tools and applications for PyTorch developers and researchers.\n\n\n\nNotice: None of the projects submitted to the hackathon are associated with or offered by Facebook, Inc. 
\nThis year\u2019s projects fell into three categories:\n\n\nPyTorch Developer Tools: a tool or library for improving productivity and efficiency for PyTorch researchers and developers.\n\n\nWeb/Mobile Applications Powered by PyTorch: a web or mobile interface and/or an embedded device built using PyTorch.\n\n", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"} {"text": "\nPyTorch Responsible AI Development Tools: a tool, library, or web/mobile app to support researchers and developers in creating responsible AI that factors in fairness, security, privacy, and more throughout its entire development process.\n\nThe virtual hackathon ran from June 22 to August 25, with more than 2,500 registered participants, representing 114 countries from Republic of Azerbaijan, to Zimbabwe, to Japan, submitting a total of 106 projects. Entrants were judged on their idea\u2019s quality, originality, potential impact, and how well they implemented it.\nMeet the winners of each category below. \nPyTorch Developer Tools\n1st place - DeMask", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"} {"text": "DeMask is an end-to-end model for enhancing speech while wearing face masks \u2014 offering a clear benefit during times when face masks are mandatory in many spaces and for workers who wear face masks on the job. Built with Asteroid, a PyTorch-based audio source separation toolkit, DeMask is trained to recognize distortions in speech created by the muffling from face masks and to adjust the speech to make it sound clearer. \nThis submission stood out in particular because it represents both a high-quality idea and an implementation that can be reproduced by other researchers.\nHere is an example on how to train a speech separation model in less than 20 lines:\n```python\nfrom torch import optim\nfrom pytorch_lightning import Trainer\nfrom asteroid import ConvTasNet\nfrom asteroid.losses import PITLossWrapper\nfrom asteroid.data import LibriMix\nfrom asteroid.engine import System\ntrain_loader, val_loader = LibriMix.loaders_from_mini(task='sep_clean', batch_size=4)", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"} {"text": "model = ConvTasNet(n_src=2)\noptimizer = optim.Adam(model.parameters(), lr=1e-3)\nloss = PITLossWrapper(\n lambda x, y: (x - y).pow(2).mean(-1), # MSE\n pit_from=\"pw_pt\", # Point in the pairwise matrix.\n)\nsystem = System(model, optimizer, loss, train_loader, val_loader)\ntrainer = Trainer(fast_dev_run=True)\ntrainer.fit(system)\n```\n2nd place - carefree-learn\nA PyTorch-based automated machine learning (AutoML) solution, carefree-learn provides high-level APIs to make training models using tabular data sets simpler. It features an interface similar to scikit-learn and functions as an end-to-end end pipeline for tabular data sets. It automatically detects feature column types and redundant feature columns, imputes missing values, encodes string columns and categorical columns, and preprocesses numerical columns, among other features. \n3rd Place - TorchExpo", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"} {"text": "TorchExpo is a collection of models and extensions that simplifies taking PyTorch from research to production in mobile devices. 
This library is more than a web and mobile application, and also comes with a Python library. The Python library is available via pip install and it helps researchers convert a state-of-the-art model in TorchScript and ONNX format in just one line. Detailed docs are available here.\nWeb/Mobile Applications Powered by PyTorch\n1st place - Q&Aid", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"} {"text": "Q&Aid is a conceptual health-care chatbot aimed at making health-care diagnoses and facilitating communication between patients and doctors. It relies on a series of machine learning models to filter, label, and answer medical questions, based on a medical image and/or questions in text provided by a patient. The transcripts from the chat app then can be forwarded to the local hospitals and the patient will be contacted by one of them to make an appointment to determine proper diagnosis and care. The team hopes that this concept application helps hospitals to work with patients more efficiently and provide proper care. \n\n\n\n2nd place - Rasoee", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"} {"text": "Rasoee is an application that can take images as input and output the name of the dish. It also lists the ingredients and recipe, along with the link to the original recipe online. Additionally, users can choose a cuisine from the list of cuisines in the drop menu, and describe the taste and/or method of preparation in text. Then the application will return matching dishes from the list of 308 identifiable dishes. The team has put a significant amount of effort gathering and cleaning various datasets to build more accurate and comprehensive models. You can check out the application here.\n3rd place - Rexana the Robot \u2014 PyTorch", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"} {"text": "Rexana is an AI voice assistant meant to lay the foundation for a physical robot that can complete basic tasks around the house. The system is capable of autonomous navigation (knowing its position around the house relative to landmarks), recognizing voice commands, and object detection and recognition \u2014 meaning it can be commanded to perform various household tasks (e.g., \"Rexana, water the potted plant in the lounge room.\u201d). Rexana can be controlled remotely via a mobile device, and the robot itself features customizable hands (magnets, grippers, etc.) for taking on different jobs.\nPyTorch Responsible AI Development Tools\n1st place: FairTorch", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"} {"text": "FairTorch is a fairness library for PyTorch. It lets developers add constraints to their models to equalize metrics across subgroups by simply adding a few lines of code. Model builders can choose a metric definition of fairness for their context, and enforce it at time of training. 
The library offers a suite of metrics that measure an AI system\u2019s performance among subgroups, and can apply to high-stakes examples where decision-making algorithms are deployed, such as hiring, school admissions, and banking.\n\n\n\n\n\n2nd place: Fluence", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"} {"text": "Fluence is a PyTorch-based deep learning library for language research. It specifically addresses the large compute demands of natural language processing (NLP) research. Fluence aims to provide low-resource and computationally efficient algorithms for NLP, giving researchers algorithms that can enhance current NLP methods or help discover where current methods fall short.\n3rd place: Causing: CAUSal INterpretation using Graphs", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"} {"text": "Causing (CAUSal INterpretation using Graphs) is a multivariate graphic analysis tool for bringing transparency to neural networks. It explains causality and helps researchers and developers interpret the causal effects of a given equation system to ensure fairness. Developers can input data and a model describing the dependencies between the variables within the data set into Causing, and Causing will output a colored graph of quantified effects acting between the model\u2019s variables. In addition, it also allows developers to estimate these effects to validate whether data fits a model.\nThank you,\nThe PyTorch team", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2020-global-pytorch-summer-hackathon/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"PyTorch\u2019s Tracing Based Selective Build\"\nauthor: Dhruv Matani, Suraj Subramanian\nfeatured-img: \"/assets/images/pytorchs-tracing-based-selective-build_Figure_4.png\"\n\nIntroduction\nTL;DR: It can be challenging to run PyTorch on mobile devices, SBCs (Single Board Computers), and IOT devices. When compiled, the PyTorch library is huge and includes dependencies that might not be needed for the on-device use case. \nTo run a specific set of models on-device, we actually require only a small subset of the features in the PyTorch library. We found that using a PyTorch runtime generated using selective build can achieve up to 90% reduction in binary size (for the CPU and QuantizedCPU backends on an x86-64 build on Linux). In this blog, we share our experience of generating model-specific minimal runtimes using Selective Build and show you how to do the same.\nWhy is this important for app developers?", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "Why is this important for app developers?\nUsing a PyTorch runtime generated by selective build can reduce the size of AI-powered apps by 30+ MB - a significant reduction for a typical mobile app! Making mobile applications more lightweight has many benefits - they are runnable on a wider variety of devices, consume less cellular data, and can be downloaded and updated faster on user\u2019s devices.\nWhat does the Developer Experience look like?\nThis method can work seamlessly with any existing PyTorch Mobile deployment workflows. All you need to do is replace the general PyTorch runtime library with a runtime customized for the specific models you wish to use in your application. 
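The tracing step described below expects the model to be saved in PyTorch's lite-interpreter (.ptl) format. Here is a minimal sketch of producing such a file; the MobileNetV3 model and the output path are placeholders, any model you can trace or script will do, and a recent torchvision is assumed:

```python
import torch
import torchvision

# Placeholder model; substitute the model you actually plan to ship.
model = torchvision.models.mobilenet_v3_small(weights=None).eval()

# Trace the model and save it in the lite-interpreter (.ptl) format
# that the model_tracer binary consumes.
example_input = torch.rand(1, 3, 224, 224)
traced = torch.jit.trace(model, example_input)
traced._save_for_lite_interpreter("/tmp/path_to_model.ptl")
```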
The general steps in this process are:\n\nBuild the PyTorch Runtime in instrumentation mode (this is called an instrumentation build of PyTorch). This will record the used operators, kernels and features.\n", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "\nRun your models through this instrumentation build by using the provided model_tracer binary. This will generate a single YAML file that stores all the features used by your model. These features will be preserved in the minimal runtime.\nBuild PyTorch using this YAML file as input. This is the selective build technique, and it greatly reduces the size of the final PyTorch binary.\nUse this selectively-built PyTorch library to reduce the size of your mobile application!\n\nBuilding the PyTorch Runtime in a special \u201cinstrumentation\u201d mode ( by passing the TRACING_BASED=1 build option) generates an instrumentation build runtime of PyTorch, along with a model_tracer binary. Running a model with this build allows us to trace the parts of PyTorch used by the model.\n\n\n\n\nFigure 1: Instrumentation build of PyTorch\n\n```python", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "\n# Clone the PyTorch repo\ngit clone https://github.com/pytorch/pytorch.git\ncd pytorch\n\n# Build the model_tracer\nUSE_NUMPY=0 USE_DISTRIBUTED=0 USE_CUDA=0 TRACING_BASED=1 \\\n python setup.py develop\n\nNow this instrumentation build is used to run a model inference with representative inputs. The model_tracer binary observes parts of the instrumentation build that were activated during the inference run, and dumps it to a YAML file.\n\n\n\n\nFigure 2: YAML file generated by running model(s) on an instrumentation build\n\n# Generate YAML file\n./build/bin/model_tracer \\\n --model_input_path /tmp/path_to_model.ptl \\\n --build_yaml_path /tmp/selected_ops.yaml\n", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "--build_yaml_path /tmp/selected_ops.yaml\n\nNow we build the PyTorch Runtime again, but this time using the YAML file generated by the tracer. The runtime now only includes those parts that are needed for this model. This is called **\u201cSelectively built PyTorch runtime\u201d** in the diagram below.\n\n```python\n# Clean out cached configuration\nmake clean\n\n# Build PyTorch using Selected Operators (from the YAML file)\n# using the host toolchain, and use this generated library\nBUILD_PYTORCH_MOBILE_WITH_HOST_TOOLCHAIN=1 \\\nUSE_LIGHTWEIGHT_DISPATCH=0 \\\nBUILD_LITE_INTERPRETER=1 \\\nSELECTED_OP_LIST=/tmp/selected_ops.yaml \\\nTRACING_BASED=1 \\\n ./scripts/build_mobile.sh\n\n\n\n\n\nFigure 3: Selective Build of PyTorch and model execution on a selectively built PyTorch runtime\n\nShow me the code!", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "\nShow me the code!\nWe\u2019ve put together a notebook to illustrate what the process above looks like in code using a simple PyTorch model. \nFor a more hands-on tutorial to deploy this on Android/iOS this tutorial should be helpful.\nTechnical FAQs\nWhy is Tracing needed for a Selective Build of PyTorch?\nIn PyTorch, CPU kernels can call other operators via the PyTorch Dispatcher. 
Simply including the set of root operators called directly by the model is not sufficient as there might be many more being called under-the-hood transitively. Running the model on representative inputs and observing the actual list of operators called (aka \u201ctracing\u201d) is the most accurate way of determining what parts of PyTorch are used.", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "Additionally, factors such as which dtypes a kernel should handle are also runtime features that depend on actual input provided to the model. Hence, the tracing mechanism is extremely suitable for this purpose.\nWhich features can be selected (in or out) by using Tracing Based Selective Build?\nThe following features can be selected for the PyTorch runtime during the tracing based selective build process:\n\nCPU/QuantizedCPU kernels for PyTorch\u2019s ATen Operators: If a PyTorch Operator is not needed by a model targeted at a selectively built runtime, then the registration of that CPU kernel is omitted in the runtime. This is controlled via Torchgen code-gen.\n", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "\nPrimary Operators: This is controlled by a macro named TORCH_SELECTIVE_SCHEMA (via templated selective build) that either selects a primary operator or de-selects it based on information in a generated header file.\nCode that handles specific dtypes in CPU kernels: This is performed by generating exception throws in specific case statements in the switch case generated by the macro AT_PRIVATE_CHECK_SELECTIVE_BUILD.\n", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "\nRegistration of Custom C++ Classes that extend PyTorch: This is controlled by the macro TORCH_SELECTIVE_CLASS, which can be used when registering Custom C++ Classes. The torch::selective_class_<> helper is to be used in conjunction with the macro TORCH_SELECTIVE_CLASS.\n\nWhat is the structure of the YAML file used during the build?\nThe YAML file generated after tracing looks like the example below. It encodes all the elements of the \u201cselectable\u201d build feature as specified above.\n```python\ninclude_all_non_op_selectives: false\nbuild_features: []\noperators:\n aten::add.Tensor:\n is_used_for_training: false\n is_root_operator: true", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "is_root_operator: true\n include_all_overloads: false\n aten::len.t:\n is_used_for_training: false\n is_root_operator: true\n include_all_overloads: false\nkernel_metadata:\n local_scalar_dense_cpu:\n - Float\n add_stub:\n - Float\n copy:\n - Bool\n - Byte\n mul_cpu:\n - Float\ncustom_classes: []\n```\nHow exactly is code eliminated from the generated binary?\nDepending on the specific scenario, there are 2 main techniques that are used to hint the compiler and linker about unused and unreachable code. This code is then cleaned up by the compiler or linker as unreachable code.\n[1] Unreferenced functions removed by the Linker\nWhen a function that isn\u2019t transitively referenced from any visible function is present in the compiled object files that are being linked together, the linker will remove it (if the right build flags are provided). 
This is leveraged in 2 scenarios by the selective build system.\nKernel Registration in the Dispatcher", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "Kernel Registration in the Dispatcher\nIf an operator\u2019s kernel isn\u2019t needed, then it isn\u2019t registered with the dispatcher. An unregistered kernel means that the function is unreachable, and it will be removed by the linker.\nTemplated Selective Build\nThe general idea here is that a class template specialization is used to select a class that either captures a reference to a function or not (depending on whether it\u2019s used) and the linker can come along and clean out the unreferenced function.\nFor example, in the code below, there\u2019s no reference to the function \u201cfn2\u201d, so it will be cleaned up by the linker since it\u2019s not referenced anywhere.\n```python\ninclude \ninclude \ntemplate \nstruct FunctionSelector {\n T fn_;\n FunctionSelector(T fn): fn_(fn) {}\n T get() { return this->fn_; }\n};\n// The \"false\" specialization of this class does NOT retain the argument passed\n// to the class constructor, which means that the function pointer passed in", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "// is considered to be unreferenced in the program (unless it is referenced\n// elsewhere).\ntemplate \nstruct FunctionSelector {\n FunctionSelector(T) {}\n};\ntemplate \nFunctionSelector make_function_selector_true(T fn) {\n return FunctionSelector(fn);\n}\ntemplate \nFunctionSelector make_function_selector_false(T fn) {\n return FunctionSelector(fn);\n}\ntypedef void(*fn_ptr_type)();\nstd::vector fns;\ntemplate \nvoid add_fn(FunctionSelector fs) {\n fns.push_back(fs.get());\n}\ntemplate \nvoid add_fn(FunctionSelector) {\n // Do nothing.\n}\n// fn1 will be kept by the linker since it is added to the vector \"fns\" at\n// runtime.\nvoid fn1() {\n printf(\"fn1\\n\");\n}\n// fn2 will be removed by the linker since it isn't referenced at all.\nvoid fn2() {\n printf(\"fn2\\n\");\n}\nint main() {\n add_fn(make_function_selector_true(fn1));\n add_fn(make_function_selector_false(fn2));", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "add_fn(make_function_selector_false(fn2));\n}\n```\n[2] Dead Code Eliminated by the Compiler\nC++ Compilers can detect dead (unreachable) code by analyzing the code\u2019s control flow statically. For example, if there\u2019s a code-path that comes after an unconditional exception throw, then all the code after it will be marked as dead code and not converted to object code by the compiler. Typically, compilers require the use of the -fdce flag to eliminate dead code.\nIn the example below, you can see that the C++ code on the left (in the red boxes) doesn\u2019t have any corresponding generated object code on the right.\n\n\n\n\nFigure 4: Dead Code Elimination by C++ Compilers\n", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "\nThis property is leveraged in the bodies of PyTorch kernel implementations that have a lot of repeated code to handle multiple dtypes of a Tensor. A dtype is the underlying data-type that the Tensor stores elements of. 
This can be one of float, double, int64, bool, int8, etc\u2026\nAlmost every PyTorch CPU kernel uses a macro of the form AT_DISPATCH_ALL_TYPES* that is used to substitute some code specialized for every dtype that the kernel needs to handle. For example:\nAT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3(\n kBool, kHalf, kBFloat16, dtype, \"copy_kernel\", [&] {\n cpu_kernel_vec(\n iter,\n [=](scalar_t a) -> scalar_t { return a; },\n [=](Vectorized a) -> Vectorized { return a; });\n});\n", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "});\n```\nThe macro AT_DISPATCH_ALL_TYPES_AND_COMPLEX_AND3 internally has a switch-case statement that looks like the code in Figure-4 above. The tracing process records the dtypes triggered for the kernel tag \"copy_kernel\" and the build process processes these tags and inserts throw statements in every case statement that is handling the dtype that isn\u2019t required for this kernel tag.\nThis is how dtype selectivity is implemented in PyTorch\u2019s Tracing Based Selective Build.\nConclusion\nTracing Based Selective Build is a practical and scalable approach to selecting only the used parts of an application to retain code that static analysis can not detect. This code is usually extremely data/input dependent in nature.\nThis article provides detailed insights into how Tracing Based Selective Build works under the hood, and the technical details related to its implementation. These techniques can also be applied to other applications and situations that can benefit from reduced binary size.", "source": "https://pytorch.org/blog/pytorchs-tracing-based-selective-build/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"PyTorch & OpenXLA: The Path Forward\"\nauthor: Milad Mohammadi, Jack Cao, Shauheen Zahirazami, Joe Spisak, and Jiewen Tan \n\nAs we celebrate the release of OpenXLA, PyTorch 2.0, and PyTorch/XLA 2.0, it\u2019s worth taking a step back and sharing where we see it all going in the short to medium term. With PyTorch adoption leading in the AI space and XLA supporting best-in-class compiler features, PyTorch/XLA is well positioned to provide a cutting edge development stack for both model training and inference. To achieve this, we see investments in three main areas:", "source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"} {"text": "\nTraining Large Models - Large language models (LLM) and diffusion models have quickly risen in popularity and many cutting edge applications today are built on them. Further to this, training these models requires scale and more specifically the ability to train across thousands of accelerators. To achieve this we are investing in features such as AMP for mixed precision training, PjRt for increased runtime performance, SPMD / FSDP for efficient model sharding, Dynamic Shapes to enable new research approaches, faster data loading through Ray and tf.data, and a toolchain that packages all of these features together into a seamless workflow. 
Some of these features are already available in experimental or beta stages, and others are coming up this year with many heavily leveraging the underlying OpenXLA compiler stack.\n", "source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"} {"text": "\nModel Inference - With large models continuing to grow in size and computational cost, deployment becomes the next challenge as these models continue to find their way into applications. With the introduction of Dynamo in the PyTorch 2.0 release, PyTorch/XLA delivers performance competitive inference. We are, however, incorporating additional inference-oriented including model serving support, Dynamo for sharded large models, quantization via Torch.Export and StableHLO.\nEcosystem integration - We are expanding integration with Hugging Face and PyTorch Lightning so users can take advantage of upcoming PyTorch/XLA cutting edge features (e.g. FSDP support in Hugging Face) and the downstream OpenXLA features (e.g. Quantization) through familiar APIs.\n", "source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"} {"text": "Additionally, PyTorch/XLA is set to migrate to the open source OpenXLA as its default downstream compiler; allowing the PyTorch community to gain access to a leading, framework-agnostic compiler stack that enjoys industry-wide contribution and innovation. To achieve this, we will begin supporting StableHLO. As a result, OpenXLA will replace the existing TF:XLA dependency, overall streamlining the dependencies and creating leverage from the broader compiler ecosystem. PyTorch/XLA will also sunset the XRT runtime after migration. You can see the resulting high level stack below with the TensorFlow dependency stricken out:\n \nFigure: the upcoming PyTorch/XLA features and integrations are illustrated here", "source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"} {"text": "We cannot be more excited about what\u2019s ahead for PyTorch/XLA and invite the community to join us. PyTorch/XLA is developed fully in open source so please file issues, submit pull requests, and send RFCs to GitHub such that we can openly collaborate. You can also try out PyTorch/XLA for yourself on various XLA devices including TPUs and GPUs.\nCheers,\nThe PyTorch/XLA Team at Google", "source": "https://pytorch.org/blog/pytorch-2.0-xla-path-forward/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Scaling Multimodal Foundation Models in TorchMultimodal with Pytorch Distributed\"\nauthor: Ankita De, Edward Wang (EcoF), Rohan Varma, Anjali Sridhar, Kartikay Khandelwal\nfeatured-img: \"/assets/images/scaling-multimodal-image1-diagram-of-multimodal-flava-new.png\"\n\nIntroduction", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "\nIntroduction\nIn recent years, scaling model sizes has become a promising area of research. In the field of NLP, language models have gone from hundreds of millions of parameters (BERT) to hundreds of billions of parameters (GPT-3) demonstrating significant improvements on downstream tasks. The scaling laws for large scale language models have also been studied extensively in the industry. A similar trend can be observed in the vision field, with the community moving to transformer based models (like Vision Transformer, Masked Auto Encoders) as well. 
It is clear that individual modalities - text, image, video - have benefited massively from recent advancements in scale, and frameworks have quickly adapted to accommodate larger models.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "At the same time, multimodality is becoming increasingly important in research with tasks like image-text retrieval, visual question-answering, visual dialog and text to image generation gaining traction in real world applications. Training large scale multimodal models is the natural next step and we already see several efforts in this area like CLIP from OpenAI, Parti from Google and CM3 from Meta.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "In this blog, we present a case study demonstrating the scaling of FLAVA to 10B params using techniques from PyTorch Distributed. FLAVA is a vision and language foundation model, available in TorchMultimodal, which has shown competitive performance on both unimodal and multimodal benchmarks. We also give the relevant code pointers in this blog. The instructions for running an example script to scale FLAVA can be found here.\nScaling FLAVA Overview", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "Scaling FLAVA Overview\nFLAVA is a foundation multimodal model which consists of transformer based image and text encoders followed by a transformer-based multimodal fusion module. It is pretrained on both unimodal and multimodal data with a diverse set of losses. This includes masked language, image and multimodal modeling losses that require the model to reconstruct the original input from its context (self-supervised learning). It also uses image text matching loss over positive and negative examples of aligned image-text pairs as well as CLIP style contrastive loss. In addition to multimodal tasks (like image-text retrieval), FLAVA demonstrated competitive performance on unimodal benchmarks as well (GLUE tasks for NLP and image classification for vision).\n\n\n", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "\nThe original FLAVA model has ~350M parameters and uses ViT-B16 configurations (from the Vision Transformer paper) for image and text encoders. The multimodal fusion transformer follows the unimodal encoders but with half the number of layers. We explore increasing the size of each encoder to larger ViT variants. \nAnother aspect of scaling is adding the ability to increase the batch size. FLAVA makes use of contrastive loss over in-batch negatives, which typically benefits from large batch size (as studied here). The largest training efficiency or throughput is also generally achieved when operating near maximum possible batch sizes as determined by the amount of GPU memory available (also see the experiments section).", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "The following table displays the different model configurations we experimented with. 
We also determine the maximum batch size that was able to fit in memory for each configuration in the experiments section.\n\n\n\nApprox Model params\nHidden size\nMLP size\nHeads\nUnimodal layers\nMultimodal layers\nModel size (fp32)\n\n\n\n\n350M (original)\n768\n3072\n12\n12\n6\n1.33GB\n\n\n900M\n1024\n4096\n16\n24\n12\n3.48GB\n\n\n1.8B\n1280\n5120\n16\n32\n16\n6.66GB\n\n\n2.7B\n1408\n6144\n16\n40\n20\n10.3GB\n\n\n", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "| 4.8B | 1664 | 8192 | 16 | 48 | 24 | 18.1GB |\n| 10B | 2048 | 10240 | 16 | 64 | 40 | 38GB |\nOptimization overview\nPyTorch offers several native techniques to efficiently scale models. In the following sections, we go over some of these techniques and show how they can be applied to scale up a FLAVA model to 10 billion parameters.\nDistributed Data Parallel\nA common starting point for distributed training is data parallelism. Data parallelism replicates the model across each worker (GPU), and partitions the dataset across the workers. Different workers process different data partitions in parallel and synchronize their gradients (via all reduce) before model weights are updated. The figure below showcases the flow (forward, backward, and weight update steps) for processing a single example for data parallelism:\n", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "\n\n\n\n Source: https://engineering.fb.com/2021/07/15/open-source/fsdp/\n\nPyTorch provides a native API, DistributedDataParallel (DDP) to enable data parallelism which can be used as a module wrapper as showcased below. Please see PyTorch Distributed documentation for more details.\n```Python\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nimport torch\nimport torch.distributed as dist\nmodel = flava_model_for_pretraining().cuda()\nInitialize PyTorch Distributed process groups\nPlease see https://pytorch.org/tutorials/intermediate/dist_tuto.html for details\ndist.init_process_group(backend=\u201dnccl\u201d)\nWrap model in DDP", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "Wrap model in DDP\nmodel = torch.nn.parallel.DistributedDataParallel(model, device_ids=[torch.cuda.current_device()])\n```\nFully Sharded Data Parallel\nGPU memory usage of a training application can roughly be broken down into model inputs, intermediate activations (needed for gradient computation), model parameters, gradients, and optimizer states. Scaling a model will typically increase each of these elements. Scaling a model with DDP can eventually result in out-of-memory issues when a single GPU's memory becomes insufficient since it replicates the parameters, gradients, and optimizer states on all workers.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "To reduce this replication and save GPU memory, we can shard the model parameters, gradients, and optimizer states across all workers with each worker only managing a single shard. This technique was popularized by the ZeRO-3 approach developed by Microsoft. 
A PyTorch-native implementation of this approach is available as FullyShardedDataParallel (FSDP) API, released as a beta feature in PyTorch 1.12. During a module\u2019s forward and backward passes, FSDP unshards the model parameters as needed for computation (using all-gather) and reshards them after computation. It synchronizes gradients using the reduce-scatter collective to ensure sharded gradients are globally averaged. The forward and backward pass flow of a model wrapped in FSDP are detailed below:\n\n\n\n", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "\n\n Source: https://engineering.fb.com/2021/07/15/open-source/fsdp/\n\nTo use FSDP, the submodules of a model need to be wrapped with the API to control when specific submodules are sharded or unsharded. FSDP provides an auto-wrapping API (see the auto_wrap_policy argument) that can be used out of the box as well as several wrapping policies and the ability to write your own policy.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "The following example demonstrates wrapping the FLAVA model with FSDP. We specify the auto-wrapping policy as transformer_auto_wrap_policy. This will wrap individual transformer layers (TransformerEncoderLayer), the image transformer (ImageTransformer), text encoder (BERTTextEncoder) and multimodal encoder (FLAVATransformerWithoutEmbeddings) as individual FSDP units. This uses a recursive wrapping approach for efficient memory management. For example, after an individual transformer layer\u2019s forward or backward pass is finished, its parameters are discarded, freeing up memory thereby reducing peak memory usage.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "FSDP also provides a number of configurable options to tune the performance of applications. For example, in our use case, we illustrate the use of the new limit_all_gathers flag, which prevents all-gathering model parameters too early thereby alleviating memory pressure on the application. 
We encourage users to experiment with this flag which can potentially improve the performance of applications with high active memory usage.\n```Python\nimport torch\nfrom torch.distributed.fsdp import FullyShardedDataParallel as FSDP\nfrom torch.distributed.fsdp.wrap import transformer_auto_wrap_policy\nfrom torchmultimodal.models.flava.model import flava_model_for_pretraining\nfrom torchmultimodal.models.flava.text_encoder import BertTextEncoder\nfrom torchmultimodal.models.flava.image_encoder import ImageTransformer\nfrom torchmultimodal.models.flava.transformer import FLAVATransformerWithoutEmbeddings\nfrom torchmultimodal.modules.layers.transformer import TransformerEncoderLayer", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "model = flava_model_for_pretraining().cuda()\ndist.init_process_group(backend=\u201dnccl\u201d)\nmodel = FSDP(\n model,\n device_id=torch.cuda.current_device(),\n auto_wrap_policy=partial(\n transformer_auto_wrap_policy,\n transformer_layer_cls={\n TransformerEncoderLayer,\n ImageTransformer,\n BERTTextEncoder,\n FLAVATransformerWithoutEmbeddings\n },\n ),\n limit_all_gathers=True,\n )\n```\nActivation Checkpointing", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": ")\n```\nActivation Checkpointing\nAs discussed above, intermediate activations, model parameters, gradients, and optimizer states contribute to the overall GPU memory usage. FSDP can reduce memory consumption due to the latter three but does not reduce memory consumed by activations. Memory used by activations increases with increase in batch size or number of hidden layers. Activation checkpointing is a technique to decrease this memory usage by recomputing the activations during the backward pass instead of holding them in memory for a specific checkpointed module. For example, we observed ~4x reduction in the peak active memory after forward pass by applying activation checkpointing to the 2.7B parameter model.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "PyTorch offers a wrapper based activation checkpointing API. In particular, checkpoint_wrapper allows users to wrap an individual module with checkpointing, and apply_activation_checkpointing allows users to specify a policy with which to wrap modules within an overall module with checkpointing. Both these APIs can be applied to most models as they do not require any modifications to the model definition code. However, if more granular control over checkpointed segments, such as checkpointing specific functions within a module, is required, the functional torch.utils.checkpoint API can be leveraged, although this requires modification to the model code. The application of the activation checkpointing wrapper to individual FLAVA transformer layers (denoted by TransformerEncoderLayer) is shown below. 
For a thorough description of activation checkpointing, please see the description in the PyTorch documentation.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "from torchmultimodal.models.flava.model import flava_model_for_pretraining\nfrom torch.distributed.algorithms._checkpoint.checkpoint_wrapper import apply_activation_checkpointing, checkpoint_wrapper, CheckpointImpl\nfrom torchmultimodal.modules.layers.transformer import TransformerEncoderLayer\n\nmodel = flava_model_for_pretraining()\ncheckpoint_tformer_layers_policy = lambda submodule: isinstance(submodule, TransformerEncoderLayer)\n\napply_activation_checkpointing(\n model,\n checkpoint_wrapper_fn=checkpoint_wrapper,\n check_fn=checkpoint_tformer_layers_policy,\n )\n\nUsed together, wrapping FLAVA transformer layers with activation checkpointing and wrapping the overall model with FSDP as demonstrated above, we are able to scale FLAVA to 10B parameters.\nExperiments", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "Experiments\nWe conduct an empirical study about the impact of the different optimizations from the previous section on system performance. For all our experiments, we use a single node with 8 A100 40GB GPUs and run the pretraining for 1000 iterations. All runs also used PyTorch\u2019s automatic mixed precision with the bfloat16 data type. TensorFloat32 format is also enabled to improve matmul performance on the A100. We define throughput as the average number of items (text or image) processed per second (we ignore the first 100 iterations while measuring throughput to account for warmup). We leave training to convergence and its impact on downstream task metrics as an area for future study.", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "Figure 1 plots the throughput for each model configuration and optimization, both with a local batch size of 8 and then with the maximum batch size possible on 1 node. Absence of a data point for a model variant for an optimization indicates that the model could not be trained on a single node.\nFigure 2 plots the maximum possible batch size per worker for each optimization. We observe a few things:\n\nScaling model size: DDP is only able to fit the 350M and 900M model on a node. With FSDP, due to memory savings, we are able to train ~3x bigger models compared to DDP (i.e. the 1.8B and 2.7B variants). Combining activation checkpointing (AC) with FSDP enables training even bigger models, on the order of ~10x compared to DDP (i.e. 4.8B and 10B variants)\nThroughput:\n", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "\nThroughput:\nFor smaller model sizes, at a constant batch size of 8, the throughput for DDP is slightly higher than or equal to FSDP, explainable by the additional communication required by FSDP. It is lowest for FSDP and AC combined together. This is because AC re-runs checkpointed forward passes during the backwards pass, trading off additional computation for memory savings. However, in the case of the 2.7B model, FSDP + AC actually has higher throughput compared to FSDP alone. 
This is because the 2.7B model with FSDP is operating close to the memory limit even at batch size 8 triggering CUDA malloc retries which tend to slow down training. AC helps with reducing the memory pressure and leads to no retries.\n", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "\nFor DDP and FSDP + AC, the throughput increases with an increase in batch size for each model. For FSDP alone, this is true for smaller variants. However, with the 1.8B and 2.7B parameter models, we observe throughput degradation when increasing batch size. A potential reason for this, as noted above also, is that at the memory limit, PyTorch\u2019s CUDA memory management may have to retry cudaMalloc calls and/or run expensive defragmentation steps to find free memory blocks to handle the workload\u2019s memory requirements which can result in training slowdown.\n", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "\nFor larger models that can only be trained with FSDP (1.8B, 2.7B, 4.8B) the setting with highest throughput achieved is with FSDP + AC scaling to the maximum batch size. For 10B, we observe nearly equal throughput for smaller and maximum batch size. This might be counterintuitive as AC results in increased computation and maxing out batch size potentially leads to expensive defragmentation operations due to operating at CUDA memory limit. However, for these large models, the increase in batch size is large enough to mask this overhead.\n\n\n\n\n\n Figure 1: Training throughput for different configurations\n\n", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "\n\nBatch size: FSDP alone enables slightly higher batch sizes compared to DDP. Using FSDP + AC enables ~3x batch size compared to DDP for the 350M param model and ~5.5x for 900M param model. Even for 10B, a max batch size of ~20 which is fairly decent. This essentially enables larger global batch size using fewer GPUs which is especially useful for contrastive learning tasks.\n\n\n\n\n\n Figure 2: Max local batchsize possible for different configurations\n\nConclusion", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "\nConclusion\nAs the world moves towards multimodal foundation models, scaling model parameters and efficient training is becoming an area of focus. The PyTorch ecosystem aims to accelerate innovation in this field by providing different tools to the research community, both for training and scaling multimodal models. With FLAVA, we laid out an example of scaling a model for multimodal understanding. In the future, we plan to add support for other kinds of models like the ones for multimodal generation and demonstrate their scaling factors. 
We also hope to automate many of these scaling and memory saving techniques (such as sharding and activation checkpointing) to reduce the amount of user experimentation needed to achieve the desired scale and maximum training throughput.\nReferences\n\nIntroducing TorchMultimodal - a library for accelerating exploration in Multimodal AI\n", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "\nFLAVA paper\nIntroducing Pytorch FSDP\n", "source": "https://pytorch.org/blog/scaling-multimodal-foundation-models-in-torchmultimodal-with-pytorch-distributed/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"A BetterTransformer for Fast Transformer Inference\"\nauthor: Michael Gschwind, Eric Han, Scott Wolchok, Rui Zhu, Christian Puhrsch\nfeatured-img: \"/assets/images/2022-7-12-a-better-transformer-for-fast-transformer-encoder-inference-3.png\"\n\ntl;dr Transformers achieve state-of-the-art performance for NLP, and are becoming popular for a myriad of other tasks. They are computationally expensive which has been a blocker to their widespread productionisation. Launching with PyTorch 1.12, BetterTransformer implements a backwards-compatible fast path of torch.nn.TransformerEncoder for Transformer Encoder Inference and does not require model authors to modify their models. BetterTransformer improvements can exceed 2x in speedup and throughput for many common execution scenarios. To use BetterTransformer, install PyTorch 1.12 and start using high-quality, high-performance Transformer models with the PyTorch API today.", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} {"text": "\n\n\n\nDiagram of the Transformer Encoder Architecture (from \"Attention Is All You Need\"). During Inference, the entire module will execute as a single PyTorch-native function.\n\nIn this blog post, we share the following topics \u2014 Performance Improvements, Backwards compatibility, and Taking advantage of the FastPath. Learn more about these topics below. \nPerformance Improvements", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} {"text": "Performance Improvements\nBetterTransformer launches with accelerated native implementations of MultiHeadAttention and TransformerEncoderLayer for CPUs and GPUs. These fast paths are integrated in the standard PyTorch Transformer APIs, and will accelerate TransformerEncoder, TransformerEncoderLayer and MultiHeadAttention nn.modules. These new modules implement two types of optimizations: (1) fused kernels combine multiple individual operators normally used to implement Transformers to provide a more efficient implementation, and (2) take advantage of sparsity in the inputs to avoid performing unnecessary operations on padding tokens. Padding tokens frequently account for a large fraction of input batches in many Transformer models used for Natural Language Processing.", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} {"text": "Backwards compatibility\nAdvantageously, no model changes are necessary to benefit from the performance boost offered by BetterTransformer. To benefit from fast path execution, inputs and operating conditions must satisfy some access conditions (see below). 
While the internal implementation of Transformer APIs has changed, PyTorch 1.12 maintains strict compatibility with Transformer modules shipped in previous versions, enabling PyTorch users to use models created and trained with previous PyTorch releases while benefiting from BetterTransformer improvements.\nIn addition to enabling the PyTorch nn.Modules, BetterTransformer provides improvements for PyTorch libraries. Performance benefits will become available through two different enablement paths:", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} {"text": "\nTransparent acceleration: Current users of PyTorch nn.Modules such as MultiHeadAttention as well as higher-level Transformer components will benefit from the improved performance of the new nn.Modules automatically. An example of this is the visual transformer (ViT) implementation used in the torchvision library (code link).\n", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} {"text": "\nTorchtext library acceleration: As part of this project, we have optimized Torchtext to build on the PyTorch core API to benefit from BetterTransformer enhancements while maintaining strict and transparent compatibility with previous library versions and models trained with previous Torchtext versions. Using PyTorch Transformers in Torchtext also ensures that Torchtext will benefit from expected future enhancements to the PyTorch Transformer implementation.\n\nTaking advantage of the Fastpath\nBetterTransformer is a fastpath for the PyTorch Transformer API. The fastpath is a native, specialized implementation of key Transformer functions for CPU and GPU that applies to common Transformer use cases.", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} {"text": "To take advantage of input sparsity (i.e. padding) in accelerating your model (see Figure 2), set the keyword argument enable_nested_tensor=True when instantiating a TransformerEncoder and pass in the src_key_padding_mask argument (which denotes padding tokens) during inference. This requires the padding mask to be contiguous, which is the typical case.", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} {"text": "Currently, the BetterTransformer speedup only applies to transformer encoder models used in inference. To benefit from fastpath execution, models must be composed of any of the following components: TransformerEncoder, TransformerEncoderLayer or MultiheadAttention (MHA). Fastpath execution is also subject to some criteria. Most importantly, the model must be executed in inference mode and operate on input tensors that do not collect gradient tape information (e.g., running with torch.no_grad). The full list of conditions can be found at these links for nn.MultiHeadAttention and nn.TransformerEncoder, respectively. If the criteria are not met, control flows to the legacy PyTorch 1.11 Transformer implementation which has the same API, but lacks the fastpath performance boost.", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} {"text": "Other transformer models (such as decoder models) which use the PyTorch MultiheadAttention module will benefit from the BetterTransformer fastpath. 
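To make these conditions concrete, the sketch below (shapes and sizes are arbitrary) builds a TransformerEncoder with enable_nested_tensor=True and runs it in inference mode with a contiguous padding mask, which is what allows the fastpath and its sparsity optimization to engage:

```python
import torch

encoder_layer = torch.nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)
model = torch.nn.TransformerEncoder(
    encoder_layer, num_layers=6, enable_nested_tensor=True
).eval()

batch_size, seq_len = 32, 196
src = torch.rand(batch_size, seq_len, 512)
lengths = torch.randint(low=20, high=seq_len, size=(batch_size,))
# True marks padding positions; padding is contiguous at the end of each sequence.
src_key_padding_mask = torch.arange(seq_len)[None, :] >= lengths[:, None]

with torch.no_grad():  # fastpath requires inference without gradient taping
    out = model(src, src_key_padding_mask=src_key_padding_mask)
```

If any fastpath condition is not met, the same call transparently falls back to the standard implementation, so code written this way remains correct either way.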
Planned future work is to expand the end-to-end BetterTransformer fastpath to models based on TransformerDecoder to support popular seq2seq and decoder-only (e.g., OPT) model architectures, and to training.\nSpeedups\nThe following graphs show the performance achieved for the BERT-base model with small and large-scale inputs:\n\n\n\n\nFigure 1: PyTorch 1.12 Improvements with BetterTransformer fastpath execution\n\n", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} {"text": "\n\n\n\n\nFigure 2: PyTorch 1.12 Improvements with BetterTransformer fastpath execution\nwith sparsity optimization enabled by enable_nested_tensor=True\n", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} {"text": "\nBetterTransformer includes two types of optimization: (1) fused kernels implementing multiple operations more efficiently in a single kernel, and (2) exploiting sparsity by avoiding unnecessary processing on padding tokens. Enhanced performance for small input sizes benefits primarily from the fused kernel implementations, and shows a constant performance improvement regardless of padding amount. While large inputs still benefit from fused kernels, the computation heavy processing limits the benefits that may be obtained by the fused kernels as baseline performance is already closer to the theoretical peak. However, as we increase the amount of padding, performance increases dramatically as increasingly large amounts of computation can be avoided by exploiting the sparsity introduced by padding in NLP workloads.\nFuture Work", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} {"text": "Future Work\nAs part of our ongoing work on PyTorch BetterTransformer, we are working on extending BetterTransformer improvements to Transformer Decoders. We aim to expand beyond inference to training as well.\nWe are partnering to enable BetterTransformer on additional libraries such as FairSeq, MetaSeq, and HuggingFace to benefit all Transformer-based PyTorch models. We\u2019ll provide future updates on the progress of BetterTransformer accelerations for the larger PyTorch ecosystem as part of this blog series.\nAcknowledgements: The authors would like to thank Lin Qiao, Ajit Mathews, Andrew Tulloch, Dmytro Dzhulgakov, Natalia Gimelshein, Emad El-Haraty, Mark Saroufim, Adnan Aziz, Geeta Chauhan, and Hamid Shojanazeri for their support, contributions and many helpful suggestions throughout the course of this project, and in the preparation of this blog.", "source": "https://pytorch.org/blog/a-better-transformer-for-fast-transformer-encoder-inference/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Experience the power of PyTorch 2.0 on AMD Solutions\"\nauthor: AMD\n\nPyTorch 2.0 represents a significant step forward for the PyTorch machine learning framework. The stable release of PyTorch 2.0 brings new features that unlock even higher performance, while remaining backward compatible with prior releases and retaining the Pythonic focus which has helped to make PyTorch so enthusiastically adopted by the AI/ML community. 
AMD has long been a strong proponent of PyTorch, and we are delighted that the PyTorch 2.0 stable release includes support for AMD Instinct\u2122 and Radeon\u2122 GPUs that are supported by the ROCm\u2122 software platform.", "source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"} {"text": "With the stable PyTorch 2.0 release, PyTorch 2.0 introduces torch.compile as a beta feature underpinned by TorchInductor with support for AMD Instinct and Radeon GPUs through OpenAI Triton deep learning compiler. Through TorchInductor, developers can now generate low level kernels using Triton that are portable and performant to hand-written kernels on native hardware centric kernel programming models.\nOpenAI Triton is a language and compiler for blocked algorithms, which aims to provide an abstraction layer between CUDA/HIP and Torch at which developers can write efficient kernels more productively. We have written a new backend which interfaces Triton's custom MLIR dialects with our ROCm compiler stack.", "source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"} {"text": "Triton can automatically optimize kernels generated by machine learning compilers such as TorchInductor for multiple AI accelerators including AMD Instinct GPU accelerator by leveraging hardware-specific features of the AMD CDNA\u2122 GPU architecture. This makes it easy for developers and users to switch seamlessly from any HW to AMD Instinct GPU accelerators and get great out of the box performance. \nIn addition, compilers like Triton can also enable developers to use high-level programming languages, such as Python, to write machine learning code that can be efficiently compiled and executed on specialized hardware. This can help greatly improve the productivity of machine learning developers, as they can focus on the algorithmic aspects of their models and rely on the compiler to generate efficient code.", "source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"} {"text": "By design, PyTorch 2.0 is backward compatible to earlier PyTorch releases. This holds true for the ROCm build of PyTorch 2.0 as well. Developers using PyTorch with AMD GPUs can migrate to PyTorch 2.0 with the confidence that their existing code will continue to work without any required changes, so there is no penalty to access the improvements that come with this release. On the other hand, using PyTorch 2.0 and TorchInductor can result in significant performance improvement over the default eager-mode as shown below.", "source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"} {"text": "The initial results using AMD Instinct MI250 GPUs already shows strong performance improvement with minimal optimization on TorchInductor compared to the default eager-mode. We see an average performance increase of up to 1.54X on 44 out of the 45 models on HuggingFace benchmarks suite with CamemBert, DistillGPT2 and T5Small being a few of the standout models with up to 1.5X or more performance improvement over eager-mode. We are looking forward to continued engagement with members of the PyTorch team at Meta to enable further optimization on ROCm software stack and the additional performance improvement for future PyTorch releases. 
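For reference, opting into TorchInductor from user code is a one-line change through torch.compile. The sketch below uses a placeholder torchvision model and random input; the call is the same on ROCm and CUDA builds of PyTorch 2.0, since ROCm exposes AMD GPUs through the usual "cuda" device:

```python
import torch
import torchvision

model = torchvision.models.resnet50().eval().to("cuda")  # "cuda" maps to the AMD GPU on ROCm builds
compiled_model = torch.compile(model)  # TorchInductor is the default backend

x = torch.rand(8, 3, 224, 224, device="cuda")
with torch.no_grad():
    y = compiled_model(x)  # first call triggers compilation; later calls reuse the compiled kernels
```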
\n \nImage 1: AMD MI250 GPU performance improvement for TorchInductor vs eager-mode using HuggingFace MI200-89.", "source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"} {"text": "PyTorch 2.0 follows the same set of install options as before to build and install for supporting AMD GPUs. These include an installable Python package hosted at pytorch.org, AMD\u2019s public PyTorch docker image, and of course the option to build from source using the upstream PyTorch repository. As with PyTorch builds for other platforms, the specific command line to be run for pip-based install is provided by the configurator at https://pytorch.org/get-started/locally/.\nThe GPUs supported by the ROCm software platform which forms the basis for PyTorch support on AMD GPUs are documented at https://docs.amd.com/bundle/Hardware_and_Software_Reference_Guide/page/Hardware_and_Software_Support.html\nConclusion", "source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"} {"text": "Conclusion\nPyTorch 2.0 represents a major step in continuing to broaden support for ML developers by increasing performance while maintaining a simple, Pythonic interface. This performance uplift is made possible in large part by the new TorchInductor infrastructure, which in turn harnesses the Triton ML programming language and just-in-time compiler. AMD\u2019s support for these technologies allows users to realize the full promise of the new PyTorch architecture. Our GPU support in PyTorch 2.0 is just one manifestation of a larger vision around AI and machine learning. AI/ML plays an important role in multiple AMD product lines, including Instinct and Radeon GPUs, Alveo\u2122 data center accelerators, and both Ryzen\u2122 and EPYC processors. These hardware and software initiatives are all part of AMD\u2019s Pervasive AI vision, and we look forward to addressing the many new challenges and opportunities of this dynamic space.", "source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"} {"text": "MI200-89 \u2013 PyTorch Inductor mode HuggingFace Transformers training speedup, running the standard PyTorch 2.0 test suite, over PyTorch eager-mode comparison based on AMD internal testing on a single GCD as of 3/10/2023 using a 2P AMD EPYC\u2122 7763 production server with 4x AMD Instinct\u2122 MI250 (128GB HBM2e) 560W GPUs with Infinity Fabric\u2122 technology; host ROCm\u2122 5.3, guest ROCm\u2122 5.4.4, PyTorch 2.0.0, Triton 2.0. Server manufacturers may vary configurations, yielding different results. Performance may vary based on factors including use of latest drivers and optimizations. \n\u00a9 2023 Advanced Micro Devices, Inc. All rights reserved. AMD, the AMD Arrow logo, AMD CDNA, AMD Instinct, EPYC, Radeon, ROCm, Ryzen, and combinations thereof are trademarks of Advanced Micro Devices, Inc. Other product names used in this publication are for identification purposes only and may be trademarks of their respective owners.", "source": "https://pytorch.org/blog/experience-power-pytorch-2.0/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Announcing PyTorch Developer Day 2021'\nauthor: Team PyTorch\nfeatured-img: 'assets/images/ptdevday21.gif'\n\nWe are excited to announce PyTorch Developer Day (#PTD2), taking place virtually from December 1 & 2, 2021. Developer Day is designed for developers and users to discuss core technical developments, ideas, and roadmaps. 
\n\n\n\nEvent Details\nTechnical Talks Live Stream - December 1, 2021\nJoin us for technical talks on a variety of topics, including updates to the core framework, new tools and libraries to support development across a variety of domains, responsible AI and industry use cases. All talks will take place on December 1 and will be live streamed on PyTorch channels.", "source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"} {"text": "Stay up to date by following us on our social channels: Twitter, Facebook, or LinkedIn.\nPoster Exhibition & Networking - December 2, 2021\nOn the second day, we\u2019ll be hosting an online poster exhibition on Gather.Town. There will be opportunities to meet the authors and learn more about their PyTorch projects as well as network with the community. This poster and networking event is limited to people composed of PyTorch maintainers and contributors, long-time stakeholders and experts in areas relevant to PyTorch\u2019s future. Conversations from the networking event will strongly shape the future of PyTorch. As such, invitations are required to attend the networking event. \nApply for an invitation to the networking event by clicking here.\nCall for Content Now Open", "source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"} {"text": "Call for Content Now Open\nSubmit your poster abstracts today! Please send us the title and brief summary of your project, tools and libraries that could benefit PyTorch researchers in academia and industry, application developers, and ML engineers for consideration. The focus must be on academic papers, machine learning research, or open-source projects related to PyTorch development, Responsible AI or Mobile. Please no sales pitches. Deadline for submission is September 24, 2021. \nYou can submit your poster abstract during your application & registration process here.\nVisit the event website for more information and we look forward to having you at PyTorch Developer Day. For any questions about the event, contact pytorch@fbreg.com.", "source": "https://pytorch.org/blog/pytorch-developer-day-2021/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Efficient PyTorch: Tensor Memory Format Matters'\nauthor: 'Dhruv Matani, Suraj Subramanian'\nfeatured-img: ''\n\nEnsuring the right memory format for your inputs can significantly impact the running time of your PyTorch vision models. When in doubt, choose a Channels Last memory format.\nWhen dealing with vision models in PyTorch that accept multimedia (for example image Tensorts) as input, the Tensor\u2019s memory format can significantly impact the inference execution speed of your model on mobile platforms when using the CPU backend along with XNNPACK. This holds true for training and inference on server platforms as well, but latency is particularly critical for mobile devices and users.\n\nOutline of this article", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "\nOutline of this article\n\nDeep Dive into matrix storage/memory representation in C++. 
Introduction to Row and Column major order.\nImpact of looping over a matrix in the same or different order as the storage representation, along with an example.\nIntroduction to Cachegrind; a tool to inspect the cache friendliness of your code.\nMemory formats supported by PyTorch Operators.\nBest practices example to ensure efficient model execution with XNNPACK optimizations\n\nMatrix Storage Representation in C++\nImages are fed into PyTorch ML models as multi-dimensional Tensors. These Tensors have specific memory formats. To understand this concept better, let\u2019s take a look at how a 2-d matrix may be stored in memory.\nBroadly speaking, there are 2 main ways of efficiently storing multi-dimensional data in memory.", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "\nRow Major Order: In this format, the matrix is stored in row order, with each row stored before the next row in memory. I.e. row N comes before row N+1.\nColumn Major Order: In this format, the matrix is stored in column-order, with each column stored before the next column in memory. I.e. column N comes before column N+1.\n\nYou can see the differences graphically below.\n\n\n\nC++ stores multi-dimensional data in row-major format.\n\nEfficiently accessing elements of a 2d matrix\nSimilar to the storage format, there are 2 ways to access data in a 2d matrix.\n\nLoop Over Rows first: All elements of a row are processed before any element of the next row.\nLoop Over Columns first: All elements of a column are processed before any element of the next column.\n", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "For maximum efficiency, one should always access data in the same format in which it is stored. I.e. 
if the data is stored in row-major order, then one should try to access it in that order.\nThe code below (main.cpp) shows 2 ways of accessing all the elements of a 2d 4000x4000 matrix.\n```cpp\n#include <chrono>\n#include <cstdlib>\n#include <iostream>\n// loop1 accesses data in matrix 'a' in row major order,\n// since i is the outer loop variable, and j is the\n// inner loop variable.\nint loop1(int a[4000][4000]) {\n  int s = 0;\n  for (int i = 0; i < 4000; ++i) {\n    for (int j = 0; j < 4000; ++j) {\n      s += a[i][j];\n    }\n  }\n  return s;\n}\n// loop2 accesses data in matrix 'a' in column major order\n// since j is the outer loop variable, and i is the\n// inner loop variable.\nint loop2(int a[4000][4000]) {\n  int s = 0;\n  for (int j = 0; j < 4000; ++j) {\n    for (int i = 0; i < 4000; ++i) {\n      s += a[i][j];\n    }", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "s += a[i][j];\n    }\n  }\n  return s;\n}\nint main() {\n  static int a[4000][4000] = {0};\n  for (int i = 0; i < 100; ++i) {\n    int x = rand() % 4000;\n    int y = rand() % 4000;\n    a[x][y] = rand() % 1000;\n  }\n  auto start = std::chrono::high_resolution_clock::now();\n  auto end = start;\n  int s = 0;\n#if defined RUN_LOOP1\n  start = std::chrono::high_resolution_clock::now();\n  s = 0;\n  for (int i = 0; i < 10; ++i) {\n    s += loop1(a);\n    s = s % 100;\n  }\n  end = std::chrono::high_resolution_clock::now();\n  std::cout << \"s = \" << s << std::endl;\n  std::cout << \"Time for loop1: \"\n    << std::chrono::duration<double, std::milli>(end - start).count()\n    << \"ms\" << std::endl;\n#endif\n#if defined RUN_LOOP2\n  start = std::chrono::high_resolution_clock::now();\n  s = 0;\n  for (int i = 0; i < 10; ++i) {\n    s += loop2(a);\n    s = s % 100;\n  }\n  end = std::chrono::high_resolution_clock::now();\n  std::cout << \"s = \" << s << std::endl;\n  std::cout << \"Time for loop2: \"\n    << std::chrono::duration<double, std::milli>(end - start).count()", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "<< \"ms\" << std::endl;\n#endif\n}\n```\nLet\u2019s build and run this program and see what it prints.\n```bash\ng++ -O2 main.cpp -DRUN_LOOP1 -DRUN_LOOP2\n./a.out\n```\nPrints the following:\n```\ns = 70\nTime for loop1: 77.0687ms\ns = 70\nTime for loop2: 1219.49ms\n```\nloop1() is **15x faster** than loop2(). Why is that? 
Let\u2019s find out below!\n\n## Measure cache misses using Cachegrind\n\n[Cachegrind](https://courses.cs.washington.edu/courses/cse326/05wi/valgrind-doc/cg_main.html) is a cache profiling tool used to see how many I1 (first level instruction), D1 (first level data), and LL (last level) cache misses your program caused.\n\nLet\u2019s build our program with just loop1() and just loop2() to see how cache friendly each of these functions is.\n\n### Build and run/profile just loop1()\n\n```python\ng++ -O2 main.cpp -DRUN_LOOP1\nvalgrind --tool=cachegrind ./a.out\n\nPrints:\n```python\n==3299700==\n==3299700== I refs: 643,156,721\n==3299700== I1 misses: 2,077\n==3299700== LLi misses: 2,021", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "==3299700== LLi misses: 2,021\n==3299700== I1 miss rate: 0.00%\n==3299700== LLi miss rate: 0.00%\n==3299700==\n==3299700== D refs: 160,952,192 (160,695,444 rd + 256,748 wr)\n==3299700== D1 misses: 10,021,300 ( 10,018,723 rd + 2,577 wr)\n==3299700== LLd misses: 10,010,916 ( 10,009,147 rd + 1,769 wr)\n==3299700== D1 miss rate: 6.2% ( 6.2% + 1.0% )\n==3299700== LLd miss rate: 6.2% ( 6.2% + 0.7% )\n==3299700==\n==3299700== LL refs: 10,023,377 ( 10,020,800 rd + 2,577 wr)\n==3299700== LL misses: 10,012,937 ( 10,011,168 rd + 1,769 wr)\n==3299700== LL miss rate: 1.2% ( 1.2% + 0.7% )\n\n### Build and run/profile just loop2()\n\n\n```python\ng++ -O2 main.cpp -DRUN_LOOP2\nvalgrind --tool=cachegrind ./a.out\n\nPrints:\n```python\n==3300389==\n==3300389== I refs: 643,156,726\n==3300389== I1 misses: 2,075\n==3300389== LLi misses: 2,018", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "==3300389== LLi misses: 2,018\n==3300389== I1 miss rate: 0.00%\n==3300389== LLi miss rate: 0.00%\n==3300389==\n==3300389== D refs: 160,952,196 (160,695,447 rd + 256,749 wr)\n==3300389== D1 misses: 160,021,290 (160,018,713 rd + 2,577 wr)\n==3300389== LLd misses: 10,014,907 ( 10,013,138 rd + 1,769 wr)\n==3300389== D1 miss rate: 99.4% ( 99.6% + 1.0% )\n==3300389== LLd miss rate: 6.2% ( 6.2% + 0.7% )\n==3300389==\n==3300389== LL refs: 160,023,365 (160,020,788 rd + 2,577 wr)\n==3300389== LL misses: 10,016,925 ( 10,015,156 rd + 1,769 wr)\n==3300389== LL miss rate: 1.2% ( 1.2% + 0.7% )\n```\nThe main differences between the 2 runs are:\n1. D1 misses: 10M v/s 160M\n2. D1 miss rate: 6.2% v/s 99.4%\nAs you can see, loop2() causes many many more (~16x more) L1 data cache misses than loop1(). This is why loop1() is ~15x faster than loop2().", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "Memory Formats supported by PyTorch Operators\nWhile PyTorch operators expect all tensors to be in Channels First (NCHW) dimension format, PyTorch operators support 3 output memory formats.\n\nContiguous: Tensor memory is in the same order as the tensor\u2019s dimensions.\nChannelsLast: Irrespective of the dimension order, the 2d (image) tensor is laid out as an HWC or NHWC (N: batch, H: height, W: width, C: channels) tensor in memory. The dimensions could be permuted in any order.\nChannelsLast3d: For 3d tensors (video tensors), the memory is laid out in THWC (Time, Height, Width, Channels) or NTHWC (N: batch, T: time, H: height, W: width, C: channels) format. 
The dimensions could be permuted in any order.\n", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "The reason that ChannelsLast is preferred for vision models is because XNNPACK (kernel acceleration library) used by PyTorch expects all inputs to be in Channels Last format, so if the input to the model isn\u2019t channels last, then it must first be converted to channels last, which is an additional operation.\nAdditionally, most PyTorch operators preserve the input tensor\u2019s memory format, so if the input is Channels First, then the operator needs to first convert to Channels Last, then perform the operation, and then convert back to Channels First.\nWhen you combine it with the fact that accelerated operators work better with a channels last memory format, you\u2019ll notice that having the operator return back a channels-last memory format is better for subsequent operator calls or you\u2019ll end up having every operator convert to channels-last (should it be more efficient for that specific operator).\nFrom the XNNPACK home page:", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "From the XNNPACK home page:\n\n\u201cAll operators in XNNPACK support NHWC layout, but additionally allow custom stride along the Channel dimension\".\n\nPyTorch Best Practice\nThe best way to get the most performance from your PyTorch vision models is to ensure that your input tensor is in a Channels Last memory format before it is fed into the model.\nYou can get even more speedups by optimizing your model to use the XNNPACK backend (by simply calling optimize_for_mobile() on your torchscripted model). Note that XNNPACK models will run slower if the inputs are contiguous, so definitely make sure it is in Channels-Last format.\nWorking example showing speedup", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "Working example showing speedup\nRun this example on Google Colab - note that runtimes on colab CPUs might not reflect accurate performance; it is recommended to run this code on your local machine.\n```python\nimport torch\nfrom torch.utils.mobile_optimizer import optimize_for_mobile\nimport torch.backends.xnnpack\nimport time\nprint(\"XNNPACK is enabled: \", torch.backends.xnnpack.enabled, \"\\n\")\nN, C, H, W = 1, 3, 200, 200\nx = torch.rand(N, C, H, W)\nprint(\"Contiguous shape: \", x.shape)\nprint(\"Contiguous stride: \", x.stride())\nprint()\nxcl = x.to(memory_format=torch.channels_last)\nprint(\"Channels-Last shape: \", xcl.shape)\nprint(\"Channels-Last stride: \", xcl.stride())\nOutputs:\nXNNPACK is enabled: True\nContiguous shape: torch.Size([1, 3, 200, 200])\nContiguous stride: (120000, 40000, 200, 1)\nChannels-Last shape: torch.Size([1, 3, 200, 200])", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "Channels-Last stride: (120000, 1, 600, 3)\n```\nThe input shape stays the same for contiguous and channels-last formats. Internally however, the tensor's layout has changed as you can see in the strides. Now, the number of jumps required to go across channels is only 1 (instead of 40000 in the contiguous tensor).\nThis better data locality means convolution layers can access all the channels for a given pixel much faster. 
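As a minimal sketch of this best practice (the resnet50 model, batch size, and input resolution here are illustrative choices, not taken from the benchmark below), both the input tensor and the model can be moved to the channels-last memory format before inference:\n```python\nimport torch\nimport torchvision\n\n# Illustrative model; any torchvision CNN can be converted the same way.\nmodel = torchvision.models.resnet50(pretrained=False).eval()\nmodel = model.to(memory_format=torch.channels_last)  # reorder weights to a channels-last layout\n\nx = torch.rand(1, 3, 224, 224).to(memory_format=torch.channels_last)  # NHWC strides\nprint(x.is_contiguous(memory_format=torch.channels_last))  # True\n\nwith torch.inference_mode():\n    y = model(x)\n```\n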
Let's see now how the memory format affects runtime:\n```python\nfrom torchvision.models import resnet34, resnet50, resnet101\nm = resnet34(pretrained=False)\nm = resnet50(pretrained=False)\nm = resnet101(pretrained=False)\ndef get_optimized_model(mm):\n mm = mm.eval()\n scripted = torch.jit.script(mm)\n optimized = optimize_for_mobile(scripted) # explicitly call the xnnpack rewrite \n return scripted, optimized\ndef compare_contiguous_CL(mm):\n # inference on contiguous\n start = time.perf_counter()\n for i in range(20):\n mm(x)\n end = time.perf_counter()", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "mm(x)\n end = time.perf_counter()\n print(\"Contiguous: \", end-start)\n# inference on channels-last\n start = time.perf_counter()\n for i in range(20):\n mm(xcl)\n end = time.perf_counter()\n print(\"Channels-Last: \", end-start)\nwith torch.inference_mode():\n scripted, optimized = get_optimized_model(m)\nprint(\"Runtimes for torchscripted model: \")\n compare_contiguous_CL(scripted.eval())\n print()\n print(\"Runtimes for mobile-optimized model: \")\n compare_contiguous_CL(optimized.eval())\nOutputs (on an Intel Core i9 CPU):\nRuntimes for torchscripted model:\nContiguous: 1.6711160129999598\nChannels-Last: 1.6678222839999535\nRuntimes for mobile-optimized model:\nContiguous: 0.5712863490000473\nChannels-Last: 0.46113000699995155\n```\nConclusion\nThe Memory Layout of an input tensor can significantly impact a model\u2019s running time. For Vision Models, prefer a Channels Last memory format to get the most out of your PyTorch models.\nReferences", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "References\n\nRow/Column Major matrix storage order\nLoop order impact on performance\nCachegrind: a cache-miss profiler\nNHWC format explained\nWhy does PyTorch prefer NCHW?\nXNNPACK\nPyTorch memory format tutorial\nSupported operators\n", "source": "https://pytorch.org/blog/tensor-memory-format-matters/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch framework for cryptographically secure random number generation, torchcsprng, now available'\nauthor: Team PyTorch\n\nOne of the key components of modern cryptography is the pseudorandom number generator. Katz and Lindell stated, \"The use of badly designed or inappropriate random number generators can often leave a good cryptosystem vulnerable to attack. Particular care must be taken to use a random number generator that is designed for cryptographic use, rather than a 'general-purpose' random number generator which may be fine for some applications but not ones that are required to be cryptographically secure.\"[1] Additionally, most pseudorandom number generators scale poorly to massively parallel high-performance computation because of their sequential nature. Others don\u2019t satisfy cryptographically secure properties.", "source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"} {"text": "torchcsprng is a PyTorch C++/CUDA extension that provides cryptographically secure pseudorandom number generators for PyTorch.\ntorchcsprng overview\nHistorically, PyTorch had only two pseudorandom number generator implementations: Mersenne Twister for CPU and Nvidia\u2019s cuRAND Philox for CUDA. Despite good performance properties, neither of them are suitable for cryptographic applications. 
Over the course of the past several months, the PyTorch team developed the torchcsprng extension API. Based on PyTorch dispatch mechanism and operator registration, it allows the users to extend c10::GeneratorImpl and implement their own custom pseudorandom number generator.", "source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"} {"text": "torchcsprng generates a random 128-bit key on the CPU using one of its generators and then runs AES128 in CTR mode either on CPU or GPU using CUDA. This then generates a random 128-bit state and applies a transformation function to map it to target tensor values. This approach is based on Parallel Random Numbers: As Easy as 1, 2, 3 (John K. Salmon, Mark A. Moraes, Ron O. Dror, and David E. Shaw, D. E. Shaw Research). It makes torchcsprng both crypto-secure and parallel on both CPU and CUDA.\n\n\n\nSince torchcsprng is a PyTorch extension, it is available on the platforms where PyTorch is available (support for Windows-CUDA will be available in the coming months). \nUsing torchcsprng", "source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"} {"text": "Using torchcsprng\nThe torchcsprng API is very simple to use and is fully compatible with the PyTorch random infrastructure:\nStep 1: Install via binary distribution\nAnaconda:\npython\nconda install torchcsprng -c pytorch\npip:\npython\npip install torchcsprng\nStep 2: import packages as usual but add csprng\npython\nimport torch\nimport torchcsprng as csprng\nStep 3: Create a cryptographically secure pseudorandom number generator from /dev/urandom:\npython\nurandom_gen = csprng.create_random_device_generator('/dev/urandom')\nand simply use it with the existing PyTorch methods:\npython\ntorch.randn(10, device='cpu', generator=urandom_gen)\nStep 4: Test with Cuda\nOne of the advantages of torchcsprng generators is that they can be used with both CPU and CUDA tensors:\npython\ntorch.randn(10, device='cuda', generator=urandom_gen)\nAnother advantage of torchcsprng generators is that they are parallel on CPU unlike the default PyTorch CPU generator.", "source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"} {"text": "Getting Started\nThe easiest way to get started with torchcsprng is by visiting the GitHub page where you can find installation and build instructions, and more how-to examples. \nCheers,\nThe PyTorch Team\n[1] Introduction to Modern Cryptography: Principles and Protocols (Chapman & Hall/CRC Cryptography and Network Security Series) by Jonathan Katz and Yehuda Lindell", "source": "https://pytorch.org/blog/torchcsprng-release-blog/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Optimizing Production PyTorch Models\u2019 Performance with Graph Transformations\"\nauthor: Jade Nie, CK Luk, Xiaodong Wang, Jackie (Jiaqi) Xu\nfeatured-img: \"assets/images/blog1-3b.png\"\n\n1. Introduction\nPyTorch supports two execution modes [1]: eager mode and graph mode. In eager mode, operators in a model are immediately executed as they are encountered. In contrast, in graph mode, operators are first synthesized into a graph, which will then be compiled and executed as a whole. Eager mode is easier to use, more suitable for ML researchers, and hence is the default mode of execution. 
On the other hand, graph mode typically delivers higher performance and hence is heavily used in production.", "source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"} {"text": "Specifically, graph mode enables operator fusion [2], wherein one operator is merged with another to reduce/localize memory reads as well as total kernel launch overhead. Fusion can be horizontal\u2014taking a single operation (e.g., BatchNorm) that is independently applied to many operands and merging those operands into an array; and vertical\u2014merging a kernel with another kernel that consumes the output of the first kernel (e.g., Convolution followed by ReLU).\nTorch.FX [3, 4] (abbreviated as FX) is a publicly available toolkit as part of the PyTorch package that supports graph mode execution. In particular, it (1) captures the graph from a PyTorch program and (2) allows developers to write transformations on the captured graph. It is used inside Meta to optimize the training throughput of production models. By introducing a number of FX-based optimizations developed at Meta, we demonstrate the approach of using graph transformation to optimize PyTorch\u2019s performance for production.\n2. Background", "source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"} {"text": "2. Background\nEmbedding tables are ubiquitous in recommendation systems. Section 3 will discuss three FX transformations that optimize accesses to embedding tables. In this section, we provide some background on FX (Section 2.1) and embedding tables (Section 2.2).\n2.1 FX\nFigure 1 is a simple example adopted from [3] which illustrates using FX to transform a PyTorch program. It contains three steps: (1) capturing the graph from a program, (2) modifying the graph (in this example, all uses of RELU are replaced by GELU), and (3) generating a new program from the modified graph.\n\n\n\nFigure 1: A FX example which replaces all uses of RELU by GELU in a PyTorch module.\nThe FX API [4] provides many more functionalities for inspecting and transforming PyTorch program graphs.\n2.2 Embedding Tables\n\n\n", "source": "https://pytorch.org/blog/optimizing-production-pytorch-performance-with-graph-transformations/", "category": "pytorch blogs"} {"text": "\nFigure 2: Illustration of an embedding table for a sparse feature with batch size = 1\nIn a recommendation system, sparse features (e.g., User ID, Story ID) are represented by embedding tables. An embedding table E is an HxD matrix, where H is the hash size, D is the embedding dimension. Each row of E is a vector of floats. Feature hashing [5] is used to map a sparse feature to a list of indices to E, say [S1,S2, \u2026, Sk], where 0<=Sibt', x1, x2)\nprint(torch_xla._XLAC._get_xla_tensors_text([y1]))\n\nYou can execute this colab notebook to examine the resulting graph for y1. Notice that no computation has been performed yet.\ny1 = y1 + x2\nprint(torch_xla._XLAC._get_xla_tensors_text([y1]))\n", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "\nThe operations will continue until PyTorch/XLA encounters a barrier. 
This barrier can either be a [mark step()](https://github.com/pytorch/xla/blob/ff079bb48744e5aa6696201ccf34057f15fc7cac/torch_xla/core/xla_model.py#L751) api call or any other event which forces the execution of the graph recorded so far.\n\n```python\nxm.mark_step()\nprint(torch_xla._XLAC._get_xla_tensors_text([y1]))\n\nOnce the mark_step() is called, the graph is compiled and then executed on TPU, i.e. the tensors have been materialized. Therefore, the graph is now reduced to a single line y1 tensor which holds the result of the computation.\nCompile Once, Execute Often", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "Compile Once, Execute Often\nXLA compilation passes offer optimizations (e.g. op-fusion, which reduces HBM pressure by using scratch-pad memory for multiple ops, ref ) and leverages lower level XLA infrastructure to optimally use the underlying hardware. However, there is one caveat, compilation passes are expensive, i.e. can add to the training step time. Therefore, this approach scales well if and only if we can compile once and execute often (compilation cache helps, such that the same graph is not compiled more than once).\nIn the following example, we create a small computation graph and time the execution:\ny1 = torch.rand((3, 8)).to(dev)\ndef dummy_step() :\n y1 = torch.einsum('bs,st->bt', y1, x)\n xm.mark_step()\n return y1\n\n%timeit dummy_step\n\nThe slowest run took 29.74 times longer than the fastest. This could mean that an intermediate result is being cached.\n10000000 loops, best of 5: 34.2 ns per loop\n", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "10000000 loops, best of 5: 34.2 ns per loop\n```\nYou notice that the slowest step is quite longer than the fastest. This is because of the graph compilation overhead which is incurred only once for a given shape of graph, input shape, and output shape. Subsequent steps are faster because no graph compilation is necessary.\nThis also implies that we expect to see performance cliffs when the \u201ccompile once and execute often\u201d assumption breaks. Understanding when this assumption breaks is the key to understanding and optimizing the performance of a LazyTensor system. Let\u2019s examine what triggers the compilation.\nGraph Compilation and Execution and LazyTensor Barrier", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "We saw that the computation graph is compiled and executed when a LazyTensor barrier is encountered. There are three scenarios when the LazyTensor barrier is automatically or manually introduced. The first is the explicit call of mark_step() api as shown in the preceding example. mark_step() is also called implicitly at every step when you wrap your dataloader with MpDeviceLoader (highly recommended to overlap compute and data upload to TPU device). The Optimizer step method of xla_model also allows to implicitly call mark_step (when you set barrier=True).", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "The second scenario where a barrier is introduced is when PyTorch/XLA finds an op with no mapping (lowering) to equivalent XLA HLO ops. PyTorch has 2000+ operations. 
Although most of these operations are composite (i.e. can be expressed in terms of other fundamental operations), some of these operations do not have corresponding lowering in XLA.\n\n\n\nWhat happens when an op with no XLA lowering is used? PyTorch XLA stops the operation recording and cuts the graph(s) leading to the input(s) of the unlowered op. This cut graph is then compiled and dispatched for execution. The results (materialized tensor) of execution are sent back from device to host, the unlowered op is then executed on the host (cpu), and then downstream LazyTensor operations creating a new graph(s) until a barrier is encountered again.", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "The third and final scenario which results in a LazyTensor barrier is when there is a control structure/statement or another method which requires the value of a tensor. This statement would at the minimum cause the execution of the computation graph leading to the tensor (if the graph has already been seen) or cause compilation and execution of both.\nOther examples of such methods include .item(), isEqual(). In general, any operation that maps Tensor -> Scalar will cause this behavior.\nDynamic Graph\nAs illustrated in the preceding section, graph compilation cost is amortized if the same shape of the graph is executed many times. It\u2019s because the compiled graph is cached with a hash derived from the graph shape, input shape, and the output shape. If these shapes change it will trigger compilation, and too frequent compilation will result in training time degradation.\nLet\u2019s consider the following example:\n```python\ndef dummy_step(x, y, loss, acc=False):\n z = torch.einsum('bs,st->bt', y, x)", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "z = torch.einsum('bs,st->bt', y, x)\n step_loss = z.sum().view(1,)\n if acc:\n loss = torch.cat((loss, step_loss))\n else:\n loss = step_loss\n xm.mark_step()\n return loss\nimport time\ndef measure_time(acc=False):\n exec_times = []\n iter_count = 100\n x = torch.rand((512, 8)).to(dev)\n y = torch.rand((512, 512)).to(dev)\n loss = torch.zeros(1).to(dev)\n for i in range(iter_count):\n tic = time.time()\n loss = dummy_step(x, y, loss, acc=acc)\n toc = time.time()\n exec_times.append(toc - tic)\n return exec_times\ndyn = measure_time(acc=True) # acc= True Results in dynamic graph\nst = measure_time(acc=False) # Static graph, computation shape, inputs and output shapes don't change\nimport matplotlib.pyplot as plt\nplt.plot(st, label = 'static graph')\nplt.plot(dyn, label = 'dynamic graph')\nplt.legend()\nplt.title('Execution time in seconds')\n```\n\n\n", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "\nNote that static and dynamic cases have the same computation but dynamic graph compiles every time, leading to the higher overall run-time. In practice, the training step with recompilation can sometimes be an order of magnitude or slower. In the next section we discuss some of the PyTorch/XLA tools to debug training degradation.\nProfiling Training Performance with PyTorch/XLA\nPyTorch/XLA profiling consists of two major components. First is the client side profiling. 
This feature is turned on by simply setting the environment variable PT_XLA_DEBUG to 1. Client side profiling points to unlowered ops or device-to-host transfer in your source code. Client side profiling also reports if there are too frequent compilations happening during the training. You can explore some metrics and counters provided by PyTorch/XLA in conjunction with the profiler in this notebook.", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "The second component offered by PyTorch/XLA profiler is the inline trace annotation. For example:\nimport torch_xla.debug.profiler as xp\n\ndef train_imagenet():\n print('==> Preparing data..')\n img_dim = get_model_property('img_dim')\n ....\n server = xp.start_server(3294)\n def train_loop_fn(loader, epoch):\n ....\n model.train()\n for step, (data, target) in enumerate(loader):\n with xp.StepTrace('Train_Step', step_num=step):\n ....\n if FLAGS.amp:\n ....\n else:\n with xp.Trace('build_graph'):\n output = model(data)\n loss = loss_fn(output, target)\n loss.backward()\n xm.optimizer_step(optimizer)\n\nNotice the start_server api call. The port number that you have used here is the same port number you will use with the tensorboard profiler in order to view the op trace similar to:\n\n\n", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "\nOp trace along with the client-side debugging function is a powerful set of tools to debug and optimize your training performance with PyTorch/XLA. For more detailed instructions on the profiler usage, the reader is encouraged to explore blogs part-1, part-2, and part-3 of the blog series on PyTorch/XLA performance debugging.\nSummary", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "Summary\nIn this article we have reviewed the fundamentals of the LazyTensor system. We built on those fundamentals with PyTorch/XLA to understand the potential causes of training performance degradation. We discussed why \u201ccompile once and execute often\u201d helps to get the best performance on LazyTensor systems, and why training slows down when this assumption breaks.\nWe hope that PyTorch users will find these insights helpful for their novel works with LazyTensor systems.\nAcknowledgements\nA big thank you to my outstanding colleagues Jack Cao, Milad Mohammedi, Karl Weinmeister, Rajesh Thallam, Jordan Tottan (Google) and Geeta Chauhan (Meta) for their meticulous reviews and feedback. And thanks to the extended PyTorch/XLA development team from Google, Meta, and the open source community to make PyTorch possible on TPUs. 
And finally, thanks to the authors of the LazyTensor paper not only for developing LazyTensor but also for writing such an accessible paper.", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "Refrences\n[[1]] LazyTensor: combining eager execution with domain-specific compilers", "source": "https://pytorch.org/blog/understanding-lazytensor-system-performance-with-pytorch-xla-on-cloud-tpu/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Extending TorchVision\u2019s Transforms to Object Detection, Segmentation & Video tasks\"\nauthor: Philip Meier, Victor Fomin, Vasilis Vryniotis, Nicolas Hug\nfeatured-img: \"assets/images/Transforms-v2-feature-image.png\"\n\nNote: A previous version of this post was published in November 2022. We have updated this post with the most up-to-date info, in view of the upcoming 0.15 release of torchvision in March 2023, jointly with PyTorch 2.0.\nTorchVision is extending its Transforms API! Here is what\u2019s new:\n\nYou can use them not only for Image Classification but also for Object Detection, Instance & Semantic Segmentation and Video Classification.\nYou can use new functional transforms for transforming Videos, Bounding Boxes and Segmentation Masks.\n", "source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"} {"text": "The API is completely backward compatible with the previous one, and remains the same to assist the migration and adoption. We are now releasing this new API as Beta in the torchvision.transforms.v2 namespace, and we would love to get early feedback from you to improve its functionality. Please reach out to us if you have any questions or suggestions.\nLimitations of current Transforms\nThe existing Transforms API of TorchVision (aka V1) only supports single images. As a result it can only be used for classification tasks:\nfrom torchvision import transforms\ntrans = transforms.Compose([\n transforms.ColorJitter(contrast=0.5),\n transforms.RandomRotation(30),\n transforms.CenterCrop(480),\n])\nimgs = trans(imgs)\n", "source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"} {"text": "])\nimgs = trans(imgs)\n```\nThe above approach doesn\u2019t support Object Detection nor Segmentation. This limitation made any non-classification Computer Vision tasks second-class citizens as one couldn\u2019t use the Transforms API to perform the necessary augmentations. Historically this made it difficult to train high-accuracy models using TorchVision\u2019s primitives and thus our Model Zoo lagged by several points from SoTA.", "source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"} {"text": "To circumvent this limitation, TorchVision offered custom implementations in its reference scripts that show-cased how one could perform augmentations in each task. Though this practice enabled us to train high accuracy classification, object detection & segmentation models, it was a hacky approach which made those transforms impossible to import from the TorchVision binary.\nThe new Transforms API\nThe Transforms V2 API supports videos, bounding boxes, and segmentation masks meaning that it offers native support for many Computer Vision tasks. 
The new solution is a drop-in replacement:\n```python\nimport torchvision.transforms.v2 as transforms\nExactly the same interface as V1:\ntrans = transforms.Compose([", "source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"} {"text": "trans = transforms.Compose([\n transforms.ColorJitter(contrast=0.5),\n transforms.RandomRotation(30),\n transforms.CenterCrop(480),\n])\nimgs, bboxes, labels = trans(imgs, bboxes, labels)\n\nThe new Transform Classes can receive any arbitrary number of inputs without enforcing specific order or structure:\n\n```python\n# Already supported:\ntrans(imgs) # Image Classification\ntrans(videos) # Video Tasks\ntrans(imgs, bboxes, labels) # Object Detection\ntrans(imgs, bboxes, masks, labels) # Instance Segmentation\ntrans(imgs, masks) # Semantic Segmentation\ntrans({\"image\": imgs, \"box\": bboxes, \"tag\": labels}) # Arbitrary Structure\n\n# Future support:\ntrans(imgs, bboxes, labels, keypoints) # Keypoint Detection\ntrans(stereo_images, disparities, masks) # Depth Perception\ntrans(image1, image2, optical_flows, masks) # Optical Flow\ntrans(imgs_or_videos, labels) # MixUp/CutMix-style Transforms\n", "source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"} {"text": "\nThe Transform Classes make sure that they apply the same random transforms to all the inputs to ensure consistent results.\n\nThe functional API has been updated to support all necessary signal processing kernels (resizing, cropping, affine transforms, padding etc) for all inputs:\n\n```python\nfrom torchvision.transforms.v2 import functional as F\n\n\n# High-level dispatcher, accepts any supported input type, fully BC\nF.resize(inpt, size=[224, 224])\n# Image tensor kernel\nF.resize_image_tensor(img_tensor, size=[224, 224], antialias=True) \n# PIL image kernel\nF.resize_image_pil(img_pil, size=[224, 224], interpolation=BILINEAR)\n# Video kernel\nF.resize_video(video, size=[224, 224], antialias=True) \n# Mask kernel\nF.resize_mask(mask, size=[224, 224])\n# Bounding box kernel\nF.resize_bounding_box(bbox, size=[224, 224], spatial_size=[256, 256])\n", "source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"} {"text": "\nUnder the hood, the API uses Tensor subclassing to wrap the input, attach useful meta-data and dispatch to the right kernel. 
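To make the dispatch concrete, here is a minimal sketch (the tensors and sizes are made up for illustration) of the same high-level call routing to different kernels based on the wrapped type:\n```python\nimport torch\nfrom torchvision import datapoints\nfrom torchvision.transforms.v2 import functional as F\n\nimg = datapoints.Image(torch.randint(0, 256, (3, 256, 256), dtype=torch.uint8))\nmask = datapoints.Mask(torch.zeros(256, 256, dtype=torch.int64))\n\n# One call, two kernels: the image kernel is used for img, the mask kernel for mask.\nout_img = F.resize(img, size=[224, 224])\nout_mask = F.resize(mask, size=[224, 224])\nprint(type(out_img).__name__, tuple(out_img.shape))    # Image (3, 224, 224)\nprint(type(out_mask).__name__, tuple(out_mask.shape))  # Mask (224, 224)\n```\n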
For your data to be compatible with these new transforms, you can either use the provided dataset wrapper which should work with most of torchvision built-in datasets, or your can wrap your data manually into Datapoints:\n\n```python\nfrom torchvision.datasets import wrap_dataset_for_transforms_v2\nds = CocoDetection(..., transforms=v2_transforms)\nds = wrap_dataset_for_transforms_v2(ds) # data is now compatible with transforms v2!\n\n# Or wrap your data manually using the lower-level Datapoint classes:\nfrom torchvision import datapoints\n\nimgs = datapoints.Image(images)\nvids = datapoints.Video(videos)\nmasks = datapoints.Mask(target[\"masks\u201c])\nbboxes = datapoints.BoundingBox(target[\"boxes\u201c], format=\u201dXYXY\u201d, spatial_size=imgs.shape)\n", "source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"} {"text": "```\nIn addition to the new API, we now provide importable implementations for several data augmentations that are used in SoTA research such as Large Scale Jitter, AutoAugmentation methods and several new Geometric, Color and Type Conversion transforms.\nThe API continues to support both PIL and Tensor backends for Images, single or batched input and maintains JIT-scriptability on both the functional and class APIs.. The new API has been verified to achieve the same accuracy as the previous implementation.\nAn end-to-end example", "source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"} {"text": "An end-to-end example\nHere is an example of the new API using the following image. It works both with PIL images and Tensors. For more examples and tutorials, take a look at our gallery!\n```python\nfrom torchvision import io, utils\nfrom torchvision import datapoints\nfrom torchvision.transforms import v2 as T\nfrom torchvision.transforms.v2 import functional as F\nDefining and wrapping input to appropriate Tensor Subclasses\npath = \"COCO_val2014_000000418825.jpg\"\nimg = datapoints.Image(io.read_image(path))\nimg = PIL.Image.open(path)\nbboxes = datapoints.BoundingBox(\n [[2, 0, 206, 253], [396, 92, 479, 241], [328, 253, 417, 332],\n [148, 68, 256, 182], [93, 158, 170, 260], [432, 0, 438, 26],\n [422, 0, 480, 25], [419, 39, 424, 52], [448, 37, 456, 62],\n [435, 43, 437, 50], [461, 36, 469, 63], [461, 75, 469, 94],", "source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"} {"text": "[469, 36, 480, 64], [440, 37, 446, 56], [398, 233, 480, 304],\n [452, 39, 463, 63], [424, 38, 429, 50]],\n format=datapoints.BoundingBoxFormat.XYXY,\n spatial_size=F.get_spatial_size(img),\n)\nlabels = [59, 58, 50, 64, 76, 74, 74, 74, 74, 74, 74, 74, 74, 74, 50, 74, 74]\nDefining and applying Transforms V2\ntrans = T.Compose(\n [\n T.ColorJitter(contrast=0.5),\n T.RandomRotation(30),\n T.CenterCrop(480),\n ]\n)\nimg, bboxes, labels = trans(img, bboxes, labels)\nVisualizing results\nviz = utils.draw_bounding_boxes(F.to_image_tensor(img), boxes=bboxes)\nF.to_pil_image(viz).show()\n```\nDevelopment milestones and future work\nHere is where we are in development:\n\n[x] Design API\n[x] Write Kernels for transforming Videos, Bounding Boxes, Masks and Labels\n[x] Rewrite all existing Transform Classes (stable + references) on the new API:\n[x] Image Classification\n[x] Video Classification\n[x] Object Detection\n[x] 
Instance Segmentation\n[x] Semantic Segmentation\n", "source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"} {"text": "\n[x] Semantic Segmentation\n[x] Verify the accuracy of the new API for all supported Tasks and Backends\n[x] Speed Benchmarks and Performance Optimizations (in progress - planned for Dec)\n[x] Graduate from Prototype (planned for Q1)\n[ ] Add support of Depth Perception, Keypoint Detection, Optical Flow and more (future)\n[ ] Add smooth support for batch-wise transforms like MixUp and CutMix\n\nWe would love to get feedback from you to improve its functionality. Please reach out to us if you have any questions or suggestions.", "source": "https://pytorch.org/blog/extending-torchvisions-transforms-to-object-detection-segmentation-and-video-tasks/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"New Library Updates in PyTorch 1.13\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/new-library-updates-in-pytorch-1.13-2.jpg\"\n\nSummary\nWe are bringing a number of improvements to the current PyTorch libraries, alongside the PyTorch 1.13 release. These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch.\nAlong with 1.13, we are releasing updates to the PyTorch Libraries, please find them below.\nTorchAudio\n(Beta) Hybrid Demucs Model and Pipeline\nHybrid Demucs is a music source separation model that uses both spectrogram and time domain features. It has demonstrated state-of-the-art performance in the Sony\u00ae Music DeMixing Challenge. (citation: https://arxiv.org/abs/2111.03600)\nThe TorchAudio v0.13 release includes the following features", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "\nMUSDB_HQ Dataset, which is used in Hybrid Demucs training (docs)\nHybrid Demucs model architecture (docs)\nThree factory functions suitable for different sample rate ranges\nPre-trained pipelines (docs)\nSDR Results of pre-trained pipelines on MUSDB_HQ test set\nTutorial that steps through music source separation using the pretrained pipeline (docs)\n\n\n\n\nPipeline\nAll\nDrums\nBass\nOther\nVocals\n\n\n\n\nHDEMUCS_HIGH_MUSDB*\n6.42\n7.76\n6.51\n4.47\n6.93\n\n\n", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "| HDEMUCS_HIGH_MUSDB_PLUS** | 9.37 | 11.38 | 10.53 | 7.24 | 8.32 |\n* Trained on the training data of MUSDB-HQ dataset.** Trained on both training and test sets of MUSDB-HQ and 150 extra songs from an internal database that were specifically produced for Meta.\nfrom torchaudio.pipelines import HDEMUCS_HIGH_MUSDB_PLUS\n\nbundle = HDEMUCS_HIGH_MUSDB_PLUS\nmodel = bundle.get_model()\nsources_list = model.sources\n\nmixture, samplerate = torchaudio.load(\"song.wav\")\nsources = model(mixture)\naudios = dict(zip(sources_list, sources)\n\nSpecial thanks to Alexandre Defossez for the guidance.\n(Beta) Datasets and Metadata Mode for SUPERB Benchmark", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "TorchAudio adds support for various audio-related datasets used in downstream tasks for benchmarking self-supervised learning models. 
With the addition of several new datasets, there is now support for the downstream tasks in version 1 of the SUPERB benchmark, which can be found in the s3prl repository.\nFor these datasets, we also add metadata support through a get_metadata function, enabling faster dataset iteration or preprocessing without the need to load waveforms. The function returns the same features as __getitem__, except it returns the relative waveform path rather than the loaded waveform.\nDatasets with metadata functionality\n\nLIBRISPEECH (docs)\nLibriMix (docs)\n", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "\nQUESST14 (docs)\nSPEECHCOMMANDS (docs)\n(new) FluentSpeechCommands (docs)\n(new) Snips (docs)\n(new) IEMOCAP (docs)\n(new) VoxCeleb1 (Identification, Verification)\n", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "(Beta) Custom Language Model support in CTC Beam Search Decoding\nTorchAudio released a CTC beam search decoder in release 0.12, with KenLM language model support. This release, there is added functionality for creating custom Python language models that are compatible with the decoder, using the torchaudio.models.decoder.CTCDecoderLM wrapper.\nFor more information on using a custom language model, please refer to the documentation and tutorial.\n(Beta) StreamWriter\ntorchaudio.io.StreamWriter is a class for encoding media including audio and video. This can handle a wide variety of codecs, chunk-by-chunk encoding and GPU encoding.\n```python\nwriter = StreamWriter(\"example.mp4\")\nwriter.add_audio_stream(\n sample_rate=16_000,\n num_channels=2,\n)\nwriter.add_video_stream(", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "num_channels=2,\n)\nwriter.add_video_stream(\n frame_rate=30,\n height=96,\n width=128,\n format=\"rgb24\",\n)\nwith writer.open():\n writer.write_audio_chunk(0, audio)\n writer.write_video_chunk(1, video)\n```\nFor more information, refer to the documentation and the following tutorials\n- StreamWriter Basic Usage\n- StreamWriter Advanced Usage\n- Hardware-Accelerated Video Decoding and Encoding\nTorchData\nFor a complete list of changes and new features, please visit our repository\u2019s 0.5.0 release note.\n(Prototype) DataLoader2", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "(Prototype) DataLoader2\nDataLoader2 was introduced in the last release to execute DataPipe graph, with support for dynamic sharding for multi-process/distributed data loading, multiple backend ReadingServices, and DataPipe graph in-place modification (e.g. shuffle control).\nIn this release, we further consolidated the API for DataLoader2 and a detailed documentation is now available here. We continue to welcome early adopters and feedback, as well as potential contributors. If you are interested in trying it out, we encourage you to install the nightly version of TorchData.\n(Beta) Data Loading from Cloud Service Providers\nWe extended our support to load data from additional cloud storage providers via DataPipes, now covering AWS, Google Cloud Storage, and Azure. A tutorial is also available. 
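As a rough sketch of what this looks like with the fsspec-backed DataPipes (the bucket and prefix below are hypothetical, and an fsspec S3 backend such as s3fs plus credentials are assumed to be installed and configured):\n```python\nfrom torchdata.datapipes.iter import IterableWrapper\n\n# Hypothetical bucket/prefix, used purely for illustration.\ndp = IterableWrapper([\"s3://my-bucket/train-shards/\"])\ndp = dp.list_files_by_fsspec()            # enumerate objects under the prefix\ndp = dp.open_files_by_fsspec(mode=\"rb\")   # yields (path, file-like stream) pairs\n\nfor path, stream in dp:\n    payload = stream.read()\n    # ... decode / parse the sample here\n```\n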
We are open to feedback and feature requests.", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "We also performed a simple benchmark, comparing the performance of data loading from AWS S3 and attached volume on an AWS EC2 instance. The results are visible here.\ntorch::deploy (Beta)\ntorch::deploy is now in Beta! torch::deploy is a C++ library for Linux based operating systems that allows you to run multiple Python interpreters in a single process. You can run your existing eager PyTorch models without any changes for production inference use cases. Highlights include: \n\nExisting models work out of the box\u2013no need to modify your python code to support tracing.\nFull support for your existing Python environment including C extensions.\nNo need to cross process boundaries to load balance in multi-GPU serving environments.\nModel weight can be shared between multiple Python interpreters.\nA vastly improved installation and setup process.\n\n```Python\ntorch::deploy::InterpreterManager manager(4);", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "torch::deploy::InterpreterManager manager(4);\n// access one of the 4 interpreters\nauto I = manager.acquireOne();\n// run infer from your_model.py\nI.global(\"your_model\", \"infer\")({at::randn({10, 240, 320})});\n```\nLearn more here.\n(Beta) CUDA/ROCm/CPU Backends\ntorch::deploy now links against standard PyTorch Python distributions so all accelerators that PyTorch core supports such as CUDA and AMD/HIP work out of the box.\n\nCan install any device variant of PyTorch via pip/conda like normal.\nhttps://pytorch.org/get-started/locally/\n\n(Prototype) aarch64/arm64 support\ntorch::deploy now has basic support for aarch64 Linux systems.\n\nWe're looking to gather feedback on it and learn more about arm use cases for eager PyTorch models.\nLearn more / share your use case at https://github.com/pytorch/multipy/issues/64\n\nTorchEval", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "TorchEval\n(Prototype) Introducing Native Metrics Support for PyTorch\nTorchEval is a library built for users who want highly performant implementations of common metrics to evaluate machine learning models. It also provides an easy to use interface for building custom metrics with the same toolkit. Building your metrics with TorchEval makes running distributed training loops with torch.distributed a breeze.\nLearn more with our docs, see our examples, or check out our GitHub repo.\nTorchMultimodal Release (Beta)", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "TorchMultimodal Release (Beta)\nPlease watch for upcoming blogs in early November that will introduce TorchMultimodal, a PyTorch domain library for training SoTA multi-task multimodal models at scale, in more details; in the meantime, play around with the library and models through our tutorial.\nTorchRec\n(Prototype) Simplified Optimizer Fusion APIs", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "(Prototype) Simplified Optimizer Fusion APIs\nWe\u2019ve provided a simplified and more intuitive API for setting fused optimizer settings via apply_optimizer_in_backward. 
This new approach enables the ability to specify optimizer settings on a per-parameter basis and sharded modules will configure FBGEMM\u2019s TableBatchedEmbedding modules accordingly. Additionally, this now let's TorchRec\u2019s planner account for optimizer memory usage. This should alleviate reports of sharding jobs OOMing after using Adam using a plan generated from planner.\n(Prototype) Simplified Sharding APIs", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "(Prototype) Simplified Sharding APIs\nWe\u2019re introducing the shard API, which now allows you to shard only the embedding modules within a model, and provides an alternative to the current main entry point - DistributedModelParallel. This lets you have a finer grained control over the rest of the model, which can be useful for customized parallelization logic, and inference use cases (which may not require any parallelization on the dense layers). We\u2019re also introducing construct_module_sharding_plan, providing a simpler interface to the TorchRec sharder.\n(Beta) Quantized Comms", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "(Beta) Quantized Comms\nApplying quantization or mixed precision to tensors in a collective call during model parallel training greatly improves training efficiency, with little to no effect on model quality. TorchRec now integrates with the quantized comms library provided by FBGEMM GPU and provides an interface to construct encoders and decoders (codecs) that surround the all_to_all, and reduce_scatter collective calls in the output_dist of a sharded module. We also allow you to construct your own codecs to apply to your sharded module. The codces provided by FBGEMM allow FP16, BF16, FP8, and INT8 compressions, and you may use different quantizations for the forward pass and backward pass.\nTorchSnapshot (Beta)", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "TorchSnapshot (Beta)\nAlong with PyTorch 1.13, we are releasing the beta version of TorchSnapshot, which is a performant, memory-efficient checkpointing library for PyTorch applications, designed with large, complex distributed workloads in mind. Highlights include:\n\nPerformance: TorchSnapshot provides a fast checkpointing implementation employing various optimizations, including zero-copy serialization for most tensor types, overlapped device-to-host copy and storage I/O, parallelized storage I/O\nMemory Use: TorchSnapshot's memory usage adapts to the host's available resources, greatly reducing the chance of out-of-memory issues when saving and loading checkpoints\nUsability: Simple APIs that are consistent between distributed and non-distributed workloads\n\nLearn more with our tutorial.\nTorchVision", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "TorchVision\nWe are happy to introduce torchvision v0.14 (release note). This version introduces a new model registration API to help users retrieving and listing models and weights. It also includes new image and video classification models such as MViT, S3D, Swin Transformer V2, and MaxViT. 
Last but not least, we also have new primitives and augmentation such as PolynomicalLR scheduler and SimpleCopyPaste.\n(Beta) Model Registration API", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "(Beta) Model Registration API\nFollowing up on the multi-weight support API that was released on the previous version, we have added a new model registration API to help users retrieve models and weights. There are now 4 new methods under the torchvision.models module: get_model, get_model_weights, get_weight, and list_models. Here are examples of how we can use them:\n```Python\nimport torchvision\nfrom torchvision.models import get_model, get_model_weights, list_models\nmax_params = 5000000\ntiny_models = []\nfor model_name in list_models(module=torchvision.models):\n weights_enum = get_model_weights(model_name)\n if len([w for w in weights_enum if w.meta[\"num_params\"] <= max_params]) > 0:\n tiny_models.append(model_name)\nprint(tiny_models)\n['mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mobilenet_v2', ...]", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "model = get_model(tiny_models[0], weights=\"DEFAULT\")\nprint(sum(x.numel() for x in model.state_dict().values()))\n2239188\n\n#### (Beta) New Video Classification Models\n\nWe added two new video classification models, MViT and S3D. MViT is a state of the art video classification transformer model which has 80.757% accuracy on the Kinetics400 dataset, while S3D is a relatively small model with good accuracy for its size. These models can be used as follows:\n\n```Python\nimport torch\nfrom torchvision.models.video import *\n\nvideo = torch.rand(3, 32, 800, 600)\nmodel = mvit_v2_s(weights=\"DEFAULT\")\n# model = s3d(weights=\"DEFAULT\")\nmodel.eval()\nprediction = model(images)\n\nHere is the table showing the accuracy of the new video classification models tested in the Kinetics400 dataset.\n\n\n\nModel\nAcc@1\nAcc@5\n\n\n\n\nmvit_v1_b\n81.474\n95.776\n\n\n", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "| mvit_v2_s | 83.196 | 96.36 |\n| s3d | 83.582 | 96.64 |\nWe would like to thank Haoqi Fan, Yanghao Li, Christoph Feichtenhofer and Wan-Yen Lo for their work on PyTorchVideo and their support during the development of the MViT model. We would like to thank Sophia Zhi for her contribution implementing the S3D model in torchvision.\n(Stable) New Architecture and Model Variants\nFor Classification Models, we\u2019ve added the Swin Transformer V2 architecture along with pre-trained weights for its tiny/small/base variants. In addition, we have added support for the MaxViT transformer. 
Here is an example on how to use the models:\nimport torch\nfrom torchvision.models import *\n\nimage = torch.rand(1, 3, 224, 224)\nmodel = swin_v2_t(weights=\"DEFAULT\").eval()\n# model = maxvit_t(weights=\"DEFAULT\").eval()\nprediction = model(image)\n", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "prediction = model(image)\n```\nHere is the table showing the accuracy of the models tested on ImageNet1K dataset.\n\n\n\nModel\nAcc@1\nAcc@1 change over V1\nAcc@5\nAcc@5 change over V1\n\n\n\n\nswin_v2_t\n82.072\n+ 0.598\n96.132\n+ 0.356\n\n\nswin_v2_s\n83.712\n+ 0.516\n96.816\n+ 0.456\n\n\nswin_v2_b\n84.112\n+ 0.530\n96.864\n+ 0.224\n\n\nmaxvit_t\n83.700\n-\n96.722\n-\n\n\n\nWe would like to thank Ren Pang and Teodor Poncu for contributing the 2 models to torchvision.\n(Stable) New Primitives & Augmentations", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "(Stable) New Primitives & Augmentations\nIn this release we\u2019ve added the SimpleCopyPaste augmentation in our reference scripts and we up-streamed the PolynomialLR scheduler to PyTorch Core. We would like to thank Lezwon Castelino and Federico Pozzi for their contributions. We are continuing our efforts to modernize TorchVision by adding more SoTA primitives, Augmentations and architectures with the help of our community. If you are interested in contributing, have a look at the following issue.\nTorch-TensorRT\n(Prototype) TensorRT with FX2TRT frontend\nTorch-TensorRT is the PyTorch integration for TensorRT, providing high performance inference on NVIDIA GPUs. Torch-TRT allows for optimizing models directly in PyTorch for deployment providing up to 6x performance improvement.", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "Torch-TRT is an AoT compiler which ingests an nn.Module or TorchScript module, optimizes compatible subgraphs in TensorRT & leaves the rest to run in PyTorch. This gives users the performance of TensorRT, but the usability and familiarity of Torch.\nTorch-TensorRT is part of the PyTorch ecosystem, and was released as v1.0 in November \u201821. There are currently two distinct front-ends: Torchscript & FX. Each provides the same value proposition and underlying operation with the primary difference being the input & output formats (TS vs FX / Python).\nThe Torchscript front-end was included in v1.0 and should be considered stable. The FX front-end is first released in v1.2 and should be considered a Beta.\nRelevant Links:\n\nGithub\nDocumentation\nGeneric (TS) getting started guide\n", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "\nFX getting started guide\n\n(Stable) Introducing Torch-TensorRT\nTorch-TensorRT is an integration for PyTorch that leverages inference optimizations of TensorRT on NVIDIA GPUs. It takes advantage of TensorRT optimizations, such as FP16 and INT8 reduced precision, graph optimization, operation fusion, etc. while offering a fallback to native PyTorch when TensorRT does not support the model subgraphs. Currently, there are two frontend paths existing in the library that help to convert a PyTorch model to tensorRT engine. One path is through Torch Script (TS) and the other is through FX frontend. 
That being said, the models are traced by either TS or FX into their IR graph and then converted to TensorRT from it.\nLearn more with our tutorial.\nTorchX", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "TorchX\nTorchX 0.3 updates include a new list API, experiment tracking, elastic training and improved scheduler support. There\u2019s also a new Multi-Objective NAS tutorial using TorchX + Ax.\n(Prototype) List\nThe newly added list command and API allows you to list recently launched jobs and their statuses for a given scheduler directly from within TorchX.\n\nThis removes the need for using secondary tools to list the jobs.\nFull programmatic access to recent jobs for integration with custom tools.\n\n$ torchx list -s kubernetes\nAPP HANDLE APP STATUS\n----------------------------------------------- -----------------\nkubernetes://torchx/default:train-f2nx4459p5crr SUCCEEDED\n\nLearn more with our documentation.\n(Prototype) Tracker", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "(Prototype) Tracker\nTorchX Tracker is a new prototype library that provides a flexible and customizable experiment and artifact tracking interface. This allows you to track inputs and outputs for jobs across multiple steps to make it easier to use TorchX with pipelines and other external systems.\nfrom torchx import tracker\n\napp_run = tracker.app_run_from_env()\napp_run.add_metadata(lr=lr, gamma=gamma) # hyper parameters\napp_run.add_artifact(\"model\", \"storage://path/mnist_cnn.pt\") # logs / checkpoints\napp_run.add_source(parent_run_id, \"model\") # lineage\n\nExample:\n\nhttps://github.com/pytorch/torchx/tree/main/torchx/examples/apps/tracker\nhttps://pytorch.org/torchx/main/tracker.html\n\n(Prototype) Elastic Training and Autoscaling", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "(Prototype) Elastic Training and Autoscaling\nElasticity on Ray and Kubernetes \u2013 automatic scale up of distributed training jobs when using a supported scheduler. Learn more with our documentation.\n(Prototype) Scheduler Improvements: IBM\u00ae Spectrum LSF\nAdded prototype support for the IBM Spectrum LSF scheduler.\n(Beta) AWS Batch Scheduler\nThe AWS Batch scheduler integration is now in beta.\n\nlog fetching and listing jobs is now supported.\nAdded configs for job priorities and queue policies\nEasily access job UI via ui_url\nhttps://pytorch.org/torchx/main/schedulers/aws_batch.html\n\n(Prototype) AnyPrecision Optimizer\nDrop in replacement for AdamW optimizer that reduces GPU memory, enables two main features:\n\nAbility to successfully train the entire model pipeline in full BFloat16.\n", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "Kahan summation ensures precision. This can improve training throughput, especially on huge models, by reduced memory and increased computation speed.\n- Ability to change the variance state to BFloat16. 
This can reduce overall memory required for model training with additional speed improvements.\nFind more information here.", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-1.13/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"PyTorch 1.11, TorchData, and functorch are now available\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/pytorch-logo.jpg\"\n\nWe are excited to announce the release of PyTorch 1.11 (release notes). This release is composed of over 3,300 commits since 1.10, made by 434 contributors. Along with 1.11, we are releasing beta versions of TorchData and functorch.\nSummary:\n\nTorchData is a new library for common modular data loading primitives for easily constructing flexible and performant data pipelines. View it on GitHub.\nfunctorch, a library that adds composable function transforms to PyTorch, is now available in beta. View it on GitHub.\nDistributed Data Parallel (DDP) static graph optimizations available in stable.\n\nIntroducing TorchData", "source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"} {"text": "Introducing TorchData\nWe are delighted to present the Beta release of TorchData. This is a library of common modular data loading primitives for easily constructing flexible and performant data pipelines. Based on community feedback, we have found that the existing DataLoader bundled too many features together and can be difficult to extend. Moreover, different use cases often have to rewrite the same data loading utilities over and over again. The goal here is to enable composable data loading through Iterable-style and Map-style building blocks called \u201cDataPipes\u201d that work well out of the box with the PyTorch\u2019s DataLoader.", "source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"} {"text": "A DataPipe takes in some access function over Python data structures, __iter__ for IterDataPipe and __getitem__ for MapDataPipe, and returns a new access function with a slight transformation applied. You can chain multiple DataPipes together to form a data pipeline that performs all the necessary data transformation.\nWe have implemented over 50 DataPipes that provide different core functionalities, such as opening files, parsing texts, transforming samples, caching, shuffling, and batching. For users who are interested in connecting to cloud providers (such as Google Drive or AWS S3), the fsspec and iopath DataPipes will allow you to do so. The documentation provides detailed explanations and usage examples of each IterDataPipe and MapDataPipe.", "source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"} {"text": "In this release, some of the PyTorch domain libraries have migrated their datasets to use DataPipes. In TorchText, the popular datasets provided by the library are implemented using DataPipes and a section of its SST-2 binary text classification tutorial demonstrates how you can use DataPipes to preprocess data for your model. There also are other prototype implementations of datasets with DataPipes in TorchVision (available in nightly releases) and in TorchRec. You can find more specific examples here.", "source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"} {"text": "The documentation for TorchData is now live. It contains a tutorial that covers how to use DataPipes, use them with DataLoader, and implement custom ones. 
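To make the composability concrete, here is a minimal sketch of chaining DataPipes and feeding them to DataLoader. It assumes torchdata is installed; the toy numbers and transforms are made up for illustration and are not taken from the release.

```python
# A minimal sketch of chaining DataPipes, assuming torchdata is installed.
from torchdata.datapipes.iter import IterableWrapper
from torch.utils.data import DataLoader

# Wrap an in-memory sequence, then chain transformations functionally.
dp = (
    IterableWrapper(range(10))       # source DataPipe
    .map(lambda x: x * 2)            # transform each sample
    .filter(lambda x: x % 3 == 0)    # keep a subset of samples
    .shuffle()                       # reshuffled on each epoch
    .batch(2)                        # group samples into lists of 2
)

# DataPipes plug into the regular DataLoader; batch_size=None keeps the
# DataPipe's own batching instead of re-batching.
loader = DataLoader(dp, batch_size=None)
for batch in loader:
    print(batch)
```

Because each step returns a new DataPipe, pipelines like this can be assembled from small, reusable pieces instead of one monolithic dataset class.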
FAQs and future plans related to DataLoader are described in our project\u2019s README file.\nIntroducing functorch\nWe\u2019re excited to announce the first beta release of functorch. Heavily inspired by Google JAX, functorch is a library that adds composable function transforms to PyTorch. It aims to provide composable vmap (vectorization) and autodiff transforms that work with PyTorch modules and PyTorch autograd with good eager-mode performance.", "source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"} {"text": "Composable function transforms can help with a number of use cases that are tricky to do in PyTorch today:\n\ncomputing per-sample-gradients (or other per-sample quantities)\nrunning ensembles of models on a single machine\nefficiently batching together tasks in the inner-loop of MAML\nefficiently computing Jacobians and Hessians as well as batched ones\n\nComposing vmap (vectorization), vjp (reverse-mode AD), and jvp (forward-mode AD) transforms allows us to effortlessly express the above without designing a separate library for each.\nFor more details, please see our documentation, tutorials, and installation instructions.\nDistributed Training\n(Stable) DDP static graph", "source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"} {"text": "(Stable) DDP static graph\nDDP static graph assumes that your model employs the same set of used/unused parameters in every iteration, so that it can deterministically know states like which hooks will fire, how many times the hooks will fire and gradients computation ready order after the first iteration. Static graph caches these states in the first iteration, and thus it could support features that DDP can not support in previous releases, e.g., support multiple activation checkpoints on the same parameters regardless of whether there are unused parameters or not. The static graph feature also applies performance optimizations when there are unused parameters, e.g., it avoids traversing graphs to search unused parameters every iteration, and enables dynamic bucketing order. These optimizations in the DDP static graph brought 10% QPS gain for some recommendation models.\nTo enable static graph, just simply set static_graph=True in the DDP API like this:\n```", "source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"} {"text": "ddp_model = DistributedDataParallel(model, static_graph=True)\n\nFor more details, please see our documentation and tutorials.\nThanks for reading, If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Twitter, Medium, YouTube, and LinkedIn.\nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-1.11-released/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Torchserve Performance Tuning, Animated Drawings Case-Study\"\nauthor: Hamid Shojanazeri, Geeta Chauhan, Mark Saroufim, Jesse Smith\nfeatured-img: \"assets/images/sketch_animator.png\"\n", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "\nIn this post we discuss performance tuning of Torchserve for serving your models in production. One of the biggest challenges in the life cycle of a ML project is deploying models in production. This requires a reliable serving solution along with solutions that address the MLOps needs. 
A robust serving solution needs to provide support for multi model serving, model versioning, metric logging, monitoring and scaling to serve the peak traffic. In this post, we will have an overview of Torchserve and how to tune its performance for production use-cases. We discuss the Animated Drawings app from Meta that can turn your human figure sketches to animations and how it could serve the peak traffic with Torchserve. The Animated Drawing\u2019s workflow is below.\n\n\n", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "\nhttps://ai.facebook.com/blog/using-ai-to-bring-childrens-drawings-to-life/\nMany AI systems and tools are designed to handle realistic images of humans, children's drawings add a level of complexity and unpredictability as they are often constructed in abstract, fanciful ways. These types of morphological and stylistic variations can confuse even state-of-the-art AI systems that excel at spotting objects in photorealistic images and drawings.\nMeta AI researchers are working to overcome this challenge so that AI systems will be better able to recognize drawings of human figures in the wildly varied ways that children create them. This great blog post provides more details about the Animated Drawings and the approach taken.\nTorchserve\n\n\nFig1. Overall flow of Torchserve performance tuning \n", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "\nOnce you have trained your model, it needs to be integrated into a larger system to have a full-fledged application, we use the term \u201cmodel serving\u201d to refer to this integration. Basically model serving is making your trained model available to run inferences and subsequent use of the model. \nTorchserve is the Pytorch preferred solution for serving models in production. It is a performant and scalable tool that wraps your model in a HTTP or HTTPS API. It has a frontend implemented in Java that handles multiple tasks from assigning workers for serving models to handling the connection between client and server. Torchserve has a Python backend that is responsible for handling the inference service.", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "Torchserve supports multi model serving and versioning for AB test, dynamic batching, logging and metrics. It exposes four APIs for inference, explanations, management and metrics. \nInference API is listening on port 8080 and accessible through localhost by default, this can be configured in Torchserve configuration and enable getting predictions from the model.", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "Explanation API uses Captum under the hood to provide explanations of the model that is being served and listens to the port 8080 as well.\nManagement API allows to register or unregister and describe a model. It also enables users to scale up or down the number of workers that serve the model. \nMetric API by default listens to port 8082 and enables us to monitor the model that is being served.", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "Torchserve let you scale your model serving and handle the peak traffic by supporting batch inference and multiple workers that serve your model. 
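For instance, once a model is registered, the number of workers serving it can be adjusted on the fly. The sketch below uses Python's requests package against the management API; it assumes the management API is listening on its default port 8081 and that the model was registered under the hypothetical name "my_model".

```python
# A rough illustration of scaling workers for an already-registered model.
# Assumptions: management API on its default port 8081, model name "my_model",
# and the `requests` package available.
import requests

MANAGEMENT = "http://127.0.0.1:8081"

# Describe the model to see how many workers are currently assigned.
print(requests.get(f"{MANAGEMENT}/models/my_model").json())

# Ask Torchserve to scale the model to a minimum of 4 workers.
resp = requests.put(f"{MANAGEMENT}/models/my_model", params={"min_worker": 4})
print(resp.status_code, resp.text)
```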
Scaling like this can be done through the management API or through settings in a configuration file. The metrics API also helps you monitor your model serving through default and custom metrics.\nOther advanced settings, such as the length of the queue for received requests and the maximum wait time for a batch of inputs, along with many other properties, are configurable through a config file that can be passed to Torchserve when it is started.\nSteps to serve your model with Torchserve", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "Steps to serve your model with Torchserve\n\nInstall Torchserve, the model archiver and their requirements.\nChoose a default handler that fits your task (e.g. image classification, etc.) or author a custom handler.\nPackage your model artifacts (trained model checkpoint and all other necessary files for loading and running your model) and the handler into a \u201c.mar\u201d file using the Torch Model Archiver and place it in the model store.\nStart serving your model.\n", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "\nRun inference.\nWe will discuss model handlers and metrics in more detail here.\n\nModel handlers\nTorchserve uses a handler in the backend to load the models, preprocess the received data, run inference and post-process the response. A handler in Torchserve is a Python script that contains all of the model initialization, preprocessing, inference and post-processing logic.\nTorchserve provides an out-of-the-box handler for a number of applications like image classification, segmentation, object detection and text classification. It also supports custom handlers, in case your use case is not covered by the default handlers. \nCustom handlers provide great flexibility; this potentially makes Torchserve a multi-framework serving tool. Custom handlers let you define your own logic to initialize a model, which can also be used to load models from other frameworks such as ONNX.", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "A Torchserve handler is made of four main functions, initialize, preprocess, inference and postprocess, each of which returns a list. Custom handlers inherit from BaseHandler in Torchserve and can override any of the main functions. The code snippet below shows an example of a custom handler. 
Here is an example of the handler used for loading the Detectron2 model for figure detection. This model has been exported to Torchscript and uses model.half() to run the inference with FP16; details are explained in another section in this post.\n```python\nimport io\nimport json\nimport os\n\nimport cv2\nimport numpy as np\nimport torch\nfrom ts.torch_handler.base_handler import BaseHandler\n\n\nclass MyModelHandler(BaseHandler):\n    def initialize(self, context):\n        self.manifest = context.manifest\n        properties = context.system_properties\n        model_dir = properties.get(\"model_dir\")\n        serialized_file = self.manifest[\"model\"][\"serializedFile\"]\n        model_pt_path = os.path.join(model_dir, serialized_file)\n        self.device = torch.device(\n", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "            \"cuda:\" + str(properties.get(\"gpu_id\"))\n            if torch.cuda.is_available() and properties.get(\"gpu_id\") is not None\n            else \"cpu\"\n        )\n        # Load the Torchscript-exported model and switch it to FP16\n        self.model = torch.jit.load(model_pt_path, map_location=self.device)\n        self.model = self.model.half()\n\n    def preprocess(self, data):\n        # Decode each request body into an FP16 CHW tensor on the target device\n        inputs = []\n        for request in data:\n            request_body = request.get(\"body\")\n            input_ = io.BytesIO(request_body)\n            image = cv2.imdecode(np.frombuffer(input_.read(), np.uint8), 1)\n            input = torch.Tensor(image).permute(2, 0, 1)\n            input = input.to(self.device)\n            input = input.half()\n            inputs.append({\"image\": input})\n        return inputs\n\n    def inference(self, inputs):\n        # The exported model consumes the list of per-image dicts built in preprocess\n        predictions = self.model(inputs)\n        return predictions\n\n    def postprocess(self, output):\n        responses = []\n        for inference_output in output:\n            responses_json = {\n", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "                'classes': inference_output['pred_classes'].tolist(),\n                'scores': inference_output['scores'].tolist(),\n                \"boxes\": inference_output['pred_boxes'].tolist()\n            }\n            responses.append(json.dumps(responses_json))\n        return responses\n```\nMetrics\nAn essential component in serving models in production is the ability to monitor them. Torchserve collects system level metrics regularly and allows adding custom metrics as well.
Tracking these metrics in a dashboard allows you to monitor performance regressions that may have been sporadic or hard to spot during an offline benchmark run.\nWhat to consider for tuning performance of a model in production\nThe workflow suggested in Fig 1, is the general idea on how to approach model deployment in production with Torchserve.\nIn many cases serving models in production is optimized based on throughput or latency service level agreement (SLA)s. Usually real-time applications are more concerned about latency whereas off-line applications may care more about higher throughput.", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "There are a number of main factors contributing to the performance of a serving model in production. In particular, we are focusing on serving Pytorch models with Torchserve here, however most of these factors generalize to all models from other frameworks as well.\n\nModel optimizations: this is a pre-step for deploying models into production. This is a very broad discussion that we will get into in a series of future blogs. This includes techniques like quantization, pruning to decrease the size of the model, using Intermediate representations (IR graphs) such as Torchscript in Pytorch, fusing kernels and many others. Currently torchprep provides many of these techniques as a CLI tool.\n", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "\nBatch inference: it refers to feeding multiple inputs into a model, while it is essential during training, it can be very helpful to manage the cost at inference time as well. Hardware accelerators are optimized for parallelism and batching helps to saturate the compute capacity and often leads to higher throughput. The main difference in inference is you can\u2019t wait too long to get a batch filled from clients, something we call dynamic batching\nNumber of Workers : Torchserve uses workers to serve models. Torchserve workers are Python processes that hold a copy of the model weights for running inference. Too few workers means you\u2019re not benefitting from enough parallelism but too many can cause worker contention and degrade end to end performance.\n", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "\nHardware : choosing the appropriate hardware based on the model, application and latency, throughput budget. This could be one of the supported hardwares in Torchserve, CPU, GPU, AWS Inferentia. Some hardware configurations are intended for best in class performance and others are better suited for cost effective inference. From our experiments we\u2019ve found that GPUs shine best at larger batch sizes whereas the right CPUs and AWS Inferentia can be far more cost effective for lower batch sizes and low latency.\n\nBest Practices for Performance tuning on Torchserve\nTo get the best performance out of your model while serving it with Torchserve, we are sharing some of the best practices here. Torchserve provides a benchmark suite that provides helpful insight to make informed decisions on different choices as detailed below.", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "\nOptimize your model as the first step, Pytorch model optimization tutorials. Model optimization choices are also closely tied to the hardware of choice. 
We will discuss it in more detail in another blog post.\nDeciding the hardware for model deployment is closely related to the latency and throughput budget and the cost per inference. Depending on the size of the model and the application it can vary: for some models, like computer vision models, it has historically not been affordable to run them in production on CPU. However, with optimizations such as IPEX, which was recently added to Torchserve, this has become much more affordable and cost-beneficial, and you can learn more in this investigative case study\n", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "\n\nWorkers in Torchserve are Python processes that provide parallelism; setting the number of workers should be done carefully. By default Torchserve launches a number of workers equal to the VCPUs or available GPUs on the host, which can add a considerable amount of time to the Torchserve start. \nTorchserve exposes a config property to set the number of workers. To provide efficient parallelism through multiple workers while avoiding competition over resources, as a baseline we recommend the following settings on CPU and GPU:\nCPU: In the handler, call torch.set_num_threads(1), then set the number of workers to num physical cores / 2. But the best threading configurations can be achieved by leveraging the Intel CPU launcher script.\n\n", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "GPU: the number of available GPUs can be set through number_gpus in config.properties. Torchserve uses round robin to assign workers to GPUs. We recommend setting the number of workers as follows: Number of workers = (Number of available GPUs) / (Number of unique models). Note that GPUs that are pre-Ampere do not provide any resource isolation with Multi Instance GPUs.", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "\nBatch size can directly affect the latency and the throughput. To better utilize the compute resources, batch size needs to be increased. However, there is a tradeoff between latency and throughput: larger batch sizes can increase the throughput but result in a higher latency as well. Batch size can be set in Torchserve in two ways, either through the model config in config.properties or while registering the model using the Management API. \n\nIn the next section, we are going to use the Torchserve benchmark suite to decide the best combination of model optimization, hardware, workers, and batch size. \nAnimated Drawings Performance Tuning", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "Animated Drawings Performance Tuning\nTo use the Torchserve benchmark suite, first we need to have an archived \u201c.mar\u201d file, as discussed above, that contains the model, handler and all other artifacts needed to load and run inference. Animated Drawings uses Detectron2\u2019s implementation of Mask-RCNN for an object detection model. \nHow to run the benchmark suite\nThe automated benchmark suite in Torchserve lets you benchmark multiple models with different settings, including batch size and number of workers, and finally generates a report for you. 
To get started:\ngit clone https://github.com/pytorch/serve.git\n\ncd serve/benchmarks\n\npip install -r requirements-ab.txt\n\napt-get install apache2-utils\n\nModel level settings can be configured in a yaml file similar to \n```yaml\nModel_name:\n eager_mode:\n benchmark_engine: \"ab\"\n url: \"Path to .mar file\"\n workers:\n - 1\n - 4", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "workers:\n - 1\n - 4\n batch_delay: 100\n batch_size:\n - 1\n - 2\n - 4\n - 8\n requests: 10000\n concurrency: 10\n input: \"Path to model input\"\n backend_profiling: False\n exec_env: \"local\"\n processors:\n - \"cpu\"\n - \"gpus\": \"all\"\n\nThis yaml file will be referenced in the [benchmark_config_template](https://github.com/pytorch/serve/blob/master/benchmarks/benchmark_config_template.yaml#L12).yaml file that includes other settings for generating reports, this can optionally work with AWS cloud watch for logs as well.\n\n\npython benchmarks/auto_benchmark.py --input benchmark_config_template.yaml\n```", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "```\nRunning the benchmarks, results will be written in \u201ccsv\u201d file that can be found in \u201c /tmp/benchmark/ab_report.csv\u201d and full report \u201c/tmp/ts_benchmark/report.md\". It will include items such as Torchserve average latency, model P99 latency, throughput, number of concurrency, number of requests, handler time, and some other metrics. Here we focus on some of the important ones that we track to tune the performance which are, concurrency, model P99 latency, throughput. We look at these numbers specifically in combination with batch size, the used device, number of workers and if any model optimization has been done.\nThe latency SLA for this model has been set to 100 ms, this is real-time application and as we discussed earlier, latency is more of a concern and throughput ideally should be as high as possible while it does not violate the latency SLA.", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "Through searching the space, over different batch sizes (1-32), number of workers (1-16) and devices (CPU,GPU), we have run a set of experiments that summarized the best ones in the table below.\n\n\nDevice \n \nConcurrency \n \n# Requests\n \n#workers\n \nBatch size\n \nPayload/image\n \nOptimization \n \nThroughput \n \nLatency P99\n \n\n\nCPU\n \n10\n \n1000\n \n1\n \n1\n \nsmall\n \nN/A\n \n3.45\n \n305.3 ms\n \n\n\nCPU\n \n1\n \n1000\n \n1\n \n1\n \nsmall\n \nN/A\n \n3.45\n \n291.8 ms\n \n\n\nGPU\n \n10\n \n1000\n \n1\n \n1\n \nsmall\n \nN/A\n \n41.05", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "\n\nN/A\n \n41.05\n \n25.48 ms\n \n\n\n\nGPU\n \n1\n \n1000\n \n1\n \n1\n \nsmall\n \nN/A\n \n42.21\n \n23.6 ms\n \n\n\nGPU\n \n10\n \n1000\n \n1\n \n4\n \nsmall\n \nN/A\n \n54.78\n \n73.62 ms\n \n\n\nGPU\n \n10\n \n1000\n \n1\n \n4\n \nsmall\n \nmodel.half()\n \n78.62\n \n50.69 ms\n \n\n\nGPU\n \n10\n \n1000\n \n1\n \n8\n \nsmall\n \nmodel.half()\n \n85.29\n \n94.4 ms\n \n\n", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "\n\n94.4 ms\n \n\n\nThe latency of this model on CPU with all of the tried settings in terms of batch size, concurrency and number of workers did not meet the SLA, in fact ~13x higher.\nMoving the model serving 
to GPU immediately improved the latency ~13x, from 305 ms down to 23.6 ms. \nOne of the simplest optimizations we could apply to the model was lowering its precision to fp16. It is a one-liner (model.half()) and reduced the model P99 latency by 32% while increasing the throughput by almost the same amount.\nOther optimizations, such as Torchscripting the model and using optimize_for_inference, or other tricks including ONNX or TensorRT runtime optimizations which leverage aggressive fusions, are out of the scope of this post. We will discuss model optimizations in a separate post.", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "We found that, both on CPU and GPU, setting the number of workers to 1 worked best in this case. \n\nMoving the model to GPU, using number of workers = 1, and batch size = 1 increased the throughput ~12x compared to CPU and improved the latency ~13x.\nMoving the model to GPU, using model.half(), number of workers = 1, and batch size = 8 yielded the best results in terms of throughput and tolerable latency. Throughput increased ~25x compared to CPU with latency still meeting the SLA (94.4 ms).\n\nNote: if you are running the benchmark suite, make sure you are setting a proper batch_delay and set the concurrency of the requests to a number proportional to your batch size. Concurrency here means the number of concurrent requests being sent to the server.\nConclusion", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "Conclusion\nIn this post, we have discussed the considerations and knobs that Torchserve exposes to tune performance in production. We have discussed the Torchserve benchmark suite as a means to tune performance and get insights on possible choices for model optimizations, hardware choice and cost in general. We used the Animated Drawings app, which uses Detectron2\u2019s Mask-RCNN model, as a case study to showcase performance tuning with the benchmark suite. \nFor more details on performance tuning in Torchserve, please refer to our documentation here.\nAlso, feel free to open a ticket on the Torchserve repo for any further questions and feedback. \nAcknowledgement", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "Acknowledgement\nWe would like to thank Somya Jain (Meta) and Christopher Gustave (Meta) for their great support and guidance throughout many steps of this blog and for providing insights into the Sketch Animator workflow. Also, special thanks to Li Ning from AWS for the great efforts to make performance tuning much easier on Torchserve with the automated benchmark suite.\n", "source": "https://pytorch.org/blog/torchserve-performance-tuning/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Scaling Vision Model Training Platforms with PyTorch\"\nauthor: Vaibhav Aggarwal, Mannat Singh, Anjali Sridhar, Yanghao Li, Shoubhik Debnath, Ronghang Hu, Will Feng, Xinlei Chen, Tingting Markstrum, Diana Liskovich, Anupam Bhatnagar, Chay Ryali, Haoqi Fan, Tete Xiao, Min Xu, Rahul Iyer, Christoph Feichtenhofer, Ross Girshick, Piotr Dollar, Aaron Adcock, Wan-Yen Lo, CK Luk\nfeatured-img: \"/assets/images/scaling-vision-figure_1-solutions-to-the-challenges.png\"\n\nTL;DR: We demonstrate the use of PyTorch with FairScale\u2019s FullyShardedDataParallel (FSDP) API in writing large vision transformer models. We discuss our techniques for scaling and optimizing these models on a GPU cluster. 
The goal of this platform scaling effort is to enable research at scale. This blog does not discuss model accuracy, new model architectures, or new training recipes.\n1. Introduction", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "1. Introduction\nLatest vision research [1, 2] demonstrates model scaling as a promising research direction. In this project, we aim to enable our platforms to train massive vision transformer (ViT) [3] models. We present our work on scaling the largest trainable ViT from 1B to 120B parameters in FAIR vision platforms. We wrote ViT in PyTorch and leveraged its support for large-scale, distributed training on a GPU cluster.\nIn the rest of this blog, we will first discuss the main challenges, namely scalability, optimization, and numerical stability. Then we will discuss how we tackle them with techniques including data and model parallelism, automatic mixed precision, kernel fusion, and bfloat16. Finally, we present our results and conclude.\n2. Main Challenges\n2.1 Scalability", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "2. Main Challenges\n2.1 Scalability\nThe key scalability challenge is to efficiently shard a model\u2019s operations and state across multiple GPUs. A 100B parameter model requires ~200GB of RAM just for parameters, assuming fp16 representation. So, it is impossible to fit the model on a single GPU (A100 has at most 80GB RAM). Therefore, we need some way to efficiently shard a model\u2019s data (input, parameters, activations, and optimizer state) across multiple GPUs.\nAnother aspect of this problem is to scale without significantly changing the training recipe. E.g. Certain representation learning recipes use a global batch size of up to 4096 beyond which we start to see accuracy degradation. We cannot scale to more than 4096 GPUs without using some form of tensor or pipeline parallelism.\n2.2 Optimization", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "2.2 Optimization\nThe key optimization challenge is to maintain high GPU utilization even as we scale the number of model parameters and flops. When we scale models to teraflops and beyond, we start to hit major bottlenecks in our software stack that super-linearly increase training time and reduce accelerator utilization. We require hundreds or thousands of GPUs to run just a single experiment. Improvements in accelerator utilization can lead to significant reductions in cost and improve fleet utilization. It enables us to fund more projects and run more experiments in parallel.\n2.3 Numerical Stability", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "2.3 Numerical Stability\nThe key stability challenge is to avoid numerical instability and divergence at large scale. We empirically observed in our experiments that the training instability gets severe and hard to deal with when we scale up model sizes, data, batch sizes, learning rate, etc. Vision Transformers particularly face training instability even at a lower parameter threshold. E.g., we find it challenging to train even ViT-H (with just 630M parameters) in mixed-precision mode without using strong data augmentation. 
We need to study the model properties and training recipes to make sure that the models train stably and converge.\n3. Our Solutions\nFigure 1 depicts our solutions to each of the challenges.\n\n\n\n3.1 Addressing scaling challenges with data parallelism and model parallelism", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "We apply various forms of data and model parallelism to enable fitting very large models in GPU memory.\nWe use FairScale\u2019s FullyShardedDataParallel (FSDP) API [4], based on PyTorch, to shard parameters, gradients, and optimizer state across multiple GPUs, thereby reducing the memory footprint per GPU. This process consists of the following three steps:\n\n\nStep 1: We wrapped the entire model in a single FSDP instance. This shards the model parameters at the end of a forward pass and gathers parameters at the beginning of a forward pass. This enabled us to scale ~3x from 1.5B to 4.5B parameters. \n\n\nStep 2: We experimented with wrapping individual model layers in separate FSDP instances. This nested wrapping further reduced the memory footprint by sharding and gathering parameters of individual model layers instead of an entire model. The peak memory is then determined by an individually wrapped transformer block in GPU memory in this mode instead of the entire model.\n\n", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "\nStep 3: We used activation-checkpoint to reduce the memory consumption by activations. It saves the input tensors and discards the intermediate activation tensors during the forward pass. These are recomputed during the backward pass.\n\nIn addition, we experimented with model-parallelism techniques such as pipeline parallelism [5], which allow us to scale to more GPUs without increasing the batch size.\n3.2 Addressing optimization challenges with advanced AMP and kernel fusion\nAdvanced AMP\nAutomatic Mixed Precision (AMP) [6] training refers to training models using a lower precision of bits than FP32 or the default but still maintaining accuracy. We experimented with three levels of AMP as described below:\n\nAMP O1: This refers to training in mixed precision where weights are in FP32 and some operations are in FP16. With AMP O1, the ops that might impact accuracy remain in FP32 and are not autocasted to FP16.\n", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "\n\nAMP O2: This refers to training in mixed precision but with more weights and ops in FP16 than in O1. Weights do not implicitly remain in FP32 and are cast to FP16. A copy of the master weights is maintained in the FP32 precision that is used by the optimizer. If we want the normalization layer weights in FP32 then we need to explicitly use layer wrapping to ensure that.\n\n\nFull FP16: This refers to training in full FP16 where weights and operations are in FP16. 
FP16 is challenging to enable for training due to convergence issues.\n\n\nWe found that AMP O2 with LayerNorm wrapping in FP32 leads to the best performance without sacrificing accuracy.\nKernel Fusion\n\nTo reduce GPU kernel launch overhead and increase GPU work granularity, we experimented with kernel fusions, including fused dropout and fused layer-norm, using the xformers library [7].\n\n3.3 Addressing stability challenges by studying ops numerical stability and training recipes", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "BFloat16 in general but with LayerNorm in FP32\nThe bfloat16 (BF16) [8] floating-point format provides the same dynamic range as FP32 with a memory footprint identical to FP16. We found that we could train models in the BF16 format using the same set of hyperparameters as in FP32, without special parameter tuning. Nevertheless, we found that we need to keep LayerNorm in FP32 mode in order for the training to converge.\n3.4 Final training recipe\nA summary of the final training recipe.\n\nWrap the outer model in an FSDP instance. Enable parameter sharding after the forward pass.\nWrap individual ViT blocks with activation checkpointing, nested FSDP wrapping, and parameter flattening.\nEnable mixed precision mode (AMP O2) with bfloat16 representation. Maintain the optimizer state in FP32 precision to enhance numerical stability.\nWrap normalization layers like LayerNorm in FP32 for better numerical stability.\n", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "\nMaximize the Nvidia TensorCore utilization by keeping matrix dimensions to be multiple of 8. For More details check Nvidia Tensor Core Performance Guide.\n\n4. Results\nIn this section, we show the scaling results of ViT on three types of tasks: (1) image classification, (2) object detection (3) video understanding. Our key result is that we are able to train massive ViT backbones across these vision tasks after applying the discussed scaling and optimization techniques. This enables vision research at a much larger scale. We trained the models to convergence to verify that we maintain the current baselines even with all the optimizations. A common trend in Figures 2, 3, 4 is that we are able to train up to 25B-param models with an epoch time of less than 4 hours on 128 A100 GPUs. The 60B and 120B models are relatively slower to train.", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "Figure 2 shows the image-classification scaling result. It plots the epoch time for training ViTs on ImageNet using 128 A100-80GB GPUs with different model sizes.\n\n\n\n\nFigure 2: Image-classification scaling result.\n\nFigure 3 shows the object-detection scaling result. It plots the epoch time for training ViTDet [9] with different ViT backbones on COCO using 128 A100-80GB GPUs.\n\n\n\n\nFigure 3: Object-detection scaling result.\n", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "\nFigure 4 shows the video-understanding scaling result. 
It plots the epoch time for training MViTv2 [10] models on Kinetics 400 [11] using 128 V100 (32 GB) GPUs in FP32.\n\n\n\n\nFigure 4: Video-understanding scaling result.\n\nFigure 5 shows the optimization result with the ViT-H model in Figure 2 on 8 A100-40GB GPUs.\nThree versions are used: (1) the baseline uses PyTorch\u2019s DDP [12] with AMP O1, (2) FSDP + AMP-O2 + other optimizations, and (3) FSDP + FP16 + other optimizations. These optimizations altogether speed up the training by up to 2.2x.\n\n\n\n\nFigure 5: Training speedups from various optimizations.", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "\n5. Concluding Remarks\nWe have demonstrated the use of PyTorch with FairScale\u2019s FullyShardedDataParallel (FSDP) API in writing large vision transformer models. We discuss our techniques for scaling and optimizing these models on a GPU cluster. We hope that this article can motivate others to develop large-scale ML models with PyTorch and its ecosystem.\nReferences\n[1] Masked Autoencoders Are Scalable Vision Learners\n[2] Revisiting Weakly Supervised Pre-Training of Visual Perception Models\n[3] An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale\n[4] fairscale.nn.FullyShardedDataParallel\n[5] Pipeline parallelism in PyTorch\n[6] Automatic Mixed Precision (AMP) in PyTorch", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "[7] xformers\n[8] The bfloat16 numerical format\n[9] Exploring Plain Vision Transformer Backbones for Object Detection\n[10] MViTv2: Improved Multiscale Vision Transformers for Classification and Detection\n[11] https://www.deepmind.com/open-source/kinetics\n[12] Getting Started with Distributed Data Parallel (DDP)", "source": "https://pytorch.org/blog/scaling-vision-model-training-platforms-with-pytorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Announcing PyTorch Ecosystem Day'\nauthor: Team PyTorch\n\nWe\u2019re proud to announce our first PyTorch Ecosystem Day. The virtual, one-day event will focus completely on our Ecosystem and Industry PyTorch communities!\nPyTorch is a deep learning framework of choice for academics and companies, all thanks to its rich ecosystem of tools and strong community. As with our developers, our ecosystem partners play a pivotal role in the development and growth of the community.\n\n\n\nWe will be hosting our first PyTorch Ecosystem Day, a virtual event designed for our ecosystem and industry communities to showcase their work and discover new opportunities to collaborate.", "source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"} {"text": "PyTorch Ecosystem Day will be held on April 21, with both a morning and evening session, to ensure we reach our global community. Join us virtually for a day filled with discussions on new developments, trends, challenges, and best practices through keynotes, breakout sessions, and a unique networking opportunity hosted through Gather.Town . 
\nEvent Details\nApril 21, 2021 (Pacific Time)\nFully digital experience \n\n\nMorning Session: (EMEA)\nOpening Talks - 8:00 am-9:00 am PT\nPoster Exhibition & Breakout Sessions - 9:00 am-12:00 pm PT \n\n\nEvening Session (APAC/US)\nOpening Talks - 3:00 pm-4:00 pm PT\nPoster Exhibition & Breakout Sessions - 3:00 pm-6:00 pm PT \n\n\nNetworking - 9:00 am-7:00 pm PT\n\n\nThere are two ways to participate in PyTorch Ecosystem Day:", "source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"} {"text": "\n\nPoster Exhibition from the PyTorch ecosystem and industry communities covering a variety of topics. Posters are available for viewing throughout the duration of the event. To be part of the poster exhibition, please see below for submission details. If your poster is accepted, we highly recommend tending your poster during one of the morning or evening sessions or both!\n\n\nBreakout Sessions are 40-min sessions freely designed by the community. The breakouts can be talks, demos, tutorials or discussions. Note: you must have an accepted poster to apply for the breakout sessions.\n\n", "source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"} {"text": "Call for posters now open! Submit your proposal today! Please send us the title and summary of your projects, tools, and libraries that could benefit PyTorch researchers in academia and industry, application developers, and ML engineers for consideration. The focus must be on academic papers, machine learning research, or open-source projects. Please no sales pitches. Deadline for submission is March 18, 2021. \nVisit pytorchecosystemday.fbreg.com for more information and we look forward to welcoming you to PyTorch Ecosystem Day on April 21st!", "source": "https://pytorch.org/blog/ecosystem_day_2021/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch 1.5 released, new and updated APIs including C++ frontend API parity with Python'\nauthor: Team PyTorch\n\nToday, we\u2019re announcing the availability of PyTorch 1.5, along with new and updated libraries. This release includes several major new API additions and improvements. PyTorch now includes a significant update to the C++ frontend, \u2018channels last\u2019 memory format for computer vision models, and a stable release of the distributed RPC framework used for model-parallel training. The release also has new APIs for autograd for hessians and jacobians, and an API that allows the creation of Custom C++ Classes that was inspired by pybind.\nYou can find the detailed release notes here.\nC++ Frontend API (Stable)\nThe C++ frontend API is now at parity with Python, and the features overall have been moved to \u2018stable\u2019 (previously tagged as experimental). Some of the major highlights include:", "source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"} {"text": "\nNow with ~100% coverage and docs for C++ torch::nn module/functional, users can easily translate their model from Python API to C++ API, making the model authoring experience much smoother.\nOptimizers in C++ had deviated from the Python equivalent: C++ optimizers can\u2019t take parameter groups as input while the Python ones can. Additionally, step function implementations were not exactly the same. 
With the 1.5 release, C++ optimizers will always behave the same as the Python equivalent.\nThe lack of tensor multi-dim indexing API in C++ is a well-known issue and had resulted in many posts in PyTorch Github issue tracker and forum. The previous workaround was to use a combination of narrow / select / index_select / masked_select, which was clunky and error-prone compared to the Python API\u2019s elegant tensor[:, 0, ..., mask] syntax. With the 1.5 release, users can use tensor.index({Slice(), 0, \"...\", mask}) to achieve the same purpose.\n", "source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"} {"text": "\u2018Channels last\u2019 memory format for Computer Vision models (Experimental)\n\u2018Channels last\u2019 memory layout unlocks ability to use performance efficient convolution algorithms and hardware (NVIDIA\u2019s Tensor Cores, FBGEMM, QNNPACK). Additionally, it is designed to automatically propagate through the operators, which allows easy switching between memory layouts.\nLearn more here on how to write memory format aware operators.\nCustom C++ Classes (Experimental)\nThis release adds a new API, torch::class_, for binding custom C++ classes into TorchScript and Python simultaneously. This API is almost identical in syntax to pybind11. It allows users to expose their C++ class and its methods to the TorchScript type system and runtime system such that they can instantiate and manipulate arbitrary C++ objects from TorchScript and Python. An example C++ binding:\n```cpp\n", "source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"} {"text": "template <class T>\nstruct MyStackClass : torch::CustomClassHolder {\n  std::vector<T> stack_;\n  MyStackClass(std::vector<T> init) : stack_(std::move(init)) {}\n\n  void push(T x) {\n    stack_.push_back(x);\n  }\n  T pop() {\n    auto val = stack_.back();\n    stack_.pop_back();\n    return val;\n  }\n};\n\nstatic auto testStack =\n  torch::class_<MyStackClass<std::string>>(\"myclasses\", \"MyStackClass\")\n      .def(torch::init<std::vector<std::string>>())\n      .def(\"push\", &MyStackClass<std::string>::push)\n      .def(\"pop\", &MyStackClass<std::string>::pop)\n      .def(\"size\", [](const c10::intrusive_ptr<MyStackClass<std::string>>& self) {\n        return self->stack_.size();\n      });\n```\n\nWhich exposes a class you can use in Python and TorchScript like so:\n```python\n@torch.jit.script\ndef do_stacks(s : torch.classes.myclasses.MyStackClass):\n    s2 = torch.classes.myclasses.MyStackClass([\"hi\", \"mom\"])\n    print(s2.pop()) # \"mom\"\n    s2.push(\"foobar\")\n    return s2 # [\"hi\", \"foobar\"]\n", "source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"} {"text": "```\nYou can try it out in the tutorial here.\nDistributed RPC framework APIs (Now Stable)\nThe Distributed RPC framework was launched as experimental in the 1.4 release and the proposal is to mark Distributed RPC framework as stable and no longer experimental. This work involves a lot of enhancements and bug fixes to make the distributed RPC framework more reliable and robust overall, as well as adding a couple of new features, including profiling support, using TorchScript functions in RPC, and several enhancements for ease of use. Below is an overview of the various APIs within the framework:\nRPC API\nThe RPC API allows users to specify functions to run and objects to be instantiated on remote nodes. 
These functions are transparently recorded so that gradients can backpropagate through remote nodes using Distributed Autograd.", "source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"} {"text": "Distributed Autograd\nDistributed Autograd connects the autograd graph across several nodes and allows gradients to flow through during the backwards pass. Gradients are accumulated into a context (as opposed to the .grad field as with Autograd) and users must specify their model\u2019s forward pass under a with dist_autograd.context() manager in order to ensure that all RPC communication is recorded properly. Currently, only FAST mode is implemented (see here for the difference between FAST and SMART modes).\nDistributed Optimizer", "source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"} {"text": "Distributed Optimizer\nThe distributed optimizer creates RRefs to optimizers on each worker with parameters that require gradients, and then uses the RPC API to run the optimizer remotely. The user must collect all remote parameters and wrap them in an RRef, as this is required input to the distributed optimizer. The user must also specify the distributed autograd context_id so that the optimizer knows in which context to look for gradients.\nLearn more about distributed RPC framework APIs here.\nNew High level autograd API (Experimental)\nPyTorch 1.5 brings new functions including jacobian, hessian, jvp, vjp, hvp and vhp to the torch.autograd.functional submodule. This feature builds on the current API and allows the user to easily perform these functions.\nDetailed design discussion on GitHub can be found here.\nPython 2 no longer supported", "source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"} {"text": "Python 2 no longer supported\nStarting PyTorch 1.5.0, we will no longer support Python 2, specifically version 2.7. Going forward support for Python will be limited to Python 3, specifically Python 3.5, 3.6, 3.7 and 3.8 (first enabled in PyTorch 1.4.0).\nWe\u2019d like to thank the entire PyTorch team and the community for all their contributions to this work.\nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-1-dot-5-released-with-new-and-updated-apis/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'OpenMined and PyTorch partner to launch fellowship funding for privacy-preserving ML community'\nauthor: Andrew Trask (OpenMined/U.Oxford), Shubho Sengupta, Laurens van der Maaten, Joe Spisak\nexcerpt: Many applications of machine learning (ML) pose a range of security and privacy challenges.\n\n\n\n\nMany applications of machine learning (ML) pose a range of security and privacy challenges. In particular, users may not be willing or allowed to share their data, which prevents them from taking full advantage of ML platforms like PyTorch. To take the field of privacy-preserving ML (PPML) forward, OpenMined and PyTorch are announcing plans to jointly develop a combined platform to accelerate PPML research as well as new funding for fellowships.", "source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"} {"text": "There are many techniques attempting to solve the problem of privacy in ML, each at various levels of maturity. 
These include (1) homomorphic encryption, (2) secure multi-party computation, (3) trusted execution environments, (4) on-device computation, (5) federated learning with secure aggregation, and (6) differential privacy. Additionally, a number of open source projects implementing these techniques were created with the goal of enabling research at the intersection of privacy, security, and ML. Among them, PySyft and CrypTen have taken an \u201cML-first\u201d approach by presenting an API that is familiar to the ML community, while masking the complexities of privacy and security protocols. We are excited to announce that these two projects are now collaborating closely to build a mature PPML ecosystem around PyTorch.", "source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"} {"text": "Additionally, to bolster this ecosystem and take the field of privacy preserving ML forward, we are also calling for contributions and supporting research efforts on this combined platform by providing funding to support the OpenMined community and the researchers that contribute, build proofs of concepts and desire to be on the cutting edge of how privacy-preserving technology is applied. We will provide funding through the RAAIS Foundation, a non-profit organization with a mission to advance education and research in artificial intelligence for the common good. We encourage interested parties to apply to one or more of the fellowships listed below.\nTools Powering the Future of Privacy-Preserving ML", "source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"} {"text": "The next generation of privacy-preserving open source tools enable ML researchers to easily experiment with ML models using secure computing techniques without needing to be cryptography experts. By integrating with PyTorch, PySyft and CrypTen offer familiar environments for ML developers to research and apply these techniques as part of their work.\nPySyft is a Python library for secure and private ML developed by the OpenMined community. It is a flexible, easy-to-use library that makes secure computation techniques like multi-party computation (MPC) and privacy-preserving techniques like differential privacy accessible to the ML community. It prioritizes ease of use and focuses on integrating these techniques into end-user use cases like federated learning with mobile phones and other edge devices, encrypted ML as a service, and privacy-preserving data science.", "source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"} {"text": "CrypTen is a framework built on PyTorch that enables private and secure ML for the PyTorch community. It is the first step along the journey towards a privacy-preserving mode in PyTorch that will make secure computing techniques accessible beyond cryptography researchers. It currently implements secure multiparty computation with the goal of offering other secure computing backends in the near future. Other benefits to ML researchers include:\n\nIt is ML first and presents secure computing techniques via a CrypTensor object that looks and feels exactly like a PyTorch Tensor. 
This allows the user to use automatic differentiation and neural network modules akin to those in PyTorch.\nThe framework focuses on scalability and performance and is built with real-world challenges in mind.\n", "source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"} {"text": "The focus areas for CrypTen and PySyft are naturally aligned and complement each other. The former focuses on building support for various secure and privacy preserving techniques on PyTorch through an encrypted tensor abstraction, while the latter focuses on end user use cases like deployment on edge devices and a user friendly data science platform.\nWorking together will enable PySyft to use CrypTen as a backend for encrypted tensors. This can lead to an increase in performance for PySyft and the adoption of CrypTen as a runtime by PySyft\u2019s userbase. In addition to this, PyTorch is also adding cryptography friendly features such as support for cryptographically secure random number generation. Over the long run, this allows each library to focus exclusively on its core competencies while enjoying the benefits of the synergistic relationship.\nNew Funding for OpenMined Contributors", "source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"} {"text": "New Funding for OpenMined Contributors\nWe are especially excited to announce that the PyTorch team has invested $250,000 to support OpenMined in furthering the development and proliferation of privacy-preserving ML. This gift will be facilitated via the RAAIS Foundation and will be available immediately to support paid fellowship grants for the OpenMined community.\nHow to get involved\nThanks to the support from the PyTorch team, OpenMined is able to offer three different opportunities for you to participate in the project\u2019s development. Each of these fellowships furthers our shared mission to lower the barrier-to-entry for privacy-preserving ML and to create a more privacy-preserving world.\nCore PySyft CrypTen Integration Fellowships", "source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"} {"text": "Core PySyft CrypTen Integration Fellowships\nDuring these fellowships, we will integrate CrypTen as a supported backend for encrypted computation in PySyft. This will allow for the high-performance, secure multi-party computation capabilities of CrypTen to be used alongside other important tools in PySyft such as differential privacy and federated learning. For more information on the roadmap and how to apply for a paid fellowship, check out the project\u2019s call for contributors.\nFederated Learning on Mobile, Web, and IoT Devices", "source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"} {"text": "During these fellowships, we will be extending PyTorch with the ability to perform federated learning across mobile, web, and IoT devices. To this end, a PyTorch front-end will be able to coordinate across federated learning backends that run in Javascript, Kotlin, Swift, and Python. Furthermore, we will also extend PySyft with the ability to coordinate these backends using peer-to-peer connections, providing low latency and the ability to run secure aggregation as a part of the protocol. 
For more information on the roadmap and how to apply for a paid fellowship, check out the project\u2019s call for contributors.\nDevelopment Challenges", "source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"} {"text": "Development Challenges\nOver the coming months, we will issue regular open competitions for increasing the performance and security of the PySyft and PyGrid codebases. For performance-related challenges, contestants will compete (for a cash prize) to make a specific PySyft demo (such as federated learning) as fast as possible. For security-related challenges, contestants will compete to hack into a PyGrid server. The first to demonstrate their ability will win the cash bounty! For more information on the challenges and to sign up to receive emails when each challenge is opened, sign up here.\nTo apply, select one of the above projects and identify a role that matches your strengths!\nCheers,\nAndrew, Laurens, Joe, and Shubho", "source": "https://pytorch.org/blog/openmined-and-pytorch-launch-fellowship-funding-for-privacy-preserving-ml/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'How Computational Graphs are Constructed in PyTorch'\nauthor: Preferred Networks\nfeatured-img: 'assets/images/augmented_computational_graph.png'\n\nIn the previous post we went over the theoretical foundations of automatic differentiation and reviewed the implementation in PyTorch. In this post, we will be showing the parts of PyTorch involved in creating the graph and executing it. In order to understand the following contents, please read @ezyang\u2019s wonderful blog post about PyTorch internals.\nAutograd components\nFirst of all, let\u2019s look at where the different components of autograd live:", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "tools/autograd: Here we can find the definition of the derivatives as we saw in the previous post derivatives.yaml, several python scripts and a folder called templates. These scripts and the templates are used at building time to generate the C++ code for the derivatives as specified in the yaml file. Also, the scripts here generate wrappers for the regular ATen functions so that the computational graph can be constructed.", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "torch/autograd: This folder is where the autograd components that can be used directly from python are located. In function.py we find the actual definition of torch.autograd.Function, a class used by users to write their own differentiable functions in python as per the documentation. functional.py holds components for functionally computing the jacobian vector product, hessian, and other gradient related computations of a given function.\nThe rest of the files have additional components such as gradient checkers, anomaly detection, and the autograd profiler.\ntorch/csrc/autograd: This is where the graph creation and execution-related code lives.", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "All this code is written in C++, since it is a critical part that is required to be extremely performant. Here we have several files that implement the engine, metadata storage, and all the needed components. 
Alongside this, we have several files whose names start with python_, and their main responsibility is to allow python objects to be used in the autograd engine.\nGraph Creation\nPreviously, we described the creation of a computational graph. Now, we will see how PyTorch creates these graphs with references to the actual codebase.\n\n\n\nFigure 1: Example of an augmented computational graph\n\nIt all starts in our python code, when we request a tensor that requires the gradient.\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "\nWhen the `requires_grad` flag is set in tensor creation, c10 will [allocate](https://github.com/pytorch/pytorch/blob/e7cd59c7a061c78d8d0265e4308b5933e44f9176/c10/core/TensorImpl.cpp#L382-L406) an `AutogradMeta` object that is used to hold the graph information.\n\n```c++\n\nvoid TensorImpl::set_requires_grad(bool requires_grad) {\n ...\n if (!autograd_meta_)\n autograd_meta_ = impl::GetAutogradMetaFactory()->make();\n autograd_meta_->set_requires_grad(requires_grad, this);\n}\n```\n\nThe AutogradMeta object is defined in torch/csrc/autograd/variable.h as follows:\n\nstruct TORCH_API AutogradMeta : public c10::AutogradMetaInterface {\n std::string name_;\n\n Variable grad_;\n std::shared_ptr<Node> grad_fn_;\n std::weak_ptr<Node> grad_accumulator_;\n // other fields and methods\n ...\n};\n", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "// other fields and methods\n ...\n};\n\nThe most important fields in this structure are the computed gradient in `grad_` and a pointer to the function `grad_fn` that will be called by the engine to produce the actual gradient. Also, there is a gradient accumulator object that is used to add together all the different gradients where this tensor is involved as we will see in the graph execution.\n\n### Graphs, Nodes and Edges.\n\nNow, when we call a differentiable function that takes this tensor as an argument, the associated metadata will be populated. Let\u2019s suppose that we call a regular torch function that is implemented in ATen. Let it be the multiplication as in our previous blog post example. The resulting tensor has a field called `grad_fn` that is essentially a pointer to the function that will be used to compute the gradient of that operation.\n\n```py\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> v = x[0] * x[1]\n>>> v\ntensor(0.3750, grad_fn=<MulBackward0>)\n", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\nv\ntensor(0.3750, grad_fn=<MulBackward0>)\n```\n\n\n\nHere we see that the tensor\u2019s grad_fn has a MulBackward0 value. This function is the same that was written in the derivatives.yaml file, and its C++ code was generated automatically by all the scripts in tools/autograd. 
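Before looking at that generated C++, we can peek at the node and its edges directly from Python. The snippet below is only illustrative (the printed class names and addresses vary across PyTorch versions): `grad_fn` is the node created for the multiplication, and `next_functions` holds the edges linking it to the nodes of its inputs.

```python
import torch

x = torch.tensor([0.5, 0.75], requires_grad=True)
v = x[0] * x[1]

# The node created for the multiplication during the forward pass.
print(v.grad_fn)                 # e.g. <MulBackward0 object at 0x...>

# The edges of this node: pairs of (next node, input_nr). Here they point to
# the nodes produced by the indexing operations on the leaf tensor x.
print(v.grad_fn.next_functions)
```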
Its auto-generated source code can be seen in torch/csrc/autograd/generated/Functions.cpp.\n```c++\nvariable_list MulBackward0::apply(variable_list&& grads) {\n std::lock_guard<std::mutex> lock(mutex_);\nIndexRangeGenerator gen;\n auto self_ix = gen.range(1);\n auto other_ix = gen.range(1);\n variable_list grad_inputs(gen.size());\n auto& grad = grads[0];\n auto self = self_.unpack();\n auto other = other_.unpack();\n bool any_grad_defined = any_variable_defined(grads);\n if (should_compute_output({ other_ix })) {", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "if (should_compute_output({ other_ix })) {\n auto grad_result = any_grad_defined ? (mul_tensor_backward(grad, self, other_scalar_type)) : Tensor();\n copy_range(grad_inputs, other_ix, grad_result);\n }\n if (should_compute_output({ self_ix })) {\n auto grad_result = any_grad_defined ? (mul_tensor_backward(grad, other, self_scalar_type)) : Tensor();\n copy_range(grad_inputs, self_ix, grad_result);\n }\n return grad_inputs;\n}\n```\nThe grad_fn objects inherit from the TraceableFunction class, a descendant of Node with just a property set to enable tracing for debugging and optimization purposes. A graph by definition has nodes and edges, so these functions are indeed the nodes of the computational graph that are linked together by using Edge objects to enable the graph traversal later on.", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "The Node definition can be found in the torch/csrc/autograd/function.h file.\nstruct TORCH_API Node : std::enable_shared_from_this<Node> {\n ...\n /// Evaluates the function on the given inputs and returns the result of the\n /// function call.\n variable_list operator()(variable_list&& inputs) {\n ...\n }\n\nprotected:\n /// Performs the `Node`'s actual operation.\n virtual variable_list apply(variable_list&& inputs) = 0;\n \u2026\n edge_list next_edges_;\n\nEssentially we see that it has an override of the operator () that performs the call to the actual function, and a pure virtual function called apply. The automatically generated functions override this apply method as we saw in the MulBackward0 example above. Finally, the node also has a list of edges to enable graph connectivity.", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "The Edge object is used to link Nodes together and its implementation is straightforward.\nstruct Edge {\n ...\n /// The function this `Edge` points to.\n std::shared_ptr<Node> function;\n /// The identifier of a particular input to the function.\n uint32_t input_nr;\n};\n\nIt only requires a function pointer (the actual grad_fn objects that the edges link together), and an input number that acts as an id for the edge.\nLinking nodes together\nWhen we invoke the product operation of two tensors, we enter into the realm of autogenerated code. All the scripts that we saw in tools/autograd fill a series of templates that wrap the differentiable functions in ATen. These functions have code to construct the backward graph during the forward pass.", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "The gen_variable_type.py script is in charge of writing all this wrapping code. 
This script is called from the tools/autograd/gen_autograd.py during the pytorch build process and it will output the automatically generated function wrappers to torch/csrc/autograd/generated/.\nLet\u2019s take a look at what the generated function for the tensor multiplication looks like. The code has been simplified, but it can be found in the torch/csrc/autograd/generated/VariableType_4.cpp file when compiling pytorch from source.\n```c++\nat::Tensor mul_Tensor(c10::DispatchKeySet ks, const at::Tensor & self, const at::Tensor & other) {\n ...\n auto _any_requires_grad = compute_requires_grad( self, other );\n std::shared_ptr<MulBackward0> grad_fn;\n if (_any_requires_grad) {", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "if (_any_requires_grad) {\n // Creates the link to the actual grad_fn and links the graph for backward traversal\n grad_fn = std::shared_ptr<MulBackward0>(new MulBackward0(), deleteNode);\n grad_fn->set_next_edges(collect_next_edges( self, other ));\n ...\n }\n \u2026\n // Does the actual function call to ATen\n auto _tmp = ([&]() {\n at::AutoDispatchBelowADInplaceOrView guard;\n return at::redispatch::mul(ks & c10::after_autograd_keyset, self, other_);\n })();\nauto result = std::move(_tmp);\n if (grad_fn) {\n // Connects the result to the graph\n set_history(flatten_tensor_args( result ), grad_fn);\n }\n ...\n return result;\n}\n```\nLet\u2019s walk through the most important lines of this code.\nFirst of all, the grad_fn object is created with: grad_fn = std::shared_ptr<MulBackward0>(new MulBackward0(), deleteNode);.", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "After the grad_fn object is created, the edges used to link the nodes together are created by using the grad_fn->set_next_edges(collect_next_edges( self, other )); calls.\n```c++\nstruct MakeNextFunctionList : IterArgs<MakeNextFunctionList> {\n edge_list next_edges;\n using IterArgs<MakeNextFunctionList>::operator();\n void operator()(const Variable& variable) {\n if (variable.defined()) {\n next_edges.push_back(impl::gradient_edge(variable));\n } else {\n next_edges.emplace_back();\n }\n }\n void operator()(const c10::optional<Variable>& variable) {\n if (variable.has_value() && variable->defined()) {\n next_edges.push_back(impl::gradient_edge(*variable));\n } else {\n next_edges.emplace_back();\n }\n }\n};\n\ntemplate <typename... Variables>\nedge_list collect_next_edges(Variables&&... variables) {\n detail::MakeNextFunctionList make;\n make.apply(std::forward<Variables>(variables)...);\n return std::move(make.next_edges);\n}\n", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "return std::move(make.next_edges);\n}\n```\nGiven an input variable (it\u2019s just a regular tensor), collect_next_edges will create an Edge object by calling impl::gradient_edge:\n```c++\n Edge gradient_edge(const Variable& self) {\n // If grad_fn is null (as is the case for a leaf node), we instead\n // interpret the gradient function to be a gradient accumulator, which will\n // accumulate its inputs into the grad property of the variable. These\n // nodes get suppressed in some situations, see \"suppress gradient\n // accumulation\" below. 
Note that only variables which haverequires_grad =\n // True` can have gradient accumulators.\n if (const auto& gradient = self.grad_fn()) {\n return Edge(gradient, self.output_nr());\n } else {", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "} else {\n return Edge(grad_accumulator(self), 0);\n }\n }\n```\nTo understand how edges work, let\u2019s assume that an early executed function produced two output tensors, both with their grad_fn set, each tensor also has an output_nr property with the order in which they were returned. When creating the edges for the current grad_fn, an Edge object per input variable will be created. The edges will point to the variable\u2019s grad_fn and will also track the output_nr to establish ids used when traversing the graph. In the case that the input variables are \u201cleaf\u201d, i.e. they were not produced by any differentiable function, they don\u2019t have a grad_fn attribute set. A special function called a gradient accumulator is set by default as seen in the above code snippet.", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "After the edges are created, the grad_fn graph Node object that is being currently created will hold them using the set_next_edges function. This is what connects grad_fns together, producing the computational graph.\n void set_next_edges(edge_list&& next_edges) {\n next_edges_ = std::move(next_edges);\n for(const auto& next_edge : next_edges_) {\n update_topological_nr(next_edge);\n }\n }\n\nNow, the forward pass of the function will execute, and after the execution set_history will connect the output tensors to the grad_fn Node. \n```c++\ninline void set_history(\n at::Tensor& variable,\n const std::shared_ptr& grad_fn) {\n AT_ASSERT(grad_fn);\n if (variable.defined()) {\n // If the codegen triggers this, you most likely want to add your newly added function\n // to the DONT_REQUIRE_DERIVATIVE list in tools/autograd/gen_variable_type.py", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "TORCH_INTERNAL_ASSERT(isDifferentiableType(variable.scalar_type()));\n auto output_nr =\n grad_fn->add_input_metadata(variable);\n impl::set_gradient_edge(variable, {grad_fn, output_nr});\n } else {\n grad_fn->add_input_metadata(Node::undefined_input());\n }\n}\n```\nset_history calls set_gradient_edge, which just copies the grad_fn and the output_nr to the AutogradMeta object that the tensor has.\n```c++\n void set_gradient_edge(const Variable& self, Edge edge) {\n auto* meta = materialize_autograd_meta(self);\n meta->grad_fn_ = std::move(edge.function);\n meta->output_nr_ = edge.input_nr;\n // For views, make sure this new grad_fn_ is not overwritten unless it is necessary\n // in the VariableHooks::grad_fn below.", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "// in the VariableHooks::grad_fn below.\n // This logic is only relevant for custom autograd Functions for which multiple\n // operations can happen on a given Tensor before its gradient edge is set when\n // exiting the custom Function.\n auto diff_view_meta = get_view_autograd_meta(self);\n if (diff_view_meta && diff_view_meta->has_bw_view()) {\n diff_view_meta->set_attr_version(self._version());\n }\n }\n```\nThis tensor now will be the input to another function and the above steps will 
be all repeated. Check the animation below to see how the graph is created.\n\n\n\nFigure 2: Animation that shows the graph creation\n\nRegistering Python Functions in the graph\nWe have seen how autograd creates the graph for the functions included in ATen. However, when we define our differentiable functions in Python, they are also included in the graph!", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "An autograd python defined function looks like the following:\nclass Exp(torch.autograd.Function):\n @staticmethod\n def forward(ctx, i):\n result = i.exp()\n ctx.save_for_backward(result)\n return result\n\n @staticmethod\n def backward(ctx, grad_output):\n result, = ctx.saved_tensors\n return grad_output * result\n\n# Call the function\nExp.apply(torch.tensor(0.5, requires_grad=True))\n# Outputs: tensor(1.6487, grad_fn=)\n\nIn the above snippet autograd detected our python function when creating the graph. All of this is possible thanks to the Function class. Let\u2019s take a look at what happens when we call apply.", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "apply is defined in the torch._C._FunctionBase class, but this class is not present in the python source. _FunctionBase is defined in C++ by using the python C API to hook C functions together into a single python class. We are looking for a function named THPFunction_apply. \n```c++\nPyObject THPFunction_apply(PyObject cls, PyObject *inputs)\n{\n// Generates the graph node\n THPObjectPtr backward_cls(PyObject_GetAttrString(cls, \"_backward_cls\"));\n if (!backward_cls) return nullptr;\n THPObjectPtr ctx_obj(PyObject_CallFunctionObjArgs(backward_cls, nullptr));\n if (!ctx_obj) return nullptr;\n THPFunction ctx = (THPFunction)ctx_obj.get();\nauto cdata = std::shared_ptr(new PyNode(std::move(ctx_obj)), deleteNode);", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "ctx->cdata = cdata;\n// Prepare inputs and allocate context (grad fn)\n // Unpack inputs will collect the edges\n auto info_pair = unpack_input(inputs);\n UnpackedInput& unpacked_input = info_pair.first;\n InputFlags& input_info = info_pair.second;\n// Initialize backward function (and ctx)\n bool is_executable = input_info.is_executable;\n cdata->set_next_edges(std::move(input_info.next_edges));\n ctx->needs_input_grad = input_info.needs_input_grad.release();\n ctx->is_variable_input = std::move(input_info.is_variable_input);\n// Prepend ctx to input_tuple, in preparation for static method call\n auto num_args = PyTuple_GET_SIZE(inputs);\n THPObjectPtr ctx_input_tuple(PyTuple_New(num_args + 1));\n if (!ctx_input_tuple) return nullptr;\n Py_INCREF(ctx);\n PyTuple_SET_ITEM(ctx_input_tuple.get(), 0, (PyObject)ctx);\n for (int i = 0; i < num_args; ++i) {\n PyObject arg = PyTuple_GET_ITEM(unpacked_input.input_tuple.get(), i);\n Py_INCREF(arg);", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "Py_INCREF(arg);\n PyTuple_SET_ITEM(ctx_input_tuple.get(), i + 1, arg);\n }\n// Call forward\n THPObjectPtr tensor_outputs;\n {\n AutoGradMode grad_mode(false);\n THPObjectPtr forward_fn(PyObject_GetAttrString(cls, \"forward\"));\n if (!forward_fn) return nullptr;\n tensor_outputs = PyObject_CallObject(forward_fn, ctx_input_tuple);\n if (!tensor_outputs) return nullptr;\n }\n// Here is where 
the output tensors are tracked in the graph\n return process_outputs(cls, cdata, ctx, unpacked_input, inputs, std::move(tensor_outputs),\n is_executable, node);\n END_HANDLE_TH_ERRORS\n}\n```\nAlthough this code is hard to read at first due to all the python API calls, it essentially does the same thing as the auto-generated forward functions that we saw for ATen:\nCreate a grad_fn object.\nCollect the edges that link the current grad_fn to the grad_fn of each input tensor.\nExecute the forward function.\nAssign the created grad_fn to the metadata of the output tensors.", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "The grad_fn object is created in:\n // Generates the graph node\n THPObjectPtr backward_cls(PyObject_GetAttrString(cls, \"_backward_cls\"));\n if (!backward_cls) return nullptr;\n THPObjectPtr ctx_obj(PyObject_CallFunctionObjArgs(backward_cls, nullptr));\n if (!ctx_obj) return nullptr;\n THPFunction* ctx = (THPFunction*)ctx_obj.get();\n\n auto cdata = std::shared_ptr<PyNode>(new PyNode(std::move(ctx_obj)), deleteNode);\n ctx->cdata = cdata;\n", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "ctx->cdata = cdata;\n```\nBasically, it asks the python API to get a pointer to the Python object that can execute the user-written function. Then it wraps it into a PyNode, a specialized Node object that calls the python interpreter with the provided python function when apply is executed during the forward pass. Note that in the code cdata is the actual Node object that is part of the graph. ctx is the object that is passed to the python forward/backward functions and it is used to store autograd related information by both the user\u2019s function and PyTorch.\nAs in the regular C++ functions we also call collect_next_edges to track the inputs\u2019 grad_fn objects, but this is done in unpack_input:\n```c++", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "template<bool enforce_variables>\nstd::pair<UnpackedInput, InputFlags> unpack_input(PyObject *args) {\n ...\n flags.next_edges = (flags.is_executable ? collect_next_edges(unpacked.input_vars) : edge_list());\n return std::make_pair(std::move(unpacked), std::move(flags));\n}\n\nAfter this, the edges are assigned to the grad_fn by just doing cdata->set_next_edges(std::move(input_info.next_edges)); and the forward function is called through the python interpreter C API.\nOnce the output tensors are returned from the forward pass, they are processed and converted to variables inside the process_outputs function.\n```c++\nPyObject* process_outputs(PyObject *op_obj, const std::shared_ptr<PyNode>& cdata,\n THPFunction* grad_fn, const UnpackedInput& unpacked,\n PyObject *inputs, THPObjectPtr&& raw_output, bool is_executable,", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "torch::jit::Node* node) {\n ...\n _wrap_outputs(cdata, grad_fn, unpacked.input_vars, raw_output, outputs, is_executable);\n _trace_post_record(node, op_obj, unpacked.input_vars, outputs, is_inplace, unpack_output);\n if (is_executable) {\n _save_variables(cdata, grad_fn);\n } ...\n return outputs.release();\n}\n```\nHere, _wrap_outputs is in charge of setting the forward outputs\u2019 grad_fn to the newly created one. 
For this, it calls another _wrap_outputs function defined in a different file, so the process here gets a little confusing.\n```c++\nstatic void _wrap_outputs(const std::shared_ptr& cdata, THPFunction *self,", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "const variable_list &input_vars, PyObject raw_output, PyObject outputs, bool is_executable)\n{\n auto cdata_if_executable = is_executable ? cdata : nullptr;\n ...\n// Wrap only the tensor outputs.\n // This calls csrc/autograd/custom_function.cpp\n auto wrapped_outputs = _wrap_outputs(input_vars, non_differentiable, dirty_inputs, raw_output_vars, cdata_if_executable);\n...\n}\n```\nThe called _wrap_outputs is the one in charge of setting the autograd metadata in the output tensors:\n```c++\nstd::vector> _wrap_outputs(const variable_list &input_vars,\n const std::unordered_set &non_differentiable,\n const std::unordered_set &dirty_inputs,\n const at::ArrayRef> raw_outputs,\n const std::shared_ptr &cdata) {\nstd::unordered_set inputs;\n \u2026\n // Sets the grad_fn and output_nr of an output Variable.\n auto set_history = [&](Variable& var, uint32_t output_nr, bool is_input, bool is_modified,", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "bool is_differentiable) {\n // Lots of checks\n if (!is_differentiable) {\n ...\n } else if (is_input) {\n // An input has been returned, but it wasn't modified. Return it as a view\n // so that we can attach a new grad_fn to the Variable.\n // Run in no_grad mode to mimic the behavior of the forward.\n {\n AutoGradMode grad_mode(false);\n var = var.view_as(var);\n }\n impl::set_gradient_edge(var, {cdata, output_nr});\n } else if (cdata) {\n impl::set_gradient_edge(var, {cdata, output_nr});\n }\n };\n```\nAnd this is where set_gradient_edge was called and this is how a user-written python function gets included in the computational graph with its associated backward function!\nClosing remarks\nThis blog post is intended to be a code overview on how PyTorch constructs the actual computational graphs that we discussed in the previous post. The next entry will deal with how the autograd engine executes these graphs.", "source": "https://pytorch.org/blog/computational-graphs-constructed-in-pytorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Efficient Multi-Objective Neural Architecture Search with Ax\"\nauthor: David Eriksson, Max Balandat\nfeatured-img: \"/assets/images/MOO-NAS-blog-img2-pareto_frontier_plot.png\"\n\ntl;dr\nMulti-Objective Optimization in Ax enables efficient exploration of tradeoffs (e.g. between model performance and model size or latency) in Neural Architecture Search. This method has been successfully applied at Meta for a variety of products such as On-Device AI. In this post, we provide an end-to-end tutorial that allows you to try it out yourself.\nIntroduction", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "Introduction\nNeural networks continue to grow in both size and complexity. Developing state-of-the-art architectures is often a cumbersome and time-consuming process that requires both domain expertise and large engineering efforts. 
In an attempt to overcome these challenges, several Neural Architecture Search (NAS) approaches have been proposed to automatically design well-performing architectures without requiring a human in-the-loop.", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "Despite being very sample-inefficient, na\u00efve approaches like random search and grid search are still popular for both hyperparameter optimization and NAS (a study conducted at NeurIPS 2019 and ICLR 2020 found that 80% of NeurIPS papers and 88% of ICLR papers tuned their ML model hyperparameters using manual tuning, random search, or grid search). But as models are often time-consuming to train and may require large amounts of computational resources, minimizing the number of configurations that are evaluated is important.", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "Ax is a general tool for black-box optimization that allows users to explore large search spaces in a sample-efficient manner using state-of-the art algorithms such as Bayesian Optimization. At Meta, Ax is used in a variety of domains, including hyperparameter tuning, NAS, identifying optimal product settings through large-scale A/B testing, infrastructure optimization, and designing cutting-edge AR/VR hardware.", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "In many NAS applications, there is a natural tradeoff between multiple metrics of interest. For instance, when deploying models on-device we may want to maximize model performance (e.g., accuracy), while simultaneously minimizing competing metrics such as power consumption, inference latency, or model size, in order to satisfy deployment constraints. In many cases, we have been able to reduce computational requirements or latency of predictions substantially by accepting a small degradation in model performance (in some cases we were able to both increase accuracy and reduce latency!). Principled methods for exploring such tradeoffs efficiently are key enablers of Sustainable AI.", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "At Meta, we have successfully used multi-objective Bayesian NAS in Ax to explore such tradeoffs. Our methodology is being used routinely for optimizing AR/VR on-device ML models. Beyond NAS applications, we have also developed MORBO which is a method for high-dimensional multi-objective optimization that can be used to optimize optical systems for augmented reality (AR).\nFully automated Multi-Objective NAS with Ax\nAx\u2019s Scheduler allows running experiments asynchronously in a closed-loop fashion by continuously deploying trials to an external system, polling for results, leveraging the fetched data to generate more trials, and repeating the process until a stopping condition is met. No human intervention or oversight is required. 
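Schematically, the closed loop that the Scheduler automates looks like the sketch below. This is illustrative pseudocode only, with made-up function names rather than the actual Ax API; see the Ax Scheduler tutorial for the real interface.

```python
# Illustrative sketch of a closed-loop optimization driver (not the Ax API).
# `generate_trials`, `deploy`, `poll_results`, and `should_stop` are
# hypothetical callbacks standing in for Ax's generation strategy, the
# external runner (e.g. TorchX or Kubernetes), metric fetching, and the
# stopping rule.
def run_closed_loop(generate_trials, deploy, poll_results, should_stop):
    completed = []
    while not should_stop(completed):
        candidates = generate_trials(completed)            # propose new configurations
        handles = [deploy(trial) for trial in candidates]  # launch training jobs
        completed.extend(poll_results(handles))            # fetch objective metrics
    return completed
```

In Ax itself, these pieces roughly correspond to the generation strategy, the Runner, and the Metric described below.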
Features of the Scheduler include:", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "\n\nCustomizability of parallelism, failure tolerance, and many other settings;\n\n\nA large selection of state-of-the-art optimization algorithms;\n\n\nSaving in-progress experiments (to a SQL DB or json) and resuming an experiment from storage;\n\n\nEasy extensibility to new backends for running trial evaluations remotely.\n\n\nThe following illustration from the Ax scheduler tutorial summarizes how the scheduler interacts with any external system used to run trial evaluations:\n\n\n\n\nTo run automated NAS with the Scheduler, the main things we need to do are:", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "\n\nDefine a Runner, which is responsible for sending off a model with a particular architecture to be trained on a platform of our choice (like Kubernetes, or maybe just a Docker image on our local machine). In the tutorial below, we use TorchX for handling deployment of training jobs.\n\n\nDefine a Metric, which is responsible for fetching the objective metrics (such as accuracy, model size, latency) from the training job. In our tutorial, we use Tensorboard to log data, and so can use the Tensorboard metrics that come bundled with Ax.\n\n\nTutorial", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "Tutorial\nIn our tutorial we show how to use Ax to run multi-objective NAS for a simple neural network model on the popular MNIST dataset. While the underlying methodology can be used for more complicated models and larger datasets, we opt for a tutorial that is easily runnable end-to-end on a laptop in less than an hour. In our example, we will tune the widths of two hidden layers, the learning rate, the dropout probability, the batch size, and the number of training epochs. The goal is to trade off performance (accuracy on the validation set) and model size (the number of model parameters) using multi-objective Bayesian optimization.\nThe tutorial makes use of the following PyTorch libraries:\n\n\nPyTorch Lightning (specifying the model and training loop)\n\n\nTorchX (for running training jobs remotely / asynchronously)\n\n", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "\nBoTorch (the Bayesian optimization library that powers Ax\u2019s algorithms)\n\nThe complete runnable example is available as a PyTorch Tutorial.\nResults\nThe final results from the NAS optimization performed in the tutorial can be seen in the tradeoff plot below. Here, each point corresponds to the result of a trial, with the color representing its iteration number, and the star indicating the reference point defined by the thresholds we imposed on the objectives. We see that our method was able to successfully explore the trade-offs between validation accuracy and number of parameters and found both large models with high validation accuracy as well as small models with lower validation accuracy. 
Depending on the performance requirements and model size constraints, the decision maker can now choose which model to use or analyze further.\n", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "\n\n\nVisualizations\nAx provides a number of visualizations that make it possible to analyze and understand the results of an experiment. Here, we will focus on the performance of the Gaussian process models that model the unknown objectives, which are used to help us discover promising configurations faster. Ax makes it easy to better understand how accurate these models are and how they perform on unseen data via leave-one-out cross-validation. In the figures below, we see that the model fits look quite good - predictions are close to the actual outcomes, and predictive 95% confidence intervals cover the actual outcomes well. Additionally, we observe that the model size (num_params) metric is much easier to model than the validation accuracy (val_acc) metric.\n\n", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "flex-direction:row; \n }\n\n\n\n\n\n\n\n\n\nTakeaways\n\n\nWe showed how to run a fully automated multi-objective Neural Architecture Search using Ax.\n\n\nUsing the Ax Scheduler, we were able to run the optimization automatically in a fully asynchronous fashion - this can be done locally (as done in the tutorial) or by deploying trials remotely to a cluster (simply by changing the TorchX scheduler configuration).\n\n\nThe state-of-the-art multi-objective Bayesian optimization algorithms available in Ax allowed us to efficiently explore the tradeoffs between validation accuracy and model size.\n\n\nAdvanced Functionality\nAx has a number of other advanced capabilities that we did not discuss in our tutorial. Among these are the following:\nEarly Stopping", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "Early Stopping\nWhen evaluating a new candidate configuration, partial learning curves are typically available while the NN training job is running. We can use the information contained in the partial curves to identify under-performing trials to stop early in order to free up computational resources for more promising candidates. While not demonstrated in the above tutorial, Ax supports early stopping out-of-the-box - see our early stopping tutorial for more details.\nHigh-dimensional search spaces", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "High-dimensional search spaces\nIn our tutorial, we used Bayesian optimization with a standard Gaussian process in order to keep the runtime low. However, these models typically scale to only about 10-20 tunable parameters. Our new SAASBO method (paper, Ax tutorial, BoTorch tutorial) is very sample-efficient and enables tuning hundreds of parameters. SAASBO can easily be enabled by passing use_saasbo=True to choose_generation_strategy.\nAcknowledgements\nWe thank the TorchX team (in particular Kiuk Chung and Tristan Rice) for their help with integrating TorchX with Ax, and the Adaptive Experimentation team @ Meta for their contributions to Ax and BoTorch.\nReferences", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "References\nD. Eriksson, P. Chuang, S. Daulton, M. 
Balandat. Optimizing model accuracy and latency using Bayesian multi-objective neural architecture search. Meta Research blog, July 2021.", "source": "https://pytorch.org/blog/effective-multi-objective-nueral-architecture/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch Adds New Ecosystem Projects for Encrypted AI and Quantum Computing, Expands PyTorch Hub'\nauthor: Team PyTorch\n\nThe PyTorch ecosystem includes projects, tools, models and libraries from a broad community of researchers in academia and industry, application developers, and ML engineers. The goal of this ecosystem is to support, accelerate, and aid in your exploration with PyTorch and help you push the state of the art, no matter what field you are exploring. Similarly, we are expanding the recently launched PyTorch Hub to further help you discover and reproduce the latest research.\nIn this post, we\u2019ll highlight some of the projects that have been added to the PyTorch ecosystem this year and provide some context on the criteria we use to evaluate community projects. We\u2019ll also provide an update on the fast-growing PyTorch Hub and share details on our upcoming PyTorch Summer Hackathon.\n", "source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"} {"text": "\n\n\nRecently added ecosystem projects\nFrom private AI to quantum computing, we\u2019ve seen the community continue to expand into new and interesting areas. The latest projects include:\n\n\nAdvertorch: A Python toolbox for adversarial robustness research. The primary functionalities are implemented in PyTorch. Specifically, AdverTorch contains modules for generating adversarial perturbations and defending against adversarial examples, as well as scripts for adversarial training.\n\n\nbotorch: A modular and easily extensible interface for composing Bayesian optimization primitives, including probabilistic models, acquisition functions, and optimizers.\n\n\nSkorch: A high-level library for PyTorch that provides full scikit-learn compatibility.\n\n", "source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"} {"text": "\n\nPyTorch Geometric: A library for deep learning on irregular input data such as graphs, point clouds, and manifolds.\n\n\nPySyft: A Python library for encrypted, privacy preserving deep learning.\n\n\nPennyLane: A library for quantum ML, automatic differentiation, and optimization of hybrid quantum-classical computations.\n\n\nFlair: A very simple framework for state-of-the-art natural language processing (NLP).\n\n\nWhat makes a great project?\nWhen we review project submissions for the PyTorch ecosystem, we take into account a number of factors that we feel are important and that we would want in the projects we use ourselves. Some of these criteria include:", "source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"} {"text": "\nWell-tested: Users should be confident that ecosystem projects will work well with PyTorch, and include support for CI to ensure that testing is occurring on a continuous basis and the project can run on the latest version of PyTorch.\nClear utility: Users should understand where each project fits within the PyTorch ecosystem and the value it brings.\nPermissive licensing: Users must be able to utilize ecosystem projects without licensing concerns. e.g. 
BSD-3, Apache-2 and MIT licenses\nEasy onboarding: Projects need to have support for binary installation options (pip/Conda), clear documentation and a rich set of tutorials (ideally built into Jupyter notebooks).\nOngoing maintenance: Project authors need to be committed to supporting and maintaining their projects.\nCommunity: Projects should have (or be on track to building) an active, broad-based community.\n", "source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"} {"text": "If you would like to have your project included in the PyTorch ecosystem and featured on pytorch.org/ecosystem, please complete the form here. If you've previously submitted a project for consideration and haven't heard back, we promise to get back to you as soon as we can - we've received a lot of submissions!\nPyTorch Hub for reproducible research | New models", "source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"} {"text": "Since launching the PyTorch Hub in beta, we\u2019ve received a lot of interest from the community including the contribution of many new models. Some of the latest include U-Net for Brain MRI contributed by researchers at Duke University, Single Shot Detection from NVIDIA and Transformer-XL from HuggingFace.\nWe\u2019ve seen organic integration of the PyTorch Hub by folks like paperswithcode, making it even easier for you to try out the state of the art in AI research. In addition, companies like Seldon provide production-level support for PyTorch Hub models on top of Kubernetes.", "source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"} {"text": "What are the benefits of contributing a model in the PyTorch Hub?\n\n\nCompatibility: PyTorch Hub models are prioritized first for testing by the TorchScript and Cloud TPU teams, and used as baselines for researchers across a number of fields.\n\n\nVisibility: Models in the Hub will be promoted on pytorch.org as well as on paperswithcode.\n\n\nEase of testing and reproducibility: Each model comes with code, clear preprocessing requirements, and methods/dependencies to run. There is also tight integration with Google Colab, making it a true single click to get started.\n\n\nPyTorch Hub contributions welcome!", "source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"} {"text": "PyTorch Hub contributions welcome!\nWe are actively looking to grow the PyTorch Hub and welcome contributions. You don\u2019t need to be an original paper author to contribute, and we\u2019d love to see the number of domains and fields broaden. So what types of contributions are we looking for?\n\nArtifacts of a published or an arXiv paper (or something of a similar nature that serves a different audience \u2014 such as ULMFit) that a large audience would need.\n\nAND\n\nReproduces the published results (or better)\n\nOverall these models are aimed at researchers either trying to reproduce a baseline, or trying to build downstream research on top of the model (such as feature-extraction or fine-tuning) as well as researchers looking for a demo of the paper for subjective evaluation. 
Please keep this audience in mind when contributing.", "source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"} {"text": "If you are short on inspiration or would just like to find out what the SOTA is an any given field or domain, checkout the Paperswithcode state-of-the-art gallery.\nPyTorch Summer Hackathon\nWe\u2019ll be hosting the first PyTorch Summer Hackathon next month. We invite you to apply to participate in the in-person hackathon on August 8th to 9th at Facebook's Menlo Park campus. We'll be bringing the community together to work on innovative ML projects that can solve a broad range of complex challenges.\nApplications will be reviewed and accepted on a rolling basis until spaces are filled. For those who cannot join this Hackathon in person, we\u2019ll be following up soon with other ways to participate.\nPlease visit this link to apply.\nThank you for being part of the PyTorch community!\n-Team PyTorch", "source": "https://pytorch.org/blog/pytorch-ecosystem/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models'\nauthor: Chaoyang He, Shen Li, Mahdi Soltanolkotabi, and Salman Avestimehr\nfeatured-img: 'assets/images/pipetransformer_overview.png'\n\nIn this blog post, we describe the first peer-reviewed research paper that explores accelerating the hybrid of PyTorch DDP (torch.nn.parallel.DistributedDataParallel) [1] and Pipeline (torch.distributed.pipeline) - PipeTransformer: Automated Elastic Pipelining for Distributed Training of Large-scale Models (Transformers such as BERT [2] and ViT [3]), published at ICML 2021.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "PipeTransformer leverages automated elastic pipelining for efficient distributed training of Transformer models. In PipeTransformer, we designed an adaptive on-the-fly freeze algorithm that can identify and freeze some layers gradually during training and an elastic pipelining system that can dynamically allocate resources to train the remaining active layers. More specifically, PipeTransformer automatically excludes frozen layers from the pipeline, packs active layers into fewer GPUs, and forks more replicas to increase data-parallel width. We evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on SQuAD and GLUE datasets. Our results show that compared to the state-of-the-art baseline, PipeTransformer attains up to 2.83-fold speedup without losing accuracy. 
We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Next, we will introduce the background, motivation, our idea, design, and how we implement the algorithm and system with PyTorch Distributed APIs.\n\nPaper: http://proceedings.mlr.press/v139/he21a.html\nSource Code: https://DistML.ai.\nSlides: https://docs.google.com/presentation/d/1t6HWL33KIQo2as0nSHeBpXYtTBcy0nXCoLiKd0EashY/edit?usp=sharing\n\nIntroduction\n\n\n\nFigure 1: the Parameter Number of Transformer Models Increases Dramatically.\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\nLarge Transformer models [4][5] have powered accuracy breakthroughs in both natural language processing and computer vision. GPT-3 [4] hit a new record high accuracy for nearly all NLP tasks. Vision Transformer (ViT) [3] also achieved 89\\% top-1 accuracy in ImageNet, outperforming state-of-the-art convolutional networks ResNet-152 and EfficientNet. To tackle the growth in model sizes, researchers have proposed various distributed training techniques, including parameter servers [6][7][8], pipeline parallelism [9][10][11][12], intra-layer parallelism [13][14][15], and zero redundancy data-parallel [16].\nExisting distributed training solutions, however, only study scenarios where all model weights are required to be optimized throughout the training (i.e., computation and communication overhead remains relatively static over different iterations). Recent works on progressive training suggest that parameters in neural networks can be trained dynamically:", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\nFreeze Training: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. NeurIPS 2017\nEfficient Training of BERT by Progressively Stacking. ICML 2019\nAccelerating Training of Transformer-Based Language Models with Progressive Layer Dropping. NeurIPS 2020.\nOn the Transformer Growth for Progressive BERT Training. NACCL 2021\n\n\n\n\n\nFigure 2. Interpretable Freeze Training: DNNs converge bottom-up (Results on CIFAR10 using ResNet). Each pane shows layer-by-layer similarity using SVCCA [17][18]", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "For example, in freeze training [17][18], neural networks usually converge from the bottom-up (i.e., not all layers need to be trained all the way through training). Figure 2 shows an example of how weights gradually stabilize during training in this approach. This observation motivates us to utilize freeze training for distributed training of Transformer models to accelerate training by dynamically allocating resources to focus on a shrinking set of active layers. Such a layer freezing strategy is especially pertinent to pipeline parallelism, as excluding consecutive bottom layers from the pipeline can reduce computation, memory, and communication overhead.\n\n\n\nFigure 3. 
The process of PipeTransformer\u2019s automated and elastic pipelining to accelerate distributed training of Transformer models\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\nWe propose PipeTransformer, an elastic pipelining training acceleration framework that automatically reacts to frozen layers by dynamically transforming the scope of the pipelined model and the number of pipeline replicas. To the best of our knowledge, this is the first paper that studies layer freezing in the context of both pipeline and data-parallel training. Figure 3 demonstrates the benefits of such a combination. First, by excluding frozen layers from the pipeline, the same model can be packed into fewer GPUs, leading to both fewer cross-GPU communications and smaller pipeline bubbles. Second, after packing the model into fewer GPUs, the same cluster can accommodate more pipeline replicas, increasing the width of data parallelism. More importantly, the speedups acquired from these two benefits are multiplicative rather than additive, further accelerating the training.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "The design of PipeTransformer faces four major challenges. First, the freeze algorithm must make on-the-fly and adaptive freezing decisions; however, existing work [17][18] only provides a posterior analysis tool. Second, the efficiency of pipeline re-partitioning results is influenced by multiple factors, including partition granularity, cross-partition activation size, and the chunking (the number of micro-batches) in mini-batches, which require reasoning and searching in a large solution space. Third, to dynamically introduce additional pipeline replicas, PipeTransformer must overcome the static nature of collective communications and avoid potentially complex cross-process messaging protocols when onboarding new processes (one pipeline is handled by one process). Finally, caching can save time for repeated forward propagation of frozen layers, but it must be shared between existing pipelines and newly added ones, as the system cannot afford to create and warm up a dedicated cache for each replica.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\n\n\nFigure 4: An Animation to Show the Dynamics of PipeTransformer\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "As shown in the animation (Figure 4), PipeTransformer is designed with four core building blocks to address the aforementioned challenges. First, we design a tunable and adaptive algorithm to generate signals that guide the selection of layers to freeze over different iterations (Freeze Algorithm). Once triggered by these signals, our elastic pipelining module (AutoPipe), then packs the remaining active layers into fewer GPUs by taking both activation sizes and variances of workloads across heterogeneous partitions (frozen layers and active layers) into account. It then splits a mini-batch into an optimal number of micro-batches based on prior profiling results for different pipeline lengths. Our next module, AutoDP, spawns additional pipeline replicas to occupy freed-up GPUs and maintains hierarchical communication process groups to attain dynamic membership for collective communications. 
Our final module, AutoCache, efficiently shares activations across existing and new data-parallel processes and automatically replaces stale caches during transitions.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Overall, PipeTransformer combines the Freeze Algorithm, AutoPipe, AutoDP, and AutoCache modules to provide a significant training speedup.\nWe evaluate PipeTransformer using Vision Transformer (ViT) on ImageNet and BERT on GLUE and SQuAD datasets. Our results show that PipeTransformer attains up to 2.83-fold speedup without losing accuracy. We also provide various performance analyses for a more comprehensive understanding of our algorithmic and system-wise design.\nFinally, we have also developed open-source flexible APIs for PipeTransformer, which offer a clean separation among the freeze algorithm, model definitions, and training accelerations, allowing for transferability to other algorithms that require similar freezing strategies.\nOverall Design", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Overall Design\nSuppose we aim to train a massive model in a distributed training system where the hybrid of pipelined model parallelism and data parallelism is used to target scenarios where either the memory of a single GPU device cannot hold the model, or if loaded, the batch size is small enough to avoid running out of memory. More specifically, we define our settings as follows:\nTraining task and model definition. We train Transformer models (e.g., Vision Transformer, BERT) on large-scale image or text datasets. The Transformer model has L layers, in which the i-th layer is composed of a forward computation function f_i and a corresponding set of parameters.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Training infrastructure. Assume the training infrastructure contains a GPU cluster that has N GPU servers (i.e. nodes). Each node has I GPUs. Our cluster is homogeneous, meaning that each GPU and server have the same hardware configuration. Each GPU's memory capacity is M_GPU. Servers are connected by a high bandwidth network interface such as InfiniBand interconnect.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Pipeline parallelism. In each machine, we load a model into a pipeline which has K partitions (K also represents the pipeline length). The k-th partition consists of consecutive layers. We assume each partition is handled by a single GPU device. K ≤ I, meaning that we can build multiple pipelines for multiple model replicas in a single machine. We assume all GPU devices in a pipeline belong to the same machine. Our pipeline is a synchronous pipeline, which does not involve stale gradients, and the number of micro-batches is M. In the Linux OS, each pipeline is handled by a single process. We refer the reader to GPipe [10] for more details.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Data parallelism. DDP is a cross-machine distributed data-parallel process group within R parallel workers. Each worker is a pipeline replica (a single process). The r-th worker's index (ID) is rank r. 
For any two pipelines in DDP, they can belong to either the same GPU server or different GPU servers, and they can exchange gradients with the AllReduce algorithm.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Under these settings, our goal is to accelerate training by leveraging freeze training, which does not require all layers to be trained throughout the duration of the training. Additionally, it may help save computation, communication, memory cost, and potentially prevent overfitting by consecutively freezing layers. However, these benefits can only be achieved by overcoming the four challenges of designing an adaptive freezing algorithm, dynamical pipeline re-partitioning, efficient resource reallocation, and cross-process caching, as discussed in the introduction.\n\n\n\nFigure 5. Overview of PipeTransformer Training System\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "PipeTransformer co-designs an on-the-fly freeze algorithm and an automated elastic pipelining training system that can dynamically transform the scope of the pipelined model and the number of pipeline replicas. The overall system architecture is illustrated in Figure 5. To support PipeTransformer\u2019s elastic pipelining, we maintain a customized version of PyTorch Pipeline. For data parallelism, we use PyTorch DDP as a baseline. Other libraries are standard mechanisms of an operating system (e.g.,multi-processing) and thus avoid specialized software or hardware customization requirements. To ensure the generality of our framework, we have decoupled the training system into four core components: freeze algorithm, AutoPipe, AutoDP, and AutoCache. The freeze algorithm (grey) samples indicators from the training loop and makes layer-wise freezing decisions, which will be shared with AutoPipe (green). AutoPipe is an elastic pipeline module that speeds up training by excluding frozen layers from the pipeline and packing the active layers into fewer GPUs (pink), leading to both fewer cross-GPU communications and smaller pipeline bubbles. Subsequently, AutoPipe passes pipeline length information to AutoDP (purple), which then spawns more pipeline replicas to increase data-parallel width, if possible. The illustration also includes an example in which AutoDP introduces a new replica (purple). AutoCache (orange edges) is a cross-pipeline caching module, as illustrated by connections between pipelines. The source code architecture is aligned with Figure 5 for readability and generality.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Implementation Using PyTorch APIs\nAs can be seen from Figure 5, PipeTransformers contain four components: Freeze Algorithm, AutoPipe, AutoDP, and AutoCache. Among them, AutoPipe and AutoDP relies on PyTorch DDP (torch.nn.parallel.DistributedDataParallel) [1] and Pipeline (torch.distributed.pipeline), respectively. In this blog, we only highlight the key implementation details of AutoPipe and AutoDP. For details of Freeze Algorithm and AutoCache, please refer to our paper.\nAutoPipe: Elastic Pipelining\nAutoPipe can accelerate training by excluding frozen layers from the pipeline and packing the active layers into fewer GPUs. 
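Concretely, "freezing" a layer in PyTorch amounts to disabling gradient computation for its parameters, which is what makes frozen layers cheap for AutoPipe to host. The sketch below is only an illustration under that assumption; the `freeze_first_k_blocks` helper and the `blocks` container are hypothetical and not part of PipeTransformer's code:
```python
import torch.nn as nn

def freeze_first_k_blocks(blocks: nn.ModuleList, k: int) -> None:
    # Illustrative only: disable gradients for the first k Transformer blocks,
    # so they keep no .grad buffers and require no optimizer states.
    for block in blocks[:k]:
        for p in block.parameters():
            p.requires_grad_(False)
```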
This section elaborates on the key components of AutoPipe that dynamically 1) partition pipelines, 2) minimize the number of pipeline devices, and 3) optimize mini-batch chunk size accordingly.\nBasic Usage of PyTorch Pipeline", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Basic Usage of PyTorch Pipeline\nBefore diving into the details of AutoPipe, let us warm up with the basic usage of PyTorch Pipeline (torch.distributed.pipeline.sync.Pipe, see this tutorial). More specifically, we present a simple example to understand the design of Pipeline in practice:\n```python\nimport torch\nimport torch.nn as nn\nfrom torch.distributed.pipeline.sync import Pipe\n\n# Step 1: build a model including two linear layers\nfc1 = nn.Linear(16, 8).cuda(0)\nfc2 = nn.Linear(8, 4).cuda(1)\n\n# Step 2: wrap the two layers with nn.Sequential\nmodel = nn.Sequential(fc1, fc2)\n\n# Step 3: build Pipe (torch.distributed.pipeline.sync.Pipe)\nmodel = Pipe(model, chunks=8)\n\n# do training/inference\ninput = torch.rand(16, 16).cuda(0)\noutput_rref = model(input)\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "output_rref = model(input)\n```\nIn this basic example, we can see that before initializing Pipe, we need to partition the model nn.Sequential across multiple GPU devices and set an optimal chunk number (chunks). Balancing computation time across partitions is critical to pipeline training speed, as skewed workload distributions across stages can lead to stragglers, forcing devices with lighter workloads to wait. The chunk number may also have a non-trivial influence on the throughput of the pipeline.\nBalanced Pipeline Partitioning\nIn a dynamic training system such as PipeTransformer, maintaining optimally balanced partitions in terms of parameter numbers does not guarantee the fastest training speed because other factors also play a crucial role:\n\n\n\nFigure 6. The partition boundary is in the middle of a skip connection\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\nCross-partition communication overhead. Placing a partition boundary in the middle of a skip connection leads to additional communications, since tensors in the skip connection must now be copied to a different GPU. For example, with the BERT partitions in Figure 6, a partition must take intermediate outputs from both of the preceding partitions. In contrast, if the boundary is placed after the addition layer, the communication overhead between partitions is visibly smaller. Our measurements show that having cross-device communication is more expensive than having slightly imbalanced partitions (see the Appendix in our paper). Therefore, we do not consider breaking skip connections (highlighted separately as an entire attention layer and MLP layer in green color at line 7 in Algorithm 1).\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\nFrozen layer memory footprint. During training, AutoPipe must recompute partition boundaries several times to balance two distinct types of layers: frozen layers and active layers. The frozen layer's memory cost is a fraction of that of an active layer, given that the frozen layer does not need backward activation maps, optimizer states, and gradients.
Instead of launching intrusive profilers to obtain thorough metrics on memory and computational cost, we define a tunable cost factor to estimate the memory footprint ratio of a frozen layer over the same active layer. Based on empirical measurements in our experimental hardware, we set it to .\n\n\n\n\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\n\nBased on the above two considerations, AutoPipe balances pipeline partitions based on parameter sizes. More specifically, AutoPipe uses a greedy algorithm to allocate all frozen and active layers to evenly distribute partitioned sublayers into GPU devices. Pseudocode is described as the load\\_balance() function in Algorithm 1. The frozen layers are extracted from the original model and kept in a separate model instance in the first device of a pipeline.\nNote that the partition algorithm employed in this paper is not the only option; PipeTransformer is modularized to work with any alternatives.\nPipeline Compression", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Pipeline Compression\nPipeline compression helps to free up GPUs to accommodate more pipeline replicas and reduce the number of cross-device communications between partitions. To determine the timing of compression, we can estimate the memory cost of the largest partition after compression, and then compare it with that of the largest partition of a pipeline at timestep . To avoid extensive memory profiling, the compression algorithm uses the parameter size as a proxy for the training memory footprint. Based on this simplification, the criterion of pipeline compression is as follows:\n\n\n\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\n\nOnce the freeze notification is received, AutoPipe will always attempt to divide the pipeline length by 2 (e.g., from 8 to 4, then 2). By using as the input, the compression algorithm can verify if the result satisfies the criterion in Equation (1). Pseudocode is shown in lines 25-33 in Algorithm 1. Note that this compression makes the acceleration ratio exponentially increase during training, meaning that if a GPU server has a larger number of GPUs (e.g., more than 8), the acceleration ratio will be further amplified.\n\n\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\nFigure 7. Pipeline Bubble: , and denote forward, backward, and the optimizer update of micro-batch on device , respectively. The total bubble size in each iteration is times per micro-batch forward and backward cost.\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\nAdditionally, such a technique can also speed up training by shrinking the size of pipeline bubbles. To explain bubble sizes in a pipeline, Figure 7 depicts how 4 micro-batches run through a 4-device pipeline . In general, the total bubble size is times per micro-batch forward and backward cost. Therefore, it is clear that shorter pipelines have smaller bubble sizes.\nDynamic Number of Micro-Batches", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Prior pipeline parallel systems use a fixed number of micro-batches per mini-batch ( ). 
GPipe suggests , where is the number of partitions (pipeline length). However, given that PipeTransformer dynamically configures , we find it to be sub-optimal to maintain a static during training. Moreover, when integrated with DDP, the value of also has an impact on the efficiency of DDP gradient synchronizations. Since DDP must wait for the last micro-batch to finish its backward computation on a parameter before launching its gradient synchronization, finer micro-batches lead to a smaller overlap between computation and communication. Hence, instead of using a static value, PipeTransformer searches for optimal on the fly in the hybrid of DDP environment by enumerating values ranging from to . For a specific training environment, the profiling needs only to be done once (see Algorithm 1 line 35).", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "For the complete source code, please refer to https://github.com/Distributed-AI/PipeTransformer/blob/master/pipe_transformer/pipe/auto_pipe.py.\nAutoDP: Spawning More Pipeline Replicas\nAs AutoPipe compresses the same pipeline into fewer GPUs, AutoDP can automatically spawn new pipeline replicas to increase data-parallel width.\nDespite the conceptual simplicity, subtle dependencies on communications and states require careful design. The challenges are threefold:\n\n\nDDP Communication: Collective communications in PyTorch DDP requires static membership, which prevents new pipelines from connecting with existing ones;\n\n\nState Synchronization: newly activated processes must be consistent with existing pipelines in the training progress (e.g., epoch number and learning rate), weights and optimizer states, the boundary of frozen layers, and pipeline GPU range;\n\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\nDataset Redistribution: the dataset should be re-balanced to match a dynamic number of pipelines. This not only avoids stragglers but also ensures that gradients from all DDP processes are equally weighted.\n\n\n\n\nFigure 8. AutoDP: handling dynamical data-parallel with messaging between double process groups (Process 0-7 belong to machine 0, while process 8-15 belong to machine 1)\n\nTo tackle these challenges, we create double communication process groups for DDP. As in the example shown in Figure 8, the message process group (purple) is responsible for light-weight control messages and covers all processes, while the active training process group (yellow) only contains active processes and serves as a vehicle for heavy-weight tensor communications during training. The message group remains static, whereas the training group is dismantled and reconstructed to match active processes.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "In T0, only processes 0 and 8 are active. During the transition to T1, process 0 activates processes 1 and 9 (newly added pipeline replicas) and synchronizes necessary information mentioned above using the message group. 
The four active processes then form a new training group, allowing static collective communications to adapt to dynamic memberships.\nTo redistribute the dataset, we implement a variant of DistributedSampler that can seamlessly adjust data samples to match the number of active pipeline replicas.\nThe above design also naturally helps to reduce DDP communication overhead. More specifically, when transitioning from T0 to T1, processes 0 and 1 destroy the existing DDP instances, and active processes construct a new DDP training group using a cached pipelined model (AutoPipe stores the frozen model and the cached model separately).\nWe use the following APIs to implement the design above.\n```python\nimport torch.distributed as dist\nfrom torch.nn.parallel import DistributedDataParallel as DDP\nfrom torch.distributed import Backend\nfrom datetime import timedelta", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "# initialize the process group (this must be called in the initialization of PyTorch DDP)\ndist.init_process_group(init_method='tcp://' + str(self.config.master_addr) + ':' +\n    str(self.config.master_port), backend=Backend.GLOO, rank=self.global_rank, world_size=self.world_size)\n...\n# create the active training process group (yellow in Figure 8)\nself.active_process_group = dist.new_group(ranks=self.active_ranks, backend=Backend.NCCL, timeout=timedelta(days=365))\n...\n# create the message process group (purple in Figure 8)\nself.comm_broadcast_group = dist.new_group(ranks=[i for i in range(self.world_size)], backend=Backend.GLOO, timeout=timedelta(days=365))\n...\n# create the DDP-enabled model when the number of data-parallel workers changes. Note:\n# 1. process_group is the process group to be used for distributed data all-reduction.\n#    If None, the default process group, which is created by torch.distributed.init_process_group, will be used.\n#    In our case, we set it to self.active_process_group", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "# 2. device_ids should be set when the pipeline length = 1 (the model resides on a single CUDA device).\nself.pipe_len = gpu_num_per_process\nif gpu_num_per_process > 1:\n    model = DDP(model, process_group=self.active_process_group, find_unused_parameters=True)\nelse:\n    model = DDP(model, device_ids=[self.local_rank], process_group=self.active_process_group, find_unused_parameters=True)\n\n# to broadcast messages among processes, we use dist.broadcast_object_list\ndef dist_broadcast(object_list, src, group):\n    \"\"\"Broadcasts a given object to all parties.\"\"\"\n    dist.broadcast_object_list(object_list, src, group=group)\n    return object_list\n```\nFor the complete source code, please refer to https://github.com/Distributed-AI/PipeTransformer/blob/master/pipe_transformer/dp/auto_dp.py.\nExperiments\nThis section first summarizes experiment setups and then evaluates PipeTransformer using computer vision and natural language processing tasks.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Hardware. Experiments were conducted on 2 identical machines connected by InfiniBand CX353A (GB/s), where each machine is equipped with 8 NVIDIA Quadro RTX 5000 (16GB GPU memory). GPU-to-GPU bandwidth within a machine (PCI 3.0, 16 lanes) is GB/s.\nImplementation. We used PyTorch Pipe as a building block. The BERT model definition, configuration, and related tokenizer are from HuggingFace 3.5.0.
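For reference, the sketch below shows what pulling those BERT components from HuggingFace typically looks like; the checkpoint name is the stock hub identifier and is shown purely for illustration, not as the exact configuration used in our experiments:
```python
from transformers import BertConfig, BertForQuestionAnswering, BertTokenizer

# Illustrative only: standard HuggingFace (v3.5.0-era) loading calls.
tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
config = BertConfig.from_pretrained("bert-large-uncased")
model = BertForQuestionAnswering.from_pretrained("bert-large-uncased", config=config)
```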
We implemented Vision Transformer using PyTorch by following its TensorFlow implementation. More details can be found in our source code.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Models and Datasets. Experiments employ two representative Transformers in CV and NLP: Vision Transformer (ViT) and BERT. ViT was run on an image classification task, initialized with pre-trained weights on ImageNet21K and fine-tuned on ImageNet and CIFAR-100. BERT was run on two tasks, text classification on the SST-2 dataset from the General Language Understanding Evaluation (GLUE) benchmark, and question answering on the SQuAD v1.1 Dataset (Stanford Question Answering), which is a collection of 100k crowdsourced question/answer pairs.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Training Schemes. Given that large models normally would require thousands of GPU-days {\\emph{e.g.}, GPT-3) if trained from scratch, fine-tuning downstream tasks using pre-trained models has become a trend in CV and NLP communities. Moreover, PipeTransformer is a complex training system that involves multiple core components. Thus, for the first version of PipeTransformer system development and algorithmic research, it is not cost-efficient to develop and evaluate from scratch using large-scale pre-training. Therefore, the experiments presented in this section focuses on pre-trained models. Note that since the model architectures in pre-training and fine-tuning are the same, PipeTransformer can serve both. We discussed pre-training results in the Appendix.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Baseline. Experiments in this section compare PipeTransformer to the state-of-the-art framework, a hybrid scheme of PyTorch Pipeline (PyTorch\u2019s implementation of GPipe) and PyTorch DDP. Since this is the first paper that studies accelerating distributed training by freezing layers, there are no perfectly aligned counterpart solutions yet.\nHyper-parameters. Experiments use ViT-B/16 (12 transformer layers, input patch size) for ImageNet and CIFAR-100, BERT-large-uncased (24 layers) for SQuAD 1.1, and BERT-base-uncased (12 layers) for SST-2. With PipeTransformer, ViT and BERT training can set the per-pipeline batch size to around 400 and 64, respectively. Other hyperparameters (e.g., epoch, learning rate) for all experiments are presented in Appendix.\nOverall Training Acceleration\n\n\n\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\n\nWe summarize the overall experimental results in the table above. Note that the speedup we report is based on a conservative value that can obtain comparable or even higher accuracy. A more aggressive (, ) can obtain a higher speedup but may lead to a slight loss in accuracy. Note that the model size of BERT (24 layers) is larger than ViT-B/16 (12 layers), thus it takes more time for communication.\nPerformance Analysis\nSpeedup Breakdown\nThis section presents evaluation results and analyzes the performance of different components in \\autopipe. More experimental results can be found in the Appendix.\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\n\n\nFigure 9. 
Speedup Breakdown (ViT on ImageNet)\n\nTo understand the efficacy of all four components and their impacts on training speed, we experimented with different combinations and used their training sample throughput (samples/second) and speedup ratio as metrics. Results are illustrated in Figure 9. Key takeaways from these experimental results are:\n\nthe main speedup is the result of elastic pipelining which is achieved through the joint use of AutoPipe and AutoDP;\nAutoCache's contribution is amplified by AutoDP;\nfreeze training alone without system-wise adjustment even downgrades the training speed.\n\nTuning in Freezing Algorithm\n\n\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\nFigure 10. Tuning in Freezing Algorithm\n\nWe ran experiments to show how the in the freeze algorithms influences training speed. The result clearly demonstrates that a larger (excessive freeze) leads to a greater speedup but suffers from a slight performance degradation. In the case shown in Figure 10, where , freeze training outperforms normal training and obtains a -fold speedup. We provide more results in the Appendix.\nOptimal Chunks in the elastic pipeline\n\n\n\nFigure 11. Optimal chunk number in the elastic pipeline\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\nWe profiled the optimal number of micro-batches for different pipeline lengths . Results are summarized in Figure 11. As we can see, different values lead to different optimal , and the throughput gaps across different M values are large (as shown when ), which confirms the necessity of an anterior profiler in elastic pipelining.\nUnderstanding the Timing of Caching\n\n\n\nFigure 12. the timing of caching\n", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\nFigure 12. the timing of caching\n\nTo evaluate AutoCache, we compared the sample throughput of training that activates AutoCache from epoch (blue) with the training job without AutoCache (red). Figure 12 shows that enabling caching too early can slow down training, as caching can be more expensive than the forward propagation on a small number of frozen layers. After more layers are frozen, caching activations clearly outperform the corresponding forward propagation. As a result, AutoCache uses a profiler to determine the proper timing to enable caching. In our system, for ViT (12 layers), caching starts from 3 frozen layers, while for BERT (24 layers), caching starts from 5 frozen layers.\nFor more detailed experimental analysis, please refer to our paper.\nSummarization", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "Summarization\nThis blog introduces PipeTransformer, a holistic solution that combines elastic pipeline-parallel and data-parallel for distributed training using PyTorch Distributed APIs. More specifically, PipeTransformer incrementally freezes layers in the pipeline, packs remaining active layers into fewer GPUs, and forks more pipeline replicas to increase the data-parallel width. 
Evaluations on ViT and BERT models show that compared to the state-of-the-art baseline, PipeTransformer attains up to 2.83\u00d7 speedups without accuracy loss.\nReference\n[1] Li, S., Zhao, Y., Varma, R., Salpekar, O., Noordhuis, P., Li,T., Paszke, A., Smith, J., Vaughan, B., Damania, P., et al. Pytorch Distributed: Experiences on Accelerating Dataparallel Training. Proceedings of the VLDB Endowment,13(12), 2020\n[2] Devlin, J., Chang, M. W., Lee, K., and Toutanova, K. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In NAACL-HLT, 2019", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "[3] Dosovitskiy, A., Beyer, L., Kolesnikov, A., Weissenborn, D., Zhai, X., Unterthiner, T., Dehghani, M., Minderer, M., Heigold, G., Gelly, S., et al. An image is Worth 16x16 words: Transformers for Image Recognition at Scale.\n[4] Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language Models are Few-shot Learners.\n[5] Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., and Chen, Z. Gshard: Scaling Giant Models with Conditional Computation and Automatic Sharding.\n[6] Li, M., Andersen, D. G., Park, J. W., Smola, A. J., Ahmed, A., Josifovski, V., Long, J., Shekita, E. J., and Su, B. Y. Scaling Distributed Machine Learning with the Parameter Server. In 11th {USENIX} Symposium on Operating Systems Design and Implementation ({OSDI} 14), pp. 583\u2013598, 2014.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "[7] Jiang, Y., Zhu, Y., Lan, C., Yi, B., Cui, Y., and Guo, C. A Unified Architecture for Accelerating Distributed DNN Training in Heterogeneous GPU/CPU Clusters. In 14th USENIX Symposium on Operating Systems Design and Implementation (OSDI 20), pp. 463\u2013479. USENIX Association, November 2020. ISBN 978-1-939133-19- 9.\n[8] Kim, S., Yu, G. I., Park, H., Cho, S., Jeong, E., Ha, H., Lee, S., Jeong, J. S., and Chun, B. G. Parallax: Sparsity-aware Data Parallel Training of Deep Neural Networks. In Proceedings of the Fourteenth EuroSys Conference 2019, pp. 1\u201315, 2019.\n[9] Kim, C., Lee, H., Jeong, M., Baek, W., Yoon, B., Kim, I., Lim, S., and Kim, S. TorchGPipe: On-the-fly Pipeline Parallelism for Training Giant Models.\n[10] Huang, Y., Cheng, Y., Bapna, A., Firat, O., Chen, M. X., Chen, D., Lee, H., Ngiam, J., Le, Q. V., Wu, Y., et al. Gpipe: Efficient Training of Giant Neural Networks using Pipeline Parallelism.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "[11] Park, J. H., Yun, G., Yi, C. M., Nguyen, N. T., Lee, S., Choi, J., Noh, S. H., and ri Choi, Y. Hetpipe: Enabling Large DNN Training on (whimpy) Heterogeneous GPU Clusters through Integration of Pipelined Model Parallelism and Data Parallelism. In 2020 USENIX Annual Technical Conference (USENIX ATC 20), pp. 307\u2013321. USENIX Association, July 2020. ISBN 978-1-939133- 14-4.\n[12] Narayanan, D., Harlap, A., Phanishayee, A., Seshadri, V., Devanur, N. R., Ganger, G. R., Gibbons, P. B., and Zaharia, M. Pipedream: Generalized Pipeline Parallelism for DNN Training. In Proceedings of the 27th ACM Symposium on Operating Systems Principles, SOSP \u201919, pp. 1\u201315, New York, NY, USA, 2019. Association for Computing Machinery. ISBN 9781450368735. 
doi: 10.1145/3341301.3359646.\n[13] Lepikhin, D., Lee, H., Xu, Y., Chen, D., Firat, O., Huang, Y., Krikun, M., Shazeer, N., and Chen, Z. Gshard: Scaling Giant Models with Conditional Computation and Automatic Sharding.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "[14] Shazeer, N., Cheng, Y., Parmar, N., Tran, D., Vaswani, A., Koanantakool, P., Hawkins, P., Lee, H., Hong, M., Young, C., Sepassi, R., and Hechtman, B. Mesh-Tensorflow: Deep Learning for Supercomputers. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 31, pp. 10414\u201310423. Curran Associates, Inc., 2018.\n[15] Shoeybi, M., Patwary, M., Puri, R., LeGresley, P., Casper, J., and Catanzaro, B. Megatron-LM: Training Multi-billion Parameter Language Models using Model Parallelism.\n[16] Rajbhandari, S., Rasley, J., Ruwase, O., and He, Y. ZERO: Memory Optimization towards Training a Trillion Parameter Models.\n[17] Raghu, M., Gilmer, J., Yosinski, J., and Sohl Dickstein, J. Svcca: Singular Vector Canonical Correlation Analysis for Deep Learning Dynamics and Interpretability. In NIPS, 2017.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "[18] Morcos, A., Raghu, M., and Bengio, S. Insights on Representational Similarity in Neural Networks with Canonical Correlation. In Bengio, S., Wallach, H., Larochelle, H., Grauman, K., Cesa-Bianchi, N., and Garnett, R. (eds.), Advances in Neural Information Processing Systems 31, pp. 5732\u20135741. Curran Associates, Inc., 2018.", "source": "https://pytorch.org/blog/pipetransformer-automated-elastic-pipelining/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Model Serving in PyTorch'\nauthor: Jeff Smith\nredirect_from: /2019/05/08/model-serving-in-pyorch.html\n\nPyTorch has seen a lot of adoption in research, but people can get confused about how well PyTorch models can be taken into production. This blog post is meant to clear up any confusion people might have about the road to production in PyTorch.\nUsually when people talk about taking a model \u201cto production,\u201d they usually mean performing inference, sometimes called model evaluation or prediction or serving. At the level of a function call, in PyTorch, inference looks something like this:\n\nIn Python\nmodule(input)\n\n\nIn traced modules\nmodule(input)\n\n\nIn C++\nat::Tensor output = module->forward(inputs).toTensor();\n\n\n\nSince we at Facebook perform inference operations using PyTorch hundreds of trillions of times per day, we've done a lot to make sure that inference runs as efficiently as possible.\nServing Strategies", "source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"} {"text": "Serving Strategies\nThat zoomed-in view of how you use models in inference isn't usually the whole story, though. In a real world machine learning system, you often need to do more than just run a single inference operation in the REPL or Jupyter notebook. Instead, you usually need to integrate your model into a larger application in some way. Depending on what you need to do, you can usually take one of the following approaches.\nDirect embedding\nIn application settings like mobile, we often just directly call the model as part of a larger program. 
This isn't just for apps; usually this is how robotics and dedicated devices work as well. At the code level, the call to the model is exactly the same as what is shown above in the section about inference. A key concern is often that a Python interpreter is not present in such environments, which is why PyTorch allows you to call your models from C++ and ship a model without the need for a Python runtime.\nModel microservices", "source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"} {"text": "Model microservices\nIf you're using your model in a server side context and you're managing multiple models, you might choose to treat each individual model (or each individual model version) as a separate service, usually using some sort of packaging mechanism like a Docker container. That service is then made accessible over the network, either using JSON over HTTP or an RPC technology like gRPC. The key characteristic of this approach is that you're defining a service with a single endpoint that just calls your model. You then do all of your model management (promotion, rollback, etc.) via whatever system you already use to manage your services (e.g. Kubernetes, ECS).\nModel servers", "source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"} {"text": "Model servers\nAn additional possible solution is to use a model server. This is an application built to manage and serve models. It allows you to upload multiple models and get distinct prediction endpoints for each of them. Typically such systems include a number of other features to help solve more of the whole problem of managing and serving models. This can include things like metrics, visualization, data pre-processing, and more. Even something as simple as having a system for automatically versioning models can make building important features like model rollbacks much easier.\nEvolving Patterns", "source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"} {"text": "Evolving Patterns\nThe above is a somewhat arbitrary breakdown of different approaches based on a snapshot in time. Design patterns are still evolving. Recently, model server designs have started to adopt more of the technologies of general service infrastructure such as Docker containers and Kubernetes, so many model servers have started to share properties of the model microservice design discussed above. For a deeper dive into the general concepts of model server designs, you can check out my book on machine learning systems.\nServing PyTorch Models\nSo, if you're a PyTorch user, what should you use if you want to take your models to production?\nIf you're on mobile or working on an embedded system like a robot, direct embedding in your application is often the right choice. \nFor mobile specifically, your use case might be served by the ONNX export functionality.", "source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"} {"text": "Note that ONNX, by its very nature, has limitations and doesn't support all of the functionality provided by the larger PyTorch project.\nYou can check out this tutorial on deploying PyTorch models to mobile using ONNX to see if this path might suit your use case.
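For a sense of what that export step involves, here is a minimal, hedged sketch using torch.onnx.export; the tiny stand-in model, file name, and tensor names below are placeholders rather than anything prescribed by the tutorial:
```python
import torch
import torch.nn as nn

# A stand-in model; substitute your own trained module here.
model = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10)).eval()
dummy_input = torch.randn(1, 32)

# Export the traced model to an ONNX file for consumption by an ONNX runtime.
torch.onnx.export(model, dummy_input, "model.onnx",
                  input_names=["input"], output_names=["logits"])
```
The resulting file can then be loaded by an ONNX-compatible runtime on the target device.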
\nThat said, we've heard that there's a lot more that PyTorch users want to do on mobile, so look for more mobile-specific functionality in PyTorch in the future.\nFor other embedded systems, like robots, running inference on a PyTorch model from the C++ API could be the right solution.\nIf you can't use the cloud or prefer to manage all services using the same technology, you can follow this example to build a simple model microservice using the Flask web framework.", "source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"} {"text": "If you want to manage multiple models within a non-cloud service solution, there are teams developing PyTorch support in model servers like MLFlow, Kubeflow, and RedisAI. We're excited to see innovation from multiple teams building OSS model servers, and we'll continue to highlight innovation in the PyTorch ecosystem in the future.", "source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"} {"text": "If you can use the cloud for your application, there are several great choices for working with models in the cloud. For AWS Sagemaker, you can start find a guide to all of the resources from AWS for working with PyTorch, including docs on how to use the Sagemaker Python SDK. You can also see some talks we've given on using PyTorch on Sagemaker. Finally, if you happen to be using PyTorch via FastAI, then they've written a really simple guide to getting up and running on Sagemaker.", "source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"} {"text": "The story is similar across other major clouds. On Google Cloud, you can follow these instructions to get access to a Deep Learning VM with PyTorch pre-installed. On Microsoft Azure, you have a number of ways to get started from Azure Machine Learning Service to Azure Notebooks showing how to use PyTorch.\nYour Models\nWhichever approach you take to bringing your PyTorch models to production, we want to support you and enable your success. Do you love one of the options above? Are you having difficulty with that one crucial feature you can't find support for? We'd love to discuss more on the deployment category on the PyTorch Discuss forums. We'd love to help, and where you're seeing success, amplify your story.", "source": "https://pytorch.org/blog/model-serving-in-pyorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Accelerating PyTorch with CUDA Graphs'\nauthor: Vinh Nguyen, Michael Carilli, Sukru Burc Eryilmaz, Vartika Singh, Michelle Lin, Natalia Gimelshein, Alban Desmaison, Edward Yang\nfeatured-img: 'assets/images/cudagraphs-pytorch.png'\n", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "Today, we are pleased to announce a new advanced CUDA feature, CUDA Graphs, has been brought to PyTorch. Modern DL frameworks have complicated software stacks that incur significant overheads associated with the submission of each operation to the GPU. When DL workloads are strong-scaled to many GPUs for performance, the time taken by each GPU operation diminishes to just a few microseconds and, in these cases, the high work submission latencies of frameworks often lead to low utilization of the GPU. As GPUs get faster and workloads are scaled to more devices, the likelihood of workloads suffering from these launch-induced stalls increases. 
To overcome these performance overheads, NVIDIA engineers worked with PyTorch developers to enable CUDA graph execution natively in PyTorch. This design was instrumental in scaling NVIDIA\u2019s MLPerf workloads (implemented in PyTorch) to over 4000 GPUs in order to achieve record-breaking performance.", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "\n\n", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "\nCUDA graphs support in PyTorch is just one more example of a long collaboration between NVIDIA and Facebook engineers. torch.cuda.amp, for example, trains with half precision while maintaining the network accuracy achieved with single precision and automatically utilizing tensor cores wherever possible. AMP delivers up to 3X higher performance than FP32 with just a few lines of code change. Similarly, NVIDIA\u2019s Megatron-LM was trained using PyTorch on up to 3072 GPUs. In PyTorch, one of the most performant methods to scale-out GPU training is with torch.nn.parallel.DistributedDataParallel coupled with the NVIDIA Collective Communications Library (NCCL) backend.\nCUDA Graphs", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "CUDA Graphs\nCUDA Graphs, which made its debut in CUDA 10, let a series of CUDA kernels to be defined and encapsulated as a single unit, i.e., a graph of operations, rather than a sequence of individually-launched operations. It provides a mechanism to launch multiple GPU operations through a single CPU operation, and hence reduces the launching overheads.", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "The benefits of CUDA graphs can be demonstrated with the simple example in Figure 1. On the top, a sequence of short kernels is launched one-by-one by the CPU. The CPU launching overhead creates a significant gap in between the kernels. If we replace this sequence of kernels with a CUDA graph, initially we will need to spend a little extra time on building the graph and launching the whole graph in one go on the first occasion, but subsequent executions will be very fast, as there will be very little gap between the kernels. The difference is more pronounced when the same sequence of operations is repeated many times, for example, overy many training steps. In that case, the initial costs of building and launching the graph will be amortized over the entire number of training iterations. For a more comprehensive introduction on the topic, see our blog", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "Getting Started with CUDA Graphs and GTC talk Effortless CUDA Graphs.\n\n\n\n Figure 1. Benefits of using CUDA graphs\n\nNCCL support for CUDA graphs\nThe previously mentioned benefits of reducing launch overheads also extend to NCCL kernel launches. NCCL enables GPU-based collective and P2P communications. With NCCL support for CUDA graphs, we can eliminate the NCCL kernel launch overhead.", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "Additionally, kernel launch timing can be unpredictable due to various CPU load and operating system factors. Such time skews can be harmful to the performance of NCCL collective operations. 
With CUDA graphs, kernels are clustered together so that performance is consistent across ranks in a distributed workload. This is especially useful in large clusters where even a single slow node can bring down overall cluster level performance.\nFor distributed multi-GPU workloads, NCCL is used for collective communications. If we look at training a neural network that leverages data parallelism, without NCCL support for CUDA graphs, we\u2019ll need a separate launch for each of forward/back propagation and NCCL AllReduce. By contrast, with NCCL support for CUDA graphs, we can reduce launch overhead by lumping together the forward/backward propagation and NCCL AllReduce all in a single graph launch.\n", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "\n\n\n Figure 2. Looking at a typical neural network, all the kernel launches for NCCL AllReduce can be bundled into a graph to reduce overhead launch time.\n\nPyTorch CUDA Graphs\nFrom PyTorch v1.10, the CUDA graphs functionality is made available as a set of beta APIs. \nAPI overview", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "API overview\nPyTorch supports the construction of CUDA graphs using stream capture, which puts a CUDA stream in capture mode. CUDA work issued to a capturing stream doesn\u2019t actually run on the GPU. Instead, the work is recorded in a graph. After capture, the graph can be launched to run the GPU work as many times as needed. Each replay runs the same kernels with the same arguments. For pointer arguments this means the same memory addresses are used. By filling input memory with new data (e.g., from a new batch) before each replay, you can rerun the same work on new data.", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "Replaying a graph sacrifices the dynamic flexibility of typical eager execution in exchange for greatly reduced CPU overhead. A graph\u2019s arguments and kernels are fixed, so a graph replay skips all layers of argument setup and kernel dispatch, including Python, C++, and CUDA driver overheads. Under the hood, a replay submits the entire graph\u2019s work to the GPU with a single call to cudaGraphLaunch. Kernels in a replay also execute slightly faster on the GPU, but eliding CPU overhead is the main benefit.\nYou should try CUDA graphs if all or part of your network is graph-safe (usually this means static shapes and static control flow, but see the other constraints) and you suspect its runtime is at least somewhat CPU-limited.\nAPI example", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "API example\nPyTorch exposes graphs via a raw torch.cuda.CUDAGraphclass and two convenience wrappers, torch.cuda.graph and torch.cuda.make_graphed_callables.", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "torch.cuda.graph is a simple, versatile context manager that captures CUDA work in its context. Before capture, warm up the workload to be captured by running a few eager iterations. Warmup must occur on a side stream. Because the graph reads from and writes to the same memory addresses in every replay, you must maintain long-lived references to tensors that hold input and output data during capture. 
To run the graph on new input data, copy new data to the capture\u2019s input tensor(s), replay the graph, then read the new output from the capture\u2019s output tensor(s).\nIf the entire network is capture safe, one can capture and replay the whole network as in the following example. \n```python\nN, D_in, H, D_out = 640, 4096, 2048, 1024\nmodel = torch.nn.Sequential(torch.nn.Linear(D_in, H),\n torch.nn.Dropout(p=0.2),\n torch.nn.Linear(H, D_out),", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "torch.nn.Dropout(p=0.1)).cuda()\nloss_fn = torch.nn.MSELoss()\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1)\nPlaceholders used for capture\nstatic_input = torch.randn(N, D_in, device='cuda')\nstatic_target = torch.randn(N, D_out, device='cuda')\nwarmup\nUses static_input and static_target here for convenience,\nbut in a real setting, because the warmup includes optimizer.step()\nyou must use a few batches of real data.\ns = torch.cuda.Stream()\ns.wait_stream(torch.cuda.current_stream())\nwith torch.cuda.stream(s):\n for i in range(3):\n optimizer.zero_grad(set_to_none=True)\n y_pred = model(static_input)\n loss = loss_fn(y_pred, static_target)\n loss.backward()\n optimizer.step()\ntorch.cuda.current_stream().wait_stream(s)\ncapture\ng = torch.cuda.CUDAGraph()\nSets grads to None before capture, so backward() will create\n.grad attributes with allocations from the graph's private pool\noptimizer.zero_grad(set_to_none=True)", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "optimizer.zero_grad(set_to_none=True)\nwith torch.cuda.graph(g):\n static_y_pred = model(static_input)\n static_loss = loss_fn(static_y_pred, static_target)\n static_loss.backward()\n optimizer.step()\nreal_inputs = [torch.rand_like(static_input) for _ in range(10)]\nreal_targets = [torch.rand_like(static_target) for _ in range(10)]\nfor data, target in zip(real_inputs, real_targets):\n # Fills the graph's input memory with new data to compute on\n static_input.copy_(data)\n static_target.copy_(target)\n # replay() includes forward, backward, and step.\n # You don't even need to call optimizer.zero_grad() between iterations\n # because the captured backward refills static .grad tensors in place.\n g.replay()\n # Params have been updated. static_y_pred, static_loss, and .grad\n # attributes hold values from computing on this iteration's data.\n```", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "```\nIf some of your network is unsafe to capture (e.g., due to dynamic control flow, dynamic shapes, CPU syncs, or essential CPU-side logic), you can run the unsafe part(s) eagerly and use torch.cuda.make_graphed_callables to graph only the capture-safe part(s). This is demonstrated next.", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "make_graphed_callables accepts callables (functions or nn.Module and returns graphed versions. By default, callables returned by make_graphed_callables are autograd-aware, and can be used in the training loop as direct replacements for the functions or nn.Module you passed. make_graphed_callables internally creates CUDAGraph objects, runs warm up iterations, and maintains static inputs and outputs as needed. 
Therefore, (unlike with torch.cuda.graph) you don\u2019t need to handle those manually.", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "In the following example, data-dependent dynamic control flow means the network isn\u2019t capturable end-to-end, but make_graphed_callables() lets us capture and run graph-safe sections as graphs regardless:\n```python\nN, D_in, H, D_out = 640, 4096, 2048, 1024\nmodule1 = torch.nn.Linear(D_in, H).cuda()\nmodule2 = torch.nn.Linear(H, D_out).cuda()\nmodule3 = torch.nn.Linear(H, D_out).cuda()\nloss_fn = torch.nn.MSELoss()\noptimizer = torch.optim.SGD(chain(module1.parameters(),\n module2.parameters(),\n module3.parameters()),\n lr=0.1)\nSample inputs used for capture\nrequires_grad state of sample inputs must match\nrequires_grad state of real inputs each callable will see.\nx = torch.randn(N, D_in, device='cuda')\nh = torch.randn(N, H, device='cuda', requires_grad=True)", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "module1 = torch.cuda.make_graphed_callables(module1, (x,))\nmodule2 = torch.cuda.make_graphed_callables(module2, (h,))\nmodule3 = torch.cuda.make_graphed_callables(module3, (h,))\nreal_inputs = [torch.rand_like(x) for _ in range(10)]\nreal_targets = [torch.randn(N, D_out, device=\"cuda\") for _ in range(10)]\nfor data, target in zip(real_inputs, real_targets):\n optimizer.zero_grad(set_to_none=True)\ntmp = module1(data) # forward ops run as a graph\n\nif tmp.sum().item() > 0:\n tmp = module2(tmp) # forward ops run as a graph\nelse:\n tmp = module3(tmp) # forward ops run as a graph\n\nloss = loss_fn(tmp, target)\n# module2's or module3's (whichever was chosen) backward ops,\n# as well as module1's backward ops, run as graphs\nloss.backward()\noptimizer.step()\n\n```\nExample use cases\nMLPerf v1.0 training workloads", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "MLPerf v1.0 training workloads\nThe PyTorch CUDA graphs functionality was instrumental in scaling NVIDIA\u2019s MLPerf training v1.0 workloads (implemented in PyTorch) to over 4000 GPUs, setting new records across the board. We illustrate below two MLPerf workloads where the most significant gains were observed with the use of CUDA graphs, yielding up to ~1.7x speedup.\n\n\n\n\nNumber of GPUs\nSpeedup from CUDA-graphs\n\n\n\n\nMask R-CNN\n272\n1.70\u00d7\n\n\nBERT\n4096\n1.12\u00d7\n\n\n\nTable 1. MLPerf training v1.0 performance improvement with PyTorch CUDA graph.\nMask R-CNN", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "Mask R-CNN\nDeep learning frameworks use GPUs to accelerate computations, but a significant amount of code still runs on CPU cores. CPU cores process meta-data like tensor shapes in order to prepare arguments needed to launch GPU kernels. Processing meta-data is a fixed cost while the cost of the computational work done by the GPUs is positively correlated with batch size. For large batch sizes, CPU overhead is a negligible percentage of total run time cost, but at small batch sizes CPU overhead can become larger than GPU run time. When that happens, GPUs go idle between kernel calls. This issue can be identified on an NSight timeline plot in Figure 3. The plot below shows the \u201cbackbone\u201d portion of Mask R-CNN with per-gpu batch size of 1 before graphing. 
The green portion shows CPU load while the blue portion shows GPU load. In this profile we see that the CPU is maxed out at 100% load while GPU is idle most of the time, there is a lot of empty space between GPU kernels.\n", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "\n\n\n Figure 3: NSight timeline plot of Mask R-CNN\n\nCUDA graphs can automatically eliminate CPU overhead when tensor shapes are static. A complete graph of all the kernel calls is captured during the first step, in subsequent steps the entire graph is launched with a single op, eliminating all the CPU overhead, as observed in Figure 4.. \n\n\n\n Figure 4: CUDA graphs optimization\n", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "\n Figure 4: CUDA graphs optimization\n\nWith graphing, we see that the GPU kernels are tightly packed and GPU utilization remains high. The graphed portion now runs in 6 ms instead of 31ms, a speedup of 5x. We did not graph the entire model, mostly just the resnet backbone, which resulted in an overall speedup of ~1.7x.\nIn order to increase the scope of the graph, we made some changes in the software stack to eliminate some of the CPU-GPU synchronization points. In MLPerf v1.0, this work included changing the implementation of torch.randperm function to use CUB instead of Thrust because the latter is a synchronous C++ template library. These improvements are available in the latest NGC container.\nBERT", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "Similarly, by graph capturing the model, we eliminate CPU overhead and accompanying synchronization overhead. CUDA graphs implementation results in a 1.12x performance boost for our max-scale BERT configuration. To maximize the benefits from CUDA graphs, it is important to keep the scope of the graph as large as possible. To achieve this, we modified the model script to remove CPU-GPU synchronizations during the execution such that the full model can be graph captured. Furthermore, we also made sure that the tensor sizes during the execution are static within the scope of the graph. For instance, in BERT, only a specific subset of total tokens contribute to loss function, determined by a pre-generated mask tensor. Extracting the indices of valid tokens from this mask, and using these indices to gather the tokens that contribute to the loss, results in a tensor with a dynamic shape, i.e. with shape that is not constant across iterations. In order to make sure tensor sizes are static, instead of using the dynamic-shape tensors in the loss computation, we used static shape tensors where a mask is used to indicate which elements are valid. As a result, all tensor shapes are static. Dynamic shapes also require CPU-GPU synchronization since it has to involve the framework\u2019s memory management on the CPU side. With static-only shapes, no CPU-GPU synchronizations are necessary. This is shown in Figure 5.", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "\n\n\n Figure 5. By using a fixed size tensor and a boolean mask as described in the text, we are able to eliminate CPU synchronizations needed for dynamic sized tensors \n\nCUDA graphs in NVIDIA DL examples collection\nSingle GPU use cases can also benefit from using CUDA Graphs. 
This is particularly true for workloads launching many short kernels with small batches. A good example is training and inference for recommender systems. Below we present preliminary benchmark results for NVIDIA's implementation of the Deep Learning Recommendation Model (DLRM) from our Deep Learning Examples collection. Using CUDA graphs for this workload provides significant speedups for both training and inference. The effect is particularly visible when using very small batch sizes, where CPU overheads are more pronounced.", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "CUDA graphs are being actively integrated into other PyTorch NGC model scripts and the NVIDIA Github deep learning examples. Stay tuned for more examples on how to use it.\n\n\n\n\n\n\n Figure 6: CUDA graphs optimization for the DLRM model.\n\nCall to action: CUDA Graphs in PyTorch v1.10", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "Call to action: CUDA Graphs in PyTorch v1.10\nCUDA graphs can provide substantial benefits for workloads that comprise many small GPU kernels and hence bogged down by CPU launch overheads. This has been demonstrated in our MLPerf efforts, optimizing PyTorch models. Many of these optimizations, including CUDA graphs, have or will eventually be integrated into our PyTorch NGC model scripts collection and the NVIDIA Github deep learning examples. For now, check out our open-source MLPerf training v1.0 implementation which could serve as a good starting point to see CUDA graph in action. Alternatively, try the PyTorch CUDA graphs API on your own workloads.\nWe thank many NVIDIAN\u2019s and Facebook engineers for their discussions and suggestions:", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "Karthik Mandakolathur US,\nTomasz Grel, \nPLJoey Conway, \nArslan Zulfiqar US\nAuthors bios\nVinh Nguyen\nDL Engineer, NVIDIA\nVinh is a Deep learning engineer and data scientist, having published more than 50 scientific articles attracting more than 2500 citations. At NVIDIA, his work spans a wide range of deep learning and AI applications, including speech, language and vision processing, and recommender systems.\nMichael Carilli\nSenior Developer Technology Engineer, NVIDIA", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "Senior Developer Technology Engineer, NVIDIA\nMichael worked at the Air Force Research Laboratory optimizing CFD code for modern parallel architectures. He holds a PhD in computational physics from the University of California, Santa Barbara. A member of the PyTorch team, he focuses on making GPU training fast, numerically stable, and easy(er) for internal teams, external customers, and Pytorch community users.\nSukru Burc Eryilmaz\nSenior Architect in Dev Arch, NVIDIA\nSukru received his PhD from Stanford University, and B.S from Bilkent University. He currently works on improving the end-to-end performance of neural network training both at single-node scale and supercomputer scale. 
\nVartika Singh\nTech Partner Lead for DL Frameworks and Libraries, NVIDIA", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "Vartika has led teams working in confluence of cloud and distributed computing, scaling and AI, influencing the design and strategy of major corporations. She currently works with the major frameworks and compiler organizations and developers within and outside NVIDIA, to help the design to work efficiently and optimally on NVIDIA hardware.\nMichelle Lin\nProduct Intern, NVIDIA\nMichelle is currently pursuing an undergraduate degree in Computer Science and Business Administration at UC Berkeley. She is currently managing execution of projects such as conducting market research and creating marketing assets for Magnum IO.\nNatalia Gimelshein\nApplied Research Scientist, Facebook\nNatalia Gimelshein worked on GPU performance optimization for deep learning workloads at NVIDIA and Facebook. She is currently a member of the PyTorch core team, working with partners to seamlessly support new software and hardware features.", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "Alban Desmaison\nResearch Engineer, Facebook\nAlban studied engineering and did a PhD in Machine Learning and Optimization, during which he was an OSS contributor to PyTorch prior to joining Facebook. His main responsibilities are maintaining some core library and features (autograd, optim, nn) and working on making PyTorch better in general.\nEdward Yang\nResearch Engineer, Facebook\nEdward studied CS at MIT and then Stanford before starting at Facebook. He is a part of the PyTorch core team and is one of the leading contributors to PyTorch.", "source": "https://pytorch.org/blog/accelerating-pytorch-with-cuda-graphs/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch 1.10 Release, including CUDA Graphs APIs, Frontend and Compiler Improvements'\nauthor: Team PyTorch\n\nWe are excited to announce the release of PyTorch 1.10. This release is composed of over 3,400 commits since 1.9, made by 426 contributors. We want to sincerely thank our community for continuously improving PyTorch. \nPyTorch 1.10 updates are focused on improving training and performance of PyTorch, and developer usability. The full release notes are available here. Highlights include:\n1. CUDA Graphs APIs are integrated to reduce CPU overheads for CUDA workloads.\n2. Several frontend APIs such as FX, torch.special, and nn.Module Parametrization, have moved from beta to stable.\n3. Support for automatic fusion in JIT Compiler expands to CPUs in addition to GPUs.\n4. Android NNAPI support is now available in beta.", "source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"} {"text": "Along with 1.10, we are also releasing major updates to the PyTorch libraries, which you can read about in this blog post.\nFrontend APIs\n(Stable) Python code transformations with FX\nFX provides a Pythonic platform for transforming and lowering PyTorch programs. It is a toolkit for pass writers to facilitate Python-to-Python transformation of functions and nn.Module instances. This toolkit aims to support a subset of Python language semantics\u2014rather than the whole Python language\u2014to facilitate ease of implementation of transforms. With 1.10, FX is moving to stable. 
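To give a small taste of the workflow, the sketch below symbolically traces a toy module, inspects the resulting graph, and rewrites one node; the module and the relu-to-gelu swap are illustrative only:

```python
import torch
import torch.fx
import torch.nn as nn
import torch.nn.functional as F

class MyModule(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(4, 4)

    def forward(self, x):
        return torch.relu(self.linear(x)) + 1.0

# symbolic_trace produces a GraphModule: a node-level Graph IR plus regenerated Python code.
traced = torch.fx.symbolic_trace(MyModule())
print(traced.graph)   # node-by-node IR
print(traced.code)    # the regenerated forward() as Python source

# Pass writers can rewrite the graph node by node, e.g. swap relu for gelu.
for node in traced.graph.nodes:
    if node.op == "call_function" and node.target == torch.relu:
        node.target = F.gelu
traced.recompile()
print(traced(torch.randn(2, 4)))
```

Passes like this operate purely on the Graph IR, so the transformed result remains an ordinary nn.Module that can be trained or exported as usual.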
\nYou can learn more about FX in the official documentation and GitHub examples of program transformations implemented using torch.fx.\n(Stable) torch.special", "source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"} {"text": "(Stable) torch.special\nA torch.special module, analogous to SciPy\u2019s special module, is now available in stable. The module has 30 operations, including gamma, Bessel, and (Gauss) error functions. \nRefer to this documentation for more details.\n(Stable) nn.Module Parametrization\nnn.Module parametrizaton, a feature that allows users to parametrize any parameter or buffer of an nn.Module without modifying the nn.Module itself, is available in stable. This release adds weight normalization (weight_norm), orthogonal parametrization (matrix constraints and part of pruning) and more flexibility when creating your own parametrization.", "source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"} {"text": "Refer to this tutorial and the general documentation for more details.\n(Beta) CUDA Graphs APIs Integration\nPyTorch now integrates CUDA Graphs APIs to reduce CPU overheads for CUDA workloads.\nCUDA Graphs greatly reduce the CPU overhead for CPU-bound cuda workloads and thus improve performance by increasing GPU utilization. For distributed workloads, CUDA Graphs also reduce jitter, and since parallel workloads have to wait for the slowest worker, reducing jitter improves overall parallel efficiency.\nIntegration allows seamless interop between the parts of the network captured by cuda graphs, and parts of the network that cannot be captured due to graph limitations.", "source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"} {"text": "Read the note for more details and examples, and refer to the general documentation for additional information. \n[Beta] Conjugate View\nPyTorch\u2019s conjugation for complex tensors (torch.conj()) is now a constant time operation, and returns a view of the input tensor with a conjugate bit set as can be seen by calling torch.is_conj() . This has already been leveraged in various other PyTorch operations like matrix multiplication, dot product etc., to fuse conjugation with the operation leading to significant performance gain and memory savings on both CPU and CUDA.\nDistributed Training\nDistributed Training Releases Now in Stable", "source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"} {"text": "Distributed Training Releases Now in Stable\nIn 1.10, there are a number of features that are moving from beta to stable in the distributed package:\n* (Stable) Remote Module: This feature allows users to operate a module on a remote worker like using a local module, where the RPCs are transparent to the user. Refer to this documentation for more details.\n* (Stable) DDP Communication Hook: This feature allows users to override how DDP synchronizes gradients across processes. Refer to this documentation for more details.", "source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"} {"text": "\n(Stable) ZeroRedundancyOptimizer: This feature can be used in conjunction with DistributedDataParallel to reduce the size of per-process optimizer states. With this stable release, it now can handle uneven inputs to different data-parallel workers. Check out this tutorial. We also improved the parameter partition algorithm to better balance memory and computation overhead across processes. 
Refer to this documentation and this tutorial to learn more. \n\nPerformance Optimization and Tooling\n[Beta] Profile-directed typing in TorchScript", "source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"} {"text": "[Beta] Profile-directed typing in TorchScript\nTorchScript has a hard requirement for source code to have type annotations in order for compilation to be successful. For a long time, it was only possible to add missing or incorrect type annotations through trial and error (i.e., by fixing the type-checking errors generated by torch.jit.script one by one), which was inefficient and time consuming. \nNow, we have enabled profile directed typing for torch.jit.script by leveraging existing tools like MonkeyType, which makes the process much easier, faster, and more efficient. For more details, refer to the documentation.\n(Beta) CPU Fusion\nIn PyTorch 1.10, we've added an LLVM-based JIT compiler for CPUs that can fuse together sequences of torch library calls to improve performance. While we've had this capability for some time on GPUs, this release is the first time we've brought compilation to the CPU.", "source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"} {"text": "You can check out a few performance results for yourself in this Colab notebook.\n(Beta) PyTorch Profiler\nThe objective of PyTorch Profiler is to target the execution steps that are the most costly in time and/or memory, and visualize the workload distribution between GPUs and CPUs. PyTorch 1.10 includes the following key features:\n\nEnhanced Memory View: This helps you understand your memory usage better. This tool will help you avoid Out of Memory errors by showing active memory allocations at various points of your program run.\nEnhanced Automated Recommendations: This helps provide automated performance recommendations to help optimize your model. The tools recommend changes to batch size, TensorCore, memory reduction technologies, etc.\nEnhanced Kernel View: Additional columns show grid and block sizes as well as shared memory usage and registers per thread.\n", "source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"} {"text": "\nDistributed Training: Gloo is now supported for distributed training jobs.\nCorrelate Operators in the Forward & Backward Pass: This helps map the operators found in the forward pass to the backward pass, and vice versa, in a trace view.\nTensorCore: This tool shows the Tensor Core (TC) usage and provides recommendations for data scientists and framework developers.\nNVTX: Support for NVTX markers was ported from the legacy autograd profiler.\nSupport for profiling on mobile devices: The PyTorch profiler now has better integration with TorchScript and mobile backends, enabling trace collection for mobile workloads.\n\nRefer to this documentation for details. Check out this tutorial to learn how to get started with this feature. \nPyTorch Mobile\n(Beta) Android NNAPI Support in Beta", "source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"} {"text": "(Beta) Android NNAPI Support in Beta\nLast year we released prototype support for Android\u2019s Neural Networks API (NNAPI). NNAPI allows Android apps to run computationally intensive neural networks on the most powerful and efficient parts of the chips that power mobile phones, including GPUs (Graphics Processing Units) and NPUs (specialized Neural Processing Units). 
\nSince the prototype we\u2019ve added more op coverage, added support for load-time flexible shapes and ability to run the model on the host for testing. Try out this feature using the tutorial.", "source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"} {"text": "Additionally, Transfer Learning steps have been added to Object Detection examples. Check out this GitHub page to learn more. Please provide your feedback or ask questions on the forum. You can also check out this presentation to get an overview. \nThanks for reading. If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Twitter, Medium, YouTube, and LinkedIn. \nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-1.10-released/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Ambient Clinical Intelligence: Generating Medical Reports with PyTorch\"\nauthor: Miguel Del-Agua, Principal Research Scientist, Nuance and Jeremy Jancsary, Senior Principal Research Scientist, Nuance\nfeatured-img: \"\"\n\nIntroduction\nComplete and accurate clinical documentation is an essential tool for tracking patient care. It allows for treatment plans to be shared among care teams to aid in continuity of care and ensures a transparent and effective process for reimbursement.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "Physicians are responsible for documenting patient care. Traditional clinical documentation methods have resulted in a sub-par patient-provider experience, less time interacting with patients, and decreased work-life balance. A significant amount of physicians\u2019 time is spent in front of the computer doing administrative tasks. As a result, patients are less satisfied with the overall experience, and physicians, who prepare for years studying medicine, cannot practice at the top of their license and are burned out. Every hour physicians provide direct clinical face time to patients results in nearly two additional hours spent on EHR and desk work within the clinic day. Outside office hours, physicians spend another 1 to 2 hours of personal time each night doing additional computer and other clerical work.\n\n42% of all physicians reported having burnout. \u2013 Medscape\n", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "\nThe problem has grown worse due to the pandemic with 64% of U.S. physicians now reporting burnout. - AAFP\n\"Too many bureaucratic tasks e.g., charting and paperwork\" is the leading contribution to burnout, increased computerization ranks 4th. - Medscape\n75% of U.S. Consumers Wish Their Healthcare Experiences Were More Personalized,- Business Wire\n", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "\n61% of patients would visit their healthcare provider more often if the communication experience felt more personalized. \u2013 Business Wire\n\nPhysician burnout is one of the primary causes for increased medical errors, malpractice suits, turnover, and decreased access to care. Burnout leads to an increase in healthcare costs and a decrease in overall patient satisfaction. 
Burnout costs the United States $4.6 billion a year.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "What can we do to bring back trust, joy, and humanity to the delivery of healthcare? A significant portion of the administrative work consists of entering patient data into Electronic Health Records (EHRs) and creating clinical documentation. Clinical documentation is created from information already in the EHR as well as from the patient-provider encounter conversation. \nThis article will showcase how the Nuance Dragon Ambient eXperience (DAX), an AI-powered, voice-enabled, ambient clinical intelligence solution, automatically documents patient encounters accurately and efficiently at the point of care and the technologies that enable it.\nNuance DAX enhances the quality of care and patient experience, increases provider efficiency and satisfaction, and improves financial outcomes. It can be used in office and telehealth settings in all ambulatory specialties, including primary and urgent care.\n\n\n", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "\nNatural Language Processing\nNatural Language Processing (NLP) is one of the most challenging fields in Artificial Intelligence (AI). It comprehends a set of algorithms that allow computers to understand or generate the language used by humans. These algorithms can process and analyze vast amounts of natural language data from different sources (either sound or text) to build models that can understand, classify, or even generate natural language as humans would. Like other fields in AI, NLP has significantly progressed thanks to the advent of Deep Learning (DL), which has resulted in models that can obtain results on par with humans in some tasks.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "These advanced NLP techniques are being applied in healthcare. During a typical patient-provider encounter, a conversation ensues where the doctor constructs, through questions and answers, a chronological description of the development of the patient's presenting illness or symptoms. A physician examines the patient and makes clinical decisions to establish a diagnosis and determine a treatment plan. This conversation, and data in the EHR, provide the required information for physicians to generate the clinical documentation, referred to as medical reports.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "Two main NLP components play a role in automating the creation of clinical documentation. The first component, Automatic Speech Recognition (ASR), is used to translate speech into text. It takes the audio recording of the encounter and generates a conversation transcription (cf. Figure 2). The second component, Automatic Text Summarization, helps generate summaries from large text documents. This component is responsible for understanding and capturing the nuances and most essential aspects from the transcribed conversation into a final report in narrative form (cf. 
Figure 3), structured form, or a combination of both.\nWe will focus on this second component, Automatic Text Summarization, which is a difficult task with many challenges:\n\nIts performance is tied to the ASR quality from multiple speakers (noisy input).\nThe input is conversational in nature and contains layman's terms.\nProtected Health Information (PHI) regulations limit medical data access.\n", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "\nThe information for one output sentence is potentially spread across multiple conversation turns.\nThere is no explicit sentence alignment between input and output.\nVarious medical specialties, encounter types, and EHR systems constitute a broad and complex output space. \nPhysicians have different styles of conducting encounters and have their preferences for medical reports; there is no standard. \nStandard summarization metrics might differ from human judgment of quality.\n\n\n\n\n\nFigure 2: Transcript of a patient-doctor conversation\n\n\n\n\n\nFigure 3: Excerpt of an AI-generated medical report. HPI stands for History of present illness.\n\nText Summarization with PyTorch and Fairseq", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "Text Summarization with PyTorch and Fairseq\nPyTorch is an open-source machine learning framework developed by Facebook that helps researchers prototype Deep Learning models. The Fairseq toolkit is built on top of PyTorch and focuses on sequence generation tasks, such as Neural Machine Translation (NMT) or Text Summarization. Fairseq features an active community that is continuously providing reference implementations of state-of-the-art models. It contains many built-in components (model architectures, modules, loss functions, and optimizers) and is easily extendable with plugins.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "Text summarization constitutes a significant challenge in NLP. We need models capable of generating a short version of a document while retaining the key points and avoiding uninformative content. These challenges can be addressed with different approaches. 1). Abstractive text summarization aimed at training models that can generate a summary in narrative form. 2). Extractive methods where the models are trained to select the most important parts from the input text. 3). A combination of the two, where the essential parts from the input are selected and then summarized in an abstractive fashion. Hence, summarization can be accomplished via a single end-to-end network or as a pipeline of extractive and abstractive components. To that end, Fairseq provides all the necessary tools to be successful in our endeavor. 
It features either end-to-end models such as the classical Transformer, different types of Language Models and pre-trained versions that enable researchers to focus on what matters most\u2014to build state-of-the-art models that generate valuable reports.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "However, we are not just summarizing the transcribed conversation; we generate high-quality medical reports, which have many considerations.\n\nEvery section of a medical report is different in terms of content, structure, fluency, etc.\nAll medical facts mentioned in the conversation should be present in the report, for example, a particular treatment or dosage.\nIn the healthcare domain, the vocabulary is extensive, and models need to deal with medical terminology.\nPatient-doctor conversations are usually much longer than the final report.\n", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "All these challenges require our researchers to run a battery of extensive experiments. Thanks to the flexibility of PyTorch and Fairseq, their productivity has greatly increased. Further, the ecosystem offers an easy path from ideation, implementation, experimentation, and final roll-out to production. Using multiple GPUs or CPUs is as simple as providing an additional argument to the tools, and because of the tight Python integration, PyTorch code can be easily debugged.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "In our continuous effort to contribute to the open-source community, features have been developed at Nuance and pushed to the Fairseq GitHub repository. These try to overcome some of the challenges mentioned such as, facilitating copying of, especially rare or unseen, words from the input to summary, training speedups by improving Tensor Core utilization, and ensuring TorchScript compatibility of different Transformer configurations. Following, we will show an example of how to train a Transformer model with a Pointer Generator mechanism (Transformer-PG), which can copy words from the input.\nHow to build a Transformer model with a Pointer Generator mechanism\nIn this step-by-step guide, it is assumed the user has already installed PyTorch and Fairseq.\n1. Create a vocabulary and extend it with source position markers:\nThese markers will allow the model to point to any word in the input sequence.\n```python\nvocab_size=\nposition_markers=512\nexport LC_ALL=C\ncat train.src train.tgt |", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "export LC_ALL=C\ncat train.src train.tgt |\n tr -s '[:space:]' '\\n' |\n sort |\n uniq -c |\n sort -k1,1bnr -k2 |\n head -n \"$((vocab_size - 4))\" |\n awk '{ print $2 \" \" $1 }' > dict.pg.txt\npython3 -c \"[print(' 0'.format(n)) for n in range($position_markers)]\" >> dict.pg.txt\n\nThis will create a file \"dict.pg.txt\" that contains the \\ most frequent words followed by 512 position markers named from \"\\\" to \"\\\".\n\nIn case we have an input like\n\n```python\nsrc = \"Hello, I'm The Dogtor\"\n\nit could happen that our model has been trained without the word \"Dogtor\" in its vocabulary. 
Therefore, when we feed this sequence into the model, it should be converted to:\nsrc = \"Hello, I'm The \"\n\nNow, \"\\\" is part of our vocabulary and could be predicted by the model (this is where the pointer-generator comes in). In such a case, we will only need to post-process the output to replace \"\\\" by the word at input position 3.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "2. Preprocess the text data to replace unknown words by its positional markers:\nWe can use the scripts from https://github.com/pytorch/fairseq/tree/master/examples/pointer_generator.\n```python\nConsidering we have our data in:\ntrain_src = /path/to/train.src\ntrain_tgt = /path/to/train.tgt\nvalid_src = /path/to/valid.src\nvalid_tgt = /path/to/valid.tgt\n./preprocess.py --source /path/to/train.src \\\n --target /path/to/train.tgt \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/train.pg.src \\\n --target-out /path/to/train.pg.tgt\n./preprocess.py --source /path/to/valid.src \\\n --target /path/to/valid.tgt \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/valid.pg.src \\\n --target-out /path/to/valid.pg.tgt\n./preprocess.py --source /path/to/test.src \\", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "./preprocess.py --source /path/to/test.src \\\n --vocab <(cut -d' ' -f1 dict.pg.txt) \\\n --source-out /path/to/test.pg.src\n\n### 3. Now let's binarize the data, so that it can be processed faster:\n\n```python\nfairseq-preprocess --task \"translation\" \\\n --source-lang \"pg.src\" \\\n --target-lang \"pg.tgt\" \\\n --trainpref /path/to/train \\\n --validpref /path/to/valid \\\n --srcdict dict.pg.txt \\\n --cpu \\\n --joined-dictionary \\\n --destdir \n\nYou might notice the type of task is \"translation\". This is because there is no \"summarization\" task available; we could understand it as a kind of NMT task where the input and output languages are shared and the output (summary) is shorter than the input.\n4. Now we can train the model:\n```python\nfairseq-train \\\n --save-dir \\", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "--save-dir \\\n --task \"translation\" \\\n --source-lang \"src\" \\\n --target-lang \"tgt\" \\\n --arch \"transformer_pointer_generator\" \\\n --max-source-positions 512 \\\n --max-target-positions 128 \\\n --truncate-source \\\n --max-tokens 2048 \\\n --required-batch-size-multiple 1 \\\n --required-seq-len-multiple 8 \\\n --share-all-embeddings \\\n --dropout 0.1 \\\n --criterion \"cross_entropy\" \\\n --optimizer adam \\\n --adam-betas '(0.9, 0.98)' \\\n --adam-eps 1e-9 \\\n --update-freq 4 \\\n --lr 0.004 \\\n # Pointer Generator\n --alignment-layer -1 \\\n --alignment-heads 1 \\\n --source-position-markers 512\n```\nThis configuration makes use of features Nuance has contributed back to Fairseq:", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "\nTransformer with a Pointer Generator mechanism to facilitate copying of words from the input.\nSequence length padded to a multiple of 8 to better use tensor cores and reduce training time.\n\n5. 
Now let's take a look at how to generate a summary with our new medical report generation system:\n```python\nimport torch\nfrom examples.pointer_generator.pointer_generator_src.transformer_pg import TransformerPointerGeneratorModel\nPatient-Doctor conversation\ninput = \"[doctor] Lisa Simpson, thirty six year old female, presents to the clinic today because \" \\\n \"she has severe right wrist pain\"\nLoad the model\nmodel = TransformerPointerGeneratorModel.from_pretrained(data_name_or_path=,\n model_name_or_path=,\n checkpoint_file=\"checkpoint_best.pt\")\nresult = model.translate([input], beam=2)\nprint(result[0])", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "print(result[0])\nMs. is a 36-year-old female who presents to the clinic today for evaluation of her right wrist.\n\n### 6. Alternatively, we can use fairseq-interactive and a postprocessing tool to substitute positional unknown tokens by its words from the input:\n\n```python\nfairseq-interactive \\\n --batch-size \\\n --task translation \\\n --source-lang src \\\n --target-lang tgt \\\n --path /checkpoint_last.pt \\\n --input /path/to/test.pg.src \\\n --buffer-size 20 \\\n --max-len-a 0 \\\n --max-len-b 128 \\\n --beam 2 \\\n --skip-invalid-size-inputs-valid-test | tee generate.out\n\ngrep \"^H-\" generate.out | cut -f 3- > generate.hyp\n\n./postprocess.py \\\n --source <(awk 'NF<512' /path/to/test.pg.src) \\\n --target generate.hyp \\\n --target-out generate.hyp.processed\n", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "--target-out generate.hyp.processed\n```\nNow we have the final set of reports in \"generate.hyp.processed\", with \"\\\" replaced by the original word from the input sequence.\nModel Deployment\nPyTorch offers great flexibility in modeling and a rich surrounding ecosystem. However, while several recent articles have suggested that the use of PyTorch in research and academia may be close to surpassing TensorFlow, there seems to be an overall sense of TensorFlow being the preferred platform for deployment to production. Is this still the case in 2021? Teams looking to serve their PyTorch models in production have a few options.\nBefore describing our journey, let's take a brief detour and define the term model.\nModels as computation graphs", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "A few years back, it was still common for machine learning toolkits to support only particular classes of models of a rather fixed and rigid structure, with only a few degrees of freedom (like the kernel of a support vector machine or the number of hidden layers of a neural network). Inspired by foundational work in Theano, toolkits like Microsoft's CNTK or Google's TensorFlow were among the first to popularize a more flexible view on models, as computation graphs with associated parameters that can be estimated from data. This view blurred the boundaries between popular types of models (such as DNNs or SVMs), as it became easy to blend the characteristics of each into your type of graph. Still, such a graph had to be defined upfront before estimating its parameters, and it was pretty static. 
This made it easy to save models to a self-contained bundle, like a TensorFlow SavedModel (such a bundle simply contains the structure of the graph, as well as the concrete values of the estimated parameters). However, debugging such models can be difficult because the statements in the Python code that build the graph are logically separate from the lines that execute it. Researchers also long for easier ways of expressing dynamic behavior, such as the computation steps of the forward pass of a model being conditionally dependent on its input data (or its previous output).", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "Most recently, the above limitations have led to a second revolution spearheaded by PyTorch and TensorFlow 2. The computation graph is no longer defined explicitly. Instead, it will be populated implicitly as the Python code executes operations on tensor arguments. An essential technique that powers this development is automatic differentiation. As the computation graph is being built implicitly while executing the steps of the forward pass, all the necessary data will be tracked for later computation of the gradient concerning the model parameters. This allows for great flexibility in training a model, but it raises an important question. If the computation happening inside a model is only implicitly defined through our Python code's steps as it executes concrete data, what is it that we want to save as a model? The answer \u2013 at least initially \u2013 was the Python code with all its dependencies, along with the estimated parameters. This is undesirable for practical reasons. For instance, there is a danger that the team working on model deployment does not exactly reproduce the Python code dependencies used during training, leading to subtly divergent behavior. The solution typically consists of combining two techniques, scripting and tracing, that is, extra annotations in your Python code and execution of your code on exemplary input data, allowing PyTorch to define and save the graph that should be executed during later inference on new, unseen data. This requires some discipline by whoever creates the model code (arguably voiding some of the original flexibility of eager execution), but it results in a self-contained model bundle in TorchScript format. The solution in TensorFlow 2 is remarkably similar.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "Serving our report generation models", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "Our journey in deploying the report generation models reflects the above discussion. We started out serving our models by deploying the model code and its dependencies along with the parameter checkpoints in a custom Docker image exposing a gRPC service interface. However, we soon noticed that it became error-prone to replicate the exact code and environment used by the modeling team while estimating the parameters. Moreover, this approach prevented us from leveraging high-performance model serving frameworks like NVIDIA's Triton, which is written in C++ and requires self-contained models that can be used without a Python interpreter. 
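For readers unfamiliar with the mechanics, the following is a minimal sketch of the scripting and tracing paths mentioned above; the toy module merely stands in for the real report generation model:

```python
import torch
from torch import nn

class TinySummarizerStub(nn.Module):
    """Toy stand-in for a real model; shows only the export mechanics."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(16, 16)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Data-dependent control flow is preserved by scripting (not by tracing).
        if x.sum() > 0:
            x = torch.relu(x)
        return self.proj(x)

model = TinySummarizerStub().eval()

# Scripting compiles the Python code itself, keeping control flow intact.
scripted = torch.jit.script(model)

# Tracing records the operations executed for one concrete example input.
traced = torch.jit.trace(nn.Linear(16, 16).eval(), torch.randn(1, 16))

# Either path yields a self-contained TorchScript bundle that can later be
# loaded and executed without the original Python code or its dependencies.
scripted.save("summarizer_stub.pt")
reloaded = torch.jit.load("summarizer_stub.pt")
print(reloaded(torch.randn(1, 16)).shape)
```

Scripting preserves data-dependent branches, while tracing only records what ran for the given example input; both produce the self-contained artifact that deployment requires.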
At this stage, we were facing a choice between attempting to export our PyTorch models to ONNX or TorchScript format. ONNX is an open specification for representing machine learning models that increasingly finds adoption. It is powered by a high-performance runtime developed by Microsoft (ONNX Runtime). While we were able to achieve performance acceleration for our TensorFlow BERT-based model using ONNX Runtime, at the time one of our PyTorch model required some operators that weren\u2019t yet supported in ONNX. Rather than implement these using custom operators, we decided to look into TorchScript for the time being.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "A maturing ecosystem\nIs it all roses? No, it has been a rockier journey than we expected. We encountered what seems to be a memory leak in the MKL libraries used by PyTorch while serving the PyTorch code directly. We encountered deadlocks in trying to load multiple models from multiple threads. We had difficulties exporting our models to ONNX and TorchScript formats. Models would not work out-of-the-box on hardware with multiple GPUs, they always accessed the particular GPU device on which they were exported. We encountered excessive memory usage in the Triton inference server while serving TorchScript models, which we found out was due to automatic differentiation accidentally being enabled during the forward pass. However, the ecosystem keeps improving, and there is a helpful and vibrant open-source community eager to work with us to mitigate such issues.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "Where to go from here? For those that require the flexibility of serving PyTorch code directly, without going through the extra step of exporting self-contained models, it is worth pointing out that the TorchServe project now provides a way of bundling the code together with parameter checkpoints into a single servable archive, greatly reducing the risk of code and parameters running apart. To us, however, exporting models to TorchScript has proven beneficial. It provides a clear interface between modeling and deployment teams, and TorchScript further reduces the latency when serving models on GPU via its just-in-time compilation engine.\nScaling at large and the future", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "Scaling at large and the future\nFinally, efficient deployment to the cloud is about more than just computing the response of a single model instance efficiently. Flexibility is needed in managing, versioning and updating models. High-level scalability must be achieved via techniques such as load-balancing, horizontal scaling and vertical scaling. If many models are involved, scale-to-zero quickly becomes a topic as it is unacceptable to pay for serving models that do not answer any requests. Providing such extra functionality on top of a low-level inference server like Triton is the job of an orchestration framework. After gaining some first experience with KubeFlow, to that end, we decided to turn our attention to Azure ML, which provides similar functionality but integrates more deeply with the Azure platform, on which we crucially rely for large parts of our technology stack already. 
This part of our journey has just begun.\nConclusion", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "Academia has long recognized that we are \"standing on the shoulders of giants.\" As Artificial Intelligence is maturing from a scientific discipline into technology, the same spirit of collaboration that originally fueled its scientific foundation has carried over into the world of software engineering. Open-source enthusiasts join technology companies worldwide to build open software ecosystems that allow for new angles at solving some of the most pressing challenges of modern society. In this article, we've taken a look at Nuance's Dragon Ambient eXperience, an AI-powered, voice-enabled solution that automatically documents patient care, reducing healthcare providers' administrative burdens. Nuance DAX improves the patient-provider experience, reduces physician burnout, and improves financial outcomes. It brings back trust, joy, and humanity to the delivery of healthcare. Fairseq and PyTorch have proven to be an incredible platform for powering this AI technology, and in turn, Nuance has contributed back some of its innovations in this space. For further reading, we invite you to take a look at our recent ACL publication and the Nuance \"What's Next\" blog.", "source": "https://pytorch.org/blog/ambient-clinical-intelligence-generating-medical-reports-with-pytorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Microsoft becomes maintainer of the Windows version of PyTorch'\nauthor: Maxim Lukiyanov - Principal PM at Microsoft, Emad Barsoum - Group EM at Microsoft, Guoliang Hua - Principal EM at Microsoft, Nikita Shulga - Tech Lead at Facebook, Geeta Chauhan - PE Lead at Facebook, Chris Gottbrath - Technical PM at Facebook, Jiachen Pu - Engineer at Facebook\n\nAlong with the PyTorch 1.6 release, we are excited to announce that Microsoft has expanded its participation in the PyTorch community and will be responsible for the development and maintenance of the PyTorch build for Windows.", "source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"} {"text": "According to the latest Stack Overflow developer survey, Windows remains the primary operating system for the developer community (46% Windows vs 28% MacOS). Jiachen Pu initially made a heroic effort to add support for PyTorch on Windows, but due to limited resources, Windows support for PyTorch has lagged behind other platforms. Lack of test coverage resulted in unexpected issues popping up every now and then. Some of the core tutorials, meant for new users to learn and adopt PyTorch, would fail to run. The installation experience was also not as smooth, with the lack of official PyPI support for PyTorch on Windows. Lastly, some of the PyTorch functionality was simply not available on the Windows platform, such as the TorchAudio domain library and distributed training support. 
To help alleviate this pain, Microsoft is happy to bring its Windows expertise to the table and bring PyTorch on Windows to its best possible self.", "source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"} {"text": "In the PyTorch 1.6 release, we have improved the core quality of the Windows build by bringing test coverage up to par with Linux for core PyTorch and its domain libraries and by automating tutorial testing. Thanks to the broader PyTorch community, which contributed TorchAudio support to Windows, we were able to add test coverage to all three domain libraries: TorchVision, TorchText and TorchAudio. In subsequent releases of PyTorch, we will continue improving the Windows experience based on community feedback and requests. So far, the feedback we received from the community points to distributed training support and a better installation experience using pip as the next areas of improvement.", "source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"} {"text": "In addition to the native Windows experience, Microsoft released a preview adding GPU compute support to Windows Subsystem for Linux (WSL) 2 distros, with a focus on enabling AI and ML developer workflows. WSL is designed for developers that want to run any Linux based tools directly on Windows. This preview enables valuable scenarios for a variety of frameworks and Python packages that utilize NVIDIA CUDA for acceleration and only support Linux. This means WSL customers using the preview can run native Linux based PyTorch applications on Windows unmodified without the need for a traditional virtual machine or a dual boot setup.\nGetting started with PyTorch on Windows", "source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"} {"text": "Getting started with PyTorch on Windows\nIt's easy to get started with PyTorch on Windows. To install PyTorch using Anaconda with the latest GPU support, run the command below. To install different supported configurations of PyTorch, refer to the installation instructions on pytorch.org.\nconda install pytorch torchvision cudatoolkit=10.2 -c pytorch\nOnce you install PyTorch, learn more by visiting the PyTorch Tutorials and documentation.\n\n\n\nGetting started with PyTorch on Windows Subsystem for Linux", "source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"} {"text": "The preview of NVIDIA CUDA support in WSL is now available to Windows Insiders running Build 20150 or higher. In WSL, the command to install PyTorch using Anaconda is the same as the above command for native Windows. If you prefer pip, use the command below.\npip install torch torchvision\nYou can use the same tutorials and documentation inside your WSL environment as on native Windows. 
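Whichever path you choose, a quick sanity check confirms that the installation can see the GPU; this generic snippet is not part of the official tutorials:

```python
import torch

print(torch.__version__)
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("Device:", torch.cuda.get_device_name(0))
    x = torch.randn(3, 3, device="cuda")
    print((x @ x.T).cpu())  # simple GPU matmul round-trip
```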
This functionality is still in preview so if you run into issues with WSL please share feedback via the WSL GitHub repo or with NVIDIA CUDA support share via NVIDIA\u2019s Community Forum for CUDA on WSL.\nFeedback", "source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"} {"text": "Feedback\nIf you find gaps in the PyTorch experience on Windows, please let us know on the PyTorch discussion forum or file an issue on GitHub using the #module: windows label.", "source": "https://pytorch.org/blog/microsoft-becomes-maintainer-of-the-windows-version-of-pytorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Accelerated PyTorch 2 Transformers\"\nauthor: Michael Gschwind, Driss Guessous, Christian Puhrsch\n\nThe PyTorch 2.0 release includes a new high-performance implementation of the PyTorch Transformer API with the goal of making training and deployment of state-of-the-art Transformer models affordable. Following the successful release of \u201cfastpath\u201d inference execution (\u201cBetter Transformer\u201d), this release introduces high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SPDA).", "source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"} {"text": "You can take advantage of the new fused SDPA kernels either by calling the new SDPA operator directly (as described in the SDPA tutorial), or transparently via integration into the pre-existing PyTorch Transformer API. All features of the PyTorch Transformer API will continue to work compatibly, with many features mapped to high-performance SDPA kernels, while other features are impossible to support with higher performance (e.g., need_weights, as per below) while expanded high-performance support for other features may still be under active development. \\\n \\", "source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"} {"text": "\\\nSimilar to the \u201cfastpath\u201d architecture, custom kernels are fully integrated into the PyTorch Transformer API \u2013 thus, using the native Transformer and MultiHeadAttention API will enable users to transparently see significant speed improvements. Unlike the \u201cfastpath\u201d architecture, the newly introduced \u201ccustom kernels\u201d support many more use cases including models using Cross-Attention, Transformer Decoders, and for training models, in addition to the existing fastpath inference for fixed and variable sequence length Transformer Encoder and Self Attention use cases.", "source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"} {"text": "To take full advantage of different hardware models and Transformer use cases, multiple SDPA custom kernels are supported, with custom kernel selection logic that will pick the highest-performance kernel for a given model and hardware type. In particular, the first custom kernels included with the PyTorch 2.0 release are the Flash Attention kernel (sdpa_flash, for 16-bit floating point training and inference on Nvidia GPUs with SM80+ architecture level) and the xFormers memory-efficient attention kernel (sdpa_mem_eff, for 16-bit and 32-bit floating point training and inference on a broad range of Nvidia GPUs). 
A general-purpose kernel sdpa_math provides an implementation when the custom kernels are not applicable.", "source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"} {"text": "As mentioned, custom kernels provide a wider range of support for execution scenarios To ensure efficient execution (e,g., to use GPU tensor cores), model configurations need to meet a small number of requirements. This list of requirements will evolve over time, prospectively relaxing constraints limiting the usage of currently supported custom kernels, or providing additional kernels in the future.\nFor the most up to date list of custom kernels and dispatch constraints, you can refer to sdp_utils.h. As of PyTorch 2.0, the existing fused SDPA kernels have the following constraints:\n\nFlash Attention only supports 16 bit floating point data types (float16 and bfloat16).\nThe head dimension must be a multiple of 8 for 16-bit floating point numbers and a multiple of 4 for 32-bit floating point numbers. At present, the maximum head_dim support for the Flash Attention custom kernel is 128.\n", "source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"} {"text": "\nThe CUDA architecture level must be sm5x or better for the mem_efficient kernel, and sm80 for Flash Attention.\nFlash Attention supports arbitrary dropout, in PyTorch 2.0 the mem_efficient kernel does not support dropout (i.e., dropout must be set to zero for this kernel to be selected in PyTorch 2.0). \nTo support variable-sequence length batches, all SDPA kernels support Nested Tensor inputs that combine input data and padding information using variable sequence length tensors for forward. (You can find more information about Nested Tensors in the Nested Tensor tutorial.)\nYou can specify both a key_padding_mask and an attn_mask by combining them before passing them to the SDPA operator. In particular, you can use the per-batch-element key padding mask of the nn.Transformer API to implement training for variable-sequence length inputs in a batch.\n", "source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"} {"text": "\nAt present, the only attention mask supported by fused kernel implementation is the causal mask commonly used for training. To specify the causal mask in custom kernels, it must be specified with the is_causal boolean and attn_mask must be None. \nSupport for Nested Tensors is still under development. Specifically, in PyTorch 2.0, only the sdpa_math kernel supports training with Nested Tensors. Also, PyTorch 2.0 does not support Nested Tensors as part of code being compiled with torch.compile(). \nThe SDPA operator does not support returning averaged attention weights because computing them defeats the optimizations that enabled fused kernels to execute more efficiently. The argument need_weights for torch.nn.MultiheadAttention's forward function defaults to True. In order to use the fused kernels, need_weights needs to be set to need_weights=False.\n", "source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"} {"text": "We find that an attention mask is rarely used in real-world applications, except for the causal mask during training. Consequently, we reduce kernel complexity and compute cost by building in the option to use a causal mask as attention mask, and select this new capability with the is_causal parameter introduced in conjunction with the new SDPA operator. 
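For illustration, here is a minimal sketch of calling the new operator directly with built-in causal masking; the tensor shapes and dtype are arbitrary placeholders chosen to satisfy the fused-kernel constraints listed above:

```python
import torch
import torch.nn.functional as F

# (batch, num_heads, seq_len, head_dim); head_dim kept a multiple of 8 and <= 128
q = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
k = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)
v = torch.randn(2, 8, 128, 64, device="cuda", dtype=torch.float16)

# is_causal=True applies the causal mask inside the kernel, so no explicit
# attn_mask tensor is ever allocated or loaded from memory.
out = F.scaled_dot_product_attention(q, k, v, attn_mask=None,
                                     dropout_p=0.0, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 128, 64])
```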
\nProviding the is_causal Boolean flag for the frequently used causal mask also obviates the expensive and memory-intensive allocation of a causal mask, increasing training memory efficiency by allowing more memory to be used for large batch sizes, and reduce memory bandwidth and cache contention \u2013 which are both at a premium in GPU accelerators \u2013 by not needing to load an attention mask tensor.", "source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"} {"text": "If the constraints of none of the available custom kernels are met, then training falls back to using the default sdpa_math kernel, implementing the mathematical equations for scaled dot product attention using a sequence of PyTorch operator to implement SDPA. This is the most general \u201ccatch-all\u201d fallback kernel to ensure successful training for all models.\nIn addition to the existing Transformer API, model developers may also use the scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() operator. This operator may be used to efficiently implement multi-head attention by combining it with in-projection and outprojection, as described in the SDPA tutorial.", "source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"} {"text": "In addition to adding custom kernels, Accelerated PyTorch 2 Transformers are integrated with PyTorch 2.0 compilation. To use your model while benefiting from the additional acceleration of PT2-compilation (for inference or training), pre-process the model with\nmodel = torch.compile(model)\n\nWe have achieved major speedups for training transformer models and in particular large language models with Accelerated PyTorch 2 Transformers using a combination of custom kernels and torch.compile(). \n\nFigure: Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as for nanoGPT shown here.", "source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"} {"text": "Finally, because the custom kernels are much more memory efficient, try to increase the size of training batches to achieve faster training with increased batch size.\nIn addition to automatic kernel selection, a context manager enables developers to override the kernel selection algorithm \u2013 this is not required for day to day operation, but enables developers to debug their code as well as enable performance engineers to override kernel selection. The SDPA tutorial provides additional information on using the SDPA context manager.\nIn addition to availability as part of the nn.Transformer API, Accelerated PyTorch 2 Transformer custom kernels are also available in conjunction with the torchtext, torchvision, and fairseq domain libraries with the launch of PyTorch 2.0.", "source": "https://pytorch.org/blog/accelerated-pytorch-2/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Mapillary Research: Seamless Scene Segmentation and In-Place Activated BatchNorm'\nauthor: Lorenzo Porzi, Mapillary\nredirect_from: /2019/07/23/mapillary-research.html\n\nWith roads in developed countries like the US changing up to 15% annually, Mapillary addresses a growing demand for keeping maps updated by combining images from any camera into a 3D visualization of the world. 
Mapillary's independent and collaborative approach enables anyone to collect, share, and use street-level images for improving maps, developing cities, and advancing the automotive industry.\nToday, people and organizations all over the world have contributed more than 600 million images toward Mapillary's mission of helping people understand the world's places through images and making this data available, with clients and partners including the World Bank, HERE, and Toyota Research Institute.", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "Mapillary\u2019s computer vision technology brings intelligence to maps in an unprecedented way, increasing our overall understanding of the world. Mapillary runs state-of-the-art semantic image analysis and image-based 3d modeling at scale and on all its images. In this post we discuss two recent works from Mapillary Research and their implementations in PyTorch - Seamless Scene Segmentation [1] and In-Place Activated BatchNorm [2] - generating Panoptic segmentation results and saving up to 50% of GPU memory during training, respectively.\nSeamless Scene Segmentation\nGithub project page: https://github.com/mapillary/seamseg/\n\n\n", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "\nThe objective of Seamless Scene Segmentation is to predict a \u201cpanoptic\u201d segmentation [3] from an image, that is a complete labeling where each pixel is assigned with a class id and, where possible, an instance id. Like many modern CNNs dealing with instance detection and segmentation, we adopt the Mask R-CNN framework [4], using ResNet50 + FPN [5] as a backbone. This architecture works in two stages: first, the \u201cProposal Head\u201d selects a set of candidate bounding boxes on the image (i.e. the proposals) that could contain an object; then, the \u201cMask Head\u201d focuses on each proposal, predicting its class and segmentation mask. The output of this process is a \u201csparse\u201d instance segmentation, covering only the parts of the image that contain countable objects (e.g. cars and pedestrians).", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "To complete our panoptic approach coined Seamless Scene Segmentation, we add a third stage to Mask R-CNN. Stemming from the same backbone, the \u201cSemantic Head\u201d predicts a dense semantic segmentation over the whole image, also accounting for the uncountable or amorphous classes (e.g. road and sky). The outputs of the Mask and Semantic heads are finally fused using a simple non-maximum suppression algorithm to generate the final panoptic prediction. All details about the actual network architecture, used losses and underlying math can be found at the project website for our CVPR 2019 paper [1].", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "While several versions of Mask R-CNN are publicly available, including an official implementation written in Caffe2, at Mapillary we decided to build Seamless Scene Segmentation from scratch using PyTorch, in order to have full control and understanding of the whole pipeline. 
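At a very high level, the data flow of this three-stage network can be sketched as follows; the skeleton is purely illustrative (the head modules are placeholders to be supplied by the caller), not Mapillary's actual code:

```python
from torch import nn

class PanopticSketch(nn.Module):
    """Illustrative data flow only: a shared backbone feeding instance and semantic heads."""
    def __init__(self, backbone, proposal_head, mask_head, semantic_head, fuse):
        super().__init__()
        self.backbone = backbone          # e.g. ResNet50 + FPN
        self.proposal_head = proposal_head
        self.mask_head = mask_head
        self.semantic_head = semantic_head
        self.fuse = fuse                  # NMS-style fusion of instance and semantic outputs

    def forward(self, images):
        feats = self.backbone(images)                  # shared features
        proposals = self.proposal_head(feats)          # candidate boxes
        instances = self.mask_head(feats, proposals)   # class + mask per proposal
        semantic = self.semantic_head(feats)           # dense labeling incl. stuff classes
        return self.fuse(instances, semantic)          # final panoptic prediction
```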
While doing so we encountered a couple of main stumbling blocks, and had to come up with some creative workarounds we are going to describe next.\nDealing with variable-sized tensors", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "Dealing with variable-sized tensors\nSomething that sets aside panoptic segmentation networks from traditional CNNs is the prevalence of variable-sized data. In fact, many of the quantities we are dealing with cannot be easily represented with fixed sized tensors: each image contains a different number of objects, the Proposal head can produce a different number of proposals for each image, and the images themselves can have different sizes. While this is not a problem per-se -- one could just process images one at a time -- we would still like to exploit batch-level parallelism as much as possible. Furthermore, when performing distributed training with multiple GPUs, DistributedDataParallel expects its inputs to be batched, uniformly-sized tensors.\n\n\n", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "\nOur solution to these issues is to wrap each batch of variable-sized tensors in a PackedSequence. PackedSequence is little more than a glorified list class for tensors, tagging its contents as \u201crelated\u201d, ensuring that they all share the same type, and providing useful methods like moving all the tensors to a particular device, etc. When performing light-weight operations that wouldn\u2019t be much faster with batch-level parallelism, we simply iterate over the contents of the PackedSequence in a for loop. When performance is crucial, e.g. in the body of the network, we simply concatenate the contents of the PackedSequence, adding zero padding as required (like in RNNs with variable-length inputs), and keeping track of the original dimensions of each tensor.", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "PackedSequences also help us deal with the second problem highlighted above. We slightly modify DistributedDataParallel to recognize PackedSequence inputs, splitting them in equally sized chunks and distributing their contents across the GPUs.\nAsymmetric computational graphs with Distributed Data Parallel\nAnother, perhaps more subtle, peculiarity of our network is that it can generate asymmetric computational graphs across GPUs. In fact, some of the modules that compose the network are \u201coptional\u201d, in the sense that they are not always computed for all images. As an example, when the Proposal head doesn\u2019t output any proposal, the Mask head is not traversed at all. If we are training on multiple GPUs with DistributedDataParallel, this results in one of the replicas not computing gradients for the Mask head parameters.", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "Prior to PyTorch 1.1, this resulted in a crash, so we had to develop a workaround. Our simple but effective solution was to compute a \u201cfake forward pass\u201d when no actual forward is required, i.e. 
something like this:\ndef fake_forward():\n fake_input = get_correctly_shaped_fake_input()\n fake_output = mask_head(fake_input)\n fake_loss = fake_output.sum() * 0\n return fake_loss\n\nHere, we generate a batch of bogus data, pass it through the Mask head, and return a loss that always back-progates zeros to all parameters.\nStarting from PyTorch 1.1 this workaround is no longer required: by setting find_unused_parameters=True in the constructor, DistributedDataParallel is told to identify parameters whose gradients have not been computed by all replicas and correctly handle them. This leads to some substantial simplifications in our code base!\nIn-place Activated BatchNorm\nGithub project page: https://github.com/mapillary/inplace_abn/", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "Most researchers would probably agree that there are always constraints in terms of available GPU resources, regardless if their research lab has access to only a few or multiple thousands of GPUs. In a time where at Mapillary we still worked at rather few and mostly 12GB Titan X - style prosumer GPUs, we were searching for a solution that virtually enhances the usable memory during training, so we would be able to obtain and push state-of-the-art results on dense labeling tasks like semantic segmentation. In-place activated BatchNorm is enabling us to use up to 50% more memory (at little computational overhead) and is therefore deeply integrated in all our current projects (including Seamless Scene Segmentation described above).\n\n\n", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "\nWhen processing a BN-Activation-Convolution sequence in the forward pass, most deep learning frameworks (including PyTorch) need to store two big buffers, i.e. the input x of BN and the input z of Conv. This is necessary because the standard implementations of the backward passes of BN and Conv depend on their inputs to calculate the gradients. Using InPlace-ABN to replace the BN-Activation sequence, we can safely discard x, thus saving up to 50% GPU memory at training time. To achieve this, we rewrite the backward pass of BN in terms of its output y, which is in turn reconstructed from z by inverting the activation function.", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "The only limitation of InPlace-ABN is that it requires using an invertible activation function, such as leaky relu or elu. Except for this, it can be used as a direct, drop-in replacement for BN+activation modules in any network. Our native CUDA implementation offers minimal computational overhead compared to PyTorch\u2019s standard BN, and is available for anyone to use from here: https://github.com/mapillary/inplace_abn/.\nSynchronized BN with asymmetric graphs and unbalanced batches", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "When training networks with synchronized SGD over multiple GPUs and/or multiple nodes, it\u2019s common practice to compute BatchNorm statistics separately on each device. However, in our experience working with semantic and panoptic segmentation networks, we found that accumulating mean and variance across all workers can bring a substantial boost in accuracy. 
This is particularly true when dealing with small batches, like in Seamless Scene Segmentation where we train with a single, super-high resolution image per GPU.\nInPlace-ABN supports synchronized operation over multiple GPUs and multiple nodes, and, since version 1.1, this can also be achieved in the standard PyTorch library using SyncBatchNorm. Compared to SyncBatchNorm, however, we support some additional functionality which is particularly important for Seamless Scene Segmentation: unbalanced batches and asymmetric graphs.", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "As mentioned before, Mask R-CNN-like networks naturally give rise to variable-sized tensors. Thus, in InPlace-ABN we calculate synchronized statistics using a variant of the parallel algorithm described here, which properly takes into account the fact that each GPU can hold a different number of samples. PyTorch\u2019s SyncBatchNorm is currently being revised to support this, and the improved functionality will be available in a future release.", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "Asymmetric graphs (in the sense mentioned above) are another complicating factor one has to deal with when creating a synchronized BatchNorm implementation. Luckily, PyTorch\u2019s distributed group functionality allows us to restrict distributed communication to a subset of workers, easily excluding those that are currently inactive. The only missing piece is that, in order to create a distributed group, each process needs to know the ids of all processes that will participate in the group, and even processes that are not part of the group need to call the new_group() function. In InPlace-ABN we handle it with a function like this:\n```python\nimport torch\nimport torch.distributed as distributed\ndef active_group(active):\n \"\"\"Initialize a distributed group where each process can independently decide whether to participate or not\"\"\"\n world_size = distributed.get_world_size()\n rank = distributed.get_rank()\n# Gather active status from all workers\n", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "Gather active status from all workers\nactive = torch.tensor(rank if active else -1, dtype=torch.long, device=torch.cuda.current_device())\nactive_workers = torch.empty(world_size, dtype=torch.long, device=torch.cuda.current_device())\ndistributed.all_gather(list(active_workers.unbind(0)), active)\n\n# Create group\nactive_workers = [int(i) for i in active_workers.tolist() if i != -1]\ngroup = distributed.new_group(active_workers)\nreturn group\n\n```\nFirst each process, including inactive ones, communicates its status to all others through an all_gather call, then it creates the distributed group with the shared information. 
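As a hypothetical usage sketch of the helper above (the predicate deciding which ranks are active and the reduced tensor are illustrative, and the process group is assumed to already be initialized):

```python
import torch
import torch.distributed as distributed

def reduce_bn_stats(local_stats, is_active):
    # Every rank must call active_group(), even the inactive ones, because
    # new_group() has to be invoked collectively by all processes.
    group = active_group(is_active)
    if is_active:
        # Only the ranks that actually produced statistics take part
        # in the synchronized reduction.
        distributed.all_reduce(local_stats, group=group)
    return local_stats
```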
In the actual implementation we also include a caching mechanism for groups, since new_group() is usually too expensive to call at each batch.\nReferences\n[1] Seamless Scene Segmentation; Lorenzo Porzi, Samuel Rota Bul\u00f2, Aleksander Colovic, Peter Kontschieder; Computer Vision and Pattern Recognition (CVPR), 2019", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "[2] In-place Activated BatchNorm for Memory-Optimized Training of DNNs; Samuel Rota Bul\u00f2, Lorenzo Porzi, Peter Kontschieder; Computer Vision and Pattern Recognition (CVPR), 2018\n[3] Panoptic Segmentation; Alexander Kirillov, Kaiming He, Ross Girshick, Carsten Rother, Piotr Dollar; Computer Vision and Pattern Recognition (CVPR), 2019\n[4] Mask R-CNN; Kaiming He, Georgia Gkioxari, Piotr Dollar, Ross Girshick; International Conference on Computer Vision (ICCV), 2017\n[5] Feature Pyramid Networks for Object Detection; Tsung-Yi Lin, Piotr Dollar, Ross Girshick, Kaiming He, Bharath Hariharan, Serge Belongie; Computer Vision and Pattern Recognition (CVPR), 2017", "source": "https://pytorch.org/blog/mapillary-research/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Introduction to Quantization on PyTorch'\nauthor: Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath, and Seth Weidman\n\nIt\u2019s important to make efficient use of both server-side and on-device compute resources when developing machine learning applications. To support more efficient deployment on servers and edge devices, PyTorch added a support for model quantization using the familiar eager mode Python API.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "Quantization leverages 8bit integer (int8) instructions to reduce the model size and run the inference faster (reduced latency) and can be the difference between a model achieving quality of service goals or even fitting into the resources available on a mobile device. Even when resources aren\u2019t quite so constrained it may enable you to deploy a larger and more accurate model. Quantization is available in PyTorch starting in version 1.3 and with the release of PyTorch 1.4 we published quantized models for ResNet, ResNext, MobileNetV2, GoogleNet, InceptionV3 and ShuffleNetV2 in the PyTorch torchvision 0.5 library.\nThis blog post provides an overview of the quantization support on PyTorch and its incorporation with the TorchVision domain library.\nWhat is Quantization?\nQuantization refers to techniques for doing both computations and memory accesses with lower precision data, usually int8 compared to floating point implementations. This enables performance gains in several important areas:", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "\n4x reduction in model size;\n2-4x reduction in memory bandwidth;\n2-4x faster inference due to savings in memory bandwidth and faster compute with int8 arithmetic (the exact speed up varies depending on the hardware, the runtime, and the model).\n\nQuantization does not however come without additional cost. Fundamentally quantization means introducing approximations and the resulting networks have slightly less accuracy. These techniques attempt to minimize the gap between the full floating point accuracy and the quantized accuracy.\nWe designed quantization to fit into the PyTorch framework. The means that:\n1. 
PyTorch has data types corresponding to quantized tensors, which share many of the features of tensors.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "\nOne can write kernels with quantized tensors, much like kernels for floating point tensors to customize their implementation. PyTorch supports quantized modules for common operations as part of the torch.nn.quantized and torch.nn.quantized.dynamic name-space.\nQuantization is compatible with the rest of PyTorch: quantized models are traceable and scriptable. The quantization method is virtually identical for both server and mobile backends. One can easily mix quantized and floating point operations in a model.\nMapping of floating point tensors to quantized tensors is customizable with user defined observer/fake-quantization blocks. PyTorch provides default implementations that should work for most use cases.\n\n\n\n\nWe developed three techniques for quantizing neural networks in PyTorch as part of quantization tooling in the torch.quantization name-space.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "The Three Modes of Quantization Supported in PyTorch starting version 1.3\n\n\nDynamic Quantization\n The easiest method of quantization PyTorch supports is called dynamic quantization. This involves not just converting the weights to int8 - as happens in all quantization variants - but also converting the activations to int8 on the fly, just before doing the computation (hence \u201cdynamic\u201d). The computations will thus be performed using efficient int8 matrix multiplication and convolution implementations, resulting in faster compute. However, the activations are read and written to memory in floating point format.\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "\nPyTorch API: we have a simple API for dynamic quantization in PyTorch. torch.quantization.quantize_dynamic takes in a model, as well as a couple other arguments, and produces a quantized model! Our end-to-end tutorial illustrates this for a BERT model; while the tutorial is long and contains sections on loading pre-trained models and other concepts unrelated to quantization, the part the quantizes the BERT model is simply:\n\npython\n import torch.quantization\n quantized_model = torch.quantization.quantize_dynamic(model, {torch.nn.Linear}, dtype=torch.qint8)", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "```\n * See the documentation for the function here an end-to-end example in our tutorials here and here.\n\n\nPost-Training Static Quantization\n\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "\n\nPost-Training Static Quantization\n\n\nOne can further improve the performance (latency) by converting networks to use both integer arithmetic and int8 memory accesses. Static quantization performs the additional step of first feeding batches of data through the network and computing the resulting distributions of the different activations (specifically, this is done by inserting \u201cobserver\u201d modules at different points that record these distributions). 
This information is used to determine how specifically the different activations should be quantized at inference time (a simple technique would be to divide the entire range of activations into 256 levels, but we support more sophisticated methods as well). Importantly, this additional step allows us to pass quantized values between operations instead of converting these values to floats - and then back to ints - between every operation, resulting in a significant speed-up.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "With this release, we\u2019re supporting several features that allow users to optimize their static quantization:\n 1. Observers: you can customize observer modules which specify how statistics are collected prior to quantization to try out more advanced methods to quantize your data.\n 2. Operator fusion: you can fuse multiple operations into a single operation, saving on memory access while also improving the operation\u2019s numerical accuracy.\n 3. Per-channel quantization: we can independently quantize weights for each output channel in a convolution/linear layer, which can lead to higher accuracy with almost the same speed.\n\n\nPyTorch API:\n\nTo fuse modules, we have torch.quantization.fuse_modules\nObservers are inserted using torch.quantization.prepare\nFinally, quantization itself is done using torch.quantization.convert\n\n\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "We have a tutorial with an end-to-end example of quantization (this same tutorial also covers our third quantization method, quantization-aware training), but because of our simple API, the three lines that perform post-training static quantization on the pre-trained model myModel are:\n ```python\n # set quantization config for server (x86) deployment\n myModel.qconfig = torch.quantization.get_default_qconfig('fbgemm')\n# insert observers\n torch.quantization.prepare(myModel, inplace=True)\n # Calibrate the model and collect statistics\n# convert to quantized version\n torch.quantization.convert(myModel, inplace=True)\n ```\n\n\nQuantization Aware Training\n\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "```\n\n\nQuantization Aware Training\nQuantization-aware training (QAT) is the third method, and the one that typically results in the highest accuracy of these three. With QAT, all weights and activations are \u201cfake quantized\u201d during both the forward and backward passes of training: that is, float values are rounded to mimic int8 values, but all computations are still done with floating point numbers. 
Thus, all the weight adjustments during training are made while \u201caware\u201d of the fact that the model will ultimately be quantized; after quantizing, therefore, this method usually yields higher accuracy than the other two methods.\n\nPyTorch API:\n\ntorch.quantization.prepare_qat inserts fake quantization modules to prepare the model for quantization-aware training.\nMimicking the static quantization API, torch.quantization.convert actually quantizes the model once training is complete.\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "For example, in the end-to-end example, we load in a pre-trained model as qat_model, then we simply perform quantization-aware training using:\n```python\n # specify quantization config for QAT\n qat_model.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')\n# prepare QAT\n torch.quantization.prepare_qat(qat_model, inplace=True)\n# convert to quantized version, removing dropout, to check for accuracy on each epoch\n quantized_model = torch.quantization.convert(qat_model.eval(), inplace=False)\n ```\nDevice and Operator Support\nQuantization support is restricted to a subset of available operators, depending on the method being used. For a list of supported operators, please see the documentation at https://pytorch.org/docs/stable/quantization.html.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "The set of available operators and the quantization numerics also depend on the backend being used to run quantized models. Currently quantized operators are supported only for CPU inference in the following backends: x86 and ARM. Both the quantization configuration (how tensors should be quantized) and the quantized kernels (arithmetic with quantized tensors) are backend dependent. One can specify the backend by doing:\nimport torch\nbackend = 'fbgemm'\n# 'fbgemm' for server, 'qnnpack' for mobile\nmy_model.qconfig = torch.quantization.get_default_qconfig(backend)\n# prepare and convert model\n# Set the backend on which the quantized kernels need to be run\ntorch.backends.quantized.engine = backend\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "torch.backends.quantized.engine = backend\n```\nHowever, quantization aware training occurs in full floating point and can run on either GPU or CPU. Quantization aware training is typically only used in CNN models when post training static or dynamic quantization doesn\u2019t yield sufficient accuracy. This can occur with models that are highly optimized to achieve small size (such as Mobilenet).\nIntegration in torchvision\nWe\u2019ve also enabled quantization for some of the most popular models in torchvision: Googlenet, Inception, Resnet, ResNeXt, Mobilenet and Shufflenet. We have upstreamed these changes to torchvision in three forms:\n1. Pre-trained quantized weights so that you can use them right away.\n2. 
Quantization ready model definitions so that you can do post-training quantization or quantization aware training.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "\nA script for doing quantization aware training \u2014 which is available for any of these model though, as you will learn below, we only found it necessary for achieving accuracy with Mobilenet.\nWe also have a tutorial showing how you can do transfer learning with quantization using one of the torchvision models.\n\nChoosing an approach\nThe choice of which scheme to use depends on multiple factors:\n1. Model/Target requirements: Some models might be sensitive to quantization, requiring quantization aware training.\n2. Operator/Backend support: Some backends require fully quantized operators.\nCurrently, operator coverage is limited and may restrict the choices listed in the table below:\nThe table below provides a guideline.\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": ".tg td{font-family:Arial, sans-serif;font-size:14px;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;}\n.tg th{font-family:Arial, sans-serif;font-size:14px;font-weight:normal;padding:10px 5px;border-style:solid;border-width:1px;overflow:hidden;word-break:normal;border-color:black;}\n.tg .tg-c3ow{border-color:inherit;text-align:center;vertical-align:top}\n.tg .tg-0pky{border-color:inherit;text-align:left;vertical-align:top;font-weight:bold;color:black;}\narticle.pytorch-article table tr th:first-of-type, article.pytorch-article table tr td:first-of-type{padding-left:5px}\n\n\n\nModel Type\nPreferred scheme\nWhy\n\n\nLSTM/RNN\nDynamic Quantization\nThroughput dominated by compute/memory bandwidth for weights\n\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\nBERT/Transformer\nDynamic Quantization\nThroughput dominated by compute/memory bandwidth for weights\n\n\nCNN\nStatic Quantization\nThroughput limited by memory bandwidth for activations\n\n\nCNN\nQuantization Aware Training\nIn the case where accuracy can't be achieved with static quantization\n\n\nPerformance Results\nQuantization provides a 4x reduction in the model size and a speedup of 2x to 3x compared to floating point implementations depending on the hardware platform and the model being benchmarked. Some sample results are:\n\n\n\nModel\nFloat Latency (ms)", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "Float Latency (ms)\n Quantized Latency (ms)\n Inference Performance Gain\n Device\n Notes\n\n\n BERT\n 581\n 313\n 1.8x\n Xeon-D2191 (1.6GHz)\n Batch size = 1, Maximum sequence length= 128, Single thread, x86-64, Dynamic quantization\n\n\n Resnet-50\n 214\n 103\n 2x\n Xeon-D2191 (1.6GHz)\n Single thread, x86-64, Static quantization\n\n\n Mobilenet-v2\n 97\n 17\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "17\n 5.7x\n Samsung S9\n Static quantization, Floating point numbers are based on Caffe2 run-time and are not optimized\n\n\n\n\nAccuracy results\nWe also compared the accuracy of static quantized models with the floating point models on Imagenet. 
For dynamic quantization, we compared the F1 score of BERT on the GLUE benchmark for MRPC.\nComputer Vision Model accuracy\n\n\nModel\nTop-1 Accuracy (Float)\nTop-1 Accuracy (Quantized)\nQuantization scheme\n\n\nGooglenet\n69.8\n69.7\nStatic post training quantization\n\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\nInception-v3\n77.5\n77.1\nStatic post training quantization\n\n\nResNet-18\n69.8\n69.4\nStatic post training quantization\n\n\nResnet-50\n76.1\n75.9\nStatic post training quantization\n\n\nResNext-101 32x8d\n79.3\n79\nStatic post training quantization\n\n\nMobilenet-v2\n71.9\n71.6\nQuantization Aware Training\n\n\nShufflenet-v2\n69.4", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "69.4\n68.4\nStatic post training quantization\n\n\n\nSpeech and NLP Model accuracy\n\n\n\nModel\nF1 (GLUEMRPC) Float\nF1 (GLUEMRPC) Quantized\nQuantization scheme\n\n\nBERT\n0.902\n0.895\nDynamic quantization\n\n\n\nConclusion", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\nConclusion\nTo get started on quantizing your models in PyTorch, start with the tutorials on the PyTorch website. If you are working with sequence data start with dynamic quantization for LSTM, or BERT. If you are working with image data then we recommend starting with the transfer learning with quantization tutorial. Then you can explore static post training quantization. If you find that the accuracy drop with post training quantization is too high, then try quantization aware training.", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "If you run into issues you can get community help by posting in at discuss.pytorch.org, use the quantization category for quantization related issues.\nThis post is authored by Raghuraman Krishnamoorthi, James Reed, Min Ni, Chris Gottbrath and Seth Weidman. Special thanks to Jianyu Huang, Lingyi Liu and Haixin Liu for producing quantization metrics included in this post.\nFurther reading:\n\nPyTorch quantization presentation at Neurips: (https://research.fb.com/wp-content/uploads/2019/12/2.-Quantization.pptx)\nQuantized Tensors (https://github.com/pytorch/pytorch/wiki/\nIntroducing-Quantized-Tensor)\nQuantization RFC on Github (https://github.com/pytorch/pytorch/\nissues/18318)\n", "source": "https://pytorch.org/blog/introduction-to-quantization-on-pytorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Scaling PyTorch FSDP for Training Foundation Models on IBM Cloud\"\nauthor: Linsong Chu, Less Wright, Hamid Shojanazeri, Sophia Wen, Raghu Ganti, Geeta Chauhan\nfeatured-img: \"/assets/images/scaling-pytorch-fsdp-image1-IBM_scaling_FSDP_visual_new.png\"\n\nLarge model training using a cloud native approach is of growing interest for many enterprises given the emergence and success of foundation models. Some AI practitioners may assume that the only way they can achieve high GPU utilization for distributed training jobs is to run them on HPC systems, such as those inter-connected with Infiniband and may not consider Ethernet connected systems. 
We demonstrate how the latest distributed training technique, Fully Sharded Data Parallel (FSDP) from PyTorch, successfully scales to models of size 10B+ parameters using commodity Ethernet networking in IBM Cloud.\nPyTorch FSDP Scaling", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "PyTorch FSDP Scaling\nAs models get larger, the standard techniques for data parallel training work only if the GPU can hold a full replica of the model, along with its training state (optimizer, activations, etc.). However, GPU memory increases have not kept up with the model size increases and new techniques for training such models have emerged (e.g., Fully Sharded Data Parallel, DeepSpeed), which allow us to efficiently distribute the model and data over multiple GPUs during training. In this blog post, we demonstrate a path to achieve remarkable scaling of model training to 64 nodes (512 GPUs) using PyTorch native FSDP APIs as we increase model sizes to 11B.\nWhat is Fully Sharded Data Parallel?", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "What is Fully Sharded Data Parallel?\nFSDP extends the distributed data parallel training (DDP) approach by sharding model parameters, gradient and optimizer states into K FSDP units, determined by using a wrapping policy. FSDP achieves large model training efficiency in terms of resources and performance by significantly reducing the memory footprint on each GPU and overlapping computation and communication.\nResource efficiency is achieved with memory footprint reduction by having all GPUs own a portion of each FSDP unit. To process a given FSDP unit, all GPUs share their locally owned portion via all_gather communication calls.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "Performance efficiency is accomplished by overlapping all_gather communication calls for upcoming FSDP units with computation of the current FSDP unit. Once the current FSDP unit has been processed, the non-locally owned parameters are dropped, freeing memory for the upcoming FSDP units. This process achieves training efficiency by the overlap of computation and communication, while also reducing the peak memory needed by each GPU.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "In what follows, we demonstrate how FSDP allows us to keep hundreds of GPUs highly utilized throughout a distributed training job, while running over standard Ethernet networking (system description towards the end of the blog). We chose the T5 architecture for our experiments and leveraged the code from the FSDP workshop. In each of our experiments, we start with a single node experiment to create a baseline and report the metric seconds/iteration normalized by the batch size as well as compute the teraflops based on the Megatron-LM paper (see Appendix for details of teraflop computation for T5). Our experiments aim to maximize the batch size (while avoiding cudaMalloc retries) to take full advantage of overlap in computation and communications, as discussed below. 
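For reference, a minimal sketch of wrapping a model with PyTorch native FSDP in the spirit of these experiments (BF16 mixed precision and a transformer auto-wrap policy); the model choice, the T5Block layer class, and the arguments below are illustrative assumptions rather than the exact configuration used in the workshop code:

```python
import functools
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP, MixedPrecision
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers import T5ForConditionalGeneration
from transformers.models.t5.modeling_t5 import T5Block

# Assumes torch.distributed has been initialized with one process per GPU.
model = T5ForConditionalGeneration.from_pretrained("t5-small")  # stand-in for T5-3B/11B

bf16 = MixedPrecision(param_dtype=torch.bfloat16,
                      reduce_dtype=torch.bfloat16,
                      buffer_dtype=torch.bfloat16)
wrap_policy = functools.partial(transformer_auto_wrap_policy,
                                transformer_layer_cls={T5Block})

model = FSDP(model,
             auto_wrap_policy=wrap_policy,   # shard at T5Block granularity
             mixed_precision=bf16,
             device_id=torch.cuda.current_device())
```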
Scaling is defined as the ratio of the seconds/iteration normalized by batch size for N nodes versus a single node, representing how well we can utilize the additional GPUs as more nodes are added.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "Experimental Results\nOur first set of experiments using the T5-3B configuration (mixed precision with BF16, activation checkpointing, and transformer wrapping policy) demonstrated scaling efficiency of 95% as we increased the number of GPUs from 8 to 512 (1 to 64 nodes, respectively). We achieved these results without any modifications to the existing FSDP APIs. We observed that, for this scale, over Ethernet based network, there is sufficient bandwidth to enable continuous overlap of communication and computation.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "However, when we increased the T5 model size to 11B, the scaling efficiency declined substantially to 20%. The PyTorch profiler shows that overlap of communication and computation was very limited. Further investigation into the network bandwidth usage revealed that the poor overlap is being caused by latency in the communication of individual packets and not the bandwidth required (in fact, our peak bandwidth utilization is 1/4th of that available). This led us to hypothesize that if we can increase the compute time by increasing the batch size, we can better overlap communication and computation. However, given we are already at maximum GPU memory allocation, we must identify opportunities to rebalance the memory allocation to allow for increase in batch size. We identified that the model state was being allocated a lot more memory than was needed. The primary function of these reservations is to have pre-reserved memory ready to aggressively send/receive tensors during the communication periods and too few buffers can result in increased wait times, whereas too many buffers result in smaller batch sizes.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "To achieve better efficiency, the PyTorch distributed team introduced a new control knob, the rate_limiter which controls how much memory is allocated for send/receive of tensors, alleviating the memory pressure and providing room for higher batch sizes. In our case, the rate_limiter could increase the batch size from 20 to 50, thus increasing compute time by 2.5x and allowing for much greater overlap of communication and computation. With this fix, we increased the scaling efficiency to >75% (at 32 nodes)!", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "Continued investigation into the factors limiting scaling efficiency uncovered that the rate limiter was creating a recurring pipeline bubble of GPU idle time. This was due to the rate limiter using a block and flush approach for the allocation and release of each set of memory buffers. By waiting for the entire block to complete before initiating a new all_gather, the GPU was idling at the start of each block, while waiting for the new set of all_gather parameters to arrive. This bubble was alleviated by moving to a sliding window approach. 
Upon the completion of a single all_gather step and its computation (rather than a block of them), the memory is freed and the next all_gather is immediately issued in a much more uniform manner. This improvement eliminated the pipeline bubble and boosted the scaling efficiencies to >90% (at 32 nodes).\n\n\n\n", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "\n\nFigure 1: Scaling of T5-XL (3B) and T5-XXL (11B) from 1 node to 64 nodes\n\n\n\n\n\nFigure 2: TFLOPs/sec usage for T5-XL(3B) and T5-XXL (11B) as we increase number of nodes\n\nIBM Cloud AI System and Middleware\nThe AI infrastructure used for this work is a large-scale AI system on IBM Cloud consisting of nearly 200 nodes, each node with 8 NVIDIA A100 80GB cards, 96 vCPUs, and 1.2TB CPU RAM. The GPU cards within a node are connected via NVLink with a card-to-card bandwidth of 600GBps. Nodes are connected by 2 x 100Gbps Ethernet links with SRIOV based TCP/IP stack, providing a usable bandwidth of 120Gbps.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "The IBM Cloud AI System has been production-ready since May of 2022 and is configured with the OpenShift container platform to run AI workloads. We also built a software stack for production AI workloads that provide end-to-end tools for training workloads. The middleware leverages Ray for pre and post processing workloads and PyTorch for training of models. We also integrate a Kubernetes native scheduler, MCAD, that manages multiple jobs with job queuing, gang scheduling, prioritization, and quota management. A multi-NIC CNI discovers all available network interfaces and handles them as a single NIC pool enabling optimized use of the network interfaces in Kubernetes. Finally, CodeFlare CLI supports a single pane for observability of the full stack using a desktop CLI (e.g., GPU utilization, application metrics like loss, gradient norm).\n\n\n\n", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "\n\nFigure 3: Foundation Model Middleware Stack\n\nConclusion and Future Work\nIn conclusion, we demonstrated how we can achieve remarkable scaling of FSDP APIs over non-InfiniBand networks. We identified the bottleneck that had limited scaling to less than 20% efficiency for 11B parameter model training. After identifying the issue, we were able to correct this with a new rate limiter control to ensure a more optimal balance of reserved memory and communication overlap relative to compute time. With this improvement, we were able to achieve 90% scaling efficiency (a 4.5x improvement), at 256 GPUs and 80% at 512 GPUs for training of the 11B parameter model. In addition, the 3B parameter model scales extremely well with 95% efficiency even as we increase the number of GPUs to 512.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "This is a first in the industry to achieve such scaling efficiencies for up to 11B parameter models using Kubernetes with vanilla Ethernet and PyTorch native FSDP API\u2019s. 
This improvement enables users to train huge models on a Hybrid Cloud platform in a cost efficient and sustainable manner.\nWe plan on continuing to investigate scaling with decoder only models and increasing the size of these models to 100B+ parameters. From a system design perspective, we are exploring capabilities such as RoCE and GDR that can improve latencies of communications over Ethernet networks.\nAcknowledgements\nThis blog was possible because of contributions from both PyTorch Distributed and IBM Research teams.\nFrom the PyTorch Distributed team, we would like to thank Less Wright, Hamid Shojanazeri, Geeta Chauhan, Shen Li, Rohan Varma, Yanli Zhao, Andrew Gu, Anjali Sridhar, Chien-Chin Huang, and Bernard Nguyen.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "From the IBM Research team, we would like to thank Linsong Chu, Sophia Wen, Lixiang (Eric) Luo, Marquita Ellis, Davis Wertheimer, Supriyo Chakraborty, Raghu Ganti, Mudhakar Srivatsa, Seetharami Seelam, Carlos Costa, Abhishek Malvankar, Diana Arroyo, Alaa Youssef, Nick Mitchell.\nAppendix\nTeraflop computation\nThe T5-XXL (11B) architecture has two types of T5 blocks, one is an encoder and the second is a decoder. Following the approach of Megatron-LM, where each matrix multiplication requires 2m\u00d7k\u00d7n FLOPs, where the first matrix is of size m\u00d7k and the second is k\u00d7n. The encoder block consists of self-attention and feed forward layers, whereas the decoder block consists of self-attention, cross-attention, and feed forward layers.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "The attention (both self and cross) block consists of a QKV projection, which requires 6Bsh2 operations, an attention matrix computation requiring 2Bs2h operations, an attention over values which needs 2Bs2h computations, and the post-attention linear projection requires 2Bsh2 operations. Finally, the feed forward layer requires 15Bsh2 operations. \nThe total for an encoder block is 23Bsh2+4Bs2h, whereas for a decoder block, it comes to 31Bsh2+8Bs2h. With a total of 24 encoder and 24 decoder blocks and 2 forward passes (as we discard the activations) and one backward pass (equivalent to two forward passes), the final FLOPs computation comes to be 96\u00d7(54Bsh2+ 12Bs2h) + 6BshV. Here, B is the batch size per GPU, s is sequence length, h is hidden state size, and V is vocabulary size. \nWe repeat a similar computation for T5-XL (3B) architecture, which is slightly different.", "source": "https://pytorch.org/blog/scaling-pytorch-fsdp-for-training-foundation-models-on-ibm-cloud/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Empowering PyTorch on Intel\u00ae Xeon\u00ae Scalable processors with Bfloat16\"\nauthor: Mingfei Ma (Intel), Vitaly Fedyunin (Meta), Wei Wei (Meta)\nfeatured-img: '\\assets\\images\\empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16.png'\n\nOverview\nRecent years, the growing complexity of AI models have been posing requirements on hardware for more and more compute capability. Reduced precision numeric format has been proposed to address this problem. Bfloat16 is a custom 16-bit floating point format for AI which consists of one sign bit, eight exponent bits, and seven mantissa bits. 
With the same dynamic range as float32, bfloat16 doesn\u2019t require a special handling such as loss scaling. Therefore, bfloat16 is a drop-in replacement for float32 when running deep neural networks for both inference and training.", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} {"text": "The 3rd Gen Intel\u00ae Xeon\u00ae Scalable processor (codenamed Cooper Lake), is the first general purpose x86 CPU with native bfloat16 support. Three new bfloat16 instructions were introduced in Intel\u00ae Advanced Vector Extensions-512 (Intel\u00ae AVX-512): VCVTNE2PS2BF16, VCVTNEPS2BF16, and VDPBF16PS. The first two instructions perform conversion from float32 to bfloat16, and the last one performs a dot product of bfloat16 pairs. Bfloat16 theoretical compute throughput is doubled over float32 on Cooper Lake. On the next generation of Intel\u00ae Xeon\u00ae Scalable Processors, bfloat16 compute throughput will be further enhanced through Advanced Matrix Extensions (Intel\u00ae AMX) instruction set extension.", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} {"text": "Intel and Meta previously collaborated to enable bfloat16 on PyTorch, and the related work was published in an earlier blog during launch of Cooper Lake. In that blog, we introduced the hardware advancement for native bfloat16 support and showcased a performance boost of 1.4x to 1.6x of bfloat16 over float32 from DLRM, ResNet-50 and ResNext-101-32x4d.\nIn this blog, we will introduce the latest software enhancement on bfloat16 in PyTorch 1.12, which would apply to much broader scope of user scenarios and showcase even higher performance boost.\nNative Level Optimization on Bfloat16", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} {"text": "Native Level Optimization on Bfloat16\nOn PyTorch CPU bfloat16 path, the compute intensive operators, e.g., convolution, linear and bmm, use oneDNN (oneAPI Deep Neural Network Library) to achieve optimal performance on Intel CPUs with AVX512_BF16 or AMX support. The other operators, such as tensor operators and neural network operators, are optimized at PyTorch native level. We have enlarged bfloat16 kernel level optimizations to majority of operators on dense tensors, both inference and training applicable (sparse tensor bfloat16 support will be covered in future work), specifically:", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} {"text": "\nBfloat16 vectorization: Bfloat16 is stored as unsigned 16-bit integer, which requires it to be casted to float32 for arithmetic operations such as add, mul, etc. Specifically, each bfloat16 vector will be converted to two float32 vectors, processed accordingly and then converted back. While for non-arithmetic operations such as cat, copy, etc., it is a straight memory copy and no data type conversion will be involved.\nBfloat16 reduction: Reduction on bfloat16 data uses float32 as accumulation type to guarantee numerical stability, e.g., sum, BatchNorm2d, MaxPool2d, etc.\nChannels Last optimization: For vision models, Channels Last is the preferable memory format over Channels First from performance perspective. 
We have implemented fully optimized CPU kernels for all the commonly used CV modules on channels last memory format, taking care of both float32 and bfloat16.\n\nRun Bfloat16 with Auto Mixed Precision", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} {"text": "Run Bfloat16 with Auto Mixed Precision\nTo run model on bfloat16, typically user can either explicitly convert the data and model to bfloat16, for example:\n# with explicit conversion\ninput = input.to(dtype=torch.bfloat16)\nmodel = model.to(dtype=torch.bfloat16)\n\nor utilize torch.amp (Automatic Mixed Precision) package. The autocast instance serves as context managers or decorators that allow regions of your script to run in mixed precision, for example:\n# with AMP\nwith torch.autocast(device_type=\"cpu\", dtype=torch.bfloat16):\n output = model(input)\n\nGenerally, the explicit conversion approach and AMP approach have similar performance. Even though, we recommend run bfloat16 models with AMP, because:\n\nBetter user experience with automatic fallback: If your script includes operators that don\u2019t have bfloat16 support, autocast will implicitly convert them back to float32 while the explicit converted model will give a runtime error.\n", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} {"text": "\nMixed data type for activation and parameters: Unlike the explicit conversion which converts all the model parameters to bfloat16, AMP mode will run in mixed data type. To be specific, input/output will be kept in bfloat16 while parameters, e.g., weight/bias, will be kept in float32. The mixed data type of activation and parameters will help improve performance while maintaining the accuracy.\n\nPerformance Gains\nWe benchmarked inference performance of TorchVision models on Intel\u00ae Xeon\u00ae Platinum 8380H CPU @ 2.90GHz (codenamed Cooper Lake), single instance per socket (batch size = 2 x number of physical cores). Results show that bfloat16 has 1.4x to 2.2x performance gain over float32.\n\n\n\nThe performance boost of bfloat16 over float32 primarily comes from 3 aspects:", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} {"text": "\nThe compute intensive operators take advantage of the new bfloat16 native instruction VDPBF16PS which doubles the hardware compute throughput.\nBfloat16 have only half the memory footprint of float32, so theoretically the memory bandwidth intensive operators will be twice faster.\nOn Channels Last, we intentionally keep the same parallelization scheme for all the memory format aware operators (can\u2019t do this on Channels First though), which increases the data locality when passing each layer\u2019s output to the next. Basically, it keeps the data closer to CPU cores while data would reside in cache anyway. And bfloat16 will have a higher cache hit rate compared with float32 in such scenarios due to smaller memory footprint.\n\nConclusion & Future Work", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} {"text": "Conclusion & Future Work\nIn this blog, we introduced recent software optimizations on bfloat16 introduced in PyTorch 1.12. 
Results on the 3rd Gen Intel\u00ae Xeon\u00ae Scalable processor show that bfloat16 has 1.4x to 2.2x performance gain over float32 on the TorchVision models. Further improvement is expected on the next generation of Intel\u00ae Xeon\u00ae Scalable Processors with AMX instruction support. Though the performance number for this blog is collected with TorchVision models, the benefit is broad across all topologies. And we will continue to extend the bfloat16 optimization effort to a broader scope in the future!\nAcknowledgement\nThe results presented in this blog is a joint effort of Meta and Intel PyTorch team. Special thanks to Vitaly Fedyunin and Wei Wei from Meta who spent precious time and gave substantial assistance! Together we made one more step on the path of improving the PyTorch CPU eco system.\nReference", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} {"text": "Reference\n\nThe bfloat16 numerical format\nhttps://pytorch.org/docs/master/amp.html#torch.autocast\nIntel and Facebook Accelerate PyTorch Performance with 3rd Gen Intel\u00ae Xeon\u00ae Processors and Intel\u00ae Deep Learning Boost\u2019s new BFloat16 capability\n", "source": "https://pytorch.org/blog/empowering-pytorch-on-intel-xeon-scalable-processors-with-bfloat16/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Announcing PyTorch Conference 2022\"\nauthor:\nfeatured-img: \"/assets/images/pytorch-conference-2022.png\"\n\nWe are excited to announce that the PyTorch Conference returns in-person as a satellite event to NeurlPS (Neural Information Processing Systems) in New Orleans on Dec. 2nd.\n\n\n", "source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"} {"text": "\nWe changed the name from PyTorch Developer Day to PyTorch Conference to signify the turning of a new chapter as we look to the future of PyTorch, encompassing the entire PyTorch Community. This conference will bring together leading researchers, academics and developers from the Machine Learning (ML) and Deep Learning (DL) communities to join a multiple set of talks and a poster session; covering new software releases on PyTorch, use cases in academia and industry, as well as ML/DL development and production trends.\nEVENT OVERVIEW\nWhen: Dec 2nd, 2022 (In-Person and Virtual)\nWhere: New Orleans, Louisiana (USA) | Virtual option as well\nSCHEDULE\nAll times in Central Standard.\n8:00-9:00 am \u2003 Registration/Check in\n9:00-11:20 am \u2002 Keynote & Technical Talks\n11:30-1:00 pm \u2002 Lunch\n1:00-3:00 pm \u2003 Poster Session & Breakouts\n3:00-4:00 pm \u2003 Community/Partner Talks\n4:00-5:00 pm \u2003 Panel Discussion", "source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"} {"text": "4:00-5:00 pm \u2003 Panel Discussion\nAgenda subject to change.\nAll talks will be livestreamed and available to the public. The in-person event will be by invitation only as space is limited. If you\u2019d like to apply to attend in person, please submit all requests here.\nLINKS\n\nSubmit Content for Consideration by Sept. 
30th\nLivestream event page\nApply for an invitation to the in-person event\n", "source": "https://pytorch.org/blog/announcing-pytorch-conference-2022/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'New PyTorch library releases including TorchVision Mobile, TorchAudio I/O, and more'\nauthor: Team PyTorch \n\nToday, we are announcing updates to a number of PyTorch libraries, alongside the PyTorch 1.8 release. The updates include new releases for the domain libraries including TorchVision, TorchText and TorchAudio as well as new version of TorchCSPRNG. These releases include a number of new features and improvements and, along with the PyTorch 1.8 release, provide a broad set of updates for the PyTorch community to build on and leverage. \nSome highlights include:\n* TorchVision - Added support for PyTorch Mobile including Detectron2Go (D2Go), auto-augmentation of data during training, on the fly type conversion, and AMP autocasting.", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "\nTorchAudio - Major improvements to I/O, including defaulting to sox_io backend and file-like object support. Added Kaldi Pitch feature and support for CMake based build allowing TorchAudio to better support no-Python environments.\nTorchText - Updated the dataset loading API to be compatible with standard PyTorch data loading utilities.\nTorchCSPRNG - Support for cryptographically secure pseudorandom number generators for PyTorch is now stable with new APIs for AES128 ECB/CTR and CUDA support on Windows.\n\nPlease note that, starting in PyTorch 1.6, features are classified as Stable, Beta, and Prototype. Prototype features are not included as part of the binary distribution and are instead available through either building from source, using nightlies or via compiler flag. You can see the detailed announcement here.\nTorchVision 0.9.0\n[Stable] TorchVision Mobile: Operators, Android Binaries, and Tutorial", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "We are excited to announce the first on-device support and binaries for a PyTorch domain library. We have seen significant appetite in both research and industry for on-device vision support to allow low latency, privacy friendly, and resource efficient mobile vision experiences. You can follow this new tutorial to build your own Android object detection app using TorchVision operators, D2Go, or your own custom operators and model.\n\n\n\n[Stable] New Mobile models for Classification, Object Detection and Semantic Segmentation\nWe have added support for the MobileNetV3 architecture and provided pre-trained weights for Classification, Object Detection and Segmentation. 
It is easy to get up and running with these models, just import and load them as you would any torchvision model:\n```python\nimport torch\nimport torchvision", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "import torch\nimport torchvision\n\n# Classification\nx = torch.rand(1, 3, 224, 224)\nm_classifier = torchvision.models.mobilenet_v3_large(pretrained=True)\nm_classifier.eval()\npredictions = m_classifier(x)\n\n# Quantized Classification\nx = torch.rand(1, 3, 224, 224)\nm_classifier = torchvision.models.quantization.mobilenet_v3_large(pretrained=True)\nm_classifier.eval()\npredictions = m_classifier(x)\n\n# Object Detection: Highly Accurate High Resolution Mobile Model\nx = [torch.rand(3, 300, 400), torch.rand(3, 500, 400)]\nm_detector = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)\nm_detector.eval()\npredictions = m_detector(x)\n\n# Semantic Segmentation: Highly Accurate Mobile Model\nx = torch.rand(1, 3, 520, 520)\nm_segmenter = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained=True)\nm_segmenter.eval()\npredictions = m_segmenter(x)\n", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "predictions = m_segmenter(x)\nThese models are highly competitive with TorchVision\u2019s existing models on resource efficiency, speed, and accuracy. See our [release notes](https://github.com/pytorch/vision/releases) for detailed performance metrics.\n\n### [Stable] AutoAugment\n[AutoAugment](https://arxiv.org/pdf/1805.09501.pdf) is a common Data Augmentation technique that can increase the accuracy of Scene Classification models. Though the data augmentation policies are directly linked to their trained dataset, empirical studies show that ImageNet policies provide significant improvements when applied to other datasets. We\u2019ve implemented 3 policies learned on the following datasets: ImageNet, CIFA10 and SVHN. These can be used standalone or mixed-and-matched with existing transforms:\n```python\nfrom torchvision import transforms\n\nt = transforms.AutoAugment()\ntransformed = t(image)\n\n\ntransform=transforms.Compose([\n transforms.Resize(256),\n transforms.AutoAugment(),\n transforms.ToTensor()])\n", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "transforms.ToTensor()])\n```\nOther New Features for TorchVision\n\n[Stable] All read and decode methods in the io.image package now support:\nPalette, Grayscale Alpha and RBG Alpha image types during PNG decoding\nOn-the-fly conversion of image from one type to the other during read\n[Stable] WiderFace dataset\n[Stable] Improved FasterRCNN speed and accuracy by introducing a score threshold on RPN\n[Stable] Modulation input for DeformConv2D\n[Stable] Option to write audio to a video file\n[Stable] Utility to draw bounding boxes\n[Beta] Autocast support in all Operators\nFind the full TorchVision release notes here.\n\nTorchAudio 0.8.0\nI/O Improvements\nWe have continued our work from the previous release to improve TorchAudio\u2019s I/O support, including:", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "\n[Stable] Changing the default backend to \u201csox_io\u201d (for Linux/macOS), and updating the \u201csoundfile\u201d backend\u2019s interface to align with that of \u201csox_io\u201d. 
The legacy backend and interface are still accessible, though it is strongly discouraged to use them.\n[Stable] File-like object support in both \"sox_io\" backend, \u201csoundfile\u201d backend and sox_effects.\n[Stable] New options to change the format, encoding, and bits_per_sample when saving.\n[Stable] Added GSM, HTK, AMB, AMR-NB and AMR-WB format support to the \u201csox_io\u201d backend.\n[Beta] A new functional.apply_codec function which can degrade audio data by applying audio codecs supported by \u201csox_io\u201d backend in an in-memory fashion.\nHere are some examples of features landed in this release:\n\n```python\nLoad audio over HTTP\nwith requests.get(URL, stream=True) as response:\n waveform, sample_rate = torchaudio.load(response.raw)\nSaving to Bytes buffer as 32-bit floating-point PCM\nbuffer_ = io.BytesIO()\ntorchaudio.save(\n buffer_, waveform, sample_rate,", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "buffer_, waveform, sample_rate,\n format=\"wav\", encoding=\"PCM_S\", bits_per_sample=16)\nApply effects while loading audio from S3\nclient = boto3.client('s3')\nresponse = client.get_object(Bucket=S3_BUCKET, Key=S3_KEY)\nwaveform, sample_rate = torchaudio.sox_effects.apply_effect_file(\n response['Body'],\n [[\"lowpass\", \"-1\", \"300\"], [\"rate\", \"8000\"]])\nApply GSM codec to Tensor\nencoded = torchaudio.functional.apply_codec(\n waveform, sample_rate, format=\"gsm\")\n```\nCheck out the revamped audio preprocessing tutorial, Audio Manipulation with TorchAudio.\n[Stable] Switch to CMake-based build", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "[Stable] Switch to CMake-based build\nIn the previous version of TorchAudio, it was utilizing CMake to build third party dependencies. Starting in 0.8.0, TorchaAudio uses CMake to build its C++ extension. This will open the door to integrate TorchAudio in non-Python environments (such as C++ applications and mobile). We will continue working on adding example applications and mobile integrations.\n[Beta] Improved and New Audio Transforms", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "[Beta] Improved and New Audio Transforms\nWe have added two widely requested operators in this release: the SpectralCentroid transform and the Kaldi Pitch feature extraction (detailed in \"A pitch extraction algorithm tuned for automatic speech recognition\"). We\u2019ve also exposed a normalization method to Mel transforms, and additional STFT arguments to Spectrogram. We would like to ask our community to continue to raise feature requests for core audio processing features like these!\nCommunity Contributions", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "Community Contributions\nWe had more contributions from the open source community in this release than ever before, including several completely new features. We would like to extend our sincere thanks to the community. Please check out the newly added CONTRIBUTING.md for ways to contribute code, and remember that reporting bugs and requesting features are just as valuable. 
We will continue posting well-scoped work items as issues labeled \u201chelp-wanted\u201d and \u201ccontributions-welcome\u201d for anyone who would like to contribute code, and are happy to coach new contributors through the contribution process.\nFind the full TorchAudio release notes here.\nTorchText 0.9.0\n[Beta] Dataset API Updates", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "TorchText 0.9.0\n[Beta] Dataset API Updates\nIn this release, we are updating TorchText\u2019s dataset API to be compatible with PyTorch data utilities, such as DataLoader, and are deprecating TorchText\u2019s custom data abstractions such as Field. The updated datasets are simple string-by-string iterators over the data. For guidance about migrating from the legacy abstractions to use modern PyTorch data utilities, please refer to our migration guide.\nThe text datasets listed below have been updated as part of this work. For examples of how to use these datasets, please refer to our end-to-end text classification tutorial.\n* Language modeling: WikiText2, WikiText103, PennTreebank, EnWik9", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "\nText classification: AG_NEWS, SogouNews, DBpedia, YelpReviewPolarity, YelpReviewFull, YahooAnswers, AmazonReviewPolarity, AmazonReviewFull, IMDB\nSequence tagging: UDPOS, CoNLL2000Chunking\nTranslation: IWSLT2016, IWSLT2017\nQuestion answer: SQuAD1, SQuAD2\n\nFind the full TorchText release notes here.\n[Stable] TorchCSPRNG 0.2.0\nWe released TorchCSPRNG in August 2020, a PyTorch C++/CUDA extension that provides cryptographically secure pseudorandom number generators for PyTorch. Today, we are releasing the 0.2.0 version and designating the library as stable. This release includes a new API for encrypt/decrypt with AES128 ECB/CTR as well as CUDA 11 and Windows CUDA support.\nFind the full TorchCSPRNG release notes here.", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "Thanks for reading, and if you are excited about these updates and want to participate in the future of PyTorch, we encourage you to join the discussion forums and open GitHub issues.\nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-1.8-new-library-releases/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'An overview of the ML models introduced in TorchVision v0.9'\nauthor: Team PyTorch \n\nTorchVision v0.9 has been released and it is packed with numerous new Machine Learning models and features, speed improvements and bug fixes. In this blog post, we provide a quick overview of the newly introduced ML models and discuss their key features and characteristics.\nClassification", "source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"} {"text": "\nMobileNetV3 Large & Small: These two classification models are optimized for Mobile use-cases and are used as backbones on other Computer Vision tasks. The implementation of the new MobileNetV3 architecture supports the Large & Small variants and the depth multiplier parameter as described in the original paper. We offer pre-trained weights on ImageNet for both Large and Small networks with depth multiplier 1.0 and resolution 224x224. 
Our previous training recipes have been updated and can be used to easily train the models from scratch (shoutout to Ross Wightman for inspiring some of our training configuration). The Large variant offers a competitive accuracy comparing to ResNet50 while being over 6x faster on CPU, meaning that it is a good candidate for applications where speed is important. For applications where speed is critical, one can sacrifice further accuracy for speed and use the Small variant which is 15x faster than ResNet50.\n", "source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"} {"text": "\nQuantized MobileNetV3 Large: The quantized version of MobilNetV3 Large reduces the number of parameters by 45% and it is roughly 2.5x faster than the non-quantized version while remaining competitive in terms of accuracy. It was fitted on ImageNet using Quantization Aware Training by iterating on the non-quantized version and it can be trained from scratch using the existing reference scripts.\n\nUsage:\nmodel = torchvision.models.mobilenet_v3_large(pretrained=True)\n# model = torchvision.models.mobilenet_v3_small(pretrained=True)\n# model = torchvision.models.quantization.mobilenet_v3_large(pretrained=True)\nmodel.eval()\npredictions = model(img)\n\nObject Detection", "source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"} {"text": "predictions = model(img)\n### Object Detection\n* **Faster R-CNN MobileNetV3-Large FPN:** Combining the MobileNetV3 Large backbone with a Faster R-CNN detector and a Feature Pyramid Network leads to a highly accurate and fast object detector. The pre-trained weights are fitted on COCO 2017 using the provided reference [scripts](https://github.com/pytorch/vision/tree/master/references/detection#faster-r-cnn-mobilenetv3-large-fpn) and the model is 5x faster on CPU than the equivalent ResNet50 detector while remaining competitive in [terms of accuracy](https://github.com/pytorch/vision/blob/master/docs/source/models.rst#object-detection-instance-segmentation-and-person-keypoint-detection). \n* **Faster R-CNN MobileNetV3-Large 320 FPN:** This is an iteration of the previous model that uses reduced resolution (min_size=320 pixel) and sacrifices accuracy for speed. It is 25x faster on CPU than the equivalent ResNet50 detector and thus it is good for real mobile use-cases.\n\n**Usage:**\n", "source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"} {"text": "Usage:\nmodel = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True)\n# model = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=True)\nmodel.eval()\npredictions = model(img)\n\nSemantic Segmentation\n\nDeepLabV3 with Dilated MobileNetV3 Large Backbone: A dilated version of the MobileNetV3 Large backbone combined with DeepLabV3 helps us build a highly accurate and fast semantic segmentation model. The pre-trained weights are fitted on COCO 2017 using our standard training recipes. 
The final model has the same accuracy as the FCN ResNet50 but it is 8.5x faster on CPU and thus making it an excellent replacement for the majority of applications.\n", "source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"} {"text": "\nLite R-ASPP with Dilated MobileNetV3 Large Backbone: We introduce the implementation of a new segmentation head called Lite R-ASPP and combine it with the dilated MobileNetV3 Large backbone to build a very fast segmentation model. The new model sacrifices some accuracy to achieve a 15x speed improvement comparing to the previously most lightweight segmentation model which was the FCN ResNet50.\n\nUsage:\nmodel = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained=True)\n# model = torchvision.models.segmentation.lraspp_mobilenet_v3_large(pretrained=True)\nmodel.eval()\npredictions = model(img)\n\nIn the near future we plan to publish an article that covers the details of how the above models were trained and discuss their tradeoffs and design choices. Until then we encourage you to try out the new models and provide your feedback.", "source": "https://pytorch.org/blog/ml-models-torchvision-v0.9/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Introducing nvFuser, a deep learning compiler for PyTorch'\nauthor: Christian Sarofeen, Piotr Bialecki, Jie Jiang, Kevin Stephano, Masaki Kozuki, Neal Vaidya, Stas Bekman\nfeatured-img: \"/assets/images/introducing-nvfuser-a-deep-learning-compiler-for-pytorch-1.png\"\n\nnvFuser is a Deep Learning Compiler for NVIDIA GPUs that automatically just-in-time compiles fast and flexible kernels to reliably accelerate users' networks. It provides significant speedups for deep learning networks running on Volta and later CUDA accelerators by generating fast custom \u201cfusion\u201d kernels at runtime. nvFuser is specifically designed to meet the unique requirements of the PyTorch community, and it supports diverse network architectures and programs with dynamic inputs of varying shapes and strides.", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "In this blog post we\u2019ll describe nvFuser and how it\u2019s used today, show the significant performance improvements it can obtain on models from HuggingFace and TIMM, and look ahead to nvFuser in PyTorch 1.13 and beyond. If you would like to know more about how and why fusion improves the speed of training for Deep Learning networks, please see our previous talks on nvFuser from GTC 2022 and GTC 2021.", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "nvFuser relies on a graph representation of PyTorch operations to optimize and accelerate. Since PyTorch has an eager execution model, the PyTorch operations users are running are not directly accessible as a whole program that can be optimized by a system like nvFuser. Therefore users must utilize systems built on top of nvFuser which are capable of capturing users programs and translating them into a form that is optimizable by nvFuser. These higher level systems then pass these captured operations to nvFuser, so that nvFuser can optimize the execution of the user\u2019s script for NVIDIA GPUs. 
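As a rough illustration of what handing a program to nvFuser can look like (our own sketch, not from the original post; it assumes a CUDA build of PyTorch where nvFuser is the active TorchScript fuser, as in recent releases), scripting a small chain of pointwise operations gives the capture system a whole region it can fuse into a single kernel:

```python
import torch

@torch.jit.script
def bias_silu(x, bias):
    # A chain of pointwise ops -- the kind of region nvFuser compiles into one fused kernel.
    y = x + bias
    return y * torch.sigmoid(y)

x = torch.randn(2048, 1024, device="cuda", dtype=torch.float16)
bias = torch.randn(1024, device="cuda", dtype=torch.float16)

# The first couple of calls profile shapes and compile; later calls reuse the generated kernel.
for _ in range(3):
    out = bias_silu(x, bias)
```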
There are three systems that capture, translate, and pass user programs to nvFuser for optimization:\n\nTorchScript jit.script\n", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "\nThis system directly parses sections of an annotated python script to translate into its own representation what the user is doing. This system then applies its own version of auto differentiation to the graph, and passes sections of the subsequent forward and backwards graphs to nvFuser for optimization.\nFuncTorch\nThis system doesn\u2019t directly look at the user python script, instead inserting a mechanism that captures PyTorch operations as they\u2019re being run. We refer to this type of capture system as \u201ctrace program acquisition\u201d, since we\u2019re tracing what has been performed. FuncTorch doesn\u2019t perform its own auto differentiation \u2013 it simply traces PyTorch\u2019s autograd directly to get backward graphs.\nTorchDynamo\n", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "\nTorchDynamo is another program acquisition mechanism built on top of FuncTorch. TorchDynamo parses the Python bytecode produced from the user script in order to select portions to trace with FuncTorch. The benefit of TorchDynamo is that it\u2019s able to apply decorators to a user\u2019s script, effectively isolating what should be sent to FuncTorch, making it easier for FuncTorch to successfully trace complex Python scripts.\n\nThese systems are available for users to interact with directly while nvFuser automatically and seamlessly optimizes performance critical regions of the user\u2019s code. These systems automatically send parsed user programs to nvFuser so nvFuser can:\n\nAnalyze the operations being run on GPUs\nPlan parallelization and optimization strategies for those operations\nApply those strategies in generated GPU code\nRuntime-compile the generated optimized GPU functions\nExecute those CUDA kernels on subsequent iterations\n", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "It is important to note nvFuser does not yet support all PyTorch operations, and there are still some scenarios that are actively being improved in nvFuser that are discussed herein. However, nvFuser does support many DL performance critical operations today, and the number of supported operations will grow in subsequent PyTorch releases. nvFuser is capable of generating highly specialized and optimized GPU functions for the operations it does have support for. This means nvFuser is able to power new PyTorch systems like TorchDynamo and FuncTorch to combine the flexibility PyTorch is known for with unbeatable performance.\nnvFuser Performance", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "nvFuser Performance\nBefore getting into how to use nvFuser, in this section we\u2019ll show the improvements in training speed nvFuser provides for a variety of models from the HuggingFace Transformers and PyTorch Image Models (TIMM) repositories and we will discuss current gaps in nvFuser performance that are under development today. 
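For reference, the FuncTorch path used for these measurements can be sketched roughly as follows; this is our own illustrative example and assumes the functorch package that ships alongside recent PyTorch releases, with its memory_efficient_fusion helper:

```python
import torch
from functorch.compile import memory_efficient_fusion  # assumed helper from the functorch package

def bias_gelu_residual(x, bias, residual):
    # A memory-bound chain of pointwise ops, a typical nvFuser target inside transformer blocks.
    return torch.nn.functional.gelu(x + bias) + residual

# Traces the forward and backward graphs and hands them to nvFuser for compilation.
fused = memory_efficient_fusion(bias_gelu_residual)

x = torch.randn(2048, 1024, device="cuda", requires_grad=True)
bias = torch.randn(1024, device="cuda", requires_grad=True)
residual = torch.randn(2048, 1024, device="cuda", requires_grad=True)

out = fused(x, bias, residual)
out.sum().backward()
```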
All performance numbers in this section were taken using an NVIDIA A100 40GB GPU, and used either FuncTorch alone or Functorch with TorchDynamo.\nHuggingFace Transformer Benchmarks\nnvFuser can dramatically accelerate training of HuggingFace Transformers when combined with another important optimization (more on that in a moment). Performance improvements can be seen in Figure 1 to range between 1.12x and 1.50x across a subset of popular HuggingFace Transformer networks.\n", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\n\nFigure 1: Performance gains of 8 training scenarios from HuggingFace\u2019s Transformer repository. First performance boost in the dark green is due to replacing the optimizer with an NVIDIA Apex fused AdamW optimizer. The light green is due to adding nvFuser. Models were run with batch size and sequence lengths of [64, 128], [8, 512], [2, 1024], [64, 128], [8, 512], [8, src_seql=512, tgt_seql=128], [8, src_seql=1024, tgt_seql=128], and [8, 512] respectively. All networks were run with Automatic Mixed Precision (AMP) enabled with dtype=float16.\n", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "\nWhile these speedups are significant, it\u2019s important to understand that nvFuser doesn\u2019t (yet) automate everything about running networks quickly. For HuggingFace Transformers, for example, it was important to use the AdamW fused optimizer from NVIDIA\u2019s Apex repository as the optimizer otherwise consumed a large portion of runtime. Using the fused AdamW optimizer to make the network faster exposes the next major performance bottleneck \u2014 memory bound operations. These operations are optimized by nvFuser, providing another large performance boost. With the fused optimizer and nvFuser enabled, the training speed of these networks improved between 1.12x to 1.5x.", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "HuggingFace Transformer models were run with the torch.amp module. (\u201camp\u201d stands for Automated Mixed Precision, see the \u201cWhat Every User Should Know about Mixed Precision in PyTorch\u201d blog post for details.) An option to use nvFuser was added to HuggingFace\u2019sTrainer. If you have TorchDynamo installed you can activate it to enable nvFuser in HuggingFace by passing torchdynamo = \u2018nvfuser\u2019 to the Trainer class.\nnvFuser has great support for normalization kernels and related fusions frequently found in Natural Language Processing (NLP) models, and it is recommended users try nvFuser in their NLP workloads.\nPyTorch Image Models (TIMM) Benchmarks", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "PyTorch Image Models (TIMM) Benchmarks\nnvFuser, can also significantly reduce the training time of TIMM networks, up to over 1.3x vs. eager PyTorch, and up to 1.44x vs. eager PyTorch when combined with the torch.amp module. Figure 1 shows nvFuser\u2019s speedup without torch.amp, and when torch.amp is used with the NHWC (\u201cchannels last\u201d) and NCHW (\u201cchannels first\u201d) formats. 
nvFuser is integrated in TIMM through FuncTorch tracing directly (without TorchDynamo) and can be used by adding the --aot-autograd command line argument when running the TIMM benchmark or training script.\n\n\n\n", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "\n\nFigure 1: The Y-axis is the performance gain nvFuser provides over not using nvFuser. A value of 1.0 means no change in perf, 2.0 would mean nvFuser is twice as fast, 0.5 would mean nvFuser takes twice the time to run. Square markers are with float16 Automatic Mixed Precision (AMP) and channels first contiguous inputs, circle markers are float32 inputs, and triangles are with float16 AMP and channels last contiguous inputs. Missing data points are due to an error being encountered when tracing.\n", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "When running with float32 precision nvFuser provides a 1.12x geometric mean (\u201cgeomean\u201d) speedup on TIMM networks, and when running with torch.amp and \u201cchannels first\u201d it provides a 1.14x geomean speedup. However, nvFuser currently doesn\u2019t speedup torch.amp and \u201cchannels last\u201d training (a .9x geomean regression), so we recommend not using it in those cases. We are actively working on improving \u201cchannels last\u201d performance now, and soon we will have two additional optimization strategies (grid persistent optimizations for channels-last normalizations and fast transposes) which we expect will provide speedups comparable to \u201cchannels first\u201d in PyTorch version 1.13 and later. Many of nvFuser\u2019s optimizations can also help in inference cases. However, in PyTorch when running inference on small batch sizes, the performance is typically limited by CPU overhead, which nvFuser can\u2019t completely remove or fix. Therefore, typically the most important optimization for inference is to enable CUDA Graphs when possible. Once CUDA Graphs is enabled, then it can also be beneficial to also enable fusion through nvFuser. Performance of inference is shown in Figure 2 and Figure 3. Inference is only run with float16 AMP as it is uncommon to run inference workloads in full float32 precision.", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\n\n\n\n\nFigure 2: Performance gains of enabling CUDA Graphs, and CUDA Graphs with nvFuser compared to the performance of native PyTorch without CUDA Graphs and nvFuser across TIMM models with float16 AMP, channels first inputs, and a batch size of 1 and 8 respectively. There is a geomean speedup of 2.74x with CUDA Graphs and 2.71x with CUDA Graphs + nvFuser respectively. nvFuser provides a maximum regression of 0.68x and a maximum performance gain of 2.74x (relative to CUDA Graphs without nvFuser). Performance gain is measured relative to the average time per iteration PyTorch takes without CUDA Graphs and without nvFuser. 
Models are sorted by how much additional performance nvFuser is providing.\n\n", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\n\n\n\n\n\nFigure 3: Performance gains of enabling CUDA Graphs, and CUDA Graphs with nvFuser compared to the performance of native PyTorch without CUDA Graphs and nvFuser across TIMM models with AMP, channels last inputs, and a batch size of 1 and 8 respectively. There is a geomean speedup of 2.29x with CUDA Graphs and 2.95x with CUDA Graphs + nvFuser respectively. nvFuser provides a maximum regression of 0.86x and a maximum performance gain of 3.82x (relative to CUDA Graphs without nvFuser). Performance gain is measured relative to the average time per iteration PyTorch takes without CUDA Graphs and without nvFuser. Models are sorted by how much additional performance nvFuser is providing.\n", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "\nSo far nvFuser performance has not been tuned for inference workloads so its performance benefit is not consistent across all cases. However, there are still many models that benefit significantly from nvFuser during inference and we encourage users to try nvFuser in inference workloads to see if you would benefit today. Performance of nvFuser in inference workloads will improve in the future and if you\u2019re interested in nvFuser in inference workloads please reach out to us on the PyTorch forums.\nGetting Started - Accelerate Your Scripts with nvFuser", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "We\u2019ve created a tutorial demonstrating how to take advantage of nvFuser to accelerate part of a standard transformer block, and how nvFuser can be used to define fast and novel operations. There are still some rough edges in nvFuser that we\u2019re working hard on improving as we\u2019ve outlined in this blog post. However we\u2019ve also demonstrated some great improvements for training speed on multiple networks in HuggingFace and TIMM and we expect there are opportunities in your networks where nvFuser can help today, and many more opportunities it will help in the future.\nIf you would like to learn more about nvFuser we recommend watching our presentations from NVIDIA\u2019s GTC conference GTC 2022 and GTC 2021.", "source": "https://pytorch.org/blog/introducing-nvfuser-a-deep-learning-compiler-for-pytorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Introducing PyTorch Profiler - the new and improved performance tool'\nauthor: Maxim Lukiyanov - Principal PM at Microsoft, Guoliang Hua - Principal Engineering Manager at Microsoft, Geeta Chauhan - Partner Engineering Lead at Facebook, Gisle Dankel - Tech Lead at Facebook\n\nAlong with PyTorch 1.8.1 release, we are excited to announce PyTorch Profiler \u2013 the new and improved performance debugging profiler for PyTorch. 
Developed as part of a collaboration between Microsoft and Facebook, the PyTorch Profiler is an open-source tool that enables accurate and efficient performance analysis and troubleshooting for large-scale deep learning models.", "source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"} {"text": "Analyzing and improving large-scale deep learning model performance is an ongoing challenge that grows in importance as the model sizes increase. For a long time, PyTorch users had a hard time solving this challenge due to the lack of available tools. There were standard performance debugging tools that provide GPU hardware level information but missed PyTorch-specific context of operations. In order to recover missed information, users needed to combine multiple tools together or manually add minimum correlation information to make sense of the data. There was also the autograd profiler (torch.autograd.profiler) which can capture information about PyTorch operations but does not capture detailed GPU hardware-level information and cannot provide support for visualization.", "source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"} {"text": "The new PyTorch Profiler (torch.profiler) is a tool that brings both types of information together and then builds experience that realizes the full potential of that information. This new profiler collects both GPU hardware and PyTorch related information, correlates them, performs automatic detection of bottlenecks in the model, and generates recommendations on how to resolve these bottlenecks. All of this information from the profiler is visualized for the user in TensorBoard. The new Profiler API is natively supported in PyTorch and delivers the simplest experience available to date where users can profile their models without installing any additional packages and see results immediately in TensorBoard with the new PyTorch Profiler plugin. Below is the screenshot of PyTorch Profiler - automatic bottleneck detection. \n\n\n\nGetting started", "source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"} {"text": "\nGetting started\nPyTorch Profiler is the next version of the PyTorch autograd profiler. It has a new module namespace torch.profiler but maintains compatibility with autograd profiler APIs. The Profiler uses a new GPU profiling engine, built using Nvidia CUPTI APIs, and is able to capture GPU kernel events with high fidelity. 
To profile your model training loop, wrap the code in the profiler context manager as shown below.\n with torch.profiler.profile(\n schedule=torch.profiler.schedule(\n wait=2,\n warmup=2,\n active=6,\n repeat=1),\n on_trace_ready=tensorboard_trace_handler,\n with_stack=True\n) as profiler:\n for step, data in enumerate(trainloader, 0):\n print(\"step:{}\".format(step))\n inputs, labels = data[0].to(device=device), data[1].to(device=device)\n\n outputs = model(inputs)\n loss = criterion(outputs, labels)\n\n optimizer.zero_grad()\n loss.backward()\n optimizer.step()\n profiler.step()\n", "source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"} {"text": "profiler.step()\nThe ```schedule``` parameter allows you to limit the number of training steps included in the profile to reduce the amount of data collected and simplify visual analysis by focusing on what\u2019s important. The ```tensorboard_trace_handler``` automatically saves profiling results to disk for analysis in TensorBoard.\n\nTo view results of the profiling session in TensorBoard, install PyTorch Profiler TensorBoard Plugin package.\n\n```python\npip install torch_tb_profiler\n\nVisual Studio Code Integration", "source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"} {"text": "```\nVisual Studio Code Integration\nMicrosoft Visual Studio Code is one of the most popular code editors for Python developers and data scientists. The Python extension for VS Code recently added the integration of TensorBoard into the code editor, including support for the PyTorch Profiler. Once you have VS Code and the Python extension installed, you can quickly open the TensorBoard Profiler plugin by launching the Command Palette using the keyboard shortcut CTRL + SHIFT + P (CMD + SHIFT + P on a Mac) and typing the \u201cLaunch TensorBoard\u201d command.\n\n\n", "source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"} {"text": "\nThis integration comes with a built-in lifecycle management feature. VS Code will install the TensorBoard package and the PyTorch Profiler plugin package (coming in mid-April) automatically if you don\u2019t have them on your system. VS Code will also launch TensorBoard process for you and automatically look for any TensorBoard log files within your current directory. When you\u2019re done, just close the tab and VS Code will automatically close the process. No more Terminal windows running on your system to provide a backend for the TensorBoard UI! Below is PyTorch Profiler Trace View running in TensorBoard.\n\n\n\nLearn more about TensorBoard support in VS Code in this blog.\nFeedback", "source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"} {"text": "Feedback\nReview PyTorch Profiler documentation, give Profiler a try and let us know about your experience. 
Provide your feedback on PyTorch Discussion Forum or file issues on PyTorch GitHub.", "source": "https://pytorch.org/blog/introducing-pytorch-profiler-the-new-and-improved-performance-tool/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"How Disney Improved Activity Recognition Through Multimodal Approaches with PyTorch\"\nauthor: Monica Alfaro, Albert Aparicio, Francesc Guitart, Marc Junyent, Pablo Pernias, Marcel Porta, and Miquel \u00c0ngel Farr\u00e9 (former Senior Technology Manager)\nfeatured-img: 'assets/images/disney_media_logo.jpg'\n\nIntroduction\nAmong the many things Disney Media & Entertainment Distribution (DMED) is responsible for, is the management and distribution of a huge array of media assets including news, sports, entertainment and features, episodic programs, marketing and advertising and more.\n\n\n\nOur team focuses on media annotation as part of DMED Technology\u2019s content platforms group. In our day-to-day work, we automatically analyze a variety of content that constantly challenges the efficiency of our machine learning workflow and the accuracy of our models.", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "Several of our colleagues recently discussed the workflow efficiencies that we achieved by switching to an end-to-end video analysis pipeline using PyTorch, as well as how we approach animated character recognition. We invite you to read more about both in this previous post.\nWhile the conversion to an end-to-end PyTorch pipeline is a solution that any company might benefit from, animated character recognition was a uniquely-Disney concept and solution.\nIn this article we will focus on activity recognition, which is a general challenge across industries \u2014 but with some specific opportunities when leveraged in the media production field, because we can combine audio, video, and subtitles to provide a solution.\nExperimenting with Multimodality", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "Experimenting with Multimodality\nWorking on a multimodal problem adds more complexity to the usual training pipelines. Having multiple information modes for each example means that the multimodal pipeline has to have specific implementations to process each mode in the dataset. Usually after this processing step, the pipeline has to merge or fuse the outputs.", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "Our initial experiments in multimodality were completed using the MMF framework. MMF is a modular framework for vision and language multimodal research. MMF contains reference implementations of state-of-the-art vision and language models and has also powered multiple research projects at Meta AI Research (as seen in this poster presented in PyTorch Ecosystem Day 2020). 
Along with the recent release of TorchMultimodal, a PyTorch library for training state-of-the-art multimodal models at scale, MMF highlights the growing interest in Multimodal understanding.\nMMF tackles this complexity with modular management of all the elements of the pipeline through a wide set of different implementations for specific modules, ranging from the processing of the modalities to the fusion of the processed information.", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "In our scenario, MMF was a great entry point to experiment with multimodality. It allowed us to iterate quickly by combining audio, video and closed captioning and experiment at different levels of scale with certain multimodal models, shifting from a single GPU to TPU Pods.\nMultimodal Transformers\nWith a workbench based on MMF, our initial model was based on a concatenation of features from each modality evolving to a pipeline that included a Transformer-based fusion module to combine the different input modes.\nSpecifically, we made use of the fusion module called MMFTransformer, developed in collaboration with the Meta AI Research team. This is an implementation based on VisualBERT for which the necessary modifications were added to be able to work with text, audio and video.\nDespite having decent results with the out-of-box implementation MMFTransformer, we were still far from our goal, and the Transformers-based models required more data than we had available.", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "Searching for less data-hungry solutions\nSearching for less data-hungry solutions, our team started studying MLP-Mixer. This new architecture has been proposed by the Google Brain team and it provides an alternative to well established de facto architectures like convolutions or self-attention for computer vision tasks.\nMLP-Mixer\nThe core idea behind mixed variations consists of replacing the convolutions or self-attention mechanisms used in transformers with Multilayer Perceptrons. 
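To make that idea concrete, a single Mixer block can be written in a few lines of PyTorch. The sketch below is our own rendering of the block described in the MLP-Mixer paper (not Disney's production code): one MLP mixes information across the token dimension and a second MLP mixes information across the channel dimension.

```python
import torch
from torch import nn

class MixerBlock(nn.Module):
    """Token-mixing MLP followed by channel-mixing MLP, each with a residual connection."""
    def __init__(self, num_tokens: int, dim: int, hidden: int = 256):
        super().__init__()
        self.token_norm = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(num_tokens, hidden), nn.GELU(), nn.Linear(hidden, num_tokens))
        self.channel_norm = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x):                                  # x: (batch, tokens, dim)
        y = self.token_norm(x).transpose(1, 2)             # mix across the token axis
        x = x + self.token_mlp(y).transpose(1, 2)
        return x + self.channel_mlp(self.channel_norm(x))  # mix across the channel axis

# e.g. a sequence of 16 per-modality embeddings of size 512
x = torch.randn(8, 16, 512)
out = MixerBlock(num_tokens=16, dim=512)(x)
```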
This change in architecture favors the performance of the model in high data regimes (especially with respect to the Transformers), while also opening some questions regarding the inductive biases hidden in the convolutions and the self-attention layers.\nThose proposals perform great in solving image classification tasks by splitting the image in chunks, flattening those chunks into 1D vectors and passing them through a sequence of Mixer Layers.\n", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\nInspired by the advantages of Mixer based architectures, our team searched for parallelisms with the type of problems we try to solve in video classification: specifically, instead of a single image, we have a set of frames that need to be classified, along with audio and closed captioning in the shape of new modalities.\nActivity Recognition reinterpreting the MLP-Mixer\nOur proposal takes the core idea of the MLP-Mixer \u2014 using multiple multi-layer perceptrons on a sequence and transposed sequence and extends it into a Multi Modal framework that allows us to process video, audio & text with the same architecture.", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "For each of the modalities, we use different extractors that will provide embeddings describing the content. Given the embeddings of each modality, the MLP-Mixer architecture solves the problem of deciding which of the modalities might be the most important, while also weighing how much each modality contributes to the final labeling.\nFor example, when it comes to detecting laughs, sometimes the key information is in audio or in the frames, and in some of the cases we have a strong signal in the closed caption.\nWe tried processing each frame separately with a ResNet34 and getting a sequence of embeddings and by using a video-specific model called R3D, both pre-trained on ImageNet and Kinetics400 respectively.\n\n\n\nTo process the audio, we use the pretrained ResNet34, and we remove the final layers to be able to extract 2D embeddings from the audio spectrograms (for 224x224 images we end up with 7x7 embeddings).", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\nFor closed captioning, we are using a pre-trained BERT-large, with all layers frozen, except for the Embeddings & LayerNorms.\n\n\n\nOnce we have extracted the embedding from each modality, we concatenate them into a single sequence and pass it through a set of MLP-Mixer blocks; next we use average pooling & a classification head to get predictions.\n\n\n\nOur experiments have been performed on a custom, manually labeled dataset for activity recognition with 15 classes, which we know from experiments are hard and cannot all be predicted accurately using a single modality.\nThese experiments have shown a significant increase in performance using our approach, especially in a low/mid-data regime (75K training samples).", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "When it comes to using only Text and Audio, our experiments showed a 15 percent improvement in accuracy over using a classifier on top of the features extracted 
by state-of-the-art backbones.\nUsing Text, Audio and Video we have seen a 17 percent improvement in accuracy over using Meta AIFacebook\u2019s MMF Framework, which uses a VisualBERT-like model to combine modalities using more powerful state of the art backbones.\nCurrently, we extended the initial model to cover up to 55 activity classes and 45 event classes. One of the challenges we expect to improve upon in the future is to include all activities and events, even those that are less frequent.\nInterpreting the MLP-Mixer mode combinations\nAn MLP-Mixer is a concatenation of MultiLayer Perceptrons. This can be, very roughly, approximated to a linear operation, in the sense that, once trained, the weights are fixed and the input will directly affect the output.", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "Once we assume that approximation, we also assume that for an input consisting of NxM numbers, we could find a NxM matrix that (when multiplied elementwise) could approximate the predictions of the MLP-Mixer for a class.\n\n\n\nWe will call this matrix a stencil, and if we have access to it, we can find what parts of the input embeddings are responsible for a specific prediction.\nYou can think of it as a punch card with holes in specific positions. Only information in those positions will pass and contribute to a specific prediction. So we can measure the intensity of the input at those positions.\n\n\n", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "\nOf course, this is an oversimplification, and there won't exist a unique stencil that perfectly represents all of the contributions of the input to a class (otherwise that would mean that the problem could be solved linearly). So this should be used for visualization purposes only, not as an accurate predictor.\nOnce we have a set of stencils for each class, we can effortlessly measure input contribution without relying on any external visualization techniques.\nTo find a stencil, we can start from a \"random noise\" stencil and optimize it to maximize the activations for a specific class by just back-propagating through the MLP-Mixer.\n\n\n\nBy doing this we can end up with many valid stencils, and we can reduce them to a few by using K-means to cluster them into similar stencils and averaging each cluster.\nUsing the Mixer to get the best of each world", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "Using the Mixer to get the best of each world\nMLP-Mixer, used as an image classification model without convolutional layers, requires a lot of data, since the lack of inductive bias \u2013 one of the model's good points overall \u2013 is a weakness when it comes to working in low data domains.\nWhen used as a way to combine information previously extracted by large pretrained backbones (as opposed to being used as a full end-to-end solution), they shine. The Mixer\u2019s strength lies in finding temporal or structural coherence between different inputs. 
For example, in video-related tasks we could extract embeddings from the frames using a powerful, pretrained model that understands what is going on at frame level and use the mixer to make sense of it in a sequential manner.", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "This way of using the Mixer allows us to work with limited amounts of data and still get better results than what was achieved with Transformers. This is because Mixers seem to be more stable during training and seem to pay attention to all the inputs, while Transformers tend to collapse and pay attention only to some modalities/parts of the sequence.\nAcknowledgements: We would like to thank the Meta AI Research and Partner Engineering teams for this collaboration.", "source": "https://pytorch.org/blog/how-disney-improved-activity-recognition-with-multimodal-approaches-with-pytorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Practical Quantization in PyTorch'\nauthor: Suraj Subramanian, Mark Saroufim, Jerry Zhang\nfeatured-img: ''\n\nQuantization is a cheap and easy way to make your DNN run faster and with lower memory requirements. PyTorch offers a few different approaches to quantize your model. In this blog post, we'll lay a (quick) foundation of quantization in deep learning, and then take a look at what each technique looks like in practice. Finally we'll end with recommendations from the literature for using quantization in your workflows.\n\n\n\n Fig 1. PyTorch <3 Quantization\n\nContents\n* TOC\nFundamentals of Quantization\n\nIf someone asks you what time it is, you don't respond \"10:14:34:430705\", but you might say \"a quarter past 10\".\n\nQuantization has roots in information compression; in deep networks it refers to reducing the numerical precision of its weights and/or activations.", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "Overparameterized DNNs have more degrees of freedom and this makes them good candidates for information compression [[1]]. When you quantize a model, two things generally happen - the model gets smaller and runs with better efficiency. Hardware vendors explicitly allow for faster processing of 8-bit data (than 32-bit data) resulting in higher throughput. A smaller model has lower memory footprint and power consumption [[2]], crucial for deployment at the edge.\nMapping function\nThe mapping function is what you might guess - a function that maps values from floating-point to integer space. A commonly used mapping function is a linear transformation given by Q(r) = round(r/S + Z), where r is the input and S, Z are the quantization parameters.", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "To reconvert to floating point space, the inverse function is given by r~ = S * (Q(r) - Z). \nr~ is only an approximation of r, and their difference constitutes the quantization error.\nQuantization Parameters\nThe mapping function is parameterized by the scaling factor S and zero-point Z. \nS is simply the ratio of the input range to the output range: S = (beta - alpha) / (beta_q - alpha_q)\n", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "where [alpha, beta] is the clipping range of the input, i.e. the boundaries of permissible inputs, and [alpha_q, beta_q] is the range in quantized output space that it is mapped to. 
For 8-bit quantization, the output range spans at most 2^8 = 256 values, e.g. [-128, 127] for signed or [0, 255] for unsigned integers.\nZ acts as a bias to ensure that a 0 in the input space maps perfectly to a 0 in the quantized space. \nCalibration", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "Calibration\nThe process of choosing the input clipping range is known as calibration. The simplest technique (also the default in PyTorch) is to record the running minimum and maximum values and assign them to alpha and beta. TensorRT also uses entropy minimization (KL divergence), mean-square-error minimization, or percentiles of the input range.", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "In PyTorch, Observer modules (docs, code) collect statistics on the input values and calculate the qparams S and Z. Different calibration schemes result in different quantized outputs, and it's best to empirically verify which scheme works best for your application and architecture (more on that later).\n```python\nfrom torch.quantization.observer import MinMaxObserver, MovingAverageMinMaxObserver, HistogramObserver\nC, L = 3, 4\nnormal = torch.distributions.normal.Normal(0,1)\ninputs = [normal.sample((C, L)), normal.sample((C, L))]\nprint(inputs)\n>>>>>\n[tensor([[-0.0590, 1.1674, 0.7119, -1.1270],\n[-1.3974, 0.5077, -0.5601, 0.0683],\n[-0.0929, 0.9473, 0.7159, -0.4574]]]),\ntensor([[-0.0236, -0.7599, 1.0290, 0.8914],", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "tensor([[-0.0236, -0.7599, 1.0290, 0.8914],\n[-1.1727, -1.2556, -0.2271, 0.9568],\n[-0.2500, 1.4579, 1.4707, 0.4043]])]\nobservers = [MinMaxObserver(), MovingAverageMinMaxObserver(), HistogramObserver()]\nfor obs in observers:\n for x in inputs: obs(x) \n print(obs.__class__.__name__, obs.calculate_qparams())\n>>>>>\nMinMaxObserver (tensor([0.0112]), tensor([124], dtype=torch.int32))\nMovingAverageMinMaxObserver (tensor([0.0101]), tensor([139], dtype=torch.int32))\nHistogramObserver (tensor([0.0100]), tensor([106], dtype=torch.int32))\n```\nAffine and Symmetric Quantization Schemes\nAffine or asymmetric quantization schemes assign the input range to the min and max observed values. Affine schemes generally offer tighter clipping ranges and are useful for quantizing non-negative activations (you don't need the input range to contain negative values if your input tensors are never negative). The range is calculated as alpha = min(r), beta = max(r)", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": ". Affine quantization leads to more computationally expensive inference when used for weight tensors [[3]].\nSymmetric quantization schemes center the input range around 0, eliminating the need to calculate a zero-point offset. The range is calculated as \n-alpha = beta = max(|max(r)|, |min(r)|). 
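Putting the mapping function and the qparams together, a tiny round trip looks like this (the numbers are our own toy values, not from the original post):

```python
import torch

# Toy clipping range [-1.5, 1.5] mapped to the unsigned 8-bit range [0, 255]
r = torch.tensor([-1.0, -0.5, 0.0, 0.75, 1.5])
scale, zero_point = 3.0 / 255, 128

# Quantize: Q(r) = round(r / S + Z), clamped to the output range
q = torch.clamp(torch.round(r / scale + zero_point), 0, 255)

# Dequantize: r_hat = S * (Q(r) - Z); the gap between r and r_hat is the quantization error
r_hat = scale * (q - zero_point)
print(q, (r - r_hat).abs().max())

# torch.quantize_per_tensor performs the same mapping
qt = torch.quantize_per_tensor(r, scale=scale, zero_point=zero_point, dtype=torch.quint8)
print(qt.int_repr(), qt.dequantize())
```

With a symmetric scheme the same arithmetic applies, except that the zero-point is fixed and the clipping range is forced to stay centered around zero.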
For skewed signals (like non-negative activations) this can result in bad quantization resolution because the clipping range includes values that never show up in the input (see the pyplot below).\n```python\nact = torch.distributions.pareto.Pareto(1, 10).sample((1,1024))\nweights = torch.distributions.normal.Normal(0, 0.12).sample((3, 64, 7, 7)).flatten()\ndef get_symmetric_range(x):\n beta = torch.max(x.max(), x.min().abs())\n return -beta.item(), beta.item()\ndef get_affine_range(x):\n return x.min().item(), x.max().item()\ndef plot(plt, data, scheme):", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "def plot(plt, data, scheme):\n boundaries = get_affine_range(data) if scheme == 'affine' else get_symmetric_range(data)\n a, _, _ = plt.hist(data, density=True, bins=100)\n ymin, ymax = np.quantile(a[a>0], [0.25, 0.95])\n plt.vlines(x=boundaries, ls='--', colors='purple', ymin=ymin, ymax=ymax)\nfig, axs = plt.subplots(2,2)\nplot(axs[0, 0], act, 'affine')\naxs[0, 0].set_title(\"Activation, Affine-Quantized\")\nplot(axs[0, 1], act, 'symmetric')\naxs[0, 1].set_title(\"Activation, Symmetric-Quantized\")\nplot(axs[1, 0], weights, 'affine')\naxs[1, 0].set_title(\"Weights, Affine-Quantized\")\nplot(axs[1, 1], weights, 'symmetric')\naxs[1, 1].set_title(\"Weights, Symmetric-Quantized\")\nplt.show()\n```\n\n\n Fig 2. Clipping ranges (in purple) for affine and symmetric schemes\n\nIn PyTorch, you can specify affine or symmetric schemes while initializing the Observer. Note that not all observers support both schemes.", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "for qscheme in [torch.per_tensor_affine, torch.per_tensor_symmetric]:\n obs = MovingAverageMinMaxObserver(qscheme=qscheme)\n for x in inputs: obs(x)\n print(f\"Qscheme: {qscheme} | {obs.calculate_qparams()}\")\n\n# >>>>>\n# Qscheme: torch.per_tensor_affine | (tensor([0.0101]), tensor([139], dtype=torch.int32))\n# Qscheme: torch.per_tensor_symmetric | (tensor([0.0109]), tensor([128]))\n\nPer-Tensor and Per-Channel Quantization Schemes\nQuantization parameters can be calculated for the layer's entire weight tensor as a whole, or separately for each channel. In per-tensor, the same clipping range is applied to all the channels in a layer\n\n\n Fig 3. Per-Channel uses one set of qparams for each channel. Per-tensor uses the same qparams for the entire tensor.\n", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "\nFor weights quantization, symmetric-per-channel quantization provides better accuracies; per-tensor quantization performs poorly, possibly due to high variance in conv weights across channels from batchnorm folding [[3]].\nfrom torch.quantization.observer import MovingAveragePerChannelMinMaxObserver\nobs = MovingAveragePerChannelMinMaxObserver(ch_axis=0) # calculate qparams for all `C` channels separately\nfor x in inputs: obs(x)\nprint(obs.calculate_qparams())\n\n# >>>>>\n# (tensor([0.0090, 0.0075, 0.0055]), tensor([125, 187, 82], dtype=torch.int32))\n\nBackend Engine", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "\n### Backend Engine\nCurrently, quantized operators run on x86 machines via the [FBGEMM backend](https://github.com/pytorch/FBGEMM), or use [QNNPACK](https://github.com/pytorch/QNNPACK) primitives on ARM machines. Backend support for server GPUs (via TensorRT and cuDNN) is coming soon. 
Learn more about extending quantization to custom backends: [RFC-0019](https://github.com/pytorch/rfcs/blob/master/RFC-0019-Extending-PyTorch-Quantization-to-Custom-Backends.md).\n\n```python\nbackend = 'fbgemm' if x86 else 'qnnpack'\nqconfig = torch.quantization.get_default_qconfig(backend) \ntorch.backends.quantized.engine = backend\n\nQConfig\nThe QConfig (code, docs) NamedTuple stores the Observers and the quantization schemes used to quantize activations and weights.", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "Be sure to pass the Observer class (not the instance), or a callable that can return Observer instances. Use with_args() to override the default arguments.\nmy_qconfig = torch.quantization.QConfig(\n activation=MovingAverageMinMaxObserver.with_args(qscheme=torch.per_tensor_affine),\n weight=MovingAveragePerChannelMinMaxObserver.with_args(qscheme=torch.qint8)\n)\n# >>>>>\n# QConfig(activation=functools.partial(, qscheme=torch.per_tensor_affine){}, weight=functools.partial(, qscheme=torch.qint8){})\n\nIn PyTorch\nPyTorch allows you a few different ways to quantize your model depending on\n- if you prefer a flexible but manual, or a restricted automagic process (Eager Mode v/s FX Graph Mode)\n- if qparams for quantizing activations (layer outputs) are precomputed for all inputs, or calculated afresh with each input (static v/s dynamic),", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "\nif qparams are computed with or without retraining (quantization-aware training v/s post-training quantization)\n\nFX Graph Mode automatically fuses eligible modules, inserts Quant/DeQuant stubs, calibrates the model and returns a quantized module - all in two method calls - but only for networks that are symbolic traceable. The examples below contain the calls using Eager Mode and FX Graph Mode for comparison.\nIn DNNs, eligible candidates for quantization are the FP32 weights (layer parameters) and activations (layer outputs). Quantizing weights reduces the model size. Quantized activations typically result in faster inference.\nAs an example, the 50-layer ResNet network has ~26 million weight parameters and computes ~16 million activations in the forward pass.\nPost-Training Dynamic/Weight-only Quantization", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "Here the model's weights are pre-quantized; the activations are quantized on-the-fly (\"dynamic\") during inference. The simplest of all approaches, it has a one line API call in torch.quantization.quantize_dynamic. Currently only Linear and Recurrent (LSTM, GRU, RNN) layers are supported for dynamic quantization.\n(+) Can result in higher accuracies since the clipping range is exactly calibrated for each input [[1]].\n(+) Dynamic quantization is preferred for models like LSTMs and Transformers where writing/retrieving the model's weights from memory dominate bandwidths [[4]]. \n(-) Calibrating and quantizing the activations at each layer during runtime can add to the compute overhead. 
\n```python\nimport torch\nfrom torch import nn\ntoy model\nm = nn.Sequential(\n nn.Conv2d(2, 64, (8,)),\n nn.ReLU(),\n nn.Linear(16,10),\n nn.LSTM(10, 10))\nm.eval()\nEAGER MODE\nfrom torch.quantization import quantize_dynamic\nmodel_quantized = quantize_dynamic(", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "model_quantized = quantize_dynamic(\n model=m, qconfig_spec={nn.LSTM, nn.Linear}, dtype=torch.qint8, inplace=False\n)\nFX MODE\nfrom torch.quantization import quantize_fx\nqconfig_dict = {\"\": torch.quantization.default_dynamic_qconfig} # An empty key denotes the default applied to all modules\nmodel_prepared = quantize_fx.prepare_fx(m, qconfig_dict)\nmodel_quantized = quantize_fx.convert_fx(model_prepared)\n```\nPost-Training Static Quantization (PTQ)\nPTQ also pre-quantizes model weights but instead of calibrating activations on-the-fly, the clipping range is pre-calibrated and fixed (\"static\") using validation data. Activations stay in quantized precision between operations during inference. About 100 mini-batches of representative data are sufficient to calibrate the observers [[2]]. The examples below use random data in calibration for convenience - using that in your application will result in bad qparams.\n", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "\n\n\n Fig 4. Steps in Post-Training Static Quantization\n\nModule fusion combines multiple sequential modules (eg: [Conv2d, BatchNorm, ReLU]) into one. Fusing modules means the compiler needs to only run one kernel instead of many; this speeds things up and improves accuracy by reducing quantization error.\n(+) Static quantization has faster inference than dynamic quantization because it eliminates the float<->int conversion costs between layers. \n(-) Static quantized models may need regular re-calibration to stay robust against distribution-drift.\n```python\nStatic quantization of a model consists of the following steps:\nFuse modules\nInsert Quant/DeQuant Stubs\nPrepare the fused module (insert observers before and after layers)\nCalibrate the prepared module (pass it representative data)", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "Convert the calibrated module (replace with quantized version)\nimport torch\nfrom torch import nn\nimport copy\nbackend = \"fbgemm\" # running on a x86 CPU. Use \"qnnpack\" if running on ARM.\nmodel = nn.Sequential(\n nn.Conv2d(2,64,3),\n nn.ReLU(),\n nn.Conv2d(64, 128, 3),\n nn.ReLU()\n)\nEAGER MODE\nm = copy.deepcopy(model)\nm.eval()\n\"\"\"Fuse\n- Inplace fusion replaces the first module in the sequence with the fused module, and the rest with identity modules\n\"\"\"\ntorch.quantization.fuse_modules(m, ['0','1'], inplace=True) # fuse first Conv-ReLU pair\ntorch.quantization.fuse_modules(m, ['2','3'], inplace=True) # fuse second Conv-ReLU pair\n\"\"\"Insert stubs\"\"\"\nm = nn.Sequential(torch.quantization.QuantStub(), \n *m, \n torch.quantization.DeQuantStub())\n\"\"\"Prepare\"\"\"\nm.qconfig = torch.quantization.get_default_qconfig(backend)\ntorch.quantization.prepare(m, inplace=True)\n\"\"\"Calibrate", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "\"\"\"Calibrate\n- This example uses random data for convenience. 
Use representative (validation) data instead.\n\"\"\"\nwith torch.inference_mode():\n for _ in range(10):\n x = torch.rand(1,2, 28, 28)\n m(x)\n\"\"\"Convert\"\"\"\ntorch.quantization.convert(m, inplace=True)\n\"\"\"Check\"\"\"\nprint(m[[1]].weight().element_size()) # 1 byte instead of 4 bytes for FP32\nFX GRAPH\nfrom torch.quantization import quantize_fx\nm = copy.deepcopy(model)\nm.eval()\nqconfig_dict = {\"\": torch.quantization.get_default_qconfig(backend)}\nPrepare\nmodel_prepared = quantize_fx.prepare_fx(m, qconfig_dict)\nCalibrate - Use representative (validation) data.\nwith torch.inference_mode():\n for _ in range(10):\n x = torch.rand(1,2,28, 28)\n model_prepared(x)\nquantize\nmodel_quantized = quantize_fx.convert_fx(model_prepared)\n```\nQuantization-aware Training (QAT)\n\n\n\n Fig 5. Steps in Quantization-Aware Training", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "Fig 5. Steps in Quantization-Aware Training\n\nThe PTQ approach is great for large models, but accuracy suffers in smaller models [[6]]. This is of course due to the loss in numerical precision when adapting a model from FP32 to the INT8 realm (Figure 6(a)). QAT tackles this by including this quantization error in the training loss, thereby training an INT8-first model.\n\n\n\n Fig 6. Comparison of PTQ and QAT convergence [3]\n", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "\nAll weights and biases are stored in FP32, and backpropagation happens as usual. However in the forward pass, quantization is internally simulated via FakeQuantize modules. They are called fake because they quantize and immediately dequantize the data, adding quantization noise similar to what might be encountered during quantized inference. The final loss thus accounts for any expected quantization errors. Optimizing on this allows the model to identify a wider region in the loss function (Figure 6(b)), and identify FP32 parameters such that quantizing them to INT8 does not significantly affect accuracy.\n\n\n Fig 7. Fake Quantization in the forward and backward pass \n Image source: https://developer.nvidia.com/blog/achieving-fp32-accuracy-for-int8-inference-using-quantization-aware-training-with-tensorrt\n", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "\n(+) QAT yields higher accuracies than PTQ.\n(+) Qparams can be learned during model training for more fine-grained accuracy (see LearnableFakeQuantize)\n(-) Computational cost of retraining a model in QAT can be several hundred epochs [[1]]\n```python\nQAT follows the same steps as PTQ, with the exception of the training loop before you actually convert the model to its quantized version\nimport torch\nfrom torch import nn\nbackend = \"fbgemm\" # running on a x86 CPU. 
Use \"qnnpack\" if running on ARM.\nm = nn.Sequential(\n nn.Conv2d(2,64,8),\n nn.ReLU(),\n nn.Conv2d(64, 128, 8),\n nn.ReLU()\n)\n\"\"\"Fuse\"\"\"\ntorch.quantization.fuse_modules(m, ['0','1'], inplace=True) # fuse first Conv-ReLU pair\ntorch.quantization.fuse_modules(m, ['2','3'], inplace=True) # fuse second Conv-ReLU pair\n\"\"\"Insert stubs\"\"\"\nm = nn.Sequential(torch.quantization.QuantStub(), \n *m,", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "*m, \n torch.quantization.DeQuantStub())\n\"\"\"Prepare\"\"\"\nm.train()\nm.qconfig = torch.quantization.get_default_qconfig(backend)\ntorch.quantization.prepare_qat(m, inplace=True)\n\"\"\"Training Loop\"\"\"\nn_epochs = 10\nopt = torch.optim.SGD(m.parameters(), lr=0.1)\nloss_fn = lambda out, tgt: torch.pow(tgt-out, 2).mean()\nfor epoch in range(n_epochs):\n x = torch.rand(10,2,24,24)\n out = m(x)\n loss = loss_fn(out, torch.rand_like(out))\n opt.zero_grad()\n loss.backward()\n opt.step()\n\"\"\"Convert\"\"\"\nm.eval()\ntorch.quantization.convert(m, inplace=True)\n```\nSensitivity Analysis", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "```\nSensitivity Analysis\nNot all layers respond to quantization equally, some are more sensitive to precision drops than others. Identifying the optimal combination of layers that minimizes accuracy drop is time-consuming, so [[3]] suggest a one-at-a-time sensitivity analysis to identify which layers are most sensitive, and retaining FP32 precision on those. In their experiments, skipping just 2 conv layers (out of a total 28 in MobileNet v1) give them near-FP32 accuracy. Using FX Graph Mode, we can create custom qconfigs to do this easily:\n```python\nONE-AT-A-TIME SENSITIVITY ANALYSIS\nfor quantized_layer, _ in model.named_modules():\n print(\"Only quantizing layer: \", quantized_layer)\n# The module_name key allows module-specific qconfigs. \n qconfig_dict = {\"\": None, \n \"module_name\":[(quantized_layer, torch.quantization.get_default_qconfig(backend))]}\nmodel_prepared = quantize_fx.prepare_fx(model, qconfig_dict)\n # calibrate\n model_quantized = quantize_fx.convert_fx(model_prepared)", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "evaluate(model)\n```\nAnother approach is to compare statistics of the FP32 and INT8 layers; commonly used metrics for these are SQNR (Signal to Quantized Noise Ratio) and Mean-Squre-Error. Such a comparative analysis may also help in guiding further optimizations. \n\n\n\n Fig 8. Comparing model weights and activations\n\nPyTorch provides tools to help with this analysis under the Numeric Suite. 
Learn more about using Numeric Suite from the full tutorial.\n```python\nextract from https://pytorch.org/tutorials/prototype/numeric_suite_tutorial.html\nimport torch.quantization._numeric_suite as ns\ndef SQNR(x, y):\n # Higher is better\n Ps = torch.norm(x)\n Pn = torch.norm(x-y)\n return 20*torch.log10(Ps/Pn)", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "return 20*torch.log10(Ps/Pn)\nwt_compare_dict = ns.compare_weights(fp32_model.state_dict(), int8_model.state_dict())\nfor key in wt_compare_dict:\n print(key, compute_error(wt_compare_dict[key]['float'], wt_compare_dict[key]['quantized'].dequantize()))\nact_compare_dict = ns.compare_model_outputs(fp32_model, int8_model, input_data)\nfor key in act_compare_dict:\n print(key, compute_error(act_compare_dict[key]['float'][0], act_compare_dict[key]['quantized'][0].dequantize()))\n```\nRecommendations for your workflow\n\n\n\n Fig 9. Suggested quantization workflow\n\n Click for larger image \nPoints to note\n\nLarge (10M+ parameters) models are more robust to quantization error. [[2]]\n", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "\nQuantizing a model from a FP32 checkpoint provides better accuracy than training an INT8 model from scratch.[[2]]\nProfiling the model runtime is optional but it can help identify layers that bottleneck inference.\nDynamic Quantization is an easy first step, especially if your model has many Linear or Recurrent layers.\nUse symmetric-per-channel quantization with MinMax observers for quantizing weights. Use affine-per-tensor quantization with MovingAverageMinMax observers for quantizing activations[[2], [3]]\nUse metrics like SQNR to identify which layers are most suscpetible to quantization error. Turn off quantization on these layers.\nUse QAT to fine-tune for around 10% of the original training schedule with an annealing learning rate schedule starting at 1% of the initial training learning rate. [[3]]\n", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "\nIf the above workflow didn't work for you, we want to know more. Post a thread with details of your code (model architecture, accuracy metric, techniques tried). Feel free to cc me @suraj.pt.\n\nThat was a lot to digest, congratulations for sticking with it! Next, we'll take a look at quantizing a \"real-world\" model that uses dynamic control structures (if-else, loops). These elements disallow symbolic tracing a model, which makes it a bit tricky to directly quantize the model out of the box. In the next post of this series, we'll get our hands dirty on a model that is chock full of loops and if-else blocks, and even uses third-party libraries in the forward call.", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "We'll also cover a cool new feature in PyTorch Quantization called Define-by-Run, that tries to ease this constraint by needing only subsets of the model's computational graph to be free of dynamic flow. Check out the Define-by-Run poster at PTDD'21 for a preview.\nReferences\n[[1]] Gholami, A., Kim, S., Dong, Z., Yao, Z., Mahoney, M. W., & Keutzer, K. (2021). A survey of quantization methods for efficient neural network inference. arXiv preprint arXiv:2103.13630.\n[[2]] Krishnamoorthi, R. (2018). Quantizing deep convolutional networks for efficient inference: A whitepaper. 
arXiv preprint arXiv:1806.08342.\n[[3]] Wu, H., Judd, P., Zhang, X., Isaev, M., & Micikevicius, P. (2020). Integer quantization for deep learning inference: Principles and empirical evaluation. arXiv preprint arXiv:2004.09602.\n[[4]] PyTorch Quantization Docs", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "", "source": "https://pytorch.org/blog/quantization-in-practice/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Towards Reproducible Research with PyTorch Hub'\nauthor: Team PyTorch\nredirect_from: /2019/06/10/pytorch_hub.html\n\nReproducibility is an essential requirement for many fields of research including those based on machine learning techniques. However, many machine learning publications are either not reproducible or are difficult to reproduce. With the continued growth in the number of research publications, including tens of thousands of papers now hosted on arXiv and submissions to conferences at an all time high, research reproducibility is more important than ever. While many of these publications are accompanied by code as well as trained models which is helpful but still leaves a number of steps for users to figure out for themselves.", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"} {"text": "We are excited to announce the availability of PyTorch Hub, a simple API and workflow that provides the basic building blocks for improving machine learning research reproducibility. PyTorch Hub consists of a pre-trained model repository designed specifically to facilitate research reproducibility and enable new research. It also has built-in support for Colab, integration with Papers With Code and currently contains a broad set of models that include Classification and Segmentation, Generative, Transformers, etc.\n\n\n\n[Owner] Publishing models\nPyTorch Hub supports the publication of pre-trained models (model definitions and pre-trained weights) to a GitHub repository by adding a simple hubconf.py file.\nThis provides an enumeration of which models are to be supported and a list of dependencies needed to run the models.", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"} {"text": "Examples can be found in the torchvision, huggingface-bert and gan-model-zoo repositories.\nLet us look at the simplest case: torchvision's hubconf.py:\n```python\nOptional list of dependencies required by the package\ndependencies = ['torch']\nfrom torchvision.models.alexnet import alexnet\nfrom torchvision.models.densenet import densenet121, densenet169, densenet201, densenet161\nfrom torchvision.models.inception import inception_v3\nfrom torchvision.models.resnet import resnet18, resnet34, resnet50, resnet101, resnet152,\\\nresnext50_32x4d, resnext101_32x8d\nfrom torchvision.models.squeezenet import squeezenet1_0, squeezenet1_1\nfrom torchvision.models.vgg import vgg11, vgg13, vgg16, vgg19, vgg11_bn, vgg13_bn, vgg16_bn, vgg19_bn\nfrom torchvision.models.segmentation import fcn_resnet101, deeplabv3_resnet101", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"} {"text": "from torchvision.models.googlenet import googlenet\nfrom torchvision.models.shufflenetv2 import shufflenet_v2_x0_5, shufflenet_v2_x1_0\nfrom torchvision.models.mobilenet import mobilenet_v2\n```\nIn torchvision, the models have the following properties:\n- Each 
model file can function and be executed independently\n- They dont require any package other than PyTorch (encoded in hubconf.py as dependencies['torch'])\n- They dont need separate entry-points, because the models when created, work seamlessly out of the box\nMinimizing package dependencies reduces the friction for users to load your model for immediate experimentation.\nA more involved example is HuggingFace's BERT models. Here is their hubconf.py\n```python\ndependencies = ['torch', 'tqdm', 'boto3', 'requests', 'regex']\nfrom hubconfs.bert_hubconf import (\n bertTokenizer,\n bertModel,\n bertForNextSentencePrediction,\n bertForPreTraining,\n bertForMaskedLM,\n bertForSequenceClassification,\n bertForMultipleChoice,", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"} {"text": "bertForMultipleChoice,\n bertForQuestionAnswering,\n bertForTokenClassification\n)\n\nEach model then requires an entrypoint to be created. Here is a code snippet to specify an entrypoint of the ```bertForMaskedLM``` model, which returns the pre-trained model weights.\n\n```python\ndef bertForMaskedLM(*args, **kwargs):\n \"\"\"\n BertForMaskedLM includes the BertModel Transformer followed by the\n pre-trained masked language modeling head.\n Example:\n ...\n \"\"\"\n model = BertForMaskedLM.from_pretrained(*args, **kwargs)\n return model\n\nThese entry-points can serve as wrappers around complex model factories. They can give a clean and consistent help docstring, have logic to support downloading of pretrained weights (for example via pretrained=True) or have additional hub-specific functionality such as visualization.\nWith a hubconf.py in place, you can send a pull request based on the template here.", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"} {"text": "Our goal is to curate high-quality, easily-reproducible, maximally-beneficial models for research reproducibility.\nHence, we may work with you to refine your pull request and in some cases reject some low-quality models to be published.\nOnce we accept your pull request, your model will soon appear on Pytorch hub webpage for all users to explore.\n[User] Workflow\nAs a user, PyTorch Hub allows you to follow a few simple steps and do things like: 1) explore available models; 2) load a model; and 3) understand what methods are available for any given model. Let's walk through some examples of each.\nExplore available entrypoints.\nUsers can list all available entrypoints in a repo using the torch.hub.list() API.\n```python\n\n\n\ntorch.hub.list('pytorch/vision')\n['alexnet',\n'deeplabv3_resnet101',\n'densenet121',\n...\n'vgg16',\n'vgg16_bn',\n'vgg19',\n 'vgg19_bn']\n ```\n\n\n", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"} {"text": "'vgg16',\n'vgg16_bn',\n'vgg19',\n 'vgg19_bn']\n ```\nNote that PyTorch Hub also allows auxillary entrypoints (other than pretrained models), e.g. bertTokenizer for preprocessing in the BERT models, to make the user workflow smoother.\nLoad a model\nNow that we know which models are available in the Hub, users can load a model entrypoint using the torch.hub.load() API. This only requires a single command without the need to install a wheel. 
In addition the torch.hub.help() API can provide useful information about how to instantiate the model.\nprint(torch.hub.help('pytorch/vision', 'deeplabv3_resnet101'))\nmodel = torch.hub.load('pytorch/vision', 'deeplabv3_resnet101', pretrained=True)\n\nIt is also common that repo owners will want to continually add bug fixes or performance improvements. PyTorch Hub makes it super simple for users to get the latest update by calling:\n```python", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"} {"text": "model = torch.hub.load(..., force_reload=True)\n\nWe believe this will help to alleviate the burden of repetitive package releases by repo owners and instead allow them to focus more on their research.\nIt also ensures that, as a user, you are getting the freshest available models.\nOn the contrary, stability is important for users. Hence, some model owners serve them from a specificed branch or tag, rather than the master branch, to ensure stability of the code.\nFor example, pytorch_GAN_zoo serves them from the hub branch:\nmodel = torch.hub.load('facebookresearch/pytorch_GAN_zoo:hub', 'DCGAN', pretrained=True, useGPU=False)\n\nNote that the *args, **kwargs passed to hub.load() are used to instantiate a model. In the above example, pretrained=True and useGPU=False are given to the model's entrypoint.\nExplore a loaded model", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"} {"text": "Explore a loaded model\nOnce you have a model from PyTorch Hub loaded, you can use the following workflow to find out the available methods that are supported as well as understand better what arguments are requires to run it.\ndir(model) to see all available methods of the model. Let's take a look at bertForMaskedLM's available methods.\n>>> dir(model)\n>>>\n['forward'\n...\n'to'\n'state_dict',\n]\n\nhelp(model.forward) provides a view into what arguments are required to make your loaded model run\n>>> help(model.forward)\n>>>\nHelp on method forward in module pytorch_pretrained_bert.modeling:\nforward(input_ids, token_type_ids=None, attention_mask=None, masked_lm_labels=None)\n...\n\nHave a closer look at the BERT and DeepLabV3 pages, where you can see how these models can be used once loaded.\nOther ways to explore", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"} {"text": "Other ways to explore\nModels available in PyTorch Hub also support both Colab and are directly linked on Papers With Code and you can get started with a single click. Here is a good example to get started with (shown below).\n\n\n\nAdditional resources:\n\nPyTorch Hub API documentation can be found here.\nSubmit a model here for publication in PyTorch Hub.\nGo to https://pytorch.org/hub to learn more about the available models.\nLook for more models to come on paperswithcode.com.\n", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"} {"text": "A BIG thanks to the folks at HuggingFace, the PapersWithCode team, fast.ai and Nvidia as well as Morgane Riviere (FAIR Paris) and lots of others for helping bootstrap this effort!!\nCheers!\nTeam PyTorch\nFAQ:\nQ: If we would like to contribute a model that is already in the Hub but perhaps mine has better accuracy, should I still contribute?\nA: Yes!! 
A next step for Hub is to implement an upvote/downvote system to surface the best models.\nQ: Who hosts the model weights for PyTorch Hub?\nA: You, as the contributor, are responsible to host the model weights. You can host your model in your favorite cloud storage or, if it fits within the limits, on GitHub. If it is not within your means to host the weights, check with us via opening an issue on the hub repository.\nQ: What if my model is trained on private data? Should I still contribute this model?", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"} {"text": "A: No! PyTorch Hub is centered around open research and that extends to the usage of open datasets to train these models on. If a pull request for a proprietary model is submitted, we will kindly ask that you resubmit a model trained on something open and available.\nQ: Where are my downloaded models saved?\nA: We follow the XDG Base Directory Specification and adhere to common standards around cached files and directories.\nThe locations are used in the order of:\n\nCalling hub.set_dir()\n$TORCH_HOME/hub, if environment variable TORCH_HOME is set.\n$XDG_CACHE_HOME/torch/hub, if environment variable XDG_CACHE_HOME is set.\n~/.cache/torch/hub\n", "source": "https://pytorch.org/blog/towards-reproducible-research-with-pytorch-hub/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'The road to 1.0: production ready PyTorch'\nauthor: The PyTorch Team\nredirect_from: /2018/05/02/road-to-1.0.html\n\nWe would like to give you a preview of the roadmap for PyTorch 1.0 , the next release of PyTorch. Over the last year, we've had 0.2, 0.3 and 0.4 transform PyTorch from a [Torch+Chainer]-like interface into something cleaner, adding double-backwards, numpy-like functions, advanced indexing and removing Variable boilerplate. At this time, we're confident that the API is in a reasonable and stable state to confidently release a 1.0.\nHowever, 1.0 isn't just about stability of the interface.\nOne of PyTorch's biggest strengths is its first-class Python integration, imperative style, simplicity of the API and options. These are aspects that make PyTorch good for research and hackability.\nOne of its biggest downsides has been production-support. What we mean by production-support is the countless things one has to do to models to run them efficiently at massive scale:", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "\nexporting to C++-only runtimes for use in larger projects\noptimizing mobile systems on iPhone, Android, Qualcomm and other systems\nusing more efficient data layouts and performing kernel fusion to do faster inference (saving 10% of speed or memory at scale is a big win)\nquantized inference (such as 8-bit inference)\n\nStartups, large companies and anyone who wants to build a product around PyTorch have asked for production support. At Facebook (the largest stakeholder for PyTorch) we have Caffe2, which has been the production-ready platform, running in our datacenters and shipping to more than 1 billion phones spanning eight generations of iPhones and six generations of Android CPU architectures. It has server-optimized inference on Intel / ARM, TensorRT support, and all the necessary bits for production. 
Considering all this value locked-in to a platform that the PyTorch team works quite closely with, we decided to marry PyTorch and Caffe2 which gives the production-level readiness for PyTorch.", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "Supporting production features without adding usability issues for our researchers and end-users needs creative solutions.\nProduction != Pain for researchers\nAdding production capabilities involves increasing the API complexity and number of configurable options for models. One configures memory-layouts (NCHW vs NHWC vs N,C/32,H,W,32, each providing different performance characteristics), quantization (8-bit? 3-bit?), fusion of low-level kernels (you used a Conv + BatchNorm + ReLU, let's fuse them into a single kernel), separate backend options (MKLDNN backend for a few layers and NNPACK backend for other layers), etc.\nPyTorch's central goal is to provide a great platform for research and hackability. So, while we add all these optimizations, we've been working with a hard design constraint to never trade these off against usability.", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "To pull this off, we are introducing torch.jit, a just-in-time (JIT) compiler that at runtime takes your PyTorch models and rewrites them to run at production-efficiency. The JIT compiler can also export your model to run in a C++-only runtime based on Caffe2 bits.\n\nIn 1.0, your code continues to work as-is, we're not making any big changes to the existing API.\n\nMaking your model production-ready is an opt-in annotation, which uses the torch.jit compiler to export your model to a Python-less environment, and improving its performance. Let's walk through the JIT compiler in detail.\ntorch.jit: A JIT-compiler for your models", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "torch.jit: A JIT-compiler for your models\nWe strongly believe that it's hard to match the productivity you get from specifying your models directly as idiomatic Python code. This is what makes PyTorch so flexible, but it also means that PyTorch pretty much never knows the operation you'll run next. This however is a big blocker for export/productionization and heavyweight automatic performance optimizations because they need full upfront knowledge of how the computation will look before it even gets executed.\nWe provide two opt-in ways of recovering this information from your code, one based on tracing native python code and one based on compiling a subset of the python language annotated into a python-free intermediate representation. After thorough discussions we concluded that they're both going to be useful in different contexts, and as such you will be able to mix and match them freely.\nTracing Mode", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "Tracing Mode\nThe PyTorch tracer, torch.jit.trace, is a function that records all the native PyTorch operations performed in a code region, along with the data dependencies between them. In fact, PyTorch has had a tracer since 0.3, which has been used for exporting models through ONNX. What changes now, is that you no longer necessarily need to take the trace and run it elsewhere - PyTorch can re-execute it for you, using a carefully designed high-performance C++ runtime. 
As we develop PyTorch 1.0 this runtime will integrate all the optimizations and hardware integrations that Caffe2 provides.", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "The biggest benefit of this approach is that it doesn't really care how your Python code is structured \u2014 you can trace through generators or coroutines, modules or pure functions. Since we only record native PyTorch operators, these details have no effect on the trace recorded. This behavior, however, is a double-edged sword. For example, if you have a loop in your model, it will get unrolled in the trace, inserting a copy of the loop body for as many times as the loop ran. This opens up opportunities for zero-cost abstraction (e.g. you can loop over modules, and the actual trace will be loop-overhead free!), but on the other hand this will also affect data dependent loops (think of e.g. processing sequences of varying lengths), effectively hard-coding a single length into the trace.\nFor networks that do not contain loops and if statements, tracing is non-invasive and is robust enough to handle a wide variety of coding styles. This code example illustrates what tracing looks like:\n```python", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "# This will run your nn.Module or regular Python function with the example\n# input that you provided. The returned callable can be used to re-execute\n# all operations that happened during the example run, but it will no longer\n# use the Python interpreter.\nfrom torch.jit import trace\ntraced_model = trace(model, example_input=input)\ntraced_fn = trace(fn, example_input=input)\n\n# The training loop doesn't change. Traced model behaves exactly like an\n# nn.Module, except that you can't edit what it does or change its attributes.\n# Think of it as a \"frozen module\".\nfor input, target in data_loader:\n loss = loss_fn(traced_model(input), target)\n\nScript Mode\nTracing mode is a great way to minimize the impact on your code, but we're also very excited about the models that fundamentally make use of control flow such as RNNs. Our solution to this is a scripting mode.", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "In this case you write out a regular Python function, except that you can no longer use certain more complicated language features. Once you isolated the desired functionality, you let us know that you'd like the function to get compiled by decorating it with an @script decorator. This annotation will transform your python function directly into our high-performance C++ runtime. This lets us recover all the PyTorch operations along with loops and conditionals. 
They will be embedded into our internal representation of this function, and will be accounted for every time this function is run.\nfrom torch.jit import script\n\n@script\ndef rnn_loop(x):\n hidden = None\n for x_t in x.split(1):\n x, hidden = model(x, hidden)\n return x\n\nOptimization and Export\nRegardless of whether you use tracing or @script, the result is a python-free representation of your model, which can be used to optimize the model or to export the model from python for use in production environments.", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "Extracting bigger segments of the model into an intermediate representation makes it possible to do sophisticated whole-program optimizations and to offload computation to specialized AI accelerators which operate on graphs of computation. We have already been developing the beginnings of these optimizations, including passes that fuse GPU operations together to improve the performance of smaller RNN models.\nIt also allows us to use existing high-performance backends available in Caffe2 today to run the model efficiently. Additionally, @script functions (and modules!) can be fully exported to ONNX in a way that retains their dynamic nature, such that you can easily run them in a Python-free environment using the model executors from Caffe2 or by transferring the model to any other framework supporting ONNX.\nUsability", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "Usability\nWe care deeply about maintaining our current level of usability and we know that execution of the code not directly in Python leads to harder debugging, but this is something that we think about a lot, and we're making sure that you're not getting locked in to a completely different programming language.", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "First, we follow the principle of pay for what you use \u2014 if you don't need to optimize or export your model, you do not have to use these new features and won't see any downsides. Furthermore, use of traced or @script modules/functions can be done incrementally. For instance, all of these behaviors are allowed: You can trace part of your model and use the trace in a larger non-traced model. You can use tracing for 90% of your model, and use @script for the one sub-module that actually has some control flow in it. You can write a function using @script and have it call a native python function. If something appears incorrect in an @script function, you can remove the annotation and the code will execute in native python where it is easy to debug using your favorite tools and methods. Think of tracing and @script like type annotations using MyPy or TypeScript \u2014 each additional annotation can be tested incrementally, and none are required until you want to optimize or productionize.", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "Most importantly, these modes will be built into the core of PyTorch so that mixing and matching them with your existing code can happen seamlessly.\nNote: The name JIT for these components is a bit of a misnomer and comes from historical reasons. The tracing/function execution in PyTorch started out as an optimizing JIT compiler that generated fused CUDA kernels but then grew to encompass optimization, @script, and export. 
When it is ready for release we will likely rename this functionality to the hybrid frontend, but we wanted to present it here as it is named in the code so that you can follow along as we develop it.\nOther changes and improvements\nProduction support is the big feature for 1.0, but we will continue optimizing and fixing other parts of PyTorch as course of the standard release process.", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "On the backend side of things, PyTorch will see some changes, which might affect user-written C and C++ extensions. We are replacing (or refactoring) the backend ATen library to incorporate features and optimizations from Caffe2.\nLast Words\nWe aim to release 1.0 some time during the summer. You can follow-along our progress on the Pull Requests page.\nYou can read this from the perspective of the Caffe2 project at: https://caffe2.ai/blog/2018/05/02/Caffe2_PyTorch_1_0.html", "source": "https://pytorch.org/blog/the-road-to-1_0/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch adds new tools and libraries, welcomes Preferred Networks to its community'\nauthor: Team PyTorch\n\nPyTorch continues to be used for the latest state-of-the-art research on display at the NeurIPS conference next week, making up nearly 70% of papers that cite a framework. In addition, we\u2019re excited to welcome Preferred Networks, the maintainers of the Chainer framework, to the PyTorch community. Their teams are moving fully over to PyTorch for developing their ML capabilities and services.\nThis growth underpins PyTorch\u2019s focus on building for the needs of the research community, and increasingly, supporting the full workflow from research to production deployment. To further support researchers and developers, we\u2019re launching a number of new tools and libraries for large scale computer vision and elastic fault tolerant training. Learn more on GitHub and at our NeurIPS booth.", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "Preferred Networks joins the PyTorch community\nPreferred Networks, Inc. (PFN) announced plans to move its deep learning framework from Chainer to PyTorch. As part of this change, PFN will collaborate with the PyTorch community and contributors, including people from Facebook, Microsoft, CMU, and NYU, to participate in the development of PyTorch.\nPFN developed Chainer, a deep learning framework that introduced the concept of define-by-run (also referred to as eager execution), to support and speed up its deep learning development. Chainer has been used at PFN since 2015 to rapidly solve real-world problems with the latest, cutting-edge technology. Chainer was also one of the inspirations for PyTorch\u2019s initial design, as outlined in the PyTorch NeurIPS paper.", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "PFN has driven innovative work with CuPy, ImageNet in 15 minutes, Optuna, and other projects that have pushed the boundaries of design and engineering. As part of the PyTorch community, PFN brings with them creative engineering capabilities and experience to help take the framework forward. 
In addition, PFN\u2019s migration to PyTorch will allow it to efficiently incorporate the latest research results to accelerate its R&D activities, given PyTorch\u2019s broad adoption with researchers, and to collaborate with the community to add support for PyTorch on MN-Core, a deep learning processor currently in development.", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "We are excited to welcome PFN to the PyTorch community, and to jointly work towards the common goal of furthering advances in deep learning technology. Learn more about the PFN\u2019s migration to PyTorch here.\nTools for elastic training and large scale computer vision\nPyTorch Elastic (Experimental)\nLarge scale model training is becoming commonplace with architectures like BERT and the growth of model parameter counts into the billions or even tens of billions. To achieve convergence at this scale in a reasonable amount of time, the use of distributed training is needed.", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "The current PyTorch Distributed Data Parallel (DDP) module enables data parallel training where each process trains the same model but on different shards of data. It enables bulk synchronous, multi-host, multi-GPU/CPU execution of ML training. However, DDP has several shortcomings; e.g. jobs cannot start without acquiring all the requested nodes; jobs cannot continue after a node fails due to error or transient issue; jobs cannot incorporate a node that joined later; and lastly; progress cannot be made with the presence of a slow/stuck node.\nThe focus of PyTorch Elastic, which uses Elastic Distributed Data Parallelism, is to address these issues and build a generic framework/APIs for PyTorch to enable reliable and elastic execution of these data parallel training workloads. It will provide better programmability, higher resilience to failures of all kinds, higher-efficiency and larger-scale training compared with pure DDP.", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "Elasticity, in this case, means both: 1) the ability for a job to continue after node failure (by running with fewer nodes and/or by incorporating a new host and transferring state to it); and 2) the ability to add/remove nodes dynamically due to resource availability changes or bottlenecks.\nWhile this feature is still experimental, you can try it out on AWS EC2, with the instructions here. Additionally, the PyTorch distributed team is working closely with teams across AWS to support PyTorch Elastic training within services such as Amazon Sagemaker and Elastic Kubernetes Service (EKS). Look for additional updates in the near future.\nNew Classification Framework", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "New Classification Framework\nImage and video classification are at the core of content understanding. To that end, you can now leverage a new end-to-end framework for large-scale training of state-of-the-art image and video classification models. It allows researchers to quickly prototype and iterate on large distributed training jobs at the scale of billions of images. 
Advantages include:\n\nEase of use - This framework features a modular, flexible design that allows anyone to train machine learning models on top of PyTorch using very simple abstractions. The system also has out-of-the-box integration with AWS on PyTorch Elastic, facilitating research at scale and making it simple to move between research and production.\nHigh performance - Researchers can use the framework to train models such as Resnet50 on ImageNet in as little as 15 minutes.\n", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "You can learn more at the NeurIPS Expo workshop on Multi-Modal research to production or get started with the PyTorch Elastic Imagenet example here.\nCome see us at NeurIPS\nThe PyTorch team will be hosting workshops at NeurIPS during the industry expo on 12/8. Join the sessions below to learn more, and visit the team at the PyTorch booth on the show floor and during the Poster Session. At the booth, we\u2019ll be walking through an interactive demo of PyTorch running fast neural style transfer on a Cloud TPU - here\u2019s a sneak peek.", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "We\u2019re also publishing a paper that details the principles that drove the implementation of PyTorch and how they\u2019re reflected in its architecture.\nMulti-modal Research to Production - This workshop will dive into a number of modalities such as computer vision (large scale image classification and instance segmentation) and Translation and Speech (seq-to-seq Transformers) from the lens of taking cutting edge research to production. Lastly, we will also walk through how to use the latest APIs in PyTorch to take eager mode developed models into graph mode via Torchscript and quantize them for scale production deployment on servers or mobile devices. Libraries used include:", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "\nClassification Framework - a newly open sourced PyTorch framework developed by Facebook AI for research on large-scale image and video classification. It allows researchers to quickly prototype and iterate on large distributed training jobs. Models built on the framework can be seamlessly deployed to production.\nDetectron2 - the recently released object detection library built by the Facebook AI Research computer vision team. We will articulate the improvements over the previous version including: 1) Support for latest models and new tasks; 2) Increased flexibility, to enable new computer vision research; 3) Maintainable and scalable, to support production use cases.\nFairseq - general purpose sequence-to-sequence library, can be used in many applications, including (unsupervised) translation, summarization, dialog and speech recognition.\n", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "Responsible and Reproducible AI - This workshop on Responsible and Reproducible AI will dive into important areas that are shaping the future of how we interpret, reproduce research, and build AI with privacy in mind. 
We will cover major challenges, walk through solutions, and finish each talk with a hands-on tutorial.\n\nReproducibility: As the number of research papers submitted to arXiv and conferences skyrockets, scaling reproducibility becomes difficult. We must address the following challenges: aid extensibility by standardizing code bases, democratize paper implementation by writing hardware agnostic code, facilitate results validation by documenting \u201ctricks\u201d authors use to make their complex systems function. To offer solutions, we will dive into tool like PyTorch Hub and PyTorch Lightning which are used by some of the top researchers in the world to reproduce the state of the art.\n", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "\nInterpretability: With the increase in model complexity and the resulting lack of transparency, model interpretability methods have become increasingly important. Model understanding is both an active area of research as well as an area of focus for practical applications across industries using machine learning. To get hands on, we will use the recently released Captum library that provides state-of-the-art algorithms to provide researchers and developers with an easy way to understand the importance of neurons/layers and the predictions made by our models.`\n", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "\nPrivate AI: Practical applications of ML via cloud-based or machine-learning-as-a-service platforms pose a range of security and privacy challenges. There are a number of technical approaches being studied including: homomorphic encryption, secure multi-party computation, trusted execution environments, on-device computation, and differential privacy. To provide an immersive understanding of how some of these technologies are applied, we will use the CrypTen project which provides a community based research platform to take the field of Private AI forward.\n\nWe\u2019d like to thank the entire PyTorch team and the community for all their contributions to this work.\nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-adds-new-tools-and-libraries-welcomes-preferred-networks-to-its-community/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"How Computational Graphs are Executed in PyTorch\"\nauthor: Preferred Networks\nfeatured-img: \"\"\n\nWelcome to the last entry into understanding the autograd engine of PyTorch series!\nIf you haven\u2019t read parts 1 & 2 check them now to understand how PyTorch creates the computational graph for the backward pass!\nThis post is based on PyTorch v1.11, so some highlighted parts may differ across versions.\nPyTorch autograd graph execution\nThe last post showed how PyTorch constructs the graph to calculate the outputs' derivatives w.r.t. the inputs when executing the forward pass. 
Now we will see how the execution of the backward pass is coordinated and done by looking at the whole process, starting from Python down to the lower C++ level internals.\nWhat Happens when Calling backward()/grad() from Python\nUsing variable.backward()", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "Using variable.backward()\nAfter doing all our calculations with an input set to require the gradient, we call .backward() on the result to initiate the backward pass execution.\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> y = torch.exp(x).sum()\n>>> y.backward()\n\nCalling .backward() on a tensor results in a call to torch.autograd.backward().\n# torch/_tensor.py\n\ndef backward(self, gradient=None, retain_graph=None, create_graph=False, inputs=None):\n \u2026\n torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)\n\n\ntorch.autograd.backward() checks the arguments and calls the autograd engine in the C++ layer.\n``` python\ndef backward(\n tensors: _TensorOrTensors,", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "def backward(\n tensors: _TensorOrTensors,\n grad_tensors: Optional[_TensorOrTensors] = None,\n retain_graph: Optional[bool] = None,\n create_graph: bool = False,\n grad_variables: Optional[_TensorOrTensors] = None,\n inputs: Optional[_TensorOrTensors] = None,\n) -> None:\n \u2026\nif inputs is not None and len(inputs) == 0:\n raise RuntimeError(\"'inputs' argument to backward() cannot be empty.\")\n\ntensors = (tensors,) if isinstance(tensors, torch.Tensor) else tuple(tensors)\ninputs = (inputs,) if isinstance(inputs, torch.Tensor) else \\\n tuple(inputs) if inputs is not None else tuple()\n\ngrad_tensors_ = _tensor_or_tensors_to_tuple(grad_tensors, len(tensors))\ngrad_tensors_ = _make_grads(tensors, grad_tensors_)\nif retain_graph is None:\n retain_graph = create_graph\n\nVariable._execution_engine.run_backward(\n tensors, grad_tensors_, retain_graph, create_graph, inputs,\n allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag\n\n```", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "``\nFirst, whether thegrad_tensorsargument was specified or not, there is a call to the [_make_grads](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/autograd/__init__.py#L30-L74) function. This is used to check the providedgrad_tensorsor to specify the default value for them by looking at thetensorsargument values\u2019 shapes. Check the first blog post for details on the default value for thegrad_tensors` of the backward pass. This function just provides the vector of the vector jacobian product if it was not initially specified.", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "In the above code, Variable has an _execution_engine attribute that is defined in torch.autograd.variable to be of type ImperativeEngine; the C++ engine exported to python and declared in torch/csrc/autograd/python_engine.cpp. In the following sections, we explain in detail how this object executes the backward pass.\nNote that the torch.autograd.backward function has an inputs optional argument. 
This argument is used when we want to calculate the .grad field of only a subset of input tensors in the forward pass.\n```python\n\n\n\nx = torch.tensor([0.5, 0.75], requires_grad=True)\ny = torch.tensor([0.1, 0.90], requires_grad=True)\nz = torch.exp(x * y).sum()\ntorch.autograd.backward([z], inputs=[x])\nx.grad\ntensor([0.1051, 1.7676])\n\n\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\nx.grad\ntensor([0.1051, 1.7676])\ny.grad # None\n\n\n\n\n```\nUsing torch.autograd.grad\nAn alternative to backward() is to use torch.autograd.grad(). The main difference to backward() is that grad() returns a tuple of tensors with the gradients of the outputs w.r.t. the inputs kwargs instead of storing them in the .grad field of the tensors. As you can see, the grad() code shown below is very similar to backward.\n```python\ndef grad(\n outputs: _TensorOrTensors,\n inputs: _TensorOrTensors,\n grad_outputs: Optional[_TensorOrTensors] = None,\n retain_graph: Optional[bool] = None,\n create_graph: bool = False,\n only_inputs: bool = True,\n allow_unused: bool = False,\n is_grads_batched: bool = False\n) -> Tuple[torch.Tensor, ...]:\noutputs = (outputs,) if isinstance(outputs, torch.Tensor) else tuple(outputs)\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "inputs = (inputs,) if isinstance(inputs, torch.Tensor) else tuple(inputs)\n overridable_args = outputs + inputs\n if has_torch_function(overridable_args):\n return handle_torch_function(\n grad,\n overridable_args,\n outputs,\n inputs,\n grad_outputs=grad_outputs,\n retain_graph=retain_graph,\n create_graph=create_graph,\n only_inputs=only_inputs,\n allow_unused=allow_unused,\n )\ngrad_outputs_ = _tensor_or_tensors_to_tuple(grad_outputs, len(outputs))\ngrad_outputs_ = _make_grads(outputs, grad_outputs_)\n\nif retain_graph is None:\n retain_graph = create_graph\n\nif is_grads_batched:\n # \u2026. It will not be covered here\nelse:\n return Variable._execution_engine.run_backward(\n outputs, grad_outputs_, retain_graph, create_graph, inputs,\n allow_unused, accumulate_grad=False) # Calls into the C++ engine to run the backward pass\n\n```", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "```\nFigure 1 shows the computational graph with the backward() and grad() arguments highlighted in red and blue, respectively:\n\n\n\n\nFgiure 1: Correspondence of `backward`/`grad` arguments in the graphs.\n\nGoing Inside the Autograd Engine\nRefreshing Concepts: Nodes and Edges\nAs we saw in 2\nThe computational graph comprises Node and Edge objects. Please read that post if you haven\u2019t done it yet.\nNodes", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "Nodes\nNode objects are defined in torch/csrc/autograd/function.h, and they provide an overload of operator() for the associated function and a list of edges to do the graph traversal. 
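Before diving into the C++ definitions, note that this node-and-edge structure can be peeked at from Python through `grad_fn` and `next_functions`; a minimal sketch:

```python
import torch

x = torch.tensor([0.5, 0.75], requires_grad=True)
y = torch.exp(x).sum()

print(y.grad_fn)                  # <SumBackward0 ...>: the root Node of the backward graph
print(y.grad_fn.next_functions)   # ((<ExpBackward0 ...>, 0),): edges (Node pointer, input_nr)
print(y.grad_fn.next_functions[0][0].next_functions)  # ((<AccumulateGrad ...>, 0),): the leaf accumulator
```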
Note that Node is a base class that autograd functions inherit from and override the apply method to execute the backward function.\nstruct TORCH_API Node : std::enable_shared_from_this {\n ...\n /// Evaluates the function on the given inputs and returns the result of the\n /// function call.\n variable_list operator()(variable_list&& inputs) {\n ...\n }\n\nprotected:\n /// Performs the `Node`'s actual operation.\n virtual variable_list apply(variable_list&& inputs) = 0;\n \u2026\n edge_list next_edges_;\n uint64_t topological_nr_ = 0;\n \u2026\n\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "uint64_t topological_nr_ = 0;\n \u2026\n```\nThere is an attribute called topological_nr_ in every node object. This number is used to optimize the graph execution as it allows to discard of graph branches under certain conditions. The topological number is the longest distance between this node and any leaf node and it is shown in Figure 2. Its main property is that for any pair of nodes x, y in a directed graph topo_nr(x) < topo_nr(y) means that there is no path from x to y. So this allows for reducing the number of paths in the graph in need of traversal. Check the topological_nr\n) method comment for further details.\n\n\n\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "\n\nFigure 2: Example of the Topological Number calculation\n\nEdges\nThe Edge object links Nodes together, and its implementation is straightforward.\nstruct Edge {\n ...\n /// The function this `Edge` points to.\n std::shared_ptr function;\n /// The identifier of a particular input to the function.\n uint32_t input_nr;\n};\n\n\nIt only requires a function pointer to the Node and an input number that is the index of the output from the forward function this edge points to. When preparing the set of gradients before calling \"function\", we know that what is flowing from this edge should be accumulated in the \"input_nr\"th argument. Note that the input/output name is flipped here and this is the input to the backward function.", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "Edge objects are constructed using the gradient_edge function method.\n Edge gradient_edge(const Variable& self) {\n if (const auto& gradient = self.grad_fn()) {\n return Edge(gradient, self.output_nr());\n } else {\n return Edge(grad_accumulator(self), 0);\n }\n }\n\n\nEntering the C++ Realm\nOnce that torch.autograd.backward() has been invoked, the\nTHPEngine_run_backward routine starts the graph traversal. 
Following is a schema of the function body:\n```c++\nPyObject THPEngine_run_backward(PyObject self, PyObject args, PyObject kwargs)\n{\n HANDLE_TH_ERRORS\n PyObject tensors = nullptr;\n PyObject grad_tensors = nullptr;\n unsigned char keep_graph = 0;\n unsigned char create_graph = 0;\n PyObject *inputs = nullptr;", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "PyObject *inputs = nullptr;\n// Convert the python arguments to C++ objects\n const char accepted_kwargs[] = { // NOLINT\n \"tensors\", \"grad_tensors\", \"keep_graph\", \"create_graph\", \"inputs\",\n \"allow_unreachable\", \"accumulate_grad\", nullptr\n };\n if (!PyArg_ParseTupleAndKeywords(args, kwargs, \"OObb|Obb\", (char*)accepted_kwargs,\n &tensors, &grad_tensors, &keep_graph, &create_graph, &inputs, &allow_unreachable, &accumulate_grad))\n// Prepare arguments\n for(const auto i : c10::irange(num_tensors)) {\n // Check that the tensors require gradients\n }\nstd::vector output_edges;\n if (inputs != nullptr) {\n // Prepare outputs\n }\n{\n // Calls the actual autograd engine\n pybind11::gil_scoped_release no_gil;\n outputs = engine.execute(roots, grads, keep_graph, create_graph, accumulate_grad, output_edges);\n }\n // Clean up and finish\n}\n```", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "}\n // Clean up and finish\n}\n\nFirst, we prepare the input arguments after converting the `PyObject` arguments to actual C++ objects. The `tensors` list contains the tensors from which we start the backward pass. These tensors are converted to edges using `torch::autograd::impl::gradient_edge` and added to a list called `roots` where the graph traversal starts. \n\n\n```c++\n edge_list roots;\n roots.reserve(num_tensors);\n variable_list grads;\n grads.reserve(num_tensors);\n for(const auto i : c10::irange(num_tensors)) {\n PyObject *_tensor = PyTuple_GET_ITEM(tensors, i);\n const auto& variable = THPVariable_Unpack(_tensor);\n auto gradient_edge = torch::autograd::impl::gradient_edge(variable);\n roots.push_back(std::move(gradient_edge));\n\n PyObject *grad = PyTuple_GET_ITEM(grad_tensors, i);\n if (THPVariable_Check(grad)) {\n const Variable& grad_var = THPVariable_Unpack(grad);\n grads.push_back(grad_var);\n } \n }\n\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "grads.push_back(grad_var);\n } \n }\n```\nNow, if the inputs argument was specified in backward or we used the torch.autograd.grad api, the following code creates a list of edges to accumulate the gradients in the specified tensors at the end of the computation. 
The engine uses this later to optimize the execution as it doesn\u2019t add the gradients in all the leaf nodes, just the specified ones.\n```c++\n std::vector output_edges;\n if (inputs != nullptr) {\n int num_inputs = PyTuple_GET_SIZE(inputs);\n output_edges.reserve(num_inputs);\n for (const auto i : c10::irange(num_inputs)) {\n PyObject *input = PyTuple_GET_ITEM(inputs, i);\n const auto& tensor = THPVariable_Unpack(input);\n const auto output_nr = tensor.output_nr();\n auto grad_fn = tensor.grad_fn();\n if (!grad_fn) {\n grad_fn = torch::autograd::impl::try_get_grad_accumulator(tensor);\n }\n if (accumulate_grad) {\n tensor.retain_grad();\n }\n if (!grad_fn) {", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "}\n if (!grad_fn) {\n output_edges.emplace_back(std::make_shared(), 0);\n } else {\n output_edges.emplace_back(grad_fn, output_nr);\n }\n }\n }\n\nThe next step is the actual graph traversal and node function execution, and finally, the cleanup and return.\n\n```c++\n {\n // Calls the actual autograd engine\n pybind11::gil_scoped_release no_gil;\n auto& engine = python::PythonEngine::get_python_engine();\n outputs = engine.execute(roots, grads, keep_graph, create_graph, accumulate_grad, output_edges);\n }\n // Clean up and finish\n}\n\n\nStarting the Real Execution\nengine.executeis present in torch/csrc/autograd/engine.cpp \nThere are two differentiated steps here:\nAnalyze the graph to find dependencies between functions\nCreate worker threads that traverse the graph\nData Structures Used for the Execution\nGraphTask", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "GraphTask\nAll the execution metadata is managed by the GraphTask class in torch/csrc/autograd/engine.h\nstruct GraphTask: std::enable_shared_from_this {\n std::atomic outstanding_tasks_{0};\n // \u2026 \n std::unordered_map not_ready_;\n std::unordered_map dependencies_;\n\n struct ExecInfo {\n // \u2026\n };\n std::unordered_map exec_info_;\n std::vector captured_vars_;\n // \u2026\n std::shared_ptr cpu_ready_queue_;\n};\n\n\nHere we see a series of variables dedicated to maintaining the execution state.", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "outstanding_tasks_ tracks the number of tasks left to be executed for the backward pass to complete. not_ready_ holds the input arguments for the Nodes that are not ready to be executed. dependencies_ track the number of predecessors that a Node has. As the count reaches 0, the Node is ready for execution; it is placed in a ready queue to be retrieved and executed later. \nexec_info_ and the associated ExecInfo struct are used only when the inputs argument is specified or it is a call to autograd.grad(). 
They allow filtering out paths in the graph that are not needed, since the gradients are calculated only for the variables in the inputs list.\ncaptured_vars_ is where the results of the graph execution are temporarily stored if we used the torch.autograd.grad() api instead of torch.autograd.backward(), since grad() returns the gradients as tensors instead of just filling the .grad field of the inputs.\nNodeTask", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "NodeTask\nThe NodeTask struct is a basic class that holds an fn_ pointer to the node to execute, and an inputs_ buffer to store the input arguments to this function. Note that the functions executed by the backward pass are the derivatives specified in the derivatives.yaml file, or the user-provided backward function when using custom functions, as described in the second blog post.\nThe inputs_ buffer is also where the output gradients of the previously executed functions are aggregated, and it is defined as a std::vector<Variable> container with facilities to accumulate values at a given position.\n```c++\nstruct NodeTask {\n std::weak_ptr<GraphTask> base_;\n std::shared_ptr<Node> fn_;\n // This buffer serves as an implicit \"addition\" node for all of the", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "// gradients flowing here. Once all the dependencies are finished, we\n // use the contents of this buffer to run the function.\n InputBuffer inputs_;\n};\n```\n### GraphRoot\n\nThe [`GraphRoot`](https://github.com/pytorch/pytorch/blob/bc2c6edaf163b1a1330e37a6e34caf8c553e4755/torch/csrc/autograd/functions/basic_ops.h#L72-L89) is a special function used to hold multiple input variables in a single place. The code is pretty simple as it only acts as a container of variables.\n\n```c++\nstruct TORCH_API GraphRoot : public Node {\n GraphRoot(edge_list functions, variable_list inputs)\n : Node(std::move(functions)),\n outputs(std::move(inputs)) {\n for (const auto& t : outputs) {\n add_input_metadata(t);\n }\n }\n\n variable_list apply(variable_list&& inputs) override {\n return outputs;\n }\n\n variable_list outputs;\n};\n```\n\nAccumulateGrad\nThis function is set during the graph creation in gradient_edge when the Variable object doesn\u2019t have a grad_fn. 
That is, it is a leaf node.\n```c++", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": " if (const auto& gradient = self.grad_fn()) {\n // \u2026\n } else {\n return Edge(grad_accumulator(self), 0);\n }\n```\n\nThe function body is defined in torch/csrc/autograd/functions/accumulate_grad.cpp and it essentially accumulates the input grads in the object\u2019s .grad attribute.\n```c++\nauto AccumulateGrad::apply(variable_list&& grads) -> variable_list {\n check_input_variables(\"AccumulateGrad\", grads, 1, 0);\n \u2026\n\n at::Tensor new_grad = callHooks(variable, std::move(grads[0]));\n std::lock_guard<std::mutex> lock(mutex_);\n\n at::Tensor& grad = variable.mutable_grad();\n accumulateGrad(\n variable,\n grad,\n new_grad,\n 1 + !post_hooks().empty() /* num_expected_refs */,\n [&grad](at::Tensor&& grad_update) { grad = std::move(grad_update); });\n return variable_list();\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "}\n}} // namespace torch::autograd\n```\naccumulateGrad does several checks on the format of the tensors and eventually performs the variable_grad += new_grad; accumulation.\nPreparing the graph for execution\nNow, let\u2019s walk through Engine::execute. The first thing to do besides argument consistency checks is to create the actual GraphTask object we described above. This object keeps all the metadata of the graph execution.\n```c++\nauto Engine::execute(const edge_list& roots,\n const variable_list& inputs,\n bool keep_graph,\n bool create_graph,\n bool accumulate_grad,\n const edge_list& outputs) -> variable_list {
And from now on, there is no concept of forward/backward, but only graph traversal and execution.\n\n```c++\n auto min_topo_nr = compute_min_topological_nr(outputs);\n // Now compute the dependencies for all executable functions\n compute_dependencies(graph_root.get(), *graph_task, min_topo_nr);\n\n if (!outputs.empty()) {\n graph_task->init_to_execute(*graph_root, outputs, accumulate_grad, min_topo_nr);\n }\n\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "}\n```\nHere we preprocess the graph for the execution of the nodes. First, compute_min_topological_nr is called to to obtain the minimum topological number of the tensors specified in outputs (0 if no inputs kwarg was supplied to .backward or input for .grad). This computation prunes paths in the graph that lead to input variables of which we don\u2019t want/need to calculate the grads.", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "Second, is the compute_dependencies call. This function is a very simple graph traversal that starts with the root Node, and for each of the edges in node.next_edges() it increments the counter in dependencies_. Figure 3 shows the result of the dependencies calculation for the example graph. Note that the number of dependencies of any node is just the number of edges arriving at it.\n\n\n\n\nFigure 3: Number of dependencies for each node\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "\nFinally, the init_to_execute call, this is the one that populates the GraphTask::exec_info_ map in case that inputs were specified in the python backward call. It iterates the graph again, starting from the root, and records in the exec_info_ map the intermediate nodes needed to calculate only the given inputs gradients.\n```c++\n // Queue the root\n if (skip_dummy_node) {\n InputBuffer input_buffer(roots.at(0).function->num_inputs());\n auto input = inputs.at(0);\ninput_buffer.add(roots.at(0).input_nr,\n std::move(input),\n input_stream,\n opt_next_stream);\n\nexecute_with_graph_task(graph_task, graph_root, std::move(input_buffer));\n\n} else {\n execute_with_graph_task(graph_task, graph_root, InputBuffer(variable_list()));\n }", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "}\n // Avoid a refcount bump for the Future, since we check for refcount in\n // DistEngine (see TORCH_INTERNAL_ASSERT(futureGrads.use_count() == 1)\n // in dist_engine.cpp).\n auto& fut = graph_task->future_result_;\n fut->wait();\n return fut->value().toTensorVector();\n}\n```\nAnd now, we are ready to start the actual execution by creating the InputBuffer. In case we only have one root variable, we begin by copying the value of the inputs tensor (this is the gradients passed to python backward) in position 0 of the input_buffer. This is a small optimization that avoids running the RootNode for no reason. Also, if the rest of the graph is not on the cpu, we directly start on that worker while the RootNode is always placed on the cpu ready queue. 
Details of the workers and ready queues are explained in the section below.\nOn the other hand, if we have multiple roots, the GraphRoot object also holds the inputs, so it is enough to pass it an empty InputBuffer.", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "Graph Traversal and Node Execution\nDevices, Threads and Queues\nBefore diving into the actual execution, we need to see how the engine is structured.\nFirst of all, the engine is multithreaded with one thread per device. For example, the caller thread is associated with the CPU while additional threads are created and associated with each GPU or other devices available in the system. Each thread tracks its device using thread-local storage in the worker_device variable. In addition, the threads have a queue of tasks to be executed also located in thread-local storage, the local_ready_queue. This is where work is queued for this thread to execute in the thread_main function that is explained later.", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "You will wonder how the device where a task should be executed is decided. The InputBuffer class has a device() function that returns the first non-cpu device of all its tensors.\nThis function is used together with Engine::ready_queue to select the queue to queue a task.\nauto Engine::ready_queue(std::shared_ptr cpu_ready_queue, at::Device device) -> std::shared_ptr{\n if (device.type() == at::kCPU || device.type() == at::DeviceType::Meta) {\n return cpu_ready_queue;\n } else {\n // See Note [Allocating GPUs to autograd threads]\n return device_ready_queues_.at(device.index());\n }\n}\n\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "}\n}\n```\nThe ReadyQueue object is defined in torch/csrc/autograd/engine.h and it is a simple wrapper over std::priority_queue that allows a thread to wait for a task if it\u2019s empty. One interesting property of the ReadyQueue is that it increases the GraphTask::outstanding_tasks_ value used to determine if the execution has completed or not.\n```c++\nauto ReadyQueue::push(NodeTask item, bool incrementOutstandingTasks) -> void {\n {\n std::lock_guard lock(mutex_);\n if (incrementOutstandingTasks) {\n std::shared_ptr graph_task = item.base_.lock();\n ++graph_task->outstanding_tasks_;\n }", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "++graph_task->outstanding_tasks_;\n }\n heap_.push(std::move(item));\n }\n not_empty_.notify_one();\n}\nauto ReadyQueue::pop() -> NodeTask {\n std::unique_lock lock(mutex_);\n not_empty_.wait(lock, [this]{ return !heap_.empty(); });\n auto task = std::move(const_cast(heap_.top())); heap_.pop();\n return task;\n}\n```\nReentrant Backward\nA reentrant backward happens when one of the tasks in a backward pass calls again backward. It is not a very common case, but it can be used to reduce memory utilization as it could potentially avoid saving intermediate results. 
For more information, check this PyTorch forum post.\n```python\nclass ReentrantBackward(torch.autograd.Function):\n @staticmethod\n def forward(ctx, input):\n return input.sum()\n@staticmethod\ndef backward(ctx, input):\n # Let's compute the backward by using autograd\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "input = input.detach().requires_grad_()\n with torch.enable_grad():\n out = input.sum()\n out.backward() # REENTRANT CALL!!\n return out.detach()\n```\nHere, we call backward() inside backward() for a user custom-defined autograd function.\nThis situation can lead to deadlocks because the first backward needs to wait for the second one to complete. But some internal implementation details can prevent the second backward from completing as it is explained in the dedicated subsection.\nThread Initialization\nexecute_with_graph_task is in charge of initializing the threads taking care of the computation and placing the root node in the queue of the device that produced it.\n```c++\nc10::intrusive_ptr Engine::execute_with_graph_task(\n const std::shared_ptr& graph_task,\n std::shared_ptr graph_root,", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "std::shared_ptr graph_root,\n InputBuffer&& input_buffer) {\ninitialize_device_threads_pool();\n // Lock mutex for GraphTask.\n std::unique_lock lock(graph_task->mutex_);\nauto queue = ready_queue(graph_task->cpu_ready_queue_, input_buffer.device());\nif (worker_device == NO_DEVICE) {\n set_device(CPU_DEVICE);\n graph_task->owner_ = worker_device;\n queue->push(NodeTask(graph_task, std::move(graph_root), std::move(input_buffer)));\n lock.unlock();\n thread_main(graph_task);\n worker_device = NO_DEVICE;\n } else {\n // This deals with reentrant backwards, we will see it later.\n }\n return graph_task->future_result_;\n}\n```\nFirst, this function initializes several threads (one per device) calling initialize_device_threads_pool() where several things happen:\nOne ReadyQueue per device is created.\nOne thread per non-cpu device is created.", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "One thread per non-cpu device is created.\nA thread local worker_device variable is set to track the current device associated with the thread.\nthread_main function is called, and threads wait for tasks to be put in their queues.\nThen it retrieves the queue to place the root node based on the device that holds the tensors present in the input_buffer using the ready_queue function. Now, the main thread (the one also executing the Python interpreter) has its worker_device set to NO_DEVICE, and it is in charge of executing functions with all its tensors living in the cpu. If worker_device is set to any other value, the graph execution is already started, and .backward() was called inside a running Node, creating a reentrant backward call. This is explained later. For now, \nthe main thread places the task in the queue and call thread_main.\nWhere the Magic Happens", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "Where the Magic Happens\nIt\u2019s been a long way, but finally, we are ready to traverse the graph and execute the nodes. 
Each of the spawned threads, and the main thread call thread_main.\n```c++\nauto Engine::thread_main(const std::shared_ptr& graph_task) -> void {\nwhile (graph_task == nullptr || !graph_task->future_result_->completed()) {\n std::shared_ptr local_graph_task;\n {\n NodeTask task = local_ready_queue->pop();\n if (task.isShutdownTask_) {\n break;\n }\n\n if (!(local_graph_task = task.base_.lock())) {\n // GraphTask for function is no longer valid, skipping further\n // execution.\n continue;\n }\n\n if (task.fn_ && !local_graph_task->has_error_.load()) {\n at::ThreadLocalStateGuard tls_guard(local_graph_task->thread_locals_);\n\n try {\n GraphTaskGuard guard(local_graph_task);\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "GraphTaskGuard guard(local_graph_task);\n NodeGuard ndguard(task.fn_);\n {\n evaluate_function(\n local_graph_task,\n task.fn_.get(),\n task.inputs_,\n local_graph_task->cpu_ready_queue_);\n }\n } catch (std::exception& e) {\n thread_on_exception(local_graph_task, task.fn_, e);\n }\n }\n }\n// Decrement the outstanding tasks.\n--local_graph_task->outstanding_tasks_;\n\n// Check if we've completed execution.\nif (local_graph_task->completed()) {\n local_graph_task->mark_as_completed_and_run_post_processing();\n auto base_owner = local_graph_task->owner_;\n if (worker_device != base_owner) {\n std::atomic_thread_fence(std::memory_order_release);\n ready_queue_by_index(local_graph_task->cpu_ready_queue_, base_owner)\n ->push(NodeTask(local_graph_task, nullptr, InputBuffer(0)));\n }\n}\n\n}\n}\n```", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "}\n }\n }\n}\n```\nThe code here is simple, given the local_ready_queue assigned to each thread in thread-local storage. The threads loop until there are no tasks left to execute in the graph. Note that for device-associated threads, the passed graph_task argument is nullptr, and they block in local_ready_queue->pop() until a task is pushed in their queue. After some consistency checks (the task type is shutdown, or the graph is still valid). We get to the actual function invocation in evaluate_function.\n```c++\n try {\n GraphTaskGuard guard(local_graph_task);\n NodeGuard ndguard(task.fn_);\n {\n evaluate_function(\n local_graph_task,\n task.fn_.get(),\n task.inputs_,\n local_graph_task->cpu_ready_queue_);\n }\n } catch (std::exception& e) {", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "}\n } catch (std::exception& e) {\n thread_on_exception(local_graph_task, task.fn_, e);\n }\n }\n```\nAfter calling evaluate_function, we check if the graph_task execution is complete by looking the outstanding_tasks_ number. This number increases when a task is pushed to a queue and is decreased in local_graph_task->completed() when a task is executed. When the execution is done, we return the results that are be in the captured_vars_ in case we called torch.autograd.grad() instead of torch.autograd.backward() as this function returns tensors instead of storing them in the .grad attribute of the inputs. 
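From the user's point of view, that difference between the two entry points looks like this. It is a small sketch reusing the tensors from the example earlier in the post:

```python
import torch

x = torch.tensor([0.5, 0.75], requires_grad=True)
y = torch.tensor([0.1, 0.90], requires_grad=True)
z = torch.exp(x * y).sum()

# grad() returns the gradients directly (collected in captured_vars_)
# and leaves x.grad untouched ...
dz_dx, = torch.autograd.grad(z, inputs=[x], retain_graph=True)
print(dz_dx)   # tensor([0.1051, 1.7676])
print(x.grad)  # None

# ... while backward() accumulates the same values into x.grad.
torch.autograd.backward([z], inputs=[x])
print(x.grad)  # tensor([0.1051, 1.7676])
```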
Finally we wake up the main thread if it\u2019s waiting by sending a dummy task.\n```c++\n // Decrement the outstanding tasks.\n --local_graph_task->outstanding_tasks_;\n// Check if we've completed execution.\nif (local_graph_task->completed()) {\n local_graph_task->mark_as_completed_and_run_post_processing();\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "auto base_owner = local_graph_task->owner_;\n if (worker_device != base_owner) {\n std::atomic_thread_fence(std::memory_order_release);\n ready_queue_by_index(local_graph_task->cpu_ready_queue_, base_owner)\n ->push(NodeTask(local_graph_task, nullptr, InputBuffer(0)));\n }\n }\n```\nCalling the Function and Unlocking New Tasks\nevaluate_function serves three purposes:\nRun the function.\nAccumulate its results in the next node InputBuffers.\nDecrease the dependencies counter of the next nodes and enqueues the tasks reaching 0 to be executed.\n```c++\nvoid Engine::evaluate_function(\n std::shared_ptr& graph_task,\n Node* func,\n InputBuffer& inputs,\n const std::shared_ptr& cpu_ready_queue) {\n// If exec_info_ is not empty, we have to instrument the execution\n auto& exec_info_ = graph_task->exec_info_;", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "auto& exec_info_ = graph_task->exec_info_;\n if (!exec_info_.empty()) {\n // Checks if the function needs to be executed \n if (!fn_info.needed_) {\n // Skip execution if we don't need to execute the function.\n return;\n }\n }\nauto outputs = call_function(graph_task, func, inputs);\nauto& fn = *func;\n if (!graph_task->keep_graph_) {\n fn.release_variables();\n }\n```\nInitially, we check the exec_info_ map of the GraphTask structure to determine if the current node needs to be executed. Remember that if this map is empty, all the nodes are executed because we are calculating the grads for all the inputs of the forward pass.\nAfter this check, the function is executed by running call_function. Its implementation is very straightforward and calls the actual derivative function and registered hooks if any.\n```c++\n int num_outputs = outputs.size();", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": " int num_outputs = outputs.size();\n if (num_outputs == 0) {\n // Records leaf stream (if applicable)\n return;\n }\n\n if (AnomalyMode::is_enabled()) {\n // check for nan values in result\n }\n\n\nNext, we check the outputs of the function after call_function is done. If the number of outputs is 0, there are no following nodes to be executed so we can safely return. This is the case of the AccumulateGrad node associated with the leaf nodes.\nAlso, the check for NaN values in the gradients is done here if requested.\n\n std::lock_guard lock(graph_task->mutex_);\n for (const auto i : c10::irange(num_outputs)) {\n auto& output = outputs[i];\n const auto& next = fn.next_edge(i);\n\n if (!next.is_valid()) continue;\n\n\n\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "if (!next.is_valid()) continue;\n```\nWe have now executed a grad_fn that has returned one gradient per each of the associated forward pass function inputs. As we saw in the previous blog post, we have an Edge object per each of these input tensors, and the grad_fn of the function producing them in the forward pass. 
Essentially, Output[0] of the node in the backward pass, corresponds to the first argument of the forward pass associated function. Figure 4 shows how the outputs of a backward function are related to the inputs of the forward function. See that the outputs of grad_fn C are the gradients of z w.r.t. the inputs of Function C\n\n\n\n\nFigure 4: Correspondence between forward and backward functions inputs and outputs\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "\nWe now iterate through these edges and check if the associated functions are ready to be executed.\n // Check if the next function is ready to be computed\n bool is_ready = false;\n auto& dependencies = graph_task->dependencies_;\n auto it = dependencies.find(next.function.get());\n\n if (it == dependencies.end()) {\n auto name = next.function->name();\n throw std::runtime_error(std::string(\"dependency not found for \") + name);\n } else if (--it->second == 0) {\n dependencies.erase(it);\n is_ready = true;\n }\n\n auto& not_ready = graph_task->not_ready_;\n auto not_ready_it = not_ready.find(next.function.get());\n\n\nFor this, we check the graph_task->dependencies_ map. We decrement the counter, and if it reaches 0, we mark the function pointed by the edge ready to be executed. Following, we prepare the input buffers of the tasks indicated by the next edges.\n```c++\n if (not_ready_it == not_ready.end()) {\n if (!exec_info_.empty()) {", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "if (!exec_info_.empty()) {\n // Skip functions that aren't supposed to be executed\n }\n // Creates an InputBuffer and moves the output to the corresponding input position\n InputBuffer input_buffer(next.function->num_inputs());\n input_buffer.add(next.input_nr,\n std::move(output),\n opt_parent_stream,\n opt_next_stream);\n\n if (is_ready) {\n auto queue = ready_queue(cpu_ready_queue, input_buffer.device());\n queue->push(\n NodeTask(graph_task, next.function, std::move(input_buffer)));\n } else {\n not_ready.emplace(next.function.get(), std::move(input_buffer));\n }\n\n```", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "}\n\nHere, we look for the task in the `graph_task->not_ready_` map. If it is not present, we create a new `InputBuffer` object and set the current output in the `input_nr` position of the buffer associated with the edge. If the task is ready to be executed, we enqueue it in the appropriate device `ready_queue` and complete the execution. However, if the task is not ready and we have seen it before, it is present in the `not_ready_map_`.\n\n```c++\n } else {\n // The function already has a buffer\n auto &input_buffer = not_ready_it->second;\n // Accumulates into buffer\n input_buffer.add(next.input_nr,\n std::move(output),\n opt_parent_stream,\n opt_next_stream);\n if (is_ready) {\n auto queue = ready_queue(cpu_ready_queue, input_buffer.device());\n queue->push(NodeTask(graph_task, next.function, std::move(input_buffer)));\n not_ready.erase(not_ready_it);\n }\n }\n }\n}\n\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "}\n }\n }\n}\n```\nIn this case, we accumulate the output in the existing input_buffer instead of creating a new one. 
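Putting the last few steps together, the bookkeeping performed for each produced gradient can be summarized with the following Python sketch. It is purely illustrative: the real engine works on C++ Node, Edge and InputBuffer objects, and the attribute names used here (next_edges, num_inputs) are made up for the example.

```python
def process_outputs(func, outputs, dependencies, not_ready, ready_queue):
    """Sketch of the tail of Engine::evaluate_function: route each produced
    gradient to the next node and unlock that node once its dependency
    counter reaches zero."""
    for output, (next_fn, input_nr) in zip(outputs, func.next_edges):
        if next_fn is None:                  # invalid edge, nothing to route
            continue

        dependencies[next_fn] -= 1           # one of its producers just finished
        is_ready = dependencies[next_fn] == 0

        # InputBuffer-style accumulation at position input_nr
        buffer = not_ready.get(next_fn, [None] * next_fn.num_inputs)
        if buffer[input_nr] is None:
            buffer[input_nr] = output
        else:
            buffer[input_nr] = buffer[input_nr] + output

        if is_ready:
            not_ready.pop(next_fn, None)
            ready_queue.append((next_fn, buffer))   # becomes a NodeTask
        else:
            not_ready[next_fn] = buffer
```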
Once all the tasks are processed, the worker thread exits the loop and complete.\nAll this process is summarized in the animation in Figure 5. We see how a thread peeks at the tasks in the ready queue and decrements the next nodes' dependencies, unlocking them for execution.\n\n\n\n\nFigure 5: Animation of the execution of the computational graph\n\nFlow with Reentrant Backward\nAs we saw above, the reentrant backward problem is when the currently executed function does a nested call to backward. When this happens, the thread running this function goes all the way down to execute_with_graph_task as in the non-reentrant case, but here is when things are different.\n```c++\nc10::intrusive_ptr Engine::execute_with_graph_task(", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "const std::shared_ptr& graph_task,\n std::shared_ptr graph_root,\n InputBuffer&& input_buffer) {\ninitialize_device_threads_pool();\n // Lock mutex for GraphTask.\n std::unique_lock lock(graph_task->mutex_);\nauto queue = ready_queue(graph_task->cpu_ready_queue_, input_buffer.device());\nif (worker_device == NO_DEVICE) {\n //Regular case\n } else {\n // If worker_device is any devices (i.e. CPU, CUDA): this is a re-entrant\n // backward call from that device.\n graph_task->owner_ = worker_device;\n// Now that all the non-thread safe fields of the graph_task have been populated,\n// we can enqueue it.\nqueue->push(NodeTask(graph_task, std::move(graph_root), std::move(input_buffer)));\n\nif (current_depth >= max_recursion_depth_) {\n // If reached the max depth, switch to a different thread\n add_thread_pool_task(graph_task);\n} else {\n ++total_depth;\n ++current_depth;\n lock.unlock();\n thread_main(graph_task);\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "thread_main(graph_task);\n --current_depth;\n --total_depth;\n }\n }\n return graph_task->future_result_;\n}\n```\nHere, execute_with_graph_task detects this as a reentrant call and then looks for the current number of nested calls. If it exceeds the limit, we create a new thread to take care of the execution of this graph, and if not, we execute this reentrant call regularly.\nThe limit of nested calls was originally set to avoid stack overflow due to reentrant calls creating very large call stacks. However, the number was further reduced when sanitizer tests were added because of the maximum amount of locks a thread can hold at a given moment. 
This can be seen in torch/csrc/autograd/engine.h.", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "When this maximum depth is exceeded, a new thread is created with the add_thread_pool_task function.\nvoid Engine::add_thread_pool_task(const std::weak_ptr& graph_task) {\n std::unique_lock lck(thread_pool_shared_->mutex_);\n // if we have pending graph_task objects to be processed, create a worker.\n bool create_thread = (thread_pool_shared_->num_workers_ <= thread_pool_shared_->graphtasks_queue_.size());\n thread_pool_shared_->graphtasks_queue_.push(graph_task);\n\n\n lck.unlock();\n if (create_thread) {\n std::thread t(&Engine::reentrant_thread_init, this);\n t.detach();\n }\n\n thread_pool_shared_->work_.notify_one();\n}\n\n\n\n", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "}\n```\nBefore going in-depth, let's look at the thread_pool_shared_ object in the Engine which manages all the information related to the threads associated to the reentrant backward calls.\n```c++\n struct ThreadPoolShared {\n unsigned int num_workers_;\n std::condition_variable work_;\n std::mutex mutex_;\n std::queue> graphtasks_queue_;\n// NOLINTNEXTLINE(cppcoreguidelines-pro-type-member-init)\nThreadPoolShared() : num_workers_(0) {}\n\n};\n```\nThreadPoolShared is a simple container holding a queue of GraphTask objects with synchronization mechanisms and the number of current workers.", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "Now it is easy to understand how add_thread_pool_task creates a thread when there are graph_task objects enqueued and insufficient workers to process them.\nadd_thread_pool_task initializes a thread by executing reentrant_thread_init\n```c++\nvoid Engine::reentrant_thread_init() {\n at::init_num_threads();\n auto tp_shared = thread_pool_shared_;\n while(true) {\n std::unique_lock lk(tp_shared->mutex_);\n ++thread_pool_shared_->num_workers_;\n tp_shared->work_.wait(lk, [&tp_shared]{ return !tp_shared->graphtasks_queue_.empty();});\n --thread_pool_shared_->num_workers_;\n auto task = tp_shared->graphtasks_queue_.front();\n tp_shared->graphtasks_queue_.pop();\n lk.unlock();\n std::shared_ptr graph_task;\n if (!(graph_task = task.lock())) {\n continue;\n }\n set_device(graph_task->owner_);", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "}\n set_device(graph_task->owner_);\n // set the local_ready_queue to the ready queue on the graph_task->owner_ device\n local_ready_queue = ready_queue_by_index(graph_task->cpu_ready_queue_, graph_task->owner_);\n total_depth = graph_task->reentrant_depth_;\n thread_main(graph_task);\n }\n}\n```\nThe code is straightforward. The newly created thread waits on the thread_pool_shared->graphtasks_queue_ for reentrant backward graphs to be available and executes them. Notice that this thread uses the task-ready queue associated with the device of the thread that started this call by accessing the graph_task->owner_ field set in the execute_with_graph_task function. \nError Handling\nWhenever an error happens in one of the worker threads. 
It will be propagated to the backward calling thread.", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "To achieve this, there is a try/catch block in the thread_main that catches any exception in the Node function call and sets it to the associated GraphTask object.\n try {\n \u2026\n GraphTaskGuard guard(local_graph_task);\n NodeGuard ndguard(task.fn_);\n {\n evaluate_function(\n \u2026\n }\n } catch (std::exception& e) {\n thread_on_exception(local_graph_task, task.fn_, e);\n }\n }\n }\n\n\nthread_on_exception and the functions it calls end up setting the exception in the local_graph_task object.\n```c++\nvoid Engine::thread_on_exception(", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "void Engine::thread_on_exception(\n std::shared_ptr graph_task,\n const std::shared_ptr& fn,\n std::exception& e) {\n graph_task->set_exception(std::current_exception(), fn);\n}\n\nvoid GraphTask::set_exception_without_signal(const std::shared_ptr& fn) {\n if (!has_error_.exchange(true)) {\n if (AnomalyMode::is_enabled() && fn) {\n fn->metadata()->print_stack(fn->name());\n }\n }\n}\n\nvoid GraphTask::set_exception(\n std::exception_ptr eptr,\n const std::shared_ptr& fn) {\n set_exception_without_signal(fn);\n if (!future_completed_.exchange(true)) {\n // NOLINTNEXTLINE(performance-move-const-arg)\n future_result_->setError(std::move(eptr));\n }\n}\n\n\nIn set_exception it sets the has_error_ flag to true and it calls the setError", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "function of the future_result_ object. This will make the error to be re-thrown at the caller thread when future_result_->value() is accessed.\n IValue value() {\n std::unique_lock lock(mutex_);\n AT_ASSERT(completed());\n if (eptr_) {\n std::rethrow_exception(eptr_);\n }\n return value_;\n }\n\n\nClosing Remarks\nThis has been the last post of this series covering how PyTorch does the auto differentiation. We hope you enjoyed reading it and that now you are familiar enough with PyTorch internals to start contributing in PyTorch development!", "source": "https://pytorch.org/blog/how-computational-graphs-are-executed-in-pytorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch 1.6 released w/ Native AMP Support, Microsoft joins as maintainers for Windows'\nauthor: Team PyTorch\n\nToday, we\u2019re announcing the availability of PyTorch 1.6, along with updated domain libraries. We are also excited to announce the team at Microsoft is now maintaining Windows builds and binaries and will also be supporting the community on GitHub as well as the PyTorch Windows discussion forums.\nThe PyTorch 1.6 release includes a number of new APIs, tools for performance improvement and profiling, as well as major updates to both distributed data parallel (DDP) and remote procedure call (RPC) based distributed training. 
\nA few of the highlights include:", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "A few of the highlights include: \n\nAutomatic mixed precision (AMP) training is now natively supported and a stable feature (See here for more details) - thanks for NVIDIA\u2019s contributions; \nNative TensorPipe support now added for tensor-aware, point-to-point communication primitives built specifically for machine learning; \nAdded support for complex tensors to the frontend API surface;\nNew profiling tools providing tensor-level memory consumption information;\nNumerous improvements and new features for both distributed data parallel (DDP) training and the remote procedural call (RPC) packages.\n", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "Additionally, from this release onward, features will be classified as Stable, Beta and Prototype. Prototype features are not included as part of the binary distribution and are instead available through either building from source, using nightlies or via compiler flag. You can learn more about what this change means in the post here. You can also find the full release notes here. \nPerformance & Profiling\n[Stable] Automatic Mixed Precision (AMP) Training", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "AMP allows users to easily enable automatic mixed precision training enabling higher performance and memory savings of up to 50% on Tensor Core GPUs. Using the natively supported torch.cuda.amp API, AMP provides convenience methods for mixed precision, where some operations use the torch.float32 (float) datatype and other operations use torch.float16 (half). Some ops, like linear layers and convolutions, are much faster in float16. Other ops, like reductions, often require the dynamic range of float32. Mixed precision tries to match each op to its appropriate datatype.\n\nDesign doc (Link)\nDocumentation (Link)\nUsage examples (Link)\n\n[Beta] Fork/Join Parallelism", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "[Beta] Fork/Join Parallelism\nThis release adds support for a language-level construct as well as runtime support for coarse-grained parallelism in TorchScript code. This support is useful for situations such as running models in an ensemble in parallel, or running bidirectional components of recurrent nets in parallel, and allows the ability to unlock the computational power of parallel architectures (e.g. many-core CPUs) for task level parallelism.\nParallel execution of TorchScript programs is enabled through two primitives: torch.jit.fork and torch.jit.wait. 
In the below example, we parallelize execution of foo:\n```python\nimport torch\nfrom typing import List\ndef foo(x):\n return torch.neg(x)\n@torch.jit.script\ndef example(x):\n futures = [torch.jit.fork(foo, x) for _ in range(100)]\n results = [torch.jit.wait(future) for future in futures]\n return torch.sum(torch.stack(results))\nprint(example(torch.ones([])))\n ```\n\nDocumentation (Link)\n", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "[Beta] Memory Profiler\nThe torch.autograd.profiler API now includes a memory profiler that lets you inspect the tensor memory cost of different operators inside your CPU and GPU models.\nHere is an example usage of the API:\n```python\nimport torch\nimport torchvision.models as models\nimport torch.autograd.profiler as profiler\nmodel = models.resnet18()\ninputs = torch.randn(5, 3, 224, 224)\nwith profiler.profile(profile_memory=True, record_shapes=True) as prof:\n model(inputs)\nNOTE: some columns were removed for brevity\nprint(prof.key_averages().table(sort_by=\"self_cpu_memory_usage\", row_limit=10))\n--------------------------- --------------- --------------- ---------------\nName CPU Mem Self CPU Mem Number of Calls\n--------------------------- --------------- --------------- ---------------\nempty 94.79 Mb 94.79 Mb 123\nresize_ 11.48 Mb 11.48 Mb 2", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "addmm 19.53 Kb 19.53 Kb 1\nempty_strided 4 b 4 b 1\nconv2d 47.37 Mb 0 b 20\n--------------------------- --------------- --------------- ---------------\n```\n\nPR (Link)\nDocumentation (Link)\n\nDistributed Training & RPC\n[Beta] TensorPipe backend for RPC", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "[Beta] TensorPipe backend for RPC\nPyTorch 1.6 introduces a new backend for the RPC module which leverages the TensorPipe library, a tensor-aware point-to-point communication primitive targeted at machine learning, intended to complement the current primitives for distributed training in PyTorch (Gloo, MPI, ...) which are collective and blocking. The pairwise and asynchronous nature of TensorPipe lends itself to new networking paradigms that go beyond data parallel: client-server approaches (e.g., parameter server for embeddings, actor-learner separation in Impala-style RL, ...) and model and pipeline parallel training (think GPipe), gossip SGD, etc.\n# One-line change needed to opt in\ntorch.distributed.rpc.init_rpc(\n ...\n backend=torch.distributed.rpc.BackendType.TENSORPIPE,\n)\n\n# No changes to the rest of the RPC API\ntorch.distributed.rpc.rpc_sync(...)\n\n\nDesign doc (Link)\n", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "\nDocumentation (Link)\n\n[Beta] DDP+RPC\nPyTorch Distributed supports two powerful paradigms: DDP for full sync data parallel training of models and the RPC framework which allows for distributed model parallelism. Previously, these two features worked independently and users couldn\u2019t mix and match these to try out hybrid parallelism paradigms.\nStarting in PyTorch 1.6, we\u2019ve enabled DDP and RPC to work together seamlessly so that users can combine these two techniques to achieve both data parallelism and model parallelism. 
An example is where users would like to place large embedding tables on parameter servers and use the RPC framework for embedding lookups, but store smaller dense parameters on trainers and use DDP to synchronize the dense parameters. Below is a simple code snippet. \n```python\n// On each trainer\nremote_emb = create_emb(on=\"ps\", ...)\nddp_model = DDP(dense_model)\nfor data in batch:\n with torch.distributed.autograd.context():", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "with torch.distributed.autograd.context():\n res = remote_emb(data)\n loss = ddp_model(res)\n torch.distributed.autograd.backward([loss])\n```\n\nDDP+RPC Tutorial (Link)\nDocumentation (Link)\nUsage Examples (Link)\n\n[Beta] RPC - Asynchronous User Functions", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "[Beta] RPC - Asynchronous User Functions\nRPC Asynchronous User Functions supports the ability to yield and resume on the server side when executing a user-defined function. Prior to this feature, when a callee processes a request, one RPC thread waits until the user function returns. If the user function contains IO (e.g., nested RPC) or signaling (e.g., waiting for another request to unblock), the corresponding RPC thread would sit idle waiting for these events. As a result, some applications have to use a very large number of threads and send additional RPC requests, which can potentially lead to performance degradation. To make a user function yield on such events, applications need to: 1) Decorate the function with the @rpc.functions.async_execution decorator; and 2) Let the function return a torch.futures.Future and install the resume logic as callbacks on the Future object. See below for an example:\n```python\n@rpc.functions.async_execution\ndef async_add_chained(to, x, y, z):", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "def async_add_chained(to, x, y, z):\n return rpc.rpc_async(to, torch.add, args=(x, y)).then(\n lambda fut: fut.wait() + z\n )\nret = rpc.rpc_sync(\n \"worker1\", \n async_add_chained, \n args=(\"worker2\", torch.ones(2), 1, 1)\n)\nprint(ret) # prints tensor([3., 3.])\n```\n\nTutorial for performant batch RPC using Asynchronous User Functions (Link)\nDocumentation (Link)\nUsage examples (Link)\n\nFrontend API Updates\n[Beta] Complex Numbers", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "[Beta] Complex Numbers\nThe PyTorch 1.6 release brings beta level support for complex tensors including torch.complex64 and torch.complex128 dtypes. A complex number is a number that can be expressed in the form a + bj, where a and b are real numbers, and j is a solution of the equation x^2 = \u22121. Complex numbers frequently occur in mathematics and engineering, especially in signal processing and the area of complex neural networks is an active area of research. The beta release of complex tensors will support common PyTorch and complex tensor functionality, plus functions needed by Torchaudio, ESPnet and others. While this is an early version of this feature, and we expect it to improve over time, the overall goal is provide a NumPy compatible user experience that leverages PyTorch\u2019s ability to run on accelerators and work with autograd to better support the scientific community. 
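As a quick illustration of what the complex dtypes enable, here is a small sketch written against the present-day API surface, which has grown since the 1.6 beta; treat the exact set of supported ops in 1.6 as narrower:

```python
import torch

# Complex tensors can be built from Python complex numbers ...
z = torch.tensor([1 + 2j, 3 - 1j], dtype=torch.complex64)

# ... or by reinterpreting a float tensor whose last dimension holds (real, imag) pairs
pairs = torch.tensor([[1.0, 2.0], [3.0, -1.0]])
w = torch.view_as_complex(pairs)

print(z.real, z.imag)         # tensor([1., 3.]) tensor([ 2., -1.])
print(torch.view_as_real(z))  # back to a (2, 2) float32 tensor
print((z * w.conj()).abs())   # complex arithmetic behaves like NumPy
```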
\nMobile Updates", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "Mobile Updates\nPyTorch 1.6 brings increased performance and general stability for mobile on-device inference. We squashed a few bugs, continued maintenance and added few new features while improving fp32 and int8 performance on a large variety of ML model inference on CPU backend.\n[Beta] Mobile Features and Performance\n\nStateless and stateful XNNPACK Conv and Linear operators\nStateless MaxPool2d + JIT optimization passes\nJIT pass optimizations: Conv + BatchNorm fusion, graph rewrite to replace conv2d/linear with xnnpack ops, relu/hardtanh fusion, dropout removal\nQNNPACK integration removes requantization scale constraint\nPer-channel quantization for conv, linear and dynamic linear\nDisable tracing for mobile client to save ~600 KB on full-jit builds\n\nUpdated Domain Libraries\ntorchvision 0.7", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "Updated Domain Libraries\ntorchvision 0.7\ntorchvision 0.7 introduces two new pretrained semantic segmentation models, FCN ResNet50 and DeepLabV3 ResNet50, both trained on COCO and using smaller memory footprints than the ResNet101 backbone. We also introduced support for AMP (Automatic Mixed Precision) autocasting for torchvision models and operators, which automatically selects the floating point precision for different GPU operations to improve performance while maintaining accuracy. \n\nRelease notes (Link)\n\ntorchaudio 0.6\ntorchaudio now officially supports Windows. This release also introduces a new model module (with wav2letter included), new functionals (contrast, cvm, dcshift, overdrive, vad, phaser, flanger, biquad), datasets (GTZAN, CMU), and a new optional sox backend with support for TorchScript.\n\nRelease notes (Link)\n", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "Additional updates\nHACKATHON\nThe Global PyTorch Summer Hackathon is back! This year, teams can compete in three categories virtually:\n\nPyTorch Developer Tools: Tools or libraries designed to improve productivity and efficiency of PyTorch for researchers and developers\nWeb/Mobile Applications powered by PyTorch: Applications with web/mobile interfaces and/or embedded devices powered by PyTorch \nPyTorch Responsible AI Development Tools: Tools, libraries, or web/mobile apps for responsible AI development\n\nThis is a great opportunity to connect with the community and practice your machine learning skills. \n\nJoin the hackathon\nWatch educational videos\n\nLPCV Challenge", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "LPCV Challenge\nThe 2020 CVPR Low-Power Vision Challenge (LPCV) - Online Track for UAV video submission deadline is coming up shortly. You have until July 31, 2020 to build a system that can discover and recognize characters in video captured by an unmanned aerial vehicle (UAV) accurately using PyTorch and Raspberry Pi 3B+. \nPrototype Features\nTo reiterate, Prototype features in PyTorch are early features that we are looking to gather feedback on, gauge the usefulness of and improve ahead of graduating them to Beta or Stable. The following features are not part of the PyTorch 1.6 release and instead are available in nightlies with separate docs/tutorials to help facilitate early usage and feedback. 
\nDistributed RPC/Profiler", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "Distributed RPC/Profiler\nAllow users to profile training jobs that use torch.distributed.rpc using the autograd profiler, and remotely invoke the profiler in order to collect profiling information across different nodes. The RFC can be found here and a short recipe on how to use this feature can be found here.\nTorchScript Module Freezing\nModule Freezing is the process of inlining module parameters and attributes values into the TorchScript internal representation. Parameter and attribute values are treated as final value and they cannot be modified in the frozen module. The PR for this feature can be found here and a short tutorial on how to use this feature can be found here.\nGraph Mode Quantization", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "Graph Mode Quantization\nEager mode quantization requires users to make changes to their model, including explicitly quantizing activations, module fusion, rewriting use of torch ops with Functional Modules and quantization of functionals are not supported. If we can trace or script the model, then the quantization can be done automatically with graph mode quantization without any of the complexities in eager mode, and it is configurable through a qconfig_dict. A tutorial on how to use this feature can be found here.\nQuantization Numerical Suite", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "Quantization Numerical Suite\nQuantization is good when it works, but it\u2019s difficult to know what's wrong when it doesn't satisfy the expected accuracy. A prototype is now available for a Numerical Suite that measures comparison statistics between quantized modules and float modules. This is available to test using eager mode and on CPU only with more support coming. A tutorial on how to use this feature can be found here.\nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-1.6-released/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Accelerating Large Language Models with Accelerated Transformers\"\nauthor: Lucas Pasqualin, Driss Guessous, Christian Puhrsch, Bertrand Maher, Michael Gschwind\n", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "\nTL;DR. We show how to use Accelerated PyTorch 2.0 Transformers and the newly introduced torch.compile() method to accelerate Large Language Models on the example of nanoGPT, a compact open-source implementation of the GPT model from Andrej Karpathy. Using the new scaled dot product attention operator introduced with Accelerated PT2 Transformers, we select the flash_attention custom kernel and achieve faster training time per batch (measured with Nvidia A100 GPUs), going from a ~143ms/batch baseline to ~113 ms/batch. In addition, the enhanced implementation using the SDPA operator offers better numerical stability. Finally, further optimizations are achieved using padded inputs, which when combined with flash attention lead to ~87ms/batch.", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "Recent times have seen exponential adoption of large language models (LLMs) and Generative AI in everyday life. 
Tightly coupled with these ever-growing models is the ever-growing training cost - in terms of both time and hardware utilization. The PyTorch team has tackled these challenges head on with Accelerated PyTorch 2 Transformers (previously known as \u201cBetter Transformer\u201d) and JIT Compilation in PyTorch 2.0.", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "In this blog post, we explore training optimizations gained by utilizing custom kernel implementations of SDPA - also known as scaled dot product attention - a critical layer in transformer models. The custom kernel for SDPA replaces several discrete sequential operations with one globally optimized kernel which avoids allocating a large amount of intermediate CUDA memory. This approach offers a number of advantages, including but not limited to: higher performance computation of SDPA by reducing the memory bandwidth bottleneck, reduced memory footprint to support larger batch sizes, and finally added numerical stability by prescaling input tensors. These optimizations are demonstrated on nanoGPT, an open-source implementation of GPT from Andrej Karpathy.\nBackground\nScaled dot product attention is the fundamental building block of multihead attention, as introduced in \u201cAttention is All You Need\u201d, and has a wide range of applications in LLM and Generative AI models.", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": " \nFigure 1: The Transformer model architecture based on \u201cAttention is All You Need\u201d. With the new PyTorch SDPA operator, Multi-Head Attention is efficiently implemented by a linear layer for the in-projection, the SDPA operator, and a linear layer for the out-projection.\nWith the new scaled_dot_product_attention operator, multihead attention can be implemented in just 3 steps: in projection with a linear layer, SDPA, and out projection with a linear layer.\n```\n# In Projection\n# variable descriptions:\n# q,k,v = Query, Key, Value tensors\n# bsz = batch size\n# num_heads = Number of heads for Multihead Attention\n# tgt_len = Target length\n# src_len = Source length\n# head_dim: Head Dimension", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "# head_dim: Head Dimension\nq, k, v = _in_projection(query, key, value, q_proj_weight, k_proj_weight, v_proj_weight, b_q, b_k, b_v)\nq = q.view(bsz, num_heads, tgt_len, head_dim)\nk = k.view(bsz, num_heads, src_len, head_dim)\nv = v.view(bsz, num_heads, src_len, head_dim)\n\n# Scaled Dot Product Attention\nattn_output = scaled_dot_product_attention(q, k, v, attn_mask, dropout_p, is_causal)\n\n# Out Projection\nattn_output = attn_output.permute(2, 0, 1, 3).contiguous().view(bsz * tgt_len, embed_dim)\nattn_output = linear(attn_output, out_proj_weight, out_proj_bias)\nattn_output = attn_output.view(tgt_len, bsz, attn_output.size(1))\n\n```", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "```\nPyTorch 2.0 supports multiple different kernels optimized for specific use cases, with specific requirements. A kernel picker picks the best kernel for a particular combination of input parameters. If no optimized \"custom kernel\" for a particular combination of input parameters can be identified, the kernel picker selects a general kernel that can handle all input combinations. 
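As a minimal illustration (a hypothetical snippet, not taken from the original post), the operator can be called directly on query, key, and value tensors, and the kernel picker transparently routes the call to one of the implementations described below:\n```\nimport torch\nimport torch.nn.functional as F\n\n# Hypothetical shapes: (batch, heads, sequence length, head dimension),\n# in fp16 on a CUDA device so that the fused kernels are eligible.\nq = torch.randn(8, 12, 1024, 64, device=\"cuda\", dtype=torch.float16)\nk = torch.randn(8, 12, 1024, 64, device=\"cuda\", dtype=torch.float16)\nv = torch.randn(8, 12, 1024, 64, device=\"cuda\", dtype=torch.float16)\n\n# The kernel picker selects a backend based on dtype, device, shapes,\n# and the attn_mask/is_causal arguments.\nout = F.scaled_dot_product_attention(q, k, v, dropout_p=0.0, is_causal=True)\n```\n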
\nWhile future releases may extend this set of operators, PyTorch 2.0 launches with 3 implementations for the SDPA operator:\n\nA generic kernel which implements the mathematical equation of SDPA in the function sdpa_math()\nAn optimized kernel based on the paper \u201cFlash Attention\u201d, which supports evaluation of SDPA with 16 bit floating point data types on compute architecture SM80 (A100).\n", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "\nAn optimized kernel based on the paper \u201cSelf-Attention Does Not Need O(n^2) Memory\" and implemented in xFormer, which supports both 32 and 16 bit floating data types on a wider range of architectures (SM40 and later). This blog post refers to this kernel as the mem_efficient kernel.\n\nNote that both optimized kernels (two and three listed above), support a key padding mask and limit the supported attention mask to causal attention. Accelerated PyTorch 2.0 Transformers today only support the causal mask when it is specified using the is_causal boolean. When a mask is specified, the general-purpose kernel will be selected because it is too expensive to analyze the contents of a provided mask to determine if it is the causal mask. Additional explanations on the constraints for each kernel can be found in the Accelerated PT2 Transformer blog.", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "Enabling Accelerated Transformers with nanoGPT\nThe SDPA operator being a critical component of the GPT model, we identified the open source nanoGPT model as an excellent candidate for both demonstrating the ease of implementation and benefits of PyTorch 2.0\u2019s Accelerated Transformers. The following demonstrates the exact process by which Accelerated Transformers was enabled on nanoGPT. \nThis process largely revolves around replacing the existing SDPA implementation with the newly added F.scaled_dot_product_attention operator from functional.py. This process can be easily adapted to enable the operator in many other LLMs. Alternatively, users can instead choose to call F.multi_head_attention_forward() or utilize the nn.MultiHeadAttention module directly where applicable. The following code snippets are adapted from Karpathy\u2019s nanoGPT repository.", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "Step 1: Identify the existing SDPA implementation\nIn the case of nanoGPT, SDPA is implemented in the model\u2019s CausalSelfAttention class. The original implementation at time of writing is adapted below for this post.\n \nStep 2: Replace with Torch\u2019s scaled_dot_product_attention\nAt this point we can note the following:\n\nLines 36 - 42 define the mathematical implementation of SDPA which we are replacing\nThe mask applied on line 39 is no longer relevant since we are using scaled_dot_product_attention\u2019s is_causal flag.\nThe dropout layer used in line 41 is also now unnecessary. 
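For orientation, a minimal sketch of what such a replacement can look like is shown below (hypothetical variable names, assuming q, k, v have already been produced by the in-projection as (batch, n_head, seq_len, head_dim) tensors; the actual nanoGPT change may differ in its details):\n```\nimport torch.nn.functional as F\n\n# q, k, v: (batch, n_head, seq_len, head_dim) tensors from the in-projection.\n# is_causal=True replaces the explicit lower-triangular mask, the fused kernel\n# applies the softmax scaling internally, and no separate attention-dropout\n# module is needed when dropout_p is 0.\ny = F.scaled_dot_product_attention(q, k, v, attn_mask=None, dropout_p=0.0, is_causal=True)\n```\n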
\n\nSwapping out the SDPA implementation for torch\u2019s scaled_dot_product_attention and removing the now redundant code yields the following implementation.", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": " \nAlternatively, the original mask can be passed into the attn_mask field however due to the mentioned kernel constraints that would limit the implementation to only support the generic sdpa_math kernel.\nStep 3 (Bonus): Faster matmuls with padding\nOn top of the performance improvements from SDPA, our analysis yielded a nice ancillary win. In Andrej's words \"The most dramatic optimization to nanoGPT so far (~25% speedup) is to simply increase the vocab size from 50257 to 50304 (nearest multiple of 64).\"\n", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "The vocab size determines the dimensions of matmuls in the output layer of GPT, and these are so large that they were taking a majority of the time for the entire training loop! We discovered that they were achieving performance significantly below the peak throughput achievable on the A100 GPU, and guessed from NVIDIA's matmul documentation that 64-element alignment would yield better results. Indeed, padding these matmuls achieves nearly a 3x speedup! The underlying cause is that unaligned memory accesses significantly reduce efficiency. A deeper analysis can be found in this Twitter thread.\nWith this optimization we were able to further reduce training time from ~113 ms (using flash attention) to ~87 ms per batch.\nResults\nThe figure below demonstrates the performance gained using Pytorch custom kernels. Here are the exact figures:", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "\nbaseline (nanoGPT implementation): ~143ms\nsdpa_math (generic): ~134ms (6.71% faster)\nmem_efficient kernel: ~119ms (20.16% faster)\nflash_attention kernel: ~113ms (26.54% faster)\nflash_attention + padded vocab: ~87ms (64.37% faster)\n\nAll code was run on an 8 x NVIDIA Corporation A100 server with 80 GB HBM [A100 SXM4 80GB], and for the purpose of this experiment dropout was set to 0.\n \nFigure 2: Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as for nanoGPT shown here.\nEnhancing Numerical Model Stability", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "Enhancing Numerical Model Stability\nIn addition to being faster, PyTorch's implementation offers increased numerical stability by avoiding loss of precision in many execution scenarios. There is a great explanation here, but essentially the PyTorch implementation scales the Query and Key matrices before multiplication, which is said to be more stable and avoid loss of precision. Because of the merged custom kernel architecture of SDPA, this scaling does not introduce additional overhead in the computation of the attention result. In comparison, an implementation from the individual computational components would require separate pre-scaling at additional cost. 
For an additional explanation, see Appendix A.\nImproved Memory Consumption", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "Improved Memory Consumption\nYet another large advantage of using the torch SDPA kernels is the reduced memory footprint, which allows for the utilization of larger batch sizes. The following chart compares the best validation loss after one hour of training for both flash attention and the baseline implementations of causal attention. As can be seen, the maximum batch size achieved with the baseline causal attention implementation (on an 8 x NVIDIA Corporation A100 server with 80 GB HBM) was 24, significantly less than the maximum achieved with flash attention, which was 39.\n \nFigure 3: Using Flash Attention enables the usage of larger batch sizes, allowing users to achieve lower validation loss after one hour of training (smaller is better).\nConclusion", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "Conclusion\nAccelerated PyTorch 2 Transformers were designed to make the training and production deployment of state-of-the-art transformer models affordable and integrated with PyTorch 2.0 model JIT compilation. The newly introduced PyTorch SDPA operator provides improved performance for training Transformer models and is particularly valuable for the expensive Large Language Model training. In this post we demonstrate a number of optimizations on the exemplary nanoGPT model, including:\n\nOver 26% training speedup, when compared against the baseline with constant batch size\nAn additional speedup achieved with padded vocabulary, bringing the total optimization to approximately 64% compared to the baseline\nAdditional numerical stability\n\nAppendix A: Analyzing Attention Numeric Stability", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "In this section we provide a more in-depth explanation of the previously mentioned enhanced numerical stability which is gained by prescaling SDPA\u2019s input vectors. The following is a simplified version of nanoGPT\u2019s mathematical implementation of SDPA. 
The important thing to note here is that the query undergoes matrix multiplication without being scaled.\n```\n# nanoGPT implementation of SDPA\n# notice q (our query vector) is not scaled !\natt = (q @ k.transpose(-2, -1)) * (1.0 / math.sqrt(k.size(-1)))\natt = att.masked_fill(self.bias[:,:,:T,:T] == 0, float('-inf'))\natt = F.softmax(att, dim=-1)\n\n# Dropout is set to 0, so we can safely ignore this line in the implementation\n# att = self.attn_dropout(att)\n\ny_nanogpt = att @ v # (B, nh, T, T) x (B, nh, T, hs) -> (B, nh, T, hs)\n```\n\nThe following is the equivalent mathematical implementation in torch\u2019s scaled_dot_product_attention.\n```\n# PyTorch implementation of SDPA\nembed_size = q.size(-1)\nscaling_factor = math.sqrt(math.sqrt(embed_size))", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "scaling_factor = math.sqrt(math.sqrt(embed_size))\nq = q / scaling_factor # notice q is scaled here !\n# same as above, but with scaling factor\natt = q @ (k.transpose(-2, -1) / scaling_factor)\natt = att.masked_fill(self.bias[:,:,:T,:T] == 0, float('-inf'))\natt = F.softmax(att, dim=-1)\n# Dropout is set to 0, so we can safely ignore this line in the implementation\n# att = self.attn_dropout(att)\ny_scale_before = att @ v\n```\n\nMathematically both approaches should be equivalent; however, our experimentation shows that in practice we receive different results from each approach. \n\nUsing the approach above, we verified `y_scale_before` matches the expected output from using the `scaled_dot_product_attention` method while `y_nanogpt` does not.\n\nThe `torch.allclose` method was used to test equivalence. Specifically, we showed that:\n\n\ny_sdpa = torch.nn.functional._scaled_dot_product_attention(\n q,\n k,\n v,\n attn_mask=self.bias[:,:,:T,:T] != 0,\n dropout_p=0.0,\n need_attn_weights=False,\n is_causal=False,\n)", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "need_attn_weights=False,\n is_causal=False,\n)\ntorch.allclose(y_sdpa, y_nanogpt) # False, indicating fp issues\ntorch.allclose(y_sdpa, y_scale_before) # True, as expected\n\n## Appendix B: Reproducing Experiment Results\n\nResearchers seeking to reproduce these results should start with the following commit from Andrej\u2019s nanoGPT repository - **b3c17c6c6a363357623f223aaa4a8b1e89d0a465**. This commit was used as the baseline when measuring the per batch speed improvements. For results which include padded vocabulary optimizations (which yielded the most significant improvements to batch speed), use the following commit - **77e7e04c2657846ddf30c1ca2dd9f7cbb93ddeab**. From either checkout, selecting kernels for experimentation is made trivial with the use of the [torch.backends](https://pytorch.org/docs/stable/backends.html) API. 
\n\nThe desired kernel can be selected via a context manager:\n\n", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "with torch.backends.cuda.sdp_kernel(\n enable_math=False,\n enable_flash=False,\n enable_mem_efficient=True\n):\n train(model)\n", "source": "https://pytorch.org/blog/accelerating-large-language-models/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Straggler Mitigation On PyTorch DDP By Hierarchical SGD\"\nauthor: Yi Wang (Cruise AI), Rohan Varma (Meta AI)\n\nPyTorch DDP has been widely adopted across the industry for distributed training, which by default runs synchronous SGD to synchronize gradients across model replicas at every step. The performance of this technique is critical for fast iteration during model exploration as well as resource and cost saving. To resolve a ubiquitous performance bottleneck introduced by slow nodes in large-scale training, Cruise and Meta co-developed a solution based on the Hierarchical SGD algorithm to significantly accelerate training in the presence of these stragglers.\nThe Need For Straggler Mitigation", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "The Need For Straggler Mitigation\nIn a DDP setup, a straggler problem can occur when one or more processes run much slower (\"stragglers\") than other processes. When this happens, all the processes have to wait for the stragglers before synchronizing gradients and completing the communication, which essentially bottlenecks distributed performance to the slowest worker. As a result, even for the cases of training relatively small models, the communication cost can still be a major performance bottleneck.\nPotential Causes of Stragglers\nSevere straggler issues are usually caused by workload imbalance before synchronization, and many factors can contribute to this imbalance. For instance, some data loader workers in the distributed environment can become stragglers, because some input examples can be outliers in terms of the data size, or the data transfer of some examples can be drastically slowed down due to unstable network I/O, or the on-the-fly data transformation costs can have a high variance.", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "Besides data loading, other phases before gradient synchronization can also cause stragglers, such as unbalanced workloads of embedding table lookup during the forward pass in recommendation systems.\nThe Appearance of Stragglers\nIf we profile DDP training jobs that have stragglers, we can find that some processes may have much higher gradient synchronization costs (a.k.a., allreducing gradients) than other processes at a certain step. As a result, the distributed performance can be dominated by the communication cost even if the model size is very small. In this case, some processes run faster than the straggler(s) at a step, and hence they have to wait for the stragglers and spend a much longer time on allreduce.\nThe below shows screenshots of two trace files output by the PyTorch profiler in a use case. 
Each screenshot profiles 3 steps.", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "\nThe first screenshot shows that a process has a very high allreduce cost in both the first and the third steps, because this process reaches the synchronization phase earlier than the straggler(s), and it spends more time on waiting. On the other hand, the allreduce cost is relatively small in the second step, this suggests that 1) there is no straggler at this step; or 2) this process is the straggler among all the processes, so it does not need to wait for any other process.\n\n \nBoth the 1st and the 3rd Steps Are Slowed Down by Stragglers\n\nThe second screenshot shows a normal case without stragglers. In this case, all the gradient synchronizations are relatively short.\n\n", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "Normal Case Without Stragglers\nHierarchical SGD in PyTorch\nRecently hierarchical SGD has been proposed to optimize the communication costs by mainly reducing the total amount of data transfer in large-scale distributed training, and multiple convergence analyses have been provided (example). As a main novelty of this post, at Cruise we could leverage hierarchical SGD to mitigate stragglers, which may also occur on training relatively small models. Our implementation has been upstreamed by Cruise to PyTorch in early 2022.\nHow Does Hierarchical SGD Work?\nAs the name implies, hierarchical SGD organizes all the processes into groups at different levels as a hierarchy, and runs synchronization by following the rules below:", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "\nAll the groups at the same level have the same number of processes, and the processes in these groups synchronize at the same frequency concurrently, where the synchronization period is pre-defined by the user.\nThe higher level a group is, the larger synchronization period is used, as the synchronization becomes more expensive.\nWhen multiple overlapping groups are supposed to synchronize according to their periods, to reduce redundant synchronization and avoid data race across groups, only the highest-level group runs synchronization.\n\nThe following figure illustrates an example of 4-level hierarchy SGD among 16 processes on 8 machines, each of which has 2 GPUs:\n\nLevel 1: Each process runs mini-batch SGD locally;\nLevel 2: Each 4-process group across 2 machines runs synchronization every 2 steps;\nLevel 3: Each 8-process group across 4 machines runs synchronization every 4 steps;\n", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "\nLevel 4: The global process group of all 16 processes over 8 machines runs synchronization every 8 steps.\n\nParticularly, when the step number can be divided by 8, only the synchronization at 3) is executed, and when the step number can be divided by 4 but not 8, only the synchronization at 2) is executed.\n \nIntuitively, hierarchical SGD can be viewed as an extension of local SGD, which only has a two-level hierarchy \u2013 every process runs mini-batch SGD locally and then synchronizes globally at a certain frequency. This can also help explain that, just like local SGD, hierarchical SGD synchronizes model parameters instead of gradients. 
Otherwise the gradient descent will be mathematically incorrect when the frequency is greater than 1.", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "Why Can Hierarchical SGD Mitigate Stragglers?\nThe key insight here is that, when there is a random straggler, it only directly slows down a relatively small group of processes instead of all the processes. Next time another random straggler is very likely to slow down a different small group, and hence a hierarchy can help smooth out the straggler effect.\nThe example below assumes that there is a random straggler among totally 8 processes at every step. After 4 steps, vanilla DDP that runs synchronous SGD will be slowed down by straggler 4 times, because it runs global synchronization at every step. In contrast, hierarchical SGD runs synchronization with the groups of 4 processes after the first two steps, and then a global synchronization after another two steps. We can see that both the first two and the last two stragglers have a large overlap, and hence the performance loss can be mitigated.", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": " \nEssentially, the mitigation effect of this hierarchical SGD example actually is between local SGD at a frequency of every 2 steps and every 4 steps. The main advantage of hierarchical SGD over local SGD is a better convergence efficiency of the same global synchronization frequency, because hierarchical SGD allows more low-level synchronization. Moreover, it is possible for hierarchical SGD to provide a global synchronization frequency lower than local SGD with model parity, leading to a higher training performance, especially in a large-scale distributed training.\nEase of Use", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "Straggler mitigation is not a novel study in distributed training. Multiple approaches have been proposed, such as gossip SGD, data encoding, gradient coding, as well as some particularly designed for parameter-server architecture, including backup workers and stale synchronous parallel. However, to the best of our knowledge, before this effort we have not found a good open-source PyTorch implementation of straggler mitigation that can work like a plugin to our training system at Cruise. In contrast, our implementation only requires the minimal changes \u2013 no need to modify the existing code or tune any existing hyperparameters. This is a very appealing advantage for industry users.", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "As the code example below shows, only a few lines need to be added to the setup of DDP model, and the training loop code can keep untouched. 
As explained previously, hierarchical SGD is an extended form of local SGD, so the enablement can be quite similar to local SGD (see the PyTorch docs for PostLocalSGDOptimizer):\n\nRegister a post-local SGD communication hook to run a warmup stage of fully synchronous SGD and defer hierarchical SGD.\nCreate a post-local SGD optimizer that wraps an existing local optimizer and a hierarchical SGD configuration.\n\n```\nfrom collections import OrderedDict\n\nimport torch.distributed\nimport torch.distributed.algorithms.model_averaging.hierarchical_model_averager as hierarchicalSGD\nfrom torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import (\n    PostLocalSGDState,\n    post_localSGD_hook,\n)\nfrom torch.distributed.optim import PostLocalSGDOptimizer\nfrom torch import nn\n\nddp_model = nn.parallel.DistributedDataParallel(\n    module=model,", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "module=model,\n    device_ids=[rank],\n)\n# Register a post-local SGD communication hook for the warmup.\nsubgroup, _ = torch.distributed.new_subgroups()\nstate = PostLocalSGDState(subgroup=subgroup, start_localSGD_iter=1_000)\nddp_model.register_comm_hook(state, post_localSGD_hook)\n# Wraps the existing (local) optimizer to run hierarchical model averaging.\noptim = PostLocalSGDOptimizer(\n    optim=optim,\n    averager=hierarchicalSGD.HierarchicalModelAverager(\n        # The config runs a 4-level hierarchical SGD among 128 processes:\n        # 1) Each process runs mini-batch SGD locally;\n        # 2) Each 8-process group synchronizes every 2 steps;\n        # 3) Each 32-process group synchronizes every 4 steps;\n        # 4) All 128 processes synchronize every 8 steps.\n        period_group_size_dict=OrderedDict([(2, 8), (4, 32), (8, 128)]),\n        # Do not run hierarchical SGD until 1K steps for model parity.\n        warmup_steps=1_000)\n)\n```\nAlgorithm Hyperparameters", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": ")\n```\nAlgorithm Hyperparameters\nHierarchical SGD has two major hyperparameters: period_group_size_dict and warmup_steps.\n\nperiod_group_size_dict is an ordered dictionary mapping from synchronization period to process group size, used for initializing process groups of different sizes in a hierarchy to synchronize parameters concurrently. A larger group is expected to use a larger synchronization period.\nwarmup_steps specifies a number of steps as the warmup stage to run synchronous SGD before hierarchical SGD. Similar to the post-local SGD algorithm, a warmup stage is usually recommended to achieve a higher accuracy. The value should be the same as the start_localSGD_iter arg used in PostLocalSGDState when post_localSGD_hook is registered. Typically the warmup stage should at least cover the beginning of training when the loss decreases drastically.\n", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "A subtle difference between the PyTorch implementation and the initial design proposed by relevant papers is that, after the warmup stage, by default the processes within each host still run intra-host gradient synchronization at every step. This is because:\n\nThe intra-host communication is relatively cheap, and it can usually significantly accelerate the convergence;\nThe intra-host group (of size 4 or 8 for most industry users) can usually be a good choice of the smallest group of processes that synchronize most frequently in hierarchical SGD. 
If the synchronization period is 1, then gradient synchronization is faster than model parameter synchronization (a.k.a., model averaging), because DDP automatically overlaps gradient synchronization and the backward pass.\n\nSuch intra-host gradient synchronization can be disabled by unsetting the post_local_gradient_allreduce arg in PostLocalSGDState.\nDemonstration", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "Demonstration\nNow we demonstrate that hierarchical SGD can accelerate distributed training by mitigating stragglers.\nExperimental Setup\nWe compared the performance of hierarchical SGD against local SGD and synchronous SGD on ResNet18 (model size: 45MB). Since the model is so small, the training is not bottlenecked by data transfer cost during synchronization. To avoid the noise incurred by data loading from remote storage, the input data was randomly simulated from memory. We varied the number of GPUs used by training from 64 to 256. The batch size per worker is 32, and the number of iterations of training is 1,000. Since we don\u2019t evaluate convergence efficiency in this set of experiments, warmup is not enabled.", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "We also emulated stragglers at a rate of 1% on 128 and 256 GPUs, and 2% on 64 GPUs, to make sure there is at least one straggler at every step on average. These stragglers randomly appear on different CUDA devices. Each straggler stalls for 1 second in addition to the normal per-step training time (~55ms in our setup). This can be perceived as a practical scenario where 1% or 2% of input data are outliers in terms of the data pre-processing cost (I/O and/or data transformation on the fly) during training, and such cost is 20X+ larger than the average.\nThe code snippet below shows how a straggler can be emulated in the training loop. We applied it to a ResNet model, and it can be easily applied to other models as well.\n```\n loss = loss_fn(y_pred, y)\n # Emulate a straggler that lags for 1 second at a rate of 1%.\n if random.randint(1, 100) == 1:\n     time.sleep(1)\n loss.backward()\n optimizer.step()\n", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "loss.backward()\n optimizer.step()\n```\nThe experiments are conducted on a us-central1 GCP cluster. Each machine has 4 NVIDIA Tesla T4 GPUs with 16 GB memory per GPU, connected through a 32 Gbit/s ethernet network. Each instance also features 96 vCPUs, 360 GB RAM.\n\nArchitecture: ResNet18 (45MB)\nWorkers: 64, 128, 256\nBackend: NCCL\nGPU: Tesla T4, 16 GB memory\nBatch size: 32 x # of workers\nStraggler Duration: 1 sec\nStraggler Rate: 1% on 128 and 256 GPUs, 2% on 64 GPUs\n\nWe used multiple configurations for both local SGD and hierarchical SGD. Local SGD runs global synchronization every 2, 4, and 8 steps, respectively.", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "We ran hierarchical SGD with the following configurations:\n\nOn 64 GPUs:\nEach 8-process group, 32-process group, and the global 64-process group synchronizes every 2, 4, and 8 steps, respectively. Denoted as \"HSGD 2-8,4-32,8-64\".\nEach 32-process group and the global 64-process group synchronizes every 4 and 8 steps, respectively. 
Denoted as \"HSGD 4-32,8-64\".\n\n\nOn 128 GPUs:\nEach 8-process group, 32-process group, and the global 128-process group synchronizes every 2, 4, and 8 steps, respectively. Denoted as \"HSGD 2-8,4-32,8-128\".\nEach 32-process group and the global 128-process group synchronizes every 4 and 8 steps, respectively. Denoted as \"HSGD 4-32,8-128\".\n\n\nOn 256 GPUs:\nEach 4-process group, 16-process group, 64-process group, and the global 256-process group synchronizes every 1, 2, 4, and 8 steps, respectively. Denoted as \"HSGD 1-4,2-16,4-64,8-256\".\n\n\n", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "\nEach 8-process group, 64-process group, and the global 256-process group synchronizes every 2, 4, and 8 steps. Denoted as \"HSGD 2-8,4-64,8-256\".\nEach 16-process group and the global 256-process group synchronizes every 4 and 8 steps, respectively. Denoted as \"HSGD 4-16,8-256\".\n\n\n\nExperimental Results\nThe figures below show the speedups of different communication schemes against the baseline of synchronous SGD, with the emulated stragglers. We can make the following observations:\n\nAs expected, we can see that both hierarchical SGD and local SGD can achieve a higher speedup with a lower synchronization frequency.\nThe speedups of the hierarchical SGD schemes are 2.08X-2.45X on 64 GPUs, 2.57X-2.68X on 128 GPUs, and 2.63X-3.25X on 256 GPUs, respectively. This shows that hierarchical SGD can significantly mitigate stragglers, and such mitigation can be more effective at a larger scale.\n", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "\nThe performance of local SGD with the synchronization period of 2 steps and 8 steps can be perceived as the lower bound and upper bound of the experimented hierarchical SGD schemes, respectively. This is because the hierarchical SGD schemes synchronize less frequently than every 2 steps globally, but their low-level synchronization at small groups are the extra overheads in comparison with the global synchronization every 8 steps.\n\nOverall, hierarchical SGD can provide a finer-grained trade-off between communication cost and model quality than local SGD. Therefore, when local SGD at a relatively large synchronization period like 8 or 4 cannot give a satisfactory convergence efficiency, hierarchical SGD can have a much better chance to achieve both a good speedup and a model parity.\nSince only simulated data is used in the experiments, we did not demonstrate the model parity here, which in practice can be achieved in two ways:\n1. Tuning the hyperparameters including both hierarchy and warmup steps;", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "\nFor some cases, hierarchical SGD could lead to a slightly lower quality than the original model for the same number of training steps (i.e., lower convergence rate), but with a speedup like 2X+ per training step, it is still possible to achieve model parity with more steps but still less total training time.\n\n \n \n \nLimitations\nBefore applying hierarchical SGD to straggler mitigation, the user should be aware of a few limitations of this approach:", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "\nThis approach can only mitigate non-persistent stragglers, which occur to different workers at different times. 
However, for the case of persistent stragglers, which can be caused by hardware degradation or a network issue on a specific host, these stragglers will slow down the same low-level subgroup at every time, leading to nearly no straggler mitigation.\nThis approach can only mitigate low-frequency stragglers. E.g., if 30% workers can randomly become stragglers at every step, then most low-level synchronizations will still be slowed down by stragglers. As a result, hierarchical SGD may not show an obvious performance advantage over synchronous SGD.\n", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "\nSince hierarchical SGD applies model averaging that does not overlap with backward like gradient averaging used by vanilla DDP, its performance gain of straggler mitigation must outweigh the performance loss of no overlap between communication and backward pass. Therefore, if stragglers only slow down training by less than 10%, hierarchical SGD may not be able to bring much speedup. This limitation can be addressed by overlapping optimizer step and backward pass in the future.\n", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "\nSince hierarchical SGD is less well-studied than local SGD, there is no guarantee that hierarchical SGD with a finer-grained synchronization granularity can converge faster than certain advanced forms of local SGD, such as SlowMo, which can improve convergence efficiency with slow momentum. However, to the best of our knowledge, these advanced algorithms cannot be natively supported as a PyTorch DDP plugin like hierarchical SGD yet.\n\nAcknowledgements\nWe would like to thank Cruise teammates Bo Tian, Sergei Vorobev, Eugene Selivonchyk, Tsugn-Hsien Lee, Dan Ring, Ian Ackerman, Lei Chen, Maegan Chew, Viet Anh To, Xiaohui Long, Zeyu Chen, Alexander Sidorov, Igor Tsvetkov, Xin Hu, Manav Kataria, Marina Rubtsova, and Mohamed Fawzy, as well as Meta teammates Shen Li, Yanli Zhao, Suraj Subramanian, Hamid Shojanzeri, Anjali Sridhar and Bernard Nguyen for the support.", "source": "https://pytorch.org/blog/straggler-mitigation/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Easily list and initialize models with new APIs in TorchVision\"\nauthor: Vasilis Vryniotis and Laurence Rouesnel\nfeatured-img: \"/assets/images/easily-list-and-initialize-models-with-new-apis-in-torchvision-1.png\"\n\nTorchVision now supports listing and initializing all available built-in models and weights by name. This new API builds upon the recently introduced Multi-weight support API, is currently in Beta, and it addresses a long-standing request from the community.\n\n\n", "source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"} {"text": "\nYou can try out the new API in the latest nightly release of TorchVision. We\u2019re looking to collect feedback ahead of finalizing the feature in TorchVision v0.14. 
We have created a dedicated Github Issue where you can post your comments, questions and suggestions!\nQuerying and initializing available models\nBefore the new model registration API, developers had to query the __dict__ attribute of the modules in order to list all available models or to fetch a specific model builder method by its name:\n# Initialize a model by its name:\nmodel = torchvision.models.__dict__[model_name]()\n\n# List available models:\navailable_models = [\n k for k, v in torchvision.models.__dict__.items()\n if callable(v) and k[0].islower() and k[0] != \"_\"\n]\n", "source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"} {"text": "]\n\nThe above approach does not always produce the expected results and is hard to discover. For example, since the [``get_weight()``](https://pytorch.org/vision/main/models.html#using-models-from-hub) method is exposed publicly under the same module, it will be included in the list despite not being a model. In general, reducing the verbosity (less imports, shorter names etc) and being able to initialize models and weights directly from their names (better support of configs, TorchHub etc) was [feedback](https://github.com/pytorch/vision/issues/5088) provided previously by the community. To solve this problem, we have developed a model registration API.\n\n## A new approach\n\nWe\u2019ve added 4 new methods under the torchvision.models module:\n\n```python\nfrom torchvision.models import get_model, get_model_weights, get_weight, list_models\n", "source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"} {"text": "\nThe styles and naming conventions align closely with a prototype mechanism proposed by Philip Meier for the [Datasets V2](https://github.com/pytorch/vision/blob/main/torchvision/prototype/datasets/_api.py) API, aiming to offer a similar user experience. The model registration methods are kept private on purpose as we currently focus only on supporting the built-in models of TorchVision.\n\n### List models\n\nListing all available models in TorchVision can be done with a single function call:\n\n```python\n>>> list_models()\n['alexnet', 'mobilenet_v3_large', 'mobilenet_v3_small', 'quantized_mobilenet_v3_large', ...]\n\nTo list the available models of specific submodules:\n>>> list_models(module=torchvision.models)\n['alexnet', 'mobilenet_v3_large', 'mobilenet_v3_small', ...]\n>>> list_models(module=torchvision.models.quantization)\n['quantized_mobilenet_v3_large', ...]\n\nInitialize models\nNow that you know which models are available, you can easily initialize a model with pre-trained weights:", "source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"} {"text": ">>> get_model(\"quantized_mobilenet_v3_large\", weights=\"DEFAULT\")\nQuantizableMobileNetV3(\n (features): Sequential(\n ....\n )\n)\n\nGet weights\nSometimes, while working with config files or using TorchHub, you might have the name of a specific weight entry and wish to get its instance. 
This can be easily done with the following method:\n>>> get_weight(\"ResNet50_Weights.IMAGENET1K_V2\")\nResNet50_Weights.IMAGENET1K_V2\n\nTo get the enum class with all available weights of a specific model you can use either its name:\n>>> get_model_weights(\"quantized_mobilenet_v3_large\")\n\n\nOr its model builder method:\n>>> get_model_weights(torchvision.models.quantization.mobilenet_v3_large)\n\n\nTorchHub support\nThe new methods are also available via TorchHub:\n```python\nimport torch\nFetching a specific weight entry by its name:", "source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"} {"text": "Fetching a specific weight entry by its name:\nweights = torch.hub.load(\"pytorch/vision\", \"get_weight\", weights=\"ResNet50_Weights.IMAGENET1K_V2\")\nFetching the weights enum class to list all available entries:\nweight_enum = torch.hub.load(\"pytorch/vision\", \"get_model_weights\", name=\"resnet50\")\nprint([weight for weight in weight_enum])\n```\nPutting it all together\nFor example, if you wanted to retrieve all the small-sized models with pre-trained weights and initialize one of them, it\u2019s a matter of using the above APIs:\n```python\nimport torchvision\nfrom torchvision.models import get_model, get_model_weights, list_models\nmax_params = 5000000\ntiny_models = []\nfor model_name in list_models(module=torchvision.models):\n weights_enum = get_model_weights(model_name)\n if len([w for w in weights_enum if w.meta[\"num_params\"] <= max_params]) > 0:\n tiny_models.append(model_name)\nprint(tiny_models)\n['mnasnet0_5', 'mnasnet0_75', 'mnasnet1_0', 'mobilenet_v2', ...]", "source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"} {"text": "model = get_model(tiny_models[0], weights=\"DEFAULT\")\nprint(sum(x.numel() for x in model.state_dict().values()))\n2239188\n```\nFor more technical details please see the original RFC. Please spare a few minutes to provide your feedback on the new API, as this is crucial for graduating it from beta and including it in the next release. You can do this on the dedicated Github Issue. We are looking forward to reading your comments!", "source": "https://pytorch.org/blog/easily-list-and-initialize-models-with-new-apis-in-torchvision/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Introducing native PyTorch automatic mixed precision for faster training on NVIDIA GPUs'\nauthor: Mengdi Huang, Chetan Tekur, Michael Carilli\n\nMost deep learning frameworks, including PyTorch, train with 32-bit floating point (FP32) arithmetic by default. However this is not essential to achieve full accuracy for many deep learning models. In 2017, NVIDIA researchers developed a methodology for mixed-precision training, which combined single-precision (FP32) with half-precision (e.g. 
FP16) format when training a network, and achieved the same accuracy as FP32 training using the same hyperparameters, with additional performance benefits on NVIDIA GPUs:\n\nShorter training time;\nLower memory requirements, enabling larger batch sizes, larger models, or larger inputs.\n", "source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"} {"text": "In order to streamline the user experience of training in mixed precision for researchers and practitioners, NVIDIA developed Apex in 2018, which is a lightweight PyTorch extension with Automatic Mixed Precision (AMP) feature. This feature enables automatic conversion of certain GPU operations from FP32 precision to mixed precision, thus improving performance while maintaining accuracy.\nFor the PyTorch 1.6 release, developers at NVIDIA and Facebook moved mixed precision functionality into PyTorch core as the AMP package, torch.cuda.amp. torch.cuda.amp is more flexible and intuitive compared to apex.amp. Some of apex.amp's known pain points that torch.cuda.amp has been able to fix:\n\nGuaranteed PyTorch version compatibility, because it's part of PyTorch\nNo need to build extensions\nWindows support\n", "source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"} {"text": "\nNo need to build extensions\nWindows support\nBitwise accurate saving/restoring of checkpoints\nDataParallel and intra-process model parallelism (although we still recommend torch.nn.DistributedDataParallel with one GPU per process as the most performant approach)\nGradient penalty (double backward)\n", "source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"} {"text": "\ntorch.cuda.amp.autocast() has no effect outside regions where it's enabled, so it should serve cases that formerly struggled with multiple calls to apex.amp.initialize() (including cross-validation) without difficulty. Multiple convergence runs in the same script should each use a fresh GradScaler instance, but GradScalers are lightweight and self-contained so that's not a problem.\nSparse gradient support\n\nWith AMP being added to PyTorch core, we have started the process of deprecating apex.amp. We have moved apex.amp to maintenance mode and will support customers using apex.amp. 
However, we highly encourage apex.amp customers to transition to using torch.cuda.amp from PyTorch Core.\nExample Walkthrough\nPlease see official docs for usage:", "source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"} {"text": "Please see official docs for usage:\n* https://pytorch.org/docs/stable/amp.html\n* https://pytorch.org/docs/stable/notes/amp_examples.html\nExample:\nimport torch\n# Creates once at the beginning of training\nscaler = torch.cuda.amp.GradScaler()\n\nfor data, label in data_iter:\n optimizer.zero_grad()\n # Casts operations to mixed precision\n with torch.cuda.amp.autocast():\n loss = model(data)\n\n # Scales the loss, and calls backward()\n # to create scaled gradients\n scaler.scale(loss).backward()\n\n # Unscales gradients and calls\n # or skips optimizer.step()\n scaler.step(optimizer)\n\n # Updates the scale for next iteration\n scaler.update()\n\nPerformance Benchmarks", "source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"} {"text": "scaler.update()\n```\nPerformance Benchmarks\nIn this section, we discuss the accuracy and performance of mixed precision training with AMP on the latest NVIDIA GPU A100 and also previous generation V100 GPU. The mixed precision performance is compared to FP32 performance, when running Deep Learning workloads in the NVIDIA pytorch:20.06-py3 container from NGC.\nAccuracy: AMP (FP16), FP32", "source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"} {"text": "Accuracy: AMP (FP16), FP32\nThe advantage of using AMP for Deep Learning training is that the models converge to the similar final accuracy while providing improved training performance. To illustrate this point, for Resnet 50 v1.5 training, we see the following accuracy results where higher is better. Please note that the below accuracy numbers are sample numbers that are subject to run to run variance of up to 0.4%. Accuracy numbers for other models including BERT, Transformer, ResNeXt-101, Mask-RCNN, DLRM can be found at NVIDIA Deep Learning Examples Github.\nTraining accuracy: NVIDIA DGX A100 (8x A100 40GB)\n\n\n\n\u00a0epochs\n\u00a0Mixed Precision Top 1(%)", "source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"} {"text": "\u00a0TF32 Top1(%)\n\n\n  90\n  76.93\n  76.85\n\n\n\n\nTraining accuracy: NVIDIA DGX-1 (8x V100 16GB)\n\n\n\n\u00a0epochs\n\u00a0Mixed Precision Top 1(%)\n\u00a0FP32 Top1(%)\n\n\n50\n76.25\n76.26\n\n\n90\n77.09\n77.01\n\n\n250\n78.42\n78.30\n\n\n\nSpeedup Performance:\nFP16 on NVIDIA V100 vs. FP32 on V100", "source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"} {"text": "FP16 on NVIDIA V100 vs. FP32 on V100\nAMP with FP16 is the most performant option for DL training on the V100. In Table 1, we can observe that for various models, AMP on V100 provides a speedup of 1.5x to 5.5x over FP32 on V100 while converging to the same final accuracy.\n\n\n\nFigure 2. Performance of mixed precision training on NVIDIA 8xV100 vs. FP32 training on 8xV100 GPU. Bars represent the speedup factor of V100 AMP over V100 FP32. The higher the better.\nFP16 on NVIDIA A100 vs. 
FP16 on V100\nAMP with FP16 remains the most performant option for DL training on the A100. In Figure 3, we can observe that for various models, AMP on A100 provides a speedup of 1.3x to 2.5x over AMP on V100 while converging to the same final accuracy.\n\n\n", "source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"} {"text": "\nFigure 3. Performance of mixed precision training on NVIDIA 8xA100 vs. 8xV100 GPU. Bars represent the speedup factor of A100 over V100. The higher the better.\nCall to action\nAMP provides a healthy speedup for Deep Learning training workloads on Nvidia Tensor Core GPUs, especially on the latest Ampere generation A100 GPUs. You can start experimenting with AMP enabled models and model scripts for A100, V100, T4 and other GPUs available at NVIDIA deep learning examples. NVIDIA PyTorch with native AMP support is available from the PyTorch NGC container version 20.06. We highly encourage existing apex.amp customers to transition to using torch.cuda.amp from PyTorch Core available in the latest PyTorch 1.6 release.", "source": "https://pytorch.org/blog/accelerating-training-on-nvidia-gpus-with-pytorch-automatic-mixed-precision/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Tensor Comprehensions in PyTorch'\nauthor: Priya Goyal (FAIR), Nicolas Vasilache (FAIR), Oleksandr Zinenko (Inria & DI ENS), Theodoros Theodoridis (ETH Z\u00fcrich), Zachary DeVito (FAIR), William S. Moses (MIT CSAIL), Sven Verdoolaege (FAIR), Andrew Adams (FAIR), Albert Cohen (Inria & DI ENS & FAIR)\nredirect_from: /2018/03/05/tensor-comprehensions.html\n\nTensor Comprehensions (TC) is a tool that lowers the barrier for writing high-performance code. It generates GPU code from a simple high-level language and autotunes the code for specific input sizes.\nWe highly recommend reading the Tensor Comprehensions blogpost first.\nIf you ran into any of the following scenarios, TC is a useful tool for you.\n\nYour PyTorch layer is large and slow, and you contemplated writing a dedicated C++ or CUDA code for it. But you don't know how to program in CUDA or write low-level code.\n", "source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"} {"text": "\n\nYou wrote a CUDA layer, but it took a week to write, debug, optimize for speed. You wished you could do this in an hour.\n\n\nYou want to fuse multiple layers like Conv-ReLU-BatchNorm or Linear-ReLU-Linear-ReLU in your network for speed, but it was quite difficult to comprehend\n\n\nYour research involves weird Tensor shapes that CuDNN and MKL are not optimized for. For example, you do convolutions of 13 x 24 with an input image of 143 x 55. You tried running it with CuDNN and it was slower than you wished.\n\n\nYour code is slowed-down by transposing Tensors constantly to fit a particular memory layout. You wish it was easy to write custom code that operates efficiently on your input layout.\n\n\nTensor Comprehensions are seamless to use in PyTorch, interoperating with PyTorch Tensors and nn Variables.\nLet us run through using TC with PyTorch.\n1. 
Install the package\nconda install -c pytorch -c tensorcomp tensor_comprehensions\n", "source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"} {"text": "\nAt this time we only provide Linux-64 binaries which have been tested on Ubuntu 16.04 and CentOS7.\n\nTC depends on heavyweight C++ projects such as [Halide](http://halide-lang.org/), [Tapir-LLVM](https://github.com/wsmoses/Tapir-LLVM) and [ISL](http://isl.gforge.inria.fr/). Hence, we rely on Anaconda to distribute these dependencies reliably. For the same reason, TC is not available via PyPI.\n\n#### 2. Import the python package\n\n```python\nimport tensor_comprehensions as tc\n\n3. Define the TC expression and create a python function\nlang = \"\"\"\ndef fcrelu(float(B,M) I, float(N,M) W1, float(N) B1) -> (O1) {\n O1(b, n) +=! I(b, m) * W1(n, m)\n O1(b, n) = O1(b, n) + B1(n)\n O1(b, n) = fmax(O1(b, n), 0)\n}\n\"\"\"\nfcrelu = tc.define(lang, name=\"fcrelu\")\n\nThis fcrelu function takes PyTorch Tensors as input and returns a PyTorch Tensor. It takes input I, weight W1, bias B1 and returns output O1.\n4. Let's create some dummy input tensors\n```python\nB, M, N = 100, 128, 100", "source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"} {"text": "B, M, N = 100, 128, 100\nI, W1, B1 = torch.randn(B, M).cuda(), torch.randn(N, M).cuda(), torch.randn(N).cuda()\n\n5. Now autotune the function for your input sizes\nfcrelu.autotune(I, W1, B1, cache=\"fcrelu_100_128_100.tc\")\n\nThe autotuner is your biggest friend. You generally do not want to use a tc function without autotuning it first.\nWhen the autotuning is running, the current best performance is displayed. If you are satisfied with the current result or you are out of time, stop the tuning procedure by pressing Ctrl+C.\n\ncache saves the results of the autotuned kernel search and saves it to the file fcrelu_100_128_100.tc. The next time you call the same line of code, it loads the results of the autotuning without recomputing it.", "source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"} {"text": "The autotuner has a few hyperparameters (just like your ConvNet has learning rate, number of layers, etc.). We pick reasonable defaults, but you can read about using advanced options here.\n6. Call the function with the inputs, to get your result\nout = fcrelu(I, W1, B1)\n\nNow, let's look at how to write TC expressions.\nA quick primer on the TC language\nThe TC notation focuses on the mathematical nature of the layer, leaving performance considerations to it's backend code that uses Halide and polyhedral compilation techniques which accumulate decades of cutting edge Loop Nest Optimization (LNO) research.\nTC is close to np.einsum. We shall quickly learn TC by example\n```python\nlang = \"\"\"\ndef matmul(float(M,N) A, float(N,K) B) -> (output) {\n output(i, j) +=! A(i, kk) * B(kk, j)", "source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"} {"text": "output(i, j) +=! A(i, kk) * B(kk, j)\n}\n\"\"\"\n\nIn this example, we define a function `matmul` which takes two input `A` and `B` of shapes `M x N` and `N x K` and returns a single `output`. The shape of `output` is automatically inferred by the TC language (discussed below).\n\nLet's look at this line:\n\n```python\noutput(i, j) +=! 
A(i, kk) * B(kk, j)\n\nIt says:\n\noutput(i, j) means output is 2D.\nfor each location output(i, j), we add (+=) A(i, kk) * B(kk, j).\ni is well-defined as all locations in A dim=0, i.e. i in range(0, M)\nj is well-defined as all locations in B dim=1, i.e. j in range(0, K)\nkk is inferred as all locations from 0 to N\n\nThe shape of output is inferred from the maximum values i and j can take, which is M and K, so output is of size M x K.\nThe ! symbol initializes output with 0.0. It is equivalent to:\noutput(i, j) = 0\noutput(i, j) += A(i, kk) * B(kk, j)\n\nScalar inputs and range constraints: implement AvgPool2d\n```python", "source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"} {"text": "\"\"\"\n\n{% raw %}def avgpool(float(B, C, H, W) input) -> (output) {{{% endraw %}\n output(b, c, h, w) += input(b, c, h * {sH} + kh, w * {sW} + kw) where kh in 0:{kH}, kw in 0:{kW}\n{% raw %}}}{% endraw %}\n\n\"\"\"\navgpool = tc.define(LANG, name=\"avgpool\", constants={\"sH\":1, \"sW\":1, \"kH\":2, \"kW\":2})\n\nhere the where keyword can take ranges of values to operate on. 0:{kH} is equivalent range(kH) in Python.\nNote: the syntax for passing in scalars is subject to change in the next release.\ntorch.nn layers\nWe added some sugar-coating around the basic PyTorch integration of TC to make it easy to integrate TC into larger torch.nn models by defining the forward and backward TC expressions and taking Variable inputs / outputs. Here is an example of defining a convolution layer with TC.\nSome essentials that you will miss (we're working on them)", "source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"} {"text": "Autotuning for variable-length sequences\nThe TC auto-tuner requires all input sizes to be specified before-hand. For example, if you have input I1 which is an image batch, the autotuner wants to know the exact shape of I1 to generate an optimized kernel. You cannot specify: image with height between 200 and 300. This is more essential in sequence data such as NLP, where each sentence can have a different length.\nThe reason why the autotuner is non-parametric is because it's harder and harder to auto-tune parametric constraints, this is active research. Hence, for the first release, we made a conscious decision to give you the tool in a form where we know it works well.\nAs a work-around, if you know that you have a few specific shapes of interest, you can run the autotuner with these multiple shapes.\n```python\nrelu = tc.define(LANG, name=\"relu\")\nbatch, channels = 16, 3\ntc.autotune((batch, channels, 32, 32)) # image of size 32 x 32\ntc.autotune((batch, channels, 48, 48)) # image of size 48 x 48", "source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"} {"text": "tc.autotune((batch, channels, 64, 64)) # image of size 64 x 64\n```\nNow the autotuner is tuned for these three specific image sizes 32x32, 48x48 and 64x64.\nLack of loops\nIf you want to write an RNN, it's easy to see it as a for loop over time. However, the TC language does not have loops yet. If you really want to write RNNs, you can write unrolled loops.\nStrided-Tensors\nThe TC backend does not support non-contiguous Tensors yet. If the inputs you give are not contiguous, they are made contiguous before passing to the TC backend.\nReshaping Tensors within a TC expression\nYou cannot write this operation in TC: torch.matmul(...).view(...).mean(...). 
Whenever there is need for a view to change the shape of an input, you have to get the output, view it at the PyTorch level.\nGetting Started", "source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"} {"text": "Getting Started\n\nWalk through Tutorial to quickly get started with understanding and using Tensor Comprehensions PyTorch package.\nOver 20 examples of various ML layers with TC, including avgpool, maxpool, matmul, matmul - give output buffers and batch-matmul, convolution, strided-convolution, batchnorm, copy, cosine similarity, Linear, Linear + ReLU, group-convolutions, strided group-convolutions, indexing, Embedding (lookup table), small-mobilenet, softmax, tensordot, transpose\nDetailed docs on Tensor Comprehensions and integration with PyTorch.\n\nCommunication", "source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"} {"text": "Communication\n\nSlack: For discussion around framework integration, build support, collaboration, etc. join our slack channel.\nEmail: tensorcomp@fb.com\nGitHub: bug reports, feature requests, install issues, RFCs, thoughts, etc.\n\nAcknowledgements\nWe would like to thank Soumith Chintala, Edward Yang and Sam Gross for their immense guidance and help in making the integration API nice and smooth. We would also like to thank rest of the PyTorch team and our pre-release users for their helpful feedback that guided us in making the integration better.", "source": "https://pytorch.org/blog/tensor-comprehensions/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Introducing TorchVision\u2019s New Multi-Weight Support API\"\nauthor: Vasilis Vryniotis\nfeatured-img: \"assets/images/torchvision_featured.png\"\n\nTorchVision has a new backwards compatible API for building models with multi-weight support. The new API allows loading different pre-trained weights on the same model variant, keeps track of vital meta-data such as the classification labels and includes the preprocessing transforms necessary for using the models. In this blog post, we plan to review the prototype API, show-case its features and highlight key differences with the existing one.\n\n\n\nWe are hoping to get your thoughts about the API prior finalizing it. To collect your feedback, we have created a Github issue where you can post your thoughts, questions and comments.\nLimitations of the current API", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "Limitations of the current API\nTorchVision currently provides pre-trained models which could be a starting point for transfer learning or used as-is in Computer Vision applications. 
The typical way to instantiate a pre-trained model and make a prediction is:\n```Python\nimport torch\nfrom PIL import Image\nfrom torchvision import models as M\nfrom torchvision.transforms import transforms as T\nimg = Image.open(\"test/assets/encode_jpeg/grace_hopper_517x606.jpg\")\nStep 1: Initialize model\nmodel = M.resnet50(pretrained=True)\nmodel.eval()\nStep 2: Define and initialize the inference transforms\npreprocess = T.Compose([\n T.Resize([256, ]),\n T.CenterCrop(224),\n T.PILToTensor(),\n T.ConvertImageDtype(torch.float),\n T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])\n])\nStep 3: Apply inference preprocessing transforms\nbatch = preprocess(img).unsqueeze(0)\nprediction = model(batch).squeeze(0).softmax(0)\nStep 4: Use the model and print the predicted category", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "class_id = prediction.argmax().item()\nscore = prediction[class_id].item()\nwith open(\"imagenet_classes.txt\", \"r\") as f:\n categories = [s.strip() for s in f.readlines()]\n category_name = categories[class_id]\nprint(f\"{category_name}: {100 * score}%\")\n```\nThere are a few limitations with the above approach:\n\nInability to support multiple pre-trained weights: Since the pretrained variable is boolean, we can only offer one set of weights. This poses a severe limitation when we significantly improve the accuracy of existing models and we want to make those improvements available to the community. It also stops us from offering pre-trained weights of the same model variant on different datasets.\n", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "\nMissing inference/preprocessing transforms: The user is forced to define the necessary transforms prior using the model. The inference transforms are usually linked to the training process and dataset used to estimate the weights. Any minor discrepancies in these transforms (such as interpolation value, resize/crop sizes etc) can lead to major reductions in accuracy or unusable models.\nLack of meta-data: Critical pieces of information in relation to the weights are unavailable to the users. 
For example, one needs to look into external sources and the documentation to find things like the category labels, the training recipe, the accuracy metrics etc.\n\nThe new API addresses the above limitations and reduces the amount of boilerplate code needed for standard tasks.\nOverview of the prototype API\nLet\u2019s see how we can achieve exactly the same results as above using the new API:\n```Python\nfrom PIL import Image", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "from PIL import Image\nfrom torchvision.prototype import models as PM\n\n\nimg = Image.open(\"test/assets/encode_jpeg/grace_hopper_517x606.jpg\")\n\n# Step 1: Initialize model\nweights = PM.ResNet50_Weights.IMAGENET1K_V1\nmodel = PM.resnet50(weights=weights)\nmodel.eval()\n\n# Step 2: Initialize the inference transforms\npreprocess = weights.transforms()\n\n# Step 3: Apply inference preprocessing transforms\nbatch = preprocess(img).unsqueeze(0)\nprediction = model(batch).squeeze(0).softmax(0)\n\n# Step 4: Use the model and print the predicted category\nclass_id = prediction.argmax().item()\nscore = prediction[class_id].item()\ncategory_name = weights.meta[\"categories\"][class_id]\nprint(f\"{category_name}: {100 * score}*%*\")\n\nAs we can see the new API eliminates the aforementioned limitations. Let\u2019s explore the new features in detail.\nMulti-weight support", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "Multi-weight support\nAt the heart of the new API, we have the ability to define multiple different weights for the same model variant. Each model building method (eg resnet50) has an associated Enum class (eg ResNet50_Weights) which has as many entries as the number of pre-trained weights available. Additionally, each Enum class has a DEFAULT alias which points to the best available weights for the specific model. This allows the users who want to always use the best available weights to do so without modifying their code.\nHere is an example of initializing models with different weights:\n```python\nfrom torchvision.prototype.models import resnet50, ResNet50_Weights\nLegacy weights with accuracy 76.130%\nmodel = resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)\nNew weights with accuracy 80.858%\nmodel = resnet50(weights=ResNet50_Weights.IMAGENET1K_V2)\nBest available weights (currently alias for IMAGENET1K_V2)\nmodel = resnet50(weights=ResNet50_Weights.DEFAULT)", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "No weights - random initialization\nmodel = resnet50(weights=None)\n\n### Associated meta-data & preprocessing transforms\n\nThe weights of each model are associated with meta-data. The type of information we store depends on the task of the model (Classification, Detection, Segmentation etc). Typical information includes a link to the training recipe, the interpolation mode, information such as the categories and validation metrics. 
These values are programmatically accessible via the `meta` attribute:\n\n```Python\nfrom torchvision.prototype.models import ResNet50_Weights\n\n# Accessing a single record\nsize = ResNet50_Weights.IMAGENET1K_V2.meta[\"size\"]\n\n# Iterating the items of the meta-data dictionary\nfor k, v in ResNet50_Weights.IMAGENET1K_V2.meta.items():\n print(k, v)\n", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "print(k, v)\n\nAdditionally, each weights entry is associated with the necessary preprocessing transforms. All current preprocessing transforms are JIT-scriptable and can be accessed via the `transforms` attribute. Prior using them with the data, the transforms need to be initialized/constructed. This lazy initialization scheme is done to ensure the solution is memory efficient. The input of the transforms can be either a `PIL.Image` or a `Tensor` read using `torchvision.io`.\n\n```Python\nfrom torchvision.prototype.models import ResNet50_Weights\n\n# Initializing preprocessing at standard 224x224 resolution\npreprocess = ResNet50_Weights.IMAGENET1K_V2.transforms()\n\n# Initializing preprocessing at 400x400 resolution\npreprocess = ResNet50_Weights.IMAGENET1K_V2.transforms(crop_size=400, resize_size=400)\n\n# Once initialized the callable can accept the image data:\n# img_preprocessed = preprocess(img)\n", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "img_preprocessed = preprocess(img)\n\nAssociating the weights with their meta-data and preprocessing will boost transparency, improve reproducibility and make it easier to document how a set of weights was produced.\n\n### Get weights by name\n\nThe ability to link directly the weights with their properties (meta data, preprocessing callables etc) is the reason why our implementation uses Enums instead of Strings. Nevertheless for cases when only the name of the weights is available, we offer a method capable of linking Weight names to their Enums:\n\n```Python\nfrom torchvision.prototype.models import get_weight\n\n# Weights can be retrieved by name:\nassert get_weight(\"ResNet50_Weights.IMAGENET1K_V1\") == ResNet50_Weights.IMAGENET1K_V1\nassert get_weight(\"ResNet50_Weights.IMAGENET1K_V2\") == ResNet50_Weights.IMAGENET1K_V2\n\n# Including using the DEFAULT alias:\nassert get_weight(\"ResNet50_Weights.DEFAULT\") == ResNet50_Weights.IMAGENET1K_V2\n\nDeprecations", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "\n## Deprecations\n\nIn the new API the boolean `pretrained` and `pretrained_backbone` parameters, which were previously used to load weights to the full model or to its backbone, are deprecated. The current implementation is fully backwards compatible as it seamlessly maps the old parameters to the new ones. Using the old parameters to the new builders emits the following deprecation warnings:\n\n```Python\n>>> model = torchvision.prototype.models.resnet50(pretrained=True)\n UserWarning: The parameter 'pretrained' is deprecated, please use 'weights' instead.\nUserWarning:\nArguments other than a weight enum or `None` for 'weights' are deprecated.\nThe current behavior is equivalent to passing `weights=ResNet50_Weights.IMAGENET1K_V1`.\nYou can also use `weights=ResNet50_Weights.DEFAULT` to get the most up-to-date weights.\n\nAdditionally the builder methods require using keyword parameters. 
The use of positional parameter is deprecated and using them emits the following warning:\n```Python", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": ">>> model = torchvision.prototype.models.resnet50(None)\nUserWarning:\nUsing 'weights' as positional parameter(s) is deprecated.\nPlease use keyword parameter(s) instead.\n\nTesting the new API\nMigrating to the new API is very straightforward. The following method calls between the 2 APIs are all equivalent:\n# Using pretrained weights:\ntorchvision.prototype.models.resnet50(weights=ResNet50_Weights.IMAGENET1K_V1)\ntorchvision.models.resnet50(pretrained=True)\ntorchvision.models.resnet50(True)\n\n# Using no weights:\ntorchvision.prototype.models.resnet50(weights=None)\ntorchvision.models.resnet50(pretrained=False)\ntorchvision.models.resnet50(False)\n\nNote that the prototype features are available only on the nightly versions of TorchVision, so to use it you need to install it as follows:\nconda install torchvision -c pytorch-nightly\n", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "conda install torchvision -c pytorch-nightly\n```\nFor alternative ways to install the nightly have a look on the PyTorch download page. You can also install TorchVision from source from the latest main; for more information have a look on our repo.\nAccessing state-of-the-art model weights with the new API\nIf you are still unconvinced about giving a try to the new API, here is one more reason to do so. We\u2019ve recently refreshed our training recipe and achieved SOTA accuracy from many of our models. The improved weights can easily be accessed via the new API. Here is a quick overview of the model improvements:\n\n\n\n| Model | Old Acc@1 | New Acc@1 |", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "| -------------------------- | --------- | --------- |\n| EfficientNet B1 | 78.642 | 79.838 |\n| MobileNetV3 Large | 74.042 | 75.274 |\n| Quantized ResNet50 | 75.92 | 80.282 |\n| Quantized ResNeXt101 32x8d | 78.986 | 82.574 |\n| RegNet X 400mf | 72.834 | 74.864 |\n| RegNet X 800mf | 75.212 | 77.522 |\n| RegNet X 1 6gf | 77.04 | 79.668 |\n| RegNet X 3 2gf | 78.364 | 81.198 |\n| RegNet X 8gf | 79.344 | 81.682 |\n| RegNet X 16gf | 80.058 | 82.72 |\n| RegNet X 32gf | 80.622 | 83.018 |\n| RegNet Y 400mf | 74.046 | 75.806 |\n| RegNet Y 800mf | 76.42 | 78.838 |\n| RegNet Y 1 6gf | 77.95 | 80.882 |\n| RegNet Y 3 2gf | 78.948 | 81.984 |\n| RegNet Y 8gf | 80.032 | 82.828 |\n| RegNet Y 16gf | 80.424 | 82.89 |", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "| RegNet Y 32gf | 80.878 | 83.366 |\n| ResNet50 | 76.13 | 80.858 |\n| ResNet101 | 77.374 | 81.886 |\n| ResNet152 | 78.312 | 82.284 |\n| ResNeXt50 32x4d | 77.618 | 81.198 |\n| ResNeXt101 32x8d | 79.312 | 82.834 |\n| Wide ResNet50 2 | 78.468 | 81.602 |\n| Wide ResNet101 2 | 78.848 | 82.51 |\nPlease spare a few minutes to provide your feedback on the new API, as this is crucial for graduating it from prototype and including it in the next release. You can do this on the dedicated Github Issue. 
We are looking forward to reading your comments!", "source": "https://pytorch.org/blog/introducing-torchvision-new-multi-weight-support-api/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Optimizing CUDA Recurrent Neural Networks with TorchScript\"\nauthor: \"The PyTorch Team\"\ndate: 2019-05-01 8:00:00 -0500\n\nThis week, we officially released PyTorch 1.1, a large feature update to PyTorch 1.0. One of the new features we've added is better support for fast, custom Recurrent Neural Networks (fastrnns) with TorchScript (the PyTorch JIT) (https://pytorch.org/docs/stable/jit.html). \nRNNs are popular models that have shown good performance on a variety of NLP tasks that come in different shapes and sizes. PyTorch implements a number of the most popular ones, the Elman RNN, GRU, and LSTM as well as multi-layered and bidirectional variants.", "source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"} {"text": "However, many users want to implement their own custom RNNs, taking ideas from recent literature. Applying Layer Normalization to LSTMs is one such use case. Because the PyTorch CUDA LSTM implementation uses a fused kernel, it is difficult to insert normalizations or even modify the base LSTM implementation. Many users have turned to writing custom implementations using standard PyTorch operators, but such code suffers from high overhead: most PyTorch operations launch at least one kernel on the GPU and RNNs generally run many operations due to their recurrent nature. However, we can apply TorchScript to fuse operations and optimize our code automatically, launching fewer, more optimized kernels on the GPU.", "source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"} {"text": "Our goal is for users to be able to write fast, custom RNNs in TorchScript without writing specialized CUDA kernels to achieve similar performance. In this post, we'll provide a tutorial for how to write your own fast RNNs with TorchScript. To better understand the optimizations TorchScript applies, we'll examine how those work on a standard LSTM implementation but most of the optimizations can be applied to general RNNs.\nWriting custom RNNs\nTo get started, you can use this file as a template to write your own custom RNNs.\nWe are constantly improving our infrastructure on trying to make the performance better. If you want to gain the speed/optimizations that TorchScript currently provides (like operator fusion, batch matrix multiplications, etc.), here are some guidelines to follow. The next section explains the optimizations in depth.", "source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"} {"text": "\n\nIf the customized operations are all element-wise, that's great because you can get the benefits of the PyTorch JIT's operator fusion automatically!\n\n\nIf you have more complex operations (e.g. reduce ops mixed with element-wise ops), consider grouping the reduce operations and element-wise ops separately in order to fuse the element-wise operations into a single fusion group. \n\n\nIf you want to know about what has been fused in your custom RNN, you can inspect the operation's optimized graph by using graph_for . 
Using LSTMCell as an example:\n```python\nget inputs and states for LSTMCell\ninputs = get_lstm_inputs()\ninstantiate a ScriptModule\ncell = LSTMCell(input_size, hidden_size)\nprint the optimized graph using graph_for\nout = cell(inputs)\nprint(cell.graph_for(inputs))\n```\nThis will generate the optimized TorchScript graph (a.k.a PyTorch JIT IR) for the specialized inputs that you provides:\n```\ngraph(%x : Float(, ),\n\n", "source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"} {"text": "```\n graph(%x : Float(, ),\n %hx : Float(, ),\n %cx : Float(, ),\n %w_ih : Float(, ),\n %w_hh : Float(, ),\n %b_ih : Float(),\n %b_hh : Float()):\n %hy : Float(, ), %cy : Float(, ) = prim::DifferentiableGraph_0(%cx, %b_hh, %b_ih, %hx, %w_hh, %x, %w_ih)\n %30 : (Float(, ), Float(, )) = prim::TupleConstruct(%hy, %cy)\n return (%30)\n with prim::DifferentiableGraph_0 = graph(%13 : Float(, ),\n %29 : Float(),\n %33 : Float(),\n %40 : Float(, ),\n %43 : Float(, ),\n %45 : Float(, ),\n %48 : Float(, )):\n %49 : Float(, ) = aten::t(%48)\n %47 : Float(, ) = aten::mm(%45, %49)\n %44 : Float(, ) = aten::t(%43)\n %42 : Float(, ) = aten::mm(%40, %44)\n ...some broadcast sizes operations...", "source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"} {"text": "...some broadcast sizes operations...\n %hy : Float(, ), %287 : Float(, ), %cy : Float(, ), %outgate.1 : Float(, ), %cellgate.1 : Float(, ), %forgetgate.1 : Float(, ), %ingate.1 : Float(, ) = prim::FusionGroup_0(%13, %346, %345, %344, %343)\n ...some broadcast sizes operations...\n return (%hy, %cy, %49, %44, %196, %199, %340, %192, %325, %185, %ingate.1, %forgetgate.1, %cellgate.1, %outgate.1, %395, %396, %287)\n with prim::FusionGroup_0 = graph(%13 : Float(, ),\n %71 : Tensor,\n %76 : Tensor,\n %81 : Tensor,\n %86 : Tensor):\n ...some chunks, constants, and add operations...\n %ingate.1 : Float(, ) = aten::sigmoid(%38)\n %forgetgate.1 : Float(, ) = aten::sigmoid(%34)\n %cellgate.1 : Float(, ) = aten::tanh(%30)\n %outgate.1 : Float(, ) = aten::sigmoid(%26)\n %14 : Float(, ) = aten::mul(%forgetgate.1, %13)\n %11 : Float(, ) = aten::mul(%ingate.1, %cellgate.1)", "source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"} {"text": "%cy : Float(, ) = aten::add(%14, %11, %69)\n %4 : Float(, ) = aten::tanh(%cy)\n %hy : Float(, ) = aten::mul(%outgate.1, %4)\n return (%hy, %4, %cy, %outgate.1, %cellgate.1, %forgetgate.1, %ingate.1)\n ```\nFrom the above graph we can see that it has a prim::FusionGroup_0 subgraph that is fusing all element-wise operations in LSTMCell (transpose and matrix multiplication are not element-wise ops). Some graph nodes might be hard to understand in the first place but we will explain some of them in the optimization section, we also omitted some long verbose operators in this post that is there just for correctness. \nVariable-length sequences best practices\nTorchScript does not support PackedSequence. Generally, when one is handling variable-length sequences, it is best to pad them into a single tensor and send that tensor through a TorchScript LSTM. Here's an example:\n```python\nsequences = [...] 
# List[Tensor], each Tensor is T' x C
padded = torch.nn.utils.rnn.pad_sequence(sequences)
lengths = [seq.size(0) for seq in sequences]
padded  # T x N x C, where N is batch size and T is the max of all T'
model = LSTM(...)
output, hiddens = model(padded)
output  # T x N x C
```
Of course, output may have some garbage data in the padded regions; use lengths to keep track of the parts you don't need.
Optimizations
We will now explain the optimizations performed by the PyTorch JIT to speed up custom RNNs. We will use a simple custom LSTM model in TorchScript to illustrate the optimizations, but many of these are general and apply to other RNNs.
To illustrate the optimizations and the benefits we get from them, we will run a simple custom LSTM model written in TorchScript (you can refer to the code in custom_lstm.py or the code snippets below) and time our changes.
We set up the environment on a machine equipped with two Intel Xeon CPUs and one NVIDIA P100 GPU, with cuDNN v7.3 and CUDA 9.2 installed. The basic setup for the LSTM model is as follows:
```python
input_size = 512
hidden_size = 512
mini_batch = 64
numLayers = 1
seq_length = 100
```
The most important thing the PyTorch JIT does is compile the Python program into the PyTorch JIT IR, an intermediate representation used to model the program's graph structure. This IR can then benefit from whole-program optimization and hardware acceleration, and overall has the potential to provide large computation gains. In this example, we run the initial TorchScript model with only the compiler optimization passes that are provided by the JIT, including common subexpression elimination, constant pooling, constant propagation, dead code elimination and some peephole optimizations. We run the model training 100 times after warm-up and average the training time. The initial results are a forward time of around 27ms and a backward time of around 64ms, which is some distance from what the PyTorch cuDNN LSTM provides. Next we will explain the major optimizations we made to improve training and inference performance, starting with LSTMCell and LSTMLayer, plus some miscellaneous optimizations.
LSTM Cell (forward)
Almost all the computations in an LSTM happen in the LSTMCell, so it's important for us to take a look at the computations it contains and how we can improve their speed.
Below is a sample LSTMCell implementation in TorchScript:
```python
class LSTMCell(jit.ScriptModule):
    def __init__(self, input_size, hidden_size):
        super(LSTMCell, self).__init__()
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.weight_ih = Parameter(torch.randn(4 * hidden_size, input_size))
        self.weight_hh = Parameter(torch.randn(4 * hidden_size, hidden_size))
        self.bias_ih = Parameter(torch.randn(4 * hidden_size))
        self.bias_hh = Parameter(torch.randn(4 * hidden_size))

    @jit.script_method
    def forward(self, input, state):
        # type: (Tensor, Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]
        hx, cx = state
        gates = (torch.mm(input, self.weight_ih.t()) + self.bias_ih +
                 torch.mm(hx, self.weight_hh.t()) + self.bias_hh)
        ingate, forgetgate, cellgate, outgate = gates.chunk(4, 1)
        ingate = torch.sigmoid(ingate)
        forgetgate = torch.sigmoid(forgetgate)
        cellgate = torch.tanh(cellgate)
        outgate = torch.sigmoid(outgate)

        cy = (forgetgate * cx) + (ingate * cellgate)
        hy = outgate * torch.tanh(cy)

        return hy, (hy, cy)
```
The graph representation (IR) that TorchScript generates enables several optimizations and scalable computations. In addition to the typical compiler optimizations that we could do (CSE, constant propagation, etc.), we can also run other IR transformations to make our code run faster.

Element-wise operator fusion. PyTorch JIT will automatically fuse element-wise ops, so when you have adjacent operators that are all element-wise, JIT will automatically group all those operations together into a single FusionGroup. This FusionGroup can then be launched as a single GPU/CPU kernel and performed in one pass, which avoids expensive memory reads and writes for each operation.
Reordering chunks and pointwise ops to enable more fusion. An LSTM cell adds gates together (a pointwise operation), and then chunks the gates into four pieces: the ifco gates. Then, it performs pointwise operations on the ifco gates like above. This leads to two fusion groups in practice: one fusion group for the element-wise ops pre-chunk, and one group for the element-wise ops post-chunk.

The interesting thing to note here is that pointwise operations commute with torch.chunk: instead of performing pointwise ops on some input tensors and chunking the output, we can chunk the input tensors and then perform the same pointwise ops on the output tensors. By moving the chunk to before the first fusion group, we can merge the first and second fusion groups into one big group.

Tensor creation on the CPU is expensive, but there is ongoing work to make it faster. At this point, an LSTMCell runs three CUDA kernels: two gemm kernels and one for the single pointwise group. One of the things we noticed was that there was a large gap between the finish of the second gemm and the start of the single pointwise group. This gap was a period of time when the GPU was idle and not doing anything.
Looking into it more, we discovered that the problem was that torch.chunk constructs new tensors, and that tensor construction was not as fast as it could be. Instead of constructing new Tensor objects, we taught the fusion compiler how to manipulate a data pointer and strides to do the torch.chunk before sending it into the fused kernel, shrinking the amount of idle time between the second gemm and the launch of the element-wise fusion group. This gives us around a 1.2x speedup on the LSTM forward pass.

By doing the above tricks, we are able to fuse almost all of the LSTMCell forward graph (except the two gemm kernels) into a single fusion group, which corresponds to the prim::FusionGroup_0 in the above IR graph. It is then launched as a single fused kernel for execution. With these optimizations the model performance improves significantly, with the average forward time reduced by around 17ms (a 1.7x speedup) to 10ms, and the average backward time reduced by 37ms to 27ms (a 1.37x speedup).
LSTM Layer (forward)
```python
class LSTMLayer(jit.ScriptModule):
    def __init__(self, cell, cell_args):
        super(LSTMLayer, self).__init__()
        self.cell = cell(cell_args)

    @jit.script_method
    def forward(self, input, state):
        # type: (Tensor, Tuple[Tensor, Tensor]) -> Tuple[Tensor, Tuple[Tensor, Tensor]]
        inputs = input.unbind(0)
        outputs = torch.jit.annotate(List[Tensor], [])
        for i in range(len(inputs)):
            out, state = self.cell(inputs[i], state)
            outputs += [out]
        return torch.stack(outputs), state
```
We applied several tricks to the IR generated for the TorchScript LSTM to boost performance; some example optimizations:

Loop Unrolling: We automatically unroll loops in the code (for big loops, we unroll a small subset of it), which then allows us to further optimize the control flow of the for loops. For example, the fuser can fuse together operations across iterations of the loop body, which results in a good performance improvement for control-flow-intensive models like LSTMs.
Batch Matrix Multiplication: For RNNs where the input is pre-multiplied (i.e. the model has a lot of matrix multiplies with the same LHS or RHS), we can efficiently batch those operations together into a single matrix multiply while chunking the outputs to achieve equivalent semantics.

By applying these techniques, we reduced the forward pass time by an additional 1.6ms to 8.4ms (a 1.2x speedup) and the backward time by 7ms to around 20ms (a 1.35x speedup).
LSTM Layer (backward)

"Tree" Batch Matrix Multiplication: It is often the case that a single weight is reused multiple times in the LSTM backward graph, forming a tree where the leaves are matrix multiplies and the nodes are adds. These nodes can be combined together by concatenating the LHSs and RHSs in different dimensions, then computed as a single matrix multiplication.
The formula of equivalence can be denoted as follows:
$L1 * R1 + L2 * R2 = torch.cat((L1, L2), dim=1) * torch.cat((R1, R2), dim=0)$

Autograd is a critical component of what makes PyTorch such an elegant ML framework. As such, we carried this through to the PyTorch JIT, but using a new Automatic Differentiation (AD) mechanism that works on the IR level. JIT automatic differentiation will slice the forward graph into symbolically differentiable subgraphs and generate backwards nodes for those subgraphs. Taking the above IR as an example, we group the graph nodes into a single prim::DifferentiableGraph_0 for the operations that have AD formulas. For operations that have not been added to the AD formulas, we fall back to Autograd during execution.

Optimizing the backwards path is hard, and the implicit broadcasting semantics make the optimization of automatic differentiation harder. PyTorch makes it convenient to write tensor operations without worrying about the shapes by broadcasting the tensors for you. For performance, the painful point in backward is that we need a summation for such broadcastable operations. This results in the derivative of every broadcastable op being followed by a summation. Since we cannot currently fuse reduce operations, this causes FusionGroups to break into multiple small groups, leading to bad performance. To deal with this, refer to this great post written by Thomas Viehmann.

Misc Optimizations
In addition to the steps laid out above, we also eliminated overhead between CUDA kernel launches and unnecessary tensor allocations. One example is tensor device lookups, which initially caused poor performance due to a lot of unnecessary allocations. Removing these brought the time between kernel launches down from milliseconds to nanoseconds.
Lastly, there might be normalization applied in the custom LSTMCell, such as LayerNorm. Since LayerNorm and other normalization ops contain reduce operations, they are hard to fuse in their entirety. Instead, we automatically decompose LayerNorm into a statistics computation (reduce operations) plus element-wise transformations, and then fuse those element-wise parts together. As of this post, there are some limitations on our auto differentiation and graph fuser infrastructure which limit the current support to inference mode only. We plan to add backward support in a future release.

With the above optimizations on operation fusion, loop unrolling, batch matrix multiplication and some misc optimizations, we can see a clear performance increase on our custom TorchScript LSTM forward and backward in the following figure:

There are a number of additional optimizations that we did not cover in this post. In addition to the ones laid out in this post, we now see that our custom LSTM forward pass is on par with cuDNN. We are also working on optimizing backward more and expect to see improvements in future releases.
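As an aside, the "tree" batched matrix-multiplication identity used in the backward-pass discussion above is easy to sanity-check with stock PyTorch; the sketch below uses arbitrary shapes and is only meant to illustrate the equivalence:

```python
import torch

m, k1, k2, n = 4, 3, 5, 2
L1, L2 = torch.randn(m, k1), torch.randn(m, k2)
R1, R2 = torch.randn(k1, n), torch.randn(k2, n)

# Two separate matmuls followed by an add...
separate = L1.mm(R1) + L2.mm(R2)
# ...equal one matmul over the concatenated operands.
combined = torch.cat((L1, L2), dim=1).mm(torch.cat((R1, R2), dim=0))
print(torch.allclose(separate, combined, atol=1e-6))  # True
```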
Besides the speed that TorchScript provides, we introduced a much more flexible API that enable you to hand draft a lot more custom RNNs, which cuDNN could not provide.", "source": "https://pytorch.org/blog/optimizing-cuda-rnn-with-torchscript/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"PyTorch, a year in....\"\nauthor: \"The PyTorch Team\"\ndate: 2018-01-19 12:00:00 -0500\nredirect_from: /2018/01/19/a-year-in.html\n\nToday marks 1 year since PyTorch was released publicly. It's been a wild ride \u2014 our quest to build a flexible deep learning research platform. Over the last year, we've seen an amazing community of people using, contributing to and evangelizing PyTorch \u2014 thank you for the love.\nLooking back, we wanted to summarize PyTorch over the past year: the progress, the news and highlights from the community.\nCommunity\nWe've been blessed with a strong organic community of researchers and engineers who fell in love with PyTorch. The core team has engineers and researchers from multiple countries, companies and universities, and we couldn't have made PyTorch what it is without each contribution.\nResearch papers, packages and Github", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "Research papers, packages and Github\nWithin days of release, users from the community started to implement their favorite research papers in PyTorch and release the code on Github. Open-source code is a primary and essential tool for researchers today.\nFolks came together to create torchtext, torchvision and torchaudio packages to help facilitate and democratize research in different domains.\nThe first community package based on PyTorch came from Brandon Amos, titled Block, and helped with easier manipulation of block matrices. The Locus Lab at CMU subsequently went on to publish PyTorch packages and implementations for most of their research. The first research paper code came from Sergey Zagoruyko titled Paying more attention to attention.", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "Jun-Yan Zhu, Taesung Park, Phillip Isola, Alyosha Efros and team from U.C.Berkeley released the hugely popular Cycle-GAN and pix2pix which does image to image transforms.\n\n\n\nThe researchers at HarvardNLP and Systran started developing and improving OpenNMT in PyTorch, seeded by initial reimplementation of the [Lua]Torch code from Adam Lerer at Facebook.\nThe MagicPony team at Twitter contributed implementations of their Super-resolution work early on into PyTorch's examples.", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "Salesforce Research released several packages, including their highlight release of PyTorch-QRNN, a type of RNN that is 2x to 17x faster than standard LSTMs optimized by CuDNN. James Bradbury and team form one of the most active and engaging forces in the PyTorch community.\nWe're releasing @PyTorch-QRNN, 2-17x faster than NVIDIA's cuDNN LSTM.Speed thanks to 50 lines of CUDA via CuPy.https://t.co/KaWhN4yDZd pic.twitter.com/yoLYj3pMI0\u2014 Smerity (@Smerity) October 9, 2017\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "Researchers from Uber, Northeastern and Stanford came together to form an active probabilistic programming community around their packages Pyro and ProbTorch. They are actively developing the torch.distributions core package. 
This community is so active and fast-moving, we had our first pytorch-probabilistic-programming meetup at NIPS 2017 with Fritz Obermeyer, Noah Goodman, Jan-Willem van de Meent, Brooks Paige, Dustin Tran and 22 additional attendees discussing how to make the world bayesian.\n\n\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "\nNVIDIA Researchers released three high-quality repositories that implemented pix2pix-HD, Sentiment Neuron and FlowNet2 papers. Their analysis of scalability of different Data Parallel models in PyTorch was helpful to the community.\n\n\n\nThe Allen Institute for AI released AllenNLP which includes several state-of-the-art models in NLP \u2014 reference implementations and easy to use web demos for standard NLP tasks.\n\n\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "\nWe also had our first Kaggle winning team grt123 in July. They won the DataScience Bowl 2017 on Lung Cancer detection and subsequently released their PyTorch implementations.\nOn the visualization front, Tzu-Wei Huang implemented a TensorBoard-PyTorch plugin and Facebook AI Research released PyTorch compatibility for their visdom visualization package.\n\n\n\n\nLastly, Facebook AI Research released several projects such as ParlAI, fairseq-py, VoiceLoop and FaderNetworks that implemented cutting-edge models and interfaced datasets in multiple domains.", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "There are countless good projects that we haven't highlighted for the lack of space, you can find a curated list here.\nWe would also like to give a huge shout-out to folks who actively help others out on the Forums, especially ptrblck, jpeg729, QuantScientist, albanD, Thomas Viehmann and chenyuntc. You are providing an invaluable service, thank you so much!\nMetrics\nIn terms of sheer numbers,\n\n87,769 lines of Python code on github that import torch\n3,983 repositories on Github that mention PyTorch in their name or description\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "\nMore than half a million downloads of PyTorch binaries. 651,916 to be precise.\n5,400 users wrote 21,500 posts discussing 5,200 topics on our forums discuss.pytorch.org (http://discuss.pytorch.org/)\n131 mentions of PyTorch on Reddit's /r/machinelearning since the day of release. In the same period, TensorFlow was mentioned 255 times.\n\nResearch Metrics\nPyTorch is a research-focused framework. So one of the metrics of interest is to see the usage of PyTorch in machine learning research papers.\n\n\nIn the recent ICLR2018 conference submissions, PyTorch was mentioned in 87 papers, compared to TensorFlow at 228 papers, Keras at 42 papers, Theano and Matlab at 32 papers.\n\n\nMonthly arxiv.org mentions for frameworks had PyTorch at 72 mentions, with TensorFlow at 273 mentions, Keras at 100 mentions, Caffe at 94 mentions and Theano at 53 mentions.\n\n\nCourses, Tutorials and Books", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "Courses, Tutorials and Books\nWhen we released PyTorch, we had good API documentation, but our tutorials were limited to a few ipython notebooks \u2014 helpful, but not good enough.\nSasank Chilamkurthy took it upon himself to revamp the tutorials into the beautiful website that it is today.\n\n\n\nSean Robertson and Justin Johnson wrote great new tutorials \u2014 in NLP, and to learn by example. 
Yunjey Choi wrote a beautiful tutorial where most models were implemented in 30 lines or less.\nEach new tutorial helped users find their way faster, with different approaches to learning.", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "Goku Mohandas and Delip Rao switched the code content of their book-in-progress to use PyTorch.\nWe've seen quite a few university machine learning courses being taught with PyTorch as the primary tool, such as Harvard's CS287. Taking it one step further and democratizing learning, we had three online courses pop up that teach using PyTorch.\n\nFast.ai's \u201cDeep Learning for Coders\u201d is a popular online course. In September, Jeremy and Rachel announced that the next fast.ai courses will be nearly entirely based on PyTorch.\nRitchie Ng, a researcher with ties to NUS Singapore and Tsinghua released a Udemy course titled Practical Deep Learning with PyTorch.\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "\nSung Kim from HKUST released an online course on Youtube that was aimed towards a general audience, titled: \u201cPyTorch Zero to All\u201d.\n\nEngineering\nOver the last year we implemented multiple features, improved performance across the board and fixed lots of bugs. A full list of the work we've done is found in our release notes.\nHere are highlights from our work over the last year:\nHigher-order gradients\nWith the release of several papers that implement penalties of gradients and with ongoing research in 2nd order gradient methods, this was an essential and sought-after feature. In August, we implemented a generalized interface that can take n-th order derivatives and increased the coverage of functions that support higher-order gradients over time, such that at the moment of writing almost all ops support this.\nDistributed PyTorch", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "Distributed PyTorch\nIn August, we released a small distributed package that followed the highly popular MPI-collective approach. The package has multiple backends such as TCP, MPI, Gloo and NCCL2 to support various types of CPU/GPU collective operations and use-cases, and integrates distributed technologies such as Infiniband and RoCE. Distributed is hard, and we had bugs in the initial iteration. Over subsequent releases, we made the package more stable and improved performance.\nCloser to NumPy\nOne of the biggest demands from users were NumPy features that they were familiar with. Features such as Broadcasting and Advanced Indexing are convenient and save users a lot of verbosity. We implemented these features and started to align our API to be closer to NumPy. Over time, we expect to get closer and closer to NumPy's API where appropriate.\nSparse Tensors", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "Sparse Tensors\nIn March, we released a small package supporting sparse Tensors and in May we released CUDA support for the sparse package. The package is small and limited in functionality, and is used for implementing Sparse Embeddings and commonly used sparse paradigms in deep learning. 
This package is still small in scope and there's demand to expand it \u2014 if you are interested in working on expanding the sparse package, reach out to us on our Discussion Boards\nPerformance\nPerformance is always an ongoing battle, especially for PyTorch which is a dynamic framework that wants to maximize flexibility. Over the last year, we've improved performance across board, from our core Tensor library to the neural network operators, writing faster micro-optimized across board.\n\nWe've added specialized AVX and AVX2 intrinsics for Tensor operations\nWrote faster GPU kernels for frequent workloads like concatenation and Softmax (among many other things)\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "\nRewrote the code for several neural network operators (too many to list), but notably nn.Embedding and group convolutions.\n\nReducing framework overhead by 10x across board\nSince PyTorch is a dynamic graph framework, we create a new graph on the fly at every iteration of a training loop. Hence, the framework overhead has to be low, or the workload has to be large enough that the framework overhead is hidden. In August, the authors of DyNet (Graham Neubig and team) showcased that it's much faster than PyTorch on small NLP models. This was an interesting challenge, we didn't realize that models of those sizes were being trained. In a multi-month (and ongoing) effort, we embarked upon a significant rewrite of PyTorch internals that reduced the framework overhead from more than 10 microseconds per operator execution to as little as 1 microsecond.\nATen", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "ATen\nAs we embarked upon a redesign of the PyTorch internals, we built the ATen C++11 library that now powers all of the PyTorch backend. ATen has an API that mirrors PyTorch's Python API, which makes it a convenient C++ library for Tensor computation. ATen can be built and used independently of PyTorch.\nExporting models to production \u2014 ONNX Support and the JIT compiler\nOne of the common requests we've received was to export PyTorch models to another framework. Users engaged in a rapid research cycle in PyTorch and when they were done, they wanted to ship it to larger projects with C++ only requirements.\nWith this in mind, we built a tracer for PyTorch \u2014 which can export PyTorch models into an intermediate representation.", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "The subsequent trace can be either used to run the current PyTorch model more efficiently (by running optimization passes on it), or be converted to the ONNX format to be shipped to other frameworks such as Caffe2, MXNet, TensorFlow and others or directly to the hardware accelerated libraries like CoreML or TensorRT. Over the next year, you will hear more about the JIT compiler for performance improvements.\nUsers being funny :)\nOur users express their support in funny ways, made us laugh, thanks for this :)\nI've been using PyTorch a few months now and I've never felt better. I have more energy. My skin is clearer. 
My eye sight has improved.\u2014 Andrej Karpathy (@karpathy) May 26, 2017\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "Talk to your doctor to find out if PyTorch is right for you.\u2014 Sean Robertson (@sprobertson) May 26, 2017\n\nPyTorch gave me so much life that my skin got cleared, my grades are up, my bills are paid and my crops are watered.\u2014 Adam Will \u00f0\ufe0f\u200d\u00f0 (@adam_will_do_it) May 26, 2017\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "So have I! But my hair is also shiner and I've lost weight. @PyTorch for the win. https://t.co/qgU4oIOB4K\u2014 Mariya (@thinkmariya) May 26, 2017\n", "source": "https://pytorch.org/blog/a-year-in/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch for AMD ROCm\u2122 Platform now available as Python package'\nauthor: Niles Burbank \u2013 Director PM at AMD, Mayank Daga \u2013 Director, Deep Learning Software at AMD\n\nWith the PyTorch 1.8 release, we are delighted to announce a new installation option for users of\nPyTorch on the ROCm\u2122 open software platform. An installable Python package is now hosted on\npytorch.org, along with instructions for local installation in the same simple, selectable format as\nPyTorch packages for CPU-only configurations and other GPU platforms. PyTorch on ROCm includes full\ncapability for mixed-precision and large-scale training using AMD\u2019s MIOpen & RCCL libraries. This\nprovides a new option for data scientists, researchers, students, and others in the community to get\nstarted with accelerated PyTorch using AMD GPUs.\n\n\n\nThe ROCm Ecosystem", "source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"} {"text": "\nThe ROCm Ecosystem\nROCm is AMD\u2019s open source software platform for GPU-accelerated high performance computing and\nmachine learning. Since the original ROCm release in 2016, the ROCm platform has evolved to support\nadditional libraries and tools, a wider set of Linux\u00ae distributions, and a range of new GPUs. This includes\nthe AMD Instinct\u2122 MI100, the first GPU based on AMD CDNA\u2122 architecture. \nThe ROCm ecosystem has an established history of support for PyTorch, which was initially implemented\nas a fork of the PyTorch project, and more recently through ROCm support in the upstream PyTorch\ncode. PyTorch users can install PyTorch for ROCm using AMD\u2019s public PyTorch docker image, and can of\ncourse build PyTorch for ROCm from source. With PyTorch 1.8, these existing installation options are\nnow complemented by the availability of an installable Python package. \nThe primary focus of ROCm has always been high performance computing at scale. The combined", "source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"} {"text": "capabilities of ROCm and AMD\u2019s Instinct family of data center GPUs are particularly suited to the\nchallenges of HPC at data center scale. PyTorch is a natural fit for this environment, as HPC and ML\nworkflows become more intertwined.\nGetting started with PyTorch for ROCm\nThe scope for this build of PyTorch is AMD GPUs with ROCm support, running on Linux. The GPUs\nsupported by ROCm include all of AMD\u2019s Instinct family of compute-focused data center GPUs, along\nwith some other select GPUs. 
A current list of supported GPUs can be found in the ROCm Github\nrepository. After confirming that the target system includes supported GPUs and the current 4.0.1\nrelease of ROCm, installation of PyTorch follows the same simple Pip-based installation as any other", "source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"} {"text": "Python package. As with PyTorch builds for other platforms, the configurator at https://pytorch.org/get-started/locally/ provides the specific command line to be run.\nPyTorch for ROCm is built from the upstream PyTorch repository, and is a full featured implementation.\nNotably, it includes support for distributed training across multiple GPUs and supports accelerated\nmixed precision training.\nMore information\nA list of ROCm supported GPUs and operating systems can be found at\nhttps://github.com/RadeonOpenCompute/ROCm\nGeneral documentation on the ROCm platform is available at https://rocmdocs.amd.com/en/latest/", "source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"} {"text": "ROCm Learning Center at https://developer.amd.com/resources/rocm-resources/rocm-learning-center/ General information on AMD\u2019s offerings for HPC and ML can be found at https://amd.com/hpc\nFeedback\nAn engaged user base is a tremendously important part of the PyTorch ecosystem. We would be deeply\nappreciative of feedback on the PyTorch for ROCm experience in the PyTorch discussion forum and, where appropriate, reporting any issues via Github.", "source": "https://pytorch.org/blog/pytorch-for-amd-rocm-platform-now-available-as-python-package/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"A Tour of PyTorch Internals (Part I)\"\nauthor: \"Trevor Killeen\"\ndate: 2017-05-11 12:00:00 -0500\nredirect_from: /2017/05/11/Internals.html\n\nThe fundamental unit in PyTorch is the Tensor. This post will serve as an overview for how we implement Tensors in PyTorch, such that the user can interact with it from the Python shell. In particular, we want to answer four main questions:\n\nHow does PyTorch extend the Python interpreter to define a Tensor type that can be manipulated from Python code?\nHow does PyTorch wrap the C libraries that actually define the Tensor's properties and methods?\nHow does PyTorch cwrap work to generate code for Tensor methods?\nHow does PyTorch's build system take all of these components to compile and generate a workable application?\n\nExtending the Python Interpreter", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "Extending the Python Interpreter\nPyTorch defines a new package torch. In this post we will consider the ._C module. This module is known as an \"extension module\" - a Python module written in C. Such modules allow us to define new built-in object types (e.g. the Tensor) and to call C/C++ functions.\nThe ._C module is defined in torch/csrc/Module.cpp. The init_C() / PyInit__C() function creates the module and adds the method definitions as appropriate. 
This module is passed around to a number of different __init() functions that add further objects to the module, register new types, etc.\nOne collection of these __init() calls is the following:\nASSERT_TRUE(THPDoubleTensor_init(module));\nASSERT_TRUE(THPFloatTensor_init(module));\nASSERT_TRUE(THPHalfTensor_init(module));\nASSERT_TRUE(THPLongTensor_init(module));\nASSERT_TRUE(THPIntTensor_init(module));\nASSERT_TRUE(THPShortTensor_init(module));\nASSERT_TRUE(THPCharTensor_init(module));\nASSERT_TRUE(THPByteTensor_init(module));\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "ASSERT_TRUE(THPByteTensor_init(module));\n```\nThese __init() functions add the Tensor object for each type to the ._C module so that they can be used in the module. Let's learn how these methods work.\nThe THPTensor Type\nMuch like the underlying TH and THC libraries, PyTorch defines a \"generic\" Tensor which is then specialized to a number of different types. Before considering how this specialization works, let's first consider how defining a new type in Python works, and how we create the generic THPTensor type.", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "The Python runtime sees all Python objects as variables of type PyObject *, which serves as a \"base type\" for all Python objects. Every Python type contains the refcount for the object, and a pointer to the object's type object. The type object determines the properties of the type. For example, it might contain a list of methods associated with the type, and which C functions get called to implement those methods. The object also contains any fields necessary to represent its state.\nThe formula for defining a new type is as follows:\n\nCreate a struct that defines what the new object will contain\nDefine the type object for the type\n\nThe struct itself could be very simple. Inn Python, all floating point types are actually objects on the heap. The Python float struct is defined as:\ntypedef struct {\n PyObject_HEAD\n double ob_fval;\n} PyFloatObject;\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "double ob_fval;\n} PyFloatObject;\nThe `PyObject_HEAD` is a macro that brings in the code that implements an object's reference counting, and a pointer to the corresponding type object. So in this case, to implement a float, the only other \"state\" needed is the floating point value itself.\n\nNow, let's see the struct for our `THPTensor` type:\n```cpp\nstruct THPTensor {\n PyObject_HEAD\n THTensor *cdata;\n};\n\nPretty simple, right? We are just wrapping the underlying TH tensor by storing a pointer to it.\nThe key part is defining the \"type object\" for a new type. 
An example definition of a type object for our Python float takes the form:\n```cpp\nstatic PyTypeObject py_FloatType = {\n PyVarObject_HEAD_INIT(NULL, 0)\n \"py.FloatObject\", /* tp_name */\n sizeof(PyFloatObject), /* tp_basicsize */\n 0, /* tp_itemsize */\n 0, /* tp_dealloc */\n 0, /* tp_print */\n 0, /* tp_getattr */", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "0, /* tp_getattr */\n 0, /* tp_setattr */\n 0, /* tp_as_async */\n 0, /* tp_repr */\n 0, /* tp_as_number */\n 0, /* tp_as_sequence */\n 0, /* tp_as_mapping */\n 0, /* tp_hash */\n 0, /* tp_call */\n 0, /* tp_str */\n 0, /* tp_getattro */\n 0, /* tp_setattro */\n 0, /* tp_as_buffer */\n Py_TPFLAGS_DEFAULT, /* tp_flags */\n \"A floating point number\", /* tp_doc */\n};\n```", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "\"A floating point number\", /* tp_doc */\n};\nThe easiest way to think of a *type object* is as a set of fields which define the properties of the object. For example, the `tp_basicsize` field is set to `sizeof(PyFloatObject)`. This is so that Python knows how much memory to allocate when calling `PyObject_New()` for a `PyFloatObject`. The full list of fields you can set is defined in `object.h` in the CPython backend:\nhttps://github.com/python/cpython/blob/master/Include/object.h.\n\nThe type object for our `THPTensor` is `THPTensorType`, defined in `csrc/generic/Tensor.cpp`. This object defines the name, size, mapping methods, etc. for a `THPTensor`.\n\nAs an example, let's take a look at the `tp_new` function we set in the `PyTypeObject`:\n\n```cpp\nPyTypeObject THPTensorType = {\n PyVarObject_HEAD_INIT(NULL, 0)\n ...\n THPTensor_(pynew), /* tp_new */\n};\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "...\n THPTensor_(pynew), /* tp_new */\n};\nThe `tp_new` function enables object creation. It is responsible for creating (as opposed to initializing) objects of that type and is equivalent to the `__new__()` method at the Python level. The C implementation is a static method that is passed the type being instantiated and any arguments, and returns a newly created object.\n\n```cpp\nstatic PyObject * THPTensor_(pynew)(PyTypeObject *type, PyObject *args, PyObject *kwargs)\n{\n HANDLE_TH_ERRORS\n Py_ssize_t num_args = args ? PyTuple_Size(args) : 0;\n\n THPTensorPtr self = (THPTensor *)type->tp_alloc(type, 0);\n// more code below\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "// more code below\nThe first thing our new function does is allocate the `THPTensor`. It then runs through a series of initializations based on the args passed to the function. For example, when creating a `THPTensor` *x* from another `THPTensor` *y*, we set the newly created `THPTensor`'s `cdata` field to be the result of calling `THTensor_(newWithTensor)` with *y*'s underlying `TH` Tensor as an argument. Similar constructors exist for sizes, storages, NumPy arrays, and sequences.\n\nNote that we solely use `tp_new`, and not a combination of `tp_new` and `tp_init` (which corresponds to the `__init__()` function).\n\nThe other important thing defined in Tensor.cpp is how indexing works. PyTorch Tensors support Python's **Mapping Protocol**. 
This allows us to do things like:\n```python\nx = torch.Tensor(10).fill_(1)\ny = x[3] // y == 1\nx[4] = 2\n// etc.\n\n** Note that this indexing extends to Tensor with more than one dimension", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "We are able to use the []-style notation by defining the three mapping methods described here.\nThe most important methods are THPTensor_(getValue) and THPTensor_(setValue) which describe how to index a Tensor, for returning a new Tensor/Scalar, or updating the values of an existing Tensor in place. Read through these implementations to better understand how PyTorch supports basic tensor indexing.\nGeneric Builds (Part One)\nWe could spend a ton of time exploring various aspects of the THPTensor and how it relates to defining a new Python object. But we still need to see how the THPTensor_(init)() function is translated to the THPIntTensor_init() we used in our module initialization. How do we take our Tensor.cpp file that defines a \"generic\" Tensor and use it to generate Python objects for all the permutations of types? To put it another way, Tensor.cpp is littered with lines of code like:\n```cpp", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "return THPTensor_(New)(THTensor_(new)(LIBRARY_STATE_NOARGS));\n\nThis illustrates both cases we need to make type-specific:\n\nOur output code will call THPTensor_New(...) in place of THPTensor_(New)\nOur output code will call THTensor_new(...) in place of THTensor_(new)\n\nIn other words, for all supported Tensor types, we need to \"generate\" source code that has done the above substitutions. This is part of the \"build\" process for PyTorch. PyTorch relies on Setuptools (https://setuptools.readthedocs.io/en/latest/) for building the package, and we define a setup.py file in the top-level directory to customize the build process.\nOne component building an Extension module using Setuptools is to list the source files involved in the compilation. However, our csrc/generic/Tensor.cpp file is not listed! So how does the code in this file end up being a part of the end product?", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "Recall that we are calling the THPTensor* functions (such as init) from the directory above generic. If we take a look in this directory, there is another file Tensor.cpp defined. The last line of this file is important:\n//generic_include TH torch/csrc/generic/Tensor.cpp\n\nNote that this Tensor.cpp file is included in setup.py, but it is wrapped in a call to a Python helper function called split_types. This function takes as input a file, and looks for the \"//generic_include\" string in the file contents. If it is found, it generates a new output file for each Tensor type, with the following changes:\n\nThe output file is renamed to Tensor.cpp\nThe output file is slightly modified as follows:\n\n# Before:\n//generic_include TH torch/csrc/generic/Tensor.cpp\n\n# After:\n#define TH_GENERIC_FILE \"torch/src/generic/Tensor.cpp\"\n#include \"TH/THGenerateType.h\"\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "include \"TH/THGenerateType.h\"\nIncluding the header file on the second line has the side effect of including the source code in `Tensor.cpp` with some additional context defined. 
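To make the split_types step concrete, here is a deliberately simplified Python sketch of the substitution it performs. The helper name, the list of types, and the output naming are assumptions for illustration; the real helper in PyTorch's build scripts handles more cases and more files:

```python
# Hypothetical, simplified sketch of a split_types-style helper.
TYPES = ["Double", "Float", "Half", "Long", "Int", "Short", "Char", "Byte"]
MARKER = "//generic_include TH torch/csrc/generic/Tensor.cpp"

def split_types(path):
    src = open(path).read()
    outputs = []
    for t in TYPES:
        expansion = ('#define TH_GENERIC_FILE "torch/csrc/generic/Tensor.cpp"\n'
                     '#include "TH/THGenerate%sType.h"' % t)
        # One output file per Tensor type, with the marker line expanded.
        outputs.append(("Tensor%s.cpp" % t, src.replace(MARKER, expansion)))
    return outputs  # list of (filename, source text) pairs
```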
Let's take a look at one of the headers:\n\n```cpp\n#ifndef TH_GENERIC_FILE\n#error \"You must define TH_GENERIC_FILE before including THGenerateFloatType.h\"\n#endif\n\n#define real float\n#define accreal double\n#define TH_CONVERT_REAL_TO_ACCREAL(_val) (accreal)(_val)\n#define TH_CONVERT_ACCREAL_TO_REAL(_val) (real)(_val)\n#define Real Float\n#define THInf FLT_MAX\n#define TH_REAL_IS_FLOAT\n#line 1 TH_GENERIC_FILE\n#include TH_GENERIC_FILE\n#undef accreal\n#undef real\n#undef Real\n#undef THInf\n#undef TH_REAL_IS_FLOAT\n#undef TH_CONVERT_REAL_TO_ACCREAL\n#undef TH_CONVERT_ACCREAL_TO_REAL\n\n#ifndef THGenerateManyTypes\n#undef TH_GENERIC_FILE\n#endif\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "undef TH_GENERIC_FILE\nendif\n```\nWhat this is doing is bringing in the code from the generic Tensor.cpp file and surrounding it with the following macro definitions. For example, we define real as a float, so any code in the generic Tensor implementation that refers to something as a real will have that real replaced with a float. In the corresponding file THGenerateIntType.h, the same macro would replace real with int.\nThese output files are returned from split_types and added to the list of source files, so we can see how the .cpp code for different types is created.", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "There are a few things to note here: First, the split_types function is not strictly necessary. We could wrap the code in Tensor.cpp in a single file, repeating it for each type. The reason we split the code into separate files is to speed up compilation. Second, what we mean when we talk about the type replacement (e.g. replace real with a float) is that the C preprocessor will perform these substitutions during compilation. Merely surrounding the source code with these macros has no side effects until preprocessing.\nGeneric Builds (Part Two)\nNow that we have source files for all the Tensor types, we need to consider how the corresponding header declarations are created, and also how the conversions from THTensor_(method) and THPTensor_(method) to THTensor_method and THPTensor_method work. For example, csrc/generic/Tensor.h has declarations like:\nTHP_API PyObject * THPTensor_(New)(THTensor *ptr);\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "We use the same strategy for generating code in the source files for the headers. In `csrc/Tensor.h`, we do the following:\n```cpp\n#include \"generic/Tensor.h\"\n#include \n\n#include \"generic/Tensor.h\"\n#include \n\nThis has the same effect, where we draw in the code from the generic header, wrapped with the same macro definitions, for each type. The only difference is that the resulting code is contained all within the same header file, as opposed to being split into multiple source files.\nLastly, we need to consider how we \"convert\" or \"substitute\" the function types. If we look in the same header file, we see a bunch of #define statements, including:\n#define THPTensor_(NAME) TH_CONCAT_4(THP,Real,Tensor_,NAME)\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "``\nThis macro says that any string in the source code matching the formatTHPTensor_(NAME)should be replaced withTHPRealTensor_NAME, where Real is derived from whatever the symbol Real is#define'd to be at the time. 
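A tiny Python stand-in for that token pasting shows the kind of names that result; this is purely illustrative, since the real work is done by the C preprocessor via TH_CONCAT_4:

```python
# Mimics TH_CONCAT_4(THP, Real, Tensor_, NAME) for a given value of Real.
def thp_name(real, name):
    return "THP" + real + "Tensor_" + name

print(thp_name("Float", "New"))   # THPFloatTensor_New
print(thp_name("Int", "init"))    # THPIntTensor_init
```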
Because our header code and source code are surrounded by macro definitions for all the types as seen above, after the preprocessor has run, the resulting code is what we would expect. The code in the `TH` library defines the same macro for `THTensor_(NAME)`, supporting the translation of those functions as well. In this way, we end up with header and source files with specialized code.\nModule Objects and Type Methods\nNow we have seen how we have wrapped TH's Tensor definition in THP, and generated THP methods such as THPFloatTensor_init(...). Now we can explore what the above code actually does in terms of the module we are creating. The key line in THPTensor_(init) is:\n```cpp\n// THPTensorBaseStr, THPTensorType are also macros that are specific\n// to each type", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "// to each type\nPyModule_AddObject(module, THPTensorBaseStr, (PyObject *)&THPTensorType);\nThis function registers our Tensor objects to the extension module, so we can use THPFloatTensor, THPIntTensor, etc. in our Python code.\n\nJust being able to create Tensors isn't very useful - we need to be able to call all the methods that `TH` defines. A simple example shows calling the in-place `zero_` method on a Tensor.\n```python\nx = torch.FloatTensor(10)\nx.zero_()\n\nLet's start by seeing how we add methods to newly defined types. One of the fields in the \"type object\" is tp_methods. This field holds an array of method definitions (PyMethodDefs) and is used to associate methods (and their underlying C/C++ implementations) with a type. Suppose we wanted to define a new method on our PyFloatObject that replaces the value. We could implement this as follows:\n```cpp\nstatic PyObject * replace(PyFloatObject *self, PyObject *args) {\n double val;\n if (!PyArg_ParseTuple(args, \"d\", &val))\n return NULL;", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "return NULL;\n self->ob_fval = val;\n Py_RETURN_NONE;\n}\nThis is equivalent to the Python method:\n```python\ndef replace(self, val):\n self.ob_fval = val\n\nIt is instructive to read more about how defining methods works in CPython. In general, methods take as the first parameter the instance of the object, and optionally parameters for the positional arguments and keyword arguments. This static function is registered as a method on our float:\nstatic PyMethodDef float_methods[] = {\n {\"replace\", (PyCFunction)replace, METH_VARARGS,\n \"replace the value in the float\"\n },\n {NULL} /* Sentinel */\n};\n\nThis registers a method called replace, which is implemented by the C function of the same name. The METH_VARARGS flag indicates that the method takes a tuple of arguments representing all the arguments to the function. This array is set to the tp_methods field of the type object, and then we can use the replace method on objects of that type.", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "We would like to be able to call all of the methods for TH tensors on our THP tensor equivalents. However, writing wrappers for all of the TH methods would be time-consuming and error-prone. We need a better way to do this.\nPyTorch cwrap\nPyTorch implements its own cwrap tool to wrap the TH Tensor methods for use in the Python backend. We define a .cwrap file containing a series of C method declarations in our custom YAML format. 
The cwrap tool takes this file and outputs .cpp source files containing the wrapped methods in a format that is compatible with our THPTensor Python object and the Python C extension method calling format. This tool is used to generate code to wrap not only TH, but also CuDNN. It is defined to be extensible.\nAn example YAML \"declaration\" for the in-place addmv_ function is as follows:\n```\n[[\n name: addmv_\n cname: addmv\n return: self\n arguments:\n - THTensor self\n - arg: real beta\n default: AS_REAL(1)\n - THTensor self", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "default: AS_REAL(1)\n - THTensor self\n - arg: real alpha\n default: AS_REAL(1)\n - THTensor mat\n - THTensor* vec\n]]\n``\nThe architecture of the cwrap tool is very simple. It reads in a file, and then processes it with a series of **plugins.** Seetools/cwrap/plugins/init.py` for documentation on all the ways a plugin can alter the code.\nThe source code generation occurs in a series of passes. First, the YAML \"declaration\" is parsed and processed. Then the source code is generated piece-by-piece - adding things like argument checks and extractions, defining the method header, and the actual call to the underlying library such as TH. Finally, the cwrap tool allows for processing the entire file at a time. The resulting output for addmv_ can be explored here.", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "In order to interface with the CPython backend, the tool generates an array of PyMethodDefs that can be stored or appended to the THPTensor's tp_methods field.\nIn the specific case of wrapping Tensor methods, the build process first generates the output source file from TensorMethods.cwrap. This source file is #include'd in the generic Tensor source file. This all occurs before the preprocessor does its magic. As a result, all of the method wrappers that are generated undergo the same pass as the THPTensor code above. Thus a single generic declaration and definition is specialized for each type as well.\nPutting It All Together\nSo far, we have shown how we extend the Python interpreter to create a new extension module, how such a module defines our new THPTensor type, and how we can generate source code for Tensors of all types that interface with TH. Briefly, we will touch on compilation.", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "Setuptools allows us to define an Extension for compilation. The entire torch._C extension is compiled by collecting all of the source files, header files, libraries, etc. and creating a setuptools Extension. Then setuptools handles building the extension itself. I will explore the build process more in a subsequent post.\nTo summarize, let's revisit our four questions:\n\nHow does PyTorch extend the Python interpreter to define a Tensor type that can be manipulated from Python code?\n\nIt uses CPython's framework for extending the Python interpreter and defining new types, while taking special care to generate code for all types.\n\nHow does PyTorch wrap the C libraries that actually define the Tensor's properties and methods?\n\nIt does so by defining a new type, THPTensor, that is backed by a TH Tensor. 
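For orientation, a minimal Setuptools extension definition looks roughly like the following; this is a generic sketch with made-up module and file names, not PyTorch's actual setup.py:

```python
from setuptools import Extension, setup

# Hypothetical names, for illustration only.
ext = Extension(
    name="mypkg._C",               # import path of the compiled module
    sources=["csrc/Module.cpp"],   # C/C++ sources to compile
    include_dirs=["csrc"],         # header search paths
    language="c++",
)

setup(name="mypkg", ext_modules=[ext])
```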
Function calls are forwarded to this tensor via the CPython backend's conventions.\n\nHow does PyTorch cwrap work to generate code for Tensor methods?\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "It takes our custom YAML-formatted code and generates source code for each method by processing it through a series of steps using a number of plugins.\n\nHow does PyTorch's build system take all of these components to compile and generate a workable application?\n\nIt takes a bunch of source/header files, libraries, and compilation directives to build an extension using Setuptools.\nThis is just a snapshot of parts of the build system for PyTorch. There is more nuance, and detail, but I hope this serves as a gentle introduction to a lot of the components of our Tensor library.\nResources:\n\nhttps://docs.python.org/3.7/extending/index.html is invaluable for understanding how to write C/C++ Extension to Python\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-1/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"PyTorch 1.12: TorchArrow, Functional API for Modules and nvFuser, are now available\"\nauthor: Team PyTorch\nfeatured-img: ''\n\nWe are excited to announce the release of PyTorch 1.12 (release note)! This release is composed of over 3124 commits, 433 contributors. Along with 1.12, we are releasing beta versions of AWS S3 Integration, PyTorch Vision Models on Channels Last on CPU, Empowering PyTorch on Intel\u00ae Xeon\u00ae Scalable processors with Bfloat16 and FSDP API. We want to sincerely thank our dedicated community for your contributions.\nSummary:\n- Functional APIs to functionally apply module computation with a given set of parameters\n- Complex32 and Complex Convolutions in PyTorch\n- DataPipes from TorchData fully backward compatible with DataLoader \n- functorch with improved coverage for APIs\n- nvFuser a deep learning compiler for PyTorch", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "\nnvFuser a deep learning compiler for PyTorch\nChanges to float32 matrix multiplication precision on Ampere and later CUDA hardware\nTorchArrow, a new beta library for machine learning preprocessing over batch data\n\nFrontend APIs\nIntroducing TorchArrow\nWe\u2019ve got a new Beta release ready for you to try and use: TorchArrow. This is a library for machine learning preprocessing over batch data. It features a performant and Pandas-style, easy-to-use API in order to speed up your preprocessing workflows and development.\nCurrently, it provides a Python DataFrame interface with the following features:\n- High-performance CPU backend, vectorized and extensible User-Defined Functions (UDFs) with Velox\n- Seamless handoff with PyTorch or other model authoring, such as Tensor collation and easily plugging into PyTorch DataLoader and DataPipes\n- Zero copy for external readers via Arrow in-memory columnar format", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "For more details, please find our 10-min tutorial, installation instructions, API documentation, and a prototype for data preprocessing in TorchRec.\n(Beta) Functional API for Modules\nPyTorch 1.12 introduces a new beta feature to functionally apply Module computation with a given set of parameters. Sometimes, the traditional PyTorch Module usage pattern that maintains a static set of parameters internally is too restrictive. 
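As a flavor of the Pandas-style API, a short sketch along the lines of the tutorial; it assumes the torcharrow package is installed, and exact method behavior is best checked against the API documentation:

```python
import torcharrow as ta

# Build a small DataFrame and a derived column, Pandas-style.
df = ta.dataframe({"a": [1, 2, None, 4], "b": [10.0, 20.0, 30.0, 40.0]})
c = df["a"] + df["b"]   # elementwise add; the null in "a" propagates
print(df)
print(c)
```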
This is often the case when implementing algorithms for meta-learning, where multiple sets of parameters may need to be maintained across optimizer steps. \nThe new torch.nn.utils.stateless.functional_call() API allows for: \n- Module computation with full flexibility over the set of parameters used", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "\nNo need to reimplement your module in a functional way\nAny parameter or buffer present in the module can be swapped with an externally-defined value for use in the call. Naming for referencing parameters / buffers follows the fully-qualified form in the module\u2019s state_dict()\n\nExample:\n```python\nimport torch\nfrom torch import nn\nfrom torch.nn.utils.stateless import functional_call\nclass MyModule(nn.Module):\n def init(self):\n super().init()\n self.fc1 = nn.Linear(3, 3)\n self.bn = nn.BatchNorm1d(3)\n self.fc2 = nn.Linear(3, 3)\ndef forward(self, x):\n return self.fc2(self.bn(self.fc1(x)))\n\nm = MyModule()\nDefine parameter / buffer values to use during module computation.\nmy_weight = torch.randn(3, 3, requires_grad=True)\nmy_bias = torch.tensor([1., 2., 3.], requires_grad=True)\nparams_and_buffers = {\n 'fc1.weight': my_weight,\n 'fc1.bias': my_bias,\n # Custom buffer values can be used too.\n 'bn.running_mean': torch.randn(3),\n}", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "'bn.running_mean': torch.randn(3),\n}\nApply module computation to the input with the specified parameters / buffers.\ninp = torch.randn(5, 3)\noutput = functional_call(m, params_and_buffers, inp)\n```\n(Beta) Complex32 and Complex Convolutions in PyTorch\nPyTorch today natively supports complex numbers, complex autograd, complex modules, and numerous complex operations, including linear algebra and Fast Fourier Transform (FFT) operators. Many libraries, including torchaudio and ESPNet, already make use of complex numbers in PyTorch, and PyTorch 1.12 further extends complex functionality with complex convolutions and the experimental complex32 (\u201ccomplex half\u201d) data type that enables half precision FFT operations. Due to the bugs in CUDA 11.3 package, we recommend using CUDA 11.6 package from wheels if you are using complex numbers.\n(Beta) Forward-mode Automatic Differentiation", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "(Beta) Forward-mode Automatic Differentiation\nForward-mode AD allows the computation of directional derivatives (or equivalently, Jacobian-vector products) eagerly in the forward pass. PyTorch 1.12 significantly improves the operator coverage for forward-mode AD. See our tutorial for more information.\nTorchData\nBC DataLoader + DataPipe\n`DataPipe` from TorchData becomes fully backward compatible with the existing `DataLoader` regarding shuffle determinism and dynamic sharding in both multiprocessing and distributed environments. For more details, please check out the tutorial.\n(Beta) AWS S3 Integration\nDataPipes based on AWSSDK have been integrated into TorchData. 
It provides the following features backed by native AWSSDK:", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "\n\nRetrieve list of urls from each S3 bucket based on prefix\n\nSupport timeout to prevent hanging indefinitely\nSupport to specify S3 bucket region\n\n\n\nLoad data from S3 urls\n\nSupport buffered and multi-part download\nSupport to specify S3 bucket region\n\n\n\nAWS native DataPipes are still in the beta phase. And, we will keep tuning them to improve their performance.\n(Prototype) DataLoader2\nDataLoader2 became available in prototype mode. We are introducing new ways to interact between DataPipes, DataLoading API, and backends (aka ReadingServices). Feature is stable in terms of API, but functionally not complete yet. We welcome early adopters and feedback, as well as potential contributors.\nFor more details, please checkout the link.\nfunctorch", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "functorch\nInspired by Google JAX, functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch. Examples of these include:\n- running ensembles of models on a single machine\n- efficiently computing Jacobians and Hessians\n- computing per-sample-gradients (or other per-sample quantities)\nWe\u2019re excited to announce functorch 0.2.0 with a number of improvements and new experimental features.\nSignificantly improved coverage\nWe significantly improved coverage for functorch.jvp (our forward-mode autodiff API) and other APIs that rely on it (functorch.{jacfwd, hessian}).\n(Prototype) functorch.experimental.functionalize", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "Given a function f, functionalize(f) returns a new function without mutations (with caveats). This is useful for constructing traces of PyTorch functions without in-place operations. For example, you can use make_fx(functionalize(f)) to construct a mutation-free trace of a pytorch function. To learn more, please see the documentation.\nFor more details, please see our installation instructions, documentation, tutorials, and release notes.\nPerformance Improvements\nIntroducing nvFuser, a deep learning compiler for PyTorch", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "In PyTorch 1.12, Torchscript is updating its default fuser (for Volta and later CUDA accelerators) to nvFuser, which supports a wider range of operations and is faster than NNC, the previous fuser for CUDA devices. A soon to be published blog post will elaborate on nvFuser and show how it speeds up training on a variety of networks. \nSee the nvFuser documentation for more details on usage and debugging.\nChanges to float32 matrix multiplication precision on Ampere and later CUDA hardware", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "PyTorch supports a variety of \u201cmixed precision\u201d techniques, like the torch.amp (Automated Mixed Precision) module and performing float32 matrix multiplications using the TensorFloat32 datatype on Ampere and later CUDA hardware for faster internal computations. 
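For reference, a typical torch.amp training step looks roughly like this; `model`, `loader`, `loss_fn`, and `optimizer` are assumed to be defined elsewhere:

```python
import torch

scaler = torch.cuda.amp.GradScaler()

for inputs, targets in loader:                 # assumed DataLoader
    optimizer.zero_grad()
    with torch.autocast(device_type="cuda", dtype=torch.float16):
        loss = loss_fn(model(inputs), targets)
    scaler.scale(loss).backward()              # scale to avoid fp16 underflow
    scaler.step(optimizer)
    scaler.update()
```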
In PyTorch 1.12 we\u2019re changing the default behavior of float32 matrix multiplications to always use full IEEE fp32 precision, which is more precise but slower than using the TensorFloat32 datatype for internal computation. For devices with a particularly high ratio of TensorFloat32 to float32 throughput such as A100, this change in defaults can result in a large slowdown.\nIf you\u2019ve been using TensorFloat32 matrix multiplications then you can continue to do so by setting torch.backends.cuda.matmul.allow_tf32 = True\nwhich is supported since PyTorch 1.7. Starting in PyTorch 1.12 the new matmul precision API can be used, too: torch.set_float32_matmul_precision(\u201chighest\u201d|\u201dhigh\u201d|\u201dmedium\u201d)", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "To reiterate, PyTorch\u2019s new default is \u201chighest\u201d precision for all device types. We think this provides better consistency across device types for matrix multiplications. Documentation for the new precision API can be found here. Setting the \u201chigh\u201d or \u201cmedium\u201d precision types will enable TensorFloat32 on Ampere and later CUDA devices. If you\u2019re updating to PyTorch 1.12 then to preserve the current behavior and faster performance of matrix multiplications on Ampere devices, set precision to \u201chigh\u201d.\nUsing mixed precision techniques is essential for training many modern deep learning networks efficiently, and if you\u2019re already using torch.amp this change is unlikely to affect you. If you\u2019re not familiar with mixed precision training then see our soon to be published \u201cWhat Every User Should Know About Mixed Precision Training in PyTorch\u201d blogpost.", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "(Beta) Accelerating PyTorch Vision Models with Channels Last on CPU\nMemory formats have a significant impact on performance when running vision models, generally Channels Last is more favorable from a performance perspective due to better data locality. 1.12 includes fundamental concepts of memory formats and demonstrates performance benefits using Channels Last on popular PyTorch vision models on Intel\u00ae Xeon\u00ae Scalable processors.\n- Enables Channels Last memory format support for the commonly used operators in CV domain on CPU, applicable for both inference and training\n- Provides native level optimization on Channels Last kernels from ATen, applicable for both AVX2 and AVX512\n- Delivers 1.3x to 1.8x inference performance gain over Channels First for TorchVision models on Intel\u00ae Xeon\u00ae Ice Lake (or newer) CPUs\n(Beta) Empowering PyTorch on Intel\u00ae Xeon\u00ae Scalable processors with Bfloat16", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "Reduced precision numeric formats like bfloat16 improves PyTorch performance across multiple deep learning training workloads. PyTorch 1.12 includes the latest software enhancements on bfloat16 which applies to a broader scope of user scenarios and showcases even higher performance gains. The main improvements include:\n- 2x hardware compute throughput vs. 
float32 with the new bfloat16 native instruction VDPBF16PS, introduced on Intel\u00ae Xeon\u00ae Cooper Lake CPUs\n- 1/2 memory footprint of float32, faster speed for memory bandwidth intensive operators\n- 1.4x to 2.2x inference performance gain over float32 for TorchVision models on Intel\u00ae Xeon\u00ae Cooper Lake (or newer) CPUs\n(Prototype) Introducing Accelerated PyTorch Training on Mac", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "With the PyTorch 1.12 release, developers and researchers can now take advantage of Apple silicon GPUs for significantly faster model training. This unlocks the ability to perform machine learning workflows like prototyping and fine-tuning locally, right on Mac. Accelerated GPU training is enabled using Apple\u2019s Metal Performance Shaders (MPS) as a backend. The benefits include performance speedup from accelerated GPU training and the ability to train larger networks or batch sizes locally. Learn more here. \n\n\n\n\n Accelerated GPU training and evaluation speedups over CPU-only (times faster)\n\nAlongside the new MPS device support, the M1 binaries for Core and Domain libraries that have been available for the last few releases are now an official prototype feature. These binaries can be used to run PyTorch natively on Apple Silicon.", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "(Prototype) BetterTransformer: Fastpath execution for Transformer Encoder Inference\nPyTorch now supports CPU and GPU fastpath implementations (\u201cBetterTransformer\u201d) for several Transformer Encoder modules including TransformerEncoder, TransformerEncoderLayer, and MultiHeadAttention (MHA). The BetterTransformer fastpath architecture Better Transformer is consistently faster \u2013 2x for many common execution scenarios, depending on model and input characteristics. The new BetterTransformer-enabled modules are API compatible with previous releases of the PyTorch Transformer API and will accelerate existing models if they meet fastpath execution requirements, as well as read models trained with previous versions of PyTorch. PyTorch 1.12 includes: \n- BetterTransformer integration for Torchtext\u2019s pretrained RoBERTa and XLM-R models\n- Torchtext which builds on the PyTorch Transformer API", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "\nFastpath execution for improved performance by reducing execution overheads with fused kernels which combines multiple operators into a single kernel\nOption to achieve additional speedups by taking advantage of data sparsity during the processing of padding tokens in natural-language processing (by setting enable_nested_tensor=True when creating a TransformerEncoder)\nDiagnostics to help users understand why fastpath execution did not occur\n\n\n\n\nDistributed\n(Beta) Fully Sharded Data Parallel (FSDP) API", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "(Beta) Fully Sharded Data Parallel (FSDP) API\nFSDP API helps easily scale large model training by sharding a model\u2019s parameters, gradients and optimizer states across data parallel workers while maintaining the simplicity of data parallelism. The prototype version was released in PyTorch 1.11 with a minimum set of features that helped scaling tests of models with up to 1T parameters. 
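Basic usage is a thin wrapper around an existing module; this sketch assumes the distributed process group has already been initialized (for example via torchrun) and that `MyModel` is a placeholder for a user-defined nn.Module:

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

model = MyModel().cuda()   # MyModel is a placeholder for your own module
model = FSDP(model)        # parameters, gradients, and optimizer state are sharded

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
```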
\nIn this beta release, FSDP API added the following features to support various production workloads. Highlights of the the newly added features in this beta release include:", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "\nUniversal sharding strategy API - Users can easily change between sharding strategies with a single line change, and thus compare and use DDP (only data sharding), FSDP (full model and data sharding), or Zero2 (only sharding of optimizer and gradients) to optimize memory and performance for their specific training needs\nFine grained mixed precision policies - Users can specify a mix of half and full data types (bfloat16, fp16 or fp32) for model parameters, gradient communication, and buffers via mixed precision policies. Models are automatically saved in fp32 to allow for maximum portability\nTransformer auto wrapping policy - allows for optimal wrapping of Transformer based models by registering the models layer class, and thus accelerated training performance\nFaster model initialization using device_id init - initialization is performed in a streaming fashion to avoid OOM issues and optimize init performance vs CPU init\n", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "\nRank0 streaming for full model saving of larger models - Fully sharded models can be saved by all GPU\u2019s streaming their shards to the rank 0 GPU, and the model is built in full state on the rank 0 CPU for saving\n\nFor more details and example code, please checkout the documentation and the tutorial. \nThanks for reading, If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Twitter, Medium, YouTube, and LinkedIn.\nCheers! \nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-1.12-released/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch 1.4 released, domain libraries updated'\nauthor: Team PyTorch\n\nToday, we\u2019re announcing the availability of PyTorch 1.4, along with updates to the PyTorch domain libraries. These releases build on top of the announcements from NeurIPS 2019, where we shared the availability of PyTorch Elastic, a new classification framework for image and video, and the addition of Preferred Networks to the PyTorch community. For those that attended the workshops at NeurIPS, the content can be found here.\nPyTorch 1.4\nThe 1.4 release of PyTorch adds new capabilities, including the ability to do fine grain build level customization for PyTorch Mobile, and new experimental features including support for model parallel training and Java language bindings.\nPyTorch Mobile - Build level customization", "source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"} {"text": "PyTorch Mobile - Build level customization\nFollowing the open sourcing of PyTorch Mobile in the 1.3 release, PyTorch 1.4 adds additional mobile support including the ability to customize build scripts at a fine-grain level. This allows mobile developers to optimize library size by only including the operators used by their models and, in the process, reduce their on device footprint significantly. Initial results show that, for example, a customized MobileNetV2 is 40% to 50% smaller than the prebuilt PyTorch mobile library. 
You can learn more here about how to create your own custom builds and, as always, please engage with the community on the PyTorch forums to provide any feedback you have.\nExample code snippet for selectively compiling only the operators needed for MobileNetV2:\n```python", "source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"} {"text": "# Dump list of operators used by MobileNetV2:\nimport torch, yaml\nmodel = torch.jit.load('MobileNetV2.pt')\nops = torch.jit.export_opnames(model)\nwith open('MobileNetV2.yaml', 'w') as output:\n yaml.dump(ops, output)\n\n# Build PyTorch Android library customized for MobileNetV2:\nSELECTED_OP_LIST=MobileNetV2.yaml scripts/build_pytorch_android.sh arm64-v8a\n\n# Build PyTorch iOS library customized for MobileNetV2:\nSELECTED_OP_LIST=MobileNetV2.yaml BUILD_PYTORCH_MOBILE=1 IOS_ARCH=arm64 scripts/build_ios.sh\n\nDistributed model parallel training (Experimental)", "source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"} {"text": "With the scale of models, such as RoBERTa, continuing to increase into the billions of parameters, model parallel training has become ever more important to help researchers push the limits. This release provides a distributed RPC framework to support distributed model parallel training. It allows for running functions remotely and referencing remote objects without copying the real data around, and provides autograd and optimizer APIs to transparently run backwards and update parameters across RPC boundaries.\nTo learn more about the APIs and the design of this feature, see the links below:\n\nAPI documentation\nDistributed Autograd design doc\nRemote Reference design doc\n\nFor the full tutorials, see the links below: \n\nA full RPC tutorial\n", "source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"} {"text": "\nExamples using model parallel training for reinforcement learning and with an LSTM\n\nAs always, you can connect with community members and discuss more on the forums.\nJava bindings (Experimental)\nIn addition to supporting Python and C++, this release adds experimental support for Java bindings. Based on the interface developed for Android in PyTorch Mobile, the new bindings allow you to invoke TorchScript models from any Java program. Note that the Java bindings are only available for Linux for this release, and for inference only. We expect support to expand in subsequent releases. See the code snippet below for how to use PyTorch within Java:\n```java\nModule mod = Module.load(\"demo-model.pt1\");\nTensor data =\n Tensor.fromBlob(\n new int[] {1, 2, 3, 4, 5, 6}, // data\n new long[] {2, 3} // shape\n );", "source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"} {"text": "new long[] {2, 3} // shape\n );\nIValue result = mod.forward(IValue.from(data), IValue.from(3.0));\nTensor output = result.toTensor();\nSystem.out.println(\"shape: \" + Arrays.toString(output.shape()));\nSystem.out.println(\"data: \" + Arrays.toString(output.getDataAsFloatArray()));\n```\nLearn more about how to use PyTorch from Java here, and see the full Javadocs API documentation here.\nFor the full 1.4 release notes, see here.\nDomain Libraries\nPyTorch domain libraries like torchvision, torchtext, and torchaudio complement PyTorch with common datasets, models, and transforms. 
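For example, pulling in a pretrained torchvision classifier and a standard preprocessing pipeline takes only a few lines; this is illustrative, and the pretrained weights are downloaded on first use:

```python
import torchvision
from torchvision import transforms

model = torchvision.models.resnet18(pretrained=True).eval()
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])
```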
We\u2019re excited to share new releases for all three domain libraries alongside the PyTorch 1.4 core release.\ntorchvision 0.5\nThe improvements to torchvision 0.5 mainly focus on adding support for production deployment including quantization, TorchScript, and ONNX. Some of the highlights include:", "source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"} {"text": "\nAll models in torchvision are now torchscriptable making them easier to ship into non-Python production environments\nResNets, MobileNet, ShuffleNet, GoogleNet and InceptionV3 now have quantized counterparts with pre-trained models, and also include scripts for quantization-aware training.\nIn partnership with the Microsoft team, we\u2019ve added ONNX support for all models including Mask R-CNN.\n\nLearn more about torchvision 0.5 here.\ntorchaudio 0.4\nImprovements in torchaudio 0.4 focus on enhancing the currently available transformations, datasets, and backend support. Highlights include:\n\nSoX is now optional, and a new extensible backend dispatch mechanism exposes SoundFile as an alternative to SoX.\nThe interface for datasets has been unified. This enables the addition of two large datasets: LibriSpeech and Common Voice.\n", "source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"} {"text": "\nNew filters such as biquad, data augmentation such as time and frequency masking, transforms such as MFCC, gain and dither, and new feature computation such as deltas, are now available.\nTransformations now support batches and are jitable.\nAn interactive speech recognition demo with voice activity detection is available for experimentation.\n\nLearn more about torchaudio 0.4 here.\ntorchtext 0.5\ntorchtext 0.5 focuses mainly on improvements to the dataset loader APIs, including compatibility with core PyTorch APIs, but also adds support for unsupervised text tokenization. Highlights include:\n\nAdded bindings for SentencePiece for unsupervised text tokenization .\nAdded a new unsupervised learning dataset - enwik9.\nMade revisions to PennTreebank, WikiText103, WikiText2, IMDb to make them compatible with torch.utils.data. Those datasets are in an experimental folder and we welcome your feedback.\n", "source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"} {"text": "Learn more about torchtext 0.5 here.\nWe\u2019d like to thank the entire PyTorch team and the community for all their contributions to this work.\nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-1-dot-4-released-and-domain-libraries-updated/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Stochastic Weight Averaging in PyTorch'\nauthor: Pavel Izmailov and Andrew Gordon Wilson\nredirect_from: /2019/04/29/road-to-1.0.html\n\nIn this blogpost we describe the recently proposed Stochastic Weight Averaging (SWA) technique [1, 2], and its new implementation in torchcontrib. SWA is a simple procedure that improves generalization in deep learning over Stochastic Gradient Descent (SGD) at no additional cost, and can be used as a drop-in replacement for any other optimizer in PyTorch. 
SWA has a wide range of applications and features:\n\nSWA has been shown to significantly improve generalization in computer vision tasks, including VGG, ResNets, Wide ResNets and DenseNets on ImageNet and CIFAR benchmarks [1, 2].\nSWA provides state-of-the-art performance on key benchmarks in semi-supervised learning and domain adaptation [2].\n", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "\nSWA is shown to improve the stability of training as well as the final average rewards of policy-gradient methods in deep reinforcement learning [3].\nAn extension of SWA can obtain efficient Bayesian model averaging, as well as high quality uncertainty estimates and calibration in deep learning [4].\nSWA for low precision training, SWALP, can match the performance of full-precision SGD even with all numbers quantized down to 8 bits, including gradient accumulators [5].\n\nIn short, SWA performs an equal average of the weights traversed by SGD with a modified learning rate schedule (see the left panel of Figure 1.). SWA solutions end up in the center of a wide flat region of loss, while SGD tends to converge to the boundary of the low-loss region, making it susceptible to the shift between train and test error surfaces (see the middle and right panels of Figure 1).\n\n\n", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "\nFigure 1. Illustrations of SWA and SGD with a Preactivation ResNet-164 on CIFAR-100 [1]. Left: test error surface for three FGE samples and the corresponding SWA solution (averaging in weight space). Middle and Right: test error and train loss surfaces showing the weights proposed by SGD (at convergence) and SWA, starting from the same initialization of SGD after 125 training epochs. Please see [1] for details on how these figures were constructed.\nWith our new implementation in torchcontrib using SWA is as easy as using any other optimizer in PyTorch:\nfrom torchcontrib.optim import SWA\n\n...\n...\n\n# training loop\nbase_opt = torch.optim.SGD(model.parameters(), lr=0.1)\nopt = torchcontrib.optim.SWA(base_opt, swa_start=10, swa_freq=5, swa_lr=0.05)\nfor _ in range(100):\n opt.zero_grad()\n loss_fn(model(input), target).backward()\n opt.step()\nopt.swap_swa_sgd()\n", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "opt.step()\nopt.swap_swa_sgd()\n```\nYou can wrap any optimizer from torch.optim using the SWA class, and then train your model as usual. When training is complete you simply call swap_swa_sgd() to set the weights of your model to their SWA averages. Below we explain the SWA procedure and the parameters of the SWA class in detail. We emphasize that SWA can be combined with any optimization procedure, such as Adam, in the same way that it can be combined with SGD.\nIs this just Averaged SGD?", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "Is this just Averaged SGD?\nAt a high level, averaging SGD iterates dates back several decades in convex optimization [6, 7], where it is sometimes referred to as Polyak-Ruppert averaging, or averaged SGD. But the details matter. Averaged SGD is often employed in conjunction with a decaying learning rate, and an exponentially moving average, typically for convex optimization. In convex optimization, the focus has been on improved rates of convergence. 
In deep learning, this form of averaged SGD smooths the trajectory of SGD iterates, but does not perform very differently.\nBy contrast, SWA is focused on an equal average of SGD iterates with a modified cyclical or high constant learning rate, and exploits the flatness of training objectives [8] specific to deep learning for improved generalization.\nStochastic Weight Averaging", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "Stochastic Weight Averaging\nThere are two important ingredients that make SWA work. First, SWA uses a modified learning rate schedule so that SGD continues to explore the set of high-performing networks instead of simply converging to a single solution. For example, we can use the standard decaying learning rate strategy for the first 75% of training time, and then set the learning rate to a reasonably high constant value for the remaining 25% of the time (see the Figure 2 below). The second ingredient is to average the weights of the networks traversed by SGD. For example, we can maintain a running average of the weights obtained in the end of every epoch within the last 25% of training time (see Figure 2).\n\n\n", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "\nFigure 2. Illustration of the learning rate schedule adopted by SWA. Standard decaying schedule is used for the first 75% of the training and then a high constant value is used for the remaining 25%. The SWA averages are formed during the last 25% of training.\nIn our implementation the auto mode of the SWA optimizer allows us to run the procedure described above. To run SWA in auto mode you just need to wrap your optimizer base_opt of choice (can be SGD, Adam, or any other torch.optim.Optimizer) with SWA(base_opt, swa_start, swa_freq, swa_lr). After swa_start optimization steps the learning rate will be switched to a constant value swa_lr, and in the end of every swa_freq optimization steps a snapshot of the weights will be added to the SWA running average. Once you run opt.swap_swa_sgd(), the weights of your model are replaced with their SWA running averages.\nBatch Normalization", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "Batch Normalization\nOne important detail to keep in mind is batch normalization. Batch normalization layers compute running statistics of activations during training. Note that the SWA averages of the weights are never used to make predictions during training, and so the batch normalization layers do not have the activation statistics computed after you reset the weights of your model with opt.swap_swa_sgd(). To compute the activation statistics you can just make a forward pass on your training data using the SWA model once the training is finished. In the SWA class we provide a helper function opt.bn_update(train_loader, model). It updates the activation statistics for every batch normalization layer in the model by making a forward pass on the train_loader data loader. You only need to call this function once in the end of training.\nAdvanced Learning-Rate Schedules", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "Advanced Learning-Rate Schedules\nSWA can be used with any learning rate schedule that encourages exploration of the flat region of solutions. 
For example, you can use cyclical learning rates in the last 25% of the training time instead of a constant value, and average the weights of the networks corresponding to the lowest values of the learning rate within each cycle (see Figure 3).\n\n\n\nFigure 3. Illustration of SWA with an alternative learning rate schedule. Cyclical learning rates are adopted in the last 25% of training, and models for averaging are collected in the end of each cycle.\nIn our implementation you can implement custom learning rate and weight averaging strategies by using SWA in the manual mode. The following code is equivalent to the auto mode code presented in the beginning of this blogpost.\n```python\nopt = torchcontrib.optim.SWA(base_opt)\nfor i in range(100):", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "for i in range(100):\n opt.zero_grad()\n loss_fn(model(input), target).backward()\n opt.step()\n if i > 10 and i % 5 == 0:\n opt.update_swa()\nopt.swap_swa_sgd()\n```\nIn manual mode you don\u2019t specify swa_start, swa_lr and swa_freq, and just call opt.update_swa() whenever you want to update the SWA running averages (for example in the end of each learning rate cycle). In manual mode SWA doesn\u2019t change the learning rate, so you can use any schedule you want as you would normally do with any other torch.optim.Optimizer.\nWhy does it work?\nSGD converges to a solution within a wide flat region of loss. The weight space is extremely high-dimensional, and most of the volume of the flat region is concentrated near the boundary, so SGD solutions will always be found near the boundary of the flat region of the loss. SWA on the other hand averages multiple SGD solutions, which allows it to move towards the center of the flat region.", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "We expect solutions that are centered in the flat region of the loss to generalize better than those near the boundary. Indeed, train and test error surfaces are not perfectly aligned in the weight space. Solutions that are centered in the flat region are not as susceptible to the shifts between train and test error surfaces as those near the boundary. In Figure 4 below we show the train loss and test error surfaces along the direction connecting the SWA and SGD solutions. As you can see, while SWA solution has a higher train loss compared to the SGD solution, it is centered in the region of low loss, and has a substantially better test error.\n\n\n", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "\nFigure 4. Train loss and test error along the line connecting the SWA solution (circle) and SGD solution (square). SWA solution is centered in a wide region of low train loss while the SGD solution lies near the boundary. Because of the shift between train loss and test error surfaces, SWA solution leads to much better generalization.\nExamples and Results\nWe released a GitHub repo here with examples of using the torchcontrib implementation of SWA for training DNNs. 
For example, these examples can be used to achieve the following results on CIFAR-100:\n\n| DNN (Budget) | SGD | SWA 1 Budget | SWA 1.25 Budgets | SWA 1.5 Budgets |\n| --- | --- | --- | --- | --- |\n| VGG16 (200) | 72.55 \u00b1 0.10 | 73.91 \u00b1 0.12 | 74.17 \u00b1 0.15 | 74.27 \u00b1 0.25 |\n| PreResNet110 (150) | 76.77 \u00b1 0.38 | 78.75 \u00b1 0.16 | 78.91 \u00b1 0.29 | 79.10 \u00b1 0.21 |\n", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "| PreResNet164 (150) | 78.49 \u00b1 0.36 | 79.77 \u00b1 0.17 | 80.18 \u00b1 0.23 | 80.35 \u00b1 0.16 |\n| WideResNet28x10 (200) | 80.82 \u00b1 0.23 | 81.46 \u00b1 0.23 | 81.91 \u00b1 0.27 | 82.15 \u00b1 0.27 |\nSemi-Supervised Learning\nIn a follow-up paper SWA was applied to semi-supervised learning, where it illustrated improvements beyond the best reported results in multiple settings. For example, with SWA you can get 95% accuracy on CIFAR-10 if you only have the training labels for 4k training data points (the previous best reported result on this problem was 93.7%). This paper also explores averaging multiple times within epochs, which can accelerate convergence and find still flatter solutions in a given time.\n\nFigure 5. Performance of fast-SWA on semi-supervised learning with CIFAR-10. fast-SWA achieves record results in every setting considered.", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "Calibration and Uncertainty Estimates\nSWA-Gaussian (SWAG) is a simple, scalable and convenient approach to uncertainty estimation and calibration in Bayesian deep learning. Similarly to SWA, which maintains a running average of SGD iterates, SWAG estimates the first and second moments of the iterates to construct a Gaussian distribution over weights. SWAG distribution approximates the shape of the true posterior: Figure 6 below shows the SWAG distribution on top of the posterior log-density for PreResNet-164 on CIFAR-100.\n\nFigure 6. SWAG distribution on top of posterior log-density for PreResNet-164 on CIFAR-100. The shape of SWAG distribution is aligned with the posterior.", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "Empirically, SWAG performs on par or better than popular alternatives including MC dropout, KFAC Laplace, and temperature scaling on uncertainty quantification, out-of-distribution detection, calibration and transfer learning in computer vision tasks. Code for SWAG is available here.\nReinforcement Learning\nIn another follow-up paper SWA was shown to improve the performance of policy gradient methods A2C and DDPG on several Atari games and MuJoCo environments.\n\n| Environment | A2C | A2C + SWA |\n| --- | --- | --- |\n| Breakout | 522 \u00b1 34 | 703 \u00b1 60 |\n| Qbert | 18777 \u00b1 778 | 21272 \u00b1 655 |\n| SpaceInvaders | 7727 \u00b1 1121 | 21676 \u00b1 8897 |\n| Seaquest | 1779 \u00b1 4 | 1795 \u00b1 4 |\n| CrazyClimber | 147030 \u00b1 10239 | 139752 \u00b1 11618 |\n| BeamRider | 9999 \u00b1 402 | 11321 \u00b1 1065 |\n", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "Low Precision Training\nWe can filter through quantization noise by combining weights that have been rounded down with weights that have been rounded up. Moreover, by averaging weights to find a flat region of the loss surface, large perturbations of the weights will not affect the quality of the solution (Figures 7 and 8). 
Recent work shows that by adapting SWA to the low precision setting, in a method called SWALP, one can match the performance of full-precision SGD even with all training in 8 bits [5]. This is quite a practically important result, given that (1) SGD training in 8 bits performs notably worse than full precision SGD, and (2) low precision training is significantly harder than predictions in low precision after training (the usual setting). For example, a ResNet-164 trained on CIFAR-100 with float (16-bit) SGD achieves 22.2% error, while 8-bit SGD achieves 24.0% error. By contrast, SWALP with 8 bit training achieves 21.8% error.\n", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\nFigure 7. Quantizing in a flat region can still provide solutions with low loss.\n\n\n\nFigure 8. Low precision SGD training (with a modified learning rate schedule) and SWALP.\nConclusion", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "Conclusion\nOne of the greatest open questions in deep learning is why SGD manages to find good solutions, given that the training objectives are highly multimodal, and there are in principle many settings of parameters that achieve no training loss but poor generalization. By understanding geometric features such as flatness, which relate to generalization, we can begin to resolve these questions and build optimizers that provide even better generalization, and many other useful features, such as uncertainty representation. We have presented SWA, a simple drop-in replacement for standard SGD, which can in principle benefit anyone training a deep neural network. SWA has been demonstrated to have strong performance in a number of areas, including computer vision, semi-supervised learning, reinforcement learning, uncertainty representation, calibration, Bayesian model averaging, and low precision training.", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "We encourage you try out SWA! Using SWA is now as easy as using any other optimizer in PyTorch. 
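In the auto mode this amounts to a thin wrapper around your existing optimizer. Below is a minimal sketch (it assumes torchcontrib is installed and that model, loss_fn, input and target are defined as in the manual-mode example above):

```python
import torch
import torchcontrib

base_opt = torch.optim.SGD(model.parameters(), lr=0.1)
# Start averaging after step 10, update the running average every 5 steps,
# and switch to a constant learning rate of 0.05 once averaging begins.
opt = torchcontrib.optim.SWA(base_opt, swa_start=10, swa_freq=5, swa_lr=0.05)

for i in range(100):
    opt.zero_grad()
    loss_fn(model(input), target).backward()
    opt.step()
opt.swap_swa_sgd()  # copy the averaged weights into the model before evaluation
```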
And even if you have already trained your model with SGD (or any other optimizer), it\u2019s very easy to realize the benefits of SWA by running SWA for a small number of epochs starting with a pre-trained model.\n\n[1] Averaging Weights Leads to Wider Optima and Better Generalization; Pavel Izmailov, Dmitry Podoprikhin, Timur Garipov, Dmitry Vetrov, Andrew Gordon Wilson; Uncertainty in Artificial Intelligence (UAI), 2018\n[2] There Are Many Consistent Explanations of Unlabeled Data: Why You Should Average; Ben Athiwaratkun, Marc Finzi, Pavel Izmailov, Andrew Gordon Wilson; International Conference on Learning Representations (ICLR), 2019\n[3] Improving Stability in Deep Reinforcement Learning with Weight Averaging; Evgenii Nikishin, Pavel Izmailov, Ben Athiwaratkun, Dmitrii Podoprikhin, Timur Garipov, Pavel Shvechikov, Dmitry Vetrov, Andrew Gordon Wilson, UAI 2018 Workshop: Uncertainty in Deep Learning, 2018\n", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "\n[4] A Simple Baseline for Bayesian Uncertainty in Deep Learning, Wesley Maddox, Timur Garipov, Pavel Izmailov, Andrew Gordon Wilson, arXiv pre-print, 2019: https://arxiv.org/abs/1902.02476\n[5] SWALP : Stochastic Weight Averaging in Low Precision Training, Guandao Yang, Tianyi Zhang, Polina Kirichenko, Junwen Bai, Andrew Gordon Wilson, Christopher De Sa, To appear at the International Conference on Machine Learning (ICML), 2019.\n[6] David Ruppert. Efficient estimations from a slowly convergent Robbins-Monro process. Technical report, Cornell University Operations Research and Industrial Engineering, 1988.\n[7] Acceleration of stochastic approximation by averaging. Boris T Polyak and Anatoli B Juditsky. SIAM Journal on Control and Optimization, 30(4):838\u2013855, 1992.\n", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "\n[8] Loss Surfaces, Mode Connectivity, and Fast Ensembling of DNNs, Timur Garipov, Pavel Izmailov, Dmitrii Podoprikhin, Dmitry Vetrov, Andrew Gordon Wilson. Neural Information Processing Systems (NeurIPS), 2018\n", "source": "https://pytorch.org/blog/stochastic-weight-averaging-in-pytorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Introducing the PlayTorch app: Rapidly Create Mobile AI Experiences\"\nauthor: PlayTorch Team\nfeatured-img: \"\"\n\n\n\n\n\n\nIn December, we announced PyTorch Live, a toolkit for building AI-powered mobile prototypes in minutes. The initial release included a command-line interface to set up a development environment and an SDK for building AI-powered experiences in React Native. Today, we're excited to share that PyTorch Live will now be known as PlayTorch. This new release provides an improved and simplified developer experience. PlayTorch development is independent from the PyTorch project and the PlayTorch code repository is moving into the Meta Research GitHub organization.\nA New Workflow: The PlayTorch App", "source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"} {"text": "A New Workflow: The PlayTorch App\nThe PlayTorch team is excited to announce that we have partnered with Expo to change the way AI powered mobile experiences are built. Our new release simplifies the process of building mobile AI experiences by eliminating the need for a complicated development environment. 
You will now be able to build cross platform AI powered prototypes from the very browser you are using to read this blog.\nIn order to make this happen, we are releasing the PlayTorch app which is able to run AI-powered experiences built in the Expo Snack web based code editor.\n\n\n", "source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"} {"text": "\nThe PlayTorch app can be downloaded from the Apple App Store and Google Play Store. With the app installed, you can head over to playtorch.dev/snack and write the code for your AI-powered PlayTorch Snack. When you want to try what you\u2019ve built, you can use the PlayTorch app\u2019s QR code scanner to scan the QR code on the Snack page and load the code to your device.\nNOTE: PlayTorch Snacks will not work in the Expo Go app.\nMore to Explore in the PlayTorch App\nAI Demos\nThe PlayTorch app comes with several examples of how you can build AI powered experiences with a variety of different machine learning models from object detection to natural language processing. See what can be built with the PlayTorch SDK and be inspired to make something of your own as you play with the examples.\n\n\n\nSharing Your Creations", "source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"} {"text": "\nSharing Your Creations\nAny PlayTorch Snack that you run in the PlayTorch app can be shared with others in an instant. When they open the link on their device, the PlayTorch app will instantly load what you\u2019ve built from the cloud so they can experience it first hand.\n\n\n\nWhen you have something you want to share, let us know on Discord or Twitter or embed the PlayTorch Snack on your own webpage.\nSDK Overhaul\nWe learned a lot from the community after our initial launch in December and have been hard at work over the past several months to make the PlayTorch SDK (formerly known as PyTorch Live) simple, performant, and robust. In our initial version, the SDK relied on config files to define how a model ingested and output data.", "source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"} {"text": "Today, we are happy to announce the next version of our SDK can handle data processing in JavaScript for your prototypes with the new PlayTorch API that leverages the JavaScript Interface (JSI) to directly call C++ code. Not only have we completely redone the way you can interact with models, but we have also greatly expanded the variety of supported model architectures.\nA New Data Processing API for Prototyping\nWith this JSI API, we now allow users direct access to tensors (data format for machine learning). Instead of only having access to predefined transformations, you can now manipulate tensors however you would like for your prototypes.\n\n\n\nNo more switching back and forth between code and config. 
You will now be able to write everything in JavaScript and have access to all of the type annotations and autocomplete features available to you in those languages.", "source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"} {"text": "Check out our tutorials to see the new Data Processing API in action, take a deeper dive in the API docs, or inspect the code yourself on GitHub.\nExpanded Use Cases\nWith the new version of the SDK, we have added support for several cutting edge models.\n\n\n\nImage-to-image transformations are now supported thanks to our robust JSI API, so you can see what your world would look like if it were an anime.\n\n\n\nTranslate French to English with an AI powered translator using the Seq2Seq model.\n\n\n\nUse DeepLab V3 to segment images!\nStart Playing", "source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"} {"text": "Start Playing\nIf you want to start creating AI experiences yourself, head over to playtorch.dev and try out our tutorials. Each tutorial will guide you through building a simple AI powered experience that you can instantly run on your phone and share with others.\nHow to Get Support\nJoin us on Discord, collaborate with us on GitHub, or follow us on Twitter. Got questions or feedback? We\u2019d love to hear from you!", "source": "https://pytorch.org/blog/introducing-the-playtorch-app/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Overview of PyTorch Autograd Engine'\nauthor: Preferred Networks, Inc.\n\nThis blog post is based on PyTorch version 1.8, although it should apply for older versions too, since most of the mechanics have remained constant.\nTo help understand the concepts explained here, it is recommended that you read the awesome blog post by @ezyang: PyTorch internals if you are not familiar with PyTorch architecture components such as ATen or c10d.\nWhat is autograd?\nBackground", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "What is autograd?\nBackground\nPyTorch computes the gradient of a function with respect to the inputs by using automatic differentiation. Automatic differentiation is a technique that, given a computational graph, calculates the gradients of the inputs. Automatic differentiation can be performed in two different ways; forward and reverse mode. Forward mode means that we calculate the gradients along with the result of the function, while reverse mode requires us to evaluate the function first, and then we calculate the gradients starting from the output. While both modes have their pros and cons, the reverse mode is the de-facto choice since the number of outputs is smaller than the number of inputs, which allows a much more efficient computation. Check [3] to learn more about this.\nAutomatic differentiation relies on a classic calculus formula known as the chain-rule. The chain rule allows us to calculate very complex derivatives by splitting them and recombining them later.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "Formally speaking, given a composite function , we can calculate its derivative as . 
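In standard notation, the composite function is $h(x) = f(g(x))$ and its derivative is

$$h'(x) = f'(g(x))\, g'(x).$$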
This result is what makes automatic differentiation work.\nBy combining the derivatives of the simpler functions that compose a larger one, such as a neural network, it is possible to compute the exact value of the gradient at a given point rather than relying on the numerical approximation, which would require multiple perturbations in the input to obtain a value.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "To get the intuition of how the reverse mode works, let\u2019s look at a simple function . Figure 1 shows its computational graph where the inputs x, y in the left, flow through a series of operations to generate the output z.\n\n\nFigure 1: Computational graph of f(x, y) = log(x*y)\n\nThe automatic differentiation engine will normally execute this graph. It will also extend it to calculate the derivatives of w with respect to the inputs x, y, and the intermediate result v.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "The example function can be decomposed in f and g, where and . Every time the engine executes an operation in the graph, the derivative of that operation is added to the graph to be executed later in the backward pass. Note, that the engine knows the derivatives of the basic functions.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "In the example above, when multiplying x and y to obtain v, the engine will extend the graph to calculate the partial derivatives of the multiplication by using the multiplication derivative definition that it already knows. and . The resulting extended graph is shown in Figure 2, where the MultDerivative node also calculates the product of the resulting gradients by an input gradient to apply the chain rule; this will be explicitly seen in the following operations. Note that the backward graph (green nodes) will not be executed until all the forward steps are completed.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "\n\nFigure 2: Computational graph extended after executing the logarithm\n", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "Continuing, the engine now calculates the operation and extends the graph again with the log derivative that it knows to be . This is shown in figure 3. This operation generates the result that when propagated backward and multiplied by the multiplication derivative as in the chain rule, generates the derivatives , .", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "\n\nFigure 3: Computational graph extended after executing the logarithm\n\nThe original computation graph is extended with a new dummy variable z that is the same w. The derivative of z with respect to w is 1 as they are the same variable, this trick allows us to apply the chain rule to calculate the derivatives of the inputs. After the forward pass is complete, we start the backward pass, by supplying the initial value of 1.0 for . 
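Written out for this example, with $v = x y$ and $w = \log v$, the pieces the engine records are

$$\frac{\partial v}{\partial x} = y, \qquad \frac{\partial v}{\partial y} = x, \qquad \frac{dw}{dv} = \frac{1}{v},$$

so that by the chain rule

$$\frac{\partial w}{\partial x} = \frac{y}{x y} = \frac{1}{x}, \qquad \frac{\partial w}{\partial y} = \frac{1}{y},$$

and the seed supplied for the dummy output $z = w$ is $\frac{dz}{dw} = 1$.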
This is shown in Figure 4.\n\n", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "Figure 4: Computational graph extended for reverse auto differentiation\n", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "Then following the green graph we execute the LogDerivative operation that the auto differentiation engine introduced, and multiply its result by to obtain the gradient as per the chain rule states. Next, the multiplication derivative is executed in the same way, and the desired derivatives are finally obtained.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "Formally, what we are doing here, and PyTorch autograd engine also does, is computing a Jacobian-vector product (Jvp) to calculate the gradients of the model parameters, since the model parameters and inputs are vectors.\nThe Jacobian-vector product\nWhen we calculate the gradient of a vector-valued function (a function whose inputs and outputs are vectors), we are essentially constructing a Jacobian matrix .", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "Thanks to the chain rule, multiplying the Jacobian matrix of a function by a vector with the previously calculated gradients of a scalar function results in the gradients of the scalar output with respect to the vector-valued function inputs.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "As an example, let\u2019s look at some functions in python notation to show how the chain rule applies.\n\n\ndef f(x1, x2):\n a = x1 * x2\n y1 = log(a)\n y2 = sin(x2)\n return (y1, y2)\n \n\ndef g(y1, y2):\n return y1 * y2\n \n", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "return y1 * y2\n \n\nNow, if we derive this by hand using the chain rule and the definition of the derivatives, we obtain the following set of identities that we can directly plug into the Jacobian matrix of \n", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "\n", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "\n", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "\n\nNext, let\u2019s consider the gradients for the scalar function \n\n", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "\n\nIf we now calculate the transpose-Jacobian vector product obeying the chain rule, we obtain the following expression:\n", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "\nEvaluating the Jvp for yields the result:\n\nWe can execute the same expression in PyTorch and calculate the gradient of the input:\n\n>>> import torch\n>>> x = torch.tensor([0.5, 0.75], requires_grad=True)\n>>> y = torch.log(x[0] * x[1]) * torch.sin(x[1])\n>>> y.backward(1.0)\n>>> x.grad\n tensor([1.3633,\n 0.1912])", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": 
"tensor([1.3633,\n 0.1912])\n\nThe result is the same as our hand-calculated Jacobian-vector product!\nHowever, PyTorch never constructed the matrix as it could grow prohibitively large but instead, created a graph of operations that traversed backward while applying the Jacobian-vector products defined in tools/autograd/derivatives.yaml.\nGoing through the graph\nEvery time PyTorch executes an operation, the autograd engine constructs the graph to be traversed backward.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "The reverse mode auto differentiation starts by adding a scalar variable at the end so that as we saw in the introduction. This is the initial gradient value that is supplied to the Jvp engine calculation as we saw in the section above.\nIn PyTorch, the initial gradient is explicitly set by the user when he calls the backward method.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "Then, the Jvp calculation starts but it never constructs the matrix. Instead, when PyTorch records the computational graph, the derivatives of the executed forward operations are added (Backward Nodes). Figure 5 shows a backward graph generated by the execution of the functions and seen before.\n\n\nFigure 5: Computational Graph extended with the backward pass\n", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "\nOnce the forward pass is done, the results are used in the backward pass where the derivatives in the computational graph are executed. The basic derivatives are stored in the tools/autograd/derivatives.yaml file and they are not regular derivatives but the Jvp versions of them [3]. They take their primitive function inputs and outputs as parameters along with the gradient of the function outputs with respect to the final outputs. By repeatedly multiplying the resulting gradients by the next Jvp derivatives in the graph, the gradients up to the inputs will be generated following the chain rule.\n\n\nFigure 6: How the chain rule is applied in backward differentiation\n", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "\nFigure 6 represents the process by showing the chain rule. We started with a value of 1.0 as detailed before which is the already calculated gradient highlighted in green. And we move to the next node in the graph. The backward function registered in derivatives.yaml will calculate the associated", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": " value highlighted in red and multiply it by . By the chain rule this results in which will be the already calculated gradient (green) when we process the next backward node in the graph.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "You may also have noticed that in Figure 5 there is a gradient generated from two different sources. 
When two different functions share an input, the gradients with respect to the output are aggregated for that input, and calculations using that gradient can\u2019t proceed unless all the paths have been aggregated together.\nLet\u2019s see an example of how the derivatives are stored in PyTorch.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "Suppose that we are currently processing the backward propagation of the function, in the LogBackward node in Figure 2. The derivative of in derivatives.yaml is specified as grad.div(self.conj()). grad is the already calculated gradient and self.conj() is the complex conjugate of the input vector. For complex numbers PyTorch calculates a special derivative called the conjugate Wirtinger derivative [6]. This derivative takes the complex number and its conjugate and by operating some magic that is described in [6], they are the direction of steepest descent when plugged into optimizers.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "This code translates to , the corresponding green, and red squares in Figure 3. Continuing, the autograd engine will execute the next operation; backward of the multiplication. As before, the inputs are the original function\u2019s inputs and the gradient calculated from the backward step. This step will keep repeating until we reach the gradient with respect to the inputs and the computation will be finished. The gradient of is only completed once the multiplication and sin gradients are added together. As you can see, we computed the equivalent of the Jvp but without constructing the matrix.", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "In the next post we will dive inside PyTorch code to see how this graph is constructed and where are the relevant pieces should you want to experiment with it!\nReferences\n\nhttps://pytorch.org/tutorials/beginner/blitz/autograd_tutorial.html\nhttps://web.stanford.edu/class/cs224n/readings/gradient-notes.pdf\nhttps://www.cs.toronto.edu/~rgrosse/courses/csc321_2018/slides/lec10.pdf\nhttps://mustafaghali11.medium.com/how-pytorch-backward-function-works-55669b3b7c62", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "https://indico.cern.ch/event/708041/contributions/3308814/attachments/1813852/2963725/automatic_differentiation_and_deep_learning.pdf\nhttps://pytorch.org/docs/stable/notes/autograd.html#complex-autograd-doc\nRecommended: shows why the backprop is formally expressed with the Jacobian\nhttps://cs.ubc.ca/~fwood/CS340/lectures/AD1.pdf\n", "source": "https://pytorch.org/blog/overview-of-pytorch-autograd-engine/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Case Study: Amazon Ads Uses PyTorch and AWS Inferentia to Scale Models for Ads Processing\"\nauthor: Yashal Kanungo \u2013 Applied Scientist, Kamran Khan - Sr. Technical Product Manager, Shubha Kumbadakone \u2013 Sr. Specialist, ML Frameworks\nfeatured-img: \"\"\n\nAmazon Ads uses PyTorch, TorchServe, and AWS Inferentia to reduce inference costs by 71% and drive scale out.\nAmazon Ads helps companies build their brand and connect with shoppers through ads shown both within and beyond Amazon\u2019s store, including websites, apps, and streaming TV content in more than 15 countries. 
Businesses and brands of all sizes, including registered sellers, vendors, book vendors, Kindle Direct Publishing (KDP) authors, app developers, and agencies can upload their own ad creatives, which can include images, video, audio, and, of course, products sold on Amazon.\n\n\n", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "\nTo promote an accurate, safe, and pleasant shopping experience, these ads must comply with content guidelines. For example, ads cannot flash on and off, products must be featured in an appropriate context, and images and text should be appropriate for a general audience. To help ensure that ads meet the required policies and standards, we needed to develop scalable mechanisms and tools.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "As a solution, we used machine learning (ML) models to surface ads that might need revision. As deep neural networks flourished over the past decade, our data science team began exploring more versatile deep learning (DL) methods capable of processing text, images, audio, or video with minimal human intervention. To that end, we\u2019ve used PyTorch to build computer vision (CV) and natural language processing (NLP) models that automatically flag potentially non-compliant ads. PyTorch is intuitive, flexible, and user-friendly, and has made our transition to using DL models seamless. Deploying these new models on AWS Inferentia-based Amazon EC2 Inf1 instances, rather than on GPU-based instances, reduced our inference latency by 30 percent and our inference costs by 71 percent for the same workloads.\nTransition to deep learning", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "Transition to deep learning\nOur ML systems paired classical models with word embeddings to evaluate ad text. But our requirements evolved, and as the volume of submissions continued to expand, we needed a method nimble enough to scale along with our business. In addition, our models must be fast and serve ads within milliseconds to provide an optimal customer experience.\nOver the last decade, DL has become very popular in numerous domains, including natural language, vision, and audio. Because deep neural networks channel data sets through many layers \u2014 extracting progressively higher-level features \u2014 they can make more nuanced inferences than classical ML models. Rather than simply detecting prohibited language, for example, a DL model can reject an ad for making false claims.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "In addition, DL techniques are transferable\u2013 a model trained for one task can be adapted to carry out a related task. For instance, a pre-trained neural network can be optimized to detect objects in images and then fine-tuned to identify specific objects that are not allowed to be displayed in an ad.\nDeep neural networks can automate two of classical ML\u2019s most time-consuming steps: feature engineering and data labeling. Unlike traditional supervised learning approaches, which require exploratory data analysis and hand-engineered features, deep neural networks learn the relevant features directly from the data. DL models can also analyze unstructured data, like text and images, without the preprocessing necessary in ML. 
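As a sketch of that transfer-learning pattern (the label count and variable names here are placeholders, not our production setup), a pre-trained torchvision backbone can be re-headed for a policy-classification task in a few lines:

```python
import torch
import torchvision

num_policy_classes = 5  # placeholder: number of ad-policy labels

# Start from an ImageNet pre-trained backbone, freeze it,
# and replace the classification head for the new task.
model = torchvision.models.resnet50(pretrained=True)
for p in model.parameters():
    p.requires_grad = False
model.fc = torch.nn.Linear(model.fc.in_features, num_policy_classes)

# Only the new head is updated during fine-tuning.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```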
Deep neural networks scale effectively with more data and perform especially well in applications involving large data sets.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "We chose PyTorch to develop our models because it helped us maximize the performance of our systems. With PyTorch, we can serve our customers better while taking advantage of Python\u2019s most intuitive concepts. The programming in PyTorch is object-oriented: it groups processing functions with the data they modify. As a result, our codebase is modular, and we can reuse pieces of code in different applications. In addition, PyTorch\u2019s eager mode allows loops and control structures and, therefore, more complex operations in the model. Eager mode makes it easy to prototype and iterate upon our models, and we can work with various data structures. This flexibility helps us update our models quickly to meet changing business requirements.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "\u201cBefore this, we experimented with other frameworks that were \u201cPythonic,\u201d but PyTorch was the clear winner for us here.\u201d said Yashal Kanungo, Applied Scientist. \u201cUsing PyTorch was easy because the structure felt native to Python programming, which the data scientists were very familiar with\u201d.\nTraining pipeline\nToday, we build our text models entirely in PyTorch. To save time and money, we often skip the early stages of training by fine-tuning a pre-trained NLP model for language analysis. If we need a new model to evaluate images or video, we start by browsing PyTorch\u2019s torchvision library, which offers pretrained options for image and video classification, object detection, instance segmentation, and pose estimation. For specialized tasks, we build a custom model from the ground up. PyTorch is perfect for this, because eager mode and the user-friendly front end make it easy to experiment with different architectures.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "To learn how to finetune neural networks in PyTorch, head to this tutorial.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "Before we begin training, we optimize our model\u2019s hyperparameters, the variables that define the network architecture (for example, the number of hidden layers) and training mechanics (such as learning rate and batch size). Choosing appropriate hyperparameter values is essential, because they will shape the training behavior of the model. We rely on the Bayesian search feature in SageMaker, AWS\u2019s ML platform, for this step. Bayesian search treats hyperparameter tuning as a regression problem: It proposes the hyperparameter combinations that are likely to produce the best results and runs training jobs to test those values. After each trial, a regression algorithm determines the next set of hyperparameter values to test, and performance improves incrementally.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "We prototype and iterate upon our models using SageMaker Notebooks. Eager mode lets us prototype models quickly by building a new computational graph for each training batch; the sequence of operations can change from iteration to iteration to accommodate different data structures or to jibe with intermediate results. 
That frees us to adjust the network during training without starting over from scratch. These dynamic graphs are particularly valuable for recursive computations based on variable sequence lengths, such as the words, sentences, and paragraphs in an ad that are analyzed with NLP.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "When we\u2019ve finalized the model architecture, we deploy training jobs on SageMaker. PyTorch helps us develop large models faster by running numerous training jobs at the same time. PyTorch\u2019s Distributed Data Parallel (DDP) module replicates a single model across multiple interconnected machines within SageMaker, and all the processes run forward passes simultaneously on their own unique portion of the data set. During the backward pass, the module averages the gradients of all the processes, so each local model is updated with the same parameter values.\nModel deployment pipeline\nWhen we deploy the model in production, we want to ensure lower inference costs without impacting prediction accuracy. Several PyTorch features and AWS services have helped us address the challenge.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "The flexibility of a dynamic graph enriches training, but in deployment we want to maximize performance and portability. An advantage of developing NLP models in PyTorch is that out of the box, they can be traced into a static sequence of operations by TorchScript, a subset of Python specialized for ML applications. Torchscript converts PyTorch models to a more efficient, production-friendly intermediate representation (IR) graph that is easily compiled. We run a sample input through the model, and TorchScript records the operations executed during the forward pass. The resulting IR graph can run in high-performance environments, including C++ and other multithreaded Python-free contexts, and optimizations such as operator fusion can speed up the runtime.\nNeuron SDK and AWS Inferentia powered compute", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "Neuron SDK and AWS Inferentia powered compute\nWe deploy our models on Amazon EC2 Inf1 instances powered by AWS Inferentia, Amazon's first ML silicon designed to accelerate deep learning inference workloads. Inferentia has shown to reduce inference costs by up to 70% compared to Amazon EC2 GPU-based instances.\nWe used the AWS Neuron SDK \u2014 a set of software tools used with Inferentia \u2014 to compile and optimize our models for deployment on EC2 Inf1 instances.\nThe code snippet below shows how to compile a Hugging Face BERT model with Neuron. 
Like torch.jit.trace(), neuron.trace() records the model\u2019s operations on an example input during the forward pass to build a static IR graph.\n```python\nimport torch\nfrom transformers import BertModel, BertTokenizer\nimport torch.neuron\ntokenizer = BertTokenizer.from_pretrained(\"path to saved vocab\")", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "model = BertModel.from_pretrained(\"path to the saved model\", returned_dict=False)\ninputs = tokenizer (\"sample input\", return_tensor=\"pt\")\nneuron_model = torch.neuron.trace(model,\n example_inputs = (inputs['input_ids'], inputs['attention_mask']),\n verbose = 1)\noutput = neuron_model(*(inputs['input_ids'], inputs['attention_mask']))\n```\nAutocasting and recalibration\nUnder the hood, Neuron optimizes our models for performance by autocasting them to a smaller data type. As a default, most applications represent neural network values in the 32-bit single-precision floating point (FP32) number format. Autocasting the model to a 16-bit format \u2014 half-precision floating point (FP16) or Brain Floating Point (BF16) \u2014 reduces a model\u2019s memory footprint and execution time. In our case, we decided to use FP16 to optimize for performance while maintaining high accuracy.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "Autocasting to a smaller data type can, in some cases, trigger slight differences in the model\u2019s predictions. To ensure that the model\u2019s accuracy is not affected, Neuron compares the performance metrics and predictions of the FP16 and FP32 models. When autocasting diminishes the model\u2019s accuracy, we can tell the Neuron compiler to convert only the weights and certain data inputs to FP16, keeping the rest of the intermediate results in FP32. In addition, we often run a few iterations with the training data to recalibrate our autocasted models. This process is much less intensive than the original training.\nDeployment", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "Deployment\nTo analyze multimedia ads, we run an ensemble of DL models. All ads uploaded to Amazon are run through specialized models that assess every type of content they include: images, video and audio, headlines, texts, backgrounds, and even syntax, grammar, and potentially inappropriate language. The signals we receive from these models indicate whether or not an advertisement complies with our criteria.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "Deploying and monitoring multiple models is significantly complex, so we depend on TorchServe, SageMaker\u2019s default PyTorch model serving library. Jointly developed by Facebook\u2019s PyTorch team and AWS to streamline the transition from prototyping to production, TorchServe helps us deploy trained PyTorch models at scale without having to write custom code. It provides a secure set of REST APIs for inference, management, metrics, and explanations. With features such as multi-model serving, model versioning, ensemble support, and automatic batching, TorchServe is ideal for supporting our immense workload. 
You can read more about deploying your Pytorch models on SageMaker with native TorchServe integration in this blog post.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "In some use cases, we take advantage of PyTorch\u2019s object-oriented programming paradigm to wrap multiple DL models into one parent object \u2014 a PyTorch nn.Module \u2014 and serve them as a single ensemble. In other cases, we use TorchServe to serve individual models on separate SageMaker endpoints, running on AWS Inf1 instances.\nCustom handlers\nWe particularly appreciate that TorchServe allows us to embed our model initialization, preprocessing, inferencing, and post processing code in a single Python script, handler.py, which lives on the server. This script \u2014 the handler \u2014preprocesses the un-labeled data from an ad, runs that data through our models, and delivers the resulting inferences to downstream systems. TorchServe provides several default handlers that load weights and architecture and prepare the model to run on a particular device. We can bundle all the additional required artifacts, such as vocabulary files or label maps, with the model in a single archive file.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "When we need to deploy models that have complex initialization processes or that originated in third-party libraries, we design custom handlers in TorchServe. These let us load any model, from any library, with any required process. The following snippet shows a simple handler that can serve Hugging Face BERT models on any SageMaker hosting endpoint instance.\n```python\nimport torch\nimport torch.neuron\nfrom ts.torch_handler.base_handler import BaseHandler\nimport transformers\nfrom transformers import AutoModelForSequenceClassification,AutoTokenizer\nclass MyModelHandler(BaseHandler):\n def initialize(self, context):\n self.manifest = ctx.manifest\n properties = ctx.system_properties\n model_dir = properties.get(\"model_dir\")\n serialized_file = self.manifest[\"model\"][\"serializedFile\"]\n model_pt_path = os.path.join(model_dir, serialized_file)\n self.tokenizer = AutoTokenizer.from_pretrained(\n model_dir, do_lower_case=True\n )\n", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": ")\n self.model = AutoModelForSequenceClassification.from_pretrained(\n model_dir\n )\ndef preprocess(self, data):\n\n input_text = data.get(\"data\")\n if input_text is None:\n input_text = data.get(\"body\")\n inputs = self.tokenizer.encode_plus(input_text, max_length=int(max_length), pad_to_max_length=True, add_special_tokens=True, return_tensors='pt')\n return inputs\n\ndef inference(self,inputs):\n predictions = self.model(**inputs)\n return predictions\n\ndef postprocess(self, output):\n return output\n\n```\nBatching", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "return output\n```\nBatching\nHardware accelerators are optimized for parallelism, and batching \u2014 feeding a model multiple inputs in a single step \u2014 helps saturate all available capacity, typically resulting in higher throughputs. Excessively high batch sizes, however, can increase latency with minimal improvement in throughputs. Experimenting with different batch sizes helps us identify the sweet spot for our models and hardware accelerator. 
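A sketch of such a sweep is below; it is illustrative only, reuses the neuron_model and inputs from the traced BERT example above, and assumes the compiled model accepts these batch sizes (otherwise the model would be re-traced at each size):

```python
import time

def sweep_batch_sizes(model, input_ids, attention_mask, batch_sizes=(1, 2, 4, 8, 16)):
    # Time repeated forward passes at each batch size to expose the
    # throughput/latency trade-off for a given model and instance type.
    for bs in batch_sizes:
        ids = input_ids.repeat(bs, 1)
        mask = attention_mask.repeat(bs, 1)
        start = time.perf_counter()
        for _ in range(100):
            model(ids, mask)
        elapsed = time.perf_counter() - start
        print(f"batch={bs}: {100 * bs / elapsed:.1f} inferences/s, "
              f"{1000 * elapsed / 100:.2f} ms per batch")

sweep_batch_sizes(neuron_model, inputs["input_ids"], inputs["attention_mask"])
```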
We run experiments to determine the best batch size for our model size, payload size, and request traffic patterns.\nThe Neuron compiler now supports variable batch sizes. Previously, tracing a model hardcoded the predefined batch size, so we had to pad our data, which can waste compute, slow throughputs, and exacerbate latency. Inferentia is optimized to maximize throughput for small batches, reducing latency by easing the load on the system.\nParallelism", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "Parallelism\nModel parallelism on multi-cores also improves throughput and latency, which is crucial for our heavy workloads. Each Inferentia chip contains four NeuronCores that can either run separate models simultaneously or form a pipeline to stream a single model. In our use case, the data parallel configuration offers the highest throughput at the lowest cost, because it scales out concurrent processing requests.\nData Parallel:\n\n\n\nModel Parallel:\n\n\n\nMonitoring\nIt is critical that we monitor the accuracy of our inferences in production. Models that initially make good predictions can eventually degrade in deployment as they are exposed to a wider variety of data. This phenomenon, called model drift, usually occurs when the input data distributions or the prediction targets change.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "We use SageMaker Model Monitor to track parity between the training and production data. Model Monitor notifies us when predictions in production begin to deviate from the training and validation results. Thanks to this early warning, we can restore accuracy \u2014 by retraining the model if necessary \u2014 before our advertisers are affected. To track performance in real time, Model Monitor also sends us metrics about the quality of predictions, such as accuracy, F-scores, and the distribution of the predicted classes.\nTo determine if our application needs to scale, TorchServe logs resource utilization metrics for the CPU, Memory, and Disk at regular intervals; it also records the number of requests received versus the number served. For custom metrics, TorchServe offers a Metrics API.\nA rewarding result", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "A rewarding result\nOur DL models, developed in PyTorch and deployed on Inferentia, sped up our ads analysis while cutting costs. Starting with our first explorations in DL, programming in PyTorch felt natural. Its user-friendly features helped smooth the course from our early experiments to the deployment of our multimodal ensembles. PyTorch lets us prototype and build models quickly, which is vital as our advertising service evolves and expands. For an added benefit, PyTorch works seamlessly with Inferentia and our AWS ML stack. We look forward to building more use cases with PyTorch, so we can continue to serve our clients accurate, real-time results.", "source": "https://pytorch.org/blog/amazon-ads-case-study/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch feature classification changes'\nauthor: Team PyTorch\n\nTraditionally features in PyTorch were classified as either stable or experimental with an implicit third option of testing bleeding edge features by building master or through installing nightly builds (available via prebuilt whls). 
This has, in a few cases, caused some confusion around the level of readiness, commitment to the feature and backward compatibility that can be expected from a user perspective. Moving forward, we\u2019d like to better classify the 3 types of features as well as define explicitly here what each mean from a user perspective.\nNew Feature Designations\nWe will continue to have three designations for features but, as mentioned, with a few changes: Stable, Beta (previously Experimental) and Prototype (previously Nightlies). Below is a brief description of each and a comment on the backward compatibility expected:\nStable", "source": "https://pytorch.org/blog/pytorch-feature-classification-changes/", "category": "pytorch blogs"} {"text": "Stable\nNothing changes here. A stable feature means that the user value-add is or has been proven, the API isn\u2019t expected to change, the feature is performant and all documentation exists to support end user adoption.\nLevel of commitment: We expect to maintain these features long term and generally there should be no major performance limitations, gaps in documentation and we also expect to maintain backwards compatibility (although breaking changes can happen and notice will be given one release ahead of time).\nBeta", "source": "https://pytorch.org/blog/pytorch-feature-classification-changes/", "category": "pytorch blogs"} {"text": "Beta\nWe previously called these features \u2018Experimental\u2019 and we found that this created confusion amongst some of the users. In the case of a Beta level features, the value add, similar to a Stable feature, has been proven (e.g. pruning is a commonly used technique for reducing the number of parameters in NN models, independent of the implementation details of our particular choices) and the feature generally works and is documented. This feature is tagged as Beta because the API may change based on user feedback, because the performance needs to improve or because coverage across operators is not yet complete.\nLevel of commitment: We are committing to seeing the feature through to the Stable classification. We are however not committing to Backwards Compatibility. Users can depend on us providing a solution for problems in this area going forward, but the APIs and performance characteristics of this feature may change.\n", "source": "https://pytorch.org/blog/pytorch-feature-classification-changes/", "category": "pytorch blogs"} {"text": "\n\n\nPrototype\nPreviously these were features that were known about by developers who paid close attention to RFCs and to features that land in master. In this case the feature is not available as part of binary distributions like PyPI or Conda (except maybe behind run-time flags), but we would like to get high bandwidth partner feedback ahead of a real release in order to gauge utility and any changes we need to make to the UX. To test these kinds of features we would, depending on the feature, recommend building from master or using the nightly whls that are made available on pytorch.org. For each prototype feature, a pointer to draft docs or other instructions will be provided.", "source": "https://pytorch.org/blog/pytorch-feature-classification-changes/", "category": "pytorch blogs"} {"text": "Level of commitment: We are committing to gathering high bandwidth feedback only. Based on this feedback and potential further engagement between community members, we as a community will decide if we want to upgrade the level of commitment or to fail fast. 
Additionally, while some of these features might be more speculative (e.g. new Frontend APIs), others have obvious utility (e.g. model optimization) but may be in a state where gathering feedback outside of high bandwidth channels is not practical, e.g. the feature may be in an earlier state, may be moving fast (PRs are landing too quickly to catch a major release) and/or generally active development is underway.\nWhat changes for current features?\nFirst and foremost, you can find these designations on pytorch.org/docs. We will also be linking any early stage features here for clarity.\nAdditionally, the following features will be reclassified under this new rubric:", "source": "https://pytorch.org/blog/pytorch-feature-classification-changes/", "category": "pytorch blogs"} {"text": "\nHigh Level Autograd APIs: Beta (was Experimental)\nEager Mode Quantization: Beta (was Experimental)\nNamed Tensors: Prototype (was Experimental)\nTorchScript/RPC: Prototype (was Experimental)\nChannels Last Memory Layout: Beta (was Experimental)\nCustom C++ Classes: Beta (was Experimental)\nPyTorch Mobile: Beta (was Experimental)\nJava Bindings: Beta (was Experimental)\nTorch.Sparse: Beta (was Experimental)\n\nCheers,\nJoe, Greg, Woo & Jessica", "source": "https://pytorch.org/blog/pytorch-feature-classification-changes/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Introducing TorchRec, a library for modern production recommendation systems'\nauthor: Meta AI - Donny Greenberg, Colin Taylor, Dmytro Ivchenko, Xing Liu, Anirudh Sudarshan \nfeatured-img: ''\n\nWe are excited to announce TorchRec, a PyTorch domain library for Recommendation Systems. This new library provides common sparsity and parallelism primitives, enabling researchers to build state-of-the-art personalization models and deploy them in production.\n\n\n\nHow did we get here?", "source": "https://pytorch.org/blog/introducing-torchrec/", "category": "pytorch blogs"} {"text": "\nHow did we get here?\nRecommendation Systems (RecSys) comprise a large footprint of production-deployed AI today, but you might not know it from looking at Github. Unlike areas like Vision and NLP, much of the ongoing innovation and development in RecSys is behind closed company doors. For academic researchers studying these techniques or companies building personalized user experiences, the field is far from democratized. Further, RecSys as an area is largely defined by learning models over sparse and/or sequential events, which has large overlaps with other areas of AI. Many of the techniques are transferable, particularly for scaling and distributed execution. A large portion of the global investment in AI is in developing these RecSys techniques, so cordoning them off blocks this investment from flowing into the broader AI field.", "source": "https://pytorch.org/blog/introducing-torchrec/", "category": "pytorch blogs"} {"text": "By mid-2020, the PyTorch team received a lot of feedback that there hasn't been a large-scale production-quality recommender systems package in the open-source PyTorch ecosystem. While we were trying to find a good answer, a group of engineers at Meta wanted to contribute Meta\u2019s production RecSys stack as a PyTorch domain library, with a strong commitment to growing an ecosystem around it. This seemed like a good idea that benefits researchers and companies across the RecSys domain. 
So, starting from Meta\u2019s stack, we began modularizing and designing a fully-scalable codebase that is adaptable for diverse recommendation use-cases. Our goal was to extract the key building blocks from across Meta\u2019s software stack to simultaneously enable creative exploration and scale. After nearly two years, a battery of benchmarks, migrations, and testing across Meta, we\u2019re excited to finally embark on this journey together with the RecSys community. We want this package to open a dialogue and collaboration across the RecSys industry, starting with Meta as the first sizable contributor.", "source": "https://pytorch.org/blog/introducing-torchrec/", "category": "pytorch blogs"} {"text": "Introducing TorchRec\nTorchRec includes a scalable low-level modeling foundation alongside rich batteries-included modules. We initially target \u201ctwo-tower\u201d ([[1]], [[2]]) architectures that have separate submodules to learn representations of candidate items and the query or context. Input signals can be a mix of floating point \u201cdense\u201d features or high-cardinality categorical \u201csparse\u201d features that require large embedding tables to be trained. Efficient training of such architectures involves combining data parallelism that replicates the \u201cdense\u201d part of computation and model parallelism that partitions large embedding tables across many nodes.\nIn particular, the library includes:\n- Modeling primitives, such as embedding bags and jagged tensors, that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism and model-parallelism.", "source": "https://pytorch.org/blog/introducing-torchrec/", "category": "pytorch blogs"} {"text": "\nOptimized RecSys kernels powered by FBGEMM , including support for sparse and quantized operations.\nA sharder which can partition embedding tables with a variety of different strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding.\nA planner which can automatically generate optimized sharding plans for models.\nPipelining to overlap dataloading device transfer (copy to GPU), inter-device communications (input_dist), and computation (forward, backward) for increased performance.\nGPU inference support.\nCommon modules for RecSys, such as models and public datasets (Criteo & Movielens).\n\nTo showcase the flexibility of this tooling, let\u2019s look at the following code snippet, pulled from our DLRM Event Prediction example:\n```python\nSpecify the sparse embedding layers\neb_configs = [\n EmbeddingBagConfig(\n name=f\"t_{feature_name}\",\n embedding_dim=64,\n num_embeddings=100_000,", "source": "https://pytorch.org/blog/introducing-torchrec/", "category": "pytorch blogs"} {"text": "num_embeddings=100_000,\n feature_names=[feature_name],\n )\n for feature_idx, feature_name in enumerate(DEFAULT_CAT_NAMES)\n]\nImport and instantiate the model with the embedding configuration\nThe \"meta\" device indicates lazy instantiation, with no memory allocated\ntrain_model = DLRM(\n embedding_bag_collection=EmbeddingBagCollection(\n tables=eb_configs, device=torch.device(\"meta\")\n ),\n dense_in_features=len(DEFAULT_INT_NAMES),\n dense_arch_layer_sizes=[512, 256, 64],\n over_arch_layer_sizes=[512, 512, 256, 1],\n dense_device=device,\n)\nDistribute the model over many devices, just as one would with DDP.\nmodel = DistributedModelParallel(\n module=train_model,\n device=device,\n)\noptimizer = torch.optim.SGD(params, 
lr=args.learning_rate)\nOptimize the model in a standard loop just as you would any other model!\nOr, you can use the pipeliner to synchronize communication and compute\nfor epoch in range(epochs):\n # Train\n```\nScaling Performance", "source": "https://pytorch.org/blog/introducing-torchrec/", "category": "pytorch blogs"} {"text": "Train\n```\nScaling Performance\nTorchRec has state-of-the-art infrastructure for scaled Recommendations AI, powering some of the largest models at Meta. It was used to train a 1.25 trillion parameter model, pushed to production in January, and a 3 trillion parameter model which will be in production soon. This should be a good indication that PyTorch is fully capable of the largest scale RecSys problems in industry. We\u2019ve heard from many in the community that sharded embeddings are a pain point. TorchRec cleanly addresses that. Unfortunately it is challenging to provide large-scale benchmarks with public datasets, as most open-source benchmarks are too small to show performance at scale.\nLooking ahead", "source": "https://pytorch.org/blog/introducing-torchrec/", "category": "pytorch blogs"} {"text": "Looking ahead\nOpen-source and open-technology have universal benefits. Meta is seeding the PyTorch community with a state-of-the-art RecSys package, with the hope that many join in on building it forward, enabling new research and helping many companies. The team behind TorchRec plan to continue this program indefinitely, building up TorchRec to meet the needs of the RecSys community, to welcome new contributors, and to continue to power personalization at Meta. We\u2019re excited to begin this journey and look forward to contributions, ideas, and feedback!\nReferences\n[[1]] Sampling-Bias-Corrected Neural Modeling for Large Corpus Item Recommendations\n[[2]] DLRM: An advanced, open source deep learning recommendation model", "source": "https://pytorch.org/blog/introducing-torchrec/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Scaling PyTorch models on Cloud TPUs with FSDP\"\nauthor: Ronghang Hu, Vaibhav Singh, Jack Cao, Milad Mohammadi, Yeounoh Chung, Shauheen Zahirazami, Ross Girshick\nfeatured-img: \"/assets/images/scaling-pytorch-models-on-cloud-tpus-with-fsdp.jpg\"\n\nIntroduction\nThe research community has witnessed a lot of successes with large models across NLP, computer vision, and other domains in recent years. Many of these successes were enabled by Cloud TPUs -- which are powerful hardware for distributed training. To support TPUs in PyTorch, the PyTorch/XLA library provides a backend for XLA devices (most notably TPUs) and lays the groundwork for scaling large PyTorch models on TPUs.\nHowever, most existing modeling scaling tools in the PyTorch ecosystem assume GPU (or CPU) devices, often depend on specific features in CUDA, and do not work directly on TPUs. The lack of scaling tools makes it challenging to build large models that cannot fit into the memory of a single TPU chip.", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "To support model scaling on TPUs, we implemented the widely-adopted Fully Sharded Data Parallel (FSDP) algorithm for XLA devices as part of the PyTorch/XLA 1.12 release. We provide an FSDP interface with a similar high-level design to the CUDA-based PyTorch FSDP class while also handling several restrictions in XLA (see Design Notes below for more details). 
This FSDP interface allowed us to easily build models with e.g. 10B+ parameters on TPUs and has enabled many research explorations.\nUsing Fully Sharded Data Parallel (FSDP) in PyTorch/XLA\nWe provide a wrapper class XlaFullyShardedDataParallel over a given PyTorch model to shard its parameters across data-parallel workers. An example usage is as follows:\n```python\nimport torch\nimport torch_xla.core.xla_model as xm\nfrom torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP\nmodel = FSDP(my_module)\noptim = torch.optim.Adam(model.parameters(), lr=0.0001)\noutput = model(x, y)", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "output = model(x, y)\nloss = output.sum()\nloss.backward()\noptim.step()\n```\nWrapping an nn.Module instance with XlaFullyShardedDataParallel enables the ZeRO-2 algorithm on it, where its gradients and the optimizer states are sharded for the entire training process. During its forward and backward passes, the full parameters of the wrapped module are first reconstructed from their corresponding shards for computation.", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "Nested FSDP wrapping can be used to further save memory. This allows the model to store only the full parameters of one individual layer at any given time. For nested FSDP, one should first wrap its individual submodules with an inner FSDP before wrapping the base model with an outer FSDP. This allows the model to store only the full parameters of one individual layer at any given time. And having an outer wrapper ensures to handle any leftover parameters, corresponding to the ZeRO-3 algorithm. Nested FSDP wrapping can be applied at any depth of submodules and there can be more than 2 layers of nesting.", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "Model checkpoint saving and loading for models and optimizers can be done like before by saving and loading their .state_dict(). Meanwhile, each training process should save its own checkpoint file of the sharded model parameters and optimizer states, and load the checkpoint file for the corresponding rank when resuming (regardless of ZeRO-2 or ZeRO-3, i.e. nested wrapping or not). A command line tool and a Python interface are provided to consolidate the sharded model checkpoint files together into a full/unshareded model checkpoint file.\nGradient checkpointing (also referred to as \"activation checkpointing\" or \"rematerialization\") is another common technique for model scaling and can be used in conjunction with FSDP. We provide checkpoint_module, a wrapper function over a given nn.Module instance for gradient checkpointing (based on torch_xla.utils.checkpoint.checkpoint).", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "The MNIST and ImageNet examples below provide illustrative usages of (plain or nested) FSDP, saving and consolidation of model checkpoints, as well as gradient checkpointing.\nStarting examples of FSDP in PyTorch/XLA\nTraining MNIST and ImageNet with FSDP\nMNIST and ImageNet classification can often be used as starting points to build more complicated deep learning models. 
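Before turning to those examples, here is a minimal, self-contained sketch of the nested wrapping and gradient checkpointing pattern described above. The ToyModel and its block1/block2 submodules are hypothetical stand-ins for whichever layers hold most of the parameters, and importing checkpoint_module from the fsdp package is an assumption based on the description above rather than code taken from the examples:
```python
import torch
import torch.nn as nn
import torch_xla.core.xla_model as xm
from torch_xla.distributed.fsdp import XlaFullyShardedDataParallel as FSDP, checkpoint_module

# A toy stand-in for a real network; block1/block2 are hypothetical names for
# the submodules that hold most of the parameters.
class ToyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Linear(1024, 1024)
        self.block2 = nn.Linear(1024, 1024)
        self.head = nn.Linear(1024, 10)

    def forward(self, x):
        return self.head(self.block2(self.block1(x)))

model = ToyModel().to(xm.xla_device())

# Wrap the inner submodules first (optionally combined with gradient
# checkpointing), then the base model, so that the outer wrapper picks up any
# leftover parameters such as the head.
model.block1 = FSDP(checkpoint_module(model.block1))
model.block2 = FSDP(checkpoint_module(model.block2))
model = FSDP(model)

optim = torch.optim.Adam(model.parameters(), lr=1e-4)
```
With this layout, only one block's full parameters need to be materialized at a time, while the outer wrapper covers whatever the inner wrappers did not shard.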
We provide the following FSDP examples on these two datasets:\n\nMNIST: test/test_train_mp_mnist_fsdp_with_ckpt.py (it also illustrates checkpoint saving and consolidation)\nImageNet: test/test_train_mp_imagenet_fsdp.py\n", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "A comparison of them with the vanilla data-parallel examples of MNIST and ImageNet illustrates how to adapt a training script to use FSDP. A major distinction to keep in mind is that when stepping the optimizer on an FSDP-wrapped model, one should directly call optimizer.step() instead of xm.optimizer_step(optimizer). The latter reduces the gradients across ranks, which is not what we need in FSDP, where the gradients are already reduced and sharded (from a reduce-scatter op in its backward pass).\nInstallation\nFSDP is available from the PyTorch/XLA 1.12 and newer nightly releases. Please refer to https://github.com/pytorch/xla#-available-images-and-wheels for a guide on installation as well as Cloud TPU allocation. Then clone PyTorch/XLA repo on a TPU VM as follows\n```python", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "mkdir -p ~/pytorch && cd ~/pytorch\ngit clone --recursive https://github.com/pytorch/xla.git\ncd ~/\n\nTrain MNIST on v3-8 TPU\nIt gets around 98.9 accuracy for 2 epochs:\npython3 ~/pytorch/xla/test/test_train_mp_mnist_fsdp_with_ckpt.py \\\n --batch_size 16 --drop_last --num_epochs 2 \\\n --use_nested_fsdp\n\nThe script above automatically tests consolidation of the sharded model checkpoints at the end. You can also manually consolidate the sharded checkpoint files via\npython3 -m torch_xla.distributed.fsdp.consolidate_sharded_ckpts \\\n --ckpt_prefix /tmp/mnist-fsdp/final_ckpt \\\n --ckpt_suffix \"_rank-*-of-*.pth\"\n\nTrain ImageNet with ResNet-50 on v3-8 TPU\nIt gets around 75.9 accuracy for 100 epochs, same as what one would get without using FSDP; download and preprocess the ImageNet-1k dataset to /datasets/imagenet-1k:\n```python\npython3 ~/pytorch/xla/test/test_train_mp_imagenet_fsdp.py \\", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "--datadir /datasets/imagenet-1k --drop_last \\\n --model resnet50 --test_set_batch_size 64 --eval_interval 10 \\\n --lr 0.4 --batch_size 128 --num_warmup_epochs 5 \\\n --lr_scheduler_divide_every_n_epochs 30 --lr_scheduler_divisor 10 \\\n --num_epochs 100 \\\n --use_nested_fsdp\n```\nYou can also explore other options in these two examples, such as --use_gradient_checkpointing to apply gradient checkpointing (i.e. activation checkpointing) on the ResNet blocks, or --compute_dtype bfloat16 to perform forward and backward passes in bfloat16 precision.\nExamples on large-scale models\nWhen building large models on TPUs, we often need to be aware of the memory constraints (e.g. 16 GB per core in TPU v3 and 32 GB per chip in TPU v4). 
For large models that cannot fit into a single TPU memory or the host CPU memory, one should use nested FSDP to implement the ZeRO-3 algorithm interleave submodule construction with inner FSDP wrapping, so that the full model never needs to be stored in memory during construction.", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "We illustrate these cases in https://github.com/ronghanghu/ptxla_scaling_examples, which provides examples of training a Vision Transformer (ViT) model with 10B+ parameters on a TPU v3 pod (with 128 cores) as well as other cases.\nDesign Notes\nOne might wonder why we need to develop a separate FSDP class in PyTorch/XLA instead of directly reusing PyTorch's FSDP class or extending it to the XLA backend. The main motivation behind a separate FSDP class in PyTorch/XLA is that the native PyTorch's FSDP class heavily relies on CUDA features that are not supported by XLA devices, while XLA also has several unique characteristics that need special handling. These distinctions require a different implementation of FSDP that would be much easier to build in a separate class.\nChanges in API calls", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "Changes in API calls\nOne prominent distinction is that the native PyTorch FSDP is built upon separate CUDA streams for asynchronous execution in eager mode, while PyTorch/XLA runs in lazy mode and also does not support streams. In addition, TPU requires that all devices homogeneously run the same program. As a result, in the PyTorch/XLA FSDP implementation, CUDA calls and per-process heterogeneity need to be replaced by XLA APIs and alternative homogeneous implementations.\nTensor Storage Handling", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "Tensor Storage Handling\nAnother prominent distinction is how to free a tensor's storage, which is much harder in XLA than in CUDA. To implement ZeRO-3, one needs to free the storage of full parameters after a module's forward pass, so that the next module can reuse this memory buffer for subsequent computation. PyTorch's FSPD accomplishes this on CUDA by freeing the actual storage of a parameter p via p.data.storage().resize_(0). However, XLA tensors do not have this .storage() handle given that the XLA HLO IRs are completely functional and do not provide any ops to deallocate a tensor or resize its storage. Below the PyTorch interface, only the XLA compiler can decide when to free a TPU device memory corresponding to an XLA tensor, and a prerequisite is that the memory can only be released when the tensor object gets deallocated in Python -- which cannot happen in FSDP because these parameter tensors are referenced as module attributes and also saved by PyTorch autograd for the backward pass.", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "Our solution to this issue is to split a tensor's value properties from its autograd Variable properties, and to free a nn.Parameter tensor by setting its .data attribute to a dummy scalar of size 1. This way the actual data tensor for the full parameter gets dereferenced in Python so that XLA can recycle its memory for other computation, while autograd can still trace the base nn.Parameter as a weak reference to the parameter data. 
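A rough sketch of that mechanism, mirroring the description above (this is not the library's actual internal code), might look like:
```python
import torch
import torch.nn as nn

def free_full_param(p: nn.Parameter) -> torch.Tensor:
    # Swap the value out for a dummy tensor of size 1 so the full data tensor
    # is dereferenced and its device memory can be reclaimed, while the
    # nn.Parameter object itself stays alive for autograd.
    full = p.data
    p.data = torch.zeros(1, dtype=p.dtype, device=p.device)
    return full  # in FSDP, only the local shard of this tensor would be kept

def restore_full_param(p: nn.Parameter, full: torch.Tensor) -> None:
    # Reattach the (re-)gathered full tensor as the parameter's value before
    # the next forward or backward computation needs it.
    p.data = full
```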
To get this to work, one also needs to handle views over the parameters as views in PyTorch also hold references to its actual data (this required fixing a shape-related issue with views in PyTorch/XLA).\nWorking with XLA compiler", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "The solution above should be enough to free full parameters if the XLA compiler faithfully preserves the operations and their execution order in our PyTorch program. But there is another problem -- XLA attempts to optimize the program to speed up its execution by applying common subexpression elimination (CSE) to the HLO IRs. In a naive implementation of FSDP, the XLA compiler typically eliminates the 2nd all-gather in the backward pass to reconstruct the full parameters when it sees that it is a repeated computation from the forward pass, and directly holds and reuses the full parameters we want to free up after the forward pass. To guard against this undesired compiler behavior, we introduced the optimization barrier op into PyTorch/XLA and used it to stop eliminating the 2nd all-gather. This optimization barrier is also applied to a similar case of gradient checkpointing to prevent CSE between forward and backward passes that could eliminate the rematerialization.", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "In the future, if the distinctions between CUDA and XLA become not as prominent as mentioned above, it could be worth considering a merge of the PyTorch/XLA FSDP with the native PyTorch FSDP to have a unified interface.\nAcknowledgments\nThanks to Junmin Hao from AWS for reviewing the PyTorch/XLA FSDP pull request. Thanks to Brian Hirsh from the Meta PyTorch team for support on the PyTorch core issues. Thanks to Isaack Karanja, Will Cromar, and Blake Hechtman from Google for support on GCP, XLA, and TPU issues.\nThanks to Piotr Dollar, Wan-Yen Lo, Alex Berg, Ryan Mark, Kaiming He, Xinlei Chen, Saining Xie, Shoubhik Debnath, Min Xu, and Vaibhav Aggarwal from Meta FAIR for various TPU-related discussions.", "source": "https://pytorch.org/blog/scaling-pytorch-models-on-cloud-tpus-with-fsdp/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Accelerated Diffusers with PyTorch 2.0\"\nauthor: Pedro Cuenca, Patrick von Platen, Suraj Patil\n\nPyTorch 2.0 has just been released. Its flagship new feature is torch.compile(), a one-line code change that promises to automatically improve performance across codebases. We have previously checked on that promise in Hugging Face Transformers and TIMM models, and delved deep into its motivation, architecture and the road ahead.", "source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"} {"text": "As important as torch.compile() is, there\u2019s much more to PyTorch 2.0. Notably, PyTorch 2.0 incorporates several strategies to accelerate transformer blocks, and these improvements are very relevant for diffusion models too. Techniques such as FlashAttention, for example, have become very popular in the diffusion community thanks to their ability to significantly speed up Stable Diffusion and achieve larger batch sizes, and they are now part of PyTorch 2.0.\nIn this post we discuss how attention layers are optimized in PyTorch 2.0 and how these optimization are applied to the popular \ud83e\udde8 Diffusers library. 
We finish with a benchmark that shows how the use of PyTorch 2.0 and Diffusers immediately translates to significant performance improvements across different hardware.\nAccelerating transformer blocks", "source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"} {"text": "Accelerating transformer blocks\nPyTorch 2.0 includes a scaled dot-product attention function as part of torch.nn.functional. This function encompasses several implementations that can be applied depending on the inputs and the hardware in use. Before PyTorch 2.0, you had to search for third-party implementations and install separate packages in order to take advantage of memory optimized algorithms, such as FlashAttention. The available implementations are:\n* FlashAttention, from the official FlashAttention project. \n* Memory-Efficient Attention, from the xFormers project.\n* A native C++ implementation suitable for non-CUDA devices or when high-precision is required.", "source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"} {"text": "All these methods are available by default, and PyTorch will try to select the optimal one automatically through the use of the new scaled dot-product attention (SDPA) API. You can also individually toggle them for finer-grained control, see the documentation for details.\nUsing scaled dot-product attention in diffusers", "source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"} {"text": "The incorporation of Accelerated PyTorch 2.0 Transformer attention to the Diffusers library was achieved through the use of the set_attn_processor method, which allows for pluggable attention modules to be configured. In this case, a new attention processor was created, which is enabled by default when PyTorch 2.0 is available. For clarity, this is how you could enable it manually (but it\u2019s usually not necessary since diffusers will automatically take care of it):\n```\nfrom diffusers import StableDiffusionPipeline\nfrom diffusers.models.cross_attention import AttnProcessor2_0\npipe = StableDiffusionPipeline.from_pretrained(\"runwayml/stable-diffusion-v1-5\")", "source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"} {"text": "pipe.to(\"cuda\")\npipe.unet.set_attn_processor(AttnProcessor2_0())\nprompt = \"a photo of an astronaut riding a horse on mars\"\nimage = pipe(prompt).images[0]\n```\nStable Diffusion Benchmark\nWe ran a number of tests using accelerated dot-product attention from PyTorch 2.0 in Diffusers. We installed diffusers from pip and used nightly versions of PyTorch 2.0, since our tests were performed before the official release. We also used torch.set_float32_matmul_precision('high') to enable additional fast matrix multiplication algorithms.\nWe compared results with the traditional attention implementation in diffusers (referred to as vanilla below) as well as with the best-performing solution in pre-2.0 PyTorch: PyTorch 1.13.1 with the xFormers package (v0.0.16) installed.", "source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"} {"text": "Results were measured without compilation (i.e., no code changes at all), and also with a single call to torch.compile() to wrap the UNet module. 
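For reference, that wrapping is a one-line change on the pipeline from the earlier snippet; a minimal sketch, assuming a CUDA device and a PyTorch 2.0 installation:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained('runwayml/stable-diffusion-v1-5')
pipe.to('cuda')
pipe.unet = torch.compile(pipe.unet)  # compile only the UNet; the rest stays eager

image = pipe('a photo of an astronaut riding a horse on mars').images[0]
```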
We did not compile the image decoder because most of the time is spent in the 50 denoising iterations that run UNet evaluations.\nResults in float32\n\nThe following figures explore performance improvement vs batch size for various representative GPUs belonging to different generations. We collected data for each combination until we reached maximum memory utilization. Vanilla attention runs out of memory earlier than xFormers or PyTorch 2.0, which explains the missing bars for larger batch sizes. Similarly, A100 (we used the 40 GB version) is capable of running batch sizes of 64, but the other GPUs could only reach 32 in our tests.", "source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"} {"text": "\n\n\n", "source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"} {"text": "We found very significant performance improvements over vanilla attention across the board, without even using torch.compile(). An out of the box installation of PyTorch 2.0 and diffusers yields about 50% speedup on A100 and between 35% and 50% on 4090 GPUs, depending on batch size. Performance improvements are more pronounced for modern CUDA architectures such as Ada (4090) or Ampere (A100), but they are still very significant for older architectures still heavily in use in cloud services.\nIn addition to faster speeds, the accelerated transformers implementation in PyTorch 2.0 allows much larger batch sizes to be used. A single 40GB A100 GPU runs out of memory with a batch size of 10, and 24 GB high-end consumer cards such as 3090 and 4090 cannot generate 8 images at once. Using PyTorch 2.0 and diffusers we could achieve batch sizes of 48 for 3090 and 4090, and 64 for A100. This is of great significance for cloud services and applications, as they can efficiently process more images at a time.", "source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"} {"text": "When compared with PyTorch 1.13.1 + xFormers, the new accelerated transformers implementation is still faster and requires no additional packages or dependencies. In this case we found moderate speedups of up to 2% on datacenter cards such as A100 or T4, but performance was great on the two last generations of consumer cards: up to 20% speed improvement on 3090 and between 10% and 45% on 4090, depending on batch size.\nWhen torch.compile() is used, we get an additional performance boost of (typically) 2% and 3% over the previous improvements. As compilation takes some time, this is better geared towards user-facing inference services or training.\nResults in float16\n\n", "source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"} {"text": "\n\nWhen we consider float16 inference, the performance improvements of the accelerated transformers implementation in PyTorch 2.0 are between 20% and 28% over standard attention, across all the GPUs we tested, except for the 4090, which belongs to the more modern Ada architecture. This GPU benefits from a dramatic performance improvement when using PyTorch 2.0 nightlies. With respect to optimized SDPA vs xFormers, results are usually on par for most GPUs, except again for the 4090. 
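For reference, the float16 runs amount to loading the pipeline in half precision; a minimal sketch, using the same hypothetical model id as the earlier snippets:
```python
import torch
from diffusers import StableDiffusionPipeline

# torch_dtype=torch.float16 loads the weights and runs inference in half precision.
pipe = StableDiffusionPipeline.from_pretrained(
    'runwayml/stable-diffusion-v1-5', torch_dtype=torch.float16
)
pipe.to('cuda')
image = pipe('a photo of an astronaut riding a horse on mars').images[0]
```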
Adding torch.compile() to the mix boosts performance a few more percentage points across the board.\nConclusions", "source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"} {"text": "Conclusions\nPyTorch 2.0 comes with multiple features to optimize the crucial components of the foundational transformer block, and they can be further improved with the use of torch.compile. These optimizations lead to significant memory and time improvements for diffusion models, and remove the need for third-party library installations.\nTo take advantage of these speed and memory improvements all you have to do is upgrade to PyTorch 2.0 and use diffusers >= 0.13.0.\nFor more examples and in-detail benchmark numbers, please also have a look at the Diffusers with PyTorch 2.0 docs.\nAcknowledgement\nThe authors are grateful to the PyTorch team for creating such excellent software.", "source": "https://pytorch.org/blog/accelerated-diffusers-pt-20/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Get Started with PyTorch 2.0 Summary and Overview\"\nauthor: Team PyTorch\nfeatured-img: \"assets/images/Pytorch_2_0_Animation_AdobeExpress.gif\"\n\nIntroducing PyTorch 2.0, our first steps toward the next generation 2-series release of PyTorch. Over the last few years we have innovated and iterated from PyTorch 1.0 to the most recent 1.13 and moved to the newly formed PyTorch Foundation, part of the Linux Foundation.\nTo complement the PyTorch 2.0 announcement and conference, we have also posted a comprehensive introduction and technical overview within the Get Started menu at https://pytorch.org/get-started/pytorch-2.0.\nWe also wanted to ensure you had all the information to quickly leverage PyTorch 2.0 in your models so we added the technical requirements, tutorial, user experience, Hugging Face benchmarks and FAQs to get you started today!", "source": "https://pytorch.org/blog/getting-started-with-pytorch-2.0/", "category": "pytorch blogs"} {"text": "Finally we are launching a new \u201cAsk the Engineers: 2.0 Live Q&A\u201d series that allows you to go deeper on a range of topics with PyTorch subject matter experts. We hope this content is helpful for the entire community and level of users/contributors.\nhttps://pytorch.org/get-started/pytorch-2.0", "source": "https://pytorch.org/blog/getting-started-with-pytorch-2.0/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'New Library Releases in PyTorch 1.10, including TorchX, TorchAudio, TorchVision'\nauthor: Team PyTorch \n\nToday, we are announcing a number of new features and improvements to PyTorch libraries, alongside the PyTorch 1.10 release. Some highlights include:\nSome highlights include:\n\nTorchX - a new SDK for quickly building and deploying ML applications from research & development to production. \nTorchAudio - Added text-to-speech pipeline, self-supervised model support, multi-channel support and MVDR beamforming module, RNN transducer (RNNT) loss function, and batch and filterbank support to lfilter function. See the TorchAudio release notes here.\n", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "\nTorchVision - Added new RegNet and EfficientNet models, FX based feature extraction added to utilities, two new Automatic Augmentation techniques: Rand Augment and Trivial Augment, and updated training recipes. 
See the TorchVision release notes here.\n\nIntroducing TorchX\nTorchX is a new SDK for quickly building and deploying ML applications from research & development to production. It offers various builtin components that encode MLOps best practices and make advanced features like distributed training and hyperparameter optimization accessible to all.", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "Users can get started with TorchX 0.1 with no added setup cost since it supports popular ML schedulers and pipeline orchestrators that are already widely adopted and deployed in production. No two production environments are the same. To comply with various use cases, TorchX\u2019s core APIs allow tons of customization at well-defined extension points so that even the most unique applications can be serviced without customizing the whole vertical stack.\nRead the documentation for more details and try out this feature using this quickstart tutorial. \nTorchAudio 0.10\n[Beta] Text-to-speech pipeline", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "[Beta] Text-to-speech pipeline\nTorchAudio now adds the Tacotron2 model and pretrained weights. It is now possible to build a text-to-speech pipeline with existing vocoder implementations like WaveRNN and Griffin-Lim. Building a TTS pipeline requires matching data processing and pretrained weights, which are often non-trivial to users. So TorchAudio introduces a bundle API so that constructing pipelines for specific pretrained weights is easy. The following example illustrates this.\n```python\n\n\n\nimport torchaudio\nbundle = torchaudio.pipelines.TACOTRON2_WAVERNN_CHAR_LJSPEECH\nBuild text processor, Tacotron2 and vocoder (WaveRNN) model\nprocessor = bundle.get_text_processor()\ntacotron2 = bundle.get_tacotron2()\nDownloading:\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 107M/107M [00:01<00:00, 87.9MB/s]\nvocoder = bundle.get_vocoder()\nDownloading:\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 16.7M/16.7M [00:00<00:00, 78.1MB/s]\ntext = \"Hello World!\"\nEncode text\n\n\n", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "\n\n\ntext = \"Hello World!\"\nEncode text\ninput, lengths = processor(text)\nGenerate (mel-scale) spectrogram\nspecgram, lengths, _ = tacotron2.infer(input, lengths)\nConvert spectrogram to waveform\nwaveforms, lengths = vocoder(specgram, lengths)\nSave audio\ntorchaudio.save('hello-world.wav', waveforms, vocoder.sample_rate)\n\n\n\n```\nFor the details of this API please refer to the documentation. You can also try this from the tutorial.\n(Beta) Self-Supervised Model Support", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "(Beta) Self-Supervised Model Support\nTorchAudio added HuBERT model architecture and pre-trained weight support for wav2vec 2.0 and HuBERT. HuBERT and wav2vec 2.0 are novel ways for audio representation learning and they yield high accuracy when fine-tuned on downstream tasks. 
These models can serve as baseline in future research, therefore, TorchAudio is providing a simple way to run the model. Similar to the TTS pipeline, the pretrained weights and associated information, such as expected sample rates and output class labels (for fine-tuned weights) are put together as a bundle, so that they can be used to build pipelines. The following example illustrates this.\n```python\n\n\n\nimport torchaudio\nbundle = torchaudio.pipelines.HUBERT_ASR_LARGE\nBuild the model and load pretrained weight.\nmodel = bundle.get_model()\nDownloading:\n100%|\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588\u2588| 1.18G/1.18G [00:17<00:00, 73.8MB/s]\nCheck the corresponding labels of the output.\nlabels = bundle.get_labels()\n\n\n", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "\n\n\nlabels = bundle.get_labels()\nprint(labels)\n('', '', '', '', '|', 'E', 'T', 'A', 'O', 'N', 'I', 'H', 'S', 'R', 'D', 'L', 'U', 'M', 'W', 'C', 'F', 'G', 'Y', 'P', 'B', 'V', 'K', \"'\", 'X', 'J', 'Q', 'Z')\nInfer the label probability distribution\nwaveform, sample_rate = torchaudio.load(hello-world.wav')\nemissions, _ = model(waveform)\nPass emission to (hypothetical) decoder\ntranscripts = ctc_decode(emissions, labels)\nprint(transcripts[0])\nHELLO WORLD\n\n\n\n```\nPlease refer to the documentation for more details and try out this feature using this tutorial.\n(Beta) Multi-channel support and MVDR beamforming", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "Far-field speech recognition is a more challenging task compared to near-field recognition. Multi-channel methods such as beamforming help reduce the noises and enhance the target speech. \nTorchAudio now adds support for differentiable Minimum Variance Distortionless Response (MVDR) beamforming on multi-channel audio using Time-Frequency masks. Researchers can easily assemble it with any multi-channel ASR pipeline. There are three solutions (ref_channel, stv_evd, stv_power) and it supports single-channel and multi-channel (perform average in the method) masks. It provides an online option that recursively updates the parameters for streaming audio. 
We also provide a tutorial on how to apply MVDR beamforming to the multi-channel audio in the example directory.\n```python\n\n\n\nfrom torchaudio.transforms import MVDR, Spectrogram, InverseSpectrogram\nLoad the multi-channel noisy audio\nwaveform_mix, sr = torchaudio.load('mix.wav')\nInitialize the stft and istft modules\n\n\n", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "\n\n\nInitialize the stft and istft modules\nstft = Spectrogram(n_fft=1024, hop_length=256, return_complex=True, power=None)\nistft = InverseSpectrogram(n_fft=1024, hop_length=256)\nGet the noisy spectrogram\nspecgram_mix = stft(waveform_mix)\nGet the Time-Frequency mask via machine learning models\nmask = model(waveform)\nInitialize the MVDR module\nmvdr = MVDR(ref_channel=0, solution=\u201dref_channel\u201d, multi_mask=False)\nApply MVDR beamforming\nspecgram_enhanced = mvdr(specgram_mix, mask)\nGet the enhanced waveform via iSTFT\nwaveform_enhanced = istft(specgram_enhanced, length=waveform.shape[-1])\n```\nPlease refer to the documentation for more details and try out this feature using the MVDR tutorial.\n\n\n\n(Beta) RNN Transducer Loss", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "(Beta) RNN Transducer Loss\nThe RNN transducer (RNNT) loss is part of the RNN transducer pipeline, which is a popular architecture for speech recognition tasks. Recently it has gotten attention for being used in a streaming setting, and has also achieved state-of-the-art WER for the LibriSpeech benchmark.\nTorchAudio\u2019s loss function supports float16 and float32 logits, has autograd and torchscript support, and can be run on both CPU and GPU, which has a custom CUDA kernel implementation for improved performance. The implementation is consistent with the original loss function in Sequence Transduction with Recurrent Neural Networks, but relies on code from Alignment Restricted Streaming Recurrent Neural Network Transducer. Special thanks to Jay Mahadeokar and Ching-Feng Yeh for their code contributions and guidance.\nPlease refer to the documentation for more details.", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "(Beta) Batch support and filter bank support\ntorchaudio.functional.lfilter now supports batch processing and multiple filters.\n(Prototype) Emformer Module\nAutomatic speech recognition (ASR) research and productization have increasingly focused on on-device applications. Towards supporting such efforts, TorchAudio now includes Emformer, a memory-efficient transformer architecture that has achieved state-of-the-art results on LibriSpeech in low-latency streaming scenarios, as a prototype feature.\nPlease refer to the documentation for more details.\nGPU Build\nGPU builds that support custom CUDA kernels in TorchAudio, like the one being used for RNN transducer loss, have been added. Following this change, TorchAudio\u2019s binary distribution now includes CPU-only versions and CUDA-enabled versions. To use CUDA-enabled binaries, PyTorch also needs to be compatible with CUDA.\nTorchVision 0.11", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "TorchVision 0.11\n(Stable) New Models\nRegNet and EfficientNet are two popular architectures that can be scaled to different computational budgets. 
In this release we include 22 pre-trained weights for their classification variants. The models were trained on ImageNet and the accuracies of the pre-trained models obtained on ImageNet val can be found below (see #4403, #4530 and #4293 for more details). \nThe models can be used as follows:\nimport torch\nfrom torchvision import models\n\nx = torch.rand(1, 3, 224, 224)\n\nregnet = models.regnet_y_400mf(pretrained=True)\nregnet.eval()\npredictions = regnet(x)\n\nefficientnet = models.efficientnet_b0(pretrained=True)\nefficientnet.eval()\npredictions = efficientnet(x)\n", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "predictions = efficientnet(x)\nSee the full list of new models on the [torchvision.models](https://pytorch.org/vision/master/models.html) documentation page.\n\nWe would like to thank Ross Wightman and Luke Melas-Kyriazi for contributing the weights of the EfficientNet variants.\n\n### (Beta) FX-based Feature Extraction \nA new Feature Extraction method has been added to our utilities. It uses [torch.fx](https://pytorch.org/docs/stable/fx.html) and enables us to retrieve the outputs of intermediate layers of a network which is useful for feature extraction and visualization. \n\nHere is an example of how to use the new utility:\n\n```python\nimport torch\nfrom torchvision.models import resnet50\nfrom torchvision.models.feature_extraction import create_feature_extractor\n\n\nx = torch.rand(1, 3, 224, 224)\n\nmodel = resnet50()\n\nreturn_nodes = {\n\"layer4.2.relu_2\": \"layer4\"\n}\nmodel2 = create_feature_extractor(model, return_nodes=return_nodes)\nintermediate_outputs = model2(x)\n\nprint(intermediate_outputs['layer4'].shape)\n", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "print(intermediate_outputs['layer4'].shape)\n```\nWe would like to thank Alexander Soare for developing this utility.\n(Stable) New Data Augmentations\nTwo new Automatic Augmentation techniques were added: RandAugment and Trivial Augment. They apply a series of transformations on the original data to enhance them and to boost the performance of the models. The new techniques build on top of the previously added AutoAugment and focus on simplifying the approach, reducing the search space for the optimal policy and improving the performance gain in terms of accuracy. These techniques enable users to reproduce recipes to achieve state-of-the-art performance on the offered models. Additionally, it enables users to apply these techniques in order to do transfer learning and achieve optimal accuracy on new datasets.", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "Both methods can be used as drop-in replacement of the AutoAugment technique as seen below:\nfrom torchvision import transforms\n\nt = transforms.RandAugment()\n# t = transforms.TrivialAugmentWide()\ntransformed = t(image)\n\ntransform = transforms.Compose([\ntransforms.Resize(256),\ntransforms.RandAugment(), # transforms.TrivialAugmentWide()\ntransforms.ToTensor()])\n\nRead the automatic augmentation transforms for more details.\nWe would like to thank Samuel G. 
M\u00fcller for contributing to Trivial Augment and for his help on refactoring the AA package.\nUpdated Training Recipes", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "Updated Training Recipes\nWe have updated our training reference scripts to add support for Exponential Moving Average, Label Smoothing, Learning-Rate Warmup, Mixup, Cutmix and other SOTA primitives. The above enabled us to improve the classification Acc@1 of some pre-trained models by over 4 points. A major update of the existing pre-trained weights is expected in the next release.\nThanks for reading. If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Twitter, Medium, YouTube and LinkedIn. \nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-1.10-new-library-releases/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Introducing TorchMultimodal - a library for accelerating exploration in Multimodal AI\"\nauthor: Kartikay Khandelwal, Ankita De\nfeatured-img: \"assets/images/torch-multimodal-feature-image.png\"\n\nWe are announcing TorchMultimodal Beta, a PyTorch domain library for training SoTA multi-task multimodal models at scale. The library provides composable building blocks (modules, transforms, loss functions) to accelerate model development, SoTA model architectures (FLAVA, MDETR, Omnivore) from published research, training and evaluation scripts, as well as notebooks for exploring these models. The library is under active development, and we\u2019d love to hear your feedback! You can find more details on how to get started here.\nWhy TorchMultimodal?", "source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"} {"text": "Why TorchMultimodal?\nInterest is rising around AI models that understand multiple input types (text, images, videos and audio signals), and optionally use this understanding to generate different forms of outputs (sentences, pictures, videos). Recent work from FAIR such as FLAVA, Omnivore and data2vec have shown that multimodal models for understanding are competitive with unimodal counterparts, and in some cases are establishing the new state-of-the art. Generative models such as Make-a-video and Make-a-scene are redefining what modern AI systems can do.", "source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"} {"text": "As interest in multimodal AI has grown, researchers are looking for tools and libraries to quickly experiment with ideas, and build on top of the latest research in the field. While the PyTorch ecosystem has a rich repository of libraries and frameworks, it\u2019s not always obvious how components from these interoperate with each other, or how they can be stitched together to build SoTA multimodal models.\nTorchMultimodal solves this problem by providing:\n\n\nComposable and easy-to-use building blocks which researchers can use to accelerate model development and experimentation in their own workflows. These are designed to be modular, and can be easily extended to handle new modalities.\n\n\nEnd-to-end examples for training and evaluating the latest models from research. 
These should serve as starting points for ongoing/future research, as well as examples for using advanced features such as integrating with FSDP and activation checkpointing for scaling up model and batch sizes.\n\n", "source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"} {"text": "Introducing TorchMultimodal\nTorchMultimodal is a PyTorch domain library for training multi-task multimodal models at scale. In the repository, we provide:\n\n\nBuilding Blocks. A collection of modular and composable building blocks like models, fusion layers, loss functions, datasets and utilities. Some examples include:\n\n\nContrastive Loss with Temperature. Commonly used function for training models like CLIP and FLAVA. We also include variants such as ImageTextContrastiveLoss used in models like ALBEF.\n\n", "source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"} {"text": "\n\nCodebook layers which compresses high dimensional data by nearest neighbor lookup in an embedding space and is a vital component of VQVAEs (provided as a model in the repository).\n\n\nShifted-window Attention window based multi-head self attention which is a vital component of encoders like Swin 3D Transformers.\n\n\nComponents for CLIP. A popular model published by OpenAI which has proven to be extremely effective at learning text and image representations.\n\n", "source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"} {"text": "\n\nMultimodal GPT. An abstraction that extends OpenAI\u2019s GPT architecture for multimodal generation when combined with the generation utility.\n\n\nMultiHeadAttention. A critical component for attention-based models with support for fast auto-regressive decoding.\n\n", "source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"} {"text": "\n\nExamples. A collection of examples that show how to combine these building blocks with components and common infrastructure (Lightning, TorchMetrics) from across the PyTorch Ecosystem to replicate state-of-the-art models published in literature. We currently provide five examples, which include.\n\n\nFLAVA [paper]. Official code for the paper accepted at CVPR, including a tutorial on finetuning FLAVA.\n\n", "source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"} {"text": "\n\nMDETR [paper]. Collaboration with authors from NYU to provide an example which alleviates interoperability pain points in the PyTorch ecosystem, including a notebook on using MDETR for phrase grounding and visual question answering.\n\n\nOmnivore [paper]. First example in TorchMultimodal of a model which deals with Video and 3D data, including a notebook for exploring the model.\n\n", "source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"} {"text": "\n\nMUGEN [paper]. 
Foundational work for auto-regressive generation and retrieval, including demos for text-video generation and retrieval with a large-scale synthetic dataset enriched from OpenAI coinrun.\n\n\nALBEF [paper] Code for the model, including a notebook for using this model for Visual Question Answering.\n\n\nThe following code snippet showcases an example usage of several TorchMultimodal components related to CLIP:\n```python\ninstantiate clip transform", "source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"} {"text": "\n# instantiate clip transform\nclip_transform = CLIPTransform()\n\n# pass the transform to your dataset. Here we use coco captions\ndataset = CocoCaptions(root= ..., annFile=..., transforms=clip_transform)\ndataloader = DataLoader(dataset, batch_size=16)\n\n# instantiate model. Here we use clip with vit-L as the image encoder\nmodel= clip_vit_l14()\n\n# define loss and other things needed for training\nclip_loss = ContrastiveLossWithTemperature()\noptim = torch.optim.AdamW(model.parameters(), lr = 1e-5)\nepochs = 1\n\n# write your train loop\nfor _ in range(epochs):\n for batch_idx, batch in enumerate(dataloader):\n image, text = batch\n image_embeddings, text_embeddings = model(image, text)\n loss = contrastive_loss_with_temperature(image_embeddings, text_embeddings)\n loss.backward()\n optimizer.step()\n", "source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"} {"text": "loss.backward()\n optimizer.step()\n```\nApart from the code, we are also releasing a tutorial for fine-tuning multimodal foundation models, and a blog post (with code pointers) on how to scale up such models using techniques from PyTorch Distributed (FSDP and activation checkpointing). We hope such examples and tutorials will serve to demystify a number of advanced features available in the PyTorch ecosystem.\nWhat\u2019s Next?\nWhile this is an exciting launch, there\u2019s a lot more to come. The library is under development and we are working on adding some of the exciting developments in the space of diffusion models, and examples to showcase common trends from research. As you explore and use the library, we\u2019d love to hear any feedback you might have! 
You can find more details on how to get started here.\nTeam", "source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"} {"text": "Team\nThe primary contributors and developers of TorchMultimodal include Ankita De, Evan Smothers, Kartikay Khandelwal, Lan Gong, Laurence Rouesnel, Nahiyan Malik, Rafi Ayub and Yosua Michael Maranatha.", "source": "https://pytorch.org/blog/introducing-torchmultimodal/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Announcing the Winners of the 2021 PyTorch Annual Hackathon'\nauthor: Team PyTorch\nfeatured-img: 'assets/images/social_hackathon21.png'\n\nMore than 1,900 people worked hard in this year\u2019s PyTorch Annual Hackathon to create unique tools and applications for PyTorch developers and researchers.\nNotice: None of the projects submitted to the hackathon are associated with or offered by Meta Platforms, Inc.\n\n\n\nThis year, participants could enter their projects into following three categories:\n* PyTorch Developer Tools: a tool or library for improving productivity and efficiency for PyTorch researchers and developers.\n* Web and Mobile Applications Powered by PyTorch: a web or mobile interface and/or an embedded device built using PyTorch.", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"} {"text": "\nPyTorch Responsible AI Development Tools: a tool, library, or web/mobile app to support researchers and developers in creating responsible AI that factors in fairness, security, privacy, and more throughout its entire development process.\n\nThe virtual hackathon ran from September 8 through November 2, 2021, with more than 1,900 registered participants from 110 countries, submitting a total of 65 projects. Entrants were judged on their idea\u2019s quality, originality, potential impact, and how well they implemented it. All projects can be viewed here.\nMeet the winners of each category below!\nPYTORCH DEVELOPER TOOLS\nFirst Place: RaNNC\nRaNNC is a middleware to automate hybrid model/data parallelism for training very large-scale neural networks capable of training 100 billion parameter models without any manual tuning.", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"} {"text": "Second Place: XiTorch\nXiTorch provides first and higher order gradients of functional routines, such as optimization, rootfinder, and ODE solver. It also contains operations for implicit linear operators (e.g. large matrix that is expressed only by its matrix-vector multiplication) such as symmetric eigen-decomposition, linear solve, and singular value decomposition.\nThird Place: TorchLiberator\nTorchLiberator automates model surgery, finding the maximum correspondence between weights in two networks.\nHonorable Mentions\n\nPADL manages your entire PyTorch work flow with a single python abstraction and a beautiful functional API, so there\u2019s no more complex configuration or juggling preprocessing, postprocessing and forward passes.\n", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"} {"text": "\nPyTree is a PyTorch package for recursive neural networks that provides highly generic recursive neural network implementations as well as efficient batching methods. 
\nIndicLP makes it easier for developers and researchers to build applications and models in Indian Languages, thus making NLP a more diverse field. \n\nWEB/MOBILE APPLICATIONS POWERED BY PYTORCH\nFirst Place: PyTorch Driving Guardian\nPyTorch Driving Guardian is a tool that monitors driver alertness, emotional state, and potential blind spots on the road. \nSecond Place: Kronia\nKronia is an Android mobile app built to maximize the harvest outputs for farmers. \nThird Place: Heyoh camera for Mac", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"} {"text": "Heyoh is a Mac virtual camera for Zoom and Meets that augments live video by recognizing hand gestures and smiles and shows animated effects to other video participants. \nHonorable Mentions\n\nMamma AI is a tool that helps doctors with the breast cancer identification process by identifying areas likely to have cancer using ultrasonic and x-ray images. \nAgingClock is a tool that predicts biological age first with methylation genome data, then blood test data and eventually with multimodal omics and lifestyle data.\nIris is an open source photos platform which is more of an alternative of Google Photos that includes features such as Listing photos, Detecting Categories, Detecting and Classifying Faces from Photos, Detecting and Clustering by Location and Things in Photos.\n\nPYTORCH RESPONSIBLE AI DEVELOPMENT TOOLS", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"} {"text": "PYTORCH RESPONSIBLE AI DEVELOPMENT TOOLS\nFirst Place: FairWell\nFairWell aims to address model bias on specific groups of people by allowing data scientists to evaluate their dataset and model predictions and take steps to make their datasets more inclusive and their models less biased. \nSecond Place: promp2slip\nPromp2slip is a library that tests the ethics of language models by using natural adversarial texts. \nThird Place: Phorch\nPhorch adversarially attacks the data using FIGA (Feature Importance Guided Attack) and creates 3 different attack sets of data based on certain parameters. These features are utilized to implement adversarial training as a defense against FIGA using neural net architecture in PyTorch.\nHonorable Mentions", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"} {"text": "Honorable Mentions\n\nGreenops helps to measure the footprints of deep learning models at training, testing and evaluating to reduce energy consumption and carbon footprints.\nXaitk-saliency is an open-source, explainable AI toolkit for visual saliency algorithm interfaces and implementations, built for analytic and autonomy applications.\n\nThank you,\nTeam PyTorch", "source": "https://pytorch.org/blog/announcing-the-winners-of-the-2021-pytorch-annual-hackathon/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch 1.8 Release, including Compiler and Distributed Training updates, and New Mobile Tutorials'\nauthor: Team PyTorch \n\nWe are excited to announce the availability of PyTorch 1.8. This release is composed of more than 3,000 commits since 1.7. It includes major updates and new features for compilation, code optimization, frontend APIs for scientific computing, and AMD ROCm support through binaries that are available via pytorch.org. 
It also provides improved features for large-scale training for pipeline and model parallelism, and gradient compression. A few of the highlights include:\n1. Support for doing python to python functional transformations via torch.fx;\n2. Added or stabilized APIs to support FFTs (torch.fft), Linear Algebra functions (torch.linalg), added support for autograd for complex tensors and updates to improve performance for calculating hessians and jacobians; and", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "\nSignificant updates and improvements to distributed training including: Improved NCCL reliability; Pipeline parallelism support; RPC profiling; and support for communication hooks adding gradient compression.\nSee the full release notes here.\n\nAlong with 1.8, we are also releasing major updates to PyTorch libraries including TorchCSPRNG, TorchVision, TorchText and TorchAudio. For more on the library releases, see the post here. As previously noted, features in PyTorch releases are classified as Stable, Beta and Prototype. You can learn more about the definitions in the post here. \nNew and Updated APIs", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "New and Updated APIs\nThe PyTorch 1.8 release brings a host of new and updated API surfaces ranging from additional APIs for NumPy compatibility, also support for ways to improve and scale your code for performance at both inference and training time. Here is a brief summary of the major features coming in this release:\n[Stable] Torch.fft support for high performance NumPy style FFTs\nAs part of PyTorch\u2019s goal to support scientific computing, we have invested in improving our FFT support and with PyTorch 1.8, we are releasing the torch.fft module. This module implements the same functions as NumPy\u2019s np.fft module, but with support for hardware acceleration and autograd.\n* See this blog post for more details\n* Documentation\n[Beta] Support for NumPy style linear algebra functions via torch.linalg", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "The torch.linalg module, modeled after NumPy\u2019s np.linalg module, brings NumPy-style support for common linear algebra operations including Cholesky decompositions, determinants, eigenvalues and many others.\n* Documentation\n[Beta] Python code Transformations with FX\nFX allows you to write transformations of the form transform(input_module : nn.Module) -> nn.Module, where you can feed in a Module instance and get a transformed Module instance out of it.", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "This kind of functionality is applicable in many scenarios. For example, the FX-based Graph Mode Quantization product is releasing as a prototype contemporaneously with FX. Graph Mode Quantization automates the process of quantizing a neural net and does so by leveraging FX\u2019s program capture, analysis and transformation facilities. We are also developing many other transformation products with FX and we are excited to share this powerful toolkit with the community.\nBecause FX transforms consume and produce nn.Module instances, they can be used within many existing PyTorch workflows. 
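As a toy illustration of such a transform, here is a sketch of a hypothetical pass that swaps torch.relu for torch.sigmoid (purely illustrative; it is not a transform shipped with PyTorch):
```python
import torch
import torch.fx as fx
import torch.nn as nn

class ToyModel(nn.Module):
    def forward(self, x):
        return torch.relu(x) + 1.0

def swap_relu_for_sigmoid(module: nn.Module) -> nn.Module:
    gm = fx.symbolic_trace(module)          # capture the program as a graph
    for node in gm.graph.nodes:
        if node.op == 'call_function' and node.target is torch.relu:
            node.target = torch.sigmoid     # rewrite the op in place
    gm.recompile()                          # regenerate the module's forward()
    return gm

transformed = swap_relu_for_sigmoid(ToyModel())
print(transformed(torch.tensor([-1.0, 1.0])))  # sigmoid(x) + 1.0 instead of relu(x) + 1.0
```
The result is again an ordinary nn.Module, so it drops back into existing PyTorch workflows unchanged.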
This includes workflows that, for example, train in Python then deploy via TorchScript.", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "You can read more about FX in the official documentation. You can also find several examples of program transformations implemented using torch.fx here. We are constantly improving FX and invite you to share any feedback you have about the toolkit on the forums or issue tracker.\nWe\u2019d like to acknowledge TorchScript tracing, Apache MXNet hybridize, and more recently JAX as influences for program acquisition via tracing. We\u2019d also like to acknowledge Caffe2, JAX, and TensorFlow as inspiration for the value of simple, directed dataflow graph program representations and transformations over those representations.", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "Distributed Training\nThe PyTorch 1.8 release added a number of new features as well as improvements to reliability and usability. Concretely, support for: Stable level async error/timeout handling was added to improve NCCL reliability; and stable support for RPC based profiling. Additionally, we have added support for pipeline parallelism as well as gradient compression through the use of communication hooks in DDP. Details are below:\n[Beta] Pipeline Parallelism\nAs machine learning models continue to grow in size, traditional Distributed DataParallel (DDP) training no longer scales as these models don\u2019t fit on a single GPU device. The new pipeline parallelism feature provides an easy to use PyTorch API to leverage pipeline parallelism as part of your training loop.\n* RFC", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "\nDocumentation\n\n[Beta] DDP Communication Hook\nThe DDP communication hook is a generic interface to control how to communicate gradients across workers by overriding the vanilla allreduce in DistributedDataParallel. A few built-in communication hooks are provided including PowerSGD, and users can easily apply any of these hooks to optimize communication. Additionally, the communication hook interface can also support user-defined communication strategies for more advanced use cases.\n* RFC\n* Documentation\nAdditional Prototype Features for Distributed Training\nIn addition to the major stable and beta distributed training features in this release, we also have a number of prototype features available in our nightlies to try out and provide feedback. We have linked in the draft docs below for reference:", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "\n(Prototype) ZeroRedundancyOptimizer - Based on and in partnership with the Microsoft DeepSpeed team, this feature helps reduce per-process memory footprint by sharding optimizer states across all participating processes in the ProcessGroup gang. Refer to this documentation for more details. \n(Prototype) Process Group NCCL Send/Recv - The NCCL send/recv API was introduced in v2.7 and this feature adds support for it in NCCL process groups. This feature will provide an option for users to implement collective operations at Python layer instead of C++ layer. 
Refer to this documentation and code examples to learn more.\n", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "\n(Prototype) CUDA-support in RPC using TensorPipe - This feature should bring consequent speed improvements for users of PyTorch RPC with multiple-GPU machines, as TensorPipe will automatically leverage NVLink when available, and avoid costly copies to and from host memory when exchanging GPU tensors between processes. When not on the same machine, TensorPipe will fall back to copying the tensor to host memory and sending it as a regular CPU tensor. This will also improve the user experience as users will be able to treat GPU tensors like regular CPU tensors in their code. Refer to this documentation for more details.\n", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "\n(Prototype) Remote Module - This feature allows users to operate a module on a remote worker like using a local module, where the RPCs are transparent to the user. In the past, this functionality was implemented in an ad-hoc way and overall this feature will improve the usability of model parallelism on PyTorch. Refer to this documentation for more details.\n\nPyTorch Mobile\nSupport for PyTorch Mobile is expanding with a new set of tutorials to help new users launch models on-device quicker and give existing users a tool to get more out of our framework. These include:\n* Image segmentation DeepLabV3 on iOS\n* Image segmentation DeepLabV3 on Android", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "Our new demo apps also include examples of image segmentation, object detection, neural machine translation, question answering, and vision transformers. They are available on both iOS and Android:\n* iOS demo app\n* Android demo app\nIn addition to performance improvements on CPU for MobileNetV3 and other models, we also revamped our Android GPU backend prototype for broader models coverage and faster inferencing:\n* Android tutorial\nLastly, we are launching the PyTorch Mobile Lite Interpreter as a prototype feature in this release. The Lite Interpreter allows users to reduce the runtime binary size. Please try these out and send us your feedback on the PyTorch Forums. All our latest updates can be found on the PyTorch Mobile page\n[Prototype] PyTorch Mobile Lite Interpreter", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "[Prototype] PyTorch Mobile Lite Interpreter\nPyTorch Lite Interpreter is a streamlined version of the PyTorch runtime that can execute PyTorch programs in resource constrained devices, with reduced binary size footprint. This prototype feature reduces binary sizes by up to 70% compared to the current on-device runtime in the current release. \n* iOS/Android Tutorial\nPerformance Optimization\nIn 1.8, we are releasing the support for benchmark utils to enable users to better monitor performance. We are also opening up a new automated quantization API. See the details below:\n(Beta) Benchmark utils\nBenchmark utils allows users to take accurate performance measurements, and provides composable tools to help with both benchmark formulation and post processing. 
This expected to be helpful for contributors to PyTorch to quickly understand how their contributions are impacting PyTorch performance.\nExample:\n```python", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "Example:\nfrom torch.utils.benchmark import Timer\n\nresults = []\nfor num_threads in [1, 2, 4]:\n timer = Timer(\n stmt=\"torch.add(x, y, out=out)\",\n setup=\"\"\"\n n = 1024\n x = torch.ones((n, n))\n y = torch.ones((n, 1))\n out = torch.empty((n, n))\n \"\"\",\n num_threads=num_threads,\n )\n results.append(timer.blocked_autorange(min_run_time=5))\n print(\n f\"{num_threads} thread{'s' if num_threads > 1 else ' ':<4}\"\n f\"{results[-1].median * 1e6:>4.0f} us \" +\n (f\"({results[0].median / results[-1].median:.1f}x)\" if num_threads > 1 else '')\n )\n\n1 thread 376 us \n2 threads 189 us (2.0x)\n4 threads 99 us (3.8x)\n\n\nDocumentation\nTutorial\n\n(Prototype) FX Graph Mode Quantization", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "(Prototype) FX Graph Mode Quantization\nFX Graph Mode Quantization is the new automated quantization API in PyTorch. It improves upon Eager Mode Quantization by adding support for functionals and automating the quantization process, although people might need to refactor the model to make the model compatible with FX Graph Mode Quantization (symbolically traceable with torch.fx).\n* Documentation\n* Tutorials:\n * (Prototype) FX Graph Mode Post Training Dynamic Quantization\n * (Prototype) FX Graph Mode Post Training Static Qunatization\n * (Prototype) FX Graph Mode Quantization User Guide\nHardware Support\n[Beta] Ability to Extend the PyTorch Dispatcher for a new backend in C++", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "In PyTorch 1.8, you can now create new out-of-tree devices that live outside the pytorch/pytorch repo. The tutorial linked below shows how to register your device and keep it in sync with native PyTorch devices.\n* Tutorial\n[Beta] AMD GPU Binaries Now Available\nStarting in PyTorch 1.8, we have added support for ROCm wheels providing an easy onboarding to using AMD GPUs. You can simply go to the standard PyTorch installation selector and choose ROCm as an installation option and execute the provided command.\nThanks for reading, and if you are excited about these updates and want to participate in the future of PyTorch, we encourage you to join the discussion forums and open GitHub issues.\nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/pytorch-1.8-released/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Efficient PyTorch I/O library for Large Datasets, Many Files, Many GPUs'\nauthor: Alex Aizman, Gavin Maltby, Thomas Breuel\n\nData sets are growing bigger every day and GPUs are getting faster. 
This means there are more data sets for deep learning researchers and engineers to train and validate their models.\n\nMany datasets for research in still image recognition are becoming available with 10 million or more images, including OpenImages and Places.\nmillion YouTube videos (YouTube 8M) consume about 300 TB in 720p, used for research in object recognition, video analytics, and action recognition.\nThe Tobacco Corpus consists of about 20 million scanned HD pages, useful for OCR and text analytics research.\n", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "Although the most commonly encountered big data sets right now involve images and videos, big datasets occur in many other domains and involve many other kinds of data types: web pages, financial transactions, network traces, brain scans, etc.\nHowever, working with the large amount of data sets presents a number of challenges:\n\nDataset Size: datasets often exceed the capacity of node-local disk storage, requiring distributed storage systems and efficient network access.\nNumber of Files: datasets often consist of billions of files with uniformly random access patterns, something that often overwhelms both local and network file systems.\nData Rates: training jobs on large datasets often use many GPUs, requiring aggregate I/O bandwidths to the dataset of many GBytes/s; these can only be satisfied by massively parallel I/O systems.\nShuffling and Augmentation: training data needs to be shuffled and augmented prior to training.\n", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "\nScalability: users often want to develop and test on small datasets and then rapidly scale up to large datasets.\n\nTraditional local and network file systems, and even object storage servers, are not designed for these kinds of applications. The WebDataset I/O library for PyTorch, together with the optional AIStore server and Tensorcom RDMA libraries, provide an efficient, simple, and standards-based solution to all these problems. The library is simple enough for day-to-day use, is based on mature open source standards, and is easy to migrate to from existing file-based datasets.", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "Using WebDataset is simple and requires little effort, and it will let you scale up the same code from running local experiments to using hundreds of GPUs on clusters or in the cloud with linearly scalable performance. Even on small problems and on your desktop, it can speed up I/O tenfold and simplifies data management and processing of large datasets. The rest of this blog post tells you how to get started with WebDataset and how it works.\nThe WebDataset Library\nThe WebDataset library provides a simple solution to the challenges listed above. Currently, it is available as a separate library (github.com/tmbdev/webdataset), but it is on track for being incorporated into PyTorch (see RFC 38419). 
The WebDataset implementation is small (about 1500 LOC) and has no external dependencies.", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "Instead of inventing a new format, WebDataset represents large datasets as collections of POSIX tar archive files consisting of the original data files. The WebDataset library can use such tar archives directly for training, without the need for unpacking or local storage.\nWebDataset scales perfectly from small, local datasets to petascale datasets and training on hundreds of GPUs and allows data to be stored on local disk, on web servers, or dedicated file servers. For container-based training, WebDataset eliminates the need for volume plugins or node-local storage. As an additional benefit, datasets need not be unpacked prior to training, simplifying the distribution and use of research data.", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "WebDataset implements PyTorch\u2019s IterableDataset interface and can be used like existing DataLoader-based code. Since data is stored as files inside an archive, existing loading and data augmentation code usually requires minimal modification.\nThe WebDataset library is a complete solution for working with large datasets and distributed training in PyTorch (and also works with TensorFlow, Keras, and DALI via their Python APIs). Since POSIX tar archives are a standard, widely supported format, it is easy to write other tools for manipulating datasets in this format. E.g., the tarp command is written in Go and can shuffle and process training datasets.\nBenefits", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "Benefits\nThe use of sharded, sequentially readable formats is essential for very large datasets. In addition, it has benefits in many other environments. WebDataset provides a solution that scales well from small problems on a desktop machine to very large deep learning problems in clusters or in the cloud. The following table summarizes some of the benefits in different environments.\n{:.table.table-striped.table-bordered}\n | Environment | Benefits of WebDataset |\n| ------------- | ------------- |\n| Local Cluster with AIStore | AIStore can be deployed easily as K8s containers and offers linear scalability and near 100% utilization of network and I/O bandwidth. Suitable for petascale deep learning. |\n| Cloud Computing | WebDataset deep learning jobs can be trained directly against datasets stored in cloud buckets; no volume plugins required. Local and cloud jobs work identically. Suitable for petascale learning. |", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "| Local Cluster with existing distributed FS or object store | WebDataset\u2019s large sequential reads improve performance with existing distributed stores and eliminate the need for dedicated volume plugins. |\n| Educational Environments | WebDatasets can be stored on existing web servers and web caches, and can be accessed directly by students by URL |\n| Training on Workstations from Local Drives | Jobs can start training as the data still downloads. Data doesn\u2019t need to be unpacked for training. 
Ten-fold improvements in I/O performance on hard drives over random access file-based datasets. |\n| All Environments | Datasets are represented in an archival format and contain metadata such as file types. Data is compressed in native formats (JPEG, MP4, etc.). Data management, ETL-style jobs, and data transformations and I/O are simplified and easily parallelized. |\nWe will be adding more examples giving benchmarks and showing how to use WebDataset in these environments over the coming months.\nHigh-Performance", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "High-Performance\nFor high-performance computation on local clusters, the companion open-source AIStore server provides full disk to GPU I/O bandwidth, subject only to hardware constraints. This Bigdata 2019 Paper contains detailed benchmarks and performance measurements. In addition to benchmarks, research projects at NVIDIA and Microsoft have used WebDataset for petascale datasets and billions of training samples.\nBelow is a benchmark of AIStore with WebDataset clients using 12 server nodes with 10 rotational drives each.\n\n\n", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "\nThe left axis shows the aggregate bandwidth from the cluster, while the right scale shows the measured per drive I/O bandwidth. WebDataset and AIStore scale linearly to about 300 clients, at which point they are increasingly limited by the maximum I/O bandwidth available from the rotational drives (about 150 MBytes/s per drive). For comparison, HDFS is shown. HDFS uses a similar approach to AIStore/WebDataset and also exhibits linear scaling up to about 192 clients; at that point, it hits a performance limit of about 120 MBytes/s per drive, and it failed when using more than 1024 clients. Unlike HDFS, the WebDataset-based code just uses standard URLs and HTTP to access data and works identically with local files, with files stored on web servers, and with AIStore. For comparison, NFS in similar experiments delivers about 10-20 MBytes/s per drive.\nStoring Datasets in Tar Archives", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "Storing Datasets in Tar Archives\nThe format used for WebDataset is standard POSIX tar archives, the same archives used for backup and data distribution. 
In order to use the format to store training samples for deep learning, we adopt some simple naming conventions:\n* datasets are POSIX tar archives\n* each training sample consists of adjacent files with the same basename\n* shards are numbered consecutively\nFor example, ImageNet is stored in 1282 separate 100 Mbyte shards with names pythonimagenet-train-000000.tar to imagenet-train-001281.tar, the contents of the first shard are:\n```python\n-r--r--r-- bigdata/bigdata 3 2020-05-08 21:23 n03991062_24866.cls\n-r--r--r-- bigdata/bigdata 108611 2020-05-08 21:23 n03991062_24866.jpg\n-r--r--r-- bigdata/bigdata 3 2020-05-08 21:23 n07749582_9506.cls\n-r--r--r-- bigdata/bigdata 129044 2020-05-08 21:23 n07749582_9506.jpg\n-r--r--r-- bigdata/bigdata 3 2020-05-08 21:23 n03425413_23604.cls", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "-r--r--r-- bigdata/bigdata 106255 2020-05-08 21:23 n03425413_23604.jpg\n-r--r--r-- bigdata/bigdata 3 2020-05-08 21:23 n02795169_27274.cls\n```\nWebDataset datasets can be used directly from local disk, from web servers (hence the name), from cloud storage and object stores, just by changing a URL. WebDataset datasets can be used for training without unpacking, and training can even be carried out on streaming data, with no local storage.\nShuffling during training is important for many deep learning applications, and WebDataset performs shuffling both at the shard level and at the sample level. Splitting of data across multiple workers is performed at the shard level using a user-provided shard_selection function that defaults to a function that splits based on get_worker_info. (WebDataset can be combined with the tensorcom library to offload decompression/data augmentation and provide RDMA and direct-to-GPU loading; see below.)\nCode Sample", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "Code Sample\nHere are some code snippets illustrating the use of WebDataset in a typical PyTorch deep learning application (you can find a full example at http://github.com/tmbdev/pytorch-imagenet-wds.\n```python\nimport webdataset as wds\nimport ...\nsharedurl = \"/imagenet/imagenet-train-{000000..001281}.tar\"\nnormalize = transforms.Normalize(\n mean=[0.485, 0.456, 0.406],\n std=[0.229, 0.224, 0.225])\npreproc = transforms.Compose([\n transforms.RandomResizedCrop(224),\n transforms.RandomHorizontalFlip(),\n transforms.ToTensor(),\n normalize,\n])\ndataset = (\n wds.Dataset(sharedurl)\n .shuffle(1000)\n .decode(\"pil\")\n .rename(image=\"jpg;png\", data=\"json\")\n .map_dict(image=preproc)\n .to_tuple(\"image\", \"data\")\n)\nloader = torch.utils.data.DataLoader(dataset, batch_size=64, num_workers=8)\nfor inputs, targets in loader:\n ...\n ```", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "for inputs, targets in loader:\n ...\n ```\nThis code is nearly identical to the file-based I/O pipeline found in the PyTorch Imagenet example: it creates a preprocessing/augmentation pipeline, instantiates a dataset using that pipeline and a data source location, and then constructs a DataLoader instance from the dataset.\nWebDataset uses a fluent API for a configuration that internally builds up a processing pipeline. 
Without any added processing stages, In this example, WebDataset is used with the PyTorch DataLoader class, which replicates DataSet instances across multiple threads and performs both parallel I/O and parallel data augmentation.\nWebDataset instances themselves just iterate through each training sample as a dictionary:\n```python\nload from a web server using a separate client process\nsharedurl = \"pipe:curl -s http://server/imagenet/imagenet-train-{000000..001281}.tar\"\ndataset = wds.Dataset(sharedurl)\nfor sample in dataset:\n # sample[\"jpg\"] contains the raw image data", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "sample[\"jpg\"] contains the raw image data\n# sample[\"cls\"] contains the class\n ...\n ```\nFor a general introduction to how we handle large scale training with WebDataset, see these YouTube videos.\nRelated Software\n\nAIStore is an open-source object store capable of full-bandwidth disk-to-GPU data delivery (meaning that if you have 1000 rotational drives with 200 MB/s read speed, AIStore actually delivers an aggregate bandwidth of 200 GB/s to the GPUs). AIStore is fully compatible with WebDataset as a client, and in addition understands the WebDataset format, permitting it to perform shuffling, sorting, ETL, and some map-reduce operations directly in the storage system. AIStore can be thought of as a remix of a distributed object store, a network file system, a distributed database, and a GPU-accelerated map-reduce implementation.\n", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "\n\ntarp is a small command-line program for splitting, merging, shuffling, and processing tar archives and WebDataset datasets.\n\n\ntensorcom is a library supporting distributed data augmentation and RDMA to GPU.\n\n\npytorch-imagenet-wds contains an example of how to use WebDataset with ImageNet, based on the PyTorch ImageNet example.\n\n\nBigdata 2019 Paper with Benchmarks\n\n\nCheck out the library and provide your feedback for RFC 38419.", "source": "https://pytorch.org/blog/efficient-pytorch-io-library-for-large-datasets-many-files-many-gpus/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'PyTorch 0.4.0 Migration Guide'\nredirect_from: /2018/04/22/0_4_0-migration-guide.html\n\nWelcome to the migration guide for PyTorch 0.4.0. In this release we introduced many exciting new features and critical bug fixes, with the goal of providing users a better and cleaner interface. In this guide, we will cover the most important changes in migrating existing code from previous versions:\n\nTensors and Variables have merged\nSupport for 0-dimensional (scalar) Tensors\nDeprecation of the volatile flag\ndtypes, devices, and Numpy-style Tensor creation functions\nWriting device-agnostic code\nNew edge-case constraints on names of submodules, parameters, and buffers in nn.Module\n\nMerging Tensor and Variable and classes", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "torch.Tensor and torch.autograd.Variable are now the same class. More precisely, torch.Tensor is capable of tracking history and behaves like the old Variable; Variable wrapping continues to work as before but returns an object of type torch.Tensor. 
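A quick, hypothetical snippet illustrating this behavior:

```python
import torch
from torch.autograd import Variable

t = torch.ones(2)
v = Variable(t)                  # wrapping still works for backward compatibility...
print(type(v) is torch.Tensor)   # True: ...but it simply returns a torch.Tensor
```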
This means that you don't need the Variable wrapper everywhere in your code anymore.\nThe type() of a Tensor has changed\nNote also that the type() of a Tensor no longer reflects the data type. Use isinstance() or x.type()instead:\n>>> x = torch.DoubleTensor([1, 1, 1])\n>>> print(type(x)) # was torch.DoubleTensor\n\"\"\n>>> print(x.type()) # OK: 'torch.DoubleTensor'\n'torch.DoubleTensor'\n>>> print(isinstance(x, torch.DoubleTensor)) # OK: True\nTrue\n", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "True\n```\nWhen does autograd start tracking history now?\nrequires_grad, the central flag for autograd, is now an attribute on Tensors. The same rules previously used for Variables applies to Tensors; autograd starts tracking history when any input Tensor of an operation has requires_grad=True. For example,\n```python\n\n\n\nx = torch.ones(1) # create a tensor with requires_grad=False (default)\nx.requires_grad\nFalse\ny = torch.ones(1) # another tensor with requires_grad=False\nz = x + y\nboth inputs have requires_grad=False. so does the output\nz.requires_grad\nFalse\nthen autograd won't track this computation. let's verify!\nz.backward()\nRuntimeError: element 0 of tensors does not require grad and does not have a grad_fn\nnow create a tensor with requires_grad=True\nw = torch.ones(1, requires_grad=True)\n\n\n", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "\n\n\nw = torch.ones(1, requires_grad=True)\nw.requires_grad\nTrue\nadd to the previous result that has require_grad=False\ntotal = w + z\nthe total sum now requires grad!\ntotal.requires_grad\nTrue\nautograd can compute the gradients as well\ntotal.backward()\nw.grad\ntensor([ 1.])\nand no computation is wasted to compute gradients for x, y and z, which don't require grad\nz.grad == x.grad == y.grad == None\nTrue\n\n\n\n\n#### Manipulating `requires_grad` flag\n\nOther than directly setting the attribute, you can change this flag `in-place` using [`my_tensor.requires_grad_()`](https://pytorch.org/docs/0.4.0/tensors.html#torch.Tensor.requires_grad_), or, as in the above example, at creation time by passing it in as an argument (default is `False`), e.g.,\n\n```python\n>>> existing_tensor.requires_grad_()\n>>> existing_tensor.requires_grad\nTrue\n>>> my_tensor = torch.zeros(3, 4, requires_grad=True)\n>>> my_tensor.requires_grad\nTrue\n\nWhat about .data?", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "True\n```\nWhat about .data?\n.data was the primary way to get the underlying Tensor from a Variable. After this merge, calling y = x.data still has similar semantics. So y will be a Tensor that shares the same data with x, is unrelated with the computation history of x, and has requires_grad=False.\nHowever, .data can be unsafe in some cases. Any changes on x.data wouldn't be tracked by autograd, and the computed gradients would be incorrect if x is needed in a backward pass. 
A safer alternative is to use x.detach(), which also returns a Tensor that shares data with requires_grad=False, but will have its in-place changes reported by autograd if x is needed in backward.\nHere is an example of the difference between .data and x.detach() (and why we recommend using detach in general).\nIf you use Tensor.detach(), the gradient computation is guaranteed to be correct.\n```python", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": ">>> a = torch.tensor([1,2,3.], requires_grad = True)\n>>> out = a.sigmoid()\n>>> c = out.detach()\n>>> c.zero_()\ntensor([ 0., 0., 0.])\n\n>>> out # modified by c.zero_() !!\ntensor([ 0., 0., 0.])\n\n>>> out.sum().backward() # Requires the original value of out, but that was overwritten by c.zero_()\nRuntimeError: one of the variables needed for gradient computation has been modified by an\n\nHowever, using Tensor.data can be unsafe and can easily result in incorrect gradients when a tensor is required for gradient computation but modified in-place.\n>>> a = torch.tensor([1,2,3.], requires_grad = True)\n>>> out = a.sigmoid()\n>>> c = out.data\n>>> c.zero_()\ntensor([ 0., 0., 0.])\n\n>>> out # out was modified by c.zero_()\ntensor([ 0., 0., 0.])\n\n>>> out.sum().backward()\n>>> a.grad # The result is very, very wrong because `out` changed!\ntensor([ 0., 0., 0.])\n\nSupport for 0-dimensional (scalar) Tensors", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "Support for 0-dimensional (scalar) Tensors\nPreviously, indexing into a Tensor vector (1-dimensional tensor) gave a Python number but indexing into a Variable vector gave (inconsistently!) a vector of size (1,)! Similar behavior existed with reduction functions, e.g. tensor.sum() would return a Python number, but variable.sum() would return a vector of size (1,).\nFortunately, this release introduces proper scalar (0-dimensional tensor) support in PyTorch! Scalars can be created using the new torch.tensor function (which will be explained in more detail later; for now just think of it as the PyTorch equivalent of numpy.array). Now you can do things like:\n```python\n\n\n\ntorch.tensor(3.1416) # create a scalar directly\ntensor(3.1416)\ntorch.tensor(3.1416).size() # scalar is 0-dimensional\ntorch.Size([])\ntorch.tensor([3]).size() # compare to a vector of size 1\ntorch.Size([1])\nvector = torch.arange(2, 6) # this is a vector\nvector\ntensor([ 2., 3., 4., 5.])\n\n\n", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "\n\n\nvector\ntensor([ 2., 3., 4., 5.])\nvector.size()\ntorch.Size([4])\nvector[3] # indexing into a vector gives a scalar\ntensor(5.)\nvector[3].item() # .item() gives the value as a Python number\n5.0\nmysum = torch.tensor([2, 3]).sum()\nmysum\ntensor(5)\nmysum.size()\ntorch.Size([])\n```\n\n\n\nAccumulating losses\nConsider the widely used pattern total_loss += loss.data[0]. Before 0.4.0. loss was a Variable wrapping a tensor of size (1,), but in 0.4.0 loss is now a scalar and has 0 dimensions. Indexing into a scalar doesn't make sense (it gives a warning now, but will be a hard error in 0.5.0). Use loss.item() to get the Python number from a scalar.", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "Note that if you don't convert to a Python number when accumulating losses, you may find increased memory usage in your program. 
This is because the right-hand-side of the above expression used to be a Python float, while it is now a zero-dim Tensor. The total loss is thus accumulating Tensors and their gradient history, which may keep around large autograd graphs for much longer than necessary.\nDeprecation of volatile flag\nThe volatile flag is now deprecated and has no effect. Previously, any computation that involves a Variable with volatile=True wouldn't be tracked by autograd. This has now been replaced by a set of more flexible context managers including torch.no_grad(), torch.set_grad_enabled(grad_mode), and others.\n```python\n\n\n\nx = torch.zeros(1, requires_grad=True)\nwith torch.no_grad():\n... y = x * 2\ny.requires_grad\nFalse\nis_train = False\n\n\n", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "\n\n\ny.requires_grad\nFalse\nis_train = False\nwith torch.set_grad_enabled(is_train):\n... y = x * 2\ny.requires_grad\nFalse\ntorch.set_grad_enabled(True) # this can also be used as a function\ny = x * 2\ny.requires_grad\nTrue\ntorch.set_grad_enabled(False)\ny = x * 2\ny.requires_grad\nFalse\n```\n\n\n\ndtypes, devices and NumPy-style creation functions\nIn previous versions of PyTorch, we used to specify data type (e.g. float vs double), device type (cpu vs cuda) and layout (dense vs sparse) together as a \"tensor type\". For example, torch.cuda.sparse.DoubleTensor was the Tensor type representing the double data type, living on CUDA devices, and with COO sparse tensor layout.", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "In this release, we introduce torch.dtype, torch.device and torch.layout classes to allow better management of these properties via NumPy-style creation functions.\ntorch.dtype\nBelow is a complete list of available torch.dtypes (data types) and their corresponding tensor types.\n\n\n\nData\ntype torch.dtype\nTensor types\n\n\n\n\n32-bit floating point\ntorch.float32 or torch.float\ntorch.*.FloatTensor\n\n\n64-bit floating point\ntorch.float64 or torch.double\ntorch.*.DoubleTensor\n\n\n16-bit floating point\ntorch.float16 or torch.half\ntorch.*.HalfTensor\n\n\n", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "| 8-bit integer (unsigned) | torch.uint8 | torch.*.ByteTensor\n| 8-bit integer (signed) | torch.int8 | torch.*.CharTensor\n| 16-bit integer (signed) | torch.int16 or torch.short | torch.*.ShortTensor\n| 32-bit integer (signed) | torch.int32 or torch.int | torch.*.IntTensor\n| 64-bit integer (signed) | torch.int64 or torch.long | torch.*.LongTensor\nThe dtype of a tensor can be access via its dtype attribute.\ntorch.device\nA torch.device contains a device type ('cpu' or 'cuda') and optional device ordinal (id) for the device type. It can be initialized with torch.device('{device_type}') or torch.device('{device_type}:{device_ordinal}').", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "If the device ordinal is not present, this represents the current device for the device type; e.g., torch.device('cuda') is equivalent to torch.device('cuda:X') where X is the result of torch.cuda.current_device().\nThe device of a tensor can be accessed via its device attribute.\ntorch.layout\ntorch.layout represents the data layout of a Tensor. 
Currently torch.strided (dense tensors, the default) and torch.sparse_coo (sparse tensors with COO format) are supported.\nThe layout of a tensor can be access via its layout attribute.\nCreating Tensors", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "Creating Tensors\nMethods that create a Tensor now also take in dtype, device, layout, and requires_grad options to specify the desired attributes on the returned Tensor. For example,\n>>> device = torch.device(\"cuda:1\")\n>>> x = torch.randn(3, 3, dtype=torch.float64, device=device)\ntensor([[-0.6344, 0.8562, -1.2758],\n [ 0.8414, 1.7962, 1.0589],\n [-0.1369, -1.0462, -0.4373]], dtype=torch.float64, device='cuda:1')\n>>> x.requires_grad # default is False\nFalse\n>>> x = torch.zeros(3, requires_grad=True)\n>>> x.requires_grad\nTrue\n\ntorch.tensor(data, ...)", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "torch.tensor is one of the newly added tensor creation methods. It takes in array-like data of all kinds and copies the contained values into a new Tensor. As mentioned earlier, torch.tensor is the PyTorch equivalent of NumPy's numpy.arrayconstructor. Unlike the torch.*Tensor methods, you can also create zero-dimensional Tensors (aka scalars) this way (a single python number is treated as a Size in the torch.*Tensor methods). Moreover, if a dtype argument isn't given, it will infer the suitable dtype given the data. It is the recommended way to create a tensor from existing data like a Python list. For example,\n```python\n\n\n\ncuda = torch.device(\"cuda\")\ntorch.tensor([[1], [2], [3]], dtype=torch.half, device=cuda)\ntensor([[ 1],\n [ 2],\n [ 3]], device='cuda:0')\ntorch.tensor(1) # scalar\n\n\n", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "\n\n\ntorch.tensor(1) # scalar\ntensor(1)\ntorch.tensor([1, 2.3]).dtype # type inferece\ntorch.float32\ntorch.tensor([1, 2]).dtype # type inferece\ntorch.int64\n```\n\n\n\nWe've also added more tensor creation methods. Some of them have torch.*_like and/or tensor.new_* variants.\n\ntorch.*_like takes in an input Tensor instead of a shape. It returns a Tensor with same attributes as the input Tensor by default unless otherwise specified:\n\n```python\n\n\n\nx = torch.randn(3, dtype=torch.float64)\ntorch.zeros_like(x)\n tensor([ 0., 0., 0.], dtype=torch.float64)\ntorch.zeros_like(x, dtype=torch.int)\n tensor([ 0, 0, 0], dtype=torch.int32)\n ```\n\n\n\n\ntensor.new_* can also create Tensors with same attributes as tensor, but it always takes in a shape argument:\n\n```python\n\n\n\nx = torch.randn(3, dtype=torch.float64)\nx.new_ones(2)\n tensor([ 1., 1.], dtype=torch.float64)\nx.new_ones(4, dtype=torch.int)\n tensor([ 1, 1, 1, 1], dtype=torch.int32)\n ```\n\n\n", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "```\nTo specify the desired shape, you can either use a tuple (e.g., torch.zeros((2, 3))) or variable arguments (e.g., torch.zeros(2, 3)) in most cases.\n\n\n\nName\nReturned Tensor\ntorch.*_like variant\ntensor.new_* variant\n\n\n\n\ntorch.empty\nuninitialized memory\n\u2714\n\u2714\n\n\ntorch.zeros\nall zeros\n\u2714\n\u2714\n\n\ntorch.ones\nall ones\n\u2714\n\u2714\n\n\ntorch.full\nfilled with a given value\n\u2714\n\u2714\n\n\ntorch.rand\ni.i.d. continuous Uniform[0, 1)\n\u2714\n\n\n\ntorch.randn\ni.i.d. 
Normal(0, 1)\n\u2714\n\n\n\n", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "| torch.randint | i.i.d. discrete Uniform in given range | \u2714 |\n| torch.randperm | random permutation of {0, 1, ..., n - 1} |\n| torch.tensor | copied from existing data (list, NumPy ndarray, etc.) | | \u2714 |\n| torch.from_numpy* | from NumPy ndarray (sharing storage without copying) |\n| torch.arange, torch.range, and torch.linspace | uniformly spaced values in a given range |\n| torch.logspace | logarithmically spaced values in a given range |\n| torch.eye | identity matrix |", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "*: torch.from_numpy only takes in a NumPy ndarray as its input argument.\nWriting device-agnostic code\nPrevious versions of PyTorch made it difficult to write code that was device agnostic (i.e. that could run on both CUDA-enabled and CPU-only machines without modification).\nPyTorch 0.4.0 makes this easier in two ways:\n\nThe device attribute of a Tensor gives the torch.device for all Tensors (get_device only works for CUDA tensors)\nThe to method of Tensors and Modules can be used to easily move objects to different devices (instead of having to call cpu() or cuda() based on the context)\n\nWe recommend the following pattern:\n```python\nat beginning of the script\ndevice = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")\n...\nthen whenever you get a new Tensor or Module\nthis won't copy if they are already on the desired device", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "input = data.to(device)\nmodel = MyModule(...).to(device)\n```\nNew edge-case constraints on names of submodules, parameters, and buffers in nn.Module\nname that is an empty string or contains \".\" is no longer permitted in module.add_module(name, value), module.add_parameter(name, value) or module.add_buffer(name, value) because such names may cause lost data in the state_dict. If you are loading a checkpoint for modules containing such names, please update the module definition and patch the state_dict before loading it.\nCode Samples (Putting it all together)\nTo get a flavor of the overall recommended changes in 0.4.0, let's look at a quick example for a common code pattern in both 0.3.1 and 0.4.0:\n\n0.3.1 (old):\n ```python\n model = MyRNN()\n if use_cuda:\n model = model.cuda()\n\n# train\n total_loss = 0\n for input, target in train_loader:\n input, target = Variable(input), Variable(target)\n hidden = Variable(torch.zeros(*h_shape)) # init hidden\n if use_cuda:", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "if use_cuda:\n input, target, hidden = input.cuda(), target.cuda(), hidden.cuda()\n ... # get loss and optimize\n total_loss += loss.data[0]\n# evaluate\n for input, target in test_loader:\n input = Variable(input, volatile=True)\n if use_cuda:\n ...\n ...\n ```\n\n0.4.0 (new):\n ```python\n # torch.device object used throughout this script\n device = torch.device(\"cuda\" if use_cuda else \"cpu\")\n\nmodel = MyRNN().to(device)\n# train\n total_loss = 0\n for input, target in train_loader:\n input, target = input.to(device), target.to(device)\n hidden = input.new_zeros(*h_shape) # has the same device & dtype as input\n ... 
# get loss and optimize\n total_loss += loss.item() # get Python number from 1-element Tensor\n# evaluate\n with torch.no_grad(): # operations inside don't track history\n for input, target in test_loader:\n ...\n ```", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "...\n ```\nThank you for reading! Please refer to our documentation and release notes for more details.\nHappy PyTorch-ing!", "source": "https://pytorch.org/blog/pytorch-0_4_0-migration-guide/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Efficient Large-Scale Training with Pytorch FSDP and AWS\"\nauthor: Less Wright, Hamid Shojanazeri, Geeta Chauhan\nfeatured-img: \"assets/images/largeblog_index_1.png\"\n\nCutting-edge AI models are becoming extremely large. The cost and overhead of training these models is increasing rapidly, and involves large amounts of engineering and guesswork to find the right training regime. FSDP reduces these costs significantly by enabling you to train much larger models with the same amount of resources. FSDP lowers the memory footprint on your GPUs, and is usable via a lightweight configuration that requires substantially less effort, typically with just a few lines of code.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "The main performance gains in FSDP come from maximizing the overlap between network communication and model computation, and eliminating the memory redundancy inherent in traditional data parallel training (DDP). PyTorch FSDP can train models approximately 4x larger on the same server resources as DDP and 20x larger if we combine activation checkpointing and activation offloading.\nSince PyTorch 1.12, FSDP is now in beta status, and has added a number of new features that can be tuned to further accelerate your model training. \nIn this series of blog posts, we will explain multiple performance optimizations you can run with FSDP to boost your distributed training speed and model sizes within the context of your available server resources. We use the HuggingFace T5 3B, 11B and DeepVit, in fine-tuning mode, as the running examples throughout the series.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "As a preview of some of the optimizations discussed in this series, we show the before and after performance scaled in Flops below (Note that these results can vary based on your server resources and model architecture). \n\n\n\n *T5 3B Performance measured on AWS A100 and A10 servers. Original with no optimizations and Tuned with the applied optimization \n\n\n\n *T5 11B Performance measured on A100 servers. Original with no optimizations and Tuned with the applied optimization ", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "In this first post, we will provide a quick overview of FSDP and how it can make training large- scale AI models more efficient. We will highlight briefly the multiple performance options available, and dive deeper into the details on these in upcoming posts. We will then conclude with an overview on how to leverage AWS parallel cluster for large- scale training with FSDP. 
\n| Optimization | T5 Model | Throughput Improvement |\n| ------------- | ------------- | ------------- |\n| Mixed Precision | 3 B | 5x |\n| | 11 B | 10x |\n| Activation Checkpointing (AC) | 3 B | 10x |\n| | 11 B | 100x |\n| Transformer Wrapping Policy | 3 B | 2x |\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "| Transformer Wrapping Policy | 11 B | Unable to run the experiment without the Transformer wrapping policy. |\n| Full Shard Strategy | 3 B | 1.5x |\n| | 11 B | Not able to run with Zero2 |\n\nPerformance optimization gains on T5 models over non-optimized.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "In our experiments with the T5 3B model, using the transformer wrapping policy resulted in >2x higher throughput measured in TFLOPS versus the default wrapping policy. Activation checkpointing resulted in a 10x improvement by reinvesting the freed memory from the checkpoints into a larger batch size. Mixed precision with BFloat16 resulted in a ~5x improvement versus FP32, and finally the full sharding strategy versus Zero2 (DDP) resulted in a 1.5x improvement.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "We ran similar experiments for a larger model, T5 11B, but the larger model size resulted in some changes to the experiment space. Specifically, we found that two optimizations, the transformer wrapping policy and activation checkpointing, were needed to enable us to run these experiments on 3 nodes (each node had 8 A100 GPUs with 80 GB of memory). With these optimizations, we could fit a batch size of 50 and get higher throughput than when removing either one of them. Thus, rather than toggling a single optimization on/off as with the 3B model, the larger model experiments were done with one of the three optimizations turned on/off while always running the other two, in order to allow a usable batch size for both test states of each item.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "Based on TFLOP comparisons with the 11B model, we saw even more payoff from the optimizations. Mixed precision (~10x improvement) and activation checkpointing (~100x improvement) had a much larger impact with the 11B model than with the 3B parameter model. With mixed precision we could fit ~2x larger batch sizes, and with activation checkpointing >15x larger batch sizes (from 3 with no activation checkpointing to 50 with it), which translated into large throughput improvements.\nWe have also observed that for these larger (>3B) models, using the Zero2 sharding strategy leaves minimal room in memory for the batch data, forcing very small batch sizes (e.g. 1-2); this essentially makes the full sharding strategy a necessity for fitting larger batch sizes.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "Note - this tutorial assumes a basic understanding of FSDP. To learn more about the basics of FSDP, please refer to the getting started and advanced FSDP tutorials.\nWhat is FSDP? How does it make Large-Scale Training More Efficient\nFSDP expands upon distributed data parallel by parallelizing not just the data, but also the model parameters, the optimizer states and the gradients associated with the model. 
Specifically - each GPU only stores a subset of the entire model and the associated subset of optimizer states and gradients.\nTo show the evolution of distributed training, we can start from the beginning, where AI models were simply trained on a single GPU.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "DDP (Distributed Data Parallel) was the initial step up from training with only a single GPU, and was an effort to address the data and model size growth, where multiple GPUs each housed their own copy of the same model. The gain here is that the data for each batch could be split and processed independently on each GPU, all at the same time,thus parallelizing the processing of the data set and increasing training speed by the increasing number of GPUs. The tradeoff is the need to communicate the gradients between each GPU to synchronize the models after the backward pass.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "FSDP expands on scaling models by removing the redundancy of optimizer calculations and state storage, as well as gradient and memory storage of model parameters that are present in DDP (DDP = Distributed Data Parallel). This redundancy reduction, along with increased communication overlap where model parameter communication takes place at the same time as model computation, is what allows FSDP to train much larger models with the same resources as DDP.\nA key point is that this efficiency also allows for AI models that are larger than a single GPU to be trained. The model size available for training is now increased to the aggregate memory of all GPUs, rather than the size of a single GPU. (And as a point of note, FSDP can go beyond aggregated GPU memory by leveraging CPU memory as well, though we will not directly cover this aspect here).", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "As discussed in a previous blog post, with DDP the largest model that we could train on 32, A100 gpus with 40 GB memory (4 nodes) was up to 3B parameters, and batch size of 128, with the help of activation checkpointing. By contrast, using FSDP we were able to train up to 81B model size, combining activation checkpointing, along with activation and parameter offloading. In another experiment, we benchmarked a 1T parameter model with FSDP using 512 gpus.\n\n\n\nFor intuition on the parameter level workings of FSDP, below we show an animation detailing how the model parameters are sharded and communicated assuming a two GPU scenario and a simple 8 parameter model:\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "\n\n\nAbove - the animations walk through the steps involved with the initial sharding of the model amongst ranks, and we start the all_gathers and forward pass\n\n\n\nWe continue through the model with the forward pass. After each FSDP unit completes, non-locally owned params are dropped to free memory, and optionally activations can be checkpointed. This continues until we finish the forward pass and compute the loss.\n\n\n\nDuring the backward pass, another all_gather is used to load the parameters and the gradients are computed. 
These gradients are then reduce_scattered so that the local owners of each param can aggregate and prepare to update the weights.\n\n\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "\nFinally, each rank passes the summed gradients through the optimizer states and updates the weights to complete the mini-batch.\nWith the model now distributed across the entire set of available GPUs, the logical question is how data moves through the model given this sharding of model parameters.\nThis is accomplished by FSDP coordinating with all GPUs to effectively share (communicate) the respective parts of the model. The model is decomposed into FSDP units and parameters within each unit are flattened and then sharded across all GPUs. Within each FSDP unit, GPU\u2019s are assigned interleaving ownership of individual model parameters.\nBy interleaving, we mean the following - assuming 2 gpus with an id of 1 and 2, the FSDP unit ownership pattern would be [12121212], rather than a contiguous chunk of [111222].", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "During training, an all_gather is initiated and the locally owned model parameters within a FSDP unit are shared by the owner GPU with the other non-owners, when they need it, on a \u2018just in time\u2019 type basis. FSDP prefetches parameters to overlap all_gather communication with computation. \nWhen those requested parameters arrive, the GPU uses the delivered parameters, in combination with the parameters it already owns, to create a fully populated FSDP unit. Thus there is a moment where each GPU hits peak memory usage while holding a fully populated FSDP unit.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "It then processes the data through the FSDP unit, and drops the parameters it received from other GPU\u2019s to free up memory for the next unit\u2026the process continues over and over proceeding through the entire model to complete the forward pass.The process is then repeated (in general) for the backward pass.(note - this is a simplified version for understanding..there is additional complexity but this should help construct a basic mental model of the FSDP process).", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "This eliminates much of the memory redundancy present in DDP, but imposes the cost of higher amounts of network communication to shuttle these requested parameters back and forth amongst all the GPUs.Overlapping the communication timing with the computation taking place is the basis of many of the performance improvements we\u2019ll discuss in this series. The key gains are frequently based on the fact that communication can often take place at the same time as computation.As you can surmise, having high communication speed is vital for FSDP performance.\nHow do I optimize my training with FSDP?\nThere are four main performance improvements we will cover - the transformer wrapper, activation checkpointing, mixed precision, and selecting the proper sharding strategy. 
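The last of these, the sharding strategy, is chosen directly on the FSDP constructor; as a brief, hedged sketch (assuming the process group is already initialized and `model` is defined):

```python
import torch
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import ShardingStrategy

# FULL_SHARD shards parameters, gradients and optimizer states (maximum memory savings);
# SHARD_GRAD_OP (Zero2-style) keeps parameters unsharded after the forward pass and shards
# only gradients and optimizer states, trading memory for less communication.
model = FSDP(
    model,
    sharding_strategy=ShardingStrategy.FULL_SHARD,  # or ShardingStrategy.SHARD_GRAD_OP
    device_id=torch.cuda.current_device(),
)
```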
The flowchart below will help as a checklist for tuning options that we will discuss in this post.\n\n\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "\nWrapping policy - for transformers, use Transformer wrapping policy\nThe first performance optimization is leveraging the FSDP transformer wrapper for transformer models. \nOne of the pre-defined wrapping policy is size_based_autowrap_policy. With size_based_autowrap_policy, FSDP will traverse the module structure from bottom to top, a new FSDP unit will be created once the current unit has at least the min_num_params specified within the size policy (this defaults to 1e8, or 100M). If the module can not be created as an FSDP unit, FSDP will continue to check its parent module. This size based wrapping policy may not be ideal for some model structures, PyTorch distributed team is actively working on a new default wrapping policy in the next release which is based on size and also module execution order, users can simply tune the size and achieve the optimized performance.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "In the current release, you can greatly improve your performance when running Transformer models by using the \u2018transformer wrapper\u2019. You will need to provide the appropriate layer class for your model. Here, layer class is the class that houses the Multi-Head Attention and Feed Forward Network.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "FSDP will then form the FSDP units around the layer class rather than arbitrary breaks based on parameter size. By sharding the model around layer classes that are uniformly repeated within the transformer, FSDP can create uniform FSDP units that better balance the overlap of computation and communication. By contrast, size based wrapping can produce very uneven or skewed shards for models, which then have uneven matching of compute vs communication overlap. As discussed earlier, the main driver of FSDP high performance is the overlap of communication and computation, and hence why the Transformer wrapper provides improved performance. Note that the Transformer wrapper can also be used for non-transformer models if these models have a list of uniform layers.\nLet\u2019s compare the performance difference on a T5, 3B parameter model when running under the default wrapper and the transformer wrapper.\nFor default wrapping, we don\u2019t need to take any action - we simply pass the model to FSDP as shown:\n```python", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "model = FSDP(\n model,\n device_id=torch.cuda.current_device(),\n )\n\nIn this case FSDP will simply wrap the whole model in a single FSDP unit.\nRunning on an NVIDIA A100-SXM4\u201340GB with 8 GPUs, we are able to reach 2.3 TFlops and 95% GPU memory utilization with a batch size of 14.\nHowever, since T5 is a transformer model, we are better served to leverage the transformer wrapper for this model. \nTo use that, we need to isolate the layer class for the transformer, and then pass it in to create our transformer wrapper. 
\nfrom transformers.models.t5.modeling_t5 import T5Block\n\nAnd now we can create our Transformer wrapper: \ntransformer_auto_wrapper_policy = functools.partial(\n transformer_auto_wrap_policy,\n transformer_layer_cls={\n T5Block, # < ---- Your Transformer layer class\n },\n )\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "},\n )\n\nWith our model aware wrapper ready, we can initialize FSDP:\n\n```python\n# invoke FSDP with your transformer wrapper policy:\nmodel = FSDP(\n model,\n auto_wrap_policy=transformer_auto_wrapper_policy,\n device_id=torch.cuda.current_device(), # streaming init\n )\n\nRunning this wrapped model, we can see some substantial performance gains.We can fit nearly double the batch size, going to 28, and with better memory and communication efficiency, we see a TFlops increase to 5.07 from 2.3.\nThus, we\u2019ve increased our training throughput by over 200% (2.19x) due to providing greater model info to FSDP! The transformer wrapping policy results in more fine-grained and balanced FSDP units each holding a layer class, which leads to a more effective communication-computation overlap.\n\n\n\nAbove: Graphical comparison of TFlops based on wrapper type", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "If you are training a Transformer model, it pays to configure your training with FSDP using the transformer wrapper. For more information on how to isolate your layer class, please see our in depth video on Transformer wrapping here, where we walk through a number of transformers showing where the layer class can be found.\nMixed precision - use BF16 if you have an Ampere architecture GPU\nFSDP supports a flexible mixed precision policy that gives you granular control over parameters, gradients and buffer data types. This lets you easily leverage BFloat16 or FP16 to increase your training speed by up to 70%. \n*Note that BFloat 16 is only available on Ampere type GPUs. On AWS this is available with p4dn and g5 instances.\nBy way of comparison, we can show a 77% speed improvement when comparing fully tuned BFloat16 vs FP32 on an 8B DeepVit model.\n\n\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "\nWe have obtained even greater acceleration using BFloat16 in fine-tuning a 3B HuggingFace T5 model as shown in the figures below. 
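Since BFloat16 support depends on the hardware, it can also be worth checking for it at runtime before enabling a BF16 policy; a small sketch of such a guard:

```python
import torch

# BF16 requires Ampere-class GPUs (e.g. A100) and a sufficiently recent CUDA/NCCL stack;
# fall back to FP32 (i.e., pass no mixed_precision policy) when it is not available.
bf16_ready = torch.cuda.is_available() and torch.cuda.is_bf16_supported()
print(f"BF16 supported on this machine: {bf16_ready}")
```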
We observed that because of the lower precision the validation loss of BFloat16 is slightly behind in the first few epochs, but it is able to catch up and results in the same final accuracy as FP32.\n\n\n\nTo use mixed precision, we create a policy with our desired data types, and pass it in during the FSDP initialization.\nTo create our policy, we need to import the MixedPrecision class, and then define our custom policy with our desired dtypes:\n```python\nfrom torch.distributed.fsdp import MixedPrecision\nbfSixteen = MixedPrecision(\n # Parameter precision.\n param_dtype=torch.bfloat16,\n # Gradient communication precision.\n reduce_dtype=torch.bfloat16,\n # Buffer precision.\n buffer_dtype=torch.bfloat16,\n)\nmodel = FSDP(\n model,\n auto_wrap_policy=transformer_auto_wrapper_policy,", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": " mixed_precision=bfSixteen)\n```\n\nYou can mix and match the precision for parameters, gradients and buffers as you prefer:\n\n```python\ncomboPolicy = MixedPrecision(\n # Param precision\n param_dtype=torch.bfloat16,\n # Gradient communication precision.\n reduce_dtype=torch.float32,\n # Buffer precision.\n buffer_dtype=torch.float32,\n )\n```\n\nFor training with FP16, you will need to also use the ShardedGradScaler, which we will cover in subsequent posts. For BFloat16, no gradient scaler is needed, so the policy above is a drop-in replacement.\nAnyPrecision Optimizer - going beyond mixed precision with full BF16 training", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "Mixed precision training, both in FSDP and elsewhere, maintains the working weights in the reduced datatype (BF16 or FP16) while keeping the master weights in full FP32. The reason for keeping the master weights in FP32 is that running in pure BF16 will result in \u2018weight stagnation\u2019, where very small weight updates are lost due to the lower precision, and accuracy flatlines over time while FP32 weights can continue to improve from these small updates.\nIn order to resolve this dilemma, we can use the new AnyPrecision optimizer available in TorchDistX (Torch Distributed Experimental) that allows you to successfully train and keep the master weights in pure BF16 instead of FP32. In addition, unlike the typical storage of optimizer states in FP32, AnyPrecision is able to maintain states in pure BF16 as well.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "AnyPrecision enables pure BF16 training by maintaining an extra buffer that tracks the precision lost during the weight updates and re-applies it during the next update, effectively resolving the weight stagnation issue without requiring FP32. \nAs a comparison of the throughput gains available with pure BF16 training using AnyPrecision, we ran experiments using FSDP with the T5 11B model with regular FP32 training, Mixed Precision training with BF16, and pure BF16 training using the AnyPrecision optimizer on 3 nodes with A100 GPUs as mentioned previously. 
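To experiment with it yourself, the optimizer can be swapped in where you would normally construct AdamW. A minimal sketch, assuming TorchDistX is installed and exposes AnyPrecisionAdamW under torchdistx.optimizers (the exact keyword arguments may differ across versions, so treat these as illustrative):\n```python\nfrom torchdistx.optimizers import AnyPrecisionAdamW  # assumes a TorchDistX install\n\noptimizer = AnyPrecisionAdamW(\n    model.parameters(),\n    lr=4e-4,                   # example value only\n    use_kahan_summation=True,  # compensated summation to guard against weight stagnation in BF16\n)\n```\n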
\n\n\n\nAs shown above, training with AnyPrecision and pure BF16 resulted in 2x the throughput vs Mixed Precision, and over 20x improvement vs FP32.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "The potential tradeoff is the impact on final accuracy - in the cases we tested, the accuracy was equal to or better than FP32 due to a regularization effect from the slightly reduced precision, but your results may vary. \nThe AnyPrecision optimizer is available for you to test with here, and is a drop-in replacement for the AdamW optimizer. \nActivation checkpointing - increasing throughput by trading compute for memory\n\n\n\nFSDP supports activation checkpointing once the model has been sharded, and makes it easy to implement. The graph above shows ~4x throughput improvement using activation checkpointing.\nActivation checkpointing is where the intermediate activations are freed during the forward pass, and a checkpoint is left as a placeholder. This generally increases available GPU memory by over 30%.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "The tradeoff is that during the backward pass, these previously removed intermediate activations must be re-calculated using information in the checkpoint (duplicate compute), but by leveraging the increased GPU memory, one can increase the batch size such that the net throughput can increase substantially.\n```python\n# verify we have FSDP activation support ready by importing:\nfrom functools import partial\nfrom torch.distributed.algorithms._checkpoint.checkpoint_wrapper import (\n checkpoint_wrapper,\n CheckpointImpl,\n apply_activation_checkpointing_wrapper,\n)\n```\n\nThe first step in implementing activation checkpointing is to import the FSDP checkpointing functions, as shown above. We then need to declare our checkpoint wrapper type, which is non-reentrant, and create a check function to identify which layers to wrap, as follows:\n```python\nnon_reentrant_wrapper = partial(\n checkpoint_wrapper,\n offload_to_cpu=False,\n checkpoint_impl=CheckpointImpl.NO_REENTRANT,\n)\ncheck_fn = lambda submodule: isinstance(submodule, T5Block)\n```\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "\n```python\napply_activation_checkpointing_wrapper(\n model, checkpoint_wrapper_fn=non_reentrant_wrapper, check_fn=check_fn\n )\n```\n\nImportant note - this must be run after the model has been initialized with FSDP.\nHopefully you\u2019ve seen how some initial tuning with FSDP options can have a large impact on your training performance. 
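Putting the pieces from this section together, a minimal end-to-end sketch (reusing the transformer_auto_wrapper_policy, bfSixteen policy, non_reentrant_wrapper and check_fn defined above, and assuming the distributed process group has already been initialized):\n```python\n# 1) shard the model with the transformer wrapping policy and BF16 mixed precision\nmodel = FSDP(\n    model,\n    auto_wrap_policy=transformer_auto_wrapper_policy,\n    mixed_precision=bfSixteen,\n    device_id=torch.cuda.current_device(),\n)\n\n# 2) apply activation checkpointing only after the model has been wrapped with FSDP\napply_activation_checkpointing_wrapper(\n    model, checkpoint_wrapper_fn=non_reentrant_wrapper, check_fn=check_fn\n)\n```\n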
\nWith that, we turn our attention from how to scale within FSDP, to how to scale your server hardware for FSDP using AWS.\nLarge Scale Training with FSDP on AWS - For multi-node, prioritize a high-speed network\nAWS provides several services that can be used to run distributed training with FSDP: Amazon EC2 Accelerated Computing instances, AWS ParallelCluster, and Amazon SageMaker.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "In this series of blog posts, we used Amazon EC2 p4d instances in a single-instance multi-GPU configuration and in a multi-instance configuration using AWS ParallelCluster and SageMaker in order to run our training jobs.\nHere, we\u2019ll focus specifically on AWS ParallelCluster and provide an overview of how to utilize it for training purposes.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "AWS ParallelCluster Setup\nAWS ParallelCluster is an open source cluster management tool that makes it easy for you to deploy and manage High Performance Computing (HPC) clusters on AWS. AWS ParallelCluster uses yaml configuration files to provision all the necessary resources. It also supports multiple instance types, job submission queues, shared file systems like Amazon EFS (NFS) or Amazon FSx for Lustre, and job schedulers like AWS Batch and Slurm.\n\n\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "\nWorkflow on Clusters\nThe high-level idea is to have a cluster with a head node that controls the compute nodes. The actual training job runs on the compute nodes. The overall steps to run a training job on a cluster are as follows:\n\nSet up an AWS ParallelCluster (we discuss this below)\nConnect to the head node, import the training code and set up the environment.\nPull the data and place it in a shared folder that compute nodes can access (FSx Lustre drive).\nRun the training job using a job scheduler (in this case Slurm).\n\nSetup AWS ParallelCluster\nTo set up AWS ParallelCluster,\n\nDeploy a network stack. This step is optional since you could use your account default VPC and let AWS ParallelCluster create your subnets and security groups. However, we prefer to compartmentalize our desired network infrastructure and do this deployment via a CloudFormation stack.\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "Since we deploy a public and a private subnet, we want to create them in an Availability Zone that contains our target instances, in this case p4d. We check their availability in the region we use (us-east-1) through the following AWS CLI command:\n`aws ec2 describe-instance-type-offerings --location-type availability-zone \\ --filters Name=instance-type,Values=p4d.24xlarge --region us-east-1 --output table`\n\nWe see three availability zones containing p4d instances; we pick one of them (`us-east-1c`, yours may be different) when deploying our network stack. This can be done with the AWS Console or the AWS CLI. 
In our case, we use the latter, as follows:\n\n`aws cloudformation create-stack --stack-name VPC-Large-Scale --capabilities CAPABILITY_IAM --template-body file://VPC-Large-Scale.yaml --parameters ParameterKey=SubnetsAZ,ParameterValue=us-east-1c`\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "CloudFormation will deploy our new VPC, subnets, security groups and endpoints on our behalf. Once done, you can retrieve the IDs of the public and private subnets by querying the stack outputs for the values PublicSubnet and PrivateSubnet.\nFor example, using the AWS CLI for the private subnet:\n\n`aws cloudformation describe-stacks --stack-name VPC-Large-Scale --query \"Stacks[0].Outputs[?OutputKey=='PrivateSubnet'].OutputValue\" --output text`\n\n\nCreate the ParallelCluster. The cluster configuration file specifies the resources for our cluster. These resources include the instance type for the head node, the compute nodes, access to S3 buckets, and the shared storage where our data will be located. We will use Amazon FSx for Lustre, which offers a fully managed shared storage service built on Lustre.\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "Here is an example of a cluster configuration file. We can use the AWS ParallelCluster CLI to create the cluster. Please note that the private and public subnet IDs will need to be replaced by the ones you retrieved earlier. You will be able to control the cluster using the AWS ParallelCluster CLI to start, stop, pause, etc.\n```\npcluster create-cluster --cluster-name my-hpc-cluster --cluster-configuration cluster.yaml\n```\n\n\nSSH to the head node - once the cluster is ready, we can connect to the head node using the SSH protocol, pull our training code, and place the data in the shared storage specified in the cluster configuration file.\n`pcluster ssh --cluster-name cluster -i your-key_pair`\n\n\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "\nLaunch the training job - now that we have the data and training code, we can launch the Slurm job for training. Here is an example of a Slurm script to launch the job using torchrun.\n\nMore details on how to set up the cluster are out of the scope of this post; however, we will have a separate post on it.\nWhat\u2019s next?\nWith this post we provided a high-level overview of FSDP and how it efficiently scales distributed AI training. The flowchart included will help provide a checklist for you to review the tuning options discussed, such as the transformer wrapper and activation checkpointing.", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "In the next posts, we will continue with the T5 model and go deeper into each of the topics above, specifically with sharding strategy and other optimizations, to provide more insight and details. 
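As a brief preview, the sharding strategy is controlled through a single FSDP argument. A minimal sketch, assuming a PyTorch release that exposes ShardingStrategy in torch.distributed.fsdp:\n```python\nfrom torch.distributed.fsdp import FullyShardedDataParallel as FSDP, ShardingStrategy\n\nmodel = FSDP(\n    model,\n    auto_wrap_policy=transformer_auto_wrapper_policy,\n    sharding_strategy=ShardingStrategy.FULL_SHARD,  # alternatives include SHARD_GRAD_OP and NO_SHARD\n    device_id=torch.cuda.current_device(),\n)\n```\n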
For now, a good reference for the sharding strategy is in our video tutorial here:\nIf you have questions or find an issue, please find the authors Less, Hamid and Geeta or open an issue on PyTorch github.\nSpecial thanks to:\nPytorch Distributed team, Shen Li, Rohan Varma, Yanli Zhao, Andrew Gu, Anjali Sridhar, Ana Simoes, Pierre-Yves Aquilanti, Sundar Ranganathan, and the broader AWS team for supporting us with providing infrastructure and technical support for running the large scale experiments.\nResources:", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "Resources:\nFSDP video series\nGetting started with FSDP\nAdvanced tutorial on FSDP\nAPI documentation\n", "source": "https://pytorch.org/blog/efficient-large-scale-training-with-pytorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Performance Debugging of Production PyTorch Models at Meta\"\nauthor: CK Luk, Lei Tian\nfeatured-img: \"/assets/images/performance-debugging-of-production-pytorch-models-at-meta-1.png\"\n\n1. Meta\u2019s AI Performance Profiling (MAIProf)\n\n\n\n\nFigure 1: A simplified illustration of the Meta\u2019s AI performance profiling (MAIProf) infrastructure.\n", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} {"text": "\nFigure 1 gives a simplified illustration of the AI performance profiling infrastructure at Meta. ML research and performance engineers submit through the User Portal a profiling request for a training job to the Profiling Service, which subsequently broadcasts the request to all the GPU hosts running the training job. When the Monitoring Daemon on a GPU host receives the profiling request, it will notify the Kineto GPU tracer (built on top of NVIDIA\u2019s libcupti) inside the PyTorch program corresponding to the training job. As a result, Kineto traces will be collected and uploaded to the Object Store asynchronously (in more details: there is one Kineto trace collected for each individual GPU, each is treated and stored as a blob; an example will be given in Section 2). Meanwhile, MAIProf also collects a variety of aggregated performance metrics: the Monitoring Daemon on every GPU host continuously reads performance counters from NVIDIA\u2019s DCGM/NVML and logs them to a Time Series DB.", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} {"text": "Once both trace and metrics collections are completed, the Profiling Service will automatically download traces from the Object Store for trace analysis and performance metrics from the Time Series DB for metric analysis. Finally, an overall profiling report with detailed and insightful analysis is delivered to the user.\nTo serve production uses, we deliberately made the following design choices for MAIProf:\n\nNo source-code change required in the PyTorch models: profiling is triggered by sampling the execution of an unmodified model for a user-specified amount of time.\nProvide a holistic view of performance: MAIProf performs system-wide analysis that cover both CPU and GPU. 
Under the hood, it invokes various CPU tools (e.g., Python tracer, Autograd Observer) and GPU tools (e.g., Kineto, DCGM) and correlates their results.\n", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} {"text": "\nProvide multiple tools that target a wide range of AI practitioners: At Meta, there are engineers with different backgrounds who may need to tune their AI workload performance. Some of them are AI experts while others are general software engineers. Therefore, MAIProf provides a variety of tools for different levels of performance debugging, from high-level automatic trace comprehension to low-level trace analysis.\nSupport distributed GPU profiling: MAIProf can collect profiling data from multiple hosts, each with multiple GPUs. It then shows a combined view/analysis of the entire system.\nHighly scalable: MAIProf is built as a service on top of existing infrastructures in Meta data centers such as a scalable storage system called Manifold. Its profiling capability can be easily scaled by adding more machines to the service pool as workloads increase.\n\n2. Case Study: Optimizing a Protection PyTorch Model", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} {"text": "To be concrete, we use a case study on a protection PyTorch model used in production. First, we discuss our steps for identifying the performance bottlenecks in the model with MAIProf. Then we describe the corresponding optimizations applied and their impacts.\n2.1 Performance Bottlenecks\nStep 1:\nInspect the CPU and GPU utilization on the same timeline, as shown in Figure 2.\n\n\n\n\nFigure 2: CPU usage over time (the top) vs. GPU usage over time (the bottom).\n\nThe first performance anomaly we noticed in Figure 2 is the pattern: \u201cGPU-idle, GPU-active, GPU-idle, GPU-active \u2026\u201d throughout the training. Overall, the GPU is idle for more than half of the training time (this is bad for performance because the GPU is a higher-performance device and so we want it to be utilized as much as possible).", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} {"text": "Step 2:\nCollect a Python function call trace on the CPU with MAIProf while the GPU is idle, which is shown in Figure 3.\n\n\n\n\nFigure 3: A Python call trace.\n", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} {"text": "The Python trace shows that most of the CPU time is spent inside a Python function sharded_iterrows(). From the source code of the model, we learned that this function processes a big feature table in parallel. The number of worker threads used is controlled by a configurable parameter (num_worker_threads). Also, after investigating how the feature table is generated, we understood the performance anomaly: the training dataset is too large to fit in the CPU memory all at once; it needs to be broken into multiple sub-datasets, each with sufficient data for running 10 epochs. 
Consequently, a new sub-dataset needs to be read from the disk to memory every 10 epochs, during which the GPU is totally idle.\nStep 3:\nCollect GPU performance metrics, which are shown in Figure 4.\n\n\n\n", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} {"text": "\n\nFigure 4: GPU performance metrics in MAIProf.\n\nWe made the following observations from Figure 4:\n\nThe streaming multiprocessor (SM) runs the model\u2019s CUDA kernels. Its utilization [1] is 9.1%, indicating that the parallel compute units on the GPU are not well utilized.\nTensor Core utilization is 0, meaning that Tensor Core (the mixed-precision compute unit on GPU) [2] is not used at all.\nMax GPU memory utilization is 47.13%, indicating that roughly half of the GPU memory is left unused.\n\nStep 4:\nCollect a GPU trace (aka Kineto trace) of the training loop as shown in Figure 5.\n\n\n\n\nFigure 5: A GPU trace (aka Kineto trace) of the training loop.\n", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} {"text": "\nSince commonly used PyTorch functions are already annotated, their names are automatically shown on the trace. With them, we can roughly divide the trace into the four phases of a training iteration: (1) data loading, (2) forward pass, (3) backward pass, (4) gradient optimization (note: in Figure 5, the \u201coptimizer\u201d phase is from the previous batch while the other three phases are from the current batch).\n2.2 Optimizations\nWe performed four simple optimizations that target the bottlenecks identified above, each requiring only a change in a config parameter or at most a few source lines. They are listed in Figure 6.\n\n| Optimization | Amount of changes | Bottlenecks addressed |\n| --- | --- | --- |\n| Tune num_worker_threads by trying a few possible values within the number of CPU cores on each host. | 1 source line | GPU totally idle time |\n| Double the batch sizes | 2 config parameters | GPU memory under-utilization |\n", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} {"text": "| Use automatic mixed precision in PyTorch | 13 source lines | Zero Tensor Core utilization |\n| Use the multi-tensor optimizer in PyTorch | 1 source line | Many small GPU kernels in the optimizer |\n\nFigure 6: Four simple optimizations applied.\n\n3. Concluding Remarks\nPerformance tuning for PyTorch in production environments is increasingly important. A capable performance-debugging tool is key to this process. We demonstrate with a case study on a production model that MAIProf is a powerful infrastructure for identifying optimization opportunities.", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} {"text": "At Meta, MAIProf has been used by hundreds of engineers, from performance novices to experts, to identify many more types of bottlenecks. These include slow data loading, small and/or slow GPU kernels, and distributed training issues such as load imbalance and excessive communication. MAIProf covers major classes of models, including recommendation, vision, and natural language processing. 
In summary, it is now an indispensable tool for tuning the performance of production PyTorch workloads.\nReferences\n[1] https://docs.nvidia.com/gameworks/content/developertools/desktop/analysis/report/ cudaexperiments/kernellevel/achievedoccupancy.htm\n[2] https://www.nvidia.com/en-us/data-center/tensor-cores/", "source": "https://pytorch.org/blog/performance-debugging-of-production-pytorch-models-at-meta/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'The torch.fft module: Accelerated Fast Fourier Transforms with Autograd in PyTorch'\nauthor: Mike Ruberry, Peter Bell, and Joe Spisak \n\nThe Fast Fourier Transform (FFT) calculates the Discrete Fourier Transform in O(n log n) time. It is foundational to a wide variety of numerical algorithms and signal processing techniques since it makes working in signals\u2019 \u201cfrequency domains\u201d as tractable as working in their spatial or temporal domains.\nAs part of PyTorch\u2019s goal to support hardware-accelerated deep learning and scientific computing, we have invested in improving our FFT support, and with PyTorch 1.8, we are releasing the torch.fft module. This module implements the same functions as NumPy\u2019s np.fft module, but with support for accelerators, like GPUs, and autograd. \nGetting started", "source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"} {"text": "Getting started\nGetting started with the new torch.fft module is easy whether you are familiar with NumPy\u2019s np.fft module or not. While complete documentation for each function in the module can be found here, a breakdown of what it offers is:\n\nfft, which computes a complex FFT over a single dimension, and ifft, its inverse\nthe more general fftn and ifftn, which support multiple dimensions\nThe \u201creal\u201d FFT functions, rfft, irfft, rfftn, irfftn, designed to work with signals that are real-valued in their time domains\nThe \"Hermitian\" FFT functions, hfft and ihfft, designed to work with signals that are real-valued in their frequency domains\nHelper functions, like fftfreq, rfftfreq, fftshift, ifftshift, that make it easier to manipulate signals\n", "source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"} {"text": "We think these functions provide a straightforward interface for FFT functionality, as vetted by the NumPy community, although we are always interested in feedback and suggestions!\nTo better illustrate how easy it is to move from NumPy\u2019s np.fft module to PyTorch\u2019s torch.fft module, let\u2019s look at a NumPy implementation of a simple low-pass filter that removes high-frequency variance from a 2-dimensional image, a form of noise reduction or blurring:\nimport numpy as np\nimport numpy.fft as fft\n\ndef lowpass_np(input, limit):\n pass1 = np.abs(fft.rfftfreq(input.shape[-1])) < limit\n pass2 = np.abs(fft.fftfreq(input.shape[-2])) < limit\n kernel = np.outer(pass2, pass1)\n\n fft_input = fft.rfft2(input)\n return fft.irfft2(fft_input * kernel, s=input.shape[-2:])\n\nNow let\u2019s see the same filter implemented in PyTorch:\n```python\nimport torch\nimport torch.fft as fft\ndef lowpass_torch(input, limit):\n pass1 = torch.abs(fft.rfftfreq(input.shape[-1])) < limit", "source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"} {"text": 
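Before the more complete example below, here is a minimal round trip through the frequency domain, a sketch using only the functions listed above:\n```python\nimport torch\nimport torch.fft as fft\n\nsignal = torch.randn(64)\nspectrum = fft.fft(signal)            # complex-valued frequency-domain representation\nrecovered = fft.ifft(spectrum).real   # inverse transform back to the time domain\nprint(torch.allclose(signal, recovered, atol=1e-5))  # True, up to floating point error\n```\n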
"pass2 = torch.abs(fft.fftfreq(input.shape[-2])) < limit\n kernel = torch.outer(pass2, pass1)\nfft_input = fft.rfft2(input)\nreturn fft.irfft2(fft_input * kernel, s=input.shape[-2:])\n\n```\nNot only do current uses of NumPy\u2019s np.fft module translate directly to torch.fft, the torch.fft operations also support tensors on accelerators, like GPUs and autograd. This makes it possible to (among other things) develop new neural network modules using the FFT.\nPerformance\nThe torch.fft module is not only easy to use \u2014 it is also fast! PyTorch natively supports Intel\u2019s MKL-FFT library on Intel CPUs, and NVIDIA\u2019s cuFFT library on CUDA devices, and we have carefully optimized how we use those libraries to maximize performance. While your own results will depend on your CPU and CUDA hardware, computing Fast Fourier Transforms on CUDA devices can be many times faster than computing it on the CPU, especially for larger signals.", "source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"} {"text": "In the future, we may add support for additional math libraries to support more hardware. See below for where you can request additional hardware support.\nUpdating from older PyTorch versions\nSome PyTorch users might know that older versions of PyTorch also offered FFT functionality with the torch.fft() function. Unfortunately, this function had to be removed because its name conflicted with the new module\u2019s name, and we think the new functionality is the best way to use the Fast Fourier Transform in PyTorch. In particular, torch.fft() was developed before PyTorch supported complex tensors, while the torch.fft module was designed to work with them.\nPyTorch also has a \u201cShort Time Fourier Transform\u201d, torch.stft, and its inverse torch.istft. These functions are being kept but updated to support complex tensors. \nFuture", "source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"} {"text": "Future\nAs mentioned, PyTorch 1.8 offers the torch.fft module, which makes it easy to use the Fast Fourier Transform (FFT) on accelerators and with support for autograd. We encourage you to try it out!\nWhile this module has been modeled after NumPy\u2019s np.fft module so far, we are not stopping there. We are eager to hear from you, our community, on what FFT-related functionality you need, and we encourage you to create posts on our forums at https://discuss.pytorch.org/, or file issues on our Github with your feedback and requests. Early adopters have already started asking about Discrete Cosine Transforms and support for more hardware platforms, for example, and we are investigating those features now.\nWe look forward to hearing from you and seeing what the community does with PyTorch\u2019s new FFT functionality!", "source": "https://pytorch.org/blog/the-torch.fft-module-accelerated-fast-fourier-transforms-with-autograd-in-pyTorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"PyTorch Internals Part II - The Build System\"\nauthor: \"Trevor Killeen\"\ndate: 2017-06-27 12:00:00 -0500\nredirect_from: /2017/06/27/Internals2.html\n\nIn the first post I explained how we generate a torch.Tensor object that you can use in your Python interpreter. Next, I will explore the build system for PyTorch. 
The PyTorch codebase has a variety of components:\n\nThe core Torch libraries: TH, THC, THNN, THCUNN\nVendor libraries: CuDNN, NCCL\nPython Extension libraries\nAdditional third-party libraries: NumPy, MKL, LAPACK\n\nHow does a simple invocation of python setup.py install do the work that allows you to call import torch and use the PyTorch library in your code?", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "The first part of this document will explain the build process from and end-user point of view. This will explain how we take the components above to build the library. The second part of the document will be important for PyTorch developers. It will document ways to improve your iteration speed by building only a subset of the code that you are working on.\nSetuptools and PyTorch's setup( ) function\nPython uses Setuptools to build the library. Setuptools is an extension to the original distutils system from the core Python library. The core component of Setuptools is the setup.py file which contains all the information needed to build the project. The most important function is the setup() function which serves as the main entry point. Let's take a look at the one in PyTorch:\n```python\nsetup(name=\"torch\", version=version,\n description=\"Tensors and Dynamic neural networks in Python with strong GPU acceleration\",", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "ext_modules=extensions,\n cmdclass={\n 'build': build,\n 'build_py': build_py,\n 'build_ext': build_ext,\n 'build_deps': build_deps,\n 'build_module': build_module,\n 'develop': develop,\n 'install': install,\n 'clean': clean,\n },\n packages=packages,\n package_data={'torch': [\n 'lib/.so', 'lib/.dylib',\n 'lib/torch_shm_manager',\n 'lib/.h',\n 'lib/include/TH/.h', 'lib/include/TH/generic/.h',\n 'lib/include/THC/.h', 'lib/include/THC/generic/*.h']},\n install_requires=['pyyaml'],\n )\n```\nThe function is composed entirely of keyword arguments, which serve two purposes:\n\nMetadata (e.g. name, description, version)\nThe contents of the package\n\nWe are concerned with #2. Let's break down the individual components:", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "\next_modules: Python modules are either \"pure\" modules, containing only Python code, or \"extension\" modules written in the low-level language of the Python implementation. Here we are listing the extension modules in the build, including the main torch._C library that contains our Python Tensor\ncmdclass: When using the setup.py script from the command line, the user must specify one or more \"commands\", code snippets that perform a specific action. For example, the \"install\" command builds and installs the package. This mapping routes specific commands to functions in setup.py that implement them\npackages: The list of packages in the project. These are \"pure\" - i.e. they only contain Python code. These are defined elsewhere in setup.py\npackage_data: Additional files that need to be installed into a package: in this case the header files and shared libraries that the build will generate must be included in our installation\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "\ninstall_requires: In order to build PyTorch, we need pyyaml. 
Setuptools will handle making sure that pyyaml will be available, downloading and installing it if necessary\n\nWe will consider these components in more detail, but for now it is instructive to look at the end product of an installation -- i.e. what Setuptools does after building the code.\nsite_packages\nThird party packages are by default installed into the lib//site_packages directory associated with your Python binary. For example, because I am using an Miniconda environment, my Python binary is found at:\n(p3) killeent@devgpu047:pytorch (master)$ which python\n~/local/miniconda2/envs/p3/bin/python\n\nAnd thus packages are installed into:\n/home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages\n\nI installed PyTorch, and let's take a look into torch folder in site-packages:\n```bash\n(p3) killeent@devgpu047:site-packages$ cd torch", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "(p3) killeent@devgpu047:site-packages$ cd torch\n(p3) killeent@devgpu047:torch$ ls\nautograd backends _C.cpython-36m-x86_64-linux-gnu.so cuda distributed _dl.cpython-36m-x86_64-linux-gnu.so functional.py init.py legacy lib multiprocessing nn optim pycache serialization.py _six.py sparse storage.py _tensor_docs.py tensor.py _tensor_str.py _thnn _torch_docs.py utils _utils.py version.py\n\nNote that everything we would expect to be here is here:\n\n - All the \"pure\" packages are here [todo print packages from setup.py to explain]\n - The extension libraries are here - the ._C* and ._dl* shared libraries\n - The package_data is here: the contents of lib/ match exactly what we described in the setup function:\n\n```bash\n(p3) killeent@devgpu047:torch$ ls lib/\ninclude libnccl.so.1 libTHC.so.1 libTHCUNN.so.1 libTHNN.so.1 libTH.so.1 THCUNN.h torch_shm_manager libnccl.so libshm.so libTHCS.so.1 libTHD.so.1 libTHPP.so.1 libTHS.so.1 THNN.h\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "\nThe Python interpreter looks into `site_packages` during an import. If we call `import torch` in our Python code it will find the module here and initialize and import it. You can read more about the import system [here](https://docs.python.org/3/tutorial/modules.html).\n\n### Building Individual Parts\n\nNext, we will look at the various individual components of the build from start to finish. 
This will illustrate how we combine all the code we mentioned in the introduction.\n\n### Backend Torch and Vendor Libraries\n\nLet's take a look at the `install` cmd override in PyTorch's `setup.py`:\n\n```python\nclass install(setuptools.command.install.install):\n\n def run(self):\n if not self.skip_build:\n self.run_command('build_deps')\n setuptools.command.install.install.run(self)\n\nWe note the first thing it does is run a command called \"build_deps\" - let's take a look at it's run() method:\n```python\ndef run(self):\n from tools.nnwrap import generate_wrappers as generate_nn_wrappers", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "build_all_cmd = ['bash', 'torch/lib/build_all.sh']\n if WITH_CUDA:\n build_all_cmd += ['--with-cuda']\n if WITH_NCCL and not SYSTEM_NCCL:\n build_all_cmd += ['--with-nccl']\n if WITH_DISTRIBUTED:\n build_all_cmd += ['--with-distributed']\n if subprocess.call(build_all_cmd) != 0:\n sys.exit(1)\n generate_nn_wrappers()\n\nHere we note that that we have a shell script `build_all.sh` in the `torch/lib/` directory. This script is configurable by whether we are on a system with CUDA enabled, the NCCL library enabled, and PyTorch's distributed library enabled.\n\nLet's take a look in `torch/lib`:\n\n```bash\n(p3) killeent@devgpu047:lib (master)$ ls\nbuild_all.sh libshm nccl README.md TH THC THCS THCUNN THD THNN THPP THS\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "```\nHere we see the directories for all the backend libraries. TH, THC, THNN, THCUNN, and nccl are git subtrees that are in sync with the libraries in e.g. github.com/torch. THS, THCS, THD, THPP and libshm are libraries specific to PyTorch. All of the libraries contain CMakeLists.txt - indicating they are built with CMake.\nThe build_all.sh is essentially a script that runs the CMake configure step on all of these libraries, and then make install. Let's run ./build_all.sh and see what we are left with:\n```bash\n(p3) killeent@devgpu047:lib (master)$ ./build_all.sh --with-cuda --with-nccl --with-distributed\n[various CMake output logs]\n(p3) killeent@devgpu047:lib (master)$ ls", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "(p3) killeent@devgpu047:lib (master)$ ls\nbuild build_all.sh include libnccl.so libnccl.so.1 libshm libshm.so libTHC.so.1 libTHCS.so.1 libTHCUNN.so.1 libTHD.so.1 libTHNN.so.1 libTHPP.so.1 libTH.so.1 libTHS.so.1 nccl README.md TH THC THCS THCUNN THCUNN.h THD THNN THNN.h THPP THS tmp_install torch_shm_manager\n\nNow there are a number of extra things in the directory:\n\n - Shared library files for each library\n - Headers for `THNN` and `THCUNN`\n - `build` and `tmp_install` directories\n - The `torch_shm_manager` executable\n\nLet's explore further. In the shell script, we create the `build` directory and a subdir for each library to build:\n\n```bash\n# We create a build directory for the library, which will\n# contain the cmake output. $1 is the library to be built\n mkdir -p build/$1\n cd build/$1\n\nThus e.g. 
build/TH contains the CMake configuration output including the Makefile for building TH, and also the result of running make install in this directory.", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "Let's also look at tmp_install:\n(p3) killeent@devgpu047:lib (master)$ ls tmp_install/\nbin include lib share\n\ntmp_install looks like a standard install directory containing binaries, header files and library files. For example, tmp_install/include/TH contains all the TH headers, and tmp_install/lib/ contains the libTH.so.1 file.\nSo why have this directory? It is used to compile the libraries that depend on each other. For example, the THC library depends on the TH library and its headers. This is referenced in the build shell script as arguments to the cmake command:\n# install_dir is tmp_install\ncmake ...\n -DTH_INCLUDE_PATH=\"$INSTALL_DIR/include\" \\\n -DTH_LIB_PATH=\"$INSTALL_DIR/lib\" \\\n\nAnd indeed if we look at the THC library we built:\n(p3) killeent@devgpu047:lib (master)$ ldd libTHC.so.1\n ...\n libTH.so.1 => /home/killeent/github/pytorch/torch/lib/tmp_install/lib/./libTH.so.1 (0x00007f84478b7000)\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "\nThe way the `build_all.sh` specifies the include and library paths is a little messy but this is representative of the overall idea. Finally, at the end of the script:\n\n```bash\n# If all the builds succeed we copy the libraries, headers,\n# binaries to torch/lib\ncp $INSTALL_DIR/lib/* .\ncp THNN/generic/THNN.h .\ncp THCUNN/generic/THCUNN.h .\ncp -r $INSTALL_DIR/include .\ncp $INSTALL_DIR/bin/* .\n\nAs we can see, at the end, we copy everything to the top-level torch/lib directory - explaining the contents we saw above. We'll see why we do this next:\nNN Wrappers\nBriefly, let's touch on the last part of the build_deps command: generate_nn_wrappers(). We bind into the backend libraries using PyTorch's custom cwrap tooling, which we touched upon in a previous post. For binding TH and THC we manually write the YAML declarations for each function. However, due to the relative simplicity of the THNN and THCUNN libraries, we auto-generate both the cwrap declarations and the resulting C++ code.", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "The reason we copy the THNN.h and THCUNN.h header files into torch/lib is that this is where the generate_nn_wrappers() code expects these files to be located. 
generate_nn_wrappers() does a few things:\n\nParses the header files, generating cwrap YAML declarations and writing them to output .cwrap files\nCalls cwrap with the appropriate plugins on these .cwrap files to generate source code for each\nParses the headers a second time to generate THNN_generic.h - a library that takes THPP Tensors, PyTorch's \"generic\" C++ Tensor Library, and calls into the appropriate THNN/THCUNN library function based on the dynamic type of the Tensor\n\nIf we take a look into torch/csrc/nn after running generate_nn_wrappers() we can see the output:\n(p3) killeent@devgpu047:nn (master)$ ls\nTHCUNN.cpp THCUNN.cwrap THNN.cpp THNN.cwrap THNN_generic.cpp THNN_generic.cwrap THNN_generic.h THNN_generic.inc.h\n\nFor example, the code generates cwrap like:\n```\n[[", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "[[\n name: FloatBatchNormalization_updateOutput\n return: void\n cname: THNN_FloatBatchNormalization_updateOutput\n arguments:\n - void* state\n - THFloatTensor* input\n - THFloatTensor* output\n - type: THFloatTensor*\n name: weight\n nullable: True\n - type: THFloatTensor*\n name: bias\n nullable: True\n - THFloatTensor* running_mean\n - THFloatTensor* running_var\n - THFloatTensor* save_mean\n - THFloatTensor* save_std\n - bool train\n - double momentum\n - double eps\n]]\n\nwith corresponding .cpp:\n```cpp\nextern \"C\" void THNN_FloatBatchNormalization_updateOutput(void, THFloatTensor, THFloatTensor, THFloatTensor, THFloatTensor, THFloatTensor, THFloatTensor, THFloatTensor, THFloatTensor*, bool, double, double);\nPyObject * FloatBatchNormalization_updateOutput(PyObject _unused, PyObject args) {\n // argument checking, unpacking\n PyThreadState *_save = NULL;\n try {\n Py_UNBLOCK_THREADS;", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "try {\n Py_UNBLOCK_THREADS;\n THNN_FloatBatchNormalization_updateOutput(arg_state, arg_input, arg_output, arg_weight, arg_bias, arg_running_mean, arg_running_var, arg_save_mean, arg_save_std, arg_train, arg_momentum, arg_eps);\n Py_BLOCK_THREADS;\n Py_RETURN_NONE;\n } catch (...) {\n if (_save) {\n Py_BLOCK_THREADS;\n }\n throw;\n }\n...\n\n}\n\nIn the `THPP` generated code, the function looks like this:\n\n```cpp\nvoid BatchNormalization_updateOutput(thpp::Tensor* input, thpp::Tensor* output, thpp::Tensor* weight, thpp::Tensor* bias, thpp::Tensor* running_mean, thpp::Tensor* running_var, thpp::Tensor* save_mean, thpp::Tensor* save_std, bool train, double momentum, double eps) {\n // Call appropriate THNN function based on tensor type, whether its on CUDA, etc.\n}\n\nWe will look a little more at how these source files are used later.\n\"Building\" the Pure Python Modules", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "\"Building\" the Pure Python Modules\nNow that we have built the backend libraries (the \"dependencies\") we can move forward with building the actual PyTorch code. The next Setuptools command that runs is build_py, which is used to build all the \"Pure\" python modules in our library. 
These are the \"packages\" passed to setup.py.\nThe packages are found using the Setuptools' utility function find_packages():\n```python\npackages = find_packages(exclude=('tools.*',))", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "packages = find_packages(exclude=('tools.*',))\n['torch', 'torch._thnn', 'torch.autograd', 'torch.backends', 'torch.cuda', 'torch.distributed', 'torch.legacy', 'torch.multiprocessing', 'torch.nn', 'torch.optim', 'torch.sparse', 'torch.utils', 'torch.autograd._functions', 'torch.backends.cudnn', 'torch.legacy.nn', 'torch.legacy.optim', 'torch.nn._functions', 'torch.nn.backends', 'torch.nn.modules', 'torch.nn.parallel', 'torch.nn.utils', 'torch.nn._functions.thnn', 'torch.utils.data', 'torch.utils.ffi', 'torch.utils.serialization', 'torch.utils.trainer', 'torch.utils.backcompat', 'torch.utils.trainer.plugins']\n```\nAs we can see, find_package has recursively traversed the torch directory, finding all the directory paths that have an __init__.py file.", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "When building with Setuptools, the tool creates a build directory in the distribution root, i.e. the same location as the setup.py file. Because PyTorch is composed of both \"Pure\" python modules and Extension Modules, we need to preserve information about the Operating System and Python version used when performing the build. So if we look in my build directory, we see:\n(p3) killeent@devgpu047:pytorch (master)$ ls build\nlib.linux-x86_64-3.6 temp.linux-x86_64-3.6\n\nThis indicates that I've built the project on linux-x86-64 using Python 3.6. The lib directory contains the library files, while the temp directory contains files generated during the build that aren't needed in the final installation.\nBecause \"Pure\" python modules are just Python code, and don't need to be \"compiled\", the build_py process simply copies files from their locations as found by find_packages to the equivalent location in build/. So our build output is littered with lines like:\n```bash", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "copying torch/autograd/_functions/blas.py -> build/lib.linux-x86_64-3.6/torch/autograd/_functions\n\nWe also noted earlier that we could pass files and directories to the package_data keyword argument to the main setup() function, and that Setuptools would handle copying those files to the installation location. During build_py, these files are copied to the build/ directory, so we also see lines like:\ncopying torch/lib/libTH.so.1 -> build/lib.linux-x86_64-3.6/torch/lib\n...\ncopying torch/lib/include/THC/generic/THCTensor.h -> build/lib.linux-x86_64-3.6/torch/lib/include/THC/generic\n\nBuilding the Extension Modules\nFinally, we need to build the Extension Modules, i.e. the PyTorch modules written in C++ using the CPython backend. This also constitutes the majority of the code logic in setup.py. 
Our overridden build_ext Command has some special logic before the extensions themselves are actually built:\n```python\nfrom tools.cwrap import cwrap", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "```python\nfrom tools.cwrap import cwrap\nfrom tools.cwrap.plugins.THPPlugin import THPPlugin\nfrom tools.cwrap.plugins.ArgcountSortPlugin import ArgcountSortPlugin\nfrom tools.cwrap.plugins.AutoGPU import AutoGPU\nfrom tools.cwrap.plugins.BoolOption import BoolOption\nfrom tools.cwrap.plugins.KwargsPlugin import KwargsPlugin\nfrom tools.cwrap.plugins.NullableArguments import NullableArguments\nfrom tools.cwrap.plugins.CuDNNPlugin import CuDNNPlugin\nfrom tools.cwrap.plugins.WrapDim import WrapDim\nfrom tools.cwrap.plugins.AssertNDim import AssertNDim\nfrom tools.cwrap.plugins.Broadcast import Broadcast\nfrom tools.cwrap.plugins.ProcessorSpecificPlugin import ProcessorSpecificPlugin\n thp_plugin = THPPlugin()\n cwrap('torch/csrc/generic/TensorMethods.cwrap', plugins=[\n ProcessorSpecificPlugin(), BoolOption(), thp_plugin,\n AutoGPU(condition='IS_CUDA'), ArgcountSortPlugin(), KwargsPlugin(),\n AssertNDim(), WrapDim(), Broadcast()\n ])", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "])\n cwrap('torch/csrc/cudnn/cuDNN.cwrap', plugins=[\n CuDNNPlugin(), NullableArguments()\n ])\n\nRecall above that I documented that we auto-generated C++ code for calling into the `THNN` etc. libraries. Here is where we bind `TH`, `THC` and `CuDNN`. We take the YAML declarations in `TensorMethods.cwrap`, and use them to generate output C++ source files that contain implementations that work within PyTorch's C++ Ecosystem. For example, a simple declaration like zero_:\n\n\n[[\n name: zero_\n cname: zero\n return: self\n arguments:\n - THTensor* self\n]]\n\nGenerates code like:\n\n```cpp\n PyObject * THPTensor_(zero_)(PyObject *self, PyObject *args, PyObject *kwargs) {\n ...\n THTensor_(zero)(LIBRARY_STATE arg_self);\n ...\n}\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "...\n}\n```\nIn the previous post we documented how these functions are tied to specific Tensor types, so I won't expand on that there. For the build process its enough to know that these C++ files are generated prior to the extension being built, because these source files are used during Extension compilation.\nSpecifying the Extensions\nUnlike pure modules, it\u2019s not enough just to list modules or packages and expect the Setuptools to go out and find the right files; you have to specify the extension name, source file(s), and any compile/link requirements (include directories, libraries to link with, etc.).", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "The bulk (200~ LOC at the time of this writing) of the setup.py goes into specifying how to build these Extensions. Here, some of the choices we make in build_all.sh begin to make sense. For example, we saw that our build script specified a tmp_install directory where we installed our backend libraries. 
In our setup.py code, we reference this directory when adding to the list of directories containing header files to include:\n# tmp_install_path is torch/lib/tmp_install\ninclude_dirs += [\n cwd,\n os.path.join(cwd, \"torch\", \"csrc\"),\n tmp_install_path + \"/include\",\n tmp_install_path + \"/include/TH\",\n tmp_install_path + \"/include/THPP\",\n tmp_install_path + \"/include/THNN\",\n\nSimilarly, we copied the shared object libraries to torch/csrc at the end of the build_all.sh script. We reference these locations directly in our setup.py code when identifying libraries that we may link against:\n```python\nlib_path is torch/lib\nTH_LIB = os.path.join(lib_path, 'libTH.so.1')", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "TH_LIB = os.path.join(lib_path, 'libTH.so.1')\nTHS_LIB = os.path.join(lib_path, 'libTHS.so.1')\nTHC_LIB = os.path.join(lib_path, 'libTHC.so.1')\nTHCS_LIB = os.path.join(lib_path, 'libTHCS.so.1')\nTHNN_LIB = os.path.join(lib_path, 'libTHNN.so.1')\n...\n\nLet's consider how we build the main `torch._C` Extension Module:\n\n```python\nC = Extension(\"torch._C\",\n libraries=main_libraries,\n sources=main_sources,\n language='c++',\n extra_compile_args=main_compile_args + extra_compile_args,\n include_dirs=include_dirs,\n library_dirs=library_dirs,\n extra_link_args=extra_link_args + main_link_args + [make_relative_rpath('lib')],\n )\n\n\nThe main libraries are all the libraries we link against. This includes things like shm, PyTorch's shared memory management library, and also system libraries like cudart and cudnn. Note that the TH libraries are not listed here\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "\nThe main sources are the C++ files that make up the C++ backend for PyTorch\nThe compile args are various flags that configure compilation. For example, we might want to add debug flags when compiling in debug mode\nThe include dirs are the paths to all the directories containing header files. This is also another example where the build_all.sh script is important - for example, we look for the TH header files in torch/lib/tmp_install/include/TH - which is the install location we specified with our CMake configuration\nThe library dirs are directories to search for shared libraries at link time. For example, we include torch/lib - the location we copied our .so files to at the end of build_all.sh, but also the paths to the CUDA and CuDNN directories\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "\nThe link arguments are used when linking object files together to create the extension. In PyTorch, this includes more normal options like decided to link libstdc++ statically. However, there is one key component: this is where we link the backend TH libraries. Note that we have lines like:\n\n# The explicit paths to .so files we described above\nmain_link_args = [TH_LIB, THS_LIB, THPP_LIB, THNN_LIB]\n\nYou might be wondering why we do this as opposed to adding these libraries to the list we pass to the libraries keyword argument. After all, that is a list of libraries to link against. The issue is that Lua Torch installs often set the LD_LIBRARY_PATH variable, and thus we could mistakenly link against a TH library built for Lua Torch, instead of the library we have built locally. 
This would be problematic because the code could be out of date, and also there are various configuration options for Lua Torch's TH that would not play nicely with PyTorch.", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "As such, we manually specify the paths to the shared libraries we generated directly to the linker.\nThere are other extensions needed to power PyTorch and they are built in a similar way. The Setuptools library invokes the C++ compiler and linker to build all of these extensions. If the builds succeed, we have successfully built the PyTorch library and we can move on to installation.\nInstallation\nAfter building has finished, installation is quite simple. We simply have to copy everything from our build/lib.linux-x86_64-3.6 directory to the appropriate installation directory. Recall that we noted above that this directory is the site_packages directory associated with our Python binary. As a result, we see lines like:\n```bash\nrunning install_lib\ncreating /home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch\ncopying build/lib.linux-x86_64-3.6/torch/_C.cpython-36m-x86_64-linux-gnu.so -> /home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "copying build/lib.linux-x86_64-3.6/torch/_dl.cpython-36m-x86_64-linux-gnu.so -> /home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch\ncreating /home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch/_thnn\ncopying build/lib.linux-x86_64-3.6/torch/_thnn/_THNN.cpython-36m-x86_64-linux-gnu.so -> /home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch/_thnn\ncopying build/lib.linux-x86_64-3.6/torch/_thnn/_THCUNN.cpython-36m-x86_64-linux-gnu.so -> /home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch/_thnn\n```\nFinally lets power up the Python interpreter. When the Python interpreter executes an import statement, it searches for Python code and extension modules along a search path. A default value for the path is configured into the Python binary when the interpreter is built.\n```bash\nnote we are now in my home directory\n(p3) killeent@devgpu047:~$ python\nPython 3.6.1 |Continuum Analytics, Inc.| (default, Mar 22 2017, 19:54:23)", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n\n\n\nimport sys\nsys.path\n['', '/home/killeent/local/miniconda2/envs/p3/lib/python36.zip', '/home/killeent/local/miniconda2/envs/p3/lib/python3.6', '/home/killeent/local/miniconda2/envs/p3/lib/python3.6/lib-dynload', '/home/killeent/.local/lib/python3.6/site-packages', '/home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages', '/home/killeent/github/pytorch', '/home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/setuptools-27.2.0-py3.6.egg']\n\n\n\n\nAs we can see, the `site-packages` directory we copied our PyTorch installation to is part of search path. 
Now let's load the `torch` module and see its location:\n\n```python\n>>> import torch\n>>> import inspect\n>>> inspect.getfile(torch)\n'/home/killeent/local/miniconda2/envs/p3/lib/python3.6/site-packages/torch/__init__.py'\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "```\nAs we can see, we have loaded the module from site_packages as expected - and our build and installation is successful!\nNote: Python prepends the empty string to sys.path to represent the current working directory - making it the first place we search for a module. So if we run Python from the pytorch directory, we would accidentally load the local version of PyTorch rather than our installed version. This is something to watch out for.\nAddendum - Developer Efficiency, 3rd Party Libraries, Things I Didn't Cover\nThe entire installation loop for PyTorch can be quite time-consuming. On my devserver, it takes around 5 minutes for an installation from source. Often times, when developing PyTorch, we only want to work on a subset of the entire project, and re-build only that subset in order to test changes. Fortunately, our build system enables this.\nSetuptools Develop Mode\nThe main tool that supports this is Setuptools develop command. The documentation states that:", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "\nThis command allows you to deploy your project\u2019s source for use in one or more \u201cstaging areas\u201d where it will be available for importing. This deployment is done in such a way that changes to the project source are immediately available in the staging area(s), without needing to run a build or install step after each change.\n\nBut how does it work? Suppose we run python setup.py build develop in the PyTorch directory. The build command is run, building our dependencies (TH, THPP, etc.) and the extension libraries. However, if we look inside site-packages:\n(p3) killeent@devgpu047:site-packages$ ls -la torch*\n-rw-r--r--. 1 killeent users 31 Jun 27 08:02 torch.egg-link\n\nLooking at the contents of the torch.egg-link file, it simply references the PyTorch directory:\n(p3) killeent@devgpu047:site-packages$ cat torch.egg-link\n/home/killeent/github/pytorch\n\nIf we navigate back to the PyTorch directory, we see there is a new directory torch.egg-info:\n```bash", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "(p3) killeent@devgpu047:pytorch (master)$ ls -la torch.egg-info/\ntotal 28\ndrwxr-xr-x. 2 killeent users 4096 Jun 27 08:09 .\ndrwxr-xr-x. 10 killeent users 4096 Jun 27 08:01 ..\n-rw-r--r--. 1 killeent users 1 Jun 27 08:01 dependency_links.txt\n-rw-r--r--. 1 killeent users 255 Jun 27 08:01 PKG-INFO\n-rw-r--r--. 1 killeent users 7 Jun 27 08:01 requires.txt\n-rw-r--r--. 1 killeent users 16080 Jun 27 08:01 SOURCES.txt\n-rw-r--r--. 1 killeent users 12 Jun 27 08:01 top_level.txt\n\nThis file contains metadata about the PyTorch project. 
For example, requires.txt lists all of the dependencies for setting up PyTorch:\n(p3) killeent@devgpu047:pytorch (master)$ cat torch.egg-info/requires.txt\npyyaml\n\nWithout going into too much detail, develop allows us to essentially treat the PyTorch repo itself as if it were in site-packages, so we can import the module and it just works:\n```bash\n(p3) killeent@devgpu047:~$ python", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "(p3) killeent@devgpu047:~$ python\nPython 3.6.1 |Continuum Analytics, Inc.| (default, Mar 22 2017, 19:54:23)\n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import torch\n>>> torch.__file__\n'/home/killeent/github/pytorch/torch/__init__.py'\n```\n\nAs a result, the following consequences hold:\n\nIf we change a Python source file, the changes are automatically picked up, and we don't have to run any commands to let the Python interpreter see this change.\nIf we change a C++ source file in one of the extension libraries, we can re-run the develop command, and it will re-build that extension.\n\nThus we can develop the PyTorch codebase seamlessly and test our changes in an easy way.\nWorking on the Dependency Libraries", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "Working on the Dependency Libraries\nIf we are working on the dependencies (e.g. TH, THPP, etc.) we can re-build our changes more quickly by simply running the build_deps command directly. This will automatically call into build_all.sh to re-build our libraries and copy the generated libraries appropriately. If we are using Setuptools develop mode, we will be using the local extension library built in the PyTorch directory. Because we specified the paths to the shared libraries when compiling our extension libraries, the changes will be picked up:\n```bash\n# we are using the local extension\n(p3) killeent@devgpu047:~$ python\nPython 3.6.1 |Continuum Analytics, Inc.| (default, Mar 22 2017, 19:54:23)\n[GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux\nType \"help\", \"copyright\", \"credits\" or \"license\" for more information.\n>>> import torch\n>>> torch._C.__file__\n'/home/killeent/github/pytorch/torch/_C.cpython-36m-x86_64-linux-gnu.so'\n\n# it references the local shared object library we just re-built", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "(p3) killeent@devgpu047:~$ ldd /home/killeent/github/pytorch/torch/_C.cpython-36m-x86_64-linux-gnu.so\n...\nlibTH.so.1 => /home/killeent/github/pytorch/torch/lib/libTH.so.1 (0x00007f543d0e2000)\n...\n```\n\nAs such, we can test any changes here without having to do a full rebuild.\n\n#### 3rd Party Libraries\n\nPyTorch has dependencies on some 3rd party libraries. The usual mechanism for using these libraries is to install them via Anaconda, and then link against them. 
For example, we can use the `mkl` library with PyTorch by doing:\n\n```bash\n# installed to miniconda2/envs/p3/lib/libmkl_intel_lp64.so\nconda install mkl\n\nAnd then as long as we have the path to this lib directory on our $CMAKE_PREFIX_PATH, it will successfully find this library when compiling:\n# in the site-packages dir\n(p3) killeent@devgpu047:torch$ ldd _C.cpython-36m-x86_64-linux-gnu.so\n# ...\nlibmkl_intel_lp64.so => /home/killeent/local/miniconda2/envs/p3/lib/libmkl_intel_lp64.so (0x00007f3450bba000)\n# ...\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "...\n```\nNot Covered, But Also Relevant\n\nHow ccache is used to speed up build times\nHow PyTorch's top-level __init__.py file handles the initial module import and pulling together all the various modules and extension libraries\nThe CMake build system, how the backend libraries are configured and built with CMake\n", "source": "https://pytorch.org/blog/a-tour-of-pytorch-internals-2/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'An Overview of the PyTorch Mobile Demo Apps'\nauthor: Jeff Tang and Mark Saroufim\nfeatured-img: 'assets/images/android-demo-app.png'\ndate: 2021-06-18 12:00:00 -0500\n\nPyTorch Mobile provides a runtime environment to execute state-of-the-art machine learning models on mobile devices. Latency is reduced, privacy preserved, and models can run on mobile devices anytime, anywhere.\nIn this blog post, we provide a quick overview of 10 currently available PyTorch Mobile powered demo apps running various state-of-the-art PyTorch 1.9 machine learning models spanning images, video, audio and text.\nIt\u2019s never been easier to deploy a state-of-the-art ML model to a phone. You don\u2019t need any domain knowledge in Machine Learning and we hope one of the below examples resonates enough with you to be the starting point for your next project.\n\n\n\nComputer Vision\nImage Classification", "source": "https://pytorch.org/blog/mobile-demo-apps-overview/", "category": "pytorch blogs"} {"text": "Computer Vision\nImage Classification\nThis app demonstrates how to use PyTorch C++ libraries on iOS and Android to classify a static image with the MobileNetv2/3 model.\n iOS #1 iOS #2 Android #1 Android #2\n iOS Android\n\n\n\nLive Image Classification\nThis app demonstrates how to run a quantized MobileNetV2 and Resnet18 models to classify images in real time with an iOS and Android device camera.", "source": "https://pytorch.org/blog/mobile-demo-apps-overview/", "category": "pytorch blogs"} {"text": " iOS Android\n\n\n\n\nImage Segmentation\nThis app demonstrates how to use the PyTorch DeepLabV3 model to segment images. 
The updated app for PyTorch 1.9 also demonstrates how to create the model using the Mobile Interpreter and load the model with the LiteModuleLoader API.\n iOS Android", "source": "https://pytorch.org/blog/mobile-demo-apps-overview/", "category": "pytorch blogs"} {"text": " iOS Android\n\n \n\nVision Transformer for Handwritten Digit Recognition\nThis app demonstrates how to use Facebook's latest optimized Vision Transformer DeiT model to do image classification and handwritten digit recognition.\n iOS Android\n Android\n", "source": "https://pytorch.org/blog/mobile-demo-apps-overview/", "category": "pytorch blogs"} {"text": "\n \n\nObject Detection\nThis app demonstrates how to convert the popular YOLOv5 model and use it on an iOS app that detects objects from pictures in your photos, taken with camera, or with live camera.\n iOS Android\n iOS Android\n\n \n\nD2Go", "source": "https://pytorch.org/blog/mobile-demo-apps-overview/", "category": "pytorch blogs"} {"text": "\nD2Go\nThis app demonstrates how to create and use a much lighter and faster Facebook D2Go model to detect objects from pictures in your photos, taken with camera, or with live camera.\n iOS Android\n iOS Android\n\n \n\nVideo\nVideo Classification\nThis app demonstrates how to use a pre-trained PyTorchVideo model to perform video classification on tested videos, videos from the Photos library, or even real-time videos.", "source": "https://pytorch.org/blog/mobile-demo-apps-overview/", "category": "pytorch blogs"} {"text": " iOS Android\n iOS Android Deep Dive\n\n \n\nNatural Language Processing\nText Classification\nThis app demonstrates how to use a pre-trained Reddit model to perform text classification.\n iOS Android\n", "source": "https://pytorch.org/blog/mobile-demo-apps-overview/", "category": "pytorch blogs"} {"text": "\n \n\nMachine Translation\nThis app demonstrates how to convert a sequence-to-sequence neural machine translation model trained with the code in the PyTorch NMT tutorial for french to english translation.\n iOS Android\n iOS Android\n\n \n", "source": "https://pytorch.org/blog/mobile-demo-apps-overview/", "category": "pytorch blogs"} {"text": "\nQuestion Answering\nThis app demonstrates how to use the DistilBERT Hugging Face transformer model to answer questions about Pytorch Mobile itself.\n iOS Android\n iOS Android\n\n \n\nAudio\nSpeech Recognition\nThis app demonstrates how to convert Facebook AI's torchaudio-powered wav2vec 2.0, one of the leading models in speech recognition to TorchScript before deploying it.", "source": "https://pytorch.org/blog/mobile-demo-apps-overview/", "category": "pytorch blogs"} {"text": " iOS Android\n\n \n\nWe really hope one of these demo apps stood out for you. For the full list, make sure to visit the iOS and Android demo app repos. 
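As a rough illustration of the Mobile Interpreter workflow mentioned above for the Image Segmentation app, here is a hedged sketch of exporting a model for the lite interpreter (the exact steps and model used by the demo apps may differ):\n```python\nimport torch\nimport torchvision\nfrom torch.utils.mobile_optimizer import optimize_for_mobile\n\n# Script the model and optimize it for mobile execution.\nmodel = torchvision.models.mobilenet_v2(pretrained=True).eval()\nscripted = torch.jit.script(model)\noptimized = optimize_for_mobile(scripted)\n\n# Save in the lite-interpreter format; on-device code (for example,\n# Android's LiteModuleLoader) can then load the resulting .ptl file.\noptimized._save_for_lite_interpreter('mobilenet_v2.ptl')\n```\n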
You should also definitely check out the video An Overview of the PyTorch Mobile Demo Apps which provides both an overview of the PyTorch mobile demo apps and a deep dive into the PyTorch Video app for iOS and Android.", "source": "https://pytorch.org/blog/mobile-demo-apps-overview/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Accelerating PyTorch Vision Models with Channels Last on CPU\"\nauthor: Mingfei Ma (Intel), Vitaly Fedyunin (Meta), Wei Wei (Meta)\nfeatured-img: '/assets/images/accelerating-pytorch-vision-models-with-channels-last-on-cpu-2.png'\n\nOverview\nMemory format has a significant impact on performance when running vision models; generally, Channels Last is more favorable from a performance perspective due to better data locality.\nThis blog will introduce fundamental concepts of memory formats and demonstrate performance benefits using Channels Last on popular PyTorch vision models on Intel\u00ae Xeon\u00ae Scalable processors.\nMemory Formats Introduction\nMemory format refers to the data representation that describes how a multidimensional (nD) array is stored in linear (1D) memory address space. The concept of memory format has two aspects:", "source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"} {"text": "\nPhysical Order is the layout of data storage in physical memory. For vision models, we usually talk about NCHW and NHWC. These are descriptions of the physical memory layout, also referred to as Channels First and Channels Last respectively.\nLogical Order is a convention on how to describe tensor shape and stride. In PyTorch, this convention is NCHW. No matter what the physical order is, tensor shape and stride will always be depicted in the order of NCHW.\n\nFig-1 shows the physical memory layout of a tensor with shape [1, 3, 4, 4] in both the Channels First and Channels Last memory formats (channels denoted as R, G, B respectively):\n\n\n\n\nFig-1 Physical memory layout of Channels First and Channels Last\n\nMemory Formats Propagation", "source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"} {"text": "\nMemory Formats Propagation\nThe general rule for PyTorch memory format propagation is to preserve the input tensor\u2019s memory format: a Channels First input will generate a Channels First output, and a Channels Last input will generate a Channels Last output. \nFor Convolution layers, PyTorch uses oneDNN (oneAPI Deep Neural Network Library) by default to achieve optimal performance on Intel CPUs. Since it is physically impossible to achieve highly optimized performance directly with the Channels First memory format, input and weight are first converted to a blocked format and then computed. oneDNN may choose different blocked formats according to input shapes, data type and hardware architecture, for vectorization and cache reuse purposes. The blocked format is opaque to PyTorch, so the output needs to be converted back to Channels First. Though the blocked format brings about optimal computing performance, the format conversions may add overhead and therefore offset the performance gain.", "source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"} {"text": "On the other hand, oneDNN is optimized for the Channels Last memory format and can use it directly for optimal performance, so PyTorch simply passes a memory view to oneDNN. 
This means the conversion of the input and output tensors is saved. Fig-2 indicates the memory format propagation behavior of convolution on PyTorch CPU (the solid arrow indicates a memory format conversion, and the dashed arrow indicates a memory view):\n\n\n\n\nFig-2 CPU Conv memory format propagation\n\nIn PyTorch, the default memory format is Channels First. In case a particular operator doesn't have support for Channels Last, the NHWC input would be treated as a non-contiguous NCHW tensor and therefore fall back to Channels First, which will consume precious memory bandwidth on the CPU and result in suboptimal performance.", "source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"} {"text": "Therefore, it is very important to extend the scope of Channels Last support for optimal performance. We have implemented Channels Last kernels for the commonly used operators in the CV domain, applicable for both inference and training, such as:\n\nActivations (e.g., ReLU, PReLU, etc.)\nConvolution (e.g., Conv2d)\nNormalization (e.g., BatchNorm2d, GroupNorm, etc.)\nPooling (e.g., AdaptiveAvgPool2d, MaxPool2d, etc.)\nShuffle (e.g., ChannelShuffle, PixelShuffle)\n\nRefer to Operators-with-Channels-Last-support for details.\nNative Level Optimization on Channels Last\nAs mentioned above, PyTorch uses oneDNN to achieve optimal performance on Intel CPUs for convolutions. The rest of the memory format aware operators are optimized at the PyTorch native level, which doesn\u2019t require any third-party library support.", "source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"} {"text": "\nCache friendly parallelization scheme: we keep the same parallelization scheme for all the memory format aware operators; this helps increase data locality when passing each layer\u2019s output to the next.\nVectorization on multiple archs: generally, we can vectorize on the innermost dimension in the Channels Last memory format, and each of the vectorized CPU kernels is generated for both AVX2 and AVX512.\n\nWhile contributing the Channels Last kernels, we tried our best to optimize the Channels First counterparts as well. The fact is that some operators, such as Convolution and Pooling, are physically unable to achieve optimal performance on Channels First.\nRun Vision Models on Channels Last\nThe Channels Last related APIs are documented at PyTorch memory format tutorial. Typically, we can convert a 4D tensor from Channels First to Channels Last by:\n```python\n# convert x to channels last\n# suppose x\u2019s shape is (N, C, H, W)", "source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"} {"text": "# suppose x\u2019s shape is (N, C, H, W)\n# then x\u2019s stride will be (HWC, 1, WC, C)\nx = x.to(memory_format=torch.channels_last)\n```\n\nTo run models in Channels Last memory format, you simply need to convert the input and the model to Channels Last, and then you are ready to go. 
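Before the full ResNet50 example that follows, it can be handy to verify a tensor's memory format after the conversion; here is a small sketch (not from the original post):\n```python\nimport torch\n\nx = torch.rand(1, 3, 224, 224)                 # Channels First (NCHW) by default\nx_cl = x.to(memory_format=torch.channels_last)\n\n# The logical shape is unchanged; only the physical layout (strides) differs.\nprint(x.shape, x_cl.shape)      # torch.Size([1, 3, 224, 224]) for both\nprint(x.stride())               # (150528, 50176, 224, 1)\nprint(x_cl.stride())            # (150528, 1, 672, 3)\nprint(x_cl.is_contiguous(memory_format=torch.channels_last))  # True\n```\n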
The following is a minimal example showing how to run ResNet50 with TorchVision on Channels Last memory format:\n\n```python\nimport torch\nfrom torchvision.models import resnet50\n\nN, C, H, W = 1, 3, 224, 224\nx = torch.rand(N, C, H, W)\nmodel = resnet50()\nmodel.eval()\n\n# convert input and model to channels last\nx = x.to(memory_format=torch.channels_last)\nmodel = model.to(memory_format=torch.channels_last)\nmodel(x)\n\nThe Channels Last optimization is implemented at native kernel level, which means you may apply other functionalities such as torch.fx and torch script together with Channels Last as well.\nPerformance Gains", "source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"} {"text": "Performance Gains\nWe benchmarked inference performance of TorchVision models on Intel\u00ae Xeon\u00ae Platinum 8380 CPU @ 2.3 GHz, single instance per socket (batch size = 2 x number of physical cores). Results show that Channels Last has 1.3x to 1.8x performance gain over Channels First.\n\n\n\nThe performance gain primarily comes from two aspects:\n\nFor Convolution layers, Channels Last saved the memory format conversion to blocked format for activations, which improves the overall computation efficiency.\nFor Pooling and Upsampling layers, Channels Last can use vectorized logic along the most inner dimension, e.g., \u201cC\u201d, while Channels First can\u2019t.\n\nFor memory format non aware layers, Channels Last and Channels First has the same performance.\nConclusion & Future Work", "source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"} {"text": "Conclusion & Future Work\nIn this blog we introduced fundamental concepts of Channels Last and demonstrated the performance benefits of CPU using Channels Last on vision models. The current work is limited to 2D models at the current stage, and we will extend the optimization effort to 3D models in near future!\nAcknowledgement\nThe results presented in this blog is a joint effort of Meta and Intel PyTorch team. Special thanks to Vitaly Fedyunin and Wei Wei from Meta who spent precious time and gave substantial assistance! Together we made one more step on the path of improving the PyTorch CPU eco system.\nReferences\n\nPyTorch memory format tutorial\noneDNN guide on memory formats\nPyTorch operators with Channels Last support\n", "source": "https://pytorch.org/blog/accelerating-pytorch-vision-models-with-channels-last-on-cpu/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Announcing the PyTorch Enterprise Support Program'\nauthor: Team PyTorch\n\nToday, we are excited to announce the PyTorch Enterprise Support Program, a participatory program that enables service providers to develop and offer tailored enterprise-grade support to their customers. This new offering, built in collaboration between Facebook and Microsoft, was created in direct response to feedback from PyTorch enterprise users who are developing models in production at scale for mission-critical applications.\nThe PyTorch Enterprise Support Program is available to any service provider. 
It is designed to mutually benefit all program Participants by sharing and improving PyTorch long-term support (LTS), including contributions of hotfixes and other improvements found while working closely with customers and on their systems.", "source": "https://pytorch.org/blog/announcing-pytorch-enterprise/", "category": "pytorch blogs"} {"text": "To benefit the open source community, all hotfixes developed by Participants will be tested and fed back to the LTS releases of PyTorch regularly through PyTorch\u2019s standard pull request process. To participate in the program, a service provider must apply and meet a set of program terms and certification requirements. Once accepted, the service provider becomes a program Participant and can offer a packaged PyTorch Enterprise support service with LTS, prioritized troubleshooting, useful integrations, and more.\n\n\n", "source": "https://pytorch.org/blog/announcing-pytorch-enterprise/", "category": "pytorch blogs"} {"text": "\nAs one of the founding members and an inaugural member of the PyTorch Enterprise Support Program, Microsoft is launching PyTorch Enterprise on Microsoft Azure to deliver a reliable production experience for PyTorch users. Microsoft will support each PyTorch release for as long as it is current. In addition, it will support selected releases for two years, enabling a stable production experience. Microsoft Premier and Unified Support customers can access prioritized troubleshooting for hotfixes, bugs, and security patches at no additional cost. Microsoft will extensively test PyTorch releases for performance regression. The latest release of PyTorch will be integrated with Azure Machine Learning and other PyTorch add-ons including ONNX Runtime for faster inference.", "source": "https://pytorch.org/blog/announcing-pytorch-enterprise/", "category": "pytorch blogs"} {"text": "PyTorch Enterprise on Microsoft Azure not only benefits its customers, but also the PyTorch community users. All improvements will be tested and fed back to the future release for PyTorch so everyone in the community can use them.\nAs an organization or PyTorch user, the standard way of researching and deploying with different release versions of PyTorch does not change. If your organization is looking for the managed long-term support, prioritized patches, bug fixes, and additional enterprise-grade support, then you should reach out to service providers participating in the program.\nTo learn more and participate in the program as a service provider, visit the PyTorch Enterprise Support Program. If you want to learn more about Microsoft\u2019s offering, visit PyTorch Enterprise on Microsoft Azure.\nThank you,\nTeam PyTorch", "source": "https://pytorch.org/blog/announcing-pytorch-enterprise/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Everything You Need To Know About Torchvision\u2019s SSD Implementation'\nauthor: Vasilis Vryniotis\nfeatured-img: 'assets/images/prediction-examples.png'\n\nIn TorchVision v0.10, we\u2019ve released two new Object Detection models based on the SSD architecture. Our plan is to cover the key implementation details of the algorithms along with information on how they were trained in a two-part article.", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "In part 1 of the series, we will focus on the original implementation of the SSD algorithm as described on the Single Shot MultiBox Detector paper. 
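For readers who simply want to try the released model before digging into the internals, here is a minimal, hedged usage sketch (assuming the ssd300_vgg16 entry point shipped with TorchVision v0.10; argument names may differ across versions):\n```python\nimport torch\nfrom torchvision.models.detection import ssd300_vgg16\n\n# Load the released SSD300 VGG16 model and run it on a dummy image.\nmodel = ssd300_vgg16(pretrained=True)\nmodel.eval()\n\nimages = [torch.rand(3, 300, 300)]   # a list of CHW tensors with values in [0, 1]\nwith torch.no_grad():\n    predictions = model(images)\n\n# Each prediction is a dict with 'boxes', 'labels' and 'scores'.\nprint(predictions[0]['boxes'].shape, predictions[0]['scores'][:5])\n```\n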
We will briefly give a high-level description of how the algorithm works, then go through its main components, highlight key parts of its code, and finally discuss how we trained the released model. Our goal is to cover all the necessary details to reproduce the model including those optimizations which are not covered on the paper but are part on the original implementation.\nHow Does SSD Work?\nReading the aforementioned paper is highly recommended but here is a quick oversimplified refresher. Our target is to detect the locations of objects in an image along with their categories. Here is the Figure 5 from the SSD paper with prediction examples of the model:\n", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\n\n\nThe SSD algorithm uses a CNN backbone, passes the input image through it and takes the convolutional outputs from different levels of the network. The list of these outputs are called feature maps. These feature maps are then passed through the Classification and Regression heads which are responsible for predicting the class and the location of the boxes.", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "Since the feature maps of each image contain outputs from different levels of the network, their size varies and thus they can capture objects of different dimensions. On top of each, we tile several default boxes which can be thought as our rough prior guesses. For each default box, we predict whether there is an object (along with its class) and its offset (correction over the original location). During training time, we need to first match the ground truth to the default boxes and then we use those matches to estimate our loss. During inference, similar prediction boxes are combined to estimate the final predictions. \nThe SSD Network Architecture\nIn this section, we will discuss the key components of SSD. Our code follows closely the paper and makes use of many of the undocumented optimizations included in the official implementation.\nDefaultBoxGenerator", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "DefaultBoxGenerator\nThe DefaultBoxGenerator class is responsible for generating the default boxes of SSD and operates similarly to the AnchorGenerator of FasterRCNN (for more info on their differences see pages 4-6 of the paper). It produces a set of predefined boxes of specific width and height which are tiled across the image and serve as the first rough prior guesses of where objects might be located. Here is Figure 1 from the SSD paper with a visualization of ground truths and default boxes:\n\n\n", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nThe class is parameterized by a set of hyperparameters that control their shape and tiling. The implementation will provide automatically good guesses with the default parameters for those who want to experiment with new backbones/datasets but one can also pass optimized custom values.\nSSDMatcher", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "The SSDMatcher class extends the standard Matcher used by FasterRCNN and it is responsible for matching the default boxes to the ground truth. 
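Looping back to the DefaultBoxGenerator for a moment, a hedged construction sketch using the class of the same name in TorchVision might look like the following (one aspect-ratio list per feature map; as mentioned above, the implementation provides reasonable defaults for scales and steps when they are not passed explicitly):\n```python\nfrom torchvision.models.detection.anchor_utils import DefaultBoxGenerator\n\n# Six feature maps, each with its own set of aspect ratios; scales and steps\n# are left to the class defaults here, but can also be passed explicitly.\nanchor_generator = DefaultBoxGenerator(\n    aspect_ratios=[[2], [2, 3], [2, 3], [2, 3], [2], [2]],\n)\nprint(anchor_generator)\n```\n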
After estimating the IoUs of all combinations, we use the matcher to find for each default box the best candidate ground truth with overlap higher than the IoU threshold. The SSD version of the matcher has an extra step to ensure that each ground truth is matched with the default box that has the highest overlap. The results of the matcher are used in the loss estimation during the training process of the model.", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "Classification and Regression Heads\nThe SSDHead class is responsible for initializing the Classification and Regression parts of the network. Here are a few notable details about their code:\n\nBoth the Classification and the Regression head inherit from the same class which is responsible for making the predictions for each feature map.\n", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nEach level of the feature map uses a separate 3x3 Convolution to estimate the class logits and box locations. \nThe number of predictions that each head makes per level depends on the number of default boxes and the sizes of the feature maps.\n\nBackbone Feature Extractor\nThe feature extractor reconfigures and enhances a standard VGG backbone with extra layers as depicted on the Figure 2 of the SSD paper: \n\n", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nThe class supports all VGG models of TorchVision and one can create a similar extractor class for other types of CNNs (see this example for ResNet). Here are a few implementation details of the class:\n\nPatching the ceil_mode parameter of the 3rd Maxpool layer is necessary to get the same feature map sizes as the paper. This is due to small differences between PyTorch and the original Caffe implementation of the model.\n", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nIt adds a series of extra feature layerson top of VGG. If the highres parameter is True during its construction, it will append an extra convolution. This is useful for the SSD512 version of the model.\nAs discussed on section 3 of the paper, the fully connected layers of the original VGG are converted to convolutions with the first one using Atrous. Moreover maxpool5\u2019s stride and kernel size is modified.\n", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nAs described on section 3.1, L2 normalization is used on the output of conv4_3 and a set of learnable weights are introduced to control its scaling.\n\nSSD Algorithm\nThe final key piece of the implementation is on the SSD class. Here are some notable details:", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nThe algorithm is parameterized by a set of arguments similar to other detection models. The mandatory parameters are: the backbone which is responsible for estimating the feature maps, the anchor_generator which should be a configured instance of the DefaultBoxGenerator class, the size to which the input images will be resized and the num_classes for classification excluding the background.\n", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nIf a head is not provided, the constructor will initialize the default SSDHead. 
To do so, we need to know the number of output channels for each feature map produced by the backbone. Initially we try to retrieve this information from the backbone but if not available we will dynamically estimate it.\n", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nThe algorithm reuses the standard BoxCoder class used by other Detection models. The class is responsible for encoding and decoding the bounding boxes and is configured to use the same prior variances as the original implementation.\n", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nThough we reuse the standard GeneralizedRCNNTransform class to resize and normalize the input images, the SSD algorithm configures it to ensure that the image size will remain fixed. \n\nHere are the two core methods of the implementation:", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nThe compute_loss method estimates the standard Multi-box loss as described on page 5 of the SSD paper. It uses the smooth L1 loss for regression and the standard cross-entropy loss with hard-negative sampling for classification.\n", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nAs in all detection models, the forward method currently has different behaviour depending on whether the model is on training or eval mode. It starts by resizing & normalizing the input images and then passes them through the backbone to get the feature maps. The feature maps are then passed through the head to get the predictions and then the method generates the default boxes.\n", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nIf the model is on training mode, the forward will estimate the IoUs of the default boxes with the ground truth, use the SSDmatcher to produce matches and finally estimate the losses by calling the compute_loss method.\n", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nIf the model is on eval mode, we first select the best detections by keeping only the ones that pass the score threshold, select the most promising boxes and run NMS to clean up and select the best predictions. Finally we postprocess the predictions to resize them to the original image size.\n\nThe SSD300 VGG16 Model", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "The SSD300 VGG16 Model\nThe SSD is a family of models because it can be configured with different backbones and different Head configurations. In this section, we will focus on the provided SSD pre-trained model. We will discuss the details of its configuration and the training process used to reproduce the reported results.\nTraining process\nThe model was trained using the COCO dataset and all of its hyper-parameters and scripts can be found in our references folder. Below we provide details on the most notable aspects of the training process.\nPaper Hyperparameters", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "Paper Hyperparameters\nIn order to achieve the best possible results on COCO, we adopted the hyperparameters described on the section 3 of the paper concerning the optimizer configuration, the weight regularization etc. 
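As a rough illustration only (the exact values live in the references folder mentioned above, and the numbers here are the commonly cited SSD paper defaults rather than a quote from our scripts), such an optimizer configuration might look like:\n```python\nimport torch\nfrom torchvision.models.detection import ssd300_vgg16\n\nmodel = ssd300_vgg16(pretrained=False)\n\n# Illustrative SSD-style optimizer: SGD with momentum and weight decay.\noptimizer = torch.optim.SGD(\n    model.parameters(),\n    lr=0.001,\n    momentum=0.9,\n    weight_decay=0.0005,\n)\n```\n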
Moreover we found it useful to adopt the optimizations that appear in the official implementation concerning the tiling configuration of the DefaultBox generator. This optimization was not described in the paper but it was crucial for improving the detection precision of smaller objects. \nData Augmentation", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "Data Augmentation\nImplementing the SSD Data Augmentation strategy as described on page 6 and page 12 of the paper was critical to reproducing the results. More specifically the use of random \u201cZoom In\u201d and \u201cZoom Out\u201d transformations make the model robust to various input sizes and improve its precision on the small and medium objects. Finally since the VGG16 has quite a few parameters, the photometric distortions included in the augmentations have a regularization effect and help avoid the overfitting. \nWeight Initialization & Input Scaling", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "Another aspect that we found beneficial was to follow the weight initialization scheme proposed by the paper. To do that, we had to adapt our input scaling method by undoing the 0-1 scaling performed by ToTensor() and use pre-trained ImageNet weights fitted with this scaling (shoutout to Max deGroot for providing them in his repo). All the weights of new convolutions were initialized using Xavier and their biases were set to zero. After initialization, the network was trained end-to-end.", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "LR Scheme\nAs reported on the paper, after applying aggressive data augmentations it\u2019s necessary to train the models for longer. Our experiments confirm this and we had to tweak the Learning rate, batch sizes and overall steps to achieve the best results. Our proposed learning scheme is configured to be rather on the safe side, showed signs of plateauing between the steps and thus one is likely to be able to train a similar model by doing only 66% of our epochs.\nBreakdown of Key Accuracy Improvements", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "Breakdown of Key Accuracy Improvements\nIt is important to note that implementing a model directly from a paper is an iterative process that circles between coding, training, bug fixing and adapting the configuration until we match the accuracies reported on the paper. Quite often it also involves simplifying the training recipe or enhancing it with more recent methodologies. It is definitely not a linear process where incremental accuracy improvements are achieved by improving a single direction at a time but instead involves exploring different hypothesis, making incremental improvements in different aspects and doing a lot of backtracking. \nWith that in mind, below we try to summarize the optimizations that affected our accuracy the most. We did this by grouping together the various experiments in 4 main groups and attributing the experiment improvements to the closest match. 
Note that the Y-axis of the graph starts at 18 instead of 0 to make the difference between the optimizations more visible:", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\n\n| Model Configuration | mAP delta | mAP |\n| --- | --- | --- |\n| Baseline with \"FasterRCNN-style\" Hyperparams | - | 19.5 |\n| + Paper Hyperparams | 1.6 | 21.1 |\n| + Data Augmentation | 1.8 | 22.9 |\n| + Weight Initialization & Input Scaling | 1 | 23.9 |\n| + LR scheme | 1.2 | 25.1 |\n\nOur final model achieves an mAP of 25.1 and reproduces exactly the COCO results reported in the paper. Here is a detailed breakdown of the accuracy metrics.\nWe hope you found part 1 of the series interesting. In part 2, we will focus on the implementation of SSDlite and discuss its differences from SSD. Until then, we are looking forward to your feedback.", "source": "https://pytorch.org/blog/torchvision-ssd-implementation/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"PyTorch 1.13 release, including beta versions of functorch and improved support for Apple\u2019s new M1 chips.\"\nauthor: Team PyTorch\nfeatured-img: \"/assets/images/blog-2022-10-25-Pytorch-1.13-Release.png\"\n\nWe are excited to announce the release of PyTorch\u00ae 1.13 (release note)! This includes Stable versions of BetterTransformer. We deprecated CUDA 10.2 and 11.3 and completed migration of CUDA 11.6 and 11.7. Beta includes improved support for Apple M1 chips and functorch, a library that offers composable vmap (vectorization) and autodiff transforms, being included in-tree with the PyTorch release. This release is composed of over 3,749 commits and 467 contributors since 1.12.1. We want to sincerely thank our dedicated community for your contributions.\nSummary:", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "Summary:\n\n\nThe BetterTransformer feature set supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. Additional improvements include accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models, and Nested Tensors are now enabled by default.\n\n\nTimely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA versions as they are introduced by Nvidia\u00ae, and hence allows support for C++17 in PyTorch and new NVIDIA Open GPU Kernel Modules.\n\n\nPreviously, functorch was released out-of-tree in a separate package. 
After installing PyTorch, a user will be able to import functorch and use functorch without needing to install another package.\n\n", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "\nPyTorch is offering native builds for Apple\u00ae silicon machines that use Apple's new M1 chip as a beta feature, providing improved support across PyTorch's APIs.\n\n\n\n\n\nStable\nBeta\nPrototype\n\n\n\n\nBetter Transformer\n Enable Intel\u00ae VTune\u2122 Profiler\u2019s Instrumentation and Tracing Technology APIs \n Arm\u00ae Compute Library backend support for AWS Graviton \n\n\nCUDA 10.2 and 11.3 CI/CD Deprecation \nExtend NNC to support channels last and bf16 \nCUDA Sanitizer \n\n\n\u00a0\nFunctorch now in PyTorch Core Library\n\u00a0", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "\u00a0\n\n\n  \n Beta Support for M1 devices\n  \n\n\n\n\n\nAlong with 1.13, we are also releasing major updates to the PyTorch libraries, more details can be found in this blog.\nStable Features\n(Stable) BetterTransformer API\nThe BetterTransformer feature set, first released in PyTorch 1.12, is stable. PyTorch BetterTransformer supports fastpath execution for common Transformer models during Inference out-of-the-box, without the need to modify the model. To complement the improvements in Better Transformer, we have also accelerated add+matmul linear algebra kernels for sizes commonly used in Transformer models.", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "Reflecting the performance benefits for many NLP users, Nested Tensors use for Better Transformer is now enabled by default. To ensure compatibility, a mask check is performed to ensure a contiguous mask is supplied. In Transformer Encoder, the mask check for src_key_padding_mask may be suppressed by setting mask_check=False. This accelerates processing for users than can guarantee that only aligned masks are provided. Finally, better error messages are provided to diagnose incorrect inputs, together with improved diagnostics why fastpath execution cannot be used.\nBetter Transformer is directly integrated into the PyTorch TorchText library, enabling TorchText users to transparently and automatically take advantage of BetterTransformer speed and efficiency performance. (Tutorial)\n\n\n\n \n", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "\n \n\nFigure: BetterTransformer fastpath execution is now stable and enables sparsity optimization using Nested Tensor representation as default\n\nIntroduction of CUDA 11.6 and 11.7 and deprecation of CUDA 10.2 and 11.3\nTimely deprecating older CUDA versions allows us to proceed with introducing the latest CUDA version as they are introduced by Nvidia\u00ae, and hence allows developers to use the latest features of CUDA and benefit from correctness fixes provided by the latest version.\nDecommissioning of CUDA 10.2. CUDA 11 is the first CUDA version to support C++17. Hence decommissioning legacy CUDA 10.2 was a major step in adding support for C++17 in PyTorch. 
It also helps to improve PyTorch code by eliminating legacy CUDA 10.2 specific instructions.", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "Decommissioning of CUDA 11.3 and introduction of CUDA 11.7 brings compatibility support for the new NVIDIA Open GPU Kernel Modules and another significant highlight is the lazy loading support. CUDA 11.7 is shipped with cuDNN 8.5.0 which contains a number of optimizations accelerating transformer-based models, 30% reduction in library size , and various improvements in the runtime fusion engine. Learn more on CUDA 11.7 with our release notes.\nBeta Features\n(Beta) functorch\nInspired by Google\u00ae JAX, functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch. Examples include:\n\nmodel ensembling", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "efficiently computing jacobians and hessians\ncomputing per-sample-gradients (or other per-sample quantities)\n\n\n\nWe\u2019re excited to announce that, as a first step towards closer integration with PyTorch, functorch has moved to inside the PyTorch library and no longer requires the installation of a separate functorch package. After installing PyTorch via conda or pip, you\u2019ll be able to `import functorch\u2019 in your program. Learn more with our detailed instructions, nightly and release notes.\n(Beta) Intel\u00ae VTune\u2122 Profiler's Instrumentation and Tracing Technology APIs (ITT) integration", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "PyTorch users are able to visualize op-level timeline of PyTorch scripts execution in Intel\u00ae VTune\u2122 Profiler when they need to analyze per-op performance with low-level performance metrics on Intel platforms.\nwith torch.autograd.profiler.emit_itt():\n for i in range(10):\n torch.itt.range_push('step_{}'.format(i))\n model(input)\n torch.itt.range_pop()\n\n \nLearn more with our tutorial.\n(Beta) NNC: Add BF16 and Channels last support\nTorchScript graph-mode inference performance on x86 CPU is boosted by adding channels last and BF16 support to NNC. PyTorch users may benefit from channels last optimization on most popular x86 CPUs and benefit from BF16 optimization on Intel Cooper Lake Processor and Sapphire Rapids Processor. >2X geomean performance boost is observed on broad vision models with these two optimizations on Intel Cooper Lake Processor.", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "The performance benefit can be obtained with existing TorchScript, channels last and BF16 Autocast APIs. See code snippet below. 
We will migrate the optimizations in NNC to the new PyTorch DL Compiler, TorchInductor.\n\n```python\nimport torch\nimport torchvision.models as models\n\nmodel = models.resnet50(pretrained=True)\n# Convert the model to channels-last\nmodel = model.to(memory_format=torch.channels_last)\nmodel.eval()\ndata = torch.rand(1, 3, 224, 224)\n# Convert the data to channels-last\ndata = data.to(memory_format=torch.channels_last)\n# Enable autocast to run with BF16\nwith torch.cpu.amp.autocast(), torch.no_grad():\n    # Trace the model\n    model = torch.jit.trace(model, torch.rand(1, 3, 224, 224))\n    model = torch.jit.freeze(model)\n    # Run the traced model\n    model(data)\n```\n\n(Beta) Support for M1 Devices", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "(Beta) Support for M1 Devices\nSince v1.12, PyTorch has been offering native builds for Apple\u00ae silicon machines that use Apple's new M1 chip as a prototype feature. In this release, we bring this feature to beta, providing improved support across PyTorch's APIs.\nWe now run tests for all submodules except torch.distributed on M1 macOS 12.6 instances. With this improved testing, we were able to fix features such as cpp extension and convolution correctness for certain inputs.\nTo get started, just install PyTorch v1.13 on your Apple silicon Mac running macOS 12 or later with a native version (arm64) of Python. Learn more with our release notes.\nPrototype Features\n\n(Prototype) Arm\u00ae Compute Library (ACL) backend support for AWS Graviton\nWe achieved substantial improvements for CV and NLP inference on aarch64 CPUs with the Arm Compute Library (ACL) by enabling the ACL backend for the pytorch and torch-xla modules. Highlights include:", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": " \n\nEnabled mkldnn + acl as the default backend for the aarch64 torch wheel.\nEnabled the mkldnn matmul operator for the aarch64 bf16 device.\nBrought the TensorFlow xla+acl feature into torch-xla. We enhanced the TensorFlow xla with the Arm Compute Library runtime for aarch64 cpu. These changes are included in TensorFlow master and will be in the upcoming TF 2.10. Once the torch-xla repo is updated for the tensorflow commit, it will have compiling support for torch-xla. We observed ~2.5-3x improvement for MLPerf Bert inference compared to the torch 1.12 wheel on Graviton3.\n\n(Prototype) CUDA Sanitizer", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "(Prototype) CUDA Sanitizer\nWhen enabled, the sanitizer begins to analyze low-level CUDA operations invoked as a result of the user\u2019s PyTorch code to detect data race errors caused by unsynchronized data access from different CUDA streams. The errors found are then printed along with stack traces of faulty accesses, much like Thread Sanitizer does. An example of a simple error and the output produced by the sanitizer can be viewed here. It will be especially useful for machine learning applications, where corrupted data can be easy to miss for a human and the errors may not always manifest themselves; the sanitizer will always be able to detect them.\n(Prototype) Limited Python 3.11 support", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "(Prototype) Limited Python 3.11 support\nBinaries for Linux with Python 3.11 support are available to download via pip. Please follow the instructions on the get started page. Please note that Python 3.11 support is only a preview. 
In particular, features including Distributed, Profiler, FX and JIT might not be fully functional yet.", "source": "https://pytorch.org/blog/PyTorch-1.13-release/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Celebrate PyTorch 2.0 with New Performance Features for AI Developers\"\nauthor: Intel\n\nCongratulations to the PyTorch Foundation for its release of PyTorch 2.0! In this blog, I discuss the four features for which Intel made significant contributions to PyTorch 2.0:\n\nTorchInductor\nGNN\nINT8 Inference Optimization\noneDNN Graph API\n\nWe at Intel are delighted to be part of the PyTorch community and appreciate the collaboration with and feedback from our colleagues at Meta as we co-developed these features.\nLet\u2019s get started.\n1. TorchInductor CPU FP32 Inference Optimized\nAs part of the PyTorch 2.0 compilation stack, TorchInductor CPU backend optimization brings notable performance improvements via graph compilation over the PyTorch eager mode.", "source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"} {"text": "The TorchInductor CPU backend is sped up by leveraging the technologies from the Intel\u00ae Extension for PyTorch for Conv/GEMM ops with post-op fusion and weight prepacking, and PyTorch ATen CPU kernels for memory-bound ops with explicit vectorization on top of OpenMP*-based thread parallelization.\nWith these optimizations on top of the powerful loop fusions in TorchInductor codegen, we achieved up to a 1.7x FP32 inference performance boost over three representative deep learning benchmarks: TorchBench, HuggingFace, and timm1. Training and low-precision support are under development.\nSee the Improvements\nThe performance improvements on various backends are tracked on this TouchInductor CPU Performance Dashboard.\nImprove Graph Neural Network (GNN) in PyG for Inference and Training Performance on CPU", "source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"} {"text": "GNN is a powerful tool to analyze graph structure data. This feature is designed to improve GNN inference and training performance on Intel\u00ae CPUs, including the new 4th Gen Intel\u00ae Xeon\u00ae Scalable processors.\nPyTorch Geometric (PyG) is a very popular library built upon PyTorch to perform GNN workflows. Currently on CPU, GNN models of PyG run slowly due to the lack of GNN-related sparse matrix multiplication operations (i.e., SpMM_reduce) and the lack of several critical kernel-level optimizations (scatter/gather, etc.) tuned for GNN compute.\nTo address this, optimizations are provided for message passing between adjacent neural network nodes:\n\nscatter_reduce: performance hotspot in message-passing when the edge index is stored in coordinate format (COO).\ngather: backward computation of scatter_reduce, specially tuned for the GNN compute when the index is an expanded tensor.\n", "source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"} {"text": "\ntorch.sparse.mm with reduce flag: performance hotspot in message-passing when the edge index is stored in compressed sparse row (CSR). 
The supported reduce flags are: sum, mean, amax, amin.\n\nEnd-to-end performance benchmark results for both inference and training on the 3rd Gen Intel\u00ae Xeon\u00ae Scalable processor 8380 platform and on the 4th Gen 8480+ platform are discussed in Accelerating PyG on Intel CPUs.\nOptimize int8 Inference with Unified Quantization Backend for x86 CPU Platforms\nThe new X86 quantization backend is a combination of the FBGEMM (Facebook General Matrix-Matrix Multiplication) and oneAPI Deep Neural Network Library (oneDNN) backends and replaces FBGEMM as the default quantization backend for x86 platforms. The result: better end-to-end int8 inference performance than FBGEMM.", "source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"} {"text": "Users access the x86 quantization backend by default for x86 platforms, and the selection between different kernels is automatically done behind the scenes. The rules of selection are based on prior performance testing data gathered by Intel during feature development. Thus, the x86 backend replaces FBGEMM and may offer better performance, depending on the use case.\nThe selection rules are:\n\nOn platforms without VNNI (e.g., Intel\u00ae Core\u2122 i7 processors), FBGEMM is always used.\nOn platforms with VNNI (e.g., 2nd-4th Gen Intel\u00ae Xeon\u00ae Scalable processors and future platforms):\nFor linear layers, FBGEMM is always used.\nFor convolution layers, FBGEMM is used for depth-wise convolution whose layers > 100; otherwise, oneDNN is used.\n\n\n", "source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"} {"text": "Note that, as the kernels continue to evolve, the selection rules above are subject to change to achieve better performance. Performance metrics for throughput speed-up ratios of the unified x86 backend vs. pure FBGEMM are discussed in [RFC] Unified quantization backend for x86 CPU platforms #83888.\nLeverage oneDNN Graph API to Accelerate Inference on CPU", "source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"} {"text": "oneDNN Graph API extends oneDNN with a flexible graph API to maximize the optimization opportunity for generating efficient code on Intel\u00ae AI hardware. It automatically identifies the graph partitions to be accelerated via fusion. The fusion patterns focus on fusing compute-intensive operations such as convolution, matmul, and their neighbor operations for both inference and training use cases.\nCurrently, BFloat16 and Float32 datatypes are supported and only inference workloads can be optimized. BF16 is only optimized on machines with Intel\u00ae Advanced Vector Extensions 512 (Intel\u00ae AVX-512) BF16 support.\nFew or no modifications are needed in PyTorch to support newer oneDNN Graph fusions/optimized kernels. To use oneDNN Graph, users can:", "source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"} {"text": "\nEither use the API torch.jit.enable_onednn_fusion(True) before JIT tracing a model, OR \u2026\nUse its context manager, viz. 
with torch.jit.fuser(\u201cfuser3\u201d).\nFor accelerating BFloat16 inference, we rely on eager-mode AMP (Automatic Mixed Precision) support in PyTorch and disable JIT mode\u2019s AMP.\n\nSee the PyTorch performance tuning guide.\nNext Steps\nGet the Software\nTry out PyTorch 2.0 and realize the performance benefits for yourself from these Intel-contributed features.", "source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"} {"text": "We encourage you to check out Intel\u2019s other AI Tools and Framework optimizations and learn about the open, standards-based oneAPI multiarchitecture, multivendor programming model that forms the foundation of Intel\u2019s AI software portfolio.\nFor more details about 4th Gen Intel Xeon Scalable processor, visit AI Platform where you can learn about how Intel is empowering developers to run high-performance, efficient end-to-end AI pipelines.\nPyTorch Resources\n\nPyTorch Get Started\nDev Discussions\n", "source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"} {"text": "\nDocumentation\n", "source": "https://pytorch.org/blog/celebrate-pytorch-2.0/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"What Every User Should Know About Mixed Precision Training in PyTorch\"\nauthor: Syed Ahmed, Christian Sarofeen, Mike Ruberry, Eddie Yan, Natalia Gimelshein, Michael Carilli, Szymon Migacz, Piotr Bialecki, Paulius Micikevicius, Dusan Stosic, Dong Yang, and Naoya Maruyama\nfeatured-img: ''\n", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "featured-img: ''\nEfficient training of modern neural networks often relies on using lower precision data types. Peak float16 matrix multiplication and convolution performance is 16x faster than peak float32 performance on A100 GPUs. And since the float16 and bfloat16 data types are only half the size of float32 they can double the performance of bandwidth-bound kernels and reduce the memory required to train a network, allowing for larger models, larger batches, or larger inputs. Using a module like torch.amp (short for \u201cAutomated Mixed Precision\u201d) makes it easy to get the speed and memory usage benefits of lower precision data types while preserving convergence behavior.", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "Going faster and using less memory is always advantageous \u2013 deep learning practitioners can test more model architectures and hyperparameters, and larger, more powerful models can be trained. Training very large models like those described in Narayanan et al. and Brown et al. (which take thousands of GPUs months to train even with expert handwritten optimizations) is infeasible without using mixed precision.\nWe\u2019ve talked about mixed precision techniques before (here, here, and here), and this blog post is a summary of those techniques and an introduction if you\u2019re new to mixed precision.\nMixed Precision Training in Practice", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "Mixed Precision Training in Practice\nMixed precision training techniques \u2013 the use of the lower precision float16 or bfloat16 data types alongside the float32 data type \u2013 are broadly applicable and effective. 
See Figure 1 for a sampling of models successfully trained with mixed precision, and Figures 2 and 3 for example speedups using torch.amp.\n\n\n\n\n Figure 1: Sampling of DL Workloads Successfully Trained with float16 (Source).\n\n\n\n\n\n Figure 2: Performance of mixed precision training using torch.amp on NVIDIA 8xV100 vs. float32 training on 8xV100 GPU. Bars represent the speedup factor of torch.amp over float32.", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "(Higher is better.) (Source).\n\n\n\n\n\n Figure 3. Performance of mixed precision training using torch.amp on NVIDIA 8xA100 vs. 8xV100 GPU. Bars represent the speedup factor of A100 over V100.\n(Higher is Better.) (Source).\n\nSee the NVIDIA Deep Learning Examples repository for more sample mixed precision workloads.", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "Similar performance charts can be seen in 3D medical image analysis, gaze estimation, video synthesis, conditional GANs, and convolutional LSTMs. Huang et al. showed that mixed precision training is 1.5x to 5.5x faster over float32 on V100 GPUs, and an additional 1.3x to 2.5x faster on A100 GPUs on a variety of networks. On very large networks the need for mixed precision is even more evident. Narayanan et al. reports that it would take 34 days to train GPT-3 175B on 1024 A100 GPUs (with a batch size of 1536), but it\u2019s estimated it would take over a year using float32!", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "Getting Started With Mixed Precision Using torch.amp\ntorch.amp, introduced in PyTorch 1.6, makes it easy to leverage mixed precision training using the float16 or bfloat16 dtypes. See this blog post, tutorial, and documentation for more details. Figure 4 shows an example of applying AMP with grad scaling to a network.\n```console\nimport torch\nCreates once at the beginning of training\nscaler = torch.cuda.amp.GradScaler()\nfor data, label in data_iter:\n optimizer.zero_grad()\n # Casts operations to mixed precision\n with torch.amp.autocast(device_type=\u201ccuda\u201d, dtype=torch.float16):\n loss = model(data)\n# Scales the loss, and calls backward()\n # to create scaled gradients\n scaler.scale(loss).backward()\n# Unscales gradients and calls\n # or skips optimizer.step()", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "or skips optimizer.step()\nscaler.step(optimizer)\n# Updates the scale for next iteration\n scaler.update()\n```\n\n Figure 4: AMP recipe\n\nPicking The Right Approach\nOut-of-the-box mixed precision training with either float16 or bfloat16 is effective at speeding up the convergence of many deep learning models, but some models may require more careful numerical accuracy management. Here are some options:\n\nFull float32 precision. Floating point tensors and modules are created in float32 precision by default in PyTorch, but this is a historic artifact not representative of training most modern deep learning networks. 
It\u2019s rare that networks need this much numerical accuracy.\n", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "\nEnabling TensorFloat32 (TF32) mode. On Ampere and later CUDA devices matrix multiplications and convolutions can use the TensorFloat32 (TF32) mode for faster but slightly less accurate computations. See the Accelerating AI Training with NVIDIA TF32 Tensor Cores blog post for more details. By default PyTorch enables TF32 mode for convolutions but not matrix multiplications, and unless a network requires full float32 precision we recommend enabling this setting for matrix multiplications, too (see the documentation here for how to do so). It can significantly speed up computations with typically negligible loss of numerical accuracy.\n", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "\nUsing torch.amp with bfloat16 or float16. Both these low precision floating point data types are usually comparably fast, but some networks may only converge with one vs the other. If a network requires more precision it may need to use float16, and if a network requires more dynamic range it may need to use bfloat16, whose dynamic range is equal to that of float32. If overflows are observed, for example, then we suggest trying bfloat16.\n\nThere are even more advanced options than those presented here, like using torch.amp\u2019s autocasting for only parts of a model, or managing mixed precision directly. These topics are largely beyond the scope of this blog post, but see the \u201cBest Practices\u201d section below.\nBest Practices\nWe strongly recommend using mixed precision with torch.amp or the TF32 mode (on Ampere and later CUDA devices) whenever possible when training a network. If one of those approaches doesn\u2019t work, however, we recommend the following:", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "\nHigh Performance Computing (HPC) applications, regression tasks, and generative networks may simply require full float32 IEEE precision to converge as expected.\nTry selectively applying torch.amp. In particular we recommend first disabling it on regions performing operations from the torch.linalg module or when doing pre- or post-processing. These operations are often especially sensitive. Note that TF32 mode is a global switch and can\u2019t be used selectively on regions of a network. Enable TF32 first to check if a network\u2019s operators are sensitive to the mode, otherwise disable it.\nIf you encounter type mismatches while using torch.amp we don\u2019t suggest inserting manual casts to start. This error is indicative of something being off with the network, and it\u2019s usually worth investigating first.\n", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "\nFigure out by experimentation if your network is sensitive to range and/or precision of a format. 
For example fine-tuning bfloat16-pretrained models in float16 can easily run into range issues in float16 because of the potentially large range from training in bfloat16, so users should stick with bfloat16 fine-tuning if the model was trained in bfloat16.\nThe performance gain of mixed precision training can depend on multiple factors (e.g. compute-bound vs memory-bound problems) and users should use the tuning guide to remove other bottlenecks in their training scripts. Although having similar theoretical performance benefits, BF16 and FP16 can have different speeds in practice. It\u2019s recommended to try the mentioned formats and use the one with best speed while maintaining the desired numeric behavior.\n", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "For more details, refer to the AMP Tutorial, Training Neural Networks with Tensor Cores, and see the post \u201cMore In-Depth Details of Floating Point Precision\" on PyTorch Dev Discussion.\nConclusion\nMixed precision training is an essential tool for training deep learning models on modern hardware, and it will become even more important in the future as the performance gap between lower precision operations and float32 continues to grow on newer hardware, as reflected in Figure 5.\n\n\n\n", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "\n\nFigure 5: Relative peak throughput of float16 (FP16) vs float32 matrix multiplications on Volta and Ampere GPUs. On Ampere relative peak throughput for the TensorFloat32 (TF32) mode and bfloat16 matrix multiplications are shown, too. The relative peak throughput of low precision data types like float16 and bfloat16 vs. float32 matrix multiplications is expected to grow as new hardware is released.\n\nPyTorch\u2019s torch.amp module makes it easy to get started with mixed precision, and we highly recommend using it to train faster and reduce memory usage. torch.amp supports both float16 and bfloat16 mixed precision.\nThere are still some networks that are tricky to train with mixed precision, and for these networks we recommend trying TF32 accelerated matrix multiplications on Ampere and later CUDA hardware. Networks are rarely so precision sensitive that they require full float32 precision for every operation.", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "If you have questions or suggestions for torch.amp or mixed precision support in PyTorch then let us know by posting to the mixed precision category on the PyTorch Forums or filing an issue on the PyTorch GitHub page.", "source": "https://pytorch.org/blog/what-every-user-should-know-about-mixed-precision-training-in-pytorch/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Everything you need to know about TorchVision\u2019s MobileNetV3 implementation'\nauthor: Vasilis Vryniotis and Francisco Massa\n\nIn TorchVision v0.9, we released a series of new mobile-friendly models that can be used for Classification, Object Detection and Semantic Segmentation. In this article, we will dig deep into the code of the models, share notable implementation details, explain how we configured and trained them, and highlight important tradeoffs we made during their tuning. 
Our goal is to disclose technical details that typically remain undocumented in the original papers and repos of the models.\nNetwork Architecture", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "Network Architecture\nThe implementation of the MobileNetV3 architecture follows closely the original paper. It is customizable and offers different configurations for building Classification, Object Detection and Semantic Segmentation backbones. It was designed to follow a similar structure to MobileNetV2 and the two share common building blocks.", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "Off-the-shelf, we offer the two variants described on the paper: the Large and the Small. Both are constructed using the same code with the only difference being their configuration which describes the number of blocks, their sizes, their activation functions etc.\nConfiguration parameters", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "Configuration parameters\nEven though one can write a custom InvertedResidual setting and pass it to the MobileNetV3 class directly, for the majority of applications we can adapt the existing configs by passing parameters to the model building methods. Some of the key configuration parameters are the following:", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "\nThe width_mult parameter is a multiplier that affects the number of channels of the model. The default value is 1 and by increasing or decreasing it one can change the number of filters of all convolutions, including the ones of the first and last layers. The implementation ensures that the number of filters is always a multiple of 8. This is a hardware optimization trick which allows for faster vectorization of operations.\n", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "\nThe reduced_tail parameter halves the number of channels on the last blocks of the network. This version is used by some Object Detection and Semantic Segmentation models. It\u2019s a speed optimization which is described on the MobileNetV3 paper and reportedly leads to a 15% latency reduction without a significant negative effect on accuracy.\n", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "\nThe dilated parameter affects the last 3 InvertedResidual blocks of the model and turns their normal depthwise Convolutions to Atrous Convolutions. This is used to control the output stride of these blocks and has a significant positive effect on the accuracy of Semantic Segmentation models.\n\nImplementation details\nBelow we provide additional information on some notable implementation details of the architecture.\nThe MobileNetV3 class is responsible for building a network out of the provided configuration. Here are some implementation details of the class:", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "\n\nThe last convolution block expands the output of the last InvertedResidual block by a factor of 6. 
The implementation is aligned with the Large and Small configurations described on the paper and can adapt to different values of the multiplier parameter.\n\n\nSimilarly to other models such as MobileNetV2, a dropout layer is placed just before the final Linear layer of the classifier.\n\n\nThe InvertedResidual class is the main building block of the network. Here are some notable implementation details of the block along with its visualization which comes from Figure 4 of the paper:", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "\n\nThere is no expansion step if the input channels and the expanded channels are the same. This happens on the first convolution block of the network.\n\n\nThere is always a projection step even when the expanded channels are the same as the output channels.\n\n\nThe activation method of the depthwise block is placed before the Squeeze-and-Excite layer as this improves marginally the accuracy.\n\n\n\n\n\nClassification", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "\nClassification\nIn this section we provide benchmarks of the pre-trained models and details on how they were configured, trained and quantized.\nBenchmarks\nHere is how to initialize the pre-trained models:\nlarge = torchvision.models.mobilenet_v3_large(pretrained=True, width_mult=1.0, reduced_tail=False, dilated=False)\nsmall = torchvision.models.mobilenet_v3_small(pretrained=True)\nquantized = torchvision.models.quantization.mobilenet_v3_large(pretrained=True)\n\nBelow we have the detailed benchmarks between new and selected previous models. As we can see MobileNetV3-Large is a viable replacement of ResNet50 for users who are willing to sacrifice a bit of accuracy for a roughly 6x speed-up:\n\n\n\nModel\nAcc@1\nAcc@5\nInference on CPU (sec)\n# Params (M)\n\n\n\n\nMobileNetV3-Large\n74.042\n91.340\n0.0411\n5.48\n\n\n", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "| MobileNetV3-Small | 67.668 | 87.402 | 0.0165 | 2.54 |\n| Quantized MobileNetV3-Large | 73.004 | 90.858 | 0.0162 | 2.96 |\n| MobileNetV2 | 71.880 | 90.290 | 0.0608 | 3.50 |\n| ResNet50 | 76.150 | 92.870 | 0.2545 | 25.56 |\n| ResNet18 | 69.760 | 89.080 | 0.1032 | 11.69 |\nNote that the inference times are measured on CPU. They are not absolute benchmarks, but they allow for relative comparisons between models.\nTraining process", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "Training process\nAll pre-trained models are configured with a width multiplier of 1, have full tails, are non-dilated, and were fitted on ImageNet. Both the Large and Small variants were trained using the same hyper-parameters and scripts which can be found in our references folder. Below we provide details on the most notable aspects of the training process.\nAchieving fast and stable training", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "Achieving fast and stable training\nConfiguring RMSProp correctly was crucial to achieve fast training with numerical stability. The authors of the paper used TensorFlow in their experiments and in their runs they reported using quite high rmsprop_epsilon comparing to the default. 
Typically this hyper-parameter takes small values as it\u2019s used to avoid zero denominators, but in this specific model choosing the right value seems important to avoid numerical instabilities in the loss.", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "Another important detail is that though PyTorch\u2019s and TensorFlow\u2019s RMSProp implementations typically behave similarly, there are a few differences with the most notable in our setup being how the epsilon hyperparameter is handled. More specifically, PyTorch adds the epsilon outside of the square root calculation while TensorFlow adds it inside. The result of this implementation detail is that one needs to adjust the epsilon value while porting the hyper parameter of the paper. A reasonable approximation can be taken with the formula PyTorch_eps = sqrt(TF_eps).\nIncreasing our accuracy by tuning hyperparameters & improving our training recipe", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "After configuring the optimizer to achieve fast and stable training, we turned into optimizing the accuracy of the model. There are a few techniques that helped us achieve this. First of all, to avoid overfitting we augmented out data using the AutoAugment algorithm, followed by RandomErasing. Additionally we tuned parameters such as the weight decay using cross validation. We also found beneficial to perform weight averaging across different epoch checkpoints after the end of the training. Finally, though not used in our published training recipe, we found that using Label Smoothing, Stochastic Depth and LR noise injection improve the overall accuracy by over 1.5 points.", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "The graph and table depict a simplified summary of the most important iterations for improving the accuracy of the MobileNetV3 Large variant. Note that the actual number of iterations done while training the model was significantly larger and that the progress in accuracy was not always monotonically increasing. Also note that the Y-axis of the graph starts from 70% instead from 0% to make the difference between iterations more visible:\n\n\n\n\n\n\nIteration\nAcc@1\nAcc@5\n\n\n\n\nBaseline with \"MobileNetV2-style\" Hyperparams\n71.542\n90.068\n\n\n+ RMSProp with default eps\n70.684\n89.38\n\n\n+ RMSProp with adjusted eps & LR scheme\n71.764\n90.178\n\n\n+ Data Augmentation & Tuned Hyperparams\n73.86\n91.292\n\n\n", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "| + Checkpoint Averaging | 74.028 | 91.382 |\n| + Label Smoothing & Stochastic Depth & LR noise | 75.536 | 92.368 |\nNote that once we\u2019ve achieved an acceptable accuracy, we verified the model performance on the hold-out test dataset which hasn't been used before for training or hyper-parameter tuning. This process helps us detect overfitting and is always performed for all pre-trained models prior their release.\nQuantization\nWe currently offer quantized weights for the QNNPACK backend of the MobileNetV3-Large variant which provides a speed-up of 2.5x. To quantize the model, Quantized Aware Training (QAT) was used. 
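As a rough sketch of how the released quantized weights can be exercised (the `quantize=True` flag and the qnnpack engine selection are shown as typical usage; engine availability depends on your build and CPU):

```python
import torch
import torchvision

# QNNPACK is the quantized engine these weights target (primarily ARM/mobile CPUs)
torch.backends.quantized.engine = "qnnpack"

model = torchvision.models.quantization.mobilenet_v3_large(pretrained=True, quantize=True)
model.eval()

with torch.no_grad():
    out = model(torch.rand(1, 3, 224, 224))
print(out.shape)  # expected: torch.Size([1, 1000])
```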
The hyper parameters and the scripts used to train the model can be found in our references folder.", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "Note that QAT allows us to model the effects of quantization and adjust the weights so that we can improve the model accuracy. This translates to an accuracy increase of 1.8 points comparing to simple post-training quantization:\n\n\n\nQuantization Status\nAcc@1\nAcc@5\n\n\n\n\nNon-quantized\n74.042\n91.340\n\n\nQuantized Aware Training\n73.004\n90.858\n\n\nPost-training Quantization\n71.160\n89.834\n\n\n\nObject Detection", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "Object Detection\nIn this section, we will first provide benchmarks of the released models, and then discuss how the MobileNetV3-Large backbone was used in a Feature Pyramid Network along with the FasterRCNN detector to perform Object Detection. We will also explain how the network was trained and tuned alongside with any tradeoffs we had to make. We will not cover details about how it was used with SSDlite as this will be discussed on a future article.\nBenchmarks\nHere is how the models are initialized:\nhigh_res = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_fpn(pretrained=True) \nlow_res = torchvision.models.detection.fasterrcnn_mobilenet_v3_large_320_fpn(pretrained=True)\n", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "```\nBelow are some benchmarks between new and selected previous models. As we can see the high resolution Faster R-CNN with MobileNetV3-Large FPN backbone seems a viable replacement of the equivalent ResNet50 model for those users who are willing to sacrifice few accuracy points for a 5x speed-up:\n\n\n\nModel\nmAP\nInference on CPU (sec)\n# Params (M)\n\n\n\n\nFaster R-CNN MobileNetV3-Large FPN (High-Res)\n32.8\n0.8409\n19.39\n\n\nFaster R-CNN MobileNetV3-Large 320 FPN (Low-Res)\n22.8\n0.1679\n19.39\n\n\nFaster R-CNN ResNet-50 FPN\n37.0\n4.1514\n41.76\n\n\nRetinaNet ResNet-50 FPN\n36.4\n4.8825\n34.01\n\n\n\nImplementation details", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "Implementation details\nThe Detector uses a FPN-style backbone which extracts features from different convolutions of the MobileNetV3 model. By default the pre-trained model uses the output of the 13th InvertedResidual block and the output of the Convolution prior to the pooling layer but the implementation supports using the outputs of more stages.", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "All feature maps extracted from the network have their output projected down to 256 channels by the FPN block as this greatly improves the speed of the network. These feature maps provided by the FPN backbone are used by the FasterRCNN detector to provide box and class predictions at different scales.\nTraining & Tuning process\nWe currently offer two pre-trained models capable of doing object detection at different resolutions. 
Both models were trained on the COCO dataset using the same hyper-parameters and scripts which can be found in our references folder.", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "The High Resolution detector was trained with images of 800-1333px, while the mobile-friendly Low Resolution detector was trained with images of 320-640px. The reason why we provide two separate sets of pre-trained weights is because training a detector directly on the smaller images leads to a 5 mAP increase in precision comparing to passing small images to the pre-trained high-res model. Both backbones were initialized with weights fitted on ImageNet and the 3 last stages of their weights where fined-tuned during the training process.", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "An additional speed optimization can be applied on the mobile-friendly model by tuning the RPN NMS thresholds. By sacrificing only 0.2 mAP of precision we were able to improve the CPU speed of the model by roughly 45%. The details of the optimization can be seen below:\n\n\n\nTuning Status\nmAP\nInference on CPU (sec)\n\n\n\n\nBefore\n23.0\n0.2904\n\n\nAfter\n22.8\n0.1679\n\n\n\nBelow we provide some examples of visualizing the predictions of the Faster R-CNN MobileNetV3-Large FPN model:\n\n\n\nSemantic Segmentation", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "\nSemantic Segmentation\nIn this section we will start by providing some benchmarks of the released pre-trained models. Then we will discuss how a MobileNetV3-Large backbone was combined with segmentation heads such as LR-ASPP, DeepLabV3 and the FCN to conduct Semantic Segmentation. We will also explain how the network was trained and propose a few optional optimization techniques for speed critical applications.\nBenchmarks\nThis is how to initialize the pre-trained models:\nlraspp = torchvision.models.segmentation.lraspp_mobilenet_v3_large(pretrained=True) \ndeeplabv3 = torchvision.models.segmentation.deeplabv3_mobilenet_v3_large(pretrained=True)\n", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "```\nBelow are the detailed benchmarks between new and selected existing models. As we can see, the DeepLabV3 with a MobileNetV3-Large backbone is a viable replacement of FCN with ResNet50 for the majority of applications as it achieves similar accuracy with a 8.5x speed-up. We also observe that the LR-ASPP network supersedes the equivalent FCN in all metrics:\n\n\n\nModel\nmIoU\nGlobal Pixel Acc\nInference on CPU (sec)\n# Params (M)\n\n\n\n\nLR-ASPP MobileNetV3-Large\n57.9\n91.2\n0.3278\n3.22\n\n\nDeepLabV3 MobileNetV3-Large\n60.3\n91.2\n0.5869\n11.03\n\n\nFCN MobileNetV3-Large (not released)\n57.8\n90.9\n0.3702\n5.05\n\n\nDeepLabV3 ResNet50\n66.4\n92.4\n6.3531\n39.64\n\n\n", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "| FCN ResNet50 | 60.5 | 91.4 | 5.0146 | 32.96 |\nImplementation details\nIn this section we will discuss important implementation details of tested segmentation heads. Note that all models described in this section use a dilated MobileNetV3-Large backbone.\nLR-ASPP\nThe LR-ASPP is the Lite variant of the Reduced Atrous Spatial Pyramid Pooling model proposed by the authors of the MobileNetV3 paper. 
Unlike the other segmentation models in TorchVision, it does not make use of an auxiliary loss. Instead it uses low and high-level features with output strides of 8 and 16 respectively.", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "Unlike the paper where a 49x49 AveragePooling layer with variable strides is used, our implementation uses an AdaptiveAvgPool2d layer to process the global features. This is because the authors of the paper tailored the head to the Cityscapes dataset while our focus is to provide a general purpose implementation that can work on multiple datasets. Finally our implementation always has a bilinear interpolation before returning the output to ensure that the sizes of the input and output images match exactly.\nDeepLabV3 & FCN", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "DeepLabV3 & FCN\nThe combination of MobileNetV3 with DeepLabV3 and FCN follows closely the ones of other models and the stage estimation for these methods is identical to LR-ASPP. The only notable difference is that instead of using high and low level features, we attach the normal loss to the feature map with output stride 16 and an auxiliary loss on the feature map with output stride 8.\nFinally we should note that the FCN version of the model was not released because it was completely superseded by the LR-ASPP both in terms of speed and accuracy. The pre-trained weights are still available and can be used with minimal changes to the code.\nTraining & Tuning process", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "Training & Tuning process\nWe currently offer two MobileNetV3 pre-trained models capable of doing semantic segmentation: the LR-ASPP and the DeepLabV3. The backbones of the models were initialized with ImageNet weights and trained end-to-end. Both architectures were trained on the COCO dataset using the same scripts with similar hyper-parameters. Their details can be found in our references folder.", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "Normally, during inference the images are resized to 520 pixels. An optional speed optimization is to construct a Low Res configuration of the model by using the High-Res pre-trained weights and reducing the inference resizing to 320 pixels. This will improve the CPU execution times by roughly 60% while sacrificing a couple of mIoU points. The detailed numbers of this optimization can be found on the table below:\n\n\n\nLow-Res Configuration\nmIoU Difference\nSpeed Improvement\nmIoU\nGlobal Pixel Acc\nInference on CPU (sec)\n\n\n\n\nLR-ASPP MobileNetV3-Large\n-2.1\n65.26%\n55.8\n90.3\n0.1139\n\n\n", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "| DeepLabV3 MobileNetV3-Large | -3.8 | 63.86% | 56.5 | 90.3 | 0.2121 |\n| FCN MobileNetV3-Large (not released) | -3.0 | 57.57% | 54.8 | 90.1 | 0.1571 |\nHere are some examples of visualizing the predictions of the LR-ASPP MobileNetV3-Large model:\n\n\n\nWe hope that you found this article interesting. We are looking forward to your feedback to see if this is the type of content you would like us to publish more often. 
If the community finds that such posts are useful, we will be happy to publish more articles that cover the implementation details of newly introduced Machine Learning models.", "source": "https://pytorch.org/blog/torchvision-mobilenet-v3-implementation/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"PyTorch strengthens its governance by joining the Linux Foundation\"\nauthor: Soumith Chintala\nfeatured-img: \"/assets/images/pytorch-foundation-blog-image.jpg\"\n", "source": "https://pytorch.org/blog/PyTorchfoundation/", "category": "pytorch blogs"} {"text": "Today, I am proud to announce that PyTorch is moving to the Linux Foundation (LF) as a top-level project under the name PyTorch Foundation. The core mission of the Linux Foundation is the collaborative development of open source software. With a governing board of leaders from AMD, Amazon Web Services (AWS), Google Cloud, Meta, Microsoft Azure and NVIDIA, this model aligns with where PyTorch stands today and what it needs to travel forward. The creation of the PyTorch Foundation will ensure business decisions are being made in a transparent and open manner by a diverse group of members for years to come. The technical decisions remain in control of individual maintainers. I\u2019m excited that the Linux Foundation will be our new home as they have notable experience supporting large open-source projects like ours such as Kubernetes and NodeJS. At this pivotal moment, I want to take a look back at how we started, share why we are moving, and what\u2019s ahead.", "source": "https://pytorch.org/blog/PyTorchfoundation/", "category": "pytorch blogs"} {"text": "This January, PyTorch celebrated its 5 year anniversary! I reflected on what it meant to me in this tweet thread, and this conversation with my colleagues Mike Schroepfer, Lin Qiao, and Yann LeCun. When we started PyTorch development in 2016, it was a collective effort by a band of people from the [Lua]Torch community with a big chunk of people and funding from Meta and individuals contributing from NVIDIA, Twitter and other entities.", "source": "https://pytorch.org/blog/PyTorchfoundation/", "category": "pytorch blogs"} {"text": "Since 2017, PyTorch has grown far beyond our initial vision. With over 2,400 contributors who have built nearly 154,000 projects using PyTorch as a foundation, PyTorch has become one of the primary platforms for AI research, as well as commercial production use. We\u2019ve seen its impact across industry and academia, from large companies to numerous university courses at Stanford, NYU, EPFL, Oxford, and other academic institutions. As a maintainer of PyTorch, the journey has been extremely fulfilling, with the impact of the project seen in various fields from self-driving cars to healthcare to aerospace.", "source": "https://pytorch.org/blog/PyTorchfoundation/", "category": "pytorch blogs"} {"text": "As PyTorch grew, many companies have made foundational investments around it. While Meta remains the largest contributor to PyTorch, companies such as AMD, Amazon Web Services (AWS), Google Cloud, HuggingFace, Lightning AI, Microsoft Azure, Nvidia, and many others have made significant investments, including both technical contributions and community building efforts. They\u2019ve established teams around PyTorch or filled significant voids within the PyTorch community and sent countless contributions to the PyTorch core and to the ecosystem around it \u2014 PyTorch is an important part of their future. 
With PyTorch continuing to grow as a multi-stakeholder project, it\u2019s time to move to a broader open-source foundation.", "source": "https://pytorch.org/blog/PyTorchfoundation/", "category": "pytorch blogs"} {"text": "The business governance of PyTorch was fairly unstructured for quite some time since launch \u2013 we operated like a scrappy startup. Team members at Meta spent the time and energy to structure this properly and organize PyTorch into an organizationally more healthy entity. Meta helped PyTorch with introducing many structures, such as Contributor License Agreements, Branding Guidelines, and Trademark registration. Keeping PyTorch\u2019s organizational health up to check is essential and beneficial for the community. The next stage of our organizational progress is to support the interests of multiple stakeholders, hence moving to a foundation is good. We chose the Linux Foundation as it has vast organization experience hosting large multi-stakeholder open-source projects with the right balance of organizational structure and finding specific solutions for these projects.", "source": "https://pytorch.org/blog/PyTorchfoundation/", "category": "pytorch blogs"} {"text": "Simultaneously, the technical governance of PyTorch has been a loosely structured community model of open-source development \u2014 A set of people maintaining PyTorch by area with their responsibility often tied to their individual identity rather than their employment. While we kept a codified list at the PyTorch - Maintainers page, the technical governance was not formalized nor codified. As PyTorch scales as a community, the next step is to structure and codify. The PyTorch Technical Governance now supports a hierarchical maintainer structure and clear outlining of processes around day to day work and escalations. This doesn\u2019t change how we run things, but it does add discipline and openness that at our scale feels essential and timely.", "source": "https://pytorch.org/blog/PyTorchfoundation/", "category": "pytorch blogs"} {"text": "It\u2019s been an exciting journey since 2016. I am grateful for the experiences and people I\u2019ve met along the way. PyTorch started with a small group of contributors which have grown and diversified over the years, all bringing in new ideas and innovations that would not have been possible without our community. We want to continue the open-source spirit \u2013 for the community and by the community. Thank you to our contributors, maintainers, users, supporters and new foundation members. We look forward to the next chapter of PyTorch with the PyTorch Foundation.", "source": "https://pytorch.org/blog/PyTorchfoundation/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Fast Beam Search Decoding in PyTorch with TorchAudio and Flashlight Text\"\nauthor: Caroline Chen, Jacob Kahn (@jacob_d_kahn)\nfeatured-img: \"/assets/images/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text-6.png\"\n\nBeam search decoding with industry-leading speed from Flashlight Text (part of the Flashlight ML framework) is now available with official support in TorchAudio, bringing high-performance beam search and text utilities for speech and text applications built on top of PyTorch. 
The current integration supports CTC-style decoding, but it can be used for any modeling setting that outputs token-level probability distributions over time steps.\nA brief beam search refresher", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "A brief beam search refresher\nIn speech and language settings, beam search is an efficient, greedy algorithm that can convert sequences of continuous values (i.e. probabilities or scores) into graphs or sequences (i.e. tokens, word-pieces, words) using optional constraints on valid sequences (i.e. a lexicon), optional external scoring (i.e. an LM which scores valid sequences), and other score adjustments for particular sequences.\nIn the example that follows, we'll consider \u2014 a token set of {\u03f5, a, b}, where \u03f5 is a special token that we can imagine denotes a space between words or a pause in speech. Graphics here and below are taken from Awni Hannun's excellent distill.pub writeup on CTC and beam search.\n\n\n", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "\nWith a greedy-like approach, beam search considers the next viable token given an existing sequence of tokens \u2014 in the example above, a, b, b is a valid sequence, but a, b, a is not. We rank each possible next token at each step of the beam search according to a scoring function. Scoring functions (s) typically looks something like:\n\n\n", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "\nWhere \u0177 is a potential path/sequence of tokens, x is the input (P(\u0177|x) represents the model's predictions over time), and \ud835\udefc is a weight on the language model probability (P(y) the probability of the sequence under the language model). Some scoring functions add \ud835\udf37 which adjusts a score based on the length of the predicted sequence |\u0177|. This particular scoring function is used in FAIR's prior work on end-to-end ASR, and there are many variations on scoring functions which can vary across application areas.", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "Given a particular sequence, to assess the next viable token in that sequence (perhaps constrained by a set of allowed words or sequences, such as a lexicon of words), the beam search algorithm scores the sequence with each candidate token added, and sorts token candidates based on those scores. For efficiency and since the number of paths is exponential in the token set size, the top-k highest-scoring candidates are kept \u2014 k represents the beam size.\n\n\n\nThere are many other nuances with how beam search can progress: similar hypothesis sequences can be \u201cmerged\u201d, for instance.\n", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "\nThe scoring function can be further augmented to up/down-weight token insertion or long or short words. Scoring with stronger external language models, while incurring computational cost, can also significantly improve performance; this is frequently referred to as LM fusion. 
There are many other knobs to tune for decoding \u2014 these are documented in TorchAudio\u2019s documentation and explored further in TorchAudio\u2019s ASR Inference tutorial. Since decoding is quite efficient, parameters can be easily swept and tuned.", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "Beam search has been used in ASR extensively over the years in far too many works to cite, and in strong, recent results and systems including wav2vec 2.0 and NVIDIA's NeMo.\nWhy beam search?\nBeam search remains a fast competitor to heavier-weight decoding approaches such as RNN-Transducer that Google has invested in putting on-device and has shown strong results with on common benchmarks. Autoregressive text models at scale can benefit from beam search as well. Among other things, beam search gives:", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "\nA flexible performance/latency tradeoff \u2014 by adjusting beam size and the external LM, users can sacrifice latency for accuracy or pay for more accurate results with a small latency cost. Decoding with no external LM can improve results at very little performance cost.\nPortability without retraining \u2014 existing neural models can benefit from multiple decoding setups and plug-and-play with external LMs without training or fine-tuning.\nA compelling complexity/accuracy tradeoff \u2014 adding beam search to an existing modeling pipeline incurs little additional complexity and can improve performance.\n\nPerformance Benchmarks", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "Performance Benchmarks\nToday's most commonly-used beam search decoding libraries today that support external language model integration include Kensho's pyctcdecode, NVIDIA's NeMo toolkit. We benchmark the TorchAudio + Flashlight decoder against them with a wav2vec 2.0 base model trained on 100 hours of audio evaluated on LibriSpeech dev-other with the official KenLM 3-gram LM. Benchmarks were run on Intel E5-2698 CPUs on a single thread. All computation was in-memory \u2014 KenLM memory mapping was disabled as it wasn't widely supported.", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "When benchmarking, we measure the time-to-WER (word error rate) \u2014 because of subtle differences in the implementation of decoding algorithms and the complex relationships between parameters and decoding speed, some hyperparameters differed across runs. To fairly assess performance, we first sweep for parameters that achieve a baseline WER, minimizing beam size if possible.\n\n\n\n\nDecoding performance on Librispeech dev-other of a pretrained wav2vec 2.0 model. TorchAudio + Flashlight decoding outperforms by an order of magnitude at low WERs.\n\n\n\n\n", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "\n\nTime-to-WER results, deferring to smaller beam size, across decoders. 
The TorchAudio + Flashlight decoder scales far better with larger beam sizes and at lower WERs.\n\nTorchAudio API and Usage\nTorchAudio provides a Python API for CTC beam search decoding, with support for the following:\n\nlexicon and lexicon-free decoding\nKenLM n-gram language model integration\ncharacter and word-piece decoding\nsample pretrained LibriSpeech KenLM models and corresponding lexicon and token files\nvarious customizable beam search parameters (beam size, pruning threshold, LM weight...)\n\nTo set up the decoder, use the factory function torchaudio.models.decoder.ctc_decoder\nfrom torchaudio.models.decoder import ctc_decoder, download_pretrained_files\nfiles = download_pretrained_files(\"librispeech-4-gram\")\ndecoder = ctc_decoder(\n lexicon=files.lexicon,\n tokens=files.tokens,\n lm=files.lm,\n nbest=1,\n ... additional optional customizable args ...\n)\n", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": ")\n\nGiven emissions of shape *(batch, time, num_tokens)*, the decoder will compute and return a List of batch Lists, each consisting of the nbest hypotheses corresponding to the emissions. Each hypothesis can be further broken down into tokens, words (if a lexicon is provided), score, and timesteps components.\n\n```python\nemissions = acoustic_model(waveforms) # (B, T, N)\nbatch_hypotheses = decoder(emissions) # List[List[CTCHypothesis]]\n\n# transcript for a lexicon decoder\ntranscripts = [\" \".join(hypo[0].words) for hypo in batch_hypotheses]\n\n# transcript for a lexicon free decoder, splitting by sil token\nbatch_tokens = [decoder.idxs_to_tokens(hypo[0].tokens) for hypo in batch_hypotheses]\ntranscripts = [\"\".join(tokens) for tokens in batch_tokens]\n", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "```\nPlease refer to the documentation for more API details, and the tutorial (ASR Inference Decoding) or sample inference script for more usage examples.\nUpcoming Improvements\nFull NNLM support \u2014 decoding with large neural language models (e.g. transformers) remains somewhat unexplored at scale. Already supported in Flashlight, we plan to add support in TorchAudio, allowing users to use custom decoder-compatible LMs. Custom word level language models are already available in the nightly TorchAudio build, and is slated to be released in TorchAudio 0.13.", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "Autoregressive/seq2seq decoding \u2014 Flashlight Text also supports sequence-to-sequence (seq2seq) decoding for autoregressive models, which we hope to add bindings for and add to TorchAudio and TorchText with efficient GPU implementations as well.\nBetter build support \u2014 to benefit from improvements in Flashlight Text, TorchAudio will directly submodule Flashlight Text to make upstreaming modifications and improvements easier. 
This is already in effect in the nightly TorchAudio build, and is slated to be released in TorchAudio 0.13.\nCitation\nTo cite the decoder, please use the following:\n```python\n@inproceedings{kahn2022flashlight,\n title={Flashlight: Enabling innovation in tools for machine learning},", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "author={Kahn, Jacob D and Pratap, Vineel and Likhomanenko, Tatiana and Xu, Qiantong and Hannun, Awni and Cai, Jeff and Tomasello, Paden and Lee, Ann and Grave, Edouard and Avidov, Gilad and others},\n booktitle={International Conference on Machine Learning},\n pages={10557--10574},\n year={2022},\n organization={PMLR}\n}\n```python\n@inproceedings{yang2022torchaudio,\n title={Torchaudio: Building blocks for audio and speech processing},\n author={Yang, Yao-Yuan and Hira, Moto and Ni, Zhaoheng and Astafurov, Artyom and Chen, Caroline and Puhrsch, Christian and Pollack, David and Genzel, Dmitriy and Greenberg, Donny and Yang, Edward Z and others},\n booktitle={ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},\n pages={6982--6986},\n year={2022},\n organization={IEEE}\n}\n", "source": "https://pytorch.org/blog/fast-beam-search-decoding-in-pytorch-with-torchaudio-and-flashlight-text/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'New PyTorch Library Releases in PyTorch 1.9, including TorchVision, TorchAudio, and more'\nauthor: Team PyTorch \n\nToday, we are announcing updates to a number of PyTorch libraries, alongside the PyTorch 1.9 release. The updates include new releases for the domain libraries including TorchVision, TorchText and TorchAudio. These releases, along with the PyTorch 1.9 release, include a number of new features and improvements that will provide a broad set of updates for the PyTorch community.\nSome highlights include:\n\nTorchVision - Added new SSD and SSDLite models, quantized kernels for object detection, GPU Jpeg decoding, and iOS support. See release notes here.\n", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "\nTorchAudio - Added wav2vec 2.0 model deployable in non-Python environments (including C++, Android, and iOS). Many performance improvements in lfilter, spectral operations, resampling. Added options for quality control in sampling (i.e. Kaiser window support). Initiated the migration of complex tensors operations. Improved autograd support. See release notes here.\nTorchText - Added a new high-performance Vocab module that provides common functional APIs for NLP workflows. See release notes here.\n\nWe\u2019d like to thank the community for their support and work on this latest release.\nFeatures in PyTorch releases are classified as Stable, Beta, and Prototype. You can learn more about the definitions in this blog post. \nTorchVision 0.10\n(Stable) Quantized kernels for object detection", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "The forward pass of the nms and roi_align operators now support tensors with a quantized dtype, which can help lower the memory footprint of object detection models, particularly on mobile environments. For more details, refer to the documentation. 
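A rough sketch of what this looks like for nms (the quantization parameters below are illustrative assumptions; see the torchvision documentation for the exact supported dtypes):

```python
import torch
from torchvision.ops import nms

# Well-formed boxes in (x1, y1, x2, y2) format plus confidence scores
xy = torch.rand(50, 2) * 100
wh = torch.rand(50, 2) * 50 + 1
boxes = torch.cat([xy, xy + wh], dim=1)
scores = torch.rand(50)

# Quantize the inputs and run nms directly on the quantized tensors
q_boxes = torch.quantize_per_tensor(boxes, scale=1.0, zero_point=0, dtype=torch.quint8)
q_scores = torch.quantize_per_tensor(scores, scale=1.0 / 256, zero_point=0, dtype=torch.quint8)
keep = nms(q_boxes, q_scores, iou_threshold=0.5)  # indices of the kept boxes
```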
\n(Stable) Speed optimizations for Tensor transforms\nThe resize and flip transforms have been optimized and its runtime improved by up to 5x on the CPU. \n(Stable) Documentation improvements\nSignificant improvements were made to the documentation. In particular, a new gallery of examples is available. These examples visually illustrate how each transform acts on an image, and also properly documents and illustrates the output of the segmentation models.", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "The example gallery will be extended in the future to provide more comprehensive examples and serve as a reference for common torchvision tasks. For more details, refer to the documentation.\n(Beta) New models for detection\nSSD and SSDlite are two popular object detection architectures that are efficient in terms of speed and provide good results for low resolution pictures. In this release, we provide implementations for the original SSD model with VGG16 backbone and for its mobile-friendly variant SSDlite with MobileNetV3-Large backbone.\nThe models were pre-trained on COCO train2017 and can be used as follows:\n```python\nimport torch\nimport torchvision\nOriginal SSD variant\nx = [torch.rand(3, 300, 300), torch.rand(3, 500, 400)]\nm_detector = torchvision.models.detection.ssd300_vgg16(pretrained=True)\nm_detector.eval()\npredictions = m_detector(x)", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "m_detector.eval()\npredictions = m_detector(x)\nMobile-friendly SSDlite variant\nx = [torch.rand(3, 320, 320), torch.rand(3, 500, 400)]\nm_detector = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)\nm_detector.eval()\npredictions = m_detector(x)\n```\nThe following accuracies can be obtained on COCO val2017 (full results available in #3403 and #3757):\n{:.table.table-striped.table-bordered}\n| Model | mAP | mAP@50 | mAP@75 |\n| ------------- | ------------- | ------------- | ------------- |\n| SSD300 VGG16 | 25.1 | 41.5 | 26.2 | \n| SSDlite320 MobileNetV3-Large | 21.3 | 34.3 | 22.1 |\nFor more details, refer to the documentation.\n(Beta) JPEG decoding on the GPU", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "(Beta) JPEG decoding on the GPU\nDecoding jpegs is now possible on GPUs with the use of nvjpeg, which should be readily available in your CUDA setup. The decoding time of a single image should be about 2 to 3 times faster than with libjpeg on CPU. While the resulting tensor will be stored on the GPU device, the input raw tensor still needs to reside on the host (CPU), because the first stages of the decoding process take place on the host:\nfrom torchvision.io.image import read_file, decode_jpeg\ndata = read_file('path_to_image.jpg') # raw data is on CPU\nimg = decode_jpeg(data, device='cuda') # decoded image in on GPU\n\nFor more details, see the documentation.\n(Beta) iOS support", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "(Beta) iOS support\nTorchVision 0.10 now provides pre-compiled iOS binaries for its C++ operators, which means you can run Faster R-CNN and Mask R-CNN on iOS. An example app on how to build a program leveraging those ops can be found here. \nTorchAudio 0.9.0\n(Stable) Complex Tensor Migration\nTorchAudio has functions that handle complex-valued tensors. 
These functions follow a convention to use an extra dimension to represent real and imaginary parts. In PyTorch 1.6, the native complex type was introduced. As its API is getting stable, torchaudio has started to migrate to the native complex type.", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "In this release, we added support for native complex tensors, and you can opt-in to use them. Using the native complex types, we have verified that affected functions continue to support autograd and TorchScript, moreover, switching to native complex types improves their performance. For more details, refer to pytorch/audio#1337. \n(Stable) Filtering Improvement", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "(Stable) Filtering Improvement\nIn release 0.8, we added the C++ implementation of the core part of lfilter for CPU, which improved the performance. In this release, we optimized some internal operations of the CPU implementation for further performance improvement. We also added autograd support to both CPU and GPU. Now lfilter and all the biquad filters (biquad, band_biquad, bass_biquad, treble_biquad, allpass_biquad, lowpass_biquad, highpass_biquad, bandpass_biquad, equalizer_biquad and bandrefect_biquad) benefit from the performance improvement and support autograd. We also moved the implementation of overdrive to C++ for performance improvement. For more details, refer to the documentation.\n(Stable) Improved Autograd Support", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "(Stable) Improved Autograd Support\nAlong with the work of Complex Tensor Migration and Filtering Improvement, we also added autograd tests to transforms. lfilter, biquad and its variants, and most transforms are now guaranteed to support autograd. For more details, refer to the release note.\n(Stable) Improved Windows Support\nTorchaudio implements some operations in C++ for reasons such as performance and integration with third-party libraries. These C++ components were only available on Linux and macOS. In this release, we have added support to Windows. With this, the efficient filtering implementation mentioned above is also available on Windows.\nHowever, please note that not all the C++ components are available for Windows. \u201csox_io\u201d backend and torchaudio.functional.compute_kaldi_pitch are not supported. \n(Stable) I/O Functions Migration", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "(Stable) I/O Functions Migration\nSince the 0.6 release, we have continuously improved I/O functionality. Specifically, in 0.8 we changed the default backend from \u201csox\u201d to \u201csox_io\u201d and applied the same switch to API of the \u201csoundfile\u201d backend. The 0.9 release concludes this migration by removing the deprecated backends. For more details, please refer to #903. \n(Beta) Wav2Vec2.0 Model", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "(Beta) Wav2Vec2.0 Model\nWe have added the model architectures from Wav2Vec2.0. You can import fine-tuned models parameters published on fairseq and Hugging Face Hub. Our model definition supports TorchScript, and it is possible to deploy the model to non-Python environments, such as C++, Android and iOS. 
\nThe following code snippet illustrates such a use case. Please check out our C++ example directory for the complete example. Currently, it is designed for running inference. If you would like more support for training, please file a feature request.\n```python\n# Import fine-tuned model from Hugging Face Hub\nimport transformers", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "import transformers\nfrom transformers import Wav2Vec2ForCTC\nfrom torchaudio.models.wav2vec2.utils import import_huggingface_model\noriginal = Wav2Vec2ForCTC.from_pretrained(\"facebook/wav2vec2-base-960h\")\nimported = import_huggingface_model(original)\n```\n\n```python\n# Import fine-tuned model from fairseq\nimport fairseq\nfrom torchaudio.models.wav2vec2.utils import import_fairseq_model\n\noriginal, _, _ = fairseq.checkpoint_utils.load_model_ensemble_and_task(\n    [\"wav2vec_small_960h.pt\"], arg_overrides={'data': \"\"})\nimported = import_fairseq_model(original[0].w2v_encoder)\n```\n\n```python\n# Build an uninitialized model and load the state dict\nimport torch\nfrom torch.utils.mobile_optimizer import optimize_for_mobile\nfrom torchaudio.models import wav2vec2_base\nmodel = wav2vec2_base(num_out=32)\nmodel.load_state_dict(imported.state_dict())\n# Quantize / script / optimize for mobile\nquantized_model = torch.quantization.quantize_dynamic(\n    model, qconfig_spec={torch.nn.Linear}, dtype=torch.qint8)\nscripted_model = torch.jit.script(quantized_model)\noptimized_model = optimize_for_mobile(scripted_model)", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "optimized_model.save(\"model_for_deployment.pt\")\n```\nFor more details, see the documentation. \n(Beta) Resampling Improvement\nIn release 0.8, we vectorized the operation in torchaudio.compliance.kaldi.resample_waveform, which improved the performance of resample_waveform and torchaudio.transforms.Resample. In this release, we have further revised the way the resampling algorithm is implemented. \nWe have:\n* Added Kaiser Window support for a wider range of resampling quality.\n* Added a rolloff parameter for anti-aliasing control.\n* Added a mechanism to precompute the kernel and cache it in torchaudio.transforms.Resample for even faster operation.\n* Moved the implementation from torchaudio.compliance.kaldi.resample_waveform to torchaudio.functional.resample and deprecated torchaudio.compliance.kaldi.resample_waveform.", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "For more details, see the documentation. \n(Prototype) RNN Transducer Loss\nThe RNN transducer loss is used to train RNN transducer models, a popular architecture for speech recognition tasks. The prototype loss in torchaudio currently supports autograd, TorchScript, float16 and float32, and can be run on both CPU and CUDA. For more details, please refer to the documentation.\nTorchText 0.10.0\n(Beta) New Vocab Module", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "TorchText 0.10.0\n(Beta) New Vocab Module\nIn this release, we introduce a new Vocab module that replaces the current Vocab class. The new Vocab provides common functional APIs for NLP workflows. This module is backed by an efficient C++ implementation that reduces batch look-up time by up to ~85% (refer to the summary of #1248 and #1290 for further information on benchmarks), and provides support for TorchScript. 
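Since the new Vocab module is TorchScript-compatible, it can be scripted directly once built. Below is a minimal sketch of that workflow; the tiny two-token vocabulary is made up purely for illustration and is not part of the release notes:

```python
import torch
from collections import OrderedDict
from torchtext.vocab import vocab

# build a tiny Vocab from an ordered dict of token -> frequency
v = vocab(OrderedDict([("hello", 2), ("world", 1)]))
v.set_default_index(0)               # index returned for unknown tokens
jit_v = torch.jit.script(v)          # scripting works because the module is C++-backed
print(jit_v(["hello", "world", "unseen"]))  # e.g. [0, 1, 0]
```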
We provide accompanying factory functions that can be used to build the Vocab object either through a Python ordered dictionary or an iterator that yields lists of tokens.\n```python\n# creating Vocab from a text file\nimport io\nfrom torchtext.vocab import build_vocab_from_iterator\n# generator that yields lists of tokens\ndef yield_tokens(file_path):\n    with io.open(file_path, encoding='utf-8') as f:\n        for line in f:\n            yield line.strip().split()\n# get Vocab object", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "# get Vocab object\nvocab_obj = build_vocab_from_iterator(yield_tokens(file_path), specials=[\"<unk>\"])\n# creating Vocab through an ordered dict\nfrom torchtext.vocab import vocab\nfrom collections import Counter, OrderedDict\ncounter = Counter([\"a\", \"a\", \"b\", \"b\", \"b\"])\nsorted_by_freq_tuples = sorted(counter.items(), key=lambda x: x[1], reverse=True)\nordered_dict = OrderedDict(sorted_by_freq_tuples)\nvocab_obj = vocab(ordered_dict)\n# common API usage\n# look-up index\nvocab_obj[\"a\"]\n# batch look-up indices\nvocab_obj.lookup_indices([\"a\",\"b\"])\n# supports the forward API of PyTorch nn Modules\nvocab_obj([\"a\",\"b\"])\n# batch look-up tokens\nvocab_obj.lookup_tokens([0,1])\n# set default index to return when a token is not found\nvocab_obj.set_default_index(0)\nvocab_obj[\"out_of_vocabulary\"] # prints 0\n```\nFor more details, refer to the documentation.", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "Thanks for reading. If you\u2019re interested in these updates and want to join the PyTorch community, we encourage you to join the discussion forums and open GitHub issues. To get the latest news from PyTorch, follow us on Facebook, Twitter, Medium, YouTube or LinkedIn. \nCheers!\n-Team PyTorch", "source": "https://pytorch.org/blog/pytorch-1.9-new-library-releases/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Everything You Need To Know About Torchvision\u2019s SSDlite Implementation'\nauthor: Vasilis Vryniotis\nfeatured-img: 'assets/images/mAP-of-SSD320-MobileNetV3-Large.png'\n\nIn the previous article, we\u2019ve discussed how the SSD algorithm works, covered its implementation details and presented its training process. If you have not read the previous blog post, I encourage you to check it out before continuing.\nIn this part 2 of the series, we will focus on the mobile-friendly variant of SSD called SSDlite. Our plan is to first go through the main components of the algorithm, highlighting the parts that differ from the original SSD, then discuss how the released model was trained, and finally provide detailed benchmarks for all the new Object Detection models that we explored.\nThe SSDlite Network Architecture", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "The SSDlite Network Architecture\nSSDlite is an adaptation of SSD that was first briefly introduced in the MobileNetV2 paper and later reused in the MobileNetV3 paper. Because the main focus of the two papers was to introduce novel CNN architectures, most of the implementation details of SSDlite were not clarified. Our code follows all the details presented in the two papers and, where necessary, fills the gaps from the official implementation. 
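Before walking through the individual components, the following toy sketch may help build intuition for the separable-convolution prediction blocks discussed in the next section. The class name, channel sizes, and the normalization/activation placement are our own illustration, not the torchvision implementation:

```python
import torch
from torch import nn

class SeparablePredictor(nn.Sequential):
    """A 3x3 depthwise convolution followed by a 1x1 projection (toy example)."""
    def __init__(self, in_channels, out_channels):
        super().__init__(
            nn.Conv2d(in_channels, in_channels, kernel_size=3, padding=1,
                      groups=in_channels, bias=False),            # depthwise
            nn.BatchNorm2d(in_channels),
            nn.ReLU6(inplace=True),
            nn.Conv2d(in_channels, out_channels, kernel_size=1),  # pointwise projection
        )

feature_map = torch.randn(1, 672, 10, 10)              # made-up feature map
print(SeparablePredictor(672, 24)(feature_map).shape)  # torch.Size([1, 24, 10, 10])
```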
\nAs noted before, the SSD is a family of models because one can configure it with different backbones (such as VGG, MobileNetV3 etc) and different Heads (such as using regular convolutions, separable convolutions etc). Thus many of the SSD components remain the same in SSDlite. Below we discuss only those that are different\nClassification and Regression Heads", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "Classification and Regression Heads\nFollowing the Section 6.2 of the MobileNetV2 paper, SSDlite replaces the regular convolutions used on the original Heads with separable convolutions. Consequently, our implementation introduces new heads that use 3x3 Depthwise convolutions and 1x1 projections. Since all other components of the SSD method remain the same, to create an SSDlite model our implementation initializes the SSDlite head and passes it directly to the SSD constructor.\nBackbone Feature Extractor", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "Our implementation introduces a new class for building MobileNet feature extractors. Following the Section 6.3 of the MobileNetV3 paper, the backbone returns the output of the expansion layer of the Inverted Bottleneck block which has an output stride of 16 and the output of the layer just before the pooling which has an output stride of 32. Moreover, all extra blocks of the backbone are replaced with lightweight equivalents which use a 1x1 compression, a separable 3x3 convolution with stride 2 and a 1x1 expansion. Finally to ensure that the heads have enough prediction power even when small width multipliers are used, the minimum depth size of all convolutions is controlled by the min_depth hyperparameter.", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "The SSDlite320 MobileNetV3-Large model\n\n\n\nThis section discusses the configuration of the provided SSDlite pre-trained model along with the training processes followed to replicate the paper results as closely as possible. \nTraining process\nAll of the hyperparameters and scripts used to train the model on the COCO dataset can be found in our references folder. Here we discuss the most notable details of the training process.\nTuned Hyperparameters", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "Tuned Hyperparameters\nThough the papers don\u2019t provide any information on the hyperparameters used for training the models (such as regularization, learning rate and the batch size), the parameters listed in the configuration files on the official repo were good starting points and using cross validation we adjusted them to their optimal values. All the above gave us a significant boost over the baseline SSD configuration.\nData Augmentation", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "Data Augmentation\nKey important difference of SSDlite comparing to SSD is that the backbone of the first has only a fraction of the weights of the latter. This is why in SSDlite, the Data Augmentation focuses more on making the model robust to objects of variable sizes than trying to avoid overfitting. 
Consequently, SSDlite uses only a subset of the SSD transformations and this way it avoids the over-regularization of the model.\nLR Scheme", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "LR Scheme\nDue to the reliance on Data Augmentation to make the model robust to small and medium sized objects, we found that it is particularly beneficial for the training recipe to use large number of epochs. More specifically by using roughly 3x more epochs than SSD we are able to increase our precision by 4.2mAP points and by using a 6x multiplier we improve by 4.9mAP. Increasing further the epochs seems to yield diminishing returns and makes the training too slow and impractical, nevertheless based on the model configuration it seems that the authors of the paper used an equivalent 16x multiplier. \nWeight Initialization & Input Scaling & ReLU6", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "A set of final optimizations that brought our implementation very close to the official one and helped us bridge the accuracy gap was training the backbone from scratch instead of initializing from ImageNet, adapting our weight initialization scheme, changing our Input Scaling and replacing all standard ReLUs added on the SSDlite heads with ReLU6. Note that since we trained the model from random weights, we additionally applied the speed optimization described on the paper of using a reduced tail on the backbone.", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "Implementation Differences", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "Comparing the above implementation with the one on the official repo, we\u2019ve identified a few differences. Most of them are minor and they are related to how we initialize the weights (for example Normal initialization vs Truncated Normal), how we parameterize the LR Scheduling (for example smaller vs larger warmup rate, shorter vs longer training) etc. The biggest known difference lies in the way we compute the Classification loss. More specifically the implementation of SSDlite with MobileNetV3 backbone on the official repo doesn\u2019t use the SSD\u2019s Multibox loss but instead uses RetinaNet\u2019s focal loss. This is a rather significant deviation from the paper and since TorchVision already offers a full implementation of RetinaNet, we decided to implement SSDlite using the normal Multi-box SSD loss.", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "Break down of key accuracy improvements\nAs discussed in previous articles, reproducing research papers and porting them to code is not a journey of monotonically increasing accuracies, especially in cases where the full training and implementation details are not known. Typically the process involves lots of backtracking as one needs to identify those implementation details and parameters that have significant impact on the accuracy from those that don\u2019t. 
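To experiment with similar recipes yourself, the mechanics are simply a scheduler wrapped around the optimizer. Here is a minimal sketch with placeholder values; the real hyperparameters live in the references folder and are not reproduced here, and the cosine schedule shown is the one we mention switching to further below:

```python
import torch
from torch import optim

params = [torch.nn.Parameter(torch.zeros(1))]  # stand-in for model.parameters()
optimizer = optim.SGD(params, lr=0.02, momentum=0.9, weight_decay=5e-4)  # placeholder values
epochs = 300                                   # the point is simply a long schedule
scheduler = optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=epochs)

for epoch in range(epochs):
    # train_one_epoch(model, optimizer, data_loader)  # hypothetical training step
    optimizer.step()
    scheduler.step()
```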
Below we try to visualize the most important iterations that improved our accuracy from the baseline:\n\n\n\n{:.table.table-striped.table-bordered}\n| Iteration | mAP | \n| ------------- | ------------- |\n| Baseline with \"SSD-style\" Hyperparams | 10.6 | \n| + Tuned Hyperparams | 14.2 | \n| + SSDlite Data Augmentation | 15.2 |\n| + 3x LR Scheme | 19.4 |\n| + 6x LR Scheme | 20.1 |", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "| + 6x LR Scheme | 20.1 | \n| + Weight Initialization & Input Scaling & ReLU6 | 21.3 |", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "The order of optimizations presented above is accurate, though a bit idealized in some cases. For example, though different schedulers were tested during the Hyperparameter tuning phase, none of them provided significant improvements and thus we maintained the MultiStepLR which was used in the baseline. Nevertheless while later experimenting with different LR Schemes, we found it beneficial to switch to CosineAnnealingLR, as it required less configuration. Consequently, we believe that the main takeaway from the above summary should be that even by starting with a correct implementation and a set of optimal hyperparams from a model of the same family, there is always accuracy points to be found by optimizing the training recipe and tuning the implementation. Admittedly the above is a rather extreme case where the accuracy doubled, but still in many cases there is a large number of optimizations that can help us push the accuracy significantly. \nBenchmarks", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "Benchmarks\nHere is how to initialize the two pre-trained models:\nssdlite = torchvision.models.detection.ssdlite320_mobilenet_v3_large(pretrained=True)\nssd = torchvision.models.detection.ssd300_vgg16(pretrained=True)\n\nBelow are the benchmarks between the new and selected previous detection models:\n{:.table.table-striped.table-bordered}\n| Model | mAP | Inference on CPU (sec) | # Params (M) |\n| ------------- | ------------- | ------------- | ------------- |\n| SSDlite320 MobileNetV3-Large | 21.3 | 0.0911 | 3.44 |\n| SSD300 VGG16 | 25.1 | 0.8303 | 35.64 |\n| SSD512 VGG16 (not released) | 28.8| 2.2494 | 37.08 |\n| SSD512 ResNet50 (not released) | 30.2 | 1.1137 | 42.70 |\n| Faster R-CNN MobileNetV3-Large 320 FPN (Low-Res) | 22.8 | 0.1679 | 19.39|\n| Faster R-CNN MobileNetV3-Large FPN (High-Res) | 32.8 | 0.8409 | 19.39 |", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "As we can see, the SSDlite320 MobileNetV3-Large model is by far the fastest and smallest model and thus it\u2019s an excellent candidate for real-world mobile applications. Though its accuracy is lower than the pre-trained low-resolution Faster R-CNN equivalent, the SSDlite framework is adaptable and one can boost its accuracy by introducing heavier heads with more convolutions. \nOn the other hand, the SSD300 VGG16 model is rather slow and less accurate. This is mainly because of its VGG16 backbone. Though extremely important and influential, the VGG architecture is nowadays quite outdated. 
Thus though the specific model has historical and research value and hence it\u2019s included in TorchVision, we recommend to users who want high-resolution detectors for real world applications to either combine SSD with alternative backbones (see this example on how to create one) or use one of the Faster R-CNN pre-trained models.", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "We hope you enjoyed the 2nd and final part of the SSD series. We are looking forward to your feedback.", "source": "https://pytorch.org/blog/torchvision-ssdlite-implementation/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Announcing PyTorch Annual Hackathon 2021'\nauthor: Team PyTorch\nfeatured-img: 'assets/images/social_hackathon21.png'\n\nWe\u2019re excited to announce the PyTorch Annual Hackathon 2021! This year, we\u2019re looking to support the community in creating innovative PyTorch tools, libraries, and applications. 2021 is the third year we\u2019re hosting this Hackathon, and we welcome you to join the PyTorch community and put your machine learning skills into action. Submissions start on September 8 and end on November 3. Good luck to everyone!\n\n\n\nSubmission Categories\nYou can enter your PyTorch projects into three categories:", "source": "https://pytorch.org/blog/pytorch-hackathon-2021/", "category": "pytorch blogs"} {"text": "\n\nPyTorch Responsible AI Development Tools & Libraries - Build an AI development tool or library that helps develop AI models and applications responsibly. These tools, libraries, and apps need to support a researcher or developer to factor in fairness, security, and privacy throughout the entire machine learning development process of data gathering, model training, model validation, inferences, monitoring, and more. \n\n\nWeb and Mobile Applications Powered by PyTorch - Build an application with the web, mobile interface, and/or embedded device powered by PyTorch so the end users can interact with it. The submission must be built on PyTorch or use PyTorch-based libraries such as torchvision, torchtext, and fast.ai.\n\n", "source": "https://pytorch.org/blog/pytorch-hackathon-2021/", "category": "pytorch blogs"} {"text": "\nPyTorch Developer Tools & Libraries - Build a creative, useful, and well-implemented tool or library for improving the productivity and efficiency of PyTorch researchers and developers. The submission must be a machine learning algorithm, model, or application built using PyTorch or PyTorch-based libraries.\n\nPrizes\nSubmissions will be judged on the idea\u2019s quality, originality, implementation, and potential impact.\n\n\nFirst-Place Winners in each category of the Hackathon will receive $5,000 in cash, along with a 30-minute call with the PyTorch development team. \n\n\nSecond-Place Winners will receive $3,000.\n\n\nThird-Place Winners will receive $2,000.\n\n\nAll winners will also receive the opportunity to create blog posts that will be featured throughout PyTorch channels as well as an exclusive Github badge. Honorable Mentions will also be awarded to the following three highest-scoring entries in each category and will receive $1,000 each.\nCloud Computing Credits", "source": "https://pytorch.org/blog/pytorch-hackathon-2021/", "category": "pytorch blogs"} {"text": "Cloud Computing Credits\nRequest $100 in credits from Amazon Web Services or Google Cloud for your computing costs. Please allow 3 business days for your request to be reviewed. 
Credits will be provided to verified registrants until the supplies run out. For more information, see https://pytorch2021.devpost.com/details/sponsors. \n2020 Winning Projects\nDeMask won first place in the PyTorch Developer Tools category. Built using Asteroid, a PyTorch-based audio source separation toolkit, DeMask is an end-to-end model for enhancing speech while wearing face masks.\nQ&Aid won first place in the Web/Mobile Applications Powered by PyTorch category. Backed by PyTorch core algorithms and models, Q&Aid is a conceptual health care chatbot aimed at making health care diagnoses and facilitating communication between patients and doctors.", "source": "https://pytorch.org/blog/pytorch-hackathon-2021/", "category": "pytorch blogs"} {"text": "FairTorch won first place in the PyTorch Responsible AI Development Tools category. FairTorch is a PyTorch fairness library that lets developers add constraints to their models to equalize metrics across subgroups by simply adding a few lines of code.\nHow to Join\nIf you\u2019re interested in joining this year\u2019s PyTorch Hackathon, register at http://pytorch2021.devpost.com.", "source": "https://pytorch.org/blog/pytorch-hackathon-2021/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Accelerated Generative Diffusion Models with PyTorch 2\"\nauthor: Grigory Sizov, Michael Gschwind, Hamid Shojanazeri, Driss Guessous, Daniel Haziza, Christian Puhrsch\n\nTL;DR: PyTorch 2.0 nightly offers out-of-the-box performance improvement for Generative Diffusion models by using the new torch.compile() compiler and optimized implementations of Multihead Attention integrated with PyTorch 2.\nIntroduction\nA large part of the recent progress in Generative AI came from denoising diffusion models, which allow producing high quality images and videos from text prompts. This family includes Imagen, DALLE, Latent Diffusion, and others. However, all models in this family share a common drawback: generation is rather slow, due to the iterative nature of the sampling process by which the images are produced. This makes it important to optimize the code running inside the sampling loop.", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "We took an open source implementation of a popular text-to-image diffusion model as a starting point and accelerated its generation using two optimizations available in PyTorch 2: compilation and fast attention implementation. Together with a few minor memory processing improvements in the code these optimizations give up to 49% inference speedup relative to the original implementation without xFormers, and 39% inference speedup relative to using the original code with xFormers (excluding the compilation time), depending on the GPU architecture and batch size. Importantly, the speedup comes without a need to install xFormers or any other extra dependencies.", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "The table below shows the improvement in runtime between the original implementation with xFormers installed and our optimized version with PyTorch-integrated memory efficient attention (originally developed for and released in the xFormers library) and PyTorch compilation. 
The compilation time is excluded.\nRuntime improvement in % compared to original+xFormers\nSee the absolute runtime numbers in section \u201cBenchmarking setup and results summary\u201d\n\n\n\nGPU\n\nBatch size 1\n\nBatch size 2\n\nBatch size 4\n\n\n\n\nP100 (no compilation)\n\n-3.8\n \n0.44\n \n5.47\n \n\n\nT4\n\n2.12\n \n10.51\n ", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "2.12\n \n10.51\n \n14.2\n \n\n\n\nA10\n\n-2.34\n \n8.99\n \n10.57\n \n\n\nV100\n\n18.63\n \n6.39\n \n10.43\n \n\n\nA100\n\n38.5\n \n20.33\n \n12.17\n \n\n\nOne can notice the following:\n\nThe improvements are significant for powerful GPUs like A100 and V100. For those GPUs the improvement is most pronounced for batch size 1\nFor less powerful GPUs we observe smaller speedups (or in two cases slight regressions). The batch size trend is reversed here: improvement is larger for larger batches\n\nIn the following sections we describe the applied optimizations and provide detailed benchmarking data, comparing the generation time with various optimization features on/off.", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "Specifically, we benchmark 5 configurations and the plots below compare their absolute performance for different GPUs and batch sizes. For definitions of these configurations see section \u201cBenchmarking setup and results\u201d.\n \n \n \nOptimizations", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "Optimizations\nHere we\u2019ll go into more detail about the optimizations introduced into the model code. These optimizations rely on features of PyTorch 2.0 which has been released recently. \nOptimized Attention\nOne part of the code which we optimized is the scaled dot-product attention. Attention is known to be a heavy operation: naive implementation materializes the attention matrix, leading to time and memory complexity quadratic in sequence length. It is common for diffusion models to use attention (CrossAttention) as part of Transformer blocks in multiple parts of the U-Net. Since the U-Net runs at every sampling step, this becomes a critical point to optimize. Instead of custom attention implementation one can use torch.nn.MultiheadAttention, which in PyTorch 2 has optimized attention implementation is integrated into it. This optimization schematically boils down to the following pseudocode:\n```\nclass CrossAttention(nn.Module):\n def init(self, ...):", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "def init(self, ...):\n # Create matrices: Q, K, V, out_proj\n ...\n def forward(self, x, context=None, mask=None):\n # Compute out = SoftMax(Q*K/sqrt(d))V\n # Return out_proj(out)\n \u2026\n\ngets replaced with\n\n\nclass CrossAttention(nn.Module):\n def init(self, ...):\n self.mha = nn.MultiheadAttention(...)\n def forward(self, x, context):\n return self.mha(x, context, context)\n```", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "return self.mha(x, context, context)\n```\nThe optimized implementation of attention was available already in PyTorch 1.13 (see here) and widely adopted (see e.g. HuggingFace transformers library example). 
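As a self-contained illustration of the swap sketched in the pseudocode above, the following version runs as-is; the dimensions are made up and do not correspond to the actual model sizes:

```python
import torch
from torch import nn

class CrossAttention(nn.Module):
    def __init__(self, query_dim=320, context_dim=768, n_heads=8):
        super().__init__()
        # nn.MultiheadAttention owns the Q/K/V and output projections;
        # kdim/vdim let the context use a different width than the queries
        self.mha = nn.MultiheadAttention(embed_dim=query_dim, num_heads=n_heads,
                                         kdim=context_dim, vdim=context_dim,
                                         batch_first=True)

    def forward(self, x, context=None):
        context = x if context is None else context
        out, _ = self.mha(x, context, context, need_weights=False)
        return out

x = torch.randn(2, 64, 320)     # (batch, query tokens, query_dim)
ctx = torch.randn(2, 77, 768)   # (batch, context tokens, context_dim)
print(CrossAttention()(x, ctx).shape)  # torch.Size([2, 64, 320])
```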
In particular, it integrates memory-efficient attention from the xFormers library and flash attention from https://arxiv.org/abs/2205.14135. PyTorch 2.0 expands this to additional attention functions such as cross attention and custom kernels for further acceleration, making it applicable to diffusion models.", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "Flash attention is available on GPUs with compute capability SM 7.5 or SM 8.x - for example, on T4, A10, and A100, which are included in our benchmark (you can check compute capability of each NVIDIA GPU here). However, in our tests on A100 the memory efficient attention performed better than flash attention for the particular case of diffusion models, due to the small number of attention heads and small batch size. PyTorch understands this and in this case chooses memory efficient attention over flash attention when both are available (see the logic here). For full control over the attention backends (memory-efficient attention, flash attention, \u201cvanilla math\u201d, or any future ones), power users can enable and disable them manually with the help of the context manager torch.backends.cuda.sdp_kernel.", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "Compilation\nCompilation is a new feature of PyTorch 2.0, enabling significant speedups with a very simple user experience. To invoke the default behavior, simply wrap a PyTorch module or a function into torch.compile:\nmodel = torch.compile(model)\n\nPyTorch compiler then turns Python code into a set of instructions which can be executed efficiently without Python overhead. The compilation happens dynamically the first time the code is executed. With the default behavior, under the hood PyTorch utilized TorchDynamo to compile the code and TorchInductor to further optimize it. See this tutorial for more details.", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "Although the one-liner above is enough for compilation, certain modifications in the code can squeeze a larger speedup. In particular, one should avoid so-called graph breaks - places in the code which PyTorch can\u2019t compile. As opposed to previous PyTorch compilation approaches (like TorchScript), PyTorch 2 compiler doesn\u2019t break in this case. Instead it falls back on eager execution - so the code runs, but with reduced performance. We introduced a few minor changes to the model code to get rid of graph breaks. This included eliminating functions from libraries not supported by the compiler, such as inspect.isfunction and einops.rearrange. See this doc to learn more about graph breaks and how to eliminate them.", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "Theoretically, one can apply torch.compileon the whole diffusion sampling loop. However, in practice it is enough to just compile the U-Net. The reason is that torch.compile doesn\u2019t yet have a loop analyzer and would recompile the code for each iteration of the sampling loop. Moreover, compiled sampler code is likely to generate graph breaks - so one would need to adjust it if one wants to get a good performance from the compiled version.\nNote that compilation requires GPU compute capability >= SM 7.0 to run in non-eager mode. 
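If you are unsure what a given device reports, the capability can be queried directly; this is a generic check rather than part of the model code:

```python
import torch

major, minor = torch.cuda.get_device_capability()  # e.g. (7, 5) on a T4, (6, 0) on a P100
print(f"compute capability: SM {major}.{minor}")
```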
This covers all GPUs in our benchmarks - T4, V100, A10, A100 - except for P100 (see the full list). \nOther optimizations", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "Other optimizations\nIn addition, we have improved efficiency of GPU memory operations by eliminating some common pitfalls, e.g. creating a tensor on GPU directly rather than creating it on CPU and later moving to GPU. The places where such optimizations were necessary were determined by line-profiling and looking at CPU/GPU traces and Flame Graphs.\nBenchmarking setup and results summary\nWe have two versions of code to compare: original and optimized. On top of this, several optimization features (xFormers, PyTorch memory efficient attention, compilation) can be turned on/off. Overall, as mentioned in the introduction, we will be benchmarking 5 configurations:\n\nOriginal code without xFormers\nOriginal code with xFormers\nOptimized code with vanilla math attention backend and no compilation\nOptimized code with memory-efficient attention backend and no compilation\n", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "\nOptimized code with memory-efficient attention backend and compilation\n\nAs the original version we took the version of the code which uses PyTorch 1.12 and a custom implementation of attention. The optimized version uses nn.MultiheadAttention in CrossAttention and PyTorch 2.0.0.dev20230111+cu117. It also has a few other minor optimizations in PyTorch-related code. \nThe table below shows runtime of each version of the code in seconds, and the percentage improvement compared to the _original with xFormers. _The compilation time is excluded.\nRuntimes for batch size 1. In parenthesis - relative improvement with respect to the \u201cOriginal with xFormers\u201d row\n\n\n\nConfiguration\n\nP100\n\nT4\n\nA10\n\nV100\n\nA100\n\n\n\n", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "\n\n\n\n\nOriginal without xFormers\n\n30.4s (-19.3%)\n \n29.8s (-77.3%)\n \n13.0s (-83.9%)\n \n10.9s (-33.1%)\n \n8.0s (-19.3%)\n \n\n\nOriginal with xFormers\n\n25.5s (0.0%)\n \n16.8s (0.0%)\n \n7.1s (0.0%)\n \n8.2s (0.0%)\n \n6.7s (0.0%)\n \n\n\nOptimized with vanilla math attention, no compilation\n\n27.3s (-7.0%)\n \n19.9s (-18.7%)\n \n13.2s (-87.2%)\n \n7.5s (8.7%)\n \n5.7s (15.1%)\n \n\n\nOptimized with mem. efficient attention, no compilation\n\n26.5s (-3.8%)\n \n16.8s (0.2%)\n \n7.1s (-0.8%)\n \n6.9s (16.0%)\n \n5.3s (20.6%)\n \n", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "\n\n5.3s (20.6%)\n \n\n\n\nOptimized with mem. 
efficient attention and compilation\n\n-\n \n16.4s (2.1%)\n \n7.2s (-2.3%)\n \n6.6s (18.6%)\n \n4.1s (38.5%)\n \n\n\nRuntimes for batch size 2\n\n\n\nConfiguration\n\nP100\n\nT4\n\nA10\n\nV100\n\nA100\n\n\n\n\nOriginal without xFormers\n\n58.0s (-21.6%)\n \n57.6s (-84.0%)\n \n24.4s (-95.2%)\n \n18.6s (-63.0%)\n \n12.0s (-50.6%)\n \n\n\nOriginal with xFormers\n\n47.7s (0.0%)\n \n31.3s (0.0%)", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "47.7s (0.0%)\n \n31.3s (0.0%)\n \n12.5s (0.0%)\n \n11.4s (0.0%)\n \n8.0s (0.0%)\n \n\n\n\nOptimized with vanilla math attention, no compilation\n\n49.3s (-3.5%)\n \n37.9s (-21.0%)\n \n17.8s (-42.2%)\n \n12.7s (-10.7%)\n \n7.8s (1.8%)\n \n\n\nOptimized with mem. efficient attention, no compilation\n\n47.5s (0.4%)\n \n31.2s (0.5%)\n \n12.2s (2.6%)\n \n11.5s (-0.7%)\n \n7.0s (12.6%)\n \n\n\nOptimized with mem. efficient attention and compilation\n\n-\n \n28.0s (10.5%)\n \n11.4s (9.0%)\n \n10.7s (6.4%)\n \n6.4s (20.3%)\n \n\n\nRuntimes for batch size 4", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "\n\nRuntimes for batch size 4\n\n\n\nConfiguration\n\nP100\n\nT4\n\nA10\n\nV100\n\nA100\n\n\n\n\nOriginal without xFormers\n\n117.9s (-20.0%)\n \n112.4s (-81.8%)\n \n47.2s (-101.7%)\n \n35.8s (-71.9%)\n \n22.8s (-78.9%)\n \n\n\nOriginal with xFormers\n\n98.3s (0.0%)\n \n61.8s (0.0%)\n \n23.4s (0.0%)\n \n20.8s (0.0%)\n \n12.7s (0.0%)\n \n\n\nOptimized with vanilla math attention, no compilation\n\n101.1s (-2.9%)\n \n73.0s (-18.0%)\n \n28.3s (-21.0%)\n \n23.3s (-11.9%)\n ", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "\n\n23.3s (-11.9%)\n \n14.5s (-13.9%)\n \n\n\n\nOptimized with mem. efficient attention, no compilation\n\n92.9s (5.5%)\n \n61.1s (1.2%)\n \n23.9s (-1.9%)\n \n20.8s (-0.1%)\n \n12.8s (-0.9%)\n \n\n\nOptimized with mem. efficient attention and compilation\n\n-\n \n53.1s (14.2%)\n \n20.9s (10.6%)\n \n18.6s (10.4%)\n \n11.2s (12.2%)\n \n\n", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "\n\n\nTo minimize fluctuations and external influence on the performance of the benchmarked code, we ran each version of the code one after another, and then repeated this sequence 10 times: A, B, C, D, E, A, B, \u2026 So the results of a typical run would look like the one in the picture below.. Note that one shouldn\u2019t rely on comparison of absolute run times between different graphs, but comparison of run times_ inside_ one graph is pretty reliable, thanks to our benchmarking setup.\n", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "Each run of text-to-image generation script produces several batches, the number of which is regulated by the CLI parameter --n_iter. In the benchmarks we used n_iter = 2, but introduced an additional \u201cwarm-up\u201d iteration, which doesn\u2019t contribute to the run time. This was necessary for the runs with compilation, because compilation happens the first time the code runs, and so the first iteration is much longer than all subsequent. To make comparison fair, we also introduced this additional \u201cwarm-up\u201d iteration to all other runs. 
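The warm-up logic boils down to something like the following generic timing helper, which is our own sketch rather than the actual benchmarking script:

```python
import time
import torch

def time_fn(fn, n_iters=2, warmup=1):
    for _ in range(warmup):              # absorb one-time costs such as compilation
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()         # wait for queued GPU work before starting the clock
    start = time.perf_counter()
    for _ in range(n_iters):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()         # and again before stopping it
    return (time.perf_counter() - start) / n_iters
```

Timing only after a warm-up call keeps one-time costs, such as the first-run compilation triggered by torch.compile, out of the reported numbers.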
\nThe numbers in the table above are for number of iterations 2 (plus a \u201cwarm-up one\u201d), prompt \u201dA photo\u201d, seed 1, PLMS sampler, and autocast turned on.\nBenchmarks were done using P100, V100, A100, A10 and T4 GPUs. The T4 benchmarks were done in Google Colab Pro. The A10 benchmarks were done on g5.4xlarge AWS instances with 1 GPU.\nConclusions and next steps", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "Conclusions and next steps\nWe have shown that new features of PyTorch 2 - compiler and optimized attention implementation - give performance improvements exceeding or comparable with what previously required installation of an external dependency (xFormers). PyTorch achieved this, in particular, by integrating memory efficient attention from xFormers into its codebase. This is a significant improvement for user experience, given that xFormers, being a state-of-the-art library, in many scenarios requires custom installation process and long builds.\nThere are a few natural directions in which this work can be continued: \n\nThe optimizations we implemented and described here are only benchmarked for text-to-image inference so far. It would be interesting to see how they affect training performance. PyTorch compilation can be directly applied to training; enabling training with PyTorch optimized attention is on the roadmap\n", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "\nWe intentionally minimized changes to the original model code. Further profiling and optimization can probably bring more improvements\nAt the moment compilation is applied only to the U-Net model inside the sampler. Since there is a lot happening outside of U-Net (e.g. operations directly in the sampling loop), it would be beneficial to compile the whole sampler. However, this would require analysis of the compilation process to avoid recompilation at every sampling step\nCurrent code only applies compilation within the PLMS sampler, but it should be trivial to extend it to other samplers\nBesides text-to-image generation, diffusion models are also applied to other tasks - image-to-image and inpainting. It would be interesting to measure how their performance improves from PyTorch 2 optimizations \n\nSee if you can increase performance of open source diffusion models using the methods we described, and share the results! 
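As a starting point, the two main ingredients can be applied in only a few lines. The stand-in U-Net below is a dummy module; substitute the denoising network of whichever diffusion model you are optimizing:

```python
import torch
from torch import nn
from torch.backends.cuda import sdp_kernel

# dummy stand-in for the real denoising U-Net
unet = nn.Sequential(nn.Conv2d(4, 64, 3, padding=1), nn.SiLU(), nn.Conv2d(64, 4, 3, padding=1))
unet = torch.compile(unet)          # compile only the denoising network, as discussed above

latents = torch.randn(1, 4, 64, 64)
# optionally pin the attention backend instead of letting PyTorch choose one
with sdp_kernel(enable_flash=False, enable_math=False, enable_mem_efficient=True):
    out = unet(latents)             # in a real sampler this call sits inside the denoising loop
print(out.shape)
```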
\nResources", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "Resources\n\nPyTorch 2.0 overview, which has a lot of information on torch.compile: https://pytorch.org/get-started/pytorch-2.0/ \nTutorial on torch.compile: https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html\nGeneral compilation troubleshooting: https://pytorch.org/docs/master/dynamo/troubleshooting.html\nDetails on graph breaks: https://pytorch.org/docs/master/dynamo/faq.html#identifying-the-cause-of-a-graph-break\nDetails on guards: https://pytorch.org/docs/master/dynamo/guards-overview.html\nVideo deep dive on TorchDynamo https://www.youtube.com/watch?v=egZB5Uxki0I\n", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "\nTutorial on optimized attention in PyTorch 1.12: https://pytorch.org/tutorials/beginner/bettertransformer_tutorial.html \n\nAcknowledgements\nWe would like to thank Geeta Chauhan, Natalia Gimelshein, Patrick Labatut, Bert Maher, Mark Saroufim, Michael Voznesensky and Francisco Massa for their valuable advice and early feedback on the text.\nSpecial thanks to Yudong Tao initiating the work on using PyTorch native attention in diffusion models.", "source": "https://pytorch.org/blog/accelerated-generative-diffusion-models/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"New Library Updates in PyTorch 2.0\"\n\nSummary\nWe are bringing a number of improvements to the current PyTorch libraries, alongside the PyTorch 2.0 release. These updates demonstrate our focus on developing common and extensible APIs across all domains to make it easier for our community to build ecosystem projects on PyTorch. \nAlong with 2.0, we are also releasing a series of beta updates to the PyTorch domain libraries, including those that are in-tree, and separate libraries including TorchAudio, TorchVision, and TorchText. An update for TorchX is also being released as it moves to community supported mode. 
Please find the list of the latest stable versions and updates below.\nLatest Stable Library Versions (Full List)\n\n\nTorchArrow 0.1.0\n \nTorchRec 0.4.0\n \nTorchVision 0.15\n \n\n", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"} {"text": "TorchVision 0.15\n \n\n\n\nTorchAudio 2.0\n \nTorchServe 0.7.1\n \nTorchX 0.4.0\n \n\n\nTorchData 0.6.0\n \nTorchText 0.15.0\n \nPyTorch on XLA Devices 1.14\n \n\n\n*To see prior versions or (unstable) nightlies, click on versions in the top left menu above \u2018Search Docs\u2019.\nTorchAudio\n[Beta] Data augmentation operators\nThe release adds several data augmentation operators under torchaudio.functional and torchaudio.transforms:\n* torchaudio.functional.add_noise\n* torchaudio.functional.convolve\n* torchaudio.functional.deemphasis\n* torchaudio.functional.fftconvolve\n* torchaudio.functional.preemphasis\n* torchaudio.functional.speed\n* torchaudio.transforms.AddNoise\n* torchaudio.transforms.Convolve\n* torchaudio.transforms.Deemphasis\n* torchaudio.transforms.FFTConvolve\n* torchaudio.transforms.Preemphasis\n* torchaudio.transforms.Speed", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"} {"text": "\ntorchaudio.transforms.Speed\ntorchaudio.transforms.SpeedPerturbation\n\nThe operators can be used to synthetically diversify training data to improve the generalizability of downstream models.\nFor usage details, please refer to the functional and transform documentation and Audio Data Augmentation tutorial.\n[Beta] WavLM and XLS-R models\nThe release adds two self-supervised learning models for speech and audio.\n\nWavLM that is robust to noise and reverberation.\nXLS-R that is trained on cross-lingual datasets.\n\nBesides the model architectures, torchaudio also supports corresponding pre-trained pipelines:\n\ntorchaudio.pipelines.WAVLM_BASE\ntorchaudio.pipelines.WAVLM_BASE_PLUS\ntorchaudio.pipelines.WAVLM_LARGE\n", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"} {"text": "\ntorchaudio.pipelines.WAVLM_LARGE\ntorchaudio.pipelines.WAV2VEC_XLSR_300M\ntorchaudio.pipelines.WAV2VEC_XLSR_1B\ntorchaudio.pipelines.WAV2VEC_XLSR_2B\n\nFor usage details, please refer to the factory function and pre-trained pipelines documentation.\nTorchRL\nThe initial release of torchrl includes several features that span across the entire RL domain. TorchRL can already be used in online, offline, multi-agent, multi-task and distributed RL settings, among others. See below:\n[Beta] Environment wrappers and transforms\ntorchrl.envs includes several wrappers around common environment libraries. This allows users to swap one library with another without effort. These wrappers build an interface between these simulators and torchrl:\n\ndm_control: \nGym\nBrax\nEnvPool\nJumanji\nHabitat\n", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"} {"text": "\nGym\nBrax\nEnvPool\nJumanji\nHabitat\n\nIt also comes with many commonly used transforms and vectorized environment utilities that allow for a fast execution across simulation libraries. Please refer to the documentation for more detail.\n[Beta] Datacollectors\nData collection in RL is made easy via the usage of single process or multiprocessed/distributed data collectors that execute the policy in the environment over a desired duration and deliver samples according to the user\u2019s needs. 
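For example, a single-process collector can be spun up in a few lines. The sketch below reflects our reading of the beta API; the environment, batch sizes, and the use of policy=None (which falls back to random actions) are illustrative, so check the documentation for exact signatures:

```python
from torchrl.envs.libs.gym import GymEnv
from torchrl.collectors import SyncDataCollector

collector = SyncDataCollector(
    lambda: GymEnv("Pendulum-v1"),   # environment factory
    policy=None,                     # no policy yet: actions are sampled randomly
    frames_per_batch=200,
    total_frames=1000,
)
for batch in collector:              # each batch is a TensorDict of rollout data
    print(batch["observation"].shape)
collector.shutdown()
```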
These can be found in torchrl.collectors and are documented here.\n[Beta] Objective modules\nSeveral objective functions are included in torchrl.objectives, among which: \n\nA generic PPOLoss class and derived ClipPPOLoss and KLPPOLoss\nSACLoss and DiscreteSACLoss\nDDPGLoss\nDQNLoss\nREDQLoss\nA2CLoss\nTD3Loss\nReinforceLoss\nDreamer\n", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"} {"text": "\nA2CLoss\nTD3Loss\nReinforceLoss\nDreamer\n\nVectorized value function operators also appear in the library. Check the documentation here.\n[Beta] Models and exploration strategies\nWe provide multiple models, modules and exploration strategies. Get a detailed description in the doc.\n[Beta] Composable replay buffer\nA composable replay buffer class is provided that can be used to store data in multiple contexts including single and multi-agent, on and off-policy and many more.. Components include:\n\nStorages (list, physical or memory-based contiguous storages)\nSamplers (Prioritized, sampler without repetition)\nWriters\nPossibility to add transforms\n\nReplay buffers and other data utilities are documented here.\n[Beta] Logging tools and trainer\nWe support multiple logging tools including tensorboard, wandb and mlflow.", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"} {"text": "We provide a generic Trainer class that allows for easy code recycling and checkpointing.\nThese features are documented here.\nTensorDict\nTensorDict is a new data carrier for PyTorch.\n[Beta] TensorDict: specialized dictionary for PyTorch\nTensorDict allows you to execute many common operations across batches of tensors carried by a single container. TensorDict supports many shape and device or storage operations, and can readily be used in distributed settings. Check the documentation to know more.\n[Beta] @tensorclass: a dataclass for PyTorch\nLike TensorDict, tensorclass provides the opportunity to write dataclasses with built-in torch features such as shape or device operations. \n[Beta] tensordict.nn: specialized modules for TensorDict", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"} {"text": "The tensordict.nn module provides specialized nn.Module subclasses that make it easy to build arbitrarily complex graphs that can be executed with TensorDict inputs. It is compatible with the latest PyTorch features such as functorch, torch.fx and torch.compile.\nTorchRec\n[Beta] KeyedJaggedTensor All-to-All Redesign and Input Dist Fusion\nWe observed performance regression due to a bottleneck in sparse data distribution for models that have multiple, large KJTs to redistribute. \nTo combat this we altered the comms pattern to transport the minimum data required in the initial collective to support the collective calls for the actual KJT tensor data. This data sent in the initial collective, \u2018splits\u2019 means more data is transmitted over the comms stream overall, but the CPU is blocked for significantly shorter amounts of time leading to better overall QPS.", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"} {"text": "Furthermore, we altered the TorchRec train pipeline to group the initial collective calls for the splits together before launching the more expensive KJT tensor collective calls. 
This fusion minimizes the CPU blocked time as launching each subsequent input distribution is no longer dependent on the previous input distribution.\nWith this feature, variable batch sizes are now natively supported across ranks. These features are documented here.\nTorchVision\n[Beta] Extending TorchVision\u2019s Transforms to Object Detection, Segmentation & Video tasks\nTorchVision is extending its Transforms API! Here is what\u2019s new:\n\nYou can use them not only for Image Classification but also for Object Detection, Instance & Semantic Segmentation and Video Classification.\nYou can use new functional transforms for transforming Videos, Bounding Boxes and Segmentation Masks.\n", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"} {"text": "Learn more about these new transforms from our docs, and submit any feedback in our dedicated issue.\nTorchText\n[Beta] Adding scriptable T5 and Flan-T5 to the TorchText library with incremental decoding support!\nTorchText has added the T5 model architecture with pre-trained weights for both the original T5 paper and Flan-T5. The model is fully torchscriptable and features an optimized multiheaded attention implementation. We include several examples of how to utilize the model including summarization, classification, and translation.\nFor more details, please refer to our docs.\nTorchX", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"} {"text": "TorchX\nTorchX is moving to community supported mode. More details will be coming in at a later time.", "source": "https://pytorch.org/blog/new-library-updates-in-pytorch-2.0/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Deprecation of CUDA 11.6 and Python 3.7 Support\"\n\nFor the upcoming PyTorch 2.0 feature release (target March 2023), we will target CUDA 11.7 as the stable version and CUDA 11.8 as the experimental version of CUDA and Python >=3.8, <=3.11. \nIf you are still using or depending on CUDA 11.6 or Python 3.7 builds, we strongly recommend moving to at least CUDA 11.7 and Python 3.8, as it would be the minimum versions required for PyTorch 2.0.\nPlease note that as of Feb 1, CUDA 11.6 and Python 3.7 are no longer included in the nightlies\nPlease refer to the Release Compatibility Matrix for PyTorch releases:\n\n\nPyTorch Version\n\nPython\n\nStable CUDA\n\nExperimental CUDA\n\n\n\n2.0\n \n>=3.8, <=3.11\n \nCUDA 11.7, CUDNN 8.5.0.96\n \nCUDA 11.8, CUDNN 8.7.0.84\n \n\n", "source": "https://pytorch.org/blog/deprecation-cuda-python-support/", "category": "pytorch blogs"} {"text": "\n\n\n\n1.13\n \n>=3.7, <=3.10\n \nCUDA 11.6, CUDNN 8.3.2.44\n \nCUDA 11.7, CUDNN 8.5.0.96\n \n\n\n1.12\n \n>=3.7, <=3.10\n \nCUDA 11.3, CUDNN 8.3.2.44\n \nCUDA 11.6, CUDNN 8.3.2.44\n \n\n\nAs of 2/1/2023\nFor more information on PyTorch releases, updated compatibility matrix and release policies, please see (and bookmark) Readme.", "source": "https://pytorch.org/blog/deprecation-cuda-python-support/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'torchvision 0.3: segmentation, detection models, new datasets and more..'\nauthor: Francisco Massa\nredirect_from: /2019/05/23/torchvision03.html\n\nPyTorch domain libraries like torchvision provide convenient access to common datasets and models that can be used to quickly create a state-of-the-art baseline. 
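For example, a standard dataset and a strong pre-trained baseline are each one line away; this is a generic illustration of the point, not something specific to the 0.3 release:

```python
import torchvision

dataset = torchvision.datasets.CIFAR10(root="data", download=True)  # common dataset
model = torchvision.models.resnet50(pretrained=True)                # pre-trained baseline
model.eval()
```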
Moreover, they also provide common abstractions to reduce boilerplate code that users might have to otherwise repeatedly write. The torchvision 0.3 release brings several new features including models for semantic segmentation, object detection, instance segmentation, and person keypoint detection, as well as custom C++ / CUDA ops specific to computer vision.\n\n\n\nNew features include:", "source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"} {"text": "\nNew features include:\nReference training / evaluation scripts: torchvision now provides, under the references/ folder, scripts for training and evaluation of the following tasks: classification, semantic segmentation, object detection, instance segmentation and person keypoint detection. These serve as a log of how to train a specific model and provide baseline training and evaluation scripts to quickly bootstrap research.\ntorchvision ops: torchvision now contains custom C++ / CUDA operators. Those operators are specific to computer vision, and make it easier to build object detection models. These operators currently do not support PyTorch script mode, but support for it is planned for in the next release. Some of the ops supported include:\n\nroi_pool (and the module version RoIPool)\nroi_align (and the module version RoIAlign)\nnms, for non-maximum suppression of bounding boxes\nbox_iou, for computing the intersection over union metric between two sets of bounding boxes\n", "source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"} {"text": "\nbox_area, for computing the area of a set of bounding boxes\n\nHere are a few examples on using torchvision ops:\nimport torch\nimport torchvision\n\n# create 10 random boxes\nboxes = torch.rand(10, 4) * 100\n# they need to be in [x0, y0, x1, y1] format\nboxes[:, 2:] += boxes[:, :2]\n# create a random image\nimage = torch.rand(1, 3, 200, 200)\n# extract regions in `image` defined in `boxes`, rescaling\n# them to have a size of 3x3\npooled_regions = torchvision.ops.roi_align(image, [boxes], output_size=(3, 3))\n# check the size\nprint(pooled_regions.shape)\n# torch.Size([10, 3, 3, 3])\n\n# or compute the intersection over union between\n# all pairs of boxes\nprint(torchvision.ops.box_iou(boxes, boxes).shape)\n# torch.Size([10, 10])\n", "source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"} {"text": "torch.Size([10, 10])\n```\nNew models and datasets: torchvision now adds support for object detection, instance segmentation and person keypoint detection models. In addition, several popular datasets have been added. Note: The API is currently experimental and might change in future versions of torchvision. 
New models include:\nSegmentation Models\nThe 0.3 release also contains models for dense pixelwise prediction on images.\nIt adds FCN and DeepLabV3 segmentation models, using a ResNet50 and ResNet101 backbones.\nPre-trained weights for ResNet101 backbone are available, and have been trained on a subset of COCO train2017, which contains the same 20 categories as those from Pascal VOC.\nThe pre-trained models give the following results on the subset of COCO val2017 which contain the same 20 categories as those present in Pascal VOC:\n\n\n\nNetwork\nmean IoU\nglobal pixelwise acc\n\n\n\n\nFCN ResNet101\n63.7\n91.9\n\n\nDeepLabV3 ResNet101\n67.4\n92.4\n\n\n\nDetection Models", "source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"} {"text": "Detection Models\n\n\n\nNetwork\nbox AP\nmask AP\nkeypoint AP\n\n\n\n\nFaster R-CNN ResNet-50 FPN trained on COCO\n37.0\n\u00a0\n\u00a0\n\n\nMask R-CNN ResNet-50 FPN trained on COCO\n37.9\n34.6\n\u00a0\n\n\nKeypoint R-CNN ResNet-50 FPN trained on COCO\n54.6\n\u00a0\n65.0\n\n\n\nThe implementations of the models for object detection, instance segmentation and keypoint detection are fast, specially during training.\nIn the following table, we use 8 V100 GPUs, with CUDA 10.0 and CUDNN 7.4 to report the results. During training, we use a batch size of 2 per GPU, and during testing a batch size of 1 is used.\nFor test time, we report the time for the model evaluation and post-processing (including mask pasting in image), but not the time for computing the precision-recall.\n\n\n\nNetwork\ntrain time (s / it)\ntest time (s / it)\nmemory (GB)\n\n\n\n\nFaster R-CNN ResNet-50 FPN\n0.2288\n0.0590\n5.2\n\n\nMask R-CNN ResNet-50 FPN\n0.2728\n0.0903\n5.4\n\n\nKeypoint R-CNN ResNet-50 FPN\n0.3789\n0.1242\n6.8\n\n\n", "source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"} {"text": "You can load and use pre-trained detection and segmentation models with a few lines of code\nimport torchvision\n\nmodel = torchvision.models.detection.maskrcnn_resnet50_fpn(pretrained=True)\n# set it to evaluation mode, as the model behaves differently\n# during training and during evaluation\nmodel.eval()\n\nimage = PIL.Image.open('/path/to/an/image.jpg')\nimage_tensor = torchvision.transforms.functional.to_tensor(image)\n\n# pass a list of (potentially different sized) tensors\n# to the model, in 0-1 range. 
The model will take care of\n# batching them together and normalizing\noutput = model([image_tensor])\n# output is a list of dict, containing the postprocessed predictions\n\nClassification Models\nThe following classification models were added:\n\nGoogLeNet (Inception v1)\nMobileNet V2\nShuffleNet v2\nResNeXt-50 32x4d and ResNeXt-101 32x8d\n\nDatasets\nThe following datasets were added:\n\nCaltech101, Caltech256, and CelebA\nImageNet dataset (improving on ImageFolder, provides class-strings)\n", "source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"} {"text": "\nSemantic Boundaries Dataset\nVisionDataset as a base class for all datasets\n\nIn addition, we've added more image transforms, general improvements and bug fixes, as well as improved documentation.\nSee the full release notes here as well as this getting started tutorial on Google Colab here, which describes how to fine tune your own instance segmentation model on a custom dataset.\nCheers!\nTeam PyTorch", "source": "https://pytorch.org/blog/torchvision03/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Prototype Features Now Available - APIs for Hardware Accelerated Mobile and ARM64 Builds'\nauthor: Team PyTorch\n\nToday, we are announcing four PyTorch prototype features. The first three of these will enable Mobile machine-learning developers to execute models on the full set of hardware (HW) engines making up a system-on-chip (SOC). This gives developers options to optimize their model execution for unique performance, power, and system-level concurrency.\nThese features include enabling execution on the following on-device HW engines:\n* DSP and NPUs using the Android Neural Networks API (NNAPI), developed in collaboration with Google\n* GPU execution on Android via Vulkan\n* GPU execution on iOS via Metal\nThis release also includes developer efficiency benefits with newly introduced support for ARM64 builds for Linux.", "source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"} {"text": "Below, you\u2019ll find brief descriptions of each feature with the links to get you started. These features are available through our nightly builds. Reach out to us on the PyTorch Forums for any comment or feedback. We would love to get your feedback on those and hear how you are using them!\nNNAPI Support with Google Android", "source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"} {"text": "NNAPI Support with Google Android\nThe Google Android and PyTorch teams collaborated to enable support for Android\u2019s Neural Networks API (NNAPI) via PyTorch Mobile. Developers can now unlock high-performance execution on Android phones as their machine-learning models will be able to access additional hardware blocks on the phone\u2019s system-on-chip. NNAPI allows Android apps to run computationally intensive neural networks on the most powerful and efficient parts of the chips that power mobile phones, including DSPs (Digital Signal Processors) and NPUs (specialized Neural Processing Units). The API was introduced in Android 8 (Oreo) and significantly expanded in Android 10 and 11 to support a richer set of AI models. With this integration, developers can now seamlessly access NNAPI directly from PyTorch Mobile. 
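To give a feel for the workflow, below is a rough sketch of converting a traced float model for NNAPI execution. It follows the prototype tutorial's approach; the conversion helper lives in a prototype namespace (torch.backends._nnapi.prepare), and the model choice, input shape, and file name here are purely illustrative.
```python
import torch
import torchvision
# The conversion helper lives in a prototype namespace and may change.
import torch.backends._nnapi.prepare

# Any traceable float model can be used; MobileNetV2 is only an example.
model = torchvision.models.mobilenet_v2(pretrained=True).eval()

# NNAPI works best with channels-last (NHWC) contiguous inputs.
example = torch.rand(1, 3, 224, 224).contiguous(memory_format=torch.channels_last)

with torch.no_grad():
    traced = torch.jit.trace(model, example)

# Convert the traced model so that supported ops dispatch to NNAPI at runtime,
# then save it for deployment with PyTorch Mobile.
nnapi_model = torch.backends._nnapi.prepare.convert_model_to_nnapi(traced, example)
nnapi_model.save("mobilenet_v2_nnapi.pt")
```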
This initial release includes fully-functional support for a core set of features and operators, and Google and Facebook will be working to expand capabilities in the coming months.\nLinks", "source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"} {"text": "Links\n* Android Blog: Android Neural Networks API 1.3 and PyTorch Mobile support\n* PyTorch Medium Blog: Support for Android NNAPI with PyTorch Mobile\nPyTorch Mobile GPU support", "source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"} {"text": "PyTorch Mobile GPU support\nInferencing on GPU can provide great performance on many models types, especially those utilizing high-precision floating-point math. Leveraging the GPU for ML model execution as those found in SOCs from Qualcomm, Mediatek, and Apple allows for CPU-offload, freeing up the Mobile CPU for non-ML use cases. This initial prototype level support provided for on device GPUs is via the Metal API specification for iOS, and the Vulkan API specification for Android. As this feature is in an early stage: performance is not optimized and model coverage is limited. We expect this to improve significantly over the course of 2021 and would like to hear from you which models and devices you would like to see performance improvements on.\nLinks\n* Prototype source workflows\nARM64 Builds for Linux", "source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"} {"text": "ARM64 Builds for Linux\nWe will now provide prototype level PyTorch builds for ARM64 devices on Linux. As we see more ARM usage in our community with platforms such as Raspberry Pis and Graviton(2) instances spanning both at the edge and on servers respectively. This feature is available through our nightly builds.\nWe value your feedback on these features and look forward to collaborating with you to continuously improve them further!\nThank you,\nTeam PyTorch", "source": "https://pytorch.org/blog/prototype-features-now-available-apis-for-hardware-accelerated-mobile-and-arm64-builds/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"PyTorch Enterprise Support Program Update\"\nauthor: Team PyTorch\nfeatured-img: \"\"\n\nOn May 25, 2021, we announced the PyTorch Enterprise Support Program (ESP) that enabled providers to develop and offer tailored enterprise-grade support to their customers.\nThe program enabled Program certified service providers to develop and offer tailored enterprise-grade support to their customers through contribution of hotfixes and other improvements requested by PyTorch enterprise users who were developing models in production at scale for mission-critical applications. However, as we evaluate community feedback, we found ongoing ESP support was not necessary at this time and will immediately divert these resources to other areas to improve the user experience for the entire community.", "source": "https://pytorch.org/blog/pytorch-enterprise-support-update/", "category": "pytorch blogs"} {"text": "Today, we are removing the PyTorch long-term support (LTS 1.8.2) download link from the \u201cGet Started\u201d page from the \u201cStart Locally\u201d download option in order to simplify the user experience. One can download PyTorch v1.8.2 in previous versions. 
Please note that it is only supported for Python while it is being deprecated. If there are any updates to ESP/LTS, we will update future blogs.\n\n\n\nPlease reach out to marketing@pytorch.org with any questions.", "source": "https://pytorch.org/blog/pytorch-enterprise-support-update/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'New Releases: PyTorch 1.2, torchtext 0.4, torchaudio 0.3, and torchvision 0.4'\nauthor: Team PyTorch\nredirect_from: /2019/08/06/pytorch_aug2019_releases.html\n\nSince the release of PyTorch 1.0, we\u2019ve seen the community expand to add new tools, contribute to a growing set of models available in the PyTorch Hub, and continually increase usage in both research and production.\nFrom a core perspective, PyTorch has continued to add features to support both research and production usage, including the ability to bridge these two worlds via TorchScript. Today, we are excited to announce that we have four new releases including PyTorch 1.2, torchvision 0.4, torchaudio 0.3, and torchtext 0.4. You can get started now with any of these releases at pytorch.org.\nPyTorch 1.2", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "PyTorch 1.2\nWith PyTorch 1.2, the open source ML framework takes a major step forward for production usage with the addition of an improved and more polished TorchScript environment. These improvements make it even easier to ship production models, expand support for exporting ONNX formatted models, and enhance module level support for Transformers. In addition to these new features, TensorBoard is now no longer experimental - you can simply type from torch.utils.tensorboard import SummaryWriter to get started.\nTorchScript Improvements\nSince its release in PyTorch 1.0, TorchScript has provided a path to production for eager PyTorch models. The TorchScript compiler converts PyTorch models to a statically typed graph representation, opening up opportunities for\noptimization and execution in constrained environments where Python is not available. You can incrementally convert your model to TorchScript, mixing compiled code seamlessly with Python.", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "PyTorch 1.2 significantly expands TorchScript's support for the subset of Python used in PyTorch models and delivers a new, easier-to-use API for compiling your models to TorchScript. See the migration guide for details. 
Below is an example usage of the new API:\nimport torch\n\nclass MyModule(torch.nn.Module):\n def __init__(self, N, M):\n super(MyModule, self).__init__()\n self.weight = torch.nn.Parameter(torch.rand(N, M))\n\n def forward(self, input):\n if input.sum() > 0:\n output = self.weight.mv(input)\n else:\n output = self.weight + input\n return output\n\n# Compile the model code to a static representation\nmy_script_module = torch.jit.script(MyModule(3, 4))\n\n# Save the compiled code and model data so it can be loaded elsewhere\nmy_script_module.save(\"my_script_module.pt\")\n", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "my_script_module.save(\"my_script_module.pt\")\n```\nTo learn more, see our Introduction to TorchScript and Loading a\nPyTorch Model in C++ tutorials.\nExpanded ONNX Export", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "Expanded ONNX Export\nThe ONNX community continues to grow with an open governance structure and additional steering committee members, special interest groups (SIGs), and working groups (WGs). In collaboration with Microsoft, we\u2019ve added full support to export ONNX Opset versions 7(v1.2), 8(v1.3), 9(v1.4) and 10 (v1.5). We\u2019ve have also enhanced the constant folding pass to support Opset 10, the latest available version of ONNX. ScriptModule has also been improved including support for multiple outputs, tensor factories, and tuples as inputs and outputs. Additionally, users are now able to register their own symbolic to export custom ops, and specify the dynamic dimensions of inputs during export. Here is a summary of the all of the major improvements:\n\nSupport for multiple Opsets including the ability to export dropout, slice, flip, and interpolate in Opset 10.\n", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "\nImprovements to ScriptModule including support for multiple outputs, tensor factories, and tuples as inputs and outputs.\nMore than a dozen additional PyTorch operators supported including the ability to export a custom operator.\nMany big fixes and test infra improvements.\n\nYou can try out the latest tutorial here, contributed by @lara-hdr at Microsoft. A big thank you to the entire Microsoft team for all of their hard work to make this release happen!\nnn.Transformer", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "nn.Transformer\nIn PyTorch 1.2, we now include a standard nn.Transformer module, based on the paper \u201cAttention is All You Need\u201d. The nn.Transformer module relies entirely on an attention mechanism to draw global dependencies between input and output. The individual components of the nn.Transformer module are designed so they can be adopted independently. For example, the nn.TransformerEncoder can be used by itself, without the larger nn.Transformer. The new APIs include:\n\nnn.Transformer\nnn.TransformerEncoder and nn.TransformerEncoderLayer\nnn.TransformerDecoder and nn.TransformerDecoderLayer\n\n", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "\n\n\nSee the Transformer Layers documentation for more information. 
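For illustration, here is a minimal sketch of using the encoder stack on its own and of the full nn.Transformer module; the layer sizes, batch size, and sequence lengths below are arbitrary.
```python
import torch
import torch.nn as nn

# The encoder stack can be used by itself, without the full nn.Transformer.
encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

# Inputs follow the (sequence, batch, feature) layout used by these modules.
src = torch.rand(10, 32, 512)
print(encoder(src).shape)  # torch.Size([10, 32, 512])

# The full module wires an encoder and a decoder together.
model = nn.Transformer(d_model=512, nhead=8,
                       num_encoder_layers=2, num_decoder_layers=2)
tgt = torch.rand(20, 32, 512)
print(model(src, tgt).shape)  # torch.Size([20, 32, 512])
```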
See here for the full PyTorch 1.2 release notes.\nDomain API Library Updates", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "Domain API Library Updates\nPyTorch domain libraries like torchvision, torchtext, and torchaudio provide convenient access to common datasets, models, and transforms that can be used to quickly create a state-of-the-art baseline. Moreover, they also provide common abstractions to reduce boilerplate code that users might have to otherwise repeatedly write. Since research domains have distinct requirements, an ecosystem of specialized libraries called domain APIs (DAPI) has emerged around PyTorch to simplify the development of new and existing algorithms in a number of fields. We\u2019re excited to release three updated DAPI libraries for text, audio, and vision that compliment the PyTorch 1.2 core release.\nTorchaudio 0.3 with Kaldi Compatibility, New Transforms\n\n\n", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "\nTorchaudio specializes in machine understanding of audio waveforms. It is an ML library that provides relevant signal processing functionality (but is not a general signal processing library). It leverages PyTorch\u2019s GPU support to provide many tools and transformations for waveforms to make data loading and standardization easier and more readable. For example, it offers data loaders for waveforms using sox, and transformations such as spectrograms, resampling, and mu-law encoding and decoding.\nWe are happy to announce the availability of torchaudio 0.3.0, with a focus on standardization and complex numbers, a transformation (resample) and two new functionals (phase_vocoder, ISTFT), Kaldi compatibility, and a new tutorial. Torchaudio was redesigned to be an extension of PyTorch and a part of the domain APIs (DAPI) ecosystem.\nStandardization", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "Standardization\nSignificant effort in solving machine learning problems goes into data preparation. In this new release, we've updated torchaudio's interfaces for its transformations to standardize around the following vocabulary and conventions.\nTensors are assumed to have channel as the first dimension and time as the last dimension (when applicable). This makes it consistent with PyTorch's dimensions. For size names, the prefix n_ is used (e.g. \"a tensor of size (n_freq, n_mel)\") whereas dimension names do not have this prefix (e.g. \"a tensor of dimension (channel, time)\"). The input of all transforms and functions now assumes channel first. This is done to be consistent with PyTorch, which has channel followed by the number of samples. The channel parameter of all transforms and functions is now deprecated.", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "The output of STFT is (channel, frequency, time, 2), meaning for each channel, the columns are the Fourier transform of a certain window, so as we travel horizontally we can see each column (the Fourier transformed waveform) change over time. This matches the output of librosa so we no longer need to transpose in our test comparisons with Spectrogram, MelScale, MelSpectrogram, and MFCC. 
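To make the channel-first convention concrete, here is a small sketch using a random waveform; the exact output sizes depend on the default window parameters of each transform.
```python
import torch
import torchaudio

# Waveforms follow the new convention: (channel, time).
waveform = torch.randn(1, 16000)

# Transforms keep channel first; the spectrogram comes out as (channel, n_freq, time).
spectrogram = torchaudio.transforms.Spectrogram()(waveform)
print(spectrogram.shape)

# MelSpectrogram follows the same layout: (channel, n_mels, time).
mel = torchaudio.transforms.MelSpectrogram()(waveform)
print(mel.shape)
```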
Moreover, because of these new conventions, we deprecated LC2CL and BLC2CBL which were used to transfer from one shape of signal to another.\nAs part of this release, we're also introducing support for complex numbers via tensors of dimension (..., 2), and providing magphase to convert such a tensor into its magnitude and phase, and similarly complex_norm and angle.\nThe details of the standardization are provided in the README.\nFunctionals, Transformations, and Kaldi Compatibility", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "Prior to the standardization, we separated state and computation into torchaudio.transforms and torchaudio.functional.\nAs part of the transforms, we're adding a new transformation in 0.3.0: Resample. Resample can upsample or downsample a waveform to a different frequency.\nAs part of the functionals, we're introducing: phase_vocoder, a phase vocoder to change the speed of a waveform without changing its pitch, and ISTFT, the inverse STFT implemented to be compatible with STFT provided by PyTorch. This separation allows us to make functionals weak scriptable and to utilize JIT in 0.3.0. We thus have JIT and CUDA support for the following transformations: Spectrogram, AmplitudeToDB (previously named SpectrogramToDB), MelScale,\nMelSpectrogram, MFCC, MuLawEncoding, MuLawDecoding (previously named MuLawExpanding).", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "We now also provide a compatibility interface with Kaldi to ease onboarding and reduce a user's code dependency on Kaldi. We now have an interface for spectrogram, fbank, and resample_waveform.\nNew Tutorial\nTo showcase the new conventions and transformations, we have a new tutorial demonstrating how to preprocess waveforms using torchaudio. This tutorial walks through an example of loading a waveform and applying some of the available transformations to it.\nWe are excited to see an active community around torchaudio and eager to further grow and support it. We encourage you to go ahead and experiment for yourself with this tutorial and the two datasets that are available: VCTK and YESNO! They have an interface to download the datasets and preprocess them in a convenient format. You can find the details in the release notes here.", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "Torchtext 0.4 with supervised learning datasets\nA key focus area of torchtext is to provide the fundamental elements to help accelerate NLP research. This includes easy access to commonly used datasets and basic preprocessing pipelines for working on raw text based data. The torchtext 0.4.0 release includes several popular supervised learning baselines with \"one-command\" data loading. A tutorial is included to show how to use the new datasets for text classification analysis. We also added and improved on a few functions such as get_tokenizer and build_vocab_from_iterator to make it easier to implement future datasets. Additional examples can be found here.", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "Text classification is an important task in Natural Language Processing with many applications, such as sentiment analysis. 
The new release includes several popular text classification datasets for supervised learning including:\n\nAG_NEWS\nSogouNews\nDBpedia\nYelpReviewPolarity\nYelpReviewFull\nYahooAnswers\nAmazonReviewPolarity\nAmazonReviewFull\n\nEach dataset comes with two parts (train vs. test), and can be easily loaded with a single command. The datasets also support an ngrams feature to capture the partial information about the local word order. Take a look at the tutorial here to learn more about how to use the new datasets for supervised problems such as text classification analysis.\n```python\nfrom torchtext.datasets.text_classification import DATASETS", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "train_dataset, test_dataset = DATASETS'AG_NEWS'\n\nIn addition to the domain library, PyTorch provides many tools to make data loading easy. Users now can load and preprocess the text classification datasets with some well supported tools, like [torch.utils.data.DataLoader](https://pytorch.org/docs/stable/_modules/torch/utils/data/dataloader.html) and [torch.utils.data.IterableDataset](https://pytorch.org/docs/master/data.html#torch.utils.data.IterableDataset). Here are a few lines to wrap the data with DataLoader. More examples can be found [here](https://github.com/pytorch/text/tree/master/examples/text_classification).\n\n```python\nfrom torch.utils.data import DataLoader\ndata = DataLoader(train_dataset, collate_fn=generate_batch)\n\nCheck out the release notes here to learn more and try out the tutorial here.\nTorchvision 0.4 with Support for Video", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "Torchvision 0.4 with Support for Video\nVideo is now a first-class citizen in torchvision, with support for data loading, datasets, pre-trained models, and transforms. The 0.4 release of torchvision includes:\n\nEfficient IO primitives for reading/writing video files (including audio), with support for arbitrary encodings and formats.\nStandard video datasets, compatible with torch.utils.data.Dataset and torch.utils.data.DataLoader.\nPre-trained models built on the Kinetics-400 dataset for action classification on videos (including the training scripts).\nReference training scripts for training your own video models.\n\nWe wanted working with video data in PyTorch to be as straightforward as possible, without compromising too much on performance.\nAs such, we avoid the steps that would require re-encoding the videos beforehand, as it would involve:\n\nA preprocessing step which duplicates the dataset in order to re-encode it.\nAn overhead in time and space because this re-encoding is time-consuming.\n", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "\nGenerally, an external script should be used to perform the re-encoding.\n\nAdditionally, we provide APIs such as the utility class, VideoClips, that simplifies the task of enumerating all possible clips of fixed size in a list of video files by creating an index of all clips in a set of videos. It also allows you to specify a fixed frame-rate for the videos. 
An example of the API is provided below:\nfrom torchvision.datasets.video_utils import VideoClips\n\nclass MyVideoDataset(object):\n def __init__(self, video_paths):\n self.video_clips = VideoClips(video_paths,\n clip_length_in_frames=16,\n frames_between_clips=1,\n frame_rate=15)\n\n def __getitem__(self, idx):\n video, audio, info, video_idx = self.video_clips.get_clip(idx)\n return video, audio\n\n def __len__(self):\n return self.video_clips.num_clips()\n", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "return self.video_clips.num_clips()\n```\nMost of the user-facing API is in Python, similar to PyTorch, which makes it easily extensible. Plus, the underlying implementation is fast \u2014 torchvision decodes as little as possible from the video on-the-fly in order to return a clip from the video.\nCheck out the torchvision 0.4 release notes here for more details.\nWe look forward to continuing our collaboration with the community and hearing your feedback as we further improve and expand the PyTorch deep learning platform.\nWe\u2019d like to thank the entire PyTorch team and the community for all of the contributions to this work!", "source": "https://pytorch.org/blog/pytorch-1.2-and-domain-api-release/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'How to Train State-Of-The-Art Models Using TorchVision\u2019s Latest Primitives'\nauthor: Vasilis Vryniotis\nfeatured-img: 'assets/images/fx-image2.png'\n\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "\nA few weeks ago, TorchVision v0.11 was released packed with numerous new primitives, models and training recipe improvements which allowed achieving state-of-the-art (SOTA) results. The project was dubbed \u201cTorchVision with Batteries Included\u201d and aimed to modernize our library. We wanted to enable researchers to reproduce papers and conduct research more easily by using common building blocks. Moreover, we aspired to provide the necessary tools to Applied ML practitioners to train their models on their own data using the same SOTA techniques as in research. Finally, we wanted to refresh our pre-trained weights and offer\u00a0better off-the-shelf models to our users, hoping that they would build better applications.", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "Though there is still much work to be done, we wanted to share with you some exciting results from the above work. We will showcase how one can use the new tools included in TorchVision to achieve state-of-the-art results on a highly competitive and well-studied architecture such as ResNet50 [1]. We will share the exact recipe used to improve our baseline by over 4.7 accuracy points to reach a final top-1 accuracy of 80.9% and share the journey for deriving the new training process. Moreover, we will show that this recipe generalizes well to other model variants and families. 
We hope that the above will influence future research for developing stronger generalizable training methodologies and will inspire the community to adopt and contribute to our efforts.\nThe Results\nUsing our new training recipe found on ResNet50, we\u2019ve refreshed the pre-trained weights of the following models:\n\n\n\nModel\nAccuracy@1\nAccuracy@5\n\n\n\n\n\n\n\n\n\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "|----------|:--------:|:----------:|\n| ResNet50 | 80.858 | 95.434| \n|----------|:--------:|:----------:|\n| ResNet101 | 81.886 | 95.780| \n|----------|:--------:|:----------:|\n| ResNet152 | 82.284 | 96.002| \n|----------|:--------:|:----------:|\n| ResNeXt50-32x4d | 81.198 | 95.340| \nNote that the accuracy of all models except RetNet50 can be further improved by adjusting their training parameters slightly, but our focus was to have a single robust recipe which performs well for all. \nUPDATE: We have refreshed the majority of popular classification models of TorchVision, you can find the details on this blog post.\nThere are currently two ways to use the latest weights of the model.\nUsing the Multi-pretrained weight API", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "Using the Multi-pretrained weight API\nWe are currently working on a new prototype mechanism which will extend the model builder methods of TorchVision to support multiple weights. Along with the weights, we store useful meta-data (such as the labels, the accuracy, links to recipe etc) and the preprocessing transforms necessary for using the models. Example:\n```python\n from PIL import Image\n from torchvision import prototype as P\n img = Image.open(\"test/assets/encode_jpeg/grace_hopper_517x606.jpg\")\n \u00a0\n # Initialize model\n weights = P.models.ResNet50_Weights.IMAGENET1K_V2\n model = P.models.resnet50(weights=weights)\n model.eval()\n# Initialize inference transforms\n preprocess = weights.transforms()\n \u00a0\n # Apply inference preprocessing transforms\n batch = preprocess(img).unsqueeze(0)", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "batch = preprocess(img).unsqueeze(0)\n prediction = model(batch).squeeze(0).softmax(0)\n \u00a0\n # Make predictions\n label = prediction.argmax().item()\n score = prediction[label].item()\n \u00a0\n # Use meta to get the labels\n category_name = weights.meta['categories'][label]\n print(f\"{category_name}: {100 * score}%\")\n\n## Using the legacy API\n\nThose who don\u2019t want to use a prototype API have the option of accessing the new weights via the legacy API using the following approach:\n\n```python\n from torchvision.models import resnet\n \u00a0\n # Overwrite the URL of the previous weights\n resnet.model_urls[\"resnet50\"] = \"https://download.pytorch.org/models/resnet50-11ad3fa6.pth\"\n \u00a0\n # Initialize the model using the legacy API\n model = resnet.resnet50(pretrained=True)\n \u00a0\n # TODO: Apply preprocessing + call the model\n # ...\n\nThe Training Recipe", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "...\n```\nThe Training Recipe\nOur goal was to use the newly introduced primitives of TorchVision to derive a new strong 
training recipe which achieves state-of-the-art results for the vanilla ResNet50 architecture when trained from scratch on ImageNet with no additional external data. Though by using architecture specific tricks\u00a0[2] one could further improve the accuracy, we\u2019ve decided not to include them so that the recipe can be used in other architectures. Our recipe\u00a0heavily focuses on simplicity and builds upon work by FAIR [3], [4], [5], [6], [7].\u00a0Our findings align with the\u00a0parallel study of Wightman et al. [7], who also report major accuracy improvements by focusing on the training recipes.", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "Without further ado, here are the main parameters of our recipe:\n # Optimizer & LR scheme\n ngpus=8,\n batch_size=128,\u00a0 # per GPU\n\n epochs=600, \n opt='sgd', \u00a0\n momentum=0.9,\n\n lr=0.5, \n lr_scheduler='cosineannealinglr', \n lr_warmup_epochs=5, \n lr_warmup_method='linear', \n lr_warmup_decay=0.01, \n\n\n # Regularization and Augmentation\n weight_decay=2e-05, \n norm_weight_decay=0.0,\n\n label_smoothing=0.1, \n mixup_alpha=0.2, \n cutmix_alpha=1.0, \n auto_augment='ta_wide', \n random_erase=0.1, \n\n ra_sampler=True,\n ra_reps=4,\n\n\n # EMA configuration\n model_ema=True, \n model_ema_steps=32, \n model_ema_decay=0.99998, \n\n\n # Resizing\n interpolation='bilinear', \n val_resize_size=232, \n val_crop_size=224, \n train_crop_size=176,\n\nUsing our standard training reference script, we can train a ResNet50 using the following command:\n```\ntorchrun --nproc_per_node=8 train.py --model resnet50 --batch-size 128 --lr 0.5 \\", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "--lr-scheduler cosineannealinglr --lr-warmup-epochs 5 --lr-warmup-method linear \\\n--auto-augment ta_wide --epochs 600 --random-erase 0.1 --weight-decay 0.00002\u00a0\\\n--norm-weight-decay 0.0 --label-smoothing 0.1 --mixup-alpha 0.2 --cutmix-alpha 1.0\u00a0\\\n--train-crop-size 176 --model-ema --val-resize-size 232 --ra-sampler --ra-reps 4\n```\nMethodology\nThere are a few principles we kept in mind during our explorations:\n\nTraining is a stochastic process and the validation metric we try to optimize is a random variable. This is due to the random weight initialization scheme employed and the existence of random effects during the training process. This means that we can\u2019t do a single run to assess the effect of a recipe change. The standard practice is doing multiple runs (usually 3 to 5) and studying the summarization stats (such as mean, std, median, max, etc).\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "\nThere is usually a significant interaction between different parameters, especially for techniques that focus on Regularization and reducing overfitting. Thus changing the value of one can have effects on the optimal configurations of others. To account for that one can either adopt a greedy search approach (which often leads to suboptimal results but tractable experiments) or apply grid search (which leads to better results but is computationally expensive). In this work, we used a mixture of both.\nTechniques that are non-deterministic or introduce noise usually require longer training cycles to improve model performance. 
To keep things tractable, we initially used short training cycles (small number of epochs) to decide which paths can be eliminated early and which should be explored using longer training.\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "\nThere is a risk of overfitting the validation dataset [8] because of the repeated experiments. To mitigate some of the risk, we apply only training optimizations that provide a significant accuracy improvements and use K-fold cross validation to verify optimizations done on the validation set. Moreover we confirm that our recipe ingredients generalize well on other models for which we didn\u2019t optimize the hyper-parameters.\n\nBreak down of key accuracy improvements", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "Break down of key accuracy improvements\nAs discussed in\u00a0earlier blogposts, training models is not a journey of monotonically increasing accuracies and the process involves a lot of backtracking. To quantify the effect of each optimization, below we attempt to show-case an idealized linear journey of deriving the final recipe starting from the original recipe of TorchVision. We would like to clarify that this is an oversimplification of the actual path we followed and thus it should be taken with a grain of salt.\u00a0\n\n\n\nIn the table below, we provide a summary of the performance of stacked incremental improvements on top of Baseline. Unless denoted otherwise, we report the model with best Acc@1 out of 3 runs:", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "\n\n\n\nAccuracy@1\nAccuracy@5\nIncremental Diff\nAbsolute Diff\n\n\n\n\nResNet50 Baseline\n76.130\n92.862\n0.000\n0.000\n\n\n----------\n:--------:\n:----------:\n:---------\n:--------:\n\n\n+ LR optimizations\n76.494\n93.198\n0.364\n0.364\n\n\n----------\n:--------:\n:----------:\n:---------\n:--------:\n\n\n+ TrivialAugment\n76.806\n93.272\n0.312\n0.676\n\n\n----------\n:--------:\n:----------:\n:---------\n:--------:\n\n\n+ Long Training\n78.606\n94.052\n1.800\n2.476\n\n\n----------\n:--------:\n:----------:\n:---------\n:--------:\n\n\n+ Random Erasing\n78.796\n94.094\n0.190\n2.666\n\n\n----------\n:--------:\n:----------:\n:---------\n:--------:\n\n\n+ Label Smoothing\n79.114\n94.374\n0.318\n2.984\n\n\n----------\n:--------:\n:----------:\n:---------\n:--------:\n\n\n+ Mixup\n79.232\n94.536\n0.118\n3.102\n\n\n----------\n:--------:\n:----------:\n:---------\n:--------:\n\n\n+ Cutmix\n79.510\n94.642\n0.278\n3.380\n\n\n----------\n:--------:\n:----------:\n:---------\n:--------:\n\n\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "\n\n\n+ Weight Decay tuning\n80.036\n94.746\n0.526\n3.906\n\n\n\n\n+ FixRes mitigations\n80.196\n94.672\n0.160\n4.066\n\n\n----------\n:--------:\n:----------:\n:---------\n:--------:\n\n\n+ EMA\n80.450\n94.908\n0.254\n4.320\n\n\n----------\n:--------:\n:----------:\n:---------\n:--------:\n\n\n+ Inference Resize tuning *\n80.674\n95.166\n0.224\n4.544\n\n\n----------\n:--------:\n:----------:\n:---------\n:--------:\n\n\n+ Repeated Augmentation **\n80.858\n95.434\n0.184\n4.728\n\n\n\n*The tuning of the inference size was done on top of 
the last model. See below for details.\n** Community contribution done after the release of the article. See below for details.\nBaseline\nOur baseline is the previously released ResNet50 model of TorchVision. It was trained with the following recipe:\n```python \n # Optimizer & LR scheme\n ngpus=8,\n batch_size=32,\u00a0 # per GPU\nepochs=90, \n opt='sgd', \u00a0\n momentum=0.9,\nlr=0.1, \n lr_scheduler='steplr', \n lr_step_size=30, \n lr_gamma=0.1,", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "lr_step_size=30, \n lr_gamma=0.1, \n# Regularization\n weight_decay=1e-4,\n# Resizing\n interpolation='bilinear', \n val_resize_size=256, \n val_crop_size=224, \n train_crop_size=224,\n```\nMost of the above parameters are the defaults on our training scripts. We will start building on top of this baseline by introducing optimizations until we gradually arrive at the final recipe.\nLR optimizations", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "LR optimizations\nThere are a few parameter updates we can apply to improve both the accuracy and the speed of our training. This can be achieved by increasing the batch size and tuning the LR. Another common method is to apply warmup and gradually increase our learning rate. This is beneficial especially when we use very high learning rates and helps with the stability of the training in the early epochs. Finally, another optimization is to apply Cosine Schedule to adjust our LR during the epochs. A big advantage of cosine is that there are no hyper-parameters to optimize, which cuts down our search space.\nHere are the additional optimizations applied on top of the baseline recipe. Note that we\u2019ve run multiple experiments to determine the optimal configuration of the parameters:\n batch_size=128,\u00a0 # per GPU\n\n lr=0.5, \n lr_scheduler='cosineannealinglr', \n lr_warmup_epochs=5, \n lr_warmup_method='linear', \n lr_warmup_decay=0.01,\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "lr_warmup_decay=0.01,\n\nThe above optimizations increase our top-1 Accuracy by 0.364 points comparing to the baseline. Note that in order to combine the different LR strategies we use the newly introduced [SequentialLR](https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.SequentialLR.html#torch.optim.lr_scheduler.SequentialLR) scheduler.\n\n## TrivialAugment\n\nThe original model was trained using basic augmentation transforms such as Random resized crops and horizontal flips. An easy way to improve our accuracy is to apply more complex \u201cAutomatic-Augmentation\u201d techniques. 
The one that performed best for us is TrivialAugment\u00a0[[9]](https://arxiv.org/abs/2103.10158), which\u00a0is extremely simple and can be considered \u201cparameter free\u201d, which means it can help us cut down our search space further.\n\nHere is the update applied on top of the previous step:\n\n\nauto_augment='ta_wide',\n```\nThe use of TrivialAugment increased our top-1 Accuracy by\u00a00.312 points\u00a0compared to the previous step.", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "Long Training\nLonger training cycles are beneficial when our recipe contains ingredients that behave randomly. More specifically as we start adding more and more techniques that introduce noise, increasing the number of epochs becomes crucial. Note that at early stages of our exploration, we used relatively short cycles of roughly 200 epochs which was later increased to 400 as we started narrowing down most of the parameters and finally increased to 600 epochs at the final versions of the recipe.\nBelow we see the update applied on top of the earlier steps:\nepochs=600,\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "epochs=600,\n\nThis further increases our top-1 Accuracy by 1.8 points on top of the previous step. This is the biggest increase we will observe in this iterative process. It\u2019s worth noting that the effect of this single optimization is overstated and somehow misleading. Just increasing the number of epochs on top of the old baseline won\u2019t yield such significant improvements. Nevertheless the combination of the LR optimizations with strong Augmentation strategies helps the model benefit from longer cycles. It\u2019s also worth mentioning that the reason we introduce the lengthy training cycles so early in the process is because in the next steps we will introduce techniques that\u00a0require significantly more epochs to provide good results.\nRandom Erasing", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "Random Erasing\nAnother data augmentation technique known to help the classification accuracy is Random Erasing [10], [11]. Often paired with Automatic Augmentation methods, it usually yields additional improvements in accuracy due to its regularization effect. In our experiments we tuned only the probability of applying the method via a grid search and found that it\u2019s beneficial to keep its probability at low levels, typically around 10%.\u00a0\nHere is the extra parameter introduced on top of the previous:\nrandom_erase=0.1,\n\nApplying Random Erasing increases our Acc@1 by further\u00a00.190 points.\nLabel Smoothing", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "Label Smoothing\nA good technique to reduce overfitting is to stop the model from becoming overconfident. This can be achieved by softening the ground truth using Label Smoothing\u00a0[12]. There is a single parameter which controls the degree of smoothing (the higher the stronger) that we need to specify. 
Though optimizing it via grid search is possible, we found that values around 0.05-0.15 yield similar results, so to avoid overfitting it we used the same value as on the\u00a0paper that introduced it.\nBelow we can find the extra config added on this step:\nlabel_smoothing=0.1,\n\nWe use PyTorch\u2019s newly introduced\u00a0CrossEntropyLoss label_smoothing parameter and that increases our accuracy by an additional\u00a00.318 points.\nMixup and Cutmix", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "Mixup and Cutmix\nTwo data augmentation techniques often used to produce SOTA results are Mixup and Cutmix\u00a0[13], [14]. They both provide strong regularization effects by softening not only the labels but also the images. In our setup we found it beneficial to apply one of them randomly with equal probability. Each is parameterized with a\u00a0hyperparameter alpha, which controls the shape of the Beta distribution from which the smoothing probability is sampled. We did a very limited grid search, focusing primarily on common values proposed on the papers.\u00a0\nBelow you will find the optimal values for the alpha parameters of the two techniques:\nmixup_alpha=0.2, \ncutmix_alpha=1.0,\n\nApplying mixup increases our accuracy by\u00a00.118 points and combining it with cutmix improves it by additional 0.278 points.\nWeight Decay tuning", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "Weight Decay tuning\nOur standard recipe uses L2 regularization to reduce overfitting. The Weight Decay parameter controls the degree of the regularization (the larger the stronger) and is applied universally to all learned parameters of the model by default. In this recipe, we apply two optimizations to the standard approach. First we perform grid search to tune the parameter of weight decay and second we disable weight decay for the parameters of the normalization layers.\u00a0\nBelow you can find the optimal configuration of weight decay for our recipe:\nweight_decay=2e-05, \nnorm_weight_decay=0.0,\n\nThe above update improves our accuracy by a further\u00a00.526 points, providing additional experimental evidence for a known fact that tuning weight decay has significant effects on the performance of the model. Our approach for separating the Normalization parameters from the rest was inspired by\u00a0ClassyVision\u2019s approach.\nFixRes mitigations", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "FixRes mitigations\nAn important property identified early in our experiments is the fact that the models performed significantly better if the resolution used during validation was increased from the 224x224 of training. This effect is studied in detail on the FixRes paper [5]\u00a0and two mitigations are proposed: a) one could try to reduce the training resolution so that the accuracy on the validation resolution is maximized or b) one could fine-tune the model on a two-phase training so that it adjusts on the target resolution. Since we didn\u2019t want to introduce a 2-phase training, we went for option a). 
This means that we reduced the train crop size from 224 and used grid search to find the one that maximizes the validation on resolution of 224x224.\nBelow you can see the optimal value used on our recipe:\nval_crop_size=224, \ntrain_crop_size=176,\n\nThe above optimization improved our accuracy by an additional 0.160 points and sped up our training by 10%.", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "It\u2019s worth noting that the FixRes effect still persists, meaning that the model continues to perform better on validation when we increase the resolution. Moreover, further reducing the training crop-size actually hurts the accuracy. This intuitively makes sense because one can only reduce the resolution so much before critical details start disappearing from the picture. Finally, we should note that the above FixRes mitigation seems to benefit models with similar depth to ResNet50. Deeper variants with larger receptive fields seem to be slightly negatively affected (typically by 0.1-0.2 points). Hence we consider this part of the recipe optional. Below we visualize the performance of the best available checkpoints (with the full recipe) for models trained with 176 and 224 resolution:\n\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "\n\nExponential Moving Average (EMA)\nEMA is a technique that allows one to push the accuracy of a model without increasing its complexity or inference time. It performs an exponential moving average on the model weights and this leads to increased accuracy and more stable models. The averaging happens every few iterations and its decay parameter was tuned via grid search.\u00a0\nBelow you can see the optimal values for our recipe:\nmodel_ema=True, \nmodel_ema_steps=32, \nmodel_ema_decay=0.99998,\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "model_ema_steps=32, \nmodel_ema_decay=0.99998,\n```\nThe use of EMA increases our accuracy by\u00a00.254 points comparing to the previous step. Note that TorchVision\u2019s\u00a0EMA implementation is build on top of PyTorch\u2019s AveragedModel class with the key difference being that it averages not only the model parameters but also its buffers. Moreover, we have adopted tricks from\u00a0Pycls\u00a0which allow us to parameterize the decay in a way that doesn\u2019t depend on the number of epochs.\nInference Resize tuning", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "Inference Resize tuning\nUnlike all other steps of the process which involved training models with different parameters, this optimization was done on top of the final model. During inference, the image is resized to a specific resolution and then a central 224x224 crop is taken from it. The original recipe used a resize size of 256, which caused a similar discrepancy as the one described on the FixRes paper [5]. By bringing this resize value closer to the target inference resolution, one can improve the accuracy. To select the value we run a short grid search between interval [224, 256] with step of 8. 
To avoid overfitting, the value was selected using half of the validation set and confirmed using the other half.\nBelow you can see the optimal value used on our recipe:\nval_resize_size=232,\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "val_resize_size=232,\n\nThe above is an optimization which improved our accuracy by\u00a00.224 points.\u00a0It\u2019s worth noting that the optimal value for ResNet50 works also best for ResNet101, ResNet152 and ResNeXt50, which hints that it generalizes across models:\n\n\n\n\n\n[UPDATE] Repeated Augmentation", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "\n[UPDATE] Repeated Augmentation\nRepeated Augmentation [15], [16] is another technique which can improve the overall accuracy and has been used by other strong recipes such as those at [6], [7]. Tal Ben-Nun, a community contributor, has further improved upon our original recipe by proposing training the model with 4 repetitions. His contribution came after the release of this article.\nBelow you can see the optimal value used on our recipe:\nra_sampler=True,\nra_reps=4,\n\nThe above is the final optimization which improved our accuracy by\u00a00.184 points.\u00a0\nOptimizations that were tested but not adopted", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "Optimizations that were tested but not adopted\nDuring the early stages of our research, we experimented with additional techniques, configurations and optimizations. Since our target was to keep our recipe as simple as possible, we decided not to include anything that didn\u2019t provide a significant improvement. Here are a few approaches that we took but didn\u2019t make it to our final recipe:\n\nOptimizers: Using more complex optimizers such as Adam, RMSProp or SGD with Nesterov momentum didn\u2019t\u00a0provide significantly better results than vanilla SGD with momentum.\nLR Schedulers:\u00a0We tried different LR Scheduler schemes such as StepLR and Exponential. Though the latter tends to work better with EMA, it often requires additional hyper-parameters such as defining the minimum LR to work well. Instead, we just use cosine annealing decaying the LR up to zero and choose the checkpoint with the highest accuracy.\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "\nAutomatic Augmentations:\u00a0We\u2019ve tried different augmentation strategies such as AutoAugment and RandAugment. None of these outperformed the simpler parameter-free TrivialAugment.\nInterpolation: Using bicubic or nearest interpolation didn\u2019t\u00a0provide significantly better results than bilinear.\nNormalization layers: Using Sync Batch Norm didn\u2019t yield\u00a0significantly better results than using the regular Batch Norm.\n\nAcknowledgements\nWe would like to thank\u00a0Piotr Dollar, Mannat Singh and Hugo Touvron for providing their insights and feedback during the development of the recipe and for their previous research work on which our recipe is based on. Their support was invaluable for achieving the above result. 
Moreover, we would like to thank\u00a0Prabhat Roy, Kai Zhang, Yiwen Song, Joel Schlosser, Ilqar Ramazanli, Francisco Massa, Mannat Singh, Xiaoliang Dai, Samuel Gabriel, Allen Goodman and Tal Ben-Nun for their contributions to the Batteries Included project.\nReferences", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "References\n\nKaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. \u201cDeep Residual Learning for Image Recognition\u201d.\nTong He, Zhi Zhang, Hang Zhang, Zhongyue Zhang, Junyuan Xie, Mu Li. \u201cBag of Tricks for Image Classification with Convolutional Neural Networks\u201d\nPiotr Doll\u00e1r, Mannat Singh, Ross Girshick. \u201cFast and Accurate Model Scaling\u201d\nTete Xiao, Mannat Singh, Eric Mintun, Trevor Darrell, Piotr Doll\u00e1r, Ross Girshick. \u201cEarly Convolutions Help Transformers See Better\u201d\nHugo Touvron, Andrea Vedaldi, Matthijs Douze, Herv\u00e9 J\u00e9gou. \u201cFixing the train-test resolution discrepancy\nHugo Touvron, Matthieu Cord, Matthijs Douze, Francisco Massa, Alexandre Sablayrolles, Herv\u00e9 J\u00e9gou. \u201cTraining data-efficient image transformers & distillation through attention\u201d\nRoss Wightman, Hugo Touvron, Herv\u00e9 J\u00e9gou. \u201cResNet strikes back: An improved training procedure in timm\u201d\nBenjamin Recht, Rebecca Roelofs, Ludwig Schmidt, Vaishaal Shankar. \u201cDo ImageNet Classifiers Generalize to ImageNet?\u201d\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "\nSamuel G. M\u00fcller, Frank Hutter. \u201cTrivialAugment: Tuning-free Yet State-of-the-Art Data Augmentation\u201d\nZhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, Yi Yang. \u201cRandom Erasing Data Augmentation\u201d\nTerrance DeVries, Graham W. Taylor. \u201cImproved Regularization of Convolutional Neural Networks with Cutout\u201d\nChristian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, Zbigniew Wojna. \u201cRethinking the Inception Architecture for Computer Vision\u201d\nHongyi Zhang, Moustapha Cisse, Yann N. Dauphin, David Lopez-Paz. \u201cmixup: Beyond Empirical Risk Minimization\u201d\nSangdoo Yun, Dongyoon Han, Seong Joon Oh, Sanghyuk Chun, Junsuk Choe, Youngjoon Yoo. \u201cCutMix: Regularization Strategy to Train Strong Classifiers with Localizable Features\u201d\nElad Hoffer, Tal Ben-Nun, Itay Hubara, Niv Giladi, Torsten Hoefler, Daniel Soudry. \u201cAugment your batch: better training with larger batches\u201d\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "\nMaxim Berman, Herv\u00e9 J\u00e9gou, Andrea Vedaldi, Iasonas Kokkinos, Matthijs Douze. \u201cMultigrain: a unified image embedding for classes and instances\u201d\n", "source": "https://pytorch.org/blog/how-to-train-state-of-the-art-models-using-torchvision-latest-primitives/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: 'Updates & Improvements to PyTorch Tutorials'\nauthor: Team PyTorch\n\nPyTorch.org provides researchers and developers with documentation, installation instructions, latest news, community projects, tutorials, and more. Today, we are introducing usability and content improvements including tutorials in additional categories, a new recipe format for quickly referencing common topics, sorting using tags, and an updated homepage. 
\nLet\u2019s take a look at them in detail. \nTUTORIALS HOME PAGE UPDATE\nThe tutorials home page now provides clear actions that developers can take. For new PyTorch users, there is an easy-to-discover button to take them directly to \u201cA 60 Minute Blitz\u201d. Right next to it, there is a button to view all recipes which are designed to teach specific features quickly with examples. \n\n\n", "source": "https://pytorch.org/blog/updates-improvements-to-pytorch-tutorials/", "category": "pytorch blogs"} {"text": "\nIn addition to the existing left navigation bar, tutorials can now be quickly filtered by multi-select tags. Let\u2019s say you want to view all tutorials related to \u201cProduction\u201d and \u201cQuantization\u201d. You can select the \u201cProduction\u201d and \u201cQuantization\u201d filters as shown in the image shown below:\n\n\n\nThe following additional resources can also be found at the bottom of the Tutorials homepage:\n* PyTorch Cheat Sheet\n* PyTorch Examples\n* Tutorial on GitHub\nPYTORCH RECIPES\nRecipes are new bite-sized, actionable examples designed to teach researchers and developers how to use specific PyTorch features. Some notable new recipes include:\n* Loading Data in PyTorch", "source": "https://pytorch.org/blog/updates-improvements-to-pytorch-tutorials/", "category": "pytorch blogs"} {"text": "\nModel Interpretability Using Captum\nHow to Use TensorBoard\n\nView the full recipes here.\nLEARNING PYTORCH\nThis section includes tutorials designed for users new to PyTorch. Based on community feedback, we have made updates to the current Deep Learning with PyTorch: A 60 Minute Blitz tutorial, one of our most popular tutorials for beginners. Upon completion, one can understand what PyTorch and neural networks are, and be able to build and train a simple image classification network. Updates include adding explanations to clarify output meanings and linking back to where users can read more in the docs, cleaning up confusing syntax errors, and reconstructing and explaining new concepts for easier readability.", "source": "https://pytorch.org/blog/updates-improvements-to-pytorch-tutorials/", "category": "pytorch blogs"} {"text": "DEPLOYING MODELS IN PRODUCTION\nThis section includes tutorials for developers looking to take their PyTorch models to production. The tutorials include:\n* Deploying PyTorch in Python via a REST API with Flask\n* Introduction to TorchScript\n* Loading a TorchScript Model in C++\n* Exporting a Model from PyTorch to ONNX and Running it using ONNX Runtime\nFRONTEND APIS\nPyTorch provides a number of frontend API features that can help developers to code, debug, and validate their models more efficiently. This section includes tutorials that teach what these features are and how to use them. Some tutorials to highlight:", "source": "https://pytorch.org/blog/updates-improvements-to-pytorch-tutorials/", "category": "pytorch blogs"} {"text": "\nIntroduction to Named Tensors in PyTorch\nUsing the PyTorch C++ Frontend\nExtending TorchScript with Custom C++ Operators\nExtending TorchScript with Custom C++ Classes\nAutograd in C++ Frontend\n\nMODEL OPTIMIZATION\nDeep learning models often consume large amounts of memory, power, and compute due to their complexity. 
This section provides tutorials for model optimization:\n* Pruning\n* Dynamic Quantization on BERT", "source": "https://pytorch.org/blog/updates-improvements-to-pytorch-tutorials/", "category": "pytorch blogs"} {"text": "\nStatic Quantization with Eager Mode in PyTorch\n\nPARALLEL AND DISTRIBUTED TRAINING\nPyTorch provides features that can accelerate performance in research and production such as native support for asynchronous execution of collective operations and peer-to-peer communication that is accessible from Python and C++. This section includes tutorials on parallel and distributed training: \n* Single-Machine Model Parallel Best Practices\n* Getting started with Distributed Data Parallel\n* Getting started with Distributed RPC Framework\n* Implementing a Parameter Server Using Distributed RPC Framework", "source": "https://pytorch.org/blog/updates-improvements-to-pytorch-tutorials/", "category": "pytorch blogs"} {"text": "Making these improvements are just the first step of improving PyTorch.org for the community. Please submit your suggestions here.\nCheers,\nTeam PyTorch", "source": "https://pytorch.org/blog/updates-improvements-to-pytorch-tutorials/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"PyTorch 2.0: Our next generation release that is faster, more Pythonic and Dynamic as ever\"\n\nWe are excited to announce the release of PyTorch\u00ae 2.0 which we highlighted during the PyTorch Conference on 12/2/22! PyTorch 2.0 offers the same eager-mode development and user experience, while fundamentally changing and supercharging how PyTorch operates at compiler level under the hood with faster performance and support for Dynamic Shapes and Distributed.", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "This next-generation release includes a Stable version of Accelerated Transformers (formerly called Better Transformers); Beta includes torch.compile as the main API for PyTorch 2.0, the scaled_dot_product_attention function as part of torch.nn.functional, the MPS backend, functorch APIs in the torch.func module; and other Beta/Prototype improvements across various inferences, performance and training optimization features on GPUs and CPUs. For a comprehensive introduction and technical overview of torch.compile, please visit the 2.0 Get Started page.\nAlong with 2.0, we are also releasing a series of beta updates to the PyTorch domain libraries, including those that are in-tree, and separate libraries including TorchAudio, TorchVision, and TorchText. An update for TorchX is also being released as it moves to community supported mode. More details can be found in this library blog.", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "This release is composed of over 4,541 commits and 428 contributors since 1.13.1. We want to sincerely thank our dedicated community for your contributions. As always, we encourage you to try these out and report any issues as we improve 2.0 and the overall 2-series this year.\nSummary: \n* torch.compile is the main API for PyTorch 2.0, which wraps your model and returns a compiled model. It is a fully additive (and optional) feature and hence 2.0 is 100% backward compatible by definition.\n* As an underpinning technology of torch.compile, TorchInductor with Nvidia and AMD GPUs will rely on OpenAI Triton deep learning compiler to generate performant code and hide low level hardware details. 
OpenAI Triton-generated kernels achieve performance that's on par with hand-written kernels and specialized cuda libraries such as cublas.", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\nAccelerated Transformers introduce high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SPDA). The API is integrated with torch.compile() and model developers may also use the scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() operator. \nMetal Performance Shaders (MPS) backend provides GPU accelerated PyTorch training on Mac platforms with added support for Top 60 most used ops, bringing coverage to over 300 operators. \nAmazon AWS optimizes the PyTorch CPU inference on AWS Graviton3 based C7g instances. PyTorch 2.0 improves inference performance on Graviton compared to the previous releases, including improvements for Resnet50 and Bert.\n", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\nNew prototype features and technologies across TensorParallel, DTensor, 2D parallel, TorchDynamo, AOTAutograd, PrimTorch and TorchInductor.\n\n\n\n\n\nStable\n\nBeta\n\nPrototype\n\nPerformance Improvements\n\n\n\n\n\nAccelerated PT 2 Transformers\n\n\ntorch.compile\n\n\nDTensor\n\n\nCUDA support for 11.7 & 11.8 (deprecating CUDA 11.6) \n\n\n\n\n\n\nPyTorch MPS Backend\n\n\nTensorParallel\n\n", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\n\n\nPython 3.8 (deprecating Python 3.7)\n\n\n\n\n\n\n\nScaled dot product attention\n\n\n2D Parallel\n\n\nAWS Graviton3\n\n\n\n\n\n\nfunctorch\n\n\nTorch.compile (dynamic=True)\n\n\n\n\n\n\n\nDispatchable Collectives\n\n\n\n\n\n\n\nTorch.set_default & torch.device\n\n\n\n\n\n\n\n\n\n", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\n\n\n\n\n\n\nX86 quantization backend\n\n\n\n\n\n\n\n\n\n\nGNN inference and training performance\n\n\n\n\n\n\n\n*To see a full list of public 2.0, 1.13 and 1.12 feature submissions click here.\nStable Features\n[Stable] Accelerated PyTorch 2 Transformers", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "[Stable] Accelerated PyTorch 2 Transformers\nThe PyTorch 2.0 release includes a new high-performance implementation of the PyTorch Transformer API. In releasing Accelerated PT2 Transformers, our goal is to make training and deployment of state-of-the-art Transformer models affordable across the industry. 
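In practice, no model changes are required to benefit from the new kernels; modules built from the native PyTorch Transformer API pick them up automatically and can additionally be compiled with torch.compile. The snippet below is a minimal sketch (the toy encoder, tensor shapes, and the compile step are illustrative assumptions, not the configuration behind the reported speedups):

```python
import torch
import torch.nn as nn

# A small encoder built from the native PyTorch Transformer API; the fused
# scaled dot product attention kernels are picked up inside the attention layers.
layer = nn.TransformerEncoderLayer(d_model=256, nhead=8, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=4)

# Optional PT2 compilation for additional acceleration.
encoder = torch.compile(encoder)

x = torch.randn(8, 128, 256)  # (batch, sequence, embedding)
with torch.no_grad():
    out = encoder(x)
print(out.shape)  # torch.Size([8, 128, 256])
```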
This release introduces high-performance support for training and inference using a custom kernel architecture for scaled dot product attention (SPDA), extending the inference \u201cfastpath\u201d architecture, previously known as \"Better Transformer.\"\nSimilar to the \u201cfastpath\u201d architecture, custom kernels are fully integrated into the PyTorch Transformer API \u2013 thus, using the native Transformer and MultiHeadAttention API will enable users to:\n\ntransparently see significant speed improvements; \nsupport many more use cases including models using Cross-Attention, Transformer Decoders, and for training models; and\n", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\ncontinue to use fastpath inference for fixed and variable sequence length Transformer Encoder and Self Attention use cases.\n\nTo take full advantage of different hardware models and Transformer use cases, multiple SDPA custom kernels are supported (see below), with custom kernel selection logic that will pick the highest-performance kernel for a given model and hardware type. In addition to the existing Transformer API, model developers may also use the scaled dot product attention kernels directly by calling the new scaled_dot_product_attention() operator. Accelerated PyTorch 2 Transformers are integrated with torch.compile() . To use your model while benefiting from the additional acceleration of PT2-compilation (for inference or training), pre-process the model with model = torch.compile(model).", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "We have achieved major speedups for training transformer models and in particular large language models with Accelerated PyTorch 2 Transformers using a combination of custom kernels and torch.compile(). \n\nFigure: Using scaled dot product attention with custom kernels and torch.compile delivers significant speedups for training large language models, such as for nanoGPT shown here.\nBeta Features\n[Beta] torch.compile\ntorch.compile is the main API for PyTorch 2.0, which wraps your model and returns a compiled model. It is a fully additive (and optional) feature and hence 2.0 is 100% backward compatible by definition.\nUnderpinning torch.compile are new technologies \u2013 TorchDynamo, AOTAutograd, PrimTorch and TorchInductor:", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\nTorchDynamo captures PyTorch programs safely using Python Frame Evaluation Hooks and is a significant innovation that was a result of 5 years of our R&D into safe graph capture. \nAOTAutograd overloads PyTorch\u2019s autograd engine as a tracing autodiff for generating ahead-of-time backward traces. \nPrimTorch canonicalizes ~2000+ PyTorch operators down to a closed set of ~250 primitive operators that developers can target to build a complete PyTorch backend. This substantially lowers the barrier of writing a PyTorch feature or backend. \nTorchInductor is a deep learning compiler that generates fast code for multiple accelerators and backends. For NVIDIA and AMD GPUs, it uses OpenAI Triton as a key building block. 
For intel CPUs, we generate C++ code using multithreading, vectorized instructions and offloading appropriate operations to mkldnn when possible.\n", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "With all the new technologies, torch.compile is able to work 93% of time across 165 open-source models and runs 20% faster on average at float32 precision and 36% faster on average at AMP precision. \nFor more information, please refer to https://pytorch.org/get-started/pytorch-2.0/ and for TorchInductor CPU with Intel here.\n[Beta] PyTorch MPS Backend\nMPS backend provides GPU-accelerated PyTorch training on Mac platforms. This release brings improved correctness, stability, and operator coverage.", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "MPS backend now includes support for the Top 60 most used ops, along with the most frequently requested operations by the community, bringing coverage to over 300 operators. The major focus of the release was to enable full OpInfo-based forward and gradient mode testing to address silent correctness issues. These changes have resulted in wider adoption of MPS backend by 3rd party networks such as Stable Diffusion, YoloV5, WhisperAI, along with increased coverage for Torchbench networks and Basic tutorials. We encourage developers to update to the latest macOS release to see the best performance and stability on the MPS backend. \nLinks\n\nMPS Backend\nDeveloper information\nAccelerated PyTorch training on Mac\n", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\nMetal, Metal Performance Shaders & Metal Performance Shaders Graph\n\n[Beta] Scaled dot product attention 2.0\nWe are thrilled to announce the release of PyTorch 2.0, which introduces a powerful scaled dot product attention function as part of torch.nn.functional. This function includes multiple implementations that can be seamlessly applied depending on the input and hardware in use.\nIn previous versions of PyTorch, you had to rely on third-party implementations and install separate packages to take advantage of memory-optimized algorithms like FlashAttention. With PyTorch 2.0, all these implementations are readily available by default.", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "These implementations include FlashAttention from HazyResearch, Memory-Efficient Attention from the xFormers project, and a native C++ implementation that is ideal for non-CUDA devices or when high-precision is required.\nPyTorch 2.0 will automatically select the optimal implementation for your use case, but you can also toggle them individually for finer-grained control. Additionally, the scaled dot product attention function can be used to build common transformer architecture components.\nLearn more with the documentation and this tutorial.\n[Beta] functorch -> torch.func", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "[Beta] functorch -> torch.func\nInspired by Google JAX, functorch is a library that offers composable vmap (vectorization) and autodiff transforms. It enables advanced autodiff use cases that would otherwise be tricky to express in PyTorch. 
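For instance, per-sample gradients (the gradient of the loss with respect to the parameters for each example in a batch) can be written as a composition of grad and vmap. The sketch below is illustrative only; the toy linear model, loss, and shapes are assumptions:

```python
import torch
import torch.nn.functional as F
from torch.func import functional_call, grad, vmap

# Toy model and a batch of per-sample data.
model = torch.nn.Linear(16, 1)
params = dict(model.named_parameters())
x, y = torch.randn(32, 16), torch.randn(32, 1)

def loss_fn(params, sample, target):
    # Run the model functionally on a single sample with the given parameters.
    pred = functional_call(model, params, (sample.unsqueeze(0),))
    return F.mse_loss(pred, target.unsqueeze(0))

# grad differentiates w.r.t. params; vmap maps the computation over the batch dimension.
per_sample_grads = vmap(grad(loss_fn), in_dims=(None, 0, 0))(params, x, y)
print(per_sample_grads["weight"].shape)  # torch.Size([32, 1, 16])
```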
Examples include:\n* model ensembling\n* efficiently computing jacobians and hessians\n* computing per-sample-gradients (or other per-sample quantities)", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "We\u2019re excited to announce that, as the final step of upstreaming and integrating functorch into PyTorch, the functorch APIs are now available in the torch.func module. Our function transform APIs are identical to before, but we have changed how the interaction with NN modules work. Please see the docs and the migration guide for more details.\nFurthermore, we have added support for torch.autograd.Function: one is now able to apply function transformations (e.g. vmap, grad, jvp) over torch.autograd.Function.\n[Beta] Dispatchable Collectives", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "[Beta] Dispatchable Collectives\nDispatchable collectives is an improvement to the existing init_process_group() API which changes backend to an optional argument. For users, the main advantage of this feature is that it will allow them to write code that can run on both GPU and CPU machines without having to change the backend specification. The dispatchability feature will also make it easier for users to support both GPU and CPU collectives, as they will no longer need to specify the backend manually (e.g. \u201cNCCL\u201d or \u201cGLOO\u201d). Existing backend specifications by users will be honored and will not require change.\nUsage example:\nimport torch.distributed.dist\n\u2026\n# old\ndist.init_process_group(backend=\u201dnccl\u201d, ...)\ndist.all_reduce(...) # with CUDA tensors works\ndist.all_reduce(...) # with CPU tensors does not work\n\n# new\ndist.init_process_group(...) # backend is optional\ndist.all_reduce(...) # with CUDA tensors works\ndist.all_reduce(...) # with CPU tensors works\n", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "dist.all_reduce(...) # with CPU tensors works\n```\nLearn more here.\n[Beta] torch.set_default_device and torch.device as context manager\ntorch.set_default_device allows users to change the default device that factory functions in PyTorch allocate on. For example, if you torch.set_default_device(\u2018cuda\u2019), a call to torch.empty(2) will allocate on CUDA (rather than on CPU). You can also use torch.device as a context manager to change the default device on a local basis. This resolves a long standing feature request from PyTorch\u2019s initial release for a way to do this.\nLearn more here. \n[Beta] \"X86\" as the new default quantization backend for x86 CPU", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "The new X86 quantization backend, which utilizes FBGEMM and oneDNN kernel libraries, replaces FBGEMM as the default quantization backend for x86 CPU platforms and offers improved int8 inference performance compared to the original FBGEMM backend, leveraging the strengths of both libraries, with 1.3X \u2013 2X inference performance speedup measured on 40+ deep learning models. The new backend is functionally compatible with the original FBGEMM backend.\nTable: Geomean Speedup of X86 Quantization Backend vs. 
FBGEMM Backend\n\n| | 1 core/instance | 2 cores/instance | 4 cores/instance | 1 socket (32 cores)/instance |\n| --- | --- | --- | --- | --- |\n| Intel(R) Xeon(R) Platinum 8358 CPU @ 2.60GHz | 1.76X | 1.80X | 2.04X | 1.34X |\n", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\nBy default, users on x86 platforms will utilize the x86 quantization backend and their PyTorch programs will remain unchanged when using the default backend. Alternatively, users have the option to specify \"X86\" as the quantization backend explicitly. Example code is shown below:\nimport torch\nfrom torch.ao.quantization import get_default_qconfig_mapping\nfrom torch.quantization.quantize_fx import prepare_fx, convert_fx\n\n# get default configuration\nqconfig_mapping = get_default_qconfig_mapping()\n\n# or explicitly specify the backend\n# qengine = 'x86'\n# torch.backends.quantized.engine = qengine\n# qconfig_mapping = get_default_qconfig_mapping(qengine)\n\n# construct fp32 model\nmodel_fp32 = ...\n\n# prepare\nprepared_model = prepare_fx(model_fp32, qconfig_mapping, example_inputs=x)\n\n# calibrate\n...\n\n# convert\nquantized_model = convert_fx(prepared_model)\n", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "quantized_model = convert_fx(prepared_model)\n```\nFind more information: https://github.com/pytorch/pytorch/issues/83888 and https://www.intel.com/content/www/us/en/developer/articles/technical/accelerate-pytorch-int8-inf-with-new-x86-backend.html.\n[Beta] GNN inference and training optimization on CPU\nPyTorch 2.0 includes several critical optimizations to improve GNN inference and training performance on CPU. Before 2.0, GNN models of PyG suffer from low efficiency on CPU due to lack of performance tuning for several critical kernels (scatter/gather, etc) and the lack of GNN-related sparse matrix multiplication ops. To be specific, optimizations include:\n* scatter_reduce: performance hotspot in Message Passing when the edge index is stored in Coordinate format (COO).", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\ngather: backward of scatter_reduce, specially tuned for the GNN compute when the index is an expanded tensor.\ntorch.sparse.mm with reduce flag: performance hotspot in Message Passing when the edge index is stored in Compressed Sparse Row (CSR). 
Supported reduce flag of: sum, mean, amax, amin.\n\nOn PyG benchmarks/examples, OGB benchmarks, a 1.12x - 4.07x performance speedup is measured (1.13.1 compared with 2.0) for single node inference and training.\n\n\n\nModel-Dataset\n\nOption\n\nSpeedup Ratio\n\n\n\n\nGCN-Reddit (inference)\n \n512-2-64-dense\n \n1.22x\n \n\n\n1024-3-128-dense\n \n1.25x\n \n\n\n512-2-64-sparse\n \n1.31x\n \n\n\n1024-3-128-sparse\n \n1.68x\n ", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\n\n1.68x\n \n\n\n\n512-2-64-dense\n \n1.22x\n \n\n\n \nGraphSage-ogbn-products (inference)\n \n1024-3-128-dense\n \n1.15x\n \n\n\n512-2-64-sparse\n \n1.20x\n \n\n\n1024-3-128-sparse\n \n1.33x\n \n\n\nfull-batch-sparse\n \n4.07x\n \n\n\nGCN-PROTEINS (training)\n \n3-32\n \n1.67x\n \n\n\nGCN-REDDIT-BINARY (training)\n \n3-32\n \n1.67x\n \n\n\nGCN-Reddit (training)\n \n512-2-64-dense\n \n1.20x\n \n\n\n1024-3-128-dense\n \n1.12x\n \n\n\nLearn more: PyG CPU Performance Optimization.", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "[Beta] Accelerating inference on CPU with PyTorch by leveraging oneDNN Graph\noneDNN Graph API extends oneDNN with a flexible graph API to maximize the optimization opportunity for generating efficient code on AI hardware. \n* It automatically identifies the graph partitions to be accelerated via fusion. \n* The fusion patterns focus on fusing compute-intensive operations such as convolution, matmul and their neighbor operations for both inference and training use cases.", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\nAlthough work is ongoing to integrate oneDNN Graph with TorchDynamo as well, its integration with the PyTorch JIT Fuser attained beta status in PyTorch 2.0 for Float32 & BFloat16 inference (on machines that support AVX512_BF16 ISA).\n\nFrom a developer\u2019s/researcher\u2019s perspective, the usage is quite simple & intuitive, with the only change in code being an API invocation:\n* Leverage oneDNN Graph, with JIT-tracing, a model is profiled with an example input. 
\n* The context manager with torch.jit.fuser(\"fuser3\"): can also be used instead of invoking torch.jit.enable_onednn_fusion(True).", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\nFor accelerating BFloat16 inference, we rely on eager-mode AMP (Automatic Mixed Precision) support in PyTorch & disable JIT mode\u2019s AMP, as both of them are currently divergent:\n\nimport torch\n\n# Assuming we have a model of the name 'model'\n\nexample_input = torch.rand(1, 3, 224, 224)\n\n# enable oneDNN Graph\ntorch.jit.enable_onednn_fusion(True)\n# Disable AMP for JIT\ntorch._C._jit_set_autocast_mode(False)\nwith torch.no_grad(), torch.cpu.amp.autocast():\n model = torch.jit.trace(model, (example_input))\n model = torch.jit.freeze(model)\n # 2 warm-ups (2 for tracing/scripting with an example, 3 without an example)\n model(example_input)\n model(example_input)\n\n # speedup would be observed in subsequent runs.\n model(example_input)\n\nLearn more here.\nPrototype Features\nDistributed API", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "Prototype Features\nDistributed API\n[Prototype] DTensor", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"}
Some of these issues are fixed in nightlies, others are not.\nWe do not appropriately propagate Inductor guards to the top-level, this is tracked at #96296.\nData-dependent operations like nonzero still require a graph break.\nDynamic does not work with non-standard modes like reduce-overhead or max-autotune.\n", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\nThere are many bugs in Inductor compilation. To track known bugs, check the dynamic shapes label on the PyTorch issue tracker.\n\nFor the latest and greatest news about dynamic shapes support on master, check out our status reports.\nHighlights/Performance Improvements\nDeprecation of Cuda 11.6 and Python 3.7 support for PyTorch 2.0\nIf you are still using or depending on CUDA 11.6 or Python 3.7 builds, we strongly recommend moving to at least CUDA 11.7 and Python 3.8, as it would be the minimum versions required for PyTorch 2.0. For more detail, please refer to the Release Compatibility Matrix for PyTorch releases.\nPython 3.11 support on Anaconda Platform", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "Python 3.11 support on Anaconda Platform\nDue to lack of Python 3.11 support for packages that PyTorch depends on, including NumPy, SciPy, SymPy, Pillow and others on the Anaconda platform. We will not be releasing Conda binaries compiled with Python 3.11 for PyTorch Release 2.0. The Pip packages with Python 3.11 support will be released, hence if you intend to use PyTorch 2.0 with Python 3.11 please use our Pip packages. Please note: Conda packages with Python 3.11 support will be made available on our nightly channel. Also we are planning on releasing Conda Python 3.11 binaries as part of future release once Anaconda provides these key dependencies. More information and instructions on how to download the Pip packages can be found here.\nOptimized PyTorch Inference with AWS Graviton processors", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "The optimizations focused on three key areas: GEMM kernels, bfloat16 support, primitive caching and the memory allocator. For aarch64 platforms, PyTorch supports Arm Compute Library (ACL) GEMM kernels via Mkldnn(OneDNN) backend. The ACL library provides Neon/SVE GEMM kernels for fp32 and bfloat16 formats. The bfloat16 support on c7g allows efficient deployment of bfloat16 trained, AMP (Automatic Mixed Precision) trained, or even the standard fp32 trained models. The standard fp32 models leverage bfloat16 kernels via OneDNN fast math mode, without any model quantization. Next we implemented primitive caching for conv, matmul and inner product operators. More information on the updated PyTorch user guide with the upcoming 2.0 release improvements and TorchBench benchmark details can be found here.", "source": "https://pytorch.org/blog/pytorch-2.0-release/", "category": "pytorch blogs"} {"text": "\nlayout: blog_detail\ntitle: \"Case Study: PathAI Uses PyTorch to Improve Patient Outcomes with AI-powered Pathology\"\nauthor: Logan Kilpatrick - Sr. Technology Advocate, Harshith Padigela - ML Engineer, Syed Ashar Javed - ML Technical Lead, Robert Egger - Biomedical Data Scientist\nfeatured-img: \"/assets/images/2022-7-15-PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology-1.png\"\n\n\u200b\u200bPathAI is the leading provider of AI-powered technology tools and services for pathology (the study of disease). 
Our platform was built to enable substantial improvements to the accuracy of diagnosis and the measurement of therapeutic efficacy for complex diseases, leveraging modern approaches in machine learning like image segmentation, graph neural networks, and multiple instance learning.\n\n\n", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "\nTraditional manual pathology is prone to subjectivity and observer variability that can negatively affect diagnoses and drug development trials. Before we dive into how we use PyTorch to improve our diagnosis workflow, let us first lay out the traditional analog Pathology workflow without machine learning.\nHow Traditional Biopharma Works\nThere are many avenues that biopharma companies take to discover novel therapeutics or diagnostics. One of those avenues relies heavily on the analysis of pathology slides to answer a variety of questions: how does a particular cellular communication pathway work? Can a specific disease state be linked to the presence or lack of a particular protein? Why did a particular drug in a clinical trial work for some patients but not others? Might there be an association between patient outcomes and a novel biomarker?", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "To help answer these questions, biopharma companies rely on expert pathologists to analyze slides and help evaluate the questions they might have.\u00a0\nAs you might imagine, it takes an expert board certified pathologist to make accurate interpretations and diagnosis. In one study, a single biopsy result was given to 36 different pathologists and the outcome was 18 different diagnoses varying in severity from no treatment to aggressive treatment necessary. Pathologists also often solicit feedback from colleagues in difficult edge cases. Given the complexity of the problem, even with expert training and collaboration, pathologists can still have a hard time making a correct diagnosis. This potential variance can be the difference between a drug being approved and it failing the clinical trial.\nHow PathAI utilizes machine learning to power drug development", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "PathAI develops machine learning models which provide insights for drug development R&D, for powering clinical trials, and for making diagnoses. To this end, PathAI leverages PyTorch for slide level inference using a variety of methods including graph neural networks (GNN) as well as multiple instance learning. In this context, \u201cslides\u201d refers to full size scanned images of glass slides, which are pieces of glass with a thin slice of tissue between them, stained to show various cell formations. PyTorch enables our teams using these different methodologies to share a common framework which is robust enough to work in all the conditions we need. 
PyTorch\u2019s high level, imperative, and pythonic syntax allows us to prototype models quickly and then take those models to scale once we have the results we want.\u00a0\nMulti-instance learning on gigabyte images", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "One of the uniquely challenging aspects of applying ML to pathology is the immense size of the images. These digital slides can often be 100,000 x 100,000 pixels or more in resolution and gigabytes in size. Loading the full image in GPU memory and applying traditional computer vision algorithms on them is an almost impossible task. It also takes both a considerable amount of time and resources to have a full slide image (100k x 100k) annotated, especially when annotators need to be domain experts (board-certified pathologists). We often build models to predict image-level labels, like the presence of cancer, on a patient slide which covers a few thousand pixels in the whole image. The cancerous area is sometimes a tiny fraction of the entire slide, which makes the ML problem similar to finding a needle in a haystack. On the other hand, some problems like the prediction of certain histological biomarkers require an aggregation of information from the whole slide which is again hard due to the size of the images. All these factors add significant algorithmic, computational, and logistical complexity when applying ML techniques to pathology problems.", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "Breaking down the image into smaller patches, learning patch representations, and then pooling those representations to predict an image-level label is one way to solve this problem as is depicted in the image below. One popular method for doing this is called Multiple Instance Learning (MIL). Each patch is considered an \u2018instance\u2019 and a set of patches forms a \u2018bag\u2019. The individual patch representations are pooled together to predict a final bag-level label. Algorithmically, the individual patch instances in the bag do not require labels and hence allow us to learn bag-level labels in a weakly-supervised way. They also use permutation invariant pooling functions which make the prediction independent of the order of patches and allows for an efficient aggregation of information. Typically, attention based pooling functions are used which not only allow for efficient aggregation but also provide attention values for each patch in the bag. These values indicate the importance of the corresponding patch in the prediction and can be visualized to better understand the model predictions. This element of interpretability can be very important to drive adoption of these models in the real world and we use variations like Additive MIL models to enable such spatial explainability. Computationally, MIL models circumvent the problem of applying neural networks to large image sizes since patch representations are obtained independently of the size of the image.", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "\n\n\nAt PathAI, we use custom MIL models based on deep nets to predict image-level labels. 
The overview of this process is as follows:\n\nSelect patches from a slide using different sampling approaches.\nConstruct a bag of patches based on random sampling or heuristic rules.\nGenerate patch representations for each instance based on pre-trained models or large-scale representation learning models.\nApply permutation invariant pooling functions to get the final slide-level score.\n\nNow that we have walked through some of the high-level details around MIL in PyTorch, let\u2019s look at some code to see how simple it is to go from ideation to code in production with PyTorch. We begin by defining a sampler, transformations, and our MIL dataset:\n```python\nCreate a bag sampler which randomly samples patches from a slide", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "bag_sampler = RandomBagSampler(bag_size=12)\nSetup the transformations\ncrop_transform = FlipRotateCenterCrop(use_flips=True)\nCreate the dataset which loads patches for each bag\ntrain_dataset = MILDataset(\n bag_sampler=bag_sampler,\n samples_loader=sample_loader,\n transform=crop_transform,\n)\n\nAfter we have defined our sampler and dataset, we need to define the model we will actually train with said dataset. PyTorch\u2019s familiar model definition syntax makes this easy to do while also allowing us to create bespoke models at the same time.\n\n```python\nclassifier = DefaultPooledClassifier(hidden_dims=[256, 256], input_dims=1024, output_dims=1)\n\npooling = DefaultAttentionModule(\n input_dims=1024,\n hidden_dims=[256, 256],\n output_activation=StableSoftmax()\n)\n\n# Define the model which is a composition of the featurizer, pooling module and a classifier\nmodel = DefaultMILGraph(featurizer=ShuffleNetV2(), classifier=classifier, pooling = pooling)\n", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "```\nSince these models are trained end-to-end, they offer a powerful way to go directly from a gigapixel whole slide image to a single label. Due to their wide applicability to different biological problems, two aspects of their implementation and deployment are important:\n\nConfigurable control over each part of the pipeline including the data loaders, the modular parts of the model, and their interaction with each other.\nAbility to rapidly iterate through the ideate-implement-experiment-productionize loop.\n", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "PyTorch has various advantages when it comes to MIL modeling. It offers an intuitive way to create dynamic computational graphs with flexible control flow which is great for rapid research experimentation. The map-style datasets, configurable sampler and batch-samplers allow us to customize how we construct bags of patches, enabling faster experimentation. Since MIL models are IO heavy, data parallelism and pythonic data loaders make the task very efficient and user friendly. 
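Putting the pieces together into a training loop is equally direct. The sketch below is illustrative only: it reuses the hypothetical train_dataset and model objects from the earlier snippets and assumes a simple slide-level binary label per bag:

```python
import torch
from torch.utils.data import DataLoader

# train_dataset and model are the illustrative objects defined above.
train_loader = DataLoader(train_dataset, batch_size=8, num_workers=4, shuffle=True)

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
criterion = torch.nn.BCEWithLogitsLoss()  # assumes the pooled classifier emits unnormalized scores

model.train()
for bags, labels in train_loader:
    optimizer.zero_grad()
    scores = model(bags)  # bag of patches -> slide-level score
    loss = criterion(scores.squeeze(-1), labels.float())
    loss.backward()
    optimizer.step()
```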
Lastly, the object-oriented nature of PyTorch enables building of reusable modules which aid in the rapid experimentation, maintainable implementation and ease of building compositional components of the pipeline.\nExploring spatial tissue organization with GNNs in PyTorch\n\n\n", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "\nIn both healthy and diseased tissue, the spatial arrangement and structure of cells can oftentimes be as important as the cells themselves. For example, when assessing lung cancers, pathologists try to look at the overall grouping and structure of tumor cells (do they form solid sheets? Or do they occur in smaller, localized clusters?) to determine if the cancer belongs to specific subtypes which can have vastly different prognosis. Such spatial relationships between cells and other tissue structures can be modeled using graphs to capture tissue topology and cellular composition at the same time. Graph Neural Networks (GNNs) allow learning spatial patterns within these graphs that relate to other clinical variables, for example overexpression of genes in certain cancers.", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "In late 2020, when PathAI started using GNNs on tissue samples, PyTorch had the best and most mature support for GNN functionality via the PyG package. This made PyTorch the natural choice for our team given that GNN models were something that we knew would be an important ML concept we wanted to explore.", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "One of the main value-adds of GNN\u2019s in the context of tissue samples is that the graph itself can uncover spatial relationships that would otherwise be very difficult to find by visual inspection alone. In our recent AACR publication, we showed that by using GNNs, we can better understand the way the presence of immune cell aggregates (specifically tertiary lymphoid structures, or TLS) in the tumor microenvironment can influence patient prognosis. In this case, the GNN approach was used to predict expression of genes associated with the presence of TLS, and identify histological features beyond the TLS region itself that are relevant to TLS. Such insights into gene expression are difficult to identify from tissue sample images when unassisted by ML models.", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "One of the most promising GNN variations we have had success with is self attention graph pooling. 
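The input to such a model is simply a graph over the tissue, for example one node per cell or tissue region with edges between spatial neighbors, expressed as a PyG Data object. The tiny example below is purely illustrative (random node features and a hand-written edge list):

```python
import torch
from torch_geometric.data import Data

# 4 nodes (e.g. cells) with 16-dim features, connected to their spatial neighbors.
x = torch.randn(4, 16)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]], dtype=torch.long)  # bidirectional edges
graph = Data(x=x, edge_index=edge_index)
print(graph)  # Data(x=[4, 16], edge_index=[2, 6])
```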
Let\u2019s take a look at how we define our Self Attention Graph Pooling (SAGPool) model using PyTorch and PyG:\nclass SAGPool(torch.nn.Module):\n def __init__(self, ...):\n super().__init__()\n self.conv1 = GraphConv(in_features, hidden_features, aggr='mean')\n self.convs = torch.nn.ModuleList()\n self.pools = torch.nn.ModuleList()\n self.convs.extend([GraphConv(hidden_features, hidden_features, aggr='mean') for i in range(num_layers - 1)])\n self.pools.extend([SAGPooling(hidden_features, ratio, GNN=GraphConv, min_score=min_score) for i in range((num_layers) // 2)])\n self.jump = JumpingKnowledge(mode='cat')\n self.lin1 = Linear(num_layers * hidden_features, hidden_features)\n self.lin2 = Linear(hidden_features, out_features)\n self.out_activation = out_activation\n self.dropout = dropout\n", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "self.dropout = dropout\n```\nIn the above code, we begin by defining a single convolutional graph layer and then add two module list layers which allow us to pass in a variable number of layers. We then take our empty module list and append a variable number of GraphConv layers followed by a variable number of SAGPooling layers. We finish up our SAGPool definition by adding a JumpingKnowledge Layer, two linear layers, our activation function, and our dropout value. PyTorch\u2019s intuitive syntax allows us to abstract away the complexity of working with state of the art methods like SAG Poolings while also maintaining the common approach to model development we are familiar with.", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "Models like our SAG Pool one described above are just one example of how GNNs with PyTorch are allowing us to explore new and novel ideas. We also recently explored multimodal CNN - GNN hybrid models which ended up being 20% more accurate than traditional Pathologist consensus scores. These innovations and interplay between traditional CNNs and GNNs are again enabled by the short research to production model development loop.\nImproving Patient Outcomes", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "Improving Patient Outcomes\nIn order to achieve our mission of improving patient outcomes with AI-powered pathology, PathAI needs to rely on an ML development framework that (1) facilitates quick iteration and easy extension (i.e. Model configuration as code) during initial phases of development and exploration (2) scales model training and inference to massive images (3) easily and robustly serves models for production uses of our products (in clinical trials and beyond). As we\u2019ve demonstrated, PyTorch offers us all of these capabilities and more. 
We are incredibly excited about the future of PyTorch and cannot wait to see what other impactful challenges we can solve using the framework.", "source": "https://pytorch.org/blog/PathAI-Uses-PyTorch-to-Improve-Patient-Outcomes-with-AI-powered-Pathology/", "category": "pytorch blogs"} {"text": "Torch Distributed Elastic\nMakes distributed PyTorch fault-tolerant and elastic.\nGet Started\nUsage\n^^^^^\n\n\nQuickstart\n\n\nTrain script\n\n\nExamples\n\n\nDocumentation\nAPI\n^^^\n\n\ntorchrun (Elastic Launch)\n\n\nElastic Agent\n\n\nMultiprocessing\n\n\nError Propagation\n\n\nRendezvous\n\n\nExpiration Timers\n\n\nMetrics\n\n\nEvents\n\n\nAdvanced\n^^^^^^^^\n\nCustomization\n\nPlugins\n^^^^^^^\n\nTorchElastic Kubernetes\n", "source": "https://pytorch.org/docs/stable/distributed.elastic.html", "category": "pytorch docs"} {"text": "torch.overrides\nThis module exposes various helper functions for the\n\"torch_function\" protocol. See Extending torch for more detail on\nthe \"torch_function\" protocol.\nFunctions\ntorch.overrides.get_ignored_functions()\nReturn public functions that cannot be overridden by\n \"torch_function\".\nReturns:\n A tuple of functions that are publicly available in the torch\n API but cannot be overridden with \"torch_function\". Mostly\n this is because none of the arguments of these functions are\n tensors or tensor-likes.\nReturn type:\n Set[Callable]\n-[ Examples ]-\n\n\n\ntorch.Tensor.as_subclass in torch.overrides.get_ignored_functions()\n True\ntorch.add in torch.overrides.get_ignored_functions()\n False\n\n\n\ntorch.overrides.get_overridable_functions()\nList functions that are overridable via torch_function\nReturns:\n A dictionary that maps namespaces that contain overridable", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"} {"text": "functions to functions in that namespace that can be overridden.\nReturn type:\n Dict[Any, List[Callable]]\ntorch.overrides.resolve_name(f)\nGet a human readable string name for a function passed to\n torch_function\nParameters:\n f (Callable) -- Function to resolve the name of.\nReturns:\n Name of the function; if eval'ed it should give back the input\n function.\nReturn type:\n str\ntorch.overrides.get_testing_overrides()\nReturn a dict containing dummy overrides for all overridable\n functions\nReturns:\n A dictionary that maps overridable functions in the PyTorch API\n to lambda functions that have the same signature as the real\n function and unconditionally return -1. 
These lambda functions\n are useful for testing API coverage for a type that defines\n \"torch_function\".\nReturn type:\n Dict[Callable, Callable]\n-[ Examples ]-\n\n\n\nimport inspect\n\n\n", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"} {"text": "-[ Examples ]-\n\n\n\nimport inspect\nmy_add = torch.overrides.get_testing_overrides()[torch.add]\ninspect.signature(my_add)\n \n\n\n\ntorch.overrides.handle_torch_function(public_api, relevant_args, args, *kwargs)\nImplement a function with checks for \"torch_function\"\n overrides.\nSee torch::autograd::handle_torch_function for the equivalent of\n this function in the C++ implementation.\nParameters:\n * public_api (function) -- Function exposed by the public\n torch API originally called like \"public_api(args, *kwargs)\"\n on which arguments are now being checked.\n * **relevant_args** (*iterable*) -- Iterable of arguments to\n check for __torch_function__ methods.\n\n * **args** (*tuple*) -- Arbitrary positional arguments\n originally passed into \"public_api\".\n\n * **kwargs** (*tuple*) -- Arbitrary keyword arguments originally\n passed into \"public_api\".\n\nReturns:", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"} {"text": "passed into \"public_api\".\nReturns:\n Result from calling \"implementation\" or an \"torch_function\"\n method, as appropriate.\nReturn type:\n object\n:raises TypeError : if no implementation is found.:\n-[ Example ]-\n\n\n\ndef func(a):\n ... if has_torch_function_unary(a):\n ... return handle_torch_function(func, (a,), a)\n ... return a + 0\n\n\n\ntorch.overrides.has_torch_function()\nCheck for torch_function implementations in the elements of an\n iterable or if a torch_function mode is enabled. Considers\n exact \"Tensor\" s and \"Parameter\" s non-dispatchable. Use this to\n guard a call to \"handle_torch_function()\"; don't use it to test if\n something is Tensor-like, use \"is_tensor_like()\" instead. :param\n relevant_args: Iterable or arguments to check for\n torch_function methods. :type relevant_args: iterable\nReturns:\n True if any of the elements of relevant_args have\n torch_function implementations, False otherwise.", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"} {"text": "Return type:\n bool\nSee also:\n \"torch.is_tensor_like\"\n Checks if something is a Tensor-like, including an exact\n \"Tensor\".\n\ntorch.overrides.is_tensor_like(inp)\nReturns \"True\" if the passed-in input is a Tensor-like.\nCurrently, this occurs whenever there's a \"torch_function\"\n attribute on the type of the input.\n-[ Examples ]-\nA subclass of tensor is generally a Tensor-like.\n\n\n\nclass SubTensor(torch.Tensor): ...\nis_tensor_like(SubTensor([0]))\n True\n\n\n\nBuilt-in or user types aren't usually Tensor-like.\n\n\n\nis_tensor_like(6)\n False\nis_tensor_like(None)\n False\nclass NotATensor: ...\nis_tensor_like(NotATensor())\n False\n\n\n\nBut, they can be made Tensor-like by implementing\n torch_function.\n\n\n\nclass TensorLike:\n ... @classmethod\n ... def torch_function(cls, func, types, args, kwargs):\n ... 
return -1\nis_tensor_like(TensorLike())\n True\n\n\n", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"} {"text": "\n\n\nis_tensor_like(TensorLike())\n True\n\n\n\ntorch.overrides.is_tensor_method_or_property(func)\nReturns True if the function passed in is a handler for a method or\n property belonging to \"torch.Tensor\", as passed into\n \"torch_function\".\nNote:\n For properties, their \"__get__\" method must be passed in.\n\nThis may be needed, in particular, for the following reasons:\n\n\nMethods/properties sometimes don't contain a module slot.\n\n\nThey require that the first passed-in argument is an instance of\n \"torch.Tensor\".\n\n\n-[ Examples ]-\n\n\n\nis_tensor_method_or_property(torch.Tensor.add)\n True\nis_tensor_method_or_property(torch.add)\n False\n\n\n\nReturn type:\n bool\ntorch.overrides.wrap_torch_function(dispatcher)\nWraps a given function with \"torch_function\" -related\n functionality.\nParameters:\n dispatcher (Callable) -- A callable that returns an\n iterable of Tensor-likes passed into the function.\nNote:", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"} {"text": "Note:\n This decorator may reduce the performance of your code.\n Generally, it's enough to express your code as a series of\n functions that, themselves, support __torch_function__. If you\n find yourself in the rare situation where this is not the case,\n e.g. if you're wrapping a low-level library and you also need it\n to work for Tensor-likes, then this function is available.\n\n-[ Examples ]-\n\n\n\ndef dispatcher(a): # Must have the same signature as func\n ... return (a,)\n@torch.overrides.wrap_torch_function(dispatcher)\ndef func(a): # This will make func dispatchable by torch_function\n ... return a + 0\n\n\n", "source": "https://pytorch.org/docs/stable/torch.overrides.html", "category": "pytorch docs"} {"text": "Quantization Accuracy Debugging\nThis document provides high level strategies for improving\nquantization accuracy. If a quantized model has error compared to the\noriginal model, we can categorize the error into:\n\n\ndata insensitive error - caused by intrinsic model quantization\n error, large portion of input data has large error\n\n\ndata sensitive error - caused by outlier input data, small\n portion of input data has large error\n\n\nimplementation error - quantized kernel is not matching\n reference implementation\n\n\nData insensitive error\nGeneral tips\n\nFor PTQ, ensure that the data you are calibrating with is\n representative of your dataset. For example, for a classification\n problem a general guideline is to have multiple samples in every\n category, and the overall number of samples should be at least 100.\n There is no penalty for calibrating with more data other than\n calibration time.\n", "source": "https://pytorch.org/docs/stable/quantization-accuracy-debugging.html", "category": "pytorch docs"} {"text": "calibration time.\n\n\nIf your model has Conv-BN or Linear-BN patterns, consider fusing\n them. If you are using FX graph mode quantization, this is done\n automatically by the workflow. If you are using Eager mode\n quantization, you can do this manually with the\n \"torch.ao.quantization.fuse_modules\" API.\n\n\nIncrease the precision of dtype of the problematic ops. 
Usually,\n fp32 will have the highest accuracy, followed by fp16, followed by\n dynamically quantized int8, followed by statically quantized int8.\n\n\nNote: this is trading off performance for accuracy.\n\n\nNote: availability of kernels per dtype per op can vary by\n backend.\n\n\nNote: dtype conversions add an additional performance cost. For\n example, \"fp32_op -> quant -> int8_op -> dequant -> fp32_op ->\n quant -> int8_op -> dequant\" will have a performance penalty\n compared to \"fp32_op -> fp32_op -> quant -> int8_op -> int8_op\n -> dequant\" because of a higher number of required dtype\n conversions.\n\n", "source": "https://pytorch.org/docs/stable/quantization-accuracy-debugging.html", "category": "pytorch docs"} {"text": "conversions.\n\nIf you are using PTQ, consider using QAT to recover some of the\n accuracy loss from quantization.\n\nInt8 quantization tips\n\n\nIf you are using per-tensor weight quantization, consider using\n per-channel weight quantization.\n\n\nIf you are doing inference on fbgemm, ensure that you set the\n reduce_range argument to False if your CPU is Cooperlake or\n newer, and to True otherwise.\n\n\nAudit the input activation distribution variation across different\n samples. If this variation is high, the layer may be suitable for\n dynamic quantization but not static quantization.\n\n\nData sensitive error\nIf you are using static quantization and a small portion of your input\ndata is resulting in high quantization error, you can try:\n\n\nAdjust your calibration dataset to make it more representative of\n your inference dataset.\n\n\nManually inspect (using Numeric Suite) which layers have high\n\n", "source": "https://pytorch.org/docs/stable/quantization-accuracy-debugging.html", "category": "pytorch docs"} {"text": "quantization error. For these layers, consider leaving them in\n floating point or adjusting the observer settings to choose a\n better scale and zero_point.\nImplementation error\nIf you are using PyTorch quantization with your own backend you may\nsee differences between the reference implementation of an operation\n(such as \"dequant -> op_fp32 -> quant\") and the quantized\nimplementation (such as op_int8) of the op on the target hardware.\nThis could mean one of two things:\n\n\nthe differences (usually small) are expected due to specific\n behavior of the target kernel on the target hardware compared to\n fp32/cpu. An example of this is accumulating in an integer dtype.\n Unless the kernel guarantees bitwise equivalency with the reference\n implementation, this is expected.\n\n\nthe kernel on the target hardware has an accuracy issue. 
In this\n case, reach out to the kernel developer.\n\n\nNumerical Debugging Tooling (prototype)\nWarning:", "source": "https://pytorch.org/docs/stable/quantization-accuracy-debugging.html", "category": "pytorch docs"} {"text": "\nWarning:\nNumerical debugging tooling is early prototype and subject to\n change.\n\n\ntorch.ao.ns._numeric_suite Eager mode numeric suite\n\n\ntorch.ao.ns._numeric_suite_fx FX numeric suite\n\n", "source": "https://pytorch.org/docs/stable/quantization-accuracy-debugging.html", "category": "pytorch docs"} {"text": "JIT Utils - torch.utils.jit\n", "source": "https://pytorch.org/docs/stable/jit_utils.html", "category": "pytorch docs"} {"text": "Distributed Optimizers\nWarning:\nDistributed optimizer is not currently supported when using CUDA\n tensors\n\"torch.distributed.optim\" exposes DistributedOptimizer, which takes a\nlist of remote parameters (\"RRef\") and runs the optimizer locally on\nthe workers where the parameters live. The distributed optimizer can\nuse any of the local optimizer Base class to apply the gradients on\neach worker.\nclass torch.distributed.optim.DistributedOptimizer(optimizer_class, params_rref, args, *kwargs)\nDistributedOptimizer takes remote references to parameters\n scattered across workers and applies the given optimizer locally\n for each parameter.\nThis class uses \"get_gradients()\" in order to retrieve the\n gradients for specific parameters.\nConcurrent calls to \"step()\", either from the same or different\n clients, will be serialized on each worker -- as each worker's\n optimizer can only work on one set of gradients at a time. However,", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "there is no guarantee that the full forward-backward-optimizer\n sequence will execute for one client at a time. This means that the\n gradients being applied may not correspond to the latest forward\n pass executed on a given worker. Also, there is no guaranteed\n ordering across workers.\nDistributedOptimizer creates the local optimizer with TorchScript\n enabled by default, so that optimizer updates are not blocked by\n the Python Global Interpreter Lock (GIL) in the case of\n multithreaded training (e.g. Distributed Model Parallel). This\n feature is currently enabled for most optimizers. 
You can also\n follow the recipe in PyTorch tutorials to enable TorchScript\n support for your own custom optimizers.\nParameters:\n * optimizer_class (optim.Optimizer) -- the class of\n optimizer to instantiate on each worker.\n * **params_rref** (*list**[**RRef**]*) -- list of RRefs to local\n or remote parameters to optimize.\n", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "or remote parameters to optimize.\n * **args** -- arguments to pass to the optimizer constructor on\n each worker.\n\n * **kwargs** -- arguments to pass to the optimizer constructor\n on each worker.\n\nExample::\n >>> import torch.distributed.autograd as dist_autograd\n >>> import torch.distributed.rpc as rpc\n >>> from torch import optim\n >>> from torch.distributed.optim import DistributedOptimizer\n >>>\n >>> with dist_autograd.context() as context_id:\n >>> # Forward pass.\n >>> rref1 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 3))\n >>> rref2 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 1))\n >>> loss = rref1.to_here() + rref2.to_here()\n >>>\n >>> # Backward pass.\n >>> dist_autograd.backward(context_id, [loss.sum()])\n >>>\n >>> # Optimizer.\n >>> dist_optim = DistributedOptimizer(\n >>> optim.SGD,\n >>> [rref1, rref2],\n >>> lr=0.05,", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "\n\n\n lr=0.05,\n >>> )\n >>> dist_optim.step(context_id)\n\n\n\n\nstep(context_id)\n Performs a single optimization step.\n\n This will call \"torch.optim.Optimizer.step()\" on each worker\n containing parameters to be optimized, and will block until all\n workers return. The provided \"context_id\" will be used to\n retrieve the corresponding \"context\" that contains the gradients\n that should be applied to the parameters.\n\n Parameters:\n **context_id** -- the autograd context id for which we should\n run the optimizer step.\n\nclass torch.distributed.optim.PostLocalSGDOptimizer(optim, averager)\nWraps an arbitrary \"torch.optim.Optimizer\" and runs post-local SGD,\n This optimizer runs local optimizer at every step. 
After the warm-\n up stage, it averages parameters periodically afer the local\n optimizer is applied.\nParameters:\n * optim (Optimizer) -- The local optimizer.", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "\naverager (ModelAverager) -- A model averager instance to\n run post-localSGD algorithm.\n\nExample:\n >>> import torch\n >>> import torch.distributed as dist\n >>> import torch.distributed.algorithms.model_averaging.averagers as averagers\n >>> import torch.nn as nn\n >>> from torch.distributed.optim import PostLocalSGDOptimizer\n >>> from torch.distributed.algorithms.ddp_comm_hooks.post_localSGD_hook import (\n >>> PostLocalSGDState,\n >>> post_localSGD_hook,\n >>> )\n >>>\n >>> model = nn.parallel.DistributedDataParallel(\n >>> module, device_ids=[rank], output_device=rank\n >>> )\n >>>\n >>> # Register a post-localSGD communication hook.\n >>> state = PostLocalSGDState(process_group=None, subgroup=None, start_localSGD_iter=100)\n >>> model.register_comm_hook(state, post_localSGD_hook)\n >>>\n >>> # Create a post-localSGD optimizer that wraps a local optimizer.\n", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "\n\n\nNote that warmup_steps used in PostLocalSGDOptimizer must be the same as\n >>> # ``start_localSGD_iter`` used in ``PostLocalSGDState``.\n >>> local_optim = torch.optim.SGD(params=model.parameters(), lr=0.01)\n >>> opt = PostLocalSGDOptimizer(\n >>> optim=local_optim,\n >>> averager=averagers.PeriodicModelAverager(period=4, warmup_steps=100)\n >>> )\n >>>\n >>> # In the first 100 steps, DDP runs global gradient averaging at every step.\n >>> # After 100 steps, DDP runs gradient averaging within each subgroup (intra-node by default),\n >>> # and post-localSGD optimizer runs global model averaging every 4 steps after applying the local optimizer.\n >>> for step in range(0, 200):\n >>> opt.zero_grad()\n >>> loss = loss_fn(output, labels)\n >>> loss.backward()\n >>> opt.step()\n\n\n\n\nload_state_dict(state_dict)\n This is the same as \"torch.optim.Optimizer\" \"load_state_dict()\",\n", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "but also restores model averager's step value to the one saved\n in the provided \"state_dict\".\n If there is no \"\"step\"\" entry in \"state_dict\", it will raise a\n warning and initialize the model averager's step to 0.\n\nstate_dict()\n This is the same as \"torch.optim.Optimizer\" \"state_dict()\", but\n adds an extra entry to record model averager's step to the\n checkpoint to ensure reload does not cause unnecessary warm up\n again.\n\nstep()\n Performs a single optimization step (parameter update).\n\nclass torch.distributed.optim.ZeroRedundancyOptimizer(params, optimizer_class, process_group=None, parameters_as_bucket_view=False, overlap_with_ddp=False, **defaults)\nThis class wraps an arbitrary \"optim.Optimizer\" and shards its\n states across ranks in the group as described by ZeRO. The local\n optimizer instance in each rank is only responsible for updating\n approximately \"1 / world_size\" parameters and hence only needs to", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "keep \"1 / world_size\" optimizer states. 
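As a rough illustration of that saving (hypothetical numbers, assuming an Adam-style optimizer that keeps two fp32 buffers, exp_avg and exp_avg_sq, per parameter): a 1-billion-parameter model carries about 2 * 4 bytes * 10^9 = 8 GB of optimizer state, so with a world size of 8 each rank keeps roughly 1 GB of that state rather than the full 8 GB.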
After parameters are\n updated locally, each rank will broadcast its parameters to all\n other peers to keep all model replicas in the same state.\n \"ZeroRedundancyOptimizer\" can be used in conjunction with\n \"torch.nn.parallel.DistributedDataParallel\" to reduce per-rank peak\n memory consumption.\n\"ZeroRedundancyOptimizer\" uses a sorted-greedy algorithm to pack a\n number of parameters at each rank. Each parameter belongs to a\n single rank and is not divided among ranks. The partition is\n arbitrary and might not match the the parameter registration or\n usage order.\nParameters:\n params (\"Iterable\") -- an \"Iterable\" of \"torch.Tensor\" s or\n \"dict\" s giving all parameters, which will be sharded across\n ranks.\nKeyword Arguments:\n * optimizer_class (\"torch.nn.Optimizer\") -- the class of the\n local optimizer.\n * **process_group** (\"ProcessGroup\", optional) --\n", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "\"torch.distributed\" \"ProcessGroup\" (default:\n \"dist.group.WORLD\" initialized by\n \"torch.distributed.init_process_group()\").\n * **parameters_as_bucket_view** (*bool**, **optional*) -- if\n \"True\", parameters are packed into buckets to speed up\n communication, and \"param.data\" fields point to bucket views\n at different offsets; if \"False\", each individual parameter is\n communicated separately, and each \"params.data\" stays intact\n (default: \"False\").\n\n * **overlap_with_ddp** (*bool**, **optional*) -- if \"True\",\n \"step()\" is overlapped with \"DistributedDataParallel\" 's\n gradient synchronization; this requires (1) either a\n functional optimizer for the \"optimizer_class\" argument or one\n with a functional equivalent and (2) registering a DDP\n communication hook constructed from one of the functions in\n \"ddp_zero_hook.py\"; parameters are packed into buckets\n", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "matching those in \"DistributedDataParallel\", meaning that the\n \"parameters_as_bucket_view\" argument is ignored. If \"False\",\n \"step()\" runs disjointly after the backward pass (per normal).\n (default: \"False\")\n * ****defaults** -- any trailing arguments, which are forwarded\n to the local optimizer.\n\nExample:\n >>> import torch.nn as nn\n >>> from torch.distributed.optim import ZeroRedundancyOptimizer\n >>> from torch.nn.parallel import DistributedDataParallel as DDP\n >>> model = nn.Sequential(*[nn.Linear(2000, 2000).to(rank) for _ in range(20)])\n >>> ddp = DDP(model, device_ids=[rank])\n >>> opt = ZeroRedundancyOptimizer(\n >>> ddp.parameters(),\n >>> optimizer_class=torch.optim.Adam,\n >>> lr=0.01\n >>> )\n >>> ddp(inputs).sum().backward()\n >>> opt.step()\n\nWarning:\n Currently, \"ZeroRedundancyOptimizer\" requires that all of the\n passed-in parameters are the same dense type.\n\nWarning:", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "Warning:\n If you pass \"overlap_with_ddp=True\", be wary of the following:\n Given the way that overlapping \"DistributedDataParallel\" with\n \"ZeroRedundancyOptimizer\" is currently implemented, the first two\n or three training iterations do not perform parameter updates in\n the optimizer step, depending on if \"static_graph=False\" or\n \"static_graph=True\", respectively. 
This is because it needs\n information about the gradient bucketing strategy used by\n \"DistributedDataParallel\", which is not finalized until the\n second forward pass if \"static_graph=False\" or until the third\n forward pass if \"static_graph=True\". To adjust for this, one\n option is to prepend dummy inputs.\n\nWarning:\n ZeroRedundancyOptimizer is experimental and subject to change.\n\nadd_param_group(param_group)\n Add a parameter group to the \"Optimizer\" 's \"param_groups\".\n\n This can be useful when fine tuning a pre-trained network, as\n", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n **param_group** (*dict*) -- specifies the parameters to be\n optimized and group-specific optimization options.\n\n Warning:\n\n This method handles updating the shards on all partitions but\n needs to be called on all ranks. Calling this on a subset of\n the ranks will cause the training to hang because\n communication primitives are called depending on the managed\n parameters and expect all the ranks to participate on the same\n set of parameters.\n\nconsolidate_state_dict(to=0)\n Consolidate a list of \"state_dict\" s (one per rank) on the\n target rank.\n\n Parameters:\n **to** (*int*) -- the rank that receives the optimizer states\n (default: 0).\n\n Raises:\n **RuntimeError** -- if \"overlap_with_ddp=True\" and this\n method is called before this \"ZeroRedundancyOptimizer\"\n", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "instance has been fully initialized, which happens once\n \"DistributedDataParallel\" gradient buckets have been\n rebuilt.\n Warning:\n\n This needs to be called on all ranks.\n\njoin_hook(**kwargs)\n Returns the ZeRO join hook, which enables training on uneven\n inputs by shadowing the collective communications in the\n optimizer step.\n\n Gradients must be properly set before this hook is called.\n\n Parameters:\n **kwargs** (*dict*) -- a \"dict\" containing any keyword\n arguments to modify the behavior of the join hook at run\n time; all \"Joinable\" instances sharing the same join context\n manager are forwarded the same value for \"kwargs\".\n\n This hook does not support any keyword arguments; i.e. 
\"kwargs\"\n is unused.\n\nload_state_dict(state_dict)\n Load the state pertaining to the given rank from the input\n \"state_dict\", updating the local optimizer as needed.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "Parameters:\n state_dict (dict) -- optimizer state; should be an\n object returned from a call to \"state_dict()\".\n Raises:\n **RuntimeError** -- if \"overlap_with_ddp=True\" and this\n method is called before this \"ZeroRedundancyOptimizer\"\n instance has been fully initialized, which happens once\n \"DistributedDataParallel\" gradient buckets have been\n rebuilt.\n\nstate_dict()\n Returns the last global optimizer state known to this rank.\n\n Raises:\n **RuntimeError** -- if \"overlap_with_ddp=True\" and this\n method is called before this \"ZeroRedundancyOptimizer\"\n instance has been fully initialized, which happens once\n \"DistributedDataParallel\" gradient buckets have been\n rebuilt; or if this method is called without a preceding call\n to \"consolidate_state_dict()\".\n\n Return type:\n *Dict*[str, *Any*]\n\nstep(closure=None, **kwargs)", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "step(closure=None, **kwargs)\n Performs a single optimizer step and syncs parameters across all\n ranks.\n\n Parameters:\n **closure** (*Callable*) -- a closure that re-evaluates the\n model and returns the loss; optional for most optimizers.\n\n Returns:\n Optional loss depending on the underlying local optimizer.\n\n Return type:\n *Optional*[float]\n", "source": "https://pytorch.org/docs/stable/distributed.optim.html", "category": "pytorch docs"} {"text": "Distributed Autograd Design\nThis note will present the detailed design for distributed autograd\nand walk through the internals of the same. Make sure you're familiar\nwith Autograd mechanics and the Distributed RPC Framework before\nproceeding.\nBackground\nLet's say you have two nodes and a very simple model partitioned\nacross two nodes. This can be implemented using\n\"torch.distributed.rpc\" as follows:\nimport torch\n import torch.distributed.rpc as rpc\ndef my_add(t1, t2):\n return torch.add(t1, t2)\n# On worker 0:\n t1 = torch.rand((3, 3), requires_grad=True)\n t2 = torch.rand((3, 3), requires_grad=True)\n# Perform some computation remotely.\n t3 = rpc.rpc_sync(\"worker1\", my_add, args=(t1, t2))\n# Perform some computation locally based on remote result.\n t4 = torch.rand((3, 3), requires_grad=True)\n t5 = torch.mul(t3, t4)\n# Compute some loss.\n loss = t5.sum()\nThe main motivation behind distributed autograd is to enable running a", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "backward pass on such distributed models with the \"loss\" that we've\ncomputed and record appropriate gradients for all tensors that require\ngradients.\nAutograd recording during the forward pass\nPyTorch builds the autograd graph during the forward pass and this\ngraph is used to execute the backward pass. 
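As a quick single-process illustration of that recorded graph (a minimal sketch for intuition, separate from the RPC example above), the grad_fn attribute of a result tensor exposes the node from which the backward pass starts:

import torch

x = torch.ones(2, 2, requires_grad=True)
y = (x * 3).sum()
# The forward pass recorded a graph; y.grad_fn is the function node
# that the backward pass will execute first.
print(y.grad_fn)  # e.g. <SumBackward0 object at 0x...>
y.backward()
print(x.grad)     # 2x2 tensor of 3s, computed by traversing the recorded graph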
For more details see How\nautograd encodes the history.\nFor distributed autograd, we need to keep track of all RPCs during the\nforward pass to ensure the backward pass is executed appropriately.\nFor this purpose, we attach \"send\" and \"recv\" functions to the\nautograd graph when we perform an RPC.\n\n\nThe \"send\" function is attached to the source of the RPC and its\n output edges point to the autograd function for the input tensors of\n the RPC. The input for this function during the backward pass is\n received from the destination as the output of the appropriate\n \"recv\" function.\n\n\nThe \"recv\" function is attached to the destination of the RPC and\n\n", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "its inputs are retrieved from operators executed on the destination\n using the input tensors. The output gradients of this function are\n sent to the source node to the appropriate \"send\" function during\n the backward pass.\n\n\nEach \"send-recv\" pair is assigned a globally unique\n \"autograd_message_id\" to uniquely identify the pair. This is useful\n to look up the corresponding function on a remote node during the\n backward pass.\n\n\nFor RRef, whenever we call \"torch.distributed.rpc.RRef.to_here()\" we\n attach an appropriate \"send-recv\" pair for the tensors involved.\n\n\nAs an example, this is what the autograd graph for our example above\nwould look like (t5.sum() excluded for simplicity):\n[image]\nDistributed Autograd Context\nEach forward and backward pass that uses distributed autograd is\nassigned a unique \"torch.distributed.autograd.context\" and this\ncontext has a globally unique \"autograd_context_id\". This context is\ncreated on each node as needed.", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "created on each node as needed.\nThis context serves the following purpose:\n\n\nMultiple nodes running distributed backward passes might accumulate\n gradients on the same tensor and as a result the \".grad\" field of\n the tensor would have gradients from a variety of distributed\n backward passes before we have the opportunity to run the\n optimizer. This is similar to calling \"torch.autograd.backward()\"\n multiple times locally. In order to provide a way of separating out\n the gradients for each backward pass, the gradients are accumulated\n in the \"torch.distributed.autograd.context\" for each backward pass.\n\n\nDuring the forward pass we store the \"send\" and \"recv\" functions\n for each autograd pass in this context. This ensures we hold\n references to the appropriate nodes in the autograd graph to keep\n it alive. 
In addition to this, it is easy to look up the\n appropriate \"send\" and \"recv\" functions during the backward pass.\n\n", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "\nIn general we also use this context to store some metadata for each\n distributed autograd pass.\n\nFrom the user's perspective the autograd context is set up as follows:\nimport torch.distributed.autograd as dist_autograd\n with dist_autograd.context() as context_id:\n loss = model.forward()\n dist_autograd.backward(context_id, loss)\nIt is important to note that your model's forward pass must be invoked\nwithin the distributed autograd context manager, as a valid context is\nneeded in order to ensure that all \"send\" and \"recv\" functions are\nstored properly to run the backward pass across all participating\nnodes.\nDistributed Backward Pass\nIn this section we outline the challenge of computing dependencies\naccurately during a distributed backward pass and describe a couple of\nalgorithms (with tradeoffs) on how we can execute a distributed\nbackward pass.\nComputing dependencies\nConsider the following piece of code being run on a single machine", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "import torch\n a = torch.rand((3, 3), requires_grad=True)\n b = torch.rand((3, 3), requires_grad=True)\n c = torch.rand((3, 3), requires_grad=True)\n d = a + b\n e = b * c\n d.sum().backward()\nThis is what the autograd graph for the code above would look like:\n[image]\nThe first step the autograd engine performs as part of the backward\npass is computing the number of dependencies for each node in the\nautograd graph. This helps the autograd engine know when a node in the\ngraph is ready for execution. The numbers in brackets for \"add(1)\" and\n\"mul(0)\" denote the number of dependencies. As you can see, this means\nduring the backward pass the \"add\" node needs 1 input and the \"mul\"\nnode doesn't need any inputs (in other words doesn't need to be\nexecuted). The local autograd engine computes these dependencies by\ntraversing the graph from the root nodes (\"d\" in this case).\nThe fact that certain nodes in the autograd graph might not be\nexecuted in the backward pass poses a challenge for distributed", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "autograd. 
Consider this piece of code which uses RPC.\nimport torch\n import torch.distributed.rpc as rpc\na = torch.rand((3, 3), requires_grad=True)\n b = torch.rand((3, 3), requires_grad=True)\n c = torch.rand((3, 3), requires_grad=True)\nd = rpc.rpc_sync(\"worker1\", torch.add, args=(a, b))\n e = rpc.rpc_sync(\"worker1\", torch.mul, args=(b, c))\n loss = d.sum()\nThe associated autograd graph for the code above would be:\n[image]\nComputing dependencies of this distributed autograd graph is much more\nchallenging and requires some overhead (either in terms of computation\nor network communication).\nFor performance sensitive applications we can avoid a lot of overhead\nby assuming every \"send\" and \"recv\" function are valid as part of the\nbackward pass (most applications don't perform RPCs that aren't used).\nThis simplifies the distributed autograd algorithm and is much more\nefficient, but at the cost that the application needs to be aware of", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "the limitations. This algorithm is called the FAST mode algorithm and\nis described in detail below.\nIn the general case it might not be necessary that every \"send\" and\n\"recv\" function is valid as part of the backward pass. To address\nthis, we have proposed a SMART mode algorithm which is described in a\nlater section. Please note that currently, only the FAST mode\nalgorithm is implemented.\nFAST mode algorithm\nThe key assumption of this algorithm is that each \"send\" function has\na dependency of 1 when we run a backward pass. In other words, we\nassume we'll receive a gradient over RPC from another node.\nThe algorithm is as follows:\n\n\nWe start from the worker which has the roots for the backward pass\n (all roots must be local).\n\n\nLookup all the \"send\" functions for the current Distributed\n Autograd Context.\n\n\nCompute dependencies locally starting from the provided roots and\n all the \"send\" functions we retrieved.\n\n", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "all the \"send\" functions we retrieved.\n\n\nAfter computing dependencies, kick off the local autograd engine\n with the provided roots.\n\n\nWhen the autograd engine executes the \"recv\" function, the \"recv\"\n function sends the input gradients via RPC to the appropriate\n worker. Each \"recv\" function knows the destination worker id since\n it is recorded as part of the forward pass. The \"recv\" function\n also sends over the \"autograd_context_id\" and \"autograd_message_id\"\n to the remote host.\n\n\nWhen this request is received on the remote host, we use the\n \"autograd_context_id\" and \"autograd_message_id\" to look up the\n appropriate \"send\" function.\n\n\nIf this is the first time a worker has received a request for the\n given \"autograd_context_id\", it will compute dependencies locally\n as described in points 1-3 above.\n\n\nThe \"send\" function retrieved in 6. is then enqueued for execution\n on the local autograd engine for that worker.\n\n", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "on the local autograd engine for that worker.\n\nFinally, instead of accumulating the gradients on the \".grad\" field\n of the Tensor, we accumulate the gradients separately per\n Distributed Autograd Context. 
The gradients are stored in a\n \"Dict[Tensor, Tensor]\", which is basically a map from Tensor to its\n associated gradient and this map can be retrieved using the\n \"get_gradients()\" API.\n\nAs an example the complete code with distributed autograd would be as\nfollows:\nimport torch\n import torch.distributed.autograd as dist_autograd\n import torch.distributed.rpc as rpc\ndef my_add(t1, t2):\n return torch.add(t1, t2)\n# On worker 0:\n# Setup the autograd context. Computations that take\n # part in the distributed backward pass must be within\n # the distributed autograd context manager.\n with dist_autograd.context() as context_id:\n t1 = torch.rand((3, 3), requires_grad=True)\n t2 = torch.rand((3, 3), requires_grad=True)\n # Perform some computation remotely.\n", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "Perform some computation remotely.\n t3 = rpc.rpc_sync(\"worker1\", my_add, args=(t1, t2))\n\n # Perform some computation locally based on remote result.\n t4 = torch.rand((3, 3), requires_grad=True)\n t5 = torch.mul(t3, t4)\n\n # Compute some loss.\n loss = t5.sum()\n\n # Run the backward pass.\n dist_autograd.backward(context_id, [loss])\n\n # Retrieve the gradients from the context.\n dist_autograd.get_gradients(context_id)\n\nThe distributed autograd graph with dependencies would be as follows\n(t5.sum() excluded for simplicity):\n[image]\nThe FAST mode algorithm applied to the above example would be as\nfollows:\n\n\nOn \"Worker 0\" we start from the roots \"loss\" and \"send1\" to compute\n dependencies. As a result \"send1\" is marked with a dependency of 1\n and \"mul\" on \"Worker 0\" is marked with a dependency of 1.\n\n\nNow, we kickoff the local autograd engine on \"Worker 0\". We first\n execute the \"mul\" function, accumulate its output in the autograd\n\n", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "context as the gradient for \"t4\". Then, we execute \"recv2\" which\n sends the gradients to \"Worker 1\".\n\n\nSince this is the first time \"Worker 1\" has heard about this\n backward pass, it starts dependency computation and marks the\n dependencies for \"send2\", \"add\" and \"recv1\" appropriately.\n\n\nNext, we enqueue \"send2\" on the local autograd engine of \"Worker\n 1\", which in turn executes \"add\" and \"recv1\".\n\n\nWhen \"recv1\" is executed it sends the gradients over to \"Worker 0\".\n\n\nSince \"Worker 0\" has already computed dependencies for this\n backward pass, it just enqueues and executes \"send1\" locally.\n\n\nFinally, gradients for \"t1\", \"t2\" and \"t4\" are accumulated in the\n Distributed Autograd Context.\n\n\nSMART mode algorithm\nFull details of this algorithm are still in the works, but for the\ngeneral idea you can refer to Distributed Autograd Algorithm Smart\nmode section in the RFC.\nDistributed Optimizer\nThe \"DistributedOptimizer\" operates as follows:", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "The \"DistributedOptimizer\" operates as follows:\n\n\nTakes a list of remote parameters (\"RRef\") to optimize. 
These could\n also be local parameters wrapped within a local \"RRef\".\n\n\nTakes an \"Optimizer\" class as the local optimizer to run on all\n distinct \"RRef\" owners.\n\n\nThe distributed optimizer creates an instance of the local\n \"Optimizer\" on each of the worker nodes and holds an \"RRef\" to\n them.\n\n\nWhen \"torch.distributed.optim.DistributedOptimizer.step()\" is\n invoked, the distributed optimizer uses RPC to remotely execute all\n the local optimizers on the appropriate remote workers. A\n distributed autograd \"context_id\" must be provided as input to\n \"torch.distributed.optim.DistributedOptimizer.step()\". This is used\n by local optimizers to apply gradients stored in the corresponding\n context.\n\n\nIf multiple concurrent distributed optimizers are updating the same\n parameters on a worker, these updates are serialized via a lock.\n\n\nSimple end to end example", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "=========================\nPutting it all together, the following is a simple end to end example\nusing distributed autograd and the distributed optimizer. If the code\nis placed into a file called \"dist_autograd_simple.py\", it can be run\nwith the command \"MASTER_ADDR=\"localhost\" MASTER_PORT=29500 python\ndist_autograd_simple.py\":\nimport torch\n import torch.multiprocessing as mp\n import torch.distributed.autograd as dist_autograd\n from torch.distributed import rpc\n from torch import optim\n from torch.distributed.optim import DistributedOptimizer\ndef random_tensor():\n return torch.rand((3, 3), requires_grad=True)\ndef _run_process(rank, dst_rank, world_size):\n name = \"worker{}\".format(rank)\n dst_name = \"worker{}\".format(dst_rank)\n # Initialize RPC.\n rpc.init_rpc(\n name=name,\n rank=rank,\n world_size=world_size\n )\n\n # Use a distributed autograd context.\n with dist_autograd.context() as context_id:\n", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "with dist_autograd.context() as context_id:\n # Forward pass (create references on remote nodes).\n rref1 = rpc.remote(dst_name, random_tensor)\n rref2 = rpc.remote(dst_name, random_tensor)\n loss = rref1.to_here() + rref2.to_here()\n # Backward pass (run distributed autograd).\n dist_autograd.backward(context_id, [loss.sum()])\n\n # Build DistributedOptimizer.\n dist_optim = DistributedOptimizer(\n optim.SGD,\n [rref1, rref2],\n lr=0.05,\n )\n\n # Run the distributed optimizer step.\n dist_optim.step(context_id)\n\ndef run_process(rank, world_size):\n dst_rank = (rank + 1) % world_size\n _run_process(rank, dst_rank, world_size)\n rpc.shutdown()\nif __name__ == '__main__':\n # Run world_size workers\n world_size = 2\n mp.spawn(run_process, args=(world_size,), nprocs=world_size)", "source": "https://pytorch.org/docs/stable/rpc/distributed_autograd.html", "category": "pytorch docs"} {"text": "torch.utils.mobile_optimizer\nWarning:\nThis API is in beta and may change in the near future.\nTorch mobile supports the\n\"torch.utils.mobile_optimizer.optimize_for_mobile\" utility, which runs a\nlist of optimization passes on modules in eval mode. 
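For orientation, a minimal usage sketch (the toy Sequential model and the file name below are illustrative placeholders, not part of the original reference):

import torch
from torch.utils.mobile_optimizer import optimize_for_mobile

# A small eval-mode module; Conv2d -> BatchNorm2d -> ReLU exercises the fusion passes described below.
model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 8, 3),
    torch.nn.BatchNorm2d(8),
    torch.nn.ReLU(),
).eval()

scripted = torch.jit.script(model)
optimized = optimize_for_mobile(scripted)  # default CPU backend, default pass list
optimized.save("model_mobile.pt")

Unless they are explicitly blocklisted, the passes listed next are applied during this single call.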
The method takes\nthe following parameters: a torch.jit.ScriptModule object, a\nblocklisting optimization set, a preserved method list, and a backend.\nFor CPU Backend, by default, if optimization blocklist is None or\nempty, \"optimize_for_mobile\" will run the following optimizations:\n * Conv2D + BatchNorm fusion (blocklisting option\n mobile_optimizer.MobileOptimizerType.CONV_BN_FUSION): This\n optimization pass folds \"Conv2d-BatchNorm2d\" into \"Conv2d\" in\n \"forward\" method of this module and all its submodules. The\n weight and bias of the \"Conv2d\" are correspondingly updated.\n\nInsert and Fold prepacked ops (blocklisting option\n mobile_optimizer.MobileOptimizerType.INSERT_FOLD_PREPACK_OPS):\n", "source": "https://pytorch.org/docs/stable/mobile_optimizer.html", "category": "pytorch docs"} {"text": "This optimization pass rewrites the graph to replace 2D\n convolutions and linear ops with their prepacked counterparts.\n Prepacked ops are stateful ops in that, they require some state\n to be created, such as weight prepacking and use this state, i.e.\n prepacked weights, during op execution. XNNPACK is one such\n backend that provides prepacked ops, with kernels optimized for\n mobile platforms (such as ARM CPUs). Prepacking of weight enables\n efficient memory access and thus faster kernel execution. At the\n moment \"optimize_for_mobile\" pass rewrites the graph to replace\n \"Conv2D/Linear\" with 1) op that pre-packs weight for XNNPACK\n conv2d/linear ops and 2) op that takes pre-packed weight and\n activation as input and generates output activations. Since 1\n needs to be done only once, we fold the weight pre-packing such\n that it is done only once at model load time. This pass of the\n \"optimize_for_mobile\" does 1 and 2 and then folds, i.e. removes,", "source": "https://pytorch.org/docs/stable/mobile_optimizer.html", "category": "pytorch docs"} {"text": "weight pre-packing ops.\n\n\nReLU/Hardtanh fusion: XNNPACK ops support fusion of clamping.\n That is clamping of output activation is done as part of the\n kernel, including for 2D convolution and linear op kernels. Thus\n clamping effectively comes for free. Thus any op that can be\n expressed as clamping op, such as \"ReLU\" or \"hardtanh\", can be\n fused with previous \"Conv2D\" or \"linear\" op in XNNPACK. This pass\n rewrites graph by finding \"ReLU/hardtanh\" ops that follow XNNPACK\n \"Conv2D/linear\" ops, written by the previous pass, and fuses them\n together.\n\n\nDropout removal (blocklisting option\n mobile_optimizer.MobileOptimizerType.REMOVE_DROPOUT): This\n optimization pass removes \"dropout\" and \"dropout_\" nodes from\n this module when training is false.\n\n\nConv packed params hoisting (blocklisting option\n mobile_optimizer.MobileOptimizerType.HOIST_CONV_PACKED_PARAMS):\n This optimization pass moves convolution packed params to the\n\n", "source": "https://pytorch.org/docs/stable/mobile_optimizer.html", "category": "pytorch docs"} {"text": "root module, so that the convolution structs can be deleted. 
This\n decreases model size without impacting numerics.\n\nAdd/ReLU fusion (blocklisting option\n mobile_optimizer.MobileOptimizerType.FUSE_ADD_RELU): This pass\n finds instances of \"relu\" ops that follow \"add\" ops and fuses\n them into a single \"add_relu\".\n\nFor the Vulkan backend, by default, if the optimization blocklist is None or\nempty, \"optimize_for_mobile\" will run the following optimization:\n * Automatic GPU Transfer (blocklisting option\n mobile_optimizer.MobileOptimizerType.VULKAN_AUTOMATIC_GPU_TRANSFER): This\n optimization pass rewrites the graph so that moving input and\n output data to and from the GPU becomes part of the model.\n\"optimize_for_mobile\" will also invoke the freeze_module pass, which only\npreserves the \"forward\" method. If you have other methods that need to\nbe preserved, add them to the preserved methods list and pass it into\nthe method.", "source": "https://pytorch.org/docs/stable/mobile_optimizer.html", "category": "pytorch docs"} {"text": "the method.\ntorch.utils.mobile_optimizer.optimize_for_mobile(script_module, optimization_blocklist=None, preserved_methods=None, backend='CPU')\nParameters:\n * script_module (ScriptModule) -- An instance of a TorchScript\n module of type ScriptModule.\n * **optimization_blocklist**\n (*Optional**[**Set**[**_MobileOptimizerType**]**]*) -- A set\n with type of MobileOptimizerType. When the set is not passed,\n the optimization method will run all the optimization passes;\n otherwise, it will run only the optimization passes\n that are not included in optimization_blocklist.\n\n * **preserved_methods** (*Optional**[**List**]*) -- A list of\n methods that need to be preserved when the freeze_module pass is\n invoked\n\n * **backend** (*str*) -- Device type to use for running the\n result model ('CPU' (default), 'Vulkan' or 'Metal').\n\nReturns:\n A new optimized torch script module\nReturn type:\n RecursiveScriptModule", "source": "https://pytorch.org/docs/stable/mobile_optimizer.html", "category": "pytorch docs"} {"text": "Quantization Backend Configuration\nFX Graph Mode Quantization allows the user to configure various\nquantization behaviors of an op in order to match the expectation of\ntheir backend.\nIn the future, this document will contain a detailed spec of these\nconfigurations.\nDefault values for native configurations\nBelow is the output of the configuration for quantization of ops in\nx86 and qnnpack (PyTorch's default quantized backends).\nResults:\n{\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, 
quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n 
],\n 'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": 
"https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': ,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, 
scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'root_module': ,\n 'reference_quantized_module_for_root': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'fused_module': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, 
scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n 'fuser_method': ,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': ,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, 
scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "The rest of this page is the printed listing of the default (native) backend configuration. Every entry follows the same schema, but the concrete pattern and module references (Python class and function objects, originally rendered in angle brackets) did not survive conversion to plain text, so only the field layout and the dtype information are recoverable. A representative static-quantization entry has the form:\n\n{\n  'pattern': <operator, module, or fused pattern>,\n  'dtype_configs': [\n    {\n      'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n      'output_dtype': DTypeWithConstraints(dtype=torch.quint8, ...),\n      'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, ...),\n      'bias_dtype': torch.float32,\n    },\n  ],\n  'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n  'root_module': <module class>,\n  'qat_module': <QAT module class>,\n  'reference_quantized_module_for_root': <reference quantized module class>,\n  'fuser_method': <fuser function>,\n}\n\nAcross the static entries, activations are torch.quint8, weights are torch.qint8, and the bias stays in torch.float32. The observation type is OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT for most entries and OUTPUT_SHARE_OBSERVER_WITH_INPUT for entries whose surviving pattern names are clamp, contiguous, detach, and detach_. Functional entries that take weight and bias as positional arguments additionally carry an 'input_type_to_index' map such as {'weight': 1, 'bias': 2}.", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
{"text": "The listing then moves through three further families of entries. Weight-only (embedding-style) entries keep activations in torch.float32 and quantize only the weight, offering both torch.quint8 and torch.quint4x2 weight dtypes; these entries set 'input_output_observed': False and name root, QAT, and reference quantized modules (the references themselves were lost in extraction). Dynamic-quantization entries pair a torch.quint8 or torch.float16 input with a torch.float32 output, a torch.qint8 or torch.float16 weight, a torch.float32 bias, and 'is_dynamic': True. Finally, fixed-qparams entries such as hardsigmoid and hardsigmoid_ constrain both input and output to DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0), i.e. the quantization scale is pinned to 1/256 and the zero point to 0; an example of building that constraint is sketched below.", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"}
scale_exact_match=0.00390625, zero_point_exact_match=0),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, 
The listing then continues with a long run of structurally identical single-tensor entries. The pattern identifiers (module classes, functionals, and operator names) were wrapped in angle brackets on the original page and were lost during extraction, so only the shape of each entry survives:

    {
        'pattern': <module class, functional, or operator; name lost in extraction>,
        'dtype_configs': [
            {
                'input_dtype': DTypeWithConstraints(dtype=torch.quint8),   # no further constraints
                'output_dtype': DTypeWithConstraints(dtype=torch.quint8),  # no further constraints
            },
        ],
        'observation_type': <one of the two values below>,
    }

The entries differ mainly in their observation type. ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT marks operators whose output gets its own observer and therefore its own quantization parameters, while ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT marks operators (such as max_pool2d, which appears by name later in this listing) whose output reuses the observer, and hence the scale and zero point, of its input. A few functional entries keep their weight and bias in floating point ('weight_dtype': DTypeWithConstraints(dtype=torch.float32), 'bias_dtype': torch.float32) and carry an 'input_type_to_index' mapping such as {'weight': 2, 'bias': 3} so the FX prepare step can locate those arguments.
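Since the identifiers were stripped here, the complete listing is easiest to regenerate locally. A minimal sketch, assuming a PyTorch build where torch.ao.quantization.backend_config.get_native_backend_config is available:

    from torch.ao.quantization.backend_config import get_native_backend_config

    backend_config = get_native_backend_config()

    # Each BackendPatternConfig corresponds to one entry in the dump on this page.
    for pattern_config in backend_config.configs:
        print(pattern_config.pattern)            # the identifier that was stripped above
        print(pattern_config.observation_type)   # e.g. OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT
        for dtype_config in pattern_config.dtype_configs:
            print("   ", dtype_config)

Printing backend_config.to_dict() should reproduce something close to the dictionary form shown on this page.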
The next group of entries describes weighted operators (the concrete module and functional identifiers were stripped). Each of them lists three dtype configurations:

    # static quantization
    input_dtype:  torch.quint8      output_dtype: torch.quint8
    weight_dtype: torch.qint8       bias_dtype:   torch.float32

    # dynamic quantization (is_dynamic=True)
    input_dtype:  torch.quint8      output_dtype: torch.float32
    weight_dtype: torch.qint8       bias_dtype:   torch.float32

    # fp16 dynamic quantization (is_dynamic=True)
    input_dtype:  torch.float16     output_dtype: torch.float32
    weight_dtype: torch.float16     bias_dtype:   torch.float32

All of these use ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT. The module entries additionally name a 'root_module', a 'qat_module', and a 'reference_quantized_module_for_root' (the class names were stripped); these are the float module the pattern is rooted at, its quantization-aware-training counterpart, and the reference quantized module it is lowered to. Entries that instead carry an 'input_type_to_index' mapping such as {'weight': 1, 'bias': 2} identify the weight and bias arguments of a functional form of the operator.
This three-variant block repeats for each of the remaining weighted patterns, differing only in which auxiliary fields are present: some entries name a 'root_module', 'qat_module', and 'reference_quantized_module_for_root', while others name only a 'root_module' and 'reference_quantized_module_for_root'. A further group of entries lists only the dynamic configurations (quint8-to-float32 with is_dynamic=True, and the float16 variant with is_dynamic=True), i.e. entries for which only dynamic quantization is configured. The listing then returns to single-tensor entries with unconstrained quint8 input and output dtypes.
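Putting the pieces together, an individual entry of this kind corresponds to one BackendPatternConfig. The sketch below uses torch.nn.Linear purely to illustrate the structure (the stripped dump does not say which entry is which), and it attaches only the static dtype config; the real native configuration also attaches the dynamic and fp16 variants:

    import torch
    import torch.nn as nn
    import torch.ao.nn.qat as nnqat
    import torch.ao.nn.quantized.reference as nnqr
    from torch.ao.quantization.backend_config import (
        BackendPatternConfig,
        DTypeConfig,
        ObservationType,
    )

    weighted_int8_dtype_config = DTypeConfig(
        input_dtype=torch.quint8,
        output_dtype=torch.quint8,
        weight_dtype=torch.qint8,
        bias_dtype=torch.float,
    )

    # Hypothetical reconstruction of one weighted-module entry.
    linear_config = (
        BackendPatternConfig(nn.Linear)
        .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT)
        .add_dtype_config(weighted_int8_dtype_config)
        .set_root_module(nn.Linear)                   # 'root_module'
        .set_qat_module(nnqat.Linear)                 # 'qat_module'
        .set_reference_quantized_module(nnqr.Linear)  # 'reference_quantized_module_for_root'
    )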
"https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': torch.nn.functional.max_pool1d,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': torch.nn.functional.max_pool2d,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': torch.nn.functional.max_pool3d,\n 'dtype_configs': [\n {\n 'input_dtype': 
DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': mean,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,", "source": 
"https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': permute,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f27895630>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n },\n ],\n 'observation_type': 
The listing then turns to fusion patterns: each 'pattern' becomes a tuple of two or three operators, and most entries gain a 'fused_module' and a 'fuser_method'. The fuser methods are printed as function objects at process-specific addresses (for example "...fuser_method at 0x7f8f27895630>"), which the extraction also mangled and which carry no information outside the process that generated the page. These fusion entries all use the static int8 dtype configuration (quint8 input and output, qint8 weight, torch.float32 bias) and ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT, and they come in three shapes: two-operator patterns with a 'fused_module' and 'fuser_method'; two-operator patterns with only dtype configs and an observation type, i.e. patterns that are quantized together but not fused into a single module; and three-operator patterns that additionally name a 'root_module' alongside the 'fused_module' and 'fuser_method'. The operator names inside the tuples were stripped, but the structure is consistent with the convolution- and linear-style fusions (for example a convolution followed by batch norm and/or ReLU) handled by FX graph mode quantization.
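A sketch of how one such fusion entry could be written, using Conv2d followed by ReLU purely as an illustration (the dump does not reveal which tuple is which); it assumes the fuser-method convention in which the first argument indicates whether fusion happens for quantization-aware training:

    import torch
    import torch.nn as nn
    import torch.ao.nn.intrinsic as nni
    from torch.ao.quantization.backend_config import (
        BackendPatternConfig,
        DTypeConfig,
        ObservationType,
    )

    weighted_int8_dtype_config = DTypeConfig(
        input_dtype=torch.quint8,
        output_dtype=torch.quint8,
        weight_dtype=torch.qint8,
        bias_dtype=torch.float,
    )

    def fuse_conv2d_relu(is_qat, conv, relu):
        # Combine the two float modules into a single intrinsic fused module.
        return nni.ConvReLU2d(conv, relu)

    conv_relu_config = (
        BackendPatternConfig((nn.Conv2d, nn.ReLU))
        .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT)
        .add_dtype_config(weighted_int8_dtype_config)
        .set_fused_module(nni.ConvReLU2d)       # 'fused_module'
        .set_fuser_method(fuse_conv2d_relu)     # 'fuser_method'
    )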
A final group of fusion entries carries all three dtype configurations (static int8, dynamic int8 with is_dynamic=True, and fp16 dynamic with is_dynamic=True) rather than only the static one, again with ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT and, where applicable, a 'fused_module' and 'fuser_method'; these are presumably the linear-style fusions, for which dynamic quantization is also configured. As far as this extract goes, the listing closes with another two-operator pattern whose quint8 entry again selects its observation type per number of tensor arguments through 'num_tensor_args_to_observation_type'.
ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, 
zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n ],\n 'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, 
quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'num_tensor_args_to_observation_type': {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'num_tensor_args_to_observation_type': {\n 0: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n ],\n 'num_tensor_args_to_observation_type': {\n 0: 
ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 1: ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n 2: ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': relu,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': relu_,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': 
ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f27895a20>,\n },\n {\n 'pattern': (, ),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f27895ab0>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f27895b40>,\n },\n {\n 'pattern': (, ),\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'fused_module': ,\n 'fuser_method': .fuser_method at 0x7f8f27895bd0>,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": 
"https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': repeat,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': repeat_interleave,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': reshape,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': resize_,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, 
quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'weight_dtype': DTypeWithConstraints(dtype=torch.qint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.float32, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'weight_dtype': DTypeWithConstraints(dtype=torch.float16, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'bias_dtype': torch.float32,\n 'is_dynamic': True,\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n 'root_module': ,\n 'reference_quantized_module_for_root': ,\n },\n {\n 'pattern': shape,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} 
{"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': sigmoid,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': sigmoid_,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': size,\n 'dtype_configs': [", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "{\n 'pattern': size,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.00390625, zero_point_exact_match=0),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, 
scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': squeeze,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': squeeze_,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': 
DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': tanh,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': tanh_,\n 'dtype_configs': [", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'pattern': tanh_,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=0.0078125, zero_point_exact_match=128),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT,\n },\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': transpose,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "},\n {\n 'pattern': ,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, 
zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': unsqueeze,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': unsqueeze_,\n 'dtype_configs': [\n {\n 'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n },\n {\n 'pattern': view,\n 'dtype_configs': [\n {", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "'input_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n 'output_dtype': DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None),\n },\n ],\n 'observation_type': ObservationType.OUTPUT_SHARE_OBSERVER_WITH_INPUT,\n }", "source": "https://pytorch.org/docs/stable/quantization-backend-configuration.html", "category": "pytorch docs"} {"text": "torch.utils.checkpoint\nNote:\nCheckpointing is implemented by rerunning a forward-pass segment for\n each checkpointed segment during the backward pass. This can cause\n persistent states like the RNG state to be more advanced than they would\n be without checkpointing. By default, checkpointing includes logic to\n juggle the RNG state such that checkpointed passes making use of RNG\n (through dropout for example) have deterministic output as compared\n to non-checkpointed passes. The logic to stash and restore RNG\n states can incur a moderate performance hit depending on the runtime\n of checkpointed operations. 
If deterministic output compared to\n non-checkpointed passes is not required, supply\n \"preserve_rng_state=False\" to \"checkpoint\" or\n \"checkpoint_sequential\" to omit stashing and restoring the RNG state\n during each checkpoint. The stashing logic saves and restores the RNG\n state for the current device and the device of all CUDA Tensor", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"} {"text": "arguments to the \"run_fn\". However, the logic has no way to\n anticipate if the user will move Tensors to a new device within the\n \"run_fn\" itself. Therefore, if you move Tensors to a new device\n (\"new\" meaning not belonging to the set of [current device + devices\n of Tensor arguments]) within \"run_fn\", deterministic output compared\n to non-checkpointed passes is never guaranteed.\ntorch.utils.checkpoint.checkpoint(function, *args, use_reentrant=True, **kwargs)\nCheckpoint a model or part of the model.\nCheckpointing works by trading compute for memory. Rather than\n storing all intermediate activations of the entire computation\n graph for computing backward, the checkpointed part does not\n save intermediate activations, and instead recomputes them in the\n backward pass. It can be applied on any part of a model.\nSpecifically, in the forward pass, \"function\" will run in a\n \"torch.no_grad()\" manner, i.e., not storing the intermediate", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"} {"text": "activations. Instead, the forward pass saves the inputs tuple and\n the \"function\" parameter. In the backward pass, the saved inputs\n and \"function\" are retrieved, and the forward pass is computed on\n \"function\" again, now tracking the intermediate activations, and\n then the gradients are calculated using these activation values.\nThe output of \"function\" can contain non-Tensor values and gradient\n recording is only performed for the Tensor values. Note that if the\n output consists of nested structures (e.g. custom objects, lists,\n dicts, etc.) consisting of Tensors, these Tensors nested in custom\n structures will not be considered as part of autograd.\nWarning:\n If the \"function\" invocation during backward does anything different\n than the one during forward, e.g., due to some global variable,\n the checkpointed version won't be equivalent, and unfortunately\n this can't be detected.\n\nWarning:\n If \"use_reentrant=True\" is specified, then if the checkpointed\n", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"} {"text": "segment contains tensors detached from the computational graph by\n detach() or torch.no_grad(), the backward pass will raise an\n error. This is because checkpoint makes all the outputs require\n gradients, which causes issues when a tensor is defined to have no\n gradient in the model. To circumvent this, detach the tensors\n outside of the checkpoint function. Note that the checkpointed\n segment can contain tensors detached from the computational graph\n if \"use_reentrant=False\" is specified.\nWarning:\n If \"use_reentrant=True\" is specified, at least one of the inputs\n needs to have \"requires_grad=True\" if grads are needed for model\n inputs; otherwise the checkpointed part of the model won't have\n gradients. At least one of the outputs needs to have\n \"requires_grad=True\" as well. 
Note that this does not apply if\n \"use_reentrant=False\" is specified.\n\nWarning:\n If \"use_reentrant=True\" is specified, checkpointing currently\n", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"} {"text": "only supports \"torch.autograd.backward()\" and only if its\n inputs argument is not passed. \"torch.autograd.grad()\" is not\n supported. If \"use_reentrant=False\" is specified, checkpointing\n will work with \"torch.autograd.grad()\".\nParameters:\n * function -- describes what to run in the forward pass of\n the model or part of the model. It should also know how to\n handle the inputs passed as the tuple. For example, in LSTM,\n if the user passes \"(activation, hidden)\", \"function\" should\n correctly use the first input as \"activation\" and the second\n input as \"hidden\".\n * preserve_rng_state (bool, optional) -- Omit stashing\n and restoring the RNG state during each checkpoint. Default:\n \"True\"\n\n * use_reentrant (bool, optional) -- Use checkpointing\n implementation that requires re-entrant autograd. If\n \"use_reentrant=False\" is specified, \"checkpoint\" will use an\n", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"} {"text": "implementation that does not require re-entrant autograd. This\n allows \"checkpoint\" to support additional functionality, such\n as working as expected with \"torch.autograd.grad\" and support\n for passing keyword arguments into the checkpointed function.\n Note that future versions of PyTorch will default to\n \"use_reentrant=False\". Default: \"True\"\n * args -- tuple containing inputs to the \"function\"\n\nReturns:\n Output of running \"function\" on \"*args\"\ntorch.utils.checkpoint.checkpoint_sequential(functions, segments, input, use_reentrant=True, **kwargs)\nA helper function for checkpointing sequential models.\nSequential models execute a list of modules/functions in order\n (sequentially). Therefore, we can divide such a model into various\n segments and checkpoint each segment. All segments except the last\n will run in a \"torch.no_grad()\" manner, i.e., not storing the\n intermediate activations. The inputs of each checkpointed segment", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"} {"text": "will be saved for re-running the segment in the backward pass.\nSee \"checkpoint()\" on how checkpointing works.\nWarning:\n Checkpointing currently only supports \"torch.autograd.backward()\"\n and only if its inputs argument is not passed.\n \"torch.autograd.grad()\" is not supported.\n\nParameters:\n * functions -- A \"torch.nn.Sequential\" or the list of\n modules or functions (comprising the model) to run\n sequentially.\n * segments -- Number of chunks to create in the model\n\n * input -- A Tensor that is input to \"functions\"\n\n * preserve_rng_state (bool, optional) -- Omit stashing\n and restoring the RNG state during each checkpoint. Default:\n \"True\"\n\n * use_reentrant (bool, optional) -- Use checkpointing\n implementation that requires re-entrant autograd. If\n \"use_reentrant=False\" is specified, \"checkpoint\" will use an\n implementation that does not require re-entrant autograd. 
This\n", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"} {"text": "allows \"checkpoint\" to support additional functionality, such\n as working as expected with \"torch.autograd.grad\" and support\n for passing keyword arguments into the checkpointed function.\n Default: \"True\"\nReturns:\n Output of running \"functions\" sequentially on \"*inputs\"\n-[ Example ]-\n\n>>> model = nn.Sequential(...)\n>>> input_var = checkpoint_sequential(model, chunks, input_var)\n", "source": "https://pytorch.org/docs/stable/checkpoint.html", "category": "pytorch docs"} {"text": "torch.func\ntorch.func, previously known as \"functorch\", is JAX-like composable\nfunction transforms for PyTorch.\nNote:\nThis library is currently in beta. What this means is that the\n features generally work (unless otherwise documented) and we (the\n PyTorch team) are committed to bringing this library forward.\n However, the APIs may change based on user feedback and we don't have\n full coverage over PyTorch operations. If you have suggestions on the\n API or use-cases you'd like to be covered, please open a GitHub\n issue or reach out. We'd love to hear about how you're using the\n library.\nWhat are composable function transforms?\n\n* A \"function transform\" is a higher-order function that accepts a\n numerical function and returns a new function that computes a\n different quantity.\n\n* \"torch.func\" has auto-differentiation transforms (\"grad(f)\" returns\n a function that computes the gradient of \"f\"), a\n", "source": "https://pytorch.org/docs/stable/func.html", "category": "pytorch docs"} {"text": "vectorization/batching transform (\"vmap(f)\" returns a function that\n computes \"f\" over batches of inputs), and others.\n\n* These function transforms can compose with each other arbitrarily.\n For example, composing \"vmap(grad(f))\" computes a quantity called\n per-sample-gradients that stock PyTorch cannot efficiently compute\n today.\n\nWhy composable function transforms?\nThere are a number of use cases that are tricky to do in PyTorch\ntoday:\n\n* computing per-sample-gradients (or other per-sample quantities)\n\n* running ensembles of models on a single machine\n\n* efficiently batching together tasks in the inner-loop of MAML\n\n* efficiently computing Jacobians and Hessians\n\n* efficiently computing batched Jacobians and Hessians\n\nComposing \"vmap()\", \"grad()\", and \"vjp()\" transforms allows us to\nexpress the above without designing a separate subsystem for each.\nThis idea of composable function transforms comes from the JAX", "source": "https://pytorch.org/docs/stable/func.html", "category": "pytorch docs"} {"text": "framework.\nRead More\n\n* torch.func Whirlwind Tour\n\n* What is torch.func?\n\n* Why composable function transforms?\n\n* What are the transforms?\n\n* torch.func API Reference\n\n* Function Transforms\n\n* Utilities for working with torch.nn.Modules\n\n* UX Limitations\n\n* General limitations\n\n* torch.autograd APIs\n\n* vmap limitations\n\n* Randomness\n\n* Migrating from functorch to torch.func\n\n* function transforms\n\n* NN module utilities\n\n* functorch.compile\n", "source": "https://pytorch.org/docs/stable/func.html", "category": "pytorch docs"} {"text": "torch.ao.ns._numeric_suite\nWarning:\nThis module is an early prototype and is subject to change.\ntorch.ao.ns._numeric_suite.compare_weights(float_dict, quantized_dict)\nCompare the weights of the float module with its
corresponding\n quantized module. Return a dict with key corresponding to module\n names and each entry being a dictionary with two keys 'float' and\n 'quantized', containing the float and quantized weights. This dict\n can be used to compare and compute the quantization error of the\n weights of float and quantized models.\nExample usage:\n wt_compare_dict = compare_weights(\n float_model.state_dict(), qmodel.state_dict())\n for key in wt_compare_dict:\n print(\n key,\n compute_error(\n wt_compare_dict[key]['float'],\n wt_compare_dict[key]['quantized'].dequantize()\n )\n )\n\nParameters:", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"} {"text": " * float_dict (Dict[str, Any]) -- state dict of\n the float model\n * quantized_dict (Dict[str, Any]) -- state dict\n of the quantized model\n\nReturns:\n dict with key corresponding to module names and each entry being\n a dictionary with two keys 'float' and 'quantized', containing\n the float and quantized weights\nReturn type:\n weight_dict\ntorch.ao.ns._numeric_suite.get_logger_dict(mod, prefix='')\nTraverse the modules and save all logger stats into target dict.\n This is mainly used for quantization accuracy debugging.\nTypes of loggers supported:\n ShadowLogger: used to log the outputs of the quantized module\n and its matching float shadow module.\n OutputLogger: used to log the outputs of the modules.\nParameters:\n * mod (Module) -- module whose logger stats we want to save\n * prefix (str) -- prefix for the current module\n\nReturns:", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"} {"text": "the dictionary used to save all logger stats\nReturn type:\n target_dict\nclass torch.ao.ns._numeric_suite.Logger\nBase class for stats logging.\nforward(x)\nclass torch.ao.ns._numeric_suite.ShadowLogger\nClass used in Shadow module to record the outputs of the original\n and shadow modules.\nforward(x, y)\nclass torch.ao.ns._numeric_suite.OutputLogger\nClass used to log the outputs of the module.\nforward(x)\nclass torch.ao.ns._numeric_suite.Shadow(q_module, float_module, logger_cls)\nShadow module attaches the float module to its matching quantized\n module as the shadow. Then it uses the Logger module to process the\n outputs of both modules.\nParameters:\n * q_module -- module quantized from float_module that we\n want to shadow\n * float_module -- float module used to shadow q_module\n\n * logger_cls -- type of logger used to process the outputs\n of q_module and float_module. 
ShadowLogger or custom loggers\n", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"} {"text": "can be used.\nforward(*x)\n Return type:\n *Tensor*\n\nadd(x, y)\n Return type:\n *Tensor*\n\nadd_scalar(x, y)\n Return type:\n *Tensor*\n\nmul(x, y)\n Return type:\n *Tensor*\n\nmul_scalar(x, y)\n Return type:\n *Tensor*\n\ncat(x, dim=0)\n Return type:\n *Tensor*\n\nadd_relu(x, y)\n Return type:\n *Tensor*\n\ntorch.ao.ns._numeric_suite.prepare_model_with_stubs(float_module, q_module, module_swap_list, logger_cls)\nPrepare the model by attaching the float module to its matching\n quantized module as the shadow if the float module type is in\n module_swap_list.\nExample usage:\n prepare_model_with_stubs(float_model, q_model, module_swap_list, Logger)\n q_model(data)\n ob_dict = get_logger_dict(q_model)\n\nParameters:\n * float_module (Module) -- float module used to generate\n the q_module\n * **q_module** (*Module*) -- module quantized from float_module\n", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"} {"text": "\n\nmodule_swap_list (Set[type]) -- list of float\n module types to attach the shadow\n\nlogger_cls (Callable) -- type of logger to be used in\n shadow module to process the outputs of quantized module and\n its float shadow module\n\n\n\ntorch.ao.ns._numeric_suite.compare_model_stub(float_model, q_model, module_swap_list, *data, logger_cls=)\nCompare quantized module in a model with its floating point\n counterpart, feeding both of them the same input. Return a dict\n with key corresponding to module names and each entry being a\n dictionary with two keys 'float' and 'quantized', containing the\n output tensors of quantized and its matching float shadow module.\n This dict can be used to compare and compute the module level\n quantization error.\nThis function first call prepare_model_with_stubs() to swap the\n quantized module that we want to compare with the Shadow module,", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"} {"text": "which takes quantized module, corresponding float module and logger\n as input, and creates a forward path inside to make the float\n module to shadow quantized module sharing the same input. 
The\n logger can be customizable, default logger is ShadowLogger and it\n will save the outputs of the quantized module and float module that\n can be used to compute the module level quantization error.\nExample usage:\n module_swap_list = [torchvision.models.quantization.resnet.QuantizableBasicBlock]\n ob_dict = compare_model_stub(float_model,qmodel,module_swap_list, data)\n for key in ob_dict:\n print(key, compute_error(ob_dict[key]['float'], ob_dict[key]['quantized'].dequantize()))\n\nParameters:\n * float_model (Module) -- float model used to generate the\n q_model\n * **q_model** (*Module*) -- model quantized from float_model\n\n * **module_swap_list** (*Set**[**type**]*) -- list of float\n module types at which shadow modules will be attached.\n", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"} {"text": "\n\ndata -- input data used to run the prepared q_model\n\nlogger_cls -- type of logger to be used in shadow module\n to process the outputs of quantized module and its float\n shadow module\n\n\n\nReturn type:\n Dict[str, Dict]\ntorch.ao.ns._numeric_suite.get_matching_activations(float_module, q_module)\nFind the matching activation between float and quantized modules.\nParameters:\n * float_module (Module) -- float module used to generate\n the q_module\n * **q_module** (*Module*) -- module quantized from float_module\n\nReturns:\n dict with key corresponding to quantized module names and each\n entry being a dictionary with two keys 'float' and 'quantized',\n containing the matching float and quantized activations\nReturn type:\n act_dict\ntorch.ao.ns._numeric_suite.prepare_model_outputs(float_module, q_module, logger_cls=, allow_list=None)", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"} {"text": "Prepare the model by attaching the logger to both float module and\n quantized module if they are in the allow_list.\nParameters:\n * float_module (Module) -- float module used to generate\n the q_module\n * **q_module** (*Module*) -- module quantized from float_module\n\n * **logger_cls** -- type of logger to be attached to\n float_module and q_module\n\n * **allow_list** -- list of module types to attach logger\n\ntorch.ao.ns._numeric_suite.compare_model_outputs(float_model, q_model, *data, logger_cls=, allow_list=None)\nCompare output activations between float and quantized models at\n corresponding locations for the same input. Return a dict with key\n corresponding to quantized module names and each entry being a\n dictionary with two keys 'float' and 'quantized', containing the\n activations of quantized model and float model at matching\n locations. 
This dict can be used to compare and compute the", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"} {"text": "propagation quantization error.\nExample usage:\n act_compare_dict = compare_model_outputs(float_model, qmodel, data)\n for key in act_compare_dict:\n print(\n key,\n compute_error(\n act_compare_dict[key]['float'],\n act_compare_dict[key]['quantized'].dequantize()\n )\n )\n\nParameters:\n * float_model (Module) -- float model used to generate the\n q_model\n * **q_model** (*Module*) -- model quantized from float_model\n\n * **data** -- input data used to run the prepared float_model\n and q_model\n\n * **logger_cls** -- type of logger to be attached to\n float_module and q_module\n\n * **allow_list** -- list of module types to attach logger\n\nReturns:\n dict with key corresponding to quantized module names and each\n entry being a dictionary with two keys 'float' and 'quantized',\n containing the matching float and quantized activations\nReturn type:", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"} {"text": "Return type:\n act_compare_dict", "source": "https://pytorch.org/docs/stable/torch.ao.ns._numeric_suite.html", "category": "pytorch docs"} {"text": "torch.utils.model_zoo\nMoved to torch.hub.\ntorch.utils.model_zoo.load_url(url, model_dir=None, map_location=None, progress=True, check_hash=False, file_name=None)\nLoads the Torch serialized object at the given URL.\nIf downloaded file is a zip file, it will be automatically\n decompressed.\nIf the object is already present in model_dir, it's deserialized\n and returned. The default value of \"model_dir\" is\n \"/checkpoints\" where \"hub_dir\" is the directory returned\n by \"get_dir()\".\nParameters:\n * url (str) -- URL of the object to download\n * **model_dir** (*str**, **optional*) -- directory in which to\n save the object\n\n * **map_location** (*optional*) -- a function or a dict\n specifying how to remap storage locations (see torch.load)\n\n * **progress** (*bool**, **optional*) -- whether or not to\n display a progress bar to stderr. Default: True\n", "source": "https://pytorch.org/docs/stable/model_zoo.html", "category": "pytorch docs"} {"text": "\n\ncheck_hash (bool, optional) -- If True, the filename\n part of the URL should follow the naming convention\n \"filename-.ext\" where \"\" is the first eight or\n more digits of the SHA256 hash of the contents of the file.\n The hash is used to ensure unique names and to verify the\n contents of the file. Default: False\n\nfile_name (str, optional) -- name for the downloaded\n file. Filename from \"url\" will be used if not set.\n\n\n\nReturn type:\n Dict[str, Any]\n-[ Example ]-\n\n\n\nstate_dict = torch.hub.load_state_dict_from_url('https://s3.amazonaws.com/pytorch/models/resnet18-5c106cde.pth')\n\n\n", "source": "https://pytorch.org/docs/stable/model_zoo.html", "category": "pytorch docs"} {"text": "Warning:\nThere are known non-determinism issues for RNN functions on some\n versions of cuDNN and CUDA. You can enforce deterministic behavior\n by setting the following environment variables:On CUDA 10.1, set\n environment variable \"CUDA_LAUNCH_BLOCKING=1\". 
This may affect\n performance.On CUDA 10.2 or later, set environment variable (note\n the leading colon symbol) \"CUBLAS_WORKSPACE_CONFIG=:16:8\" or\n \"CUBLAS_WORKSPACE_CONFIG=:4096:2\".See the cuDNN 8 Release Notes for\n more information.", "source": "https://pytorch.org/docs/stable/cudnn_rnn_determinism.html", "category": "pytorch docs"} {"text": "PyTorch documentation\nPyTorch is an optimized tensor library for deep learning using GPUs\nand CPUs.\nFeatures described in this documentation are classified by release\nstatus:\nStable: These features will be maintained long-term and there\n should generally be no major performance limitations or gaps in\n documentation. We also expect to maintain backwards compatibility\n (although breaking changes can happen and notice will be given one\n release ahead of time).\nBeta: These features are tagged as Beta because the API may\n change based on user feedback, because the performance needs to\n improve, or because coverage across operators is not yet complete.\n For Beta features, we are committing to seeing the feature through\n to the Stable classification. We are not, however, committing to\n backwards compatibility.\nPrototype: These features are typically not available as part of\n binary distributions like PyPI or Conda, except sometimes behind", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"} {"text": "run-time flags, and are at an early stage for feedback and testing.\nCommunity\n^^^^^^^^^\n\n\nPyTorch Governance | Build + CI\n\n\nPyTorch Contribution Guide\n\n\nPyTorch Design Philosophy\n\n\nPyTorch Governance | Mechanics\n\n\nPyTorch Governance | Maintainers\n\n\nDeveloper Notes\n^^^^^^^^^^^^^^^\n\n\nCUDA Automatic Mixed Precision examples\n\n\nAutograd mechanics\n\n\nBroadcasting semantics\n\n\nCPU threading and TorchScript inference\n\n\nCUDA semantics\n\n\nDistributed Data Parallel\n\n\nExtending PyTorch\n\n\nExtending torch.func with autograd.Function\n\n\nFrequently Asked Questions\n\n\nGradcheck mechanics\n\n\nHIP (ROCm) semantics\n\n\nFeatures for large-scale deployments\n\n\nModules\n\n\nMPS backend\n\n\nMultiprocessing best practices\n\n\nNumerical accuracy\n\n\nReproducibility\n\n\nSerialization semantics\n\n\nWindows FAQ\n\n\nLanguage Bindings\n^^^^^^^^^^^^^^^^^\n\n\nC++\n\n\nJavadoc\n\n\ntorch::deploy\n\n\nPython API\n^^^^^^^^^^\n\n\ntorch\n\n\nTensors\n\n\nGenerators\n\n\nRandom sampling\n\n\nSerialization\n\n\nParallelism\n\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"} {"text": "\n\nSerialization\n\n\nParallelism\n\n\nLocally disabling gradient computation\n\n\nMath operations\n\n\nUtilities\n\n\nSymbolic Numbers\n\n\nOptimizations\n\n\nOperator Tags\n\n\nEngine Configuration\n\n\ntorch.nn\n\n\nParameter\n\n\nUninitializedParameter\n\n\nUninitializedBuffer\n\n\nContainers\n\n\nConvolution Layers\n\n\nPooling layers\n\n\nPadding Layers\n\n\nNon-linear Activations (weighted sum, nonlinearity)\n\n\nNon-linear Activations (other)\n\n\nNormalization Layers\n\n\nRecurrent Layers\n\n\nTransformer Layers\n\n\nLinear Layers\n\n\nDropout Layers\n\n\nSparse Layers\n\n\nDistance Functions\n\n\nLoss Functions\n\n\nVision Layers\n\n\nShuffle Layers\n\n\nDataParallel Layers (multi-GPU, distributed)\n\n\nUtilities\n\n\nQuantized Functions\n\n\nLazy Modules Initialization\n\n\ntorch.nn.functional\n\n\nConvolution functions\n\n\nPooling functions\n\n\nNon-linear activation functions\n\n\nLinear functions\n\n\nDropout functions\n\n\nSparse functions\n\n\nDistance 
functions\n\n\nLoss functions\n\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"} {"text": "\n\nDistance functions\n\n\nLoss functions\n\n\nVision functions\n\n\nDataParallel functions (multi-GPU, distributed)\n\n\ntorch.Tensor\n\n\nData types\n\n\nInitializing and basic operations\n\n\nTensor class reference\n\n\nTensor Attributes\n\n\ntorch.dtype\n\n\ntorch.device\n\n\ntorch.layout\n\n\ntorch.memory_format\n\n\nTensor Views\n\n\ntorch.amp\n\n\nAutocasting\n\n\nGradient Scaling\n\n\nAutocast Op Reference\n\n\ntorch.autograd\n\n\ntorch.autograd.backward\n\n\ntorch.autograd.grad\n\n\nForward-mode Automatic Differentiation\n\n\nFunctional higher level API\n\n\nLocally disabling gradient computation\n\n\nDefault gradient layouts\n\n\nIn-place operations on Tensors\n\n\nVariable (deprecated)\n\n\nTensor autograd functions\n\n\nFunction\n\n\nContext method mixins\n\n\nNumerical gradient checking\n\n\nProfiler\n\n\nAnomaly detection\n\n\nAutograd graph\n\n\ntorch.library\n\n\ntorch.cuda\n\n\nStreamContext\n\n\ntorch.cuda.can_device_access_peer\n\n\ntorch.cuda.current_blas_handle\n\n\ntorch.cuda.current_device\n\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"} {"text": "\n\ntorch.cuda.current_device\n\n\ntorch.cuda.current_stream\n\n\ntorch.cuda.default_stream\n\n\ndevice\n\n\ntorch.cuda.device_count\n\n\ndevice_of\n\n\ntorch.cuda.get_arch_list\n\n\ntorch.cuda.get_device_capability\n\n\ntorch.cuda.get_device_name\n\n\ntorch.cuda.get_device_properties\n\n\ntorch.cuda.get_gencode_flags\n\n\ntorch.cuda.get_sync_debug_mode\n\n\ntorch.cuda.init\n\n\ntorch.cuda.ipc_collect\n\n\ntorch.cuda.is_available\n\n\ntorch.cuda.is_initialized\n\n\ntorch.cuda.memory_usage\n\n\ntorch.cuda.set_device\n\n\ntorch.cuda.set_stream\n\n\ntorch.cuda.set_sync_debug_mode\n\n\ntorch.cuda.stream\n\n\ntorch.cuda.synchronize\n\n\ntorch.cuda.utilization\n\n\ntorch.cuda.OutOfMemoryError\n\n\nRandom Number Generator\n\n\nCommunication collectives\n\n\nStreams and events\n\n\nGraphs (beta)\n\n\nMemory management\n\n\nNVIDIA Tools Extension (NVTX)\n\n\nJiterator (beta)\n\n\nStream Sanitizer (prototype)\n\n\ntorch.backends\n\n\ntorch.backends.cuda\n\n\ntorch.backends.cudnn\n\n\ntorch.backends.mps\n\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"} {"text": "\n\ntorch.backends.cudnn\n\n\ntorch.backends.mps\n\n\ntorch.backends.mkl\n\n\ntorch.backends.mkldnn\n\n\ntorch.backends.openmp\n\n\ntorch.backends.opt_einsum\n\n\ntorch.backends.xeon\n\n\ntorch.distributed\n\n\nBackends\n\n\nBasics\n\n\nInitialization\n\n\nPost-Initialization\n\n\nDistributed Key-Value Store\n\n\nGroups\n\n\nPoint-to-point communication\n\n\nSynchronous and asynchronous collective operations\n\n\nCollective functions\n\n\nProfiling Collective Communication\n\n\nMulti-GPU collective functions\n\n\nThird-party backends\n\n\nLaunch utility\n\n\nSpawn utility\n\n\nDebugging \"torch.distributed\" applications\n\n\nLogging\n\n\ntorch.distributed.algorithms.join\n\n\ntorch.distributed.elastic\n\n\nGet Started\n\n\nDocumentation\n\n\ntorch.distributed.fsdp\n\n\ntorch.distributed.optim\n\n\ntorch.distributed.tensor.parallel\n\n\ntorch.distributed.checkpoint\n\n\ntorch.distributions\n\n\nScore function\n\n\nPathwise derivative\n\n\nDistribution\n\n\nExponentialFamily\n\n\nBernoulli\n\n\nBeta\n\n\nBinomial\n\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"} {"text": 
"\n\nBernoulli\n\n\nBeta\n\n\nBinomial\n\n\nCategorical\n\n\nCauchy\n\n\nChi2\n\n\nContinuousBernoulli\n\n\nDirichlet\n\n\nExponential\n\n\nFisherSnedecor\n\n\nGamma\n\n\nGeometric\n\n\nGumbel\n\n\nHalfCauchy\n\n\nHalfNormal\n\n\nIndependent\n\n\nKumaraswamy\n\n\nLKJCholesky\n\n\nLaplace\n\n\nLogNormal\n\n\nLowRankMultivariateNormal\n\n\nMixtureSameFamily\n\n\nMultinomial\n\n\nMultivariateNormal\n\n\nNegativeBinomial\n\n\nNormal\n\n\nOneHotCategorical\n\n\nPareto\n\n\nPoisson\n\n\nRelaxedBernoulli\n\n\nLogitRelaxedBernoulli\n\n\nRelaxedOneHotCategorical\n\n\nStudentT\n\n\nTransformedDistribution\n\n\nUniform\n\n\nVonMises\n\n\nWeibull\n\n\nWishart\n\n\nKL Divergence\n\n\nTransforms\n\n\nConstraints\n\n\nConstraint Registry\n\n\ntorch._dynamo\n\n\ntorch.fft\n\n\nFast Fourier Transforms\n\n\nHelper Functions\n\n\ntorch.func\n\n\nWhat are composable function transforms?\n\n\nWhy composable function transforms?\n\n\nRead More\n\n\ntorch.futures\n\n\ntorch.fx\n\n\nOverview\n\n\nWriting Transformations\n\n\nDebugging\n\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"} {"text": "\n\nWriting Transformations\n\n\nDebugging\n\n\nLimitations of Symbolic Tracing\n\n\nAPI Reference\n\n\ntorch.hub\n\n\nPublishing models\n\n\nLoading models from Hub\n\n\ntorch.jit\n\n\nTorchScript Language Reference\n\n\nCreating TorchScript Code\n\n\nMixing Tracing and Scripting\n\n\nTorchScript Language\n\n\nBuilt-in Functions and Modules\n\n\nDebugging\n\n\nFrequently Asked Questions\n\n\nKnown Issues\n\n\nAppendix\n\n\ntorch.linalg\n\n\nMatrix Properties\n\n\nDecompositions\n\n\nSolvers\n\n\nInverses\n\n\nMatrix Functions\n\n\nMatrix Products\n\n\nTensor Operations\n\n\nMisc\n\n\nExperimental Functions\n\n\ntorch.monitor\n\n\nAPI Reference\n\n\ntorch.signal\n\n\ntorch.signal.windows\n\n\ntorch.special\n\n\nFunctions\n\n\ntorch.overrides\n\n\nFunctions\n\n\ntorch.package\n\n\nTutorials\n\n\nHow do I...\n\n\nExplanation\n\n\nAPI Reference\n\n\ntorch.profiler\n\n\nOverview\n\n\nAPI Reference\n\n\nIntel Instrumentation and Tracing Technology APIs\n\n\ntorch.nn.init\n\n\ntorch.onnx\n\n\nExample: AlexNet from PyTorch to ONNX\n\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"} {"text": "\n\nExample: AlexNet from PyTorch to ONNX\n\n\nTracing vs Scripting\n\n\nAvoiding Pitfalls\n\n\nLimitations\n\n\nAdding support for operators\n\n\nFrequently Asked Questions\n\n\nContributing / developing\n\n\nFunctions\n\n\nClasses\n\n\ntorch.onnx diagnostics\n\n\nOverview\n\n\nDiagnostic Rules\n\n\nAPI Reference\n\n\ntorch.optim\n\n\nHow to use an optimizer\n\n\nBase class\n\n\nAlgorithms\n\n\nHow to adjust learning rate\n\n\nStochastic Weight Averaging\n\n\nComplex Numbers\n\n\nCreating Complex Tensors\n\n\nTransition from the old representation\n\n\nAccessing real and imag\n\n\nAngle and abs\n\n\nLinear Algebra\n\n\nSerialization\n\n\nAutograd\n\n\nDDP Communication Hooks\n\n\nHow to Use a Communication Hook?\n\n\nWhat Does a Communication Hook Operate On?\n\n\nDefault Communication Hooks\n\n\nPowerSGD Communication Hook\n\n\nDebugging Communication Hooks\n\n\nCheckpointing of Communication Hooks\n\n\nAcknowledgements\n\n\nPipeline Parallelism\n\n\nModel Parallelism using multiple GPUs\n\n\nPipelined Execution\n\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"} {"text": "\n\nPipelined Execution\n\n\nPipe APIs in PyTorch\n\n\nTutorials\n\n\nAcknowledgements\n\n\nQuantization\n\n\nIntroduction to Quantization\n\n\nQuantization 
API Summary\n\n\nQuantization Stack\n\n\nQuantization Support Matrix\n\n\nQuantization API Reference\n\n\nQuantization Backend Configuration\n\n\nQuantization Accuracy Debugging\n\n\nQuantization Customizations\n\n\nBest Practices\n\n\nFrequently Asked Questions\n\n\nCommon Errors\n\n\nDistributed RPC Framework\n\n\nBasics\n\n\nRPC\n\n\nRRef\n\n\nRemoteModule\n\n\nDistributed Autograd Framework\n\n\nDistributed Optimizer\n\n\nDesign Notes\n\n\nTutorials\n\n\ntorch.random\n\n\ntorch.masked\n\n\nIntroduction\n\n\nSupported Operators\n\n\ntorch.nested\n\n\nIntroduction\n\n\nConstruction\n\n\nsize\n\n\nunbind\n\n\nNested tensor constructor and conversion functions\n\n\nSupported operations\n\n\ntorch.sparse\n\n\nWhy and when to use sparsity\n\n\nFunctionality overview\n\n\nOperator overview\n\n\nSparse COO tensors\n\n\nSparse Compressed Tensors\n\n\nSupported operations\n\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"} {"text": "\n\nSupported operations\n\n\ntorch.Storage\n\n\ntorch.testing\n\n\ntorch.utils.benchmark\n\n\ntorch.utils.bottleneck\n\n\ntorch.utils.checkpoint\n\n\ntorch.utils.cpp_extension\n\n\ntorch.utils.data\n\n\nDataset Types\n\n\nData Loading Order and \"Sampler\"\n\n\nLoading Batched and Non-Batched Data\n\n\nSingle- and Multi-process Data Loading\n\n\nMemory Pinning\n\n\ntorch.utils.jit\n\n\ntorch.utils.dlpack\n\n\ntorch.utils.mobile_optimizer\n\n\ntorch.utils.model_zoo\n\n\ntorch.utils.tensorboard\n\n\nType Info\n\n\ntorch.finfo\n\n\ntorch.iinfo\n\n\nNamed Tensors\n\n\nCreating named tensors\n\n\nNamed dimensions\n\n\nName propagation semantics\n\n\nExplicit alignment by names\n\n\nManipulating dimensions\n\n\nAutograd support\n\n\nCurrently supported operations and subsystems\n\n\nNamed tensor API reference\n\n\nNamed Tensors operator coverage\n\n\nKeeps input names\n\n\nRemoves dimensions\n\n\nUnifies names from inputs\n\n\nPermutes dimensions\n\n\nContracts away dims\n\n\nFactory functions\n\n\nout function and in-place variants\n\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"} {"text": "\n\nout function and in-place variants\n\n\ntorch.config\n\n\nLibraries\n^^^^^^^^^\n\n\ntorchaudio\n\n\nTorchData\n\n\nTorchRec\n\n\nTorchServe\n\n\ntorchtext\n\n\ntorchvision\n\n\nPyTorch on XLA Devices\n\n\nIndices and tables\n* Index\n\nModule Index\n", "source": "https://pytorch.org/docs/stable/index.html", "category": "pytorch docs"} {"text": "Events\nModule contains events processing mechanisms that are integrated with\nthe standard python logging.\nExample of usage:\nfrom torch.distributed.elastic import events\n event = events.Event(name=\"test_event\", source=events.EventSource.WORKER, metadata={...})\n events.get_logging_handler(destination=\"console\").info(event)\nAPI Methods\ntorch.distributed.elastic.events.record(event, destination='null')\ntorch.distributed.elastic.events.get_logging_handler(destination='null')\nReturn type:\n Handler\nEvent Objects\nclass torch.distributed.elastic.events.api.Event(name, source, timestamp=0, metadata=)\nThe class represents the generic event that occurs during the\n torchelastic job execution. The event can be any kind of meaningful\n action.\nParameters:\n * name (str) -- event name.\n * **source** (*EventSource*) -- the event producer, e.g. 
agent\n or worker\n", "source": "https://pytorch.org/docs/stable/elastic/events.html", "category": "pytorch docs"} {"text": "or worker\n * **timestamp** (*int*) -- timestamp in milliseconds when event\n occured.\n\n * **metadata** (*Dict**[**str**, **Optional**[**Union**[**str**,\n **int**, **float**, **bool**]**]**]*) -- additional data that\n is associated with the event.\n\nclass torch.distributed.elastic.events.api.EventSource(value)\nKnown identifiers of the event producers.\ntorch.distributed.elastic.events.api.EventMetadataValue\nalias of \"Optional\"[\"Union\"[\"str\", \"int\", \"float\", \"bool\"]]", "source": "https://pytorch.org/docs/stable/elastic/events.html", "category": "pytorch docs"} {"text": "Metrics\nMetrics API\nOverview:\nThe metrics API in torchelastic is used to publish telemetry metrics.\nIt is designed to be used by torchelastic's internal modules to\npublish metrics for the end user with the goal of increasing\nvisibility and helping with debugging. However you may use the same\nAPI in your jobs to publish metrics to the same metrics \"sink\".\nA \"metric\" can be thought of as timeseries data and is uniquely\nidentified by the string-valued tuple \"(metric_group, metric_name)\".\ntorchelastic makes no assumptions about what a \"metric_group\" is and\nwhat relationship it has with \"metric_name\". It is totally up to the\nuser to use these two fields to uniquely identify a metric.\nNote:\nThe metric group \"torchelastic\" is reserved by torchelastic for\n platform level metrics that it produces. For instance torchelastic\n may output the latency (in milliseconds) of a re-rendezvous\n operation from the agent as \"(torchelastic,\n agent.rendezvous.duration.ms)\"", "source": "https://pytorch.org/docs/stable/elastic/metrics.html", "category": "pytorch docs"} {"text": "agent.rendezvous.duration.ms)\"\nA sensible way to use metric groups is to map them to a stage or\nmodule in your job. You may also encode certain high level properties\nthe job such as the region or stage (dev vs prod).\nPublish Metrics:\nUsing torchelastic's metrics API is similar to using python's logging\nframework. 
You first have to configure a metrics handler before trying\nto add metric data.\nThe example below measures the latency for the \"calculate()\" function.\nimport time\n import torch.distributed.elastic.metrics as metrics\n# makes all metrics other than the one from \"my_module\" to go /dev/null\n metrics.configure(metrics.NullMetricsHandler())\n metrics.configure(metrics.ConsoleMetricsHandler(), \"my_module\")\ndef my_method():\n start = time.time()\n calculate()\n end = time.time()\n metrics.put_metric(\"calculate_latency\", int(end-start), \"my_module\")\nYou may also use the torch.distributed.elastic.metrics.prof` decorator\nto conveniently and succinctly profile functions", "source": "https://pytorch.org/docs/stable/elastic/metrics.html", "category": "pytorch docs"} {"text": "to conveniently and succinctly profile functions\n# -- in module examples.foobar --\nimport torch.distributed.elastic.metrics as metrics\nmetrics.configure(metrics.ConsoleMetricsHandler(), \"foobar\")\n metrics.configure(metrics.ConsoleMetricsHandler(), \"Bar\")\n@metrics.prof\n def foo():\n pass\nclass Bar():\n @metrics.prof\n def baz():\n pass\n\n\"@metrics.prof\" will publish the following metrics\n.success - 1 if the function finished successfully\n .failure - 1 if the function threw an exception\n .duration.ms - function duration in milliseconds\nConfiguring Metrics Handler:\ntorch.distributed.elastic.metrics.MetricHandler is responsible for\nemitting the added metric values to a particular destination. Metric\ngroups can be configured with different metric handlers.\nBy default torchelastic emits all metrics to \"/dev/null\". By adding", "source": "https://pytorch.org/docs/stable/elastic/metrics.html", "category": "pytorch docs"} {"text": "the following configuration metrics, \"torchelastic\" and \"my_app\"\nmetric groups will be printed out to console.\nimport torch.distributed.elastic.metrics as metrics\nmetrics.configure(metrics.ConsoleMetricHandler(), group = \"torchelastic\")\n metrics.configure(metrics.ConsoleMetricHandler(), group = \"my_app\")\nWriting a Custom Metric Handler:\nIf you want your metrics to be emitted to a custom location, implement\nthe torch.distributed.elastic.metrics.MetricHandler interface and\nconfigure your job to use your custom metric handler.\nBelow is a toy example that prints the metrics to \"stdout\"\nimport torch.distributed.elastic.metrics as metrics\nclass StdoutMetricHandler(metrics.MetricHandler):\n def emit(self, metric_data):\n ts = metric_data.timestamp\n group = metric_data.group_name\n name = metric_data.name\n value = metric_data.value\n print(f\"[{ts}][{group}]: {name}={value}\")\nmetrics.configure(StdoutMetricHandler(), group=\"my_app\")", "source": "https://pytorch.org/docs/stable/elastic/metrics.html", "category": "pytorch docs"} {"text": "Now all metrics in the group \"my_app\" will be printed to stdout as:\n[1574213883.4182858][my_app]: my_metric=\n [1574213940.5237644][my_app]: my_metric=\nMetric Handlers\nBelow are the metric handlers that come included with torchelastic.\nclass torch.distributed.elastic.metrics.api.MetricHandler\nclass torch.distributed.elastic.metrics.api.ConsoleMetricHandler\nclass torch.distributed.elastic.metrics.api.NullMetricHandler\nMethods\ntorch.distributed.elastic.metrics.configure(handler, group=None)\ntorch.distributed.elastic.metrics.prof(fn=None, group='torchelastic')\n@profile decorator publishes duration.ms, count, success, failure\n metrics for the function that it decorates. 
The metric name\n defaults to the qualified name (\"class_name.def_name\") of the\n function. If the function does not belong to a class, it uses the\n leaf module name instead.\nUsage\n @metrics.prof\n def x():\n pass\n\n @metrics.prof(group=\"agent\")\n", "source": "https://pytorch.org/docs/stable/elastic/metrics.html", "category": "pytorch docs"} {"text": "pass\n @metrics.prof(group=\"agent\")\n def y():\n pass\n\ntorch.distributed.elastic.metrics.put_metric(metric_name, metric_value, metric_group='torchelastic')\nPublishes a metric data point.\nUsage\n put_metric(\"metric_name\", 1)\n put_metric(\"metric_name\", 1, \"metric_group_name\")\n", "source": "https://pytorch.org/docs/stable/elastic/metrics.html", "category": "pytorch docs"} {"text": "Quickstart\nTo launch a fault-tolerant job, run the following on all nodes.\ntorchrun\n --nnodes=NUM_NODES\n --nproc_per_node=TRAINERS_PER_NODE\n --max_restarts=NUM_ALLOWED_FAILURES\n --rdzv_id=JOB_ID\n --rdzv_backend=c10d\n --rdzv_endpoint=HOST_NODE_ADDR\n YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...)\nTo launch an elastic job, run the following on at least \"MIN_SIZE\"\nnodes and at most \"MAX_SIZE\" nodes.\ntorchrun\n --nnodes=MIN_SIZE:MAX_SIZE\n --nproc_per_node=TRAINERS_PER_NODE\n --max_restarts=NUM_ALLOWED_FAILURES_OR_MEMBERSHIP_CHANGES\n --rdzv_id=JOB_ID\n --rdzv_backend=c10d\n --rdzv_endpoint=HOST_NODE_ADDR\n YOUR_TRAINING_SCRIPT.py (--arg1 ... train script args...)\nNote:\nTorchElastic models failures as membership changes. When a node\n fails, this is treated as a \"scale down\" event. When the failed node\n is replaced by the scheduler, it is a \"scale up\" event. Hence for", "source": "https://pytorch.org/docs/stable/elastic/quickstart.html", "category": "pytorch docs"} {"text": "both fault tolerant and elastic jobs, \"--max_restarts\" is used to\n control the total number of restarts before giving up, regardless of\n whether the restart was caused due to a failure or a scaling event.\n\"HOST_NODE_ADDR\", in form [:] (e.g.\nnode1.example.com:29400), specifies the node and the port on which the\nC10d rendezvous backend should be instantiated and hosted. It can be\nany node in your training cluster, but ideally you should pick a node\nthat has a high bandwidth.\nNote:\nIf no port number is specified \"HOST_NODE_ADDR\" defaults to 29400.\nNote:\nThe \"--standalone\" option can be passed to launch a single node job\n with a sidecar rendezvous backend. You don\u00e2\u0080\u0099t have to pass \"--\n rdzv_id\", \"--rdzv_endpoint\", and \"--rdzv_backend\" when the \"--\n standalone\" option is used.\nNote:\nLearn more about writing your distributed training script here.\nIf \"torchrun\" does not meet your requirements you may use our APIs\ndirectly for more powerful customization. Start by taking a look at", "source": "https://pytorch.org/docs/stable/elastic/quickstart.html", "category": "pytorch docs"} {"text": "the elastic agent API.", "source": "https://pytorch.org/docs/stable/elastic/quickstart.html", "category": "pytorch docs"} {"text": "Examples\nPlease refer to the elastic/examples README.", "source": "https://pytorch.org/docs/stable/elastic/examples.html", "category": "pytorch docs"} {"text": "Customization\nThis section describes how to customize TorchElastic to fit your\nneeds.\nLauncher\nThe launcher program that ships with TorchElastic should be sufficient\nfor most use-cases (see torchrun (Elastic Launch)). 
You can implement\na custom launcher by programmatically creating an agent and passing it\nspecs for your workers as shown below.\n# my_launcher.py\nif name == \"main\":\n args = parse_args(sys.argv[1:])\n rdzv_handler = RendezvousHandler(...)\n spec = WorkerSpec(\n local_world_size=args.nproc_per_node,\n fn=trainer_entrypoint_fn,\n args=(trainer_entrypoint_fn args.fn_args,...),\n rdzv_handler=rdzv_handler,\n max_restarts=args.max_restarts,\n monitor_interval=args.monitor_interval,\n )\n agent = LocalElasticAgent(spec, start_method=\"spawn\")\n try:\n run_result = agent.run()\n if run_result.is_failed():\n print(f\"worker 0 failed with: run_result.failures[0]\")\n else:\n", "source": "https://pytorch.org/docs/stable/elastic/customization.html", "category": "pytorch docs"} {"text": "else:\n print(f\"worker 0 return value is: run_result.return_values[0]\")\n except Exception ex:\n # handle exception\nRendezvous Handler\nTo implement your own rendezvous, extend\n\"torch.distributed.elastic.rendezvous.RendezvousHandler\" and implement\nits methods.\nWarning:\nRendezvous handlers are tricky to implement. Before you begin make\n sure you completely understand the properties of rendezvous. Please\n refer to Rendezvous for more information.\nOnce implemented you can pass your custom rendezvous handler to the\nworker spec when creating the agent.\nspec = WorkerSpec(\n rdzv_handler=MyRendezvousHandler(params),\n ...\n )\n elastic_agent = LocalElasticAgent(spec, start_method=start_method)\n elastic_agent.run(spec.role)\nMetric Handler\nTorchElastic emits platform level metrics (see Metrics). By default\nmetrics are emitted to /dev/null so you will not see them. To have", "source": "https://pytorch.org/docs/stable/elastic/customization.html", "category": "pytorch docs"} {"text": "the metrics pushed to a metric handling service in your\ninfrastructure, implement a\ntorch.distributed.elastic.metrics.MetricHandler and configure it\nin your custom launcher.\n# my_launcher.py\nimport torch.distributed.elastic.metrics as metrics\nclass MyMetricHandler(metrics.MetricHandler):\n def emit(self, metric_data: metrics.MetricData):\n # push metric_data to your metric sink\ndef main():\n metrics.configure(MyMetricHandler())\n spec = WorkerSpec(...)\n agent = LocalElasticAgent(spec)\n agent.run()\n\nEvents Handler\nTorchElastic supports events recording (see Events). The events module\ndefines API that allows you to record events and implement custom\nEventHandler. EventHandler is used for publishing events produced\nduring torchelastic execution to different sources, e.g. AWS\nCloudWatch. By default it uses\ntorch.distributed.elastic.events.NullEventHandler that ignores\nevents. 
To configure custom events handler you need to implement", "source": "https://pytorch.org/docs/stable/elastic/customization.html", "category": "pytorch docs"} {"text": "torch.distributed.elastic.events.EventHandler interface and\nconfigure it in your custom launcher.\n# my_launcher.py\nimport torch.distributed.elastic.events as events\nclass MyEventHandler(events.EventHandler):\n def record(self, event: events.Event):\n # process event\ndef main():\n events.configure(MyEventHandler())\n spec = WorkerSpec(...)\n agent = LocalElasticAgent(spec)\n agent.run()\n", "source": "https://pytorch.org/docs/stable/elastic/customization.html", "category": "pytorch docs"} {"text": "Multiprocessing\nLibrary that launches and manages \"n\" copies of worker subprocesses\neither specified by a function or a binary.\nFor functions, it uses \"torch.multiprocessing\" (and therefore python\n\"multiprocessing\") to spawn/fork worker processes. For binaries it\nuses python \"subprocessing.Popen\" to create worker processes.\nUsage 1: Launching two trainers as a function\nfrom torch.distributed.elastic.multiprocessing import Std, start_processes\ndef trainer(a, b, c):\n pass # train\n# runs two trainers\n # LOCAL_RANK=0 trainer(1,2,3)\n # LOCAL_RANK=1 trainer(4,5,6)\n ctx = start_processes(\n name=\"trainer\",\n entrypoint=trainer,\n args={0: (1,2,3), 1: (4,5,6)},\n envs={0: {\"LOCAL_RANK\": 0}, 1: {\"LOCAL_RANK\": 1}},\n log_dir=\"/tmp/foobar\",\n redirects=Std.ALL, # write all worker stdout/stderr to a log file\n tee={0: Std.ERR}, # tee only local rank 0's stderr to console\n )", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"} {"text": ")\n# waits for all copies of trainer to finish\n ctx.wait()\nUsage 2: Launching 2 echo workers as a binary\n# same as invoking\n # echo hello\n # echo world > stdout.log\n ctx = start_processes(\n name=\"echo\"\n entrypoint=\"echo\",\n log_dir=\"/tmp/foobar\",\n args={0: \"hello\", 1: \"world\"},\n redirects={1: Std.OUT},\n )\nJust like \"torch.multiprocessing\", the return value of the function\n\"start_processes()\" is a process context (\"api.PContext\"). If a\nfunction was launched, a \"api.MultiprocessContext\" is returned and if\na binary was launched a \"api.SubprocessContext\" is returned. Both are\nspecific implementations of the parent \"api.PContext\" class.\nStarting Multiple Workers\ntorch.distributed.elastic.multiprocessing.start_processes(name, entrypoint, args, envs, log_dir, start_method='spawn', redirects=Std.NONE, tee=Std.NONE)\nStarts \"n\" copies of \"entrypoint\" processes with the provided", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"} {"text": "options. \"entrypoint\" is either a \"Callable\" (function) or a \"str\"\n (binary). The number of copies is determined by the number of\n entries for \"args\" and \"envs\" arguments, which need to have the\n same key set.\n\"args\" and \"env\" parameters are the arguments and environment\n variables to pass down to the entrypoint mapped by the replica\n index (local rank). All local ranks must be accounted for. That is,\n the keyset should be \"{0,1,...,(nprocs-1)}\".\nNote:\n When the \"entrypoint\" is a binary (\"str\"), \"args\" can only be\n strings. If any other type is given, then it is casted to a\n string representation (e.g. \"str(arg1)\"). 
Furthermore, a binary\n failure will only write an \"error.json\" error file if the main\n function is annotated with\n \"torch.distributed.elastic.multiprocessing.errors.record\". For\n function launches, this is done by default and there is no need\n to manually annotate with the \"@record\" annotation.\n", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"} {"text": "\"redirects\" and \"tee\" are bitmasks specifying which std stream(s)\n to redirect to a log file in the \"log_dir\". Valid mask values are\n defined in \"Std\". To redirect/tee only certain local ranks, pass\n \"redirects\" as a map with the key as the local rank to specify the\n redirect behavior for. Any missing local ranks will default to\n \"Std.NONE\".\n\"tee\" acts like the unix \"tee\" command in that it redirects +\n prints to console. To avoid worker stdout/stderr from printing to\n console, use the \"redirects\" parameter.\nFor each process, the \"log_dir\" will contain:\n\n\n\"{local_rank}/error.json\": if the process failed, a file with\n the error info\n\n\n\"{local_rank}/stdout.json\": if \"redirect & STDOUT == STDOUT\"\n\n\n\"{local_rank}/stderr.json\": if \"redirect & STDERR == STDERR\"\n\n\nNote:\n It is expected that the \"log_dir\" exists, is empty, and is a\n directory.\n\nExample:\n log_dir = \"/tmp/test\"\n\n # ok; two copies of foo: foo(\"bar0\"), foo(\"bar1\")\n", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"} {"text": "start_processes(\n name=\"trainer\",\n entrypoint=foo,\n args:{0:(\"bar0\",), 1:(\"bar1\",),\n envs:{0:{}, 1:{}},\n log_dir=log_dir\n )\n # invalid; envs missing for local rank 1\n start_processes(\n name=\"trainer\",\n entrypoint=foo,\n args:{0:(\"bar0\",), 1:(\"bar1\",),\n envs:{0:{}},\n log_dir=log_dir\n )\n\n # ok; two copies of /usr/bin/touch: touch file1, touch file2\n start_processes(\n name=\"trainer\",\n entrypoint=\"/usr/bin/touch\",\n args:{0:(\"file1\",), 1:(\"file2\",),\n envs:{0:{}, 1:{}},\n log_dir=log_dir\n )\n\n # caution; arguments casted to string, runs:\n # echo \"1\" \"2\" \"3\" and echo \"[1, 2, 3]\"\n start_processes(\n name=\"trainer\",\n entrypoint=\"/usr/bin/echo\",\n args:{0:(1,2,3), 1:([1,2,3],),\n envs:{0:{}, 1:{}},\n log_dir=log_dir\n )\n\nParameters:\n * name (str) -- a human readable short name that describes", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"} {"text": "what the processes are (used as header when tee'ing\n stdout/stderr outputs)\n * **entrypoint** (*Union**[**Callable**, **str**]*) -- either a\n \"Callable\" (function) or \"cmd\" (binary)\n\n * **args** (*Dict**[**int**, **Tuple**]*) -- arguments to each\n replica\n\n * **envs** (*Dict**[**int**, **Dict**[**str**, **str**]**]*) --\n env vars to each replica\n\n * **log_dir** (*str*) -- directory used to write log files\n\n * **start_method** (*str*) -- multiprocessing start method\n (spawn, fork, forkserver) ignored for binaries\n\n * **redirects** (*Union**[**Std**, **Dict**[**int**,\n **Std**]**]*) -- which std streams to redirect to a log file\n\n * **tee** (*Union**[**Std**, **Dict**[**int**, **Std**]**]*) --\n which std streams to redirect + print to console\n\nReturn type:\n PContext\nProcess Context", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"} {"text": "Process Context\nclass torch.distributed.elastic.multiprocessing.api.PContext(name, entrypoint, args, envs, 
stdouts, stderrs, tee_stdouts, tee_stderrs, error_files)\nThe base class that standardizes operations over a set of processes\n that are launched via different mechanisms. The name \"PContext\" is\n intentional to disambiguate with\n \"torch.multiprocessing.ProcessContext\".\nWarning:\n stdouts and stderrs should ALWAYS be a superset of tee_stdouts\n and tee_stderrs (respectively) this is b/c tee is implemented as\n a redirect + tail -f \n\nclass torch.distributed.elastic.multiprocessing.api.MultiprocessContext(name, entrypoint, args, envs, stdouts, stderrs, tee_stdouts, tee_stderrs, error_files, start_method)\n\"PContext\" holding worker processes invoked as a function.\nclass torch.distributed.elastic.multiprocessing.api.SubprocessContext(name, entrypoint, args, envs, stdouts, stderrs, tee_stdouts, tee_stderrs, error_files)", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"} {"text": "\"PContext\" holding worker processes invoked as a binary.\nclass torch.distributed.elastic.multiprocessing.api.RunProcsResult(return_values=, failures=, stdouts=, stderrs=)\nResults of a completed run of processes started with\n \"start_processes()\". Returned by \"PContext\".\nNote the following:\n\n\nAll fields are mapped by local rank\n\n\n\"return_values\" - only populated for functions (not the\n binaries).\n\n\n\"stdouts\" - path to stdout.log (empty string if no redirect)\n\n\n\"stderrs\" - path to stderr.log (empty string if no redirect)\n\n", "source": "https://pytorch.org/docs/stable/elastic/multiprocessing.html", "category": "pytorch docs"} {"text": "Elastic Agent\nServer\nThe elastic agent is the control plane of torchelastic. It is a\nprocess that launches and manages underlying worker processes. The\nagent is responsible for:\n\n\nWorking with distributed torch: the workers are started with all\n the necessary information to successfully and trivially call\n \"torch.distributed.init_process_group()\".\n\n\nFault tolerance: monitors workers and upon detecting worker\n failures or unhealthiness, tears down all workers and restarts\n everyone.\n\n\nElasticity: Reacts to membership changes and restarts workers with\n the new members.\n\n\nThe simplest agents are deployed per node and works with local\nprocesses. A more advanced agent can launch and manage workers\nremotely. Agents can be completely decentralized, making decisions\nbased on the workers it manages. Or can be coordinated, communicating\nto other agents (that manage workers in the same job) to make a\ncollective decision.", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "collective decision.\nBelow is a diagram of an agent that manages a local group of workers.\n[image]\nConcepts\nThis section describes the high-level classes and concepts that are\nrelevant to understanding the role of the \"agent\" in torchelastic.\nclass torch.distributed.elastic.agent.server.ElasticAgent\nAgent process responsible for managing one or more worker\n processes. The worker processes are assumed to be regular\n distributed PyTorch scripts. When the worker process is created by\n the agent, the agent provides the necessary information for the\n worker processes to properly initialize a torch process group.\nThe exact deployment topology and ratio of agent-to-worker is\n dependent on the specific implementation of the agent and the\n user's job placement preferences. 
For instance, to run a\n distributed training job on GPU with 8 trainers (one per GPU) one\n can:\n\nUse 8 x single GPU instances, place an agent per instance,\n managing 1 worker per agent.\n", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "managing 1 worker per agent.\n\n\nUse 4 x double GPU instances, place an agent per instance,\n managing 2 workers per agent.\n\n\nUse 2 x quad GPU instances, place an agent per instance,\n managing 4 workers per agent.\n\n\nUse 1 x 8 GPU instance, place an agent per instance, managing 8\n workers per agent.\n\n\nUsage\n group_result = agent.run()\n if group_result.is_failed():\n # workers failed\n failure = group_result.failures[0]\n log.exception(f\"worker 0 failed with exit code : {failure.exit_code}\")\n else:\n return group_result.return_values[0] # return rank 0's results\n\nabstract get_worker_group(role='default')\n Returns:\n The \"WorkerGroup\" for the given \"role\". Note that the worker\n group is a mutable object and hence in a multi-\n threaded/process environment it may change state.\n Implementors are encouraged (but not required) to return a\n defensive read-only copy.\n\n Return type:\n", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "Return type:\n WorkerGroup\nabstract run(role='default')\n Runs the agent, retrying the worker group on failures up to\n \"max_restarts\".\n\n Returns:\n The result of the execution, containing the return values or\n failure details for each worker mapped by the worker's global\n rank.\n\n Raises:\n **Exception - any other failures NOT related to worker\n process** --\n\n Return type:\n *RunResult*\n\nclass torch.distributed.elastic.agent.server.WorkerSpec(role, local_world_size, rdzv_handler, fn=None, entrypoint=None, args=(), max_restarts=3, monitor_interval=30.0, master_port=None, master_addr=None, local_addr=None, redirects=Std.NONE, tee=Std.NONE)\nContains blueprint information about a particular type of worker.\n For a given role, there must only exist a single worker spec.\n Worker spec is expected to be homogenous across all nodes\n (machine), that is each node runs the same number of workers for a\n particular spec.", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "particular spec.\nParameters:\n * role (str) -- user-defined role for the workers with\n this spec\n * **local_world_size** (*int*) -- number local workers to run\n\n * **fn** (*Optional**[**Callable**]*) -- (deprecated use\n entrypoint instead)\n\n * **entrypoint** (*Optional**[**Union**[**Callable**,\n **str**]**]*) -- worker function or command\n\n * **args** (*Tuple*) -- arguments to pass to \"entrypoint\"\n\n * **rdzv_handler** (*RendezvousHandler*) -- handles rdzv for\n this set of workers\n\n * **max_restarts** (*int*) -- number of max retries for the\n workers\n\n * **monitor_interval** (*float*) -- monitor status of workers\n every \"n\" seconds\n\n * **master_port** (*Optional**[**int**]*) -- fixed port to run\n the c10d store on rank 0 if not specified then will chose a\n random free port\n\n * **master_addr** (*Optional**[**str**]*) -- fixed master_addr\n", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "to run the c10d store on rank 0 if not specified then will\n chose hostname on agent rank 0\n * **redirects** (*Union**[**Std**, **Dict**[**int**,\n **Std**]**]*) -- redirect std streams to a file, 
selectively\n redirect for a particular local rank by passing a map\n\n * **tee** (*Union**[**Std**, **Dict**[**int**, **Std**]**]*) --\n tees the specified std stream(s) to console + file,\n selectively tee for a particular local rank by passing a map,\n takes precedence over \"redirects\" settings.\n\nget_entrypoint_name()\n If the entrypoint is a function (e.g. \"Callable\") returns its\n \"__qualname__\", else if the entrypoint is a binary (e.g. \"str\"),\n returns the binary name.\n\nclass torch.distributed.elastic.agent.server.WorkerState(value)\nState of the \"WorkerGroup\". Workers in a worker group change state\n as a unit. If a single worker in a worker group fails the entire\n set is considered failed:", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "set is considered failed:\n UNKNOWN - agent lost track of worker group state, unrecoverable\n INIT - worker group object created not yet started\n HEALTHY - workers running and healthy\n UNHEALTHY - workers running and unhealthy\n STOPPED - workers stopped (interrupted) by the agent\n SUCCEEDED - workers finished running (exit 0)\n FAILED - workers failed to successfully finish (exit !0)\n\nA worker group starts from an initial \"INIT\" state, then progresses\n to \"HEALTHY\" or \"UNHEALTHY\" states, and finally reaches a terminal\n \"SUCCEEDED\" or \"FAILED\" state.\nWorker groups can be interrupted and temporarily put into \"STOPPED\"\n state by the agent. Workers in \"STOPPED\" state are scheduled to be\n restarted in the near future by the agent. Some examples of workers\n being put into \"STOPPED\" state are:\n\n\nWorker group failure|unhealthy observed\n\n\nMembership change detected\n\n\nWhen actions (start, stop, rdzv, retry, etc) on worker group fails", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "and results in the action being partially applied to the worker\n group the state will be \"UNKNOWN\". Typically this happens on\n uncaught/unhandled exceptions during state change events on the\n agent. The agent is not expected to recover worker groups in\n \"UNKNOWN\" state and is better off self terminating and allowing the\n job manager to retry the node.\nstatic is_running(state)\n Returns:\n True if the worker state represents workers still running\n (e.g. that the process exists but not necessarily healthy).\n\n Return type:\n bool\n\nclass torch.distributed.elastic.agent.server.Worker(local_rank, global_rank=- 1, role_rank=- 1, world_size=- 1, role_world_size=- 1)\nRepresents a worker instance. Contrast this with \"WorkerSpec\" that\n represents the specifications of a worker. A \"Worker\" is created\n from a \"WorkerSpec\". A \"Worker\" is to a \"WorkerSpec\" as an object\n is to a class.\nThe \"id\" of the worker is interpreted by the specific", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "implementation of \"ElasticAgent\". 
For a local agent, it could be\n the \"pid (int)\" of the worker, for a remote agent it could be\n encoded as \"host:port (string)\".\nParameters:\n * id (Any) -- uniquely identifies a worker (interpreted by\n the agent)\n * **local_rank** (*int*) -- local rank of the worker\n\n * **global_rank** (*int*) -- global rank of the worker\n\n * **role_rank** (*int*) -- rank of the worker across all workers\n that have the same role\n\n * **world_size** (*int*) -- number of workers (globally)\n\n * **role_world_size** (*int*) -- number of workers that have the\n same role\n\nclass torch.distributed.elastic.agent.server.WorkerGroup(spec)\nRepresents the set of \"Worker\" instances for the given \"WorkerSpec\"\n managed by \"ElasticAgent\". Whether the worker group contains cross\n instance workers or not depends on the implementation of the agent.\nImplementations", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "Implementations\nBelow are the agent implementations provided by torchelastic.\nclass torch.distributed.elastic.agent.server.local_elastic_agent.LocalElasticAgent(spec, start_method='spawn', exit_barrier_timeout=300, log_dir=None)\nAn implementation of \"torchelastic.agent.server.ElasticAgent\" that\n handles host-local workers. This agent is deployed per host and is\n configured to spawn \"n\" workers. When using GPUs, \"n\" maps to the\n number of GPUs available on the host.\nThe local agent does not communicate to other local agents deployed\n on other hosts, even if the workers may communicate inter-host. The\n worker id is interpreted to be a local process. The agent starts\n and stops all worker processes as a single unit.\nThe worker function and argument passed to the worker function must\n be python multiprocessing compatible. To pass multiprocessing data\n structures to the workers you may create the data structure in the", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "same multiprocessing context as the specified \"start_method\" and\n pass it as a function argument.\nThe \"exit_barrier_timeout\" specifies the amount of time (in\n seconds) to wait for other agents to finish. This acts as a safety\n net to handle cases where workers finish at different times, to\n prevent agents from viewing workers that finished early as a scale-\n down event. It is strongly advised that the user code deal with\n ensuring that workers are terminated in a synchronous manner rather\n than relying on the exit_barrier_timeout.\nA named pipe based watchdog can be enabled in \"LocalElasticAgent\"\n if an environment variable \"TORCHELASTIC_ENABLE_FILE_TIMER\" with\n value 1 has been defined in the \"LocalElasticAgent\" process.\n Optionally, another environment variable\n \"TORCHELASTIC_TIMER_FILE\" can be set with a unique file name for\n the named pipe. 
If the environment variable\n \"TORCHELASTIC_TIMER_FILE\" is not set, \"LocalElasticAgent\" will", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "internally create a unique file name and set it to the environment\n variable \"TORCHELASTIC_TIMER_FILE\", and this environment variable\n will be propagated to the worker processes to allow them to connect\n to the same named pipe that \"LocalElasticAgent\" uses.\nExample launching function\n def trainer(args) -> str:\n return \"do train\"\n\n def main():\n start_method=\"spawn\"\n shared_queue= multiprocessing.get_context(start_method).Queue()\n spec = WorkerSpec(\n role=\"trainer\",\n local_world_size=nproc_per_process,\n entrypoint=trainer,\n args=(\"foobar\",),\n ...)\n agent = LocalElasticAgent(spec, start_method)\n results = agent.run()\n\n if results.is_failed():\n print(\"trainer failed\")\n else:\n print(f\"rank 0 return value: {results.return_values[0]}\")\n", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "prints -> rank 0 return value: do train\nExample launching binary\n def main():\n spec = WorkerSpec(\n role=\"trainer\",\n local_world_size=nproc_per_process,\n entrypoint=\"/usr/local/bin/trainer\",\n args=(\"--trainer_args\", \"foobar\"),\n ...)\n agent = LocalElasticAgent(spec)\n results = agent.run()\n\n if not results.is_failed():\n print(\"binary launches do not have return values\")\n\nExtending the Agent\nTo extend the agent you can implement \"`ElasticAgent\" directly,\nhowever we recommend you extend \"SimpleElasticAgent\" instead, which\nprovides most of the scaffolding and leaves you with a few specific\nabstract methods to implement.\nclass torch.distributed.elastic.agent.server.SimpleElasticAgent(spec, exit_barrier_timeout=300)\nAn \"ElasticAgent\" that manages workers (\"WorkerGroup\") for a single", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "\"WorkerSpec\" (e.g. one particular type of worker role).\n_assign_worker_ranks(store, group_rank, group_world_size, spec)\n Determines proper ranks for worker processes. The rank\n assignment is done according to the following algorithm:\n\n 1. Each agent writes its configuration(group_rank,\n group_world_size , num_workers) to the common store.\n\n 2. Each agent retrieves configuration for all agents and\n performs two level sort using role and rank.\n\n 3. Determine the global rank: the global rank of the workers for\n the current agent is the offset of the infos array up to\n group_rank of the agent. The offset is computed as a sum of\n local_world_size of all agents that have rank less than the\n group_rank. The workers would have the ranks: [offset,\n offset+local_world_size)\n\n 4. Determine the role rank: The role rank is determined using\n the algorithms in the point 3 with the exception that the\n", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "offset is done from the first agent that has the same role as\n current one and has the minimum group rank.\n Return type:\n *List*[*Worker*]\n\n_exit_barrier()\n Wait for \"exit_barrier_timeout\" seconds for all agents to finish\n executing their local workers (either successfully or not). This\n acts as a safety guard against user scripts that terminate at\n different times. 
This barrier keeps the agent process alive\n until all workers finish.\n\n_initialize_workers(worker_group)\n Starts a fresh set of workers for the worker_group. Essentially\n a rendezvous followed by a start_workers.\n\n The caller should first call \"_stop_workers()\" to stop running\n workers prior to calling this method.\n\n Optimistically sets the state of the worker group that just\n started as \"HEALTHY\" and delegates the actual monitoring of\n state to \"_monitor_workers()\" method\n\nabstract _monitor_workers(worker_group)", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "abstract _monitor_workers(worker_group)\n Checks on the workers for the \"worker_group\" and returns the new\n state of the worker group.\n\n Return type:\n *RunResult*\n\n_rendezvous(worker_group)\n Runs rendezvous for the workers specified by worker spec.\n Assigns workers a new global rank and world size. Updates the\n rendezvous store for the worker group.\n\n_restart_workers(worker_group)\n Restarts (stops, rendezvous, starts) all local workers in the\n group.\n\nabstract _shutdown(death_sig=Signals.SIGTERM)\n Cleans up any resources that were allocated during the agent's\n work.\n\n Parameters:\n **death_sig** (*Signals*) -- Signal to send to the child\n process, SIGTERM is default\n\nabstract _start_workers(worker_group)\n Starts \"worker_group.spec.local_world_size\" number of workers\n according to worker spec for the worker group .\n\n Returns a map of \"local_rank\" to worker \"id\".\n\n Return type:\n", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "Return type:\n Dict[int, Any]\nabstract _stop_workers(worker_group)\n Stops all workers in the given worker group. Implementors must\n deal with workers in all states defined by \"WorkerState\". That\n is, it must gracefully handle stopping non-existent workers,\n unhealthy (stuck) workers, etc.\n\nclass torch.distributed.elastic.agent.server.api.RunResult(state, return_values=, failures=)\nResults returned by the worker executions. Run results follow an\n \"all-or-nothing\" policy where the run is successful if and only if\n ALL local workers managed by this agent complete successfully.\nIf the result is successful (e.g. \"is_failed() = False\") then the\n \"return_values\" field contains the outputs (return values) of the\n workers managed by THIS agent mapped by their GLOBAL ranks. That is\n \"result.return_values[0]\" is the return value of global rank 0.\nNote:\n \"return_values\" are only meaningful for when the worker\n", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "entrypoint is a function. Workers specified as a binary\n entrypoint do not canonically have a return value and the\n \"return_values\" field is meaningless and may be empty.\nIf \"is_failed()\" returns \"True\" then the \"failures\" field contains\n the failure information, again, mapped by the GLOBAL rank of the\n worker that failed.\nThe keys in \"return_values\" and \"failures\" are mutually exclusive,\n that is, a worker's final state can only be one of: succeeded,\n failed. 
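As an illustrative sketch (not part of the API reference), a caller that already holds an agent instance, for example a "LocalElasticAgent" named "agent", might consume the "RunResult" like this:
    result = agent.run()
    if result.is_failed():
        # "failures" is keyed by the GLOBAL rank of each failed worker
        for global_rank, failure in result.failures.items():
            print(f"rank {global_rank} failed: {failure}")
    else:
        # "return_values" is keyed by GLOBAL rank and is only meaningful
        # when the worker entrypoint is a function
        for global_rank, value in result.return_values.items():
            print(f"rank {global_rank} returned: {value}")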
Workers intentionally terminated by the agent according to\n the agent's restart policy, are not represented in either\n \"return_values\" nor \"failures\".\nWatchdog in the Agent\nA named pipe based watchdog can be enabled in \"LocalElasticAgent\" if\nan environment variable \"TORCHELASTIC_ENABLE_FILE_TIMER\" with value 1\nhas been defined in the \"LocalElasticAgent\" process. Optionally,\nanother environment variable \"TORCHELASTIC_TIMER_FILE\" can be set", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "with a unique file name for the named pipe. If the environment\nvariable \"TORCHELASTIC_TIMER_FILE\" is not set, \"LocalElasticAgent\"\nwill internally create a unique file name and set it to the\nenvironment variable \"TORCHELASTIC_TIMER_FILE\", and this environment\nvariable will be propagated to the worker processes to allow them to\nconnect to the same named pipe that \"LocalElasticAgent\" uses.", "source": "https://pytorch.org/docs/stable/elastic/agent.html", "category": "pytorch docs"} {"text": "Expiration Timers\nExpiration timers are set up on the same process as the agent and used\nfrom your script to deal with stuck workers. When you go into a code-\nblock that has the potential to get stuck you can acquire an\nexpiration timer, which instructs the timer server to kill the process\nif it does not release the timer by the self-imposed expiration\ndeadline.\nUsage:\nimport torchelastic.timer as timer\n import torchelastic.agent.server as agent\ndef main():\n start_method = \"spawn\"\n message_queue = mp.get_context(start_method).Queue()\n server = timer.LocalTimerServer(message, max_interval=0.01)\n server.start() # non-blocking\n spec = WorkerSpec(\n fn=trainer_func,\n args=(message_queue,),\n ...)\n agent = agent.LocalElasticAgent(spec, start_method)\n agent.run()\n\ndef trainer_func(message_queue):\n timer.configure(timer.LocalTimerClient(message_queue))", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"} {"text": "with timer.expires(after=60): # 60 second expiry\n # do some work\nIn the example above if \"trainer_func\" takes more than 60 seconds to\ncomplete, then the worker process is killed and the agent retries the\nworker group.\nClient Methods\ntorch.distributed.elastic.timer.configure(timer_client)\nConfigures a timer client. Must be called before using \"expires\".\ntorch.distributed.elastic.timer.expires(after, scope=None, client=None)\nAcquires a countdown timer that expires in \"after\" seconds from\n now, unless the code-block that it wraps is finished within the\n timeframe. When the timer expires, this worker is eligible to be\n reaped. The exact meaning of \"reaped\" depends on the client\n implementation. In most cases, reaping means to terminate the\n worker process. 
Note that the worker is NOT guaranteed to be reaped\n at exactly \"time.now() + after\", but rather the worker is\n \"eligible\" for being reaped and the \"TimerServer\" that the client", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"} {"text": "talks to will ultimately make the decision when and how to reap the\n workers with expired timers.\nUsage:\n torch.distributed.elastic.timer.configure(LocalTimerClient())\n with expires(after=10):\n torch.distributed.all_reduce(...)\n\nServer/Client Implementations\nBelow are the timer server and client pairs that are provided by\ntorchelastic.\nNote:\nTimer server and clients always have to be implemented and used in\n pairs since there is a messaging protocol between the server and\n client.\nBelow is a pair of timer server and client that is implemented based\non a \"multiprocess.Queue\".\nclass torch.distributed.elastic.timer.LocalTimerServer(mp_queue, max_interval=60, daemon=True)\nServer that works with \"LocalTimerClient\". Clients are expected to\n be subprocesses to the parent process that is running this server.\n Each host in the job is expected to start its own timer server\n locally and each server instance manages timers for local workers", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"} {"text": "(running on processes on the same host).\nclass torch.distributed.elastic.timer.LocalTimerClient(mp_queue)\nClient side of \"LocalTimerServer\". This client is meant to be used\n on the same host that the \"LocalTimerServer\" is running on and uses\n pid to uniquely identify a worker. This is particularly useful in\n situations where one spawns a subprocess (trainer) per GPU on a\n host with multiple GPU devices.\nBelow is another pair of timer server and client that is implemented\nbased on a named pipe.\nclass torch.distributed.elastic.timer.FileTimerServer(file_path, max_interval=10, daemon=True, log_event=None)\nServer that works with \"FileTimerClient\". Clients are expected to\n be running on the same host as the process that is running this\n server. Each host in the job is expected to start its own timer\n server locally and each server instance manages timers for local\n workers (running on processes on the same host).\nParameters:", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"} {"text": "Parameters:\n * file_path (str) -- str, the path of a FIFO special file\n to be created.\n * **max_interval** (*float*) -- float, max interval in seconds\n for each watchdog loop.\n\n * **daemon** (*bool*) -- bool, running the watchdog thread in\n daemon mode or not. A daemon thread will not block a process\n to stop.\n\n * **log_event** (*Callable**[**[**str**,\n **Optional**[**FileTimerRequest**]**]**, **None**]*) --\n Callable[[Dict[str, str]], None], an optional callback for\n logging the events in JSON format.\n\nclass torch.distributed.elastic.timer.FileTimerClient(file_path, signal=Signals.SIGKILL)\nClient side of \"FileTimerServer\". This client is meant to be used\n on the same host that the \"FileTimerServer\" is running on and uses\n pid to uniquely identify a worker. This client uses a named_pipe to\n send timer requests to the \"FileTimerServer\". This client is a\n producer while the \"FileTimerServer\" is a consumer. 
Multiple", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"} {"text": "clients can work with the same \"FileTimerServer\".\nParameters:\n * file_path (str) -- str, the path of a FIFO special file.\n \"FileTimerServer\" must have created it by calling os.mkfifo().\n * **signal** -- signal, the signal to use to kill the process.\n Using a negative or zero signal will not kill the process.\n\nWriting a custom timer server/client\nTo write your own timer server and client extend the\n\"torch.distributed.elastic.timer.TimerServer\" for the server and\n\"torch.distributed.elastic.timer.TimerClient\" for the client. The\n\"TimerRequest\" object is used to pass messages between the server and\nclient.\nclass torch.distributed.elastic.timer.TimerRequest(worker_id, scope_id, expiration_time)\nData object representing a countdown timer acquisition and release\n that is used between the \"TimerClient\" and \"TimerServer\". A\n negative \"expiration_time\" should be interpreted as a \"release\"\n request.\nNote:", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"} {"text": "request.\nNote:\n the type of \"worker_id\" is implementation specific. It is\n whatever the TimerServer and TimerClient implementations have on\n to uniquely identify a worker.\n\nclass torch.distributed.elastic.timer.TimerServer(request_queue, max_interval, daemon=True)\nEntity that monitors active timers and expires them in a timely\n fashion. This server is responsible for reaping workers that have\n expired timers.\nabstract clear_timers(worker_ids)\n Clears all timers for the given \"worker_ids\".\n\nabstract get_expired_timers(deadline)\n Returns all expired timers for each worker_id. An expired timer\n is a timer for which the expiration_time is less than or equal\n to the provided deadline.\n\n Return type:\n *Dict*[str, *List*[*TimerRequest*]]\n\nabstract register_timers(timer_requests)\n Processes the incoming timer requests and registers them with\n the server. The timer request can either be a acquire-timer or\n", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"} {"text": "release-timer request. Timer requests with a negative\n expiration_time should be interpreted as a release-timer\n request.\nclass torch.distributed.elastic.timer.TimerClient\nClient library to acquire and release countdown timers by\n communicating with the TimerServer.\nabstract acquire(scope_id, expiration_time)\n Acquires a timer for the worker that holds this client object\n given the scope_id and expiration_time. Typically registers the\n timer with the TimerServer.\n\nabstract release(scope_id)\n Releases the timer for the \"scope_id\" on the worker this client\n represents. After this method is called, the countdown timer on\n the scope is no longer in effect.\n", "source": "https://pytorch.org/docs/stable/elastic/timer.html", "category": "pytorch docs"} {"text": "Train script\nIf your train script works with \"torch.distributed.launch\" it will\ncontinue working with \"torchrun\" with these differences:\n\n\nNo need to manually pass \"RANK\", \"WORLD_SIZE\", \"MASTER_ADDR\", and\n \"MASTER_PORT\".\n\n\n\"rdzv_backend\" and \"rdzv_endpoint\" can be provided. For most users\n this will be set to \"c10d\" (see rendezvous). 
The default\n \"rdzv_backend\" creates a non-elastic rendezvous where\n \"rdzv_endpoint\" holds the master address.\n\n\nMake sure you have a \"load_checkpoint(path)\" and\n \"save_checkpoint(path)\" logic in your script. When any number of\n workers fail we restart all the workers with the same program\n arguments so you will lose progress up to the most recent\n checkpoint (see elastic launch).\n\n\n\"use_env\" flag has been removed. If you were parsing local rank by\n parsing the \"--local_rank\" option, you need to get the local rank\n from the environment variable \"LOCAL_RANK\" (e.g.\n \"int(os.environ[\"LOCAL_RANK\"])\").\n\n", "source": "https://pytorch.org/docs/stable/elastic/train_script.html", "category": "pytorch docs"} {"text": "\"int(os.environ[\"LOCAL_RANK\"])\").\nBelow is an expository example of a training script that checkpoints\non each epoch, hence the worst-case progress lost on failure is one\nfull epoch worth of training.\ndef main():\n args = parse_args(sys.argv[1:])\n state = load_checkpoint(args.checkpoint_path)\n initialize(state)\n # torch.distributed.run ensures that this will work\n # by exporting all the env vars needed to initialize the process group\n torch.distributed.init_process_group(backend=args.backend)\n\n for i in range(state.epoch, state.total_num_epochs)\n for batch in iter(state.dataset)\n train(batch, state.model)\n\n state.epoch += 1\n save_checkpoint(state)\n\nFor concrete examples of torchelastic-compliant train scripts, visit\nour examples page.", "source": "https://pytorch.org/docs/stable/elastic/train_script.html", "category": "pytorch docs"} {"text": "TorchElastic Kubernetes\nPlease refer to our GitHub's Kubernetes README for more information on\nElastic Job Controller and custom resource definition.", "source": "https://pytorch.org/docs/stable/elastic/kubernetes.html", "category": "pytorch docs"} {"text": "Automatic Mixed Precision package - torch.amp\n\"torch.amp\" provides convenience methods for mixed precision, where\nsome operations use the \"torch.float32\" (\"float\") datatype and other\noperations use lower precision floating point datatype\n(\"lower_precision_fp\"): \"torch.float16\" (\"half\") or \"torch.bfloat16\".\nSome ops, like linear layers and convolutions, are much faster in\n\"lower_precision_fp\". Other ops, like reductions, often require the\ndynamic range of \"float32\". Mixed precision tries to match each op to\nits appropriate datatype.\nOrdinarily, \"automatic mixed precision training\" with datatype of\n\"torch.float16\" uses \"torch.autocast\" and \"torch.cuda.amp.GradScaler\"\ntogether, as shown in the CUDA Automatic Mixed Precision examples and\nCUDA Automatic Mixed Precision recipe. However, \"torch.autocast\" and\n\"torch.cuda.amp.GradScaler\" are modular, and may be used separately if\ndesired. As shown in the CPU example section of \"torch.autocast\",", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "\"automatic mixed precision training/inference\" on CPU with datatype of\n\"torch.bfloat16\" only uses \"torch.autocast\".\nFor CUDA and CPU, APIs are also provided separately:\n\n\n\"torch.autocast(\"cuda\", args...)\" is equivalent to\n \"torch.cuda.amp.autocast(args...)\".\n\n\n\"torch.autocast(\"cpu\", args...)\" is equivalent to\n \"torch.cpu.amp.autocast(args...)\". 
For CPU, only lower precision\n floating point datatype of \"torch.bfloat16\" is supported for now.\n\n\nAutocasting\n\n\nGradient Scaling\n\n\nAutocast Op Reference\n\n\nOp Eligibility\n\n\nCUDA Op-Specific Behavior\n\n\nCUDA Ops that can autocast to \"float16\"\n\n\nCUDA Ops that can autocast to \"float32\"\n\n\nCUDA Ops that promote to the widest input type\n\n\nPrefer \"binary_cross_entropy_with_logits\" over\n \"binary_cross_entropy\"\n\n\n\n\nCPU Op-Specific Behavior\n\n\nCPU Ops that can autocast to \"bfloat16\"\n\n\nCPU Ops that can autocast to \"float32\"\n\n\nCPU Ops that promote to the widest input type\n\n\n\n\nAutocasting", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "Autocasting\nclass torch.autocast(device_type, dtype=None, enabled=True, cache_enabled=None)\nInstances of \"autocast\" serve as context managers or decorators\n that allow regions of your script to run in mixed precision.\nIn these regions, ops run in an op-specific dtype chosen by\n autocast to improve performance while maintaining accuracy. See the\n Autocast Op Reference for details.\nWhen entering an autocast-enabled region, Tensors may be any type.\n You should not call \"half()\" or \"bfloat16()\" on your model(s) or\n inputs when using autocasting.\n\"autocast\" should wrap only the forward pass(es) of your network,\n including the loss computation(s). Backward passes under autocast\n are not recommended. Backward ops run in the same type that\n autocast used for corresponding forward ops.\nExample for CUDA Devices:\n # Creates model and optimizer in default precision\n model = Net().cuda()\n optimizer = optim.SGD(model.parameters(), ...)\n", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "for input, target in data:\n optimizer.zero_grad()\n # Enables autocasting for the forward pass (model + loss)\n with autocast():\n output = model(input)\n loss = loss_fn(output, target)\n\n # Exits the context manager before backward()\n loss.backward()\n optimizer.step()\n\nSee the CUDA Automatic Mixed Precision examples for usage (along\n with gradient scaling) in more complex scenarios (e.g., gradient\n penalty, multiple models/losses, custom autograd functions).\n\"autocast\" can also be used as a decorator, e.g., on the \"forward\"\n method of your model:\n class AutocastModel(nn.Module):\n ...\n @autocast()\n def forward(self, input):\n ...\n\nFloating-point Tensors produced in an autocast-enabled region may\n be \"float16\". After returning to an autocast-disabled region, using\n them with floating-point Tensors of different dtypes may cause type", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "mismatch errors. If so, cast the Tensor(s) produced in the\n autocast region back to \"float32\" (or other dtype if desired). If a\n Tensor from the autocast region is already \"float32\", the cast is a\n no-op, and incurs no additional overhead. 
CUDA Example:\n # Creates some tensors in default dtype (here assumed to be float32)\n a_float32 = torch.rand((8, 8), device=\"cuda\")\n b_float32 = torch.rand((8, 8), device=\"cuda\")\n c_float32 = torch.rand((8, 8), device=\"cuda\")\n d_float32 = torch.rand((8, 8), device=\"cuda\")\n\n with autocast():\n # torch.mm is on autocast's list of ops that should run in float16.\n # Inputs are float32, but the op runs in float16 and produces float16 output.\n # No manual casts are required.\n e_float16 = torch.mm(a_float32, b_float32)\n # Also handles mixed input types\n f_float16 = torch.mm(d_float32, e_float16)\n\n # After exiting autocast, calls f_float16.float() to use with d_float32\n", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "g_float32 = torch.mm(d_float32, f_float16.float())\nCPU Training Example:\n # Creates model and optimizer in default precision\n model = Net()\n optimizer = optim.SGD(model.parameters(), ...)\n\n for epoch in epochs:\n for input, target in data:\n optimizer.zero_grad()\n\n # Runs the forward pass with autocasting.\n with torch.autocast(device_type=\"cpu\", dtype=torch.bfloat16):\n output = model(input)\n loss = loss_fn(output, target)\n\n loss.backward()\n optimizer.step()\n\nCPU Inference Example:\n # Creates model in default precision\n model = Net().eval()\n\n with torch.autocast(device_type=\"cpu\", dtype=torch.bfloat16):\n for input in data:\n # Runs the forward pass with autocasting.\n output = model(input)\n\nCPU Inference Example with Jit Trace:\n class TestModel(nn.Module):\n def __init__(self, input_size, num_classes):\n", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "super(TestModel, self).init()\n self.fc1 = nn.Linear(input_size, num_classes)\n def forward(self, x):\n return self.fc1(x)\n input_size = 2\n num_classes = 2\n model = TestModel(input_size, num_classes).eval()\n\n # For now, we suggest to disable the Jit Autocast Pass,\n # As the issue: https://github.com/pytorch/pytorch/issues/75956\n torch._C._jit_set_autocast_mode(False)\n\n with torch.cpu.amp.autocast(cache_enabled=False):\n model = torch.jit.trace(model, torch.randn(1, input_size))\n model = torch.jit.freeze(model)\n # Models Run\n for _ in range(3):\n model(torch.randn(1, input_size))\n\nType mismatch errors in an autocast-enabled region are a bug; if\n this is what you observe, please file an issue.\n\"autocast(enabled=False)\" subregions can be nested in autocast-\n enabled regions. Locally disabling autocast can be useful, for", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "example, if you want to force a subregion to run in a particular\n \"dtype\". Disabling autocast gives you explicit control over the\n execution type. 
In the subregion, inputs from the surrounding\n region should be cast to \"dtype\" before use:\n # Creates some tensors in default dtype (here assumed to be float32)\n a_float32 = torch.rand((8, 8), device=\"cuda\")\n b_float32 = torch.rand((8, 8), device=\"cuda\")\n c_float32 = torch.rand((8, 8), device=\"cuda\")\n d_float32 = torch.rand((8, 8), device=\"cuda\")\n\n with autocast():\n e_float16 = torch.mm(a_float32, b_float32)\n with autocast(enabled=False):\n # Calls e_float16.float() to ensure float32 execution\n # (necessary because e_float16 was created in an autocasted region)\n f_float32 = torch.mm(c_float32, e_float16.float())\n\n # No manual casts are required when re-entering the autocast-enabled region.\n", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "torch.mm again runs in float16 and produces float16 output, regardless of input types.\n g_float16 = torch.mm(d_float32, f_float32)\n\nThe autocast state is thread-local. If you want it enabled in a\n new thread, the context manager or decorator must be invoked in\n that thread. This affects \"torch.nn.DataParallel\" and\n \"torch.nn.parallel.DistributedDataParallel\" when used with more\n than one GPU per process (see Working with Multiple GPUs).\nParameters:\n * device_type (str, required) -- Whether to use 'cuda'\n or 'cpu' device\n * **enabled** (*bool**, **optional*) -- Whether autocasting\n should be enabled in the region. Default: \"True\"\n\n * **dtype** (*torch_dtype**, **optional*) -- Whether to use\n torch.float16 or torch.bfloat16.\n\n * **cache_enabled** (*bool**, **optional*) -- Whether the weight\n cache inside autocast should be enabled. Default: \"True\"\n", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "class torch.cuda.amp.autocast(enabled=True, dtype=torch.float16, cache_enabled=True)\nSee \"torch.autocast\". \"torch.cuda.amp.autocast(args...)\" is\n equivalent to \"torch.autocast(\"cuda\", args...)\"\ntorch.cuda.amp.custom_fwd(fwd=None, *, cast_inputs=None)\nHelper decorator for \"forward\" methods of custom autograd functions\n (subclasses of \"torch.autograd.Function\"). See the example page\n for more detail.\nParameters:\n cast_inputs (\"torch.dtype\" or None, optional, default=None)\n -- If not \"None\", when \"forward\" runs in an autocast-enabled\n region, casts incoming floating-point CUDA Tensors to the target\n dtype (non-floating-point Tensors are not affected), then\n executes \"forward\" with autocast disabled. If \"None\",\n \"forward\"'s internal ops execute with the current autocast\n state.\nNote:\n If the decorated \"forward\" is called outside an autocast-enabled\n region, \"custom_fwd\" is a no-op and \"cast_inputs\" has no effect.\n", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "torch.cuda.amp.custom_bwd(bwd)\nHelper decorator for backward methods of custom autograd functions\n (subclasses of \"torch.autograd.Function\"). Ensures that \"backward\"\n executes with the same autocast state as \"forward\". See the example\n page for more detail.\nclass torch.cpu.amp.autocast(enabled=True, dtype=torch.bfloat16, cache_enabled=True)\nSee \"torch.autocast\". \"torch.cpu.amp.autocast(args...)\" is\n equivalent to \"torch.autocast(\"cpu\", args...)\"\nGradient Scaling\nIf the forward pass for a particular op has \"float16\" inputs, the\nbackward pass for that op will produce \"float16\" gradients. 
Gradient\nvalues with small magnitudes may not be representable in \"float16\".\nThese values will flush to zero (\"underflow\"), so the update for the\ncorresponding parameters will be lost.\nTo prevent underflow, \"gradient scaling\" multiplies the network's\nloss(es) by a scale factor and invokes a backward pass on the scaled\nloss(es). Gradients flowing backward through the network are then", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "scaled by the same factor. In other words, gradient values have a\nlarger magnitude, so they don't flush to zero.\nEach parameter's gradient (\".grad\" attribute) should be unscaled\nbefore the optimizer updates the parameters, so the scale factor does\nnot interfere with the learning rate.\nclass torch.cuda.amp.GradScaler(init_scale=65536.0, growth_factor=2.0, backoff_factor=0.5, growth_interval=2000, enabled=True)\nget_backoff_factor()\n Returns a Python float containing the scale backoff factor.\n\nget_growth_factor()\n Returns a Python float containing the scale growth factor.\n\nget_growth_interval()\n Returns a Python int containing the growth interval.\n\nget_scale()\n Returns a Python float containing the current scale, or 1.0 if\n scaling is disabled.\n\n Warning:\n\n \"get_scale()\" incurs a CPU-GPU sync.\n\nis_enabled()\n Returns a bool indicating whether this instance is enabled.\n\nload_state_dict(state_dict)", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "load_state_dict(state_dict)\n Loads the scaler state. If this instance is disabled,\n \"load_state_dict()\" is a no-op.\n\n Parameters:\n **state_dict** (*dict*) -- scaler state. Should be an object\n returned from a call to \"state_dict()\".\n\nscale(outputs)\n Multiplies ('scales') a tensor or list of tensors by the scale\n factor.\n\n Returns scaled outputs. If this instance of \"GradScaler\" is not\n enabled, outputs are returned unmodified.\n\n Parameters:\n **outputs** (*Tensor** or **iterable of Tensors*) -- Outputs\n to scale.\n\nset_backoff_factor(new_factor)\n Parameters:\n **new_scale** (*float*) -- Value to use as the new scale\n backoff factor.\n\nset_growth_factor(new_factor)\n Parameters:\n **new_scale** (*float*) -- Value to use as the new scale\n growth factor.\n\nset_growth_interval(new_interval)\n Parameters:\n **new_interval** (*int*) -- Value to use as the new growth\n", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "interval.\nstate_dict()\n Returns the state of the scaler as a \"dict\". It contains five\n entries:\n\n * \"\"scale\"\" - a Python float containing the current scale\n\n * \"\"growth_factor\"\" - a Python float containing the current\n growth factor\n\n * \"\"backoff_factor\"\" - a Python float containing the current\n backoff factor\n\n * \"\"growth_interval\"\" - a Python int containing the current\n growth interval\n\n * \"\"_growth_tracker\"\" - a Python int containing the number of\n recent consecutive unskipped steps.\n\n If this instance is not enabled, returns an empty dict.\n\n Note:\n\n If you wish to checkpoint the scaler's state after a\n particular iteration, \"state_dict()\" should be called after\n \"update()\".\n\nstep(optimizer, args, *kwargs)\n \"step()\" carries out the following two operations:\n\n 1. 
Internally invokes \"unscale_(optimizer)\" (unless \"unscale_()\"\n was explicitly called for \"optimizer\" earlier in the\n", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "iteration). As part of the \"unscale_()\", gradients are\n checked for infs/NaNs.\n 2. If no inf/NaN gradients are found, invokes \"optimizer.step()\"\n using the unscaled gradients. Otherwise, \"optimizer.step()\"\n is skipped to avoid corrupting the params.\n\n \"*args\" and \"**kwargs\" are forwarded to \"optimizer.step()\".\n\n Returns the return value of \"optimizer.step(*args, **kwargs)\".\n\n Parameters:\n * **optimizer** (*torch.optim.Optimizer*) -- Optimizer that\n applies the gradients.\n\n * **args** -- Any arguments.\n\n * **kwargs** -- Any keyword arguments.\n\n Warning:\n\n Closure use is not currently supported.\n\nunscale_(optimizer)\n Divides (\"unscales\") the optimizer's gradient tensors by the\n scale factor.\n\n \"unscale_()\" is optional, serving cases where you need to modify\n or inspect gradients between the backward pass(es) and \"step()\".\n If \"unscale_()\" is not called explicitly, gradients will be\n", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "unscaled automatically during \"step()\".\n Simple example, using \"unscale_()\" to enable clipping of\n unscaled gradients:\n\n ...\n scaler.scale(loss).backward()\n scaler.unscale_(optimizer)\n torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)\n scaler.step(optimizer)\n scaler.update()\n\n Parameters:\n **optimizer** (*torch.optim.Optimizer*) -- Optimizer that\n owns the gradients to be unscaled.\n\n Note:\n\n \"unscale_()\" does not incur a CPU-GPU sync.\n\n Warning:\n\n \"unscale_()\" should only be called once per optimizer per\n \"step()\" call, and only after all gradients for that\n optimizer's assigned parameters have been accumulated. Calling\n \"unscale_()\" twice for a given optimizer between each \"step()\"\n triggers a RuntimeError.\n\n Warning:\n\n \"unscale_()\" may unscale sparse gradients out of place,\n replacing the \".grad\" attribute.\n\nupdate(new_scale=None)", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "update(new_scale=None)\n Updates the scale factor.\n\n If any optimizer steps were skipped the scale is multiplied by\n \"backoff_factor\" to reduce it. If \"growth_interval\" unskipped\n iterations occurred consecutively, the scale is multiplied by\n \"growth_factor\" to increase it.\n\n Passing \"new_scale\" sets the new scale value manually.\n (\"new_scale\" is not used directly, it's used to fill\n GradScaler's internal scale tensor. So if \"new_scale\" was a\n tensor, later in-place changes to that tensor will not further\n affect the scale GradScaler uses internally.)\n\n Parameters:\n **new_scale** (float or \"torch.cuda.FloatTensor\", optional,\n default=None) -- New scale factor.\n\n Warning:\n\n \"update()\" should only be called at the end of the iteration,\n after \"scaler.step(optimizer)\" has been invoked for all\n optimizers used this iteration.\n\nAutocast Op Reference\nOp Eligibility", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "Op Eligibility\nOps that run in \"float64\" or non-floating-point dtypes are not\neligible, and will run in these types whether or not autocast is\nenabled.\nOnly out-of-place ops and Tensor methods are eligible. 
In-place\nvariants and calls that explicitly supply an \"out=...\" Tensor are\nallowed in autocast-enabled regions, but won't go through autocasting.\nFor example, in an autocast-enabled region \"a.addmm(b, c)\" can\nautocast, but \"a.addmm_(b, c)\" and \"a.addmm(b, c, out=d)\" cannot. For\nbest performance and stability, prefer out-of-place ops in autocast-\nenabled regions.\nOps called with an explicit \"dtype=...\" argument are not eligible, and\nwill produce output that respects the \"dtype\" argument.\nCUDA Op-Specific Behavior\nThe following lists describe the behavior of eligible ops in autocast-\nenabled regions. These ops always go through autocasting whether they\nare invoked as part of a \"torch.nn.Module\", as a function, or as a", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "\"torch.Tensor\" method. If functions are exposed in multiple\nnamespaces, they go through autocasting regardless of the namespace.\nOps not listed below do not go through autocasting. They run in the\ntype defined by their inputs. However, autocasting may still change\nthe type in which unlisted ops run if they're downstream from\nautocasted ops.\nIf an op is unlisted, we assume it's numerically stable in \"float16\".\nIf you believe an unlisted op is numerically unstable in \"float16\",\nplease file an issue.\nCUDA Ops that can autocast to \"float16\"\n\n\"__matmul__\", \"addbmm\", \"addmm\", \"addmv\", \"addr\", \"baddbmm\", \"bmm\",\n\"chain_matmul\", \"multi_dot\", \"conv1d\", \"conv2d\", \"conv3d\",\n\"conv_transpose1d\", \"conv_transpose2d\", \"conv_transpose3d\", \"GRUCell\",\n\"linear\", \"LSTMCell\", \"matmul\", \"mm\", \"mv\", \"prelu\", \"RNNCell\"\n\n\nCUDA Ops that can autocast to \"float32\"\n\n\"pow\", \"rdiv\", \"rpow\", \"rtruediv\", \"acos\", \"asin\",", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "\"binary_cross_entropy_with_logits\", \"cosh\", \"cosine_embedding_loss\",\n\"cdist\", \"cosine_similarity\", \"cross_entropy\", \"cumprod\", \"cumsum\",\n\"dist\", \"erfinv\", \"exp\", \"expm1\", \"group_norm\",\n\"hinge_embedding_loss\", \"kl_div\", \"l1_loss\", \"layer_norm\", \"log\",\n\"log_softmax\", \"log10\", \"log1p\", \"log2\", \"margin_ranking_loss\",\n\"mse_loss\", \"multilabel_margin_loss\", \"multi_margin_loss\", \"nll_loss\",\n\"norm\", \"normalize\", \"pdist\", \"poisson_nll_loss\", \"pow\", \"prod\",\n\"reciprocal\", \"rsqrt\", \"sinh\", \"smooth_l1_loss\", \"soft_margin_loss\",\n\"softmax\", \"softmin\", \"softplus\", \"sum\", \"renorm\", \"tan\",\n\"triplet_margin_loss\"\nCUDA Ops that promote to the widest input type\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThese ops don't require a particular dtype for stability, but take\nmultiple inputs and require that the inputs' dtypes match. If all of\nthe inputs are \"float16\", the op runs in \"float16\". If any of the\ninputs is \"float32\", autocast casts all inputs to \"float32\" and runs\nthe op in \"float32\".", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "the op in \"float32\".\n\"addcdiv\", \"addcmul\", \"atan2\", \"bilinear\", \"cross\", \"dot\",\n\"grid_sample\", \"index_put\", \"scatter_add\", \"tensordot\"\nSome ops not listed here (e.g., binary ops like \"add\") natively\npromote inputs without autocasting's intervention. 
If inputs are a\nmixture of \"float16\" and \"float32\", these ops run in \"float32\" and\nproduce \"float32\" output, regardless of whether autocast is enabled.\nPrefer \"binary_cross_entropy_with_logits\" over \"binary_cross_entropy\"\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThe backward passes of \"torch.nn.functional.binary_cross_entropy()\"\n(and \"torch.nn.BCELoss\", which wraps it) can produce gradients that\naren't representable in \"float16\". In autocast-enabled regions, the\nforward input may be \"float16\", which means the backward gradient must\nbe representable in \"float16\" (autocasting \"float16\" forward inputs to\n\"float32\" doesn't help, because that cast must be reversed in\nbackward). Therefore, \"binary_cross_entropy\" and \"BCELoss\" raise an", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "error in autocast-enabled regions.\nMany models use a sigmoid layer right before the binary cross entropy\nlayer. In this case, combine the two layers using\n\"torch.nn.functional.binary_cross_entropy_with_logits()\" or\n\"torch.nn.BCEWithLogitsLoss\". \"binary_cross_entropy_with_logits\" and\n\"BCEWithLogits\" are safe to autocast.\nCPU Op-Specific Behavior\nThe following lists describe the behavior of eligible ops in autocast-\nenabled regions. These ops always go through autocasting whether they\nare invoked as part of a \"torch.nn.Module\", as a function, or as a\n\"torch.Tensor\" method. If functions are exposed in multiple\nnamespaces, they go through autocasting regardless of the namespace.\nOps not listed below do not go through autocasting. They run in the\ntype defined by their inputs. However, autocasting may still change\nthe type in which unlisted ops run if they're downstream from\nautocasted ops.\nIf an op is unlisted, we assume it's numerically stable in \"bfloat16\".", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "If you believe an unlisted op is numerically unstable in \"bfloat16\",\nplease file an issue.\nCPU Ops that can autocast to \"bfloat16\"\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\"conv1d\", \"conv2d\", \"conv3d\", \"bmm\", \"mm\", \"baddbmm\", \"addmm\",\n\"addbmm\", \"linear\", \"matmul\", \"_convolution\"\nCPU Ops that can autocast to \"float32\"\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\n\"conv_transpose1d\", \"conv_transpose2d\", \"conv_transpose3d\",\n\"avg_pool3d\", \"binary_cross_entropy\", \"grid_sampler\",\n\"grid_sampler_2d\", \"_grid_sampler_2d_cpu_fallback\", \"grid_sampler_3d\",\n\"polar\", \"prod\", \"quantile\", \"nanquantile\", \"stft\", \"cdist\", \"trace\",\n\"view_as_complex\", \"cholesky\", \"cholesky_inverse\", \"cholesky_solve\",\n\"inverse\", \"lu_solve\", \"orgqr\", \"inverse\", \"ormqr\", \"pinverse\",\n\"max_pool3d\", \"max_unpool2d\", \"max_unpool3d\", \"adaptive_avg_pool3d\",\n\"reflection_pad1d\", \"reflection_pad2d\", \"replication_pad1d\",\n\"replication_pad2d\", \"replication_pad3d\", \"mse_loss\", \"ctc_loss\",\n\"kl_div\", \"multilabel_margin_loss\", \"fft_fft\", \"fft_ifft\", \"fft_fft2\",", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "\"fft_ifft2\", \"fft_fftn\", \"fft_ifftn\", \"fft_rfft\", \"fft_irfft\",\n\"fft_rfft2\", \"fft_irfft2\", \"fft_rfftn\", \"fft_irfftn\", \"fft_hfft\",\n\"fft_ihfft\", \"linalg_matrix_norm\", \"linalg_cond\",\n\"linalg_matrix_rank\", \"linalg_solve\", \"linalg_cholesky\",\n\"linalg_svdvals\", \"linalg_eigvals\", 
\"linalg_eigvalsh\", \"linalg_inv\",\n\"linalg_householder_product\", \"linalg_tensorinv\",\n\"linalg_tensorsolve\", \"fake_quantize_per_tensor_affine\", \"eig\",\n\"geqrf\", \"lstsq\", \"_lu_with_info\", \"qr\", \"solve\", \"svd\", \"symeig\",\n\"triangular_solve\", \"fractional_max_pool2d\", \"fractional_max_pool3d\",\n\"adaptive_max_pool3d\", \"multilabel_margin_loss_forward\", \"linalg_qr\",\n\"linalg_cholesky_ex\", \"linalg_svd\", \"linalg_eig\", \"linalg_eigh\",\n\"linalg_lstsq\", \"linalg_inv_ex\"\nCPU Ops that promote to the widest input type\n~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\nThese ops don't require a particular dtype for stability, but take\nmultiple inputs and require that the inputs' dtypes match. If all of\nthe inputs are \"bfloat16\", the op runs in \"bfloat16\". If any of the", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "inputs is \"float32\", autocast casts all inputs to \"float32\" and runs\nthe op in \"float32\".\n\"cat\", \"stack\", \"index_copy\"\nSome ops not listed here (e.g., binary ops like \"add\") natively\npromote inputs without autocasting's intervention. If inputs are a\nmixture of \"bfloat16\" and \"float32\", these ops run in \"float32\" and\nproduce \"float32\" output, regardless of whether autocast is enabled.", "source": "https://pytorch.org/docs/stable/amp.html", "category": "pytorch docs"} {"text": "torch._dynamo\nWarning:\nThis module is an early prototype and is subject to change.\ntorch._dynamo.allow_in_graph(fn)\nCustomize which functions TorchDynamo will include in the generated\n graph. Similar to torch.fx.wrap().\n torch._dynamo.allow_in_graph(my_custom_function)\n\n @torch._dynamo.optimize(...)\n def fn(a):\n x = torch.add(x, 1)\n x = my_custom_function(x)\n x = torch.add(x, 1)\n return x\n\n fn(...)\n\nWill capture a single graph containing my_custom_function().\ntorch._dynamo.disallow_in_graph(fn)\nCustomize which functions TorchDynamo will exclude in the generated\n graph and force a graph break on.\n torch._dynamo.disallow_in_graph(torch.sub)\n\n @torch._dynamo.optimize(...)\n def fn(a):\n x = torch.add(x, 1)\n x = torch.sub(x, 1)\n x = torch.add(x, 1)\n return x\n\n fn(...)\n\nWill break the graph on torch.sub, and give two graphs each with", "source": "https://pytorch.org/docs/stable/_dynamo.html", "category": "pytorch docs"} {"text": "a single torch.add() op.\ntorch._dynamo.graph_break()\nForce a graph break\ntorch._dynamo.optimize(backend='inductor', *, nopython=False, guard_export_fn=None, guard_fail_fn=None, disable=False, dynamic=False)\nThe main entrypoint of TorchDynamo. Do graph capture and call\n backend() to optimize extracted graphs.\nParameters:\n * backend -- One of the two things: - Either, a\n function/callable taking a torch.fx.GraphModule and\n example_inputs and returning a python callable that runs the\n graph faster. One can also provide additional context for the\n backend, like torch.jit.fuser(\"fuser2\"), by setting the\n backend_ctx_ctor attribute. See\n AOTAutogradMemoryEfficientFusionWithContext for the usage. 
-\n Or, a string backend name in torch._dynamo.list_backends()\n * **nopython** -- If True, graph breaks will be errors and there\n will be a single whole-program graph.\n", "source": "https://pytorch.org/docs/stable/_dynamo.html", "category": "pytorch docs"} {"text": "will be a single whole-program graph.\n * **disable** -- If True, turn this decorator into a no-op\n\n * **dynamic** -- If True, turn on dynamic shapes support\n\nExample Usage:\n @torch._dynamo.optimize()\n def toy_example(a, b):\n ...\n\ntorch._dynamo.optimize_assert(backend, *, hooks=Hooks(guard_export_fn=None, guard_fail_fn=None), export=False, dynamic=False)\nThe same as torch._dynamo.optimize(backend, nopython=True)\ntorch._dynamo.run(fn=None)\nDon't do any dynamic compiles, just use prior optimizations\ntorch._dynamo.disable(fn=None)\nDecorator and context manager to disable TorchDynamo\ntorch._dynamo.reset()\nClear all compile caches and restore initial state\ntorch._dynamo.list_backends()\nReturn valid strings that can be passed to:\n @torch._dynamo.optimize()\n def foo(...):\n ....\n\ntorch._dynamo.skip(fn=None)\nSkip frames associated with the function code, but still process\n recursively invoked frames", "source": "https://pytorch.org/docs/stable/_dynamo.html", "category": "pytorch docs"} {"text": "recursively invoked frames\nclass torch._dynamo.OptimizedModule(mod, dynamo_ctx)\nWraps the original nn.Module object and later patches its forward\n method to optimized self.forward method.", "source": "https://pytorch.org/docs/stable/_dynamo.html", "category": "pytorch docs"} {"text": "Tensor Views\nPyTorch allows a tensor to be a \"View\" of an existing tensor. View\ntensor shares the same underlying data with its base tensor.\nSupporting \"View\" avoids explicit data copy, thus allows us to do fast\nand memory efficient reshaping, slicing and element-wise operations.\nFor example, to get a view of an existing tensor \"t\", you can call\n\"t.view(...)\".\n\n\n\nt = torch.rand(4, 4)\nb = t.view(2, 8)\nt.storage().data_ptr() == b.storage().data_ptr() # t and b share the same underlying data.\n True\n # Modifying view tensor changes base tensor as well.\nb[0][0] = 3.14\nt[0][0]\n tensor(3.14)\n\n\n\nSince views share underlying data with its base tensor, if you edit\nthe data in the view, it will be reflected in the base tensor as well.\nTypically a PyTorch op returns a new tensor as output, e.g. \"add()\".\nBut in case of view ops, outputs are views of input tensors to avoid\nunnecessary data copy. No data movement occurs when creating a view,", "source": "https://pytorch.org/docs/stable/tensor_view.html", "category": "pytorch docs"} {"text": "view tensor just changes the way it interprets the same data. Taking a\nview of contiguous tensor could potentially produce a non-contiguous\ntensor. Users should pay additional attention as contiguity might have\nimplicit performance impact. \"transpose()\" is a common example.\n\n\n\nbase = torch.tensor([[0, 1],[2, 3]])\nbase.is_contiguous()\n True\nt = base.transpose(0, 1) # t is a view of base. No data movement happened here.\n # View tensors might be non-contiguous.\nt.is_contiguous()\n False\n # To get a contiguous tensor, call .contiguous() to enforce\n # copying data when t is not contiguous.\nc = t.contiguous()\n\n\n\nFor reference, here\u00e2\u0080\u0099s a full list of view ops in PyTorch:\n\n\nBasic slicing and indexing op, e.g. 
\"tensor[0, 2:, 1:7:2]\" returns a\n view of base \"tensor\", see note below.\n\n\n\"adjoint()\"\n\n\n\"as_strided()\"\n\n\n\"detach()\"\n\n\n\"diagonal()\"\n\n\n\"expand()\"\n\n\n\"expand_as()\"\n\n\n\"movedim()\"\n\n\n\"narrow()\"\n\n\n\"permute()\"\n\n\n\"select()\"\n\n\n\"squeeze()\"\n\n\n\"transpose()\"\n\n", "source": "https://pytorch.org/docs/stable/tensor_view.html", "category": "pytorch docs"} {"text": "\n\n\"select()\"\n\n\n\"squeeze()\"\n\n\n\"transpose()\"\n\n\n\"t()\"\n\n\n\"T\"\n\n\n\"H\"\n\n\n\"mT\"\n\n\n\"mH\"\n\n\n\"real\"\n\n\n\"imag\"\n\n\n\"view_as_real()\"\n\n\n\"unflatten()\"\n\n\n\"unfold()\"\n\n\n\"unsqueeze()\"\n\n\n\"view()\"\n\n\n\"view_as()\"\n\n\n\"unbind()\"\n\n\n\"split()\"\n\n\n\"hsplit()\"\n\n\n\"vsplit()\"\n\n\n\"tensor_split()\"\n\n\n\"split_with_sizes()\"\n\n\n\"swapaxes()\"\n\n\n\"swapdims()\"\n\n\n\"chunk()\"\n\n\n\"indices()\" (sparse tensor only)\n\n\n\"values()\" (sparse tensor only)\n\n\nNote:\nWhen accessing the contents of a tensor via indexing, PyTorch\n follows Numpy behaviors that basic indexing returns views, while\n advanced indexing returns a copy. Assignment via either basic or\n advanced indexing is in-place. See more examples in Numpy indexing\n documentation.\nIt's also worth mentioning a few ops with special behaviors:\n\n\n\"reshape()\", \"reshape_as()\" and \"flatten()\" can return either a view\n or new tensor, user code shouldn't rely on whether it's view or not.\n\n\n\"contiguous()\" returns itself if input tensor is already\n\n", "source": "https://pytorch.org/docs/stable/tensor_view.html", "category": "pytorch docs"} {"text": "contiguous, otherwise it returns a new contiguous tensor by copying\n data.\nFor a more detailed walk-through of PyTorch internal implementation,\nplease refer to ezyang's blogpost about PyTorch Internals.", "source": "https://pytorch.org/docs/stable/tensor_view.html", "category": "pytorch docs"} {"text": "torch.Storage\n\"torch.Storage\" is an alias for the storage class that corresponds\nwith the default data type (\"torch.get_default_dtype()\"). For\ninstance, if the default data type is \"torch.float\", \"torch.Storage\"\nresolves to \"torch.FloatStorage\".\nThe \"torch.Storage\" and \"torch.cuda.Storage\" classes, like\n\"torch.FloatStorage\", \"torch.IntStorage\", etc., are not actually ever\ninstantiated. Calling their constructors creates a\n\"torch.TypedStorage\" with the appropriate \"torch.dtype\" and\n\"torch.device\". \"torch.Storage\" classes have all of the same\nclass methods that \"torch.TypedStorage\" has.\nA \"torch.TypedStorage\" is a contiguous, one-dimensional array of\nelements of a particular \"torch.dtype\". It can be given any\n\"torch.dtype\", and the internal data will be interpreted\nappropriately. 
\"torch.TypedStorage\" contains a \"torch.UntypedStorage\"\nwhich holds the data as an untyped array of bytes.\nEvery strided \"torch.Tensor\" contains a \"torch.TypedStorage\", which", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"} {"text": "stores all of the data that the \"torch.Tensor\" views.\nWarning:\nAll storage classes except for \"torch.UntypedStorage\" will be\n removed in the future, and \"torch.UntypedStorage\" will be used in\n all cases.\nclass torch.TypedStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\nbfloat16()\n Casts this storage to bfloat16 type\n\nbool()\n Casts this storage to bool type\n\nbyte()\n Casts this storage to byte type\n\nchar()\n Casts this storage to char type\n\nclone()\n Returns a copy of this storage\n\ncomplex_double()\n Casts this storage to complex double type\n\ncomplex_float()\n Casts this storage to complex float type\n\ncopy_(source, non_blocking=None)\ncpu()\n Returns a CPU copy of this storage if it's not already on the\n CPU\n\ncuda(device=None, non_blocking=False, **kwargs)\n Returns a copy of this object in CUDA memory.\n\n If this object is already in CUDA memory and on the correct\n", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"} {"text": "device, then no copy is performed and the original object is\n returned.\n Parameters:\n * **device** (*int*) -- The destination GPU id. Defaults to\n the current device.\n\n * **non_blocking** (*bool*) -- If \"True\" and the source is in\n pinned memory, the copy will be asynchronous with respect\n to the host. Otherwise, the argument has no effect.\n\n * ****kwargs** -- For compatibility, may contain the key\n \"async\" in place of the \"non_blocking\" argument.\n\n Return type:\n *T*\n\ndata_ptr()\nproperty device\ndouble()\n Casts this storage to double type\n\ndtype: dtype\nelement_size()\nfill_(value)\nfloat()\n Casts this storage to float type\n\nclassmethod from_buffer(args, *kwargs)\nclassmethod from_file(filename, shared=False, size=0) -> Storage\n If *shared* is *True*, then memory is shared between all\n processes. All changes are written to the file. If *shared* is\n", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"} {"text": "False, then the changes on the storage do not affect the file.\n *size* is the number of elements in the storage. If *shared* is\n *False*, then the file must contain at least *size *\n sizeof(Type)* bytes (*Type* is the type of storage). If *shared*\n is *True* the file will be created if needed.\n\n Parameters:\n * **filename** (*str*) -- file name to map\n\n * **shared** (*bool*) -- whether to share memory\n\n * **size** (*int*) -- number of elements in the storage\n\nget_device()\n Return type:\n int\n\nhalf()\n Casts this storage to half type\n\nint()\n Casts this storage to int type\n\nproperty is_cuda\nis_pinned()\nis_shared()\nis_sparse = False\nlong()\n Casts this storage to long type\n\nnbytes()\npickle_storage_type()\npin_memory()\n Coppies the storage to pinned memory, if it's not already\n pinned.\n\nresize_(size)\nshare_memory_()\n Moves the storage to shared memory.\n", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"} {"text": "Moves the storage to shared memory.\n This is a no-op for storages already in shared memory and for\n CUDA storages, which do not need to be moved for sharing across\n processes. 
Storages in shared memory cannot be resized.\n\n Returns: self\n\nshort()\n Casts this storage to short type\n\nsize()\ntolist()\n Returns a list containing the elements of this storage\n\ntype(dtype=None, non_blocking=False)\n Returns the type if *dtype* is not provided, else casts this\n object to the specified type.\n\n If this is already of the correct type, no copy is performed and\n the original object is returned.\n\n Parameters:\n * **dtype** (*type** or **string*) -- The desired type\n\n * **non_blocking** (*bool*) -- If \"True\", and the source is\n in pinned memory and destination is on the GPU or vice\n versa, the copy is performed asynchronously with respect to\n the host. Otherwise, the argument has no effect.\n", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"} {"text": "\n\n**kwargs -- For compatibility, may contain the key\n \"async\" in place of the \"non_blocking\" argument. The\n \"async\" arg is deprecated.\nReturn type:\n Union[T, str]\n\n\nuntyped()\n Returns the internal \"torch.UntypedStorage\"\n\nclass torch.UntypedStorage(args, *kwargs)\nbfloat16()\n Casts this storage to bfloat16 type\n\nbool()\n Casts this storage to bool type\n\nbyte()\n Casts this storage to byte type\n\nchar()\n Casts this storage to char type\n\nclone()\n Returns a copy of this storage\n\ncomplex_double()\n Casts this storage to complex double type\n\ncomplex_float()\n Casts this storage to complex float type\n\ncopy_()\ncpu()\n Returns a CPU copy of this storage if it's not already on the\n CPU\n\ncuda(device=None, non_blocking=False, **kwargs)\n Returns a copy of this object in CUDA memory.\n\n If this object is already in CUDA memory and on the correct\n", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"} {"text": "device, then no copy is performed and the original object is\n returned.\n Parameters:\n * **device** (*int*) -- The destination GPU id. Defaults to\n the current device.\n\n * **non_blocking** (*bool*) -- If \"True\" and the source is in\n pinned memory, the copy will be asynchronous with respect\n to the host. Otherwise, the argument has no effect.\n\n * ****kwargs** -- For compatibility, may contain the key\n \"async\" in place of the \"non_blocking\" argument.\n\ndata_ptr()\ndevice: device\ndouble()\n Casts this storage to double type\n\nelement_size()\nfill_()\nfloat()\n Casts this storage to float type\n\nstatic from_buffer()\nstatic from_file(filename, shared=False, size=0) -> Storage\n If *shared* is *True*, then memory is shared between all\n processes. All changes are written to the file. If *shared* is\n *False*, then the changes on the storage do not affect the file.\n", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"} {"text": "size is the number of elements in the storage. If shared is\n False, then the file must contain at least size *\n sizeof(Type) bytes (Type is the type of storage). 
If shared\n is True the file will be created if needed.\n Parameters:\n * **filename** (*str*) -- file name to map\n\n * **shared** (*bool*) -- whether to share memory\n\n * **size** (*int*) -- number of elements in the storage\n\nget_device()\n Return type:\n int\n\nhalf()\n Casts this storage to half type\n\nint()\n Casts this storage to int type\n\nproperty is_cuda\nis_pinned()\nis_shared()\nis_sparse: bool = False\nis_sparse_csr: bool = False\nlong()\n Casts this storage to long type\n\nmps()\n Returns a CPU copy of this storage if it's not already on the\n CPU\n\nnbytes()\nnew()\npin_memory()\n Copies the storage to pinned memory, if it's not already pinned.\n\nresize_()\nshare_memory_()", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"} {"text": "resize_()\nshare_memory_()\n Moves the storage to shared memory.\n\n This is a no-op for storages already in shared memory and for\n CUDA storages, which do not need to be moved for sharing across\n processes. Storages in shared memory cannot be resized.\n\n Returns: self\n\nshort()\n Casts this storage to short type\n\nsize()\n Return type:\n int\n\ntolist()\n Returns a list containing the elements of this storage\n\ntype(dtype=None, non_blocking=False, **kwargs)\n Returns the type if *dtype* is not provided, else casts this\n object to the specified type.\n\n If this is already of the correct type, no copy is performed and\n the original object is returned.\n\n Parameters:\n * **dtype** (*type** or **string*) -- The desired type\n\n * **non_blocking** (*bool*) -- If \"True\", and the source is\n in pinned memory and destination is on the GPU or vice\n", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"} {"text": "versa, the copy is performed asynchronously with respect to\n the host. Otherwise, the argument has no effect.\n * ****kwargs** -- For compatibility, may contain the key\n \"async\" in place of the \"non_blocking\" argument. 
The\n \"async\" arg is deprecated.\n\nuntyped()\nclass torch.DoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.float64\nclass torch.FloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.float32\nclass torch.HalfStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.float16\nclass torch.LongStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.int64\nclass torch.IntStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.int32", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"} {"text": "dtype: dtype = torch.int32\nclass torch.ShortStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.int16\nclass torch.CharStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.int8\nclass torch.ByteStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.uint8\nclass torch.BoolStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.bool\nclass torch.BFloat16Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.bfloat16\nclass torch.ComplexDoubleStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.complex128\nclass torch.ComplexFloatStorage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.complex64", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"} {"text": "dtype: dtype = torch.complex64\nclass torch.QUInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.quint8\nclass torch.QInt8Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.qint8\nclass torch.QInt32Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.qint32\nclass torch.QUInt4x2Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.quint4x2\nclass torch.QUInt2x4Storage(*args, wrap_storage=None, dtype=None, device=None, _internal=False)\ndtype: dtype = torch.quint2x4", "source": "https://pytorch.org/docs/stable/storage.html", "category": "pytorch docs"} {"text": "torch.monitor\nWarning:\nThis module is a prototype release, and its interfaces and\n functionality may change without warning in future PyTorch releases.\n\"torch.monitor\" provides an interface for logging events and counters\nfrom PyTorch.\nThe stat interfaces are designed to be used for tracking high level\nmetrics that are periodically logged out to be used for monitoring\nsystem performance. 
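For example, a minimal sketch of defining such a stat and feeding it from a training loop (the stat name, aggregations and values are illustrative):\n\n    >>> from datetime import timedelta\n    >>> from torch import monitor\n    >>> batch_time = monitor.Stat(\n    ...     \"train.batch_time_ms\",\n    ...     [monitor.Aggregation.MEAN, monitor.Aggregation.MAX, monitor.Aggregation.COUNT],\n    ...     timedelta(seconds=60),\n    ... )\n    >>> batch_time.add(12.5)  # called once per batch inside the loop\n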
Since the stats aggregate with a specific window\nsize you can log to them from critical loops with minimal performance\nimpact.\nFor more infrequent events or values such as loss, accuracy, usage\ntracking the event interface can be directly used.\nEvent handlers can be registered to handle the events and pass them to\nan external event sink.\nAPI Reference\nclass torch.monitor.Aggregation\n These are types of aggregations that can be used to accumulate\n stats.\n\nMembers:\n VALUE :\n VALUE returns the last value to be added.\n\n MEAN :\n", "source": "https://pytorch.org/docs/stable/monitor.html", "category": "pytorch docs"} {"text": "MEAN :\n MEAN computes the arithmetic mean of all the added values.\n COUNT :\n COUNT returns the total number of added values.\n\n SUM :\n SUM returns the sum of the added values.\n\n MAX :\n MAX returns the max of the added values.\n\n MIN :\n MIN returns the min of the added values.\n\nproperty name\nclass torch.monitor.Stat\nStat is used to compute summary statistics in a performant way over\n fixed intervals. Stat logs the statistics as an Event once every\n \"window_size\" duration. When the window closes the stats are logged\n via the event handlers as a \"torch.monitor.Stat\" event.\n\"window_size\" should be set to something relatively high to avoid a\n huge number of events being logged. Ex: 60s. Stat uses millisecond\n precision.\nIf \"max_samples\" is set, the stat will cap the number of samples\n per window by discarding add calls once \"max_samples\" adds have\n occurred. If it's not set, all \"add\" calls during the window will", "source": "https://pytorch.org/docs/stable/monitor.html", "category": "pytorch docs"} {"text": "be included. This is an optional field to make aggregations more\n directly comparable across windows when the number of samples might\n vary.\nWhen the Stat is destructed it will log any remaining data even if\n the window hasn't elapsed.\ninit(self: torch._C._monitor.Stat, name: str, aggregations: List[torch._C._monitor.Aggregation], window_size: datetime.timedelta, max_samples: int = 9223372036854775807) -> None\n Constructs the \"Stat\".\n\nadd(self: torch._C._monitor.Stat, v: float) -> None\n Adds a value to the stat to be aggregated according to the\n configured stat type and aggregations.\n\nproperty count\n Number of data points that have currently been collected. Resets\n once the event has been logged.\n\nget(self: torch._C._monitor.Stat) -> Dict[torch._C._monitor.Aggregation, float]\n Returns the current value of the stat, primarily for testing\n purposes. If the stat has logged and no additional values have\n been added this will be zero.\n", "source": "https://pytorch.org/docs/stable/monitor.html", "category": "pytorch docs"} {"text": "been added this will be zero.\nproperty name\n The name of the stat that was set during creation.\n\nclass torch.monitor.data_value_t\ndata_value_t is one of \"str\", \"float\", \"int\", \"bool\".\nclass torch.monitor.Event\nEvent represents a specific typed event to be logged. 
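A minimal sketch of constructing and logging one such event (the event name and payload are illustrative):\n\n    >>> from datetime import datetime\n    >>> from torch import monitor\n    >>> e = monitor.Event(\n    ...     name=\"train.epoch_metrics\",\n    ...     timestamp=datetime.now(),\n    ...     data={\"epoch\": 3, \"loss\": 0.25},\n    ... )\n    >>> monitor.log_event(e)\n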
This can\n represent high-level data points such as loss or accuracy per epoch\n or more low-level aggregations such as through the Stats provided\n through this library.\nAll Events of the same type should have the same name so downstream\n handlers can correctly process them.\ninit(self: torch._C._monitor.Event, name: str, timestamp: datetime.datetime, data: Dict[str, data_value_t]) -> None\n Constructs the \"Event\".\n\nproperty data\n The structured data contained within the \"Event\".\n\nproperty name\n The name of the \"Event\".\n\nproperty timestamp\n The timestamp when the \"Event\" happened.\n\nclass torch.monitor.EventHandlerHandle\nEventHandlerHandle is a wrapper type returned by", "source": "https://pytorch.org/docs/stable/monitor.html", "category": "pytorch docs"} {"text": "\"register_event_handler\" used to unregister the handler via\n \"unregister_event_handler\". This cannot be directly initialized.\ntorch.monitor.log_event(event: torch._C._monitor.Event) -> None\nlog_event logs the specified event to all of the registered event\n handlers. It's up to the event handlers to log the event out to the\n corresponding event sink.\nIf there are no event handlers registered this method is a no-op.\ntorch.monitor.register_event_handler(callback: Callable[[torch._C._monitor.Event], None]) -> torch._C._monitor.EventHandlerHandle\nregister_event_handler registers a callback to be called whenever\n an event is logged via \"log_event\". These handlers should avoid\n blocking the main thread since that may interfere with training as\n they run during the \"log_event\" call.\ntorch.monitor.unregister_event_handler(handler: torch._C._monitor.EventHandlerHandle) -> None\nunregister_event_handler unregisters the \"EventHandlerHandle\"", "source": "https://pytorch.org/docs/stable/monitor.html", "category": "pytorch docs"} {"text": "returned after calling \"register_event_handler\". After this returns\n the event handler will no longer receive events.\nclass torch.monitor.TensorboardEventHandler(writer)\nTensorboardEventHandler is an event handler that will write known\n events to the provided SummaryWriter.\nThis currently only supports \"torch.monitor.Stat\" events which are\n logged as scalars.\n-[ Example ]-\n\n\n\nfrom torch.utils.tensorboard import SummaryWriter\nfrom torch.monitor import TensorboardEventHandler, register_event_handler\nwriter = SummaryWriter(\"log_dir\")\nregister_event_handler(TensorboardEventHandler(writer))\n\n\n\ninit(writer)\n Constructs the \"TensorboardEventHandler\".\n", "source": "https://pytorch.org/docs/stable/monitor.html", "category": "pytorch docs"} {"text": "Note:\nIf the following conditions are satisfied: 1) cudnn is enabled, 2)\n input data is on the GPU 3) input data has dtype \"torch.float16\" 4)\n V100 GPU is used, 5) input data is not in \"PackedSequence\" format\n persistent algorithm can be selected to improve performance.", "source": "https://pytorch.org/docs/stable/cudnn_persistent_rnn.html", "category": "pytorch docs"} {"text": "C++\nNote:\nIf you are looking for the PyTorch C++ API docs, directly go here.\nPyTorch provides several features for working with C++, and it\u00e2\u0080\u0099s best\nto choose from them based on your needs. At a high level, the\nfollowing support is available:\nTorchScript C++ API\nTorchScript allows PyTorch models defined in Python to be serialized\nand then loaded and run in C++ capturing the model code via\ncompilation or tracing its execution. 
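A minimal Python-side sketch of producing such a serialized model (the module and file name are illustrative; the resulting file can then be loaded from C++ with torch::jit::load):\n\n    >>> import torch\n    >>> import torch.nn as nn\n    >>> model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1))\n    >>> scripted = torch.jit.script(model)  # or torch.jit.trace(model, example_inputs)\n    >>> scripted.save(\"model.pt\")\n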
You can learn more in the\nLoading a TorchScript Model in C++ tutorial. This means you can define\nyour models in Python as much as possible, but subsequently export\nthem via TorchScript for doing no-Python execution in production or\nembedded environments. The TorchScript C++ API is used to interact\nwith these models and the TorchScript execution engine, including:\n\n\nLoading serialized TorchScript models saved from Python\n\n\nDoing simple model modifications if needed (e.g. pulling out\n submodules)\n\n", "source": "https://pytorch.org/docs/stable/cpp_index.html", "category": "pytorch docs"} {"text": "submodules)\n\nConstructing the input and doing preprocessing using C++ Tensor API\n\nExtending PyTorch and TorchScript with C++ Extensions\nTorchScript can be augmented with user-supplied code through custom\noperators and custom classes. Once registered with TorchScript, these\noperators and classes can be invoked in TorchScript code run from\nPython or from C++ as part of a serialized TorchScript model. The\nExtending TorchScript with Custom C++ Operators tutorial walks through\ninterfacing TorchScript with OpenCV. In addition to wrapping a\nfunction call with a custom operator, C++ classes and structs can be\nbound into TorchScript through a pybind11-like interface which is\nexplained in the Extending TorchScript with Custom C++ Classes\ntutorial.\nTensor and Autograd in C++\nMost of the tensor and autograd operations in PyTorch Python API are\nalso available in the C++ API. These include:", "source": "https://pytorch.org/docs/stable/cpp_index.html", "category": "pytorch docs"} {"text": "also available in the C++ API. These include:\n\n\n\"torch::Tensor\" methods such as \"add\" / \"reshape\" / \"clone\". For the\n full list of methods available, please see:\n https://pytorch.org/cppdocs/api/classat_1_1_tensor.html\n\n\nC++ tensor indexing API that looks and behaves the same as the\n Python API. For details on its usage, please see:\n https://pytorch.org/cppdocs/notes/tensor_indexing.html\n\n\nThe tensor autograd APIs and the \"torch::autograd\" package that are\n crucial for building dynamic neural networks in C++ frontend. For\n more details, please see:\n https://pytorch.org/tutorials/advanced/cpp_autograd.html\n\n\nAuthoring Models in C++\nThe \"author in TorchScript, infer in C++\" workflow requires model\nauthoring to be done in TorchScript. However, there might be cases\nwhere the model has to be authored in C++ (e.g. in workflows where a\nPython component is undesirable). To serve such use cases, we provide\nthe full capability of authoring and training a neural net model", "source": "https://pytorch.org/docs/stable/cpp_index.html", "category": "pytorch docs"} {"text": "purely in C++, with familiar components such as \"torch::nn\" /\n\"torch::nn::functional\" / \"torch::optim\" that closely resemble the\nPython API.\n\n\nFor an overview of the PyTorch C++ model authoring and training API,\n please see: https://pytorch.org/cppdocs/frontend.html\n\n\nFor a detailed tutorial on how to use the API, please see:\n https://pytorch.org/tutorials/advanced/cpp_frontend.html\n\n\nDocs for components such as \"torch::nn\" / \"torch::nn::functional\" /\n \"torch::optim\" can be found at:\n https://pytorch.org/cppdocs/api/library_root.html\n\n\nPackaging for C++\nFor guidance on how to install and link with libtorch (the library\nthat contains all of the above C++ APIs), please see:\nhttps://pytorch.org/cppdocs/installing.html. 
Note that on Linux there\nare two types of libtorch binaries provided: one compiled with GCC\npre-cxx11 ABI and the other with GCC cxx11 ABI, and you should make\nthe selection based on the GCC ABI your system is using.", "source": "https://pytorch.org/docs/stable/cpp_index.html", "category": "pytorch docs"} {"text": "torch.random\ntorch.random.fork_rng(devices=None, enabled=True, _caller='fork_rng', _devices_kw='devices')\nForks the RNG, so that when you return, the RNG is reset to the\n state that it was previously in.\nParameters:\n * devices (iterable of CUDA IDs) -- CUDA devices for which\n to fork the RNG. CPU RNG state is always forked. By default,\n \"fork_rng()\" operates on all devices, but will emit a warning\n if your machine has a lot of devices, since this function will\n run very slowly in that case. If you explicitly specify\n devices, this warning will be suppressed\n * **enabled** (*bool*) -- if \"False\", the RNG is not forked.\n This is a convenience argument for easily disabling the\n context manager without having to delete it and unindent your\n Python code under it.\n\nReturn type:\n Generator\ntorch.random.get_rng_state()\nReturns the random number generator state as a torch.ByteTensor.\nReturn type:", "source": "https://pytorch.org/docs/stable/random.html", "category": "pytorch docs"} {"text": "Return type:\n Tensor\ntorch.random.initial_seed()\nReturns the initial seed for generating random numbers as a Python\n long.\nReturn type:\n int\ntorch.random.manual_seed(seed)\nSets the seed for generating random numbers. Returns a\n torch.Generator object.\nParameters:\n seed (int) -- The desired seed. Value must be within the\n inclusive range [-0x8000_0000_0000_0000,\n 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised.\n Negative inputs are remapped to positive values with the formula\n 0xffff_ffff_ffff_ffff + seed.\nReturn type:\n Generator\ntorch.random.seed()\nSets the seed for generating random numbers to a non-deterministic\n random number. Returns a 64 bit number used to seed the RNG.\nReturn type:\n int\ntorch.random.set_rng_state(new_state)\nSets the random number generator state.\nParameters:\n new_state (torch.ByteTensor) -- The desired state", "source": "https://pytorch.org/docs/stable/random.html", "category": "pytorch docs"} {"text": "AvgPool1d\nclass torch.nn.AvgPool1d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True)\nApplies a 1D average pooling over an input signal composed of\n several input planes.\nIn the simplest case, the output value of the layer with input size\n (N, C, L), output (N, C, L_{out}) and \"kernel_size\" k can be\n precisely described as:\n \\text{out}(N_i, C_j, l) = \\frac{1}{k} \\sum_{m=0}^{k-1}\n \\text{input}(N_i, C_j, \\text{stride} \\times l + m)\n\nIf \"padding\" is non-zero, then the input is implicitly zero-padded\n on both sides for \"padding\" number of points.\nNote:\n When ceil_mode=True, sliding windows are allowed to go off-bounds\n if they start within the left padding or the input. Sliding\n windows that would start in the right padded region are ignored.\n\nThe parameters \"kernel_size\", \"stride\", \"padding\" can each be an\n \"int\" or a one-element tuple.\nParameters:\n * kernel_size (Union[int, Tuple[int]]) --", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool1d.html", "category": "pytorch docs"} {"text": "the size of the window\n * **stride** (*Union**[**int**, **Tuple**[**int**]**]*) -- the\n stride of the window. 
Default value is \"kernel_size\"\n\n * **padding** (*Union**[**int**, **Tuple**[**int**]**]*) --\n implicit zero padding to be added on both sides\n\n * **ceil_mode** (*bool*) -- when True, will use *ceil* instead\n of *floor* to compute the output shape\n\n * **count_include_pad** (*bool*) -- when True, will include the\n zero-padding in the averaging calculation\n\nShape:\n * Input: (N, C, L_{in}) or (C, L_{in}).\n * Output: (N, C, L_{out}) or (C, L_{out}), where\n\n L_{out} = \\left\\lfloor \\frac{L_{in} + 2 \\times\n \\text{padding} - \\text{kernel\\_size}}{\\text{stride}} +\n 1\\right\\rfloor\n\nExamples:\n >>> # pool with window of size=3, stride=2\n >>> m = nn.AvgPool1d(3, stride=2)\n >>> m(torch.tensor([[[1., 2, 3, 4, 5, 6, 7]]]))\n tensor([[[2., 4., 6.]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool1d.html", "category": "pytorch docs"} {"text": "torch.Tensor.tanh\nTensor.tanh() -> Tensor\nSee \"torch.tanh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tanh.html", "category": "pytorch docs"} {"text": "torch.eq\ntorch.eq(input, other, *, out=None) -> Tensor\nComputes element-wise equality\nThe second argument can be a number or a tensor whose shape is\n broadcastable with the first argument.\nParameters:\n * input (Tensor) -- the tensor to compare\n * **other** (*Tensor** or **float*) -- the tensor or value to\n compare\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nReturns:\n A boolean tensor that is True where \"input\" is equal to \"other\"\n and False elsewhere\nExample:\n >>> torch.eq(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\n tensor([[ True, False],\n [False, True]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.eq.html", "category": "pytorch docs"} {"text": "torch.floor\ntorch.floor(input, *, out=None) -> Tensor\nReturns a new tensor with the floor of the elements of \"input\", the\n largest integer less than or equal to each element.\nFor integer inputs, follows the array-api convention of returning a\n copy of the input tensor.\n \\text{out}_{i} = \\left\\lfloor \\text{input}_{i} \\right\\rfloor\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([-0.8166, 1.5308, -0.2530, -0.2091])\n >>> torch.floor(a)\n tensor([-1., 1., -1., -1.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.floor.html", "category": "pytorch docs"} {"text": "torch.autograd.Function.jvp\nstatic Function.jvp(ctx, *grad_inputs)\nDefines a formula for differentiating the operation with forward\n mode automatic differentiation. This function is to be overridden\n by all subclasses. It must accept a context \"ctx\" as the first\n argument, followed by as many inputs as the \"forward()\" got (None\n will be passed in for non tensor inputs of the forward function),\n and it should return as many tensors as there were outputs to\n \"forward()\". Each argument is the gradient w.r.t the given input,\n and each returned value should be the gradient w.r.t. the\n corresponding output. 
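As an illustration, a minimal sketch of a custom Function that scales its input by a plain Python number and supplies a forward-mode rule (the class is hypothetical; the second tangent passed to \"jvp()\" is None because the scale factor is not a tensor):\n\n    class Scale(torch.autograd.Function):\n        @staticmethod\n        def forward(ctx, x, factor):\n            ctx.factor = factor\n            return x * factor\n\n        @staticmethod\n        def jvp(ctx, x_t, factor_t):\n            # tangent of the single output, given the tangent of x\n            return x_t * ctx.factor\n\nCalling \"Scale.apply\" inside a \"torch.autograd.forward_ad.dual_level()\" context then propagates tangents through this rule.\n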
If an output is not a Tensor or the function\n is not differentiable with respect to that output, you can just\n pass None as a gradient for that input.\nYou can use the \"ctx\" object to pass any value from the forward to\n this functions.\nReturn type:\n Any", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.jvp.html", "category": "pytorch docs"} {"text": "ConvReLU3d\nclass torch.ao.nn.intrinsic.quantized.ConvReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\nA ConvReLU3d module is a fused module of Conv3d and ReLU\nWe adopt the same interface as \"torch.ao.nn.quantized.Conv3d\".\nAttributes: Same as torch.ao.nn.quantized.Conv3d", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.ConvReLU3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.corrcoef\nTensor.corrcoef() -> Tensor\nSee \"torch.corrcoef()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.corrcoef.html", "category": "pytorch docs"} {"text": "torch.Tensor.tolist\nTensor.tolist() -> list or number\nReturns the tensor as a (nested) list. For scalars, a standard\n Python number is returned, just like with \"item()\". Tensors are\n automatically moved to the CPU first if necessary.\nThis operation is not differentiable.\nExamples:\n >>> a = torch.randn(2, 2)\n >>> a.tolist()\n [[0.012766935862600803, 0.5415473580360413],\n [-0.08909505605697632, 0.7729271650314331]]\n >>> a[0,0].tolist()\n 0.012766935862600803\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tolist.html", "category": "pytorch docs"} {"text": "torch.autograd.gradgradcheck\ntorch.autograd.gradgradcheck(func, inputs, grad_outputs=None, *, eps=1e-06, atol=1e-05, rtol=0.001, gen_non_contig_grad_outputs=False, raise_exception=True, nondet_tol=0.0, check_undefined_grad=True, check_grad_dtypes=False, check_batched_grad=False, check_fwd_over_rev=False, check_rev_over_rev=True, fast_mode=False)\nCheck gradients of gradients computed via small finite differences\n against analytical gradients w.r.t. tensors in \"inputs\" and\n \"grad_outputs\" that are of floating point or complex type and with\n \"requires_grad=True\".\nThis function checks that backpropagating through the gradients\n computed to the given \"grad_outputs\" are correct.\nThe check between numerical and analytical gradients uses\n \"allclose()\".\nNote:\n The default values are designed for \"input\" and \"grad_outputs\" of\n double precision. 
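For instance, a minimal sketch with a double-precision input (the function being checked is illustrative):\n\n    >>> inp = torch.randn(3, dtype=torch.double, requires_grad=True)\n    >>> torch.autograd.gradgradcheck(lambda x: (x ** 3).sum(), (inp,))\n    True\n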
This check will likely fail if they are of less\n precision, e.g., \"FloatTensor\".\n\nWarning:", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradgradcheck.html", "category": "pytorch docs"} {"text": "precision, e.g., \"FloatTensor\".\nWarning:\n If any checked tensor in \"input\" and \"grad_outputs\" has\n overlapping memory, i.e., different indices pointing to the same\n memory address (e.g., from \"torch.expand()\"), this check will\n likely fail because the numerical gradients computed by point\n perturbation at such indices will change values at all other\n indices that share the same memory address.\n\nParameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a Tensor or a tuple of Tensors\n * **inputs** (*tuple of Tensor** or **Tensor*) -- inputs to the\n function\n\n * **grad_outputs** (*tuple of Tensor** or **Tensor**,\n **optional*) -- The gradients with respect to the function's\n outputs.\n\n * **eps** (*float**, **optional*) -- perturbation for finite\n differences\n\n * **atol** (*float**, **optional*) -- absolute tolerance\n\n * **rtol** (*float**, **optional*) -- relative tolerance\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradgradcheck.html", "category": "pytorch docs"} {"text": "\n\ngen_non_contig_grad_outputs (bool, optional) -- if\n \"grad_outputs\" is \"None\" and \"gen_non_contig_grad_outputs\" is\n \"True\", the randomly generated gradient outputs are made to be\n noncontiguous\n\n\nraise_exception (bool, optional) -- indicating\n whether to raise an exception if the check fails. The\n exception gives more information about the exact nature of the\n failure. This is helpful when debugging gradchecks.\n\n\nnondet_tol (float, optional) -- tolerance for non-\n determinism. When running identical inputs through the\n differentiation, the results must either match exactly\n (default, 0.0) or be within this tolerance. Note that a small\n amount of nondeterminism in the gradient will lead to larger\n inaccuracies in the second derivative.\n\n\ncheck_undefined_grad (bool, optional) -- if True,\n check if undefined output grads are supported and treated as\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradgradcheck.html", "category": "pytorch docs"} {"text": "zeros\n * **check_batched_grad** (*bool**, **optional*) -- if True,\n check if we can compute batched gradients using prototype vmap\n support. 
Defaults to False.\n\n * **fast_mode** (*bool**, **optional*) -- if True, run a faster\n implementation of gradgradcheck that no longer computes the\n entire jacobian.\n\nReturns:\n True if all differences satisfy allclose condition\nReturn type:\n bool", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradgradcheck.html", "category": "pytorch docs"} {"text": "torch.Tensor.bmm\nTensor.bmm(batch2) -> Tensor\nSee \"torch.bmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bmm.html", "category": "pytorch docs"} {"text": "default_fused_wt_fake_quant\ntorch.quantization.fake_quantize.default_fused_wt_fake_quant\nalias of functools.partial(, observer=,\n quant_min=-128, quant_max=127, dtype=torch.qint8,\n qscheme=torch.per_tensor_symmetric){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_fused_wt_fake_quant.html", "category": "pytorch docs"} {"text": "torch.jit.trace\ntorch.jit.trace(func, example_inputs=None, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-05, strict=True, _force_outplace=False, _module_class=None, _compilation_unit=, example_kwarg_inputs=None)\nTrace a function and return an executable or \"ScriptFunction\" that\n will be optimized using just-in-time compilation. Tracing is ideal\n for code that operates only on \"Tensor\"s and lists, dictionaries,\n and tuples of \"Tensor\"s.\nUsing torch.jit.trace and torch.jit.trace_module, you can turn\n an existing module or Python function into a TorchScript\n \"ScriptFunction\" or \"ScriptModule\". You must provide example\n inputs, and we run the function, recording the operations performed\n on all the tensors.\n\n\nThe resulting recording of a standalone function produces\n ScriptFunction.\n\n\nThe resulting recording of nn.Module.forward or nn.Module\n produces ScriptModule.\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"} {"text": "produces ScriptModule.\nThis module also contains any parameters that the original module\n had as well.\nWarning:\n Tracing only correctly records functions and modules which are\n not data dependent (e.g., do not have conditionals on data in\n tensors) and do not have any untracked external dependencies\n (e.g., perform input/output or access global variables). Tracing\n only records operations done when the given function is run on\n the given tensors. Therefore, the returned *ScriptModule* will\n always run the same traced graph on any input. This has some\n important implications when your module is expected to run\n different sets of operations, depending on the input and/or the\n module state. For example,\n\n * Tracing will not record any control-flow like if-statements or\n loops. When this control-flow is constant across your module,\n this is fine and it often inlines the control-flow decisions.\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"} {"text": "But sometimes the control-flow is actually part of the model\n itself. For instance, a recurrent network is a loop over the\n (possibly dynamic) length of an input sequence.\n * In the returned \"ScriptModule\", operations that have different\n behaviors in \"training\" and \"eval\" modes will always behave as\n if it is in the mode it was in during tracing, no matter which\n mode the *ScriptModule* is in.\n\n In cases like these, tracing would not be appropriate and\n \"scripting\" is a better choice. 
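A minimal sketch of the pitfall (the function is illustrative): the branch taken during tracing is frozen into the graph, while scripting preserves the control flow:\n\n    >>> def f(x):\n    ...     if x.sum() > 0:\n    ...         return x * 2\n    ...     return x + 1\n    >>> traced = torch.jit.trace(f, torch.ones(3))   # records only the \"> 0\" branch\n    >>> traced(-torch.ones(3))                       # wrong: still multiplies by 2\n    tensor([-2., -2., -2.])\n    >>> torch.jit.script(f)(-torch.ones(3))          # control flow is preserved\n    tensor([0., 0., 0.])\n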
If you trace such models, you may\n silently get incorrect results on subsequent invocations of the\n model. The tracer will try to emit warnings when doing something\n that may cause an incorrect trace to be produced.\n\nParameters:\n func (callable or torch.nn.Module) -- A Python\n function or torch.nn.Module that will be run with\n example_inputs. func arguments and return values must be", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"} {"text": "tensors or (possibly nested) tuples that contain tensors. When a\n module is passed torch.jit.trace, only the \"forward\" method is\n run and traced (see \"torch.jit.trace\" for details).\nKeyword Arguments:\n * example_inputs (tuple or torch.Tensor or None,\n optional) -- A tuple of example inputs that will be passed\n to the function while tracing. Default: \"None\". Either this\n argument or \"example_kwarg_inputs\" should be specified. The\n resulting trace can be run with inputs of different types and\n shapes assuming the traced operations support those types and\n shapes. example_inputs may also be a single Tensor in which\n case it is automatically wrapped in a tuple. When the value is\n None, \"example_kwarg_inputs\" should be specified.\n * **check_trace** (\"bool\", optional) -- Check if the same inputs\n run through traced code produce the same outputs. Default:\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"} {"text": "\"True\". You might want to disable this if, for example, your\n network contains non- deterministic ops or if you are sure\n that the network is correct despite a checker failure.\n * **check_inputs** (*list of tuples**, **optional*) -- A list of\n tuples of input arguments that should be used to check the\n trace against what is expected. Each tuple is equivalent to a\n set of input arguments that would be specified in\n \"example_inputs\". For best results, pass in a set of checking\n inputs representative of the space of shapes and types of\n inputs you expect the network to see. If not specified, the\n original \"example_inputs\" are used for checking\n\n * **check_tolerance** (*float**, **optional*) -- Floating-point\n comparison tolerance to use in the checker procedure. This\n can be used to relax the checker strictness in the event that\n results diverge numerically for a known reason, such as\n operator fusion.\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"} {"text": "operator fusion.\n * **strict** (\"bool\", optional) -- run the tracer in a strict\n mode or not (default: \"True\"). Only turn this off when you\n want the tracer to record your mutable container types\n (currently \"list\"/\"dict\") and you are sure that the container\n you are using in your problem is a \"constant\" structure and\n does not get used as control flow (if, for) conditions.\n\n * **example_kwarg_inputs** (*dict**, **optional*) -- This\n parameter is a pack of keyword arguments of example inputs\n that will be passed to the function while tracing. Default:\n \"None\". Either this argument or \"example_inputs\" should be\n specified. The dict will be unpacking by the arguments name of\n the traced function. 
If the keys of the dict don't not match\n with the traced function's arguments name, a runtime exception\n will be raised.\n\nReturns:\n If func is nn.Module or \"forward\" of nn.Module, trace", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"} {"text": "returns a \"ScriptModule\" object with a single \"forward\" method\n containing the traced code. The returned ScriptModule will\n have the same set of sub-modules and parameters as the original\n \"nn.Module\". If \"func\" is a standalone function, \"trace\"\n returns ScriptFunction.\nExample (tracing a function):\n import torch\n\n def foo(x, y):\n return 2 * x + y\n\n # Run `foo` with the provided inputs and record the tensor operations\n traced_foo = torch.jit.trace(foo, (torch.rand(3), torch.rand(3)))\n\n # `traced_foo` can now be run with the TorchScript interpreter or saved\n # and loaded in a Python-free environment\n\nExample (tracing an existing module):\n import torch\n import torch.nn as nn\n\n class Net(nn.Module):\n def __init__(self):\n super(Net, self).__init__()\n self.conv = nn.Conv2d(1, 1, 3)\n\n def forward(self, x):\n return self.conv(x)\n\n n = Net()\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"} {"text": "return self.conv(x)\n n = Net()\n example_weight = torch.rand(1, 1, 3, 3)\n example_forward_input = torch.rand(1, 1, 3, 3)\n\n # Trace a specific method and construct `ScriptModule` with\n # a single `forward` method\n module = torch.jit.trace(n.forward, example_forward_input)\n\n # Trace a module (implicitly traces `forward`) and construct a\n # `ScriptModule` with a single `forward` method\n module = torch.jit.trace(n, example_forward_input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace.html", "category": "pytorch docs"} {"text": "Unflatten\nclass torch.nn.Unflatten(dim, unflattened_size)\nUnflattens a tensor dim expanding it to a desired shape. 
For use\n with \"Sequential\".\n\n\n\"dim\" specifies the dimension of the input tensor to be\n unflattened, and it can be either int or str when Tensor or\n NamedTensor is used, respectively.\n\n\n\"unflattened_size\" is the new shape of the unflattened dimension\n of the tensor and it can be a tuple of ints or a list of ints\n or torch.Size for Tensor input; a NamedShape (tuple of\n (name, size) tuples) for NamedTensor input.\n\n\nShape:\n * Input: (, S_{\\text{dim}}, ), where S_{\\text{dim}} is the\n size at dimension \"dim\" and * means any number of dimensions\n including none.\n * Output: (*, U_1, ..., U_n, *), where U = \"unflattened_size\"\n and \\prod_{i=1}^n U_i = S_{\\text{dim}}.\n\nParameters:\n * dim (Union[int, str]) -- Dimension to be\n unflattened", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unflatten.html", "category": "pytorch docs"} {"text": "unflattened\n * **unflattened_size** (*Union**[**torch.Size**, **Tuple**,\n **List**, **NamedShape**]*) -- New shape of the unflattened\n dimension\n\n-[ Examples ]-\n\n\n\ninput = torch.randn(2, 50)\nWith tuple of ints\nm = nn.Sequential(\n nn.Linear(50, 50),\n nn.Unflatten(1, (2, 5, 5))\n)\noutput = m(input)\noutput.size()\n torch.Size([2, 2, 5, 5])\nWith torch.Size\nm = nn.Sequential(\n nn.Linear(50, 50),\n nn.Unflatten(1, torch.Size([2, 5, 5]))\n)\noutput = m(input)\noutput.size()\n torch.Size([2, 2, 5, 5])\nWith namedshape (tuple of tuples)\ninput = torch.randn(2, 50, names=('N', 'features'))\nunflatten = nn.Unflatten('features', (('C', 2), ('H', 5), ('W', 5)))\noutput = unflatten(input)\noutput.size()\n torch.Size([2, 2, 5, 5])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unflatten.html", "category": "pytorch docs"} {"text": "torch.Tensor.coalesce\nTensor.coalesce() -> Tensor\nReturns a coalesced copy of \"self\" if \"self\" is an uncoalesced\n tensor.\nReturns \"self\" if \"self\" is a coalesced tensor.\nWarning:\n Throws an error if \"self\" is not a sparse COO tensor.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.coalesce.html", "category": "pytorch docs"} {"text": "torch.outer\ntorch.outer(input, vec2, *, out=None) -> Tensor\nOuter product of \"input\" and \"vec2\". If \"input\" is a vector of size\n n and \"vec2\" is a vector of size m, then \"out\" must be a matrix of\n size (n \\times m).\nNote:\n This function does not broadcast.\n\nParameters:\n * input (Tensor) -- 1-D input vector\n * **vec2** (*Tensor*) -- 1-D input vector\n\nKeyword Arguments:\n out (Tensor, optional) -- optional output matrix\nExample:\n >>> v1 = torch.arange(1., 5.)\n >>> v2 = torch.arange(1., 4.)\n >>> torch.outer(v1, v2)\n tensor([[ 1., 2., 3.],\n [ 2., 4., 6.],\n [ 3., 6., 9.],\n [ 4., 8., 12.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.outer.html", "category": "pytorch docs"} {"text": "torch.nn.functional.avg_pool3d\ntorch.nn.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) -> Tensor\nApplies 3D average-pooling operation in kT \\times kH \\times kW\n regions by step size sT \\times sH \\times sW steps. The number of\n output features is equal to \\lfloor\\frac{\\text{input\n planes}}{sT}\\rfloor.\nSee \"AvgPool3d\" for details and output shape.\nParameters:\n * input -- input tensor (\\text{minibatch} ,\n \\text{in_channels} , iT \\times iH , iW)\n * **kernel_size** -- size of the pooling region. 
Can be a single\n number or a tuple *(kT, kH, kW)*\n\n * **stride** -- stride of the pooling operation. Can be a single\n number or a tuple *(sT, sH, sW)*. Default: \"kernel_size\"\n\n * **padding** -- implicit zero paddings on both sides of the\n input. Can be a single number or a tuple *(padT, padH, padW)*,\n Default: 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool3d.html", "category": "pytorch docs"} {"text": "Default: 0\n * **ceil_mode** -- when True, will use *ceil* instead of *floor*\n in the formula to compute the output shape\n\n * **count_include_pad** -- when True, will include the zero-\n padding in the averaging calculation\n\n * **divisor_override** -- if specified, it will be used as\n divisor, otherwise size of the pooling region will be used.\n Default: None\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool3d.html", "category": "pytorch docs"} {"text": "torch.autograd.graph.Node.next_functions\nabstract property Node.next_functions: Tuple[Tuple[Optional[Node], int], ...]", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.next_functions.html", "category": "pytorch docs"} {"text": "torch.Tensor.byte\nTensor.byte(memory_format=torch.preserve_format) -> Tensor\n\"self.byte()\" is equivalent to \"self.to(torch.uint8)\". See \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.byte.html", "category": "pytorch docs"} {"text": "LinearReLU\nclass torch.ao.nn.intrinsic.quantized.dynamic.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8)\nA LinearReLU module fused from Linear and ReLU modules that can be\n used for dynamic quantization. Supports both, FP16 and INT8\n quantization.\nWe adopt the same interface as\n \"torch.ao.nn.quantized.dynamic.Linear\".\nVariables:\n torch.ao.nn.quantized.dynamic.Linear (Same as) --\nExamples:\n >>> m = nn.intrinsic.quantized.dynamic.LinearReLU(20, 30)\n >>> input = torch.randn(128, 20)\n >>> output = m(input)\n >>> print(output.size())\n torch.Size([128, 30])\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.dynamic.LinearReLU.html", "category": "pytorch docs"} {"text": "torch.nn.functional.adaptive_max_pool2d\ntorch.nn.functional.adaptive_max_pool2d(args, *kwargs)\nApplies a 2D adaptive max pooling over an input signal composed of\n several input planes.\nSee \"AdaptiveMaxPool2d\" for details and output shape.\nParameters:\n * output_size -- the target output size (single integer or\n double-integer tuple)\n * **return_indices** -- whether to return pooling indices.\n Default: \"False\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_max_pool2d.html", "category": "pytorch docs"} {"text": "MaxUnpool1d\nclass torch.nn.MaxUnpool1d(kernel_size, stride=None, padding=0)\nComputes a partial inverse of \"MaxPool1d\".\n\"MaxPool1d\" is not fully invertible, since the non-maximal values\n are lost.\n\"MaxUnpool1d\" takes in as input the output of \"MaxPool1d\" including\n the indices of the maximal values and computes a partial inverse in\n which all non-maximal values are set to zero.\nNote:\n \"MaxPool1d\" can map several input sizes to the same output sizes.\n Hence, the inversion process can get ambiguous. 
To accommodate\n this, you can provide the needed output size as an additional\n argument \"output_size\" in the forward call. See the Inputs and\n Example below.\n\nParameters:\n * kernel_size (int or tuple) -- Size of the max\n pooling window.\n * **stride** (*int** or **tuple*) -- Stride of the max pooling\n window. It is set to \"kernel_size\" by default.\n\n * **padding** (*int** or **tuple*) -- Padding that was added to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool1d.html", "category": "pytorch docs"} {"text": "the input\nInputs:\n * input: the input Tensor to invert\n * *indices*: the indices given out by \"MaxPool1d\"\n\n * *output_size* (optional): the targeted output size\n\nShape:\n * Input: (N, C, H_{in}) or (C, H_{in}).\n * Output: (N, C, H_{out}) or (C, H_{out}), where\n\n H_{out} = (H_{in} - 1) \\times \\text{stride}[0] - 2 \\times\n \\text{padding}[0] + \\text{kernel\\_size}[0]\n\n or as given by \"output_size\" in the call operator\n\nExample:\n >>> pool = nn.MaxPool1d(2, stride=2, return_indices=True)\n >>> unpool = nn.MaxUnpool1d(2, stride=2)\n >>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8]]])\n >>> output, indices = pool(input)\n >>> unpool(output, indices)\n tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8.]]])\n\n >>> # Example showcasing the use of output_size\n >>> input = torch.tensor([[[1., 2, 3, 4, 5, 6, 7, 8, 9]]])\n >>> output, indices = pool(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool1d.html", "category": "pytorch docs"} {"text": "\n\n\noutput, indices = pool(input)\n >>> unpool(output, indices, output_size=input.size())\n tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8., 0.]]])\n\n\n\n >>> unpool(output, indices)\n tensor([[[ 0., 2., 0., 4., 0., 6., 0., 8.]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool1d.html", "category": "pytorch docs"} {"text": "torch.Tensor.argmax\nTensor.argmax(dim=None, keepdim=False) -> LongTensor\nSee \"torch.argmax()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.argmax.html", "category": "pytorch docs"} {"text": "torch.nn.functional.max_pool2d\ntorch.nn.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)\nApplies a 2D max pooling over an input signal composed of several\n input planes.\nNote:\n The order of \"ceil_mode\" and \"return_indices\" is different from\n what seen in \"MaxPool2d\", and will change in a future release.\n\nSee \"MaxPool2d\" for details.\nParameters:\n * input -- input tensor (\\text{minibatch} ,\n \\text{in_channels} , iH , iW), minibatch dim optional.\n * **kernel_size** -- size of the pooling region. Can be a single\n number or a tuple *(kH, kW)*\n\n * **stride** -- stride of the pooling operation. Can be a single\n number or a tuple *(sH, sW)*. Default: \"kernel_size\"\n\n * **padding** -- Implicit negative infinity padding to be added\n on both sides, must be >= 0 and <= kernel_size / 2.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool2d.html", "category": "pytorch docs"} {"text": "\n\ndilation -- The stride between elements within a sliding\n window, must be > 0.\n\n\nceil_mode -- If \"True\", will use ceil instead of floor\n to compute the output shape. This ensures that every element\n in the input tensor is covered by a sliding window.\n\n\nreturn_indices -- If \"True\", will return the argmax along\n with the max values. 
Useful for\n \"torch.nn.functional.max_unpool2d\" later\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.unsqueeze_\nTensor.unsqueeze_(dim) -> Tensor\nIn-place version of \"unsqueeze()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unsqueeze_.html", "category": "pytorch docs"} {"text": "QFunctional\nclass torch.ao.nn.quantized.QFunctional\nWrapper class for quantized operations.\nThe instance of this class can be used instead of the\n \"torch.ops.quantized\" prefix. See example usage below.\nNote:\n This class does not provide a \"forward\" hook. Instead, you must\n use one of the underlying functions (e.g. \"add\").\n\nExamples:\n >>> q_add = QFunctional()\n >>> a = torch.quantize_per_tensor(torch.tensor(3.0), 1.0, 0, torch.qint32)\n >>> b = torch.quantize_per_tensor(torch.tensor(4.0), 1.0, 0, torch.qint32)\n >>> q_add.add(a, b) # Equivalent to ``torch.ops.quantized.add(a, b, 1.0, 0)``\n\nValid operation names:\n * add\n * cat\n\n * mul\n\n * add_relu\n\n * add_scalar\n\n * mul_scalar\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.QFunctional.html", "category": "pytorch docs"} {"text": "LazyBatchNorm1d\nclass torch.nn.LazyBatchNorm1d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\nA \"torch.nn.BatchNorm1d\" module with lazy initialization of the\n \"num_features\" argument of the \"BatchNorm1d\" that is inferred from\n the \"input.size(1)\". The attributes that will be lazily initialized\n are weight, bias, running_mean and running_var.\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm1d.html", "category": "pytorch docs"} {"text": "\"True\"\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics. in both\n training and eval modes. Default: \"True\"\n\ncls_to_become\n alias of \"BatchNorm1d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm1d.html", "category": "pytorch docs"} {"text": "torch.fliplr\ntorch.fliplr(input) -> Tensor\nFlip tensor in the left/right direction, returning a new tensor.\nFlip the entries in each row in the left/right direction. Columns\n are preserved, but appear in a different order than before.\nNote:\n Requires the tensor to be at least 2-D.\n\nNote:\n *torch.fliplr* makes a copy of \"input\"'s data. 
This is different\n from NumPy's *np.fliplr*, which returns a view in constant time.\n Since copying a tensor's data is more work than viewing that\n data, *torch.fliplr* is expected to be slower than *np.fliplr*.\n\nParameters:\n input (Tensor) -- Must be at least 2-dimensional.\nExample:\n >>> x = torch.arange(4).view(2, 2)\n >>> x\n tensor([[0, 1],\n [2, 3]])\n >>> torch.fliplr(x)\n tensor([[1, 0],\n [3, 2]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.fliplr.html", "category": "pytorch docs"} {"text": "EmbeddingBag\nclass torch.nn.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, _weight=None, include_last_offset=False, padding_idx=None, device=None, dtype=None)\nComputes sums or means of 'bags' of embeddings, without\n instantiating the intermediate embeddings.\nFor bags of constant length, no \"per_sample_weights\", no indices\n equal to \"padding_idx\", and with 2D inputs, this class\n * with \"mode=\"sum\"\" is equivalent to \"Embedding\" followed by\n \"torch.sum(dim=1)\",\n\n * with \"mode=\"mean\"\" is equivalent to \"Embedding\" followed by\n \"torch.mean(dim=1)\",\n\n * with \"mode=\"max\"\" is equivalent to \"Embedding\" followed by\n \"torch.max(dim=1)\".\n\nHowever, \"EmbeddingBag\" is much more time and memory efficient than\n using a chain of these operations.\nEmbeddingBag also supports per-sample weights as an argument to the\n forward pass. This scales the output of the Embedding before", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"} {"text": "performing a weighted reduction as specified by \"mode\". If\n \"per_sample_weights\" is passed, the only supported \"mode\" is\n \"\"sum\"\", which computes a weighted sum according to\n \"per_sample_weights\".\nParameters:\n * num_embeddings (int) -- size of the dictionary of\n embeddings\n * **embedding_dim** (*int*) -- the size of each embedding vector\n\n * **max_norm** (*float**, **optional*) -- If given, each\n embedding vector with norm larger than \"max_norm\" is\n renormalized to have norm \"max_norm\".\n\n * **norm_type** (*float**, **optional*) -- The p of the p-norm\n to compute for the \"max_norm\" option. Default \"2\".\n\n * **scale_grad_by_freq** (*bool**, **optional*) -- if given,\n this will scale gradients by the inverse of frequency of the\n words in the mini-batch. Default \"False\". Note: this option is\n not supported when \"mode=\"max\"\".\n\n * **mode** (*str**, **optional*) -- \"\"sum\"\", \"\"mean\"\" or\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"} {"text": "\"\"max\"\". Specifies the way to reduce the bag. \"\"sum\"\" computes\n the weighted sum, taking \"per_sample_weights\" into\n consideration. \"\"mean\"\" computes the average of the values in\n the bag, \"\"max\"\" computes the max value over each bag.\n Default: \"\"mean\"\"\n * **sparse** (*bool**, **optional*) -- if \"True\", gradient\n w.r.t. \"weight\" matrix will be a sparse tensor. See Notes for\n more details regarding sparse gradients. Note: this option is\n not supported when \"mode=\"max\"\".\n\n * **include_last_offset** (*bool**, **optional*) -- if \"True\",\n \"offsets\" has one additional element, where the last element\n is equivalent to the size of *indices*. 
This matches the CSR\n format.\n\n * **padding_idx** (*int**, **optional*) -- If specified, the\n entries at \"padding_idx\" do not contribute to the gradient;\n therefore, the embedding vector at \"padding_idx\" is not\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"} {"text": "updated during training, i.e. it remains as a fixed \"pad\". For\n a newly constructed EmbeddingBag, the embedding vector at\n \"padding_idx\" will default to all zeros, but can be updated to\n another value to be used as the padding vector. Note that the\n embedding vector at \"padding_idx\" is excluded from the\n reduction.\nVariables:\n weight (Tensor) -- the learnable weights of the module of\n shape (num_embeddings, embedding_dim) initialized from\n \\mathcal{N}(0, 1).\nExamples:\n >>> # an EmbeddingBag module containing 10 tensors of size 3\n >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum')\n >>> # a batch of 2 samples of 4 indices each\n >>> input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9], dtype=torch.long)\n >>> offsets = torch.tensor([0, 4], dtype=torch.long)\n >>> embedding_sum(input, offsets)\n tensor([[-0.8861, -5.4350, -0.0523],\n [ 1.1306, -2.5798, -1.0044]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"} {"text": "[ 1.1306, -2.5798, -1.0044]])\n >>> # Example with padding_idx\n >>> embedding_sum = nn.EmbeddingBag(10, 3, mode='sum', padding_idx=2)\n >>> input = torch.tensor([2, 2, 2, 2, 4, 3, 2, 9], dtype=torch.long)\n >>> offsets = torch.tensor([0, 4], dtype=torch.long)\n >>> embedding_sum(input, offsets)\n tensor([[ 0.0000, 0.0000, 0.0000],\n [-0.7082, 3.2145, -2.6251]])\n\n >>> # An EmbeddingBag can be loaded from an Embedding like so\n >>> embedding = nn.Embedding(10, 3, padding_idx=2)\n >>> embedding_sum = nn.EmbeddingBag.from_pretrained(\n embedding.weight,\n padding_idx=embedding.padding_idx,\n mode='sum')\n\nforward(input, offsets=None, per_sample_weights=None)\n Forward pass of EmbeddingBag.\n\n Parameters:\n * **input** (*Tensor*) -- Tensor containing bags of indices\n into the embedding matrix.\n\n * **offsets** (*Tensor**, **optional*) -- Only used when\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"} {"text": "\"input\" is 1D. \"offsets\" determines the starting index\n position of each bag (sequence) in \"input\".\n * **per_sample_weights** (*Tensor**, **optional*) -- a tensor\n of float / double weights, or None to indicate all weights\n should be taken to be \"1\". If specified,\n \"per_sample_weights\" must have exactly the same shape as\n input and is treated as having the same \"offsets\", if those\n are not \"None\". Only supported for \"mode='sum'\".\n\n Returns:\n Tensor output shape of *(B, embedding_dim)*.\n\n Return type:\n *Tensor*\n\n Note:\n\n A few notes about \"input\" and \"offsets\":\n\n * \"input\" and \"offsets\" have to be of the same type, either\n int or long\n\n * If \"input\" is 2D of shape *(B, N)*, it will be treated as\n \"B\" bags (sequences) each of fixed length \"N\", and this will\n return \"B\" values aggregated in a way depending on the\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"} {"text": "\"mode\". \"offsets\" is ignored and required to be \"None\" in\n this case.\n * If \"input\" is 1D of shape *(N)*, it will be treated as a\n concatenation of multiple bags (sequences). 
\"offsets\" is\n required to be a 1D tensor containing the starting index\n positions of each bag in \"input\". Therefore, for \"offsets\"\n of shape *(B)*, \"input\" will be viewed as having \"B\" bags.\n Empty bags (i.e., having 0-length) will have returned\n vectors filled by zeros.\n\nclassmethod from_pretrained(embeddings, freeze=True, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='mean', sparse=False, include_last_offset=False, padding_idx=None)\n Creates EmbeddingBag instance from given 2-dimensional\n FloatTensor.\n\n Parameters:\n * **embeddings** (*Tensor*) -- FloatTensor containing weights\n for the EmbeddingBag. First dimension is being passed to\n EmbeddingBag as 'num_embeddings', second as\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"} {"text": "'embedding_dim'.\n * **freeze** (*bool**, **optional*) -- If \"True\", the tensor\n does not get updated in the learning process. Equivalent to\n \"embeddingbag.weight.requires_grad = False\". Default:\n \"True\"\n\n * **max_norm** (*float**, **optional*) -- See module\n initialization documentation. Default: \"None\"\n\n * **norm_type** (*float**, **optional*) -- See module\n initialization documentation. Default \"2\".\n\n * **scale_grad_by_freq** (*bool**, **optional*) -- See module\n initialization documentation. Default \"False\".\n\n * **mode** (*str**, **optional*) -- See module initialization\n documentation. Default: \"\"mean\"\"\n\n * **sparse** (*bool**, **optional*) -- See module\n initialization documentation. Default: \"False\".\n\n * **include_last_offset** (*bool**, **optional*) -- See\n module initialization documentation. Default: \"False\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"} {"text": "\n\npadding_idx (int, optional) -- See module\n initialization documentation. Default: \"None\".\nReturn type:\n EmbeddingBag\nExamples:\n >>> # FloatTensor containing pretrained weights\n >>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])\n >>> embeddingbag = nn.EmbeddingBag.from_pretrained(weight)\n >>> # Get embeddings for index 1\n >>> input = torch.LongTensor([[1, 0]])\n >>> embeddingbag(input)\n tensor([[ 2.5000, 3.7000, 4.6500]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.EmbeddingBag.html", "category": "pytorch docs"} {"text": "default_float_qparams_observer\ntorch.quantization.observer.default_float_qparams_observer\nalias of functools.partial(,\n dtype=torch.quint8, qscheme=torch.per_channel_affine_float_qparams,\n ch_axis=0){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_float_qparams_observer.html", "category": "pytorch docs"} {"text": "torch.Tensor.retains_grad\nTensor.retains_grad\nIs \"True\" if this Tensor is non-leaf and its \"grad\" is enabled to\n be populated during \"backward()\", \"False\" otherwise.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.retains_grad.html", "category": "pytorch docs"} {"text": "torch.Tensor.index_copy_\nTensor.index_copy_(dim, index, tensor) -> Tensor\nCopies the elements of \"tensor\" into the \"self\" tensor by selecting\n the indices in the order given in \"index\". 
For example, if \"dim ==\n 0\" and \"index[i] == j\", then the \"i\"th row of \"tensor\" is copied to\n the \"j\"th row of \"self\".\nThe \"dim\"th dimension of \"tensor\" must have the same size as the\n length of \"index\" (which must be a vector), and all other\n dimensions must match \"self\", or an error will be raised.\nNote:\n If \"index\" contains duplicate entries, multiple elements from\n \"tensor\" will be copied to the same index of \"self\". The result\n is nondeterministic since it depends on which copy occurs last.\n\nParameters:\n * dim (int) -- dimension along which to index\n * **index** (*LongTensor*) -- indices of \"tensor\" to select from\n\n * **tensor** (*Tensor*) -- the tensor containing values to copy\n\nExample:\n >>> x = torch.zeros(5, 3)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_copy_.html", "category": "pytorch docs"} {"text": "Example:\n >>> x = torch.zeros(5, 3)\n >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)\n >>> index = torch.tensor([0, 4, 2])\n >>> x.index_copy_(0, index, t)\n tensor([[ 1., 2., 3.],\n [ 0., 0., 0.],\n [ 7., 8., 9.],\n [ 0., 0., 0.],\n [ 4., 5., 6.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_copy_.html", "category": "pytorch docs"} {"text": "torch.Tensor.vsplit\nTensor.vsplit(split_size_or_sections) -> List of Tensors\nSee \"torch.vsplit()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.vsplit.html", "category": "pytorch docs"} {"text": "MultiheadAttention\nclass torch.nn.MultiheadAttention(embed_dim, num_heads, dropout=0.0, bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None, batch_first=False, device=None, dtype=None)\nAllows the model to jointly attend to information from different\n representation subspaces as described in the paper: Attention Is\n All You Need.\nMulti-Head Attention is defined as:\n \\text{MultiHead}(Q, K, V) =\n \\text{Concat}(head_1,\\dots,head_h)W^O\n\nwhere head_i = \\text{Attention}(QW_i^Q, KW_i^K, VW_i^V).\n\"forward()\" will use a special optimized implementation if all of\n the following conditions are met:\n\n\nself attention is being computed (i.e., \"query\", \"key\", and\n \"value\" are the same tensor. This restriction will be loosened in\n the future.)\n\n\ninputs are batched (3D) with \"batch_first==True\"\n\n\nEither autograd is disabled (using \"torch.inference_mode\" or\n \"torch.no_grad\") or no tensor argument \"requires_grad\"\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"} {"text": "\n\ntraining is disabled (using \".eval()\")\n\n\n\"add_bias_kv\" is \"False\"\n\n\n\"add_zero_attn\" is \"False\"\n\n\n\"batch_first\" is \"True\" and the input is batched\n\n\n\"kdim\" and \"vdim\" are equal to \"embed_dim\"\n\n\nif a NestedTensor is passed, neither \"key_padding_mask\" nor\n \"attn_mask\" is passed\n\n\nautocast is disabled\n\n\nIf the optimized implementation is in use, a NestedTensor can be\n passed for \"query\"/\"key\"/\"value\" to represent padding more\n efficiently than using a padding mask. In this case, a NestedTensor\n will be returned, and an additional speedup proportional to the\n fraction of the input that is padding can be expected.\nParameters:\n * embed_dim -- Total dimension of the model.\n * **num_heads** -- Number of parallel attention heads. Note that\n \"embed_dim\" will be split across \"num_heads\" (i.e. 
each head\n will have dimension \"embed_dim // num_heads\").\n\n * **dropout** -- Dropout probability on \"attn_output_weights\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"} {"text": "Default: \"0.0\" (no dropout).\n * **bias** -- If specified, adds bias to input / output\n projection layers. Default: \"True\".\n\n * **add_bias_kv** -- If specified, adds bias to the key and\n value sequences at dim=0. Default: \"False\".\n\n * **add_zero_attn** -- If specified, adds a new batch of zeros\n to the key and value sequences at dim=1. Default: \"False\".\n\n * **kdim** -- Total number of features for keys. Default: \"None\"\n (uses \"kdim=embed_dim\").\n\n * **vdim** -- Total number of features for values. Default:\n \"None\" (uses \"vdim=embed_dim\").\n\n * **batch_first** -- If \"True\", then the input and output\n tensors are provided as (batch, seq, feature). Default:\n \"False\" (seq, batch, feature).\n\nExamples:\n >>> multihead_attn = nn.MultiheadAttention(embed_dim, num_heads)\n >>> attn_output, attn_output_weights = multihead_attn(query, key, value)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"} {"text": "forward(query, key, value, key_padding_mask=None, need_weights=True, attn_mask=None, average_attn_weights=True, is_causal=False)\n Parameters:\n * **query** (*Tensor*) -- Query embeddings of shape (L, E_q)\n for unbatched input, (L, N, E_q) when \"batch_first=False\"\n or (N, L, E_q) when \"batch_first=True\", where L is the\n target sequence length, N is the batch size, and E_q is the\n query embedding dimension \"embed_dim\". Queries are compared\n against key-value pairs to produce the output. See\n \"Attention Is All You Need\" for more details.\n\n * **key** (*Tensor*) -- Key embeddings of shape (S, E_k) for\n unbatched input, (S, N, E_k) when \"batch_first=False\" or\n (N, S, E_k) when \"batch_first=True\", where S is the source\n sequence length, N is the batch size, and E_k is the key\n embedding dimension \"kdim\". See \"Attention Is All You Need\"\n for more details.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"} {"text": "for more details.\n * **value** (*Tensor*) -- Value embeddings of shape (S, E_v)\n for unbatched input, (S, N, E_v) when \"batch_first=False\"\n or (N, S, E_v) when \"batch_first=True\", where S is the\n source sequence length, N is the batch size, and E_v is the\n value embedding dimension \"vdim\". See \"Attention Is All You\n Need\" for more details.\n\n * **key_padding_mask** (*Optional**[**Tensor**]*) -- If\n specified, a mask of shape (N, S) indicating which elements\n within \"key\" to ignore for the purpose of attention (i.e.\n treat as \"padding\"). For unbatched *query*, shape should be\n (S). Binary and byte masks are supported. For a binary\n mask, a \"True\" value indicates that the corresponding \"key\"\n value will be ignored for the purpose of attention. 
For a\n float mask, it will be directly added to the corresponding\n \"key\" value.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"} {"text": "\"key\" value.\n * **need_weights** (*bool*) -- If specified, returns\n \"attn_output_weights\" in addition to \"attn_outputs\".\n Default: \"True\".\n\n * **attn_mask** (*Optional**[**Tensor**]*) -- If specified, a\n 2D or 3D mask preventing attention to certain positions.\n Must be of shape (L, S) or (N\\cdot\\text{num\\_heads}, L, S),\n where N is the batch size, L is the target sequence length,\n and S is the source sequence length. A 2D mask will be\n broadcasted across the batch while a 3D mask allows for a\n different mask for each entry in the batch. Binary, byte,\n and float masks are supported. For a binary mask, a \"True\"\n value indicates that the corresponding position is not\n allowed to attend. For a byte mask, a non-zero value\n indicates that the corresponding position is not allowed to\n attend. For a float mask, the mask values will be added to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"} {"text": "the attention weight.\n * **is_causal** (*bool*) -- If specified, applies a causal\n mask as attention mask. Mutually exclusive with providing\n attn_mask. Default: \"False\".\n\n * **average_attn_weights** (*bool*) -- If true, indicates\n that the returned \"attn_weights\" should be averaged across\n heads. Otherwise, \"attn_weights\" are provided separately\n per head. Note that this flag only has an effect when\n \"need_weights=True\". Default: \"True\" (i.e. average weights\n across heads)\n\n Return type:\n *Tuple*[*Tensor*, *Optional*[*Tensor*]]\n\n Outputs:\n * **attn_output** - Attention outputs of shape (L, E) when\n input is unbatched, (L, N, E) when \"batch_first=False\" or\n (N, L, E) when \"batch_first=True\", where L is the target\n sequence length, N is the batch size, and E is the\n embedding dimension \"embed_dim\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"} {"text": "embedding dimension \"embed_dim\".\n * **attn_output_weights** - Only returned when\n \"need_weights=True\". If \"average_attn_weights=True\",\n returns attention weights averaged across heads of shape\n (L, S) when input is unbatched or (N, L, S), where N is the\n batch size, L is the target sequence length, and S is the\n source sequence length. If \"average_attn_weights=False\",\n returns attention weights per head of shape\n (\\text{num\\_heads}, L, S) when input is unbatched or (N,\n \\text{num\\_heads}, L, S).\n\n Note:\n\n *batch_first* argument is ignored for unbatched inputs.\n\nmerge_masks(attn_mask, key_padding_mask, query)\n Determine mask type and combine masks if necessary. If only one\n mask is provided, that mask and the corresponding mask type will\n be returned. 
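As an illustrative aside (a hedged sketch, not part of the reference entry above; the sizes and mask values are made up for demonstration), a batched "forward()" call that combines a boolean "key_padding_mask" with a 2D "attn_mask" might look like:

 >>> import torch
 >>> import torch.nn as nn
 >>> mha = nn.MultiheadAttention(embed_dim=16, num_heads=4, batch_first=True)
 >>> q = torch.randn(2, 5, 16)                        # (N, L, E_q)
 >>> kv = torch.randn(2, 7, 16)                       # (N, S, E_k) and (N, S, E_v)
 >>> key_padding_mask = torch.zeros(2, 7, dtype=torch.bool)
 >>> key_padding_mask[:, -2:] = True                  # treat the last two key positions as padding
 >>> attn_mask = torch.zeros(5, 7, dtype=torch.bool)  # (L, S); a True entry would forbid attending
 >>> out, weights = mha(q, kv, kv, key_padding_mask=key_padding_mask, attn_mask=attn_mask)
 >>> out.shape, weights.shape
 (torch.Size([2, 5, 16]), torch.Size([2, 5, 7]))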
If both masks are provided, they will be both\n expanded to shape \"(batch_size, num_heads, seq_len, seq_len)\",\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"} {"text": "combined with logical \"or\" and mask type 2 will be returned\n :param attn_mask: attention mask of shape \"(seq_len, seq_len)\",\n mask type 0 :param key_padding_mask: padding mask of shape\n \"(batch_size, seq_len)\", mask type 1 :param query: query\n embeddings of shape \"(batch_size, seq_len, embed_dim)\"\n Returns:\n merged mask mask_type: merged mask type (0, 1, or 2)\n\n Return type:\n merged_mask\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiheadAttention.html", "category": "pytorch docs"} {"text": "torch.bitwise_xor\ntorch.bitwise_xor(input, other, *, out=None) -> Tensor\nComputes the bitwise XOR of \"input\" and \"other\". The input tensor\n must be of integral or Boolean types. For bool tensors, it computes\n the logical XOR.\nParameters:\n * input -- the first input tensor\n * **other** -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.bitwise_xor(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))\n tensor([-2, -2, 0], dtype=torch.int8)\n >>> torch.bitwise_xor(torch.tensor([True, True, False]), torch.tensor([False, True, False]))\n tensor([ True, False, False])\n", "source": "https://pytorch.org/docs/stable/generated/torch.bitwise_xor.html", "category": "pytorch docs"} {"text": "torch.cuda.list_gpu_processes\ntorch.cuda.list_gpu_processes(device=None)\nReturns a human-readable printout of the running processes and\n their GPU memory use for a given device.\nThis can be useful to display periodically during training, or when\n handling out-of-memory exceptions.\nParameters:\n device (torch.device or int, optional) -- selected\n device. Returns printout for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\nReturn type:\n str", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.list_gpu_processes.html", "category": "pytorch docs"} {"text": "torch.full_like\ntorch.full_like(input, fill_value, *, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\nReturns a tensor with the same size as \"input\" filled with\n \"fill_value\". \"torch.full_like(input, fill_value)\" is equivalent to\n \"torch.full(input.size(), fill_value, dtype=input.dtype,\n layout=input.layout, device=input.device)\".\nParameters:\n * input (Tensor) -- the size of \"input\" will determine\n size of the output tensor.\n * **fill_value** -- the number to fill the output tensor with.\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned tensor. Default: if \"None\", defaults to the layout of\n \"input\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.full_like.html", "category": "pytorch docs"} {"text": "\"input\".\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\n * **memory_format** (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.full_like.html", "category": "pytorch docs"} {"text": "ConvTranspose2d\nclass torch.nn.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\nApplies a 2D transposed convolution operator over an input image\n composed of several input planes.\nThis module can be seen as the gradient of Conv2d with respect to\n its input. It is also known as a fractionally-strided convolution\n or a deconvolution (although it is not an actual deconvolution\n operation as it does not compute a true inverse of convolution).\n For more information, see the visualizations here and the\n Deconvolutional Networks paper.\nThis module supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n\n\n\"stride\" controls the stride for the cross-correlation.\n\n\n\"padding\" controls the amount of implicit zero padding on both\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"} {"text": "sides for \"dilation * (kernel_size - 1) - padding\" number of\n points. See note below for details.\n\n\n\"output_padding\" controls the additional size added to one side\n of the output shape. See note below for details.\n\n\n\"dilation\" controls the spacing between the kernel points; also\n known as the \u00c3\u00a0 trous algorithm. It is harder to describe, but the\n link here has a nice visualization of what \"dilation\" does.\n\n\n\"groups\" controls the connections between inputs and outputs.\n \"in_channels\" and \"out_channels\" must both be divisible by\n \"groups\". For example,\n* At groups=1, all inputs are convolved to all outputs.\n\n* At groups=2, the operation becomes equivalent to having two\n conv layers side by side, each seeing half the input\n channels and producing half the output channels, and both\n subsequently concatenated.\n\n* At groups= \"in_channels\", each input channel is convolved\n with its own set of filters (of size\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"} {"text": "with its own set of filters (of size\n \\frac{\\text{out_channels}}{\\text{in_channels}}).\nThe parameters \"kernel_size\", \"stride\", \"padding\", \"output_padding\"\n can either be:\n * a single \"int\" -- in which case the same value is used for the\n height and width dimensions\n\n * a \"tuple\" of two ints -- in which case, the first *int* is\n used for the height dimension, and the second *int* for the\n width dimension\n\nNote:\n The \"padding\" argument effectively adds \"dilation * (kernel_size\n - 1) - padding\" amount of zero padding to both sizes of the\n input. This is set so that when a \"Conv2d\" and a\n \"ConvTranspose2d\" are initialized with same parameters, they are\n inverses of each other in regard to the input and output shapes.\n However, when \"stride > 1\", \"Conv2d\" maps multiple input shapes\n to the same output shape. 
\"output_padding\" is provided to resolve\n this ambiguity by effectively increasing the calculated output\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"} {"text": "shape on one side. Note that \"output_padding\" is only used to\n find output shape, but does not actually add zero-padding to\n output.\nNote:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n\nParameters:\n * in_channels (int) -- Number of channels in the input\n image\n * **out_channels** (*int*) -- Number of channels produced by the\n convolution\n\n * **kernel_size** (*int** or **tuple*) -- Size of the convolving\n kernel\n\n * **stride** (*int** or **tuple**, **optional*) -- Stride of the\n convolution. Default: 1\n\n * **padding** (*int** or **tuple**, **optional*) -- \"dilation *\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"} {"text": "(kernel_size - 1) - padding\" zero-padding will be added to\n both sides of each dimension in the input. Default: 0\n * **output_padding** (*int** or **tuple**, **optional*) --\n Additional size added to one side of each dimension in the\n output shape. Default: 0\n\n * **groups** (*int**, **optional*) -- Number of blocked\n connections from input channels to output channels. Default: 1\n\n * **bias** (*bool**, **optional*) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n\n * **dilation** (*int** or **tuple**, **optional*) -- Spacing\n between kernel elements. Default: 1\n\nShape:\n * Input: (N, C_{in}, H_{in}, W_{in}) or (C_{in}, H_{in}, W_{in})\n * Output: (N, C_{out}, H_{out}, W_{out}) or (C_{out}, H_{out},\n W_{out}), where\n\n H_{out} = (H_{in} - 1) \\times \\text{stride}[0] - 2 \\times\n \\text{padding}[0] + \\text{dilation}[0] \\times\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"} {"text": "(\\text{kernel_size}[0] - 1) + \\text{output_padding}[0] + 1\n W_{out} = (W_{in} - 1) \\times \\text{stride}[1] - 2 \\times\n \\text{padding}[1] + \\text{dilation}[1] \\times\n (\\text{kernel\\_size}[1] - 1) + \\text{output\\_padding}[1] + 1\n\nVariables:\n * weight (Tensor) -- the learnable weights of the module\n of shape (\\text{in_channels},\n \\frac{\\text{out_channels}}{\\text{groups}},\n \\text{kernel_size[0]}, \\text{kernel_size[1]}). 
The values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}) where k = \\frac{groups}{C_\\text{out} *\n \\prod_{i=0}^{1}\\text{kernel_size}[i]}\n * **bias** (*Tensor*) -- the learnable bias of the module of\n shape (out_channels) If \"bias\" is \"True\", then the values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}) where k = \\frac{groups}{C_\\text{out} *\n \\prod_{i=0}^{1}\\text{kernel\\_size}[i]}\n\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"} {"text": "Examples:\n >>> # With square kernels and equal stride\n >>> m = nn.ConvTranspose2d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding\n >>> m = nn.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))\n >>> input = torch.randn(20, 16, 50, 100)\n >>> output = m(input)\n >>> # exact output size can be also specified as an argument\n >>> input = torch.randn(1, 16, 12, 12)\n >>> downsample = nn.Conv2d(16, 16, 3, stride=2, padding=1)\n >>> upsample = nn.ConvTranspose2d(16, 16, 3, stride=2, padding=1)\n >>> h = downsample(input)\n >>> h.size()\n torch.Size([1, 16, 6, 6])\n >>> output = upsample(h, output_size=input.size())\n >>> output.size()\n torch.Size([1, 16, 12, 12])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose2d.html", "category": "pytorch docs"} {"text": "torch.cuda.comm.reduce_add\ntorch.cuda.comm.reduce_add(inputs, destination=None)\nSums tensors from multiple GPUs.\nAll inputs should have matching shapes, dtype, and layout. The\n output tensor will be of the same shape, dtype, and layout.\nParameters:\n * inputs (Iterable[Tensor]) -- an iterable of\n tensors to add.\n * **destination** (*int**, **optional*) -- a device on which the\n output will be placed (default: current device).\n\nReturns:\n A tensor containing an elementwise sum of all inputs, placed on\n the \"destination\" device.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.reduce_add.html", "category": "pytorch docs"} {"text": "torch.Tensor.negative\nTensor.negative() -> Tensor\nSee \"torch.negative()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.negative.html", "category": "pytorch docs"} {"text": "torch.Tensor.t\nTensor.t() -> Tensor\nSee \"torch.t()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.t.html", "category": "pytorch docs"} {"text": "torch.Tensor.cauchy_\nTensor.cauchy_(median=0, sigma=1, *, generator=None) -> Tensor\nFills the tensor with numbers drawn from the Cauchy distribution:\n f(x) = \\dfrac{1}{\\pi} \\dfrac{\\sigma}{(x - \\text{median})^2 +\n \\sigma^2}\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cauchy_.html", "category": "pytorch docs"} {"text": "torch.autograd.functional.hvp\ntorch.autograd.functional.hvp(func, inputs, v=None, create_graph=False, strict=False)\nFunction that computes the dot product between the Hessian of a\n given scalar function and a vector \"v\" at the point given by the\n inputs.\nParameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a Tensor with a single element.\n * **inputs** (*tuple of Tensors** or **Tensor*) -- inputs to the\n function \"func\".\n\n * **v** (*tuple of Tensors** or **Tensor*) -- The vector for\n which the Hessian vector product is computed. Must be the same\n size as the input of \"func\". 
This argument is optional when\n \"func\"'s input contains a single element and (if it is not\n provided) will be set as a Tensor containing a single \"1\".\n\n * **create_graph** (*bool**, **optional*) -- If \"True\", both the\n output and result will be computed in a differentiable way.\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hvp.html", "category": "pytorch docs"} {"text": "Note that when \"strict\" is \"False\", the result can not require\n gradients or be disconnected from the inputs. Defaults to\n \"False\".\n * **strict** (*bool**, **optional*) -- If \"True\", an error will\n be raised when we detect that there exists an input such that\n all the outputs are independent of it. If \"False\", we return a\n Tensor of zeros as the hvp for said inputs, which is the\n expected mathematical value. Defaults to \"False\".\n\nReturns:\n tuple with:\n func_output (tuple of Tensors or Tensor): output of\n \"func(inputs)\"\n hvp (tuple of Tensors or Tensor): result of the dot product\n with the same shape as the inputs.\n\nReturn type:\n output (tuple)\n-[ Example ]-\n\n\n\ndef pow_reducer(x):\n ... return x.pow(3).sum()\ninputs = torch.rand(2, 2)\nv = torch.ones(2, 2)\nhvp(pow_reducer, inputs, v)\n (tensor(0.1448),\n tensor([[2.0239, 1.6456],\n [2.4988, 1.4310]]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hvp.html", "category": "pytorch docs"} {"text": "[2.4988, 1.4310]]))\n\n\n\nhvp(pow_reducer, inputs, v, create_graph=True)\n (tensor(0.1448, grad_fn=),\n tensor([[2.0239, 1.6456],\n [2.4988, 1.4310]], grad_fn=))\ndef pow_adder_reducer(x, y):\n ... return (2 * x.pow(2) + 3 * y.pow(2)).sum()\ninputs = (torch.rand(2), torch.rand(2))\nv = (torch.zeros(2), torch.ones(2))\nhvp(pow_adder_reducer, inputs, v)\n (tensor(2.3030),\n (tensor([0., 0.]),\n tensor([6., 6.])))\n\n\n\nNote:\n This function is significantly slower than *vhp* due to backward\n mode AD constraints. If your functions is twice continuously\n differentiable, then hvp = vhp.t(). So if you know that your\n function satisfies this condition, you should use vhp instead\n that is much faster with the current implementation.\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hvp.html", "category": "pytorch docs"} {"text": "torch.Tensor.tril\nTensor.tril(diagonal=0) -> Tensor\nSee \"torch.tril()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tril.html", "category": "pytorch docs"} {"text": "torch.Tensor.lt\nTensor.lt(other) -> Tensor\nSee \"torch.lt()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lt.html", "category": "pytorch docs"} {"text": "torch.Tensor.exp\nTensor.exp() -> Tensor\nSee \"torch.exp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.exp.html", "category": "pytorch docs"} {"text": "torch.nn.functional.conv2d\ntorch.nn.functional.conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) -> Tensor\nApplies a 2D convolution over an input image composed of several\n input planes.\nThis operator supports TensorFloat32.\nSee \"Conv2d\" for details and output shape.\nNote:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". 
See Reproducibility for more information.\n\nNote:\n This operator supports complex data types i.e. \"complex32,\n complex64, complex128\".\n\nParameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iH , iW)\n * **weight** -- filters of shape (\\text{out\\_channels} ,\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv2d.html", "category": "pytorch docs"} {"text": "\\frac{\\text{in_channels}}{\\text{groups}} , kH , kW)\n * **bias** -- optional bias tensor of shape\n (\\text{out\\_channels}). Default: \"None\"\n\n * **stride** -- the stride of the convolving kernel. Can be a\n single number or a tuple *(sH, sW)*. Default: 1\n\n * **padding** --\n\n implicit paddings on both sides of the input. Can be a string\n {'valid', 'same'}, single number or a tuple *(padH, padW)*.\n Default: 0 \"padding='valid'\" is the same as no padding.\n \"padding='same'\" pads the input so the output has the same\n shape as the input. However, this mode doesn't support any\n stride values other than 1.\n\n Warning:\n\n For \"padding='same'\", if the \"weight\" is even-length and\n \"dilation\" is odd in any dimension, a full \"pad()\" operation\n may be needed internally. Lowering performance.\n\n * **dilation** -- the spacing between kernel elements. Can be a\n single number or a tuple *(dH, dW)*. Default: 1\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv2d.html", "category": "pytorch docs"} {"text": "\ngroups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\n\nExamples:\n >>> # With square kernels and equal stride\n >>> filters = torch.randn(8, 4, 3, 3)\n >>> inputs = torch.randn(1, 4, 5, 5)\n >>> F.conv2d(inputs, filters, padding=1)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv2d.html", "category": "pytorch docs"} {"text": "torch.func.vmap\ntorch.func.vmap(func, in_dims=0, out_dims=0, randomness='error', *, chunk_size=None)\nvmap is the vectorizing map; \"vmap(func)\" returns a new function\n that maps \"func\" over some dimension of the inputs. Semantically,\n vmap pushes the map into PyTorch operations called by \"func\",\n effectively vectorizing those operations.\nvmap is useful for handling batch dimensions: one can write a\n function \"func\" that runs on examples and then lift it to a\n function that can take batches of examples with \"vmap(func)\". vmap\n can also be used to compute batched gradients when composed with\n autograd.\nNote:\n \"torch.vmap()\" is aliased to \"torch.func.vmap()\" for convenience.\n Use whichever one you'd like.\n\nParameters:\n * func (function) -- A Python function that takes one or\n more arguments. Must return one or more Tensors.\n * **in_dims** (*int** or **nested structure*) -- Specifies which\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"} {"text": "dimension of the inputs should be mapped over. \"in_dims\"\n should have a structure like the inputs. If the \"in_dim\" for a\n particular input is None, then that indicates there is no map\n dimension. Default: 0.\n * **out_dims** (*int** or **Tuple**[**int**]*) -- Specifies\n where the mapped dimension should appear in the outputs. If\n \"out_dims\" is a Tuple, then it should have one element per\n output. Default: 0.\n\n * **randomness** (*str*) -- Specifies whether the randomness in\n this vmap should be the same or different across batches. 
If\n 'different', the randomness for each batch will be different.\n If 'same', the randomness will be the same across batches. If\n 'error', any calls to random functions will error. Default:\n 'error'. WARNING: this flag only applies to random PyTorch\n operations and does not apply to Python's random module or\n numpy randomness.\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"} {"text": "numpy randomness.\n * **chunk_size** (*None** or **int*) -- If None (default), apply\n a single vmap over inputs. If not None, then compute the vmap\n \"chunk_size\" samples at a time. Note that \"chunk_size=1\" is\n equivalent to computing the vmap with a for-loop. If you run\n into memory issues computing the vmap, please try a non-None\n chunk_size.\n\nReturns:\n Returns a new \"batched\" function. It takes the same inputs as\n \"func\", except each input has an extra dimension at the index\n specified by \"in_dims\". It takes returns the same outputs as\n \"func\", except each output has an extra dimension at the index\n specified by \"out_dims\".\nReturn type:\n Callable\nOne example of using \"vmap()\" is to compute batched dot products.\n PyTorch doesn't provide a batched \"torch.dot\" API; instead of\n unsuccessfully rummaging through docs, use \"vmap()\" to construct a\n new function.", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"} {"text": "new function.\n\n\n\ntorch.dot # [D], [D] -> []\nbatched_dot = torch.func.vmap(torch.dot) # [N, D], [N, D] -> [N]\nx, y = torch.randn(2, 5), torch.randn(2, 5)\nbatched_dot(x, y)\n\n\n\n\"vmap()\" can be helpful in hiding batch dimensions, leading to a\n simpler model authoring experience.\n\n\n\nbatch_size, feature_size = 3, 5\nweights = torch.randn(feature_size, requires_grad=True)\ndef model(feature_vec):\n # Very simple linear model with activation\n return feature_vec.dot(weights).relu()\nexamples = torch.randn(batch_size, feature_size)\nresult = torch.vmap(model)(examples)\n\n\n\n\"vmap()\" can also help vectorize computations that were previously\n difficult or impossible to batch. One example is higher-order\n gradient computation. The PyTorch autograd engine computes vjps\n (vector-Jacobian products). Computing a full Jacobian matrix for\n some function f: R^N -> R^N usually requires N calls to", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"} {"text": "\"autograd.grad\", one per Jacobian row. 
Using \"vmap()\", we can\n vectorize the whole computation, computing the Jacobian in a single\n call to \"autograd.grad\".\n\n\n\nSetup\nN = 5\nf = lambda x: x ** 2\nx = torch.randn(N, requires_grad=True)\ny = f(x)\nI_N = torch.eye(N)\nSequential approach\njacobian_rows = [torch.autograd.grad(y, x, v, retain_graph=True)[0]\n for v in I_N.unbind()]\njacobian = torch.stack(jacobian_rows)\nvectorized gradient computation\ndef get_vjp(v):\n return torch.autograd.grad(y, x, v)\njacobian = torch.vmap(get_vjp)(I_N)\n\n\n\n\"vmap()\" can also be nested, producing an output with multiple\n batched dimensions\n\n\n\ntorch.dot # [D], [D] -> []\nbatched_dot = torch.vmap(torch.vmap(torch.dot)) # [N1, N0, D], [N1, N0, D] -> [N1, N0]\nx, y = torch.randn(2, 3, 5), torch.randn(2, 3, 5)\nbatched_dot(x, y) # tensor of size [2, 3]\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"} {"text": "\n\n\nbatched_dot(x, y) # tensor of size [2, 3]\n\n\n\nIf the inputs are not batched along the first dimension, \"in_dims\"\n specifies the dimension that each inputs are batched along as\n\n\n\ntorch.dot # [N], [N] -> []\nbatched_dot = torch.vmap(torch.dot, in_dims=1) # [N, D], [N, D] -> [D]\nx, y = torch.randn(2, 5), torch.randn(2, 5)\nbatched_dot(x, y) # output is [5] instead of [2] if batched along the 0th dimension\n\n\n\nIf there are multiple inputs each of which is batched along\n different dimensions, \"in_dims\" must be a tuple with the batch\n dimension for each input as\n\n\n\ntorch.dot # [D], [D] -> []\nbatched_dot = torch.vmap(torch.dot, in_dims=(0, None)) # [N, D], [D] -> [N]\nx, y = torch.randn(2, 5), torch.randn(5)\nbatched_dot(x, y) # second arg doesn't have a batch dim because in_dim[1] was None\n\n\n\nIf the input is a Python struct, \"in_dims\" must be a tuple", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"} {"text": "containing a struct matching the shape of the input:\n\n\n\nf = lambda dict: torch.dot(dict['x'], dict['y'])\nx, y = torch.randn(2, 5), torch.randn(5)\ninput = {'x': x, 'y': y}\nbatched_dot = torch.vmap(f, in_dims=({'x': 0, 'y': None},))\nbatched_dot(input)\n\n\n\nBy default, the output is batched along the first dimension.\n However, it can be batched along any dimension by using \"out_dims\"\n\n\n\nf = lambda x: x ** 2\nx = torch.randn(2, 5)\nbatched_pow = torch.vmap(f, out_dims=1)\nbatched_pow(x) # [5, 2]\n\n\n\nFor any function that uses kwargs, the returned function will not\n batch the kwargs but will accept kwargs\n\n\n\nx = torch.randn([2, 5])\ndef fn(x, scale=4.):\n return x * scale\nbatched_pow = torch.vmap(fn)\nassert torch.allclose(batched_pow(x), x * 4)\nbatched_pow(x, scale=x) # scale is not batched, output has shape [2, 2, 5]\n\n\n\nNote:\n vmap does not provide general autobatching or handle variable-\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"} {"text": "length sequences out of the box.", "source": "https://pytorch.org/docs/stable/generated/torch.func.vmap.html", "category": "pytorch docs"} {"text": "torch.Tensor.bernoulli_\nTensor.bernoulli_(p=0.5, *, generator=None) -> Tensor\nFills each location of \"self\" with an independent sample from\n \\text{Bernoulli}(\\texttt{p}). 
\"self\" can have integral \"dtype\".\n\"p\" should either be a scalar or tensor containing probabilities to\n be used for drawing the binary random number.\nIf it is a tensor, the \\text{i}^{th} element of \"self\" tensor will\n be set to a value sampled from\n \\text{Bernoulli}(\\texttt{p_tensor[i]}). In this case p must have\n floating point \"dtype\".\nSee also \"bernoulli()\" and \"torch.bernoulli()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bernoulli_.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_meta\nTensor.is_meta\nIs \"True\" if the Tensor is a meta tensor, \"False\" otherwise. Meta\n tensors are like normal tensors, but they carry no data.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_meta.html", "category": "pytorch docs"} {"text": "torch.jit.onednn_fusion_enabled\ntorch.jit.onednn_fusion_enabled()\nReturns whether onednn JIT fusion is enabled", "source": "https://pytorch.org/docs/stable/generated/torch.jit.onednn_fusion_enabled.html", "category": "pytorch docs"} {"text": "torch.Tensor.absolute_\nTensor.absolute_() -> Tensor\nIn-place version of \"absolute()\" Alias for \"abs_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.absolute_.html", "category": "pytorch docs"} {"text": "torch.logaddexp2\ntorch.logaddexp2(input, other, *, out=None) -> Tensor\nLogarithm of the sum of exponentiations of the inputs in base-2.\nCalculates pointwise \\log_2\\left(2^x + 2^y\\right). See\n \"torch.logaddexp()\" for more details.\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.logaddexp2.html", "category": "pytorch docs"} {"text": "torch.cuda.memory_snapshot\ntorch.cuda.memory_snapshot()\nReturns a snapshot of the CUDA memory allocator state across all\n devices.\nInterpreting the output of this function requires familiarity with\n the memory allocator internals.\nNote:\n See Memory management for more details about GPU memory\n management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_snapshot.html", "category": "pytorch docs"} {"text": "torch.Tensor.sigmoid\nTensor.sigmoid() -> Tensor\nSee \"torch.sigmoid()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sigmoid.html", "category": "pytorch docs"} {"text": "LazyInstanceNorm2d\nclass torch.nn.LazyInstanceNorm2d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\nA \"torch.nn.InstanceNorm2d\" module with lazy initialization of the\n \"num_features\" argument of the \"InstanceNorm2d\" that is inferred\n from the \"input.size(1)\". The attributes that will be lazily\n initialized are weight, bias, running_mean and running_var.\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * num_features -- C from an expected input of size (N, C, H,\n W) or (C, H, W)\n * **eps** (*float*) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. 
Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm2d.html", "category": "pytorch docs"} {"text": "\"True\", this module has learnable affine parameters,\n initialized the same way as done for batch normalization.\n Default: \"False\".\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics and always uses batch statistics in both\n training and eval modes. Default: \"False\"\n\nShape:\n * Input: (N, C, H, W) or (C, H, W)\n * Output: (N, C, H, W) or (C, H, W) (same shape as input)\n\ncls_to_become\n alias of \"InstanceNorm2d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm2d.html", "category": "pytorch docs"} {"text": "torch.isreal\ntorch.isreal(input) -> Tensor\nReturns a new tensor with boolean elements representing if each\n element of \"input\" is real-valued or not. All real-valued types are\n considered real. Complex values are considered real when their\n imaginary part is 0.\nParameters:\n input (Tensor) -- the input tensor.\nReturns:\n A boolean tensor that is True where \"input\" is real and False\n elsewhere\nExample:\n >>> torch.isreal(torch.tensor([1, 1+1j, 2+0j]))\n tensor([True, False, True])\n", "source": "https://pytorch.org/docs/stable/generated/torch.isreal.html", "category": "pytorch docs"} {"text": "TransformerEncoderLayer\nclass torch.nn.TransformerEncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation=, layer_norm_eps=1e-05, batch_first=False, norm_first=False, device=None, dtype=None)\nTransformerEncoderLayer is made up of self-attn and feedforward\n network. This standard encoder layer is based on the paper\n \"Attention Is All You Need\". Ashish Vaswani, Noam Shazeer, Niki\n Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser,\n and Illia Polosukhin. 2017. Attention is all you need. In Advances\n in Neural Information Processing Systems, pages 6000-6010. Users\n may modify or implement in a different way during application.\nParameters:\n * d_model (int) -- the number of expected features in the\n input (required).\n * **nhead** (*int*) -- the number of heads in the\n multiheadattention models (required).\n\n * **dim_feedforward** (*int*) -- the dimension of the\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html", "category": "pytorch docs"} {"text": "feedforward network model (default=2048).\n * **dropout** (*float*) -- the dropout value (default=0.1).\n\n * **activation** (*Union**[**str**,\n **Callable**[**[**Tensor**]**, **Tensor**]**]*) -- the\n activation function of the intermediate layer, can be a string\n (\"relu\" or \"gelu\") or a unary callable. Default: relu\n\n * **layer_norm_eps** (*float*) -- the eps value in layer\n normalization components (default=1e-5).\n\n * **batch_first** (*bool*) -- If \"True\", then the input and\n output tensors are provided as (batch, seq, feature). Default:\n \"False\" (seq, batch, feature).\n\n * **norm_first** (*bool*) -- if \"True\", layer norm is done prior\n to attention and feedforward operations, respectively.\n Otherwise it's done after. 
Default: \"False\" (after).\n\nExamples::\n >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)\n >>> src = torch.rand(10, 32, 512)\n >>> out = encoder_layer(src)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html", "category": "pytorch docs"} {"text": "\n\n\nout = encoder_layer(src)\n\n\n\nAlternatively, when \"batch_first\" is \"True\":\n >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8, batch_first=True)\n >>> src = torch.rand(32, 10, 512)\n >>> out = encoder_layer(src)\nFast path:\n forward() will use a special optimized implementation if all of\n the following conditions are met:\n * Either autograd is disabled (using \"torch.inference_mode\" or\n \"torch.no_grad\") or no tensor argument \"requires_grad\"\n\n * training is disabled (using \".eval()\")\n\n * batch_first is \"True\" and the input is batched (i.e.,\n \"src.dim() == 3\")\n\n * activation is one of: \"\"relu\"\", \"\"gelu\"\",\n \"torch.functional.relu\", or \"torch.functional.gelu\"\n\n * at most one of \"src_mask\" and \"src_key_padding_mask\" is passed\n\n * if src is a NestedTensor, neither \"src_mask\" nor\n \"src_key_padding_mask\" is passed\n\n * the two \"LayerNorm\" instances have a consistent \"eps\" value\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html", "category": "pytorch docs"} {"text": "(this will naturally be the case unless the caller has\n manually modified one without modifying the other)\n If the optimized implementation is in use, a NestedTensor can be\n passed for \"src\" to represent padding more efficiently than\n using a padding mask. In this case, a NestedTensor will be\n returned, and an additional speedup proportional to the fraction\n of the input that is padding can be expected.\n\nforward(src, src_mask=None, src_key_padding_mask=None, is_causal=False)\n Pass the input through the encoder layer.\n\n Parameters:\n * **src** (*Tensor*) -- the sequence to the encoder layer\n (required).\n\n * **src_mask** (*Optional**[**Tensor**]*) -- the mask for the\n src sequence (optional).\n\n * **is_causal** (*bool*) -- If specified, applies a causal\n mask as src_mask. Mutually exclusive with providing\n src_mask. Default: \"False\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html", "category": "pytorch docs"} {"text": "src_mask. Default: \"False\".\n * **src_key_padding_mask** (*Optional**[**Tensor**]*) -- the\n mask for the src keys per batch (optional).\n\n Return type:\n *Tensor*\n\n Shape:\n see the docs in Transformer class.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoderLayer.html", "category": "pytorch docs"} {"text": "MaxUnpool3d\nclass torch.nn.MaxUnpool3d(kernel_size, stride=None, padding=0)\nComputes a partial inverse of \"MaxPool3d\".\n\"MaxPool3d\" is not fully invertible, since the non-maximal values\n are lost. \"MaxUnpool3d\" takes in as input the output of \"MaxPool3d\"\n including the indices of the maximal values and computes a partial\n inverse in which all non-maximal values are set to zero.\nNote:\n \"MaxPool3d\" can map several input sizes to the same output sizes.\n Hence, the inversion process can get ambiguous. To accommodate\n this, you can provide the needed output size as an additional\n argument \"output_size\" in the forward call. 
See the Inputs\n section below.\n\nParameters:\n * kernel_size (int or tuple) -- Size of the max\n pooling window.\n * **stride** (*int** or **tuple*) -- Stride of the max pooling\n window. It is set to \"kernel_size\" by default.\n\n * **padding** (*int** or **tuple*) -- Padding that was added to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool3d.html", "category": "pytorch docs"} {"text": "the input\nInputs:\n * input: the input Tensor to invert\n * *indices*: the indices given out by \"MaxPool3d\"\n\n * *output_size* (optional): the targeted output size\n\nShape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where\n\n D_{out} = (D_{in} - 1) \\times \\text{stride[0]} - 2 \\times\n \\text{padding[0]} + \\text{kernel\\_size[0]}\n\n H_{out} = (H_{in} - 1) \\times \\text{stride[1]} - 2 \\times\n \\text{padding[1]} + \\text{kernel\\_size[1]}\n\n W_{out} = (W_{in} - 1) \\times \\text{stride[2]} - 2 \\times\n \\text{padding[2]} + \\text{kernel\\_size[2]}\n\n or as given by \"output_size\" in the call operator\n\nExample:\n >>> # pool of square window of size=3, stride=2\n >>> pool = nn.MaxPool3d(3, stride=2, return_indices=True)\n >>> unpool = nn.MaxUnpool3d(3, stride=2)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool3d.html", "category": "pytorch docs"} {"text": "\n\n\nunpool = nn.MaxUnpool3d(3, stride=2)\n >>> output, indices = pool(torch.randn(20, 16, 51, 33, 15))\n >>> unpooled_output = unpool(output, indices)\n >>> unpooled_output.size()\n torch.Size([20, 16, 51, 33, 15])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_leaf\nTensor.is_leaf\nAll Tensors that have \"requires_grad\" which is \"False\" will be leaf\n Tensors by convention.\nFor Tensors that have \"requires_grad\" which is \"True\", they will be\n leaf Tensors if they were created by the user. This means that they\n are not the result of an operation and so \"grad_fn\" is None.\nOnly leaf Tensors will have their \"grad\" populated during a call to\n \"backward()\". To get \"grad\" populated for non-leaf Tensors, you can\n use \"retain_grad()\".\nExample:\n >>> a = torch.rand(10, requires_grad=True)\n >>> a.is_leaf\n True\n >>> b = torch.rand(10, requires_grad=True).cuda()\n >>> b.is_leaf\n False\n # b was created by the operation that cast a cpu Tensor into a cuda Tensor\n >>> c = torch.rand(10, requires_grad=True) + 2\n >>> c.is_leaf\n False\n # c was created by the addition operation\n >>> d = torch.rand(10).cuda()\n >>> d.is_leaf\n True\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_leaf.html", "category": "pytorch docs"} {"text": "\n\n\nd.is_leaf\n True\n # d does not require gradients and so has no operation creating it (that is tracked by the autograd engine)\n >>> e = torch.rand(10).cuda().requires_grad_()\n >>> e.is_leaf\n True\n # e requires gradients and has no operations creating it\n >>> f = torch.rand(10, requires_grad=True, device=\"cuda\")\n >>> f.is_leaf\n True\n # f requires grad, has no operation creating it\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_leaf.html", "category": "pytorch docs"} {"text": "torch.jit.wait\ntorch.jit.wait(future)\nForces completion of a torch.jit.Future[T] asynchronous task,\n returning the result of the task. See \"fork()\" for docs and\n examples. 
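A minimal fork/wait sketch (illustrative only; the helper function below is made up and not part of the linked docs):

 >>> import torch
 >>> def add_one(x):
 ...     return x + 1
 >>> future = torch.jit.fork(add_one, torch.ones(2))
 >>> torch.jit.wait(future)
 tensor([2., 2.])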
:param future: an asynchronous task reference, created\n through torch.jit.fork :type future: torch.jit.Future[T]\nReturns:\n the return value of the the completed task\nReturn type:\n T", "source": "https://pytorch.org/docs/stable/generated/torch.jit.wait.html", "category": "pytorch docs"} {"text": "torch.Tensor.scatter_add\nTensor.scatter_add(dim, index, src) -> Tensor\nOut-of-place version of \"torch.Tensor.scatter_add_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add.html", "category": "pytorch docs"} {"text": "torch.Tensor.reshape\nTensor.reshape(*shape) -> Tensor\nReturns a tensor with the same data and number of elements as\n \"self\" but with the specified shape. This method returns a view if\n \"shape\" is compatible with the current shape. See\n \"torch.Tensor.view()\" on when it is possible to return a view.\nSee \"torch.reshape()\"\nParameters:\n shape (tuple of ints or int...) -- the desired shape", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.reshape.html", "category": "pytorch docs"} {"text": "ObserverBase\nclass torch.quantization.observer.ObserverBase(dtype)\nBase observer Module. Any observer implementation should derive\n from this class.\nConcrete observers should follow the same API. In forward, they\n will update the statistics of the observed Tensor. And they should\n provide a calculate_qparams function that computes the\n quantization parameters given the collected statistics.\nParameters:\n dtype -- dtype argument to the quantize node needed to\n implement the reference model spec.\nclassmethod with_args(**kwargs)\n Wrapper that allows creation of class factories.\n\n This can be useful when there is a need to create classes with\n the same constructor arguments, but different instances. 
Can be\n used in conjunction with _callable_args\n\n Example:\n\n >>> Foo.with_args = classmethod(_with_args)\n >>> foo_builder = Foo.with_args(a=3, b=4).with_args(answer=42)\n >>> foo_instance1 = foo_builder()\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.ObserverBase.html", "category": "pytorch docs"} {"text": "\n\n\nfoo_instance1 = foo_builder()\n >>> foo_instance2 = foo_builder()\n >>> id(foo_instance1) == id(foo_instance2)\n False\n\n\n\nclassmethod with_callable_args(**kwargs)\n Wrapper that allows creation of class factories args that need\n to be called at construction time.\n\n This can be useful when there is a need to create classes with\n the same constructor arguments, but different instances and\n those arguments should only be calculated at construction time.\n Can be used in conjunction with _with_args\n\n Example:\n\n >>> Foo.with_callable_args = classmethod(_with_callable_args)\n >>> Foo.with_args = classmethod(_with_args)\n >>> foo_builder = Foo.with_callable_args(cur_time=get_time_func).with_args(name=\"dan\")\n >>> foo_instance1 = foo_builder()\n >>> # wait 50\n >>> foo_instance2 = foo_builder()\n >>> id(foo_instance1.creation_time) == id(foo_instance2.creation_time)\n False\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.ObserverBase.html", "category": "pytorch docs"} {"text": "torch.Tensor.igamma_\nTensor.igamma_(other) -> Tensor\nIn-place version of \"igamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.igamma_.html", "category": "pytorch docs"} {"text": "torch.Tensor.log10\nTensor.log10() -> Tensor\nSee \"torch.log10()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log10.html", "category": "pytorch docs"} {"text": "torch.cuda.can_device_access_peer\ntorch.cuda.can_device_access_peer(device, peer_device)\nChecks if peer access between two devices is possible.\nReturn type:\n bool", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.can_device_access_peer.html", "category": "pytorch docs"} {"text": "torch.linalg.det\ntorch.linalg.det(A, *, out=None) -> Tensor\nComputes the determinant of a square matrix.\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nSee also:\n \"torch.linalg.slogdet()\" computes the sign and natural logarithm\n of the absolute value of the determinant of square matrices.\n\nParameters:\n A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions.\nKeyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. 
Default: None.\nExamples:\n >>> A = torch.randn(3, 3)\n >>> torch.linalg.det(A)\n tensor(0.0934)\n\n >>> A = torch.randn(3, 2, 2)\n >>> torch.linalg.det(A)\n tensor([1.1990, 0.4099, 0.7386])\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.det.html", "category": "pytorch docs"} {"text": "TripletMarginWithDistanceLoss\nclass torch.nn.TripletMarginWithDistanceLoss(*, distance_function=None, margin=1.0, swap=False, reduction='mean')\nCreates a criterion that measures the triplet loss given input\n tensors a, p, and n (representing anchor, positive, and negative\n examples, respectively), and a nonnegative, real-valued function\n (\"distance function\") used to compute the relationship between the\n anchor and positive example (\"positive distance\") and the anchor\n and negative example (\"negative distance\").\nThe unreduced loss (i.e., with \"reduction\" set to \"'none'\") can be\n described as:\n \\ell(a, p, n) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad l_i = \\max\n \\{d(a_i, p_i) - d(a_i, n_i) + {\\rm margin}, 0\\}\n\nwhere N is the batch size; d is a nonnegative, real-valued function\n quantifying the closeness of two tensors, referred to as the\n \"distance_function\"; and margin is a nonnegative margin", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html", "category": "pytorch docs"} {"text": "representing the minimum difference between the positive and\n negative distances that is required for the loss to be 0. The\n input tensors have N elements each and can be of any shape that the\n distance function can handle.\nIf \"reduction\" is not \"'none'\" (default \"'mean'\"), then:\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{`mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{`sum'.}\n \\end{cases}\n\nSee also \"TripletMarginLoss\", which computes the triplet loss for\n input tensors using the l_p distance as the distance function.\nParameters:\n * distance_function (Callable, optional) -- A\n nonnegative, real-valued function that quantifies the\n closeness of two tensors. If not specified,\n nn.PairwiseDistance will be used. Default: \"None\"\n * **margin** (*float**, **optional*) -- A nonnegative margin\n representing the minimum difference between the positive and\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html", "category": "pytorch docs"} {"text": "negative distances required for the loss to be 0. Larger\n margins penalize cases where the negative examples are not\n distant enough from the anchors, relative to the positives.\n Default: 1.\n * **swap** (*bool**, **optional*) -- Whether to use the distance\n swap described in the paper *Learning shallow convolutional\n feature descriptors with triplet losses* by V. Balntas, E.\n Riba et al. If True, and if the positive example is closer to\n the negative example than the anchor is, swaps the positive\n example and the anchor in the loss computation. Default:\n \"False\".\n\n * **reduction** (*str**, **optional*) -- Specifies the\n (optional) reduction to apply to the output: \"'none'\" |\n \"'mean'\" | \"'sum'\". 
\"'none'\": no reduction will be applied,\n \"'mean'\": the sum of the output will be divided by the number\n of elements in the output, \"'sum'\": the output will be summed.\n Default: \"'mean'\"\n\nShape:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html", "category": "pytorch docs"} {"text": "Default: \"'mean'\"\nShape:\n * Input: (N, *) where * represents any number of additional\n dimensions as supported by the distance function.\n * Output: A Tensor of shape (N) if \"reduction\" is \"'none'\", or a\n scalar otherwise.\n\nExamples:\n >>> # Initialize embeddings\n >>> embedding = nn.Embedding(1000, 128)\n >>> anchor_ids = torch.randint(0, 1000, (1,))\n >>> positive_ids = torch.randint(0, 1000, (1,))\n >>> negative_ids = torch.randint(0, 1000, (1,))\n >>> anchor = embedding(anchor_ids)\n >>> positive = embedding(positive_ids)\n >>> negative = embedding(negative_ids)\n >>>\n >>> # Built-in Distance Function\n >>> triplet_loss = \\\n >>> nn.TripletMarginWithDistanceLoss(distance_function=nn.PairwiseDistance())\n >>> output = triplet_loss(anchor, positive, negative)\n >>> output.backward()\n >>>\n >>> # Custom Distance Function\n >>> def l_infinity(x1, x2):\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html", "category": "pytorch docs"} {"text": "\n\n\ndef l_infinity(x1, x2):\n >>> return torch.max(torch.abs(x1 - x2), dim=1).values\n >>>\n >>> triplet_loss = (\n >>> nn.TripletMarginWithDistanceLoss(distance_function=l_infinity, margin=1.5))\n >>> output = triplet_loss(anchor, positive, negative)\n >>> output.backward()\n >>>\n >>> # Custom Distance Function (Lambda)\n >>> triplet_loss = (\n >>> nn.TripletMarginWithDistanceLoss(\n >>> distance_function=lambda x, y: 1.0 - F.cosine_similarity(x, y)))\n >>> output = triplet_loss(anchor, positive, negative)\n >>> output.backward()\n\n\n\nReference:\n V. Balntas, et al.: Learning shallow convolutional feature\n descriptors with triplet losses:\n http://www.bmva.org/bmvc/2016/papers/paper119/index.html", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginWithDistanceLoss.html", "category": "pytorch docs"} {"text": "torch.rot90\ntorch.rot90(input, k=1, dims=[0, 1]) -> Tensor\nRotate an n-D tensor by 90 degrees in the plane specified by dims\n axis. Rotation direction is from the first towards the second axis\n if k > 0, and from the second towards the first for k < 0.\nParameters:\n * input (Tensor) -- the input tensor.\n * **k** (*int*) -- number of times to rotate. Default value is 1\n\n * **dims** (*a list** or **tuple*) -- axis to rotate. Default\n value is [0, 1]\n\nExample:\n >>> x = torch.arange(4).view(2, 2)\n >>> x\n tensor([[0, 1],\n [2, 3]])\n >>> torch.rot90(x, 1, [0, 1])\n tensor([[1, 3],\n [0, 2]])\n\n >>> x = torch.arange(8).view(2, 2, 2)\n >>> x\n tensor([[[0, 1],\n [2, 3]],\n\n [[4, 5],\n [6, 7]]])\n >>> torch.rot90(x, 1, [1, 2])\n tensor([[[1, 3],\n [0, 2]],\n\n [[5, 7],\n [4, 6]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.rot90.html", "category": "pytorch docs"} {"text": "torch.nn.utils.prune.global_unstructured\ntorch.nn.utils.prune.global_unstructured(parameters, pruning_method, importance_scores=None, **kwargs)\nGlobally prunes tensors corresponding to all parameters in\n \"parameters\" by applying the specified \"pruning_method\". 
Modifies\n modules in place by:\n\n\nadding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n\n\nreplacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n\n\nParameters:\n * parameters (Iterable of (module, name)\n tuples) -- parameters of the model to prune in a global\n fashion, i.e. by aggregating all weights prior to deciding\n which ones to prune. module must be of type \"nn.Module\", and\n name must be a string.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.global_unstructured.html", "category": "pytorch docs"} {"text": "name must be a string.\n * **pruning_method** (*function*) -- a valid pruning function\n from this module, or a custom one implemented by the user that\n satisfies the implementation guidelines and has\n \"PRUNING_TYPE='unstructured'\".\n\n * **importance_scores** (*dict*) -- a dictionary mapping\n (module, name) tuples to the corresponding parameter's\n importance scores tensor. The tensor should be the same shape\n as the parameter, and is used for computing mask for pruning.\n If unspecified or None, the parameter will be used in place of\n its importance scores.\n\n * **kwargs** -- other keyword arguments such as: amount (int or\n float): quantity of parameters to prune across the specified\n parameters. If \"float\", should be between 0.0 and 1.0 and\n represent the fraction of parameters to prune. If \"int\", it\n represents the absolute number of parameters to prune.\n\nRaises:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.global_unstructured.html", "category": "pytorch docs"} {"text": "Raises:\n TypeError -- if \"PRUNING_TYPE != 'unstructured'\"\nNote:\n Since global structured pruning doesn't make much sense unless\n the norm is normalized by the size of the parameter, we now limit\n the scope of global pruning to unstructured methods.\n\n-[ Examples ]-\n\n\n\nfrom torch.nn.utils import prune\nfrom collections import OrderedDict\nnet = nn.Sequential(OrderedDict([\n ... ('first', nn.Linear(10, 4)),\n ... ('second', nn.Linear(4, 1)),\n ... ]))\nparameters_to_prune = (\n ... (net.first, 'weight'),\n ... (net.second, 'weight'),\n ... )\nprune.global_unstructured(\n ... parameters_to_prune,\n ... pruning_method=prune.L1Unstructured,\n ... amount=10,\n ... )\nprint(sum(torch.nn.utils.parameters_to_vector(net.buffers()) == 0))\n tensor(10)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.global_unstructured.html", "category": "pytorch docs"} {"text": "TransformerDecoderLayer\nclass torch.nn.TransformerDecoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation=, layer_norm_eps=1e-05, batch_first=False, norm_first=False, device=None, dtype=None)\nTransformerDecoderLayer is made up of self-attn, multi-head-attn\n and feedforward network. This standard decoder layer is based on\n the paper \"Attention Is All You Need\". Ashish Vaswani, Noam\n Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez,\n Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you\n need. In Advances in Neural Information Processing Systems, pages\n 6000-6010. 
Users may modify or implement in a different way during\n application.\nParameters:\n * d_model (int) -- the number of expected features in the\n input (required).\n * **nhead** (*int*) -- the number of heads in the\n multiheadattention models (required).\n\n * **dim_feedforward** (*int*) -- the dimension of the\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoderLayer.html", "category": "pytorch docs"} {"text": "feedforward network model (default=2048).\n * **dropout** (*float*) -- the dropout value (default=0.1).\n\n * **activation** (*Union**[**str**,\n **Callable**[**[**Tensor**]**, **Tensor**]**]*) -- the\n activation function of the intermediate layer, can be a string\n (\"relu\" or \"gelu\") or a unary callable. Default: relu\n\n * **layer_norm_eps** (*float*) -- the eps value in layer\n normalization components (default=1e-5).\n\n * **batch_first** (*bool*) -- If \"True\", then the input and\n output tensors are provided as (batch, seq, feature). Default:\n \"False\" (seq, batch, feature).\n\n * **norm_first** (*bool*) -- if \"True\", layer norm is done prior\n to self attention, multihead attention and feedforward\n operations, respectively. Otherwise it's done after. Default:\n \"False\" (after).\n\nExamples::\n >>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)\n >>> memory = torch.rand(10, 32, 512)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoderLayer.html", "category": "pytorch docs"} {"text": "\n\n\nmemory = torch.rand(10, 32, 512)\n >>> tgt = torch.rand(20, 32, 512)\n >>> out = decoder_layer(tgt, memory)\n\n\n\nAlternatively, when \"batch_first\" is \"True\":\n >>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8, batch_first=True)\n >>> memory = torch.rand(32, 10, 512)\n >>> tgt = torch.rand(32, 20, 512)\n >>> out = decoder_layer(tgt, memory)\nforward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None, tgt_is_causal=False, memory_is_causal=False)\n Pass the inputs (and mask) through the decoder layer.\n\n Parameters:\n * **tgt** (*Tensor*) -- the sequence to the decoder layer\n (required).\n\n * **memory** (*Tensor*) -- the sequence from the last layer\n of the encoder (required).\n\n * **tgt_mask** (*Optional**[**Tensor**]*) -- the mask for the\n tgt sequence (optional).\n\n * **memory_mask** (*Optional**[**Tensor**]*) -- the mask for\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoderLayer.html", "category": "pytorch docs"} {"text": "the memory sequence (optional).\n * **tgt_key_padding_mask** (*Optional**[**Tensor**]*) -- the\n mask for the tgt keys per batch (optional).\n\n * **memory_key_padding_mask** (*Optional**[**Tensor**]*) --\n the mask for the memory keys per batch (optional).\n\n * **tgt_is_causal** (*bool*) -- If specified, applies a\n causal mask as tgt mask. Mutually exclusive with providing\n tgt_mask. Default: \"False\".\n\n * **memory_is_causal** (*bool*) -- If specified, applies a\n causal mask as tgt mask. Mutually exclusive with providing\n memory_mask. 
Default: \"False\".\n\n Return type:\n *Tensor*\n\n Shape:\n see the docs in Transformer class.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoderLayer.html", "category": "pytorch docs"} {"text": "torch.randint_like\ntorch.randint_like(input, low=0, high, *, dtype=None, layout=torch.strided, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\nReturns a tensor with the same shape as Tensor \"input\" filled with\n random integers generated uniformly between \"low\" (inclusive) and\n \"high\" (exclusive).\nParameters:\n * input (Tensor) -- the size of \"input\" will determine\n size of the output tensor.\n * **low** (*int**, **optional*) -- Lowest integer to be drawn\n from the distribution. Default: 0.\n\n * **high** (*int*) -- One above the highest integer to be drawn\n from the distribution.\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned tensor. Default: if \"None\", defaults to the layout of\n", "source": "https://pytorch.org/docs/stable/generated/torch.randint_like.html", "category": "pytorch docs"} {"text": "\"input\".\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **memory_format** (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.randint_like.html", "category": "pytorch docs"} {"text": "torch.Tensor.masked_select\nTensor.masked_select(mask) -> Tensor\nSee \"torch.masked_select()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.masked_select.html", "category": "pytorch docs"} {"text": "torch.Tensor.bernoulli\nTensor.bernoulli(*, generator=None) -> Tensor\nReturns a result tensor where each \\texttt{result[i]} is\n independently sampled from \\text{Bernoulli}(\\texttt{self[i]}).\n \"self\" must have floating point \"dtype\", and the result will have\n the same \"dtype\".\nSee \"torch.bernoulli()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bernoulli.html", "category": "pytorch docs"} {"text": "torch.fft.fftfreq\ntorch.fft.fftfreq(n, d=1.0, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nComputes the discrete Fourier Transform sample frequencies for a\n signal of size \"n\".\nNote:\n By convention, \"fft()\" returns positive frequency terms first,\n followed by the negative frequencies in reverse order, so that\n \"f[-i]\" for all 0 < i \\leq n/2` in Python gives the negative\n frequency terms. For an FFT of length \"n\" and with inputs spaced\n in length unit \"d\", the frequencies are:\n\n f = [0, 1, ..., (n - 1) // 2, -(n // 2), ..., -1] / (d * n)\n\nNote:\n For even lengths, the Nyquist frequency at \"f[n/2]\" can be\n thought of as either negative or positive. 
\"fftfreq()\" follows\n NumPy's convention of taking it to be negative.\n\nParameters:\n * n (int) -- the FFT length\n * **d** (*float**, **optional*) -- The sampling length scale.\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftfreq.html", "category": "pytorch docs"} {"text": "The spacing between individual samples of the FFT input. The\n default assumes unit spacing, dividing that result by the\n actual spacing gives the result in physical frequency units.\nKeyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftfreq.html", "category": "pytorch docs"} {"text": "record operations on the returned tensor. Default: \"False\".\n-[ Example ]-\n\n\n\ntorch.fft.fftfreq(5)\n tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000])\n\n\n\nFor even input, we can see the Nyquist frequency at \"f[2]\" is given\n as negative:\n\n\n\ntorch.fft.fftfreq(4)\n tensor([ 0.0000, 0.2500, -0.5000, -0.2500])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftfreq.html", "category": "pytorch docs"} {"text": "torch.Tensor.broadcast_to\nTensor.broadcast_to(shape) -> Tensor\nSee \"torch.broadcast_to()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.broadcast_to.html", "category": "pytorch docs"} {"text": "torch.cuda.nvtx.range_push\ntorch.cuda.nvtx.range_push(msg)\nPushes a range onto a stack of nested range span. Returns zero-\n based depth of the range that is started.\nParameters:\n msg (str) -- ASCII message to associate with range", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.nvtx.range_push.html", "category": "pytorch docs"} {"text": "GELU\nclass torch.nn.GELU(approximate='none')\nApplies the Gaussian Error Linear Units function:\n \\text{GELU}(x) = x * \\Phi(x)\n\nwhere \\Phi(x) is the Cumulative Distribution Function for Gaussian\n Distribution.\nWhen the approximate argument is 'tanh', Gelu is estimated with:\n \\text{GELU}(x) = 0.5 * x * (1 + \\text{Tanh}(\\sqrt(2 / \\pi) * (x\n + 0.044715 * x^3)))\n\nParameters:\n approximate (str, optional) -- the gelu approximation\n algorithm to use: \"'none'\" | \"'tanh'\". 
Default: \"'none'\"\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.GELU()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GELU.html", "category": "pytorch docs"} {"text": "torch.func.functionalize\ntorch.func.functionalize(func, *, remove='mutations')\nfunctionalize is a transform that can be used to remove\n (intermediate) mutations and aliasing from a function, while\n preserving the function's semantics.\n\"functionalize(func)\" returns a new function with the same\n semantics as \"func\", but with all intermediate mutations removed.\n Every inplace operation performed on an intermediate tensor:\n \"intermediate.foo_()\" gets replaced by its out-of-place equivalent:\n \"intermediate_updated = intermediate.foo()\".\nfunctionalize is useful for shipping a pytorch program off to\n backends or compilers that aren't able to easily represent\n mutations or aliasing operators.\nParameters:\n * func (Callable) -- A Python function that takes one or\n more arguments.\n * **remove** (*str*) -- An optional string argument, that takes\n on either the value 'mutations' or 'mutations_and_views'. If\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"} {"text": "'mutations' is passed in then all mutating operators will be\n replaced with their non-mutating equivalents. If\n 'mutations_and_views' is passed in, then additionally, all\n aliasing operators will be replaced with their non-aliasing\n equivalents. Default: 'mutations'.\nReturns:\n Returns a new \"functionalized\" function. It takes the same\n inputs as \"func\", and has the same behavior, but any mutations\n (and optionally aliasing) performed on intermeidate tensors in\n the function will be removed.\nReturn type:\n Callable\nfunctionalize will also remove mutations (and views) that were\n performed on function inputs. However to preserve semantics,\n functionalize will \"fix up\" the mutations after the transform has\n finished running, by detecting if any tensor inputs \"should have\"\n been mutated, and copying the new data back to the inputs if\n necessary.\nExample:\n >>> import torch\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"} {"text": "necessary.\nExample:\n >>> import torch\n >>> from torch.fx.experimental.proxy_tensor import make_fx\n >>> from torch.func import functionalize\n >>>\n >>> # A function that uses mutations and views, but only on intermediate tensors.\n >>> def f(a):\n ... b = a + 1\n ... c = b.view(-1)\n ... c.add_(1)\n ... 
return b\n ...\n >>> inpt = torch.randn(2)\n >>>\n >>> out1 = f(inpt)\n >>> out2 = functionalize(f)(inpt)\n >>>\n >>> # semantics are the same (outputs are equivalent)\n >>> print(torch.allclose(out1, out2))\n True\n >>>\n >>> f_traced = make_fx(f)(inpt)\n >>> f_no_mutations_traced = make_fx(functionalize(f))(inpt)\n >>> f_no_mutations_and_views_traced = make_fx(functionalize(f, remove='mutations_and_views'))(inpt)\n >>>\n >>> print(f_traced.code)\n\n\n\n def forward(self, a_1):\n add = torch.ops.aten.add(a_1, 1); a_1 = None\n view = torch.ops.aten.view(add, [-1])\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"} {"text": "view = torch.ops.aten.view(add, [-1])\n add_ = torch.ops.aten.add_(view, 1); view = None\n return add\n >>> print(f_no_mutations_traced.code)\n\n\n\n def forward(self, a_1):\n add = torch.ops.aten.add(a_1, 1); a_1 = None\n view = torch.ops.aten.view(add, [-1]); add = None\n add_1 = torch.ops.aten.add(view, 1); view = None\n view_1 = torch.ops.aten.view(add_1, [2]); add_1 = None\n return view_1\n\n >>> print(f_no_mutations_and_views_traced.code)\n\n\n\n def forward(self, a_1):\n add = torch.ops.aten.add(a_1, 1); a_1 = None\n view_copy = torch.ops.aten.view_copy(add, [-1]); add = None\n add_1 = torch.ops.aten.add(view_copy, 1); view_copy = None\n view_copy_1 = torch.ops.aten.view_copy(add_1, [2]); add_1 = None\n return view_copy_1\n\n\n >>> # A function that mutates its input tensor\n >>> def f(a):\n ... b = a.view(-1)\n ... b.add_(1)\n ... return a\n ...\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"} {"text": "... return a\n ...\n >>> f_no_mutations_and_views_traced = make_fx(functionalize(f, remove='mutations_and_views'))(inpt)\n >>> #\n >>> # All mutations and views have been removed,\n >>> # but there is an extra copy_ in the graph to correctly apply the mutation to the input\n >>> # after the function has completed.\n >>> print(f_no_mutations_and_views_traced.code)\n def forward(self, a_1):\n view_copy = torch.ops.aten.view_copy(a_1, [-1])\n add = torch.ops.aten.add(view_copy, 1); view_copy = None\n view_copy_1 = torch.ops.aten.view_copy(add, [2]); add = None\n copy_ = torch.ops.aten.copy_(a_1, view_copy_1); a_1 = None\n return view_copy_1\n\nThere are a few \"failure modes\" for functionalize that are worth\n calling out:\n 1. Like other torch.func transforms, functionalize() doesn't\n work with functions that directly use .backward(). The same", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"} {"text": "is true for torch.autograd.grad. If you want to use autograd,\n you can compute gradients directly with\n functionalize(grad(f)).\n 2. Like other torch.func transforms, *functionalize()* doesn't\n work with global state. If you call *functionalize(f)* on a\n function that takes views / mutations of non-local state,\n functionalization will simply no-op and pass the\n view/mutation calls directly to the backend. One way to work\n around this is is to ensure that any non-local state creation\n is wrapped into a larger function, which you then call\n functionalize on.\n\n 3. *resize_()* has some limitations: functionalize will only\n work on programs that use resize_()` as long as the tensor\n being resized is not a view.\n\n 4. 
*as_strided()* has some limitations: functionalize will not\n work on *as_strided()* calls that result in tensors with\n overlapping memory.\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"} {"text": "overlapping memory.\nFinally, a helpful mental model for understanding functionalization\n is that most user pytorch programs are writting with the public\n torch API. When executed, torch operators are generally decomposed\n into our internal C++ \"ATen\" API. The logic for functionalization\n happens entirely at the level of ATen. Functionalization knows how\n to take every aliasing operator in ATen, and map it to its non-\n aliasing equivalent (e.g. \"tensor.view({-1})\" ->\n \"at::view_copy(tensor, {-1})\"), and how to take every mutating\n operator in ATen, and map it to its non-mutating equivalent (e.g.\n \"tensor.add_(1)\" -> \"at::add(tensor, -1)\"), while tracking aliases\n and mutations out-of-line to know when to fix things up.\n Information about which ATen operators are aliasing or mutating all\n comes from https://github.com/pytorch/pytorch/blob/master/aten/src\n /ATen/native/native_functions.yaml.", "source": "https://pytorch.org/docs/stable/generated/torch.func.functionalize.html", "category": "pytorch docs"} {"text": "torch.bernoulli\ntorch.bernoulli(input, *, generator=None, out=None) -> Tensor\nDraws binary random numbers (0 or 1) from a Bernoulli distribution.\nThe \"input\" tensor should be a tensor containing probabilities to\n be used for drawing the binary random number. Hence, all values in\n \"input\" have to be in the range: 0 \\leq \\text{input}_i \\leq 1.\nThe \\text{i}^{th} element of the output tensor will draw a value 1\n according to the \\text{i}^{th} probability value given in \"input\".\n \\text{out}_{i} \\sim \\mathrm{Bernoulli}(p = \\text{input}_{i})\n\nThe returned \"out\" tensor only has values 0 or 1 and is of the same\n shape as \"input\".\n\"out\" can have integral \"dtype\", but \"input\" must have floating\n point \"dtype\".\nParameters:\n input (Tensor) -- the input tensor of probability values\n for the Bernoulli distribution\nKeyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling", "source": "https://pytorch.org/docs/stable/generated/torch.bernoulli.html", "category": "pytorch docs"} {"text": "number generator for sampling\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> a = torch.empty(3, 3).uniform_(0, 1) # generate a uniform random matrix with range [0, 1]\n >>> a\n tensor([[ 0.1737, 0.0950, 0.3609],\n [ 0.7148, 0.0289, 0.2676],\n [ 0.9456, 0.8937, 0.7202]])\n >>> torch.bernoulli(a)\n tensor([[ 1., 0., 0.],\n [ 0., 0., 0.],\n [ 1., 1., 1.]])\n\n >>> a = torch.ones(3, 3) # probability of drawing \"1\" is 1\n >>> torch.bernoulli(a)\n tensor([[ 1., 1., 1.],\n [ 1., 1., 1.],\n [ 1., 1., 1.]])\n >>> a = torch.zeros(3, 3) # probability of drawing \"1\" is 0\n >>> torch.bernoulli(a)\n tensor([[ 0., 0., 0.],\n [ 0., 0., 0.],\n [ 0., 0., 0.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.bernoulli.html", "category": "pytorch docs"} {"text": "torch.minimum\ntorch.minimum(input, other, *, out=None) -> Tensor\nComputes the element-wise minimum of \"input\" and \"other\".\nNote:\n If one of the elements being compared is a NaN, then that element\n is returned. 
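For instance (an illustrative snippet, not taken from the original page):\n    >>> torch.minimum(torch.tensor([1., float('nan')]), torch.tensor([2., 0.]))\n    tensor([1., nan])\n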
\"minimum()\" is not supported for tensors with\n complex dtypes.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor((1, 2, -1))\n >>> b = torch.tensor((3, 0, 4))\n >>> torch.minimum(a, b)\n tensor([1, 0, -1])\n", "source": "https://pytorch.org/docs/stable/generated/torch.minimum.html", "category": "pytorch docs"} {"text": "torch.logical_and\ntorch.logical_and(input, other, *, out=None) -> Tensor\nComputes the element-wise logical AND of the given input tensors.\n Zeros are treated as \"False\" and nonzeros are treated as \"True\".\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the tensor to compute AND with\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.logical_and(torch.tensor([True, False, True]), torch.tensor([True, False, False]))\n tensor([ True, False, False])\n >>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)\n >>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)\n >>> torch.logical_and(a, b)\n tensor([False, False, True, False])\n >>> torch.logical_and(a.double(), b.double())\n tensor([False, False, True, False])\n >>> torch.logical_and(a.double(), b)\n tensor([False, False, True, False])\n", "source": "https://pytorch.org/docs/stable/generated/torch.logical_and.html", "category": "pytorch docs"} {"text": "tensor([False, False, True, False])\n >>> torch.logical_and(a, b, out=torch.empty(4, dtype=torch.bool))\n tensor([False, False, True, False])", "source": "https://pytorch.org/docs/stable/generated/torch.logical_and.html", "category": "pytorch docs"} {"text": "CELU\nclass torch.nn.CELU(alpha=1.0, inplace=False)\nApplies the element-wise function:\n \\text{CELU}(x) = \\max(0,x) + \\min(0, \\alpha * (\\exp(x/\\alpha) -\n 1))\n\nMore details can be found in the paper Continuously Differentiable\n Exponential Linear Units .\nParameters:\n * alpha (float) -- the \\alpha value for the CELU\n formulation. Default: 1.0\n * **inplace** (*bool*) -- can optionally do the operation in-\n place. 
Default: \"False\"\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.CELU()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CELU.html", "category": "pytorch docs"} {"text": "TransformerDecoder\nclass torch.nn.TransformerDecoder(decoder_layer, num_layers, norm=None)\nTransformerDecoder is a stack of N decoder layers\nParameters:\n * decoder_layer -- an instance of the\n TransformerDecoderLayer() class (required).\n * **num_layers** -- the number of sub-decoder-layers in the\n decoder (required).\n\n * **norm** -- the layer normalization component (optional).\n\nExamples::\n >>> decoder_layer = nn.TransformerDecoderLayer(d_model=512, nhead=8)\n >>> transformer_decoder = nn.TransformerDecoder(decoder_layer, num_layers=6)\n >>> memory = torch.rand(10, 32, 512)\n >>> tgt = torch.rand(20, 32, 512)\n >>> out = transformer_decoder(tgt, memory)\nforward(tgt, memory, tgt_mask=None, memory_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)\n Pass the inputs (and mask) through the decoder layer in turn.\n\n Parameters:\n * **tgt** (*Tensor*) -- the sequence to the decoder\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoder.html", "category": "pytorch docs"} {"text": "(required).\n * **memory** (*Tensor*) -- the sequence from the last layer\n of the encoder (required).\n\n * **tgt_mask** (*Optional**[**Tensor**]*) -- the mask for the\n tgt sequence (optional).\n\n * **memory_mask** (*Optional**[**Tensor**]*) -- the mask for\n the memory sequence (optional).\n\n * **tgt_key_padding_mask** (*Optional**[**Tensor**]*) -- the\n mask for the tgt keys per batch (optional).\n\n * **memory_key_padding_mask** (*Optional**[**Tensor**]*) --\n the mask for the memory keys per batch (optional).\n\n Return type:\n *Tensor*\n\n Shape:\n see the docs in Transformer class.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerDecoder.html", "category": "pytorch docs"} {"text": "avg_pool3d\nclass torch.ao.nn.quantized.functional.avg_pool3d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)\nApplies 3D average-pooling operation in kD \\ times kH \\times kW\n regions by step size sD \\times sH \\times sW steps. The number of\n output features is equal to the number of input planes.\nNote:\n The input quantization parameters propagate to the output.\n\nParameters:\n * input -- quantized input tensor (\\text{minibatch} ,\n \\text{in_channels} , iH , iW)\n * **kernel_size** -- size of the pooling region. Can be a single\n number or a tuple *(kD, kH, kW)*\n\n * **stride** -- stride of the pooling operation. Can be a single\n number or a tuple *(sD, sH, sW)*. Default: \"kernel_size\"\n\n * **padding** -- implicit zero paddings on both sides of the\n input. Can be a single number or a tuple *(padD, padH, padW)*.\n Default: 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.avg_pool3d.html", "category": "pytorch docs"} {"text": "Default: 0\n * **ceil_mode** -- when True, will use *ceil* instead of *floor*\n in the formula to compute the output shape. Default: \"False\"\n\n * **count_include_pad** -- when True, will include the zero-\n padding in the averaging calculation. 
Default: \"True\"\n\n * **divisor_override** -- if specified, it will be used as\n divisor, otherwise size of the pooling region will be used.\n Default: None\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.avg_pool3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.nan_to_num\nTensor.nan_to_num(nan=0.0, posinf=None, neginf=None) -> Tensor\nSee \"torch.nan_to_num()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nan_to_num.html", "category": "pytorch docs"} {"text": "torch.nn.functional.dropout\ntorch.nn.functional.dropout(input, p=0.5, training=True, inplace=False)\nDuring training, randomly zeroes some of the elements of the input\n tensor with probability \"p\" using samples from a Bernoulli\n distribution.\nSee \"Dropout\" for details.\nParameters:\n * p (float) -- probability of an element to be zeroed.\n Default: 0.5\n * **training** (*bool*) -- apply dropout if is \"True\". Default:\n \"True\"\n\n * **inplace** (*bool*) -- If set to \"True\", will do this\n operation in-place. Default: \"False\"\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout.html", "category": "pytorch docs"} {"text": "torch.Tensor.dot\nTensor.dot(other) -> Tensor\nSee \"torch.dot()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.dot.html", "category": "pytorch docs"} {"text": "torch.Tensor.fmin\nTensor.fmin(other) -> Tensor\nSee \"torch.fmin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fmin.html", "category": "pytorch docs"} {"text": "torch.Tensor.expand\nTensor.expand(*sizes) -> Tensor\nReturns a new view of the \"self\" tensor with singleton dimensions\n expanded to a larger size.\nPassing -1 as the size for a dimension means not changing the size\n of that dimension.\nTensor can be also expanded to a larger number of dimensions, and\n the new ones will be appended at the front. For the new dimensions,\n the size cannot be set to -1.\nExpanding a tensor does not allocate new memory, but only creates a\n new view on the existing tensor where a dimension of size one is\n expanded to a larger size by setting the \"stride\" to 0. Any\n dimension of size 1 can be expanded to an arbitrary value without\n allocating new memory.\nParameters:\n sizes (torch.Size or int...*) -- the desired\n expanded size\nWarning:\n More than one element of an expanded tensor may refer to a single\n memory location. As a result, in-place operations (especially\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.expand.html", "category": "pytorch docs"} {"text": "ones that are vectorized) may result in incorrect behavior. 
If\n you need to write to the tensors, please clone them first.\nExample:\n >>> x = torch.tensor([[1], [2], [3]])\n >>> x.size()\n torch.Size([3, 1])\n >>> x.expand(3, 4)\n tensor([[ 1, 1, 1, 1],\n [ 2, 2, 2, 2],\n [ 3, 3, 3, 3]])\n >>> x.expand(-1, 4) # -1 means not changing the size of that dimension\n tensor([[ 1, 1, 1, 1],\n [ 2, 2, 2, 2],\n [ 3, 3, 3, 3]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.expand.html", "category": "pytorch docs"} {"text": "RReLU\nclass torch.nn.RReLU(lower=0.125, upper=0.3333333333333333, inplace=False)\nApplies the randomized leaky rectified liner unit function,\n element-wise, as described in the paper:\nEmpirical Evaluation of Rectified Activations in Convolutional\n Network.\nThe function is defined as:\n \\text{RReLU}(x) = \\begin{cases} x & \\text{if } x \\geq 0 \\\\\n ax & \\text{ otherwise } \\end{cases}\n\nwhere a is randomly sampled from uniform distribution\n \\mathcal{U}(\\text{lower}, \\text{upper}).\n See: https://arxiv.org/pdf/1505.00853.pdf\n\nParameters:\n * lower (float) -- lower bound of the uniform\n distribution. Default: \\frac{1}{8}\n * **upper** (*float*) -- upper bound of the uniform\n distribution. Default: \\frac{1}{3}\n\n * **inplace** (*bool*) -- can optionally do the operation in-\n place. Default: \"False\"\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RReLU.html", "category": "pytorch docs"} {"text": "[image]\nExamples:\n >>> m = nn.RReLU(0.1, 0.3)\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RReLU.html", "category": "pytorch docs"} {"text": "torch.Tensor.transpose_\nTensor.transpose_(dim0, dim1) -> Tensor\nIn-place version of \"transpose()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.transpose_.html", "category": "pytorch docs"} {"text": "torch.nn.functional.max_unpool1d\ntorch.nn.functional.max_unpool1d(input, indices, kernel_size, stride=None, padding=0, output_size=None)\nComputes a partial inverse of \"MaxPool1d\".\nSee \"MaxUnpool1d\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_unpool1d.html", "category": "pytorch docs"} {"text": "torch.nn.functional.linear\ntorch.nn.functional.linear(input, weight, bias=None) -> Tensor\nApplies a linear transformation to the incoming data: y = xA^T + b.\nThis operation supports 2-D \"weight\" with sparse layout\nWarning:\n Sparse support is a beta feature and some layout(s)/dtype/device\n combinations may not be supported, or may not have autograd\n support. If you notice missing functionality please open a\n feature request.\n\nThis operator supports TensorFloat32.\nShape:\n * Input: (*, in\\_features) where *** means any number of\n additional dimensions, including none\n\n * Weight: (out\\_features, in\\_features) or (in\\_features)\n\n * Bias: (out\\_features) or ()\n\n * Output: (*, out\\_features) or (*), based on the shape of the\n weight\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.linear.html", "category": "pytorch docs"} {"text": "torch.nansum\ntorch.nansum(input, *, dtype=None) -> Tensor\nReturns the sum of all elements, treating Not a Numbers (NaNs) as\n zero.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. 
If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\nExample:\n >>> a = torch.tensor([1., 2., float('nan'), 4.])\n >>> torch.nansum(a)\n tensor(7.)\n\ntorch.nansum(input, dim, keepdim=False, *, dtype=None) -> Tensor\nReturns the sum of each row of the \"input\" tensor in the given\n dimension \"dim\", treating Not a Numbers (NaNs) as zero. If \"dim\" is\n a list of dimensions, reduce over all of them.\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.", "source": "https://pytorch.org/docs/stable/generated/torch.nansum.html", "category": "pytorch docs"} {"text": "Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints**, **optional*) -- the\n dimension or dimensions to reduce. If \"None\", all dimensions\n are reduced.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\nExample:\n >>> torch.nansum(torch.tensor([1., float(\"nan\")]))\n 1.0\n >>> a = torch.tensor([[1, 2], [3., float(\"nan\")]])\n >>> torch.nansum(a)\n tensor(6.)\n >>> torch.nansum(a, dim=0)\n tensor([4., 2.])\n >>> torch.nansum(a, dim=1)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nansum.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.nansum(a, dim=1)\n tensor([3., 3.])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nansum.html", "category": "pytorch docs"} {"text": "torch.Tensor.maximum\nTensor.maximum(other) -> Tensor\nSee \"torch.maximum()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.maximum.html", "category": "pytorch docs"} {"text": "torch.t\ntorch.t(input) -> Tensor\nExpects \"input\" to be <= 2-D tensor and transposes dimensions 0 and\n 1.\n0-D and 1-D tensors are returned as is. 
When input is a 2-D tensor\n this is equivalent to \"transpose(input, 0, 1)\".\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> x = torch.randn(())\n >>> x\n tensor(0.1995)\n >>> torch.t(x)\n tensor(0.1995)\n >>> x = torch.randn(3)\n >>> x\n tensor([ 2.4320, -0.4608, 0.7702])\n >>> torch.t(x)\n tensor([ 2.4320, -0.4608, 0.7702])\n >>> x = torch.randn(2, 3)\n >>> x\n tensor([[ 0.4875, 0.9158, -0.5872],\n [ 0.3938, -0.6929, 0.6932]])\n >>> torch.t(x)\n tensor([[ 0.4875, 0.3938],\n [ 0.9158, -0.6929],\n [-0.5872, 0.6932]])\n\nSee also \"torch.transpose()\".", "source": "https://pytorch.org/docs/stable/generated/torch.t.html", "category": "pytorch docs"} {"text": "torch.Tensor.lt_\nTensor.lt_(other) -> Tensor\nIn-place version of \"lt()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lt_.html", "category": "pytorch docs"} {"text": "torch.nn.functional.binary_cross_entropy\ntorch.nn.functional.binary_cross_entropy(input, target, weight=None, size_average=None, reduce=None, reduction='mean')\nFunction that measures the Binary Cross Entropy between the target\n and input probabilities.\nSee \"BCELoss\" for details.\nParameters:\n * input (Tensor) -- Tensor of arbitrary shape as\n probabilities.\n * **target** (*Tensor*) -- Tensor of the same shape as input\n with values between 0 and 1.\n\n * **weight** (*Tensor**, **optional*) -- a manual rescaling\n weight if provided it's repeated to match input tensor shape\n\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n multiple elements per sample. If the field \"size_average\" is\n set to \"False\", the losses are instead summed for each\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy.html", "category": "pytorch docs"} {"text": "minibatch. Ignored when reduce is \"False\". Default: \"True\"\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". 
Default: \"'mean'\"\n\nReturn type:\n Tensor\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy.html", "category": "pytorch docs"} {"text": "Return type:\n Tensor\nExamples:\n >>> input = torch.randn(3, 2, requires_grad=True)\n >>> target = torch.rand(3, 2, requires_grad=False)\n >>> loss = F.binary_cross_entropy(torch.sigmoid(input), target)\n >>> loss.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy.html", "category": "pytorch docs"} {"text": "torch.fmin\ntorch.fmin(input, other, *, out=None) -> Tensor\nComputes the element-wise minimum of \"input\" and \"other\".\nThis is like \"torch.minimum()\" except it handles NaNs differently:\n if exactly one of the two elements being compared is a NaN then the\n non-NaN element is taken as the minimum. Only if both elements are\n NaN is NaN propagated.\nThis function is a wrapper around C++'s \"std::fmin\" and is similar\n to NumPy's \"fmin\" function.\nSupports broadcasting to a common shape, type promotion, and\n integer and floating-point inputs.\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([2.2, float('nan'), 2.1, float('nan')])\n >>> b = torch.tensor([-9.3, 0.1, float('nan'), float('nan')])\n >>> torch.fmin(a, b)\n tensor([-9.3000, 0.1000, 2.1000, nan])\n", "source": "https://pytorch.org/docs/stable/generated/torch.fmin.html", "category": "pytorch docs"} {"text": "torch.min\ntorch.min(input) -> Tensor\nReturns the minimum value of all elements in the \"input\" tensor.\nWarning:\n This function produces deterministic (sub)gradients unlike\n \"min(dim=0)\"\n\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> a = torch.randn(1, 3)\n >>> a\n tensor([[ 0.6750, 1.0857, 1.7197]])\n >>> torch.min(a)\n tensor(0.6750)\n\ntorch.min(input, dim, keepdim=False, *, out=None)\nReturns a namedtuple \"(values, indices)\" where \"values\" is the\n minimum value of each row of the \"input\" tensor in the given\n dimension \"dim\". 
And \"indices\" is the index location of each\n minimum value found (argmin).\nIf \"keepdim\" is \"True\", the output tensors are of the same size as\n \"input\" except in the dimension \"dim\" where they are of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensors having 1 fewer dimension than \"input\".\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.min.html", "category": "pytorch docs"} {"text": "Note:\n If there are multiple minimal values in a reduced row then the\n indices of the first minimal value are returned.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to reduce.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n out (tuple, optional) -- the tuple of two output\n tensors (min, min_indices)\nExample:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[-0.6248, 1.1334, -1.1899, -0.2803],\n [-1.4644, -0.2635, -0.3651, 0.6134],\n [ 0.2457, 0.0384, 1.0128, 0.7015],\n [-0.1153, 2.9849, 2.1458, 0.5788]])\n >>> torch.min(a, 1)\n torch.return_types.min(values=tensor([-1.1899, -1.4644, 0.0384, -0.1153]), indices=tensor([2, 0, 1, 0]))\n\ntorch.min(input, other, *, out=None) -> Tensor\nSee \"torch.minimum()\".", "source": "https://pytorch.org/docs/stable/generated/torch.min.html", "category": "pytorch docs"} {"text": "torch.Tensor.remainder\nTensor.remainder(divisor) -> Tensor\nSee \"torch.remainder()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.remainder.html", "category": "pytorch docs"} {"text": "torch._assert\ntorch._assert(condition, message)\nA wrapper around Python's assert which is symbolically traceable.", "source": "https://pytorch.org/docs/stable/generated/torch._assert.html", "category": "pytorch docs"} {"text": "torch.foreach_log10\ntorch.foreach_log10(self: List[Tensor]) -> None\nApply \"torch.log10()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log10_.html", "category": "pytorch docs"} {"text": "torch.Tensor.argsort\nTensor.argsort(dim=- 1, descending=False) -> LongTensor\nSee \"torch.argsort()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.argsort.html", "category": "pytorch docs"} {"text": "CosineAnnealingWarmRestarts\nclass torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0, T_mult=1, eta_min=0, last_epoch=- 1, verbose=False)\nSet the learning rate of each parameter group using a cosine\n annealing schedule, where \\eta_{max} is set to the initial lr,\n T_{cur} is the number of epochs since the last restart and T_{i} is\n the number of epochs between two warm restarts in SGDR:\n \\eta_t = \\eta_{min} + \\frac{1}{2}(\\eta_{max} -\n \\eta_{min})\\left(1 +\n \\cos\\left(\\frac{T_{cur}}{T_{i}}\\pi\\right)\\right)\n\nWhen T_{cur}=T_{i}, set \\eta_t = \\eta_{min}. When T_{cur}=0 after\n restart, set \\eta_t=\\eta_{max}.\nIt has been proposed in SGDR: Stochastic Gradient Descent with Warm\n Restarts.\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **T_0** (*int*) -- Number of iterations for the first restart.\n\n * **T_mult** (*int**, **optional*) -- A factor increases T_{i}\n after a restart. Default: 1.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingWarmRestarts.html", "category": "pytorch docs"} {"text": "after a restart. 
Default: 1.\n * **eta_min** (*float**, **optional*) -- Minimum learning rate.\n Default: 0.\n\n * **last_epoch** (*int**, **optional*) -- The index of last\n epoch. Default: -1.\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)\n Loads the schedulers state.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.\n\nstep(epoch=None)\n Step could be called after every batch update\n\n -[ Example ]-\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingWarmRestarts.html", "category": "pytorch docs"} {"text": "-[ Example ]-\n >>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)\n >>> iters = len(dataloader)\n >>> for epoch in range(20):\n >>> for i, sample in enumerate(dataloader):\n >>> inputs, labels = sample['inputs'], sample['labels']\n >>> optimizer.zero_grad()\n >>> outputs = net(inputs)\n >>> loss = criterion(outputs, labels)\n >>> loss.backward()\n >>> optimizer.step()\n >>> scheduler.step(epoch + i / iters)\n\n This function can be called in an interleaved way.\n\n -[ Example ]-\n\n >>> scheduler = CosineAnnealingWarmRestarts(optimizer, T_0, T_mult)\n >>> for epoch in range(20):\n >>> scheduler.step()\n >>> scheduler.step(26)\n >>> scheduler.step() # scheduler.step(27), instead of scheduler(20)\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingWarmRestarts.html", "category": "pytorch docs"} {"text": "torch.inverse\ntorch.inverse(input, *, out=None) -> Tensor\nAlias for \"torch.linalg.inv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.inverse.html", "category": "pytorch docs"} {"text": "upsample_bilinear\nclass torch.ao.nn.quantized.functional.upsample_bilinear(input, size=None, scale_factor=None)\nUpsamples the input, using bilinear upsampling.\nWarning:\n This function is deprecated in favor of\n \"torch.nn.quantized.functional.interpolate()\". This is equivalent\n with \"nn.quantized.functional.interpolate(..., mode='bilinear',\n align_corners=True)\".\n\nNote:\n The input quantization parameters propagate to the output.\n\nNote:\n Only 2D inputs are supported\n\nParameters:\n * input (Tensor) -- quantized input\n * **size** (*int** or **Tuple**[**int**, **int**]*) -- output\n spatial size.\n\n * **scale_factor** (*int** or **Tuple**[**int**, **int**]*) --\n multiplier for spatial size\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample_bilinear.html", "category": "pytorch docs"} {"text": "torch.Tensor.diagonal_scatter\nTensor.diagonal_scatter(src, offset=0, dim1=0, dim2=1) -> Tensor\nSee \"torch.diagonal_scatter()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.diagonal_scatter.html", "category": "pytorch docs"} {"text": "torch.unflatten\ntorch.unflatten(input, dim, sizes) -> Tensor\nExpands a dimension of the input tensor over multiple dimensions.\nSee also:\n \"torch.flatten()\" the inverse of this function. 
It coalesces\n several dimensions into one.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- Dimension to be unflattened, specified as\n an index into \"input.shape\".\n\n * **sizes** (*Tuple**[**int**]*) -- New shape of the unflattened\n dimension. One of its elements can be *-1* in which case the\n corresponding output dimension is inferred. Otherwise, the\n product of \"sizes\" *must* equal \"input.shape[dim]\".\n\nReturns:\n A View of input with the specified dimension unflattened.\nExamples::\n >>> torch.unflatten(torch.randn(3, 4, 1), 1, (2, 2)).shape\n torch.Size([3, 2, 2, 1])\n >>> torch.unflatten(torch.randn(3, 4, 1), 1, (-1, 2)).shape\n torch.Size([3, 2, 2, 1])", "source": "https://pytorch.org/docs/stable/generated/torch.unflatten.html", "category": "pytorch docs"} {"text": "torch.Size([3, 2, 2, 1])\n >>> torch.unflatten(torch.randn(5, 12, 3), -1, (2, 2, 3, 1, 1)).shape\n torch.Size([5, 2, 2, 3, 1, 1, 3])", "source": "https://pytorch.org/docs/stable/generated/torch.unflatten.html", "category": "pytorch docs"} {"text": "torch.nn.functional.interpolate\ntorch.nn.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None, antialias=False)\nDown/up samples the input to either the given \"size\" or the given\n \"scale_factor\"\nThe algorithm used for interpolation is determined by \"mode\".\nCurrently temporal, spatial and volumetric sampling are supported,\n i.e. expected inputs are 3-D, 4-D or 5-D in shape.\nThe input dimensions are interpreted in the form: mini-batch x\n channels x [optional depth] x [optional height] x width.\nThe modes available for resizing are: nearest, linear (3D-\n only), bilinear, bicubic (4D-only), trilinear (5D-only),\n area, nearest-exact\nParameters:\n * input (Tensor) -- the input tensor\n * **size** (*int** or **Tuple**[**int**] or **Tuple**[**int**,\n **int**] or **Tuple**[**int**, **int**, **int**]*) -- output\n spatial size.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html", "category": "pytorch docs"} {"text": "spatial size.\n * **scale_factor** (*float** or **Tuple**[**float**]*) --\n multiplier for spatial size. If *scale_factor* is a tuple, its\n length has to match the number of spatial dimensions;\n *input.dim() - 2*.\n\n * **mode** (*str*) -- algorithm used for upsampling: \"'nearest'\"\n | \"'linear'\" | \"'bilinear'\" | \"'bicubic'\" | \"'trilinear'\" |\n \"'area'\" | \"'nearest-exact'\". Default: \"'nearest'\"\n\n * **align_corners** (*bool**, **optional*) -- Geometrically, we\n consider the pixels of the input and output as squares rather\n than points. If set to \"True\", the input and output tensors\n are aligned by the center points of their corner pixels,\n preserving the values at the corner pixels. If set to \"False\",\n the input and output tensors are aligned by the corner points\n of their corner pixels, and the interpolation uses edge value\n padding for out-of-boundary values, making this operation\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html", "category": "pytorch docs"} {"text": "independent of input size when \"scale_factor\" is kept the\n same. This only has an effect when \"mode\" is \"'linear'\",\n \"'bilinear'\", \"'bicubic'\" or \"'trilinear'\". Default: \"False\"\n * **recompute_scale_factor** (*bool**, **optional*) -- recompute\n the scale_factor for use in the interpolation calculation. 
If\n *recompute_scale_factor* is \"True\", then *scale_factor* must\n be passed in and *scale_factor* is used to compute the output\n *size*. The computed output *size* will be used to infer new\n scales for the interpolation. Note that when *scale_factor* is\n floating-point, it may differ from the recomputed\n *scale_factor* due to rounding and precision issues. If\n *recompute_scale_factor* is \"False\", then *size* or\n *scale_factor* will be used directly for interpolation.\n Default: \"None\".\n\n * **antialias** (*bool**, **optional*) -- flag to apply anti-\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html", "category": "pytorch docs"} {"text": "aliasing. Default: \"False\". Using anti-alias option together\n with \"align_corners=False\", interpolation result would match\n Pillow result for downsampling operation. Supported modes:\n \"'bilinear'\", \"'bicubic'\".\nReturn type:\n Tensor\nNote:\n With \"mode='bicubic'\", it's possible to cause overshoot, in other\n words it can produce negative values or values greater than 255\n for images. Explicitly call \"result.clamp(min=0, max=255)\" if you\n want to reduce the overshoot when displaying the image.\n\nNote:\n Mode \"mode='nearest-exact'\" matches Scikit-Image and PIL nearest\n neighbours interpolation algorithms and fixes known issues with\n \"mode='nearest'\". This mode is introduced to keep backward\n compatibility. Mode \"mode='nearest'\" matches buggy OpenCV's\n \"INTER_NEAREST\" interpolation algorithm.\n\nNote:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html", "category": "pytorch docs"} {"text": "information.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.interpolate.html", "category": "pytorch docs"} {"text": "torch.not_equal\ntorch.not_equal(input, other, *, out=None) -> Tensor\nAlias for \"torch.ne()\".", "source": "https://pytorch.org/docs/stable/generated/torch.not_equal.html", "category": "pytorch docs"} {"text": "LPPool2d\nclass torch.nn.LPPool2d(norm_type, kernel_size, stride=None, ceil_mode=False)\nApplies a 2D power-average pooling over an input signal composed of\n several input planes.\nOn each window, the function computed is:\n f(X) = \\sqrt[p]{\\sum_{x \\in X} x^{p}}\n\n\n\nAt p = \\infty, one gets Max Pooling\n\n\nAt p = 1, one gets Sum Pooling (which is proportional to average\n pooling)\n\n\nThe parameters \"kernel_size\", \"stride\" can either be:\n * a single \"int\" -- in which case the same value is used for the\n height and width dimension\n\n * a \"tuple\" of two ints -- in which case, the first *int* is\n used for the height dimension, and the second *int* for the\n width dimension\n\nNote:\n If the sum to the power of *p* is zero, the gradient of this\n function is not defined. This implementation will set the\n gradient to zero in this case.\n\nParameters:\n * kernel_size (Union[int, Tuple[int*,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LPPool2d.html", "category": "pytorch docs"} {"text": "int]*]) -- the size of the window\n * **stride** (*Union**[**int**, **Tuple**[**int**, **int**]**]*)\n -- the stride of the window. 
Default value is \"kernel_size\"\n\n * **ceil_mode** (*bool*) -- when True, will use *ceil* instead\n of *floor* to compute the output shape\n\nShape:\n * Input: (N, C, H_{in}, W_{in})\n * Output: (N, C, H_{out}, W_{out}), where\n\n H_{out} = \\left\\lfloor\\frac{H_{in} -\n \\text{kernel\\_size}[0]}{\\text{stride}[0]} + 1\\right\\rfloor\n\n W_{out} = \\left\\lfloor\\frac{W_{in} -\n \\text{kernel\\_size}[1]}{\\text{stride}[1]} + 1\\right\\rfloor\n\nExamples:\n >>> # power-2 pool of square window of size=3, stride=2\n >>> m = nn.LPPool2d(2, 3, stride=2)\n >>> # pool of non-square window of power 1.2\n >>> m = nn.LPPool2d(1.2, (3, 2), stride=(2, 1))\n >>> input = torch.randn(20, 16, 50, 32)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LPPool2d.html", "category": "pytorch docs"} {"text": "default_histogram_observer\ntorch.quantization.observer.default_histogram_observer\nalias of functools.partial(, quant_min=0,\n quant_max=127){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_histogram_observer.html", "category": "pytorch docs"} {"text": "torch.Tensor.cumsum_\nTensor.cumsum_(dim, dtype=None) -> Tensor\nIn-place version of \"cumsum()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cumsum_.html", "category": "pytorch docs"} {"text": "PixelShuffle\nclass torch.nn.PixelShuffle(upscale_factor)\nRearranges elements in a tensor of shape (, C \\times r^2, H, W) to\n a tensor of shape (, C, H \\times r, W \\times r), where r is an\n upscale factor.\nThis is useful for implementing efficient sub-pixel convolution\n with a stride of 1/r.\nSee the paper: Real-Time Single Image and Video Super-Resolution\n Using an Efficient Sub-Pixel Convolutional Neural Network by Shi\n et. 
al (2016) for more details.\nParameters:\n upscale_factor (int) -- factor to increase spatial\n resolution by\nShape:\n * Input: (*, C_{in}, H_{in}, W_{in}), where * is zero or more\n batch dimensions\n * Output: (*, C_{out}, H_{out}, W_{out}), where\n\n C_{out} = C_{in} \\div \\text{upscale\\_factor}^2\n\n H_{out} = H_{in} \\times \\text{upscale\\_factor}\n\n W_{out} = W_{in} \\times \\text{upscale\\_factor}\n\nExamples:\n >>> pixel_shuffle = nn.PixelShuffle(3)\n >>> input = torch.randn(1, 9, 4, 4)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PixelShuffle.html", "category": "pytorch docs"} {"text": "\n\n\ninput = torch.randn(1, 9, 4, 4)\n >>> output = pixel_shuffle(input)\n >>> print(output.size())\n torch.Size([1, 1, 12, 12])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PixelShuffle.html", "category": "pytorch docs"} {"text": "default_histogram_fake_quant\ntorch.quantization.fake_quantize.default_histogram_fake_quant\nalias of functools.partial(,\n observer=, quant_min=0,\n quant_max=255, dtype=torch.quint8, qscheme=torch.per_tensor_affine,\n reduce_range=True){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_histogram_fake_quant.html", "category": "pytorch docs"} {"text": "torch.Tensor.min\nTensor.min(dim=None, keepdim=False)\nSee \"torch.min()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.min.html", "category": "pytorch docs"} {"text": "Conv3d\nclass torch.ao.nn.qat.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None, device=None, dtype=None)\nA Conv3d module attached with FakeQuantize modules for weight, used\n for quantization aware training.\nWe adopt the same interface as torch.nn.Conv3d, please see https\n ://pytorch.org/docs/stable/nn.html?highlight=conv3d#torch.nn.Conv3d\n for documentation.\nSimilar to torch.nn.Conv3d, with FakeQuantize modules initialized\n to default.\nVariables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.qat.Conv3d.html", "category": "pytorch docs"} {"text": "PoissonNLLLoss\nclass torch.nn.PoissonNLLLoss(log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean')\nNegative log likelihood loss with Poisson distribution of target.\nThe loss can be described as:\n \\text{target} \\sim \\mathrm{Poisson}(\\text{input})\n \\text{loss}(\\text{input}, \\text{target}) = \\text{input} -\n \\text{target} * \\log(\\text{input}) +\n \\log(\\text{target!})\n\nThe last term can be omitted or approximated with Stirling formula.\n The approximation is used for target values more than 1. For\n targets less or equal to 1 zeros are added to the loss.\nParameters:\n * log_input (bool, optional) -- if \"True\" the loss is\n computed as \\exp(\\text{input}) - \\text{target}\\text{input},\n if \"False\" the loss is \\text{input} -\n \\text{target}\\log(\\text{input}+\\text{eps}).\n * **full** (*bool**, **optional*) --\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PoissonNLLLoss.html", "category": "pytorch docs"} {"text": "\n\nfull (bool, optional) --\nwhether to compute full loss, i. e. to add the Stirling\napproximation term\n\n \\text{target}*\\log(\\text{target}) - \\text{target} + 0.5 *\n \\log(2\\pi\\text{target}).\n\n\n\nsize_average (bool, optional) -- Deprecated (see\n \"reduction\"). 
By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n\n\neps (float, optional) -- Small value to avoid\n evaluation of \\log(0) when \"log_input = False\". Default: 1e-8\n\n\nreduce (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PoissonNLLLoss.html", "category": "pytorch docs"} {"text": "\"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n\nExamples:\n >>> loss = nn.PoissonNLLLoss()\n >>> log_input = torch.randn(5, 2, requires_grad=True)\n >>> target = torch.randn(5, 2)\n >>> output = loss(log_input, target)\n >>> output.backward()\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Target: (*), same shape as the input.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PoissonNLLLoss.html", "category": "pytorch docs"} {"text": "\n\nTarget: (*), same shape as the input.\n\nOutput: scalar by default. If \"reduction\" is \"'none'\", then\n (*), the same shape as the input.\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PoissonNLLLoss.html", "category": "pytorch docs"} {"text": "torch._foreach_acos\ntorch._foreach_acos(self: List[Tensor]) -> List[Tensor]\nApply \"torch.acos()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_acos.html", "category": "pytorch docs"} {"text": "torch.bincount\ntorch.bincount(input, weights=None, minlength=0) -> Tensor\nCount the frequency of each value in an array of non-negative ints.\nThe number of bins (size 1) is one larger than the largest value in\n \"input\" unless \"input\" is empty, in which case the result is a\n tensor of size 0. If \"minlength\" is specified, the number of bins\n is at least \"minlength\" and if \"input\" is empty, then the result is\n tensor of size \"minlength\" filled with zeros. If \"n\" is the value\n at position \"i\", \"out[n] += weights[i]\" if \"weights\" is specified\n else \"out[n] += 1\".\nNote:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n information.\n\nParameters:\n * input (Tensor) -- 1-d int tensor\n * **weights** (*Tensor*) -- optional, weight for each value in\n the input tensor. 
Should be of same size as input tensor.\n", "source": "https://pytorch.org/docs/stable/generated/torch.bincount.html", "category": "pytorch docs"} {"text": "\nminlength (int) -- optional, minimum number of bins.\n Should be non-negative.\n\nReturns:\n a tensor of shape \"Size([max(input) + 1])\" if \"input\" is non-\n empty, else \"Size(0)\"\nReturn type:\n output (Tensor)\nExample:\n >>> input = torch.randint(0, 8, (5,), dtype=torch.int64)\n >>> weights = torch.linspace(0, 1, steps=5)\n >>> input, weights\n (tensor([4, 3, 6, 3, 4]),\n tensor([ 0.0000, 0.2500, 0.5000, 0.7500, 1.0000])\n\n >>> torch.bincount(input)\n tensor([0, 0, 0, 2, 2, 0, 1])\n\n >>> input.bincount(weights)\n tensor([0.0000, 0.0000, 0.0000, 1.0000, 1.0000, 0.0000, 0.5000])\n", "source": "https://pytorch.org/docs/stable/generated/torch.bincount.html", "category": "pytorch docs"} {"text": "torch.tril\ntorch.tril(input, diagonal=0, *, out=None) -> Tensor\nReturns the lower triangular part of the matrix (2-D tensor) or\n batch of matrices \"input\", the other elements of the result tensor\n \"out\" are set to 0.\nThe lower triangular part of the matrix is defined as the elements\n on and below the diagonal.\nThe argument \"diagonal\" controls which diagonal to consider. If\n \"diagonal\" = 0, all elements on and below the main diagonal are\n retained. A positive value includes just as many diagonals above\n the main diagonal, and similarly a negative value excludes just as\n many diagonals below the main diagonal. The main diagonal are the\n set of indices \\lbrace (i, i) \\rbrace for i \\in [0, \\min{d_{1},\n d_{2}} - 1] where d_{1}, d_{2} are the dimensions of the matrix.\nParameters:\n * input (Tensor) -- the input tensor.\n * **diagonal** (*int**, **optional*) -- the diagonal to consider\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.tril.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(3, 3)\n >>> a\n tensor([[-1.0813, -0.8619, 0.7105],\n [ 0.0935, 0.1380, 2.2112],\n [-0.3409, -0.9828, 0.0289]])\n >>> torch.tril(a)\n tensor([[-1.0813, 0.0000, 0.0000],\n [ 0.0935, 0.1380, 0.0000],\n [-0.3409, -0.9828, 0.0289]])\n\n >>> b = torch.randn(4, 6)\n >>> b\n tensor([[ 1.2219, 0.5653, -0.2521, -0.2345, 1.2544, 0.3461],\n [ 0.4785, -0.4477, 0.6049, 0.6368, 0.8775, 0.7145],\n [ 1.1502, 3.2716, -1.1243, -0.5413, 0.3615, 0.6864],\n [-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0978]])\n >>> torch.tril(b, diagonal=1)\n tensor([[ 1.2219, 0.5653, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.4785, -0.4477, 0.6049, 0.0000, 0.0000, 0.0000],\n [ 1.1502, 3.2716, -1.1243, -0.5413, 0.0000, 0.0000],\n", "source": "https://pytorch.org/docs/stable/generated/torch.tril.html", "category": "pytorch docs"} {"text": "[-0.0614, -0.7344, -1.3164, -0.7648, -1.4024, 0.0000]])\n >>> torch.tril(b, diagonal=-1)\n tensor([[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.4785, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 1.1502, 3.2716, 0.0000, 0.0000, 0.0000, 0.0000],\n [-0.0614, -0.7344, -1.3164, 0.0000, 0.0000, 0.0000]])", "source": "https://pytorch.org/docs/stable/generated/torch.tril.html", "category": "pytorch docs"} {"text": "torch._foreach_expm1\ntorch._foreach_expm1(self: List[Tensor]) -> List[Tensor]\nApply \"torch.expm1()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_expm1.html", "category": "pytorch docs"} {"text": 
"torch.cuda.max_memory_allocated\ntorch.cuda.max_memory_allocated(device=None)\nReturns the maximum GPU memory occupied by tensors in bytes for a\n given device.\nBy default, this returns the peak allocated memory since the\n beginning of this program. \"reset_peak_memory_stats()\" can be used\n to reset the starting point in tracking this metric. For example,\n these two functions can measure the peak allocated memory usage of\n each iteration in a training loop.\nParameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\nReturn type:\n int\nNote:\n See Memory management for more details about GPU memory\n management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_allocated.html", "category": "pytorch docs"} {"text": "torch.cuda.caching_allocator_delete\ntorch.cuda.caching_allocator_delete(mem_ptr)\nDeletes memory allocated using the CUDA memory allocator.\nMemory allocated with \"caching_allocator_alloc()\". is freed here.\n The associated device and stream are tracked inside the allocator.\nParameters:\n mem_ptr (int) -- memory address to be freed by the\n allocator.\nNote:\n See Memory management for more details about GPU memory\n management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.caching_allocator_delete.html", "category": "pytorch docs"} {"text": "torch.Tensor.multinomial\nTensor.multinomial(num_samples, replacement=False, *, generator=None) -> Tensor\nSee \"torch.multinomial()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.multinomial.html", "category": "pytorch docs"} {"text": "torch.foreach_zero\ntorch.foreach_zero(self: List[Tensor]) -> None\nApply \"torch.zero()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_zero_.html", "category": "pytorch docs"} {"text": "torch.Tensor.round_\nTensor.round_(decimals=0) -> Tensor\nIn-place version of \"round()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.round_.html", "category": "pytorch docs"} {"text": "torch.Tensor.msort\nTensor.msort() -> Tensor\nSee \"torch.msort()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.msort.html", "category": "pytorch docs"} {"text": "torch.Tensor.resolve_conj\nTensor.resolve_conj() -> Tensor\nSee \"torch.resolve_conj()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.resolve_conj.html", "category": "pytorch docs"} {"text": "LazyConvTranspose1d\nclass torch.nn.LazyConvTranspose1d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\nA \"torch.nn.ConvTranspose1d\" module with lazy initialization of the\n \"in_channels\" argument of the \"ConvTranspose1d\" that is inferred\n from the \"input.size(1)\". The attributes that will be lazily\n initialized are weight and bias.\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * out_channels (int) -- Number of channels produced by the\n convolution\n * **kernel_size** (*int** or **tuple*) -- Size of the convolving\n kernel\n\n * **stride** (*int** or **tuple**, **optional*) -- Stride of the\n convolution. 
Default: 1\n\n * **padding** (*int** or **tuple**, **optional*) -- \"dilation *\n (kernel_size - 1) - padding\" zero-padding will be added to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose1d.html", "category": "pytorch docs"} {"text": "both sides of the input. Default: 0\n * **output_padding** (*int** or **tuple**, **optional*) --\n Additional size added to one side of the output shape.\n Default: 0\n\n * **groups** (*int**, **optional*) -- Number of blocked\n connections from input channels to output channels. Default: 1\n\n * **bias** (*bool**, **optional*) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n\n * **dilation** (*int** or **tuple**, **optional*) -- Spacing\n between kernel elements. Default: 1\n\nSee also:\n \"torch.nn.ConvTranspose1d\" and\n \"torch.nn.modules.lazy.LazyModuleMixin\"\n\ncls_to_become\n alias of \"ConvTranspose1d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose1d.html", "category": "pytorch docs"} {"text": "torch.Tensor.erfinv_\nTensor.erfinv_() -> Tensor\nIn-place version of \"erfinv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.erfinv_.html", "category": "pytorch docs"} {"text": "torch.Tensor.rot90\nTensor.rot90(k, dims) -> Tensor\nSee \"torch.rot90()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.rot90.html", "category": "pytorch docs"} {"text": "torch.Tensor.tril_\nTensor.tril_(diagonal=0) -> Tensor\nIn-place version of \"tril()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tril_.html", "category": "pytorch docs"} {"text": "torch.Tensor.float\nTensor.float(memory_format=torch.preserve_format) -> Tensor\n\"self.float()\" is equivalent to \"self.to(torch.float32)\". See\n \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.float.html", "category": "pytorch docs"} {"text": "Embedding\nclass torch.nn.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None, _freeze=False, device=None, dtype=None)\nA simple lookup table that stores embeddings of a fixed dictionary\n and size.\nThis module is often used to store word embeddings and retrieve\n them using indices. The input to the module is a list of indices,\n and the output is the corresponding word embeddings.\nParameters:\n * num_embeddings (int) -- size of the dictionary of\n embeddings\n * **embedding_dim** (*int*) -- the size of each embedding vector\n\n * **padding_idx** (*int**, **optional*) -- If specified, the\n entries at \"padding_idx\" do not contribute to the gradient;\n therefore, the embedding vector at \"padding_idx\" is not\n updated during training, i.e. it remains as a fixed \"pad\". For\n a newly constructed Embedding, the embedding vector at\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"} {"text": "\"padding_idx\" will default to all zeros, but can be updated to\n another value to be used as the padding vector.\n * **max_norm** (*float**, **optional*) -- If given, each\n embedding vector with norm larger than \"max_norm\" is\n renormalized to have norm \"max_norm\".\n\n * **norm_type** (*float**, **optional*) -- The p of the p-norm\n to compute for the \"max_norm\" option. 
Default \"2\".\n\n * **scale_grad_by_freq** (*bool**, **optional*) -- If given,\n this will scale gradients by the inverse of frequency of the\n words in the mini-batch. Default \"False\".\n\n * **sparse** (*bool**, **optional*) -- If \"True\", gradient\n w.r.t. \"weight\" matrix will be a sparse tensor. See Notes for\n more details regarding sparse gradients.\n\nVariables:\n weight (Tensor) -- the learnable weights of the module of\n shape (num_embeddings, embedding_dim) initialized from\n \\mathcal{N}(0, 1)\nShape:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"} {"text": "\\mathcal{N}(0, 1)\nShape:\n * Input: (*), IntTensor or LongTensor of arbitrary shape\n containing the indices to extract\n * Output: (*, H), where *** is the input shape and\n H=\\text{embedding\\_dim}\n\nNote:\n Keep in mind that only a limited number of optimizers support\n sparse gradients: currently it's \"optim.SGD\" (*CUDA* and *CPU*),\n \"optim.SparseAdam\" (*CUDA* and *CPU*) and \"optim.Adagrad\" (*CPU*)\n\nNote:\n When \"max_norm\" is not \"None\", \"Embedding\"'s forward method will\n modify the \"weight\" tensor in-place. Since tensors needed for\n gradient computations cannot be modified in-place, performing a\n differentiable operation on \"Embedding.weight\" before calling\n \"Embedding\"'s forward method requires cloning \"Embedding.weight\"\n when \"max_norm\" is not \"None\". For example:\n\n n, d, m = 3, 5, 7\n embedding = nn.Embedding(n, d, max_norm=True)\n W = torch.randn((m, d), requires_grad=True)\n idx = torch.tensor([1, 2])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"} {"text": "idx = torch.tensor([1, 2])\n a = embedding.weight.clone() @ W.t() # weight must be cloned for this to be differentiable\n b = embedding(idx) @ W.t() # modifies weight in-place\n out = (a.unsqueeze(0) + b.unsqueeze(1))\n loss = out.sigmoid().prod()\n loss.backward()\nExamples:\n >>> # an Embedding module containing 10 tensors of size 3\n >>> embedding = nn.Embedding(10, 3)\n >>> # a batch of 2 samples of 4 indices each\n >>> input = torch.LongTensor([[1, 2, 4, 5], [4, 3, 2, 9]])\n >>> embedding(input)\n tensor([[[-0.0251, -1.6902, 0.7172],\n [-0.6431, 0.0748, 0.6969],\n [ 1.4970, 1.3448, -0.9685],\n [-0.3677, -2.7265, -0.1685]],\n\n [[ 1.4970, 1.3448, -0.9685],\n [ 0.4362, -0.4004, 0.9400],\n [-0.6431, 0.0748, 0.6969],\n [ 0.9124, -2.3616, 1.1151]]])\n\n\n >>> # example with padding_idx\n >>> embedding = nn.Embedding(10, 3, padding_idx=0)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"} {"text": "\n\n\ninput = torch.LongTensor([[0, 2, 0, 5]])\n >>> embedding(input)\n tensor([[[ 0.0000, 0.0000, 0.0000],\n [ 0.1535, -2.0309, 0.9315],\n [ 0.0000, 0.0000, 0.0000],\n [-0.1655, 0.9897, 0.0635]]])\n\n\n\n >>> # example of changing `pad` vector\n >>> padding_idx = 0\n >>> embedding = nn.Embedding(3, 3, padding_idx=padding_idx)\n >>> embedding.weight\n Parameter containing:\n tensor([[ 0.0000, 0.0000, 0.0000],\n [-0.7895, -0.7089, -0.0364],\n [ 0.6778, 0.5803, 0.2678]], requires_grad=True)\n >>> with torch.no_grad():\n ... 
embedding.weight[padding_idx] = torch.ones(3)\n >>> embedding.weight\n Parameter containing:\n tensor([[ 1.0000, 1.0000, 1.0000],\n [-0.7895, -0.7089, -0.0364],\n [ 0.6778, 0.5803, 0.2678]], requires_grad=True)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"} {"text": "classmethod from_pretrained(embeddings, freeze=True, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)\n Creates Embedding instance from given 2-dimensional FloatTensor.\n\n Parameters:\n * **embeddings** (*Tensor*) -- FloatTensor containing weights\n for the Embedding. First dimension is being passed to\n Embedding as \"num_embeddings\", second as \"embedding_dim\".\n\n * **freeze** (*bool**, **optional*) -- If \"True\", the tensor\n does not get updated in the learning process. Equivalent to\n \"embedding.weight.requires_grad = False\". Default: \"True\"\n\n * **padding_idx** (*int**, **optional*) -- If specified, the\n entries at \"padding_idx\" do not contribute to the gradient;\n therefore, the embedding vector at \"padding_idx\" is not\n updated during training, i.e. it remains as a fixed \"pad\".\n\n * **max_norm** (*float**, **optional*) -- See module\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"} {"text": "initialization documentation.\n * **norm_type** (*float**, **optional*) -- See module\n initialization documentation. Default \"2\".\n\n * **scale_grad_by_freq** (*bool**, **optional*) -- See module\n initialization documentation. Default \"False\".\n\n * **sparse** (*bool**, **optional*) -- See module\n initialization documentation.\n\n Examples:\n\n >>> # FloatTensor containing pretrained weights\n >>> weight = torch.FloatTensor([[1, 2.3, 3], [4, 5.1, 6.3]])\n >>> embedding = nn.Embedding.from_pretrained(weight)\n >>> # Get embeddings for index 1\n >>> input = torch.LongTensor([1])\n >>> embedding(input)\n tensor([[ 4.0000, 5.1000, 6.3000]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Embedding.html", "category": "pytorch docs"} {"text": "torch.Tensor.amax\nTensor.amax(dim=None, keepdim=False) -> Tensor\nSee \"torch.amax()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.amax.html", "category": "pytorch docs"} {"text": "torch.sparse_csc_tensor\ntorch.sparse_csc_tensor(ccol_indices, row_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor\nConstructs a sparse tensor in CSC (Compressed Sparse Column) with\n specified values at the given \"ccol_indices\" and \"row_indices\".\n Sparse matrix multiplication operations in CSC format are typically\n faster than that for sparse tensors in COO format. Make you have a\n look at the note on the data type of the indices.\nNote:\n If the \"device\" argument is not specified the device of the given\n \"values\" and indices tensor(s) must match. If, however, the\n argument is specified the input Tensors will be converted to the\n given device and in turn determine the device of the constructed\n sparse tensor.\n\nParameters:\n * ccol_indices (array_like) -- (B+1)-dimensional array of\n size \"(*batchsize, ncols + 1)\". The last element of each", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csc_tensor.html", "category": "pytorch docs"} {"text": "batch is the number of non-zeros. This tensor encodes the\n index in values and row_indices depending on where the given\n column starts. 
Each successive number in the tensor subtracted\n by the number before it denotes the number of elements in a\n given column.\n * **row_indices** (*array_like*) -- Row co-ordinates of each\n element in values. (B+1)-dimensional tensor with the same\n length as values.\n\n * **values** (*array_list*) -- Initial values for the tensor.\n Can be a list, tuple, NumPy \"ndarray\", scalar, and other types\n that represents a (1+K)-dimensional tensor where \"K\" is the\n number of dense dimensions.\n\n * **size** (list, tuple, \"torch.Size\", optional) -- Size of the\n sparse tensor: \"(*batchsize, nrows, ncols, *densesize)\". If\n not provided, the size will be inferred as the minimum size\n big enough to hold all non-zero elements.\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csc_tensor.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if None, infers data type from\n \"values\".\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **check_invariants** (*bool**, **optional*) -- If sparse\n tensor invariants are checked. Default: as returned by\n \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\",\n initially False.\n\nExample::\n >>> ccol_indices = [0, 2, 4]\n >>> row_indices = [0, 1, 0, 1]\n >>> values = [1, 2, 3, 4]", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csc_tensor.html", "category": "pytorch docs"} {"text": "\n\n\nvalues = [1, 2, 3, 4]\n >>> torch.sparse_csc_tensor(torch.tensor(ccol_indices, dtype=torch.int64),\n ... torch.tensor(row_indices, dtype=torch.int64),\n ... torch.tensor(values), dtype=torch.double)\n tensor(ccol_indices=tensor([0, 2, 4]),\n row_indices=tensor([0, 1, 0, 1]),\n values=tensor([1., 2., 3., 4.]), size=(2, 2), nnz=4,\n dtype=torch.float64, layout=torch.sparse_csc)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csc_tensor.html", "category": "pytorch docs"} {"text": "torch.Tensor.trace\nTensor.trace() -> Tensor\nSee \"torch.trace()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.trace.html", "category": "pytorch docs"} {"text": "torch.cuda.memory_summary\ntorch.cuda.memory_summary(device=None, abbreviated=False)\nReturns a human-readable printout of the current memory allocator\n statistics for a given device.\nThis can be useful to display periodically during training, or when\n handling out-of-memory exceptions.\nParameters:\n * device (torch.device or int, optional) --\n selected device. 
Returns printout for the current device,\n given by \"current_device()\", if \"device\" is \"None\" (default).\n * **abbreviated** (*bool**, **optional*) -- whether to return an\n abbreviated summary (default: False).\n\nReturn type:\n str\nNote:\n See Memory management for more details about GPU memory\n management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_summary.html", "category": "pytorch docs"} {"text": "torch.diff\ntorch.diff(input, n=1, dim=- 1, prepend=None, append=None) -> Tensor\nComputes the n-th forward difference along the given dimension.\nThe first-order differences are given by out[i] = input[i + 1] -\n input[i]. Higher-order differences are calculated by using\n \"torch.diff()\" recursively.\nParameters:\n * input (Tensor) -- the tensor to compute the differences\n on\n * **n** (*int**, **optional*) -- the number of times to\n recursively compute the difference\n\n * **dim** (*int**, **optional*) -- the dimension to compute the\n difference along. Default is the last dimension.\n\n * **prepend** (*Tensor**, **optional*) -- values to prepend or\n append to \"input\" along \"dim\" before computing the difference.\n Their dimensions must be equivalent to that of input, and\n their shapes must match input's shape except on \"dim\".\n\n * **append** (*Tensor**, **optional*) -- values to prepend or\n", "source": "https://pytorch.org/docs/stable/generated/torch.diff.html", "category": "pytorch docs"} {"text": "append to \"input\" along \"dim\" before computing the difference.\n Their dimensions must be equivalent to that of input, and\n their shapes must match input's shape except on \"dim\".\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([1, 3, 2])\n >>> torch.diff(a)\n tensor([ 2, -1])\n >>> b = torch.tensor([4, 5])\n >>> torch.diff(a, append=b)\n tensor([ 2, -1, 2, 1])\n >>> c = torch.tensor([[1, 2, 3], [3, 4, 5]])\n >>> torch.diff(c, dim=0)\n tensor([[2, 2, 2]])\n >>> torch.diff(c, dim=1)\n tensor([[1, 1],\n [1, 1]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.diff.html", "category": "pytorch docs"} {"text": "torch.Tensor.eq_\nTensor.eq_(other) -> Tensor\nIn-place version of \"eq()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.eq_.html", "category": "pytorch docs"} {"text": "torch.addbmm\ntorch.addbmm(input, batch1, batch2, *, beta=1, alpha=1, out=None) -> Tensor\nPerforms a batch matrix-matrix product of matrices stored in\n \"batch1\" and \"batch2\", with a reduced add step (all matrix\n multiplications get accumulated along the first dimension). 
\"input\"\n is added to the final result.\n\"batch1\" and \"batch2\" must be 3-D tensors each containing the same\n number of matrices.\nIf \"batch1\" is a (b \\times n \\times m) tensor, \"batch2\" is a (b\n \\times m \\times p) tensor, \"input\" must be broadcastable with a (n\n \\times p) tensor and \"out\" will be a (n \\times p) tensor.\n out = \\beta\\ \\text{input} + \\alpha\\ (\\sum_{i=0}^{b-1}\n \\text{batch1}_i \\mathbin{@} \\text{batch2}_i)\n\nIf \"beta\" is 0, then \"input\" will be ignored, and nan and inf\n in it will not be propagated.\nFor inputs of type FloatTensor or DoubleTensor, arguments\n \"beta\" and \"alpha\" must be real numbers, otherwise they should be\n integers.", "source": "https://pytorch.org/docs/stable/generated/torch.addbmm.html", "category": "pytorch docs"} {"text": "integers.\nThis operator supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\nParameters:\n * batch1 (Tensor) -- the first batch of matrices to be\n multiplied\n * **batch2** (*Tensor*) -- the second batch of matrices to be\n multiplied\n\nKeyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * **input** (*Tensor*) -- matrix to be added\n\n * **alpha** (*Number**, **optional*) -- multiplier for *batch1 @\n batch2* (\\alpha)\n\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> M = torch.randn(3, 5)\n >>> batch1 = torch.randn(10, 3, 4)\n >>> batch2 = torch.randn(10, 4, 5)\n >>> torch.addbmm(M, batch1, batch2)\n tensor([[ 6.6311, 0.0503, 6.9768, -12.0362, -2.1653],\n [ -4.8185, -1.4255, -6.6760, 8.9453, 2.5743],\n", "source": "https://pytorch.org/docs/stable/generated/torch.addbmm.html", "category": "pytorch docs"} {"text": "[ -3.8202, 4.3691, 1.0943, -1.1109, 5.4730]])", "source": "https://pytorch.org/docs/stable/generated/torch.addbmm.html", "category": "pytorch docs"} {"text": "ScriptModule\nclass torch.jit.ScriptModule\nA wrapper around C++ \"torch::jit::Module\". \"ScriptModule\"s contain\n methods, attributes, parameters, and constants. These can be\n accessed the same way as on a normal \"nn.Module\".\nadd_module(name, module)\n Adds a child module to the current module.\n\n The module can be accessed as an attribute using the given name.\n\n Parameters:\n * **name** (*str*) -- name of the child module. The child\n module can be accessed from this module using the given\n name\n\n * **module** (*Module*) -- child module to be added to the\n module.\n\napply(fn)\n Applies \"fn\" recursively to every submodule (as returned by\n \".children()\") as well as self. 
Typical use includes\n initializing the parameters of a model (see also torch.nn.init).\n\n Parameters:\n **fn** (\"Module\" -> None) -- function to be applied to each\n submodule\n\n Returns:\n self\n\n Return type:\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "Returns:\n self\n Return type:\n Module\n\n Example:\n\n >>> @torch.no_grad()\n >>> def init_weights(m):\n >>> print(m)\n >>> if type(m) == nn.Linear:\n >>> m.weight.fill_(1.0)\n >>> print(m.weight)\n >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))\n >>> net.apply(init_weights)\n Linear(in_features=2, out_features=2, bias=True)\n Parameter containing:\n tensor([[1., 1.],\n [1., 1.]], requires_grad=True)\n Linear(in_features=2, out_features=2, bias=True)\n Parameter containing:\n tensor([[1., 1.],\n [1., 1.]], requires_grad=True)\n Sequential(\n (0): Linear(in_features=2, out_features=2, bias=True)\n (1): Linear(in_features=2, out_features=2, bias=True)\n )\n\nbfloat16()\n Casts all floating point parameters and buffers to \"bfloat16\"\n datatype.\n\n Note:\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "datatype.\n Note:\n\n This method modifies the module in-place.\n\n Returns:\n self\n\n Return type:\n Module\n\nbuffers(recurse=True)\n Returns an iterator over module buffers.\n\n Parameters:\n **recurse** (*bool*) -- if True, then yields buffers of this\n module and all submodules. Otherwise, yields only buffers\n that are direct members of this module.\n\n Yields:\n *torch.Tensor* -- module buffer\n\n Return type:\n *Iterator*[*Tensor*]\n\n Example:\n\n >>> for buf in model.buffers():\n >>> print(type(buf), buf.size())\n (20L,)\n (20L, 1L, 5L, 5L)\n\nchildren()\n Returns an iterator over immediate children modules.\n\n Yields:\n *Module* -- a child module\n\n Return type:\n *Iterator*[*Module*]\n\nproperty code\n Returns a pretty-printed representation (as valid Python syntax)\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "of the internal graph for the \"forward\" method. See Inspecting\n Code for details.\nproperty code_with_constants\n Returns a tuple of:\n\n [0] a pretty-printed representation (as valid Python syntax) of\n the internal graph for the \"forward\" method. See *code*. [1] a\n ConstMap following the CONSTANT.cN format of the output in [0].\n The indices in the [0] output are keys to the underlying\n constant's values.\n\n See Inspecting Code for details.\n\ncpu()\n Moves all model parameters and buffers to the CPU.\n\n Note:\n\n This method modifies the module in-place.\n\n Returns:\n self\n\n Return type:\n Module\n\ncuda(device=None)\n Moves all model parameters and buffers to the GPU.\n\n This also makes associated parameters and buffers different\n objects. 
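For instance, a minimal sketch (assuming a CUDA device is available; the module and optimizer names are illustrative) of moving a ScriptModule to the GPU before its optimizer is constructed:

    import torch
    import torch.nn as nn

    model = torch.jit.script(nn.Linear(4, 2))     # a small ScriptModule
    if torch.cuda.is_available():
        model.cuda()                              # move parameters/buffers to the GPU first
    # build the optimizer afterwards so it references the GPU copies of the parameters
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1)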
So it should be called before constructing optimizer if\n the module will live on GPU while being optimized.\n\n Note:\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "Note:\n This method modifies the module in-place.\n\n Parameters:\n **device** (*int**, **optional*) -- if specified, all\n parameters will be copied to that device\n\n Returns:\n self\n\n Return type:\n Module\n\ndouble()\n Casts all floating point parameters and buffers to \"double\"\n datatype.\n\n Note:\n\n This method modifies the module in-place.\n\n Returns:\n self\n\n Return type:\n Module\n\neval()\n Sets the module in evaluation mode.\n\n This has any effect only on certain modules. See documentations\n of particular modules for details of their behaviors in\n training/evaluation mode, if they are affected, e.g. \"Dropout\",\n \"BatchNorm\", etc.\n\n This is equivalent with \"self.train(False)\".\n\n See Locally disabling gradient computation for a comparison\n between *.eval()* and several similar mechanisms that may be\n confused with it.\n\n Returns:\n self\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "Returns:\n self\n Return type:\n Module\n\nextra_repr()\n Set the extra representation of the module\n\n To print customized extra information, you should re-implement\n this method in your own modules. Both single-line and multi-line\n strings are acceptable.\n\n Return type:\n str\n\nfloat()\n Casts all floating point parameters and buffers to \"float\"\n datatype.\n\n Note:\n\n This method modifies the module in-place.\n\n Returns:\n self\n\n Return type:\n Module\n\nget_buffer(target)\n Returns the buffer given by \"target\" if it exists, otherwise\n throws an error.\n\n See the docstring for \"get_submodule\" for a more detailed\n explanation of this method's functionality as well as how to\n correctly specify \"target\".\n\n Parameters:\n **target** (*str*) -- The fully-qualified string name of the\n buffer to look for. (See \"get_submodule\" for how to specify a\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "fully-qualified string.)\n Returns:\n The buffer referenced by \"target\"\n\n Return type:\n torch.Tensor\n\n Raises:\n **AttributeError** -- If the target string references an\n invalid path or resolves to something that is not a\n buffer\n\nget_extra_state()\n Returns any extra state to include in the module's state_dict.\n Implement this and a corresponding \"set_extra_state()\" for your\n module if you need to store extra state. This function is called\n when building the module's *state_dict()*.\n\n Note that extra state should be picklable to ensure working\n serialization of the state_dict. 
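As an illustration, a hypothetical module (not taken from the documentation) that round-trips a plain, picklable Python value through this pair of hooks:

    import torch.nn as nn

    class CounterModule(nn.Module):
        def __init__(self):
            super().__init__()
            self.steps_seen = 0                      # plain Python state, not a Parameter or buffer

        def get_extra_state(self):
            return {'steps_seen': self.steps_seen}   # picklable, stored via state_dict()

        def set_extra_state(self, state):
            self.steps_seen = state['steps_seen']    # restored by load_state_dict()

    src, dst = CounterModule(), CounterModule()
    src.steps_seen = 7
    dst.load_state_dict(src.state_dict())            # dst.steps_seen is now 7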
We only provide provide\n backwards compatibility guarantees for serializing Tensors;\n other objects may break backwards compatibility if their\n serialized pickled form changes.\n\n Returns:\n Any extra state to store in the module's state_dict\n\n Return type:\n object\n\nget_parameter(target)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "object\nget_parameter(target)\n Returns the parameter given by \"target\" if it exists, otherwise\n throws an error.\n\n See the docstring for \"get_submodule\" for a more detailed\n explanation of this method's functionality as well as how to\n correctly specify \"target\".\n\n Parameters:\n **target** (*str*) -- The fully-qualified string name of the\n Parameter to look for. (See \"get_submodule\" for how to\n specify a fully-qualified string.)\n\n Returns:\n The Parameter referenced by \"target\"\n\n Return type:\n torch.nn.Parameter\n\n Raises:\n **AttributeError** -- If the target string references an\n invalid path or resolves to something that is not an\n \"nn.Parameter\"\n\nget_submodule(target)\n Returns the submodule given by \"target\" if it exists, otherwise\n throws an error.\n\n For example, let's say you have an \"nn.Module\" \"A\" that looks\n like this:\n\n A(\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "like this:\n A(\n (net_b): Module(\n (net_c): Module(\n (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))\n )\n (linear): Linear(in_features=100, out_features=200, bias=True)\n )\n )\n\n (The diagram shows an \"nn.Module\" \"A\". \"A\" has a nested\n submodule \"net_b\", which itself has two submodules \"net_c\" and\n \"linear\". \"net_c\" then has a submodule \"conv\".)\n\n To check whether or not we have the \"linear\" submodule, we would\n call \"get_submodule(\"net_b.linear\")\". To check whether we have\n the \"conv\" submodule, we would call\n \"get_submodule(\"net_b.net_c.conv\")\".\n\n The runtime of \"get_submodule\" is bounded by the degree of\n module nesting in \"target\". A query against \"named_modules\"\n achieves the same result, but it is O(N) in the number of\n transitive modules. So, for a simple check to see if some\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "submodule exists, \"get_submodule\" should always be used.\n Parameters:\n **target** (*str*) -- The fully-qualified string name of the\n submodule to look for. (See above example for how to specify\n a fully-qualified string.)\n\n Returns:\n The submodule referenced by \"target\"\n\n Return type:\n torch.nn.Module\n\n Raises:\n **AttributeError** -- If the target string references an\n invalid path or resolves to something that is not an\n \"nn.Module\"\n\nproperty graph\n Returns a string representation of the internal graph for the\n \"forward\" method. See Interpreting Graphs for details.\n\nhalf()\n Casts all floating point parameters and buffers to \"half\"\n datatype.\n\n Note:\n\n This method modifies the module in-place.\n\n Returns:\n self\n\n Return type:\n Module\n\nproperty inlined_graph\n Returns a string representation of the internal graph for the\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "\"forward\" method. This graph will be preprocessed to inline all\n function and method calls. 
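As a quick illustration, a minimal sketch (the module name is illustrative) that scripts a tiny module and prints these representations:

    import torch
    import torch.nn as nn

    class AddRelu(nn.Module):
        def forward(self, x):
            return torch.relu(x) + 1

    sm = torch.jit.script(AddRelu())   # produces a ScriptModule
    print(sm.code)                     # pretty-printed TorchScript source of forward
    print(sm.graph)                    # internal IR graph
    print(sm.inlined_graph)            # the same graph with calls inlined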
See Interpreting Graphs for details.\nipu(device=None)\n Moves all model parameters and buffers to the IPU.\n\n This also makes associated parameters and buffers different\n objects. So it should be called before constructing optimizer if\n the module will live on IPU while being optimized.\n\n Note:\n\n This method modifies the module in-place.\n\n Parameters:\n **device** (*int**, **optional*) -- if specified, all\n parameters will be copied to that device\n\n Returns:\n self\n\n Return type:\n Module\n\nload_state_dict(state_dict, strict=True)\n Copies parameters and buffers from \"state_dict\" into this module\n and its descendants. If \"strict\" is \"True\", then the keys of\n \"state_dict\" must exactly match the keys returned by this\n module's \"state_dict()\" function.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "Parameters:\n * state_dict (dict) -- a dict containing parameters and\n persistent buffers.\n * **strict** (*bool**, **optional*) -- whether to strictly\n enforce that the keys in \"state_dict\" match the keys\n returned by this module's \"state_dict()\" function. Default:\n \"True\"\n\n Returns:\n * **missing_keys** is a list of str containing the missing\n keys\n\n * **unexpected_keys** is a list of str containing the\n unexpected keys\n\n Return type:\n \"NamedTuple\" with \"missing_keys\" and \"unexpected_keys\" fields\n\n Note:\n\n If a parameter or buffer is registered as \"None\" and its\n corresponding key exists in \"state_dict\", \"load_state_dict()\"\n will raise a \"RuntimeError\".\n\nmodules()\n Returns an iterator over all modules in the network.\n\n Yields:\n *Module* -- a module in the network\n\n Return type:\n *Iterator*[*Module*]\n\n Note:\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "Iterator[Module]\n Note:\n\n Duplicate modules are returned only once. In the following\n example, \"l\" will be returned only once.\n\n Example:\n\n >>> l = nn.Linear(2, 2)\n >>> net = nn.Sequential(l, l)\n >>> for idx, m in enumerate(net.modules()):\n ... print(idx, '->', m)\n\n 0 -> Sequential(\n (0): Linear(in_features=2, out_features=2, bias=True)\n (1): Linear(in_features=2, out_features=2, bias=True)\n )\n 1 -> Linear(in_features=2, out_features=2, bias=True)\n\nnamed_buffers(prefix='', recurse=True, remove_duplicate=True)\n Returns an iterator over module buffers, yielding both the name\n of the buffer as well as the buffer itself.\n\n Parameters:\n * **prefix** (*str*) -- prefix to prepend to all buffer\n names.\n\n * **recurse** (*bool**, **optional*) -- if True, then yields\n buffers of this module and all submodules. Otherwise,\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "yields only buffers that are direct members of this module.\n Defaults to True.\n * **remove_duplicate** (*bool**, **optional*) -- whether to\n remove the duplicated buffers in the result. 
Defaults to\n True.\n\n Yields:\n *(str, torch.Tensor)* -- Tuple containing the name and buffer\n\n Return type:\n *Iterator*[*Tuple*[str, *Tensor*]]\n\n Example:\n\n >>> for name, buf in self.named_buffers():\n >>> if name in ['running_var']:\n >>> print(buf.size())\n\nnamed_children()\n Returns an iterator over immediate children modules, yielding\n both the name of the module as well as the module itself.\n\n Yields:\n *(str, Module)* -- Tuple containing a name and child module\n\n Return type:\n *Iterator*[*Tuple*[str, *Module*]]\n\n Example:\n\n >>> for name, module in model.named_children():\n >>> if name in ['conv4', 'conv5']:\n >>> print(module)\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "\n\n\n print(module)\n\n\n\n\nnamed_modules(memo=None, prefix='', remove_duplicate=True)\n Returns an iterator over all modules in the network, yielding\n both the name of the module as well as the module itself.\n\n Parameters:\n * **memo** (*Optional**[**Set**[**Module**]**]*) -- a memo to\n store the set of modules already added to the result\n\n * **prefix** (*str*) -- a prefix that will be added to the\n name of the module\n\n * **remove_duplicate** (*bool*) -- whether to remove the\n duplicated module instances in the result or not\n\n Yields:\n *(str, Module)* -- Tuple of name and module\n\n Note:\n\n Duplicate modules are returned only once. In the following\n example, \"l\" will be returned only once.\n\n Example:\n\n >>> l = nn.Linear(2, 2)\n >>> net = nn.Sequential(l, l)\n >>> for idx, m in enumerate(net.named_modules()):\n ... print(idx, '->', m)\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "... print(idx, '->', m)\n 0 -> ('', Sequential(\n (0): Linear(in_features=2, out_features=2, bias=True)\n (1): Linear(in_features=2, out_features=2, bias=True)\n ))\n 1 -> ('0', Linear(in_features=2, out_features=2, bias=True))\n\nnamed_parameters(prefix='', recurse=True, remove_duplicate=True)\n Returns an iterator over module parameters, yielding both the\n name of the parameter as well as the parameter itself.\n\n Parameters:\n * **prefix** (*str*) -- prefix to prepend to all parameter\n names.\n\n * **recurse** (*bool*) -- if True, then yields parameters of\n this module and all submodules. Otherwise, yields only\n parameters that are direct members of this module.\n\n * **remove_duplicate** (*bool**, **optional*) -- whether to\n remove the duplicated parameters in the result. Defaults to\n True.\n\n Yields:\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "True.\n Yields:\n *(str, Parameter)* -- Tuple containing the name and parameter\n\n Return type:\n *Iterator*[*Tuple*[str, *Parameter*]]\n\n Example:\n\n >>> for name, param in self.named_parameters():\n >>> if name in ['bias']:\n >>> print(param.size())\n\nparameters(recurse=True)\n Returns an iterator over module parameters.\n\n This is typically passed to an optimizer.\n\n Parameters:\n **recurse** (*bool*) -- if True, then yields parameters of\n this module and all submodules. 
Otherwise, yields only\n parameters that are direct members of this module.\n\n Yields:\n *Parameter* -- module parameter\n\n Return type:\n *Iterator*[*Parameter*]\n\n Example:\n\n >>> for param in model.parameters():\n >>> print(type(param), param.size())\n (20L,)\n (20L, 1L, 5L, 5L)\n\nregister_backward_hook(hook)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "register_backward_hook(hook)\n Registers a backward hook on the module.\n\n This function is deprecated in favor of\n \"register_full_backward_hook()\" and the behavior of this\n function will change in future versions.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n\nregister_buffer(name, tensor, persistent=True)\n Adds a buffer to the module.\n\n This is typically used to register a buffer that should not to\n be considered a model parameter. For example, BatchNorm's\n \"running_mean\" is not a parameter, but is part of the module's\n state. Buffers, by default, are persistent and will be saved\n alongside parameters. This behavior can be changed by setting\n \"persistent\" to \"False\". The only difference between a\n persistent buffer and a non-persistent buffer is that the latter\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "will not be a part of this module's \"state_dict\".\n Buffers can be accessed as attributes using given names.\n\n Parameters:\n * **name** (*str*) -- name of the buffer. The buffer can be\n accessed from this module using the given name\n\n * **tensor** (*Tensor** or **None*) -- buffer to be\n registered. If \"None\", then operations that run on buffers,\n such as \"cuda\", are ignored. If \"None\", the buffer is\n **not** included in the module's \"state_dict\".\n\n * **persistent** (*bool*) -- whether the buffer is part of\n this module's \"state_dict\".\n\n Example:\n\n >>> self.register_buffer('running_mean', torch.zeros(num_features))\n\nregister_forward_hook(hook, *, prepend=False, with_kwargs=False)\n Registers a forward hook on the module.\n\n The hook will be called every time after \"forward()\" has\n computed an output.\n\n If \"with_kwargs\" is \"False\" or not specified, the input contains\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "only the positional arguments given to the module. Keyword\n arguments won't be passed to the hooks and only to the\n \"forward\". The hook can modify the output. It can modify the\n input inplace but it will not have effect on forward since this\n is called after \"forward()\" is called. The hook should have the\n following signature:\n hook(module, args, output) -> None or modified output\n\n If \"with_kwargs\" is \"True\", the forward hook will be passed the\n \"kwargs\" given to the forward function and be expected to return\n the output possibly modified. 
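For orientation, a minimal usage sketch (hook and module names are illustrative) of registering a hook with *with_kwargs=True* on a small module:

    import torch
    import torch.nn as nn

    def log_output(module, args, kwargs, output):
        # runs after forward(); returning None keeps the original output
        print(module.__class__.__name__, tuple(output.shape))

    layer = nn.Linear(3, 2)
    handle = layer.register_forward_hook(log_output, with_kwargs=True)
    layer(torch.randn(4, 3))   # triggers the hook
    handle.remove()            # remove the hook when it is no longer needed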
The hook should have the following\n signature:\n\n hook(module, args, kwargs, output) -> None or modified output\n\n Parameters:\n * **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n * **prepend** (*bool*) -- If \"True\", the provided \"hook\" will\n be fired before all existing \"forward\" hooks on this\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "\"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"forward\" hooks on this\n \"torch.nn.modules.Module\". Note that global \"forward\" hooks\n registered with \"register_module_forward_hook()\" will fire\n before all hooks registered by this method. Default:\n \"False\"\n * **with_kwargs** (*bool*) -- If \"True\", the \"hook\" will be\n passed the kwargs given to the forward function. Default:\n \"False\"\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n\nregister_forward_pre_hook(hook, *, prepend=False, with_kwargs=False)\n Registers a forward pre-hook on the module.\n\n The hook will be called every time before \"forward()\" is\n invoked.\n\n If \"with_kwargs\" is false or not specified, the input contains\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "only the positional arguments given to the module. Keyword\n arguments won't be passed to the hooks and only to the\n \"forward\". The hook can modify the input. User can either return\n a tuple or a single modified value in the hook. We will wrap the\n value into a tuple if a single value is returned (unless that\n value is already a tuple). The hook should have the following\n signature:\n hook(module, args) -> None or modified input\n\n If \"with_kwargs\" is true, the forward pre-hook will be passed\n the kwargs given to the forward function. And if the hook\n modifies the input, both the args and kwargs should be returned.\n The hook should have the following signature:\n\n hook(module, args, kwargs) -> None or a tuple of modified input and kwargs\n\n Parameters:\n * **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n * **prepend** (*bool*) -- If true, the provided \"hook\" will\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "be fired before all existing \"forward_pre\" hooks on this\n \"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"forward_pre\" hooks on\n this \"torch.nn.modules.Module\". Note that global\n \"forward_pre\" hooks registered with\n \"register_module_forward_pre_hook()\" will fire before all\n hooks registered by this method. Default: \"False\"\n * **with_kwargs** (*bool*) -- If true, the \"hook\" will be\n passed the kwargs given to the forward function. Default:\n \"False\"\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n\nregister_full_backward_hook(hook, prepend=False)\n Registers a backward hook on the module.\n\n The hook will be called every time the gradients with respect to\n a module are computed, i.e. 
the hook will execute if and only if\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "the gradients with respect to module outputs are computed. The\n hook should have the following signature:\n hook(module, grad_input, grad_output) -> tuple(Tensor) or None\n\n The \"grad_input\" and \"grad_output\" are tuples that contain the\n gradients with respect to the inputs and outputs respectively.\n The hook should not modify its arguments, but it can optionally\n return a new gradient with respect to the input that will be\n used in place of \"grad_input\" in subsequent computations.\n \"grad_input\" will only correspond to the inputs given as\n positional arguments and all kwarg arguments are ignored.\n Entries in \"grad_input\" and \"grad_output\" will be \"None\" for all\n non-Tensor arguments.\n\n For technical reasons, when this hook is applied to a Module,\n its forward function will receive a view of each Tensor passed\n to the Module. Similarly the caller will receive a view of each\n Tensor returned by the Module's forward function.\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "Warning:\n Modifying inputs or outputs inplace is not allowed when using\n backward hooks and will raise an error.\n\n Parameters:\n * **hook** (*Callable*) -- The user-defined hook to be\n registered.\n\n * **prepend** (*bool*) -- If true, the provided \"hook\" will\n be fired before all existing \"backward\" hooks on this\n \"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"backward\" hooks on this\n \"torch.nn.modules.Module\". Note that global \"backward\"\n hooks registered with\n \"register_module_full_backward_hook()\" will fire before all\n hooks registered by this method.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n\nregister_full_backward_pre_hook(hook, prepend=False)\n Registers a backward pre-hook on the module.\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "The hook will be called every time the gradients for the module\n are computed. The hook should have the following signature:\n hook(module, grad_output) -> Tensor or None\n\n The \"grad_output\" is a tuple. The hook should not modify its\n arguments, but it can optionally return a new gradient with\n respect to the output that will be used in place of\n \"grad_output\" in subsequent computations. Entries in\n \"grad_output\" will be \"None\" for all non-Tensor arguments.\n\n For technical reasons, when this hook is applied to a Module,\n its forward function will receive a view of each Tensor passed\n to the Module. Similarly the caller will receive a view of each\n Tensor returned by the Module's forward function.\n\n Warning:\n\n Modifying inputs inplace is not allowed when using backward\n hooks and will raise an error.\n\n Parameters:\n * **hook** (*Callable*) -- The user-defined hook to be\n registered.\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "registered.\n * **prepend** (*bool*) -- If true, the provided \"hook\" will\n be fired before all existing \"backward_pre\" hooks on this\n \"torch.nn.modules.Module\". 
Otherwise, the provided \"hook\"\n will be fired after all existing \"backward_pre\" hooks on\n this \"torch.nn.modules.Module\". Note that global\n \"backward_pre\" hooks registered with\n \"register_module_full_backward_pre_hook()\" will fire before\n all hooks registered by this method.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n\nregister_load_state_dict_post_hook(hook)\n Registers a post hook to be run after module's \"load_state_dict\"\n is called.\n\n It should have the following signature::\n hook(module, incompatible_keys) -> None\n\n The \"module\" argument is the current module that this hook is\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "registered on, and the \"incompatible_keys\" argument is a\n \"NamedTuple\" consisting of attributes \"missing_keys\" and\n \"unexpected_keys\". \"missing_keys\" is a \"list\" of \"str\"\n containing the missing keys and \"unexpected_keys\" is a \"list\" of\n \"str\" containing the unexpected keys.\n The given incompatible_keys can be modified inplace if needed.\n\n Note that the checks performed when calling \"load_state_dict()\"\n with \"strict=True\" are affected by modifications the hook makes\n to \"missing_keys\" or \"unexpected_keys\", as expected. Additions\n to either set of keys will result in an error being thrown when\n \"strict=True\", and clearing out both missing and unexpected keys\n will avoid an error.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n\nregister_module(name, module)\n Alias for \"add_module()\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "Alias for \"add_module()\".\nregister_parameter(name, param)\n Adds a parameter to the module.\n\n The parameter can be accessed as an attribute using given name.\n\n Parameters:\n * **name** (*str*) -- name of the parameter. The parameter\n can be accessed from this module using the given name\n\n * **param** (*Parameter** or **None*) -- parameter to be\n added to the module. If \"None\", then operations that run on\n parameters, such as \"cuda\", are ignored. If \"None\", the\n parameter is **not** included in the module's \"state_dict\".\n\nregister_state_dict_pre_hook(hook)\n These hooks will be called with arguments: \"self\", \"prefix\", and\n \"keep_vars\" before calling \"state_dict\" on \"self\". The\n registered hooks can be used to perform pre-processing before\n the \"state_dict\" call is made.\n\nrequires_grad_(requires_grad=True)\n Change if autograd should record operations on parameters in\n this module.\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "this module.\n This method sets the parameters' \"requires_grad\" attributes in-\n place.\n\n This method is helpful for freezing part of the module for\n finetuning or training parts of a model individually (e.g., GAN\n training).\n\n See Locally disabling gradient computation for a comparison\n between *.requires_grad_()* and several similar mechanisms that\n may be confused with it.\n\n Parameters:\n **requires_grad** (*bool*) -- whether autograd should record\n operations on parameters in this module. 
Default: \"True\".\n\n Returns:\n self\n\n Return type:\n Module\n\nsave(f, _extra_files={})\n See \"torch.jit.save\" for details.\n\nset_extra_state(state)\n This function is called from \"load_state_dict()\" to handle any\n extra state found within the *state_dict*. Implement this\n function and a corresponding \"get_extra_state()\" for your module\n if you need to store extra state within its *state_dict*.\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "Parameters:\n state (dict) -- Extra state from the state_dict\nshare_memory()\n See \"torch.Tensor.share_memory_()\"\n\n Return type:\n *T*\n\nstate_dict(*args, destination=None, prefix='', keep_vars=False)\n Returns a dictionary containing references to the whole state of\n the module.\n\n Both parameters and persistent buffers (e.g. running averages)\n are included. Keys are corresponding parameter and buffer names.\n Parameters and buffers set to \"None\" are not included.\n\n Note:\n\n The returned object is a shallow copy. It contains references\n to the module's parameters and buffers.\n\n Warning:\n\n Currently \"state_dict()\" also accepts positional arguments for\n \"destination\", \"prefix\" and \"keep_vars\" in order. However,\n this is being deprecated and keyword arguments will be\n enforced in future releases.\n\n Warning:\n\n Please avoid the use of argument \"destination\" as it is not\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "designed for end-users.\n Parameters:\n * **destination** (*dict**, **optional*) -- If provided, the\n state of module will be updated into the dict and the same\n object is returned. Otherwise, an \"OrderedDict\" will be\n created and returned. Default: \"None\".\n\n * **prefix** (*str**, **optional*) -- a prefix added to\n parameter and buffer names to compose the keys in\n state_dict. Default: \"''\".\n\n * **keep_vars** (*bool**, **optional*) -- by default the\n \"Tensor\" s returned in the state dict are detached from\n autograd. If it's set to \"True\", detaching will not be\n performed. Default: \"False\".\n\n Returns:\n a dictionary containing a whole state of the module\n\n Return type:\n dict\n\n Example:\n\n >>> module.state_dict().keys()\n ['bias', 'weight']\n\nto(args, *kwargs)\n Moves and/or casts the parameters and buffers.\n\n This can be called as\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "This can be called as\n to(device=None, dtype=None, non_blocking=False)\n\n to(dtype, non_blocking=False)\n\n to(tensor, non_blocking=False)\n\n to(memory_format=torch.channels_last)\n\n Its signature is similar to \"torch.Tensor.to()\", but only\n accepts floating point or complex \"dtype\"s. In addition, this\n method will only cast the floating point or complex parameters\n and buffers to \"dtype\" (if given). The integral parameters and\n buffers will be moved \"device\", if that is given, but with\n dtypes unchanged. 
When \"non_blocking\" is set, it tries to\n convert/move asynchronously with respect to the host if\n possible, e.g., moving CPU Tensors with pinned memory to CUDA\n devices.\n\n See below for examples.\n\n Note:\n\n This method modifies the module in-place.\n\n Parameters:\n * **device** (\"torch.device\") -- the desired device of the\n parameters and buffers in this module\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "parameters and buffers in this module\n * **dtype** (\"torch.dtype\") -- the desired floating point or\n complex dtype of the parameters and buffers in this module\n\n * **tensor** (*torch.Tensor*) -- Tensor whose dtype and\n device are the desired dtype and device for all parameters\n and buffers in this module\n\n * **memory_format** (\"torch.memory_format\") -- the desired\n memory format for 4D parameters and buffers in this module\n (keyword only argument)\n\n Returns:\n self\n\n Return type:\n Module\n\n Examples:\n\n >>> linear = nn.Linear(2, 2)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1913, -0.3420],\n [-0.5113, -0.2325]])\n >>> linear.to(torch.double)\n Linear(in_features=2, out_features=2, bias=True)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1913, -0.3420],\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "tensor([[ 0.1913, -0.3420],\n [-0.5113, -0.2325]], dtype=torch.float64)\n >>> gpu1 = torch.device(\"cuda:1\")\n >>> linear.to(gpu1, dtype=torch.half, non_blocking=True)\n Linear(in_features=2, out_features=2, bias=True)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1914, -0.3420],\n [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')\n >>> cpu = torch.device(\"cpu\")\n >>> linear.to(cpu)\n Linear(in_features=2, out_features=2, bias=True)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1914, -0.3420],\n [-0.5112, -0.2324]], dtype=torch.float16)\n >>> linear = nn.Linear(2, 2, bias=None).to(torch.cdouble)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.3741+0.j, 0.2382+0.j],\n [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)\n >>> linear(torch.ones(3, 2, dtype=torch.cdouble))\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "tensor([[0.6122+0.j, 0.1150+0.j],\n [0.6122+0.j, 0.1150+0.j],\n [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)\nto_empty(*, device)\n Moves the parameters and buffers to the specified device without\n copying storage.\n\n Parameters:\n **device** (\"torch.device\") -- The desired device of the\n parameters and buffers in this module.\n\n Returns:\n self\n\n Return type:\n Module\n\ntrain(mode=True)\n Sets the module in training mode.\n\n This has any effect only on certain modules. See documentations\n of particular modules for details of their behaviors in\n training/evaluation mode, if they are affected, e.g. \"Dropout\",\n \"BatchNorm\", etc.\n\n Parameters:\n **mode** (*bool*) -- whether to set training mode (\"True\") or\n evaluation mode (\"False\"). 
Default: \"True\".\n\n Returns:\n self\n\n Return type:\n Module\n\ntype(dst_type)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "Module\ntype(dst_type)\n Casts all parameters and buffers to \"dst_type\".\n\n Note:\n\n This method modifies the module in-place.\n\n Parameters:\n **dst_type** (*type** or **string*) -- the desired type\n\n Returns:\n self\n\n Return type:\n Module\n\nxpu(device=None)\n Moves all model parameters and buffers to the XPU.\n\n This also makes associated parameters and buffers different\n objects. So it should be called before constructing optimizer if\n the module will live on XPU while being optimized.\n\n Note:\n\n This method modifies the module in-place.\n\n Parameters:\n **device** (*int**, **optional*) -- if specified, all\n parameters will be copied to that device\n\n Returns:\n self\n\n Return type:\n Module\n\nzero_grad(set_to_none=False)\n Sets gradients of all model parameters to zero. See similar\n function under \"torch.optim.Optimizer\" for more context.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. See \"torch.optim.Optimizer.zero_grad()\"\n for details.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptModule.html", "category": "pytorch docs"} {"text": "torch.nn.functional.avg_pool2d\ntorch.nn.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None) -> Tensor\nApplies 2D average-pooling operation in kH \\times kW regions by\n step size sH \\times sW steps. The number of output features is\n equal to the number of input planes.\nSee \"AvgPool2d\" for details and output shape.\nParameters:\n * input -- input tensor (\\text{minibatch} ,\n \\text{in_channels} , iH , iW)\n * **kernel_size** -- size of the pooling region. Can be a single\n number or a tuple *(kH, kW)*\n\n * **stride** -- stride of the pooling operation. Can be a single\n number or a tuple *(sH, sW)*. Default: \"kernel_size\"\n\n * **padding** -- implicit zero paddings on both sides of the\n input. Can be a single number or a tuple *(padH, padW)*.\n Default: 0\n\n * **ceil_mode** -- when True, will use *ceil* instead of *floor*\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool2d.html", "category": "pytorch docs"} {"text": "in the formula to compute the output shape. Default: \"False\"\n * **count_include_pad** -- when True, will include the zero-\n padding in the averaging calculation. Default: \"True\"\n\n * **divisor_override** -- if specified, it will be used as\n divisor, otherwise size of the pooling region will be used.\n Default: None\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.values\nTensor.values() -> Tensor\nReturn the values tensor of a sparse COO tensor.\nWarning:\n Throws an error if \"self\" is not a sparse COO tensor.\n\nSee also \"Tensor.indices()\".\nNote:\n This method can only be called on a coalesced sparse tensor. See\n \"Tensor.coalesce()\" for details.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.values.html", "category": "pytorch docs"} {"text": "torch.Tensor.to_sparse\nTensor.to_sparse(sparseDims) -> Tensor\nReturns a sparse copy of the tensor. 
PyTorch supports sparse\n tensors in coordinate format.\nParameters:\n sparseDims (int, optional) -- the number of sparse\n dimensions to include in the new sparse tensor\nExample:\n >>> d = torch.tensor([[0, 0, 0], [9, 0, 10], [0, 0, 0]])\n >>> d\n tensor([[ 0, 0, 0],\n [ 9, 0, 10],\n [ 0, 0, 0]])\n >>> d.to_sparse()\n tensor(indices=tensor([[1, 1],\n [0, 2]]),\n values=tensor([ 9, 10]),\n size=(3, 3), nnz=2, layout=torch.sparse_coo)\n >>> d.to_sparse(1)\n tensor(indices=tensor([[1]]),\n values=tensor([[ 9, 0, 10]]),\n size=(3, 3), nnz=1, layout=torch.sparse_coo)\n\nto_sparse(*, layout=None, blocksize=None, dense_dim=None) -> Tensor\nReturns a sparse tensor with the specified layout and blocksize.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse.html", "category": "pytorch docs"} {"text": "If the \"self\" is strided, the number of dense dimensions could be\n specified, and a hybrid sparse tensor will be created, with\n dense_dim dense dimensions and self.dim() - 2 - dense_dim batch\n dimension.\nNote:\n If the \"self\" layout and blocksize parameters match with the\n specified layout and blocksize, return \"self\". Otherwise, return\n a sparse tensor copy of \"self\".\n\nParameters:\n * layout (\"torch.layout\", optional) -- The desired sparse\n layout. One of \"torch.sparse_coo\", \"torch.sparse_csr\",\n \"torch.sparse_csc\", \"torch.sparse_bsr\", or \"torch.sparse_bsc\".\n Default: if \"None\", \"torch.sparse_coo\".\n * **blocksize** (list, tuple, \"torch.Size\", optional) -- Block\n size of the resulting BSR or BSC tensor. For other layouts,\n specifying the block size that is not \"None\" will result in a\n RuntimeError exception. A block size must be a tuple of\n length two such that its items evenly divide the two sparse\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse.html", "category": "pytorch docs"} {"text": "dimensions.\n * **dense_dim** (*int**, **optional*) -- Number of dense\n dimensions of the resulting CSR, CSC, BSR or BSC tensor. 
This\n argument should be used only if \"self\" is a strided tensor,\n and must be a value between 0 and dimension of \"self\" tensor\n minus two.\n\nExample:\n >>> x = torch.tensor([[1, 0], [0, 0], [2, 3]])\n >>> x.to_sparse(layout=torch.sparse_coo)\n tensor(indices=tensor([[0, 2, 2],\n [0, 0, 1]]),\n values=tensor([1, 2, 3]),\n size=(3, 2), nnz=3, layout=torch.sparse_coo)\n >>> x.to_sparse(layout=torch.sparse_bsr, blocksize=(1, 2))\n tensor(crow_indices=tensor([0, 1, 1, 2]),\n col_indices=tensor([0, 0]),\n values=tensor([[[1, 0]],\n [[2, 3]]]), size=(3, 2), nnz=2, layout=torch.sparse_bsr)\n >>> x.to_sparse(layout=torch.sparse_bsr, blocksize=(2, 1))\n RuntimeError: Tensor size(-2) 3 needs to be divisible by blocksize[0] 2\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse.html", "category": "pytorch docs"} {"text": "\n\n\nx.to_sparse(layout=torch.sparse_csr, blocksize=(3, 1))\n RuntimeError: to_sparse for Strided to SparseCsr conversion does not use specified blocksize\n\n\n\n >>> x = torch.tensor([[[1], [0]], [[0], [0]], [[2], [3]]])\n >>> x.to_sparse(layout=torch.sparse_csr, dense_dim=1)\n tensor(crow_indices=tensor([0, 1, 1, 3]),\n col_indices=tensor([0, 0, 1]),\n values=tensor([[1],\n [2],\n [3]]), size=(3, 2, 1), nnz=3, layout=torch.sparse_csr)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse.html", "category": "pytorch docs"} {"text": "torch.Tensor.ccol_indices\nTensor.ccol_indices()", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ccol_indices.html", "category": "pytorch docs"} {"text": "torch.Tensor.select\nTensor.select(dim, index) -> Tensor\nSee \"torch.select()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.select.html", "category": "pytorch docs"} {"text": "Adamax\nclass torch.optim.Adamax(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, foreach=None, *, maximize=False, differentiable=False)\nImplements Adamax algorithm (a variant of Adam based on infinity\n norm).\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{input} : \\gamma \\text{ (lr)}, \\beta_1,\n \\beta_2 \\text{ (betas)},\\theta_0 \\text{\n (params)},f(\\theta) \\text{ (objective)}, \\: \\lambda\n \\text{ (weight decay)},\n \\\\ &\\hspace{13mm} \\epsilon \\text{ (epsilon)}\n \\\\ &\\textbf{initialize} : m_0 \\leftarrow 0 \\text{ ( first\n moment)}, u_0 \\leftarrow 0 \\text{ ( infinity norm)}\n \\\\[-1.ex] &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\\\ &\\hspace{5mm}g_t\n \\leftarrow \\nabla_{\\theta} f_t (\\theta_{t-1}) \\\\\n &\\hspace{5mm}if \\: \\lambda \\neq 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"} {"text": "&\\hspace{5mm}if \\: \\lambda \\neq 0\n \\ &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda\n \\theta_{t-1} \\ &\\hspace{5mm}m_t\n \\leftarrow \\beta_1 m_{t-1} + (1 - \\beta_1) g_t\n \\ &\\hspace{5mm}u_t \\leftarrow \\mathrm{max}(\\beta_2\n u_{t-1}, |g_{t}|+\\epsilon) \\ &\\hspace{5mm}\\theta_t\n \\leftarrow \\theta_{t-1} - \\frac{\\gamma m_t}{(1-\\beta^t_1) u_t}\n \\ &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\nFor further details regarding the algorithm we refer to Adam: A\n Method for Stochastic Optimization.\nParameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * **lr** (*float**, **optional*) -- learning rate 
(default:\n 2e-3)\n\n * **betas** (*Tuple**[**float**, **float**]**, **optional*) --\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"} {"text": "coefficients used for computing running averages of gradient\n and its square\n * **eps** (*float**, **optional*) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n\n * **weight_decay** (*float**, **optional*) -- weight decay (L2\n penalty) (default: 0)\n\n * **foreach** (*bool**, **optional*) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n\n * **maximize** (*bool**, **optional*) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n\n * **differentiable** (*bool**, **optional*) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"} {"text": "context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\nadd_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n **state_dict** (*dict*) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n\nregister_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"} {"text": "hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. 
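(For orientation, a minimal sketch of attaching such a hook, not drawn from the reference text; the model and the print-based hook are purely illustrative. The exact signature expected of the hook is restated just below.)

    >>> model = torch.nn.Linear(4, 2)
    >>> optimizer = torch.optim.Adamax(model.parameters(), lr=2e-3)
    >>> def log_step(optimizer, args, kwargs):
    ...     # runs before every optimizer.step(); returning None leaves args/kwargs unchanged
    ...     print('about to step')
    >>> handle = optimizer.register_step_pre_hook(log_step)
    >>> model(torch.randn(8, 4)).sum().backward()
    >>> optimizer.step()   # prints, then applies the Adamax update
    about to step
    >>> handle.remove()    # detach the hook once it is no longer needed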
It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"} {"text": "registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n **set_to_none** (*bool*) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"} {"text": "None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. 
\"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adamax.html", "category": "pytorch docs"} {"text": "torch.foreach_cos\ntorch.foreach_cos(self: List[Tensor]) -> None\nApply \"torch.cos()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_cos_.html", "category": "pytorch docs"} {"text": "ModuleDict\nclass torch.nn.ModuleDict(modules=None)\nHolds submodules in a dictionary.\n\"ModuleDict\" can be indexed like a regular Python dictionary, but\n modules it contains are properly registered, and will be visible by\n all \"Module\" methods.\n\"ModuleDict\" is an ordered dictionary that respects\n\n\nthe order of insertion, and\n\n\nin \"update()\", the order of the merged \"OrderedDict\", \"dict\"\n (started from Python 3.6) or another \"ModuleDict\" (the argument\n to \"update()\").\n\n\nNote that \"update()\" with other unordered mapping types (e.g.,\n Python's plain \"dict\" before Python version 3.6) does not preserve\n the order of the merged mapping.\nParameters:\n modules (iterable, optional) -- a mapping (dictionary)\n of (string: module) or an iterable of key-value pairs of type\n (string, module)\nExample:\n class MyModule(nn.Module):\n def __init__(self):\n super(MyModule, self).__init__()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ModuleDict.html", "category": "pytorch docs"} {"text": "super(MyModule, self).init()\n self.choices = nn.ModuleDict({\n 'conv': nn.Conv2d(10, 10, 3),\n 'pool': nn.MaxPool2d(3)\n })\n self.activations = nn.ModuleDict([\n ['lrelu', nn.LeakyReLU()],\n ['prelu', nn.PReLU()]\n ])\n def forward(self, x, choice, act):\n x = self.choices[choice](x)\n x = self.activations[act](x)\n return x\n\nclear()\n Remove all items from the ModuleDict.\n\nitems()\n Return an iterable of the ModuleDict key/value pairs.\n\n Return type:\n *Iterable*[*Tuple*[str, *Module*]]\n\nkeys()\n Return an iterable of the ModuleDict keys.\n\n Return type:\n *Iterable*[str]\n\npop(key)\n Remove key from the ModuleDict and return its module.\n\n Parameters:\n **key** (*str*) -- key to pop from the ModuleDict\n\n Return type:\n *Module*\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ModuleDict.html", "category": "pytorch docs"} {"text": "Return type:\n Module\nupdate(modules)\n Update the \"ModuleDict\" with the key-value pairs from a mapping\n or an iterable, overwriting existing keys.\n\n Note:\n\n If \"modules\" is an \"OrderedDict\", a \"ModuleDict\", or an\n iterable of key-value pairs, the order of new elements in it\n is preserved.\n\n Parameters:\n **modules** (*iterable*) -- a mapping (dictionary) from\n string to \"Module\", or an iterable of key-value pairs of type\n (string, \"Module\")\n\nvalues()\n Return an iterable of the ModuleDict values.\n\n Return type:\n *Iterable*[*Module*]\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ModuleDict.html", "category": "pytorch docs"} {"text": "torch.Tensor.tensor_split\nTensor.tensor_split(indices_or_sections, dim=0) -> List of Tensors\nSee \"torch.tensor_split()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tensor_split.html", "category": "pytorch docs"} {"text": "OneCycleLR\nclass torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr, total_steps=None, epochs=None, steps_per_epoch=None, pct_start=0.3, 
anneal_strategy='cos', cycle_momentum=True, base_momentum=0.85, max_momentum=0.95, div_factor=25.0, final_div_factor=10000.0, three_phase=False, last_epoch=- 1, verbose=False)\nSets the learning rate of each parameter group according to the\n 1cycle learning rate policy. The 1cycle policy anneals the learning\n rate from an initial learning rate to some maximum learning rate\n and then from that maximum learning rate to some minimum learning\n rate much lower than the initial learning rate. This policy was\n initially described in the paper Super-Convergence: Very Fast\n Training of Neural Networks Using Large Learning Rates.\nThe 1cycle learning rate policy changes the learning rate after\n every batch. step should be called after a batch has been used\n for training.\nThis scheduler is not chainable.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html", "category": "pytorch docs"} {"text": "This scheduler is not chainable.\nNote also that the total number of steps in the cycle can be\n determined in one of two ways (listed in order of precedence):\n\n\nA value for total_steps is explicitly provided.\n\n\nA number of epochs (epochs) and a number of steps per epoch\n (steps_per_epoch) are provided. In this case, the number of\n total steps is inferred by total_steps = epochs *\n steps_per_epoch\n\n\nYou must either provide a value for total_steps or provide a value\n for both epochs and steps_per_epoch.\nThe default behaviour of this scheduler follows the fastai\n implementation of 1cycle, which claims that \"unpublished work has\n shown even better results by using only two phases\". To mimic the\n behaviour of the original paper instead, set \"three_phase=True\".\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **max_lr** (*float** or **list*) -- Upper learning rate\n boundaries in the cycle for each parameter group.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html", "category": "pytorch docs"} {"text": "\n\ntotal_steps (int) -- The total number of steps in the\n cycle. Note that if a value is not provided here, then it must\n be inferred by providing a value for epochs and\n steps_per_epoch. Default: None\n\n\nepochs (int) -- The number of epochs to train for. This\n is used along with steps_per_epoch in order to infer the total\n number of steps in the cycle if a value for total_steps is not\n provided. Default: None\n\n\nsteps_per_epoch (int) -- The number of steps per epoch\n to train for. This is used along with epochs in order to infer\n the total number of steps in the cycle if a value for\n total_steps is not provided. Default: None\n\n\npct_start (float) -- The percentage of the cycle (in\n number of steps) spent increasing the learning rate. Default:\n 0.3\n\n\nanneal_strategy (str) -- {'cos', 'linear'} Specifies the\n annealing strategy: \"cos\" for cosine annealing, \"linear\" for\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html", "category": "pytorch docs"} {"text": "linear annealing. Default: 'cos'\n * **cycle_momentum** (*bool*) -- If \"True\", momentum is cycled\n inversely to learning rate between 'base_momentum' and\n 'max_momentum'. Default: True\n\n * **base_momentum** (*float** or **list*) -- Lower momentum\n boundaries in the cycle for each parameter group. Note that\n momentum is cycled inversely to learning rate; at the peak of\n a cycle, momentum is 'base_momentum' and learning rate is\n 'max_lr'. 
Default: 0.85\n\n * **max_momentum** (*float** or **list*) -- Upper momentum\n boundaries in the cycle for each parameter group.\n Functionally, it defines the cycle amplitude (max_momentum -\n base_momentum). Note that momentum is cycled inversely to\n learning rate; at the start of a cycle, momentum is\n 'max_momentum' and learning rate is 'base_lr' Default: 0.95\n\n * **div_factor** (*float*) -- Determines the initial learning\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html", "category": "pytorch docs"} {"text": "rate via initial_lr = max_lr/div_factor Default: 25\n * **final_div_factor** (*float*) -- Determines the minimum\n learning rate via min_lr = initial_lr/final_div_factor\n Default: 1e4\n\n * **three_phase** (*bool*) -- If \"True\", use a third phase of\n the schedule to annihilate the learning rate according to\n 'final_div_factor' instead of modifying the second phase (the\n first two phases will be symmetrical about the step indicated\n by 'pct_start').\n\n * **last_epoch** (*int*) -- The index of the last batch. This\n parameter is used when resuming a training job. Since *step()*\n should be invoked after each batch instead of after each\n epoch, this number represents the total number of *batches*\n computed, not the total number of epochs computed. When\n last_epoch=-1, the schedule is started from the beginning.\n Default: -1\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html", "category": "pytorch docs"} {"text": "for each update. Default: \"False\".\n-[ Example ]-\n\n\n\ndata_loader = torch.utils.data.DataLoader(...)\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)\nscheduler = torch.optim.lr_scheduler.OneCycleLR(optimizer, max_lr=0.01, steps_per_epoch=len(data_loader), epochs=10)\nfor epoch in range(10):\n for batch in data_loader:\n train_batch(...)\n scheduler.step()\n\n\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)\n Loads the schedulers state.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. 
Should be an\n object returned from a call to \"state_dict()\".\n\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.OneCycleLR.html", "category": "pytorch docs"} {"text": "torch.Tensor.untyped_storage\nTensor.untyped_storage() -> torch.UntypedStorage\nReturns the underlying \"UntypedStorage\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.untyped_storage.html", "category": "pytorch docs"} {"text": "InstanceNorm3d\nclass torch.ao.nn.quantized.InstanceNorm3d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)\nThis is the quantized version of \"InstanceNorm3d\".\nAdditional args:\n * scale - quantization scale of the output, type: double.\n * **zero_point** - quantization zero point of the output, type:\n long.\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.InstanceNorm3d.html", "category": "pytorch docs"} {"text": "torch.nn.functional.avg_pool1d\ntorch.nn.functional.avg_pool1d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True) -> Tensor\nApplies a 1D average pooling over an input signal composed of\n several input planes.\nSee \"AvgPool1d\" for details and output shape.\nParameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iW)\n * **kernel_size** -- the size of the window. Can be a single\n number or a tuple *(kW,)*\n\n * **stride** -- the stride of the window. Can be a single number\n or a tuple *(sW,)*. Default: \"kernel_size\"\n\n * **padding** -- implicit zero paddings on both sides of the\n input. Can be a single number or a tuple *(padW,)*. Default: 0\n\n * **ceil_mode** -- when True, will use *ceil* instead of *floor*\n to compute the output shape. Default: \"False\"\n\n * **count_include_pad** -- when True, will include the zero-\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool1d.html", "category": "pytorch docs"} {"text": "padding in the averaging calculation. 
Default: \"True\"\nExamples:\n >>> # pool of square window of size=3, stride=2\n >>> input = torch.tensor([[[1, 2, 3, 4, 5, 6, 7]]], dtype=torch.float32)\n >>> F.avg_pool1d(input, kernel_size=3, stride=2)\n tensor([[[ 2., 4., 6.]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.avg_pool1d.html", "category": "pytorch docs"} {"text": "torch.Tensor.bitwise_not_\nTensor.bitwise_not_() -> Tensor\nIn-place version of \"bitwise_not()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_not_.html", "category": "pytorch docs"} {"text": "torch.Tensor.sparse_dim\nTensor.sparse_dim() -> int\nReturn the number of sparse dimensions in a sparse tensor \"self\".\nNote:\n Returns \"0\" if \"self\" is not a sparse tensor.\n\nSee also \"Tensor.dense_dim()\" and hybrid tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_dim.html", "category": "pytorch docs"} {"text": "torch.Tensor.hypot\nTensor.hypot(other) -> Tensor\nSee \"torch.hypot()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.hypot.html", "category": "pytorch docs"} {"text": "torch.scatter\ntorch.scatter(input, dim, index, src) -> Tensor\nOut-of-place version of \"torch.Tensor.scatter_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.scatter.html", "category": "pytorch docs"} {"text": "torch.swapdims\ntorch.swapdims(input, dim0, dim1) -> Tensor\nAlias for \"torch.transpose()\".\nThis function is equivalent to NumPy's swapaxes function.\nExamples:\n >>> x = torch.tensor([[[0,1],[2,3]],[[4,5],[6,7]]])\n >>> x\n tensor([[[0, 1],\n [2, 3]],\n\n [[4, 5],\n [6, 7]]])\n >>> torch.swapdims(x, 0, 1)\n tensor([[[0, 1],\n [4, 5]],\n\n [[2, 3],\n [6, 7]]])\n >>> torch.swapdims(x, 0, 2)\n tensor([[[0, 4],\n [2, 6]],\n\n [[1, 5],\n [3, 7]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.swapdims.html", "category": "pytorch docs"} {"text": "torch.Tensor.true_divide_\nTensor.true_divide_(value) -> Tensor\nIn-place version of \"true_divide_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.true_divide_.html", "category": "pytorch docs"} {"text": "torch.Tensor.fmax\nTensor.fmax(other) -> Tensor\nSee \"torch.fmax()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fmax.html", "category": "pytorch docs"} {"text": "torch.is_storage\ntorch.is_storage(obj)\nReturns True if obj is a PyTorch storage object.\nParameters:\n obj (Object) -- Object to test", "source": "https://pytorch.org/docs/stable/generated/torch.is_storage.html", "category": "pytorch docs"} {"text": "Generator\nclass torch.Generator(device='cpu')\nCreates and returns a generator object that manages the state of\n the algorithm which produces pseudo random numbers. 
Used as a\n keyword argument in many In-place random sampling functions.\nParameters:\n device (\"torch.device\", optional) -- the desired device for\n the generator.\nReturns:\n An torch.Generator object.\nReturn type:\n Generator\nExample:\n >>> g_cpu = torch.Generator()\n >>> g_cuda = torch.Generator(device='cuda')\n\ndevice\n Generator.device -> device\n\n Gets the current device of the generator.\n\n Example:\n\n >>> g_cpu = torch.Generator()\n >>> g_cpu.device\n device(type='cpu')\n\nget_state() -> Tensor\n Returns the Generator state as a \"torch.ByteTensor\".\n\n Returns:\n A \"torch.ByteTensor\" which contains all the necessary bits to\n restore a Generator to a specific point in time.\n\n Return type:\n Tensor\n", "source": "https://pytorch.org/docs/stable/generated/torch.Generator.html", "category": "pytorch docs"} {"text": "Return type:\n Tensor\n Example:\n\n >>> g_cpu = torch.Generator()\n >>> g_cpu.get_state()\n\ninitial_seed() -> int\n Returns the initial seed for generating random numbers.\n\n Example:\n\n >>> g_cpu = torch.Generator()\n >>> g_cpu.initial_seed()\n 2147483647\n\nmanual_seed(seed) -> Generator\n Sets the seed for generating random numbers. Returns a\n *torch.Generator* object. It is recommended to set a large seed,\n i.e. a number that has a good balance of 0 and 1 bits. Avoid\n having many 0 bits in the seed.\n\n Parameters:\n **seed** (*int*) -- The desired seed. Value must be within\n the inclusive range *[-0x8000_0000_0000_0000,\n 0xffff_ffff_ffff_ffff]*. Otherwise, a RuntimeError is raised.\n Negative inputs are remapped to positive values with the\n formula *0xffff_ffff_ffff_ffff + seed*.\n\n Returns:\n An torch.Generator object.\n\n Return type:\n Generator\n", "source": "https://pytorch.org/docs/stable/generated/torch.Generator.html", "category": "pytorch docs"} {"text": "Return type:\n Generator\n Example:\n\n >>> g_cpu = torch.Generator()\n >>> g_cpu.manual_seed(2147483647)\n\nseed() -> int\n Gets a non-deterministic random number from std::random_device\n or the current time and uses it to seed a Generator.\n\n Example:\n\n >>> g_cpu = torch.Generator()\n >>> g_cpu.seed()\n 1516516984916\n\nset_state(new_state) -> void\n Sets the Generator state.\n\n Parameters:\n **new_state** (*torch.ByteTensor*) -- The desired state.\n\n Example:\n\n >>> g_cpu = torch.Generator()\n >>> g_cpu_other = torch.Generator()\n >>> g_cpu.set_state(g_cpu_other.get_state())\n", "source": "https://pytorch.org/docs/stable/generated/torch.Generator.html", "category": "pytorch docs"} {"text": "torch.Tensor.ge_\nTensor.ge_(other) -> Tensor\nIn-place version of \"ge()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ge_.html", "category": "pytorch docs"} {"text": "torch.Tensor.pin_memory\nTensor.pin_memory() -> Tensor\nCopies the tensor to pinned memory, if it's not already pinned.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.pin_memory.html", "category": "pytorch docs"} {"text": "torch.Tensor.gt\nTensor.gt(other) -> Tensor\nSee \"torch.gt()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.gt.html", "category": "pytorch docs"} {"text": "torch.cummax\ntorch.cummax(input, dim, *, out=None)\nReturns a namedtuple \"(values, indices)\" where \"values\" is the\n cumulative maximum of elements of \"input\" in the dimension \"dim\".\n And \"indices\" is the index location of each maximum value found in\n the dimension \"dim\".\n y_i = max(x_1, x_2, x_3, \\dots, x_i)\n\nParameters:\n * input (Tensor) -- the input 
tensor.\n * **dim** (*int*) -- the dimension to do the operation over\n\nKeyword Arguments:\n out (tuple, optional) -- the result tuple of two\n output tensors (values, indices)\nExample:\n >>> a = torch.randn(10)\n >>> a\n tensor([-0.3449, -1.5447, 0.0685, -1.5104, -1.1706, 0.2259, 1.4696, -1.3284,\n 1.9946, -0.8209])\n >>> torch.cummax(a, dim=0)\n torch.return_types.cummax(\n values=tensor([-0.3449, -0.3449, 0.0685, 0.0685, 0.0685, 0.2259, 1.4696, 1.4696,\n 1.9946, 1.9946]),\n indices=tensor([0, 0, 2, 2, 2, 5, 6, 6, 8, 8]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.cummax.html", "category": "pytorch docs"} {"text": "torch.nn.functional.upsample_nearest\ntorch.nn.functional.upsample_nearest(input, size=None, scale_factor=None)\nUpsamples the input, using nearest neighbours' pixel values.\nWarning:\n This function is deprecated in favor of\n \"torch.nn.functional.interpolate()\". This is equivalent with\n \"nn.functional.interpolate(..., mode='nearest')\".\n\nCurrently spatial and volumetric upsampling are supported (i.e.\n expected inputs are 4 or 5 dimensional).\nParameters:\n * input (Tensor) -- input\n * **size** (*int** or **Tuple**[**int**, **int**] or\n **Tuple**[**int**, **int**, **int**]*) -- output spatia size.\n\n * **scale_factor** (*int*) -- multiplier for spatial size. Has\n to be an integer.\n\nNote:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n information.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample_nearest.html", "category": "pytorch docs"} {"text": "torch.Tensor.index_put_\nTensor.index_put_(indices, values, accumulate=False) -> Tensor\nPuts values from the tensor \"values\" into the tensor \"self\" using\n the indices specified in \"indices\" (which is a tuple of Tensors).\n The expression \"tensor.index_put_(indices, values)\" is equivalent\n to \"tensor[indices] = values\". Returns \"self\".\nIf \"accumulate\" is \"True\", the elements in \"values\" are added to\n \"self\". If accumulate is \"False\", the behavior is undefined if\n indices contain duplicate elements.\nParameters:\n * indices (tuple of LongTensor) -- tensors used to index\n into self.\n * **values** (*Tensor*) -- tensor of same dtype as *self*.\n\n * **accumulate** (*bool*) -- whether to accumulate into self\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_put_.html", "category": "pytorch docs"} {"text": "FloatFunctional\nclass torch.ao.nn.quantized.FloatFunctional\nState collector class for float operations.\nThe instance of this class can be used instead of the \"torch.\"\n prefix for some operations. See example usage below.\nNote:\n This class does not provide a \"forward\" hook. Instead, you must\n use one of the underlying functions (e.g. 
\"add\").\n\nExamples:\n >>> f_add = FloatFunctional()\n >>> a = torch.tensor(3.0)\n >>> b = torch.tensor(4.0)\n >>> f_add.add(a, b) # Equivalent to ``torch.add(a, b)``\n\nValid operation names:\n * add\n * cat\n\n * mul\n\n * add_relu\n\n * add_scalar\n\n * mul_scalar\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.FloatFunctional.html", "category": "pytorch docs"} {"text": "torch.Tensor.hypot_\nTensor.hypot_(other) -> Tensor\nIn-place version of \"hypot()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.hypot_.html", "category": "pytorch docs"} {"text": "torch.Tensor.mm\nTensor.mm(mat2) -> Tensor\nSee \"torch.mm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mm.html", "category": "pytorch docs"} {"text": "ELU\nclass torch.nn.ELU(alpha=1.0, inplace=False)\nApplies the Exponential Linear Unit (ELU) function, element-wise,\n as described in the paper: Fast and Accurate Deep Network Learning\n by Exponential Linear Units (ELUs).\nELU is defined as:\n \\text{ELU}(x) = \\begin{cases} x, & \\text{ if } x > 0\\\\ \\alpha *\n (\\exp(x) - 1), & \\text{ if } x \\leq 0 \\end{cases}\n\nParameters:\n * alpha (float) -- the \\alpha value for the ELU\n formulation. Default: 1.0\n * **inplace** (*bool*) -- can optionally do the operation in-\n place. Default: \"False\"\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.ELU()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ELU.html", "category": "pytorch docs"} {"text": "torch.Tensor.swapdims\nTensor.swapdims(dim0, dim1) -> Tensor\nSee \"torch.swapdims()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.swapdims.html", "category": "pytorch docs"} {"text": "torch.Tensor.atan\nTensor.atan() -> Tensor\nSee \"torch.atan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.atan.html", "category": "pytorch docs"} {"text": "torch.optim.Optimizer.step\nOptimizer.step(closure)\nPerforms a single optimization step (parameter update).\nParameters:\n closure (Callable) -- A closure that reevaluates the model\n and returns the loss. Optional for most optimizers.\nNote:\n Unless otherwise specified, this function should not modify the\n \".grad\" field of the parameters.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.step.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_quantized\nTensor.is_quantized\nIs \"True\" if the Tensor is quantized, \"False\" otherwise.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_quantized.html", "category": "pytorch docs"} {"text": "torch.Tensor.arcsinh\nTensor.arcsinh() -> Tensor\nSee \"torch.arcsinh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arcsinh.html", "category": "pytorch docs"} {"text": "torch.baddbmm\ntorch.baddbmm(input, batch1, batch2, *, beta=1, alpha=1, out=None) -> Tensor\nPerforms a batch matrix-matrix product of matrices in \"batch1\" and\n \"batch2\". \"input\" is added to the final result.\n\"batch1\" and \"batch2\" must be 3-D tensors each containing the same\n number of matrices.\nIf \"batch1\" is a (b \\times n \\times m) tensor, \"batch2\" is a (b\n \\times m \\times p) tensor, then \"input\" must be broadcastable with\n a (b \\times n \\times p) tensor and \"out\" will be a (b \\times n\n \\times p) tensor. 
Both \"alpha\" and \"beta\" mean the same as the\n scaling factors used in \"torch.addbmm()\".\n \\text{out}_i = \\beta\\ \\text{input}_i + \\alpha\\ (\\text{batch1}_i\n \\mathbin{@} \\text{batch2}_i)\n\nIf \"beta\" is 0, then \"input\" will be ignored, and nan and inf\n in it will not be propagated.\nFor inputs of type FloatTensor or DoubleTensor, arguments\n \"beta\" and \"alpha\" must be real numbers, otherwise they should be\n integers.", "source": "https://pytorch.org/docs/stable/generated/torch.baddbmm.html", "category": "pytorch docs"} {"text": "integers.\nThis operator supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\nParameters:\n * input (Tensor) -- the tensor to be added\n * **batch1** (*Tensor*) -- the first batch of matrices to be\n multiplied\n\n * **batch2** (*Tensor*) -- the second batch of matrices to be\n multiplied\n\nKeyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * **alpha** (*Number**, **optional*) -- multiplier for\n \\text{batch1} \\mathbin{@} \\text{batch2} (\\alpha)\n\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> M = torch.randn(10, 3, 5)\n >>> batch1 = torch.randn(10, 3, 4)\n >>> batch2 = torch.randn(10, 4, 5)\n >>> torch.baddbmm(M, batch1, batch2).size()\n torch.Size([10, 3, 5])\n", "source": "https://pytorch.org/docs/stable/generated/torch.baddbmm.html", "category": "pytorch docs"} {"text": "HistogramObserver\nclass torch.quantization.observer.HistogramObserver(bins=2048, upsample_rate=128, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None, factory_kwargs=None, eps=1.1920928955078125e-07)\nThe module records the running histogram of tensor values along\n with min/max values. \"calculate_qparams\" will calculate scale and\n zero_point.\nParameters:\n * bins (int) -- Number of bins to use for the histogram\n * **upsample_rate** (*int*) -- Factor by which the histograms\n are upsampled, this is used to interpolate histograms with\n varying ranges across observations\n\n * **dtype** (*dtype*) -- dtype argument to the *quantize* node\n needed to implement the reference model spec\n\n * **qscheme** -- Quantization scheme to be used\n\n * **reduce_range** -- Reduces the range of the quantized data\n type by 1 bit\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.HistogramObserver.html", "category": "pytorch docs"} {"text": "type by 1 bit\n * **eps** (*Tensor*) -- Epsilon value for float32, Defaults to\n *torch.finfo(torch.float32).eps*.\n\nThe scale and zero point are computed as follows:\n\n\nCreate the histogram of the incoming inputs.\n The histogram is computed continuously, and the ranges per\n bin change with every new tensor observed.\n\n\nSearch the distribution in the histogram for optimal min/max\n values.\n The search for the min/max values ensures the minimization of\n the quantization error with respect to the floating point\n model.\n\n\nCompute the scale and zero point the same way as in the\n \"MinMaxObserver\"\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.HistogramObserver.html", "category": "pytorch docs"} {"text": "torch.promote_types\ntorch.promote_types(type1, type2) -> dtype\nReturns the \"torch.dtype\" with the smallest size and scalar kind\n that is not smaller nor of lower kind than either type1 or\n type2. 
See type promotion documentation for more information on\n the type promotion logic.\nParameters:\n * type1 (\"torch.dtype\") --\n * **type2** (\"torch.dtype\") --\n\nExample:\n >>> torch.promote_types(torch.int32, torch.float32)\n torch.float32\n >>> torch.promote_types(torch.uint8, torch.long)\n torch.long\n", "source": "https://pytorch.org/docs/stable/generated/torch.promote_types.html", "category": "pytorch docs"} {"text": "torch.Tensor.resolve_neg\nTensor.resolve_neg() -> Tensor\nSee \"torch.resolve_neg()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.resolve_neg.html", "category": "pytorch docs"} {"text": "torch.nn.functional.threshold_\ntorch.nn.functional.threshold_(input, threshold, value) -> Tensor\nIn-place version of \"threshold()\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.threshold_.html", "category": "pytorch docs"} {"text": "torch.linalg.lstsq\ntorch.linalg.lstsq(A, B, rcond=None, *, driver=None)\nComputes a solution to the least squares problem of a system of\n linear equations.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the least squares\n problem for a linear system AX = B with A \\in \\mathbb{K}^{m\n \\times n}, B \\in \\mathbb{K}^{m \\times k} is defined as\n \\min_{X \\in \\mathbb{K}^{n \\times k}} \\|AX - B\\|_F\n\nwhere |-|_F denotes the Frobenius norm.\nSupports inputs of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if the inputs are batches of\n matrices then the output has the same batch dimensions.\n\"driver\" chooses the backend function that will be used. For CPU\n inputs the valid values are 'gels', 'gelsy', 'gelsd,\n 'gelss'. To choose the best driver on CPU consider:\n\nIf \"A\" is well-conditioned (its condition number is not too\n large), or you do not mind some precision loss.\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html", "category": "pytorch docs"} {"text": "\n\nFor a general matrix: 'gelsy' (QR with pivoting) (default)\n\nIf \"A\" is full-rank: 'gels' (QR)\n\n\n\nIf \"A\" is not well-conditioned.\n\n\n'gelsd' (tridiagonal reduction and SVD)\n\n\nBut if you run into memory issues: 'gelss' (full SVD).\n\n\n\n\nFor CUDA input, the only valid driver is 'gels', which assumes\n that \"A\" is full-rank.\nSee also the full description of these drivers\n\"rcond\" is used to determine the effective rank of the matrices in\n \"A\" when \"driver\" is one of ('gelsy', 'gelsd', 'gelss'). In\n this case, if \\sigma_i are the singular values of A in decreasing\n order, \\sigma_i will be rounded down to zero if \\sigma_i \\leq\n \\text{rcond} \\cdot \\sigma_1. If \"rcond\"= None (default), \"rcond\"\n is set to the machine precision of the dtype of \"A\" times max(m,\n n).\nThis function returns the solution to the problem and some extra\n information in a named tuple of four tensors *(solution, residuals,", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html", "category": "pytorch docs"} {"text": "rank, singular_values). For inputs \"A\", \"B\" of shape (, m, n),\n (, m, k)* respectively, it contains\n\n\nsolution: the least squares solution. It has shape (, n, k)*.\n\n\nresiduals: the squared residuals of the solutions, that is,\n |AX - B|_F^2. It has shape equal to the batch dimensions of\n \"A\". It is computed when m > n and every matrix in \"A\" is full-\n rank, otherwise, it is an empty tensor. 
If \"A\" is a batch of\n matrices and any matrix in the batch is not full rank, then an\n empty tensor is returned. This behavior may change in a future\n PyTorch release.\n\n\nrank: tensor of ranks of the matrices in \"A\". It has shape\n equal to the batch dimensions of \"A\". It is computed when\n \"driver\" is one of ('gelsy', 'gelsd', 'gelss'), otherwise\n it is an empty tensor.\n\n\nsingular_values: tensor of singular values of the matrices in\n \"A\". It has shape (, min(m, n))*. It is computed when \"driver\"\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html", "category": "pytorch docs"} {"text": "is one of ('gelsd', 'gelss'), otherwise it is an empty\n tensor.\nNote:\n This function computes *X = *\"A\"*.pinverse() @ *\"B\" in a faster\n and more numerically stable way than performing the computations\n separately.\n\nWarning:\n The default value of \"rcond\" may change in a future PyTorch\n release. It is therefore recommended to use a fixed value to\n avoid potential breaking changes.\n\nParameters:\n * A (Tensor) -- lhs tensor of shape (, m, n)* where ***\n is zero or more batch dimensions.\n * **B** (*Tensor*) -- rhs tensor of shape *(*, m, k)* where ***\n is zero or more batch dimensions.\n\n * **rcond** (*float**, **optional*) -- used to determine the\n effective rank of \"A\". If \"rcond\"*= None*, \"rcond\" is set to\n the machine precision of the dtype of \"A\" times *max(m, n)*.\n Default: *None*.\n\nKeyword Arguments:\n driver (str, optional) -- name of the LAPACK/MAGMA", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html", "category": "pytorch docs"} {"text": "method to be used. If None, 'gelsy' is used for CPU inputs\n and 'gels' for CUDA inputs. Default: None.\nReturns:\n A named tuple (solution, residuals, rank, singular_values).\nExamples:\n >>> A = torch.randn(1,3,3)\n >>> A\n tensor([[[-1.0838, 0.0225, 0.2275],\n [ 0.2438, 0.3844, 0.5499],\n [ 0.1175, -0.9102, 2.0870]]])\n >>> B = torch.randn(2,3,3)\n >>> B\n tensor([[[-0.6772, 0.7758, 0.5109],\n [-1.4382, 1.3769, 1.1818],\n [-0.3450, 0.0806, 0.3967]],\n [[-1.3994, -0.1521, -0.1473],\n [ 1.9194, 1.0458, 0.6705],\n [-1.1802, -0.9796, 1.4086]]])\n >>> X = torch.linalg.lstsq(A, B).solution # A is broadcasted to shape (2, 3, 3)\n >>> torch.dist(X, torch.linalg.pinv(A) @ B)\n tensor(1.5152e-06)\n\n >>> S = torch.linalg.lstsq(A, B, driver='gelsd').singular_values\n >>> torch.dist(S, torch.linalg.svdvals(A))\n tensor(2.3842e-07)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html", "category": "pytorch docs"} {"text": "tensor(2.3842e-07)\n >>> A[:, 0].zero_() # Decrease the rank of A\n >>> rank = torch.linalg.lstsq(A, B).rank\n >>> rank\n tensor([2])\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lstsq.html", "category": "pytorch docs"} {"text": "LinearLR\nclass torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.3333333333333333, end_factor=1.0, total_iters=5, last_epoch=- 1, verbose=False)\nDecays the learning rate of each parameter group by linearly\n changing small multiplicative factor until the number of epoch\n reaches a pre-defined milestone: total_iters. Notice that such\n decay can happen simultaneously with other changes to the learning\n rate from outside this scheduler. When last_epoch=-1, sets initial\n lr as lr.\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **start_factor** (*float*) -- The number we multiply learning\n rate in the first epoch. 
The multiplication factor changes\n towards end_factor in the following epochs. Default: 1./3.\n\n * **end_factor** (*float*) -- The number we multiply learning\n rate at the end of linear changing process. Default: 1.0.\n\n * **total_iters** (*int*) -- The number of iterations that\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LinearLR.html", "category": "pytorch docs"} {"text": "multiplicative factor reaches to 1. Default: 5.\n * **last_epoch** (*int*) -- The index of the last epoch.\n Default: -1.\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n\n-[ Example ]-\n\n\n\nAssuming optimizer uses lr = 0.05 for all groups\nlr = 0.025 if epoch == 0\nlr = 0.03125 if epoch == 1\nlr = 0.0375 if epoch == 2\nlr = 0.04375 if epoch == 3\nlr = 0.05 if epoch >= 4\nscheduler = LinearLR(self.opt, start_factor=0.5, total_iters=4)\nfor epoch in range(100):\n train(...)\n validate(...)\n scheduler.step()\n\n\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)\n Loads the schedulers state.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LinearLR.html", "category": "pytorch docs"} {"text": "print_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LinearLR.html", "category": "pytorch docs"} {"text": "torch.sigmoid\ntorch.sigmoid(input, *, out=None) -> Tensor\nAlias for \"torch.special.expit()\".", "source": "https://pytorch.org/docs/stable/generated/torch.sigmoid.html", "category": "pytorch docs"} {"text": "LazyBatchNorm2d\nclass torch.nn.LazyBatchNorm2d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\nA \"torch.nn.BatchNorm2d\" module with lazy initialization of the\n \"num_features\" argument of the \"BatchNorm2d\" that is inferred from\n the \"input.size(1)\". The attributes that will be lazily initialized\n are weight, bias, running_mean and running_var.\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm2d.html", "category": "pytorch docs"} {"text": "\"True\"\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics. in both\n training and eval modes. 
Default: \"True\"\n\ncls_to_become\n alias of \"BatchNorm2d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm2d.html", "category": "pytorch docs"} {"text": "torch.copysign\ntorch.copysign(input, other, *, out=None) -> Tensor\nCreate a new floating-point tensor with the magnitude of \"input\"\n and the sign of \"other\", elementwise.\n \\text{out}_{i} = \\begin{cases} -|\\text{input}_{i}| &\n \\text{if } \\text{other}_{i} \\leq -0.0 \\\\ |\\text{input}_{i}|\n & \\text{if } \\text{other}_{i} \\geq 0.0 \\\\ \\end{cases}\n\nSupports broadcasting to a common shape, and integer and float\n inputs.\nParameters:\n * input (Tensor) -- magnitudes.\n * **other** (*Tensor** or **Number*) -- contains value(s) whose\n signbit(s) are applied to the magnitudes in \"input\".\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(5)\n >>> a\n tensor([-1.2557, -0.0026, -0.5387, 0.4740, -0.9244])\n >>> torch.copysign(a, 1)\n tensor([1.2557, 0.0026, 0.5387, 0.4740, 0.9244])\n >>> a = torch.randn(4, 4)\n >>> a\n", "source": "https://pytorch.org/docs/stable/generated/torch.copysign.html", "category": "pytorch docs"} {"text": "\n\n\na = torch.randn(4, 4)\n >>> a\n tensor([[ 0.7079, 0.2778, -1.0249, 0.5719],\n [-0.0059, -0.2600, -0.4475, -1.3948],\n [ 0.3667, -0.9567, -2.5757, -0.1751],\n [ 0.2046, -0.0742, 0.2998, -0.1054]])\n >>> b = torch.randn(4)\n tensor([ 0.2373, 0.3120, 0.3190, -1.1128])\n >>> torch.copysign(a, b)\n tensor([[ 0.7079, 0.2778, 1.0249, -0.5719],\n [ 0.0059, 0.2600, 0.4475, -1.3948],\n [ 0.3667, 0.9567, 2.5757, -0.1751],\n [ 0.2046, 0.0742, 0.2998, -0.1054]])\n >>> a = torch.tensor([1.])\n >>> b = torch.tensor([-0.])\n >>> torch.copysign(a, b)\n tensor([-1.])\n\n\n\nNote:\n copysign handles signed zeros. 
If the other argument has a\n negative zero (-0), the corresponding output value will be\n negative.\n", "source": "https://pytorch.org/docs/stable/generated/torch.copysign.html", "category": "pytorch docs"} {"text": "torch.Tensor.histc\nTensor.histc(bins=100, min=0, max=0) -> Tensor\nSee \"torch.histc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.histc.html", "category": "pytorch docs"} {"text": "torch.pca_lowrank\ntorch.pca_lowrank(A, q=None, center=True, niter=2)\nPerforms linear Principal Component Analysis (PCA) on a low-rank\n matrix, batches of such matrices, or sparse matrix.\nThis function returns a namedtuple \"(U, S, V)\" which is the nearly\n optimal approximation of a singular value decomposition of a\n centered matrix A such that A = U diag(S) V^T.\nNote:\n The relation of \"(U, S, V)\" to PCA is as follows:\n\n * A is a data matrix with \"m\" samples and \"n\" features\n\n * the V columns represent the principal directions\n\n * S ** 2 / (m - 1) contains the eigenvalues of A^T A / (m - 1)\n which is the covariance of \"A\" when \"center=True\" is provided.\n\n * \"matmul(A, V[:, :k])\" projects data to the first k principal\n components\n\nNote:\n Different from the standard SVD, the size of returned matrices\n depend on the specified rank and q values as follows:\n\n * U is m x q matrix\n\n * S is q-vector\n", "source": "https://pytorch.org/docs/stable/generated/torch.pca_lowrank.html", "category": "pytorch docs"} {"text": "\nS is q-vector* V is n x q matrix\n\n\n\nNote:\n To obtain repeatable results, reset the seed for the pseudorandom\n number generator\n\nParameters:\n * A (Tensor) -- the input tensor of size (*, m, n)\n * **q** (*int**, **optional*) -- a slightly overestimated rank\n of A. By default, \"q = min(6, m, n)\".\n\n * **center** (*bool**, **optional*) -- if True, center the input\n tensor, otherwise, assume that the input is centered.\n\n * **niter** (*int**, **optional*) -- the number of subspace\n iterations to conduct; niter must be a nonnegative integer,\n and defaults to 2.\n\nReturn type:\n Tuple[Tensor, Tensor, Tensor]\nReferences:\n - Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding\n structure with randomness: probabilistic algorithms for\n constructing approximate matrix decompositions,\n arXiv:0909.4061 [math.NA; math.PR], 2009 (available at\n", "source": "https://pytorch.org/docs/stable/generated/torch.pca_lowrank.html", "category": "pytorch docs"} {"text": "arXiv _).", "source": "https://pytorch.org/docs/stable/generated/torch.pca_lowrank.html", "category": "pytorch docs"} {"text": "torch.unique\ntorch.unique(input, sorted=True, return_inverse=False, return_counts=False, dim=None) -> Tuple[Tensor, Tensor, Tensor]\nReturns the unique elements of the input tensor.\nNote:\n This function is different from \"torch.unique_consecutive()\" in\n the sense that this function also eliminates non-consecutive\n duplicate values.\n\nNote:\n Currently in the CUDA implementation and the CPU implementation\n when dim is specified, *torch.unique* always sort the tensor at\n the beginning regardless of the *sort* argument. 
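(Aside: a small sketch on an illustrative, already-sorted tensor, comparing torch.unique with torch.unique_consecutive, which skips the internal sort.)

    import torch

    x = torch.tensor([1, 1, 2, 2, 2, 3, 5, 5])   # already sorted

    print(torch.unique(x))              # tensor([1, 2, 3, 5]) -- may sort internally anyway
    print(torch.unique_consecutive(x))  # tensor([1, 2, 3, 5]) -- no sort needed for sorted input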
Sorting could be\n slow, so if your input tensor is already sorted, it is\n recommended to use \"torch.unique_consecutive()\" which avoids the\n sorting.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **sorted** (*bool*) -- Whether to sort the unique elements in\n ascending order before returning as output.\n\n * **return_inverse** (*bool*) -- Whether to also return the\n", "source": "https://pytorch.org/docs/stable/generated/torch.unique.html", "category": "pytorch docs"} {"text": "indices for where elements in the original input ended up in\n the returned unique list.\n * **return_counts** (*bool*) -- Whether to also return the\n counts for each unique element.\n\n * **dim** (*int*) -- the dimension to apply unique. If \"None\",\n the unique of the flattened input is returned. default: \"None\"\n\nReturns:\n A tensor or a tuple of tensors containing\n * **output** (*Tensor*): the output list of unique scalar\n elements.\n\n * **inverse_indices** (*Tensor*): (optional) if\n \"return_inverse\" is True, there will be an additional\n returned tensor (same shape as input) representing the\n indices for where elements in the original input map to in\n the output; otherwise, this function will only return a\n single tensor.\n\n * **counts** (*Tensor*): (optional) if \"return_counts\" is\n True, there will be an additional returned tensor (same\n", "source": "https://pytorch.org/docs/stable/generated/torch.unique.html", "category": "pytorch docs"} {"text": "shape as output or output.size(dim), if dim was specified)\n representing the number of occurrences for each unique\n value or tensor.\nReturn type:\n (Tensor, Tensor (optional), Tensor (optional))\nExample:\n >>> output = torch.unique(torch.tensor([1, 3, 2, 3], dtype=torch.long))\n >>> output\n tensor([1, 2, 3])\n\n >>> output, inverse_indices = torch.unique(\n ... torch.tensor([1, 3, 2, 3], dtype=torch.long), sorted=True, return_inverse=True)\n >>> output\n tensor([1, 2, 3])\n >>> inverse_indices\n tensor([0, 2, 1, 2])\n\n >>> output, inverse_indices = torch.unique(\n ... torch.tensor([[1, 3], [2, 3]], dtype=torch.long), sorted=True, return_inverse=True)\n >>> output\n tensor([1, 2, 3])\n >>> inverse_indices\n tensor([[0, 2],\n [1, 2]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.unique.html", "category": "pytorch docs"} {"text": "AdaptiveLogSoftmaxWithLoss\nclass torch.nn.AdaptiveLogSoftmaxWithLoss(in_features, n_classes, cutoffs, div_value=4.0, head_bias=False, device=None, dtype=None)\nEfficient softmax approximation as described in Efficient softmax\n approximation for GPUs by Edouard Grave, Armand Joulin, Moustapha\n Ciss\u00c3\u00a9, David Grangier, and Herv\u00c3\u00a9 J\u00c3\u00a9gou.\nAdaptive softmax is an approximate strategy for training models\n with large output spaces. It is most effective when the label\n distribution is highly imbalanced, for example in natural language\n modelling, where the word frequency distribution approximately\n follows the Zipf's law.\nAdaptive softmax partitions the labels into several clusters,\n according to their frequency. These clusters may contain different\n number of targets each. Additionally, clusters containing less\n frequent labels assign lower dimensional embeddings to those\n labels, which speeds up the computation. 
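(Aside: a minimal end-to-end sketch of the module described here; the feature size, vocabulary size, cutoff values, and batch below are made-up example values, and the targets are assumed to be frequency-ordered class indices as the module requires.)

    import torch
    import torch.nn as nn

    # 1000-way classification from 64-dimensional features; the clusters are
    # head: 0-9, cluster 1: 10-99, cluster 2: 100-499, cluster 3: 500-999.
    asm = nn.AdaptiveLogSoftmaxWithLoss(
        in_features=64,
        n_classes=1000,
        cutoffs=[10, 100, 500],
        div_value=4.0,
    )

    x = torch.randn(32, 64)                  # minibatch of 32 examples
    target = torch.randint(0, 1000, (32,))   # frequency-ordered labels (illustrative)

    out, loss = asm(x, target)               # NamedTuple fields: output, loss
    print(out.shape, loss.item())            # torch.Size([32]) and a scalar loss

    log_probs = asm.log_prob(x)              # full (32, 1000) table of log-probabilities
    pred = asm.predict(x)                    # most likely class per example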
For each minibatch, only", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html", "category": "pytorch docs"} {"text": "clusters for which at least one target is present are evaluated.\nThe idea is that the clusters which are accessed frequently (like\n the first one, containing most frequent labels), should also be\n cheap to compute -- that is, contain a small number of assigned\n labels.\nWe highly recommend taking a look at the original paper for more\n details.\n\n\n\"cutoffs\" should be an ordered Sequence of integers sorted in the\n increasing order. It controls number of clusters and the\n partitioning of targets into clusters. For example setting\n \"cutoffs = [10, 100, 1000]\" means that first 10 targets will be\n assigned to the 'head' of the adaptive softmax, targets 11, 12,\n ..., 100 will be assigned to the first cluster, and targets\n 101, 102, ..., 1000 will be assigned to the second cluster,\n while targets 1001, 1002, ..., n_classes - 1 will be assigned\n to the last, third cluster.\n\n\n\"div_value\" is used to compute the size of each additional\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html", "category": "pytorch docs"} {"text": "cluster, which is given as \\left\\lfloor\\frac{\\texttt{in_feature\n s}}{\\texttt{div_value}^{idx}}\\right\\rfloor, where idx is the\n cluster index (with clusters for less frequent words having\n larger indices, and indices starting from 1).\n\n\"head_bias\" if set to True, adds a bias term to the 'head' of the\n adaptive softmax. See paper for details. Set to False in the\n official implementation.\n\nWarning:\n Labels passed as inputs to this module should be sorted according\n to their frequency. This means that the most frequent label\n should be represented by the index *0*, and the least frequent\n label should be represented by the index *n_classes - 1*.\n\nNote:\n This module returns a \"NamedTuple\" with \"output\" and \"loss\"\n fields. See further documentation for details.\n\nNote:\n To compute log-probabilities for all classes, the \"log_prob\"\n method can be used.\n\nParameters:\n * in_features (int) -- Number of features in the input", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html", "category": "pytorch docs"} {"text": "tensor\n * **n_classes** (*int*) -- Number of classes in the dataset\n\n * **cutoffs** (*Sequence*) -- Cutoffs used to assign targets to\n their buckets\n\n * **div_value** (*float**, **optional*) -- value used as an\n exponent to compute sizes of the clusters. Default: 4.0\n\n * **head_bias** (*bool**, **optional*) -- If \"True\", adds a bias\n term to the 'head' of the adaptive softmax. 
Default: \"False\"\n\nReturns:\n * output is a Tensor of size \"N\" containing computed target\n log probabilities for each example\n * **loss** is a Scalar representing the computed negative log\n likelihood loss\n\nReturn type:\n \"NamedTuple\" with \"output\" and \"loss\" fields\nShape:\n * input: (N, \\texttt{in_features}) or (\\texttt{in_features})\n * target: (N) or () where each value satisfies 0 <=\n \\texttt{target[i]} <= \\texttt{n\\_classes}\n\n * output1: (N) or ()\n\n * output2: \"Scalar\"\n\nlog_prob(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html", "category": "pytorch docs"} {"text": "\noutput2: \"Scalar\"\n\nlog_prob(input)\n Computes log probabilities for all \\texttt{n\\_classes}\n\n Parameters:\n **input** (*Tensor*) -- a minibatch of examples\n\n Returns:\n log-probabilities of for each class c in range 0 <= c <=\n \\texttt{n\\_classes}, where \\texttt{n\\_classes} is a parameter\n passed to \"AdaptiveLogSoftmaxWithLoss\" constructor.\n\n Return type:\n *Tensor*\n\n Shape:\n * Input: (N, \\texttt{in\\_features})\n\n * Output: (N, \\texttt{n\\_classes})\n\npredict(input)\n This is equivalent to *self.log_prob(input).argmax(dim=1)*, but\n is more efficient in some cases.\n\n Parameters:\n **input** (*Tensor*) -- a minibatch of examples\n\n Returns:\n a class with the highest probability for each example\n\n Return type:\n output (Tensor)\n\n Shape:\n * Input: (N, \\texttt{in\\_features})\n\n * Output: (N)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveLogSoftmaxWithLoss.html", "category": "pytorch docs"} {"text": "torch.atleast_1d\ntorch.atleast_1d(*tensors)\nReturns a 1-dimensional view of each input tensor with zero\n dimensions. Input tensors with one or more dimensions are returned\n as-is.\nParameters:\n input (Tensor or list of Tensors) --\nReturns:\n output (Tensor or tuple of Tensors)\nExample:\n >>> x = torch.arange(2)\n >>> x\n tensor([0, 1])\n >>> torch.atleast_1d(x)\n tensor([0, 1])\n >>> x = torch.tensor(1.)\n >>> x\n tensor(1.)\n >>> torch.atleast_1d(x)\n tensor([1.])\n >>> x = torch.tensor(0.5)\n >>> y = torch.tensor(1.)\n >>> torch.atleast_1d((x, y))\n (tensor([0.5000]), tensor([1.]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.atleast_1d.html", "category": "pytorch docs"} {"text": "torch.set_flush_denormal\ntorch.set_flush_denormal(mode) -> bool\nDisables denormal floating numbers on CPU.\nReturns \"True\" if your system supports flushing denormal numbers\n and it successfully configures flush denormal mode.\n \"set_flush_denormal()\" is only supported on x86 architectures\n supporting SSE3.\nParameters:\n mode (bool) -- Controls whether to enable flush denormal\n mode or not\nExample:\n >>> torch.set_flush_denormal(True)\n True\n >>> torch.tensor([1e-323], dtype=torch.float64)\n tensor([ 0.], dtype=torch.float64)\n >>> torch.set_flush_denormal(False)\n True\n >>> torch.tensor([1e-323], dtype=torch.float64)\n tensor(9.88131e-324 *\n [ 1.0000], dtype=torch.float64)\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_flush_denormal.html", "category": "pytorch docs"} {"text": "SiLU\nclass torch.nn.SiLU(inplace=False)\nApplies the Sigmoid Linear Unit (SiLU) function, element-wise. 
The\n SiLU function is also known as the swish function.\n \\text{silu}(x) = x * \\sigma(x), \\text{where } \\sigma(x) \\text{\n is the logistic sigmoid.}\n\nNote:\n See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid\n Linear Unit) was originally coined, and see Sigmoid-Weighted\n Linear Units for Neural Network Function Approximation in\n Reinforcement Learning and Swish: a Self-Gated Activation\n Function where the SiLU was experimented with later.\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.SiLU()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SiLU.html", "category": "pytorch docs"} {"text": "torch.Tensor.flatten\nTensor.flatten(start_dim=0, end_dim=- 1) -> Tensor\nSee \"torch.flatten()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.flatten.html", "category": "pytorch docs"} {"text": "ELU\nclass torch.ao.nn.quantized.ELU(scale, zero_point, alpha=1.0)\nThis is the quantized equivalent of \"ELU\".\nParameters:\n * scale -- quantization scale of the output tensor\n * **zero_point** -- quantization zero point of the output tensor\n\n * **alpha** (*float*) -- the alpha constant\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ELU.html", "category": "pytorch docs"} {"text": "torch.nn.functional.torch.nn.parallel.data_parallel\ntorch.nn.parallel.data_parallel(module, inputs, device_ids=None, output_device=None, dim=0, module_kwargs=None)\nEvaluates module(input) in parallel across the GPUs given in\n device_ids.\nThis is the functional version of the DataParallel module.\nParameters:\n * module (Module) -- the module to evaluate in parallel\n * **inputs** (*Tensor*) -- inputs to the module\n\n * **device_ids** (*list of python:int** or **torch.device*) --\n GPU ids on which to replicate module\n\n * **output_device** (*list of python:int** or **torch.device*)\n -- GPU location of the output Use -1 to indicate the CPU.\n (default: device_ids[0])\n\nReturns:\n a Tensor containing the result of module(input) located on\n output_device", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.torch.nn.parallel.data_parallel.html", "category": "pytorch docs"} {"text": "torch.cuda.init\ntorch.cuda.init()\nInitialize PyTorch's CUDA state. You may need to call this\n explicitly if you are interacting with PyTorch via its C API, as\n Python bindings for CUDA functionality will not be available until\n this initialization takes place. Ordinary users should not need\n this, as all of PyTorch's CUDA methods automatically initialize\n CUDA state on-demand.\nDoes nothing if the CUDA state is already initialized.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.init.html", "category": "pytorch docs"} {"text": "torch.Tensor.real\nTensor.real\nReturns a new tensor containing real values of the \"self\" tensor\n for a complex-valued input tensor. 
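(Aside: a small sketch, on an arbitrary complex tensor, showing that the view returned by .real writes through to the original tensor.)

    import torch

    x = torch.tensor([1.0 + 2.0j, 3.0 - 4.0j])
    x.real += 10        # in-place update through the real view
    print(x)            # tensor([11.+2.j, 13.-4.j]) -- storage is shared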
The returned tensor and \"self\"\n share the same underlying storage.\nReturns \"self\" if \"self\" is a real-valued tensor tensor.\nExample::\n >>> x=torch.randn(4, dtype=torch.cfloat)\n >>> x\n tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])\n >>> x.real\n tensor([ 0.3100, -0.5445, -1.6492, -0.0638])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.real.html", "category": "pytorch docs"} {"text": "torch.signal.windows.bartlett\ntorch.signal.windows.bartlett(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\nComputes the Bartlett window.\nThe Bartlett window is defined as follows:\n w_n = 1 - \\left| \\frac{2n}{M - 1} - 1 \\right| = \\begin{cases}\n \\frac{2n}{M - 1} & \\text{if } 0 \\leq n \\leq \\frac{M - 1}{2} \\\\\n 2 - \\frac{2n}{M - 1} & \\text{if } \\frac{M - 1}{2} < n < M \\\\\n \\end{cases}\n\nThe window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\nParameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\nKeyword Arguments:\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.bartlett.html", "category": "pytorch docs"} {"text": "design. Default: True.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nReturn type:\n Tensor\nExamples:\n >>> # Generates a symmetric Bartlett window.\n >>> torch.signal.windows.bartlett(10)\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.bartlett.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.signal.windows.bartlett(10)\n tensor([0.0000, 0.2222, 0.4444, 0.6667, 0.8889, 0.8889, 0.6667, 0.4444, 0.2222, 0.0000])\n\n\n\n >>> # Generates a periodic Bartlett window.\n >>> torch.signal.windows.bartlett(10, sym=False)\n tensor([0.0000, 0.2000, 0.4000, 0.6000, 0.8000, 1.0000, 0.8000, 0.6000, 0.4000, 0.2000])\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.bartlett.html", "category": "pytorch docs"} {"text": "torch.poisson\ntorch.poisson(input, generator=None) -> Tensor\nReturns a tensor of the same size as \"input\" with each element\n sampled from a Poisson distribution with rate parameter given by\n the corresponding element in \"input\" i.e.,\n \\text{out}_i \\sim \\text{Poisson}(\\text{input}_i)\n\n\"input\" must be non-negative.\nParameters:\n input (Tensor) -- the input tensor containing the rates of\n the Poisson distribution\nKeyword Arguments:\n generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling\nExample:\n >>> rates = torch.rand(4, 4) * 5 # rate parameter between 0 and 5\n >>> torch.poisson(rates)\n tensor([[9., 1., 3., 5.],\n [8., 6., 6., 0.],\n [0., 4., 5., 3.],\n [2., 1., 4., 2.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.poisson.html", "category": "pytorch docs"} {"text": "torch.asin\ntorch.asin(input, *, out=None) -> Tensor\nReturns a new tensor with the arcsine of the elements of \"input\".\n \\text{out}_{i} = \\sin^{-1}(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([-0.5962, 1.4985, -0.4396, 1.4525])\n >>> torch.asin(a)\n tensor([-0.6387, nan, -0.4552, nan])\n", "source": "https://pytorch.org/docs/stable/generated/torch.asin.html", "category": "pytorch docs"} {"text": "torch.Tensor.arcsin_\nTensor.arcsin_() -> Tensor\nIn-place version of \"arcsin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arcsin_.html", "category": "pytorch docs"} {"text": "torch.Tensor.geqrf\nTensor.geqrf()\nSee \"torch.geqrf()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.geqrf.html", "category": "pytorch docs"} {"text": "torch.Tensor.where\nTensor.where(condition, y) -> Tensor\n\"self.where(condition, y)\" is equivalent to \"torch.where(condition,\n self, y)\". See \"torch.where()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.where.html", "category": "pytorch docs"} {"text": "torch.sym_min\ntorch.sym_min(a, b)\nSymInt-aware utility for max().", "source": "https://pytorch.org/docs/stable/generated/torch.sym_min.html", "category": "pytorch docs"} {"text": "torch.index_add\ntorch.index_add(input, dim, index, source, *, alpha=1, out=None) -> Tensor\nSee \"index_add_()\" for function description.", "source": "https://pytorch.org/docs/stable/generated/torch.index_add.html", "category": "pytorch docs"} {"text": "HuberLoss\nclass torch.nn.HuberLoss(reduction='mean', delta=1.0)\nCreates a criterion that uses a squared term if the absolute\n element-wise error falls below delta and a delta-scaled L1 term\n otherwise. 
This loss combines advantages of both \"L1Loss\" and\n \"MSELoss\"; the delta-scaled L1 region makes the loss less sensitive\n to outliers than \"MSELoss\", while the L2 region provides smoothness\n over \"L1Loss\" near 0. See Huber loss for more information.\nFor a batch of size N, the unreduced loss can be described as:\n \\ell(x, y) = L = \\{l_1, ..., l_N\\}^T\n\nwith\n l_n = \\begin{cases} 0.5 (x_n - y_n)^2, & \\text{if } |x_n - y_n|\n < delta \\\\ delta * (|x_n - y_n| - 0.5 * delta), &\n \\text{otherwise } \\end{cases}\n\nIf reduction is not none, then:\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{`mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{`sum'.}\n \\end{cases}\n\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.HuberLoss.html", "category": "pytorch docs"} {"text": "\\end{cases}\nNote:\n When delta is set to 1, this loss is equivalent to\n \"SmoothL1Loss\". In general, this loss differs from \"SmoothL1Loss\"\n by a factor of delta (AKA beta in Smooth L1). See \"SmoothL1Loss\"\n for additional discussion on the differences in behavior between\n the two losses.\n\nParameters:\n * reduction (str, optional) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Default: \"'mean'\"\n * **delta** (*float**, **optional*) -- Specifies the threshold\n at which to change between delta-scaled L1 and L2 loss. The\n value must be positive. Default: 1.0\n\nShape:\n * Input: (*) where * means any number of dimensions.\n * Target: (*), same shape as the input.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.HuberLoss.html", "category": "pytorch docs"} {"text": "\n\nTarget: (*), same shape as the input.\n\nOutput: scalar. If \"reduction\" is \"'none'\", then (*), same\n shape as the input.\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.HuberLoss.html", "category": "pytorch docs"} {"text": "Module\nclass torch.nn.Module\nBase class for all neural network modules.\nYour models should also subclass this class.\nModules can also contain other Modules, allowing to nest them in a\n tree structure. You can assign the submodules as regular\n attributes:\n import torch.nn as nn\n import torch.nn.functional as F\n\n class Model(nn.Module):\n def __init__(self):\n super().__init__()\n self.conv1 = nn.Conv2d(1, 20, 5)\n self.conv2 = nn.Conv2d(20, 20, 5)\n\n def forward(self, x):\n x = F.relu(self.conv1(x))\n return F.relu(self.conv2(x))\n\nSubmodules assigned in this way will be registered, and will have\n their parameters converted too when you call \"to()\", etc.\nNote:\n As per the example above, an \"__init__()\" call to the parent\n class must be made before assignment on the child.\n\nVariables:\n training (bool) -- Boolean represents whether this module", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "is in training or evaluation mode.\nadd_module(name, module)\n Adds a child module to the current module.\n\n The module can be accessed as an attribute using the given name.\n\n Parameters:\n * **name** (*str*) -- name of the child module. 
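(Aside: a small sketch of registering a child module dynamically; the name "encoder" and the layer sizes are arbitrary.)

    import torch.nn as nn

    model = nn.Module()
    model.add_module("encoder", nn.Linear(10, 5))

    # The child is registered and reachable as an attribute.
    print(model.encoder)                                  # Linear(in_features=10, out_features=5, bias=True)
    print([name for name, _ in model.named_children()])   # ['encoder']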
The child\n module can be accessed from this module using the given\n name\n\n * **module** (*Module*) -- child module to be added to the\n module.\n\napply(fn)\n Applies \"fn\" recursively to every submodule (as returned by\n \".children()\") as well as self. Typical use includes\n initializing the parameters of a model (see also torch.nn.init).\n\n Parameters:\n **fn** (\"Module\" -> None) -- function to be applied to each\n submodule\n\n Returns:\n self\n\n Return type:\n Module\n\n Example:\n\n >>> @torch.no_grad()\n >>> def init_weights(m):\n >>> print(m)\n >>> if type(m) == nn.Linear:\n >>> m.weight.fill_(1.0)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "\n\n\n m.weight.fill_(1.0)\n >>> print(m.weight)\n >>> net = nn.Sequential(nn.Linear(2, 2), nn.Linear(2, 2))\n >>> net.apply(init_weights)\n Linear(in_features=2, out_features=2, bias=True)\n Parameter containing:\n tensor([[1., 1.],\n [1., 1.]], requires_grad=True)\n Linear(in_features=2, out_features=2, bias=True)\n Parameter containing:\n tensor([[1., 1.],\n [1., 1.]], requires_grad=True)\n Sequential(\n (0): Linear(in_features=2, out_features=2, bias=True)\n (1): Linear(in_features=2, out_features=2, bias=True)\n )\n\n\n\n\nbfloat16()\n Casts all floating point parameters and buffers to \"bfloat16\"\n datatype.\n\n Note:\n\n This method modifies the module in-place.\n\n Returns:\n self\n\n Return type:\n Module\n\nbuffers(recurse=True)\n Returns an iterator over module buffers.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "Parameters:\n recurse (bool) -- if True, then yields buffers of this\n module and all submodules. Otherwise, yields only buffers\n that are direct members of this module.\n Yields:\n *torch.Tensor* -- module buffer\n\n Return type:\n *Iterator*[*Tensor*]\n\n Example:\n\n >>> for buf in model.buffers():\n >>> print(type(buf), buf.size())\n (20L,)\n (20L, 1L, 5L, 5L)\n\nchildren()\n Returns an iterator over immediate children modules.\n\n Yields:\n *Module* -- a child module\n\n Return type:\n *Iterator*[*Module*]\n\ncpu()\n Moves all model parameters and buffers to the CPU.\n\n Note:\n\n This method modifies the module in-place.\n\n Returns:\n self\n\n Return type:\n Module\n\ncuda(device=None)\n Moves all model parameters and buffers to the GPU.\n\n This also makes associated parameters and buffers different\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "objects. So it should be called before constructing optimizer if\n the module will live on GPU while being optimized.\n Note:\n\n This method modifies the module in-place.\n\n Parameters:\n **device** (*int**, **optional*) -- if specified, all\n parameters will be copied to that device\n\n Returns:\n self\n\n Return type:\n Module\n\ndouble()\n Casts all floating point parameters and buffers to \"double\"\n datatype.\n\n Note:\n\n This method modifies the module in-place.\n\n Returns:\n self\n\n Return type:\n Module\n\neval()\n Sets the module in evaluation mode.\n\n This has any effect only on certain modules. See documentations\n of particular modules for details of their behaviors in\n training/evaluation mode, if they are affected, e.g. 
\"Dropout\",\n \"BatchNorm\", etc.\n\n This is equivalent with \"self.train(False)\".\n\n See Locally disabling gradient computation for a comparison\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "between .eval() and several similar mechanisms that may be\n confused with it.\n Returns:\n self\n\n Return type:\n Module\n\nextra_repr()\n Set the extra representation of the module\n\n To print customized extra information, you should re-implement\n this method in your own modules. Both single-line and multi-line\n strings are acceptable.\n\n Return type:\n str\n\nfloat()\n Casts all floating point parameters and buffers to \"float\"\n datatype.\n\n Note:\n\n This method modifies the module in-place.\n\n Returns:\n self\n\n Return type:\n Module\n\nforward(*input)\n Defines the computation performed at every call.\n\n Should be overridden by all subclasses.\n\n Note:\n\n Although the recipe for forward pass needs to be defined\n within this function, one should call the \"Module\" instance\n afterwards instead of this since the former takes care of\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "running the registered hooks while the latter silently ignores\n them.\nget_buffer(target)\n Returns the buffer given by \"target\" if it exists, otherwise\n throws an error.\n\n See the docstring for \"get_submodule\" for a more detailed\n explanation of this method's functionality as well as how to\n correctly specify \"target\".\n\n Parameters:\n **target** (*str*) -- The fully-qualified string name of the\n buffer to look for. (See \"get_submodule\" for how to specify a\n fully-qualified string.)\n\n Returns:\n The buffer referenced by \"target\"\n\n Return type:\n torch.Tensor\n\n Raises:\n **AttributeError** -- If the target string references an\n invalid path or resolves to something that is not a\n buffer\n\nget_extra_state()\n Returns any extra state to include in the module's state_dict.\n Implement this and a corresponding \"set_extra_state()\" for your\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "module if you need to store extra state. This function is called\n when building the module's state_dict().\n Note that extra state should be picklable to ensure working\n serialization of the state_dict. We only provide provide\n backwards compatibility guarantees for serializing Tensors;\n other objects may break backwards compatibility if their\n serialized pickled form changes.\n\n Returns:\n Any extra state to store in the module's state_dict\n\n Return type:\n object\n\nget_parameter(target)\n Returns the parameter given by \"target\" if it exists, otherwise\n throws an error.\n\n See the docstring for \"get_submodule\" for a more detailed\n explanation of this method's functionality as well as how to\n correctly specify \"target\".\n\n Parameters:\n **target** (*str*) -- The fully-qualified string name of the\n Parameter to look for. 
(See \"get_submodule\" for how to\n specify a fully-qualified string.)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "specify a fully-qualified string.)\n Returns:\n The Parameter referenced by \"target\"\n\n Return type:\n torch.nn.Parameter\n\n Raises:\n **AttributeError** -- If the target string references an\n invalid path or resolves to something that is not an\n \"nn.Parameter\"\n\nget_submodule(target)\n Returns the submodule given by \"target\" if it exists, otherwise\n throws an error.\n\n For example, let's say you have an \"nn.Module\" \"A\" that looks\n like this:\n\n A(\n (net_b): Module(\n (net_c): Module(\n (conv): Conv2d(16, 33, kernel_size=(3, 3), stride=(2, 2))\n )\n (linear): Linear(in_features=100, out_features=200, bias=True)\n )\n )\n\n (The diagram shows an \"nn.Module\" \"A\". \"A\" has a nested\n submodule \"net_b\", which itself has two submodules \"net_c\" and\n \"linear\". \"net_c\" then has a submodule \"conv\".)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "To check whether or not we have the \"linear\" submodule, we would\n call \"get_submodule(\"net_b.linear\")\". To check whether we have\n the \"conv\" submodule, we would call\n \"get_submodule(\"net_b.net_c.conv\")\".\n The runtime of \"get_submodule\" is bounded by the degree of\n module nesting in \"target\". A query against \"named_modules\"\n achieves the same result, but it is O(N) in the number of\n transitive modules. So, for a simple check to see if some\n submodule exists, \"get_submodule\" should always be used.\n\n Parameters:\n **target** (*str*) -- The fully-qualified string name of the\n submodule to look for. (See above example for how to specify\n a fully-qualified string.)\n\n Returns:\n The submodule referenced by \"target\"\n\n Return type:\n torch.nn.Module\n\n Raises:\n **AttributeError** -- If the target string references an\n invalid path or resolves to something that is not an\n \"nn.Module\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "\"nn.Module\"\nhalf()\n Casts all floating point parameters and buffers to \"half\"\n datatype.\n\n Note:\n\n This method modifies the module in-place.\n\n Returns:\n self\n\n Return type:\n Module\n\nipu(device=None)\n Moves all model parameters and buffers to the IPU.\n\n This also makes associated parameters and buffers different\n objects. So it should be called before constructing optimizer if\n the module will live on IPU while being optimized.\n\n Note:\n\n This method modifies the module in-place.\n\n Parameters:\n **device** (*int**, **optional*) -- if specified, all\n parameters will be copied to that device\n\n Returns:\n self\n\n Return type:\n Module\n\nload_state_dict(state_dict, strict=True)\n Copies parameters and buffers from \"state_dict\" into this module\n and its descendants. If \"strict\" is \"True\", then the keys of\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "\"state_dict\" must exactly match the keys returned by this\n module's \"state_dict()\" function.\n Parameters:\n * **state_dict** (*dict*) -- a dict containing parameters and\n persistent buffers.\n\n * **strict** (*bool**, **optional*) -- whether to strictly\n enforce that the keys in \"state_dict\" match the keys\n returned by this module's \"state_dict()\" function. 
Default:\n \"True\"\n\n Returns:\n * **missing_keys** is a list of str containing the missing\n keys\n\n * **unexpected_keys** is a list of str containing the\n unexpected keys\n\n Return type:\n \"NamedTuple\" with \"missing_keys\" and \"unexpected_keys\" fields\n\n Note:\n\n If a parameter or buffer is registered as \"None\" and its\n corresponding key exists in \"state_dict\", \"load_state_dict()\"\n will raise a \"RuntimeError\".\n\nmodules()\n Returns an iterator over all modules in the network.\n\n Yields:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "Yields:\n Module -- a module in the network\n Return type:\n *Iterator*[*Module*]\n\n Note:\n\n Duplicate modules are returned only once. In the following\n example, \"l\" will be returned only once.\n\n Example:\n\n >>> l = nn.Linear(2, 2)\n >>> net = nn.Sequential(l, l)\n >>> for idx, m in enumerate(net.modules()):\n ... print(idx, '->', m)\n\n 0 -> Sequential(\n (0): Linear(in_features=2, out_features=2, bias=True)\n (1): Linear(in_features=2, out_features=2, bias=True)\n )\n 1 -> Linear(in_features=2, out_features=2, bias=True)\n\nnamed_buffers(prefix='', recurse=True, remove_duplicate=True)\n Returns an iterator over module buffers, yielding both the name\n of the buffer as well as the buffer itself.\n\n Parameters:\n * **prefix** (*str*) -- prefix to prepend to all buffer\n names.\n\n * **recurse** (*bool**, **optional*) -- if True, then yields\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "buffers of this module and all submodules. Otherwise,\n yields only buffers that are direct members of this module.\n Defaults to True.\n * **remove_duplicate** (*bool**, **optional*) -- whether to\n remove the duplicated buffers in the result. Defaults to\n True.\n\n Yields:\n *(str, torch.Tensor)* -- Tuple containing the name and buffer\n\n Return type:\n *Iterator*[*Tuple*[str, *Tensor*]]\n\n Example:\n\n >>> for name, buf in self.named_buffers():\n >>> if name in ['running_var']:\n >>> print(buf.size())\n\nnamed_children()\n Returns an iterator over immediate children modules, yielding\n both the name of the module as well as the module itself.\n\n Yields:\n *(str, Module)* -- Tuple containing a name and child module\n\n Return type:\n *Iterator*[*Tuple*[str, *Module*]]\n\n Example:\n\n >>> for name, module in model.named_children():\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "\n\n\nif name in ['conv4', 'conv5']:\n >>> print(module)\n\n\n\n\nnamed_modules(memo=None, prefix='', remove_duplicate=True)\n Returns an iterator over all modules in the network, yielding\n both the name of the module as well as the module itself.\n\n Parameters:\n * **memo** (*Optional**[**Set**[**Module**]**]*) -- a memo to\n store the set of modules already added to the result\n\n * **prefix** (*str*) -- a prefix that will be added to the\n name of the module\n\n * **remove_duplicate** (*bool*) -- whether to remove the\n duplicated module instances in the result or not\n\n Yields:\n *(str, Module)* -- Tuple of name and module\n\n Note:\n\n Duplicate modules are returned only once. 
In the following\n example, \"l\" will be returned only once.\n\n Example:\n\n >>> l = nn.Linear(2, 2)\n >>> net = nn.Sequential(l, l)\n >>> for idx, m in enumerate(net.named_modules()):\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "... print(idx, '->', m)\n 0 -> ('', Sequential(\n (0): Linear(in_features=2, out_features=2, bias=True)\n (1): Linear(in_features=2, out_features=2, bias=True)\n ))\n 1 -> ('0', Linear(in_features=2, out_features=2, bias=True))\n\nnamed_parameters(prefix='', recurse=True, remove_duplicate=True)\n Returns an iterator over module parameters, yielding both the\n name of the parameter as well as the parameter itself.\n\n Parameters:\n * **prefix** (*str*) -- prefix to prepend to all parameter\n names.\n\n * **recurse** (*bool*) -- if True, then yields parameters of\n this module and all submodules. Otherwise, yields only\n parameters that are direct members of this module.\n\n * **remove_duplicate** (*bool**, **optional*) -- whether to\n remove the duplicated parameters in the result. Defaults to\n True.\n\n Yields:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "True.\n Yields:\n *(str, Parameter)* -- Tuple containing the name and parameter\n\n Return type:\n *Iterator*[*Tuple*[str, *Parameter*]]\n\n Example:\n\n >>> for name, param in self.named_parameters():\n >>> if name in ['bias']:\n >>> print(param.size())\n\nparameters(recurse=True)\n Returns an iterator over module parameters.\n\n This is typically passed to an optimizer.\n\n Parameters:\n **recurse** (*bool*) -- if True, then yields parameters of\n this module and all submodules. Otherwise, yields only\n parameters that are direct members of this module.\n\n Yields:\n *Parameter* -- module parameter\n\n Return type:\n *Iterator*[*Parameter*]\n\n Example:\n\n >>> for param in model.parameters():\n >>> print(type(param), param.size())\n (20L,)\n (20L, 1L, 5L, 5L)\n\nregister_backward_hook(hook)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "register_backward_hook(hook)\n Registers a backward hook on the module.\n\n This function is deprecated in favor of\n \"register_full_backward_hook()\" and the behavior of this\n function will change in future versions.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n\nregister_buffer(name, tensor, persistent=True)\n Adds a buffer to the module.\n\n This is typically used to register a buffer that should not to\n be considered a model parameter. For example, BatchNorm's\n \"running_mean\" is not a parameter, but is part of the module's\n state. Buffers, by default, are persistent and will be saved\n alongside parameters. This behavior can be changed by setting\n \"persistent\" to \"False\". The only difference between a\n persistent buffer and a non-persistent buffer is that the latter\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "will not be a part of this module's \"state_dict\".\n Buffers can be accessed as attributes using given names.\n\n Parameters:\n * **name** (*str*) -- name of the buffer. The buffer can be\n accessed from this module using the given name\n\n * **tensor** (*Tensor** or **None*) -- buffer to be\n registered. 
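(Aside: a minimal sketch of a buffer in practice; the RunningStats module name and its size are made up for the example. The buffer appears in state_dict and follows .to()/.cuda(), but it is not a parameter and is not updated by optimizers.)

    import torch
    import torch.nn as nn

    class RunningStats(nn.Module):        # hypothetical example module
        def __init__(self, num_features=3):
            super().__init__()
            self.register_buffer("running_mean", torch.zeros(num_features))

    m = RunningStats()
    print(list(m.state_dict().keys()))    # ['running_mean']
    print(list(m.parameters()))           # [] -- buffers are not parameters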
If \"None\", then operations that run on buffers,\n such as \"cuda\", are ignored. If \"None\", the buffer is\n **not** included in the module's \"state_dict\".\n\n * **persistent** (*bool*) -- whether the buffer is part of\n this module's \"state_dict\".\n\n Example:\n\n >>> self.register_buffer('running_mean', torch.zeros(num_features))\n\nregister_forward_hook(hook, *, prepend=False, with_kwargs=False)\n Registers a forward hook on the module.\n\n The hook will be called every time after \"forward()\" has\n computed an output.\n\n If \"with_kwargs\" is \"False\" or not specified, the input contains\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "only the positional arguments given to the module. Keyword\n arguments won't be passed to the hooks and only to the\n \"forward\". The hook can modify the output. It can modify the\n input inplace but it will not have effect on forward since this\n is called after \"forward()\" is called. The hook should have the\n following signature:\n hook(module, args, output) -> None or modified output\n\n If \"with_kwargs\" is \"True\", the forward hook will be passed the\n \"kwargs\" given to the forward function and be expected to return\n the output possibly modified. The hook should have the following\n signature:\n\n hook(module, args, kwargs, output) -> None or modified output\n\n Parameters:\n * **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n * **prepend** (*bool*) -- If \"True\", the provided \"hook\" will\n be fired before all existing \"forward\" hooks on this\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "\"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"forward\" hooks on this\n \"torch.nn.modules.Module\". Note that global \"forward\" hooks\n registered with \"register_module_forward_hook()\" will fire\n before all hooks registered by this method. Default:\n \"False\"\n * **with_kwargs** (*bool*) -- If \"True\", the \"hook\" will be\n passed the kwargs given to the forward function. Default:\n \"False\"\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n\nregister_forward_pre_hook(hook, *, prepend=False, with_kwargs=False)\n Registers a forward pre-hook on the module.\n\n The hook will be called every time before \"forward()\" is\n invoked.\n\n If \"with_kwargs\" is false or not specified, the input contains\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "only the positional arguments given to the module. Keyword\n arguments won't be passed to the hooks and only to the\n \"forward\". The hook can modify the input. User can either return\n a tuple or a single modified value in the hook. We will wrap the\n value into a tuple if a single value is returned (unless that\n value is already a tuple). The hook should have the following\n signature:\n hook(module, args) -> None or modified input\n\n If \"with_kwargs\" is true, the forward pre-hook will be passed\n the kwargs given to the forward function. 
And if the hook\n modifies the input, both the args and kwargs should be returned.\n The hook should have the following signature:\n\n hook(module, args, kwargs) -> None or a tuple of modified input and kwargs\n\n Parameters:\n * **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n * **prepend** (*bool*) -- If true, the provided \"hook\" will\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "be fired before all existing \"forward_pre\" hooks on this\n \"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"forward_pre\" hooks on\n this \"torch.nn.modules.Module\". Note that global\n \"forward_pre\" hooks registered with\n \"register_module_forward_pre_hook()\" will fire before all\n hooks registered by this method. Default: \"False\"\n * **with_kwargs** (*bool*) -- If true, the \"hook\" will be\n passed the kwargs given to the forward function. Default:\n \"False\"\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n\nregister_full_backward_hook(hook, prepend=False)\n Registers a backward hook on the module.\n\n The hook will be called every time the gradients with respect to\n a module are computed, i.e. the hook will execute if and only if\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "the gradients with respect to module outputs are computed. The\n hook should have the following signature:\n hook(module, grad_input, grad_output) -> tuple(Tensor) or None\n\n The \"grad_input\" and \"grad_output\" are tuples that contain the\n gradients with respect to the inputs and outputs respectively.\n The hook should not modify its arguments, but it can optionally\n return a new gradient with respect to the input that will be\n used in place of \"grad_input\" in subsequent computations.\n \"grad_input\" will only correspond to the inputs given as\n positional arguments and all kwarg arguments are ignored.\n Entries in \"grad_input\" and \"grad_output\" will be \"None\" for all\n non-Tensor arguments.\n\n For technical reasons, when this hook is applied to a Module,\n its forward function will receive a view of each Tensor passed\n to the Module. Similarly the caller will receive a view of each\n Tensor returned by the Module's forward function.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "Warning:\n Modifying inputs or outputs inplace is not allowed when using\n backward hooks and will raise an error.\n\n Parameters:\n * **hook** (*Callable*) -- The user-defined hook to be\n registered.\n\n * **prepend** (*bool*) -- If true, the provided \"hook\" will\n be fired before all existing \"backward\" hooks on this\n \"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"backward\" hooks on this\n \"torch.nn.modules.Module\". 
Note that global \"backward\"\n hooks registered with\n \"register_module_full_backward_hook()\" will fire before all\n hooks registered by this method.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n\nregister_full_backward_pre_hook(hook, prepend=False)\n Registers a backward pre-hook on the module.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "The hook will be called every time the gradients for the module\n are computed. The hook should have the following signature:\n hook(module, grad_output) -> Tensor or None\n\n The \"grad_output\" is a tuple. The hook should not modify its\n arguments, but it can optionally return a new gradient with\n respect to the output that will be used in place of\n \"grad_output\" in subsequent computations. Entries in\n \"grad_output\" will be \"None\" for all non-Tensor arguments.\n\n For technical reasons, when this hook is applied to a Module,\n its forward function will receive a view of each Tensor passed\n to the Module. Similarly the caller will receive a view of each\n Tensor returned by the Module's forward function.\n\n Warning:\n\n Modifying inputs inplace is not allowed when using backward\n hooks and will raise an error.\n\n Parameters:\n * **hook** (*Callable*) -- The user-defined hook to be\n registered.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "registered.\n * **prepend** (*bool*) -- If true, the provided \"hook\" will\n be fired before all existing \"backward_pre\" hooks on this\n \"torch.nn.modules.Module\". Otherwise, the provided \"hook\"\n will be fired after all existing \"backward_pre\" hooks on\n this \"torch.nn.modules.Module\". Note that global\n \"backward_pre\" hooks registered with\n \"register_module_full_backward_pre_hook()\" will fire before\n all hooks registered by this method.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n\nregister_load_state_dict_post_hook(hook)\n Registers a post hook to be run after module's \"load_state_dict\"\n is called.\n\n It should have the following signature::\n hook(module, incompatible_keys) -> None\n\n The \"module\" argument is the current module that this hook is\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "registered on, and the \"incompatible_keys\" argument is a\n \"NamedTuple\" consisting of attributes \"missing_keys\" and\n \"unexpected_keys\". \"missing_keys\" is a \"list\" of \"str\"\n containing the missing keys and \"unexpected_keys\" is a \"list\" of\n \"str\" containing the unexpected keys.\n The given incompatible_keys can be modified inplace if needed.\n\n Note that the checks performed when calling \"load_state_dict()\"\n with \"strict=True\" are affected by modifications the hook makes\n to \"missing_keys\" or \"unexpected_keys\", as expected. 
Additions\n to either set of keys will result in an error being thrown when\n \"strict=True\", and clearing out both missing and unexpected keys\n will avoid an error.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemovableHandle\"\n\nregister_module(name, module)\n Alias for \"add_module()\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "Alias for \"add_module()\".\nregister_parameter(name, param)\n Adds a parameter to the module.\n\n The parameter can be accessed as an attribute using given name.\n\n Parameters:\n * **name** (*str*) -- name of the parameter. The parameter\n can be accessed from this module using the given name\n\n * **param** (*Parameter** or **None*) -- parameter to be\n added to the module. If \"None\", then operations that run on\n parameters, such as \"cuda\", are ignored. If \"None\", the\n parameter is **not** included in the module's \"state_dict\".\n\nregister_state_dict_pre_hook(hook)\n These hooks will be called with arguments: \"self\", \"prefix\", and\n \"keep_vars\" before calling \"state_dict\" on \"self\". The\n registered hooks can be used to perform pre-processing before\n the \"state_dict\" call is made.\n\nrequires_grad_(requires_grad=True)\n Change if autograd should record operations on parameters in\n this module.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "this module.\n This method sets the parameters' \"requires_grad\" attributes in-\n place.\n\n This method is helpful for freezing part of the module for\n finetuning or training parts of a model individually (e.g., GAN\n training).\n\n See Locally disabling gradient computation for a comparison\n between *.requires_grad_()* and several similar mechanisms that\n may be confused with it.\n\n Parameters:\n **requires_grad** (*bool*) -- whether autograd should record\n operations on parameters in this module. Default: \"True\".\n\n Returns:\n self\n\n Return type:\n Module\n\nset_extra_state(state)\n This function is called from \"load_state_dict()\" to handle any\n extra state found within the *state_dict*. Implement this\n function and a corresponding \"get_extra_state()\" for your module\n if you need to store extra state within its *state_dict*.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "Parameters:\n state (dict) -- Extra state from the state_dict\nshare_memory()\n See \"torch.Tensor.share_memory_()\"\n\n Return type:\n *T*\n\nstate_dict(, destination: T_destination, prefix: str = '', keep_vars: bool = False) -> T_destination\n state_dict(, prefix: str = '', keep_vars: bool = False) -> Dict[str, Any]\n Returns a dictionary containing references to the whole state of\n the module.\n\n Both parameters and persistent buffers (e.g. running averages)\n are included. Keys are corresponding parameter and buffer names.\n Parameters and buffers set to \"None\" are not included.\n\n Note:\n\n The returned object is a shallow copy. It contains references\n to the module's parameters and buffers.\n\n Warning:\n\n Currently \"state_dict()\" also accepts positional arguments for\n \"destination\", \"prefix\" and \"keep_vars\" in order. 
However,\n this is being deprecated and keyword arguments will be\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "enforced in future releases.\n Warning:\n\n Please avoid the use of argument \"destination\" as it is not\n designed for end-users.\n\n Parameters:\n * **destination** (*dict**, **optional*) -- If provided, the\n state of module will be updated into the dict and the same\n object is returned. Otherwise, an \"OrderedDict\" will be\n created and returned. Default: \"None\".\n\n * **prefix** (*str**, **optional*) -- a prefix added to\n parameter and buffer names to compose the keys in\n state_dict. Default: \"''\".\n\n * **keep_vars** (*bool**, **optional*) -- by default the\n \"Tensor\" s returned in the state dict are detached from\n autograd. If it's set to \"True\", detaching will not be\n performed. Default: \"False\".\n\n Returns:\n a dictionary containing a whole state of the module\n\n Return type:\n dict\n\n Example:\n\n >>> module.state_dict().keys()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "\n\n\nmodule.state_dict().keys()\n ['bias', 'weight']\n\n\n\nto(device: Optional[Union[int, device]] = ..., dtype: Optional[Union[dtype, str]] = ..., non_blocking: bool = ...) -> T\n to(dtype: Union[dtype, str], non_blocking: bool = ...) -> T\n to(tensor: Tensor, non_blocking: bool = ...) -> T\n Moves and/or casts the parameters and buffers.\n\n This can be called as\n\n to(device=None, dtype=None, non_blocking=False)\n\n to(dtype, non_blocking=False)\n\n to(tensor, non_blocking=False)\n\n to(memory_format=torch.channels_last)\n\n Its signature is similar to \"torch.Tensor.to()\", but only\n accepts floating point or complex \"dtype\"s. In addition, this\n method will only cast the floating point or complex parameters\n and buffers to \"dtype\" (if given). The integral parameters and\n buffers will be moved \"device\", if that is given, but with\n dtypes unchanged. 
When \"non_blocking\" is set, it tries to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "convert/move asynchronously with respect to the host if\n possible, e.g., moving CPU Tensors with pinned memory to CUDA\n devices.\n See below for examples.\n\n Note:\n\n This method modifies the module in-place.\n\n Parameters:\n * **device** (\"torch.device\") -- the desired device of the\n parameters and buffers in this module\n\n * **dtype** (\"torch.dtype\") -- the desired floating point or\n complex dtype of the parameters and buffers in this module\n\n * **tensor** (*torch.Tensor*) -- Tensor whose dtype and\n device are the desired dtype and device for all parameters\n and buffers in this module\n\n * **memory_format** (\"torch.memory_format\") -- the desired\n memory format for 4D parameters and buffers in this module\n (keyword only argument)\n\n Returns:\n self\n\n Return type:\n Module\n\n Examples:\n\n >>> linear = nn.Linear(2, 2)\n >>> linear.weight\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "\n\n\nlinear.weight\n Parameter containing:\n tensor([[ 0.1913, -0.3420],\n [-0.5113, -0.2325]])\n >>> linear.to(torch.double)\n Linear(in_features=2, out_features=2, bias=True)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1913, -0.3420],\n [-0.5113, -0.2325]], dtype=torch.float64)\n >>> gpu1 = torch.device(\"cuda:1\")\n >>> linear.to(gpu1, dtype=torch.half, non_blocking=True)\n Linear(in_features=2, out_features=2, bias=True)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1914, -0.3420],\n [-0.5112, -0.2324]], dtype=torch.float16, device='cuda:1')\n >>> cpu = torch.device(\"cpu\")\n >>> linear.to(cpu)\n Linear(in_features=2, out_features=2, bias=True)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.1914, -0.3420],\n [-0.5112, -0.2324]], dtype=torch.float16)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "\n\n\nlinear = nn.Linear(2, 2, bias=None).to(torch.cdouble)\n >>> linear.weight\n Parameter containing:\n tensor([[ 0.3741+0.j, 0.2382+0.j],\n [ 0.5593+0.j, -0.4443+0.j]], dtype=torch.complex128)\n >>> linear(torch.ones(3, 2, dtype=torch.cdouble))\n tensor([[0.6122+0.j, 0.1150+0.j],\n [0.6122+0.j, 0.1150+0.j],\n [0.6122+0.j, 0.1150+0.j]], dtype=torch.complex128)\n\n\n\nto_empty(*, device)\n Moves the parameters and buffers to the specified device without\n copying storage.\n\n Parameters:\n **device** (\"torch.device\") -- The desired device of the\n parameters and buffers in this module.\n\n Returns:\n self\n\n Return type:\n Module\n\ntrain(mode=True)\n Sets the module in training mode.\n\n This has any effect only on certain modules. See documentations\n of particular modules for details of their behaviors in\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "training/evaluation mode, if they are affected, e.g. \"Dropout\",\n \"BatchNorm\", etc.\n Parameters:\n **mode** (*bool*) -- whether to set training mode (\"True\") or\n evaluation mode (\"False\"). 
Default: \"True\".\n\n Returns:\n self\n\n Return type:\n Module\n\ntype(dst_type)\n Casts all parameters and buffers to \"dst_type\".\n\n Note:\n\n This method modifies the module in-place.\n\n Parameters:\n **dst_type** (*type** or **string*) -- the desired type\n\n Returns:\n self\n\n Return type:\n Module\n\nxpu(device=None)\n Moves all model parameters and buffers to the XPU.\n\n This also makes associated parameters and buffers different\n objects. So it should be called before constructing optimizer if\n the module will live on XPU while being optimized.\n\n Note:\n\n This method modifies the module in-place.\n\n Parameters:\n **device** (*int**, **optional*) -- if specified, all\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "parameters will be copied to that device\n Returns:\n self\n\n Return type:\n Module\n\nzero_grad(set_to_none=False)\n Sets gradients of all model parameters to zero. See similar\n function under \"torch.optim.Optimizer\" for more context.\n\n Parameters:\n **set_to_none** (*bool*) -- instead of setting to zero, set\n the grads to None. See \"torch.optim.Optimizer.zero_grad()\"\n for details.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Module.html", "category": "pytorch docs"} {"text": "torch.meshgrid\ntorch.meshgrid(*tensors, indexing=None)\nCreates grids of coordinates specified by the 1D inputs in\n attr:tensors.\nThis is helpful when you want to visualize data over some range of\n inputs. See below for a plotting example.\nGiven N 1D tensors T_0 \\ldots T_{N-1} as inputs with corresponding\n sizes S_0 \\ldots S_{N-1}, this creates N N-dimensional tensors G_0\n \\ldots G_{N-1}, each with shape (S_0, ..., S_{N-1}) where the\n output G_i is constructed by expanding T_i to the result shape.\nNote:\n 0D inputs are treated equivalently to 1D inputs of a single\n element.\n\nWarning:\n *torch.meshgrid(*tensors)* currently has the same behavior as\n calling *numpy.meshgrid(*arrays, indexing='ij')*.In the future\n *torch.meshgrid* will transition to *indexing='xy'* as the\n default.https://github.com/pytorch/pytorch/issues/50276 tracks\n this issue with the goal of migrating to NumPy's behavior.\n\nSee also:", "source": "https://pytorch.org/docs/stable/generated/torch.meshgrid.html", "category": "pytorch docs"} {"text": "See also:\n \"torch.cartesian_prod()\" has the same effect but it collects the\n data in a tensor of vectors.\n\nParameters:\n * tensors (list of Tensor) -- list of scalars or 1\n dimensional tensors. Scalars will be treated as tensors of\n size (1,) automatically\n * **indexing** (*Optional**[**str**]*) --\n\n (str, optional): the indexing mode, either \"xy\" or \"ij\",\n defaults to \"ij\". 
See warning for future changes.\n\n If \"xy\" is selected, the first dimension corresponds to the\n cardinality of the second input and the second dimension\n corresponds to the cardinality of the first input.\n\n If \"ij\" is selected, the dimensions are in the same order as\n the cardinality of the inputs.\n\nReturns:\n If the input has N tensors of size S_0 \\ldots S_{N-1}`, then the\n output will also have N tensors, where each tensor is of shape\n (S_0, ..., S_{N-1}).\nReturn type:\n seq (sequence of Tensors)\nExample:", "source": "https://pytorch.org/docs/stable/generated/torch.meshgrid.html", "category": "pytorch docs"} {"text": "seq (sequence of Tensors)\nExample:\n >>> x = torch.tensor([1, 2, 3])\n >>> y = torch.tensor([4, 5, 6])\n\n Observe the element-wise pairings across the grid, (1, 4),\n (1, 5), ..., (3, 6). This is the same thing as the\n cartesian product.\n >>> grid_x, grid_y = torch.meshgrid(x, y, indexing='ij')\n >>> grid_x\n tensor([[1, 1, 1],\n [2, 2, 2],\n [3, 3, 3]])\n >>> grid_y\n tensor([[4, 5, 6],\n [4, 5, 6],\n [4, 5, 6]])\n\n This correspondence can be seen when these grids are\n stacked properly.\n >>> torch.equal(torch.cat(tuple(torch.dstack([grid_x, grid_y]))),\n ... torch.cartesian_prod(x, y))\n True\n\n `torch.meshgrid` is commonly used to produce a grid for\n plotting.\n >>> import matplotlib.pyplot as plt\n >>> xs = torch.linspace(-5, 5, steps=100)\n >>> ys = torch.linspace(-5, 5, steps=100)\n >>> x, y = torch.meshgrid(xs, ys, indexing='xy')\n", "source": "https://pytorch.org/docs/stable/generated/torch.meshgrid.html", "category": "pytorch docs"} {"text": "\n\n\nz = torch.sin(torch.sqrt(x * x + y * y))\n >>> ax = plt.axes(projection='3d')\n >>> ax.plot_surface(x.numpy(), y.numpy(), z.numpy())\n >>> plt.show()\n\n\n\n[image]", "source": "https://pytorch.org/docs/stable/generated/torch.meshgrid.html", "category": "pytorch docs"} {"text": "torch.nn.functional.dropout3d\ntorch.nn.functional.dropout3d(input, p=0.5, training=True, inplace=False)\nRandomly zero out entire channels (a channel is a 3D feature map,\n e.g., the j-th channel of the i-th sample in the batched input is a\n 3D tensor \\text{input}[i, j]) of the input tensor). Each channel\n will be zeroed out independently on every forward call with\n probability \"p\" using samples from a Bernoulli distribution.\nSee \"Dropout3d\" for details.\nParameters:\n * p (float) -- probability of a channel to be zeroed.\n Default: 0.5\n * **training** (*bool*) -- apply dropout if is \"True\". Default:\n \"True\"\n\n * **inplace** (*bool*) -- If set to \"True\", will do this\n operation in-place. Default: \"False\"\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout3d.html", "category": "pytorch docs"} {"text": "MSELoss\nclass torch.nn.MSELoss(size_average=None, reduce=None, reduction='mean')\nCreates a criterion that measures the mean squared error (squared\n L2 norm) between each element in the input x and target y.\nThe unreduced (i.e. with \"reduction\" set to \"'none'\") loss can be\n described as:\n \\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad l_n = \\left( x_n\n - y_n \\right)^2,\n\nwhere N is the batch size. 
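The channel-wise behaviour of "torch.nn.functional.dropout3d()" described above can be checked directly: each 3D channel is either zeroed entirely or uniformly scaled by 1/(1-p).

    import torch
    import torch.nn.functional as F

    x = torch.ones(2, 4, 3, 3, 3)            # (N, C, D, H, W)
    y = F.dropout3d(x, p=0.5, training=True)

    # Every channel is either all zeros or uniformly scaled by 1 / (1 - p) = 2.
    channels = y.view(2 * 4, -1)
    print(all(torch.equal(c, torch.zeros_like(c)) or torch.equal(c, torch.full_like(c, 2.0))
              for c in channels))            # True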
If \"reduction\" is not \"'none'\" (default\n \"'mean'\"), then:\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{`mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{`sum'.}\n \\end{cases}\n\nx and y are tensors of arbitrary shapes with a total of n elements\n each.\nThe mean operation still operates over all the elements, and\n divides by n.\nThe division by n can be avoided if one sets \"reduction = 'sum'\".\nParameters:\n * size_average (bool, optional) -- Deprecated (see", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html", "category": "pytorch docs"} {"text": "\"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html", "category": "pytorch docs"} {"text": "\"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\nShape:\n * Input: (*), where * means any number of dimensions.\n * Target: (*), same shape as the input.\n\nExamples:\n >>> loss = nn.MSELoss()\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> target = torch.randn(3, 5)\n >>> output = loss(input, target)\n >>> output.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MSELoss.html", "category": "pytorch docs"} {"text": "torch.seed\ntorch.seed()\nSets the seed for generating random numbers to a non-deterministic\n random number. Returns a 64 bit number used to seed the RNG.\nReturn type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.seed.html", "category": "pytorch docs"} {"text": "torch.linalg.eigh\ntorch.linalg.eigh(A, UPLO='L', *, out=None)\nComputes the eigenvalue decomposition of a complex Hermitian or\n real symmetric matrix.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the eigenvalue\n decomposition of a complex Hermitian or real symmetric matrix A\n \\in \\mathbb{K}^{n \\times n} is defined as\n A = Q \\operatorname{diag}(\\Lambda) Q^{\\text{H}}\\mathrlap{\\qquad\n Q \\in \\mathbb{K}^{n \\times n}, \\Lambda \\in \\mathbb{R}^n}\n\nwhere Q^{\\text{H}} is the conjugate transpose when Q is complex,\n and the transpose when Q is real-valued. Q is orthogonal in the\n real case and unitary in the complex case.\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\n\"A\" is assumed to be Hermitian (resp. 
symmetric), but this is not\n checked internally, instead:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html", "category": "pytorch docs"} {"text": "checked internally, instead:\n\n\nIf \"UPLO\"= 'L' (default), only the lower triangular part of the\n matrix is used in the computation.\n\n\nIf \"UPLO\"= 'U', only the upper triangular part of the matrix is\n used.\n\n\nThe eigenvalues are returned in ascending order.\nNote:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n\nNote:\n The eigenvalues of real symmetric or complex Hermitian matrices\n are always real.\n\nWarning:\n The eigenvectors of a symmetric matrix are not unique, nor are\n they continuous with respect to \"A\". Due to this lack of\n uniqueness, different hardware and software may compute different\n eigenvectors.This non-uniqueness is caused by the fact that\n multiplying an eigenvector by *-1* in the real case or by e^{i\n \\phi}, \\phi \\in \\mathbb{R} in the complex case produces another\n set of valid eigenvectors of the matrix. For this reason, the\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html", "category": "pytorch docs"} {"text": "loss function shall not depend on the phase of the eigenvectors,\n as this quantity is not well-defined. This is checked for complex\n inputs when computing the gradients of this function. As such,\n when inputs are complex and are on a CUDA device, the computation\n of the gradients of this function synchronizes that device with\n the CPU.\nWarning:\n Gradients computed using the *eigenvectors* tensor will only be\n finite when \"A\" has distinct eigenvalues. Furthermore, if the\n distance between any two eigenvalues is close to zero, the\n gradient will be numerically unstable, as it depends on the\n eigenvalues \\lambda_i through the computation of \\frac{1}{\\min_{i\n \\neq j} \\lambda_i - \\lambda_j}.\n\nSee also:\n \"torch.linalg.eigvalsh()\" computes only the eigenvalues of a\n Hermitian matrix. Unlike \"torch.linalg.eigh()\", the gradients of\n \"eigvalsh()\" are always numerically stable.\n\n \"torch.linalg.cholesky()\" for a different decomposition of a\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html", "category": "pytorch docs"} {"text": "Hermitian matrix. The Cholesky decomposition gives less\n information about the matrix but is much faster to compute than\n the eigenvalue decomposition.\n \"torch.linalg.eig()\" for a (slower) function that computes the\n eigenvalue decomposition of a not necessarily Hermitian square\n matrix.\n\n \"torch.linalg.svd()\" for a (slower) function that computes the\n more general SVD decomposition of matrices of any shape.\n\n \"torch.linalg.qr()\" for another (much faster) decomposition that\n works on general matrices.\n\nParameters:\n * A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions consisting of symmetric or\n Hermitian matrices.\n * **UPLO** (*'L'**, **'U'**, **optional*) -- controls whether to\n use the upper or lower triangular part of \"A\" in the\n computations. Default: *'L'*.\n\nKeyword Arguments:\n out (tuple, optional) -- output tuple of two tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html", "category": "pytorch docs"} {"text": "Ignored if None. 
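As a small check of the "torch.linalg.eigh()" properties listed above, and of the "eigvalsh()" relationship mentioned in the see-also, consider the following sketch with an arbitrary symmetric matrix.

    import torch

    A = torch.randn(4, 4, dtype=torch.float64)
    A = A + A.T                                        # make A symmetric

    L, Q = torch.linalg.eigh(A)                        # eigenvalues in ascending order
    print(torch.all(L[:-1] <= L[1:]))                  # True
    print(torch.allclose(L, torch.linalg.eigvalsh(A))) # eigvalsh returns the same eigenvalues
    print(torch.allclose(Q @ torch.diag(L) @ Q.T, A))  # A = Q diag(L) Q^T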
Default: None.\nReturns:\n A named tuple (eigenvalues, eigenvectors) which corresponds to\n \\Lambda and Q above.\n *eigenvalues* will always be real-valued, even when \"A\" is\n complex. It will also be ordered in ascending order.\n\n *eigenvectors* will have the same dtype as \"A\" and will contain\n the eigenvectors as its columns.\n\nExamples::\n >>> A = torch.randn(2, 2, dtype=torch.complex128)\n >>> A = A + A.T.conj() # creates a Hermitian matrix\n >>> A\n tensor([[2.9228+0.0000j, 0.2029-0.0862j],\n [0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128)\n >>> L, Q = torch.linalg.eigh(A)\n >>> L\n tensor([0.3277, 2.9415], dtype=torch.float64)\n >>> Q\n tensor([[-0.0846+-0.0000j, -0.9964+0.0000j],\n [ 0.9170+0.3898j, -0.0779-0.0331j]], dtype=torch.complex128)\n >>> torch.dist(Q @ torch.diag(L.cdouble()) @ Q.T.conj(), A)\n tensor(6.1062e-16, dtype=torch.float64)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html", "category": "pytorch docs"} {"text": "tensor(6.1062e-16, dtype=torch.float64)\n >>> A = torch.randn(3, 2, 2, dtype=torch.float64)\n >>> A = A + A.mT # creates a batch of symmetric matrices\n >>> L, Q = torch.linalg.eigh(A)\n >>> torch.dist(Q @ torch.diag_embed(L) @ Q.mH, A)\n tensor(1.5423e-15, dtype=torch.float64)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigh.html", "category": "pytorch docs"} {"text": "torch.Tensor.index_fill\nTensor.index_fill(dim, index, value) -> Tensor\nOut-of-place version of \"torch.Tensor.index_fill_()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_fill.html", "category": "pytorch docs"} {"text": "torch.Tensor.addmm\nTensor.addmm(mat1, mat2, *, beta=1, alpha=1) -> Tensor\nSee \"torch.addmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addmm.html", "category": "pytorch docs"} {"text": "torch.autograd.forward_ad.unpack_dual\ntorch.autograd.forward_ad.unpack_dual(tensor, *, level=None)\nUnpacks a \"dual tensor\" to get both its Tensor value and its\n forward AD gradient. The result is a namedtuple \"(primal, tangent)\"\n where \"primal\" is a view of \"tensor\"'s primal and \"tangent\" is\n \"tensor\"'s tangent as-is. Neither of these tensors can be dual\n tensor of level \"level\".\nThis function is backward differentiable.\nExample:\n >>> with dual_level():\n ... inp = make_dual(x, x_t)\n ... out = f(inp)\n ... y, jvp = unpack_dual(out)\n ... jvp = unpack_dual(out).tangent\n\nPlease see the forward-mode AD tutorial for detailed steps on how\n to use this API.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.forward_ad.unpack_dual.html", "category": "pytorch docs"} {"text": "torch.nn.functional.normalize\ntorch.nn.functional.normalize(input, p=2.0, dim=1, eps=1e-12, out=None)\nPerforms L_p normalization of inputs over specified dimension.\nFor a tensor \"input\" of sizes (n_0, ..., n_{dim}, ..., n_k), each\n n_{dim} -element vector v along dimension \"dim\" is transformed as\n v = \\frac{v}{\\max(\\lVert v \\rVert_p, \\epsilon)}.\n\nWith the default arguments it uses the Euclidean norm over vectors\n along dimension 1 for normalization.\nParameters:\n * input (Tensor) -- input tensor of any shape\n * **p** (*float*) -- the exponent value in the norm formulation.\n Default: 2\n\n * **dim** (*int*) -- the dimension to reduce. Default: 1\n\n * **eps** (*float*) -- small value to avoid division by zero.\n Default: 1e-12\n\n * **out** (*Tensor**, **optional*) -- the output tensor. 
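The "unpack_dual()" snippet above is only a fragment; a self-contained forward-mode AD example, using a simple sine function purely for illustration, might look like this.

    import torch
    import torch.autograd.forward_ad as fwAD

    x = torch.tensor([1.0, 2.0])
    x_t = torch.tensor([1.0, 0.0])                 # tangent: direction along which to differentiate

    with fwAD.dual_level():
        inp = fwAD.make_dual(x, x_t)
        out = torch.sin(inp)
        primal, tangent = fwAD.unpack_dual(out)

    print(torch.allclose(primal, torch.sin(x)))            # the primal is just f(x)
    print(torch.allclose(tangent, torch.cos(x) * x_t))     # the tangent is the JVP: f'(x) * x_t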
If\n \"out\" is used, this operation won't be differentiable.\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.normalize.html", "category": "pytorch docs"} {"text": "torch.nn.functional.conv3d\ntorch.nn.functional.conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) -> Tensor\nApplies a 3D convolution over an input image composed of several\n input planes.\nThis operator supports TensorFloat32.\nSee \"Conv3d\" for details and output shape.\nNote:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n\nNote:\n This operator supports complex data types i.e. \"complex32,\n complex64, complex128\".\n\nParameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iT , iH , iW)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv3d.html", "category": "pytorch docs"} {"text": "\\text{in_channels} , iT , iH , iW)\n * **weight** -- filters of shape (\\text{out\\_channels} ,\n \\frac{\\text{in\\_channels}}{\\text{groups}} , kT , kH , kW)\n\n * **bias** -- optional bias tensor of shape\n (\\text{out\\_channels}). Default: None\n\n * **stride** -- the stride of the convolving kernel. Can be a\n single number or a tuple *(sT, sH, sW)*. Default: 1\n\n * **padding** --\n\n implicit paddings on both sides of the input. Can be a string\n {'valid', 'same'}, single number or a tuple *(padT, padH,\n padW)*. Default: 0 \"padding='valid'\" is the same as no\n padding. \"padding='same'\" pads the input so the output has the\n same shape as the input. However, this mode doesn't support\n any stride values other than 1.\n\n Warning:\n\n For \"padding='same'\", if the \"weight\" is even-length and\n \"dilation\" is odd in any dimension, a full \"pad()\" operation\n may be needed internally. Lowering performance.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv3d.html", "category": "pytorch docs"} {"text": "\n\ndilation -- the spacing between kernel elements. Can be a\n single number or a tuple (dT, dH, dW). Default: 1\n\ngroups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\n\n\n\nExamples:\n >>> filters = torch.randn(33, 16, 3, 3, 3)\n >>> inputs = torch.randn(20, 16, 50, 10, 20)\n >>> F.conv3d(inputs, filters)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_sparse\nTensor.is_sparse\nIs \"True\" if the Tensor uses sparse storage layout, \"False\"\n otherwise.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_sparse.html", "category": "pytorch docs"} {"text": "ReplicationPad3d\nclass torch.nn.ReplicationPad3d(padding)\nPads the input tensor using replication of the input boundary.\nFor N-dimensional padding, use \"torch.nn.functional.pad()\".\nParameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. 
If a 6-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom},\n \\text{padding_front}, \\text{padding_back})\nShape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where\n\n D_{out} = D_{in} + \\text{padding\\_front} +\n \\text{padding\\_back}\n\n H_{out} = H_{in} + \\text{padding\\_top} +\n \\text{padding\\_bottom}\n\n W_{out} = W_{in} + \\text{padding\\_left} +\n \\text{padding\\_right}\n\nExamples:\n >>> m = nn.ReplicationPad3d(3)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad3d.html", "category": "pytorch docs"} {"text": "Examples:\n >>> m = nn.ReplicationPad3d(3)\n >>> input = torch.randn(16, 3, 8, 320, 480)\n >>> output = m(input)\n >>> # using different paddings for different sides\n >>> m = nn.ReplicationPad3d((3, 3, 6, 6, 1, 1))\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad3d.html", "category": "pytorch docs"} {"text": "torch.nn.functional.gaussian_nll_loss\ntorch.nn.functional.gaussian_nll_loss(input, target, var, full=False, eps=1e-06, reduction='mean')\nGaussian negative log likelihood loss.\nSee \"GaussianNLLLoss\" for details.\nParameters:\n * input (Tensor) -- expectation of the Gaussian\n distribution.\n * **target** (*Tensor*) -- sample from the Gaussian\n distribution.\n\n * **var** (*Tensor*) -- tensor of positive variance(s), one for\n each of the expectations in the input (heteroscedastic), or a\n single one (homoscedastic).\n\n * **full** (*bool**, **optional*) -- include the constant term\n in the loss calculation. Default: \"False\".\n\n * **eps** (*float**, **optional*) -- value added to var, for\n stability. Default: 1e-6.\n\n * **reduction** (*str**, **optional*) -- specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.gaussian_nll_loss.html", "category": "pytorch docs"} {"text": "\"'none'\": no reduction will be applied, \"'mean'\": the output\n is the average of all batch member losses, \"'sum'\": the output\n is the sum of all batch member losses. 
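The output-shape formulas for "ReplicationPad3d" above can be verified with a quick sketch (input sizes are arbitrary):

    import torch
    import torch.nn as nn

    m = nn.ReplicationPad3d((3, 3, 6, 6, 1, 1))   # (left, right, top, bottom, front, back)
    x = torch.randn(2, 3, 8, 10, 12)              # (N, C, D_in, H_in, W_in)
    y = m(x)

    # D_out = 8 + 1 + 1, H_out = 10 + 6 + 6, W_out = 12 + 3 + 3
    print(y.shape)                                # torch.Size([2, 3, 10, 22, 18])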
Default: \"'mean'\".\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.gaussian_nll_loss.html", "category": "pytorch docs"} {"text": "torch.foreach_round\ntorch.foreach_round(self: List[Tensor]) -> None\nApply \"torch.round()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_round_.html", "category": "pytorch docs"} {"text": "torch.frac\ntorch.frac(input, *, out=None) -> Tensor\nComputes the fractional portion of each element in \"input\".\n \\text{out}_{i} = \\text{input}_{i} - \\left\\lfloor\n |\\text{input}_{i}| \\right\\rfloor *\n \\operatorname{sgn}(\\text{input}_{i})\n\nExample:\n >>> torch.frac(torch.tensor([1, 2.5, -3.2]))\n tensor([ 0.0000, 0.5000, -0.2000])\n", "source": "https://pytorch.org/docs/stable/generated/torch.frac.html", "category": "pytorch docs"} {"text": "adaptive_avg_pool3d\nclass torch.ao.nn.quantized.functional.adaptive_avg_pool3d(input, output_size)\nApplies a 3D adaptive average pooling over a quantized input signal\n composed of several quantized input planes.\nNote:\n The input quantization parameters propagate to the output.\n\nSee \"AdaptiveAvgPool3d\" for details and output shape.\nParameters:\n output_size (None) -- the target output size (single\n integer or double-integer tuple)\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.adaptive_avg_pool3d.html", "category": "pytorch docs"} {"text": "torch.smm\ntorch.smm(input, mat) -> Tensor\nPerforms a matrix multiplication of the sparse matrix \"input\" with\n the dense matrix \"mat\".\nParameters:\n * input (Tensor) -- a sparse matrix to be matrix\n multiplied\n * **mat** (*Tensor*) -- a dense matrix to be matrix multiplied\n", "source": "https://pytorch.org/docs/stable/generated/torch.smm.html", "category": "pytorch docs"} {"text": "torch.Tensor.erf\nTensor.erf() -> Tensor\nSee \"torch.erf()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.erf.html", "category": "pytorch docs"} {"text": "torch.fft.rfftfreq\ntorch.fft.rfftfreq(n, d=1.0, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nComputes the sample frequencies for \"rfft()\" with a signal of size\n \"n\".\nNote:\n \"rfft()\" returns Hermitian one-sided output, so only the positive\n frequency terms are returned. For a real FFT of length \"n\" and\n with inputs spaced in length unit \"d\", the frequencies are:\n\n f = torch.arange((n + 1) // 2) / (d * n)\n\nNote:\n For even lengths, the Nyquist frequency at \"f[n/2]\" can be\n thought of as either negative or positive. Unlike \"fftfreq()\",\n \"rfftfreq()\" always returns it as positive.\n\nParameters:\n * n (int) -- the real FFT length\n * **d** (*float**, **optional*) -- The sampling length scale.\n The spacing between individual samples of the FFT input. The\n default assumes unit spacing, dividing that result by the\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfftfreq.html", "category": "pytorch docs"} {"text": "actual spacing gives the result in physical frequency units.\nKeyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. 
Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n-[ Example ]-\n\n\n\ntorch.fft.rfftfreq(5)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfftfreq.html", "category": "pytorch docs"} {"text": "-[ Example ]-\n\n\n\ntorch.fft.rfftfreq(5)\n tensor([0.0000, 0.2000, 0.4000])\ntorch.fft.rfftfreq(4)\n tensor([0.0000, 0.2500, 0.5000])\n\n\n\nCompared to the output from \"fftfreq()\", we see that the Nyquist\n frequency at \"f[2]\" has changed sign: >>> torch.fft.fftfreq(4)\n tensor([ 0.0000, 0.2500, -0.5000, -0.2500])", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfftfreq.html", "category": "pytorch docs"} {"text": "GRU\nclass torch.nn.GRU(args, *kwargs)\nApplies a multi-layer gated recurrent unit (GRU) RNN to an input\n sequence.\nFor each element in the input sequence, each layer computes the\n following function:\n \\begin{array}{ll} r_t = \\sigma(W_{ir} x_t + b_{ir} + W_{hr}\n h_{(t-1)} + b_{hr}) \\\\ z_t = \\sigma(W_{iz} x_t + b_{iz} +\n W_{hz} h_{(t-1)} + b_{hz}) \\\\ n_t = \\tanh(W_{in} x_t +\n b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\\\ h_t = (1 -\n z_t) * n_t + z_t * h_{(t-1)} \\end{array}\n\nwhere h_t is the hidden state at time t, x_t is the input at time\n t, h_{(t-1)} is the hidden state of the layer at time t-1 or\n the initial hidden state at time 0, and r_t, z_t, n_t are the\n reset, update, and new gates, respectively. \\sigma is the sigmoid\n function, and * is the Hadamard product.\nIn a multilayer GRU, the input x^{(l)}_t of the l -th layer (l >=\n 2) is the hidden state h^{(l-1)}_t of the previous layer multiplied", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRU.html", "category": "pytorch docs"} {"text": "by dropout \\delta^{(l-1)}_t where each \\delta^{(l-1)}_t is a\n Bernoulli random variable which is 0 with probability \"dropout\".\nParameters:\n * input_size -- The number of expected features in the input\n x\n * **hidden_size** -- The number of features in the hidden state\n *h*\n\n * **num_layers** -- Number of recurrent layers. E.g., setting\n \"num_layers=2\" would mean stacking two GRUs together to form a\n *stacked GRU*, with the second GRU taking in outputs of the\n first GRU and computing the final results. Default: 1\n\n * **bias** -- If \"False\", then the layer does not use bias\n weights *b_ih* and *b_hh*. Default: \"True\"\n\n * **batch_first** -- If \"True\", then the input and output\n tensors are provided as *(batch, seq, feature)* instead of\n *(seq, batch, feature)*. Note that this does not apply to\n hidden or cell states. See the Inputs/Outputs sections below\n for details. Default: \"False\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRU.html", "category": "pytorch docs"} {"text": "for details. Default: \"False\"\n * **dropout** -- If non-zero, introduces a *Dropout* layer on\n the outputs of each GRU layer except the last layer, with\n dropout probability equal to \"dropout\". 
Default: 0\n\n * **bidirectional** -- If \"True\", becomes a bidirectional GRU.\n Default: \"False\"\n\nInputs: input, h_0\n * input: tensor of shape (L, H_{in}) for unbatched input,\n (L, N, H_{in}) when \"batch_first=False\" or (N, L, H_{in}) when\n \"batch_first=True\" containing the features of the input\n sequence. The input can also be a packed variable length\n sequence. See \"torch.nn.utils.rnn.pack_padded_sequence()\" or\n \"torch.nn.utils.rnn.pack_sequence()\" for details.\n * **h_0**: tensor of shape (D * \\text{num\\_layers}, H_{out}) or\n (D * \\text{num\\_layers}, N, H_{out}) containing the initial\n hidden state for the input sequence. Defaults to zeros if not\n provided.\n\n where:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRU.html", "category": "pytorch docs"} {"text": "provided.\n where:\n\n \\begin{aligned} N ={} & \\text{batch size} \\\\ L ={} &\n \\text{sequence length} \\\\ D ={} & 2 \\text{ if\n bidirectional=True otherwise } 1 \\\\ H_{in} ={} &\n \\text{input\\_size} \\\\ H_{out} ={} & \\text{hidden\\_size}\n \\end{aligned}\n\nOutputs: output, h_n\n * output: tensor of shape (L, D * H_{out}) for unbatched\n input, (L, N, D * H_{out}) when \"batch_first=False\" or (N, L,\n D * H_{out}) when \"batch_first=True\" containing the output\n features (h_t) from the last layer of the GRU, for each t.\n If a \"torch.nn.utils.rnn.PackedSequence\" has been given as the\n input, the output will also be a packed sequence.\n * **h_n**: tensor of shape (D * \\text{num\\_layers}, H_{out}) or\n (D * \\text{num\\_layers}, N, H_{out}) containing the final\n hidden state for the input sequence.\n\nVariables:\n * weight_ih_l[k] -- the learnable input-hidden weights of", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRU.html", "category": "pytorch docs"} {"text": "the \\text{k}^{th} layer (W_ir|W_iz|W_in), of shape\n (3hidden_size, input_size) for k = 0. Otherwise, the\n shape is (3hidden_size, num_directions * hidden_size)\n * **weight_hh_l[k]** -- the learnable hidden-hidden weights of\n the \\text{k}^{th} layer (W_hr|W_hz|W_hn), of shape\n *(3*hidden_size, hidden_size)*\n\n * **bias_ih_l[k]** -- the learnable input-hidden bias of the\n \\text{k}^{th} layer (b_ir|b_iz|b_in), of shape\n *(3*hidden_size)*\n\n * **bias_hh_l[k]** -- the learnable hidden-hidden bias of the\n \\text{k}^{th} layer (b_hr|b_hz|b_hn), of shape\n *(3*hidden_size)*\n\nNote:\n All the weights and biases are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden\\_size}}\n\nNote:\n For bidirectional GRUs, forward and backward are directions 0 and\n 1 respectively. 
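To make the GRU shape conventions above concrete, here is a short sketch with batch_first=True and bidirectional=True (sizes chosen arbitrarily):

    import torch
    import torch.nn as nn

    rnn = nn.GRU(input_size=10, hidden_size=20, num_layers=2,
                 batch_first=True, bidirectional=True)

    x = torch.randn(3, 5, 10)          # (N, L, H_in) because batch_first=True
    h0 = torch.randn(2 * 2, 3, 20)     # (D * num_layers, N, H_out), D = 2 for bidirectional

    out, hn = rnn(x, h0)
    print(out.shape)                   # torch.Size([3, 5, 40])  ->  (N, L, D * H_out)
    print(hn.shape)                    # torch.Size([4, 3, 20])  ->  (D * num_layers, N, H_out)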
Example of splitting the output layers when\n \"batch_first=False\": \"output.view(seq_len, batch, num_directions,\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRU.html", "category": "pytorch docs"} {"text": "hidden_size)\".\nNote:\n \"batch_first\" argument is ignored for unbatched inputs.\n\nNote:\n If the following conditions are satisfied: 1) cudnn is enabled,\n 2) input data is on the GPU 3) input data has dtype\n \"torch.float16\" 4) V100 GPU is used, 5) input data is not in\n \"PackedSequence\" format persistent algorithm can be selected to\n improve performance.\n\nExamples:\n >>> rnn = nn.GRU(10, 20, 2)\n >>> input = torch.randn(5, 3, 10)\n >>> h0 = torch.randn(2, 3, 20)\n >>> output, hn = rnn(input, h0)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRU.html", "category": "pytorch docs"} {"text": "SequentialLR\nclass torch.optim.lr_scheduler.SequentialLR(optimizer, schedulers, milestones, last_epoch=- 1, verbose=False)\nReceives the list of schedulers that is expected to be called\n sequentially during optimization process and milestone points that\n provides exact intervals to reflect which scheduler is supposed to\n be called at a given epoch.\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **schedulers** (*list*) -- List of chained schedulers.\n\n * **milestones** (*list*) -- List of integers that reflects\n milestone points.\n\n * **last_epoch** (*int*) -- The index of last epoch. Default:\n -1.\n\n * **verbose** (*bool*) -- Does nothing.\n\n-[ Example ]-\n\n\n\nAssuming optimizer uses lr = 1. for all groups\nlr = 0.1 if epoch == 0\nlr = 0.1 if epoch == 1\nlr = 0.9 if epoch == 2\nlr = 0.81 if epoch == 3\nlr = 0.729 if epoch == 4\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.SequentialLR.html", "category": "pytorch docs"} {"text": "\n\n\nlr = 0.729 if epoch == 4\nscheduler1 = ConstantLR(self.opt, factor=0.1, total_iters=2)\nscheduler2 = ExponentialLR(self.opt, gamma=0.9)\nscheduler = SequentialLR(self.opt, schedulers=[scheduler1, scheduler2], milestones=[2])\nfor epoch in range(100):\n train(...)\n validate(...)\n scheduler.step()\n\n\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)\n Loads the schedulers state.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer. The wrapped scheduler states will also be\n saved.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.SequentialLR.html", "category": "pytorch docs"} {"text": "torch.Tensor.argwhere\nTensor.argwhere() -> Tensor\nSee \"torch.argwhere()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.argwhere.html", "category": "pytorch docs"} {"text": "torch.Tensor.addcdiv\nTensor.addcdiv(tensor1, tensor2, *, value=1) -> Tensor\nSee \"torch.addcdiv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addcdiv.html", "category": "pytorch docs"} {"text": "torch.floor_divide\ntorch.floor_divide(input, other, *, out=None) -> Tensor\nNote:\n Before PyTorch 1.13 \"torch.floor_divide()\" incorrectly performed\n truncation division. 
To restore the previous behavior use\n \"torch.div()\" with \"rounding_mode='trunc'\".\n\nComputes \"input\" divided by \"other\", elementwise, and floors the\n result.\n \\text{{out}}_i = \\text{floor} \\left(\n \\frac{{\\text{{input}}_i}}{{\\text{{other}}_i}} \\right)\n\nSupports broadcasting to a common shape, type promotion, and\n integer and float inputs.\nParameters:\n * input (Tensor or Number) -- the dividend\n * **other** (*Tensor** or **Number*) -- the divisor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([4.0, 3.0])\n >>> b = torch.tensor([2.0, 2.0])\n >>> torch.floor_divide(a, b)\n tensor([2.0, 1.0])\n >>> torch.floor_divide(a, 1.4)\n tensor([2.0, 2.0])\n", "source": "https://pytorch.org/docs/stable/generated/torch.floor_divide.html", "category": "pytorch docs"} {"text": "torch.get_float32_matmul_precision\ntorch.get_float32_matmul_precision()\nReturns the current value of float32 matrix multiplication\n precision. Refer to \"torch.set_float32_matmul_precision()\"\n documentation for more details.\nReturn type:\n str", "source": "https://pytorch.org/docs/stable/generated/torch.get_float32_matmul_precision.html", "category": "pytorch docs"} {"text": "prepare_qat_fx\nclass torch.quantization.quantize_fx.prepare_qat_fx(model, qconfig_mapping, example_inputs, prepare_custom_config=None, backend_config=None)\nPrepare a model for quantization aware training\nParameters:\n * model (***) -- torch.nn.Module model\n * **qconfig_mapping** (***) -- see \"prepare_fx()\"\n\n * **example_inputs** (***) -- see \"prepare_fx()\"\n\n * **prepare_custom_config** (***) -- see \"prepare_fx()\"\n\n * **backend_config** (***) -- see \"prepare_fx()\"\n\nReturns:\n A GraphModule with fake quant modules (configured by\n qconfig_mapping and backend_config), ready for quantization\n aware training\nReturn type:\n ObservedGraphModule\nExample:\n import torch\n from torch.ao.quantization import get_default_qat_qconfig_mapping\n from torch.ao.quantization import prepare_fx\n\n class Submodule(torch.nn.Module):\n def __init__(self):\n super().__init__()\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html", "category": "pytorch docs"} {"text": "super().init()\n self.linear = torch.nn.Linear(5, 5)\n def forward(self, x):\n x = self.linear(x)\n return x\n class M(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.linear = torch.nn.Linear(5, 5)\n self.sub = Submodule()\n\n def forward(self, x):\n x = self.linear(x)\n x = self.sub(x) + x\n return x\n\n # initialize a floating point model\n float_model = M().train()\n # (optional, but preferred) load the weights from pretrained model\n # float_model.load_weights(...)\n\n # define the training loop for quantization aware training\n def train_loop(model, train_data):\n model.train()\n for image, target in data_loader:\n ...\n\n # qconfig is the configuration for how we insert observers for a particular\n # operator\n # qconfig = get_default_qconfig(\"fbgemm\")\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html", "category": "pytorch docs"} {"text": "qconfig = get_default_qconfig(\"fbgemm\")\n # Example of customizing qconfig:\n # qconfig = torch.ao.quantization.QConfig(\n # activation=FakeQuantize.with_args(observer=MinMaxObserver.with_args(dtype=torch.qint8)),\n # weight=FakeQuantize.with_args(observer=MinMaxObserver.with_args(dtype=torch.qint8)))\n # `activation` and `weight` are 
constructors of observer module\n\n # qconfig_mapping is a collection of quantization configurations, user can\n # set the qconfig for each operator (torch op calls, functional calls, module calls)\n # in the model through qconfig_mapping\n # the following call will get the qconfig_mapping that works best for models\n # that target \"fbgemm\" backend\n qconfig_mapping = get_default_qat_qconfig(\"fbgemm\")\n\n # We can customize qconfig_mapping in different ways, please take a look at\n # the docstring for :func:`~torch.ao.quantization.prepare_fx` for different ways\n # to configure this\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html", "category": "pytorch docs"} {"text": "to configure this\n # example_inputs is a tuple of inputs, that is used to infer the type of the\n # outputs in the model\n # currently it's not used, but please make sure model(*example_inputs) runs\n example_inputs = (torch.randn(1, 3, 224, 224),)\n\n # TODO: add backend_config after we split the backend_config for fbgemm and qnnpack\n # e.g. backend_config = get_default_backend_config(\"fbgemm\")\n # `prepare_qat_fx` inserts observers in the model based on qconfig_mapping and\n # backend_config, if the configuration for an operator in qconfig_mapping\n # is supported in the backend_config (meaning it's supported by the target\n # hardware), we'll insert fake_quantize modules according to the qconfig_mapping\n # otherwise the configuration in qconfig_mapping will be ignored\n # see :func:`~torch.ao.quantization.prepare_fx` for a detailed explanation of\n # how qconfig_mapping interacts with backend_config\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html", "category": "pytorch docs"} {"text": "prepared_model = prepare_qat_fx(float_model, qconfig_mapping, example_inputs)\n # Run training\n train_loop(prepared_model, train_loop)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_qat_fx.html", "category": "pytorch docs"} {"text": "torch.Tensor.std\nTensor.std(dim=None, *, correction=1, keepdim=False) -> Tensor\nSee \"torch.std()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.std.html", "category": "pytorch docs"} {"text": "BNReLU3d\nclass torch.ao.nn.intrinsic.quantized.BNReLU3d(num_features, eps=1e-05, momentum=0.1, device=None, dtype=None)\nA BNReLU3d module is a fused module of BatchNorm3d and ReLU\nWe adopt the same interface as \"torch.ao.nn.quantized.BatchNorm3d\".\nVariables:\n torch.ao.nn.quantized.BatchNorm3d (Same as) --", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.BNReLU3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.sign_\nTensor.sign_() -> Tensor\nIn-place version of \"sign()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sign_.html", "category": "pytorch docs"} {"text": "torch.Tensor.floor\nTensor.floor() -> Tensor\nSee \"torch.floor()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.floor.html", "category": "pytorch docs"} {"text": "torch.normal\ntorch.normal(mean, std, *, generator=None, out=None) -> Tensor\nReturns a tensor of random numbers drawn from separate normal\n distributions whose mean and standard deviation are given.\nThe \"mean\" is a tensor with the mean of each output element's\n normal distribution\nThe \"std\" is a tensor with the standard deviation of each output\n element's normal distribution\nThe shapes of 
\"mean\" and \"std\" don't need to match, but the total\n number of elements in each tensor need to be the same.\nNote:\n When the shapes do not match, the shape of \"mean\" is used as the\n shape for the returned output tensor\n\nNote:\n When \"std\" is a CUDA tensor, this function synchronizes its\n device with the CPU.\n\nParameters:\n * mean (Tensor) -- the tensor of per-element means\n * **std** (*Tensor*) -- the tensor of per-element standard\n deviations\n\nKeyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom", "source": "https://pytorch.org/docs/stable/generated/torch.normal.html", "category": "pytorch docs"} {"text": "number generator for sampling\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> torch.normal(mean=torch.arange(1., 11.), std=torch.arange(1, 0, -0.1))\n tensor([ 1.0425, 3.5672, 2.7969, 4.2925, 4.7229, 6.2134,\n 8.0505, 8.1408, 9.0563, 10.0566])\n\ntorch.normal(mean=0.0, std, *, out=None) -> Tensor\nSimilar to the function above, but the means are shared among all\n drawn elements.\nParameters:\n * mean (float, optional) -- the mean for all\n distributions\n * **std** (*Tensor*) -- the tensor of per-element standard\n deviations\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.normal(mean=0.5, std=torch.arange(1., 6.))\n tensor([-1.2793, -1.0732, -2.0687, 5.1177, -1.2303])\n\ntorch.normal(mean, std=1.0, *, out=None) -> Tensor\nSimilar to the function above, but the standard deviations are", "source": "https://pytorch.org/docs/stable/generated/torch.normal.html", "category": "pytorch docs"} {"text": "shared among all drawn elements.\nParameters:\n * mean (Tensor) -- the tensor of per-element means\n * **std** (*float**, **optional*) -- the standard deviation for\n all distributions\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor\nExample:\n >>> torch.normal(mean=torch.arange(1., 6.))\n tensor([ 1.1552, 2.6148, 2.6535, 5.8318, 4.2361])\n\ntorch.normal(mean, std, size, *, out=None) -> Tensor\nSimilar to the function above, but the means and standard\n deviations are shared among all drawn elements. The resulting\n tensor has size given by \"size\".\nParameters:\n * mean (float) -- the mean for all distributions\n * **std** (*float*) -- the standard deviation for all\n distributions\n\n * **size** (*int**...*) -- a sequence of integers defining the\n shape of the output tensor.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:", "source": "https://pytorch.org/docs/stable/generated/torch.normal.html", "category": "pytorch docs"} {"text": "Example:\n >>> torch.normal(2, 3, size=(1, 4))\n tensor([[-1.3987, -1.9544, 3.6048, 0.7909]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.normal.html", "category": "pytorch docs"} {"text": "RNNCell\nclass torch.ao.nn.quantized.dynamic.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh', dtype=torch.qint8)\nAn Elman RNN cell with tanh or ReLU non-linearity. A dynamic\n quantized RNNCell module with floating point tensor as inputs and\n outputs. Weights are quantized to 8 bits. We adopt the same\n interface as torch.nn.RNNCell, please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.RNNCell for\n documentation.\nExamples:\n >>> rnn = nn.RNNCell(10, 20)\n >>> input = torch.randn(6, 3, 10)\n >>> hx = torch.randn(3, 20)\n >>> output = []\n >>> for i in range(6):\n ... hx = rnn(input[i], hx)\n ... 
output.append(hx)\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.RNNCell.html", "category": "pytorch docs"} {"text": "torch.Tensor.cpu\nTensor.cpu(memory_format=torch.preserve_format) -> Tensor\nReturns a copy of this object in CPU memory.\nIf this object is already in CPU memory and on the correct device,\n then no copy is performed and the original object is returned.\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cpu.html", "category": "pytorch docs"} {"text": "torch.select\ntorch.select(input, dim, index) -> Tensor\nSlices the \"input\" tensor along the selected dimension at the given\n index. This function returns a view of the original tensor with the\n given dimension removed.\nNote:\n If \"input\" is a sparse tensor and returning a view of the tensor\n is not possible, a RuntimeError exception is raised. In this is\n the case, consider using \"torch.select_copy()\" function.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to slice\n\n * **index** (*int*) -- the index to select with\n\nNote:\n \"select()\" is equivalent to slicing. For example,\n \"tensor.select(0, index)\" is equivalent to \"tensor[index]\" and\n \"tensor.select(2, index)\" is equivalent to \"tensor[:,:,index]\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.select.html", "category": "pytorch docs"} {"text": "torch.cuda.current_blas_handle\ntorch.cuda.current_blas_handle()\nReturns cublasHandle_t pointer to current cuBLAS handle", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.current_blas_handle.html", "category": "pytorch docs"} {"text": "torch.Tensor.int\nTensor.int(memory_format=torch.preserve_format) -> Tensor\n\"self.int()\" is equivalent to \"self.to(torch.int32)\". See \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.int.html", "category": "pytorch docs"} {"text": "torch.Tensor.erfc\nTensor.erfc() -> Tensor\nSee \"torch.erfc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.erfc.html", "category": "pytorch docs"} {"text": "torch.Tensor.abs\nTensor.abs() -> Tensor\nSee \"torch.abs()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.abs.html", "category": "pytorch docs"} {"text": "torch.Tensor.scatter\nTensor.scatter(dim, index, src) -> Tensor\nOut-of-place version of \"torch.Tensor.scatter_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter.html", "category": "pytorch docs"} {"text": "torch.nn.functional.soft_margin_loss\ntorch.nn.functional.soft_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor\nSee \"SoftMarginLoss\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.soft_margin_loss.html", "category": "pytorch docs"} {"text": "torch.from_dlpack\ntorch.from_dlpack(ext_tensor) -> Tensor\nConverts a tensor from an external library into a \"torch.Tensor\".\nThe returned PyTorch tensor will share the memory with the input\n tensor (which may have come from another library). Note that in-\n place operations will therefore also affect the data of the input\n tensor. 
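"torch.select()" above returns a view; the equivalence to plain indexing and the shared storage can be seen in this short sketch:

    import torch

    t = torch.arange(12).reshape(3, 4)
    row = torch.select(t, 0, 1)        # same as t[1]
    col = torch.select(t, 1, 2)        # same as t[:, 2]
    print(row)                         # tensor([4, 5, 6, 7])
    print(col)                         # tensor([ 2,  6, 10])

    row[0] = 100                       # select returns a view, so this writes into t
    print(t[1, 0])                     # tensor(100)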
This may lead to unexpected issues (e.g., other libraries\n may have read-only flags or immutable data structures), so the user\n should only do this if they know for sure that this is fine.\nParameters:\n ext_tensor (object with \"dlpack\" attribute, or a DLPack\n capsule) --\n The tensor or DLPack capsule to convert.\n\n If \"ext_tensor\" is a tensor (or ndarray) object, it must support\n the \"__dlpack__\" protocol (i.e., have a \"ext_tensor.__dlpack__\"\n method). Otherwise \"ext_tensor\" may be a DLPack capsule, which\n is an opaque \"PyCapsule\" instance, typically produced by a\n \"to_dlpack\" function or method.\n", "source": "https://pytorch.org/docs/stable/generated/torch.from_dlpack.html", "category": "pytorch docs"} {"text": "\"to_dlpack\" function or method.\nReturn type:\n Tensor\nExamples:\n >>> import torch.utils.dlpack\n >>> t = torch.arange(4)\n\n # Convert a tensor directly (supported in PyTorch >= 1.10)\n >>> t2 = torch.from_dlpack(t)\n >>> t2[:2] = -1 # show that memory is shared\n >>> t2\n tensor([-1, -1, 2, 3])\n >>> t\n tensor([-1, -1, 2, 3])\n\n # The old-style DLPack usage, with an intermediate capsule object\n >>> capsule = torch.utils.dlpack.to_dlpack(t)\n >>> capsule\n \n >>> t3 = torch.from_dlpack(capsule)\n >>> t3\n tensor([-1, -1, 2, 3])\n >>> t3[0] = -9 # now we're sharing memory between 3 tensors\n >>> t3\n tensor([-9, -1, 2, 3])\n >>> t2\n tensor([-9, -1, 2, 3])\n >>> t\n tensor([-9, -1, 2, 3])\n", "source": "https://pytorch.org/docs/stable/generated/torch.from_dlpack.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_set_to\nTensor.is_set_to(tensor) -> bool\nReturns True if both tensors are pointing to the exact same memory\n (same storage, offset, size and stride).", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_set_to.html", "category": "pytorch docs"} {"text": "DTypeWithConstraints\nclass torch.ao.quantization.backend_config.DTypeWithConstraints(dtype=None, quant_min_lower_bound=None, quant_max_upper_bound=None, scale_min_lower_bound=None, scale_max_upper_bound=None, scale_exact_match=None, zero_point_exact_match=None)\nConfig for specifying additional constraints for a given dtype,\n such as quantization value ranges, scale value ranges, and fixed\n quantization params, to be used in \"DTypeConfig\".\nThe constraints currently supported are:\n\n\nquant_min_lower_bound and quant_max_upper_bound: Lower and\n upper bounds for the minimum and maximum quantized values\n respectively. If the QConfig\u00e2\u0080\u0099s quant_min and quant_max fall\n outside this range, then the QConfig will be ignored.\n\n\nscale_min_lower_bound and scale_max_upper_bound: Lower and\n upper bounds for the minimum and maximum scale values\n respectively. If the QConfig\u00e2\u0080\u0099s minimum scale value (currently\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeWithConstraints.html", "category": "pytorch docs"} {"text": "exposed as eps) falls below the lower bound, then the QConfig\n will be ignored. Note that the upper bound is currently not\n enforced.\n\nscale_exact_match and zero_point_exact_match: Exact match\n requirements for scale and zero point, to be used for operators\n with fixed quantization parameters such as sigmoid and tanh. 
If\n the observer specified in the QConfig is neither\n FixedQParamsObserver nor FixedQParamsFakeQuantize, or if the\n quantization parameters don't match, then the QConfig will be\n ignored.\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeWithConstraints.html", "category": "pytorch docs"} {"text": "Hardtanh\nclass torch.nn.Hardtanh(min_val=- 1.0, max_val=1.0, inplace=False, min_value=None, max_value=None)\nApplies the HardTanh function element-wise.\nHardTanh is defined as:\n \\text{HardTanh}(x) = \\begin{cases} \\text{max\\_val} & \\text{\n if } x > \\text{ max\\_val } \\\\ \\text{min\\_val} & \\text{ if }\n x < \\text{ min\\_val } \\\\ x & \\text{ otherwise } \\\\\n \\end{cases}\n\nParameters:\n * min_val (float) -- minimum value of the linear region\n range. Default: -1\n * **max_val** (*float*) -- maximum value of the linear region\n range. Default: 1\n\n * **inplace** (*bool*) -- can optionally do the operation in-\n place. Default: \"False\"\n\nKeyword arguments \"min_value\" and \"max_value\" have been deprecated\n in favor of \"min_val\" and \"max_val\".\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.Hardtanh(-2, 2)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Hardtanh.html", "category": "pytorch docs"} {"text": "Examples:\n >>> m = nn.Hardtanh(-2, 2)\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Hardtanh.html", "category": "pytorch docs"} {"text": "ConvBn2d\nclass torch.ao.nn.intrinsic.qat.ConvBn2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)\nA ConvBn2d module is a module fused from Conv2d and BatchNorm2d,\n attached with FakeQuantize modules for weight, used in quantization\n aware training.\nWe combined the interface of \"torch.nn.Conv2d\" and\n \"torch.nn.BatchNorm2d\".\nSimilar to \"torch.nn.Conv2d\", with FakeQuantize modules initialized\n to default.\nVariables:\n * freeze_bn --\n * **weight_fake_quant** -- fake quant module for weight\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBn2d.html", "category": "pytorch docs"} {"text": "torch.where\ntorch.where(condition, input, other, *, out=None) -> Tensor\nReturn a tensor of elements selected from either \"input\" or\n \"other\", depending on \"condition\".\nThe operation is defined as:\n \\text{out}_i = \\begin{cases} \\text{input}_i & \\text{if }\n \\text{condition}_i \\\\ \\text{other}_i & \\text{otherwise} \\\\\n \\end{cases}\n\nNote:\n The tensors \"condition\", \"input\", \"other\" must be broadcastable.\n\nParameters:\n * condition (BoolTensor) -- When True (nonzero), yield\n input, otherwise yield other\n * **input** (*Tensor** or **Scalar*) -- value (if \"input\" is a\n scalar) or values selected at indices where \"condition\" is\n \"True\"\n\n * **other** (*Tensor** or **Scalar*) -- value (if \"other\" is a\n scalar) or values selected at indices where \"condition\" is\n \"False\"\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nReturns:", "source": "https://pytorch.org/docs/stable/generated/torch.where.html", "category": "pytorch docs"} {"text": "Returns:\n A tensor of shape equal to the broadcasted shape of \"condition\",\n \"input\", \"other\"\nReturn type:\n Tensor\nExample:\n >>> x = 
torch.randn(3, 2)\n >>> y = torch.ones(3, 2)\n >>> x\n tensor([[-0.4620, 0.3139],\n [ 0.3898, -0.7197],\n [ 0.0478, -0.1657]])\n >>> torch.where(x > 0, x, y)\n tensor([[ 1.0000, 0.3139],\n [ 0.3898, 1.0000],\n [ 0.0478, 1.0000]])\n >>> x = torch.randn(2, 2, dtype=torch.double)\n >>> x\n tensor([[ 1.0779, 0.0383],\n [-0.8785, -1.1089]], dtype=torch.float64)\n >>> torch.where(x > 0, x, 0.)\n tensor([[1.0779, 0.0383],\n [0.0000, 0.0000]], dtype=torch.float64)\n\ntorch.where(condition) -> tuple of LongTensor\n\"torch.where(condition)\" is identical to \"torch.nonzero(condition,\n as_tuple=True)\".\nNote:\n See also \"torch.nonzero()\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.where.html", "category": "pytorch docs"} {"text": "torch.Tensor.clamp_\nTensor.clamp_(min=None, max=None) -> Tensor\nIn-place version of \"clamp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.clamp_.html", "category": "pytorch docs"} {"text": "torch.Tensor.le\nTensor.le(other) -> Tensor\nSee \"torch.le()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.le.html", "category": "pytorch docs"} {"text": "GRUCell\nclass torch.ao.nn.quantized.dynamic.GRUCell(input_size, hidden_size, bias=True, dtype=torch.qint8)\nA gated recurrent unit (GRU) cell\nA dynamic quantized GRUCell module with floating point tensor as\n inputs and outputs. Weights are quantized to 8 bits. We adopt the\n same interface as torch.nn.GRUCell, please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.GRUCell for\n documentation.\nExamples:\n >>> rnn = nn.GRUCell(10, 20)\n >>> input = torch.randn(6, 3, 10)\n >>> hx = torch.randn(3, 20)\n >>> output = []\n >>> for i in range(6):\n ... hx = rnn(input[i], hx)\n ... output.append(hx)\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRUCell.html", "category": "pytorch docs"} {"text": "torch.Tensor.narrow\nTensor.narrow(dimension, start, length) -> Tensor\nSee \"torch.narrow()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.narrow.html", "category": "pytorch docs"} {"text": "PerChannelMinMaxObserver\nclass torch.quantization.observer.PerChannelMinMaxObserver(ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None, factory_kwargs=None, eps=1.1920928955078125e-07)\nObserver module for computing the quantization parameters based on\n the running per channel min and max values.\nThis observer uses the tensor min/max statistics to compute the per\n channel quantization parameters. The module records the running\n minimum and maximum of incoming tensors, and uses this statistic to\n compute the quantization parameters.\nParameters:\n * ch_axis -- Channel axis\n * **dtype** -- dtype argument to the *quantize* node needed to\n implement the reference model spec.\n\n * **qscheme** -- Quantization scheme to be used\n\n * **reduce_range** -- Reduces the range of the quantized data\n type by 1 bit\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.PerChannelMinMaxObserver.html", "category": "pytorch docs"} {"text": "type by 1 bit\n * **quant_min** -- Minimum quantization value. If unspecified,\n it will follow the 8-bit setup.\n\n * **quant_max** -- Maximum quantization value. 
If unspecified,\n it will follow the 8-bit setup.\n\n * **eps** (*Tensor*) -- Epsilon value for float32, Defaults to\n *torch.finfo(torch.float32).eps*.\n\nThe quantization parameters are computed the same way as in\n \"MinMaxObserver\", with the difference that the running min/max\n values are stored per channel. Scales and zero points are thus\n computed per channel as well.\nNote:\n If the running minimum equals to the running maximum, the scales\n and zero_points are set to 1.0 and 0.\n\nreset_min_max_vals()\n Resets the min/max values.\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.PerChannelMinMaxObserver.html", "category": "pytorch docs"} {"text": "torch.blackman_window\ntorch.blackman_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nBlackman window function.\n w[n] = 0.42 - 0.5 \\cos \\left( \\frac{2 \\pi n}{N - 1} \\right) +\n 0.08 \\cos \\left( \\frac{4 \\pi n}{N - 1} \\right)\n\nwhere N is the full window size.\nThe input \"window_length\" is a positive integer controlling the\n returned window size. \"periodic\" flag determines whether the\n returned window trims off the last duplicate value from the\n symmetric window and is ready to be used as a periodic window with\n functions like \"torch.stft()\". Therefore, if \"periodic\" is true,\n the N in above formula is in fact \\text{window_length} + 1. Also,\n we always have \"torch.blackman_window(L, periodic=True)\" equal to\n \"torch.blackman_window(L + 1, periodic=False)[:-1])\".\nNote:\n If \"window_length\" =1, the returned window contains a single\n value 1.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.blackman_window.html", "category": "pytorch docs"} {"text": "value 1.\nParameters:\n * window_length (int) -- the size of returned window\n * **periodic** (*bool**, **optional*) -- If True, returns a\n window to be used as periodic function. If False, return a\n symmetric window.\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\"). Only floating point\n types are supported.\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned window tensor. Only \"torch.strided\" (dense layout) is\n supported.\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n", "source": "https://pytorch.org/docs/stable/generated/torch.blackman_window.html", "category": "pytorch docs"} {"text": "tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nReturns:\n A 1-D tensor of size (\\text{window_length},) containing the\n window\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.blackman_window.html", "category": "pytorch docs"} {"text": "torch.Tensor.svd\nTensor.svd(some=True, compute_uv=True)\nSee \"torch.svd()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.svd.html", "category": "pytorch docs"} {"text": "torch.cuda.stream\ntorch.cuda.stream(stream)\nWrapper around the Context-manager StreamContext that selects a\n given stream.\nParameters:\n stream (Stream) -- selected stream. This manager is a no-\n op if it's \"None\".\nReturn type:\n StreamContext\n..Note:: In eager mode stream is of type Stream class while in JIT\n it is an object of the custom class \"torch.classes.cuda.Stream\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.stream.html", "category": "pytorch docs"} {"text": "torch.Tensor.log_\nTensor.log_() -> Tensor\nIn-place version of \"log()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log_.html", "category": "pytorch docs"} {"text": "device_of\nclass torch.cuda.device_of(obj)\nContext-manager that changes the current device to that of given\n object.\nYou can use both tensors and storages as arguments. If a given\n object is not allocated on a GPU, this is a no-op.\nParameters:\n obj (Tensor or Storage) -- object allocated on the\n selected device.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.device_of.html", "category": "pytorch docs"} {"text": "torch.histogram\ntorch.histogram(input, bins, *, range=None, weight=None, density=False, out=None)\nComputes a histogram of the values in a tensor.\n\"bins\" can be an integer or a 1D tensor.\nIf \"bins\" is an int, it specifies the number of equal-width bins.\n By default, the lower and upper range of the bins is determined by\n the minimum and maximum elements of the input tensor. The \"range\"\n argument can be provided to specify a range for the bins.\nIf \"bins\" is a 1D tensor, it specifies the sequence of bin edges\n including the rightmost edge. It should contain at least 2 elements\n and its elements should be increasing.\nParameters:\n * input (Tensor) -- the input tensor.\n * **bins** -- int or 1D Tensor. If int, defines the number of\n equal-width bins. If tensor, defines the sequence of bin edges\n including the rightmost edge.\n\nKeyword Arguments:\n * range (tuple of python:float) -- Defines the range of\n the bins.", "source": "https://pytorch.org/docs/stable/generated/torch.histogram.html", "category": "pytorch docs"} {"text": "the bins.\n * **weight** (*Tensor*) -- If provided, weight should have the\n same shape as input. Each value in input contributes its\n associated weight towards its bin's result.\n\n * **density** (*bool*) -- If False, the result will contain the\n count (or total weight) in each bin. If True, the result is\n the value of the probability density function over the bins,\n normalized such that the integral over the range of the bins\n is 1.\n\n * **out** (*Tensor**, **optional*) -- the output tensor. 
(tuple,\n optional): The result tuple of two output tensors (hist,\n bin_edges).\n\nReturns:\n 1D Tensor containing the values of the histogram.\n bin_edges(Tensor): 1D Tensor containing the edges of the\n histogram bins.\nReturn type:\n hist (Tensor)\nExample:\n >>> torch.histogram(torch.tensor([1., 2, 1]), bins=4, range=(0., 3.), weight=torch.tensor([1., 2., 4.]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.histogram.html", "category": "pytorch docs"} {"text": "(tensor([ 0., 5., 2., 0.]), tensor([0., 0.75, 1.5, 2.25, 3.]))\n >>> torch.histogram(torch.tensor([1., 2, 1]), bins=4, range=(0., 3.), weight=torch.tensor([1., 2., 4.]), density=True)\n (tensor([ 0., 0.9524, 0.3810, 0.]), tensor([0., 0.75, 1.5, 2.25, 3.]))", "source": "https://pytorch.org/docs/stable/generated/torch.histogram.html", "category": "pytorch docs"} {"text": "torch.Tensor.arctan\nTensor.arctan() -> Tensor\nSee \"torch.arctan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arctan.html", "category": "pytorch docs"} {"text": "torch.polygamma\ntorch.polygamma(n, input, *, out=None) -> Tensor\nAlias for \"torch.special.polygamma()\".", "source": "https://pytorch.org/docs/stable/generated/torch.polygamma.html", "category": "pytorch docs"} {"text": "torch.cuda.comm.broadcast_coalesced\ntorch.cuda.comm.broadcast_coalesced(tensors, devices, buffer_size=10485760)\nBroadcasts a sequence tensors to the specified GPUs. Small tensors\n are first coalesced into a buffer to reduce the number of\n synchronizations.\nParameters:\n * tensors (sequence) -- tensors to broadcast. Must be on\n the same device, either CPU or GPU.\n * **devices** (*Iterable**[**torch.device**, **str** or\n **int**]*) -- an iterable of GPU devices, among which to\n broadcast.\n\n * **buffer_size** (*int*) -- maximum size of the buffer used for\n coalescing\n\nReturns:\n A tuple containing copies of \"tensor\", placed on \"devices\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.broadcast_coalesced.html", "category": "pytorch docs"} {"text": "torch._foreach_abs\ntorch._foreach_abs(self: List[Tensor]) -> List[Tensor]\nApply \"torch.abs()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_abs.html", "category": "pytorch docs"} {"text": "torch.neg\ntorch.neg(input, *, out=None) -> Tensor\nReturns a new tensor with the negative of the elements of \"input\".\n \\text{out} = -1 \\times \\text{input}\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(5)\n >>> a\n tensor([ 0.0090, -0.2262, -0.0682, -0.2866, 0.3940])\n >>> torch.neg(a)\n tensor([-0.0090, 0.2262, 0.0682, 0.2866, -0.3940])\n", "source": "https://pytorch.org/docs/stable/generated/torch.neg.html", "category": "pytorch docs"} {"text": "torch.Tensor.floor_\nTensor.floor_() -> Tensor\nIn-place version of \"floor()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.floor_.html", "category": "pytorch docs"} {"text": "torch.Tensor.heaviside\nTensor.heaviside(values) -> Tensor\nSee \"torch.heaviside()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.heaviside.html", "category": "pytorch docs"} {"text": "torch.nn.functional.max_unpool2d\ntorch.nn.functional.max_unpool2d(input, indices, kernel_size, stride=None, padding=0, output_size=None)\nComputes a partial inverse of \"MaxPool2d\".\nSee \"MaxUnpool2d\" for details.\nReturn type:\n Tensor", "source": 
"https://pytorch.org/docs/stable/generated/torch.nn.functional.max_unpool2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.scatter_add_\nTensor.scatter_add_(dim, index, src) -> Tensor\nAdds all values from the tensor \"src\" into \"self\" at the indices\n specified in the \"index\" tensor in a similar fashion as\n \"scatter_()\". For each value in \"src\", it is added to an index in\n \"self\" which is specified by its index in \"src\" for \"dimension !=\n dim\" and by the corresponding value in \"index\" for \"dimension =\n dim\".\nFor a 3-D tensor, \"self\" is updated as:\n self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0\n self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1\n self[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2\n\n\"self\", \"index\" and \"src\" should have same number of dimensions. It\n is also required that \"index.size(d) <= src.size(d)\" for all\n dimensions \"d\", and that \"index.size(d) <= self.size(d)\" for all\n dimensions \"d != dim\". Note that \"index\" and \"src\" do not\n broadcast.\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html", "category": "pytorch docs"} {"text": "broadcast.\nNote:\n This operation may behave nondeterministically when given tensors\n on a CUDA device. See Reproducibility for more information.\n\nNote:\n The backward pass is implemented only for \"src.shape ==\n index.shape\".\n\nParameters:\n * dim (int) -- the axis along which to index\n * **index** (*LongTensor*) -- the indices of elements to scatter\n and add, can be either empty or of the same dimensionality as\n \"src\". When empty, the operation returns \"self\" unchanged.\n\n * **src** (*Tensor*) -- the source elements to scatter and add\n\nExample:\n >>> src = torch.ones((2, 5))\n >>> index = torch.tensor([[0, 1, 2, 0, 0]])\n >>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src)\n tensor([[1., 0., 0., 1., 1.],\n [0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 0.]])\n >>> index = torch.tensor([[0, 1, 2, 0, 0], [0, 1, 2, 2, 2]])\n >>> torch.zeros(3, 5, dtype=src.dtype).scatter_add_(0, index, src)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html", "category": "pytorch docs"} {"text": "tensor([[2., 0., 0., 1., 1.],\n [0., 2., 0., 0., 0.],\n [0., 0., 2., 1., 1.]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_add_.html", "category": "pytorch docs"} {"text": "torch.jit.optimize_for_inference\ntorch.jit.optimize_for_inference(mod, other_methods=None)\nPerforms a set of optimization passes to optimize a model for the\n purposes of inference. If the model is not already frozen,\n optimize_for_inference will invoke torch.jit.freeze\n automatically.\nIn addition to generic optimizations that should speed up your\n model regardless of environment, prepare for inference will also\n bake in build specific settings such as the presence of CUDNN or\n MKLDNN, and may in the future make transformations which speed\n things up on one machine but slow things down on another.\n Accordingly, serialization is not implemented following invoking\n optimize_for_inference and is not guaranteed.\nThis is still in prototype, and may have the potential to slow down\n your model. 
Primary use cases that have been targeted so far have\n been vision models on cpu and gpu to a lesser extent.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.optimize_for_inference.html", "category": "pytorch docs"} {"text": "Example (optimizing a module with Conv->Batchnorm):\n import torch\n in_channels, out_channels = 3, 32\n conv = torch.nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, bias=True)\n bn = torch.nn.BatchNorm2d(out_channels, eps=.001)\n mod = torch.nn.Sequential(conv, bn)\n frozen_mod = torch.jit.optimize_for_inference(torch.jit.script(mod.eval()))\n assert \"batch_norm\" not in str(frozen_mod.graph)\n # if built with MKLDNN, convolution will be run with MKLDNN weights\n assert \"MKLDNN\" in frozen_mod.graph\n\nReturn type:\n ScriptModule", "source": "https://pytorch.org/docs/stable/generated/torch.jit.optimize_for_inference.html", "category": "pytorch docs"} {"text": "torch.Tensor.addcmul\nTensor.addcmul(tensor1, tensor2, *, value=1) -> Tensor\nSee \"torch.addcmul()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addcmul.html", "category": "pytorch docs"} {"text": "torch.cuda.is_current_stream_capturing\ntorch.cuda.is_current_stream_capturing()\nReturns True if CUDA graph capture is underway on the current CUDA\n stream, False otherwise.\nIf a CUDA context does not exist on the current device, returns\n False without initializing the context.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.is_current_stream_capturing.html", "category": "pytorch docs"} {"text": "torch.Tensor.amin\nTensor.amin(dim=None, keepdim=False) -> Tensor\nSee \"torch.amin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.amin.html", "category": "pytorch docs"} {"text": "torch.is_warn_always_enabled\ntorch.is_warn_always_enabled()\nReturns True if the global warn_always flag is turned on. Refer to\n \"torch.set_warn_always()\" documentation for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.is_warn_always_enabled.html", "category": "pytorch docs"} {"text": "torch.Tensor.repeat_interleave\nTensor.repeat_interleave(repeats, dim=None, *, output_size=None) -> Tensor\nSee \"torch.repeat_interleave()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.repeat_interleave.html", "category": "pytorch docs"} {"text": "upsample\nclass torch.ao.nn.quantized.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None)\nUpsamples the input to either the given \"size\" or the given\n \"scale_factor\"\nWarning:\n This function is deprecated in favor of\n \"torch.nn.quantized.functional.interpolate()\". 
This is equivalent\n with \"nn.quantized.functional.interpolate(...)\".\n\nSee \"torch.nn.functional.interpolate()\" for implementation details.\nThe input dimensions are interpreted in the form: mini-batch x\n channels x [optional depth] x [optional height] x width.\nNote:\n The input quantization parameters propagate to the output.\n\nNote:\n Only 2D input is supported for quantized inputs\n\nNote:\n Only the following modes are supported for the quantized inputs:\n\n * *bilinear*\n\n * *nearest*\n\nParameters:\n * input (Tensor) -- quantized input tensor\n * **size** (*int** or **Tuple**[**int**] or **Tuple**[**int**,\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample.html", "category": "pytorch docs"} {"text": "int] or Tuple[int, int, int]*) -- output\n spatial size.\n * **scale_factor** (*float** or **Tuple**[**float**]*) --\n multiplier for spatial size. Has to be an integer.\n\n * **mode** (*str*) -- algorithm used for upsampling: \"'nearest'\"\n | \"'bilinear'\"\n\n * **align_corners** (*bool**, **optional*) -- Geometrically, we\n consider the pixels of the input and output as squares rather\n than points. If set to \"True\", the input and output tensors\n are aligned by the center points of their corner pixels,\n preserving the values at the corner pixels. If set to \"False\",\n the input and output tensors are aligned by the corner points\n of their corner pixels, and the interpolation uses edge value\n padding for out-of-boundary values, making this operation\n *independent* of input size when \"scale_factor\" is kept the\n same. This only has an effect when \"mode\" is \"'bilinear'\".\n Default: \"False\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample.html", "category": "pytorch docs"} {"text": "Default: \"False\"\nWarning:\n With \"align_corners = True\", the linearly interpolating modes\n (*bilinear*) don't proportionally align the output and input\n pixels, and thus the output values can depend on the input size.\n This was the default behavior for these modes up to version\n 0.3.1. Since then, the default behavior is \"align_corners =\n False\". See \"Upsample\" for concrete examples on how this affects\n the outputs.\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample.html", "category": "pytorch docs"} {"text": "torch.Tensor.nansum\nTensor.nansum(dim=None, keepdim=False, dtype=None) -> Tensor\nSee \"torch.nansum()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nansum.html", "category": "pytorch docs"} {"text": "torch.Tensor.unbind\nTensor.unbind(dim=0) -> seq\nSee \"torch.unbind()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unbind.html", "category": "pytorch docs"} {"text": "torch.isclose\ntorch.isclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) -> Tensor\nReturns a new tensor with boolean elements representing if each\n element of \"input\" is \"close\" to the corresponding element of\n \"other\". Closeness is defined as:\n \\lvert \\text{input} - \\text{other} \\rvert \\leq \\texttt{atol} +\n \\texttt{rtol} \\times \\lvert \\text{other} \\rvert\n\nwhere \"input\" and \"other\" are finite. 
Where \"input\" and/or \"other\"\n are nonfinite they are close if and only if they are equal, with\n NaNs being considered equal to each other when \"equal_nan\" is True.\nParameters:\n * input (Tensor) -- first tensor to compare\n * **other** (*Tensor*) -- second tensor to compare\n\n * **atol** (*float**, **optional*) -- absolute tolerance.\n Default: 1e-08\n\n * **rtol** (*float**, **optional*) -- relative tolerance.\n Default: 1e-05\n\n * **equal_nan** (*bool**, **optional*) -- if \"True\", then two\n", "source": "https://pytorch.org/docs/stable/generated/torch.isclose.html", "category": "pytorch docs"} {"text": "\"NaN\" s will be considered equal. Default: \"False\"\nExamples:\n >>> torch.isclose(torch.tensor((1., 2, 3)), torch.tensor((1 + 1e-10, 3, 4)))\n tensor([ True, False, False])\n >>> torch.isclose(torch.tensor((float('inf'), 4)), torch.tensor((float('inf'), 6)), rtol=.5)\n tensor([True, True])\n", "source": "https://pytorch.org/docs/stable/generated/torch.isclose.html", "category": "pytorch docs"} {"text": "torch.nn.functional.kl_div\ntorch.nn.functional.kl_div(input, target, size_average=None, reduce=None, reduction='mean', log_target=False)\nThe Kullback-Leibler divergence Loss\nSee \"KLDivLoss\" for details.\nParameters:\n * input (Tensor) -- Tensor of arbitrary shape in log-\n probabilities.\n * **target** (*Tensor*) -- Tensor of the same shape as input.\n See \"log_target\" for the target's interpretation.\n\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n multiple elements per sample. If the field \"size_average\" is\n set to \"False\", the losses are instead summed for each\n minibatch. Ignored when reduce is \"False\". Default: \"True\"\n\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.kl_div.html", "category": "pytorch docs"} {"text": "over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'batchmean'\" | \"'sum'\" |\n \"'mean'\". \"'none'\": no reduction will be applied\n \"'batchmean'\": the sum of the output will be divided by the\n batchsize \"'sum'\": the output will be summed \"'mean'\": the\n output will be divided by the number of elements in the output\n Default: \"'mean'\"\n\n * **log_target** (*bool*) -- A flag indicating whether \"target\"\n is passed in the log space. It is recommended to pass certain\n distributions (like \"softmax\") in the log space to avoid\n numerical issues caused by explicit \"log\". Default: \"False\"\n\nReturn type:\n Tensor\nNote:\n \"size_average\" and \"reduce\" are in the process of being\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.kl_div.html", "category": "pytorch docs"} {"text": "deprecated, and in the meantime, specifying either of those two\n args will override \"reduction\".\nNote:\n \"reduction\" = \"'mean'\" doesn't return the true kl divergence\n value, please use \"reduction\" = \"'batchmean'\" which aligns with\n KL math definition. 
In the next major release, \"'mean'\" will be\n changed to be the same as 'batchmean'.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.kl_div.html", "category": "pytorch docs"} {"text": "torch.ravel\ntorch.ravel(input) -> Tensor\nReturn a contiguous flattened tensor. A copy is made only if\n needed.\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> t = torch.tensor([[[1, 2],\n ... [3, 4]],\n ... [[5, 6],\n ... [7, 8]]])\n >>> torch.ravel(t)\n tensor([1, 2, 3, 4, 5, 6, 7, 8])\n", "source": "https://pytorch.org/docs/stable/generated/torch.ravel.html", "category": "pytorch docs"} {"text": "torch.get_default_dtype\ntorch.get_default_dtype() -> torch.dtype\nGet the current default floating point \"torch.dtype\".\nExample:\n >>> torch.get_default_dtype() # initial default for floating point is torch.float32\n torch.float32\n >>> torch.set_default_dtype(torch.float64)\n >>> torch.get_default_dtype() # default is now changed to torch.float64\n torch.float64\n >>> torch.set_default_tensor_type(torch.FloatTensor) # setting tensor type also affects this\n >>> torch.get_default_dtype() # changed to torch.float32, the dtype for torch.FloatTensor\n torch.float32\n", "source": "https://pytorch.org/docs/stable/generated/torch.get_default_dtype.html", "category": "pytorch docs"} {"text": "torch.autograd.backward\ntorch.autograd.backward(tensors, grad_tensors=None, retain_graph=None, create_graph=False, grad_variables=None, inputs=None)\nComputes the sum of gradients of given tensors with respect to\n graph leaves.\nThe graph is differentiated using the chain rule. If any of\n \"tensors\" are non-scalar (i.e. their data has more than one\n element) and require gradient, then the Jacobian-vector product\n would be computed, in this case the function additionally requires\n specifying \"grad_tensors\". It should be a sequence of matching\n length, that contains the \"vector\" in the Jacobian-vector product,\n usually the gradient of the differentiated function w.r.t.\n corresponding tensors (\"None\" is an acceptable value for all\n tensors that don't need gradient tensors).\nThis function accumulates gradients in the leaves - you might need\n to zero \".grad\" attributes or set them to \"None\" before calling it.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.backward.html", "category": "pytorch docs"} {"text": "See Default gradient layouts for details on the memory layout of\n accumulated gradients.\nNote:\n Using this method with \"create_graph=True\" will create a\n reference cycle between the parameter and its gradient which can\n cause a memory leak. We recommend using \"autograd.grad\" when\n creating the graph to avoid this. If you have to use this\n function, make sure to reset the \".grad\" fields of your\n parameters to \"None\" after use to break the cycle and avoid the\n leak.\n\nNote:\n If you run any forward ops, create \"grad_tensors\", and/or call\n \"backward\" in a user-specified CUDA stream context, see Stream\n semantics of backward passes.\n\nNote:\n When \"inputs\" are provided and a given input is not a leaf, the\n current implementation will call its grad_fn (even though it is\n not strictly needed to get this gradients). It is an\n implementation detail on which the user should not rely. 
See htt\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.backward.html", "category": "pytorch docs"} {"text": "ps://github.com/pytorch/pytorch/pull/60521#issuecomment-867061780\n for more details.\nParameters:\n * tensors (Sequence[Tensor] or Tensor) -- Tensors\n of which the derivative will be computed.\n * **grad_tensors** (*Sequence**[**Tensor** or **None**] or\n **Tensor**, **optional*) -- The \"vector\" in the Jacobian-\n vector product, usually gradients w.r.t. each element of\n corresponding tensors. None values can be specified for scalar\n Tensors or ones that don't require grad. If a None value would\n be acceptable for all grad_tensors, then this argument is\n optional.\n\n * **retain_graph** (*bool**, **optional*) -- If \"False\", the\n graph used to compute the grad will be freed. Note that in\n nearly all cases setting this option to \"True\" is not needed\n and often can be worked around in a much more efficient way.\n Defaults to the value of \"create_graph\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.backward.html", "category": "pytorch docs"} {"text": "Defaults to the value of \"create_graph\".\n * **create_graph** (*bool**, **optional*) -- If \"True\", graph of\n the derivative will be constructed, allowing to compute higher\n order derivative products. Defaults to \"False\".\n\n * **inputs** (*Sequence**[**Tensor**] or **Tensor**,\n **optional*) -- Inputs w.r.t. which the gradient be will\n accumulated into \".grad\". All other Tensors will be ignored.\n If not provided, the gradient is accumulated into all the leaf\n Tensors that were used to compute the attr::tensors.\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.backward.html", "category": "pytorch docs"} {"text": "torch.geqrf\ntorch.geqrf(input, *, out=None)\nThis is a low-level function for calling LAPACK's geqrf directly.\n This function returns a namedtuple (a, tau) as defined in LAPACK\n documentation for geqrf .\nComputes a QR decomposition of \"input\". Both Q and R matrices\n are stored in the same output tensor a. The elements of R are\n stored on and above the diagonal. Elementary reflectors (or\n Householder vectors) implicitly defining matrix Q are stored\n below the diagonal. The results of this function can be used\n together with \"torch.linalg.householder_product()\" to obtain the\n Q matrix or with \"torch.ormqr()\", which uses an implicit\n representation of the Q matrix, for an efficient matrix-matrix\n multiplication.\nSee LAPACK documentation for geqrf for further details.\nNote:\n See also \"torch.linalg.qr()\", which computes Q and R matrices,\n and \"torch.linalg.lstsq()\" with the \"driver=\"gels\"\" option for a\n", "source": "https://pytorch.org/docs/stable/generated/torch.geqrf.html", "category": "pytorch docs"} {"text": "function that can solve matrix equations using a QR\n decomposition.\nParameters:\n input (Tensor) -- the input matrix\nKeyword Arguments:\n out (tuple, optional) -- the output tuple of (Tensor,\n Tensor). Ignored if None. 
Default: None.", "source": "https://pytorch.org/docs/stable/generated/torch.geqrf.html", "category": "pytorch docs"} {"text": "torch.autograd.Function.backward\nstatic Function.backward(ctx, *grad_outputs)\nDefines a formula for differentiating the operation with backward\n mode automatic differentiation (alias to the vjp function).\nThis function is to be overridden by all subclasses.\nIt must accept a context \"ctx\" as the first argument, followed by\n as many outputs as the \"forward()\" returned (None will be passed in\n for non tensor outputs of the forward function), and it should\n return as many tensors, as there were inputs to \"forward()\". Each\n argument is the gradient w.r.t the given output, and each returned\n value should be the gradient w.r.t. the corresponding input. If an\n input is not a Tensor or is a Tensor not requiring grads, you can\n just pass None as a gradient for that input.\nThe context can be used to retrieve tensors saved during the\n forward pass. It also has an attribute \"ctx.needs_input_grad\" as a", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.backward.html", "category": "pytorch docs"} {"text": "tuple of booleans representing whether each input needs gradient.\n E.g., \"backward()\" will have \"ctx.needs_input_grad[0] = True\" if\n the first input to \"forward()\" needs gradient computated w.r.t. the\n output.\nReturn type:\n Any", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.backward.html", "category": "pytorch docs"} {"text": "torch.Tensor.isneginf\nTensor.isneginf() -> Tensor\nSee \"torch.isneginf()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isneginf.html", "category": "pytorch docs"} {"text": "torch.cumprod\ntorch.cumprod(input, dim, *, dtype=None, out=None) -> Tensor\nReturns the cumulative product of elements of \"input\" in the\n dimension \"dim\".\nFor example, if \"input\" is a vector of size N, the result will also\n be a vector of size N, with elements.\n y_i = x_1 \\times x_2\\times x_3\\times \\dots \\times x_i\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to do the operation over\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. 
Default: None.\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> a = torch.randn(10)\n >>> a\n tensor([ 0.6001, 0.2069, -0.1919, 0.9792, 0.6727, 1.0062, 0.4126,\n -0.2129, -0.4206, 0.1968])\n", "source": "https://pytorch.org/docs/stable/generated/torch.cumprod.html", "category": "pytorch docs"} {"text": "-0.2129, -0.4206, 0.1968])\n >>> torch.cumprod(a, dim=0)\n tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0158, -0.0065,\n 0.0014, -0.0006, -0.0001])\n >>> a[5] = 0.0\n >>> torch.cumprod(a, dim=0)\n tensor([ 0.6001, 0.1241, -0.0238, -0.0233, -0.0157, -0.0000, -0.0000,\n 0.0000, -0.0000, -0.0000])\n", "source": "https://pytorch.org/docs/stable/generated/torch.cumprod.html", "category": "pytorch docs"} {"text": "torch.Tensor.diag\nTensor.diag(diagonal=0) -> Tensor\nSee \"torch.diag()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.diag.html", "category": "pytorch docs"} {"text": "torch.rsqrt\ntorch.rsqrt(input, *, out=None) -> Tensor\nReturns a new tensor with the reciprocal of the square-root of each\n of the elements of \"input\".\n \\text{out}_{i} = \\frac{1}{\\sqrt{\\text{input}_{i}}}\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([-0.0370, 0.2970, 1.5420, -0.9105])\n >>> torch.rsqrt(a)\n tensor([ nan, 1.8351, 0.8053, nan])\n", "source": "https://pytorch.org/docs/stable/generated/torch.rsqrt.html", "category": "pytorch docs"} {"text": "torch.dstack\ntorch.dstack(tensors, *, out=None) -> Tensor\nStack tensors in sequence depthwise (along third axis).\nThis is equivalent to concatenation along the third axis after 1-D\n and 2-D tensors have been reshaped by \"torch.atleast_3d()\".\nParameters:\n tensors (sequence of Tensors) -- sequence of tensors to\n concatenate\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([1, 2, 3])\n >>> b = torch.tensor([4, 5, 6])\n >>> torch.dstack((a,b))\n tensor([[[1, 4],\n [2, 5],\n [3, 6]]])\n >>> a = torch.tensor([[1],[2],[3]])\n >>> b = torch.tensor([[4],[5],[6]])\n >>> torch.dstack((a,b))\n tensor([[[1, 4]],\n [[2, 5]],\n [[3, 6]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.dstack.html", "category": "pytorch docs"} {"text": "torch.Tensor.tan_\nTensor.tan_() -> Tensor\nIn-place version of \"tan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tan_.html", "category": "pytorch docs"} {"text": "torch.Tensor.sub\nTensor.sub(other, *, alpha=1) -> Tensor\nSee \"torch.sub()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sub.html", "category": "pytorch docs"} {"text": "torch._foreach_tan\ntorch._foreach_tan(self: List[Tensor]) -> List[Tensor]\nApply \"torch.tan()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_tan.html", "category": "pytorch docs"} {"text": "torch.dist\ntorch.dist(input, other, p=2) -> Tensor\nReturns the p-norm of (\"input\" - \"other\")\nThe shapes of \"input\" and \"other\" must be broadcastable.\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the Right-hand-side input tensor\n\n * **p** (*float**, **optional*) -- the norm to be computed\n\nExample:\n >>> x = torch.randn(4)\n >>> x\n tensor([-1.5393, -0.8675, 0.5916, 1.6321])\n >>> y = torch.randn(4)\n >>> y\n tensor([ 0.0967, -1.0511, 0.6295, 0.8360])\n >>> torch.dist(x, y, 3.5)\n tensor(1.6727)\n >>> 
torch.dist(x, y, 3)\n tensor(1.6973)\n >>> torch.dist(x, y, 0)\n tensor(4.)\n >>> torch.dist(x, y, 1)\n tensor(2.6537)\n", "source": "https://pytorch.org/docs/stable/generated/torch.dist.html", "category": "pytorch docs"} {"text": "torch.func.vjp\ntorch.func.vjp(func, *primals, has_aux=False)\nStanding for the vector-Jacobian product, returns a tuple\n containing the results of \"func\" applied to \"primals\" and a\n function that, when given \"cotangents\", computes the reverse-mode\n Jacobian of \"func\" with respect to \"primals\" times \"cotangents\".\nParameters:\n * func (Callable) -- A Python function that takes one or\n more arguments. Must return one or more Tensors.\n * **primals** (*Tensors*) -- Positional arguments to \"func\" that\n must all be Tensors. The returned function will also be\n computing the derivative with respect to these arguments\n\n * **has_aux** (*bool*) -- Flag indicating that \"func\" returns a\n \"(output, aux)\" tuple where the first element is the output of\n the function to be differentiated and the second element is\n other auxiliary objects that will not be differentiated.\n Default: False.\n\nReturns:", "source": "https://pytorch.org/docs/stable/generated/torch.func.vjp.html", "category": "pytorch docs"} {"text": "Default: False.\nReturns:\n Returns a \"(output, vjp_fn)\" tuple containing the output of\n \"func\" applied to \"primals\" and a function that computes the vjp\n of \"func\" with respect to all \"primals\" using the cotangents\n passed to the returned function. If \"has_aux is True\", then\n instead returns a \"(output, vjp_fn, aux)\" tuple. The returned\n \"vjp_fn\" function will return a tuple of each VJP.\nWhen used in simple cases, \"vjp()\" behaves the same as \"grad()\"\n\n\n\nx = torch.randn([5])\nf = lambda x: x.sin().sum()\n(_, vjpfunc) = torch.func.vjp(f, x)\ngrad = vjpfunc(torch.tensor(1.))[0]\nassert torch.allclose(grad, torch.func.grad(f)(x))\n\n\n\nHowever, \"vjp()\" can support functions with multiple outputs by\n passing in the cotangents for each of the outputs\n\n\n\nx = torch.randn([5])\nf = lambda x: (x.sin(), x.cos())\n(_, vjpfunc) = torch.func.vjp(f, x)\nvjps = vjpfunc((torch.ones([5]), torch.ones([5])))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.vjp.html", "category": "pytorch docs"} {"text": "\n\n\nassert torch.allclose(vjps[0], x.cos() + -x.sin())\n\n\n\n\"vjp()\" can even support outputs being Python structs\n\n\n\nx = torch.randn([5])\nf = lambda x: {'first': x.sin(), 'second': x.cos()}\n(_, vjpfunc) = torch.func.vjp(f, x)\ncotangents = {'first': torch.ones([5]), 'second': torch.ones([5])}\nvjps = vjpfunc(cotangents)\nassert torch.allclose(vjps[0], x.cos() + -x.sin())\n\n\n\nThe function returned by \"vjp()\" will compute the partials with\n respect to each of the \"primals\"\n\n\n\nx, y = torch.randn([5, 4]), torch.randn([4, 5])\n(_, vjpfunc) = torch.func.vjp(torch.matmul, x, y)\ncotangents = torch.randn([5, 5])\nvjps = vjpfunc(cotangents)\nassert len(vjps) == 2\nassert torch.allclose(vjps[0], torch.matmul(cotangents, y.transpose(0, 1)))\nassert torch.allclose(vjps[1], torch.matmul(x.transpose(0, 1), cotangents))\n\n\n\n\"primals\" are the positional arguments for \"f\". 
All kwargs use\n their default value\n\n\n\nx = torch.randn([5])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.vjp.html", "category": "pytorch docs"} {"text": "\n\n\nx = torch.randn([5])\ndef f(x, scale=4.):\n return x * scale\n(_, vjpfunc) = torch.func.vjp(f, x)\nvjps = vjpfunc(torch.ones_like(x))\nassert torch.allclose(vjps[0], torch.full(x.shape, 4.))\n\n\n\nNote:\n Using PyTorch \"torch.no_grad\" together with \"vjp\". Case 1: Using\n \"torch.no_grad\" inside a function:\n\n >>> def f(x):\n >>> with torch.no_grad():\n >>> c = x ** 2\n >>> return x - c\n\n In this case, \"vjp(f)(x)\" will respect the inner\n \"torch.no_grad\".Case 2: Using \"vjp\" inside \"torch.no_grad\"\n context manager:\n\n >>> with torch.no_grad():\n >>> vjp(f)(x)\n\n In this case, \"vjp\" will respect the inner \"torch.no_grad\", but\n not the outer one. This is because \"vjp\" is a \"function\n transform\": its result should not depend on the result of a\n context manager outside of \"f\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.vjp.html", "category": "pytorch docs"} {"text": "Tanhshrink\nclass torch.nn.Tanhshrink\nApplies the element-wise function:\n \\text{Tanhshrink}(x) = x - \\tanh(x)\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.Tanhshrink()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Tanhshrink.html", "category": "pytorch docs"} {"text": "torch.Tensor.arccos\nTensor.arccos() -> Tensor\nSee \"torch.arccos()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arccos.html", "category": "pytorch docs"} {"text": "torch.Tensor.row_indices\nTensor.row_indices()", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.row_indices.html", "category": "pytorch docs"} {"text": "Linear\nclass torch.ao.nn.qat.dynamic.Linear(in_features, out_features, bias=True, qconfig=None, device=None, dtype=None)\nA linear module attached with FakeQuantize modules for weight, used\n for dynamic quantization aware training.\nWe adopt the same interface as torch.nn.Linear, please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for\n documentation.\nSimilar to torch.nn.Linear, with FakeQuantize modules initialized\n to default.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.qat.dynamic.Linear.html", "category": "pytorch docs"} {"text": "MaxUnpool2d\nclass torch.nn.MaxUnpool2d(kernel_size, stride=None, padding=0)\nComputes a partial inverse of \"MaxPool2d\".\n\"MaxPool2d\" is not fully invertible, since the non-maximal values\n are lost.\n\"MaxUnpool2d\" takes in as input the output of \"MaxPool2d\" including\n the indices of the maximal values and computes a partial inverse in\n which all non-maximal values are set to zero.\nNote:\n \"MaxPool2d\" can map several input sizes to the same output sizes.\n Hence, the inversion process can get ambiguous. To accommodate\n this, you can provide the needed output size as an additional\n argument \"output_size\" in the forward call. See the Inputs and\n Example below.\n\nParameters:\n * kernel_size (int or tuple) -- Size of the max\n pooling window.\n * **stride** (*int** or **tuple*) -- Stride of the max pooling\n window. 
It is set to \"kernel_size\" by default.\n\n * **padding** (*int** or **tuple*) -- Padding that was added to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool2d.html", "category": "pytorch docs"} {"text": "the input\nInputs:\n * input: the input Tensor to invert\n * *indices*: the indices given out by \"MaxPool2d\"\n\n * *output_size* (optional): the targeted output size\n\nShape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where\n\n H_{out} = (H_{in} - 1) \\times \\text{stride[0]} - 2 \\times\n \\text{padding[0]} + \\text{kernel\\_size[0]}\n\n W_{out} = (W_{in} - 1) \\times \\text{stride[1]} - 2 \\times\n \\text{padding[1]} + \\text{kernel\\_size[1]}\n\n or as given by \"output_size\" in the call operator\n\nExample:\n >>> pool = nn.MaxPool2d(2, stride=2, return_indices=True)\n >>> unpool = nn.MaxUnpool2d(2, stride=2)\n >>> input = torch.tensor([[[[ 1., 2., 3., 4.],\n [ 5., 6., 7., 8.],\n [ 9., 10., 11., 12.],\n [13., 14., 15., 16.]]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool2d.html", "category": "pytorch docs"} {"text": "\n\n\noutput, indices = pool(input)\n >>> unpool(output, indices)\n tensor([[[[ 0., 0., 0., 0.],\n [ 0., 6., 0., 8.],\n [ 0., 0., 0., 0.],\n [ 0., 14., 0., 16.]]]])\n >>> # Now using output_size to resolve an ambiguous size for the inverse\n >>> input = torch.torch.tensor([[[[ 1., 2., 3., 4., 5.],\n [ 6., 7., 8., 9., 10.],\n [11., 12., 13., 14., 15.],\n [16., 17., 18., 19., 20.]]]])\n >>> output, indices = pool(input)\n >>> # This call will not work without specifying output_size\n >>> unpool(output, indices, output_size=input.size())\n tensor([[[[ 0., 0., 0., 0., 0.],\n [ 0., 7., 0., 9., 0.],\n [ 0., 0., 0., 0., 0.],\n [ 0., 17., 0., 19., 0.]]]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxUnpool2d.html", "category": "pytorch docs"} {"text": "LSTMCell\nclass torch.ao.nn.quantized.dynamic.LSTMCell(args, *kwargs)\nA long short-term memory (LSTM) cell.\nA dynamic quantized LSTMCell module with floating point tensor as\n inputs and outputs. Weights are quantized to 8 bits. We adopt the\n same interface as torch.nn.LSTMCell, please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.LSTMCell for\n documentation.\nExamples:\n >>> rnn = nn.LSTMCell(10, 20)\n >>> input = torch.randn(6, 3, 10)\n >>> hx = torch.randn(3, 20)\n >>> cx = torch.randn(3, 20)\n >>> output = []\n >>> for i in range(6):\n ... hx, cx = rnn(input[i], (hx, cx))\n ... output.append(hx)\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.LSTMCell.html", "category": "pytorch docs"} {"text": "torch.sparse.sum\ntorch.sparse.sum(input, dim=None, dtype=None)\nReturns the sum of each row of the sparse tensor \"input\" in the\n given dimensions \"dim\". If \"dim\" is a list of dimensions, reduce\n over all of them. When sum over all \"sparse_dim\", this method\n returns a dense tensor instead of a sparse tensor.\nAll summed \"dim\" are squeezed (see \"torch.squeeze()\"), resulting an\n output tensor having \"dim\" fewer dimensions than \"input\".\nDuring backward, only gradients at \"nnz\" locations of \"input\" will\n propagate back. Note that the gradients of \"input\" is coalesced.\nParameters:\n * input (Tensor) -- the input sparse tensor\n * **dim** (*int** or **tuple of ints*) -- a dimension or a list\n of dimensions to reduce. 
Default: reduce over all dims.\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: dtype of \"input\".\n\nReturn type:\n Tensor\nExample:\n >>> nnz = 3\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.sum.html", "category": "pytorch docs"} {"text": "Tensor\nExample:\n >>> nnz = 3\n >>> dims = [5, 5, 2, 3]\n >>> I = torch.cat([torch.randint(0, dims[0], size=(nnz,)),\n torch.randint(0, dims[1], size=(nnz,))], 0).reshape(2, nnz)\n >>> V = torch.randn(nnz, dims[2], dims[3])\n >>> size = torch.Size(dims)\n >>> S = torch.sparse_coo_tensor(I, V, size)\n >>> S\n tensor(indices=tensor([[2, 0, 3],\n [2, 4, 1]]),\n values=tensor([[[-0.6438, -1.6467, 1.4004],\n [ 0.3411, 0.0918, -0.2312]],\n\n [[ 0.5348, 0.0634, -2.0494],\n [-0.7125, -1.0646, 2.1844]],\n\n [[ 0.1276, 0.1874, -0.6334],\n [-1.9682, -0.5340, 0.7483]]]),\n size=(5, 5, 2, 3), nnz=3, layout=torch.sparse_coo)\n\n # when sum over only part of sparse_dims, return a sparse tensor\n >>> torch.sparse.sum(S, [1, 3])\n tensor(indices=tensor([[0, 2, 3]]),\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.sum.html", "category": "pytorch docs"} {"text": "tensor(indices=tensor([[0, 2, 3]]),\n values=tensor([[-1.4512, 0.4073],\n [-0.8901, 0.2017],\n [-0.3183, -1.7539]]),\n size=(5, 2), nnz=3, layout=torch.sparse_coo)\n # when sum over all sparse dim, return a dense tensor\n # with summed dims squeezed\n >>> torch.sparse.sum(S, [0, 1, 3])\n tensor([-2.6596, -1.1450])\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.sum.html", "category": "pytorch docs"} {"text": "torch.remainder\ntorch.remainder(input, other, *, out=None) -> Tensor\nComputes Python's modulus operation entrywise. The result has the\n same sign as the divisor \"other\" and its absolute value is less\n than that of \"other\".\nIt may also be defined in terms of \"torch.div()\" as\n torch.remainder(a, b) == a - a.div(b, rounding_mode=\"floor\") * b\n\nSupports broadcasting to a common shape, type promotion, and\n integer and float inputs.\nNote:\n Complex inputs are not supported. In some cases, it is not\n mathematically possible to satisfy the definition of a modulo\n operation with complex numbers. See \"torch.fmod()\" for how\n division by zero is handled.\n\nSee also:\n \"torch.fmod()\" which implements C++'s std::fmod. 
This one is\n defined in terms of division rounding towards zero.\n\nParameters:\n * input (Tensor or Scalar) -- the dividend\n * **other** (*Tensor** or **Scalar*) -- the divisor\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.remainder.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.remainder(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)\n tensor([ 1., 0., 1., 1., 0., 1.])\n >>> torch.remainder(torch.tensor([1, 2, 3, 4, 5]), -1.5)\n tensor([ -0.5000, -1.0000, 0.0000, -0.5000, -1.0000 ])\n", "source": "https://pytorch.org/docs/stable/generated/torch.remainder.html", "category": "pytorch docs"} {"text": "torch.moveaxis\ntorch.moveaxis(input, source, destination) -> Tensor\nAlias for \"torch.movedim()\".\nThis function is equivalent to NumPy's moveaxis function.\nExamples:\n >>> t = torch.randn(3,2,1)\n >>> t\n tensor([[[-0.3362],\n [-0.8437]],\n\n [[-0.9627],\n [ 0.1727]],\n\n [[ 0.5173],\n [-0.1398]]])\n >>> torch.moveaxis(t, 1, 0).shape\n torch.Size([2, 3, 1])\n >>> torch.moveaxis(t, 1, 0)\n tensor([[[-0.3362],\n [-0.9627],\n [ 0.5173]],\n\n [[-0.8437],\n [ 0.1727],\n [-0.1398]]])\n >>> torch.moveaxis(t, (1, 2), (0, 1)).shape\n torch.Size([2, 1, 3])\n >>> torch.moveaxis(t, (1, 2), (0, 1))\n tensor([[[-0.3362, -0.9627, 0.5173]],\n\n [[-0.8437, 0.1727, -0.1398]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.moveaxis.html", "category": "pytorch docs"} {"text": "torch.ormqr\ntorch.ormqr(input, tau, other, left=True, transpose=False, *, out=None) -> Tensor\nComputes the matrix-matrix multiplication of a product of\n Householder matrices with a general matrix.\nMultiplies a m \\times n matrix C (given by \"other\") with a matrix\n Q, where Q is represented using Householder reflectors (input,\n tau). See Representation of Orthogonal or Unitary Matrices for\n further details.\nIf \"left\" is True then op(Q) times C is computed, otherwise\n the result is C times op(Q). When \"left\" is True, the\n implicit matrix Q has size m \\times m. It has size n \\times n\n otherwise. If \"transpose\" is True then op is the conjugate\n transpose operation, otherwise it's a no-op.\nSupports inputs of float, double, cfloat and cdouble dtypes. Also\n supports batched inputs, and, if the input is batched, the output\n is batched with the same dimensions.\nSee also:\n \"torch.geqrf()\" can be used to form the Householder\n", "source": "https://pytorch.org/docs/stable/generated/torch.ormqr.html", "category": "pytorch docs"} {"text": "representation (input, tau) of matrix Q from the QR\n decomposition.\nNote:\n This function supports backward but it is only fast when \"(input,\n tau)\" do not require gradients and/or \"tau.size(-1)\" is very\n small. ``\n\nParameters:\n * input (Tensor) -- tensor of shape (, mn, k) where ***\n is zero or more batch dimensions and mn equals to m or n*\n depending on the \"left\".\n * **tau** (*Tensor*) -- tensor of shape *(*, min(mn, k))* where\n *** is zero or more batch dimensions.\n\n * **other** (*Tensor*) -- tensor of shape *(*, m, n)* where ***\n is zero or more batch dimensions.\n\n * **left** (*bool*) -- controls the order of multiplication.\n\n * **transpose** (*bool*) -- controls whether the matrix *Q* is\n conjugate transposed or not.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output Tensor. Ignored\n if None. 
Default: None.", "source": "https://pytorch.org/docs/stable/generated/torch.ormqr.html", "category": "pytorch docs"} {"text": "torch.cuda.set_sync_debug_mode\ntorch.cuda.set_sync_debug_mode(debug_mode)\nSets the debug mode for cuda synchronizing operations.\nParameters:\n debug_mode (str or int) -- if \"default\" or 0, don't\n error or warn on synchronizing operations, if \"warn\" or 1, warn\n on synchronizing operations, if \"error\" or 2, error out\n synchronizing operations.\nWarning:\n This is an experimental feature, and not all synchronizing\n operations will trigger warning or error. In particular,\n operations in torch.distributed and torch.sparse namespaces are\n not covered yet.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.set_sync_debug_mode.html", "category": "pytorch docs"} {"text": "torch.log10\ntorch.log10(input, *, out=None) -> Tensor\nReturns a new tensor with the logarithm to the base 10 of the\n elements of \"input\".\n y_{i} = \\log_{10} (x_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.rand(5)\n >>> a\n tensor([ 0.5224, 0.9354, 0.7257, 0.1301, 0.2251])\n\n\n >>> torch.log10(a)\n tensor([-0.2820, -0.0290, -0.1392, -0.8857, -0.6476])\n", "source": "https://pytorch.org/docs/stable/generated/torch.log10.html", "category": "pytorch docs"} {"text": "torch.flatten\ntorch.flatten(input, start_dim=0, end_dim=- 1) -> Tensor\nFlattens \"input\" by reshaping it into a one-dimensional tensor. If\n \"start_dim\" or \"end_dim\" are passed, only dimensions starting with\n \"start_dim\" and ending with \"end_dim\" are flattened. The order of\n elements in \"input\" is unchanged.\nUnlike NumPy's flatten, which always copies input's data, this\n function may return the original object, a view, or copy. If no\n dimensions are flattened, then the original object \"input\" is\n returned. Otherwise, if input can be viewed as the flattened shape,\n then that view is returned. Finally, only if the input cannot be\n viewed as the flattened shape is input's data copied. See\n \"torch.Tensor.view()\" for details on when a view will be returned.\nNote:\n Flattening a zero-dimensional tensor will return a one-\n dimensional view.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **start_dim** (*int*) -- the first dim to flatten\n", "source": "https://pytorch.org/docs/stable/generated/torch.flatten.html", "category": "pytorch docs"} {"text": "\nend_dim (int) -- the last dim to flatten\n\nExample:\n >>> t = torch.tensor([[[1, 2],\n ... [3, 4]],\n ... [[5, 6],\n ... [7, 8]]])\n >>> torch.flatten(t)\n tensor([1, 2, 3, 4, 5, 6, 7, 8])\n >>> torch.flatten(t, start_dim=1)\n tensor([[1, 2, 3, 4],\n [5, 6, 7, 8]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.flatten.html", "category": "pytorch docs"} {"text": "torch.Tensor.indices\nTensor.indices() -> Tensor\nReturn the indices tensor of a sparse COO tensor.\nWarning:\n Throws an error if \"self\" is not a sparse COO tensor.\n\nSee also \"Tensor.values()\".\nNote:\n This method can only be called on a coalesced sparse tensor. 
See\n \"Tensor.coalesce()\" for details.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.indices.html", "category": "pytorch docs"} {"text": "torch.Tensor.erfc_\nTensor.erfc_() -> Tensor\nIn-place version of \"erfc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.erfc_.html", "category": "pytorch docs"} {"text": "torch.autograd.profiler.profile.self_cpu_time_total\nproperty profile.self_cpu_time_total\nReturns total time spent on CPU obtained as a sum of all self times\n across all the events.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.profiler.profile.self_cpu_time_total.html", "category": "pytorch docs"} {"text": "torch.nn.utils.clip_grad_norm_\ntorch.nn.utils.clip_grad_norm_(parameters, max_norm, norm_type=2.0, error_if_nonfinite=False, foreach=None)\nClips gradient norm of an iterable of parameters.\nThe norm is computed over all gradients together, as if they were\n concatenated into a single vector. Gradients are modified in-place.\nParameters:\n * parameters (Iterable[Tensor] or Tensor) -- an\n iterable of Tensors or a single Tensor that will have\n gradients normalized\n * **max_norm** (*float*) -- max norm of the gradients\n\n * **norm_type** (*float*) -- type of the used p-norm. Can be\n \"'inf'\" for infinity norm.\n\n * **error_if_nonfinite** (*bool*) -- if True, an error is thrown\n if the total norm of the gradients from \"parameters\" is \"nan\",\n \"inf\", or \"-inf\". Default: False (will switch to True in the\n future)\n\n * **foreach** (*bool*) -- use the faster foreach-based\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_norm_.html", "category": "pytorch docs"} {"text": "implementation. If \"None\", use the foreach implementation for\n CUDA and CPU tensors and silently fall back to the slow\n implementation for other device types. Default: \"None\"\nReturns:\n Total norm of the parameter gradients (viewed as a single\n vector).\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_norm_.html", "category": "pytorch docs"} {"text": "torch.Tensor.to_mkldnn\nTensor.to_mkldnn() -> Tensor\nReturns a copy of the tensor in \"torch.mkldnn\" layout.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_mkldnn.html", "category": "pytorch docs"} {"text": "torch.Tensor.erf_\nTensor.erf_() -> Tensor\nIn-place version of \"erf()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.erf_.html", "category": "pytorch docs"} {"text": "torch.Tensor.bool\nTensor.bool(memory_format=torch.preserve_format) -> Tensor\n\"self.bool()\" is equivalent to \"self.to(torch.bool)\". See \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bool.html", "category": "pytorch docs"} {"text": "torch.view_as_complex\ntorch.view_as_complex(input) -> Tensor\nReturns a view of \"input\" as a complex tensor. For an input complex\n tensor of \"size\" m1, m2, \\dots, mi, 2, this function returns a new\n complex tensor of \"size\" m1, m2, \\dots, mi where the last dimension\n of the input tensor is expected to represent the real and imaginary\n components of complex numbers.\nWarning:\n \"view_as_complex()\" is only supported for tensors with\n \"torch.dtype\" \"torch.float64\" and \"torch.float32\". 
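Returning briefly to the torch.nn.utils.clip_grad_norm_() entry above, a minimal usage sketch (illustrative only; the model and optimizer here are arbitrary placeholders):

    import torch
    import torch.nn as nn

    model = nn.Linear(10, 2)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    loss = model(torch.randn(4, 10)).sum()
    loss.backward()

    # Clip the gradients in-place and inspect the total norm (computed before clipping)
    total_norm = nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    print(total_norm)
    opt.step()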
The input is\n expected to have the last dimension of \"size\" 2. In addition, the\n tensor must have a *stride* of 1 for its last dimension. The\n strides of all other dimensions must be even numbers.\n\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> x=torch.randn(4, 2)\n >>> x\n tensor([[ 1.6116, -0.5772],\n [-1.4606, -0.9120],\n [ 0.0786, -1.7497],\n [-0.6561, -1.6623]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.view_as_complex.html", "category": "pytorch docs"} {"text": "[-0.6561, -1.6623]])\n >>> torch.view_as_complex(x)\n tensor([(1.6116-0.5772j), (-1.4606-0.9120j), (0.0786-1.7497j), (-0.6561-1.6623j)])", "source": "https://pytorch.org/docs/stable/generated/torch.view_as_complex.html", "category": "pytorch docs"} {"text": "torch.nn.functional.pixel_shuffle\ntorch.nn.functional.pixel_shuffle(input, upscale_factor) -> Tensor\nRearranges elements in a tensor of shape (, C \\times r^2, H, W) to\n a tensor of shape (, C, H \\times r, W \\times r), where r is the\n \"upscale_factor\".\nSee \"PixelShuffle\" for details.\nParameters:\n * input (Tensor) -- the input tensor\n * **upscale_factor** (*int*) -- factor to increase spatial\n resolution by\n\nExamples:\n >>> input = torch.randn(1, 9, 4, 4)\n >>> output = torch.nn.functional.pixel_shuffle(input, 3)\n >>> print(output.size())\n torch.Size([1, 1, 12, 12])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pixel_shuffle.html", "category": "pytorch docs"} {"text": "torch.Tensor.arcsinh_\nTensor.arcsinh_() -> Tensor\nIn-place version of \"arcsinh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arcsinh_.html", "category": "pytorch docs"} {"text": "torch.Tensor.less\nTensor.less()\nlt(other) -> Tensor\nSee \"torch.less()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.less.html", "category": "pytorch docs"} {"text": "ConvBnReLU3d\nclass torch.ao.nn.intrinsic.qat.ConvBnReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)\nA ConvBnReLU3d module is a module fused from Conv3d, BatchNorm3d\n and ReLU, attached with FakeQuantize modules for weight, used in\n quantization aware training.\nWe combined the interface of \"torch.nn.Conv3d\" and\n \"torch.nn.BatchNorm3d\" and \"torch.nn.ReLU\".\nSimilar to torch.nn.Conv3d, with FakeQuantize modules initialized\n to default.\nVariables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBnReLU3d.html", "category": "pytorch docs"} {"text": "torch.logical_xor\ntorch.logical_xor(input, other, *, out=None) -> Tensor\nComputes the element-wise logical XOR of the given input tensors.\n Zeros are treated as \"False\" and nonzeros are treated as \"True\".\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the tensor to compute XOR with\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.logical_xor(torch.tensor([True, False, True]), torch.tensor([True, False, False]))\n tensor([False, False, True])\n >>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)\n >>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)\n >>> torch.logical_xor(a, b)\n tensor([ True, True, False, False])\n >>> torch.logical_xor(a.double(), b.double())\n tensor([ True, True, False, False])\n >>> torch.logical_xor(a.double(), b)\n tensor([ True, True, False, 
False])\n", "source": "https://pytorch.org/docs/stable/generated/torch.logical_xor.html", "category": "pytorch docs"} {"text": "tensor([ True, True, False, False])\n >>> torch.logical_xor(a, b, out=torch.empty(4, dtype=torch.bool))\n tensor([ True, True, False, False])", "source": "https://pytorch.org/docs/stable/generated/torch.logical_xor.html", "category": "pytorch docs"} {"text": "SobolEngine\nclass torch.quasirandom.SobolEngine(dimension, scramble=False, seed=None)\nThe \"torch.quasirandom.SobolEngine\" is an engine for generating\n (scrambled) Sobol sequences. Sobol sequences are an example of low\n discrepancy quasi-random sequences.\nThis implementation of an engine for Sobol sequences is capable of\n sampling sequences up to a maximum dimension of 21201. It uses\n direction numbers from https://web.maths.unsw.edu.au/~fkuo/sobol/\n obtained using the search criterion D(6) up to the dimension 21201.\n This is the recommended choice by the authors.\n-[ References ]-\n\n\nArt B. Owen. Scrambling Sobol and Niederreiter-Xing points.\n Journal of Complexity, 14(4):466-489, December 1998.\n\n\nI. M. Sobol. The distribution of points in a cube and the\n accurate evaluation of integrals. Zh. Vychisl. Mat. i Mat. Phys.,\n 7:784-802, 1967.\n\n\nParameters:\n * dimension (Int) -- The dimensionality of the sequence to\n be drawn", "source": "https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html", "category": "pytorch docs"} {"text": "be drawn\n * **scramble** (*bool**, **optional*) -- Setting this to \"True\"\n will produce scrambled Sobol sequences. Scrambling is capable\n of producing better Sobol sequences. Default: \"False\".\n\n * **seed** (*Int**, **optional*) -- This is the seed for the\n scrambling. The seed of the random number generator is set to\n this, if specified. Otherwise, it uses a random seed. Default:\n \"None\"\n\nExamples:\n >>> soboleng = torch.quasirandom.SobolEngine(dimension=5)\n >>> soboleng.draw(3)\n tensor([[0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [0.5000, 0.5000, 0.5000, 0.5000, 0.5000],\n [0.7500, 0.2500, 0.2500, 0.2500, 0.7500]])\n\ndraw(n=1, out=None, dtype=torch.float32)\n Function to draw a sequence of \"n\" points from a Sobol sequence.\n Note that the samples are dependent on the previous samples. The\n size of the result is (n, dimension).\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html", "category": "pytorch docs"} {"text": "Parameters:\n * n (Int, optional) -- The length of sequence of\n points to draw. Default: 1\n * **out** (*Tensor**, **optional*) -- The output tensor\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data\n type of the returned tensor. Default: \"torch.float32\"\n\n Return type:\n *Tensor*\n\ndraw_base2(m, out=None, dtype=torch.float32)\n Function to draw a sequence of \"2**m\" points from a Sobol\n sequence. Note that the samples are dependent on the previous\n samples. The size of the result is (2**m, dimension).\n\n Parameters:\n * **m** (*Int*) -- The (base2) exponent of the number of\n points to draw.\n\n * **out** (*Tensor**, **optional*) -- The output tensor\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data\n type of the returned tensor. 
Default: \"torch.float32\"\n\n Return type:\n *Tensor*\n\nfast_forward(n)", "source": "https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html", "category": "pytorch docs"} {"text": "Tensor\nfast_forward(n)\n Function to fast-forward the state of the \"SobolEngine\" by \"n\"\n steps. This is equivalent to drawing \"n\" samples without using\n the samples.\n\n Parameters:\n **n** (*Int*) -- The number of steps to fast-forward by.\n\nreset()\n Function to reset the \"SobolEngine\" to base state.\n", "source": "https://pytorch.org/docs/stable/generated/torch.quasirandom.SobolEngine.html", "category": "pytorch docs"} {"text": "PackedSequence\nclass torch.nn.utils.rnn.PackedSequence(data, batch_sizes=None, sorted_indices=None, unsorted_indices=None)\nHolds the data and list of \"batch_sizes\" of a packed sequence.\nAll RNN modules accept packed sequences as inputs.\nNote:\n Instances of this class should never be created manually. They\n are meant to be instantiated by functions like\n \"pack_padded_sequence()\".Batch sizes represent the number\n elements at each sequence step in the batch, not the varying\n sequence lengths passed to \"pack_padded_sequence()\". For\n instance, given data \"abc\" and \"x\" the \"PackedSequence\" would\n contain data \"axbc\" with \"batch_sizes=[2,1,1]\".\n\nVariables:\n * data (Tensor) -- Tensor containing packed sequence\n * **batch_sizes** (*Tensor*) -- Tensor of integers holding\n information about the batch size at each sequence step\n\n * **sorted_indices** (*Tensor**, **optional*) -- Tensor of\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html", "category": "pytorch docs"} {"text": "integers holding how this \"PackedSequence\" is constructed from\n sequences.\n * **unsorted_indices** (*Tensor**, **optional*) -- Tensor of\n integers holding how this to recover the original sequences\n with correct order.\n\nNote:\n \"data\" can be on arbitrary device and of arbitrary dtype.\n \"sorted_indices\" and \"unsorted_indices\" must be \"torch.int64\"\n tensors on the same device as \"data\".However, \"batch_sizes\"\n should always be a CPU \"torch.int64\" tensor.This invariant is\n maintained throughout \"PackedSequence\" class, and all functions\n that construct a *:class:PackedSequence* in PyTorch (i.e., they\n only pass in tensors conforming to this constraint).\n\nbatch_sizes: Tensor\n Alias for field number 1\n\ncount(value, /)\n Return number of occurrences of value.\n\ndata: Tensor\n Alias for field number 0\n\nindex(value, start=0, stop=9223372036854775807, /)\n Return first index of value.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html", "category": "pytorch docs"} {"text": "Return first index of value.\n Raises ValueError if the value is not present.\n\nproperty is_cuda\n Returns true if *self.data* stored on a gpu\n\nis_pinned()\n Returns true if *self.data* stored on in pinned memory\n\nsorted_indices: Optional[Tensor]\n Alias for field number 2\n\nto(args, *kwargs)\n Performs dtype and/or device conversion on *self.data*.\n\n It has similar signature as \"torch.Tensor.to()\", except optional\n arguments like *non_blocking* and *copy* should be passed as\n kwargs, not args, or they will not apply to the index tensors.\n\n Note:\n\n If the \"self.data\" Tensor already has the correct\n \"torch.dtype\" and \"torch.device\", then \"self\" is returned.\n Otherwise, returns a copy with the desired configuration.\n\nunsorted_indices: 
Optional[Tensor]\n Alias for field number 3\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.PackedSequence.html", "category": "pytorch docs"} {"text": "Softplus\nclass torch.nn.Softplus(beta=1, threshold=20)\nApplies the Softplus function \\text{Softplus}(x) = \\frac{1}{\\beta}\n * \\log(1 + \\exp(\\beta * x)) element-wise.\nSoftPlus is a smooth approximation to the ReLU function and can be\n used to constrain the output of a machine to always be positive.\nFor numerical stability the implementation reverts to the linear\n function when input \\times \\beta > threshold.\nParameters:\n * beta (int) -- the \\beta value for the Softplus\n formulation. Default: 1\n * **threshold** (*int*) -- values above this revert to a linear\n function. Default: 20\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.Softplus()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softplus.html", "category": "pytorch docs"} {"text": "torch.logical_or\ntorch.logical_or(input, other, *, out=None) -> Tensor\nComputes the element-wise logical OR of the given input tensors.\n Zeros are treated as \"False\" and nonzeros are treated as \"True\".\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the tensor to compute OR with\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.logical_or(torch.tensor([True, False, True]), torch.tensor([True, False, False]))\n tensor([ True, False, True])\n >>> a = torch.tensor([0, 1, 10, 0], dtype=torch.int8)\n >>> b = torch.tensor([4, 0, 1, 0], dtype=torch.int8)\n >>> torch.logical_or(a, b)\n tensor([ True, True, True, False])\n >>> torch.logical_or(a.double(), b.double())\n tensor([ True, True, True, False])\n >>> torch.logical_or(a.double(), b)\n tensor([ True, True, True, False])\n", "source": "https://pytorch.org/docs/stable/generated/torch.logical_or.html", "category": "pytorch docs"} {"text": "tensor([ True, True, True, False])\n >>> torch.logical_or(a, b, out=torch.empty(4, dtype=torch.bool))\n tensor([ True, True, True, False])", "source": "https://pytorch.org/docs/stable/generated/torch.logical_or.html", "category": "pytorch docs"} {"text": "threshold\nclass torch.ao.nn.quantized.functional.threshold(input, threshold, value)\nApplies the quantized version of the threshold function element-\n wise:\n x = \\begin{cases} x & \\text{if~} x > \\text{threshold} \\\\\n \\text{value} & \\text{otherwise} \\end{cases}\n\nSee \"Threshold\" for more details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.threshold.html", "category": "pytorch docs"} {"text": "torch.Tensor.diagonal\nTensor.diagonal(offset=0, dim1=0, dim2=1) -> Tensor\nSee \"torch.diagonal()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.diagonal.html", "category": "pytorch docs"} {"text": "MarginRankingLoss\nclass torch.nn.MarginRankingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')\nCreates a criterion that measures the loss given inputs x1, x2, two\n 1D mini-batch or 0D Tensors, and a label 1D mini-batch or 0D\n Tensor y (containing 1 or -1).\nIf y = 1 then it assumed the first input should be ranked higher\n (have a larger value) than the second input, and vice-versa for y =\n -1.\nThe loss function for each pair of samples in the mini-batch is:\n \\text{loss}(x1, 
x2, y) = \\max(0, -y * (x1 - x2) + \\text{margin})\n\nParameters:\n * margin (float, optional) -- Has a default value of\n 0.\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MarginRankingLoss.html", "category": "pytorch docs"} {"text": "is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n\nShape:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MarginRankingLoss.html", "category": "pytorch docs"} {"text": "Shape:\n * Input1: (N) or () where N is the batch size.\n * Input2: (N) or (), same shape as the Input1.\n\n * Target: (N) or (), same shape as the inputs.\n\n * Output: scalar. If \"reduction\" is \"'none'\" and Input size is\n not (), then (N).\n\nExamples:\n >>> loss = nn.MarginRankingLoss()\n >>> input1 = torch.randn(3, requires_grad=True)\n >>> input2 = torch.randn(3, requires_grad=True)\n >>> target = torch.randn(3).sign()\n >>> output = loss(input1, input2, target)\n >>> output.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MarginRankingLoss.html", "category": "pytorch docs"} {"text": "torch.Tensor.cumprod\nTensor.cumprod(dim, dtype=None) -> Tensor\nSee \"torch.cumprod()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cumprod.html", "category": "pytorch docs"} {"text": "LocalResponseNorm\nclass torch.nn.LocalResponseNorm(size, alpha=0.0001, beta=0.75, k=1.0)\nApplies local response normalization over an input signal composed\n of several input planes, where channels occupy the second\n dimension. Applies normalization across channels.\n b_{c} = a_{c}\\left(k + \\frac{\\alpha}{n} \\sum_{c'=\\max(0,\n c-n/2)}^{\\min(N-1,c+n/2)}a_{c'}^2\\right)^{-\\beta}\n\nParameters:\n * size (int) -- amount of neighbouring channels used for\n normalization\n * **alpha** (*float*) -- multiplicative factor. Default: 0.0001\n\n * **beta** (*float*) -- exponent. Default: 0.75\n\n * **k** (*float*) -- additive factor. 
Default: 1\n\nShape:\n * Input: (N, C, *)\n * Output: (N, C, *) (same shape as input)\n\nExamples:\n >>> lrn = nn.LocalResponseNorm(2)\n >>> signal_2d = torch.randn(32, 5, 24, 24)\n >>> signal_4d = torch.randn(16, 5, 7, 7, 7, 7)\n >>> output_2d = lrn(signal_2d)\n >>> output_4d = lrn(signal_4d)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LocalResponseNorm.html", "category": "pytorch docs"} {"text": "torch.jit.trace_module\ntorch.jit.trace_module(mod, inputs, optimize=None, check_trace=True, check_inputs=None, check_tolerance=1e-05, strict=True, _force_outplace=False, _module_class=None, _compilation_unit=, example_inputs_is_kwarg=False)\nTrace a module and return an executable \"ScriptModule\" that will be\n optimized using just-in-time compilation. When a module is passed\n to \"torch.jit.trace\", only the \"forward\" method is run and traced.\n With \"trace_module\", you can specify a dictionary of method names\n to example inputs to trace (see the \"inputs\") argument below.\nSee \"torch.jit.trace\" for more information on tracing.\nParameters:\n * mod (torch.nn.Module) -- A \"torch.nn.Module\" containing\n methods whose names are specified in \"inputs\". The given\n methods will be compiled as a part of a single ScriptModule.\n * **inputs** (*dict*) -- A dict containing sample inputs indexed\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html", "category": "pytorch docs"} {"text": "by method names in \"mod\". The inputs will be passed to methods\n whose names correspond to inputs' keys while tracing. \"{\n 'forward' : example_forward_input, 'method2':\n example_method2_input}\"\nKeyword Arguments:\n * check_trace (\"bool\", optional) -- Check if the same inputs\n run through traced code produce the same outputs. Default:\n \"True\". You might want to disable this if, for example, your\n network contains non- deterministic ops or if you are sure\n that the network is correct despite a checker failure.\n * **check_inputs** (*list of dicts**, **optional*) -- A list of\n dicts of input arguments that should be used to check the\n trace against what is expected. Each tuple is equivalent to a\n set of input arguments that would be specified in \"inputs\".\n For best results, pass in a set of checking inputs\n representative of the space of shapes and types of inputs you\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html", "category": "pytorch docs"} {"text": "expect the network to see. If not specified, the original\n \"inputs\" are used for checking\n * **check_tolerance** (*float**, **optional*) -- Floating-point\n comparison tolerance to use in the checker procedure. This can\n be used to relax the checker strictness in the event that\n results diverge numerically for a known reason, such as\n operator fusion.\n\n * **example_inputs_is_kwarg** (\"bool\", optional) -- This\n parameter indicate whether the example inputs is a pack pack\n of keyword arguments. Default: \"False\".\n\nReturns:\n A \"ScriptModule\" object with a single \"forward\" method\n containing the traced code. 
When \"func\" is a \"torch.nn.Module\",\n the returned \"ScriptModule\" will have the same set of sub-\n modules and parameters as \"func\".\nExample (tracing a module with multiple methods):\n import torch\n import torch.nn as nn\n\n class Net(nn.Module):\n def __init__(self):\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html", "category": "pytorch docs"} {"text": "def init(self):\n super(Net, self).init()\n self.conv = nn.Conv2d(1, 1, 3)\n def forward(self, x):\n return self.conv(x)\n\n def weighted_kernel_sum(self, weight):\n return weight * self.conv.weight\n\n\n n = Net()\n example_weight = torch.rand(1, 1, 3, 3)\n example_forward_input = torch.rand(1, 1, 3, 3)\n\n # Trace a specific method and construct `ScriptModule` with\n # a single `forward` method\n module = torch.jit.trace(n.forward, example_forward_input)\n\n # Trace a module (implicitly traces `forward`) and construct a\n # `ScriptModule` with a single `forward` method\n module = torch.jit.trace(n, example_forward_input)\n\n # Trace specific methods on a module (specified in `inputs`), constructs\n # a `ScriptModule` with `forward` and `weighted_kernel_sum` methods\n inputs = {'forward' : example_forward_input, 'weighted_kernel_sum' : example_weight}\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html", "category": "pytorch docs"} {"text": "module = torch.jit.trace_module(n, inputs)", "source": "https://pytorch.org/docs/stable/generated/torch.jit.trace_module.html", "category": "pytorch docs"} {"text": "ReplicationPad2d\nclass torch.nn.ReplicationPad2d(padding)\nPads the input tensor using replication of the input boundary.\nFor N-dimensional padding, use \"torch.nn.functional.pad()\".\nParameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 4-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom})\nShape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where\n\n H_{out} = H_{in} + \\text{padding\\_top} +\n \\text{padding\\_bottom}\n\n W_{out} = W_{in} + \\text{padding\\_left} +\n \\text{padding\\_right}\n\nExamples:\n >>> m = nn.ReplicationPad2d(2)\n >>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)\n >>> input\n tensor([[[[0., 1., 2.],\n [3., 4., 5.],\n [6., 7., 8.]]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad2d.html", "category": "pytorch docs"} {"text": "[6., 7., 8.]]]])\n >>> m(input)\n tensor([[[[0., 0., 0., 1., 2., 2., 2.],\n [0., 0., 0., 1., 2., 2., 2.],\n [0., 0., 0., 1., 2., 2., 2.],\n [3., 3., 3., 4., 5., 5., 5.],\n [6., 6., 6., 7., 8., 8., 8.],\n [6., 6., 6., 7., 8., 8., 8.],\n [6., 6., 6., 7., 8., 8., 8.]]]])\n >>> # using different paddings for different sides\n >>> m = nn.ReplicationPad2d((1, 1, 2, 0))\n >>> m(input)\n tensor([[[[0., 0., 1., 2., 2.],\n [0., 0., 1., 2., 2.],\n [0., 0., 1., 2., 2.],\n [3., 3., 4., 5., 5.],\n [6., 6., 7., 8., 8.]]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.to_dense\nTensor.to_dense() -> Tensor\nCreates a strided copy of \"self\" if \"self\" is not a strided tensor,\n otherwise returns \"self\".\nExample:\n >>> s = torch.sparse_coo_tensor(\n ... torch.tensor([[1, 1],\n ... [0, 2]]),\n ... torch.tensor([9, 10]),\n ... 
size=(3, 3))\n >>> s.to_dense()\n tensor([[ 0, 0, 0],\n [ 9, 0, 10],\n [ 0, 0, 0]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_dense.html", "category": "pytorch docs"} {"text": "Dropout\nclass torch.nn.Dropout(p=0.5, inplace=False)\nDuring training, randomly zeroes some of the elements of the input\n tensor with probability \"p\" using samples from a Bernoulli\n distribution. Each channel will be zeroed out independently on\n every forward call.\nThis has proven to be an effective technique for regularization and\n preventing the co-adaptation of neurons as described in the paper\n Improving neural networks by preventing co-adaptation of feature\n detectors .\nFurthermore, the outputs are scaled by a factor of \\frac{1}{1-p}\n during training. This means that during evaluation the module\n simply computes an identity function.\nParameters:\n * p (float) -- probability of an element to be zeroed.\n Default: 0.5\n * **inplace** (*bool*) -- If set to \"True\", will do this\n operation in-place. Default: \"False\"\n\nShape:\n * Input: (*). Input can be of any shape\n * Output: (*). Output is of the same shape as input\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html", "category": "pytorch docs"} {"text": "Examples:\n >>> m = nn.Dropout(p=0.2)\n >>> input = torch.randn(20, 16)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout.html", "category": "pytorch docs"} {"text": "avg_pool2d\nclass torch.ao.nn.quantized.functional.avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)\nApplies 2D average-pooling operation in kH \\times kW regions by\n step size sH \\times sW steps. The number of output features is\n equal to the number of input planes.\nNote:\n The input quantization parameters propagate to the output.\n\nSee \"AvgPool2d\" for details and output shape.\nParameters:\n * input -- quantized input tensor (\\text{minibatch} ,\n \\text{in_channels} , iH , iW)\n * **kernel_size** -- size of the pooling region. Can be a single\n number or a tuple *(kH, kW)*\n\n * **stride** -- stride of the pooling operation. Can be a single\n number or a tuple *(sH, sW)*. Default: \"kernel_size\"\n\n * **padding** -- implicit zero paddings on both sides of the\n input. Can be a single number or a tuple *(padH, padW)*.\n Default: 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.avg_pool2d.html", "category": "pytorch docs"} {"text": "Default: 0\n * **ceil_mode** -- when True, will use *ceil* instead of *floor*\n in the formula to compute the output shape. Default: \"False\"\n\n * **count_include_pad** -- when True, will include the zero-\n padding in the averaging calculation. 
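A short illustrative sketch of the quantized avg_pool2d above (assumes a CPU quantization backend such as fbgemm or qnnpack is available, as in standard builds):

    import torch
    from torch.ao.nn.quantized import functional as qF

    # Quantize a float NCHW tensor, then pool; the output keeps the input's
    # quantization parameters, as noted above.
    x_fp = torch.randn(1, 3, 8, 8)
    x_q = torch.quantize_per_tensor(x_fp, scale=0.1, zero_point=128, dtype=torch.quint8)

    y_q = qF.avg_pool2d(x_q, kernel_size=2, stride=2)
    print(y_q.shape)                            # torch.Size([1, 3, 4, 4])
    print(y_q.q_scale(), y_q.q_zero_point())    # 0.1 128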
Default: \"True\"\n\n * **divisor_override** -- if specified, it will be used as\n divisor, otherwise size of the pooling region will be used.\n Default: None\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.avg_pool2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.ne\nTensor.ne(other) -> Tensor\nSee \"torch.ne()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ne.html", "category": "pytorch docs"} {"text": "torch.foreach_asin\ntorch.foreach_asin(self: List[Tensor]) -> None\nApply \"torch.asin()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_asin_.html", "category": "pytorch docs"} {"text": "Linear\nclass torch.ao.nn.quantized.dynamic.Linear(in_features, out_features, bias_=True, dtype=torch.qint8)\nA dynamic quantized linear module with floating point tensor as\n inputs and outputs. We adopt the same interface as\n torch.nn.Linear, please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for\n documentation.\nSimilar to \"torch.nn.Linear\", attributes will be randomly\n initialized at module creation time and will be overwritten later\nVariables:\n * weight (Tensor) -- the non-learnable quantized weights\n of the module which are of shape (\\text{out_features},\n \\text{in_features}).\n * **bias** (*Tensor*) -- the non-learnable floating point bias\n of the module of shape (\\text{out\\_features}). If \"bias\" is\n \"True\", the values are initialized to zero.\n\nExamples:\n >>> m = nn.quantized.dynamic.Linear(20, 30)\n >>> input = torch.randn(128, 20)\n >>> output = m(input)\n >>> print(output.size())\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.Linear.html", "category": "pytorch docs"} {"text": "\n\n\nprint(output.size())\n torch.Size([128, 30])\n\n\n\nclassmethod from_float(mod)\n Create a dynamic quantized module from a float module or\n qparams_dict\n\n Parameters:\n **mod** (*Module*) -- a float module, either produced by\n torch.ao.quantization utilities or provided by the user\n\nclassmethod from_reference(ref_qlinear)\n Create a (fbgemm/qnnpack) dynamic quantized module from a\n reference quantized module :param ref_qlinear: a reference\n quantized module, either produced by :type ref_qlinear: Module\n :param torch.ao.quantization functions or provided by the user:\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.Linear.html", "category": "pytorch docs"} {"text": "LazyLinear\nclass torch.nn.LazyLinear(out_features, bias=True, device=None, dtype=None)\nA \"torch.nn.Linear\" module where in_features is inferred.\nIn this module, the weight and bias are of\n \"torch.nn.UninitializedParameter\" class. They will be initialized\n after the first call to \"forward\" is done and the module will\n become a regular \"torch.nn.Linear\" module. The \"in_features\"\n argument of the \"Linear\" is inferred from the \"input.shape[-1]\".\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * out_features (int) -- size of each output sample\n * **bias** (*UninitializedParameter*) -- If set to \"False\", the\n layer will not learn an additive bias. Default: \"True\"\n\nVariables:\n * weight (torch.nn.parameter.UninitializedParameter) --\n the learnable weights of the module of shape\n (\\text{out_features}, \\text{in_features}). 
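As an illustrative sketch of LazyLinear (not from the original page), the first forward pass materializes the parameters and fixes in_features:

    import torch
    import torch.nn as nn

    layer = nn.LazyLinear(out_features=8)   # in_features not known yet
    x = torch.randn(4, 20)
    y = layer(x)                            # first call infers in_features=20

    print(layer.weight.shape)   # torch.Size([8, 20])
    print(y.shape)              # torch.Size([4, 8])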
The values are", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html", "category": "pytorch docs"} {"text": "initialized from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}), where k =\n \\frac{1}{\\text{in_features}}\n * **bias** (*torch.nn.parameter.UninitializedParameter*) -- the\n learnable bias of the module of shape (\\text{out\\_features}).\n If \"bias\" is \"True\", the values are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{in\\_features}}\n\ncls_to_become\n alias of \"Linear\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyLinear.html", "category": "pytorch docs"} {"text": "torch.nn.functional.triplet_margin_with_distance_loss\ntorch.nn.functional.triplet_margin_with_distance_loss(anchor, positive, negative, *, distance_function=None, margin=1.0, swap=False, reduction='mean')\nSee \"TripletMarginWithDistanceLoss\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.triplet_margin_with_distance_loss.html", "category": "pytorch docs"} {"text": "MultiheadAttention\nclass torch.nn.quantizable.MultiheadAttention(embed_dim, num_heads, dropout=0.0, bias=True, add_bias_kv=False, add_zero_attn=False, kdim=None, vdim=None, batch_first=False, device=None, dtype=None)\ndequantize()\n Utility to convert the quantized MHA back to float.\n\n The motivation for this is that it is not trivial to conver the\n weights from the format that is used in the quantized version\n back to the float.\n\nforward(query, key, value, key_padding_mask=None, need_weights=True, attn_mask=None, average_attn_weights=True, is_causal=False)\n Note::\n Please, refer to \"forward()\" for more information\n\n Parameters:\n * **query** (*Tensor*) -- map a query and a set of key-value\n pairs to an output. See \"Attention Is All You Need\" for\n more details.\n\n * **key** (*Tensor*) -- map a query and a set of key-value\n pairs to an output. See \"Attention Is All You Need\" for\n more details.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.quantizable.MultiheadAttention.html", "category": "pytorch docs"} {"text": "more details.\n * **value** (*Tensor*) -- map a query and a set of key-value\n pairs to an output. See \"Attention Is All You Need\" for\n more details.\n\n * **key_padding_mask** (*Optional**[**Tensor**]*) -- if\n provided, specified padding elements in the key will be\n ignored by the attention. When given a binary mask and a\n value is True, the corresponding value on the attention\n layer will be ignored. When given a byte mask and a value\n is non-zero, the corresponding value on the attention layer\n will be ignored\n\n * **need_weights** (*bool*) -- output attn_output_weights.\n\n * **attn_mask** (*Optional**[**Tensor**]*) -- 2D or 3D mask\n that prevents attention to certain positions. A 2D mask\n will be broadcasted for all the batches while a 3D mask\n allows to specify a different mask for the entries of each\n batch.\n\n Return type:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.quantizable.MultiheadAttention.html", "category": "pytorch docs"} {"text": "batch.\n Return type:\n *Tuple*[*Tensor*, *Optional*[*Tensor*]]\n\n Shape:\n * Inputs:\n\n * query: (L, N, E) where L is the target sequence length, N\n is the batch size, E is the embedding dimension. (N, L, E)\n if \"batch_first\" is \"True\".\n\n * key: (S, N, E), where S is the source sequence length, N is\n the batch size, E is the embedding dimension. 
(N, S, E) if\n \"batch_first\" is \"True\".\n\n * value: (S, N, E) where S is the source sequence length, N\n is the batch size, E is the embedding dimension. (N, S, E)\n if \"batch_first\" is \"True\".\n\n * key_padding_mask: (N, S) where N is the batch size, S is\n the source sequence length. If a ByteTensor is provided,\n the non-zero positions will be ignored while the position\n with the zero positions will be unchanged. If a BoolTensor\n is provided, the positions with the value of \"True\" will be\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.quantizable.MultiheadAttention.html", "category": "pytorch docs"} {"text": "ignored while the position with the value of \"False\" will\n be unchanged.\n * attn_mask: 2D mask (L, S) where L is the target sequence\n length, S is the source sequence length. 3D mask\n (N*num_heads, L, S) where N is the batch size, L is the\n target sequence length, S is the source sequence length.\n attn_mask ensure that position i is allowed to attend the\n unmasked positions. If a ByteTensor is provided, the non-\n zero positions are not allowed to attend while the zero\n positions will be unchanged. If a BoolTensor is provided,\n positions with \"True\" is not allowed to attend while\n \"False\" values will be unchanged. If a FloatTensor is\n provided, it will be added to the attention weight.\n\n * is_causal: If specified, applies a causal mask as attention\n mask. Mutually exclusive with providing attn_mask. Default:\n \"False\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.quantizable.MultiheadAttention.html", "category": "pytorch docs"} {"text": "\"False\".\n * average_attn_weights: If true, indicates that the returned\n \"attn_weights\" should be averaged across heads. Otherwise,\n \"attn_weights\" are provided separately per head. Note that\n this flag only has an effect when \"need_weights=True.\".\n Default: True (i.e. average weights across heads)\n\n * Outputs:\n\n * attn_output: (L, N, E) where L is the target sequence\n length, N is the batch size, E is the embedding dimension.\n (N, L, E) if \"batch_first\" is \"True\".\n\n * attn_output_weights: If \"average_attn_weights=True\",\n returns attention weights averaged across heads of shape\n (N, L, S), where N is the batch size, L is the target\n sequence length, S is the source sequence length. 
If\n \"average_attn_weights=False\", returns attention weights per\n head of shape (N, num_heads, L, S).\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.quantizable.MultiheadAttention.html", "category": "pytorch docs"} {"text": "torch.ge\ntorch.ge(input, other, *, out=None) -> Tensor\nComputes \\text{input} \\geq \\text{other} element-wise.\nThe second argument can be a number or a tensor whose shape is\n broadcastable with the first argument.\nParameters:\n * input (Tensor) -- the tensor to compare\n * **other** (*Tensor** or **float*) -- the tensor or value to\n compare\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nReturns:\n A boolean tensor that is True where \"input\" is greater than or\n equal to \"other\" and False elsewhere\nExample:\n >>> torch.ge(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\n tensor([[True, True], [False, True]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.ge.html", "category": "pytorch docs"} {"text": "torch.Tensor.map_\nTensor.map_(tensor, callable)\nApplies \"callable\" for each element in \"self\" tensor and the given\n \"tensor\" and stores the results in \"self\" tensor. \"self\" tensor and\n the given \"tensor\" must be broadcastable.\nThe \"callable\" should have the signature:\n def callable(a, b) -> number\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.map_.html", "category": "pytorch docs"} {"text": "Conv2d\nclass torch.ao.nn.qat.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None, device=None, dtype=None)\nA Conv2d module attached with FakeQuantize modules for weight, used\n for quantization aware training.\nWe adopt the same interface as torch.nn.Conv2d, please see https\n ://pytorch.org/docs/stable/nn.html?highlight=conv2d#torch.nn.Conv2d\n for documentation.\nSimilar to torch.nn.Conv2d, with FakeQuantize modules initialized\n to default.\nVariables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.qat.Conv2d.html", "category": "pytorch docs"} {"text": "linear\nclass torch.ao.nn.quantized.functional.linear(input, weight, bias=None, scale=None, zero_point=None)\nApplies a linear transformation to the incoming quantized data: y =\n xA^T + b. See \"Linear\"\nNote:\n Current implementation packs weights on every call, which has\n penalty on performance. If you want to avoid the overhead, use\n \"Linear\".\n\nParameters:\n * input (Tensor) -- Quantized input of type torch.quint8\n * **weight** (*Tensor*) -- Quantized weight of type\n *torch.qint8*\n\n * **bias** (*Tensor*) -- None or fp32 bias of type *torch.float*\n\n * **scale** (*double*) -- output scale. If None, derived from\n the input scale\n\n * **zero_point** (*python:long*) -- output zero point. 
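A minimal sketch of the quantized functional linear above (illustrative; assumes a backend such as fbgemm or qnnpack that accepts quint8 activations with qint8 weights):

    import torch
    from torch.ao.nn.quantized import functional as qF

    x = torch.quantize_per_tensor(torch.randn(2, 3), scale=0.1, zero_point=128, dtype=torch.quint8)
    w = torch.quantize_per_tensor(torch.randn(4, 3), scale=0.05, zero_point=0, dtype=torch.qint8)

    # scale/zero_point omitted -> derived from the input's quantization parameters
    y = qF.linear(x, w)
    print(y.shape)   # torch.Size([2, 4])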
If None,\n derived from the input zero_point\n\nReturn type:\n Tensor\nShape:\n * Input: (N, *, in_features) where *** means any number of\n additional dimensions\n * Weight: (out\\_features, in\\_features)\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.linear.html", "category": "pytorch docs"} {"text": "\n\nWeight: (out_features, in_features)\n\n\nBias: (out_features)\n\n\nOutput: (N, *, out_features)\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.linear.html", "category": "pytorch docs"} {"text": "MovingAveragePerChannelMinMaxObserver\nclass torch.quantization.observer.MovingAveragePerChannelMinMaxObserver(averaging_constant=0.01, ch_axis=0, dtype=torch.quint8, qscheme=torch.per_channel_affine, reduce_range=False, quant_min=None, quant_max=None, eps=1.1920928955078125e-07, **kwargs)\nObserver module for computing the quantization parameters based on\n the running per channel min and max values.\nThis observer uses the tensor min/max statistics to compute the per\n channel quantization parameters. The module records the running\n minimum and maximum of incoming tensors, and uses this statistic to\n compute the quantization parameters.\nParameters:\n * averaging_constant -- Averaging constant for min/max.\n * **ch_axis** -- Channel axis\n\n * **dtype** -- Quantized data type\n\n * **qscheme** -- Quantization scheme to be used\n\n * **reduce_range** -- Reduces the range of the quantized data\n type by 1 bit\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MovingAveragePerChannelMinMaxObserver.html", "category": "pytorch docs"} {"text": "type by 1 bit\n * **quant_min** -- Minimum quantization value. If unspecified,\n it will follow the 8-bit setup.\n\n * **quant_max** -- Maximum quantization value. If unspecified,\n it will follow the 8-bit setup.\n\n * **eps** (*Tensor*) -- Epsilon value for float32, Defaults to\n *torch.finfo(torch.float32).eps*.\n\nThe quantization parameters are computed the same way as in\n \"MovingAverageMinMaxObserver\", with the difference that the running\n min/max values are stored per channel. 
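To illustrate the per-channel statistics described above, a small sketch (not part of the original page; uses the documented observer API):

    import torch
    from torch.quantization.observer import MovingAveragePerChannelMinMaxObserver

    obs = MovingAveragePerChannelMinMaxObserver(ch_axis=0)
    for _ in range(3):
        obs(torch.randn(4, 8))        # updates the running per-channel min/max

    scale, zero_point = obs.calculate_qparams()
    print(scale.shape, zero_point.shape)   # torch.Size([4]) torch.Size([4]) -- one pair per channel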
Scales and zero points are\n thus computed per channel as well.\nNote:\n If the running minimum equals to the running maximum, the scales\n and zero_points are set to 1.0 and 0.\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MovingAveragePerChannelMinMaxObserver.html", "category": "pytorch docs"} {"text": "torch.nn.functional.hardswish\ntorch.nn.functional.hardswish(input, inplace=False)\nApplies the hardswish function, element-wise, as described in the\n paper:\nSearching for MobileNetV3.\n \\text{Hardswish}(x) = \\begin{cases} 0 & \\text{if~} x \\le -3,\n \\\\ x & \\text{if~} x \\ge +3, \\\\ x \\cdot (x + 3) /6 &\n \\text{otherwise} \\end{cases}\n\nSee \"Hardswish\" for more details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.hardswish.html", "category": "pytorch docs"} {"text": "torch.quantized_batch_norm\ntorch.quantized_batch_norm(input, weight=None, bias=None, mean, var, eps, output_scale, output_zero_point) -> Tensor\nApplies batch normalization on a 4D (NCHW) quantized tensor.\n y = \\frac{x - \\mathrm{E}[x]}{\\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n\nParameters:\n * input (Tensor) -- quantized tensor\n * **weight** (*Tensor*) -- float tensor that corresponds to the\n gamma, size C\n\n * **bias** (*Tensor*) -- float tensor that corresponds to the\n beta, size C\n\n * **mean** (*Tensor*) -- float mean value in batch\n normalization, size C\n\n * **var** (*Tensor*) -- float tensor for variance, size C\n\n * **eps** (*float*) -- a value added to the denominator for\n numerical stability.\n\n * **output_scale** (*float*) -- output quantized tensor scale\n\n * **output_zero_point** (*int*) -- output quantized tensor\n zero_point\n\nReturns:", "source": "https://pytorch.org/docs/stable/generated/torch.quantized_batch_norm.html", "category": "pytorch docs"} {"text": "zero_point\nReturns:\n A quantized tensor with batch normalization applied.\nReturn type:\n Tensor\nExample:\n >>> qx = torch.quantize_per_tensor(torch.rand(2, 2, 2, 2), 1.5, 3, torch.quint8)\n >>> torch.quantized_batch_norm(qx, torch.ones(2), torch.zeros(2), torch.rand(2), torch.rand(2), 0.00001, 0.2, 2)\n tensor([[[[-0.2000, -0.2000],\n [ 1.6000, -0.2000]],\n\n [[-0.4000, -0.4000],\n [-0.4000, 0.6000]]],\n\n\n [[[-0.2000, -0.2000],\n [-0.2000, -0.2000]],\n\n [[ 0.6000, -0.4000],\n [ 0.6000, -0.4000]]]], size=(2, 2, 2, 2), dtype=torch.quint8,\n quantization_scheme=torch.per_tensor_affine, scale=0.2, zero_point=2)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantized_batch_norm.html", "category": "pytorch docs"} {"text": "torch.linalg.cholesky_ex\ntorch.linalg.cholesky_ex(A, *, upper=False, check_errors=False, out=None)\nComputes the Cholesky decomposition of a complex Hermitian or real\n symmetric positive-definite matrix.\nThis function skips the (slow) error checking and error message\n construction of \"torch.linalg.cholesky()\", instead directly\n returning the LAPACK error codes as part of a named tuple \"(L,\n info)\". This makes this function a faster way to check if a matrix\n is positive-definite, and it provides an opportunity to handle\n decomposition errors more gracefully or performantly than\n \"torch.linalg.cholesky()\" does.\nSupports input of float, double, cfloat and cdouble dtypes. 
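As a quick numeric check for the torch.nn.functional.hardswish() entry above (illustrative sketch):

    import torch
    import torch.nn.functional as F

    x = torch.tensor([-4.0, -1.0, 0.0, 1.0, 4.0])
    print(F.hardswish(x))   # -> approximately [0.0000, -0.3333, 0.0000, 0.6667, 4.0000]
    # x <= -3 -> 0;  x >= 3 -> x;  otherwise x * (x + 3) / 6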
Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nIf \"A\" is not a Hermitian positive-definite matrix, or if it's a\n batch of matrices and one or more of them is not a Hermitian", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky_ex.html", "category": "pytorch docs"} {"text": "positive-definite matrix, then \"info\" stores a positive integer for\n the corresponding matrix. The positive integer indicates the order\n of the leading minor that is not positive-definite, and the\n decomposition could not be completed. \"info\" filled with zeros\n indicates that the decomposition was successful. If\n \"check_errors=True\" and \"info\" contains positive integers, then a\n RuntimeError is thrown.\nNote:\n When the inputs are on a CUDA device, this function synchronizes\n only when \"check_errors\"*= True*.\n\nWarning:\n This function is \"experimental\" and it may change in a future\n PyTorch release.\n\nSee also:\n \"torch.linalg.cholesky()\" is a NumPy compatible variant that\n always checks for errors.\n\nParameters:\n A (Tensor) -- the Hermitian n times n matrix or the\n batch of such matrices of size (, n, n)* where *** is one or\n more batch dimensions.\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky_ex.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n * upper (bool, optional) -- whether to return an upper\n triangular matrix. The tensor returned with upper=True is the\n conjugate transpose of the tensor returned with upper=False.\n * **check_errors** (*bool**, **optional*) -- controls whether to\n check the content of \"infos\". Default: *False*.\n\n * **out** (*tuple**, **optional*) -- tuple of two tensors to\n write the output to. Ignored if *None*. Default: *None*.\n\nExamples:\n >>> A = torch.randn(2, 2, dtype=torch.complex128)\n >>> A = A @ A.t().conj() # creates a Hermitian positive-definite matrix\n >>> L, info = torch.linalg.cholesky_ex(A)\n >>> A\n tensor([[ 2.3792+0.0000j, -0.9023+0.9831j],\n [-0.9023-0.9831j, 0.8757+0.0000j]], dtype=torch.complex128)\n >>> L\n tensor([[ 1.5425+0.0000j, 0.0000+0.0000j],\n [-0.5850-0.6374j, 0.3567+0.0000j]], dtype=torch.complex128)\n >>> info\n tensor(0, dtype=torch.int32)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky_ex.html", "category": "pytorch docs"} {"text": "default_observer\ntorch.quantization.observer.default_observer\nalias of functools.partial(, quant_min=0,\n quant_max=127){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_observer.html", "category": "pytorch docs"} {"text": "torch.Tensor.arctanh_\nTensor.arctanh_(other) -> Tensor\nIn-place version of \"arctanh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arctanh_.html", "category": "pytorch docs"} {"text": "torch.Tensor.neg_\nTensor.neg_() -> Tensor\nIn-place version of \"neg()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.neg_.html", "category": "pytorch docs"} {"text": "torch.Tensor.sparse_mask\nTensor.sparse_mask(mask) -> Tensor\nReturns a new sparse tensor with values from a strided tensor\n \"self\" filtered by the indices of the sparse tensor \"mask\". The\n values of \"mask\" sparse tensor are ignored. \"self\" and \"mask\"\n tensors must have the same shape.\nNote:\n The returned sparse tensor might contain duplicate values if\n \"mask\" is not coalesced. 
It is therefore advisable to pass\n \"mask.coalesce()\" if such behavior is not desired.\n\nNote:\n The returned sparse tensor has the same indices as the sparse\n tensor \"mask\", even when the corresponding values in \"self\" are\n zeros.\n\nParameters:\n mask (Tensor) -- a sparse tensor whose indices are used as\n a filter\nExample:\n >>> nse = 5\n >>> dims = (5, 5, 2, 2)\n >>> I = torch.cat([torch.randint(0, dims[0], size=(nse,)),\n ... torch.randint(0, dims[1], size=(nse,))], 0).reshape(2, nse)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_mask.html", "category": "pytorch docs"} {"text": "\n\n\nV = torch.randn(nse, dims[2], dims[3])\n >>> S = torch.sparse_coo_tensor(I, V, dims).coalesce()\n >>> D = torch.randn(dims)\n >>> D.sparse_mask(S)\n tensor(indices=tensor([[0, 0, 0, 2],\n [0, 1, 4, 3]]),\n values=tensor([[[ 1.6550, 0.2397],\n [-0.1611, -0.0779]],\n\n\n\n [[ 0.2326, -1.0558],\n [ 1.4711, 1.9678]],\n\n [[-0.5138, -0.0411],\n [ 1.9417, 0.5158]],\n\n [[ 0.0793, 0.0036],\n [-0.2569, -0.1055]]]),\n size=(5, 5, 2, 2), nnz=4, layout=torch.sparse_coo)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_mask.html", "category": "pytorch docs"} {"text": "torch.cuda.max_memory_cached\ntorch.cuda.max_memory_cached(device=None)\nDeprecated; see \"max_memory_reserved()\".\nReturn type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_cached.html", "category": "pytorch docs"} {"text": "KLDivLoss\nclass torch.nn.KLDivLoss(size_average=None, reduce=None, reduction='mean', log_target=False)\nThe Kullback-Leibler divergence loss.\nFor tensors of the same shape y_{\\text{pred}},\\ y_{\\text{true}},\n where y_{\\text{pred}} is the \"input\" and y_{\\text{true}} is the\n \"target\", we define the pointwise KL-divergence as\n L(y_{\\text{pred}},\\ y_{\\text{true}}) = y_{\\text{true}} \\cdot\n \\log \\frac{y_{\\text{true}}}{y_{\\text{pred}}} =\n y_{\\text{true}} \\cdot (\\log y_{\\text{true}} - \\log\n y_{\\text{pred}})\n\nTo avoid underflow issues when computing this quantity, this loss\n expects the argument \"input\" in the log-space. The argument\n \"target\" may also be provided in the log-space if \"log_target\"=\n True.\nTo summarise, this function is roughly equivalent to computing\n if not log_target: # default\n loss_pointwise = target * (target.log() - input)\n else:\n loss_pointwise = target.exp() * (target - input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html", "category": "pytorch docs"} {"text": "and then reducing this result depending on the argument \"reduction\"\n as\n if reduction == \"mean\": # default\n loss = loss_pointwise.mean()\n elif reduction == \"batchmean\": # mathematically correct\n loss = loss_pointwise.sum() / input.size(0)\n elif reduction == \"sum\":\n loss = loss_pointwise.sum()\n else: # reduction == \"none\"\n loss = loss_pointwise\n\nNote:\n As all the other losses in PyTorch, this function expects the\n first argument, \"input\", to be the output of the model (e.g. the\n neural network) and the second, \"target\", to be the observations\n in the dataset. This differs from the standard mathematical\n notation KL(P\\ ||\\ Q) where P denotes the distribution of the\n observations and Q denotes the model.\n\nWarning:\n \"reduction\"*= \"mean\"* doesn't return the true KL divergence\n value, please use \"reduction\"*= \"batchmean\"* which aligns with\n the mathematical definition. 
In a future release, *\"mean\"* will\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html", "category": "pytorch docs"} {"text": "be changed to be the same as \"batchmean\".\nParameters:\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to False, the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is False. Default: True\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is *False*, returns a loss per\n batch element instead and ignores \"size_average\". Default:\n *True*\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output. Default: *\"mean\"*\n\n * **log_target** (*bool**, **optional*) -- Specifies whether\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html", "category": "pytorch docs"} {"text": "target is the log space. Default: False\nShape:\n * Input: (*), where * means any number of dimensions.\n * Target: (*), same shape as the input.\n\n * Output: scalar by default. If \"reduction\" is *'none'*, then\n (*), same shape as the input.\n\nExamples:\n >>> import torch.nn.functional as F\n >>> kl_loss = nn.KLDivLoss(reduction=\"batchmean\")\n >>> # input should be a distribution in the log space\n >>> input = F.log_softmax(torch.randn(3, 5, requires_grad=True), dim=1)\n >>> # Sample a batch of distributions. Usually this would come from the dataset\n >>> target = F.softmax(torch.rand(3, 5), dim=1)\n >>> output = kl_loss(input, target)\n\n >>> kl_loss = nn.KLDivLoss(reduction=\"batchmean\", log_target=True)\n >>> log_target = F.log_softmax(torch.rand(3, 5), dim=1)\n >>> output = kl_loss(input, log_target)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.KLDivLoss.html", "category": "pytorch docs"} {"text": "ChainedScheduler\nclass torch.optim.lr_scheduler.ChainedScheduler(schedulers)\nChains list of learning rate schedulers. It takes a list of\n chainable learning rate schedulers and performs consecutive step()\n functions belonging to them by just one call.\nParameters:\n schedulers (list) -- List of chained schedulers.\n-[ Example ]-\n\n\n\nAssuming optimizer uses lr = 1. for all groups\nlr = 0.09 if epoch == 0\nlr = 0.081 if epoch == 1\nlr = 0.729 if epoch == 2\nlr = 0.6561 if epoch == 3\nlr = 0.59049 if epoch >= 4\nscheduler1 = ConstantLR(self.opt, factor=0.1, total_iters=2)\nscheduler2 = ExponentialLR(self.opt, gamma=0.9)\nscheduler = ChainedScheduler([scheduler1, scheduler2])\nfor epoch in range(100):\n train(...)\n validate(...)\n scheduler.step()\n\n\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ChainedScheduler.html", "category": "pytorch docs"} {"text": "load_state_dict(state_dict)\n Loads the schedulers state.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. 
Should be an\n object returned from a call to \"state_dict()\".\n\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer. The wrapped scheduler states will also be\n saved.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ChainedScheduler.html", "category": "pytorch docs"} {"text": "torch.fmax\ntorch.fmax(input, other, *, out=None) -> Tensor\nComputes the element-wise maximum of \"input\" and \"other\".\nThis is like \"torch.maximum()\" except it handles NaNs differently:\n if exactly one of the two elements being compared is a NaN then the\n non-NaN element is taken as the maximum. Only if both elements are\n NaN is NaN propagated.\nThis function is a wrapper around C++'s \"std::fmax\" and is similar\n to NumPy's \"fmax\" function.\nSupports broadcasting to a common shape, type promotion, and\n integer and floating-point inputs.\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([9.7, float('nan'), 3.1, float('nan')])\n >>> b = torch.tensor([-2.2, 0.5, float('nan'), float('nan')])\n >>> torch.fmax(a, b)\n tensor([9.7000, 0.5000, 3.1000, nan])\n", "source": "https://pytorch.org/docs/stable/generated/torch.fmax.html", "category": "pytorch docs"} {"text": "get_observer_state_dict\nclass torch.quantization.observer.get_observer_state_dict(mod)\nReturns the state dict corresponding to the observer stats.\n Traverse the model state_dict and extract out the stats.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.get_observer_state_dict.html", "category": "pytorch docs"} {"text": "torch.acosh\ntorch.acosh(input, *, out=None) -> Tensor\nReturns a new tensor with the inverse hyperbolic cosine of the\n elements of \"input\".\n \\text{out}_{i} = \\cosh^{-1}(\\text{input}_{i})\n\nNote:\n The domain of the inverse hyperbolic cosine is *[1, inf)* and\n values outside this range will be mapped to \"NaN\", except for *+\n INF* for which the output is mapped to *+ INF*.\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4).uniform_(1, 2)\n >>> a\n tensor([ 1.3192, 1.9915, 1.9674, 1.7151 ])\n >>> torch.acosh(a)\n tensor([ 0.7791, 1.3120, 1.2979, 1.1341 ])\n", "source": "https://pytorch.org/docs/stable/generated/torch.acosh.html", "category": "pytorch docs"} {"text": "torch.div\ntorch.div(input, other, *, rounding_mode=None, out=None) -> Tensor\nDivides each element of the input \"input\" by the corresponding\n element of \"other\".\n \\text{out}_i = \\frac{\\text{input}_i}{\\text{other}_i}\n\nNote:\n By default, this performs a \"true\" division like Python 3. See\n the \"rounding_mode\" argument for floor division.\n\nSupports broadcasting to a common shape, type promotion, and\n integer, float, and complex inputs. Always promotes integer types\n to the default scalar type.\nParameters:\n * input (Tensor) -- the dividend\n * **other** (*Tensor** or **Number*) -- the divisor\n\nKeyword Arguments:\n * rounding_mode (str, optional) --\n Type of rounding applied to the result:\n\n * None - default behavior. 
Performs no rounding and, if both\n \"input\" and \"other\" are integer types, promotes the inputs\n to the default scalar type. Equivalent to true division in\n", "source": "https://pytorch.org/docs/stable/generated/torch.div.html", "category": "pytorch docs"} {"text": "Python (the \"/\" operator) and NumPy's \"np.true_divide\".\n * \"\"trunc\"\" - rounds the results of the division towards zero.\n Equivalent to C-style integer division.\n\n * \"\"floor\"\" - rounds the results of the division down.\n Equivalent to floor division in Python (the \"//\" operator)\n and NumPy's \"np.floor_divide\".\n\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExamples:\n >>> x = torch.tensor([ 0.3810, 1.2774, -0.2972, -0.3719, 0.4637])\n >>> torch.div(x, 0.5)\n tensor([ 0.7620, 2.5548, -0.5944, -0.7438, 0.9274])\n\n >>> a = torch.tensor([[-0.3711, -1.9353, -0.4605, -0.2917],\n ... [ 0.1815, -1.0111, 0.9805, -1.5923],\n ... [ 0.1062, 1.4581, 0.7759, -1.2344],\n ... [-0.1830, -0.0313, 1.1908, -1.4757]])\n >>> b = torch.tensor([ 0.8032, 0.2930, -0.8113, -0.2308])\n >>> torch.div(a, b)\n tensor([[-0.4620, -6.6051, 0.5676, 1.2639],\n", "source": "https://pytorch.org/docs/stable/generated/torch.div.html", "category": "pytorch docs"} {"text": "[ 0.2260, -3.4509, -1.2086, 6.8990],\n [ 0.1322, 4.9764, -0.9564, 5.3484],\n [-0.2278, -0.1068, -1.4678, 6.3938]])\n >>> torch.div(a, b, rounding_mode='trunc')\n tensor([[-0., -6., 0., 1.],\n [ 0., -3., -1., 6.],\n [ 0., 4., -0., 5.],\n [-0., -0., -1., 6.]])\n\n >>> torch.div(a, b, rounding_mode='floor')\n tensor([[-1., -7., 0., 1.],\n [ 0., -4., -2., 6.],\n [ 0., 4., -1., 5.],\n [-1., -1., -2., 6.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.div.html", "category": "pytorch docs"} {"text": "torch.Tensor.index_reduce_\nTensor.index_reduce_(dim, index, source, reduce, *, include_self=True) -> Tensor\nAccumulate the elements of \"source\" into the \"self\" tensor by\n accumulating to the indices in the order given in \"index\" using the\n reduction given by the \"reduce\" argument. For example, if \"dim ==\n 0\", \"index[i] == j\", \"reduce == prod\" and \"include_self == True\"\n then the \"i\"th row of \"source\" is multiplied by the \"j\"th row of\n \"self\". If \"include_self=\"True\"\", the values in the \"self\" tensor\n are included in the reduction, otherwise, rows in the \"self\" tensor\n that are accumulated to are treated as if they were filled with the\n reduction identites.\nThe \"dim\"th dimension of \"source\" must have the same size as the\n length of \"index\" (which must be a vector), and all other\n dimensions must match \"self\", or an error will be raised.\nFor a 3-D tensor with \"reduce=\"prod\"\" and \"include_self=True\" the\n output is given as:", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_reduce_.html", "category": "pytorch docs"} {"text": "output is given as:\n self[index[i], :, :] *= src[i, :, :] # if dim == 0\n self[:, index[i], :] *= src[:, i, :] # if dim == 1\n self[:, :, index[i]] *= src[:, :, i] # if dim == 2\n\nNote:\n This operation may behave nondeterministically when given tensors\n on a CUDA device. 
See Reproducibility for more information.\n\nNote:\n This function only supports floating point tensors.\n\nWarning:\n This function is in beta and may change in the near future.\n\nParameters:\n * dim (int) -- dimension along which to index\n * **index** (*Tensor*) -- indices of \"source\" to select from,\n should have dtype either *torch.int64* or *torch.int32*\n\n * **source** (*FloatTensor*) -- the tensor containing values to\n accumulate\n\n * **reduce** (*str*) -- the reduction operation to apply\n (\"\"prod\"\", \"\"mean\"\", \"\"amax\"\", \"\"amin\"\")\n\nKeyword Arguments:\n include_self (bool) -- whether the elements from the", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_reduce_.html", "category": "pytorch docs"} {"text": "\"self\" tensor are included in the reduction\nExample:\n >>> x = torch.empty(5, 3).fill_(2)\n >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]], dtype=torch.float)\n >>> index = torch.tensor([0, 4, 2, 0])\n >>> x.index_reduce_(0, index, t, 'prod')\n tensor([[20., 44., 72.],\n [ 2., 2., 2.],\n [14., 16., 18.],\n [ 2., 2., 2.],\n [ 8., 10., 12.]])\n >>> x = torch.empty(5, 3).fill_(2)\n >>> x.index_reduce_(0, index, t, 'prod', include_self=False)\n tensor([[10., 22., 36.],\n [ 2., 2., 2.],\n [ 7., 8., 9.],\n [ 2., 2., 2.],\n [ 4., 5., 6.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_reduce_.html", "category": "pytorch docs"} {"text": "torch.autograd.functional.vhp\ntorch.autograd.functional.vhp(func, inputs, v=None, create_graph=False, strict=False)\nFunction that computes the dot product between a vector \"v\" and the\n Hessian of a given scalar function at the point given by the\n inputs.\nParameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a Tensor with a single element.\n * **inputs** (*tuple of Tensors** or **Tensor*) -- inputs to the\n function \"func\".\n\n * **v** (*tuple of Tensors** or **Tensor*) -- The vector for\n which the vector Hessian product is computed. Must be the same\n size as the input of \"func\". This argument is optional when\n \"func\"'s input contains a single element and (if it is not\n provided) will be set as a Tensor containing a single \"1\".\n\n * **create_graph** (*bool**, **optional*) -- If \"True\", both the\n output and result will be computed in a differentiable way.\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.vhp.html", "category": "pytorch docs"} {"text": "Note that when \"strict\" is \"False\", the result can not require\n gradients or be disconnected from the inputs. Defaults to\n \"False\".\n * **strict** (*bool**, **optional*) -- If \"True\", an error will\n be raised when we detect that there exists an input such that\n all the outputs are independent of it. If \"False\", we return a\n Tensor of zeros as the vhp for said inputs, which is the\n expected mathematical value. Defaults to \"False\".\n\nReturns:\n tuple with:\n func_output (tuple of Tensors or Tensor): output of\n \"func(inputs)\"\n vhp (tuple of Tensors or Tensor): result of the dot product\n with the same shape as the inputs.\n\nReturn type:\n output (tuple)\n-[ Example ]-\n\n\n\ndef pow_reducer(x):\n ... 
return x.pow(3).sum()\ninputs = torch.rand(2, 2)\nv = torch.ones(2, 2)\nvhp(pow_reducer, inputs, v)\n (tensor(0.5591),\n tensor([[1.0689, 1.2431],\n [3.0989, 4.4456]]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.vhp.html", "category": "pytorch docs"} {"text": "[3.0989, 4.4456]]))\n\n\n\nvhp(pow_reducer, inputs, v, create_graph=True)\n (tensor(0.5591, grad_fn=),\n tensor([[1.0689, 1.2431],\n [3.0989, 4.4456]], grad_fn=))\ndef pow_adder_reducer(x, y):\n ... return (2 * x.pow(2) + 3 * y.pow(2)).sum()\ninputs = (torch.rand(2), torch.rand(2))\nv = (torch.zeros(2), torch.ones(2))\nvhp(pow_adder_reducer, inputs, v)\n (tensor(4.8053),\n (tensor([0., 0.]),\n tensor([6., 6.])))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.vhp.html", "category": "pytorch docs"} {"text": "torch.nn.utils.prune.ln_structured\ntorch.nn.utils.prune.ln_structured(module, name, amount, n, dim, importance_scores=None)\nPrunes tensor corresponding to parameter called \"name\" in \"module\"\n by removing the specified \"amount\" of (currently unpruned) channels\n along the specified \"dim\" with the lowest L\"n\"-norm. Modifies\n module in place (and also return the modified module) by:\n\n\nadding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n\n\nreplacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n\n\nParameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\n * **amount** (*int** or **float*) -- quantity of parameters to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.ln_structured.html", "category": "pytorch docs"} {"text": "prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n * **n** (*int**, **float**, **inf**, **-inf**, **'fro'**,\n **'nuc'*) -- See documentation of valid entries for argument\n \"p\" in \"torch.norm()\".\n\n * **dim** (*int*) -- index of the dim along which we define\n channels to prune.\n\n * **importance_scores** (*torch.Tensor*) -- tensor of importance\n scores (of same shape as module parameter) used to compute\n mask for pruning. The values in this tensor indicate the\n importance of the corresponding elements in the parameter\n being pruned. If unspecified or None, the module parameter\n will be used in its place.\n\nReturns:\n modified (i.e. pruned) version of the input module\nReturn type:\n module (nn.Module)\n-[ Examples ]-\n\n\n\nfrom torch.nn.utils import prune\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.ln_structured.html", "category": "pytorch docs"} {"text": "\n\n\nfrom torch.nn.utils import prune\nm = prune.ln_structured(\n ... nn.Conv2d(5, 3, 2), 'weight', amount=0.3, dim=1, n=float('-inf')\n ... 
)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.ln_structured.html", "category": "pytorch docs"} {"text": "torch.foreach_trunc\ntorch.foreach_trunc(self: List[Tensor]) -> None\nApply \"torch.trunc()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_trunc_.html", "category": "pytorch docs"} {"text": "default_dynamic_qconfig\ntorch.quantization.qconfig.default_dynamic_qconfig\nalias of QConfig(activation=functools.partial(,\n dtype=torch.quint8, quant_min=0, quant_max=255, is_dynamic=True){},\n weight=functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_dynamic_qconfig.html", "category": "pytorch docs"} {"text": "torch.Tensor.renorm_\nTensor.renorm_(p, dim, maxnorm) -> Tensor\nIn-place version of \"renorm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.renorm_.html", "category": "pytorch docs"} {"text": "CTCLoss\nclass torch.nn.CTCLoss(blank=0, reduction='mean', zero_infinity=False)\nThe Connectionist Temporal Classification loss.\nCalculates loss between a continuous (unsegmented) time series and\n a target sequence. CTCLoss sums over the probability of possible\n alignments of input to target, producing a loss value which is\n differentiable with respect to each input node. The alignment of\n input to target is assumed to be \"many-to-one\", which limits the\n length of the target sequence such that it must be \\leq the input\n length.\nParameters:\n * blank (int, optional) -- blank label. Default 0.\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the output\n losses will be divided by the target lengths and then the mean\n over the batch is taken. Default: \"'mean'\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"} {"text": "\nzero_infinity (bool, optional) -- Whether to zero\n infinite losses and the associated gradients. Default: \"False\"\n Infinite losses mainly occur when the inputs are too short to\n be aligned to the targets.\n\nShape:\n * Log_probs: Tensor of size (T, N, C) or (T, C), where T =\n \\text{input length}, N = \\text{batch size}, and C =\n \\text{number of classes (including blank)}. The logarithmized\n probabilities of the outputs (e.g. obtained with\n \"torch.nn.functional.log_softmax()\").\n * Targets: Tensor of size (N, S) or\n (\\operatorname{sum}(\\text{target\\_lengths})), where N =\n \\text{batch size} and S = \\text{max target length, if shape is\n } (N, S). It represents the target sequences. Each element in\n the target sequence is a class index. And the target index\n cannot be blank (default=0). In the (N, S) form, targets are\n padded to the length of the longest sequence, and stacked. In\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"} {"text": "the (\\operatorname{sum}(\\text{target_lengths})) form, the\n targets are assumed to be un-padded and concatenated within 1\n dimension.\n * Input_lengths: Tuple or tensor of size (N) or (), where N =\n \\text{batch size}. It represents the lengths of the inputs\n (must each be \\leq T). 
And the lengths are specified for each\n sequence to achieve masking under the assumption that\n sequences are padded to equal lengths.\n\n * Target_lengths: Tuple or tensor of size (N) or (), where N =\n \\text{batch size}. It represents the lengths of the targets.\n Lengths are specified for each sequence to achieve masking\n under the assumption that sequences are padded to equal\n lengths. If target shape is (N,S), target_lengths are\n effectively the stop index s_n for each target sequence, such\n that \"target_n = targets[n,0:s_n]\" for each target in a batch.\n Lengths must each be \\leq S. If the targets are given as a 1d\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"} {"text": "tensor that is the concatenation of individual targets, the\n target_lengths must add up to the total length of the tensor.\n * Output: scalar. If \"reduction\" is \"'none'\", then (N) if input\n is batched or () if input is unbatched, where N = \\text{batch\n size}.\n\nExamples:\n >>> # Target are to be padded\n >>> T = 50 # Input sequence length\n >>> C = 20 # Number of classes (including blank)\n >>> N = 16 # Batch size\n >>> S = 30 # Target sequence length of longest target in batch (padding length)\n >>> S_min = 10 # Minimum target length, for demonstration purposes\n >>>\n >>> # Initialize random batch of input vectors, for *size = (T,N,C)\n >>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()\n >>>\n >>> # Initialize random batch of targets (0 = blank, 1:C = classes)\n >>> target = torch.randint(low=1, high=C, size=(N, S), dtype=torch.long)\n >>>\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"} {"text": "\n\n\n >>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)\n >>> target_lengths = torch.randint(low=S_min, high=S, size=(N,), dtype=torch.long)\n >>> ctc_loss = nn.CTCLoss()\n >>> loss = ctc_loss(input, target, input_lengths, target_lengths)\n >>> loss.backward()\n >>>\n >>>\n >>> # Target are to be un-padded\n >>> T = 50 # Input sequence length\n >>> C = 20 # Number of classes (including blank)\n >>> N = 16 # Batch size\n >>>\n >>> # Initialize random batch of input vectors, for *size = (T,N,C)\n >>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()\n >>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)\n >>>\n >>> # Initialize random batch of targets (0 = blank, 1:C = classes)\n >>> target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long)\n >>> target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long)\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"} {"text": "\n\n\nctc_loss = nn.CTCLoss()\n >>> loss = ctc_loss(input, target, input_lengths, target_lengths)\n >>> loss.backward()\n >>>\n >>>\n >>> # Target are to be un-padded and unbatched (effectively N=1)\n >>> T = 50 # Input sequence length\n >>> C = 20 # Number of classes (including blank)\n >>>\n >>> # Initialize random batch of input vectors, for *size = (T,C)\n >>> input = torch.randn(T, C).log_softmax(2).detach().requires_grad_()\n >>> input_lengths = torch.tensor(T, dtype=torch.long)\n >>>\n >>> # Initialize random batch of targets (0 = blank, 1:C = classes)\n >>> target_lengths = torch.randint(low=1, high=T, size=(), dtype=torch.long)\n >>> target = torch.randint(low=1, high=C, size=(target_lengths,), dtype=torch.long)\n >>> ctc_loss = 
nn.CTCLoss()\n >>> loss = ctc_loss(input, target, input_lengths, target_lengths)\n >>> loss.backward()\n\n\n\nReference:\n A. Graves et al.: Connectionist Temporal Classification:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"} {"text": "Labelling Unsegmented Sequence Data with Recurrent Neural\n Networks: https://www.cs.toronto.edu/~graves/icml_2006.pdf\nNote:\n In order to use CuDNN, the following must be satisfied: \"targets\"\n must be in concatenated format, all \"input_lengths\" must be *T*.\n blank=0, \"target_lengths\" \\leq 256, the integer arguments must be\n of dtype \"torch.int32\".The regular implementation uses the (more\n common in PyTorch) *torch.long* dtype.\n\nNote:\n In some circumstances when using the CUDA backend with CuDNN,\n this operator may select a nondeterministic algorithm to increase\n performance. If this is undesirable, you can try to make the\n operation deterministic (potentially at a performance cost) by\n setting \"torch.backends.cudnn.deterministic = True\". Please see\n the notes on Reproducibility for background.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CTCLoss.html", "category": "pytorch docs"} {"text": "torch.exp2\ntorch.exp2(input, *, out=None) -> Tensor\nAlias for \"torch.special.exp2()\".", "source": "https://pytorch.org/docs/stable/generated/torch.exp2.html", "category": "pytorch docs"} {"text": "torch.Tensor.log1p\nTensor.log1p() -> Tensor\nSee \"torch.log1p()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log1p.html", "category": "pytorch docs"} {"text": "torch.nn.functional.unfold\ntorch.nn.functional.unfold(input, kernel_size, dilation=1, padding=0, stride=1)\nExtracts sliding local blocks from a batched input tensor.\nWarning:\n Currently, only 4-D input tensors (batched image-like tensors)\n are supported.\n\nWarning:\n More than one element of the unfolded tensor may refer to a\n single memory location. As a result, in-place operations\n (especially ones that are vectorized) may result in incorrect\n behavior. 
If you need to write to the tensor, please clone it\n first.\n\nSee \"torch.nn.Unfold\" for details\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.unfold.html", "category": "pytorch docs"} {"text": "torch._foreach_erfc\ntorch._foreach_erfc(self: List[Tensor]) -> List[Tensor]\nApply \"torch.erfc()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_erfc.html", "category": "pytorch docs"} {"text": "torch.Tensor.sqrt\nTensor.sqrt() -> Tensor\nSee \"torch.sqrt()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sqrt.html", "category": "pytorch docs"} {"text": "torch.masked_select\ntorch.masked_select(input, mask, *, out=None) -> Tensor\nReturns a new 1-D tensor which indexes the \"input\" tensor according\n to the boolean mask \"mask\" which is a BoolTensor.\nThe shapes of the \"mask\" tensor and the \"input\" tensor don't need\n to match, but they must be broadcastable.\nNote:\n The returned tensor does **not** use the same storage as the\n original tensor\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **mask** (*BoolTensor*) -- the tensor containing the binary\n mask to index with\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> x = torch.randn(3, 4)\n >>> x\n tensor([[ 0.3552, -2.3825, -0.8297, 0.3477],\n [-1.2035, 1.2252, 0.5002, 0.6248],\n [ 0.1307, -2.0608, 0.1244, 2.0139]])\n >>> mask = x.ge(0.5)\n >>> mask\n tensor([[False, False, False, False],\n [False, True, True, True],\n", "source": "https://pytorch.org/docs/stable/generated/torch.masked_select.html", "category": "pytorch docs"} {"text": "[False, True, True, True],\n [False, False, False, True]])\n >>> torch.masked_select(x, mask)\n tensor([ 1.2252, 0.5002, 0.6248, 2.0139])", "source": "https://pytorch.org/docs/stable/generated/torch.masked_select.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_sparse_csr\nTensor.is_sparse_csr\nIs \"True\" if the Tensor uses sparse CSR storage layout, \"False\"\n otherwise.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_sparse_csr.html", "category": "pytorch docs"} {"text": "torch.Tensor.allclose\nTensor.allclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) -> Tensor\nSee \"torch.allclose()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.allclose.html", "category": "pytorch docs"} {"text": "torch.log2\ntorch.log2(input, *, out=None) -> Tensor\nReturns a new tensor with the logarithm to the base 2 of the\n elements of \"input\".\n y_{i} = \\log_{2} (x_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.rand(5)\n >>> a\n tensor([ 0.8419, 0.8003, 0.9971, 0.5287, 0.0490])\n\n\n >>> torch.log2(a)\n tensor([-0.2483, -0.3213, -0.0042, -0.9196, -4.3504])\n", "source": "https://pytorch.org/docs/stable/generated/torch.log2.html", "category": "pytorch docs"} {"text": "torch.autograd.profiler.profile.export_chrome_trace\nprofile.export_chrome_trace(path)\nExports an EventList as a Chrome tracing tools file.\nThe checkpoint can be later loaded and inspected under\n \"chrome://tracing\" URL.\nParameters:\n path (str) -- Path where the trace will be written.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.profiler.profile.export_chrome_trace.html", "category": "pytorch docs"} {"text": "FusedMovingAvgObsFakeQuantize\nclass 
torch.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize(observer=, quant_min=0, quant_max=255, **observer_kwargs)\nFused module that is used to observe the input tensor (compute\n min/max), compute scale/zero_point and fake_quantize the tensor.\n This module uses a calculation similar to MovingAverageMinMaxObserver\n for the inputs, to compute the min/max values in order to compute\n the scale/zero_point. The qscheme input in the observer is used to\n differentiate between symmetric/affine quantization schemes.\nThe output of this module is given by x_out = (clamp(round(x/scale\n + zero_point), quant_min, quant_max)-zero_point)*scale\nSimilar to \"FakeQuantize\", and accepts the same attributes as the\n base class.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FusedMovingAvgObsFakeQuantize.html", "category": "pytorch docs"} {"text": "torch.linalg.diagonal\ntorch.linalg.diagonal(A, *, offset=0, dim1=- 2, dim2=- 1) -> Tensor\nAlias for \"torch.diagonal()\" with defaults \"dim1\"= -2, \"dim2\"=\n -1.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.diagonal.html", "category": "pytorch docs"} {"text": "torch.sinc\ntorch.sinc(input, *, out=None) -> Tensor\nAlias for \"torch.special.sinc()\".", "source": "https://pytorch.org/docs/stable/generated/torch.sinc.html", "category": "pytorch docs"} {"text": "quantize_qat\nclass torch.quantization.quantize_qat(model, run_fn, run_args, inplace=False)\nDo quantization aware training and output a quantized model\nParameters:\n * model -- input model\n * **run_fn** -- a function for evaluating the prepared model,\n can be a function that simply runs the prepared model or a\n training loop\n\n * **run_args** -- positional arguments for *run_fn*\n\nReturns:\n Quantized model.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_qat.html", "category": "pytorch docs"} {"text": "torch.Tensor.ceil_\nTensor.ceil_() -> Tensor\nIn-place version of \"ceil()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ceil_.html", "category": "pytorch docs"} {"text": "torch.Tensor.index_copy\nTensor.index_copy(dim, index, tensor2) -> Tensor\nOut-of-place version of \"torch.Tensor.index_copy_()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_copy.html", "category": "pytorch docs"} {"text": "Stream\nclass torch.cuda.Stream(device=None, priority=0, **kwargs)\nWrapper around a CUDA stream.\nA CUDA stream is a linear sequence of execution that belongs to a\n specific device, independent from other streams. See CUDA\n semantics for details.\nParameters:\n * device (torch.device or int, optional) -- a\n device on which to allocate the stream. If \"device\" is \"None\"\n (default) or a negative integer, this will use the current\n device.\n * **priority** (*int**, **optional*) -- priority of the stream.\n Can be either -1 (high priority) or 0 (low priority). By\n default, streams have priority 0.\n\nNote:\n Although CUDA versions >= 11 support more than two levels of\n priorities, in PyTorch, we only support two levels of priorities.\n\nquery()\n Checks if all the work submitted has been completed.\n\n Returns:\n A boolean indicating if all kernels in this stream are\n completed.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.Stream.html", "category": "pytorch docs"} {"text": "completed.\nrecord_event(event=None)\n Records an event.\n\n Parameters:\n **event** (*torch.cuda.Event**, **optional*) -- event to\n record. 
If not given, a new one will be allocated.\n\n Returns:\n Recorded event.\n\nsynchronize()\n Wait for all the kernels in this stream to complete.\n\n Note:\n\n This is a wrapper around \"cudaStreamSynchronize()\": see CUDA\n Stream documentation for more info.\n\nwait_event(event)\n Makes all future work submitted to the stream wait for an event.\n\n Parameters:\n **event** (*torch.cuda.Event*) -- an event to wait for.\n\n Note:\n\n This is a wrapper around \"cudaStreamWaitEvent()\": see CUDA\n Stream documentation for more info.This function returns\n without waiting for \"event\": only future operations are\n affected.\n\nwait_stream(stream)\n Synchronizes with another stream.\n\n All future work submitted to this stream will wait until all\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.Stream.html", "category": "pytorch docs"} {"text": "kernels submitted to a given stream at the time of call\n complete.\n Parameters:\n **stream** (*Stream*) -- a stream to synchronize.\n\n Note:\n\n This function returns without waiting for currently enqueued\n kernels in \"stream\": only future operations are affected.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.Stream.html", "category": "pytorch docs"} {"text": "torch.nn.functional.pad\ntorch.nn.functional.pad(input, pad, mode='constant', value=None) -> Tensor\nPads tensor.\nPadding size:\n The padding size by which to pad some dimensions of \"input\" are\n described starting from the last dimension and moving forward.\n \\left\\lfloor\\frac{\\text{len(pad)}}{2}\\right\\rfloor dimensions of\n \"input\" will be padded. For example, to pad only the last\n dimension of the input tensor, then \"pad\" has the form\n (\\text{padding_left}, \\text{padding_right}); to pad the last 2\n dimensions of the input tensor, then use (\\text{padding_left},\n \\text{padding_right}, \\text{padding_top},\n \\text{padding_bottom}); to pad the last 3 dimensions, use\n (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom}\n \\text{padding_front}, \\text{padding_back}).\nPadding mode:\n See \"torch.nn.ConstantPad2d\", \"torch.nn.ReflectionPad2d\", and", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pad.html", "category": "pytorch docs"} {"text": "\"torch.nn.ReplicationPad2d\" for concrete examples on how each of\n the padding modes works. Constant padding is implemented for\n arbitrary dimensions. Replicate and reflection padding are\n implemented for padding the last 3 dimensions of a 4D or 5D\n input tensor, the last 2 dimensions of a 3D or 4D input tensor,\n or the last dimension of a 2D or 3D input tensor.\nNote:\n When using the CUDA backend, this operation may induce\n nondeterministic behaviour in its backward pass that is not\n easily switched off. Please see the notes on Reproducibility for\n background.\n\nParameters:\n * input (Tensor) -- N-dimensional tensor\n * **pad** (*tuple*) -- m-elements tuple, where \\frac{m}{2} \\leq\n input dimensions and m is even.\n\n * **mode** -- \"'constant'\", \"'reflect'\", \"'replicate'\" or\n \"'circular'\". Default: \"'constant'\"\n\n * **value** -- fill value for \"'constant'\" padding. 
Default: \"0\"\n\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pad.html", "category": "pytorch docs"} {"text": "Examples:\n >>> t4d = torch.empty(3, 3, 4, 2)\n >>> p1d = (1, 1) # pad last dim by 1 on each side\n >>> out = F.pad(t4d, p1d, \"constant\", 0) # effectively zero padding\n >>> print(out.size())\n torch.Size([3, 3, 4, 4])\n >>> p2d = (1, 1, 2, 2) # pad last dim by (1, 1) and 2nd to last by (2, 2)\n >>> out = F.pad(t4d, p2d, \"constant\", 0)\n >>> print(out.size())\n torch.Size([3, 3, 8, 4])\n >>> t4d = torch.empty(3, 3, 4, 2)\n >>> p3d = (0, 1, 2, 1, 3, 3) # pad by (0, 1), (2, 1), and (3, 3)\n >>> out = F.pad(t4d, p3d, \"constant\", 0)\n >>> print(out.size())\n torch.Size([3, 9, 7, 3])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pad.html", "category": "pytorch docs"} {"text": "torch.sym_not\ntorch.sym_not(a)\nSymInt-aware utility for logical negation.\nParameters:\n a (SymBool or bool) -- Object to negate", "source": "https://pytorch.org/docs/stable/generated/torch.sym_not.html", "category": "pytorch docs"} {"text": "torch.Tensor.requires_grad_\nTensor.requires_grad_(requires_grad=True) -> Tensor\nChange if autograd should record operations on this tensor: sets\n this tensor's \"requires_grad\" attribute in-place. Returns this\n tensor.\n\"requires_grad_()\"'s main use case is to tell autograd to begin\n recording operations on a Tensor \"tensor\". If \"tensor\" has\n \"requires_grad=False\" (because it was obtained through a\n DataLoader, or required preprocessing or initialization),\n \"tensor.requires_grad_()\" makes it so that autograd will begin to\n record operations on \"tensor\".\nParameters:\n requires_grad (bool) -- If autograd should record\n operations on this tensor. Default: \"True\".\nExample:\n >>> # Let's say we want to preprocess some saved weights and use\n >>> # the result as new weights.\n >>> saved_weights = [0.1, 0.2, 0.3, 0.25]\n >>> loaded_weights = torch.tensor(saved_weights)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.requires_grad_.html", "category": "pytorch docs"} {"text": "\n\n\nweights = preprocess(loaded_weights) # some function\n >>> weights\n tensor([-0.5503, 0.4926, -2.1158, -0.8303])\n\n\n\n >>> # Now, start to record operations done to weights\n >>> weights.requires_grad_()\n >>> out = weights.pow(2).sum()\n >>> out.backward()\n >>> weights.grad\n tensor([-1.1007, 0.9853, -4.2316, -1.6606])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.requires_grad_.html", "category": "pytorch docs"} {"text": "torch.nn.functional.softshrink\ntorch.nn.functional.softshrink(input, lambd=0.5) -> Tensor\nApplies the soft shrinkage function elementwise\nSee \"Softshrink\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.softshrink.html", "category": "pytorch docs"} {"text": "torch.cuda.current_stream\ntorch.cuda.current_stream(device=None)\nReturns the currently selected \"Stream\" for a given device.\nParameters:\n device (torch.device or int, optional) -- selected\n device. 
Returns the currently selected \"Stream\" for the current\n device, given by \"current_device()\", if \"device\" is \"None\"\n (default).\nReturn type:\n Stream", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.current_stream.html", "category": "pytorch docs"} {"text": "torch.Tensor.bitwise_xor_\nTensor.bitwise_xor_() -> Tensor\nIn-place version of \"bitwise_xor()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_xor_.html", "category": "pytorch docs"} {"text": "torch.Tensor.contiguous\nTensor.contiguous(memory_format=torch.contiguous_format) -> Tensor\nReturns a contiguous in memory tensor containing the same data as\n \"self\" tensor. If \"self\" tensor is already in the specified memory\n format, this function returns the \"self\" tensor.\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.contiguous_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.contiguous.html", "category": "pytorch docs"} {"text": "torch.std_mean\ntorch.std_mean(input, dim=None, *, correction=1, keepdim=False, out=None)\nCalculates the standard deviation and mean over the dimensions\n specified by \"dim\". \"dim\" can be a single dimension, list of\n dimensions, or \"None\" to reduce over all dimensions.\nThe standard deviation (\\sigma) is calculated as\n \\sigma = \\sqrt{\\frac{1}{N - \\delta\n N}\\sum_{i=0}^{N-1}(x_i-\\bar{x})^2}\n\nwhere x is the sample set of elements, \\bar{x} is the sample mean,\n N is the number of samples and \\delta N is the \"correction\".\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints**, **optional*) -- the\n", "source": "https://pytorch.org/docs/stable/generated/torch.std_mean.html", "category": "pytorch docs"} {"text": "dimension or dimensions to reduce. If \"None\", all dimensions\n are reduced.\nKeyword Arguments:\n * correction (int) --\n difference between the sample size and sample degrees of\n freedom. Defaults to Bessel's correction, \"correction=1\".\n\n Changed in version 2.0: Previously this argument was called\n \"unbiased\" and was a boolean with \"True\" corresponding to\n \"correction=1\" and \"False\" being \"correction=0\".\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nReturns:\n A tuple (std, mean) containing the standard deviation and mean.\n-[ Example ]-\n\n\n\na = torch.tensor(\n ... [[ 0.2035, 1.2959, 1.8101, -0.4644],\n ... [ 1.5027, -0.3270, 0.5905, 0.6538],\n ... [-1.5745, 1.3330, -0.5596, -0.6548],\n ... [ 0.1264, -0.5080, 1.6420, 0.1992]])\ntorch.std_mean(a, dim=0, keepdim=True)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.std_mean.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.std_mean(a, dim=0, keepdim=True)\n (tensor([[1.2620, 1.0028, 1.0957, 0.6038]]),\n tensor([[ 0.0645, 0.4485, 0.8707, -0.0665]]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.std_mean.html", "category": "pytorch docs"} {"text": "torch.cuda.manual_seed_all\ntorch.cuda.manual_seed_all(seed)\nSets the seed for generating random numbers on all GPUs. 
It's safe\n to call this function if CUDA is not available; in that case, it is\n silently ignored.\nParameters:\n seed (int) -- The desired seed.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.manual_seed_all.html", "category": "pytorch docs"} {"text": "torch.nn.functional.adaptive_avg_pool1d\ntorch.nn.functional.adaptive_avg_pool1d(input, output_size) -> Tensor\nApplies a 1D adaptive average pooling over an input signal composed\n of several input planes.\nSee \"AdaptiveAvgPool1d\" for details and output shape.\nParameters:\n output_size -- the target output size (single integer)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_avg_pool1d.html", "category": "pytorch docs"} {"text": "Flatten\nclass torch.nn.Flatten(start_dim=1, end_dim=- 1)\nFlattens a contiguous range of dims into a tensor. For use with\n \"Sequential\".\nShape:\n * Input: (*, S_{\\text{start}},..., S_{i}, ..., S_{\\text{end}},\n *), where S_{i} is the size at dimension i and * means any\n number of dimensions including none.\n * Output: (*, \\prod_{i=\\text{start}}^{\\text{end}} S_{i}, *).\n\nParameters:\n * start_dim (int) -- first dim to flatten (default = 1).\n * **end_dim** (*int*) -- last dim to flatten (default = -1).\n\nExamples::\n >>> input = torch.randn(32, 1, 5, 5)\n >>> # With default parameters\n >>> m = nn.Flatten()\n >>> output = m(input)\n >>> output.size()\n torch.Size([32, 25])\n >>> # With non-default parameters\n >>> m = nn.Flatten(0, 2)\n >>> output = m(input)\n >>> output.size()\n torch.Size([160, 5])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Flatten.html", "category": "pytorch docs"} {"text": "torch.Tensor.mul\nTensor.mul(value) -> Tensor\nSee \"torch.mul()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mul.html", "category": "pytorch docs"} {"text": "torch.nn.utils.rnn.pad_packed_sequence\ntorch.nn.utils.rnn.pad_packed_sequence(sequence, batch_first=False, padding_value=0.0, total_length=None)\nPads a packed batch of variable length sequences.\nIt is an inverse operation to \"pack_padded_sequence()\".\nThe returned Tensor's data will be of size \"T x B x *\", where T\n is the length of the longest sequence and B is the batch size. If\n \"batch_first\" is True, the data will be transposed into \"B x T x *\"\n format.\n-[ Example ]-\n\n\n\nfrom torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence\nseq = torch.tensor([[1, 2, 0], [3, 0, 0], [4, 5, 6]])\nlens = [2, 1, 3]\npacked = pack_padded_sequence(seq, lens, batch_first=True, enforce_sorted=False)\npacked\n PackedSequence(data=tensor([4, 1, 3, 5, 2, 6]), batch_sizes=tensor([3, 2, 1]),\n sorted_indices=tensor([2, 0, 1]), unsorted_indices=tensor([1, 2, 0]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pad_packed_sequence.html", "category": "pytorch docs"} {"text": "\n\n\nseq_unpacked, lens_unpacked = pad_packed_sequence(packed, batch_first=True)\nseq_unpacked\n tensor([[1, 2, 0],\n [3, 0, 0],\n [4, 5, 6]])\nlens_unpacked\n tensor([2, 1, 3])\n\n\n\nNote:\n \"total_length\" is useful to implement the \"pack sequence ->\n recurrent network -> unpack sequence\" pattern in a \"Module\"\n wrapped in \"DataParallel\". 
See this FAQ section for details.\n\nParameters:\n * sequence (PackedSequence) -- batch to pad\n * **batch_first** (*bool**, **optional*) -- if \"True\", the\n output will be in \"B x T x *\" format.\n\n * **padding_value** (*float**, **optional*) -- values for padded\n elements.\n\n * **total_length** (*int**, **optional*) -- if not \"None\", the\n output will be padded to have length \"total_length\". This\n method will throw \"ValueError\" if \"total_length\" is less than\n the max sequence length in \"sequence\".\n\nReturns:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pad_packed_sequence.html", "category": "pytorch docs"} {"text": "Returns:\n Tuple of Tensor containing the padded sequence, and a Tensor\n containing the list of lengths of each sequence in the batch.\n Batch elements will be re-ordered as they were ordered\n originally when the batch was passed to \"pack_padded_sequence\"\n or \"pack_sequence\".\nReturn type:\n Tuple[Tensor, Tensor]", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pad_packed_sequence.html", "category": "pytorch docs"} {"text": "torch.Tensor.arctan2_\nTensor.arctan2_()\natan2_(other) -> Tensor\nIn-place version of \"arctan2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arctan2_.html", "category": "pytorch docs"} {"text": "ConvBn1d\nclass torch.ao.nn.intrinsic.ConvBn1d(conv, bn)\nThis is a sequential container which calls the Conv 1d and Batch\n Norm 1d modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBn1d.html", "category": "pytorch docs"} {"text": "torch.Tensor.bitwise_and_\nTensor.bitwise_and_() -> Tensor\nIn-place version of \"bitwise_and()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_and_.html", "category": "pytorch docs"} {"text": "torch.chain_matmul\ntorch.chain_matmul(*matrices, out=None)\nReturns the matrix product of the N 2-D tensors. This product is\n efficiently computed using the matrix chain order algorithm which\n selects the order in which incurs the lowest cost in terms of\n arithmetic operations ([CLRS]). Note that since this is a function\n to compute the product, N needs to be greater than or equal to 2;\n if equal to 2 then a trivial matrix-matrix product is returned. If\n N is 1, then this is a no-op - the original matrix is returned as\n is.\nWarning:\n \"torch.chain_matmul()\" is deprecated and will be removed in a\n future PyTorch release. Use \"torch.linalg.multi_dot()\" instead,\n which accepts a list of two or more tensors rather than multiple\n arguments.\n\nParameters:\n * matrices (Tensors...) -- a sequence of 2 or more 2-D\n tensors whose product is to be determined.\n * **out** (*Tensor**, **optional*) -- the output tensor. 
Ignored\n", "source": "https://pytorch.org/docs/stable/generated/torch.chain_matmul.html", "category": "pytorch docs"} {"text": "if \"out\" = \"None\".\nReturns:\n if the i^{th} tensor was of dimensions p_{i} \\times p_{i + 1},\n then the product would be of dimensions p_{1} \\times p_{N + 1}.\nReturn type:\n Tensor\nExample:\n >>> a = torch.randn(3, 4)\n >>> b = torch.randn(4, 5)\n >>> c = torch.randn(5, 6)\n >>> d = torch.randn(6, 7)\n >>> # will raise a deprecation warning\n >>> torch.chain_matmul(a, b, c, d)\n tensor([[ -2.3375, -3.9790, -4.1119, -6.6577, 9.5609, -11.5095, -3.2614],\n [ 21.4038, 3.3378, -8.4982, -5.2457, -10.2561, -2.4684, 2.7163],\n [ -0.9647, -5.8917, -2.3213, -5.2284, 12.8615, -12.2816, -2.5095]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.chain_matmul.html", "category": "pytorch docs"} {"text": "torch.foreach_log2\ntorch.foreach_log2(self: List[Tensor]) -> None\nApply \"torch.log2()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log2_.html", "category": "pytorch docs"} {"text": "torch.cuda.utilization\ntorch.cuda.utilization(device=None)\nReturns the percent of time over the past sample period during\n which one or more kernels was executing on the GPU as given by\n nvidia-smi.\nParameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\nReturn type:\n int\nWarning: Each sample period may be between 1 second and 1/6 second,\n depending on the product being queried.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.utilization.html", "category": "pytorch docs"} {"text": "torch.get_num_threads\ntorch.get_num_threads() -> int\nReturns the number of threads used for parallelizing CPU operations", "source": "https://pytorch.org/docs/stable/generated/torch.get_num_threads.html", "category": "pytorch docs"} {"text": "torch.Tensor.hsplit\nTensor.hsplit(split_size_or_sections) -> List of Tensors\nSee \"torch.hsplit()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.hsplit.html", "category": "pytorch docs"} {"text": "CrossEntropyLoss\nclass torch.nn.CrossEntropyLoss(weight=None, size_average=None, ignore_index=- 100, reduce=None, reduction='mean', label_smoothing=0.0)\nThis criterion computes the cross entropy loss between input logits\n and target.\nIt is useful when training a classification problem with C\n classes. If provided, the optional argument \"weight\" should be a 1D\n Tensor assigning weight to each of the classes. This is\n particularly useful when you have an unbalanced training set.\nThe input is expected to contain the unnormalized logits for each\n class (which do not need to be positive or sum to 1, in general).\n input has to be a Tensor of size (C) for unbatched input,\n (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K \\geq 1\n for the K-dimensional case. The last being useful for higher\n dimension inputs, such as computing cross entropy loss per-pixel\n for 2D images.\nThe target that this criterion expects should contain either:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"} {"text": "\n\nClass indices in the range [0, C) where C is the number of\n classes; if ignore_index is specified, this loss also accepts\n this class index (this index may not necessarily be in the class\n range). The unreduced (i.e. 
with \"reduction\" set to \"'none'\")\n loss for this case can be described as:\n\\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad l_n = - w_{y_n}\n\\log \\frac{\\exp(x_{n,y_n})}{\\sum_{c=1}^C \\exp(x_{n,c})} \\cdot\n\\mathbb{1}\\{y_n \\not= \\text{ignore\\_index}\\}\n\nwhere x is the input, y is the target, w is the weight, C is the\n number of classes, and N spans the minibatch dimension as well as\n d_1, ..., d_k for the K-dimensional case. If \"reduction\" is not\n \"'none'\" (default \"'mean'\"), then\n\\ell(x, y) = \\begin{cases} \\sum_{n=1}^N\n\\frac{1}{\\sum_{n=1}^N w_{y_n} \\cdot \\mathbb{1}\\{y_n \\not=\n\\text{ignore\\_index}\\}} l_n, & \\text{if reduction} =\n\\text{`mean';}\\\\ \\sum_{n=1}^N l_n, & \\text{if\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"} {"text": "reduction} = \\text{`sum'.} \\end{cases}\n Note that this case is equivalent to the combination of\n \"LogSoftmax\" and \"NLLLoss\".\n\n\n\nProbabilities for each class; useful when labels beyond a single\n class per minibatch item are required, such as for blended\n labels, label smoothing, etc. The unreduced (i.e. with\n \"reduction\" set to \"'none'\") loss for this case can be described\n as:\n\\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad l_n = -\n\\sum_{c=1}^C w_c \\log \\frac{\\exp(x_{n,c})}{\\sum_{i=1}^C\n\\exp(x_{n,i})} y_{n,c}\n\nwhere x is the input, y is the target, w is the weight, C is the\n number of classes, and N spans the minibatch dimension as well as\n d_1, ..., d_k for the K-dimensional case. If \"reduction\" is not\n \"'none'\" (default \"'mean'\"), then\n\\ell(x, y) = \\begin{cases} \\frac{\\sum_{n=1}^N l_n}{N}, &\n\\text{if reduction} = \\text{`mean';}\\\\ \\sum_{n=1}^N l_n,\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"} {"text": "& \\text{if reduction} = \\text{`sum'.} \\end{cases}\nNote:\n The performance of this criterion is generally better when\n *target* contains class indices, as this allows for optimized\n computation. Consider providing *target* as class probabilities\n only when a single class label per minibatch item is too\n restrictive.\n\nParameters:\n * weight (Tensor, optional) -- a manual rescaling\n weight given to each class. If given, has to be a Tensor of\n size C\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n\n * **ignore_index** (*int**, **optional*) -- Specifies a target\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"} {"text": "value that is ignored and does not contribute to the input\n gradient. When \"size_average\" is \"True\", the loss is averaged\n over non-ignored targets. Note that \"ignore_index\" is only\n applicable when the target contains class indices.\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". 
Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the weighted\n mean of the output is taken, \"'sum'\": the output will be\n summed. Note: \"size_average\" and \"reduce\" are in the process\n of being deprecated, and in the meantime, specifying either of\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"} {"text": "those two args will override \"reduction\". Default: \"'mean'\"\n * **label_smoothing** (*float**, **optional*) -- A float in\n [0.0, 1.0]. Specifies the amount of smoothing when computing\n the loss, where 0.0 means no smoothing. The targets become a\n mixture of the original ground truth and a uniform\n distribution as described in Rethinking the Inception\n Architecture for Computer Vision. Default: 0.0.\n\nShape:\n * Input: Shape (C), (N, C) or (N, C, d_1, d_2, ..., d_K) with K\n \\geq 1 in the case of K-dimensional loss.\n * Target: If containing class indices, shape (), (N) or (N, d_1,\n d_2, ..., d_K) with K \\geq 1 in the case of K-dimensional loss\n where each value should be between [0, C). If containing class\n probabilities, same shape as the input and each value should\n be between [0, 1].\n\n * Output: If reduction is 'none', shape (), (N) or (N, d_1, d_2,\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"} {"text": "..., d_K) with K \\geq 1 in the case of K-dimensional loss,\n depending on the shape of the input. Otherwise, scalar.\n where:\n\n \\begin{aligned} C ={} & \\text{number of classes} \\\\ N\n ={} & \\text{batch size} \\\\ \\end{aligned}\n\nExamples:\n >>> # Example of target with class indices\n >>> loss = nn.CrossEntropyLoss()\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> target = torch.empty(3, dtype=torch.long).random_(5)\n >>> output = loss(input, target)\n >>> output.backward()\n >>>\n >>> # Example of target with class probabilities\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> target = torch.randn(3, 5).softmax(dim=1)\n >>> output = loss(input, target)\n >>> output.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CrossEntropyLoss.html", "category": "pytorch docs"} {"text": "prepare_qat\nclass torch.quantization.prepare_qat(model, mapping=None, inplace=False)\nPrepares a copy of the model for quantization calibration or\n quantization-aware training and converts it to quantized version.\nQuantization configuration should be assigned preemptively to\n individual submodules in .qconfig attribute.\nParameters:\n * model -- input model to be modified in-place\n * **mapping** -- dictionary that maps float modules to quantized\n modules to be replaced.\n\n * **inplace** -- carry out model transformations in-place, the\n original module is mutated\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.prepare_qat.html", "category": "pytorch docs"} {"text": "torch.Tensor.acos_\nTensor.acos_() -> Tensor\nIn-place version of \"acos()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.acos_.html", "category": "pytorch docs"} {"text": "torch.nn.functional.threshold\ntorch.nn.functional.threshold(input, threshold, value, inplace=False)\nThresholds each element of the input Tensor.\nSee \"Threshold\" for more details.\nReturn type:\n Tensor", "source": 
"https://pytorch.org/docs/stable/generated/torch.nn.functional.threshold.html", "category": "pytorch docs"} {"text": "torch.xlogy\ntorch.xlogy(input, other, *, out=None) -> Tensor\nAlias for \"torch.special.xlogy()\".", "source": "https://pytorch.org/docs/stable/generated/torch.xlogy.html", "category": "pytorch docs"} {"text": "torch.sum\ntorch.sum(input, *, dtype=None) -> Tensor\nReturns the sum of all elements in the \"input\" tensor.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\nExample:\n >>> a = torch.randn(1, 3)\n >>> a\n tensor([[ 0.1133, -0.9567, 0.2958]])\n >>> torch.sum(a)\n tensor(-0.5475)\n\ntorch.sum(input, dim, keepdim=False, *, dtype=None) -> Tensor\nReturns the sum of each row of the \"input\" tensor in the given\n dimension \"dim\". If \"dim\" is a list of dimensions, reduce over all\n of them.\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.", "source": "https://pytorch.org/docs/stable/generated/torch.sum.html", "category": "pytorch docs"} {"text": "Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints**, **optional*) -- the\n dimension or dimensions to reduce. If \"None\", all dimensions\n are reduced.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\nExample:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 0.0569, -0.2475, 0.0737, -0.3429],\n [-0.2993, 0.9138, 0.9337, -1.6864],\n [ 0.1132, 0.7892, -0.1003, 0.5688],\n [ 0.3637, -0.9906, -0.4752, -1.5197]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.sum.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.sum(a, 1)\n tensor([-0.4598, -0.1381, 1.3708, -2.6217])\n >>> b = torch.arange(4 * 5 * 6).view(4, 5, 6)\n >>> torch.sum(b, (2, 1))\n tensor([ 435., 1335., 2235., 3135.])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.sum.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_cuda\nTensor.is_cuda\nIs \"True\" if the Tensor is stored on the GPU, \"False\" otherwise.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_cuda.html", "category": "pytorch docs"} {"text": "torch.autograd.grad\ntorch.autograd.grad(outputs, inputs, grad_outputs=None, retain_graph=None, create_graph=False, only_inputs=True, allow_unused=False, is_grads_batched=False)\nComputes and returns the sum of gradients of outputs with respect\n to the inputs.\n\"grad_outputs\" should be a sequence of length matching \"output\"\n containing the \"vector\" in vector-Jacobian product, usually the\n pre-computed gradients w.r.t. each of the outputs. 
If an output\n doesn't require_grad, then the gradient can be \"None\").\nNote:\n If you run any forward ops, create \"grad_outputs\", and/or call\n \"grad\" in a user-specified CUDA stream context, see Stream\n semantics of backward passes.\n\nNote:\n \"only_inputs\" argument is deprecated and is ignored now (defaults\n to \"True\"). To accumulate gradient for other parts of the graph,\n please use \"torch.autograd.backward\".\n\nParameters:\n * outputs (sequence of Tensor) -- outputs of the", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.grad.html", "category": "pytorch docs"} {"text": "differentiated function.\n * **inputs** (*sequence of Tensor*) -- Inputs w.r.t. which the\n gradient will be returned (and not accumulated into \".grad\").\n\n * **grad_outputs** (*sequence of Tensor*) -- The \"vector\" in the\n vector-Jacobian product. Usually gradients w.r.t. each output.\n None values can be specified for scalar Tensors or ones that\n don't require grad. If a None value would be acceptable for\n all grad_tensors, then this argument is optional. Default:\n None.\n\n * **retain_graph** (*bool**, **optional*) -- If \"False\", the\n graph used to compute the grad will be freed. Note that in\n nearly all cases setting this option to \"True\" is not needed\n and often can be worked around in a much more efficient way.\n Defaults to the value of \"create_graph\".\n\n * **create_graph** (*bool**, **optional*) -- If \"True\", graph of\n the derivative will be constructed, allowing to compute higher\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.grad.html", "category": "pytorch docs"} {"text": "order derivative products. Default: \"False\".\n * **allow_unused** (*bool**, **optional*) -- If \"False\",\n specifying inputs that were not used when computing outputs\n (and therefore their grad is always zero) is an error.\n Defaults to \"False\".\n\n * **is_grads_batched** (*bool**, **optional*) -- If \"True\", the\n first dimension of each tensor in \"grad_outputs\" will be\n interpreted as the batch dimension. Instead of computing a\n single vector-Jacobian product, we compute a batch of vector-\n Jacobian products for each \"vector\" in the batch. We use the\n vmap prototype feature as the backend to vectorize calls to\n the autograd engine so that this computation can be performed\n in a single call. This should lead to performance improvements\n when compared to manually looping and performing backward\n multiple times. Note that due to this feature being\n experimental, there may be performance cliffs. Please use\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.grad.html", "category": "pytorch docs"} {"text": "\"torch._C._debug_only_display_vmap_fallback_warnings(True)\" to\n show any performance warnings and file an issue on github if\n warnings exist for your use case. Defaults to \"False\".\nReturn type:\n Tuple[Tensor, ...]", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.grad.html", "category": "pytorch docs"} {"text": "torch.Tensor.index_add_\nTensor.index_add_(dim, index, source, *, alpha=1) -> Tensor\nAccumulate the elements of \"alpha\" times \"source\" into the \"self\"\n tensor by adding to the indices in the order given in \"index\". 
For\n example, if \"dim == 0\", \"index[i] == j\", and \"alpha=-1\", then the\n \"i\"th row of \"source\" is subtracted from the \"j\"th row of \"self\".\nThe \"dim\"th dimension of \"source\" must have the same size as the\n length of \"index\" (which must be a vector), and all other\n dimensions must match \"self\", or an error will be raised.\nFor a 3-D tensor the output is given as:\n self[index[i], :, :] += alpha * src[i, :, :] # if dim == 0\n self[:, index[i], :] += alpha * src[:, i, :] # if dim == 1\n self[:, :, index[i]] += alpha * src[:, :, i] # if dim == 2\n\nNote:\n This operation may behave nondeterministically when given tensors\n on a CUDA device. See Reproducibility for more information.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_add_.html", "category": "pytorch docs"} {"text": "Parameters:\n * dim (int) -- dimension along which to index\n * **index** (*Tensor*) -- indices of \"source\" to select from,\n should have dtype either *torch.int64* or *torch.int32*\n\n * **source** (*Tensor*) -- the tensor containing values to add\n\nKeyword Arguments:\n alpha (Number) -- the scalar multiplier for \"source\"\nExample:\n >>> x = torch.ones(5, 3)\n >>> t = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)\n >>> index = torch.tensor([0, 4, 2])\n >>> x.index_add_(0, index, t)\n tensor([[ 2., 3., 4.],\n [ 1., 1., 1.],\n [ 8., 9., 10.],\n [ 1., 1., 1.],\n [ 5., 6., 7.]])\n >>> x.index_add_(0, index, t, alpha=-1)\n tensor([[ 1., 1., 1.],\n [ 1., 1., 1.],\n [ 1., 1., 1.],\n [ 1., 1., 1.],\n [ 1., 1., 1.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_add_.html", "category": "pytorch docs"} {"text": "torch.Tensor.ceil\nTensor.ceil() -> Tensor\nSee \"torch.ceil()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ceil.html", "category": "pytorch docs"} {"text": "torch.Tensor.bfloat16\nTensor.bfloat16(memory_format=torch.preserve_format) -> Tensor\n\"self.bfloat16()\" is equivalent to \"self.to(torch.bfloat16)\". See\n \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. 
Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bfloat16.html", "category": "pytorch docs"} {"text": "torch.Tensor.matmul\nTensor.matmul(tensor2) -> Tensor\nSee \"torch.matmul()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.matmul.html", "category": "pytorch docs"} {"text": "torch.Tensor.adjoint\nTensor.adjoint() -> Tensor\nAlias for \"adjoint()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.adjoint.html", "category": "pytorch docs"} {"text": "torch.tensordot\ntorch.tensordot(a, b, dims=2, out=None)\nReturns a contraction of a and b over multiple dimensions.\n\"tensordot\" implements a generalized matrix product.\nParameters:\n * a (Tensor) -- Left tensor to contract\n * **b** (*Tensor*) -- Right tensor to contract\n\n * **dims** (*int** or **Tuple**[**List**[**int**]**,\n **List**[**int**]**] or **List**[**List**[**int**]**]\n **containing two lists** or **Tensor*) -- number of dimensions\n to contract or explicit lists of dimensions for \"a\" and \"b\"\n respectively\n\nWhen called with a non-negative integer argument \"dims\" = d, and\n the number of dimensions of \"a\" and \"b\" is m and n, respectively,\n \"tensordot()\" computes\n r_{i_0,...,i_{m-d}, i_d,...,i_n} = \\sum_{k_0,...,k_{d-1}}\n a_{i_0,...,i_{m-d},k_0,...,k_{d-1}} \\times b_{k_0,...,k_{d-1},\n i_d,...,i_n}.\n\nWhen called with \"dims\" of the list form, the given dimensions will", "source": "https://pytorch.org/docs/stable/generated/torch.tensordot.html", "category": "pytorch docs"} {"text": "be contracted in place of the last d of \"a\" and the first d of b.\n The sizes in these dimensions must match, but \"tensordot()\" will\n deal with broadcasted dimensions.\nExamples:\n >>> a = torch.arange(60.).reshape(3, 4, 5)\n >>> b = torch.arange(24.).reshape(4, 3, 2)\n >>> torch.tensordot(a, b, dims=([1, 0], [0, 1]))\n tensor([[4400., 4730.],\n [4532., 4874.],\n [4664., 5018.],\n [4796., 5162.],\n [4928., 5306.]])\n\n >>> a = torch.randn(3, 4, 5, device='cuda')\n >>> b = torch.randn(4, 5, 6, device='cuda')\n >>> c = torch.tensordot(a, b, dims=2).cpu()\n tensor([[ 8.3504, -2.5436, 6.2922, 2.7556, -1.0732, 3.2741],\n [ 3.3161, 0.0704, 5.0187, -0.4079, -4.3126, 4.8744],\n [ 0.8223, 3.9445, 3.2168, -0.2400, 3.4117, 1.7780]])\n\n >>> a = torch.randn(3, 5, 4, 6)\n >>> b = torch.randn(6, 4, 5, 3)\n >>> torch.tensordot(a, b, dims=([2, 1, 3], [1, 2, 0]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.tensordot.html", "category": "pytorch docs"} {"text": "tensor([[ 7.7193, -2.4867, -10.3204],\n [ 1.5513, -14.4737, -6.5113],\n [ -0.2850, 4.2573, -3.5997]])", "source": "https://pytorch.org/docs/stable/generated/torch.tensordot.html", "category": "pytorch docs"} {"text": "torch.mvlgamma\ntorch.mvlgamma(input, p, *, out=None) -> Tensor\nAlias for \"torch.special.multigammaln()\".", "source": "https://pytorch.org/docs/stable/generated/torch.mvlgamma.html", "category": "pytorch docs"} {"text": "torch.signal.windows.nuttall\ntorch.signal.windows.nuttall(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\nComputes the minimum 4-term Blackman-Harris window according to\n Nuttall.\n w_n = 1 - 0.36358 \\cos{(z_n)} + 0.48917 \\cos{(2z_n)} - 0.13659\n \\cos{(3z_n)} + 0.01064 \\cos{(4z_n)}\n\nwhere \"z_n = 2 \u00cf\u0080 n/ M\".\nThe window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\nParameters:\n M (int) -- the length of the window. 
In other words, the\n number of points of the returned window.\nKeyword Arguments:\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.nuttall.html", "category": "pytorch docs"} {"text": "of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\nReturn type:\n Tensor\nReferences:\n - A. Nuttall, \u00e2\u0080\u009cSome windows with very good sidelobe behavior,\u00e2\u0080\u009d\n IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 29, no. 1, pp. 84-91,\n Feb 1981. https://doi.org/10.1109/TASSP.1981.1163506\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.nuttall.html", "category": "pytorch docs"} {"text": "\nHeinzel G. et al., \u00e2\u0080\u009cSpectrum and spectral density estimation by the Discrete Fourier transform (DFT),\n including a comprehensive list of window functions and some new flat-top windows\u00e2\u0080\u009d,\n February 15, 2002 https://holometer.fnal.gov/GH_FFT.pdf\n\nExamples:\n >>> # Generates a symmetric Nutall window.\n >>> torch.signal.windows.general_hamming(5, sym=True)\n tensor([3.6280e-04, 2.2698e-01, 1.0000e+00, 2.2698e-01, 3.6280e-04])\n\n >>> # Generates a periodic Nuttall window.\n >>> torch.signal.windows.general_hamming(5, sym=False)\n tensor([3.6280e-04, 1.1052e-01, 7.9826e-01, 7.9826e-01, 1.1052e-01])\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.nuttall.html", "category": "pytorch docs"} {"text": "Upsample\nclass torch.nn.Upsample(size=None, scale_factor=None, mode='nearest', align_corners=None, recompute_scale_factor=None)\nUpsamples a given multi-channel 1D (temporal), 2D (spatial) or 3D\n (volumetric) data.\nThe input data is assumed to be of the form minibatch x channels x\n [optional depth] x [optional height] x width. Hence, for spatial\n inputs, we expect a 4D Tensor and for volumetric inputs, we expect\n a 5D Tensor.\nThe algorithms available for upsampling are nearest neighbor and\n linear, bilinear, bicubic and trilinear for 3D, 4D and 5D input\n Tensor, respectively.\nOne can either give a \"scale_factor\" or the target output \"size\" to\n calculate the output size. (You cannot give both, as it is\n ambiguous)\nParameters:\n * size (int or Tuple[int] or Tuple[int,\n int] or Tuple[int, int, int],\n optional) -- output spatial sizes\n * **scale_factor** (*float** or **Tuple**[**float**] or\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html", "category": "pytorch docs"} {"text": "Tuple[float, float] or Tuple[float,\n float, float], optional*) -- multiplier for\n spatial size. 
Has to match input size if it is a tuple.\n * **mode** (*str**, **optional*) -- the upsampling algorithm:\n one of \"'nearest'\", \"'linear'\", \"'bilinear'\", \"'bicubic'\" and\n \"'trilinear'\". Default: \"'nearest'\"\n\n * **align_corners** (*bool**, **optional*) -- if \"True\", the\n corner pixels of the input and output tensors are aligned, and\n thus preserving the values at those pixels. This only has\n effect when \"mode\" is \"'linear'\", \"'bilinear'\", \"'bicubic'\",\n or \"'trilinear'\". Default: \"False\"\n\n * **recompute_scale_factor** (*bool**, **optional*) -- recompute\n the scale_factor for use in the interpolation calculation. If\n *recompute_scale_factor* is \"True\", then *scale_factor* must\n be passed in and *scale_factor* is used to compute the output\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html", "category": "pytorch docs"} {"text": "size. The computed output size will be used to infer new\n scales for the interpolation. Note that when scale_factor is\n floating-point, it may differ from the recomputed\n scale_factor due to rounding and precision issues. If\n recompute_scale_factor is \"False\", then size or\n scale_factor will be used directly for interpolation.\nShape:\n * Input: (N, C, W_{in}), (N, C, H_{in}, W_{in}) or (N, C,\n D_{in}, H_{in}, W_{in})\n * Output: (N, C, W_{out}), (N, C, H_{out}, W_{out}) or (N, C,\n D_{out}, H_{out}, W_{out}), where\n\n D_{out} = \\left\\lfloor D_{in} \\times \\text{scale\\_factor}\n \\right\\rfloor\n\n H_{out} = \\left\\lfloor H_{in} \\times \\text{scale\\_factor}\n \\right\\rfloor\n\n W_{out} = \\left\\lfloor W_{in} \\times \\text{scale\\_factor}\n \\right\\rfloor\n\nWarning:\n With \"align_corners = True\", the linearly interpolating modes\n (*linear*, *bilinear*, *bicubic*, and *trilinear*) don't\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html", "category": "pytorch docs"} {"text": "proportionally align the output and input pixels, and thus the\n output values can depend on the input size. This was the default\n behavior for these modes up to version 0.3.1. Since then, the\n default behavior is \"align_corners = False\". 
See below for\n concrete examples on how this affects the outputs.\nNote:\n If you want downsampling/general resizing, you should use\n \"interpolate()\".\n\nExamples:\n >>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)\n >>> input\n tensor([[[[1., 2.],\n [3., 4.]]]])\n\n >>> m = nn.Upsample(scale_factor=2, mode='nearest')\n >>> m(input)\n tensor([[[[1., 1., 2., 2.],\n [1., 1., 2., 2.],\n [3., 3., 4., 4.],\n [3., 3., 4., 4.]]]])\n\n >>> m = nn.Upsample(scale_factor=2, mode='bilinear') # align_corners=False\n >>> m(input)\n tensor([[[[1.0000, 1.2500, 1.7500, 2.0000],\n [1.5000, 1.7500, 2.2500, 2.5000],\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html", "category": "pytorch docs"} {"text": "[1.5000, 1.7500, 2.2500, 2.5000],\n [2.5000, 2.7500, 3.2500, 3.5000],\n [3.0000, 3.2500, 3.7500, 4.0000]]]])\n >>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)\n >>> m(input)\n tensor([[[[1.0000, 1.3333, 1.6667, 2.0000],\n [1.6667, 2.0000, 2.3333, 2.6667],\n [2.3333, 2.6667, 3.0000, 3.3333],\n [3.0000, 3.3333, 3.6667, 4.0000]]]])\n\n >>> # Try scaling the same data in a larger tensor\n >>> input_3x3 = torch.zeros(3, 3).view(1, 1, 3, 3)\n >>> input_3x3[:, :, :2, :2].copy_(input)\n tensor([[[[1., 2.],\n [3., 4.]]]])\n >>> input_3x3\n tensor([[[[1., 2., 0.],\n [3., 4., 0.],\n [0., 0., 0.]]]])\n\n >>> m = nn.Upsample(scale_factor=2, mode='bilinear') # align_corners=False\n >>> # Notice that values in top left corner are the same with the small input (except at boundary)\n >>> m(input_3x3)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html", "category": "pytorch docs"} {"text": "\n\n\nm(input_3x3)\n tensor([[[[1.0000, 1.2500, 1.7500, 1.5000, 0.5000, 0.0000],\n [1.5000, 1.7500, 2.2500, 1.8750, 0.6250, 0.0000],\n [2.5000, 2.7500, 3.2500, 2.6250, 0.8750, 0.0000],\n [2.2500, 2.4375, 2.8125, 2.2500, 0.7500, 0.0000],\n [0.7500, 0.8125, 0.9375, 0.7500, 0.2500, 0.0000],\n [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])\n\n\n\n >>> m = nn.Upsample(scale_factor=2, mode='bilinear', align_corners=True)\n >>> # Notice that values in top left corner are now changed\n >>> m(input_3x3)\n tensor([[[[1.0000, 1.4000, 1.8000, 1.6000, 0.8000, 0.0000],\n [1.8000, 2.2000, 2.6000, 2.2400, 1.1200, 0.0000],\n [2.6000, 3.0000, 3.4000, 2.8800, 1.4400, 0.0000],\n [2.4000, 2.7200, 3.0400, 2.5600, 1.2800, 0.0000],\n [1.2000, 1.3600, 1.5200, 1.2800, 0.6400, 0.0000],\n [0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Upsample.html", "category": "pytorch docs"} {"text": "Conv3d\nclass torch.ao.nn.quantized.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\nApplies a 3D convolution over a quantized input signal composed of\n several quantized input planes.\nFor details on input arguments, parameters, and implementation see\n \"Conv3d\".\nNote:\n Only *zeros* is supported for the \"padding_mode\" argument.\n\nNote:\n Only *torch.quint8* is supported for the input data type.\n\nVariables:\n * weight (Tensor) -- packed tensor derived from the\n learnable weight parameter.\n * **scale** (*Tensor*) -- scalar for the output scale\n\n * **zero_point** (*Tensor*) -- scalar for the output zero point\n\nSee \"Conv3d\" for other attributes.\nExamples:\n >>> # With square kernels and equal stride\n >>> m = nn.quantized.Conv3d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal 
stride and with padding\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv3d.html", "category": "pytorch docs"} {"text": "\n\n\nm = nn.quantized.Conv3d(16, 33, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2))\n >>> # non-square kernels and unequal stride and with padding and dilation\n >>> m = nn.quantized.Conv3d(16, 33, (3, 5, 5), stride=(1, 2, 2), padding=(1, 2, 2), dilation=(1, 2, 2))\n >>> input = torch.randn(20, 16, 56, 56, 56)\n >>> # quantize input to quint8\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> output = m(q_input)\n\n\n\nclassmethod from_float(mod)\n Creates a quantized module from a float module or qparams_dict.\n\n Parameters:\n **mod** (*Module*) -- a float module, either produced by\n torch.ao.quantization utilities or provided by the user\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv3d.html", "category": "pytorch docs"} {"text": "torch.nn.utils.rnn.pad_sequence\ntorch.nn.utils.rnn.pad_sequence(sequences, batch_first=False, padding_value=0.0)\nPad a list of variable length Tensors with \"padding_value\"\n\"pad_sequence\" stacks a list of Tensors along a new dimension, and\n pads them to equal length. For example, if the input is list of\n sequences with size \"L x \" and if batch_first is False, and \"T x B\n x \" otherwise.\nB is batch size. It is equal to the number of elements in\n \"sequences\". T is length of the longest sequence. L is length\n of the sequence. *** is any number of trailing dimensions,\n including none.\n-[ Example ]-\n\n\n\nfrom torch.nn.utils.rnn import pad_sequence\na = torch.ones(25, 300)\nb = torch.ones(22, 300)\nc = torch.ones(15, 300)\npad_sequence([a, b, c]).size()\n torch.Size([25, 3, 300])\n\n\n\nNote:\n This function returns a Tensor of size \"T x B x *\" or \"B x T x *\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pad_sequence.html", "category": "pytorch docs"} {"text": "where T is the length of the longest sequence. This function\n assumes trailing dimensions and type of all the Tensors in\n sequences are same.\nParameters:\n * sequences (list[Tensor]) -- list of variable\n length sequences.\n * **batch_first** (*bool**, **optional*) -- output will be in \"B\n x T x *\" if True, or in \"T x B x *\" otherwise. Default: False.\n\n * **padding_value** (*float**, **optional*) -- value for padded\n elements. Default: 0.\n\nReturns:\n Tensor of size \"T x B x \" if \"batch_first\" is \"False\". Tensor\n of size \"B x T x \" otherwise\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pad_sequence.html", "category": "pytorch docs"} {"text": "torch.initial_seed\ntorch.initial_seed()\nReturns the initial seed for generating random numbers as a Python\n long.\nReturn type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.initial_seed.html", "category": "pytorch docs"} {"text": "load_observer_state_dict\nclass torch.quantization.observer.load_observer_state_dict(mod, obs_dict)\nGiven input model and a state_dict containing model observer stats,\n load the stats back into the model. 
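Purely as a hedged illustration (the toy Sequential model, the "fbgemm" qconfig choice, and the calibration batch below are assumptions, not part of the original entry), the usual save/restore round trip looks roughly like this, using the save helper named in the next sentence:

    >>> import torch
    >>> from torch.ao.quantization import get_default_qconfig, prepare
    >>> float_model = torch.nn.Sequential(torch.nn.Linear(4, 4), torch.nn.ReLU()).eval()
    >>> float_model.qconfig = get_default_qconfig("fbgemm")
    >>> prepared = prepare(float_model)                   # inserts observers
    >>> _ = prepared(torch.randn(16, 4))                  # calibration populates observer stats
    >>> obs_dict = torch.ao.quantization.get_observer_state_dict(prepared)
    >>> fresh = prepare(float_model)                      # a second prepared copy of the model
    >>> torch.ao.quantization.observer.load_observer_state_dict(fresh, obs_dict)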
The observer state_dict can be\n saved using torch.ao.quantization.get_observer_state_dict", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.load_observer_state_dict.html", "category": "pytorch docs"} {"text": "torch.scatter_add\ntorch.scatter_add(input, dim, index, src) -> Tensor\nOut-of-place version of \"torch.Tensor.scatter_add_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.scatter_add.html", "category": "pytorch docs"} {"text": "torch.trapezoid\ntorch.trapezoid(y, x=None, *, dx=None, dim=- 1) -> Tensor\nComputes the trapezoidal rule along \"dim\". By default the spacing\n between elements is assumed to be 1, but \"dx\" can be used to\n specify a different constant spacing, and \"x\" can be used to\n specify arbitrary spacing along \"dim\".\nAssuming \"y\" is a one-dimensional tensor with elements {y_0, y_1,\n ..., y_n}, the default computation is\n \\begin{aligned} \\sum_{i = 1}^{n-1} \\frac{1}{2} (y_i +\n y_{i-1}) \\end{aligned}\n\nWhen \"dx\" is specified the computation becomes\n \\begin{aligned} \\sum_{i = 1}^{n-1} \\frac{\\Delta x}{2} (y_i +\n y_{i-1}) \\end{aligned}\n\neffectively multiplying the result by \"dx\". When \"x\" is specified,\n assuming \"x\" is also a one-dimensional tensor with elements {x_0,\n x_1, ..., x_n}, the computation becomes\n \\begin{aligned} \\sum_{i = 1}^{n-1} \\frac{(x_i - x_{i-1})}{2}\n (y_i + y_{i-1}) \\end{aligned}\n", "source": "https://pytorch.org/docs/stable/generated/torch.trapezoid.html", "category": "pytorch docs"} {"text": "(y_i + y_{i-1}) \\end{aligned}\nWhen \"x\" and \"y\" have the same size, the computation is as\n described above and no broadcasting is needed. The broadcasting\n behavior of this function is as follows when their sizes are\n different. For both \"x\" and \"y\", the function computes the\n difference between consecutive elements along dimension \"dim\". This\n effectively creates two tensors, x_diff and y_diff, that have\n the same shape as the original tensors except their lengths along\n the dimension \"dim\" is reduced by 1. After that, those two tensors\n are broadcast together to compute final output as part of the\n trapezoidal rule. See the examples below for details.\nNote:\n The trapezoidal rule is a technique for approximating the\n definite integral of a function by averaging its left and right\n Riemann sums. The approximation becomes more accurate as the\n resolution of the partition increases.\n\nParameters:\n * y (Tensor) -- Values to use when computing the", "source": "https://pytorch.org/docs/stable/generated/torch.trapezoid.html", "category": "pytorch docs"} {"text": "trapezoidal rule.\n * **x** (*Tensor*) -- If specified, defines spacing between\n values as specified above.\n\nKeyword Arguments:\n * dx (float) -- constant spacing between values. If\n neither \"x\" or \"dx\" are specified then this defaults to 1.\n Effectively multiplies the result by its value.\n * **dim** (*int*) -- The dimension along which to compute the\n trapezoidal rule. 
The last (inner-most) dimension by default.\n\nExamples:\n >>> # Computes the trapezoidal rule in 1D, spacing is implicitly 1\n >>> y = torch.tensor([1, 5, 10])\n >>> torch.trapezoid(y)\n tensor(10.5)\n\n >>> # Computes the same trapezoidal rule directly to verify\n >>> (1 + 10 + 10) / 2\n 10.5\n\n >>> # Computes the trapezoidal rule in 1D with constant spacing of 2\n >>> # NOTE: the result is the same as before, but multiplied by 2\n >>> torch.trapezoid(y, dx=2)\n 21.0\n\n >>> # Computes the trapezoidal rule in 1D with arbitrary spacing\n", "source": "https://pytorch.org/docs/stable/generated/torch.trapezoid.html", "category": "pytorch docs"} {"text": "\n\n\nx = torch.tensor([1, 3, 6])\n >>> torch.trapezoid(y, x)\n 28.5\n\n\n\n >>> # Computes the same trapezoidal rule directly to verify\n >>> ((3 - 1) * (1 + 5) + (6 - 3) * (5 + 10)) / 2\n 28.5\n\n >>> # Computes the trapezoidal rule for each row of a 3x3 matrix\n >>> y = torch.arange(9).reshape(3, 3)\n tensor([[0, 1, 2],\n [3, 4, 5],\n [6, 7, 8]])\n >>> torch.trapezoid(y)\n tensor([ 2., 8., 14.])\n\n >>> # Computes the trapezoidal rule for each column of the matrix\n >>> torch.trapezoid(y, dim=0)\n tensor([ 6., 8., 10.])\n\n >>> # Computes the trapezoidal rule for each row of a 3x3 ones matrix\n >>> # with the same arbitrary spacing\n >>> y = torch.ones(3, 3)\n >>> x = torch.tensor([1, 3, 6])\n >>> torch.trapezoid(y, x)\n array([5., 5., 5.])\n\n >>> # Computes the trapezoidal rule for each row of a 3x3 ones matrix\n >>> # with different arbitrary spacing per row\n >>> y = torch.ones(3, 3)\n", "source": "https://pytorch.org/docs/stable/generated/torch.trapezoid.html", "category": "pytorch docs"} {"text": "\n\n\ny = torch.ones(3, 3)\n >>> x = torch.tensor([[1, 2, 3], [1, 3, 5], [1, 4, 7]])\n >>> torch.trapezoid(y, x)\n array([2., 4., 6.])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.trapezoid.html", "category": "pytorch docs"} {"text": "RAdam\nclass torch.optim.RAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, *, foreach=None, differentiable=False)\nImplements RAdam algorithm.\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{input} : \\gamma \\text{ (lr)}, \\: \\beta_1,\n \\beta_2 \\text{ (betas)}, \\: \\theta_0 \\text{ (params)},\n \\:f(\\theta) \\text{ (objective)}, \\: \\lambda \\text{\n (weightdecay)},\n \\\\ &\\hspace{13mm} \\epsilon \\text{ (epsilon)}\n \\\\ &\\textbf{initialize} : m_0 \\leftarrow 0 \\text{ ( first\n moment)}, v_0 \\leftarrow 0 \\text{ ( second moment)},\n \\\\ &\\hspace{18mm} \\rho_{\\infty} \\leftarrow 2/(1-\\beta_2)\n -1 \\\\[-1.ex] &\\rule{110mm}{0.4pt} \\\\\n &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\: \\textbf{do}\n \\\\ &\\hspace{6mm}g_t \\leftarrow \\nabla_{\\theta}\n f_t (\\theta_{t-1}) \\\\ &\\hspace{5mm} \\textbf{if}\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"} {"text": "\\: \\lambda \\neq 0 \\\n &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda \\theta_{t-1}\n \\ &\\hspace{6mm}m_t \\leftarrow \\beta_1 m_{t-1}\n + (1 - \\beta_1) g_t \\ &\\hspace{6mm}v_t\n \\leftarrow \\beta_2 v_{t-1} + (1-\\beta_2) g^2_t \\\n &\\hspace{6mm}\\widehat{m_t} \\leftarrow m_t/\\big(1-\\beta_1^t\n \\big) \\ &\\hspace{6mm}\\rho_t \\leftarrow\n \\rho_{\\infty} - 2 t \\beta^t_2 /\\big(1-\\beta_2^t \\big)\n \\[0.1.ex] &\\hspace{6mm}\\textbf{if} \\: \\rho_t > 5\n \\ &\\hspace{12mm} l_t \\leftarrow \\frac{\\sqrt{\n (1-\\beta^t_2) }}{ \\sqrt{v_t} +\\epsilon } \\\n &\\hspace{12mm} r_t \\leftarrow 
\\sqrt{\\frac{(\\rho_t-4)(\\rho_t-2)\\\n rho_{\\infty}}{(\\rho_{\\infty}-4)(\\rho_{\\infty}-2) \\rho_t}} \\\n &\\hspace{12mm}\\theta_t \\leftarrow \\theta_{t-1} - \\gamma\n \\widehat{m_t} r_t l_t \\ &\\hspace{6mm}\\textbf{else}", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"} {"text": "\\ &\\hspace{12mm}\\theta_t \\leftarrow \\theta_{t-1} - \\gamma\n \\widehat{m_t} \\ &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\nFor further details regarding the algorithm we refer to On the\n variance of the adaptive learning rate and beyond.\nThis implementation uses the same weight_decay implementation as\n Adam (were the weight_decay is applied to the gradient) and not the\n one from AdamW (were weight_decay is applied to the update). This\n is different from the author's implementation.\nParameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * **lr** (*float**, **optional*) -- learning rate (default:\n 1e-3)\n\n * **betas** (*Tuple**[**float**, **float**]**, **optional*) --\n coefficients used for computing running averages of gradient\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"} {"text": "and its square (default: (0.9, 0.999))\n * **eps** (*float**, **optional*) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n\n * **weight_decay** (*float**, **optional*) -- weight decay (L2\n penalty) (default: 0)\n\n * **foreach** (*bool**, **optional*) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n\n * **differentiable** (*bool**, **optional*) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n\nadd_param_group(param_group)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"} {"text": "add_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n **state_dict** (*dict*) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n\nregister_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. 
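As a brief illustration (a minimal sketch with assumed toy shapes, not from the original entry; the exact hook signature is stated immediately below), registering a post-step hook on an RAdam instance:

    >>> import torch
    >>> model = torch.nn.Linear(4, 2)
    >>> opt = torch.optim.RAdam(model.parameters(), lr=1e-3)
    >>> def post_step(optimizer, args, kwargs):
    ...     print("step finished; lr =", optimizer.param_groups[0]["lr"])
    >>> handle = opt.register_step_post_hook(post_step)
    >>> model(torch.randn(8, 4)).sum().backward()
    >>> opt.step()
    step finished; lr = 0.001
    >>> handle.remove()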
It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None\n\n The \"optimizer\" argument is the optimizer instance being used.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"} {"text": "registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"} {"text": "state_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n **set_to_none** (*bool*) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"} {"text": "\".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RAdam.html", "category": "pytorch docs"} {"text": "PixelUnshuffle\nclass torch.nn.PixelUnshuffle(downscale_factor)\nReverses the \"PixelShuffle\" operation by rearranging elements in a\n tensor of shape (, C, H \\times r, W \\times r) to a tensor of shape\n (, C \\times r^2, H, W), where r is a downscale factor.\nSee the paper: Real-Time Single Image and Video Super-Resolution\n Using an Efficient Sub-Pixel Convolutional Neural Network by Shi\n et. 
al (2016) for more details.\nParameters:\n downscale_factor (int) -- factor to decrease spatial\n resolution by\nShape:\n * Input: (*, C_{in}, H_{in}, W_{in}), where * is zero or more\n batch dimensions\n * Output: (*, C_{out}, H_{out}, W_{out}), where\n\n C_{out} = C_{in} \\times \\text{downscale\\_factor}^2\n\n H_{out} = H_{in} \\div \\text{downscale\\_factor}\n\n W_{out} = W_{in} \\div \\text{downscale\\_factor}\n\nExamples:\n >>> pixel_unshuffle = nn.PixelUnshuffle(3)\n >>> input = torch.randn(1, 1, 12, 12)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PixelUnshuffle.html", "category": "pytorch docs"} {"text": "\n\n\ninput = torch.randn(1, 1, 12, 12)\n >>> output = pixel_unshuffle(input)\n >>> print(output.size())\n torch.Size([1, 9, 4, 4])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PixelUnshuffle.html", "category": "pytorch docs"} {"text": "torch.foreach_exp\ntorch.foreach_exp(self: List[Tensor]) -> None\nApply \"torch.exp()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_exp_.html", "category": "pytorch docs"} {"text": "torch.nn.functional.log_softmax\ntorch.nn.functional.log_softmax(input, dim=None, _stacklevel=3, dtype=None)\nApplies a softmax followed by a logarithm.\nWhile mathematically equivalent to log(softmax(x)), doing these two\n operations separately is slower and numerically unstable. This\n function uses an alternative formulation to compute the output and\n gradient correctly.\nSee \"LogSoftmax\" for more details.\nParameters:\n * input (Tensor) -- input\n * **dim** (*int*) -- A dimension along which log_softmax will be\n computed.\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is cast to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.log_softmax.html", "category": "pytorch docs"} {"text": "AvgPool3d\nclass torch.nn.AvgPool3d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)\nApplies a 3D average pooling over an input signal composed of\n several input planes.\nIn the simplest case, the output value of the layer with input size\n (N, C, D, H, W), output (N, C, D_{out}, H_{out}, W_{out}) and\n \"kernel_size\" (kD, kH, kW) can be precisely described as:\n \\begin{aligned} \\text{out}(N_i, C_j, d, h, w) ={} &\n \\sum_{k=0}^{kD-1} \\sum_{m=0}^{kH-1} \\sum_{n=0}^{kW-1} \\\\\n & \\frac{\\text{input}(N_i, C_j, \\text{stride}[0] \\times d + k,\n \\text{stride}[1] \\times h + m, \\text{stride}[2] \\times w + n)}\n {kD \\times kH \\times kW} \\end{aligned}\n\nIf \"padding\" is non-zero, then the input is implicitly zero-padded\n on all three sides for \"padding\" number of points.\nNote:\n When ceil_mode=True, sliding windows are allowed to go off-bounds\n if they start within the left padding or the input. 
Sliding\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool3d.html", "category": "pytorch docs"} {"text": "windows that would start in the right padded region are ignored.\nThe parameters \"kernel_size\", \"stride\" can either be:\n * a single \"int\" -- in which case the same value is used for the\n depth, height and width dimension\n\n * a \"tuple\" of three ints -- in which case, the first *int* is\n used for the depth dimension, the second *int* for the height\n dimension and the third *int* for the width dimension\n\nParameters:\n * kernel_size (Union[int, Tuple[int, int,\n int]]) -- the size of the window\n * **stride** (*Union**[**int**, **Tuple**[**int**, **int**,\n **int**]**]*) -- the stride of the window. Default value is\n \"kernel_size\"\n\n * **padding** (*Union**[**int**, **Tuple**[**int**, **int**,\n **int**]**]*) -- implicit zero padding to be added on all\n three sides\n\n * **ceil_mode** (*bool*) -- when True, will use *ceil* instead\n of *floor* to compute the output shape\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool3d.html", "category": "pytorch docs"} {"text": "of floor to compute the output shape\n * **count_include_pad** (*bool*) -- when True, will include the\n zero-padding in the averaging calculation\n\n * **divisor_override** (*Optional**[**int**]*) -- if specified,\n it will be used as divisor, otherwise \"kernel_size\" will be\n used\n\nShape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where\n\n D_{out} = \\left\\lfloor\\frac{D_{in} + 2 \\times\n \\text{padding}[0] -\n \\text{kernel\\_size}[0]}{\\text{stride}[0]} + 1\\right\\rfloor\n\n H_{out} = \\left\\lfloor\\frac{H_{in} + 2 \\times\n \\text{padding}[1] -\n \\text{kernel\\_size}[1]}{\\text{stride}[1]} + 1\\right\\rfloor\n\n W_{out} = \\left\\lfloor\\frac{W_{in} + 2 \\times\n \\text{padding}[2] -\n \\text{kernel\\_size}[2]}{\\text{stride}[2]} + 1\\right\\rfloor\n\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool3d.html", "category": "pytorch docs"} {"text": "Examples:\n >>> # pool of square window of size=3, stride=2\n >>> m = nn.AvgPool3d(3, stride=2)\n >>> # pool of non-square window\n >>> m = nn.AvgPool3d((3, 2, 2), stride=(2, 1, 2))\n >>> input = torch.randn(20, 16, 50, 44, 31)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.masked_fill\nTensor.masked_fill(mask, value) -> Tensor\nOut-of-place version of \"torch.Tensor.masked_fill_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.masked_fill.html", "category": "pytorch docs"} {"text": "torch.Tensor.sparse_resize_and_clear_\nTensor.sparse_resize_and_clear_(size, sparse_dim, dense_dim) -> Tensor\nRemoves all specified elements from a sparse tensor \"self\" and\n resizes \"self\" to the desired size and the number of sparse and\n dense dimensions.\nParameters:\n * size (torch.Size) -- the desired size.\n * **sparse_dim** (*int*) -- the number of sparse dimensions\n\n * **dense_dim** (*int*) -- the number of dense dimensions\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_resize_and_clear_.html", "category": "pytorch docs"} {"text": "torch.nn.utils.stateless.functional_call\ntorch.nn.utils.stateless.functional_call(module, parameters_and_buffers, args, kwargs=None, *, tie_weights=True)\nPerforms a 
functional call on the module by replacing the module\n parameters and buffers with the provided ones.\nWarning:\n This API is deprecated as of PyTorch 2.0 and will be removed in a\n future version of PyTorch. Please use\n \"torch.func.functional_call()\" instead, which is a drop-in\n replacement for this API.\n\nNote:\n If the module has active parametrizations, passing a value in the\n \"parameters_and_buffers\" argument with the name set to the\n regular parameter name will completely disable the\n parametrization. If you want to apply the parametrization\n function to the value passed please set the key as\n \"{submodule_name}.parametrizations.{parameter_name}.original\".\n\nNote:\n If the module performs in-place operations on parameters/buffers,\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.stateless.functional_call.html", "category": "pytorch docs"} {"text": "these will be reflected in the parameters_and_buffers\n input.Example:\n >>> a = {'foo': torch.zeros(())}\n >>> mod = Foo() # does self.foo = self.foo + 1\n >>> print(mod.foo) # tensor(0.)\n >>> functional_call(mod, a, torch.ones(()))\n >>> print(mod.foo) # tensor(0.)\n >>> print(a['foo']) # tensor(1.)\n\nNote:\n If the module has tied weights, whether or not functional_call\n respects the tying is determined by the tie_weights flag.Example:\n\n >>> a = {'foo': torch.zeros(())}\n >>> mod = Foo() # has both self.foo and self.foo_tied which are tied. Returns x + self.foo + self.foo_tied\n >>> print(mod.foo) # tensor(1.)\n >>> mod(torch.zeros(())) # tensor(2.)\n >>> functional_call(mod, a, torch.zeros(())) # tensor(0.) since it will change self.foo_tied too\n >>> functional_call(mod, a, torch.zeros(()), tie_weights=False) # tensor(1.)--self.foo_tied is not updated\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.stateless.functional_call.html", "category": "pytorch docs"} {"text": "\n\n\nnew_a = {'foo', torch.zeros(()), 'foo_tied': torch.zeros(())}\n >>> functional_call(mod, new_a, torch.zeros()) # tensor(0.)\n\n\n\nParameters:\n * module (torch.nn.Module) -- the module to call\n * **parameters_and_buffers** (*dict of str and Tensor*) -- the\n parameters that will be used in the module call.\n\n * **args** (*Any** or **tuple*) -- arguments to be passed to the\n module call. If not a tuple, considered a single argument.\n\n * **kwargs** (*dict*) -- keyword arguments to be passed to the\n module call\n\n * **tie_weights** (*bool**, **optional*) -- If True, then\n parameters and buffers tied in the original model will be\n treated as tied in the reparamaterized version. Therefore, if\n True and different values are passed for the tied paramaters\n and buffers, it will error. If False, it will not respect the\n originally tied parameters and buffers unless the values\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.stateless.functional_call.html", "category": "pytorch docs"} {"text": "passed for both weights are the same. Default: True.\nReturns:\n the result of calling \"module\".\nReturn type:\n Any", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.stateless.functional_call.html", "category": "pytorch docs"} {"text": "torch.nn.utils.prune.custom_from_mask\ntorch.nn.utils.prune.custom_from_mask(module, name, mask)\nPrunes tensor corresponding to parameter called \"name\" in \"module\"\n by applying the pre-computed mask in \"mask\". 
Modifies module in\n place (and also return the modified module) by:\n\n\nadding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n\n\nreplacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n\n\nParameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\n * **mask** (*Tensor*) -- binary mask to be applied to the\n parameter.\n\nReturns:\n modified (i.e. pruned) version of the input module\nReturn type:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.custom_from_mask.html", "category": "pytorch docs"} {"text": "Return type:\n module (nn.Module)\n-[ Examples ]-\n\n\n\nfrom torch.nn.utils import prune\nm = prune.custom_from_mask(\n ... nn.Linear(5, 3), name='bias', mask=torch.tensor([0, 1, 0])\n ... )\nprint(m.bias_mask)\n tensor([0., 1., 0.])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.custom_from_mask.html", "category": "pytorch docs"} {"text": "torch.Tensor.xlogy\nTensor.xlogy(other) -> Tensor\nSee \"torch.xlogy()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.xlogy.html", "category": "pytorch docs"} {"text": "torch.Tensor.softmax\nTensor.softmax(dim) -> Tensor\nAlias for \"torch.nn.functional.softmax()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.softmax.html", "category": "pytorch docs"} {"text": "torch.autograd.functional.jvp\ntorch.autograd.functional.jvp(func, inputs, v=None, create_graph=False, strict=False)\nFunction that computes the dot product between the Jacobian of the\n given function at the point given by the inputs and a vector \"v\".\nParameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a tuple of Tensors or a Tensor.\n * **inputs** (*tuple of Tensors** or **Tensor*) -- inputs to the\n function \"func\".\n\n * **v** (*tuple of Tensors** or **Tensor*) -- The vector for\n which the Jacobian vector product is computed. Must be the\n same size as the input of \"func\". This argument is optional\n when the input to \"func\" contains a single element and (if it\n is not provided) will be set as a Tensor containing a single\n \"1\".\n\n * **create_graph** (*bool**, **optional*) -- If \"True\", both the\n output and result will be computed in a differentiable way.\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jvp.html", "category": "pytorch docs"} {"text": "Note that when \"strict\" is \"False\", the result can not require\n gradients or be disconnected from the inputs. Defaults to\n \"False\".\n * **strict** (*bool**, **optional*) -- If \"True\", an error will\n be raised when we detect that there exists an input such that\n all the outputs are independent of it. If \"False\", we return a\n Tensor of zeros as the jvp for said inputs, which is the\n expected mathematical value. 
Defaults to \"False\".\n\nReturns:\n tuple with:\n func_output (tuple of Tensors or Tensor): output of\n \"func(inputs)\"\n jvp (tuple of Tensors or Tensor): result of the dot product\n with the same shape as the output.\n\nReturn type:\n output (tuple)\nNote:\n \"autograd.functional.jvp\" computes the jvp by using the backward\n of the backward (sometimes called the double backwards trick).\n This is not the most performant way of computing the jvp. Please\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jvp.html", "category": "pytorch docs"} {"text": "consider using \"torch.func.jvp()\" or the low-level forward-mode\n AD API instead.\n-[ Example ]-\n\n\n\ndef exp_reducer(x):\n ... return x.exp().sum(dim=1)\ninputs = torch.rand(4, 4)\nv = torch.ones(4, 4)\njvp(exp_reducer, inputs, v)\n (tensor([6.3090, 4.6742, 7.9114, 8.2106]),\n tensor([6.3090, 4.6742, 7.9114, 8.2106]))\njvp(exp_reducer, inputs, v, create_graph=True)\n (tensor([6.3090, 4.6742, 7.9114, 8.2106], grad_fn=),\n tensor([6.3090, 4.6742, 7.9114, 8.2106], grad_fn=))\ndef adder(x, y):\n ... return 2 * x + 3 * y\ninputs = (torch.rand(2), torch.rand(2))\nv = (torch.ones(2), torch.ones(2))\njvp(adder, inputs, v)\n (tensor([2.2399, 2.5005]),\n tensor([5., 5.]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jvp.html", "category": "pytorch docs"} {"text": "torch.func.grad_and_value\ntorch.func.grad_and_value(func, argnums=0, has_aux=False)\nReturns a function to compute a tuple of the gradient and primal,\n or forward, computation.\nParameters:\n * func (Callable) -- A Python function that takes one or\n more arguments. Must return a single-element Tensor. If\n specified \"has_aux\" equals \"True\", function can return a tuple\n of single-element Tensor and other auxiliary objects:\n \"(output, aux)\".\n * **argnums** (*int** or **Tuple**[**int**]*) -- Specifies\n arguments to compute gradients with respect to. \"argnums\" can\n be single integer or tuple of integers. Default: 0.\n\n * **has_aux** (*bool*) -- Flag indicating that \"func\" returns a\n tensor and other auxiliary objects: \"(output, aux)\". Default:\n False.\n\nReturns:\n Function to compute a tuple of gradients with respect to its\n inputs and the forward computation. By default, the output of", "source": "https://pytorch.org/docs/stable/generated/torch.func.grad_and_value.html", "category": "pytorch docs"} {"text": "the function is a tuple of the gradient tensor(s) with respect\n to the first argument and the primal computation. If specified\n \"has_aux\" equals \"True\", tuple of gradients and tuple of the\n forward computation with output auxiliary objects is returned.\n If \"argnums\" is a tuple of integers, a tuple of a tuple of the\n output gradients with respect to each \"argnums\" value and the\n forward computation is returned.\nReturn type:\n Callable\nSee \"grad()\" for examples", "source": "https://pytorch.org/docs/stable/generated/torch.func.grad_and_value.html", "category": "pytorch docs"} {"text": "torch.Tensor.size\nTensor.size(dim=None) -> torch.Size or int\nReturns the size of the \"self\" tensor. If \"dim\" is not specified,\n the returned value is a \"torch.Size\", a subclass of \"tuple\". 
If\n \"dim\" is specified, returns an int holding the size of that\n dimension.\nParameters:\n dim (int, optional) -- The dimension for which to\n retrieve the size.\nExample:\n >>> t = torch.empty(3, 4, 5)\n >>> t.size()\n torch.Size([3, 4, 5])\n >>> t.size(dim=1)\n 4\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.size.html", "category": "pytorch docs"} {"text": "torch.Tensor.bitwise_xor\nTensor.bitwise_xor() -> Tensor\nSee \"torch.bitwise_xor()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_xor.html", "category": "pytorch docs"} {"text": "torch.func.hessian\ntorch.func.hessian(func, argnums=0)\nComputes the Hessian of \"func\" with respect to the arg(s) at index\n \"argnum\" via a forward-over-reverse strategy.\nThe forward-over-reverse strategy (composing\n \"jacfwd(jacrev(func))\") is a good default for good performance. It\n is possible to compute Hessians through other compositions of\n \"jacfwd()\" and \"jacrev()\" like \"jacfwd(jacfwd(func))\" or\n \"jacrev(jacrev(func))\".\nParameters:\n * func (function) -- A Python function that takes one or\n more arguments, one of which must be a Tensor, and returns one\n or more Tensors\n * **argnums** (*int** or **Tuple**[**int**]*) -- Optional,\n integer or tuple of integers, saying which arguments to get\n the Hessian with respect to. Default: 0.\n\nReturns:\n Returns a function that takes in the same inputs as \"func\" and\n returns the Hessian of \"func\" with respect to the arg(s) at\n \"argnums\".\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.func.hessian.html", "category": "pytorch docs"} {"text": "\"argnums\".\nNote:\n You may see this API error out with \"forward-mode AD not\n implemented for operator X\". If so, please file a bug report and\n we will prioritize it. An alternative is to use\n \"jacrev(jacrev(func))\", which has better operator coverage.\n\nA basic usage with a R^N -> R^1 function gives a N x N Hessian:\n\n\n\nfrom torch.func import hessian\ndef f(x):\n return x.sin().sum()\nx = torch.randn(5)\nhess = hessian(f)(x) # equivalent to jacfwd(jacrev(f))(x)\nassert torch.allclose(hess, torch.diag(-x.sin()))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.hessian.html", "category": "pytorch docs"} {"text": "torch.slogdet\ntorch.slogdet(input)\nAlias for \"torch.linalg.slogdet()\"", "source": "https://pytorch.org/docs/stable/generated/torch.slogdet.html", "category": "pytorch docs"} {"text": "torch.broadcast_tensors\ntorch.broadcast_tensors(*tensors) -> List of Tensors\nBroadcasts the given tensors according to Broadcasting semantics.\nParameters:\n *tensors -- any number of tensors of the same type\nWarning:\n More than one element of a broadcasted tensor may refer to a\n single memory location. As a result, in-place operations\n (especially ones that are vectorized) may result in incorrect\n behavior. 
If you need to write to the tensors, please clone them\n first.\n\nExample:\n >>> x = torch.arange(3).view(1, 3)\n >>> y = torch.arange(2).view(2, 1)\n >>> a, b = torch.broadcast_tensors(x, y)\n >>> a.size()\n torch.Size([2, 3])\n >>> a\n tensor([[0, 1, 2],\n [0, 1, 2]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.broadcast_tensors.html", "category": "pytorch docs"} {"text": "torch.autograd.profiler.profile.total_average\nprofile.total_average()\nAverages all events.\nReturns:\n A FunctionEventAvg object.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.profiler.profile.total_average.html", "category": "pytorch docs"} {"text": "torch.greater_equal\ntorch.greater_equal(input, other, *, out=None) -> Tensor\nAlias for \"torch.ge()\".", "source": "https://pytorch.org/docs/stable/generated/torch.greater_equal.html", "category": "pytorch docs"} {"text": "torch.Tensor.qr\nTensor.qr(some=True)\nSee \"torch.qr()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.qr.html", "category": "pytorch docs"} {"text": "torch.Tensor.mv\nTensor.mv(vec) -> Tensor\nSee \"torch.mv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mv.html", "category": "pytorch docs"} {"text": "ObservationType\nclass torch.ao.quantization.backend_config.ObservationType(value)\nAn enum that represents different ways of how an operator/operator\n pattern should be observed\nOUTPUT_SHARE_OBSERVER_WITH_INPUT = 1\n this means the output will use the same observer instance as\n input, based on qconfig.activation example: torch.cat, maxpool\n\nOUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT = 0\n this means input and output are observed with different\n observers, based on qconfig.activation example: conv, linear,\n softmax\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.ObservationType.html", "category": "pytorch docs"} {"text": "torch.jit.script\ntorch.jit.script(obj, optimize=None, _frames_up=0, _rcb=None, example_inputs=None)\nScripting a function or \"nn.Module\" will inspect the source code,\n compile it as TorchScript code using the TorchScript compiler, and\n return a \"ScriptModule\" or \"ScriptFunction\". TorchScript itself is\n a subset of the Python language, so not all features in Python\n work, but we provide enough functionality to compute on tensors and\n do control-dependent operations. For a complete guide, see the\n TorchScript Language Reference.\nScripting a dictionary or list copies the data inside it into a\n TorchScript instance than can be subsequently passed by reference\n between Python and TorchScript with zero copy overhead.\n\"torch.jit.script\" can be used as a function for modules,\n functions, dictionaries and lists\n and as a decorator \"@torch.jit.script\" for TorchScript Classes\n and functions.\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"} {"text": "and functions.\nParameters:\n * obj (Callable, class, or nn.Module) -- The\n \"nn.Module\", function, class type, dictionary, or list to\n compile.\n * **example_inputs** (*Union**[**List**[**Tuple**]**,\n **Dict**[**Callable**, **List**[**Tuple**]**]**, **None**]*)\n -- Provide example inputs to annotate the arguments for a\n function or \"nn.Module\".\n\nReturns:\n If \"obj\" is \"nn.Module\", \"script\" returns a \"ScriptModule\"\n object. The returned \"ScriptModule\" will have the same set of\n sub-modules and parameters as the original \"nn.Module\". 
If \"obj\"\n is a standalone function, a \"ScriptFunction\" will be returned.\n If \"obj\" is a \"dict\", then \"script\" returns an instance of\n torch._C.ScriptDict. If \"obj\" is a \"list\", then \"script\"\n returns an instance of torch._C.ScriptList.\nScripting a function\n The \"@torch.jit.script\" decorator will construct a", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"} {"text": "\"ScriptFunction\" by compiling the body of the function.\n Example (scripting a function):\n\n import torch\n\n @torch.jit.script\n def foo(x, y):\n if x.max() > y.max():\n r = x\n else:\n r = y\n return r\n\n print(type(foo)) # torch.jit.ScriptFunction\n\n # See the compiled graph as Python code\n print(foo.code)\n\n # Call the function using the TorchScript interpreter\n foo(torch.ones(2, 2), torch.ones(2, 2))\n\n**Scripting a function using example_inputs\n Example inputs can be used to annotate a function arguments.\n Example (annotating a function before scripting):\n\n import torch\n\n def test_sum(a, b):\n return a + b\n\n # Annotate the arguments to be int\n scripted_fn = torch.jit.script(test_sum, example_inputs=[(3, 4)])\n\n print(type(scripted_fn)) # torch.jit.ScriptFunction\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"} {"text": "See the compiled graph as Python code\n print(scripted_fn.code)\n\n # Call the function using the TorchScript interpreter\n scripted_fn(20, 100)\n\nScripting an nn.Module\n Scripting an \"nn.Module\" by default will compile the \"forward\"\n method and recursively compile any methods, submodules, and\n functions called by \"forward\". If a \"nn.Module\" only uses\n features supported in TorchScript, no changes to the original\n module code should be necessary. 
\"script\" will construct\n \"ScriptModule\" that has copies of the attributes, parameters,\n and methods of the original module.\n Example (scripting a simple module with a Parameter):\n\n import torch\n\n class MyModule(torch.nn.Module):\n def __init__(self, N, M):\n super(MyModule, self).__init__()\n # This parameter will be copied to the new ScriptModule\n self.weight = torch.nn.Parameter(torch.rand(N, M))\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"} {"text": "When this submodule is used, it will be compiled\n self.linear = torch.nn.Linear(N, M)\n\n def forward(self, input):\n output = self.weight.mv(input)\n\n # This calls the `forward` method of the `nn.Linear` module, which will\n # cause the `self.linear` submodule to be compiled to a `ScriptModule` here\n output = self.linear(output)\n return output\n\n scripted_module = torch.jit.script(MyModule(2, 3))\n\n Example (scripting a module with traced submodules):\n\n import torch\n import torch.nn as nn\n import torch.nn.functional as F\n\n class MyModule(nn.Module):\n def __init__(self):\n super(MyModule, self).__init__()\n # torch.jit.trace produces a ScriptModule's conv1 and conv2\n self.conv1 = torch.jit.trace(nn.Conv2d(1, 20, 5), torch.rand(1, 1, 16, 16))\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"} {"text": "self.conv2 = torch.jit.trace(nn.Conv2d(20, 20, 5), torch.rand(1, 20, 16, 16))\n def forward(self, input):\n input = F.relu(self.conv1(input))\n input = F.relu(self.conv2(input))\n return input\n\n scripted_module = torch.jit.script(MyModule())\n\n To compile a method other than \"forward\" (and recursively\n compile anything it calls), add the \"@torch.jit.export\"\n decorator to the method. 
To opt out of compilation use\n \"@torch.jit.ignore\" or \"@torch.jit.unused\".\n\n Example (an exported and ignored method in a module):\n\n import torch\n import torch.nn as nn\n\n class MyModule(nn.Module):\n def __init__(self):\n super(MyModule, self).__init__()\n\n @torch.jit.export\n def some_entry_point(self, input):\n return input + 10\n\n @torch.jit.ignore\n def python_only_fn(self, input):\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"} {"text": "def python_only_fn(self, input):\n # This function won't be compiled, so any\n # Python APIs can be used\n import pdb\n pdb.set_trace()\n def forward(self, input):\n if self.training:\n self.python_only_fn(input)\n return input * 99\n\n scripted_module = torch.jit.script(MyModule())\n print(scripted_module.some_entry_point(torch.randn(2, 2)))\n print(scripted_module(torch.randn(2, 2)))\n\n Example ( Annotating forward of nn.Module using example_inputs):\n\n import torch\n import torch.nn as nn\n from typing import NamedTuple\n\n class MyModule(NamedTuple):\n result: List[int]\n\n class TestNNModule(torch.nn.Module):\n def forward(self, a) -> MyModule:\n result = MyModule(result=a)\n return result\n\n pdt_model = TestNNModule()\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"} {"text": "pdt_model = TestNNModule()\n # Runs the pdt_model in eager model with the inputs provided and annotates the arguments of forward\n scripted_model = torch.jit.script(pdt_model, example_inputs={pdt_model: [([10, 20, ], ), ], })\n\n # Run the scripted_model with actual inputs\n print(scripted_model([20]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script.html", "category": "pytorch docs"} {"text": "POE0001:node-missing-onnx-shape-inference\nNode is missing ONNX shape inference. 
This usually happens when the\nnode is not valid under standard ONNX operator spec.", "source": "https://pytorch.org/docs/stable/generated/onnx_diagnostics_rules/POE0001:node-missing-onnx-shape-inference.html", "category": "pytorch docs"} {"text": "POE0004:operator-supported-in-newer-opset-version\nOperator is supported in newer opset version.\nExample:\ntorch.onnx.export(model, args, ..., opset_version=9)", "source": "https://pytorch.org/docs/stable/generated/onnx_diagnostics_rules/POE0004:operator-supported-in-newer-opset-version.html", "category": "pytorch docs"} {"text": "POE0003:missing-standard-symbolic-function\nMissing symbolic function for standard PyTorch operator, cannot\ntranslate node to ONNX.", "source": "https://pytorch.org/docs/stable/generated/onnx_diagnostics_rules/POE0003:missing-standard-symbolic-function.html", "category": "pytorch docs"} {"text": "POE0002:missing-custom-symbolic-function\nMissing symbolic function for custom PyTorch operator, cannot\ntranslate node to ONNX.", "source": "https://pytorch.org/docs/stable/generated/onnx_diagnostics_rules/POE0002:missing-custom-symbolic-function.html", "category": "pytorch docs"} {"text": "ConvReLU3d\nclass torch.ao.nn.intrinsic.qat.ConvReLU3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None)\nA ConvReLU3d module is a fused module of Conv3d and ReLU, attached\n with FakeQuantize modules for weight for quantization aware\n training.\nWe combined the interface of \"Conv3d\" and \"BatchNorm3d\".\nVariables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvReLU3d.html", "category": "pytorch docs"} {"text": "torch.nn.functional.softmin\ntorch.nn.functional.softmin(input, dim=None, _stacklevel=3, dtype=None)\nApplies a softmin function.\nNote that \\text{Softmin}(x) = \\text{Softmax}(-x). See softmax\n definition for mathematical formula.\nSee \"Softmin\" for more details.\nParameters:\n * input (Tensor) -- input\n * **dim** (*int*) -- A dimension along which softmin will be\n computed (so every slice along dim will sum to 1).\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. 
Default: None.\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.softmin.html", "category": "pytorch docs"} {"text": "prepare\nclass torch.quantization.prepare(model, inplace=False, allow_list=None, observer_non_leaf_module_list=None, prepare_custom_config_dict=None)\nPrepares a copy of the model for quantization calibration or\n quantization-aware training.\nQuantization configuration should be assigned preemptively to\n individual submodules in .qconfig attribute.\nThe model will be attached with observer or fake quant modules, and\n qconfig will be propagated.\nParameters:\n * model -- input model to be modified in-place\n * **inplace** -- carry out model transformations in-place, the\n original module is mutated\n\n * **allow_list** -- list of quantizable modules\n\n * **observer_non_leaf_module_list** -- list of non-leaf modules\n we want to add observer\n\n * **prepare_custom_config_dict** -- customization configuration\n dictionary for prepare function\n\n # Example of prepare_custom_config_dict:\n prepare_custom_config_dict = {\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.prepare.html", "category": "pytorch docs"} {"text": "prepare_custom_config_dict = {\n # user will manually define the corresponding observed\n # module class which has a from_float class method that converts\n # float custom module to observed custom module\n \"float_to_observed_custom_module_class\": {\n CustomModule: ObservedCustomModule\n }\n }", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.prepare.html", "category": "pytorch docs"} {"text": "GaussianNLLLoss\nclass torch.nn.GaussianNLLLoss(*, full=False, eps=1e-06, reduction='mean')\nGaussian negative log likelihood loss.\nThe targets are treated as samples from Gaussian distributions with\n expectations and variances predicted by the neural network. For a\n \"target\" tensor modelled as having Gaussian distribution with a\n tensor of expectations \"input\" and a tensor of positive variances\n \"var\" the loss is:\n \\text{loss} =\n \\frac{1}{2}\\left(\\log\\left(\\text{max}\\left(\\text{var}, \\\n \\text{eps}\\right)\\right) + \\frac{\\left(\\text{input} -\n \\text{target}\\right)^2} {\\text{max}\\left(\\text{var}, \\\n \\text{eps}\\right)}\\right) + \\text{const.}\n\nwhere \"eps\" is used for stability. By default, the constant term of\n the loss function is omitted unless \"full\" is \"True\". If \"var\" is\n not the same size as \"input\" (due to a homoscedastic assumption),\n it must either have a final dimension of 1 or have one fewer", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GaussianNLLLoss.html", "category": "pytorch docs"} {"text": "dimension (with all other sizes being the same) for correct\n broadcasting.\nParameters:\n * full (bool, optional) -- include the constant term\n in the loss calculation. Default: \"False\".\n * **eps** (*float**, **optional*) -- value used to clamp \"var\"\n (see note below), for stability. Default: 1e-6.\n\n * **reduction** (*str**, **optional*) -- specifies the reduction\n to apply to the output:\"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the output\n is the average of all batch member losses, \"'sum'\": the output\n is the sum of all batch member losses. 
Default: \"'mean'\".\n\nShape:\n * Input: (N, ) or () where * means any number of additional\n dimensions\n * Target: (N, *) or (*), same shape as the input, or same shape\n as the input but with one dimension equal to 1 (to allow for\n broadcasting)\n\n * Var: (N, *) or (*), same shape as the input, or same shape as\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GaussianNLLLoss.html", "category": "pytorch docs"} {"text": "the input but with one dimension equal to 1, or same shape as\n the input but with one fewer dimension (to allow for\n broadcasting)\n * Output: scalar if \"reduction\" is \"'mean'\" (default) or\n \"'sum'\". If \"reduction\" is \"'none'\", then (N, *), same shape\n as the input\n\nExamples::\n >>> loss = nn.GaussianNLLLoss()\n >>> input = torch.randn(5, 2, requires_grad=True)\n >>> target = torch.randn(5, 2)\n >>> var = torch.ones(5, 2, requires_grad=True) # heteroscedastic\n >>> output = loss(input, target, var)\n >>> output.backward()\n >>> loss = nn.GaussianNLLLoss()\n >>> input = torch.randn(5, 2, requires_grad=True)\n >>> target = torch.randn(5, 2)\n >>> var = torch.ones(5, 1, requires_grad=True) # homoscedastic\n >>> output = loss(input, target, var)\n >>> output.backward()\n\nNote:\n The clamping of \"var\" is ignored with respect to autograd, and so\n the gradients are unaffected by it.\n\nReference:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GaussianNLLLoss.html", "category": "pytorch docs"} {"text": "Reference:\n Nix, D. A. and Weigend, A. S., \"Estimating the mean and variance\n of the target probability distribution\", Proceedings of 1994\n IEEE International Conference on Neural Networks (ICNN'94),\n Orlando, FL, USA, 1994, pp. 55-60 vol.1, doi:\n 10.1109/ICNN.1994.374138.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GaussianNLLLoss.html", "category": "pytorch docs"} {"text": "convert_fx\nclass torch.quantization.quantize_fx.convert_fx(graph_module, convert_custom_config=None, _remove_qconfig=True, qconfig_mapping=None, backend_config=None)\nConvert a calibrated or trained model to a quantized model\nParameters:\n * graph_module (***) -- A prepared and calibrated/trained\n model (GraphModule)\n * **convert_custom_config** (***) -- custom configurations for\n convert function. See \"ConvertCustomConfig\" for more details\n\n * **_remove_qconfig** (***) -- Option to remove the qconfig\n attributes in the model after convert.\n\n * **qconfig_mapping** (***) --\n\n config for specifying how to convert a model for quantization.\n\n The keys must include the ones in the qconfig_mapping\n passed to *prepare_fx* or *prepare_qat_fx*, with the\n same values or *None*. 
Additional keys can be specified\n with values set to *None*.\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.convert_fx.html", "category": "pytorch docs"} {"text": "with values set to None.\n For each entry whose value is set to None, we skip\n quantizing that entry in the model:\n\n qconfig_mapping = QConfigMapping\n .set_global(qconfig_from_prepare)\n .set_object_type(torch.nn.functional.add, None) # skip quantizing torch.nn.functional.add\n .set_object_type(torch.nn.functional.linear, qconfig_from_prepare)\n .set_module_name(\"foo.bar\", None) # skip quantizing module \"foo.bar\"\n\n * *backend_config* (BackendConfig): A configuration for the\n backend which describes how\n operators should be quantized in the backend, this\n includes quantization mode support\n (static/dynamic/weight_only), dtype support (quint8/qint8\n etc.), observer placement for each operators and fused\n operators. See \"BackendConfig\" for more details\n\nReturns:\n A quantized model (torch.nn.Module)\nReturn type:", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.convert_fx.html", "category": "pytorch docs"} {"text": "Return type:\n Module\nExample:\n # prepared_model: the model after prepare_fx/prepare_qat_fx and calibration/training\n # convert_fx converts a calibrated/trained model to a quantized model for the\n # target hardware, this includes converting the model first to a reference\n # quantized model, and then lower the reference quantized model to a backend\n # Currently, the supported backends are fbgemm (onednn), qnnpack (xnnpack) and\n # they share the same set of quantized operators, so we are using the same\n # lowering procedure\n #\n # backend_config defines the corresponding reference quantized module for\n # the weighted modules in the model, e.g. nn.Linear\n # TODO: add backend_config after we split the backend_config for fbgemm and qnnpack\n # e.g. 
backend_config = get_default_backend_config(\"fbgemm\")\n quantized_model = convert_fx(prepared_model)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.convert_fx.html", "category": "pytorch docs"} {"text": "torch.Tensor.addmv_\nTensor.addmv_(mat, vec, *, beta=1, alpha=1) -> Tensor\nIn-place version of \"addmv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addmv_.html", "category": "pytorch docs"} {"text": "torch.gcd\ntorch.gcd(input, other, *, out=None) -> Tensor\nComputes the element-wise greatest common divisor (GCD) of \"input\"\n and \"other\".\nBoth \"input\" and \"other\" must have integer types.\nNote:\n This defines gcd(0, 0) = 0.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([5, 10, 15])\n >>> b = torch.tensor([3, 4, 5])\n >>> torch.gcd(a, b)\n tensor([1, 2, 5])\n >>> c = torch.tensor([3])\n >>> torch.gcd(a, c)\n tensor([1, 1, 3])\n", "source": "https://pytorch.org/docs/stable/generated/torch.gcd.html", "category": "pytorch docs"} {"text": "torch.Tensor.arctan2\nTensor.arctan2(other) -> Tensor\nSee \"torch.arctan2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arctan2.html", "category": "pytorch docs"} {"text": "torch.arctan\ntorch.arctan(input, *, out=None) -> Tensor\nAlias for \"torch.atan()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arctan.html", "category": "pytorch docs"} {"text": "torch.Tensor.log_normal_\nTensor.log_normal_(mean=1, std=2, *, generator=None)\nFills \"self\" tensor with numbers samples from the log-normal\n distribution parameterized by the given mean \\mu and standard\n deviation \\sigma. Note that \"mean\" and \"std\" are the mean and\n standard deviation of the underlying normal distribution, and not\n of the returned distribution:\n f(x) = \\dfrac{1}{x \\sigma \\sqrt{2\\pi}}\\ e^{-\\frac{(\\ln x -\n \\mu)^2}{2\\sigma^2}}\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log_normal_.html", "category": "pytorch docs"} {"text": "ConvBnReLU3d\nclass torch.ao.nn.intrinsic.ConvBnReLU3d(conv, bn, relu)\nThis is a sequential container which calls the Conv 3d, Batch Norm\n 3d, and ReLU modules. During quantization this will be replaced\n with the corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBnReLU3d.html", "category": "pytorch docs"} {"text": "torch.func.functional_call\ntorch.func.functional_call(module, parameter_and_buffer_dicts, args, kwargs=None, *, tie_weights=True)\nPerforms a functional call on the module by replacing the module\n parameters and buffers with the provided ones.\nNote:\n If the module has active parametrizations, passing a value in the\n \"parameters_and_buffers\" argument with the name set to the\n regular parameter name will completely disable the\n parametrization. 
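For instance, a rough sketch on a parametrized "nn.Linear" (the "Double" parametrization and the tensor values here are purely illustrative, not part of the API):
    import torch
    import torch.nn as nn
    import torch.nn.utils.parametrize as parametrize
    from torch.func import functional_call

    class Double(nn.Module):      # illustrative parametrization only
        def forward(self, w):
            return 2 * w

    lin = nn.Linear(2, 2, bias=False)
    parametrize.register_parametrization(lin, "weight", Double())

    w = torch.ones(2, 2)
    # Keying on the plain parameter name bypasses the parametrization,
    # so the call below uses `w` exactly as given:
    out = functional_call(lin, {"weight": w}, torch.ones(1, 2))
    # To keep the parametrization active, key on
    # "parametrizations.weight.original" instead, as described just below.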
If you want to apply the parametrization\n function to the value passed please set the key as\n \"{submodule_name}.parametrizations.{parameter_name}.original\".\n\nNote:\n If the module performs in-place operations on parameters/buffers,\n these will be reflected in the \"parameters_and_buffers\" input.\n\n Example:\n\n >>> a = {'foo': torch.zeros(())}\n >>> mod = Foo() # does self.foo = self.foo + 1\n >>> print(mod.foo) # tensor(0.)\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.functional_call.html", "category": "pytorch docs"} {"text": "\n\n\nprint(mod.foo) # tensor(0.)\n >>> functional_call(mod, a, torch.ones(()))\n >>> print(mod.foo) # tensor(0.)\n >>> print(a['foo']) # tensor(1.)\n\n\n\nNote:\n If the module has tied weights, whether or not functional_call\n respects the tying is determined by the tie_weights flag.Example:\n\n >>> a = {'foo': torch.zeros(())}\n >>> mod = Foo() # has both self.foo and self.foo_tied which are tied. Returns x + self.foo + self.foo_tied\n >>> print(mod.foo) # tensor(1.)\n >>> mod(torch.zeros(())) # tensor(2.)\n >>> functional_call(mod, a, torch.zeros(())) # tensor(0.) since it will change self.foo_tied too\n >>> functional_call(mod, a, torch.zeros(()), tie_weights=False) # tensor(1.)--self.foo_tied is not updated\n >>> new_a = {'foo', torch.zeros(()), 'foo_tied': torch.zeros(())}\n >>> functional_call(mod, new_a, torch.zeros()) # tensor(0.)\n\nAn example of passing mutliple dictionaries", "source": "https://pytorch.org/docs/stable/generated/torch.func.functional_call.html", "category": "pytorch docs"} {"text": "An example of passing mutliple dictionaries\n a = ({'weight': torch.ones(1, 1)}, {'buffer': torch.zeros(1)}) # two separate dictionaries\n mod = nn.Bar(1, 1) # return self.weight @ x + self.buffer\n print(mod.weight) # tensor(...)\n print(mod.buffer) # tensor(...)\n x = torch.randn((1, 1))\n print(x)\n functional_call(mod, a, x) # same as x\n print(mod.weight) # same as before functional_call\n\nAnd here is an example of applying the grad transform over the\n parameters of a model.\n import torch\n import torch.nn as nn\n from torch.func import functional_call, grad\n\n x = torch.randn(4, 3)\n t = torch.randn(4, 3)\n model = nn.Linear(3, 3)\n\n def compute_loss(params, x, t):\n y = functional_call(model, params, x)\n return nn.functional.mse_loss(y, t)\n\n grad_weights = grad(compute_loss)(dict(model.named_parameters()), x, t)\n\nNote:\n If the user does not need grad tracking outside of grad\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.functional_call.html", "category": "pytorch docs"} {"text": "transforms, they can detach all of the parameters for better\n performance and memory usageExample:\n >>> detached_params = {k: v.detach() for k, v in model.named_parameters()}\n >>> grad_weights = grad(compute_loss)(detached_params, x, t)\n >>> grad_weights.grad_fn # None--it's not tracking gradients outside of grad\n\n This means that the user cannot call \"grad_weight.backward()\".\n However, if they don't need autograd tracking outside of the\n transforms, this will result in less memory usage and faster\n speeds.\n\nParameters:\n * module (torch.nn.Module) -- the module to call\n * **parameters_and_buffers** (*Dict**[**str**,**Tensor**] or\n **tuple of Dict**[**str**, **Tensor**]*) -- the parameters\n that will be used in the module call. 
If given a tuple of\n dictionaries, they must have distinct keys so that all\n dictionaries can be used together\n\n * **args** (*Any** or **tuple*) -- arguments to be passed to the\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.functional_call.html", "category": "pytorch docs"} {"text": "module call. If not a tuple, considered a single argument.\n * **kwargs** (*dict*) -- keyword arguments to be passed to the\n module call\n\n * **tie_weights** (*bool**, **optional*) -- If True, then\n parameters and buffers tied in the original model will be\n treated as tied in the reparamaterized version. Therefore, if\n True and different values are passed for the tied paramaters\n and buffers, it will error. If False, it will not respect the\n originally tied parameters and buffers unless the values\n passed for both weights are the same. Default: True.\n\nReturns:\n the result of calling \"module\".\nReturn type:\n Any", "source": "https://pytorch.org/docs/stable/generated/torch.func.functional_call.html", "category": "pytorch docs"} {"text": "torch.linalg.eig\ntorch.linalg.eig(A, *, out=None)\nComputes the eigenvalue decomposition of a square matrix if it\n exists.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the eigenvalue\n decomposition of a square matrix A \\in \\mathbb{K}^{n \\times n}\n (if it exists) is defined as\n A = V \\operatorname{diag}(\\Lambda) V^{-1}\\mathrlap{\\qquad V \\in\n \\mathbb{C}^{n \\times n}, \\Lambda \\in \\mathbb{C}^n}\n\nThis decomposition exists if and only if A is diagonalizable. This\n is the case when all its eigenvalues are different.\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nNote:\n The eigenvalues and eigenvectors of a real matrix may be complex.\n\nNote:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n\nWarning:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eig.html", "category": "pytorch docs"} {"text": "device with the CPU.\nWarning:\n This function assumes that \"A\" is diagonalizable (for example,\n when all the eigenvalues are different). If it is not\n diagonalizable, the returned eigenvalues will be correct but A\n \\neq V \\operatorname{diag}(\\Lambda)V^{-1}.\n\nWarning:\n The returned eigenvectors are normalized to have norm *1*. Even\n then, the eigenvectors of a matrix are not unique, nor are they\n continuous with respect to \"A\". Due to this lack of uniqueness,\n different hardware and software may compute different\n eigenvectors.This non-uniqueness is caused by the fact that\n multiplying an eigenvector by by e^{i \\phi}, \\phi \\in \\mathbb{R}\n produces another set of valid eigenvectors of the matrix. For\n this reason, the loss function shall not depend on the phase of\n the eigenvectors, as this quantity is not well-defined. This is\n checked when computing the gradients of this function. As such,\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eig.html", "category": "pytorch docs"} {"text": "when inputs are on a CUDA device, this function synchronizes that\n device with the CPU when computing the gradients. This is checked\n when computing the gradients of this function. 
As such, when\n inputs are on a CUDA device, the computation of the gradients of\n this function synchronizes that device with the CPU.\nWarning:\n Gradients computed using the *eigenvectors* tensor will only be\n finite when \"A\" has distinct eigenvalues. Furthermore, if the\n distance between any two eigenvalues is close to zero, the\n gradient will be numerically unstable, as it depends on the\n eigenvalues \\lambda_i through the computation of \\frac{1}{\\min_{i\n \\neq j} \\lambda_i - \\lambda_j}.\n\nSee also:\n \"torch.linalg.eigvals()\" computes only the eigenvalues. Unlike\n \"torch.linalg.eig()\", the gradients of \"eigvals()\" are always\n numerically stable.\n\n \"torch.linalg.eigh()\" for a (faster) function that computes the\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eig.html", "category": "pytorch docs"} {"text": "eigenvalue decomposition for Hermitian and symmetric matrices.\n \"torch.linalg.svd()\" for a function that computes another type of\n spectral decomposition that works on matrices of any shape.\n\n \"torch.linalg.qr()\" for another (much faster) decomposition that\n works on matrices of any shape.\n\nParameters:\n A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions consisting of diagonalizable\n matrices.\nKeyword Arguments:\n out (tuple, optional) -- output tuple of two tensors.\n Ignored if None. Default: None.\nReturns:\n A named tuple (eigenvalues, eigenvectors) which corresponds to\n \\Lambda and V above.\n *eigenvalues* and *eigenvectors* will always be complex-valued,\n even when \"A\" is real. The eigenvectors will be given by the\n columns of *eigenvectors*.\n\nExamples:\n >>> A = torch.randn(2, 2, dtype=torch.complex128)\n >>> A\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eig.html", "category": "pytorch docs"} {"text": "\n\n\nA\n tensor([[ 0.9828+0.3889j, -0.4617+0.3010j],\n [ 0.1662-0.7435j, -0.6139+0.0562j]], dtype=torch.complex128)\n >>> L, V = torch.linalg.eig(A)\n >>> L\n tensor([ 1.1226+0.5738j, -0.7537-0.1286j], dtype=torch.complex128)\n >>> V\n tensor([[ 0.9218+0.0000j, 0.1882-0.2220j],\n [-0.0270-0.3867j, 0.9567+0.0000j]], dtype=torch.complex128)\n >>> torch.dist(V @ torch.diag(L) @ torch.linalg.inv(V), A)\n tensor(7.7119e-16, dtype=torch.float64)\n\n\n\n >>> A = torch.randn(3, 2, 2, dtype=torch.float64)\n >>> L, V = torch.linalg.eig(A)\n >>> torch.dist(V @ torch.diag_embed(L) @ torch.linalg.inv(V), A)\n tensor(3.2841e-16, dtype=torch.float64)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eig.html", "category": "pytorch docs"} {"text": "torch.nn.functional.hinge_embedding_loss\ntorch.nn.functional.hinge_embedding_loss(input, target, margin=1.0, size_average=None, reduce=None, reduction='mean') -> Tensor\nSee \"HingeEmbeddingLoss\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.hinge_embedding_loss.html", "category": "pytorch docs"} {"text": "DataParallel\nclass torch.nn.DataParallel(module, device_ids=None, output_device=None, dim=0)\nImplements data parallelism at the module level.\nThis container parallelizes the application of the given \"module\"\n by splitting the input across the specified devices by chunking in\n the batch dimension (other objects will be copied once per device).\n In the forward pass, the module is replicated on each device, and\n each replica handles a portion of the input. 
During the backwards\n pass, gradients from each replica are summed into the original\n module.\nThe batch size should be larger than the number of GPUs used.\nWarning:\n It is recommended to use \"DistributedDataParallel\", instead of\n this class, to do multi-GPU training, even if there is only a\n single node. See: Use nn.parallel.DistributedDataParallel instead\n of multiprocessing or nn.DataParallel and Distributed Data\n Parallel.\n\nArbitrary positional and keyword inputs are allowed to be passed", "source": "https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html", "category": "pytorch docs"} {"text": "into DataParallel but some types are specially handled. tensors\n will be scattered on dim specified (default 0). tuple, list and\n dict types will be shallow copied. The other types will be shared\n among different threads and can be corrupted if written to in the\n model's forward pass.\nThe parallelized \"module\" must have its parameters and buffers on\n \"device_ids[0]\" before running this \"DataParallel\" module.\nWarning:\n In each forward, \"module\" is **replicated** on each device, so\n any updates to the running module in \"forward\" will be lost. For\n example, if \"module\" has a counter attribute that is incremented\n in each \"forward\", it will always stay at the initial value\n because the update is done on the replicas which are destroyed\n after \"forward\". However, \"DataParallel\" guarantees that the\n replica on \"device[0]\" will have its parameters and buffers\n sharing storage with the base parallelized \"module\". So **in-\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html", "category": "pytorch docs"} {"text": "place** updates to the parameters or buffers on \"device[0]\" will\n be recorded. E.g., \"BatchNorm2d\" and \"spectral_norm()\" rely on\n this behavior to update the buffers.\nWarning:\n Forward and backward hooks defined on \"module\" and its submodules\n will be invoked \"len(device_ids)\" times, each with inputs located\n on a particular device. Particularly, the hooks are only\n guaranteed to be executed in correct order with respect to\n operations on corresponding devices. For example, it is not\n guaranteed that hooks set via \"register_forward_pre_hook()\" be\n executed before *all* \"len(device_ids)\" \"forward()\" calls, but\n that each such hook be executed before the corresponding\n \"forward()\" call of that device.\n\nWarning:\n When \"module\" returns a scalar (i.e., 0-dimensional tensor) in\n \"forward()\", this wrapper will return a vector of length equal to\n number of devices used in data parallelism, containing the result\n from each device.\n\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html", "category": "pytorch docs"} {"text": "from each device.\nNote:\n There is a subtlety in using the \"pack sequence -> recurrent\n network -> unpack sequence\" pattern in a \"Module\" wrapped in\n \"DataParallel\". 
See My recurrent network doesn't work with data\n parallelism section in FAQ for details.\n\nParameters:\n * module (Module) -- module to be parallelized\n * **device_ids** (*list of python:int** or **torch.device*) --\n CUDA devices (default: all devices)\n\n * **output_device** (*int** or **torch.device*) -- device\n location of output (default: device_ids[0])\n\nVariables:\n module (Module) -- the module to be parallelized\nExample:\n >>> net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])\n >>> output = net(input_var) # input_var can be on any device, including CPU\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.DataParallel.html", "category": "pytorch docs"} {"text": "GLU\nclass torch.nn.GLU(dim=- 1)\nApplies the gated linear unit function {GLU}(a, b)= a \\otimes\n \\sigma(b) where a is the first half of the input matrices and b is\n the second half.\nParameters:\n dim (int) -- the dimension on which to split the input.\n Default: -1\nShape:\n * Input: (\\ast_1, N, \\ast_2) where *** means, any number of\n additional dimensions\n * Output: (\\ast_1, M, \\ast_2) where M=N/2\n\nExamples:\n >>> m = nn.GLU()\n >>> input = torch.randn(4, 2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GLU.html", "category": "pytorch docs"} {"text": "torch.Tensor.diagflat\nTensor.diagflat(offset=0) -> Tensor\nSee \"torch.diagflat()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.diagflat.html", "category": "pytorch docs"} {"text": "ReflectionPad1d\nclass torch.nn.ReflectionPad1d(padding)\nPads the input tensor using the reflection of the input boundary.\nFor N-dimensional padding, use \"torch.nn.functional.pad()\".\nParameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 2-tuple,\n uses (\\text{padding_left}, \\text{padding_right})\nShape:\n * Input: (C, W_{in}) or (N, C, W_{in}).\n * Output: (C, W_{out}) or (N, C, W_{out}), where\n\n W_{out} = W_{in} + \\text{padding\\_left} +\n \\text{padding\\_right}\n\nExamples:\n >>> m = nn.ReflectionPad1d(2)\n >>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4)\n >>> input\n tensor([[[0., 1., 2., 3.],\n [4., 5., 6., 7.]]])\n >>> m(input)\n tensor([[[2., 1., 0., 1., 2., 3., 2., 1.],\n [6., 5., 4., 5., 6., 7., 6., 5.]]])\n >>> # using different paddings for different sides\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad1d.html", "category": "pytorch docs"} {"text": "\n\n\nm = nn.ReflectionPad1d((3, 1))\n >>> m(input)\n tensor([[[3., 2., 1., 0., 1., 2., 3., 2.],\n [7., 6., 5., 4., 5., 6., 7., 6.]]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad1d.html", "category": "pytorch docs"} {"text": "conv1d\nclass torch.ao.nn.quantized.functional.conv1d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8)\nApplies a 1D convolution over a quantized 1D input composed of\n several input planes.\nSee \"Conv1d\" for details and output shape.\nParameters:\n * input -- quantized input tensor of shape (\\text{minibatch}\n , \\text{in_channels} , iW)\n * **weight** -- quantized filters of shape (\\text{out\\_channels}\n , \\frac{\\text{in\\_channels}}{\\text{groups}} , iW)\n\n * **bias** -- **non-quantized** bias tensor of shape\n (\\text{out\\_channels}). The tensor type must be *torch.float*.\n\n * **stride** -- the stride of the convolving kernel. 
Can be a\n single number or a tuple *(sW,)*. Default: 1\n\n * **padding** -- implicit paddings on both sides of the input.\n Can be a single number or a tuple *(padW,)*. Default: 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv1d.html", "category": "pytorch docs"} {"text": "\n\ndilation -- the spacing between kernel elements. Can be a\n single number or a tuple (dW,). Default: 1\n\n\ngroups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\n\n\npadding_mode -- the padding mode to use. Only \"zeros\" is\n supported for quantized convolution at the moment. Default:\n \"zeros\"\n\n\nscale -- quantization scale for the output. Default: 1.0\n\n\nzero_point -- quantization zero_point for the output.\n Default: 0\n\n\ndtype -- quantization data type to use. Default:\n \"torch.quint8\"\n\n\n\n\nExamples:\n >>> from torch.ao.nn.quantized import functional as qF\n >>> filters = torch.randn(33, 16, 3, dtype=torch.float)\n >>> inputs = torch.randn(20, 16, 50, dtype=torch.float)\n >>> bias = torch.randn(33, dtype=torch.float)\n >>>\n >>> scale, zero_point = 1.0, 0\n >>> dtype_inputs = torch.quint8\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv1d.html", "category": "pytorch docs"} {"text": "\n\n\ndtype_inputs = torch.quint8\n >>> dtype_filters = torch.qint8\n >>>\n >>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)\n >>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)\n >>> qF.conv1d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv1d.html", "category": "pytorch docs"} {"text": "conv2d\nclass torch.ao.nn.quantized.functional.conv2d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8)\nApplies a 2D convolution over a quantized 2D input composed of\n several input planes.\nSee \"Conv2d\" for details and output shape.\nParameters:\n * input -- quantized input tensor of shape (\\text{minibatch}\n , \\text{in_channels} , iH , iW)\n * **weight** -- quantized filters of shape (\\text{out\\_channels}\n , \\frac{\\text{in\\_channels}}{\\text{groups}} , kH , kW)\n\n * **bias** -- **non-quantized** bias tensor of shape\n (\\text{out\\_channels}). The tensor type must be *torch.float*.\n\n * **stride** -- the stride of the convolving kernel. Can be a\n single number or a tuple *(sH, sW)*. Default: 1\n\n * **padding** -- implicit paddings on both sides of the input.\n Can be a single number or a tuple *(padH, padW)*. Default: 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv2d.html", "category": "pytorch docs"} {"text": "\n\ndilation -- the spacing between kernel elements. Can be a\n single number or a tuple (dH, dW). Default: 1\n\n\ngroups -- split input into groups, \\text{in_channels}\n should be divisible by the number of groups. Default: 1\n\n\npadding_mode -- the padding mode to use. Only \"zeros\" is\n supported for quantized convolution at the moment. Default:\n \"zeros\"\n\n\nscale -- quantization scale for the output. Default: 1.0\n\n\nzero_point -- quantization zero_point for the output.\n Default: 0\n\n\ndtype -- quantization data type to use. 
Default:\n \"torch.quint8\"\n\n\n\n\nExamples:\n >>> from torch.ao.nn.quantized import functional as qF\n >>> filters = torch.randn(8, 4, 3, 3, dtype=torch.float)\n >>> inputs = torch.randn(1, 4, 5, 5, dtype=torch.float)\n >>> bias = torch.randn(8, dtype=torch.float)\n >>>\n >>> scale, zero_point = 1.0, 0\n >>> dtype_inputs = torch.quint8\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv2d.html", "category": "pytorch docs"} {"text": "\n\n\ndtype_inputs = torch.quint8\n >>> dtype_filters = torch.qint8\n >>>\n >>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)\n >>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)\n >>> qF.conv2d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.exp_\nTensor.exp_() -> Tensor\nIn-place version of \"exp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.exp_.html", "category": "pytorch docs"} {"text": "torch.manual_seed\ntorch.manual_seed(seed)\nSets the seed for generating random numbers. Returns a\n torch.Generator object.\nParameters:\n seed (int) -- The desired seed. Value must be within the\n inclusive range [-0x8000_0000_0000_0000,\n 0xffff_ffff_ffff_ffff]. Otherwise, a RuntimeError is raised.\n Negative inputs are remapped to positive values with the formula\n 0xffff_ffff_ffff_ffff + seed.\nReturn type:\n Generator", "source": "https://pytorch.org/docs/stable/generated/torch.manual_seed.html", "category": "pytorch docs"} {"text": "torch.Tensor.register_hook\nTensor.register_hook(hook)\nRegisters a backward hook.\nThe hook will be called every time a gradient with respect to the\n Tensor is computed. The hook should have the following signature:\n hook(grad) -> Tensor or None\n\nThe hook should not modify its argument, but it can optionally\n return a new gradient which will be used in place of \"grad\".\nThis function returns a handle with a method \"handle.remove()\" that\n removes the hook from the module.\nNote:\n See Backward Hooks execution for more information on how when\n this hook is executed, and how its execution is ordered relative\n to other hooks.\n\nExample:\n >>> v = torch.tensor([0., 0., 0.], requires_grad=True)\n >>> h = v.register_hook(lambda grad: grad * 2) # double the gradient\n >>> v.backward(torch.tensor([1., 2., 3.]))\n >>> v.grad\n\n 2\n 4\n 6\n [torch.FloatTensor of size (3,)]\n\n >>> h.remove() # removes the hook\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.register_hook.html", "category": "pytorch docs"} {"text": "torch.index_copy\ntorch.index_copy(input, dim, index, source, *, out=None) -> Tensor\nSee \"index_add_()\" for function description.", "source": "https://pytorch.org/docs/stable/generated/torch.index_copy.html", "category": "pytorch docs"} {"text": "torch.Tensor.atan2\nTensor.atan2(other) -> Tensor\nSee \"torch.atan2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.atan2.html", "category": "pytorch docs"} {"text": "torch.set_warn_always\ntorch.set_warn_always(b)\nWhen this flag is False (default) then some PyTorch warnings may\n only appear once per process. This helps avoid excessive warning\n information. 
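A minimal sketch of toggling this flag around a code path under investigation (purely illustrative):
    >>> torch.set_warn_always(True)   # surface every occurrence while debugging
    >>> ...                           # exercise the code path being investigated
    >>> torch.set_warn_always(False)  # restore the default one-warning-per-process behaviour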
Setting it to True causes these warnings to always\n appear, which may be helpful when debugging.\nParameters:\n b (\"bool\") -- If True, force warnings to always be emitted\n If False, set to the default behaviour", "source": "https://pytorch.org/docs/stable/generated/torch.set_warn_always.html", "category": "pytorch docs"} {"text": "torch.nn.functional.pixel_unshuffle\ntorch.nn.functional.pixel_unshuffle(input, downscale_factor) -> Tensor\nReverses the \"PixelShuffle\" operation by rearranging elements in a\n tensor of shape (, C, H \\times r, W \\times r) to a tensor of shape\n (, C \\times r^2, H, W), where r is the \"downscale_factor\".\nSee \"PixelUnshuffle\" for details.\nParameters:\n * input (Tensor) -- the input tensor\n * **downscale_factor** (*int*) -- factor to increase spatial\n resolution by\n\nExamples:\n >>> input = torch.randn(1, 1, 12, 12)\n >>> output = torch.nn.functional.pixel_unshuffle(input, 3)\n >>> print(output.size())\n torch.Size([1, 9, 4, 4])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pixel_unshuffle.html", "category": "pytorch docs"} {"text": "torch.nn.functional.sigmoid\ntorch.nn.functional.sigmoid(input) -> Tensor\nApplies the element-wise function \\text{Sigmoid}(x) = \\frac{1}{1 +\n \\exp(-x)}\nSee \"Sigmoid\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.sigmoid.html", "category": "pytorch docs"} {"text": "Conv2d\nclass torch.ao.nn.quantized.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\nApplies a 2D convolution over a quantized input signal composed of\n several quantized input planes.\nFor details on input arguments, parameters, and implementation see\n \"Conv2d\".\nNote:\n Only *zeros* is supported for the \"padding_mode\" argument.\n\nNote:\n Only *torch.quint8* is supported for the input data type.\n\nVariables:\n * weight (Tensor) -- packed tensor derived from the\n learnable weight parameter.\n * **scale** (*Tensor*) -- scalar for the output scale\n\n * **zero_point** (*Tensor*) -- scalar for the output zero point\n\nSee \"Conv2d\" for other attributes.\nExamples:\n >>> # With square kernels and equal stride\n >>> m = nn.quantized.Conv2d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv2d.html", "category": "pytorch docs"} {"text": "\n\n\nm = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))\n >>> # non-square kernels and unequal stride and with padding and dilation\n >>> m = nn.quantized.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))\n >>> input = torch.randn(20, 16, 50, 100)\n >>> # quantize input to quint8\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> output = m(q_input)\n\n\n\nclassmethod from_float(mod)\n Creates a quantized module from a float module or qparams_dict.\n\n Parameters:\n **mod** (*Module*) -- a float module, either produced by\n torch.ao.quantization utilities or provided by the user\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv2d.html", "category": "pytorch docs"} {"text": "torch.bitwise_or\ntorch.bitwise_or(input, other, *, out=None) -> Tensor\nComputes the bitwise OR of \"input\" and \"other\". The input tensor\n must be of integral or Boolean types. 
For bool tensors, it computes\n the logical OR.\nParameters:\n * input -- the first input tensor\n * **other** -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.bitwise_or(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))\n tensor([-1, -2, 3], dtype=torch.int8)\n >>> torch.bitwise_or(torch.tensor([True, True, False]), torch.tensor([False, True, False]))\n tensor([ True, True, False])\n", "source": "https://pytorch.org/docs/stable/generated/torch.bitwise_or.html", "category": "pytorch docs"} {"text": "torch.unsqueeze\ntorch.unsqueeze(input, dim) -> Tensor\nReturns a new tensor with a dimension of size one inserted at the\n specified position.\nThe returned tensor shares the same underlying data with this\n tensor.\nA \"dim\" value within the range \"[-input.dim() - 1, input.dim() +\n 1)\" can be used. Negative \"dim\" will correspond to \"unsqueeze()\"\n applied at \"dim\" = \"dim + input.dim() + 1\".\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the index at which to insert the singleton\n dimension\n\nExample:\n >>> x = torch.tensor([1, 2, 3, 4])\n >>> torch.unsqueeze(x, 0)\n tensor([[ 1, 2, 3, 4]])\n >>> torch.unsqueeze(x, 1)\n tensor([[ 1],\n [ 2],\n [ 3],\n [ 4]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.unsqueeze.html", "category": "pytorch docs"} {"text": "torch.set_num_threads\ntorch.set_num_threads(int)\nSets the number of threads used for intraop parallelism on CPU.\nWarning:\n To ensure that the correct number of threads is used,\n set_num_threads must be called before running eager, JIT or\n autograd code.\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_num_threads.html", "category": "pytorch docs"} {"text": "torch.square\ntorch.square(input, *, out=None) -> Tensor\nReturns a new tensor with the square of the elements of \"input\".\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([-2.0755, 1.0226, 0.0831, 0.4806])\n >>> torch.square(a)\n tensor([ 4.3077, 1.0457, 0.0069, 0.2310])\n", "source": "https://pytorch.org/docs/stable/generated/torch.square.html", "category": "pytorch docs"} {"text": "torch.Tensor.double\nTensor.double(memory_format=torch.preserve_format) -> Tensor\n\"self.double()\" is equivalent to \"self.to(torch.float64)\". See\n \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.double.html", "category": "pytorch docs"} {"text": "torch.Tensor.i0_\nTensor.i0_() -> Tensor\nIn-place version of \"i0()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.i0_.html", "category": "pytorch docs"} {"text": "torch.all\ntorch.all(input) -> Tensor\nTests if all elements in \"input\" evaluate to True.\nNote:\n This function matches the behaviour of NumPy in returning output\n of dtype *bool* for all supported dtypes except *uint8*. 
For\n *uint8* the dtype of output is *uint8* itself.\n\nExample:\n >>> a = torch.rand(1, 2).bool()\n >>> a\n tensor([[False, True]], dtype=torch.bool)\n >>> torch.all(a)\n tensor(False, dtype=torch.bool)\n >>> a = torch.arange(0, 3)\n >>> a\n tensor([0, 1, 2])\n >>> torch.all(a)\n tensor(False)\n\ntorch.all(input, dim, keepdim=False, *, out=None) -> Tensor\nFor each row of \"input\" in the given dimension \"dim\", returns\n True if all elements in the row evaluate to True and False\n otherwise.\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in", "source": "https://pytorch.org/docs/stable/generated/torch.all.html", "category": "pytorch docs"} {"text": "the output tensor having 1 fewer dimension than \"input\".\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to reduce.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.rand(4, 2).bool()\n >>> a\n tensor([[True, True],\n [True, False],\n [True, True],\n [True, True]], dtype=torch.bool)\n >>> torch.all(a, dim=1)\n tensor([ True, False, True, True], dtype=torch.bool)\n >>> torch.all(a, dim=0)\n tensor([ True, False], dtype=torch.bool)\n", "source": "https://pytorch.org/docs/stable/generated/torch.all.html", "category": "pytorch docs"} {"text": "torch.Tensor.prod\nTensor.prod(dim=None, keepdim=False, dtype=None) -> Tensor\nSee \"torch.prod()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.prod.html", "category": "pytorch docs"} {"text": "torch.lu_solve\ntorch.lu_solve(b, LU_data, LU_pivots, *, out=None) -> Tensor\nReturns the LU solve of the linear system Ax = b using the\n partially pivoted LU factorization of A from \"lu_factor()\".\nThis function supports \"float\", \"double\", \"cfloat\" and \"cdouble\"\n dtypes for \"input\".\nWarning:\n \"torch.lu_solve()\" is deprecated in favor of\n \"torch.linalg.lu_solve()\". \"torch.lu_solve()\" will be removed in\n a future PyTorch release. \"X = torch.lu_solve(B, LU, pivots)\"\n should be replaced with\n\n X = linalg.lu_solve(LU, pivots, B)\n\nParameters:\n * b (Tensor) -- the RHS tensor of size (*, m, k), where *\n is zero or more batch dimensions.\n * **LU_data** (*Tensor*) -- the pivoted LU factorization of A\n from \"lu_factor()\" of size (*, m, m), where * is zero or more\n batch dimensions.\n\n * **LU_pivots** (*IntTensor*) -- the pivots of the LU\n factorization from \"lu_factor()\" of size (*, m), where * is\n", "source": "https://pytorch.org/docs/stable/generated/torch.lu_solve.html", "category": "pytorch docs"} {"text": "zero or more batch dimensions. The batch dimensions of\n \"LU_pivots\" must be equal to the batch dimensions of\n \"LU_data\".\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> A = torch.randn(2, 3, 3)\n >>> b = torch.randn(2, 3, 1)\n >>> LU, pivots = torch.linalg.lu_factor(A)\n >>> x = torch.lu_solve(b, LU, pivots)\n >>> torch.dist(A @ x, b)\n tensor(1.00000e-07 *\n 2.8312)\n", "source": "https://pytorch.org/docs/stable/generated/torch.lu_solve.html", "category": "pytorch docs"} {"text": "torch.cuda.comm.broadcast\ntorch.cuda.comm.broadcast(tensor, devices=None, *, out=None)\nBroadcasts a tensor to specified GPU devices.\nParameters:\n * tensor (Tensor) -- tensor to broadcast. 
Can be on CPU or\n GPU.\n * **devices** (*Iterable**[**torch.device**, **str** or\n **int**]**, **optional*) -- an iterable of GPU devices, among\n which to broadcast.\n\n * **out** (*Sequence**[**Tensor**]**, **optional**, **keyword-\n only*) -- the GPU tensors to store output results.\n\nNote:\n Exactly one of \"devices\" and \"out\" must be specified.\n\nReturns:\n * If \"devices\" is specified,\n a tuple containing copies of \"tensor\", placed on \"devices\".\n * If \"out\" is specified,\n a tuple containing \"out\" tensors, each containing a copy of\n \"tensor\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.broadcast.html", "category": "pytorch docs"} {"text": "torch.Tensor.item\nTensor.item() -> number\nReturns the value of this tensor as a standard Python number. This\n only works for tensors with one element. For other cases, see\n \"tolist()\".\nThis operation is not differentiable.\nExample:\n >>> x = torch.tensor([1.0])\n >>> x.item()\n 1.0\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.item.html", "category": "pytorch docs"} {"text": "torch.fmod\ntorch.fmod(input, other, *, out=None) -> Tensor\nApplies C++'s std::fmod entrywise. The result has the same sign as\n the dividend \"input\" and its absolute value is less than that of\n \"other\".\nThis function may be defined in terms of \"torch.div()\" as\n torch.fmod(a, b) == a - a.div(b, rounding_mode=\"trunc\") * b\n\nSupports broadcasting to a common shape, type promotion, and\n integer and float inputs.\nNote:\n When the divisor is zero, returns \"NaN\" for floating point dtypes\n on both CPU and GPU; raises \"RuntimeError\" for integer division\n by zero on CPU; Integer division by zero on GPU may return any\n value.\n\nNote:\n Complex inputs are not supported. In some cases, it is not\n mathematically possible to satisfy the definition of a modulo\n operation with complex numbers.\n\nSee also:\n \"torch.remainder()\" which implements Python's modulus operator.\n This one is defined using division rounding down the result.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.fmod.html", "category": "pytorch docs"} {"text": "Parameters:\n * input (Tensor) -- the dividend\n * **other** (*Tensor** or **Scalar*) -- the divisor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.fmod(torch.tensor([-3., -2, -1, 1, 2, 3]), 2)\n tensor([-1., -0., -1., 1., 0., 1.])\n >>> torch.fmod(torch.tensor([1, 2, 3, 4, 5]), -1.5)\n tensor([1.0000, 0.5000, 0.0000, 1.0000, 0.5000])\n", "source": "https://pytorch.org/docs/stable/generated/torch.fmod.html", "category": "pytorch docs"} {"text": "torch.argmin\ntorch.argmin(input, dim=None, keepdim=False) -> LongTensor\nReturns the indices of the minimum value(s) of the flattened tensor\n or along a dimension\nThis is the second value returned by \"torch.min()\". See its\n documentation for the exact semantics of this method.\nNote:\n If there are multiple minimal values then the indices of the\n first minimal value are returned.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to reduce. 
If \"None\", the\n argmin of the flattened input is returned.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not..\n\nExample:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 0.1139, 0.2254, -0.1381, 0.3687],\n [ 1.0100, -1.1975, -0.0102, -0.4732],\n [-0.9240, 0.1207, -0.7506, -1.0213],\n [ 1.7809, -1.2960, 0.9384, 0.1438]])\n >>> torch.argmin(a)\n tensor(13)\n", "source": "https://pytorch.org/docs/stable/generated/torch.argmin.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.argmin(a)\n tensor(13)\n >>> torch.argmin(a, dim=1)\n tensor([ 2, 1, 3, 1])\n >>> torch.argmin(a, dim=1, keepdim=True)\n tensor([[2],\n [1],\n [3],\n [1]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.argmin.html", "category": "pytorch docs"} {"text": "torch.Tensor.type_as\nTensor.type_as(tensor) -> Tensor\nReturns this tensor cast to the type of the given tensor.\nThis is a no-op if the tensor is already of the correct type. This\n is equivalent to \"self.type(tensor.type())\"\nParameters:\n tensor (Tensor) -- the tensor which has the desired type", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.type_as.html", "category": "pytorch docs"} {"text": "Conv1d\nclass torch.nn.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\nApplies a 1D convolution over an input signal composed of several\n input planes.\nIn the simplest case, the output value of the layer with input size\n (N, C_{\\text{in}}, L) and output (N, C_{\\text{out}},\n L_{\\text{out}}) can be precisely described as:\n \\text{out}(N_i, C_{\\text{out}_j}) =\n \\text{bias}(C_{\\text{out}_j}) + \\sum_{k = 0}^{C_{in} - 1}\n \\text{weight}(C_{\\text{out}_j}, k) \\star \\text{input}(N_i, k)\n\nwhere \\star is the valid cross-correlation operator, N is a batch\n size, C denotes a number of channels, L is a length of signal\n sequence.\nThis module supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n\n\"stride\" controls the stride for the cross-correlation, a single\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html", "category": "pytorch docs"} {"text": "number or a one-element tuple.\n\n\n\"padding\" controls the amount of padding applied to the input. It\n can be either a string {'valid', 'same'} or a tuple of ints\n giving the amount of implicit padding applied on both sides.\n\n\n\"dilation\" controls the spacing between the kernel points; also\n known as the \u00c3\u00a0 trous algorithm. It is harder to describe, but\n this link has a nice visualization of what \"dilation\" does.\n\n\n\"groups\" controls the connections between inputs and outputs.\n \"in_channels\" and \"out_channels\" must both be divisible by\n \"groups\". 
For example,\n* At groups=1, all inputs are convolved to all outputs.\n\n* At groups=2, the operation becomes equivalent to having two\n conv layers side by side, each seeing half the input\n channels and producing half the output channels, and both\n subsequently concatenated.\n\n* At groups= \"in_channels\", each input channel is convolved\n with its own set of filters (of size\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html", "category": "pytorch docs"} {"text": "with its own set of filters (of size\n \\frac{\\text{out_channels}}{\\text{in_channels}}).\nNote:\n When *groups == in_channels* and *out_channels == K *\n in_channels*, where *K* is a positive integer, this operation is\n also known as a \"depthwise convolution\".In other words, for an\n input of size (N, C_{in}, L_{in}), a depthwise convolution with a\n depthwise multiplier *K* can be performed with the arguments\n (C_\\text{in}=C_\\text{in}, C_\\text{out}=C_\\text{in} \\times\n \\text{K}, ..., \\text{groups}=C_\\text{in}).\n\nNote:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html", "category": "pytorch docs"} {"text": "Note:\n \"padding='valid'\" is the same as no padding. \"padding='same'\"\n pads the input so the output has the shape as the input. However,\n this mode doesn't support any stride values other than 1.\n\nNote:\n This module supports complex data types i.e. \"complex32,\n complex64, complex128\".\n\nParameters:\n * in_channels (int) -- Number of channels in the input\n image\n * **out_channels** (*int*) -- Number of channels produced by the\n convolution\n\n * **kernel_size** (*int** or **tuple*) -- Size of the convolving\n kernel\n\n * **stride** (*int** or **tuple**, **optional*) -- Stride of the\n convolution. Default: 1\n\n * **padding** (*int**, **tuple** or **str**, **optional*) --\n Padding added to both sides of the input. Default: 0\n\n * **padding_mode** (*str**, **optional*) -- \"'zeros'\",\n \"'reflect'\", \"'replicate'\" or \"'circular'\". Default: \"'zeros'\"\n\n * **dilation** (*int** or **tuple**, **optional*) -- Spacing\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html", "category": "pytorch docs"} {"text": "between kernel elements. Default: 1\n * **groups** (*int**, **optional*) -- Number of blocked\n connections from input channels to output channels. Default: 1\n\n * **bias** (*bool**, **optional*) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n\nShape:\n * Input: (N, C_{in}, L_{in}) or (C_{in}, L_{in})\n * Output: (N, C_{out}, L_{out}) or (C_{out}, L_{out}), where\n\n L_{out} = \\left\\lfloor\\frac{L_{in} + 2 \\times\n \\text{padding} - \\text{dilation} \\times\n (\\text{kernel\\_size} - 1) - 1}{\\text{stride}} +\n 1\\right\\rfloor\n\nVariables:\n * weight (Tensor) -- the learnable weights of the module\n of shape (\\text{out_channels},\n \\frac{\\text{in_channels}}{\\text{groups}},\n \\text{kernel_size}). 
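As a hedged illustration of the depthwise-convolution note in the "Conv1d" entry above (groups equal to in_channels and out_channels a multiple of in_channels), the channel counts and signal length below are arbitrary choices:

    >>> import torch
    >>> from torch import nn
    >>> # depthwise case: groups == in_channels and out_channels == K * in_channels (K = 2)
    >>> m = nn.Conv1d(in_channels=4, out_channels=8, kernel_size=3, groups=4)
    >>> input = torch.randn(20, 4, 50)
    >>> m(input).shape
    torch.Size([20, 8, 48])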
The values of these weights are sampled\n from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{groups}{C_\\text{in} * \\text{kernel_size}}", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html", "category": "pytorch docs"} {"text": "\nbias (Tensor) -- the learnable bias of the module of\n shape (out_channels). If \"bias\" is \"True\", then the values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}) where k = \\frac{groups}{C_\\text{in} *\n \\text{kernel_size}}\n\nExamples:\n >>> m = nn.Conv1d(16, 33, 3, stride=2)\n >>> input = torch.randn(20, 16, 50)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv1d.html", "category": "pytorch docs"} {"text": "JitScalarType\nclass torch.onnx.JitScalarType(value)\nScalar types defined in torch.\nUse \"JitScalarType\" to convert from torch and JIT scalar types to\n ONNX scalar types.\n-[ Examples ]-\n\n\n\nJitScalarType.from_value(torch.ones(1, 2)).onnx_type()\n TensorProtoDataType.FLOAT\nJitScalarType.from_value(torch_c_value_with_type_float).onnx_type()\n TensorProtoDataType.FLOAT\nJitScalarType.from_dtype(torch.get_default_dtype).onnx_type()\n TensorProtoDataType.FLOAT\n\n\n\ndtype()\n Convert a JitScalarType to a torch dtype.\n\n Return type:\n *dtype*\n\nclassmethod from_dtype(dtype)\n Convert a torch dtype to JitScalarType.\n\n Note: DO NOT USE this API when *dtype* comes from a\n *torch._C.Value.type()* calls.\n A \"RuntimeError: INTERNAL ASSERT FAILED at\n \"../aten/src/ATen/core/jit_type_base.h\" can be raised in\n several scenarios where shape info is not present. Instead\n use *from_value* API which is safer.\n", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.JitScalarType.html", "category": "pytorch docs"} {"text": "use from_value API which is safer.\n Parameters:\n **dtype** (*Optional**[**dtype**]*) -- A torch.dtype to\n create a JitScalarType from\n\n Returns:\n JitScalarType\n\n Raises:\n **OnnxExporterError** -- if dtype is not a valid torch.dtype\n or if it is None.\n\n Return type:\n *JitScalarType*\n\nclassmethod from_value(value, default=None)\n Create a JitScalarType from an value's scalar type.\n\n Parameters:\n * **value** (*Union**[**None**, **Value**, **Tensor**]*) --\n An object to fetch scalar type from.\n\n * **default** -- The JitScalarType to return if a valid\n scalar cannot be fetched from value\n\n Returns:\n JitScalarType.\n\n Raises:\n * **OnnxExporterError** -- if value does not have a valid\n scalar type and default is None.\n\n * **SymbolicValueError** -- when value.type()'s info are\n empty and default is None\n\n Return type:\n", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.JitScalarType.html", "category": "pytorch docs"} {"text": "Return type:\n JitScalarType\nonnx_compatible()\n Return whether this JitScalarType is compatible with ONNX.\n\n Return type:\n bool\n\nonnx_type()\n Convert a JitScalarType to an ONNX data type.\n\n Return type:\n *TensorProtoDataType*\n\nscalar_name()\n Convert a JitScalarType to a JIT scalar type name.\n\n Return type:\n *Literal*['Byte', 'Char', 'Double', 'Float', 'Half', 'Int',\n 'Long', 'Short', 'Bool', 'ComplexHalf', 'ComplexFloat',\n 'ComplexDouble', 'QInt8', 'QUInt8', 'QInt32', 'BFloat16',\n 'Undefined']\n\ntorch_name()\n Convert a JitScalarType to a torch type name.\n\n Return type:\n *Literal*['bool', 'uint8_t', 'int8_t', 'double', 'float',\n 'half', 'int', 'int64_t', 'int16_t', 'complex32',\n 'complex64', 'complex128', 'qint8', 'quint8', 
'qint32',\n 'bfloat16']\n", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.JitScalarType.html", "category": "pytorch docs"} {"text": "torch.Tensor.lu\nTensor.lu(pivot=True, get_infos=False)\nSee \"torch.lu()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lu.html", "category": "pytorch docs"} {"text": "torch.foreach_sin\ntorch.foreach_sin(self: List[Tensor]) -> None\nApply \"torch.sin()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sin_.html", "category": "pytorch docs"} {"text": "torch.cuda.comm.scatter\ntorch.cuda.comm.scatter(tensor, devices=None, chunk_sizes=None, dim=0, streams=None, *, out=None)\nScatters tensor across multiple GPUs.\nParameters:\n * tensor (Tensor) -- tensor to scatter. Can be on CPU or\n GPU.\n * **devices** (*Iterable**[**torch.device**, **str** or\n **int**]**, **optional*) -- an iterable of GPU devices, among\n which to scatter.\n\n * **chunk_sizes** (*Iterable**[**int**]**, **optional*) -- sizes\n of chunks to be placed on each device. It should match\n \"devices\" in length and sums to \"tensor.size(dim)\". If not\n specified, \"tensor\" will be divided into equal chunks.\n\n * **dim** (*int**, **optional*) -- A dimension along which to\n chunk \"tensor\". Default: \"0\".\n\n * **streams** (*Iterable**[**Stream**]**, **optional*) -- an\n iterable of Streams, among which to execute the scatter. If\n not specified, the default stream will be utilized.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.scatter.html", "category": "pytorch docs"} {"text": "\nout (Sequence[Tensor], optional, keyword-\n only) -- the GPU tensors to store output results. Sizes of\n these tensors must match that of \"tensor\", except for \"dim\",\n where the total size must sum to \"tensor.size(dim)\".\n\nNote:\n Exactly one of \"devices\" and \"out\" must be specified. When \"out\"\n is specified, \"chunk_sizes\" must not be specified and will be\n inferred from sizes of \"out\".\n\nReturns:\n * If \"devices\" is specified,\n a tuple containing chunks of \"tensor\", placed on \"devices\".\n * If \"out\" is specified,\n a tuple containing \"out\" tensors, each containing a chunk\n of \"tensor\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.scatter.html", "category": "pytorch docs"} {"text": "torch.sparse.log_softmax\ntorch.sparse.log_softmax(input, dim, *, dtype=None) -> Tensor\nApplies a softmax function followed by logarithm.\nSee \"softmax\" for more details.\nParameters:\n * input (Tensor) -- input\n * **dim** (*int*) -- A dimension along which softmax will be\n computed.\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.log_softmax.html", "category": "pytorch docs"} {"text": "torch.Tensor.atan2_\nTensor.atan2_(other) -> Tensor\nIn-place version of \"atan2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.atan2_.html", "category": "pytorch docs"} {"text": "torch.Tensor.cos\nTensor.cos() -> Tensor\nSee \"torch.cos()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cos.html", "category": "pytorch docs"} {"text": "torch.inner\ntorch.inner(input, other, *, out=None) -> Tensor\nComputes the dot product for 1D tensors. 
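Supplementing the "torch.foreach_sin" entry above: the underscores in its name appear to have been lost to formatting, and its source URL suggests the in-place spelling used in this sketch. The input values are illustrative only:

    >>> import torch
    >>> xs = [torch.zeros(2), torch.zeros(3)]
    >>> torch._foreach_sin_(xs)   # modifies every tensor in the list in place
    >>> xs
    [tensor([0., 0.]), tensor([0., 0., 0.])]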
For higher dimensions,\n sums the product of elements from \"input\" and \"other\" along their\n last dimension.\nNote:\n If either \"input\" or \"other\" is a scalar, the result is\n equivalent to *torch.mul(input, other)*.If both \"input\" and\n \"other\" are non-scalars, the size of their last dimension must\n match and the result is equivalent to *torch.tensordot(input,\n other, dims=([-1], [-1]))*\n\nParameters:\n * input (Tensor) -- First input tensor\n * **other** (*Tensor*) -- Second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- Optional output tensor to\n write result into. The output shape is input.shape[:-1] +\n other.shape[:-1].\nExample:\n # Dot product\n >>> torch.inner(torch.tensor([1, 2, 3]), torch.tensor([0, 2, 1]))\n tensor(7)\n\n # Multidimensional input tensors\n", "source": "https://pytorch.org/docs/stable/generated/torch.inner.html", "category": "pytorch docs"} {"text": "Multidimensional input tensors\n >>> a = torch.randn(2, 3)\n >>> a\n tensor([[0.8173, 1.0874, 1.1784],\n [0.3279, 0.1234, 2.7894]])\n >>> b = torch.randn(2, 4, 3)\n >>> b\n tensor([[[-0.4682, -0.7159, 0.1506],\n [ 0.4034, -0.3657, 1.0387],\n [ 0.9892, -0.6684, 0.1774],\n [ 0.9482, 1.3261, 0.3917]],\n\n [[ 0.4537, 0.7493, 1.1724],\n [ 0.2291, 0.5749, -0.2267],\n [-0.7920, 0.3607, -0.3701],\n [ 1.3666, -0.5850, -1.7242]]])\n >>> torch.inner(a, b)\n tensor([[[-0.9837, 1.1560, 0.2907, 2.6785],\n [ 2.5671, 0.5452, -0.6912, -1.5509]],\n\n [[ 0.1782, 2.9843, 0.7366, 1.5672],\n [ 3.5115, -0.4864, -1.2476, -4.4337]]])\n\n # Scalar input\n >>> torch.inner(a, torch.tensor(2))\n tensor([[1.6347, 2.1748, 2.3567],\n [0.6558, 0.2469, 5.5787]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.inner.html", "category": "pytorch docs"} {"text": "torch.Tensor.cosh\nTensor.cosh() -> Tensor\nSee \"torch.cosh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cosh.html", "category": "pytorch docs"} {"text": "torch.Tensor.t_\nTensor.t_() -> Tensor\nIn-place version of \"t()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.t_.html", "category": "pytorch docs"} {"text": "torch.Tensor.cholesky\nTensor.cholesky(upper=False) -> Tensor\nSee \"torch.cholesky()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cholesky.html", "category": "pytorch docs"} {"text": "LSTMCell\nclass torch.nn.LSTMCell(input_size, hidden_size, bias=True, device=None, dtype=None)\nA long short-term memory (LSTM) cell.\n \\begin{array}{ll} i = \\sigma(W_{ii} x + b_{ii} + W_{hi} h +\n b_{hi}) \\\\ f = \\sigma(W_{if} x + b_{if} + W_{hf} h + b_{hf}) \\\\\n g = \\tanh(W_{ig} x + b_{ig} + W_{hg} h + b_{hg}) \\\\ o =\n \\sigma(W_{io} x + b_{io} + W_{ho} h + b_{ho}) \\\\ c' = f * c + i\n * g \\\\ h' = o * \\tanh(c') \\\\ \\end{array}\n\nwhere \\sigma is the sigmoid function, and * is the Hadamard\n product.\nParameters:\n * input_size (int) -- The number of expected features in\n the input x\n * **hidden_size** (*int*) -- The number of features in the\n hidden state *h*\n\n * **bias** (*bool*) -- If \"False\", then the layer does not use\n bias weights *b_ih* and *b_hh*. 
Default: \"True\"\n\nInputs: input, (h_0, c_0)\n * input of shape (batch, input_size) or (input_size):\n tensor containing input features", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTMCell.html", "category": "pytorch docs"} {"text": "tensor containing input features\n * **h_0** of shape *(batch, hidden_size)* or *(hidden_size)*:\n tensor containing the initial hidden state\n\n * **c_0** of shape *(batch, hidden_size)* or *(hidden_size)*:\n tensor containing the initial cell state\n\n If *(h_0, c_0)* is not provided, both **h_0** and **c_0**\n default to zero.\n\nOutputs: (h_1, c_1)\n * h_1 of shape (batch, hidden_size) or (hidden_size):\n tensor containing the next hidden state\n * **c_1** of shape *(batch, hidden_size)* or *(hidden_size)*:\n tensor containing the next cell state\n\nVariables:\n * weight_ih (torch.Tensor) -- the learnable input-hidden\n weights, of shape (4hidden_size, input_size)*\n * **weight_hh** (*torch.Tensor*) -- the learnable hidden-hidden\n weights, of shape *(4*hidden_size, hidden_size)*\n\n * **bias_ih** -- the learnable input-hidden bias, of shape\n *(4*hidden_size)*\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTMCell.html", "category": "pytorch docs"} {"text": "(4hidden_size)*\n * **bias_hh** -- the learnable hidden-hidden bias, of shape\n *(4*hidden_size)*\n\nNote:\n All the weights and biases are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden\\_size}}\n\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\nExamples:\n >>> rnn = nn.LSTMCell(10, 20) # (input_size, hidden_size)\n >>> input = torch.randn(2, 3, 10) # (time_steps, batch, input_size)\n >>> hx = torch.randn(3, 20) # (batch, hidden_size)\n >>> cx = torch.randn(3, 20)\n >>> output = []\n >>> for i in range(input.size()[0]):\n ... hx, cx = rnn(input[i], (hx, cx))\n ... output.append(hx)\n >>> output = torch.stack(output, dim=0)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTMCell.html", "category": "pytorch docs"} {"text": "conv3d\nclass torch.ao.nn.quantized.functional.conv3d(input, weight, bias, stride=1, padding=0, dilation=1, groups=1, padding_mode='zeros', scale=1.0, zero_point=0, dtype=torch.quint8)\nApplies a 3D convolution over a quantized 3D input composed of\n several input planes.\nSee \"Conv3d\" for details and output shape.\nParameters:\n * input -- quantized input tensor of shape (\\text{minibatch}\n , \\text{in_channels} , iD , iH , iW)\n * **weight** -- quantized filters of shape (\\text{out\\_channels}\n , \\frac{\\text{in\\_channels}}{\\text{groups}} , kD , kH , kW)\n\n * **bias** -- **non-quantized** bias tensor of shape\n (\\text{out\\_channels}). The tensor type must be *torch.float*.\n\n * **stride** -- the stride of the convolving kernel. Can be a\n single number or a tuple *(sD, sH, sW)*. Default: 1\n\n * **padding** -- implicit paddings on both sides of the input.\n Can be a single number or a tuple *(padD, padH, padW)*.\n Default: 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv3d.html", "category": "pytorch docs"} {"text": "Default: 0\n * **dilation** -- the spacing between kernel elements. Can be a\n single number or a tuple *(dD, dH, dW)*. Default: 1\n\n * **groups** -- split input into groups, \\text{in\\_channels}\n should be divisible by the number of groups. Default: 1\n\n * **padding_mode** -- the padding mode to use. 
Only \"zeros\" is\n supported for quantized convolution at the moment. Default:\n \"zeros\"\n\n * **scale** -- quantization scale for the output. Default: 1.0\n\n * **zero_point** -- quantization zero_point for the output.\n Default: 0\n\n * **dtype** -- quantization data type to use. Default:\n \"torch.quint8\"\n\nExamples:\n >>> from torch.ao.nn.quantized import functional as qF\n >>> filters = torch.randn(8, 4, 3, 3, 3, dtype=torch.float)\n >>> inputs = torch.randn(1, 4, 5, 5, 5, dtype=torch.float)\n >>> bias = torch.randn(8, dtype=torch.float)\n >>>\n >>> scale, zero_point = 1.0, 0\n >>> dtype_inputs = torch.quint8\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv3d.html", "category": "pytorch docs"} {"text": "\n\n\ndtype_inputs = torch.quint8\n >>> dtype_filters = torch.qint8\n >>>\n >>> q_filters = torch.quantize_per_tensor(filters, scale, zero_point, dtype_filters)\n >>> q_inputs = torch.quantize_per_tensor(inputs, scale, zero_point, dtype_inputs)\n >>> qF.conv3d(q_inputs, q_filters, bias, padding=1, scale=scale, zero_point=zero_point)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.conv3d.html", "category": "pytorch docs"} {"text": "torch.nn.utils.prune.is_pruned\ntorch.nn.utils.prune.is_pruned(module)\nCheck whether \"module\" is pruned by looking for \"forward_pre_hooks\"\n in its modules that inherit from the \"BasePruningMethod\".\nParameters:\n module (nn.Module) -- object that is either pruned or\n unpruned\nReturns:\n binary answer to whether \"module\" is pruned.\n-[ Examples ]-\n\n\n\nfrom torch.nn.utils import prune\nm = nn.Linear(5, 7)\nprint(prune.is_pruned(m))\n False\nprune.random_unstructured(m, name='weight', amount=0.2)\nprint(prune.is_pruned(m))\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.is_pruned.html", "category": "pytorch docs"} {"text": "torch.Tensor.ndim\nTensor.ndim\nAlias for \"dim()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ndim.html", "category": "pytorch docs"} {"text": "max_pool1d\nclass torch.ao.nn.quantized.functional.max_pool1d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)\nApplies a 1D max pooling over a quantized input signal composed of\n several quantized input planes.\nNote:\n The input quantization parameters are propagated to the output.\n\nSee \"MaxPool1d\" for details.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.max_pool1d.html", "category": "pytorch docs"} {"text": "torch.cuda.ipc_collect\ntorch.cuda.ipc_collect()\nForce collects GPU memory after it has been released by CUDA IPC.\nNote:\n Checks if any sent CUDA tensors could be cleaned from the memory.\n Force closes shared memory file used for reference counting if\n there is no active counters. Useful when the producer process\n stopped actively sending tensors and want to release unused\n memory.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.ipc_collect.html", "category": "pytorch docs"} {"text": "torch.Tensor.conj_physical_\nTensor.conj_physical_() -> Tensor\nIn-place version of \"conj_physical()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.conj_physical_.html", "category": "pytorch docs"} {"text": "torch.Tensor.view_as\nTensor.view_as(other) -> Tensor\nView this tensor as the same size as \"other\". 
\"self.view_as(other)\"\n is equivalent to \"self.view(other.size())\".\nPlease see \"view()\" for more information about \"view\".\nParameters:\n other (\"torch.Tensor\") -- The result tensor has the same\n size as \"other\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view_as.html", "category": "pytorch docs"} {"text": "torch.Tensor.mvlgamma_\nTensor.mvlgamma_(p) -> Tensor\nIn-place version of \"mvlgamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mvlgamma_.html", "category": "pytorch docs"} {"text": "torch.add\ntorch.add(input, other, *, alpha=1, out=None) -> Tensor\nAdds \"other\", scaled by \"alpha\", to \"input\".\n \\text{{out}}_i = \\text{{input}}_i + \\text{{alpha}} \\times\n \\text{{other}}_i\n\nSupports broadcasting to a common shape, type promotion, and\n integer, float, and complex inputs.\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor** or **Number*) -- the tensor or number to\n add to \"input\".\n\nKeyword Arguments:\n * alpha (Number) -- the multiplier for \"other\".\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExamples:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.0202, 1.0985, 1.3506, -0.6056])\n >>> torch.add(a, 20)\n tensor([ 20.0202, 21.0985, 21.3506, 19.3944])\n\n >>> b = torch.randn(4)\n >>> b\n tensor([-0.9732, -0.3497, 0.6245, 0.4022])\n >>> c = torch.randn(4, 1)\n >>> c\n tensor([[ 0.3743],\n [-1.7724],\n", "source": "https://pytorch.org/docs/stable/generated/torch.add.html", "category": "pytorch docs"} {"text": "tensor([[ 0.3743],\n [-1.7724],\n [-0.5811],\n [-0.8017]])\n >>> torch.add(b, c, alpha=10)\n tensor([[ 2.7695, 3.3930, 4.3672, 4.1450],\n [-18.6971, -18.0736, -17.0994, -17.3216],\n [ -6.7845, -6.1610, -5.1868, -5.4090],\n [ -8.9902, -8.3667, -7.3925, -7.6147]])", "source": "https://pytorch.org/docs/stable/generated/torch.add.html", "category": "pytorch docs"} {"text": "torch.cuda.get_sync_debug_mode\ntorch.cuda.get_sync_debug_mode()\nReturns current value of debug mode for cuda synchronizing\n operations.\nReturn type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_sync_debug_mode.html", "category": "pytorch docs"} {"text": "torch.cat\ntorch.cat(tensors, dim=0, *, out=None) -> Tensor\nConcatenates the given sequence of \"seq\" tensors in the given\n dimension. All tensors must either have the same shape (except in\n the concatenating dimension) or be empty.\n\"torch.cat()\" can be seen as an inverse operation for\n \"torch.split()\" and \"torch.chunk()\".\n\"torch.cat()\" can be best understood via examples.\nParameters:\n * tensors (sequence of Tensors) -- any python sequence of\n tensors of the same type. 
Non-empty tensors provided must have\n the same shape, except in the cat dimension.\n * **dim** (*int**, **optional*) -- the dimension over which the\n tensors are concatenated\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> x = torch.randn(2, 3)\n >>> x\n tensor([[ 0.6580, -1.0969, -0.4614],\n [-0.1034, -0.5790, 0.1497]])\n >>> torch.cat((x, x, x), 0)\n tensor([[ 0.6580, -1.0969, -0.4614],\n", "source": "https://pytorch.org/docs/stable/generated/torch.cat.html", "category": "pytorch docs"} {"text": "tensor([[ 0.6580, -1.0969, -0.4614],\n [-0.1034, -0.5790, 0.1497],\n [ 0.6580, -1.0969, -0.4614],\n [-0.1034, -0.5790, 0.1497],\n [ 0.6580, -1.0969, -0.4614],\n [-0.1034, -0.5790, 0.1497]])\n >>> torch.cat((x, x, x), 1)\n tensor([[ 0.6580, -1.0969, -0.4614, 0.6580, -1.0969, -0.4614, 0.6580,\n -1.0969, -0.4614],\n [-0.1034, -0.5790, 0.1497, -0.1034, -0.5790, 0.1497, -0.1034,\n -0.5790, 0.1497]])", "source": "https://pytorch.org/docs/stable/generated/torch.cat.html", "category": "pytorch docs"} {"text": "torch.load\ntorch.load(f, map_location=None, pickle_module=pickle, , weights_only=False, *pickle_load_args)\nLoads an object saved with \"torch.save()\" from a file.\n\"torch.load()\" uses Python's unpickling facilities but treats\n storages, which underlie tensors, specially. They are first\n deserialized on the CPU and are then moved to the device they were\n saved from. If this fails (e.g. because the run time system doesn't\n have certain devices), an exception is raised. However, storages\n can be dynamically remapped to an alternative set of devices using\n the \"map_location\" argument.\nIf \"map_location\" is a callable, it will be called once for each\n serialized storage with two arguments: storage and location. The\n storage argument will be the initial deserialization of the\n storage, residing on the CPU. Each serialized storage has a\n location tag associated with it which identifies the device it was\n saved from, and this tag is the second argument passed to", "source": "https://pytorch.org/docs/stable/generated/torch.load.html", "category": "pytorch docs"} {"text": "\"map_location\". The builtin location tags are \"'cpu'\" for CPU\n tensors and \"'cuda:device_id'\" (e.g. \"'cuda:2'\") for CUDA tensors.\n \"map_location\" should return either \"None\" or a storage. If\n \"map_location\" returns a storage, it will be used as the final\n deserialized object, already moved to the right device. 
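The claim in the "torch.cat()" entry above that concatenation can be seen as an inverse of "torch.chunk()" can be checked with a short sketch (values illustrative only):

    >>> import torch
    >>> x = torch.arange(6)
    >>> torch.equal(torch.cat(torch.chunk(x, 3), dim=0), x)
    True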
Otherwise,\n \"torch.load()\" will fall back to the default behavior, as if\n \"map_location\" wasn't specified.\nIf \"map_location\" is a \"torch.device\" object or a string containing\n a device tag, it indicates the location where all tensors should be\n loaded.\nOtherwise, if \"map_location\" is a dict, it will be used to remap\n location tags appearing in the file (keys), to ones that specify\n where to put the storages (values).\nUser extensions can register their own location tags and tagging\n and deserialization methods using\n \"torch.serialization.register_package()\".\nParameters:\n * f (Union[str, PathLike, BinaryIO*,", "source": "https://pytorch.org/docs/stable/generated/torch.load.html", "category": "pytorch docs"} {"text": "IO[bytes]*]) -- a file-like object (has to implement\n \"read()\", \"readline()\", \"tell()\", and \"seek()\"), or a string\n or os.PathLike object containing a file name\n * **map_location**\n (*Optional**[**Union**[**Callable**[**[**Tensor**, **str**]**,\n **Tensor**]**, **device**, **str**, **Dict**[**str**,\n **str**]**]**]*) -- a function, \"torch.device\", string or a\n dict specifying how to remap storage locations\n\n * **pickle_module** (*Optional**[**Any**]*) -- module used for\n unpickling metadata and objects (has to match the\n \"pickle_module\" used to serialize file)\n\n * **weights_only** (*bool*) -- Indicates whether unpickler\n should be restricted to loading only tensors, primitive types\n and dictionaries\n\n * **pickle_load_args** (*Any*) -- (Python 3 only) optional\n keyword arguments passed over to \"pickle_module.load()\" and\n \"pickle_module.Unpickler()\", e.g., \"errors=...\".\n\nReturn type:", "source": "https://pytorch.org/docs/stable/generated/torch.load.html", "category": "pytorch docs"} {"text": "Return type:\n Any\nWarning:\n \"torch.load()\" unless *weights_only* parameter is set to *True*,\n uses \"pickle\" module implicitly, which is known to be insecure.\n It is possible to construct malicious pickle data which will\n execute arbitrary code during unpickling. Never load data that\n could have come from an untrusted source in an unsafe mode, or\n that could have been tampered with. **Only load data you trust**.\n\nNote:\n When you call \"torch.load()\" on a file which contains GPU\n tensors, those tensors will be loaded to GPU by default. You can\n call \"torch.load(.., map_location='cpu')\" and then\n \"load_state_dict()\" to avoid GPU RAM surge when loading a model\n checkpoint.\n\nNote:\n By default, we decode byte strings as \"utf-8\". 
This is to avoid\n a common error case \"UnicodeDecodeError: 'ascii' codec can't\n decode byte 0x...\" when loading files saved by Python 2 in Python\n", "source": "https://pytorch.org/docs/stable/generated/torch.load.html", "category": "pytorch docs"} {"text": "\nIf this default is incorrect, you may use an extra \"encoding\"\n keyword argument to specify how these objects should be loaded,\n e.g., \"encoding='latin1'\" decodes them to strings using \"latin1\"\n encoding, and \"encoding='bytes'\" keeps them as byte arrays which\n can be decoded later with \"byte_array.decode(...)\".\n\n-[ Example ]-\n\n\n\ntorch.load('tensors.pt')\n # Load all tensors onto the CPU\ntorch.load('tensors.pt', map_location=torch.device('cpu'))\n # Load all tensors onto the CPU, using a function\ntorch.load('tensors.pt', map_location=lambda storage, loc: storage)\n # Load all tensors onto GPU 1\ntorch.load('tensors.pt', map_location=lambda storage, loc: storage.cuda(1))\n # Map tensors from GPU 1 to GPU 0\ntorch.load('tensors.pt', map_location={'cuda:1': 'cuda:0'})\n # Load tensor from io.BytesIO object\nwith open('tensor.pt', 'rb') as f:\n ... buffer = io.BytesIO(f.read())\ntorch.load(buffer)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.load.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.load(buffer)\n # Load a module with 'ascii' encoding for unpickling\ntorch.load('module.pt', encoding='ascii')\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.load.html", "category": "pytorch docs"} {"text": "torch.Tensor.unflatten\nTensor.unflatten(dim, sizes) -> Tensor\nSee \"torch.unflatten()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unflatten.html", "category": "pytorch docs"} {"text": "torch.quantile\ntorch.quantile(input, q, dim=None, keepdim=False, *, interpolation='linear', out=None) -> Tensor\nComputes the q-th quantiles of each row of the \"input\" tensor along\n the dimension \"dim\".\nTo compute the quantile, we map q in [0, 1] to the range of indices\n [0, n] to find the location of the quantile in the sorted input. If\n the quantile lies between two data points \"a < b\" with indices \"i\"\n and \"j\" in the sorted order, result is computed according to the\n given \"interpolation\" method as follows:\n\n\n\"linear\": \"a + (b - a) * fraction\", where \"fraction\" is the\n fractional part of the computed quantile index.\n\n\n\"lower\": \"a\".\n\n\n\"higher\": \"b\".\n\n\n\"nearest\": \"a\" or \"b\", whichever's index is closer to the\n computed quantile index (rounding down for .5 fractions).\n\n\n\"midpoint\": \"(a + b) / 2\".\n\n\nIf \"q\" is a 1D tensor, the first dimension of the output represents\n the quantiles and has size equal to the size of \"q\", the remaining", "source": "https://pytorch.org/docs/stable/generated/torch.quantile.html", "category": "pytorch docs"} {"text": "dimensions are what remains from the reduction.\nNote:\n By default \"dim\" is \"None\" resulting in the \"input\" tensor being\n flattened before computation.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **q** (*float** or **Tensor*) -- a scalar or 1D tensor of\n values in the range [0, 1].\n\n * **dim** (*int*) -- the dimension to reduce.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n * interpolation (str) -- interpolation method to use when\n the desired quantile lies between two data points. Can be\n \"linear\", \"lower\", \"higher\", \"midpoint\" and \"nearest\". 
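A minimal sketch for the "Tensor.unflatten()" entry above, assuming the (dim, sizes) signature it lists; the shapes are illustrative only:

    >>> import torch
    >>> t = torch.randn(2, 6)
    >>> t.unflatten(1, (2, 3)).shape
    torch.Size([2, 2, 3])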
Default\n is \"linear\".\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> a = torch.randn(2, 3)\n >>> a\n tensor([[ 0.0795, -1.2117, 0.9765],\n [ 1.1707, 0.6706, 0.4884]])\n >>> q = torch.tensor([0.25, 0.5, 0.75])\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantile.html", "category": "pytorch docs"} {"text": "\n\n\nq = torch.tensor([0.25, 0.5, 0.75])\n >>> torch.quantile(a, q, dim=1, keepdim=True)\n tensor([[[-0.5661],\n [ 0.5795]],\n\n\n\n [[ 0.0795],\n [ 0.6706]],\n\n [[ 0.5280],\n [ 0.9206]]])\n >>> torch.quantile(a, q, dim=1, keepdim=True).shape\n torch.Size([3, 2, 1])\n >>> a = torch.arange(4.)\n >>> a\n tensor([0., 1., 2., 3.])\n >>> torch.quantile(a, 0.6, interpolation='linear')\n tensor(1.8000)\n >>> torch.quantile(a, 0.6, interpolation='lower')\n tensor(1.)\n >>> torch.quantile(a, 0.6, interpolation='higher')\n tensor(2.)\n >>> torch.quantile(a, 0.6, interpolation='midpoint')\n tensor(1.5000)\n >>> torch.quantile(a, 0.6, interpolation='nearest')\n tensor(2.)\n >>> torch.quantile(a, 0.4, interpolation='nearest')\n tensor(1.)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantile.html", "category": "pytorch docs"} {"text": "torch.Tensor.to\nTensor.to(args, *kwargs) -> Tensor\nPerforms Tensor dtype and/or device conversion. A \"torch.dtype\" and\n \"torch.device\" are inferred from the arguments of \"self.to(args,\n *kwargs)\".\nNote:\n If the \"self\" Tensor already has the correct \"torch.dtype\" and\n \"torch.device\", then \"self\" is returned. Otherwise, the returned\n tensor is a copy of \"self\" with the desired \"torch.dtype\" and\n \"torch.device\".\n\nHere are the ways to call \"to\":\nto(dtype, non_blocking=False, copy=False, memory_format=torch.preserve_format) -> Tensor\n Returns a Tensor with the specified \"dtype\"\n\n Args:\n memory_format (\"torch.memory_format\", optional): the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n\ntorch.to(device=None, dtype=None, non_blocking=False, copy=False, memory_format=torch.preserve_format) -> Tensor\n Returns a Tensor with the specified \"device\" and (optional)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to.html", "category": "pytorch docs"} {"text": "\"dtype\". If \"dtype\" is \"None\" it is inferred to be\n \"self.dtype\". When \"non_blocking\", tries to convert\n asynchronously with respect to the host if possible, e.g.,\n converting a CPU Tensor with pinned memory to a CUDA Tensor.\n When \"copy\" is set, a new Tensor is created even when the\n Tensor already matches the desired conversion.\n Args:\n memory_format (\"torch.memory_format\", optional): the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n\ntorch.to(other, non_blocking=False, copy=False) -> Tensor\n Returns a Tensor with same \"torch.dtype\" and \"torch.device\"\n as the Tensor \"other\". 
When \"non_blocking\", tries to convert\n asynchronously with respect to the host if possible, e.g.,\n converting a CPU Tensor with pinned memory to a CUDA Tensor.\n When \"copy\" is set, a new Tensor is created even when the\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to.html", "category": "pytorch docs"} {"text": "Tensor already matches the desired conversion.\nExample:\n >>> tensor = torch.randn(2, 2) # Initially dtype=float32, device=cpu\n >>> tensor.to(torch.float64)\n tensor([[-0.5044, 0.0005],\n [ 0.3310, -0.0584]], dtype=torch.float64)\n\n >>> cuda0 = torch.device('cuda:0')\n >>> tensor.to(cuda0)\n tensor([[-0.5044, 0.0005],\n [ 0.3310, -0.0584]], device='cuda:0')\n\n >>> tensor.to(cuda0, dtype=torch.float64)\n tensor([[-0.5044, 0.0005],\n [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')\n\n >>> other = torch.randn((), dtype=torch.float64, device=cuda0)\n >>> tensor.to(other, non_blocking=True)\n tensor([[-0.5044, 0.0005],\n [ 0.3310, -0.0584]], dtype=torch.float64, device='cuda:0')\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to.html", "category": "pytorch docs"} {"text": "torch.Tensor.gcd\nTensor.gcd(other) -> Tensor\nSee \"torch.gcd()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.gcd.html", "category": "pytorch docs"} {"text": "torch.Tensor.baddbmm\nTensor.baddbmm(batch1, batch2, *, beta=1, alpha=1) -> Tensor\nSee \"torch.baddbmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.baddbmm.html", "category": "pytorch docs"} {"text": "add_quant_dequant\nclass torch.quantization.add_quant_dequant(module)\nWrap the leaf child module in QuantWrapper if it has a valid\n qconfig Note that this function will modify the children of module\n inplace and it can return a new module which wraps the input module\n as well.\nParameters:\n * module -- input module with qconfig attributes for all the\n leaf modules\n * **quantize** (*that we want to*) --\n\nReturns:\n Either the inplace modified module with submodules wrapped in\n QuantWrapper based on qconfig or a new QuantWrapper module\n which wraps the input module, the latter case only happens when\n the input module is a leaf module and we want to quantize it.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.add_quant_dequant.html", "category": "pytorch docs"} {"text": "RecordingObserver\nclass torch.quantization.observer.RecordingObserver(dtype=torch.quint8, **kwargs)\nThe module is mainly for debug and records the tensor values during\n runtime.\nParameters:\n * dtype -- Quantized data type\n * **qscheme** -- Quantization scheme to be used\n\n * **reduce_range** -- Reduces the range of the quantized data\n type by 1 bit\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.RecordingObserver.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_signed\nTensor.is_signed() -> bool\nReturns True if the data type of \"self\" is a signed data type.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_signed.html", "category": "pytorch docs"} {"text": "torch.broadcast_to\ntorch.broadcast_to(input, shape) -> Tensor\nBroadcasts \"input\" to the shape \"shape\". Equivalent to calling\n \"input.expand(shape)\". 
See \"expand()\" for details.\nParameters:\n * input (Tensor) -- the input tensor.\n * **shape** (list, tuple, or \"torch.Size\") -- the new shape.\n\nExample:\n >>> x = torch.tensor([1, 2, 3])\n >>> torch.broadcast_to(x, (3, 3))\n tensor([[1, 2, 3],\n [1, 2, 3],\n [1, 2, 3]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.broadcast_to.html", "category": "pytorch docs"} {"text": "Hardswish\nclass torch.nn.Hardswish(inplace=False)\nApplies the Hardswish function, element-wise, as described in the\n paper: Searching for MobileNetV3.\nHardswish is defined as:\n \\text{Hardswish}(x) = \\begin{cases} 0 & \\text{if~} x \\le -3,\n \\\\ x & \\text{if~} x \\ge +3, \\\\ x \\cdot (x + 3) /6 &\n \\text{otherwise} \\end{cases}\n\nParameters:\n inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.Hardswish()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Hardswish.html", "category": "pytorch docs"} {"text": "torch.Tensor.greater_\nTensor.greater_(other) -> Tensor\nIn-place version of \"greater()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.greater_.html", "category": "pytorch docs"} {"text": "ReduceLROnPlateau\nclass torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer, mode='min', factor=0.1, patience=10, threshold=0.0001, threshold_mode='rel', cooldown=0, min_lr=0, eps=1e-08, verbose=False)\nReduce learning rate when a metric has stopped improving. Models\n often benefit from reducing the learning rate by a factor of 2-10\n once learning stagnates. This scheduler reads a metrics quantity\n and if no improvement is seen for a 'patience' number of epochs,\n the learning rate is reduced.\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **mode** (*str*) -- One of *min*, *max*. In *min* mode, lr\n will be reduced when the quantity monitored has stopped\n decreasing; in *max* mode it will be reduced when the quantity\n monitored has stopped increasing. Default: 'min'.\n\n * **factor** (*float*) -- Factor by which the learning rate will\n be reduced. new_lr = lr * factor. Default: 0.1.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html", "category": "pytorch docs"} {"text": "\n\npatience (int) -- Number of epochs with no improvement\n after which learning rate will be reduced. For example, if\n patience = 2, then we will ignore the first 2 epochs with no\n improvement, and will only decrease the LR after the 3rd epoch\n if the loss still hasn't improved then. Default: 10.\n\n\nthreshold (float) -- Threshold for measuring the new\n optimum, to only focus on significant changes. Default: 1e-4.\n\n\nthreshold_mode (str) -- One of rel, abs. In rel\n mode, dynamic_threshold = best * ( 1 + threshold ) in 'max'\n mode or best * ( 1 - threshold ) in min mode. In abs mode,\n dynamic_threshold = best + threshold in max mode or best -\n threshold in min mode. Default: 'rel'.\n\n\ncooldown (int) -- Number of epochs to wait before\n resuming normal operation after lr has been reduced. Default:\n 0.\n\n\nmin_lr (float or list) -- A scalar or a list of\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html", "category": "pytorch docs"} {"text": "scalars. 
A lower bound on the learning rate of all param\n groups or each group respectively. Default: 0.\n * **eps** (*float*) -- Minimal decay applied to lr. If the\n difference between new and old lr is smaller than eps, the\n update is ignored. Default: 1e-8.\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n\n-[ Example ]-\n\n\n\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)\nscheduler = ReduceLROnPlateau(optimizer, 'min')\nfor epoch in range(10):\n train(...)\n val_loss = validate(...)\n # Note that step should be called after validate()\n scheduler.step(val_loss)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ReduceLROnPlateau.html", "category": "pytorch docs"} {"text": "UninitializedBuffer\nclass torch.nn.parameter.UninitializedBuffer(requires_grad=False, device=None, dtype=None)\nA buffer that is not initialized.\nUninitialized Buffer is a a special case of \"torch.Tensor\" where\n the shape of the data is still unknown.\nUnlike a \"torch.Tensor\", uninitialized parameters hold no data and\n attempting to access some properties, like their shape, will throw\n a runtime error. The only operations that can be performed on a\n uninitialized parameter are changing its datatype, moving it to a\n different device and converting it to a regular \"torch.Tensor\".\nThe default device or dtype to use when the buffer is materialized\n can be set during construction using e.g. \"device='cuda'\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parameter.UninitializedBuffer.html", "category": "pytorch docs"} {"text": "ConvTranspose3d\nclass torch.ao.nn.quantized.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\nApplies a 3D transposed convolution operator over an input image\n composed of several input planes. For details on input arguments,\n parameters, and implementation see \"ConvTranspose3d\".\nNote:\n Currently only the FBGEMM engine is implemented. 
Please, set the\n *torch.backends.quantized.engine = 'fbgemm'*\n\nFor special notes, please, see \"Conv3d\"\nVariables:\n * weight (Tensor) -- packed tensor derived from the\n learnable weight parameter.\n * **scale** (*Tensor*) -- scalar for the output scale\n\n * **zero_point** (*Tensor*) -- scalar for the output zero point\n\nSee \"ConvTranspose3d\" for other attributes.\nExamples:\n >>> torch.backends.quantized.engine = 'fbgemm'\n >>> from torch.nn import quantized as nnq\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose3d.html", "category": "pytorch docs"} {"text": "\n\n\nfrom torch.nn import quantized as nnq\n >>> # With cubic kernels and equal stride\n >>> m = nnq.ConvTranspose3d(16, 33, 3, stride=2)\n >>> # non-cubic kernels and unequal stride and with padding\n >>> m = nnq.ConvTranspose3d(16, 33, (3, 3, 5), stride=(2, 1, 1), padding=(4, 2, 2))\n >>> input = torch.randn(20, 16, 50, 100, 100)\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> output = m(q_input)\n >>> # exact output size can be also specified as an argument\n >>> input = torch.randn(1, 16, 12, 12, 12)\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> downsample = nnq.Conv3d(16, 16, 3, stride=2, padding=1)\n >>> upsample = nnq.ConvTranspose3d(16, 16, 3, stride=2, padding=1)\n >>> h = downsample(q_input)\n >>> h.size()\n torch.Size([1, 16, 6, 6, 6])\n >>> output = upsample(h, output_size=input.size())\n >>> output.size()\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose3d.html", "category": "pytorch docs"} {"text": "\n\n\noutput.size()\n torch.Size([1, 16, 12, 12, 12])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose3d.html", "category": "pytorch docs"} {"text": "torch.cuda.is_initialized\ntorch.cuda.is_initialized()\nReturns whether PyTorch's CUDA state has been initialized.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.is_initialized.html", "category": "pytorch docs"} {"text": "torch.autograd.function.FunctionCtx.mark_dirty\nFunctionCtx.mark_dirty(*args)\nMarks given tensors as modified in an in-place operation.\nThis should be called at most once, only from inside the\n \"forward()\" method, and all arguments should be inputs.\nEvery tensor that's been modified in-place in a call to \"forward()\"\n should be given to this function, to ensure correctness of our\n checks. 
It doesn't matter whether the function is called before or\n after modification.\nExamples::\n >>> class Inplace(Function):\n >>> @staticmethod\n >>> def forward(ctx, x):\n >>> x_npy = x.numpy() # x_npy shares storage with x\n >>> x_npy += 1\n >>> ctx.mark_dirty(x)\n >>> return x\n >>>\n >>> @staticmethod\n >>> @once_differentiable\n >>> def backward(ctx, grad_output):\n >>> return grad_output\n >>>", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_dirty.html", "category": "pytorch docs"} {"text": "\n\n\n return grad_output\n >>>\n >>> a = torch.tensor(1., requires_grad=True, dtype=torch.double).clone()\n >>> b = a * a\n >>> Inplace.apply(a) # This would lead to wrong gradients!\n >>> # but the engine would not know unless we mark_dirty\n >>> b.backward() # RuntimeError: one of the variables needed for gradient\n >>> # computation has been modified by an inplace operation\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_dirty.html", "category": "pytorch docs"} {"text": "torch.Tensor.index_put\nTensor.index_put(indices, values, accumulate=False) -> Tensor\nOut-place version of \"index_put_()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_put.html", "category": "pytorch docs"} {"text": "torch.quantize_per_tensor\ntorch.quantize_per_tensor(input, scale, zero_point, dtype) -> Tensor\nConverts a float tensor to a quantized tensor with given scale and\n zero point.\nParameters:\n * input (Tensor) -- float tensor or list of tensors to\n quantize\n * **scale** (*float** or **Tensor*) -- scale to apply in\n quantization formula\n\n * **zero_point** (*int** or **Tensor*) -- offset in integer\n value that maps to float zero\n\n * **dtype** (\"torch.dtype\") -- the desired data type of returned\n tensor. Has to be one of the quantized dtypes: \"torch.quint8\",\n \"torch.qint8\", \"torch.qint32\"\n\nReturns:\n A newly quantized tensor or list of quantized tensors.\nReturn type:\n Tensor\nExample:\n >>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), 0.1, 10, torch.quint8)\n tensor([-1., 0., 1., 2.], size=(4,), dtype=torch.quint8,\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantize_per_tensor.html", "category": "pytorch docs"} {"text": "quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10)\n >>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), 0.1, 10, torch.quint8).int_repr()\n tensor([ 0, 10, 20, 30], dtype=torch.uint8)\n >>> torch.quantize_per_tensor([torch.tensor([-1.0, 0.0]), torch.tensor([-2.0, 2.0])],\n >>> torch.tensor([0.1, 0.2]), torch.tensor([10, 20]), torch.quint8)\n (tensor([-1., 0.], size=(2,), dtype=torch.quint8,\n quantization_scheme=torch.per_tensor_affine, scale=0.1, zero_point=10),\n tensor([-2., 2.], size=(2,), dtype=torch.quint8,\n quantization_scheme=torch.per_tensor_affine, scale=0.2, zero_point=20))\n >>> torch.quantize_per_tensor(torch.tensor([-1.0, 0.0, 1.0, 2.0]), torch.tensor(0.1), torch.tensor(10), torch.quint8)\n tensor([-1., 0., 1., 2.], size=(4,), dtype=torch.quint8,\n quantization_scheme=torch.per_tensor_affine, scale=0.10, zero_point=10)", "source": "https://pytorch.org/docs/stable/generated/torch.quantize_per_tensor.html", "category": "pytorch docs"} {"text": "torch.svd_lowrank\ntorch.svd_lowrank(A, q=6, niter=2, M=None)\nReturn the singular value decomposition \"(U, S, V)\" of a matrix,\n batches of matrices, or a sparse matrix A such that A \\approx U\n diag(S) V^T. 
In case M is given, then SVD is computed for the\n matrix A - M.\nNote:\n The implementation is based on the Algorithm 5.1 from Halko et\n al, 2009.\n\nNote:\n To obtain repeatable results, reset the seed for the pseudorandom\n number generator\n\nNote:\n The input is assumed to be a low-rank matrix.\n\nNote:\n In general, use the full-rank SVD implementation\n \"torch.linalg.svd()\" for dense matrices due to its 10-fold higher\n performance characteristics. The low-rank SVD will be useful for\n huge sparse matrices that \"torch.linalg.svd()\" cannot handle.\n\nArgs::\n A (Tensor): the input tensor of size (*, m, n)\n q (int, optional): a slightly overestimated rank of A.\n\n niter (int, optional): the number of subspace iterations to\n", "source": "https://pytorch.org/docs/stable/generated/torch.svd_lowrank.html", "category": "pytorch docs"} {"text": "conduct; niter must be a nonnegative integer, and defaults to\n 2\n M (Tensor, optional): the input tensor's mean of size\n (*, 1, n).\n\nReferences::\n * Nathan Halko, Per-Gunnar Martinsson, and Joel Tropp, Finding\n structure with randomness: probabilistic algorithms for\n constructing approximate matrix decompositions,\n arXiv:0909.4061 [math.NA; math.PR], 2009 (available at arXiv).\nReturn type:\n Tuple[Tensor, Tensor, Tensor]", "source": "https://pytorch.org/docs/stable/generated/torch.svd_lowrank.html", "category": "pytorch docs"} {"text": "torch.allclose\ntorch.allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False) -> bool\nThis function checks if \"input\" and \"other\" satisfy the condition:\n \\lvert \\text{input} - \\text{other} \\rvert \\leq \\texttt{atol} +\n \\texttt{rtol} \\times \\lvert \\text{other} \\rvert\n\nelementwise, for all elements of \"input\" and \"other\". The behaviour\n of this function is analogous to numpy.allclose\nParameters:\n * input (Tensor) -- first tensor to compare\n * **other** (*Tensor*) -- second tensor to compare\n\n * **atol** (*float**, **optional*) -- absolute tolerance.\n Default: 1e-08\n\n * **rtol** (*float**, **optional*) -- relative tolerance.\n Default: 1e-05\n\n * **equal_nan** (*bool**, **optional*) -- if \"True\", then two\n \"NaN\" s will be considered equal. Default: \"False\"\n\nExample:\n >>> torch.allclose(torch.tensor([10000., 1e-07]), torch.tensor([10000.1, 1e-08]))\n False\n", "source": "https://pytorch.org/docs/stable/generated/torch.allclose.html", "category": "pytorch docs"} {"text": "False\n >>> torch.allclose(torch.tensor([10000., 1e-08]), torch.tensor([10000.1, 1e-09]))\n True\n >>> torch.allclose(torch.tensor([1.0, float('nan')]), torch.tensor([1.0, float('nan')]))\n False\n >>> torch.allclose(torch.tensor([1.0, float('nan')]), torch.tensor([1.0, float('nan')]), equal_nan=True)\n True", "source": "https://pytorch.org/docs/stable/generated/torch.allclose.html", "category": "pytorch docs"} {"text": "FeatureAlphaDropout\nclass torch.nn.FeatureAlphaDropout(p=0.5, inplace=False)\nRandomly masks out entire channels (a channel is a feature map,\n e.g. the j-th channel of the i-th sample in the batch input is a\n tensor \\text{input}[i, j]) of the input tensor). Instead of setting\n activations to zero, as in regular Dropout, the activations are set\n to the negative saturation value of the SELU activation function.\n More details can be found in the paper Self-Normalizing Neural\n Networks .\nEach element will be masked independently for each sample on every\n forward call with probability \"p\" using samples from a Bernoulli\n distribution. 
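Supplementing the "torch.svd_lowrank()" entry above, a small sketch of the returned factor shapes and the A ≈ U diag(S) V^T reconstruction it describes; the matrix sizes and the rank estimate q are arbitrary:

    >>> import torch
    >>> A = torch.randn(20, 5) @ torch.randn(5, 30)    # a (roughly) low-rank matrix
    >>> U, S, V = torch.svd_lowrank(A, q=6)
    >>> U.shape, S.shape, V.shape
    (torch.Size([20, 6]), torch.Size([6]), torch.Size([30, 6]))
    >>> A_approx = U @ torch.diag(S) @ V.mT            # A is approximately U diag(S) V^T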
The elements to be masked are randomized on every\n forward call, and scaled and shifted to maintain zero mean and unit\n variance.\nUsually the input comes from \"nn.AlphaDropout\" modules.\nAs described in the paper Efficient Object Localization Using\n Convolutional Networks , if adjacent pixels within feature maps are", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FeatureAlphaDropout.html", "category": "pytorch docs"} {"text": "strongly correlated (as is normally the case in early convolution\n layers) then i.i.d. dropout will not regularize the activations and\n will otherwise just result in an effective learning rate decrease.\nIn this case, \"nn.AlphaDropout()\" will help promote independence\n between feature maps and should be used instead.\nParameters:\n * p (float, optional) -- probability of an element to\n be zeroed. Default: 0.5\n * **inplace** (*bool**, **optional*) -- If set to \"True\", will\n do this operation in-place\n\nShape:\n * Input: (N, C, D, H, W) or (C, D, H, W).\n * Output: (N, C, D, H, W) or (C, D, H, W) (same shape as input).\n\nExamples:\n >>> m = nn.FeatureAlphaDropout(p=0.2)\n >>> input = torch.randn(20, 16, 4, 32, 32)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FeatureAlphaDropout.html", "category": "pytorch docs"} {"text": "torch.Tensor.vdot\nTensor.vdot(other) -> Tensor\nSee \"torch.vdot()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.vdot.html", "category": "pytorch docs"} {"text": "torch.cuda.memory_reserved\ntorch.cuda.memory_reserved(device=None)\nReturns the current GPU memory managed by the caching allocator in\n bytes for a given device.\nParameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\nReturn type:\n int\nNote:\n See Memory management for more details about GPU memory\n management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_reserved.html", "category": "pytorch docs"} {"text": "torch.foreach_acos\ntorch.foreach_acos(self: List[Tensor]) -> None\nApply \"torch.acos()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_acos_.html", "category": "pytorch docs"} {"text": "torch.sym_int\ntorch.sym_int(a)\nSymInt-aware utility for int casting.\nParameters:\n a (SymInt, SymFloat, or object) -- Object to cast", "source": "https://pytorch.org/docs/stable/generated/torch.sym_int.html", "category": "pytorch docs"} {"text": "torch.fft.ifft2\ntorch.fft.ifft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor\nComputes the 2 dimensional inverse discrete Fourier transform of\n \"input\". Equivalent to \"ifftn()\" but IFFTs only the last two\n dimensions by default.\nNote:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the IFFT. If a length \"-1\" is specified, no padding\n is done in that dimension. Default: \"s = [input.size(d) for d\n in dim]\"\n\n * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be\n transformed. 
Default: last two dimensions.\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifft2.html", "category": "pytorch docs"} {"text": "\nnorm (str, optional) --Normalization mode. For the backward transform (\"ifft2()\"),\nthese correspond to:\n\n* \"\"forward\"\" - no normalization\n\n* \"\"backward\"\" - normalize by \"1/n\"\n\n* \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the IFFT\n orthonormal)\n\nWhere \"n = prod(s)\" is the logical IFFT size. Calling the\nforward transform (\"fft2()\") with the same normalization mode\nwill apply an overall normalization of \"1/n\" between the two\ntransforms. This is required to make \"ifft2()\" the exact\ninverse.\n\nDefault is \"\"backward\"\" (normalize by \"1/n\").\n\n\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nx = torch.rand(10, 10, dtype=torch.complex64)\nifft2 = torch.fft.ifft2(x)\n\n\n\nThe discrete Fourier transform is separable, so \"ifft2()\" here is\n equivalent to two one-dimensional \"ifft()\" calls:", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifft2.html", "category": "pytorch docs"} {"text": "\n\n\ntwo_iffts = torch.fft.ifft(torch.fft.ifft(x, dim=0), dim=1)\ntorch.testing.assert_close(ifft2, two_iffts, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifft2.html", "category": "pytorch docs"} {"text": "torch.nn.functional.leaky_relu_\ntorch.nn.functional.leaky_relu_(input, negative_slope=0.01) -> Tensor\nIn-place version of \"leaky_relu()\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.leaky_relu_.html", "category": "pytorch docs"} {"text": "torch.Tensor.masked_fill_\nTensor.masked_fill_(mask, value)\nFills elements of \"self\" tensor with \"value\" where \"mask\" is True.\n The shape of \"mask\" must be broadcastable with the shape of the\n underlying tensor.\nParameters:\n * mask (BoolTensor) -- the boolean mask\n * **value** (*float*) -- the value to fill in with\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.masked_fill_.html", "category": "pytorch docs"} {"text": "default_weight_fake_quant\ntorch.quantization.fake_quantize.default_weight_fake_quant\nalias of functools.partial(,\n observer=,\n quant_min=-128, quant_max=127, dtype=torch.qint8,\n qscheme=torch.per_tensor_symmetric, reduce_range=False){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_weight_fake_quant.html", "category": "pytorch docs"} {"text": "ConvTranspose3d\nclass torch.nn.ConvTranspose3d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\nApplies a 3D transposed convolution operator over an input image\n composed of several input planes. The transposed convolution\n operator multiplies each input value element-wise by a learnable\n kernel, and sums over the outputs from all input feature planes.\nThis module can be seen as the gradient of Conv3d with respect to\n its input. 
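A minimal sketch for the "Tensor.masked_fill_()" entry above; the mask and fill value are illustrative only:

    >>> import torch
    >>> x = torch.arange(6.).reshape(2, 3)
    >>> x.masked_fill_(x > 3, 0.0)
    tensor([[0., 1., 2.],
            [3., 0., 0.]])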
It is also known as a fractionally-strided convolution\n or a deconvolution (although it is not an actual deconvolution\n operation as it does not compute a true inverse of convolution).\n For more information, see the visualizations here and the\n Deconvolutional Networks paper.\nThis module supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"} {"text": "use different precision for backward.\n\n\n\"stride\" controls the stride for the cross-correlation.\n\n\n\"padding\" controls the amount of implicit zero padding on both\n sides for \"dilation * (kernel_size - 1) - padding\" number of\n points. See note below for details.\n\n\n\"output_padding\" controls the additional size added to one side\n of the output shape. See note below for details.\n\n\n\"dilation\" controls the spacing between the kernel points; also\n known as the \u00c3\u00a0 trous algorithm. It is harder to describe, but the\n link here has a nice visualization of what \"dilation\" does.\n\n\n\"groups\" controls the connections between inputs and outputs.\n \"in_channels\" and \"out_channels\" must both be divisible by\n \"groups\". For example,\n* At groups=1, all inputs are convolved to all outputs.\n\n* At groups=2, the operation becomes equivalent to having two\n conv layers side by side, each seeing half the input\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"} {"text": "channels and producing half the output channels, and both\n subsequently concatenated.\n * At groups= \"in_channels\", each input channel is convolved\n with its own set of filters (of size\n \\frac{\\text{out\\_channels}}{\\text{in\\_channels}}).\n\nThe parameters \"kernel_size\", \"stride\", \"padding\", \"output_padding\"\n can either be:\n * a single \"int\" -- in which case the same value is used for the\n depth, height and width dimensions\n\n * a \"tuple\" of three ints -- in which case, the first *int* is\n used for the depth dimension, the second *int* for the height\n dimension and the third *int* for the width dimension\n\nNote:\n The \"padding\" argument effectively adds \"dilation * (kernel_size\n - 1) - padding\" amount of zero padding to both sizes of the\n input. This is set so that when a \"Conv3d\" and a\n \"ConvTranspose3d\" are initialized with same parameters, they are\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"} {"text": "inverses of each other in regard to the input and output shapes.\n However, when \"stride > 1\", \"Conv3d\" maps multiple input shapes\n to the same output shape. \"output_padding\" is provided to resolve\n this ambiguity by effectively increasing the calculated output\n shape on one side. Note that \"output_padding\" is only used to\n find output shape, but does not actually add zero-padding to\n output.\nNote:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". 
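A minimal sketch of opting into that deterministic mode; the benchmark flag is a commonly paired setting included here as an assumption, not something stated above:

    >>> import torch
    >>> torch.backends.cudnn.deterministic = True   # deterministic kernel selection, as described above
    >>> torch.backends.cudnn.benchmark = False      # assumed companion flag; disables cuDNN autotuning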
See Reproducibility for more information.\n\nParameters:\n * in_channels (int) -- Number of channels in the input\n image\n * **out_channels** (*int*) -- Number of channels produced by the\n convolution\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"} {"text": "convolution\n * **kernel_size** (*int** or **tuple*) -- Size of the convolving\n kernel\n\n * **stride** (*int** or **tuple**, **optional*) -- Stride of the\n convolution. Default: 1\n\n * **padding** (*int** or **tuple**, **optional*) -- \"dilation *\n (kernel_size - 1) - padding\" zero-padding will be added to\n both sides of each dimension in the input. Default: 0\n\n * **output_padding** (*int** or **tuple**, **optional*) --\n Additional size added to one side of each dimension in the\n output shape. Default: 0\n\n * **groups** (*int**, **optional*) -- Number of blocked\n connections from input channels to output channels. Default: 1\n\n * **bias** (*bool**, **optional*) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n\n * **dilation** (*int** or **tuple**, **optional*) -- Spacing\n between kernel elements. Default: 1\n\nShape:\n * Input: (N, C_{in}, D_{in}, H_{in}, W_{in}) or (C_{in}, D_{in},", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"} {"text": "H_{in}, W_{in})\n * Output: (N, C_{out}, D_{out}, H_{out}, W_{out}) or (C_{out},\n D_{out}, H_{out}, W_{out}), where\n\n D_{out} = (D_{in} - 1) \\times \\text{stride}[0] - 2 \\times\n \\text{padding}[0] + \\text{dilation}[0] \\times\n (\\text{kernel\\_size}[0] - 1) + \\text{output\\_padding}[0] + 1\n\n H_{out} = (H_{in} - 1) \\times \\text{stride}[1] - 2 \\times\n \\text{padding}[1] + \\text{dilation}[1] \\times\n (\\text{kernel\\_size}[1] - 1) + \\text{output\\_padding}[1] + 1\n\n W_{out} = (W_{in} - 1) \\times \\text{stride}[2] - 2 \\times\n \\text{padding}[2] + \\text{dilation}[2] \\times\n (\\text{kernel\\_size}[2] - 1) + \\text{output\\_padding}[2] + 1\n\nVariables:\n * weight (Tensor) -- the learnable weights of the module\n of shape (\\text{in_channels},\n \\frac{\\text{out_channels}}{\\text{groups}},\n \\text{kernel_size[0]}, \\text{kernel_size[1]},", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"} {"text": "\\text{kernel_size[2]}). The values of these weights are\n sampled from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{groups}{C_\\text{out} *\n \\prod_{i=0}^{2}\\text{kernel_size}[i]}\n * **bias** (*Tensor*) -- the learnable bias of the module of\n shape (out_channels) If \"bias\" is \"True\", then the values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}) where k = \\frac{groups}{C_\\text{out} *\n \\prod_{i=0}^{2}\\text{kernel\\_size}[i]}\n\nExamples:\n >>> # With square kernels and equal stride\n >>> m = nn.ConvTranspose3d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding\n >>> m = nn.ConvTranspose3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(0, 4, 2))\n >>> input = torch.randn(20, 16, 10, 50, 100)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose3d.html", "category": "pytorch docs"} {"text": "torch.cholesky_inverse\ntorch.cholesky_inverse(input, upper=False, *, out=None) -> Tensor\nComputes the inverse of a symmetric positive-definite matrix A\n using its Cholesky factor u: returns matrix \"inv\". 
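A small sanity-check sketch of that claim (the matrix size and the regularization term are illustrative assumptions): the returned matrix behaves as the inverse of A.

    >>> import torch
    >>> B = torch.randn(3, 3)
    >>> A = B @ B.T + torch.eye(3)                  # build a symmetric positive-definite A
    >>> u = torch.linalg.cholesky(A)                # lower-triangular Cholesky factor
    >>> inv = torch.cholesky_inverse(u)
    >>> torch.allclose(inv @ A, torch.eye(3), atol=1e-4)
    True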
The inverse is\n computed using LAPACK routines \"dpotri\" and \"spotri\" (and the\n corresponding MAGMA routines).\nIf \"upper\" is \"False\", u is lower triangular such that the returned\n tensor is\n inv = (uu^{{T}})^{{-1}}\n\nIf \"upper\" is \"True\" or not provided, u is upper triangular such\n that the returned tensor is\n inv = (u^T u)^{{-1}}\n\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if A is a batch of matrices then\n the output has the same batch dimensions.\nParameters:\n * input (Tensor) -- the input tensor A of size (*, n, n),\n consisting of symmetric positive-definite matrices where * is\n zero or more batch dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky_inverse.html", "category": "pytorch docs"} {"text": "zero or more batch dimensions.\n * **upper** (*bool**, **optional*) -- flag that indicates\n whether to return a upper or lower triangular matrix. Default:\n False\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor for inv\nExample:\n >>> a = torch.randn(3, 3)\n >>> a = torch.mm(a, a.t()) + 1e-05 * torch.eye(3) # make symmetric positive definite\n >>> u = torch.linalg.cholesky(a)\n >>> a\n tensor([[ 0.9935, -0.6353, 1.5806],\n [ -0.6353, 0.8769, -1.7183],\n [ 1.5806, -1.7183, 10.6618]])\n >>> torch.cholesky_inverse(u)\n tensor([[ 1.9314, 1.2251, -0.0889],\n [ 1.2251, 2.4439, 0.2122],\n [-0.0889, 0.2122, 0.1412]])\n >>> a.inverse()\n tensor([[ 1.9314, 1.2251, -0.0889],\n [ 1.2251, 2.4439, 0.2122],\n [-0.0889, 0.2122, 0.1412]])\n >>> a = torch.randn(3, 2, 2) # Example for batched input\n", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky_inverse.html", "category": "pytorch docs"} {"text": "\n\n\na = a @ a.mT + 1e-03 # make symmetric positive-definite\n >>> l = torch.linalg.cholesky(a)\n >>> z = l @ l.mT\n >>> torch.dist(z, a)\n tensor(3.5894e-07)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky_inverse.html", "category": "pytorch docs"} {"text": "torch.Tensor.isfinite\nTensor.isfinite() -> Tensor\nSee \"torch.isfinite()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isfinite.html", "category": "pytorch docs"} {"text": "GroupNorm\nclass torch.nn.GroupNorm(num_groups, num_channels, eps=1e-05, affine=True, device=None, dtype=None)\nApplies Group Normalization over a mini-batch of inputs as\n described in the paper Group Normalization\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n\nThe input channels are separated into \"num_groups\" groups, each\n containing \"num_channels / num_groups\" channels. \"num_channels\"\n must be divisible by \"num_groups\". The mean and standard-deviation\n are calculated separately over the each group. \\gamma and \\beta are\n learnable per-channel affine transform parameter vectors of size\n \"num_channels\" if \"affine\" is \"True\". The standard-deviation is\n calculated via the biased estimator, equivalent to\n torch.var(input, unbiased=False).\nThis layer uses statistics computed from input data in both\n training and evaluation modes.\nParameters:\n * num_groups (int) -- number of groups to separate the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GroupNorm.html", "category": "pytorch docs"} {"text": "channels into\n * **num_channels** (*int*) -- number of channels expected in\n input\n\n * **eps** (*float*) -- a value added to the denominator for\n numerical stability. 
Default: 1e-5\n\n * **affine** (*bool*) -- a boolean value that when set to\n \"True\", this module has learnable per-channel affine\n parameters initialized to ones (for weights) and zeros (for\n biases). Default: \"True\".\n\nShape:\n * Input: (N, C, *) where C=\\text{num_channels}\n * Output: (N, C, *) (same shape as input)\n\nExamples:\n >>> input = torch.randn(20, 6, 10, 10)\n >>> # Separate 6 channels into 3 groups\n >>> m = nn.GroupNorm(3, 6)\n >>> # Separate 6 channels into 6 groups (equivalent with InstanceNorm)\n >>> m = nn.GroupNorm(6, 6)\n >>> # Put all 6 channels into a single group (equivalent with LayerNorm)\n >>> m = nn.GroupNorm(1, 6)\n >>> # Activating the module\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GroupNorm.html", "category": "pytorch docs"} {"text": "torch.cuda.caching_allocator_alloc\ntorch.cuda.caching_allocator_alloc(size, device=None, stream=None)\nPerforms a memory allocation using the CUDA memory allocator.\nMemory is allocated for a given device and a stream, this function\n is intended to be used for interoperability with other frameworks.\n Allocated memory is released through \"caching_allocator_delete()\".\nParameters:\n * size (int) -- number of bytes to be allocated.\n * **device** (*torch.device** or **int**, **optional*) --\n selected device. If it is \"None\" the default CUDA device is\n used.\n\n * **stream** (*torch.cuda.Stream** or **int**, **optional*) --\n selected stream. If is \"None\" then the default stream for the\n selected device is used.\n\nNote:\n See Memory management for more details about GPU memory\n management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.caching_allocator_alloc.html", "category": "pytorch docs"} {"text": "BatchNorm1d\nclass torch.nn.BatchNorm1d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\nApplies Batch Normalization over a 2D or 3D input as described in\n the paper Batch Normalization: Accelerating Deep Network Training\n by Reducing Internal Covariate Shift .\n y = \\frac{x - \\mathrm{E}[x]}{\\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n\nThe mean and standard-deviation are calculated per-dimension over\n the mini-batches and \\gamma and \\beta are learnable parameter\n vectors of size C (where C is the number of features or\n channels of the input). By default, the elements of \\gamma are set\n to 1 and the elements of \\beta are set to 0. The standard-deviation\n is calculated via the biased estimator, equivalent to\n torch.var(input, unbiased=False).\nAlso by default, during training this layer keeps running estimates\n of its computed mean and variance, which are then used for", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html", "category": "pytorch docs"} {"text": "normalization during evaluation. The running estimates are kept\n with a default \"momentum\" of 0.1.\nIf \"track_running_stats\" is set to \"False\", this layer then does\n not keep running estimates, and batch statistics are instead used\n during evaluation time as well.\nNote:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. 
Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n\nBecause the Batch Normalization is done over the C dimension,\n computing statistics on (N, L) slices, it's common terminology to\n call this Temporal Batch Normalization.\nParameters:\n * num_features (int) -- number of features or channels C\n of the input\n * **eps** (*float*) -- a value added to the denominator for\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html", "category": "pytorch docs"} {"text": "numerical stability. Default: 1e-5\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"\n\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics. in both\n training and eval modes. Default: \"True\"\n\nShape:\n * Input: (N, C) or (N, C, L), where N is the batch size, C is\n the number of features or channels, and L is the sequence\n length", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html", "category": "pytorch docs"} {"text": "length\n * Output: (N, C) or (N, C, L) (same shape as input)\n\nExamples:\n >>> # With Learnable Parameters\n >>> m = nn.BatchNorm1d(100)\n >>> # Without Learnable Parameters\n >>> m = nn.BatchNorm1d(100, affine=False)\n >>> input = torch.randn(20, 100)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm1d.html", "category": "pytorch docs"} {"text": "torch.nn.functional.hardsigmoid\ntorch.nn.functional.hardsigmoid(input, inplace=False)\nApplies the element-wise function\n \\text{Hardsigmoid}(x) = \\begin{cases} 0 & \\text{if~} x \\le\n -3, \\\\ 1 & \\text{if~} x \\ge +3, \\\\ x / 6 + 1 / 2 &\n \\text{otherwise} \\end{cases}\n\nParameters:\n inplace (bool) -- If set to \"True\", will do this operation\n in-place. Default: \"False\"\nReturn type:\n Tensor\nSee \"Hardsigmoid\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.hardsigmoid.html", "category": "pytorch docs"} {"text": "torch.cuda.synchronize\ntorch.cuda.synchronize(device=None)\nWaits for all kernels in all streams on a CUDA device to complete.\nParameters:\n device (torch.device or int, optional) -- device\n for which to synchronize. 
It uses the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.synchronize.html", "category": "pytorch docs"} {"text": "torch.Tensor.logical_xor_\nTensor.logical_xor_() -> Tensor\nIn-place version of \"logical_xor()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_xor_.html", "category": "pytorch docs"} {"text": "torch.addcmul\ntorch.addcmul(input, tensor1, tensor2, *, value=1, out=None) -> Tensor\nPerforms the element-wise multiplication of \"tensor1\" by \"tensor2\",\n multiplies the result by the scalar \"value\" and adds it to \"input\".\n \\text{out}_i = \\text{input}_i + \\text{value} \\times\n \\text{tensor1}_i \\times \\text{tensor2}_i\n\nThe shapes of \"tensor\", \"tensor1\", and \"tensor2\" must be\n broadcastable.\nFor inputs of type FloatTensor or DoubleTensor, \"value\" must be\n a real number, otherwise an integer.\nParameters:\n * input (Tensor) -- the tensor to be added\n * **tensor1** (*Tensor*) -- the tensor to be multiplied\n\n * **tensor2** (*Tensor*) -- the tensor to be multiplied\n\nKeyword Arguments:\n * value (Number, optional) -- multiplier for tensor1\n .* tensor2\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> t = torch.randn(1, 3)\n >>> t1 = torch.randn(3, 1)\n >>> t2 = torch.randn(1, 3)\n", "source": "https://pytorch.org/docs/stable/generated/torch.addcmul.html", "category": "pytorch docs"} {"text": "\n\n\nt2 = torch.randn(1, 3)\n >>> torch.addcmul(t, t1, t2, value=0.1)\n tensor([[-0.8635, -0.6391, 1.6174],\n [-0.7617, -0.5879, 1.7388],\n [-0.8353, -0.6249, 1.6511]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.addcmul.html", "category": "pytorch docs"} {"text": "torch.Tensor.multiply\nTensor.multiply(value) -> Tensor\nSee \"torch.multiply()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.multiply.html", "category": "pytorch docs"} {"text": "MovingAverageMinMaxObserver\nclass torch.quantization.observer.MovingAverageMinMaxObserver(averaging_constant=0.01, dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None, eps=1.1920928955078125e-07, **kwargs)\nObserver module for computing the quantization parameters based on\n the moving average of the min and max values.\nThis observer computes the quantization parameters based on the\n moving averages of minimums and maximums of the incoming tensors.\n The module records the average minimum and maximum of incoming\n tensors, and uses this statistic to compute the quantization\n parameters.\nParameters:\n * averaging_constant -- Averaging constant for min/max.\n * **dtype** -- dtype argument to the *quantize* node needed to\n implement the reference model spec.\n\n * **qscheme** -- Quantization scheme to be used\n\n * **reduce_range** -- Reduces the range of the quantized data\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MovingAverageMinMaxObserver.html", "category": "pytorch docs"} {"text": "type by 1 bit\n * **quant_min** -- Minimum quantization value. If unspecified,\n it will follow the 8-bit setup.\n\n * **quant_max** -- Maximum quantization value. 
If unspecified,\n it will follow the 8-bit setup.\n\n * **eps** (*Tensor*) -- Epsilon value for float32, Defaults to\n *torch.finfo(torch.float32).eps*.\n\nThe moving average min/max is computed as follows\n \\begin{array}{ll} x_\\text{min} = \\begin{cases}\n \\min(X) & \\text{if~}x_\\text{min} = \\text{None} \\\\ (1\n - c) x_\\text{min} + c \\min(X) & \\text{otherwise}\n \\end{cases}\\\\ x_\\text{max} = \\begin{cases}\n \\max(X) & \\text{if~}x_\\text{max} = \\text{None} \\\\ (1\n - c) x_\\text{max} + c \\max(X) & \\text{otherwise}\n \\end{cases}\\\\ \\end{array}\n\nwhere x_\\text{min/max} is the running average min/max, X is is the\n incoming tensor, and c is the \"averaging_constant\".\nThe scale and zero point are then computed as in \"MinMaxObserver\".\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MovingAverageMinMaxObserver.html", "category": "pytorch docs"} {"text": "Note:\n Only works with \"torch.per_tensor_affine\" quantization scheme.\n\nNote:\n If the running minimum equals to the running maximum, the scale\n and zero_point are set to 1.0 and 0.\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MovingAverageMinMaxObserver.html", "category": "pytorch docs"} {"text": "torch.cuda.get_gencode_flags\ntorch.cuda.get_gencode_flags()\nReturns NVCC gencode flags this library was compiled with.\nReturn type:\n str", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_gencode_flags.html", "category": "pytorch docs"} {"text": "Softsign\nclass torch.nn.Softsign\nApplies the element-wise function:\n \\text{SoftSign}(x) = \\frac{x}{ 1 + |x|}\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.Softsign()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softsign.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_shared\nTensor.is_shared()\nChecks if tensor is in shared memory.\nThis is always \"True\" for CUDA tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_shared.html", "category": "pytorch docs"} {"text": "torch.Tensor.topk\nTensor.topk(k, dim=None, largest=True, sorted=True)\nSee \"torch.topk()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.topk.html", "category": "pytorch docs"} {"text": "torch.nn.utils.prune.remove\ntorch.nn.utils.prune.remove(module, name)\nRemoves the pruning reparameterization from a module and the\n pruning method from the forward hook. The pruned parameter named\n \"name\" remains permanently pruned, and the parameter named\n \"name+'_orig'\" is removed from the parameter list. Similarly, the\n buffer named \"name+'_mask'\" is removed from the buffers.\nNote:\n Pruning itself is NOT undone or reversed!\n\nParameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\n-[ Examples ]-\n\n\n\nm = random_unstructured(nn.Linear(5, 7), name='weight', amount=0.2)\nm = remove(m, name='weight')\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.remove.html", "category": "pytorch docs"} {"text": "torch.Tensor.q_per_channel_scales\nTensor.q_per_channel_scales() -> Tensor\nGiven a Tensor quantized by linear (affine) per-channel\n quantization, returns a Tensor of scales of the underlying\n quantizer. 
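For instance, in this minimal sketch (the tensor values and quantization parameters are arbitrary assumptions), the scales handed to torch.quantize_per_channel() are read straight back:

    >>> import torch
    >>> x = torch.randn(2, 3)
    >>> q = torch.quantize_per_channel(x, scales=torch.tensor([0.1, 0.05]),
    ...                                zero_points=torch.tensor([0, 0]),
    ...                                axis=0, dtype=torch.qint8)
    >>> q.q_per_channel_scales()
    tensor([0.1000, 0.0500], dtype=torch.float64)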
It has the number of elements that matches the\n corresponding dimensions (from q_per_channel_axis) of the tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.q_per_channel_scales.html", "category": "pytorch docs"} {"text": "torch.take_along_dim\ntorch.take_along_dim(input, indices, dim, *, out=None) -> Tensor\nSelects values from \"input\" at the 1-dimensional indices from\n \"indices\" along the given \"dim\".\nFunctions that return indices along a dimension, like\n \"torch.argmax()\" and \"torch.argsort()\", are designed to work with\n this function. See the examples below.\nNote:\n This function is similar to NumPy's *take_along_axis*. See also\n \"torch.gather()\".\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **indices** (*tensor*) -- the indices into \"input\". Must have\n long dtype.\n\n * **dim** (*int*) -- dimension to select along.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> t = torch.tensor([[10, 30, 20], [60, 40, 50]])\n >>> max_idx = torch.argmax(t)\n >>> torch.take_along_dim(t, max_idx)\n tensor([60])\n >>> sorted_idx = torch.argsort(t, dim=1)\n", "source": "https://pytorch.org/docs/stable/generated/torch.take_along_dim.html", "category": "pytorch docs"} {"text": "\n\n\nsorted_idx = torch.argsort(t, dim=1)\n >>> torch.take_along_dim(t, sorted_idx, dim=1)\n tensor([[10, 20, 30],\n [40, 50, 60]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.take_along_dim.html", "category": "pytorch docs"} {"text": "torch.get_rng_state\ntorch.get_rng_state()\nReturns the random number generator state as a torch.ByteTensor.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.get_rng_state.html", "category": "pytorch docs"} {"text": "torch.nn.modules.module.register_module_forward_hook\ntorch.nn.modules.module.register_module_forward_hook(hook)\nRegisters a global forward hook for all the modules\nWarning:\n This adds global state to the *nn.module* module and it is only\n intended for debugging/profiling purposes.\n\nThe hook will be called every time after \"forward()\" has computed\n an output. It should have the following signature:\n hook(module, input, output) -> None or modified output\n\nThe input contains only the positional arguments given to the\n module. Keyword arguments won't be passed to the hooks and only to\n the \"forward\". The hook can modify the output. It can modify the\n input inplace but it will not have effect on forward since this is\n called after \"forward()\" is called.\nReturns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\nReturn type:\n \"torch.utils.hooks.RemovableHandle\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_hook.html", "category": "pytorch docs"} {"text": "\"torch.utils.hooks.RemovableHandle\"\nThis hook will be executed before specific module hooks registered\n with \"register_forward_hook\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_hook.html", "category": "pytorch docs"} {"text": "torch.lu\ntorch.lu(args, *kwargs)\nComputes the LU factorization of a matrix or batches of matrices\n \"A\". Returns a tuple containing the LU factorization and pivots of\n \"A\". Pivoting is done if \"pivot\" is set to \"True\".\nWarning:\n \"torch.lu()\" is deprecated in favor of \"torch.linalg.lu_factor()\"\n and \"torch.linalg.lu_factor_ex()\". 
\"torch.lu()\" will be removed\n in a future PyTorch release. \"LU, pivots, info = torch.lu(A,\n compute_pivots)\" should be replaced with\n\n LU, pivots = torch.linalg.lu_factor(A, compute_pivots)\n\n \"LU, pivots, info = torch.lu(A, compute_pivots, get_infos=True)\"\n should be replaced with\n\n LU, pivots, info = torch.linalg.lu_factor_ex(A, compute_pivots)\n\nNote:\n * The returned permutation matrix for every matrix in the batch\n is represented by a 1-indexed vector of size \"min(A.shape[-2],\n A.shape[-1])\". \"pivots[i] == j\" represents that in the \"i\"-th\n", "source": "https://pytorch.org/docs/stable/generated/torch.lu.html", "category": "pytorch docs"} {"text": "step of the algorithm, the \"i\"-th row was permuted with the\n \"j-1\"-th row.\n * LU factorization with \"pivot\" = \"False\" is not available for\n CPU, and attempting to do so will throw an error. However, LU\n factorization with \"pivot\" = \"False\" is available for CUDA.\n\n * This function does not check if the factorization was\n successful or not if \"get_infos\" is \"True\" since the status of\n the factorization is present in the third element of the return\n tuple.\n\n * In the case of batches of square matrices with size less or\n equal to 32 on a CUDA device, the LU factorization is repeated\n for singular matrices due to the bug in the MAGMA library (see\n magma issue 13).\n\n * \"L\", \"U\", and \"P\" can be derived using \"torch.lu_unpack()\".\n\nWarning:\n The gradients of this function will only be finite when \"A\" is\n full rank. This is because the LU decomposition is just\n differentiable at full rank matrices. Furthermore, if \"A\" is\n", "source": "https://pytorch.org/docs/stable/generated/torch.lu.html", "category": "pytorch docs"} {"text": "close to not being full rank, the gradient will be numerically\n unstable as it depends on the computation of L^{-1} and U^{-1}.\nParameters:\n * A (Tensor) -- the tensor to factor of size (*, m, n)\n * **pivot** (*bool**, **optional*) -- controls whether pivoting\n is done. Default: \"True\"\n\n * **get_infos** (*bool**, **optional*) -- if set to \"True\",\n returns an info IntTensor. Default: \"False\"\n\n * **out** (*tuple**, **optional*) -- optional output tuple. If\n \"get_infos\" is \"True\", then the elements in the tuple are\n Tensor, IntTensor, and IntTensor. If \"get_infos\" is \"False\",\n then the elements in the tuple are Tensor, IntTensor. Default:\n \"None\"\n\nReturns:\n A tuple of tensors containing\n * **factorization** (*Tensor*): the factorization of size (*,\n m, n)\n\n * **pivots** (*IntTensor*): the pivots of size (*,\n \\text{min}(m, n)). \"pivots\" stores all the intermediate\n", "source": "https://pytorch.org/docs/stable/generated/torch.lu.html", "category": "pytorch docs"} {"text": "transpositions of rows. 
The final permutation \"perm\" could\n be reconstructed by applying \"swap(perm[i], perm[pivots[i]\n - 1])\" for \"i = 0, ..., pivots.size(-1) - 1\", where \"perm\"\n is initially the identity permutation of m elements\n (essentially this is what \"torch.lu_unpack()\" is doing).\n * **infos** (*IntTensor*, *optional*): if \"get_infos\" is\n \"True\", this is a tensor of size (*) where non-zero values\n indicate whether factorization for the matrix or each\n minibatch has succeeded or failed\n\nReturn type:\n (Tensor, IntTensor, IntTensor (optional))\nExample:\n >>> A = torch.randn(2, 3, 3)\n >>> A_LU, pivots = torch.lu(A)\n >>> A_LU\n tensor([[[ 1.3506, 2.5558, -0.0816],\n [ 0.1684, 1.1551, 0.1940],\n [ 0.1193, 0.6189, -0.5497]],\n\n [[ 0.4526, 1.2526, -0.3285],\n [-0.7988, 0.7175, -0.9701],\n [ 0.2634, -0.9255, -0.3459]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.lu.html", "category": "pytorch docs"} {"text": "[ 0.2634, -0.9255, -0.3459]]])\n >>> pivots\n tensor([[ 3, 3, 3],\n [ 3, 3, 3]], dtype=torch.int32)\n >>> A_LU, pivots, info = torch.lu(A, get_infos=True)\n >>> if info.nonzero().size(0) == 0:\n ... print('LU factorization succeeded for all samples!')\n LU factorization succeeded for all samples!", "source": "https://pytorch.org/docs/stable/generated/torch.lu.html", "category": "pytorch docs"} {"text": "torch.Tensor.addbmm_\nTensor.addbmm_(batch1, batch2, *, beta=1, alpha=1) -> Tensor\nIn-place version of \"addbmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addbmm_.html", "category": "pytorch docs"} {"text": "torch.absolute\ntorch.absolute(input, *, out=None) -> Tensor\nAlias for \"torch.abs()\"", "source": "https://pytorch.org/docs/stable/generated/torch.absolute.html", "category": "pytorch docs"} {"text": "torch.Tensor.requires_grad\nTensor.requires_grad\nIs \"True\" if gradients need to be computed for this Tensor, \"False\"\n otherwise.\nNote:\n The fact that gradients need to be computed for a Tensor do not\n mean that the \"grad\" attribute will be populated, see \"is_leaf\"\n for more details.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.requires_grad.html", "category": "pytorch docs"} {"text": "torch.trunc\ntorch.trunc(input, *, out=None) -> Tensor\nReturns a new tensor with the truncated integer values of the\n elements of \"input\".\nFor integer inputs, follows the array-api convention of returning a\n copy of the input tensor.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 3.4742, 0.5466, -0.8008, -0.9079])\n >>> torch.trunc(a)\n tensor([ 3., 0., -0., -0.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.trunc.html", "category": "pytorch docs"} {"text": "torch.linalg.cholesky\ntorch.linalg.cholesky(A, *, upper=False, out=None) -> Tensor\nComputes the Cholesky decomposition of a complex Hermitian or real\n symmetric positive-definite matrix.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the Cholesky\n decomposition of a complex Hermitian or real symmetric positive-\n definite matrix A \\in \\mathbb{K}^{n \\times n} is defined as\n A = LL^{\\text{H}}\\mathrlap{\\qquad L \\in \\mathbb{K}^{n \\times n}}\n\nwhere L is a lower triangular matrix with real positive diagonal\n (even in the complex case) and L^{\\text{H}} is the conjugate\n transpose when L is complex, and the transpose when L is real-\n valued.\nSupports input of float, double, cfloat and cdouble 
dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nNote:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n\nSee also:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky.html", "category": "pytorch docs"} {"text": "device with the CPU.\nSee also:\n \"torch.linalg.cholesky_ex()\" for a version of this operation that\n skips the (slow) error checking by default and instead returns\n the debug information. This makes it a faster way to check if a\n matrix is positive-definite.\n\n \"torch.linalg.eigh()\" for a different decomposition of a\n Hermitian matrix. The eigenvalue decomposition gives more\n information about the matrix but it slower to compute than the\n Cholesky decomposition.\n\nParameters:\n A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions consisting of symmetric or\n Hermitian positive-definite matrices.\nKeyword Arguments:\n * upper (bool, optional) -- whether to return an upper\n triangular matrix. The tensor returned with upper=True is the\n conjugate transpose of the tensor returned with upper=False.\n * **out** (*Tensor**, **optional*) -- output tensor. Ignored if\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky.html", "category": "pytorch docs"} {"text": "None. Default: None.\nRaises:\n RuntimeError -- if the \"A\" matrix or any matrix in a batched\n \"A\" is not Hermitian (resp. symmetric) positive-definite. If\n \"A\" is a batch of matrices, the error message will include\n the batch index of the first matrix that fails to meet this\n condition.\nExamples:\n >>> A = torch.randn(2, 2, dtype=torch.complex128)\n >>> A = A @ A.T.conj() + torch.eye(2) # creates a Hermitian positive-definite matrix\n >>> A\n tensor([[2.5266+0.0000j, 1.9586-2.0626j],\n [1.9586+2.0626j, 9.4160+0.0000j]], dtype=torch.complex128)\n >>> L = torch.linalg.cholesky(A)\n >>> L\n tensor([[1.5895+0.0000j, 0.0000+0.0000j],\n [1.2322+1.2976j, 2.4928+0.0000j]], dtype=torch.complex128)\n >>> torch.dist(L @ L.T.conj(), A)\n tensor(4.4692e-16, dtype=torch.float64)\n\n >>> A = torch.randn(3, 2, 2, dtype=torch.float64)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky.html", "category": "pytorch docs"} {"text": "\n\n\nA = A @ A.mT + torch.eye(2) # batch of symmetric positive-definite matrices\n >>> L = torch.linalg.cholesky(A)\n >>> torch.dist(L @ L.mT, A)\n tensor(5.8747e-16, dtype=torch.float64)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cholesky.html", "category": "pytorch docs"} {"text": "torch.nn.functional.conv_transpose3d\ntorch.nn.functional.conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor\nApplies a 3D transposed convolution operator over an input image\n composed of several input planes, sometimes also called\n \"deconvolution\"\nThis operator supports TensorFloat32.\nSee \"ConvTranspose3d\" for details and output shape.\nNote:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". 
See Reproducibility for more information.\n\nParameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iT , iH , iW)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose3d.html", "category": "pytorch docs"} {"text": "\\text{in_channels} , iT , iH , iW)\n * **weight** -- filters of shape (\\text{in\\_channels} ,\n \\frac{\\text{out\\_channels}}{\\text{groups}} , kT , kH , kW)\n\n * **bias** -- optional bias of shape (\\text{out\\_channels}).\n Default: None\n\n * **stride** -- the stride of the convolving kernel. Can be a\n single number or a tuple \"(sT, sH, sW)\". Default: 1\n\n * **padding** -- \"dilation * (kernel_size - 1) - padding\" zero-\n padding will be added to both sides of each dimension in the\n input. Can be a single number or a tuple \"(padT, padH, padW)\".\n Default: 0\n\n * **output_padding** -- additional size added to one side of\n each dimension in the output shape. Can be a single number or\n a tuple \"(out_padT, out_padH, out_padW)\". Default: 0\n\n * **groups** -- split input into groups, \\text{in\\_channels}\n should be divisible by the number of groups. Default: 1\n\n * **dilation** -- the spacing between kernel elements. Can be a\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose3d.html", "category": "pytorch docs"} {"text": "single number or a tuple (dT, dH, dW). Default: 1\nExamples:\n >>> inputs = torch.randn(20, 16, 50, 10, 20)\n >>> weights = torch.randn(16, 33, 3, 3, 3)\n >>> F.conv_transpose3d(inputs, weights)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose3d.html", "category": "pytorch docs"} {"text": "torch.cuda.memory_usage\ntorch.cuda.memory_usage(device=None)\nReturns the percent of time over the past sample period during\n which global (device) memory was being read or written. as given by\n nvidia-smi.\nParameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\nReturn type:\n int\nWarning: Each sample period may be between 1 second and 1/6 second,\n depending on the product being queried.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_usage.html", "category": "pytorch docs"} {"text": "ConvTranspose1d\nclass torch.ao.nn.quantized.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\nApplies a 1D transposed convolution operator over an input image\n composed of several input planes. For details on input arguments,\n parameters, and implementation see \"ConvTranspose1d\".\nNote:\n Currently only the QNNPACK engine is implemented. 
Please, set the\n *torch.backends.quantized.engine = 'qnnpack'*\n\nFor special notes, please, see \"Conv1d\"\nVariables:\n * weight (Tensor) -- packed tensor derived from the\n learnable weight parameter.\n * **scale** (*Tensor*) -- scalar for the output scale\n\n * **zero_point** (*Tensor*) -- scalar for the output zero point\n\nSee \"ConvTranspose2d\" for other attributes.\nExamples:\n >>> torch.backends.quantized.engine = 'qnnpack'\n >>> from torch.nn import quantized as nnq\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose1d.html", "category": "pytorch docs"} {"text": "\n\n\nfrom torch.nn import quantized as nnq\n >>> # With square kernels and equal stride\n >>> m = nnq.ConvTranspose1d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding\n >>> m = nnq.ConvTranspose1d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))\n >>> input = torch.randn(20, 16, 50)\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> output = m(q_input)\n >>> # exact output size can be also specified as an argument\n >>> input = torch.randn(1, 16, 12)\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> downsample = nnq.Conv1d(16, 16, 3, stride=2, padding=1)\n >>> upsample = nnq.ConvTranspose1d(16, 16, 3, stride=2, padding=1)\n >>> h = downsample(q_input)\n >>> h.size()\n torch.Size([1, 16, 6])\n >>> output = upsample(h, output_size=input.size())\n >>> output.size()\n torch.Size([1, 16, 12])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose1d.html", "category": "pytorch docs"} {"text": "torch.linalg.solve_triangular\ntorch.linalg.solve_triangular(A, B, *, upper, left=True, unitriangular=False, out=None) -> Tensor\nComputes the solution of a triangular system of linear equations\n with a unique solution.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, this function\n computes the solution X \\in \\mathbb{K}^{n \\times k} of the linear\n system associated to the triangular matrix A \\in \\mathbb{K}^{n\n \\times n} without zeros on the diagonal (that is, it is invertible)\n and the rectangular matrix , B \\in \\mathbb{K}^{n \\times k}, which\n is defined as\n AX = B\n\nThe argument \"upper\" signals whether A is upper or lower\n triangular.\nIf \"left\"= False, this function returns the matrix X \\in\n \\mathbb{K}^{n \\times k} that solves the system\n XA = B\\mathrlap{\\qquad A \\in \\mathbb{K}^{k \\times k}, B \\in\n \\mathbb{K}^{n \\times k}.}\n\nIf \"upper\"= True (resp. False) just the upper (resp. lower)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve_triangular.html", "category": "pytorch docs"} {"text": "triangular half of \"A\" will be accessed. The elements below the\n main diagonal will be considered to be zero and will not be\n accessed.\nIf \"unitriangular\"= True, the diagonal of \"A\" is assumed to be\n ones and will not be accessed.\nThe result may contain NaN s if the diagonal of \"A\" contains\n zeros or elements that are very close to zero and \"unitriangular\"=\n False (default) or if the input matrix has very small eigenvalues.\nSupports inputs of float, double, cfloat and cdouble dtypes. 
Also\n supports batches of matrices, and if the inputs are batches of\n matrices then the output has the same batch dimensions.\nSee also:\n \"torch.linalg.solve()\" computes the solution of a general square\n system of linear equations with a unique solution.\n\nParameters:\n * A (Tensor) -- tensor of shape (, n, n) (or (, k,\n k) if \"left\"= True) where *** is zero or more batch\n dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve_triangular.html", "category": "pytorch docs"} {"text": "dimensions.\n * **B** (*Tensor*) -- right-hand side tensor of shape *(*, n,\n k)*.\n\nKeyword Arguments:\n * upper (bool) -- whether \"A\" is an upper or lower\n triangular matrix.\n * **left** (*bool**, **optional*) -- whether to solve the system\n AX=B or XA = B. Default: *True*.\n\n * **unitriangular** (*bool**, **optional*) -- if *True*, the\n diagonal elements of \"A\" are assumed to be all equal to *1*.\n Default: *False*.\n\n * **out** (*Tensor**, **optional*) -- output tensor. *B* may be\n passed as *out* and the result is computed in-place on *B*.\n Ignored if *None*. Default: *None*.\n\nExamples:\n >>> A = torch.randn(3, 3).triu_()\n >>> b = torch.randn(3, 4)\n >>> X = torch.linalg.solve_triangular(A, B, upper=True)\n >>> torch.allclose(A @ X, B)\n True\n\n >>> A = torch.randn(2, 3, 3).tril_()\n >>> B = torch.randn(2, 3, 4)\n >>> X = torch.linalg.solve_triangular(A, B, upper=False)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve_triangular.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.allclose(A @ X, B)\n True\n\n\n\n >>> A = torch.randn(2, 4, 4).tril_()\n >>> B = torch.randn(2, 3, 4)\n >>> X = torch.linalg.solve_triangular(A, B, upper=False, left=False)\n >>> torch.allclose(X @ A, B)\n True\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve_triangular.html", "category": "pytorch docs"} {"text": "torch.cov\ntorch.cov(input, *, correction=1, fweights=None, aweights=None) -> Tensor\nEstimates the covariance matrix of the variables given by the\n \"input\" matrix, where rows are the variables and columns are the\n observations.\nA covariance matrix is a square matrix giving the covariance of\n each pair of variables. The diagonal contains the variance of each\n variable (covariance of a variable with itself). 
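A brief illustrative check of that statement (random data, the default correction of 1):

    >>> import torch
    >>> x = torch.randn(3, 5)                       # 3 variables, 5 observations each
    >>> torch.allclose(torch.cov(x).diagonal(), x.var(dim=1))
    True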
By definition, if\n \"input\" represents a single variable (Scalar or 1D) then its\n variance is returned.\nThe unbiased sample covariance of the variables x and y is given\n by:\n \\text{cov}_w(x,y) = \\frac{\\sum^{N}_{i = 1}(x_{i} -\n \\bar{x})(y_{i} - \\bar{y})}{N~-~1}\n\nwhere \\bar{x} and \\bar{y} are the simple means of the x and y\n respectively.\nIf \"fweights\" and/or \"aweights\" are provided, the unbiased weighted\n covariance is calculated, which is given by:\n \\text{cov}_w(x,y) = \\frac{\\sum^{N}_{i = 1}w_i(x_{i} -\n", "source": "https://pytorch.org/docs/stable/generated/torch.cov.html", "category": "pytorch docs"} {"text": "\\mu_x^*)(y_{i} - \\mu_y^*)}{\\sum^{N}_{i = 1}w_i~-~1}\nwhere w denotes \"fweights\" or \"aweights\" based on whichever is\n provided, or w = fweights \\times aweights if both are provided, and\n \\mu_x^* = \\frac{\\sum^{N}_{i = 1}w_ix_{i} }{\\sum^{N}_{i = 1}w_i} is\n the weighted mean of the variable.\nParameters:\n input (Tensor) -- A 2D matrix containing multiple\n variables and observations, or a Scalar or 1D vector\n representing a single variable.\nKeyword Arguments:\n * correction (int, optional) -- difference between the\n sample size and sample degrees of freedom. Defaults to\n Bessel's correction, \"correction = 1\" which returns the\n unbiased estimate, even if both \"fweights\" and \"aweights\" are\n specified. \"correction = 0\" will return the simple average.\n Defaults to \"1\".\n * **fweights** (*tensor**, **optional*) -- A Scalar or 1D tensor\n of observation vector frequencies representing the number of\n", "source": "https://pytorch.org/docs/stable/generated/torch.cov.html", "category": "pytorch docs"} {"text": "times each observation should be repeated. Its numel must\n equal the number of columns of \"input\". Must have integral\n dtype. Ignored if \"None\". Defaults to \"None\".\n * **aweights** (*tensor**, **optional*) -- A Scalar or 1D array\n of observation vector weights. These relative weights are\n typically large for observations considered \u201cimportant\u201d and\n smaller for observations considered less \u201cimportant\u201d. Its\n numel must equal the number of columns of \"input\". Must have\n floating point dtype. Ignored if \"None\". 
*Defaults to\n ``None`*.\n\nReturns:\n (Tensor) The covariance matrix of the variables.\nSee also: \"torch.corrcoef()\" normalized covariance matrix.\nExample::\n >>> x = torch.tensor([[0, 2], [1, 1], [2, 0]]).T\n >>> x\n tensor([[0, 1, 2],\n [2, 1, 0]])\n >>> torch.cov(x)\n tensor([[ 1., -1.],\n [-1., 1.]])\n >>> torch.cov(x, correction=0)", "source": "https://pytorch.org/docs/stable/generated/torch.cov.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.cov(x, correction=0)\n tensor([[ 0.6667, -0.6667],\n [-0.6667, 0.6667]])\n >>> fw = torch.randint(1, 10, (3,))\n >>> fw\n tensor([1, 6, 9])\n >>> aw = torch.rand(3)\n >>> aw\n tensor([0.4282, 0.0255, 0.4144])\n >>> torch.cov(x, fweights=fw, aweights=aw)\n tensor([[ 0.4169, -0.4169],\n [-0.4169, 0.4169]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.cov.html", "category": "pytorch docs"} {"text": "torch.asarray\ntorch.asarray(obj, *, dtype=None, device=None, copy=None, requires_grad=False) -> Tensor\nConverts \"obj\" to a tensor.\n\"obj\" can be one of:\n\n\na tensor\n\n\na NumPy array\n\n\na DLPack capsule\n\n\nan object that implements Python's buffer protocol\n\n\na scalar\n\n\na sequence of scalars\n\n\nWhen \"obj\" is a tensor, NumPy array, or DLPack capsule the returned\n tensor will, by default, not require a gradient, have the same\n datatype as \"obj\", be on the same device, and share memory with it.\n These properties can be controlled with the \"dtype\", \"device\",\n \"copy\", and \"requires_grad\" keyword arguments. If the returned\n tensor is of a different datatype, on a different device, or a copy\n is requested then it will not share its memory with \"obj\". If\n \"requires_grad\" is \"True\" then the returned tensor will require a\n gradient, and if \"obj\" is also a tensor with an autograd history\n then the returned tensor will have the same history.", "source": "https://pytorch.org/docs/stable/generated/torch.asarray.html", "category": "pytorch docs"} {"text": "When \"obj\" is not a tensor, NumPy Array, or DLPack capsule but\n implements Python's buffer protocol then the buffer is interpreted\n as an array of bytes grouped according to the size of the datatype\n passed to the \"dtype\" keyword argument. (If no datatype is passed\n then the default floating point datatype is used, instead.) The\n returned tensor will have the specified datatype (or default\n floating point datatype if none is specified) and, by default, be\n on the CPU device and share memory with the buffer.\nWhen \"obj\" is none of the above but a scalar or sequence of scalars\n then the returned tensor will, by default, infer its datatype from\n the scalar values, be on the CPU device, and not share its memory.\nSee also:\n \"torch.tensor()\" creates a tensor that always copies the data\n from the input object. \"torch.from_numpy()\" creates a tensor that\n always shares memory from NumPy arrays. \"torch.frombuffer()\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.asarray.html", "category": "pytorch docs"} {"text": "creates a tensor that always shares memory from objects that\n implement the buffer protocol. \"torch.from_dlpack()\" creates a\n tensor that always shares memory from DLPack capsules.\nParameters:\n obj (object) -- a tensor, NumPy array, DLPack Capsule,\n object that implements Python's buffer protocol, scalar, or\n sequence of scalars.\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the datatype of the\n returned tensor. 
Default: \"None\", which causes the datatype of\n the returned tensor to be inferred from \"obj\".\n * **copy** (*bool**, **optional*) -- controls whether the\n returned tensor shares memory with \"obj\". Default: \"None\",\n which causes the returned tensor to share memory with \"obj\"\n whenever possible. If \"True\" then the returned tensor does not\n share its memory. If \"False\" then the returned tensor shares\n its memory with \"obj\" and an error is thrown if it cannot.\n", "source": "https://pytorch.org/docs/stable/generated/torch.asarray.html", "category": "pytorch docs"} {"text": "\n\ndevice (\"torch.device\", optional) -- the device of the\n returned tensor. Default: \"None\", which causes the device of\n \"obj\" to be used.\n\nrequires_grad (bool, optional) -- whether the\n returned tensor requires grad. Default: \"False\", which causes\n the returned tensor not to require a gradient. If \"True\", then\n the returned tensor will require a gradient, and if \"obj\" is\n also a tensor with an autograd history then the returned\n tensor will have the same history.\n\n\n\nExample:\n >>> a = torch.tensor([1, 2, 3])\n >>> # Shares memory with tensor 'a'\n >>> b = torch.asarray(a)\n >>> a.data_ptr() == b.data_ptr()\n True\n >>> # Forces memory copy\n >>> c = torch.asarray(a, copy=True)\n >>> a.data_ptr() == c.data_ptr()\n False\n\n >>> a = torch.tensor([1, 2, 3], requires_grad=True).float()\n >>> b = a + 2\n >>> b\n tensor([1., 2., 3.], grad_fn=)\n", "source": "https://pytorch.org/docs/stable/generated/torch.asarray.html", "category": "pytorch docs"} {"text": "tensor([1., 2., 3.], grad_fn=)\n >>> # Shares memory with tensor 'b', with no grad\n >>> c = torch.asarray(b)\n >>> c\n tensor([1., 2., 3.])\n >>> # Shares memory with tensor 'b', retaining autograd history\n >>> d = torch.asarray(b, requires_grad=True)\n >>> d\n tensor([1., 2., 3.], grad_fn=)\n >>> array = numpy.array([1, 2, 3])\n >>> # Shares memory with array 'array'\n >>> t1 = torch.asarray(array)\n >>> array.__array_interface__['data'][0] == t1.data_ptr()\n True\n >>> # Copies memory due to dtype mismatch\n >>> t2 = torch.asarray(array, dtype=torch.float32)\n >>> array.__array_interface__['data'][0] == t1.data_ptr()\n False\n", "source": "https://pytorch.org/docs/stable/generated/torch.asarray.html", "category": "pytorch docs"} {"text": "torch.concat\ntorch.concat(tensors, dim=0, *, out=None) -> Tensor\nAlias of \"torch.cat()\".", "source": "https://pytorch.org/docs/stable/generated/torch.concat.html", "category": "pytorch docs"} {"text": "torch.argwhere\ntorch.argwhere(input) -> Tensor\nReturns a tensor containing the indices of all non-zero elements of\n \"input\". Each row in the result contains the indices of a non-zero\n element in \"input\". 
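As a hedged aside (an observation, not a claim made in the text above), that index matrix is the same one torch.nonzero() produces:

    >>> import torch
    >>> t = torch.tensor([[1, 0], [0, 2]])
    >>> torch.equal(torch.argwhere(t), torch.nonzero(t))
    True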
The result is sorted lexicographically, with\n the last index changing the fastest (C-style).\nIf \"input\" has n dimensions, then the resulting indices tensor\n \"out\" is of size (z \\times n), where z is the total number of non-\n zero elements in the \"input\" tensor.\nNote:\n This function is similar to NumPy's *argwhere*. When \"input\" is on\n CUDA, this function causes host-device synchronization.\n\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> t = torch.tensor([1, 0, 1])\n >>> torch.argwhere(t)\n tensor([[0],\n [2]])\n >>> t = torch.tensor([[1, 0, 1], [0, 1, 1]])\n >>> torch.argwhere(t)\n tensor([[0, 0],\n [0, 2],\n [1, 1],\n [1, 2]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.argwhere.html", "category": "pytorch docs"} {"text": "torch.Tensor.fill_\nTensor.fill_(value) -> Tensor\nFills \"self\" tensor with the specified value.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fill_.html", "category": "pytorch docs"} {"text": "NoopObserver\nclass torch.quantization.observer.NoopObserver(dtype=torch.float16, custom_op_name='')\nObserver that doesn't do anything and just passes its configuration\n to the quantized module's \".from_float()\".\nPrimarily used for quantization to float16 which doesn't require\n determining ranges.\nParameters:\n * dtype -- Quantized data type\n * **custom_op_name** -- (temporary) specify this observer for an\n operator that doesn't require any observation (Can be used in\n Graph Mode Passes for special case ops).\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.NoopObserver.html", "category": "pytorch docs"} {"text": "torch.nn.functional.hardtanh\ntorch.nn.functional.hardtanh(input, min_val=- 1., max_val=1., inplace=False) -> Tensor\nApplies the HardTanh function element-wise. See \"Hardtanh\" for more\n details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.hardtanh.html", "category": "pytorch docs"} {"text": "torch._foreach_frac_\ntorch._foreach_frac_(self: List[Tensor]) -> None\nApply \"torch.frac()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_frac_.html", "category": "pytorch docs"} {"text": "torch.nn.functional.mish\ntorch.nn.functional.mish(input, inplace=False)\nApplies the Mish function, element-wise. 
Mish: A Self Regularized\n Non-Monotonic Neural Activation Function.\n \\text{Mish}(x) = x * \\text{Tanh}(\\text{Softplus}(x))\n\nNote:\n See Mish: A Self Regularized Non-Monotonic Neural Activation\n Function\n\nSee \"Mish\" for more details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.mish.html", "category": "pytorch docs"} {"text": "torch.Tensor.square_\nTensor.square_() -> Tensor\nIn-place version of \"square()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.square_.html", "category": "pytorch docs"} {"text": "torch.Tensor.sgn_\nTensor.sgn_() -> Tensor\nIn-place version of \"sgn()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sgn_.html", "category": "pytorch docs"} {"text": "torch.fft.fftshift\ntorch.fft.fftshift(input, dim=None) -> Tensor\nReorders n-dimensional FFT data, as provided by \"fftn()\", to have\n negative frequency terms first.\nThis performs a periodic shift of n-dimensional data such that the\n origin \"(0, ..., 0)\" is moved to the center of the tensor.\n Specifically, to \"input.shape[dim] // 2\" in each selected\n dimension.\nNote:\n By convention, the FFT returns positive frequency terms first,\n followed by the negative frequencies in reverse order, so that\n \"f[-i]\" for all 0 < i \\leq n/2 in Python gives the negative\n frequency terms. \"fftshift()\" rearranges all frequencies into\n ascending order from negative to positive with the zero-frequency\n term in the center.\n\nNote:\n For even lengths, the Nyquist frequency at \"f[n/2]\" can be\n thought of as either negative or positive. \"fftshift()\" always\n puts the Nyquist term at the 0-index. This is the same convention\n used by \"fftfreq()\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftshift.html", "category": "pytorch docs"} {"text": "used by \"fftfreq()\".\nParameters:\n * input (Tensor) -- the tensor in FFT order\n * **dim** (*int**, **Tuple**[**int**]**, **optional*) -- The\n dimensions to rearrange. Only dimensions specified here will\n be rearranged, any other dimensions will be left in their\n original order. Default: All dimensions of \"input\".\n\n-[ Example ]-\n\n\n\nf = torch.fft.fftfreq(4)\nf\n tensor([ 0.0000, 0.2500, -0.5000, -0.2500])\ntorch.fft.fftshift(f)\n tensor([-0.5000, -0.2500, 0.0000, 0.2500])\n\n\n\nAlso notice that the Nyquist frequency term at \"f[2]\" was moved to\n the beginning of the tensor.\nThis also works for multi-dimensional transforms:\n\n\n\nx = torch.fft.fftfreq(5, d=1/5) + 0.1 * torch.fft.fftfreq(5, d=1/5).unsqueeze(1)\nx\n tensor([[ 0.0000, 1.0000, 2.0000, -2.0000, -1.0000],\n [ 0.1000, 1.1000, 2.1000, -1.9000, -0.9000],\n [ 0.2000, 1.2000, 2.2000, -1.8000, -0.8000],\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftshift.html", "category": "pytorch docs"} {"text": "[-0.2000, 0.8000, 1.8000, -2.2000, -1.2000],\n [-0.1000, 0.9000, 1.9000, -2.1000, -1.1000]])\n\n\n\ntorch.fft.fftshift(x)\n tensor([[-2.2000, -1.2000, -0.2000, 0.8000, 1.8000],\n [-2.1000, -1.1000, -0.1000, 0.9000, 1.9000],\n [-2.0000, -1.0000, 0.0000, 1.0000, 2.0000],\n [-1.9000, -0.9000, 0.1000, 1.1000, 2.1000],\n [-1.8000, -0.8000, 0.2000, 1.2000, 2.2000]])\n\n\n\n\"fftshift()\" can also be useful for spatial data. 
If our data is\n defined on a centered grid (\"[-(N//2), (N-1)//2]\") then we can use\n the standard FFT defined on an uncentered grid (\"[0, N)\") by first\n applying an \"ifftshift()\".\n\n\n\nx_centered = torch.arange(-5, 5)\nx_uncentered = torch.fft.ifftshift(x_centered)\nfft_uncentered = torch.fft.fft(x_uncentered)\n\n\n\nSimilarly, we can convert the frequency domain components to\n centered convention by applying \"fftshift()\".\n\n\n\nfft_centered = torch.fft.fftshift(fft_uncentered)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftshift.html", "category": "pytorch docs"} {"text": "The inverse transform, from centered Fourier space back to centered\n spatial data, can be performed by applying the inverse shifts in\n reverse order:\n\n\n\nx_centered_2 = torch.fft.fftshift(torch.fft.ifft(torch.fft.ifftshift(fft_centered)))\ntorch.testing.assert_close(x_centered.to(torch.complex64), x_centered_2, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftshift.html", "category": "pytorch docs"} {"text": "float_qparams_weight_only_qconfig\ntorch.quantization.qconfig.float_qparams_weight_only_qconfig\nalias of QConfig(activation=,\n weight=functools.partial(,\n dtype=torch.quint8, qscheme=torch.per_channel_affine_float_qparams,\n ch_axis=0){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.float_qparams_weight_only_qconfig.html", "category": "pytorch docs"} {"text": "torch.nn.functional.gumbel_softmax\ntorch.nn.functional.gumbel_softmax(logits, tau=1, hard=False, eps=1e-10, dim=- 1)\nSamples from the Gumbel-Softmax distribution (Link 1 Link 2) and\n optionally discretizes.\nParameters:\n * logits (Tensor) -- [..., num_features] unnormalized\n log probabilities\n * **tau** (*float*) -- non-negative scalar temperature\n\n * **hard** (*bool*) -- if \"True\", the returned samples will be\n discretized as one-hot vectors, but will be differentiated as\n if it is the soft sample in autograd\n\n * **dim** (*int*) -- A dimension along which softmax will be\n computed. Default: -1.\n\nReturns:\n Sampled tensor of same shape as logits from the Gumbel-Softmax\n distribution. 
If \"hard=True\", the returned samples will be one-\n hot, otherwise they will be probability distributions that sum\n to 1 across dim.\nReturn type:\n Tensor\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.gumbel_softmax.html", "category": "pytorch docs"} {"text": "Return type:\n Tensor\nNote:\n This function is here for legacy reasons, may be removed from\n nn.Functional in the future.\n\nNote:\n The main trick for *hard* is to do *y_hard - y_soft.detach() +\n y_soft*It achieves two things: - makes the output value exactly\n one-hot (since we add then subtract y_soft value) - makes the\n gradient equal to y_soft gradient (since we strip all other\n gradients)\n\nExamples::\n >>> logits = torch.randn(20, 32)\n >>> # Sample soft categorical using reparametrization trick:\n >>> F.gumbel_softmax(logits, tau=1, hard=False)\n >>> # Sample hard categorical using \"Straight-through\" trick:\n >>> F.gumbel_softmax(logits, tau=1, hard=True)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.gumbel_softmax.html", "category": "pytorch docs"} {"text": "torch._foreach_log10\ntorch._foreach_log10(self: List[Tensor]) -> List[Tensor]\nApply \"torch.log10()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log10.html", "category": "pytorch docs"} {"text": "torch.nn.utils.parametrize.register_parametrization\ntorch.nn.utils.parametrize.register_parametrization(module, tensor_name, parametrization, *, unsafe=False)\nAdds a parametrization to a tensor in a module.\nAssume that \"tensor_name=\"weight\"\" for simplicity. When accessing\n \"module.weight\", the module will return the parametrized version\n \"parametrization(module.weight)\". If the original tensor requires a\n gradient, the backward pass will differentiate through\n \"parametrization\", and the optimizer will update the tensor\n accordingly.\nThe first time that a module registers a parametrization, this\n function will add an attribute \"parametrizations\" to the module of\n type \"ParametrizationList\".\nThe list of parametrizations on the tensor \"weight\" will be\n accessible under \"module.parametrizations.weight\".\nThe original tensor will be accessible under\n \"module.parametrizations.weight.original\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html", "category": "pytorch docs"} {"text": "\"module.parametrizations.weight.original\".\nParametrizations may be concatenated by registering several\n parametrizations on the same attribute.\nThe training mode of a registered parametrization is updated on\n registration to match the training mode of the host module\nParametrized parameters and buffers have an inbuilt caching system\n that can be activated using the context manager \"cached()\".\nA \"parametrization\" may optionally implement a method with\n signature\n def right_inverse(self, X: Tensor) -> Union[Tensor, Sequence[Tensor]]\n\nThis method is called on the unparametrized tensor when the first\n parametrization is registered to compute the initial value of the\n original tensor. 
If this method is not implemented, the original\n tensor will be just the unparametrized tensor.\nIf all the parametrizations registered on a tensor implement\n right_inverse it is possible to initialize a parametrized tensor\n by assigning to it, as shown in the example below.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html", "category": "pytorch docs"} {"text": "It is possible for the first parametrization to depend on several\n inputs. This may be implemented returning a tuple of tensors from\n \"right_inverse\" (see the example implementation of a \"RankOne\"\n parametrization below).\nIn this case, the unconstrained tensors are also located under\n \"module.parametrizations.weight\" with names \"original0\",\n \"original1\",...\nNote:\n If unsafe=False (default) both the forward and right_inverse\n methods will be called once to perform a number of consistency\n checks. If unsafe=True, then right_inverse will be called if the\n tensor is not parametrized, and nothing will be called otherwise.\n\nNote:\n In most situations, \"right_inverse\" will be a function such that\n \"forward(right_inverse(X)) == X\" (see right inverse). Sometimes,\n when the parametrization is not surjective, it may be reasonable\n to relax this.\n\nWarning:\n If a parametrization depends on several inputs,\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html", "category": "pytorch docs"} {"text": "\"register_parametrization()\" will register a number of new\n parameters. If such parametrization is registered after the\n optimizer is created, these new parameters will need to be added\n manually to the optimizer. See\n \"torch.Optimizer.add_param_group()\".\nParameters:\n * module (nn.Module) -- module on which to register the\n parametrization\n * **tensor_name** (*str*) -- name of the parameter or buffer on\n which to register the parametrization\n\n * **parametrization** (*nn.Module*) -- the parametrization to\n register\n\nKeyword Arguments:\n unsafe (bool) -- a boolean flag that denotes whether the\n parametrization may change the dtype and shape of the tensor.\n Default: False Warning: the parametrization is not checked for\n consistency upon registration. 
Enable this flag at your own\n risk.\nRaises:\n ValueError -- if the module does not have a parameter or a\n buffer named \"tensor_name\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html", "category": "pytorch docs"} {"text": "buffer named \"tensor_name\"\nReturn type:\n Module\n-[ Examples ]-\n\n\n\nimport torch\nimport torch.nn as nn\nimport torch.nn.utils.parametrize as P\nclass Symmetric(nn.Module):\n def forward(self, X):\n return X.triu() + X.triu(1).T # Return a symmetric matrix\ndef right_inverse(self, A):\n return A.triu()\n\nm = nn.Linear(5, 5)\nP.register_parametrization(m, \"weight\", Symmetric())\nprint(torch.allclose(m.weight, m.weight.T)) # m.weight is now symmetric\n True\nA = torch.rand(5, 5)\nA = A + A.T # A is now symmetric\nm.weight = A # Initialize the weight to be the symmetric matrix A\nprint(torch.allclose(m.weight, A))\n True\nclass RankOne(nn.Module):\n def forward(self, x, y):\n # Form a rank 1 matrix multiplying two vectors\n return x.unsqueeze(-1) @ y.unsqueeze(-2)\ndef right_inverse(self, Z):\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html", "category": "pytorch docs"} {"text": "\n\n\ndef right_inverse(self, Z):\n # Project Z onto the rank 1 matrices\n U, S, Vh = torch.linalg.svd(Z, full_matrices=False)\n # Return rescaled singular vectors\n s0_sqrt = S[0].sqrt().unsqueeze(-1)\n return U[..., :, 0] * s0_sqrt, Vh[..., 0, :] * s0_sqrt\n\nlinear_rank_one = P.register_parametrization(nn.Linear(4, 4), \"weight\", RankOne())\nprint(torch.linalg.matrix_rank(linear_rank_one.weight).item())\n 1\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.register_parametrization.html", "category": "pytorch docs"} {"text": "default_qat_qconfig_v2\ntorch.quantization.qconfig.default_qat_qconfig_v2\nalias of QConfig(activation=functools.partial(,\n observer=,\n quant_min=0, quant_max=255, dtype=torch.quint8){},\n weight=functools.partial(, observer=,\n quant_min=-128, quant_max=127, dtype=torch.qint8,\n qscheme=torch.per_tensor_symmetric){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_qat_qconfig_v2.html", "category": "pytorch docs"} {"text": "torch.Tensor.trunc\nTensor.trunc() -> Tensor\nSee \"torch.trunc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.trunc.html", "category": "pytorch docs"} {"text": "torch.fft.rfft2\ntorch.fft.rfft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor\nComputes the 2-dimensional discrete Fourier transform of real\n \"input\". Equivalent to \"rfftn()\" but FFTs only the last two\n dimensions by default.\nThe FFT of a real signal is Hermitian-symmetric, \"X[i, j] =\n conj(X[-i, -j])\", so the full \"fft2()\" output contains redundant\n information. \"rfft2()\" instead omits the negative frequencies in\n the last dimension.\nNote:\n Supports torch.half on CUDA with GPU Architecture SM53 or\n greater. However it only supports powers of 2 signal length in\n every transformed dimensions.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the real FFT. 
If a length \"-1\" is specified, no\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfft2.html", "category": "pytorch docs"} {"text": "padding is done in that dimension. Default: \"s =\n [input.size(d) for d in dim]\"\n * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be\n transformed. Default: last two dimensions.\n\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the forward transform (\"rfft2()\"),\n these correspond to:\n\n * \"\"forward\"\" - normalize by \"1/n\"\n\n * \"\"backward\"\" - no normalization\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the real FFT\n orthonormal)\n\n Where \"n = prod(s)\" is the logical FFT size. Calling the\n backward transform (\"irfft2()\") with the same normalization\n mode will apply an overall normalization of \"1/n\" between the\n two transforms. This is required to make \"irfft2()\" the exact\n inverse.\n\n Default is \"\"backward\"\" (no normalization).\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nt = torch.rand(10, 10)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfft2.html", "category": "pytorch docs"} {"text": "-[ Example ]-\n\n\n\nt = torch.rand(10, 10)\nrfft2 = torch.fft.rfft2(t)\nrfft2.size()\n torch.Size([10, 6])\n\n\n\nCompared against the full output from \"fft2()\", we have all\n elements up to the Nyquist frequency.\n\n\n\nfft2 = torch.fft.fft2(t)\ntorch.testing.assert_close(fft2[..., :6], rfft2, check_stride=False)\n\n\n\nThe discrete Fourier transform is separable, so \"rfft2()\" here is\n equivalent to a combination of \"fft()\" and \"rfft()\":\n\n\n\ntwo_ffts = torch.fft.fft(torch.fft.rfft(t, dim=1), dim=0)\ntorch.testing.assert_close(rfft2, two_ffts, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfft2.html", "category": "pytorch docs"} {"text": "GRU\nclass torch.ao.nn.quantized.dynamic.GRU(args, *kwargs)\nApplies a multi-layer gated recurrent unit (GRU) RNN to an input\n sequence.\nFor each element in the input sequence, each layer computes the\n following function:\n \\begin{array}{ll} r_t = \\sigma(W_{ir} x_t + b_{ir} + W_{hr}\n h_{(t-1)} + b_{hr}) \\\\ z_t = \\sigma(W_{iz} x_t + b_{iz} +\n W_{hz} h_{(t-1)} + b_{hz}) \\\\ n_t = \\tanh(W_{in} x_t +\n b_{in} + r_t * (W_{hn} h_{(t-1)}+ b_{hn})) \\\\ h_t = (1 -\n z_t) * n_t + z_t * h_{(t-1)} \\end{array}\n\nwhere h_t is the hidden state at time t, x_t is the input at time\n t, h_{(t-1)} is the hidden state of the layer at time t-1 or\n the initial hidden state at time 0, and r_t, z_t, n_t are the\n reset, update, and new gates, respectively. \\sigma is the sigmoid\n function, and * is the Hadamard product.\nIn a multilayer GRU, the input x^{(l)}_t of the l -th layer (l >=\n 2) is the hidden state h^{(l-1)}_t of the previous layer multiplied", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html", "category": "pytorch docs"} {"text": "by dropout \\delta^{(l-1)}_t where each \\delta^{(l-1)}_t is a\n Bernoulli random variable which is 0 with probability \"dropout\".\nParameters:\n * input_size -- The number of expected features in the input\n x\n * **hidden_size** -- The number of features in the hidden state\n *h*\n\n * **num_layers** -- Number of recurrent layers. E.g., setting\n \"num_layers=2\" would mean stacking two GRUs together to form a\n *stacked GRU*, with the second GRU taking in outputs of the\n first GRU and computing the final results. 
Default: 1\n\n * **bias** -- If \"False\", then the layer does not use bias\n weights *b_ih* and *b_hh*. Default: \"True\"\n\n * **batch_first** -- If \"True\", then the input and output\n tensors are provided as (batch, seq, feature). Default:\n \"False\"\n\n * **dropout** -- If non-zero, introduces a *Dropout* layer on\n the outputs of each GRU layer except the last layer, with\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html", "category": "pytorch docs"} {"text": "dropout probability equal to \"dropout\". Default: 0\n * **bidirectional** -- If \"True\", becomes a bidirectional GRU.\n Default: \"False\"\n\nInputs: input, h_0\n * input of shape (seq_len, batch, input_size): tensor\n containing the features of the input sequence. The input can\n also be a packed variable length sequence. See\n \"torch.nn.utils.rnn.pack_padded_sequence()\" for details.\n * **h_0** of shape *(num_layers * num_directions, batch,\n hidden_size)*: tensor containing the initial hidden state for\n each element in the batch. Defaults to zero if not provided.\n If the RNN is bidirectional, num_directions should be 2, else\n it should be 1.\n\nOutputs: output, h_n\n * output of shape (seq_len, batch, num_directions *\n hidden_size): tensor containing the output features h_t from\n the last layer of the GRU, for each t. If a\n \"torch.nn.utils.rnn.PackedSequence\" has been given as the", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html", "category": "pytorch docs"} {"text": "input, the output will also be a packed sequence. For the\n unpacked case, the directions can be separated using\n \"output.view(seq_len, batch, num_directions, hidden_size)\",\n with forward and backward being direction 0 and 1\n respectively.\n Similarly, the directions can be separated in the packed case.\n\n * **h_n** of shape *(num_layers * num_directions, batch,\n hidden_size)*: tensor containing the hidden state for *t =\n seq_len*\n\n Like *output*, the layers can be separated using\n \"h_n.view(num_layers, num_directions, batch, hidden_size)\".\n\nShape:\n * Input1: (L, N, H_{in}) tensor containing input features where\n H_{in}=\\text{input_size} and L represents a sequence\n length.\n * Input2: (S, N, H_{out}) tensor containing the initial hidden\n state for each element in the batch.\n H_{out}=\\text{hidden\\_size} Defaults to zero if not provided.\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html", "category": "pytorch docs"} {"text": "where S=\\text{num_layers} * \\text{num_directions} If the RNN\n is bidirectional, num_directions should be 2, else it should\n be 1.\n * Output1: (L, N, H_{all}) where H_{all}=\\text{num\\_directions}\n * \\text{hidden\\_size}\n\n * Output2: (S, N, H_{out}) tensor containing the next hidden\n state for each element in the batch\n\nVariables:\n * weight_ih_l[k] -- the learnable input-hidden weights of\n the \\text{k}^{th} layer (W_ir|W_iz|W_in), of shape\n (3hidden_size, input_size) for k = 0. 
Otherwise, the\n shape is (3hidden_size, num_directions * hidden_size)\n * **weight_hh_l[k]** -- the learnable hidden-hidden weights of\n the \\text{k}^{th} layer (W_hr|W_hz|W_hn), of shape\n *(3*hidden_size, hidden_size)*\n\n * **bias_ih_l[k]** -- the learnable input-hidden bias of the\n \\text{k}^{th} layer (b_ir|b_iz|b_in), of shape\n *(3*hidden_size)*\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html", "category": "pytorch docs"} {"text": "(3hidden_size)*\n * **bias_hh_l[k]** -- the learnable hidden-hidden bias of the\n \\text{k}^{th} layer (b_hr|b_hz|b_hn), of shape\n *(3*hidden_size)*\n\nNote:\n All the weights and biases are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden\\_size}}\n\nNote:\n If the following conditions are satisfied: 1) cudnn is enabled,\n 2) input data is on the GPU 3) input data has dtype\n \"torch.float16\" 4) V100 GPU is used, 5) input data is not in\n \"PackedSequence\" format persistent algorithm can be selected to\n improve performance.\n\nExamples:\n >>> rnn = nn.GRU(10, 20, 2)\n >>> input = torch.randn(5, 3, 10)\n >>> h0 = torch.randn(2, 3, 20)\n >>> output, hn = rnn(input, h0)\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.GRU.html", "category": "pytorch docs"} {"text": "update_bn_stats\nclass torch.ao.nn.intrinsic.qat.update_bn_stats(mod)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.update_bn_stats.html", "category": "pytorch docs"} {"text": "torch.Tensor.crow_indices\nTensor.crow_indices() -> IntTensor\nReturns the tensor containing the compressed row indices of the\n \"self\" tensor when \"self\" is a sparse CSR tensor of layout\n \"sparse_csr\". The \"crow_indices\" tensor is strictly of shape\n (\"self\".size(0) + 1) and of type \"int32\" or \"int64\". When using MKL\n routines such as sparse matrix multiplication, it is necessary to\n use \"int32\" indexing in order to avoid downcasting and potentially\n losing information.\nExample::\n >>> csr = torch.eye(5,5).to_sparse_csr()\n >>> csr.crow_indices()\n tensor([0, 1, 2, 3, 4, 5], dtype=torch.int32)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.crow_indices.html", "category": "pytorch docs"} {"text": "torch.Tensor.add\nTensor.add(other, *, alpha=1) -> Tensor\nAdd a scalar or tensor to \"self\" tensor. If both \"alpha\" and\n \"other\" are specified, each element of \"other\" is scaled by \"alpha\"\n before being used.\nWhen \"other\" is a tensor, the shape of \"other\" must be\n broadcastable with the shape of the underlying tensor\nSee \"torch.add()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.add.html", "category": "pytorch docs"} {"text": "torch.cuda.get_device_name\ntorch.cuda.get_device_name(device=None)\nGets the name of a device.\nParameters:\n device (torch.device or int, optional) -- device\n for which to return the name. This function is a no-op if this\n argument is a negative integer. 
It uses the current device,\n given by \"current_device()\", if \"device\" is \"None\" (default).\nReturns:\n the name of the device\nReturn type:\n str", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_device_name.html", "category": "pytorch docs"} {"text": "torch.Tensor.reciprocal\nTensor.reciprocal() -> Tensor\nSee \"torch.reciprocal()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.reciprocal.html", "category": "pytorch docs"} {"text": "torch.autograd.functional.vjp\ntorch.autograd.functional.vjp(func, inputs, v=None, create_graph=False, strict=False)\nFunction that computes the dot product between a vector \"v\" and the\n Jacobian of the given function at the point given by the inputs.\nParameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a tuple of Tensors or a Tensor.\n * **inputs** (*tuple of Tensors** or **Tensor*) -- inputs to the\n function \"func\".\n\n * **v** (*tuple of Tensors** or **Tensor*) -- The vector for\n which the vector Jacobian product is computed. Must be the\n same size as the output of \"func\". This argument is optional\n when the output of \"func\" contains a single element and (if it\n is not provided) will be set as a Tensor containing a single\n \"1\".\n\n * **create_graph** (*bool**, **optional*) -- If \"True\", both the\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.vjp.html", "category": "pytorch docs"} {"text": "output and result will be computed in a differentiable way.\n Note that when \"strict\" is \"False\", the result can not require\n gradients or be disconnected from the inputs. Defaults to\n \"False\".\n * **strict** (*bool**, **optional*) -- If \"True\", an error will\n be raised when we detect that there exists an input such that\n all the outputs are independent of it. If \"False\", we return a\n Tensor of zeros as the vjp for said inputs, which is the\n expected mathematical value. Defaults to \"False\".\n\nReturns:\n tuple with:\n func_output (tuple of Tensors or Tensor): output of\n \"func(inputs)\"\n vjp (tuple of Tensors or Tensor): result of the dot product\n with the same shape as the inputs.\n\nReturn type:\n output (tuple)\n-[ Example ]-\n\n\n\ndef exp_reducer(x):\n ... return x.exp().sum(dim=1)\ninputs = torch.rand(4, 4)\nv = torch.ones(4)\nvjp(exp_reducer, inputs, v)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.vjp.html", "category": "pytorch docs"} {"text": "\n\n\nvjp(exp_reducer, inputs, v)\n (tensor([5.7817, 7.2458, 5.7830, 6.7782]),\n tensor([[1.4458, 1.3962, 1.3042, 1.6354],\n [2.1288, 1.0652, 1.5483, 2.5035],\n [2.2046, 1.1292, 1.1432, 1.3059],\n [1.3225, 1.6652, 1.7753, 2.0152]]))\nvjp(exp_reducer, inputs, v, create_graph=True)\n (tensor([5.7817, 7.2458, 5.7830, 6.7782], grad_fn=),\n tensor([[1.4458, 1.3962, 1.3042, 1.6354],\n [2.1288, 1.0652, 1.5483, 2.5035],\n [2.2046, 1.1292, 1.1432, 1.3059],\n [1.3225, 1.6652, 1.7753, 2.0152]], grad_fn=))\ndef adder(x, y):\n ... 
return 2 * x + 3 * y\ninputs = (torch.rand(2), torch.rand(2))\nv = torch.ones(2)\nvjp(adder, inputs, v)\n (tensor([2.4225, 2.3340]),\n (tensor([2., 2.]), tensor([3., 3.])))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.vjp.html", "category": "pytorch docs"} {"text": "torch.nan_to_num\ntorch.nan_to_num(input, nan=0.0, posinf=None, neginf=None, *, out=None) -> Tensor\nReplaces \"NaN\", positive infinity, and negative infinity values in\n \"input\" with the values specified by \"nan\", \"posinf\", and \"neginf\",\n respectively. By default, \"NaN\"s are replaced with zero, positive\n infinity is replaced with the greatest finite value representable\n by \"input\"'s dtype, and negative infinity is replaced with the\n least finite value representable by \"input\"'s dtype.\nParameters:\n * input (Tensor) -- the input tensor.\n * **nan** (*Number**, **optional*) -- the value to replace\n \"NaN\"s with. Default is zero.\n\n * **posinf** (*Number**, **optional*) -- if a Number, the value\n to replace positive infinity values with. If None, positive\n infinity values are replaced with the greatest finite value\n representable by \"input\"'s dtype. Default is None.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nan_to_num.html", "category": "pytorch docs"} {"text": "\nneginf (Number, optional) -- if a Number, the value\n to replace negative infinity values with. If None, negative\n infinity values are replaced with the lowest finite value\n representable by \"input\"'s dtype. Default is None.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> x = torch.tensor([float('nan'), float('inf'), -float('inf'), 3.14])\n >>> torch.nan_to_num(x)\n tensor([ 0.0000e+00, 3.4028e+38, -3.4028e+38, 3.1400e+00])\n >>> torch.nan_to_num(x, nan=2.0)\n tensor([ 2.0000e+00, 3.4028e+38, -3.4028e+38, 3.1400e+00])\n >>> torch.nan_to_num(x, nan=2.0, posinf=1.0)\n tensor([ 2.0000e+00, 1.0000e+00, -3.4028e+38, 3.1400e+00])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nan_to_num.html", "category": "pytorch docs"} {"text": "FractionalMaxPool3d\nclass torch.nn.FractionalMaxPool3d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)\nApplies a 3D fractional max pooling over an input signal composed\n of several input planes.\nFractional MaxPooling is described in detail in the paper\n Fractional MaxPooling by Ben Graham\nThe max-pooling operation is applied in kT \\times kH \\times kW\n regions by a stochastic step size determined by the target output\n size. The number of output features is equal to the number of input\n planes.\nParameters:\n * kernel_size (Union[int, Tuple[int, int,\n int]]) -- the size of the window to take a max over.\n Can be a single number k (for a square kernel of k x k x k) or\n a tuple (kt x kh x kw)\n * **output_size** (*Union**[**int**, **Tuple**[**int**, **int**,\n **int**]**]*) -- the target output size of the image of the\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool3d.html", "category": "pytorch docs"} {"text": "form oT x oH x oW. 
Can be a tuple (oT, oH, oW) or a single\n number oH for a square image oH x oH x oH\n * **output_ratio** (*Union**[**float**, **Tuple**[**float**,\n **float**, **float**]**]*) -- If one wants to have an output\n size as a ratio of the input size, this option can be given.\n This has to be a number or tuple in the range (0, 1)\n\n * **return_indices** (*bool*) -- if \"True\", will return the\n indices along with the outputs. Useful to pass to\n \"nn.MaxUnpool3d()\". Default: \"False\"\n\nShape:\n * Input: (N, C, T_{in}, H_{in}, W_{in}) or (C, T_{in}, H_{in},\n W_{in}).\n * Output: (N, C, T_{out}, H_{out}, W_{out}) or (C, T_{out},\n H_{out}, W_{out}), where (T_{out}, H_{out},\n W_{out})=\\text{output\\_size} or (T_{out}, H_{out},\n W_{out})=\\text{output\\_ratio} \\times (T_{in}, H_{in}, W_{in})\n\n-[ Examples ]-\n\n\n\npool of cubic window of size=3, and target output size 13x12x11\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool3d.html", "category": "pytorch docs"} {"text": "\n\n\nm = nn.FractionalMaxPool3d(3, output_size=(13, 12, 11))\npool of cubic window and target output size being half of input size\nm = nn.FractionalMaxPool3d(3, output_ratio=(0.5, 0.5, 0.5))\ninput = torch.randn(20, 16, 50, 32, 16)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.ger\nTensor.ger(vec2) -> Tensor\nSee \"torch.ger()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ger.html", "category": "pytorch docs"} {"text": "torch.Tensor.col_indices\nTensor.col_indices() -> IntTensor\nReturns the tensor containing the column indices of the \"self\"\n tensor when \"self\" is a sparse CSR tensor of layout \"sparse_csr\".\n The \"col_indices\" tensor is strictly of shape (\"self\".nnz()) and of\n type \"int32\" or \"int64\". 
When using MKL routines such as sparse\n matrix multiplication, it is necessary to use \"int32\" indexing in\n order to avoid downcasting and potentially losing information.\nExample::\n >>> csr = torch.eye(5,5).to_sparse_csr()\n >>> csr.col_indices()\n tensor([0, 1, 2, 3, 4], dtype=torch.int32)", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.col_indices.html", "category": "pytorch docs"} {"text": "torch.Tensor.uniform_\nTensor.uniform_(from=0, to=1) -> Tensor\nFills \"self\" tensor with numbers sampled from the continuous\n uniform distribution:\n P(x) = \\dfrac{1}{\\text{to} - \\text{from}}\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.uniform_.html", "category": "pytorch docs"} {"text": "torch.Tensor.device\nTensor.device\nIs the \"torch.device\" where this Tensor is.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.device.html", "category": "pytorch docs"} {"text": "torch.Tensor.unfold\nTensor.unfold(dimension, size, step) -> Tensor\nReturns a view of the original tensor which contains all slices of\n size \"size\" from \"self\" tensor in the dimension \"dimension\".\nStep between two slices is given by \"step\".\nIf sizedim is the size of dimension \"dimension\" for \"self\", the\n size of dimension \"dimension\" in the returned tensor will be\n (sizedim - size) / step + 1.\nAn additional dimension of size \"size\" is appended in the returned\n tensor.\nParameters:\n * dimension (int) -- dimension in which unfolding happens\n * **size** (*int*) -- the size of each slice that is unfolded\n\n * **step** (*int*) -- the step between each slice\n\nExample:\n >>> x = torch.arange(1., 8)\n >>> x\n tensor([ 1., 2., 3., 4., 5., 6., 7.])\n >>> x.unfold(0, 2, 1)\n tensor([[ 1., 2.],\n [ 2., 3.],\n [ 3., 4.],\n [ 4., 5.],\n [ 5., 6.],\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unfold.html", "category": "pytorch docs"} {"text": "[ 5., 6.],\n [ 6., 7.]])\n >>> x.unfold(0, 2, 2)\n tensor([[ 1., 2.],\n [ 3., 4.],\n [ 5., 6.]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unfold.html", "category": "pytorch docs"} {"text": "torch._foreach_log\ntorch._foreach_log(self: List[Tensor]) -> List[Tensor]\nApply \"torch.log()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log.html", "category": "pytorch docs"} {"text": "torch.digamma\ntorch.digamma(input, *, out=None) -> Tensor\nAlias for \"torch.special.digamma()\".", "source": "https://pytorch.org/docs/stable/generated/torch.digamma.html", "category": "pytorch docs"} {"text": "Tanh\nclass torch.nn.Tanh\nApplies the Hyperbolic Tangent (Tanh) function element-wise.\nTanh is defined as:\n \\text{Tanh}(x) = \\tanh(x) = \\frac{\\exp(x) - \\exp(-x)} {\\exp(x) +\n \\exp(-x)}\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.Tanh()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Tanh.html", "category": "pytorch docs"} {"text": "torch.trapz\ntorch.trapz(y, x, *, dim=- 1) -> Tensor\nAlias for \"torch.trapezoid()\".", "source": "https://pytorch.org/docs/stable/generated/torch.trapz.html", "category": "pytorch docs"} {"text": "torch.Tensor.geometric_\nTensor.geometric_(p, *, generator=None) -> Tensor\nFills \"self\" tensor with elements drawn from the geometric\n distribution:\n f(X=k) = (1 - p)^{k - 1} p\n", "source": 
"https://pytorch.org/docs/stable/generated/torch.Tensor.geometric_.html", "category": "pytorch docs"} {"text": "torch.nn.functional.layer_norm\ntorch.nn.functional.layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05)\nApplies Layer Normalization for last certain number of dimensions.\nSee \"LayerNorm\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.layer_norm.html", "category": "pytorch docs"} {"text": "torch.isinf\ntorch.isinf(input) -> Tensor\nTests if each element of \"input\" is infinite (positive or negative\n infinity) or not.\nNote:\n Complex values are infinite when their real or imaginary part is\n infinite.\n\nParameters:\n input (Tensor) -- the input tensor.\nReturns:\n A boolean tensor that is True where \"input\" is infinite and\n False elsewhere\nExample:\n >>> torch.isinf(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))\n tensor([False, True, False, True, False])\n", "source": "https://pytorch.org/docs/stable/generated/torch.isinf.html", "category": "pytorch docs"} {"text": "Attribute\nclass torch.jit.Attribute(value, type)\nThis method is a pass-through function that returns value, mostly\n used to indicate to the TorchScript compiler that the left-hand\n side expression is a class instance attribute with type of type.\n Note that torch.jit.Attribute should only be used in init\n method of jit.ScriptModule subclasses.\nThough TorchScript can infer correct type for most Python\n expressions, there are some cases where type inference can be\n wrong, including:\n\n\nEmpty containers like [] and {}, which TorchScript assumes to\n be container of Tensor\n\n\nOptional types like Optional[T] but assigned a valid value of\n type T, TorchScript would assume it is type T rather than\n Optional[T]\n\n\nIn eager mode, it is simply a pass-through function that returns\n value without other implications.\nExample:\n import torch\n from typing import Dict\n\n class AttributeModule(torch.jit.ScriptModule):\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.Attribute.html", "category": "pytorch docs"} {"text": "def init(self):\n super(AttributeModule, self).init()\n self.foo = torch.jit.Attribute(0.1, float)\n # we should be able to use self.foo as a float here\n assert 0.0 < self.foo\n\n self.names_ages = torch.jit.Attribute({}, Dict[str, int])\n self.names_ages[\"someone\"] = 20\n assert isinstance(self.names_ages[\"someone\"], int)\n\n m = AttributeModule()\n # m will contain two attributes\n # 1. foo of type float\n # 2. 
names_ages of type Dict[str, int]\n\nNote: it's now preferred to instead use type annotations instead of\n torch.jit.Annotate:\n import torch\n from typing import Dict\n\n class AttributeModule(torch.nn.Module):\n names: Dict[str, int]\n\n def __init__(self):\n super(AttributeModule, self).__init__()\n self.names = {}\n\n m = AttributeModule()\n\nParameters:\n * value -- An initial value to be assigned to attribute.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.Attribute.html", "category": "pytorch docs"} {"text": "\ntype -- A Python type\n\nReturns:\n Returns value\ncount(value, /)\n Return number of occurrences of value.\n\nindex(value, start=0, stop=9223372036854775807, /)\n Return first index of value.\n\n Raises ValueError if the value is not present.\n\ntype\n Alias for field number 1\n\nvalue\n Alias for field number 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.Attribute.html", "category": "pytorch docs"} {"text": "torch.Tensor.long\nTensor.long(memory_format=torch.preserve_format) -> Tensor\n\"self.long()\" is equivalent to \"self.to(torch.int64)\". See \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.long.html", "category": "pytorch docs"} {"text": "torch.Tensor.mvlgamma\nTensor.mvlgamma(p) -> Tensor\nSee \"torch.mvlgamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mvlgamma.html", "category": "pytorch docs"} {"text": "torch.Tensor.nan_to_num_\nTensor.nan_to_num_(nan=0.0, posinf=None, neginf=None) -> Tensor\nIn-place version of \"nan_to_num()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nan_to_num_.html", "category": "pytorch docs"} {"text": "ConvBn3d\nclass torch.ao.nn.intrinsic.ConvBn3d(conv, bn)\nThis is a sequential container which calls the Conv 3d and Batch\n Norm 3d modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBn3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.argmin\nTensor.argmin(dim=None, keepdim=False) -> LongTensor\nSee \"torch.argmin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.argmin.html", "category": "pytorch docs"} {"text": "torch.asinh\ntorch.asinh(input, *, out=None) -> Tensor\nReturns a new tensor with the inverse hyperbolic sine of the\n elements of \"input\".\n \\text{out}_{i} = \\sinh^{-1}(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.1606, -1.4267, -1.0899, -1.0250 ])\n >>> torch.asinh(a)\n tensor([ 0.1599, -1.1534, -0.9435, -0.8990 ])\n", "source": "https://pytorch.org/docs/stable/generated/torch.asinh.html", "category": "pytorch docs"} {"text": "torch.signal.windows.kaiser\ntorch.signal.windows.kaiser(M, *, beta=12.0, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\nComputes the Kaiser window.\nThe Kaiser window is defined as follows:\n w_n = I_0 \\left( \\beta \\sqrt{1 - \\left( {\\frac{n - N/2}{N/2}}\n \\right) ^2 } \\right) / I_0( \\beta )\n\nwhere \"I_0\" is the zeroth order modified Bessel function of the\n first kind (see \"torch.special.i0()\"), and \"N = M - 1 if sym else\n M\".\nThe window is normalized to 1 (maximum value is 1). 
However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\nParameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\nKeyword Arguments:\n * beta (float, optional) -- shape parameter for the\n window. Must be non-negative. Default: 12.0\n * **sym** (*bool**, **optional*) -- If *False*, returns a\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.kaiser.html", "category": "pytorch docs"} {"text": "periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\nReturn type:\n Tensor\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.kaiser.html", "category": "pytorch docs"} {"text": "Return type:\n Tensor\nExamples:\n >>> # Generates a symmetric gaussian window with a standard deviation of 1.0.\n >>> torch.signal.windows.kaiser(5)\n tensor([4.0065e-05, 2.1875e-03, 4.3937e-02, 3.2465e-01, 8.8250e-01, 8.8250e-01, 3.2465e-01, 4.3937e-02, 2.1875e-03, 4.0065e-05])\n >>> # Generates a periodic gaussian window and standard deviation equal to 0.9.\n >>> torch.signal.windows.kaiser(5, sym=False,std=0.9)\n tensor([1.9858e-07, 5.1365e-05, 3.8659e-03, 8.4658e-02, 5.3941e-01, 1.0000e+00, 5.3941e-01, 8.4658e-02, 3.8659e-03, 5.1365e-05])\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.kaiser.html", "category": "pytorch docs"} {"text": "torch.linalg.cross\ntorch.linalg.cross(input, other, *, dim=- 1, out=None) -> Tensor\nComputes the cross product of two 3-dimensional vectors.\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of vectors, for which it computes the product\n along the dimension \"dim\". It broadcasts over the batch dimensions.\nParameters:\n * input (Tensor) -- the first input tensor.\n * **other** (*Tensor*) -- the second input tensor.\n\n * **dim** (*int**, **optional*) -- the dimension along which to\n take the cross-product. Default: *-1*.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor. Ignored\n if None. 
Default: None.\n-[ Example ]-\n\n\n\na = torch.randn(4, 3)\na\n tensor([[-0.3956, 1.1455, 1.6895],\n [-0.5849, 1.3672, 0.3599],\n [-1.1626, 0.7180, -0.0521],\n [-0.1339, 0.9902, -2.0225]])\nb = torch.randn(4, 3)\nb\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cross.html", "category": "pytorch docs"} {"text": "\n\n\nb = torch.randn(4, 3)\nb\n tensor([[-0.0257, -1.4725, -1.2251],\n [-1.1479, -0.7005, -1.9757],\n [-1.3904, 0.3726, -1.1836],\n [-0.9688, -0.7153, 0.2159]])\ntorch.linalg.cross(a, b)\n tensor([[ 1.0844, -0.5281, 0.6120],\n [-2.4490, -1.5687, 1.9792],\n [-0.8304, -1.3037, 0.5650],\n [-1.2329, 1.9883, 1.0551]])\na = torch.randn(1, 3) # a is broadcast to match shape of b\na\n tensor([[-0.9941, -0.5132, 0.5681]])\ntorch.linalg.cross(a, b)\n tensor([[ 1.4653, -1.2325, 1.4507],\n [ 1.4119, -2.6163, 0.1073],\n [ 0.3957, -1.9666, -1.0840],\n [ 0.2956, -0.3357, 0.2139]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.cross.html", "category": "pytorch docs"} {"text": "torch.combinations\ntorch.combinations(input, r=2, with_replacement=False) -> seq\nCompute combinations of length r of the given tensor. The behavior\n is similar to python's itertools.combinations when\n with_replacement is set to False, and\n itertools.combinations_with_replacement when with_replacement\n is set to True.\nParameters:\n * input (Tensor) -- 1D vector.\n * **r** (*int**, **optional*) -- number of elements to combine\n\n * **with_replacement** (*bool**, **optional*) -- whether to\n allow duplication in combination\n\nReturns:\n A tensor equivalent to converting all the input tensors into\n lists, do itertools.combinations or\n itertools.combinations_with_replacement on these lists, and\n finally convert the resulting list into tensor.\nReturn type:\n Tensor\nExample:\n >>> a = [1, 2, 3]\n >>> list(itertools.combinations(a, r=2))\n [(1, 2), (1, 3), (2, 3)]\n", "source": "https://pytorch.org/docs/stable/generated/torch.combinations.html", "category": "pytorch docs"} {"text": "[(1, 2), (1, 3), (2, 3)]\n >>> list(itertools.combinations(a, r=3))\n [(1, 2, 3)]\n >>> list(itertools.combinations_with_replacement(a, r=2))\n [(1, 1), (1, 2), (1, 3), (2, 2), (2, 3), (3, 3)]\n >>> tensor_a = torch.tensor(a)\n >>> torch.combinations(tensor_a)\n tensor([[1, 2],\n [1, 3],\n [2, 3]])\n >>> torch.combinations(tensor_a, r=3)\n tensor([[1, 2, 3]])\n >>> torch.combinations(tensor_a, with_replacement=True)\n tensor([[1, 1],\n [1, 2],\n [1, 3],\n [2, 2],\n [2, 3],\n [3, 3]])", "source": "https://pytorch.org/docs/stable/generated/torch.combinations.html", "category": "pytorch docs"} {"text": "Unfold\nclass torch.nn.Unfold(kernel_size, dilation=1, padding=0, stride=1)\nExtracts sliding local blocks from a batched input tensor.\nConsider a batched \"input\" tensor of shape (N, C, *), where N is\n the batch dimension, C is the channel dimension, and * represent\n arbitrary spatial dimensions. 
This operation flattens each sliding\n \"kernel_size\"-sized block within the spatial dimensions of \"input\"\n into a column (i.e., last dimension) of a 3-D \"output\" tensor of\n shape (N, C \\times \\prod(\\text{kernel_size}), L), where C \\times\n \\prod(\\text{kernel_size}) is the total number of values within\n each block (a block has \\prod(\\text{kernel_size}) spatial\n locations each containing a C-channeled vector), and L is the total\n number of such blocks:\n L = \\prod_d \\left\\lfloor\\frac{\\text{spatial\\_size}[d] + 2 \\times\n \\text{padding}[d] - \\text{dilation}[d] \\times\n (\\text{kernel\\_size}[d] - 1) - 1}{\\text{stride}[d]} +\n 1\\right\\rfloor,\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html", "category": "pytorch docs"} {"text": "1\\right\\rfloor,\nwhere \\text{spatial_size} is formed by the spatial dimensions of\n \"input\" (* above), and d is over all spatial dimensions.\nTherefore, indexing \"output\" at the last dimension (column\n dimension) gives all values within a certain block.\nThe \"padding\", \"stride\" and \"dilation\" arguments specify how the\n sliding blocks are retrieved.\n\n\n\"stride\" controls the stride for the sliding blocks.\n\n\n\"padding\" controls the amount of implicit zero-paddings on both\n sides for \"padding\" number of points for each dimension before\n reshaping.\n\n\n\"dilation\" controls the spacing between the kernel points; also\n known as the \u00e0 trous algorithm. It is harder to describe, but\n this link has a nice visualization of what \"dilation\" does.\n\n\nParameters:\n * kernel_size (int or tuple) -- the size of the\n sliding blocks\n * **dilation** (*int** or **tuple**, **optional*) -- a parameter\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html", "category": "pytorch docs"} {"text": "that controls the stride of elements within the neighborhood.\n Default: 1\n * **padding** (*int** or **tuple**, **optional*) -- implicit\n zero padding to be added on both sides of input. Default: 0\n\n * **stride** (*int** or **tuple**, **optional*) -- the stride of\n the sliding blocks in the input spatial dimensions. Default: 1\n\n\n\nIf \"kernel_size\", \"dilation\", \"padding\" or \"stride\" is an int or\n a tuple of length 1, their values will be replicated across all\n spatial dimensions.\n\n\nFor the case of two input spatial dimensions this operation is\n sometimes called \"im2col\".\n\n\nNote:\n \"Fold\" calculates each combined value in the resulting large\n tensor by summing all values from all containing blocks. \"Unfold\"\n extracts the values in the local blocks by copying from the large\n tensor. So, if the blocks overlap, they are not inverses of each\n other. In general, folding and unfolding operations are related as
Consider \"Fold\" and \"Unfold\" instances created with the\n same parameters:\n >>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...)\n >>> fold = nn.Fold(output_size=..., **fold_params)\n >>> unfold = nn.Unfold(**fold_params)\n\n Then for any (supported) \"input\" tensor the following equality\n holds:\n\n fold(unfold(input)) == divisor * input\n\n where \"divisor\" is a tensor that depends only on the shape and\n dtype of the \"input\":\n\n >>> input_ones = torch.ones(input.shape, dtype=input.dtype)\n >>> divisor = fold(unfold(input_ones))\n\n When the \"divisor\" tensor contains no zero elements, then \"fold\"\n and \"unfold\" operations are inverses of each other (up to\n constant divisor).\n\nWarning:\n Currently, only 4-D input tensors (batched image-like tensors)\n are supported.\n\nShape:\n * Input: (N, C, *)\n * Output: (N, C \\times \\prod(\\text{kernel\\_size}), L) as\n described above\n\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html", "category": "pytorch docs"} {"text": "described above\nExamples:\n >>> unfold = nn.Unfold(kernel_size=(2, 3))\n >>> input = torch.randn(2, 5, 3, 4)\n >>> output = unfold(input)\n >>> # each patch contains 30 values (2x3=6 vectors, each of 5 channels)\n >>> # 4 blocks (2x3 kernels) in total in the 3x4 input\n >>> output.size()\n torch.Size([2, 30, 4])\n\n >>> # Convolution is equivalent with Unfold + Matrix Multiplication + Fold (or view to output shape)\n >>> inp = torch.randn(1, 3, 10, 12)\n >>> w = torch.randn(2, 3, 4, 5)\n >>> inp_unf = torch.nn.functional.unfold(inp, (4, 5))\n >>> out_unf = inp_unf.transpose(1, 2).matmul(w.view(w.size(0), -1).t()).transpose(1, 2)\n >>> out = torch.nn.functional.fold(out_unf, (7, 8), (1, 1))\n >>> # or equivalently (and avoiding a copy),\n >>> # out = out_unf.view(1, 2, 7, 8)\n >>> (torch.nn.functional.conv2d(inp, w) - out).abs().max()\n tensor(1.9073e-06)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Unfold.html", "category": "pytorch docs"} {"text": "torch.unique_consecutive\ntorch.unique_consecutive(args, *kwargs)\nEliminates all but the first element from every consecutive group\n of equivalent elements.\nNote:\n This function is different from \"torch.unique()\" in the sense\n that this function only eliminates consecutive duplicate values.\n This semantics is similar to *std::unique* in C++.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **return_inverse** (*bool*) -- Whether to also return the\n indices for where elements in the original input ended up in\n the returned unique list.\n\n * **return_counts** (*bool*) -- Whether to also return the\n counts for each unique element.\n\n * **dim** (*int*) -- the dimension to apply unique. If \"None\",\n the unique of the flattened input is returned. 
default: \"None\"\n\nReturns:\n A tensor or a tuple of tensors containing\n * **output** (*Tensor*): the output list of unique scalar\n", "source": "https://pytorch.org/docs/stable/generated/torch.unique_consecutive.html", "category": "pytorch docs"} {"text": "elements.\n * **inverse_indices** (*Tensor*): (optional) if\n \"return_inverse\" is True, there will be an additional\n returned tensor (same shape as input) representing the\n indices for where elements in the original input map to in\n the output; otherwise, this function will only return a\n single tensor.\n\n * **counts** (*Tensor*): (optional) if \"return_counts\" is\n True, there will be an additional returned tensor (same\n shape as output or output.size(dim), if dim was specified)\n representing the number of occurrences for each unique\n value or tensor.\n\nReturn type:\n (Tensor, Tensor (optional), Tensor (optional))\nExample:\n >>> x = torch.tensor([1, 1, 2, 2, 3, 1, 1, 2])\n >>> output = torch.unique_consecutive(x)\n >>> output\n tensor([1, 2, 3, 1, 2])\n\n >>> output, inverse_indices = torch.unique_consecutive(x, return_inverse=True)\n >>> output\n", "source": "https://pytorch.org/docs/stable/generated/torch.unique_consecutive.html", "category": "pytorch docs"} {"text": "\n\n\noutput\n tensor([1, 2, 3, 1, 2])\n >>> inverse_indices\n tensor([0, 0, 1, 1, 2, 3, 3, 4])\n\n\n\n >>> output, counts = torch.unique_consecutive(x, return_counts=True)\n >>> output\n tensor([1, 2, 3, 1, 2])\n >>> counts\n tensor([2, 2, 1, 2, 1])\n", "source": "https://pytorch.org/docs/stable/generated/torch.unique_consecutive.html", "category": "pytorch docs"} {"text": "torch.trace\ntorch.trace(input) -> Tensor\nReturns the sum of the elements of the diagonal of the input 2-D\n matrix.\nExample:\n >>> x = torch.arange(1., 10.).view(3, 3)\n >>> x\n tensor([[ 1., 2., 3.],\n [ 4., 5., 6.],\n [ 7., 8., 9.]])\n >>> torch.trace(x)\n tensor(15.)\n", "source": "https://pytorch.org/docs/stable/generated/torch.trace.html", "category": "pytorch docs"} {"text": "SoftMarginLoss\nclass torch.nn.SoftMarginLoss(size_average=None, reduce=None, reduction='mean')\nCreates a criterion that optimizes a two-class classification\n logistic loss between input tensor x and target tensor y\n (containing 1 or -1).\n \\text{loss}(x, y) = \\sum_i \\frac{\\log(1 +\n \\exp(-y[i]*x[i]))}{\\text{x.nelement}()}\n\nParameters:\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SoftMarginLoss.html", "category": "pytorch docs"} {"text": "\"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. 
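Example (a minimal usage sketch for \"SoftMarginLoss\"; the input shape and target values are chosen only for illustration):\n >>> loss = nn.SoftMarginLoss()\n >>> input = torch.randn(3, requires_grad=True)\n >>> target = torch.tensor([1.0, -1.0, 1.0])\n >>> output = loss(input, target)\n >>> output.backward()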
Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Target: (*), same shape as the input.\n\n * Output: scalar. If \"reduction\" is \"'none'\", then (*), same\n shape as input.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SoftMarginLoss.html", "category": "pytorch docs"} {"text": "get_default_qat_qconfig_mapping\nclass torch.ao.quantization.qconfig_mapping.get_default_qat_qconfig_mapping(backend='x86', version=1)\nReturn the default QConfigMapping for quantization aware training.\nParameters:\n * backend (***) -- the quantization backend for the default\n qconfig mapping, should be one of [\"x86\" (default), \"fbgemm\",\n \"qnnpack\", \"onednn\"]\n * **version** (***) -- the version for the default qconfig\n mapping\n\nReturn type:\n QConfigMapping", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.get_default_qat_qconfig_mapping.html", "category": "pytorch docs"} {"text": "torch.Tensor.to_sparse_bsr\nTensor.to_sparse_bsr(blocksize, dense_dim) -> Tensor\nConvert a tensor to a block sparse row (BSR) storage format of\n given blocksize. If the \"self\" is strided, then the number of\n dense dimensions could be specified, and a hybrid BSR tensor will\n be created, with dense_dim dense dimensions and self.dim() - 2 -\n dense_dim batch dimension.\nParameters:\n * blocksize (list, tuple, \"torch.Size\", optional) -- Block\n size of the resulting BSR tensor. A block size must be a tuple\n of length two such that its items evenly divide the two sparse\n dimensions.\n * **dense_dim** (*int**, **optional*) -- Number of dense\n dimensions of the resulting BSR tensor. This argument should\n be used only if \"self\" is a strided tensor, and must be a\n value between 0 and dimension of \"self\" tensor minus two.\n\nExample:\n >>> dense = torch.randn(10, 10)\n >>> sparse = dense.to_sparse_csr()\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_bsr.html", "category": "pytorch docs"} {"text": "\n\n\nsparse = dense.to_sparse_csr()\n >>> sparse_bsr = sparse.to_sparse_bsr((5, 5))\n >>> sparse_bsr.col_indices()\n tensor([0, 1, 0, 1])\n\n\n\n >>> dense = torch.zeros(4, 3, 1)\n >>> dense[0:2, 0] = dense[0:2, 2] = dense[2:4, 1] = 1\n >>> dense.to_sparse_bsr((2, 1), 1)\n tensor(crow_indices=tensor([0, 2, 3]),\n col_indices=tensor([0, 2, 1]),\n values=tensor([[[[1.]],\n\n [[1.]]],\n\n\n [[[1.]],\n\n [[1.]]],\n\n\n [[[1.]],\n\n [[1.]]]]), size=(4, 3, 1), nnz=3,\n layout=torch.sparse_bsr)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_bsr.html", "category": "pytorch docs"} {"text": "torch.Tensor.inner\nTensor.inner(other) -> Tensor\nSee \"torch.inner()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.inner.html", "category": "pytorch docs"} {"text": "torch.index_select\ntorch.index_select(input, dim, index, *, out=None) -> Tensor\nReturns a new tensor which indexes the \"input\" tensor along\n dimension \"dim\" using the entries in \"index\" which is a\n LongTensor.\nThe returned tensor has the same number of dimensions as the\n original tensor (\"input\"). 
The \"dim\"th dimension has the same size\n as the length of \"index\"; other dimensions have the same size as in\n the original tensor.\nNote:\n The returned tensor does **not** use the same storage as the\n original tensor. If \"out\" has a different shape than expected,\n we silently change it to the correct shape, reallocating the\n underlying storage if necessary.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension in which we index\n\n * **index** (*IntTensor** or **LongTensor*) -- the 1-D tensor\n containing the indices to index\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.index_select.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> x = torch.randn(3, 4)\n >>> x\n tensor([[ 0.1427, 0.0231, -0.5414, -1.0009],\n [-0.4664, 0.2647, -0.1228, -1.1068],\n [-1.1734, -0.6571, 0.7230, -0.6004]])\n >>> indices = torch.tensor([0, 2])\n >>> torch.index_select(x, 0, indices)\n tensor([[ 0.1427, 0.0231, -0.5414, -1.0009],\n [-1.1734, -0.6571, 0.7230, -0.6004]])\n >>> torch.index_select(x, 1, indices)\n tensor([[ 0.1427, -0.5414],\n [-0.4664, -0.1228],\n [-1.1734, 0.7230]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.index_select.html", "category": "pytorch docs"} {"text": "torch.igammac\ntorch.igammac(input, other, *, out=None) -> Tensor\nAlias for \"torch.special.gammaincc()\".", "source": "https://pytorch.org/docs/stable/generated/torch.igammac.html", "category": "pytorch docs"} {"text": "Dropout2d\nclass torch.nn.Dropout2d(p=0.5, inplace=False)\nRandomly zero out entire channels (a channel is a 2D feature map,\n e.g., the j-th channel of the i-th sample in the batched input is a\n 2D tensor \\text{input}[i, j]). Each channel will be zeroed out\n independently on every forward call with probability \"p\" using\n samples from a Bernoulli distribution.\nUsually the input comes from \"nn.Conv2d\" modules.\nAs described in the paper Efficient Object Localization Using\n Convolutional Networks , if adjacent pixels within feature maps are\n strongly correlated (as is normally the case in early convolution\n layers) then i.i.d. dropout will not regularize the activations and\n will otherwise just result in an effective learning rate decrease.\nIn this case, \"nn.Dropout2d()\" will help promote independence\n between feature maps and should be used instead.\nParameters:\n * p (float, optional) -- probability of an element to\n be zero-ed.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout2d.html", "category": "pytorch docs"} {"text": "be zero-ed.\n * **inplace** (*bool**, **optional*) -- If set to \"True\", will\n do this operation in-place\n\nWarning:\n Due to historical reasons, this class will perform 1D channel-\n wise dropout for 3D inputs (as done by \"nn.Dropout1d\"). Thus, it\n currently does NOT support inputs without a batch dimension of\n shape (C, H, W). This behavior will change in a future release to\n interpret 3D inputs as no-batch-dim inputs. 
To maintain the old\n behavior, switch to \"nn.Dropout1d\".\n\nShape:\n * Input: (N, C, H, W) or (N, C, L).\n * Output: (N, C, H, W) or (N, C, L) (same shape as input).\n\nExamples:\n >>> m = nn.Dropout2d(p=0.2)\n >>> input = torch.randn(20, 16, 32, 32)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.logical_not_\nTensor.logical_not_() -> Tensor\nIn-place version of \"logical_not()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_not_.html", "category": "pytorch docs"} {"text": "torch.linalg.svd\ntorch.linalg.svd(A, full_matrices=True, *, driver=None, out=None)\nComputes the singular value decomposition (SVD) of a matrix.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the full SVD of\n a matrix A \\in \\mathbb{K}^{m \\times n}, if k = min(m,n), is\n defined as\n A = U \\operatorname{diag}(S) V^{\\text{H}} \\mathrlap{\\qquad U \\in\n \\mathbb{K}^{m \\times m}, S \\in \\mathbb{R}^k, V \\in \\mathbb{K}^{n\n \\times n}}\n\nwhere \\operatorname{diag}(S) \\in \\mathbb{K}^{m \\times n},\n V^{\\text{H}} is the conjugate transpose when V is complex, and the\n transpose when V is real-valued. The matrices U, V (and thus\n V^{\\text{H}}) are orthogonal in the real case, and unitary in the\n complex case.\nWhen m > n (resp. m < n) we can drop the last m - n (resp. n\n - m) columns of U (resp. V) to form the reduced SVD:\n A = U \\operatorname{diag}(S) V^{\\text{H}} \\mathrlap{\\qquad U \\in\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"} {"text": "\\mathbb{K}^{m \\times k}, S \\in \\mathbb{R}^k, V \\in \\mathbb{K}^{k\n \\times n}}\nwhere \\operatorname{diag}(S) \\in \\mathbb{K}^{k \\times k}. In this\n case, U and V also have orthonormal columns.\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nThe returned decomposition is a named tuple (U, S, Vh) which\n corresponds to U, S, V^{\\text{H}} above.\nThe singular values are returned in descending order.\nThe parameter \"full_matrices\" chooses between the full (default)\n and reduced SVD.\nThe \"driver\" kwarg may be used in CUDA with a cuSOLVER backend to\n choose the algorithm used to compute the SVD. 
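As an illustrative sketch (this kwarg is only meaningful when \"A\" is a CUDA tensor handled by the cuSOLVER backend; the shape below is arbitrary):\n    >>> A = torch.randn(5, 3, device='cuda')\n    >>> U, S, Vh = torch.linalg.svd(A, full_matrices=False, driver='gesvdj')\n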
The choice of a\n driver is a trade-off between accuracy and speed.\n\n\nIf \"A\" is well-conditioned (its condition number is not too\n large), or you do not mind some precision loss.\n\nFor a general matrix: 'gesvdj' (Jacobi method)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"} {"text": "\n\nIf \"A\" is tall or wide (m >> n or m << n): 'gesvda'\n (Approximate method)\n\n\nIf \"A\" is not well-conditioned or precision is relevant:\n 'gesvd' (QR based)\n\n\nBy default (\"driver\"= None), we call 'gesvdj' and, if it fails,\n we fallback to 'gesvd'.\nDifferences with numpy.linalg.svd:\n\nUnlike numpy.linalg.svd, this function always returns a tuple\n of three tensors and it doesn't support compute_uv argument.\n Please use \"torch.linalg.svdvals()\", which computes only the\n singular values, instead of compute_uv=False.\n\nNote:\n When \"full_matrices\"*= True*, the gradients with respect to\n *U[..., :, min(m, n):]* and *Vh[..., min(m, n):, :]* will be\n ignored, as those vectors can be arbitrary bases of the\n corresponding subspaces.\n\nWarning:\n The returned tensors *U* and *V* are not unique, nor are they\n continuous with respect to \"A\". Due to this lack of uniqueness,\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"} {"text": "different hardware and software may compute different singular\n vectors.This non-uniqueness is caused by the fact that\n multiplying any pair of singular vectors u_k, v_k by -1 in the\n real case or by e^{i \\phi}, \\phi \\in \\mathbb{R} in the complex\n case produces another two valid singular vectors of the matrix.\n For this reason, the loss function shall not depend on this e^{i\n \\phi} quantity, as it is not well-defined. This is checked for\n complex inputs when computing the gradients of this function. As\n such, when inputs are complex and are on a CUDA device, the\n computation of the gradients of this function synchronizes that\n device with the CPU.\nWarning:\n Gradients computed using *U* or *Vh* will only be finite when \"A\"\n does not have repeated singular values. If \"A\" is rectangular,\n additionally, zero must also not be one of its singular values.\n Furthermore, if the distance between any two singular values is\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"} {"text": "close to zero, the gradient will be numerically unstable, as it\n depends on the singular values \\sigma_i through the computation\n of \\frac{1}{\\min_{i \\neq j} \\sigma_i^2 - \\sigma_j^2}. In the\n rectangular case, the gradient will also be numerically unstable\n when \"A\" has small singular values, as it also depends on the\n computation of \\frac{1}{\\sigma_i}.\nSee also:\n \"torch.linalg.svdvals()\" computes only the singular values.\n Unlike \"torch.linalg.svd()\", the gradients of \"svdvals()\" are\n always numerically stable.\n\n \"torch.linalg.eig()\" for a function that computes another type of\n spectral decomposition of a matrix. 
The eigendecomposition works\n just on square matrices.\n\n \"torch.linalg.eigh()\" for a (faster) function that computes the\n eigenvalue decomposition for Hermitian and symmetric matrices.\n\n \"torch.linalg.qr()\" for another (much faster) decomposition that\n works on general matrices.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"} {"text": "works on general matrices.\nParameters:\n * A (Tensor) -- tensor of shape (, m, n)* where *** is\n zero or more batch dimensions.\n * **full_matrices** (*bool**, **optional*) -- controls whether\n to compute the full or reduced SVD, and consequently, the\n shape of the returned tensors *U* and *Vh*. Default: *True*.\n\nKeyword Arguments:\n * driver (str, optional) -- name of the cuSOLVER\n method to be used. This keyword argument only works on CUDA\n inputs. Available options are: None, gesvd, gesvdj, and\n gesvda. Default: None.\n * **out** (*tuple**, **optional*) -- output tuple of three\n tensors. Ignored if *None*.\n\nReturns:\n A named tuple (U, S, Vh) which corresponds to U, S,\n V^{\\text{H}} above.\n *S* will always be real-valued, even when \"A\" is complex. It\n will also be ordered in descending order.\n\n *U* and *Vh* will have the same dtype as \"A\". The left / right\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"} {"text": "singular vectors will be given by the columns of U and the\n rows of Vh respectively.\nExamples:\n >>> A = torch.randn(5, 3)\n >>> U, S, Vh = torch.linalg.svd(A, full_matrices=False)\n >>> U.shape, S.shape, Vh.shape\n (torch.Size([5, 3]), torch.Size([3]), torch.Size([3, 3]))\n >>> torch.dist(A, U @ torch.diag(S) @ Vh)\n tensor(1.0486e-06)\n\n >>> U, S, Vh = torch.linalg.svd(A)\n >>> U.shape, S.shape, Vh.shape\n (torch.Size([5, 5]), torch.Size([3]), torch.Size([3, 3]))\n >>> torch.dist(A, U[:, :3] @ torch.diag(S) @ Vh)\n tensor(1.0486e-06)\n\n >>> A = torch.randn(7, 5, 3)\n >>> U, S, Vh = torch.linalg.svd(A, full_matrices=False)\n >>> torch.dist(A, U @ torch.diag_embed(S) @ Vh)\n tensor(3.0957e-06)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svd.html", "category": "pytorch docs"} {"text": "torch.Tensor.record_stream\nTensor.record_stream(stream)\nEnsures that the tensor memory is not reused for another tensor\n until all current work queued on \"stream\" are complete.\nNote:\n The caching allocator is aware of only the stream where a tensor\n was allocated. Due to the awareness, it already correctly manages\n the life cycle of tensors on only one stream. But if a tensor is\n used on a stream different from the stream of origin, the\n allocator might reuse the memory unexpectedly. Calling this\n method lets the allocator know which streams have used the\n tensor.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.record_stream.html", "category": "pytorch docs"} {"text": "torch.Tensor.squeeze_\nTensor.squeeze_(dim=None) -> Tensor\nIn-place version of \"squeeze()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.squeeze_.html", "category": "pytorch docs"} {"text": "LazyBatchNorm3d\nclass torch.nn.LazyBatchNorm3d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\nA \"torch.nn.BatchNorm3d\" module with lazy initialization of the\n \"num_features\" argument of the \"BatchNorm3d\" that is inferred from\n the \"input.size(1)\". 
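As an illustrative sketch of the lazy inference (the channel count of 4 below is arbitrary):\n    >>> m = nn.LazyBatchNorm3d()\n    >>> input = torch.randn(2, 4, 3, 3, 3)\n    >>> output = m(input)\n    >>> m.weight.shape\n    torch.Size([4])\n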
The attributes that will be lazily initialized\n are weight, bias, running_mean and running_var.\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * eps (float) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm3d.html", "category": "pytorch docs"} {"text": "\"True\"\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics. in both\n training and eval modes. Default: \"True\"\n\ncls_to_become\n alias of \"BatchNorm3d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyBatchNorm3d.html", "category": "pytorch docs"} {"text": "torch.foreach_reciprocal\ntorch.foreach_reciprocal(self: List[Tensor]) -> None\nApply \"torch.reciprocal()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_reciprocal_.html", "category": "pytorch docs"} {"text": "torch.Tensor.tan\nTensor.tan() -> Tensor\nSee \"torch.tan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tan.html", "category": "pytorch docs"} {"text": "torch.Tensor.pinverse\nTensor.pinverse() -> Tensor\nSee \"torch.pinverse()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.pinverse.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_contiguous\nTensor.is_contiguous(memory_format=torch.contiguous_format) -> bool\nReturns True if \"self\" tensor is contiguous in memory in the order\n specified by memory format.\nParameters:\n memory_format (\"torch.memory_format\", optional) -- Specifies\n memory allocation order. Default: \"torch.contiguous_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_contiguous.html", "category": "pytorch docs"} {"text": "torch.Tensor.q_scale\nTensor.q_scale() -> float\nGiven a Tensor quantized by linear(affine) quantization, returns\n the scale of the underlying quantizer().", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.q_scale.html", "category": "pytorch docs"} {"text": "torch.get_deterministic_debug_mode\ntorch.get_deterministic_debug_mode()\nReturns the current value of the debug mode for deterministic\n operations. Refer to \"torch.set_deterministic_debug_mode()\"\n documentation for more details.\nReturn type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.get_deterministic_debug_mode.html", "category": "pytorch docs"} {"text": "MultiLabelSoftMarginLoss\nclass torch.nn.MultiLabelSoftMarginLoss(weight=None, size_average=None, reduce=None, reduction='mean')\nCreates a criterion that optimizes a multi-label one-versus-all\n loss based on max-entropy, between input x and target y of size (N,\n C). 
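A minimal usage sketch (the shapes below are illustrative):\n    >>> loss = nn.MultiLabelSoftMarginLoss()\n    >>> x = torch.randn(3, 4)\n    >>> y = torch.empty(3, 4).random_(2)\n    >>> output = loss(x, y)\n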
For each sample in the minibatch:\n    loss(x, y) = - \frac{1}{C} * \sum_i y[i] * \log((1 +\n    \exp(-x[i]))^{-1}) + (1-y[i]) *\n    \log\left(\frac{\exp(-x[i])}{(1 + \exp(-x[i]))}\right)\n\nwhere i \in \left\{0, \; \cdots , \; \text{x.nElement}() -\n   1\right\}, y[i] \in \left\{0, \; 1\right\}.\nParameters:\n   * weight (Tensor, optional) -- a manual rescaling\n     weight given to each class. If given, it has to be a Tensor of\n     size C. Otherwise, it is treated as if having all ones.\n   * **size_average** (*bool**, **optional*) -- Deprecated (see\n     \"reduction\"). By default, the losses are averaged over each\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelSoftMarginLoss.html", "category": "pytorch docs"} {"text": "loss element in the batch. Note that for some losses, there\n     are multiple elements per sample. If the field \"size_average\"\n     is set to \"False\", the losses are instead summed for each\n     minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n   * **reduce** (*bool**, **optional*) -- Deprecated (see\n     \"reduction\"). By default, the losses are averaged or summed\n     over observations for each minibatch depending on\n     \"size_average\". When \"reduce\" is \"False\", returns a loss per\n     batch element instead and ignores \"size_average\". Default:\n     \"True\"\n\n   * **reduction** (*str**, **optional*) -- Specifies the reduction\n     to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n     \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n     the output will be divided by the number of elements in the\n     output, \"'sum'\": the output will be summed. Note:\n     \"size_average\" and \"reduce\" are in the process of being\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelSoftMarginLoss.html", "category": "pytorch docs"} {"text": "deprecated, and in the meantime, specifying either of those\n     two args will override \"reduction\". Default: \"'mean'\"\nShape:\n   * Input: (N, C) where N is the batch size and C is the\n     number of classes.\n   * Target: (N, C), label targets padded by -1 ensuring same shape\n     as the input.\n\n   * Output: scalar. If \"reduction\" is \"'none'\", then (N).\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelSoftMarginLoss.html", "category": "pytorch docs"} {"text": "torch.nn.functional.fold\ntorch.nn.functional.fold(input, output_size, kernel_size, dilation=1, padding=0, stride=1)\nCombines an array of sliding local blocks into a large containing\n   tensor.\nWarning:\n   Currently, only unbatched (3D) or batched (4D) image-like output\n   tensors are supported.\n\nSee \"torch.nn.Fold\" for details\nReturn type:\n   Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.fold.html", "category": "pytorch docs"} {"text": "torch.autograd.graph.Node.register_hook\nabstract Node.register_hook(fn)\nRegisters a backward hook.\nThe hook will be called every time a gradient with respect to the\n   Node is computed. 
The hook should have the following signature:\n hook(grad_inputs: Tuple[Tensor], grad_outputs: Tuple[Tensor]) -> Tuple[Tensor] or None\n\nThe hook should not modify its argument, but it can optionally\n return a new gradient which will be used in place of\n \"grad_outputs\".\nThis function returns a handle with a method \"handle.remove()\" that\n removes the hook from the module.\nNote:\n See Backward Hooks execution for more information on how when\n this hook is executed, and how its execution is ordered relative\n to other hooks.\n\nExample:\n >>> import torch\n >>> a = torch.tensor([0., 0., 0.], requires_grad=True)\n >>> b = a.clone()\n >>> assert isinstance(b.grad_fn, torch.autograd.graph.Node)\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.register_hook.html", "category": "pytorch docs"} {"text": "\n\n\nhandle = b.grad_fn.register_hook(lambda gI, gO: (gO[0] * 2,))\n >>> b.sum().backward(retain_graph=True)\n >>> print(a.grad)\n tensor([2., 2., 2.])\n >>> handle.remove() # Removes the hook\n >>> a.grad = None\n >>> b.sum().backward(retain_graph=True)\n >>> print(a.grad)\n tensor([1., 1., 1.])\n\n\n\nReturn type:\n RemovableHandle", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.register_hook.html", "category": "pytorch docs"} {"text": "torch.arccos\ntorch.arccos(input, *, out=None) -> Tensor\nAlias for \"torch.acos()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arccos.html", "category": "pytorch docs"} {"text": "torch.histc\ntorch.histc(input, bins=100, min=0, max=0, *, out=None) -> Tensor\nComputes the histogram of a tensor.\nThe elements are sorted into equal width bins between \"min\" and\n \"max\". If \"min\" and \"max\" are both zero, the minimum and maximum\n values of the data are used.\nElements lower than min and higher than max and \"NaN\" elements are\n ignored.\nParameters:\n * input (Tensor) -- the input tensor.\n * **bins** (*int*) -- number of histogram bins\n\n * **min** (*Scalar*) -- lower end of the range (inclusive)\n\n * **max** (*Scalar*) -- upper end of the range (inclusive)\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nReturns:\n Histogram represented as a tensor\nReturn type:\n Tensor\nExample:\n >>> torch.histc(torch.tensor([1., 2, 1]), bins=4, min=0, max=3)\n tensor([ 0., 2., 1., 0.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.histc.html", "category": "pytorch docs"} {"text": "torch.Tensor.float_power\nTensor.float_power(exponent) -> Tensor\nSee \"torch.float_power()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.float_power.html", "category": "pytorch docs"} {"text": "torch.linalg.tensorsolve\ntorch.linalg.tensorsolve(A, B, dims=None, *, out=None) -> Tensor\nComputes the solution X to the system torch.tensordot(A, X) =\n B.\nIf m is the product of the first \"B\".ndim dimensions of \"A\"\n and n is the product of the rest of the dimensions, this function\n expects m and n to be equal.\nThe returned tensor x satisfies tensordot(\"A\", x, dims=x.ndim)\n == \"B\". x has shape \"A\"[B.ndim:].\nIf \"dims\" is specified, \"A\" will be reshaped as\n A = movedim(A, dims, range(len(dims) - A.ndim + 1, 0))\n\nSupports inputs of float, double, cfloat and cdouble dtypes.\nSee also:\n \"torch.linalg.tensorinv()\" computes the multiplicative inverse of\n \"torch.tensordot()\".\n\nParameters:\n * A (Tensor) -- tensor to solve for. 
Its shape must\n satisfy prod(\"A\".shape[:\"B\".ndim]) ==\n prod(\"A\".shape[\"B\".ndim:]).\n * **B** (*Tensor*) -- tensor of shape \"A\"*.shape[:*\"B\"*.ndim]*.\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.tensorsolve.html", "category": "pytorch docs"} {"text": "\ndims (Tuple[int], optional) -- dimensions of\n \"A\" to be moved. If None, no dimensions are moved. Default:\n None.\n\nKeyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\nRaises:\n RuntimeError -- if the reshaped \"A\".view(m, m) with m as\n above is not invertible or the product of the first \"ind\"\n dimensions is not equal to the product of the rest of the\n dimensions.\nExamples:\n >>> A = torch.eye(2 * 3 * 4).reshape((2 * 3, 4, 2, 3, 4))\n >>> B = torch.randn(2 * 3, 4)\n >>> X = torch.linalg.tensorsolve(A, B)\n >>> X.shape\n torch.Size([2, 3, 4])\n >>> torch.allclose(torch.tensordot(A, X, dims=X.ndim), B)\n True\n\n >>> A = torch.randn(6, 4, 4, 3, 2)\n >>> B = torch.randn(4, 3, 2)\n >>> X = torch.linalg.tensorsolve(A, B, dims=(0, 2))\n >>> X.shape\n torch.Size([6, 4])\n >>> A = A.permute(1, 3, 4, 0, 2)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.tensorsolve.html", "category": "pytorch docs"} {"text": "\n\n\nA = A.permute(1, 3, 4, 0, 2)\n >>> A.shape[B.ndim:]\n torch.Size([6, 4])\n >>> torch.allclose(torch.tensordot(A, X, dims=X.ndim), B, atol=1e-6)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.tensorsolve.html", "category": "pytorch docs"} {"text": "torch.Tensor.new_full\nTensor.new_full(size, fill_value, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor\nReturns a Tensor of size \"size\" filled with \"fill_value\". By\n default, the returned Tensor has the same \"torch.dtype\" and\n \"torch.device\" as this tensor.\nParameters:\n fill_value (scalar) -- the number to fill the output\n tensor with.\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired type of\n returned tensor. Default: if None, same \"torch.dtype\" as this\n tensor.\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, same \"torch.device\" as this\n tensor.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_full.html", "category": "pytorch docs"} {"text": "returned Tensor. Default: \"torch.strided\".\n * **pin_memory** (*bool**, **optional*) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. 
Default: \"False\".\n\nExample:\n >>> tensor = torch.ones((2,), dtype=torch.float64)\n >>> tensor.new_full((3, 4), 3.141592)\n tensor([[ 3.1416, 3.1416, 3.1416, 3.1416],\n [ 3.1416, 3.1416, 3.1416, 3.1416],\n [ 3.1416, 3.1416, 3.1416, 3.1416]], dtype=torch.float64)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_full.html", "category": "pytorch docs"} {"text": "torch.Tensor.pow\nTensor.pow(exponent) -> Tensor\nSee \"torch.pow()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.pow.html", "category": "pytorch docs"} {"text": "torch.Tensor.int_repr\nTensor.int_repr() -> Tensor\nGiven a quantized Tensor, \"self.int_repr()\" returns a CPU Tensor\n with uint8_t as data type that stores the underlying uint8_t values\n of the given Tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.int_repr.html", "category": "pytorch docs"} {"text": "torch.Tensor.addcmul_\nTensor.addcmul_(tensor1, tensor2, *, value=1) -> Tensor\nIn-place version of \"addcmul()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addcmul_.html", "category": "pytorch docs"} {"text": "torch.sspaddmm\ntorch.sspaddmm(input, mat1, mat2, *, beta=1, alpha=1, out=None) -> Tensor\nMatrix multiplies a sparse tensor \"mat1\" with a dense tensor\n \"mat2\", then adds the sparse tensor \"input\" to the result.\nNote: This function is equivalent to \"torch.addmm()\", except\n \"input\" and \"mat1\" are sparse.\nParameters:\n * input (Tensor) -- a sparse matrix to be added\n * **mat1** (*Tensor*) -- a sparse matrix to be matrix multiplied\n\n * **mat2** (*Tensor*) -- a dense matrix to be matrix multiplied\n\nKeyword Arguments:\n * beta (Number, optional) -- multiplier for \"mat\"\n (\\beta)\n * **alpha** (*Number**, **optional*) -- multiplier for mat1 @\n mat2 (\\alpha)\n\n * **out** (*Tensor**, **optional*) -- the output tensor.\n", "source": "https://pytorch.org/docs/stable/generated/torch.sspaddmm.html", "category": "pytorch docs"} {"text": "torch.Tensor.arctan_\nTensor.arctan_() -> Tensor\nIn-place version of \"arctan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arctan_.html", "category": "pytorch docs"} {"text": "torch.Tensor.digamma_\nTensor.digamma_() -> Tensor\nIn-place version of \"digamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.digamma_.html", "category": "pytorch docs"} {"text": "ParameterList\nclass torch.nn.ParameterList(values=None)\nHolds parameters in a list.\n\"ParameterList\" can be used like a regular Python list, but Tensors\n that are \"Parameter\" are properly registered, and will be visible\n by all \"Module\" methods.\nNote that the constructor, assigning an element of the list, the\n \"append()\" method and the \"extend()\" method will convert any\n \"Tensor\" into \"Parameter\".\nParameters:\n parameters (iterable, optional) -- an iterable of\n elements to add to the list.\nExample:\n class MyModule(nn.Module):\n def __init__(self):\n super(MyModule, self).__init__()\n self.params = nn.ParameterList([nn.Parameter(torch.randn(10, 10)) for i in range(10)])\n\n def forward(self, x):\n # ParameterList can act as an iterable, or be indexed using ints\n for i, p in enumerate(self.params):\n x = self.params[i // 2].mm(x) + p.mm(x)\n return x\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ParameterList.html", "category": "pytorch docs"} {"text": "return x\nappend(value)\n Appends a given value at the end of the list.\n\n Parameters:\n **value** (*Any*) -- value to 
append\n\n Return type:\n *ParameterList*\n\nextend(values)\n Appends values from a Python iterable to the end of the list.\n\n Parameters:\n **values** (*iterable*) -- iterable of values to append\n\n Return type:\n *ParameterList*\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ParameterList.html", "category": "pytorch docs"} {"text": "torch.sinh\ntorch.sinh(input, *, out=None) -> Tensor\nReturns a new tensor with the hyperbolic sine of the elements of\n \"input\".\n \\text{out}_{i} = \\sinh(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.5380, -0.8632, -0.1265, 0.9399])\n >>> torch.sinh(a)\n tensor([ 0.5644, -0.9744, -0.1268, 1.0845])\n\nNote:\n When \"input\" is on the CPU, the implementation of torch.sinh may\n use the Sleef library, which rounds very large results to\n infinity or negative infinity. See here for details.\n", "source": "https://pytorch.org/docs/stable/generated/torch.sinh.html", "category": "pytorch docs"} {"text": "inference_mode\nclass torch.inference_mode(mode=True)\nContext-manager that enables or disables inference mode\nInferenceMode is a new context manager analogous to \"no_grad\" to be\n used when you are certain your operations will have no interactions\n with autograd (e.g., model training). Code run under this mode gets\n better performance by disabling view tracking and version counter\n bumps. Note that unlike some other mechanisms that locally enable\n or disable grad, entering inference_mode also disables to forward-\n mode AD.\nThis context manager is thread local; it will not affect\n computation in other threads.\nAlso functions as a decorator. (Make sure to instantiate with\n parenthesis.)\nNote:\n Inference mode is one of several mechanisms that can enable or\n disable gradients locally see Locally disabling gradient\n computation for more information on how they compare.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.inference_mode.html", "category": "pytorch docs"} {"text": "Parameters:\n mode (bool) -- Flag whether to enable or disable inference\n mode\nExample::\n >>> import torch\n >>> x = torch.ones(1, 2, 3, requires_grad=True)\n >>> with torch.inference_mode():\n ... y = x * x\n >>> y.requires_grad\n False\n >>> y._version\n Traceback (most recent call last):\n File \"\", line 1, in \n RuntimeError: Inference tensors do not track version counter.\n >>> @torch.inference_mode()\n ... def func(x):\n ... 
return x * x\n >>> out = func(x)\n >>> out.requires_grad\n False", "source": "https://pytorch.org/docs/stable/generated/torch.inference_mode.html", "category": "pytorch docs"} {"text": "torch.Tensor.arccos_\nTensor.arccos_() -> Tensor\nIn-place version of \"arccos()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arccos_.html", "category": "pytorch docs"} {"text": "torch.Tensor.addmv\nTensor.addmv(mat, vec, *, beta=1, alpha=1) -> Tensor\nSee \"torch.addmv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addmv.html", "category": "pytorch docs"} {"text": "torch.Tensor.less_\nTensor.less_(other) -> Tensor\nIn-place version of \"less()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.less_.html", "category": "pytorch docs"} {"text": "torch.foreach_ceil\ntorch.foreach_ceil(self: List[Tensor]) -> None\nApply \"torch.ceil()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_ceil_.html", "category": "pytorch docs"} {"text": "convert\nclass torch.quantization.convert(module, mapping=None, inplace=False, remove_qconfig=True, is_reference=False, convert_custom_config_dict=None)\nConverts submodules in input module to a different module according\n to mapping by calling from_float method on the target module\n class. And remove qconfig at the end if remove_qconfig is set to\n True.\nParameters:\n * module -- prepared and calibrated module\n * **mapping** -- a dictionary that maps from source module type\n to target module type, can be overwritten to allow swapping\n user defined Modules\n\n * **inplace** -- carry out model transformations in-place, the\n original module is mutated\n\n * **convert_custom_config_dict** -- custom configuration\n dictionary for convert function\n\n # Example of convert_custom_config_dict:\n convert_custom_config_dict = {\n # user will manually define the corresponding quantized\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.convert.html", "category": "pytorch docs"} {"text": "module class which has a from_observed class method that converts\n # observed custom module to quantized custom module\n \"observed_to_quantized_custom_module_class\": {\n ObservedCustomModule: QuantizedCustomModule\n }\n }\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.convert.html", "category": "pytorch docs"} {"text": "torch.Tensor.divide_\nTensor.divide_(value, *, rounding_mode=None) -> Tensor\nIn-place version of \"divide()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.divide_.html", "category": "pytorch docs"} {"text": "graph\nclass torch.cuda.graph(cuda_graph, pool=None, stream=None)\nContext-manager that captures CUDA work into a\n \"torch.cuda.CUDAGraph\" object for later replay.\nSee CUDA Graphs for a general introduction, detailed use, and\n constraints.\nParameters:\n * cuda_graph (torch.cuda.CUDAGraph) -- Graph object used\n for capture.\n * **pool** (*optional*) -- Opaque token (returned by a call to\n \"graph_pool_handle()\" or \"other_Graph_instance.pool()\")\n hinting this graph's capture may share memory from the\n specified pool. See Graph memory management.\n\n * **stream** (*torch.cuda.Stream**, **optional*) -- If supplied,\n will be set as the current stream in the context. 
If not\n supplied, \"graph\" sets its own internal side stream as the\n current stream in the context.\n\nNote:\n For effective memory sharing, if you pass a \"pool\" used by a\n previous capture and the previous capture used an explicit\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.graph.html", "category": "pytorch docs"} {"text": "\"stream\" argument, you should pass the same \"stream\" argument to\n this capture.\nWarning:\n This API is in beta and may change in future releases.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.graph.html", "category": "pytorch docs"} {"text": "torch.jit.load\ntorch.jit.load(f, map_location=None, _extra_files=None)\nLoad a \"ScriptModule\" or \"ScriptFunction\" previously saved with\n \"torch.jit.save\"\nAll previously saved modules, no matter their device, are first\n loaded onto CPU, and then are moved to the devices they were saved\n from. If this fails (e.g. because the run time system doesn't have\n certain devices), an exception is raised.\nParameters:\n * f -- a file-like object (has to implement read, readline,\n tell, and seek), or a string containing a file name\n * **map_location** (*string** or **torch.device*) -- A\n simplified version of \"map_location\" in *torch.jit.save* used\n to dynamically remap storages to an alternative set of\n devices.\n\n * **_extra_files** (*dictionary of filename to content*) -- The\n extra filenames given in the map would be loaded and their\n content would be stored in the provided map.\n\nReturns:\n A \"ScriptModule\" object.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.load.html", "category": "pytorch docs"} {"text": "Returns:\n A \"ScriptModule\" object.\nExample:\n import torch\n import io\n\n torch.jit.load('scriptmodule.pt')\n\n # Load ScriptModule from io.BytesIO object\n with open('scriptmodule.pt', 'rb') as f:\n buffer = io.BytesIO(f.read())\n\n # Load all tensors to the original device\n torch.jit.load(buffer)\n\n # Load all tensors onto CPU, using a device\n buffer.seek(0)\n torch.jit.load(buffer, map_location=torch.device('cpu'))\n\n # Load all tensors onto CPU, using a string\n buffer.seek(0)\n torch.jit.load(buffer, map_location='cpu')\n\n # Load with extra files.\n extra_files = {'foo.txt': ''} # values will be replaced with data\n torch.jit.load('scriptmodule.pt', _extra_files=extra_files)\n print(extra_files['foo.txt'])\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.load.html", "category": "pytorch docs"} {"text": "torch.Tensor.quantile\nTensor.quantile(q, dim=None, keepdim=False, *, interpolation='linear') -> Tensor\nSee \"torch.quantile()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.quantile.html", "category": "pytorch docs"} {"text": "torch.complex\ntorch.complex(real, imag, *, out=None) -> Tensor\nConstructs a complex tensor with its real part equal to \"real\" and\n its imaginary part equal to \"imag\".\nParameters:\n * real (Tensor) -- The real part of the complex tensor.\n Must be float or double.\n * **imag** (*Tensor*) -- The imaginary part of the complex\n tensor. Must be same dtype as \"real\".\n\nKeyword Arguments:\n out (Tensor) -- If the inputs are \"torch.float32\", must be\n \"torch.complex64\". 
If the inputs are \"torch.float64\", must be\n \"torch.complex128\".\nExample:\n >>> real = torch.tensor([1, 2], dtype=torch.float32)\n >>> imag = torch.tensor([3, 4], dtype=torch.float32)\n >>> z = torch.complex(real, imag)\n >>> z\n tensor([(1.+3.j), (2.+4.j)])\n >>> z.dtype\n torch.complex64\n", "source": "https://pytorch.org/docs/stable/generated/torch.complex.html", "category": "pytorch docs"} {"text": "torch._foreach_neg\ntorch._foreach_neg(self: List[Tensor]) -> List[Tensor]\nApply \"torch.neg()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_neg.html", "category": "pytorch docs"} {"text": "torch.lcm\ntorch.lcm(input, other, *, out=None) -> Tensor\nComputes the element-wise least common multiple (LCM) of \"input\"\n and \"other\".\nBoth \"input\" and \"other\" must have integer types.\nNote:\n This defines lcm(0, 0) = 0 and lcm(0, a) = 0.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([5, 10, 15])\n >>> b = torch.tensor([3, 4, 5])\n >>> torch.lcm(a, b)\n tensor([15, 20, 15])\n >>> c = torch.tensor([3])\n >>> torch.lcm(a, c)\n tensor([15, 30, 15])\n", "source": "https://pytorch.org/docs/stable/generated/torch.lcm.html", "category": "pytorch docs"} {"text": "torch._foreach_asin\ntorch._foreach_asin(self: List[Tensor]) -> List[Tensor]\nApply \"torch.asin()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_asin.html", "category": "pytorch docs"} {"text": "torch.isposinf\ntorch.isposinf(input, *, out=None) -> Tensor\nTests if each element of \"input\" is positive infinity or not.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([-float('inf'), float('inf'), 1.2])\n >>> torch.isposinf(a)\n tensor([False, True, False])\n", "source": "https://pytorch.org/docs/stable/generated/torch.isposinf.html", "category": "pytorch docs"} {"text": "ConvBn1d\nclass torch.ao.nn.intrinsic.qat.ConvBn1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)\nA ConvBn1d module is a module fused from Conv1d and BatchNorm1d,\n attached with FakeQuantize modules for weight, used in quantization\n aware training.\nWe combined the interface of \"torch.nn.Conv1d\" and\n \"torch.nn.BatchNorm1d\".\nSimilar to \"torch.nn.Conv1d\", with FakeQuantize modules initialized\n to default.\nVariables:\n * freeze_bn --\n * **weight_fake_quant** -- fake quant module for weight\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBn1d.html", "category": "pytorch docs"} {"text": "enable_fake_quant\nclass torch.quantization.fake_quantize.enable_fake_quant(mod)\nEnable fake quantization for this module, if applicable. 
Example\n usage:\n # model is any PyTorch model\n model.apply(torch.ao.quantization.enable_fake_quant)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.enable_fake_quant.html", "category": "pytorch docs"} {"text": "RNN\nclass torch.nn.RNN(args, *kwargs)\nApplies a multi-layer Elman RNN with \\tanh or \\text{ReLU} non-\n linearity to an input sequence.\nFor each element in the input sequence, each layer computes the\n following function:\n h_t = \\tanh(x_t W_{ih}^T + b_{ih} + h_{t-1}W_{hh}^T + b_{hh})\n\nwhere h_t is the hidden state at time t, x_t is the input at time\n t, and h_{(t-1)} is the hidden state of the previous layer at\n time t-1 or the initial hidden state at time 0. If\n \"nonlinearity\" is \"'relu'\", then \\text{ReLU} is used instead of\n \\tanh.\nParameters:\n * input_size -- The number of expected features in the input\n x\n * **hidden_size** -- The number of features in the hidden state\n *h*\n\n * **num_layers** -- Number of recurrent layers. E.g., setting\n \"num_layers=2\" would mean stacking two RNNs together to form a\n *stacked RNN*, with the second RNN taking in outputs of the\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNN.html", "category": "pytorch docs"} {"text": "first RNN and computing the final results. Default: 1\n * **nonlinearity** -- The non-linearity to use. Can be either\n \"'tanh'\" or \"'relu'\". Default: \"'tanh'\"\n\n * **bias** -- If \"False\", then the layer does not use bias\n weights *b_ih* and *b_hh*. Default: \"True\"\n\n * **batch_first** -- If \"True\", then the input and output\n tensors are provided as *(batch, seq, feature)* instead of\n *(seq, batch, feature)*. Note that this does not apply to\n hidden or cell states. See the Inputs/Outputs sections below\n for details. Default: \"False\"\n\n * **dropout** -- If non-zero, introduces a *Dropout* layer on\n the outputs of each RNN layer except the last layer, with\n dropout probability equal to \"dropout\". Default: 0\n\n * **bidirectional** -- If \"True\", becomes a bidirectional RNN.\n Default: \"False\"\n\nInputs: input, h_0\n * input: tensor of shape (L, H_{in}) for unbatched input,", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNN.html", "category": "pytorch docs"} {"text": "(L, N, H_{in}) when \"batch_first=False\" or (N, L, H_{in}) when\n \"batch_first=True\" containing the features of the input\n sequence. The input can also be a packed variable length\n sequence. See \"torch.nn.utils.rnn.pack_padded_sequence()\" or\n \"torch.nn.utils.rnn.pack_sequence()\" for details.\n * **h_0**: tensor of shape (D * \\text{num\\_layers}, H_{out}) for\n unbatched input or (D * \\text{num\\_layers}, N, H_{out})\n containing the initial hidden state for the input sequence\n batch. 
Defaults to zeros if not provided.\n\n where:\n\n \\begin{aligned} N ={} & \\text{batch size} \\\\ L ={} &\n \\text{sequence length} \\\\ D ={} & 2 \\text{ if\n bidirectional=True otherwise } 1 \\\\ H_{in} ={} &\n \\text{input\\_size} \\\\ H_{out} ={} & \\text{hidden\\_size}\n \\end{aligned}\n\nOutputs: output, h_n\n * output: tensor of shape (L, D * H_{out}) for unbatched", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNN.html", "category": "pytorch docs"} {"text": "input, (L, N, D * H_{out}) when \"batch_first=False\" or (N, L,\n D * H_{out}) when \"batch_first=True\" containing the output\n features (h_t) from the last layer of the RNN, for each t.\n If a \"torch.nn.utils.rnn.PackedSequence\" has been given as the\n input, the output will also be a packed sequence.\n * **h_n**: tensor of shape (D * \\text{num\\_layers}, H_{out}) for\n unbatched input or (D * \\text{num\\_layers}, N, H_{out})\n containing the final hidden state for each element in the\n batch.\n\nVariables:\n * weight_ih_l[k] -- the learnable input-hidden weights of\n the k-th layer, of shape (hidden_size, input_size) for k =\n 0. Otherwise, the shape is (hidden_size, num_directions *\n hidden_size)\n * **weight_hh_l[k]** -- the learnable hidden-hidden weights of\n the k-th layer, of shape *(hidden_size, hidden_size)*\n\n * **bias_ih_l[k]** -- the learnable input-hidden bias of the\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNN.html", "category": "pytorch docs"} {"text": "k-th layer, of shape (hidden_size)\n * **bias_hh_l[k]** -- the learnable hidden-hidden bias of the\n k-th layer, of shape *(hidden_size)*\n\nNote:\n All the weights and biases are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden\\_size}}\n\nNote:\n For bidirectional RNNs, forward and backward are directions 0 and\n 1 respectively. Example of splitting the output layers when\n \"batch_first=False\": \"output.view(seq_len, batch, num_directions,\n hidden_size)\".\n\nNote:\n \"batch_first\" argument is ignored for unbatched inputs.\n\nWarning:\n There are known non-determinism issues for RNN functions on some\n versions of cuDNN and CUDA. You can enforce deterministic\n behavior by setting the following environment variables:On CUDA\n 10.1, set environment variable \"CUDA_LAUNCH_BLOCKING=1\". 
This may\n affect performance.On CUDA 10.2 or later, set environment\n variable (note the leading colon symbol)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNN.html", "category": "pytorch docs"} {"text": "variable (note the leading colon symbol)\n \"CUBLAS_WORKSPACE_CONFIG=:16:8\" or\n \"CUBLAS_WORKSPACE_CONFIG=:4096:2\".See the cuDNN 8 Release Notes\n for more information.\nNote:\n If the following conditions are satisfied: 1) cudnn is enabled,\n 2) input data is on the GPU 3) input data has dtype\n \"torch.float16\" 4) V100 GPU is used, 5) input data is not in\n \"PackedSequence\" format persistent algorithm can be selected to\n improve performance.\n\nExamples:\n >>> rnn = nn.RNN(10, 20, 2)\n >>> input = torch.randn(5, 3, 10)\n >>> h0 = torch.randn(2, 3, 20)\n >>> output, hn = rnn(input, h0)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNN.html", "category": "pytorch docs"} {"text": "torch.Tensor.tanh_\nTensor.tanh_() -> Tensor\nIn-place version of \"tanh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tanh_.html", "category": "pytorch docs"} {"text": "torch.deg2rad\ntorch.deg2rad(input, *, out=None) -> Tensor\nReturns a new tensor with each of the elements of \"input\" converted\n from angles in degrees to radians.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([[180.0, -180.0], [360.0, -360.0], [90.0, -90.0]])\n >>> torch.deg2rad(a)\n tensor([[ 3.1416, -3.1416],\n [ 6.2832, -6.2832],\n [ 1.5708, -1.5708]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.deg2rad.html", "category": "pytorch docs"} {"text": "torch.rand\ntorch.rand(size, , generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) -> Tensor\nReturns a tensor filled with random numbers from a uniform\n distribution on the interval [0, 1)\nThe shape of the tensor is defined by the variable argument \"size\".\nParameters:\n size (int...) -- a sequence of integers defining the\n shape of the output tensor. Can be a variable number of\n arguments or a collection like a list or tuple.\nKeyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n", "source": "https://pytorch.org/docs/stable/generated/torch.rand.html", "category": "pytorch docs"} {"text": "returned Tensor. Default: \"torch.strided\".\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **pin_memory** (*bool**, **optional*) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. 
Default: \"False\".\n\nExample:\n    >>> torch.rand(4)\n    tensor([ 0.5204,  0.2503,  0.3525,  0.5673])\n    >>> torch.rand(2, 3)\n    tensor([[ 0.8237,  0.5781,  0.6879],\n            [ 0.3816,  0.7249,  0.0998]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.rand.html", "category": "pytorch docs"} {"text": "torch.Tensor.sinc\nTensor.sinc() -> Tensor\nSee \"torch.sinc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sinc.html", "category": "pytorch docs"} {"text": "torch.autograd.profiler.load_nvprof\ntorch.autograd.profiler.load_nvprof(path)\nOpens an nvprof trace file and parses autograd annotations.\nParameters:\n   path (str) -- path to nvprof trace", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.profiler.load_nvprof.html", "category": "pytorch docs"} {"text": "torch.Tensor.triu\nTensor.triu(diagonal=0) -> Tensor\nSee \"torch.triu()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.triu.html", "category": "pytorch docs"} {"text": "torch.Tensor.ge\nTensor.ge(other) -> Tensor\nSee \"torch.ge()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ge.html", "category": "pytorch docs"} {"text": "check_sparse_tensor_invariants\nclass torch.sparse.check_sparse_tensor_invariants(enable=True)\nA tool to control checking sparse tensor invariants.\nThe following options exist to manage sparse tensor invariants\n   checking in sparse tensor construction:\n\n\nUsing a context manager:\n      with torch.sparse.check_sparse_tensor_invariants():\n          run_my_model()\n\n\n\nUsing a procedural approach:\n      prev_checks_enabled = torch.sparse.check_sparse_tensor_invariants.is_enabled()\n      torch.sparse.check_sparse_tensor_invariants.enable()\n\n      run_my_model()\n\n      if not prev_checks_enabled:\n          torch.sparse.check_sparse_tensor_invariants.disable()\n\n\n\nUsing function decoration:\n      @torch.sparse.check_sparse_tensor_invariants()\n      def run_my_model():\n          ...\n\n      run_my_model()\n\n\n\nUsing \"check_invariants\" keyword argument in sparse tensor\n   constructor call. For example:\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.check_sparse_tensor_invariants.html", "category": "pytorch docs"} {"text": "constructor call. 
For example:\n >>> torch.sparse_csr_tensor([0, 1, 3], [0, 1], [1, 2], check_invariants=True)\n Traceback (most recent call last):\n File \"\", line 1, in \n RuntimeError: `crow_indices[..., -1] == nnz` is not satisfied.\n\nstatic disable()\n Disable sparse tensor invariants checking in sparse tensor\n constructors.\n\n See \"torch.sparse.check_sparse_tensor_invariants.enable()\" for\n more information.\n\nstatic enable()\n Enable sparse tensor invariants checking in sparse tensor\n constructors.\n\n Note:\n\n By default, the sparse tensor invariants checks are disabled.\n Use \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\"\n to retrieve the current state of sparse tensor invariants\n checking.\n\n Note:\n\n The sparse tensor invariants check flag is effective to all\n sparse tensor constructors, both in Python and ATen.The flag\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.check_sparse_tensor_invariants.html", "category": "pytorch docs"} {"text": "can be locally overridden by the \"check_invariants\" optional\n argument of the sparse tensor constructor functions.\nstatic is_enabled()\n Returns True if the sparse tensor invariants checking is\n enabled.\n\n Note:\n\n Use \"torch.sparse.check_sparse_tensor_invariants.enable()\" or\n \"torch.sparse.check_sparse_tensor_invariants.disable()\" to\n manage the state of the sparse tensor invariants checks.\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.check_sparse_tensor_invariants.html", "category": "pytorch docs"} {"text": "torch.sin\ntorch.sin(input, *, out=None) -> Tensor\nReturns a new tensor with the sine of the elements of \"input\".\n \\text{out}_{i} = \\sin(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([-0.5461, 0.1347, -2.7266, -0.2746])\n >>> torch.sin(a)\n tensor([-0.5194, 0.1343, -0.4032, -0.2711])\n", "source": "https://pytorch.org/docs/stable/generated/torch.sin.html", "category": "pytorch docs"} {"text": "torch.autograd.graph.Node.register_prehook\nabstract Node.register_prehook(fn)\nRegisters a backward pre-hook.\nThe hook will be called every time a gradient with respect to the\n Node is computed. 
The hook should have the following signature:\n hook(grad_outputs: Tuple[Tensor]) -> Tuple[Tensor] or None\n\nThe hook should not modify its argument, but it can optionally\n return a new gradient which will be used in place of\n \"grad_outputs\".\nThis function returns a handle with a method \"handle.remove()\" that\n removes the hook from the module.\nNote:\n See Backward Hooks execution for more information on how when\n this hook is executed, and how its execution is ordered relative\n to other hooks.\n\nExample:\n >>> a = torch.tensor([0., 0., 0.], requires_grad=True)\n >>> b = a.clone()\n >>> assert isinstance(b.grad_fn, torch.autograd.graph.Node)\n >>> handle = b.grad_fn.register_prehook(lambda gI: (gI[0] * 2,))\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.register_prehook.html", "category": "pytorch docs"} {"text": "\n\n\nb.sum().backward(retain_graph=True)\n >>> print(a.grad)\n tensor([2., 2., 2.])\n >>> handle.remove()\n >>> a.grad = None\n >>> b.sum().backward(retain_graph=True)\n >>> print(a.grad)\n tensor([1., 1., 1.])\n\n\n\nReturn type:\n RemovableHandle", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.register_prehook.html", "category": "pytorch docs"} {"text": "torch.nn.utils.rnn.pack_padded_sequence\ntorch.nn.utils.rnn.pack_padded_sequence(input, lengths, batch_first=False, enforce_sorted=True)\nPacks a Tensor containing padded sequences of variable length.\n\"input\" can be of size \"T x B x \" where T is the length of the\n longest sequence (equal to \"lengths[0]\"), \"B\" is the batch size,\n and \"\" is any number of dimensions (including 0). If \"batch_first\"\n is \"True\", \"B x T x *\" \"input\" is expected.\nFor unsorted sequences, use enforce_sorted = False. If\n \"enforce_sorted\" is \"True\", the sequences should be sorted by\n length in a decreasing order, i.e. \"input[:,0]\" should be the\n longest sequence, and \"input[:,B-1]\" the shortest one.\n enforce_sorted = True is only necessary for ONNX export.\nNote:\n This function accepts any input that has at least two dimensions.\n You can apply it to pack the labels, and use the output of the\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_padded_sequence.html", "category": "pytorch docs"} {"text": "RNN with them to compute the loss directly. A Tensor can be\n retrieved from a \"PackedSequence\" object by accessing its \".data\"\n attribute.\nParameters:\n * input (Tensor) -- padded batch of variable length\n sequences.\n * **lengths** (*Tensor** or **list**(**int**)*) -- list of\n sequence lengths of each batch element (must be on the CPU if\n provided as a tensor).\n\n * **batch_first** (*bool**, **optional*) -- if \"True\", the input\n is expected in \"B x T x *\" format.\n\n * **enforce_sorted** (*bool**, **optional*) -- if \"True\", the\n input is expected to contain sequences sorted by length in a\n decreasing order. If \"False\", the input will get sorted\n unconditionally. 
Default: \"True\".\n\nReturns:\n a \"PackedSequence\" object\nReturn type:\n PackedSequence", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_padded_sequence.html", "category": "pytorch docs"} {"text": "torch.nn.utils.clip_grad_value_\ntorch.nn.utils.clip_grad_value_(parameters, clip_value, foreach=None)\nClips gradient of an iterable of parameters at specified value.\nGradients are modified in-place.\nParameters:\n * parameters (Iterable[Tensor] or Tensor) -- an\n iterable of Tensors or a single Tensor that will have\n gradients normalized\n * **clip_value** (*float*) -- maximum allowed value of the\n gradients. The gradients are clipped in the range\n \\left[\\text{-clip\\_value}, \\text{clip\\_value}\\right]\n\n * **foreach** (*bool*) -- use the faster foreach-based\n implementation If \"None\", use the foreach implementation for\n CUDA and CPU tensors and silently fall back to the slow\n implementation for other device types. Default: \"None\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.clip_grad_value_.html", "category": "pytorch docs"} {"text": "torch.Tensor.igammac_\nTensor.igammac_(other) -> Tensor\nIn-place version of \"igammac()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.igammac_.html", "category": "pytorch docs"} {"text": "torch.autograd.functional.hessian\ntorch.autograd.functional.hessian(func, inputs, create_graph=False, strict=False, vectorize=False, outer_jacobian_strategy='reverse-mode')\nFunction that computes the Hessian of a given scalar function.\nParameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a Tensor with a single element.\n * **inputs** (*tuple of Tensors** or **Tensor*) -- inputs to the\n function \"func\".\n\n * **create_graph** (*bool**, **optional*) -- If \"True\", the\n Hessian will be computed in a differentiable manner. Note that\n when \"strict\" is \"False\", the result can not require gradients\n or be disconnected from the inputs. Defaults to \"False\".\n\n * **strict** (*bool**, **optional*) -- If \"True\", an error will\n be raised when we detect that there exists an input such that\n all the outputs are independent of it. If \"False\", we return a\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html", "category": "pytorch docs"} {"text": "Tensor of zeros as the hessian for said inputs, which is the\n expected mathematical value. Defaults to \"False\".\n * **vectorize** (*bool**, **optional*) -- This feature is\n experimental. Please consider using \"torch.func.hessian()\"\n instead if you are looking for something less experimental and\n more performant. When computing the hessian, usually we invoke\n \"autograd.grad\" once per row of the hessian. If this flag is\n \"True\", we use the vmap prototype feature as the backend to\n vectorize calls to \"autograd.grad\" so we only invoke it once\n instead of once per row. This should lead to performance\n improvements in many use cases, however, due to this feature\n being incomplete, there may be performance cliffs. Please use\n *torch._C._debug_only_display_vmap_fallback_warnings(True)* to\n show any performance warnings and file us issues if warnings\n exist for your use case. 
Defaults to \"False\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html", "category": "pytorch docs"} {"text": "\nouter_jacobian_strategy (str, optional) -- The\n Hessian is computed by computing the Jacobian of a Jacobian.\n The inner Jacobian is always computed in reverse-mode AD.\n Setting strategy to \"\"forward-mode\"\" or \"\"reverse-mode\"\"\n determines whether the outer Jacobian will be computed with\n forward or reverse mode AD. Currently, computing the outer\n Jacobian in \"\"forward-mode\"\" requires \"vectorized=True\".\n Defaults to \"\"reverse-mode\"\".\n\nReturns:\n if there is a single input, this will be a single Tensor\n containing the Hessian for the input. If it is a tuple, then the\n Hessian will be a tuple of tuples where \"Hessian[i][j]\" will\n contain the Hessian of the \"i\"th input and \"j\"th input with size\n the sum of the size of the \"i\"th input plus the size of the\n \"j\"th input. \"Hessian[i][j]\" will have the same dtype and device\n as the corresponding \"i\"th input.\nReturn type:", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html", "category": "pytorch docs"} {"text": "Return type:\n Hessian (Tensor or a tuple of tuple of Tensors)\n-[ Example ]-\n\n\n\ndef pow_reducer(x):\n ... return x.pow(3).sum()\ninputs = torch.rand(2, 2)\nhessian(pow_reducer, inputs)\n tensor([[[[5.2265, 0.0000],\n [0.0000, 0.0000]],\n [[0.0000, 4.8221],\n [0.0000, 0.0000]]],\n [[[0.0000, 0.0000],\n [1.9456, 0.0000]],\n [[0.0000, 0.0000],\n [0.0000, 3.2550]]]])\nhessian(pow_reducer, inputs, create_graph=True)\n tensor([[[[5.2265, 0.0000],\n [0.0000, 0.0000]],\n [[0.0000, 4.8221],\n [0.0000, 0.0000]]],\n [[[0.0000, 0.0000],\n [1.9456, 0.0000]],\n [[0.0000, 0.0000],\n [0.0000, 3.2550]]]], grad_fn=)\ndef pow_adder_reducer(x, y):\n ... return (2 * x.pow(2) + 3 * y.pow(2)).sum()\ninputs = (torch.rand(2), torch.rand(2))\nhessian(pow_adder_reducer, inputs)\n ((tensor([[4., 0.],\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html", "category": "pytorch docs"} {"text": "((tensor([[4., 0.],\n [0., 4.]]),\n tensor([[0., 0.],\n [0., 0.]])),\n (tensor([[0., 0.],\n [0., 0.]]),\n tensor([[6., 0.],\n [0., 6.]])))", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.hessian.html", "category": "pytorch docs"} {"text": "torch.hstack\ntorch.hstack(tensors, *, out=None) -> Tensor\nStack tensors in sequence horizontally (column wise).\nThis is equivalent to concatenation along the first axis for 1-D\n tensors, and along the second axis for all other tensors.\nParameters:\n tensors (sequence of Tensors) -- sequence of tensors to\n concatenate\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([1, 2, 3])\n >>> b = torch.tensor([4, 5, 6])\n >>> torch.hstack((a,b))\n tensor([1, 2, 3, 4, 5, 6])\n >>> a = torch.tensor([[1],[2],[3]])\n >>> b = torch.tensor([[4],[5],[6]])\n >>> torch.hstack((a,b))\n tensor([[1, 4],\n [2, 5],\n [3, 6]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.hstack.html", "category": "pytorch docs"} {"text": "torch.vmap\ntorch.vmap(func, in_dims=0, out_dims=0, randomness='error', *, chunk_size=None)\nvmap is the vectorizing map; \"vmap(func)\" returns a new function\n that maps \"func\" over some dimension of the inputs. 
Semantically,\n vmap pushes the map into PyTorch operations called by \"func\",\n effectively vectorizing those operations.\nvmap is useful for handling batch dimensions: one can write a\n function \"func\" that runs on examples and then lift it to a\n function that can take batches of examples with \"vmap(func)\". vmap\n can also be used to compute batched gradients when composed with\n autograd.\nNote:\n \"torch.vmap()\" is aliased to \"torch.func.vmap()\" for convenience.\n Use whichever one you'd like.\n\nParameters:\n * func (function) -- A Python function that takes one or\n more arguments. Must return one or more Tensors.\n * **in_dims** (*int** or **nested structure*) -- Specifies which\n", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"} {"text": "dimension of the inputs should be mapped over. \"in_dims\"\n should have a structure like the inputs. If the \"in_dim\" for a\n particular input is None, then that indicates there is no map\n dimension. Default: 0.\n * **out_dims** (*int** or **Tuple**[**int**]*) -- Specifies\n where the mapped dimension should appear in the outputs. If\n \"out_dims\" is a Tuple, then it should have one element per\n output. Default: 0.\n\n * **randomness** (*str*) -- Specifies whether the randomness in\n this vmap should be the same or different across batches. If\n 'different', the randomness for each batch will be different.\n If 'same', the randomness will be the same across batches. If\n 'error', any calls to random functions will error. Default:\n 'error'. WARNING: this flag only applies to random PyTorch\n operations and does not apply to Python's random module or\n numpy randomness.\n", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"} {"text": "numpy randomness.\n * **chunk_size** (*None** or **int*) -- If None (default), apply\n a single vmap over inputs. If not None, then compute the vmap\n \"chunk_size\" samples at a time. Note that \"chunk_size=1\" is\n equivalent to computing the vmap with a for-loop. If you run\n into memory issues computing the vmap, please try a non-None\n chunk_size.\n\nReturns:\n Returns a new \"batched\" function. It takes the same inputs as\n \"func\", except each input has an extra dimension at the index\n specified by \"in_dims\". It takes returns the same outputs as\n \"func\", except each output has an extra dimension at the index\n specified by \"out_dims\".\nReturn type:\n Callable\nOne example of using \"vmap()\" is to compute batched dot products.\n PyTorch doesn't provide a batched \"torch.dot\" API; instead of\n unsuccessfully rummaging through docs, use \"vmap()\" to construct a\n new function.", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"} {"text": "new function.\n\n\n\ntorch.dot # [D], [D] -> []\nbatched_dot = torch.func.vmap(torch.dot) # [N, D], [N, D] -> [N]\nx, y = torch.randn(2, 5), torch.randn(2, 5)\nbatched_dot(x, y)\n\n\n\n\"vmap()\" can be helpful in hiding batch dimensions, leading to a\n simpler model authoring experience.\n\n\n\nbatch_size, feature_size = 3, 5\nweights = torch.randn(feature_size, requires_grad=True)\ndef model(feature_vec):\n # Very simple linear model with activation\n return feature_vec.dot(weights).relu()\nexamples = torch.randn(batch_size, feature_size)\nresult = torch.vmap(model)(examples)\n\n\n\n\"vmap()\" can also help vectorize computations that were previously\n difficult or impossible to batch. 
One example is higher-order\n gradient computation. The PyTorch autograd engine computes vjps\n (vector-Jacobian products). Computing a full Jacobian matrix for\n some function f: R^N -> R^N usually requires N calls to", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"} {"text": "\"autograd.grad\", one per Jacobian row. Using \"vmap()\", we can\n vectorize the whole computation, computing the Jacobian in a single\n call to \"autograd.grad\".\n\n\n\nSetup\nN = 5\nf = lambda x: x ** 2\nx = torch.randn(N, requires_grad=True)\ny = f(x)\nI_N = torch.eye(N)\nSequential approach\njacobian_rows = [torch.autograd.grad(y, x, v, retain_graph=True)[0]\n for v in I_N.unbind()]\njacobian = torch.stack(jacobian_rows)\nvectorized gradient computation\ndef get_vjp(v):\n return torch.autograd.grad(y, x, v)\njacobian = torch.vmap(get_vjp)(I_N)\n\n\n\n\"vmap()\" can also be nested, producing an output with multiple\n batched dimensions\n\n\n\ntorch.dot # [D], [D] -> []\nbatched_dot = torch.vmap(torch.vmap(torch.dot)) # [N1, N0, D], [N1, N0, D] -> [N1, N0]\nx, y = torch.randn(2, 3, 5), torch.randn(2, 3, 5)\nbatched_dot(x, y) # tensor of size [2, 3]\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"} {"text": "\n\n\nbatched_dot(x, y) # tensor of size [2, 3]\n\n\n\nIf the inputs are not batched along the first dimension, \"in_dims\"\n specifies the dimension that each inputs are batched along as\n\n\n\ntorch.dot # [N], [N] -> []\nbatched_dot = torch.vmap(torch.dot, in_dims=1) # [N, D], [N, D] -> [D]\nx, y = torch.randn(2, 5), torch.randn(2, 5)\nbatched_dot(x, y) # output is [5] instead of [2] if batched along the 0th dimension\n\n\n\nIf there are multiple inputs each of which is batched along\n different dimensions, \"in_dims\" must be a tuple with the batch\n dimension for each input as\n\n\n\ntorch.dot # [D], [D] -> []\nbatched_dot = torch.vmap(torch.dot, in_dims=(0, None)) # [N, D], [D] -> [N]\nx, y = torch.randn(2, 5), torch.randn(5)\nbatched_dot(x, y) # second arg doesn't have a batch dim because in_dim[1] was None\n\n\n\nIf the input is a Python struct, \"in_dims\" must be a tuple", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"} {"text": "containing a struct matching the shape of the input:\n\n\n\nf = lambda dict: torch.dot(dict['x'], dict['y'])\nx, y = torch.randn(2, 5), torch.randn(5)\ninput = {'x': x, 'y': y}\nbatched_dot = torch.vmap(f, in_dims=({'x': 0, 'y': None},))\nbatched_dot(input)\n\n\n\nBy default, the output is batched along the first dimension.\n However, it can be batched along any dimension by using \"out_dims\"\n\n\n\nf = lambda x: x ** 2\nx = torch.randn(2, 5)\nbatched_pow = torch.vmap(f, out_dims=1)\nbatched_pow(x) # [5, 2]\n\n\n\nFor any function that uses kwargs, the returned function will not\n batch the kwargs but will accept kwargs\n\n\n\nx = torch.randn([2, 5])\ndef fn(x, scale=4.):\n return x * scale\nbatched_pow = torch.vmap(fn)\nassert torch.allclose(batched_pow(x), x * 4)\nbatched_pow(x, scale=x) # scale is not batched, output has shape [2, 2, 5]\n\n\n\nNote:\n vmap does not provide general autobatching or handle variable-\n", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"} {"text": "length sequences out of the box.", "source": "https://pytorch.org/docs/stable/generated/torch.vmap.html", "category": "pytorch docs"} {"text": 
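A hedged sketch of one more composition discussed above: per-sample gradients, obtained by combining "vmap()" with "torch.func.grad()". The model, loss, and tensor names below ("loss_fn", "weights", "examples", "targets") are illustrative assumptions, not part of the documented API:

    import torch
    from torch.func import grad, vmap

    def loss_fn(weights, example, target):
        # squared error of a toy linear model (illustrative only)
        return (example.dot(weights) - target) ** 2

    weights = torch.randn(5)
    examples = torch.randn(8, 5)    # a batch of 8 feature vectors
    targets = torch.randn(8)

    # grad(loss_fn) differentiates with respect to the first argument;
    # in_dims=(None, 0, 0) broadcasts weights while mapping over the batch,
    # yielding one gradient per sample with shape [8, 5].
    per_sample_grads = vmap(grad(loss_fn), in_dims=(None, 0, 0))(weights, examples, targets)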
"torch.cuda.default_stream\ntorch.cuda.default_stream(device=None)\nReturns the default \"Stream\" for a given device.\nParameters:\n device (torch.device or int, optional) -- selected\n device. Returns the default \"Stream\" for the current device,\n given by \"current_device()\", if \"device\" is \"None\" (default).\nReturn type:\n Stream", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.default_stream.html", "category": "pytorch docs"} {"text": "torch.Tensor.numpy\nTensor.numpy(*, force=False) -> numpy.ndarray\nReturns the tensor as a NumPy \"ndarray\".\nIf \"force\" is \"False\" (the default), the conversion is performed\n only if the tensor is on the CPU, does not require grad, does not\n have its conjugate bit set, and is a dtype and layout that NumPy\n supports. The returned ndarray and the tensor will share their\n storage, so changes to the tensor will be reflected in the ndarray\n and vice versa.\nIf \"force\" is \"True\" this is equivalent to calling\n \"t.detach().cpu().resolve_conj().resolve_neg().numpy()\". If the\n tensor isn't on the CPU or the conjugate or negative bit is set,\n the tensor won't share its storage with the returned ndarray.\n Setting \"force\" to \"True\" can be a useful shorthand.\nParameters:\n force (bool) -- if \"True\", the ndarray may be a copy of\n the tensor instead of always sharing memory, defaults to\n \"False\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.numpy.html", "category": "pytorch docs"} {"text": "torch.expm1\ntorch.expm1(input, *, out=None) -> Tensor\nAlias for \"torch.special.expm1()\".", "source": "https://pytorch.org/docs/stable/generated/torch.expm1.html", "category": "pytorch docs"} {"text": "torch.nn.functional.pdist\ntorch.nn.functional.pdist(input, p=2) -> Tensor\nComputes the p-norm distance between every pair of row vectors in\n the input. This is identical to the upper triangular portion,\n excluding the diagonal, of torch.norm(input[:, None] - input,\n dim=2, p=p). This function will be faster if the rows are\n contiguous.\nIf input has shape N \\times M then the output will have shape\n \\frac{1}{2} N (N - 1).\nThis function is equivalent to \"scipy.spatial.distance.pdist(input,\n 'minkowski', p=p)\" if p \\in (0, \\infty). 
When p = 0 it is\n equivalent to \"scipy.spatial.distance.pdist(input, 'hamming') * M\".\n When p = \\infty, the closest scipy function is\n \"scipy.spatial.distance.pdist(xn, lambda x, y: np.abs(x -\n y).max())\".\nParameters:\n * input -- input tensor of shape N \\times M.\n * **p** -- p value for the p-norm distance to calculate between\n each vector pair \\in [0, \\infty].\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pdist.html", "category": "pytorch docs"} {"text": "LogSigmoid\nclass torch.nn.LogSigmoid\nApplies the element-wise function:\n \\text{LogSigmoid}(x) = \\log\\left(\\frac{ 1 }{ 1 +\n \\exp(-x)}\\right)\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.LogSigmoid()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LogSigmoid.html", "category": "pytorch docs"} {"text": "torch.Tensor.frac\nTensor.frac() -> Tensor\nSee \"torch.frac()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.frac.html", "category": "pytorch docs"} {"text": "SELU\nclass torch.nn.SELU(inplace=False)\nApplied element-wise, as:\n \\text{SELU}(x) = \\text{scale} * (\\max(0,x) + \\min(0, \\alpha *\n (\\exp(x) - 1)))\n\nwith \\alpha = 1.6732632423543772848170429916717 and \\text{scale} =\n 1.0507009873554804934193349852946.\nWarning:\n When using \"kaiming_normal\" or \"kaiming_normal_\" for\n initialisation, \"nonlinearity='linear'\" should be used instead of\n \"nonlinearity='selu'\" in order to get Self-Normalizing Neural\n Networks. See \"torch.nn.init.calculate_gain()\" for more\n information.\n\nMore details can be found in the paper Self-Normalizing Neural\n Networks .\nParameters:\n inplace (bool, optional) -- can optionally do the\n operation in-place. Default: \"False\"\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.SELU()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SELU.html", "category": "pytorch docs"} {"text": "torch.nn.functional.nll_loss\ntorch.nn.functional.nll_loss(input, target, weight=None, size_average=None, ignore_index=- 100, reduce=None, reduction='mean')\nThe negative log likelihood loss.\nSee \"NLLLoss\" for details.\nParameters:\n * input (Tensor) -- (N, C) where C = number of classes\n or (N, C, H, W) in case of 2D Loss, or (N, C, d_1, d_2, ...,\n d_K) where K \\geq 1 in the case of K-dimensional loss. input\n is expected to be log-probabilities.\n * **target** (*Tensor*) -- (N) where each value is 0 \\leq\n \\text{targets}[i] \\leq C-1, or (N, d_1, d_2, ..., d_K) where K\n \\geq 1 for K-dimensional loss.\n\n * **weight** (*Tensor**, **optional*) -- a manual rescaling\n weight given to each class. If given, has to be a Tensor of\n size *C*\n\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.nll_loss.html", "category": "pytorch docs"} {"text": "loss element in the batch. Note that for some losses, there\n multiple elements per sample. If the field \"size_average\" is\n set to \"False\", the losses are instead summed for each\n minibatch. Ignored when reduce is \"False\". 
Default: \"True\"\n * **ignore_index** (*int**, **optional*) -- Specifies a target\n value that is ignored and does not contribute to the input\n gradient. When \"size_average\" is \"True\", the loss is averaged\n over non-ignored targets. Default: -100\n\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.nll_loss.html", "category": "pytorch docs"} {"text": "\"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\nReturn type:\n Tensor\nExample:\n >>> # input is of size N x C = 3 x 5\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> # each element in target has to have 0 <= value < C\n >>> target = torch.tensor([1, 0, 4])\n >>> output = F.nll_loss(F.log_softmax(input, dim=1), target)\n >>> output.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.nll_loss.html", "category": "pytorch docs"} {"text": "torch.compile\ntorch.compile(model=None, , fullgraph=False, dynamic=False, backend='inductor', mode=None, passes=None, *kwargs)\nOptimizes given model/function using Dynamo and specified backend\nParameters:\n * model (Callable) -- Module/function to optimize\n * **fullgraph** (*bool*) -- Whether it is ok to break model into\n several subgraphs\n\n * **dynamic** (*bool*) -- Use dynamic shape tracing\n\n * **backend** (*str** or **Callable*) -- backend to be used\n\n * **mode** (*str*) -- Can be either \"default\", \"reduce-overhead\"\n or \"max-autotune\"\n\n * **passes** (*dict*) -- A dictionary of passes to the backend.\n Passes currently recognized by inductor backend: - static-\n memory - matmul-tune - matmul-padding - triton-autotune -\n triton-bmm - triton-mm - triton-convolution - rematerialize-\n threshold - rematerialize-acc-threshold\n\nReturn type:\n Callable\nExample:", "source": "https://pytorch.org/docs/stable/generated/torch.compile.html", "category": "pytorch docs"} {"text": "Return type:\n Callable\nExample:\n @torch.compile(passes={\"matmul-padding\": True}, fullgraph=True)\n def foo(x):\n return torch.sin(x) + torch.cos(x)\n", "source": "https://pytorch.org/docs/stable/generated/torch.compile.html", "category": "pytorch docs"} {"text": "torch.nn.functional.local_response_norm\ntorch.nn.functional.local_response_norm(input, size, alpha=0.0001, beta=0.75, k=1.0)\nApplies local response normalization over an input signal composed\n of several input planes, where channels occupy the second\n dimension. 
Applies normalization across channels.\nSee \"LocalResponseNorm\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.local_response_norm.html", "category": "pytorch docs"} {"text": "torch.Tensor.kthvalue\nTensor.kthvalue(k, dim=None, keepdim=False)\nSee \"torch.kthvalue()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.kthvalue.html", "category": "pytorch docs"} {"text": "ModuleList\nclass torch.nn.ModuleList(modules=None)\nHolds submodules in a list.\n\"ModuleList\" can be indexed like a regular Python list, but modules\n it contains are properly registered, and will be visible by all\n \"Module\" methods.\nParameters:\n modules (iterable, optional) -- an iterable of modules\n to add\nExample:\n class MyModule(nn.Module):\n def __init__(self):\n super(MyModule, self).__init__()\n self.linears = nn.ModuleList([nn.Linear(10, 10) for i in range(10)])\n\n def forward(self, x):\n # ModuleList can act as an iterable, or be indexed using ints\n for i, l in enumerate(self.linears):\n x = self.linears[i // 2](x) + l(x)\n return x\n\nappend(module)\n Appends a given module to the end of the list.\n\n Parameters:\n **module** (*nn.Module*) -- module to append\n\n Return type:\n *ModuleList*\n\nextend(modules)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ModuleList.html", "category": "pytorch docs"} {"text": "ModuleList\nextend(modules)\n Appends modules from a Python iterable to the end of the list.\n\n Parameters:\n **modules** (*iterable*) -- iterable of modules to append\n\n Return type:\n *ModuleList*\n\ninsert(index, module)\n Insert a given module before a given index in the list.\n\n Parameters:\n * **index** (*int*) -- index to insert.\n\n * **module** (*nn.Module*) -- module to insert\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ModuleList.html", "category": "pytorch docs"} {"text": "torch.nn.functional.adaptive_max_pool1d\ntorch.nn.functional.adaptive_max_pool1d(args, *kwargs)\nApplies a 1D adaptive max pooling over an input signal composed of\n several input planes.\nSee \"AdaptiveMaxPool1d\" for details and output shape.\nParameters:\n * output_size -- the target output size (single integer)\n * **return_indices** -- whether to return pooling indices.\n Default: \"False\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_max_pool1d.html", "category": "pytorch docs"} {"text": "FakeQuantize\nclass torch.quantization.fake_quantize.FakeQuantize(observer=, quant_min=None, quant_max=None, **observer_kwargs)\nSimulate the quantize and dequantize operations in training time.\n The output of this module is given by:\n x_out = (\n clamp(round(x/scale + zero_point), quant_min, quant_max) - zero_point\n ) * scale\n\n\n\n\"scale\" defines the scale factor used for quantization.\n\n\n\"zero_point\" specifies the quantized value to which 0 in floating\n point maps to\n\n\n\"fake_quant_enabled\" controls the application of fake\n quantization on tensors, note that statistics can still be\n updated.\n\n\n\"observer_enabled\" controls statistics collection on tensors\n\n\n\"dtype\" specifies the quantized dtype that is being emulated with\n fake-quantization,\n allowable values are torch.qint8 and torch.quint8.\n\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FakeQuantize.html", "category": "pytorch docs"} {"text": "Parameters:\n * observer (module) -- Module for observing statistics on\n input 
tensors and calculating scale and zero-point.\n * **observer_kwargs** (*optional*) -- Arguments for the observer\n module\n\nVariables:\n activation_post_process (Module) -- User provided module\n that collects statistics on the input tensor and provides a\n method to calculate scale and zero-point.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FakeQuantize.html", "category": "pytorch docs"} {"text": "torch.adjoint\ntorch.adjoint(Tensor) -> Tensor\nReturns a view of the tensor conjugated and with the last two\n dimensions transposed.\n\"x.adjoint()\" is equivalent to \"x.transpose(-2, -1).conj()\" for\n complex tensors and to \"x.transpose(-2, -1)\" for real tensors.\nExample::\n >>> x = torch.arange(4, dtype=torch.float)\n >>> A = torch.complex(x, x).reshape(2, 2)\n >>> A\n tensor([[0.+0.j, 1.+1.j],\n [2.+2.j, 3.+3.j]])\n >>> A.adjoint()\n tensor([[0.-0.j, 2.-2.j],\n [1.-1.j, 3.-3.j]])\n >>> (A.adjoint() == A.mH).all()\n tensor(True)", "source": "https://pytorch.org/docs/stable/generated/torch.adjoint.html", "category": "pytorch docs"} {"text": "Softmin\nclass torch.nn.Softmin(dim=None)\nApplies the Softmin function to an n-dimensional input Tensor\n rescaling them so that the elements of the n-dimensional output\n Tensor lie in the range [0, 1] and sum to 1.\nSoftmin is defined as:\n \\text{Softmin}(x_{i}) = \\frac{\\exp(-x_i)}{\\sum_j \\exp(-x_j)}\n\nShape:\n * Input: (*) where *** means, any number of additional\n dimensions\n * Output: (*), same shape as the input\n\nParameters:\n dim (int) -- A dimension along which Softmin will be\n computed (so every slice along dim will sum to 1).\nReturns:\n a Tensor of the same dimension and shape as the input, with\n values in the range [0, 1]\nReturn type:\n None\nExamples:\n >>> m = nn.Softmin(dim=1)\n >>> input = torch.randn(2, 3)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softmin.html", "category": "pytorch docs"} {"text": "torch.Tensor.masked_scatter\nTensor.masked_scatter(mask, tensor) -> Tensor\nOut-of-place version of \"torch.Tensor.masked_scatter_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.masked_scatter.html", "category": "pytorch docs"} {"text": "torch.nn.utils.parameters_to_vector\ntorch.nn.utils.parameters_to_vector(parameters)\nConvert parameters to one vector\nParameters:\n parameters (Iterable[Tensor]) -- an iterator of\n Tensors that are the parameters of a model.\nReturns:\n The parameters represented by a single vector\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parameters_to_vector.html", "category": "pytorch docs"} {"text": "default_debug_qconfig\ntorch.quantization.qconfig.default_debug_qconfig\nalias of QConfig(activation=,\n weight=functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_debug_qconfig.html", "category": "pytorch docs"} {"text": "UninitializedParameter\nclass torch.nn.parameter.UninitializedParameter(requires_grad=True, device=None, dtype=None)\nA parameter that is not initialized.\nUninitialized Parameters are a a special case of\n \"torch.nn.Parameter\" where the shape of the data is still unknown.\nUnlike a \"torch.nn.Parameter\", uninitialized parameters hold no\n data and attempting to access some properties, like their shape,\n will throw a runtime error. 
The only operations that can be\n performed on a uninitialized parameter are changing its datatype,\n moving it to a different device and converting it to a regular\n \"torch.nn.Parameter\".\nThe default device or dtype to use when the parameter is\n materialized can be set during construction using e.g.\n \"device='cuda'\".\ncls_to_become\n alias of \"Parameter\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parameter.UninitializedParameter.html", "category": "pytorch docs"} {"text": "torch.linalg.tensorinv\ntorch.linalg.tensorinv(A, ind=2, *, out=None) -> Tensor\nComputes the multiplicative inverse of \"torch.tensordot()\".\nIf m is the product of the first \"ind\" dimensions of \"A\" and n\n is the product of the rest of the dimensions, this function expects\n m and n to be equal. If this is the case, it computes a tensor\n X such that tensordot(\"A\", X, \"ind\") is the identity matrix\n in dimension m. X will have the shape of \"A\" but with the first\n \"ind\" dimensions pushed back to the end\n X.shape == A.shape[ind:] + A.shape[:ind]\n\nSupports input of float, double, cfloat and cdouble dtypes.\nNote:\n When \"A\" is a *2*-dimensional tensor and \"ind\"*= 1*, this\n function computes the (multiplicative) inverse of \"A\" (see\n \"torch.linalg.inv()\").\n\nNote:\n Consider using \"torch.linalg.tensorsolve()\" if possible for\n multiplying a tensor on the left by the tensor inverse, as:\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.tensorinv.html", "category": "pytorch docs"} {"text": "linalg.tensorsolve(A, B) == torch.tensordot(linalg.tensorinv(A), B) # When B is a tensor with shape A.shape[:B.ndim]\n It is always preferred to use \"tensorsolve()\" when possible, as\n it is faster and more numerically stable than computing the\n pseudoinverse explicitly.\n\nSee also:\n \"torch.linalg.tensorsolve()\" computes\n *torch.tensordot(tensorinv(*\"A\"*), *\"B\"*)*.\n\nParameters:\n * A (Tensor) -- tensor to invert. Its shape must satisfy\n prod(\"A\".shape[:\"ind\"]) == prod(\"A\".shape[\"ind\":]).\n * **ind** (*int*) -- index at which to compute the inverse of\n \"torch.tensordot()\". Default: *2*.\n\nKeyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. 
Default: None.\nRaises:\n RuntimeError -- if the reshaped \"A\" is not invertible or the\n product of the first \"ind\" dimensions is not equal to the\n product of the rest.\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.tensorinv.html", "category": "pytorch docs"} {"text": "product of the rest.\nExamples:\n >>> A = torch.eye(4 * 6).reshape((4, 6, 8, 3))\n >>> Ainv = torch.linalg.tensorinv(A, ind=2)\n >>> Ainv.shape\n torch.Size([8, 3, 4, 6])\n >>> B = torch.randn(4, 6)\n >>> torch.allclose(torch.tensordot(Ainv, B), torch.linalg.tensorsolve(A, B))\n True\n\n >>> A = torch.randn(4, 4)\n >>> Atensorinv = torch.linalg.tensorinv(A, ind=1)\n >>> Ainv = torch.linalg.inverse(A)\n >>> torch.allclose(Atensorinv, Ainv)\n True\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.tensorinv.html", "category": "pytorch docs"} {"text": "torch.Tensor.apply_\nTensor.apply_(callable) -> Tensor\nApplies the function \"callable\" to each element in the tensor,\n replacing each element with the value returned by \"callable\".\nNote:\n This function only works with CPU tensors and should not be used\n in code sections that require high performance.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.apply_.html", "category": "pytorch docs"} {"text": "torch.softmax\ntorch.softmax(input, dim, *, dtype=None) -> Tensor\nAlias for \"torch.nn.functional.softmax()\".", "source": "https://pytorch.org/docs/stable/generated/torch.softmax.html", "category": "pytorch docs"} {"text": "torch.randint\ntorch.randint(low=0, high, size, *, generator=None, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nReturns a tensor filled with random integers generated uniformly\n between \"low\" (inclusive) and \"high\" (exclusive).\nThe shape of the tensor is defined by the variable argument \"size\".\nNote:\n With the global dtype default (\"torch.float32\"), this function\n returns a tensor with dtype \"torch.int64\".\n\nParameters:\n * low (int, optional) -- Lowest integer to be drawn\n from the distribution. Default: 0.\n * **high** (*int*) -- One above the highest integer to be drawn\n from the distribution.\n\n * **size** (*tuple*) -- a tuple defining the shape of the output\n tensor.\n\nKeyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling\n * **out** (*Tensor**, **optional*) -- the output tensor.\n", "source": "https://pytorch.org/docs/stable/generated/torch.randint.html", "category": "pytorch docs"} {"text": "\n\ndtype (torch.dtype, optional) -- if \"None\", this\n function returns a tensor with dtype \"torch.int64\".\n\n\nlayout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n\ndevice (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n\nrequires_grad (bool, optional) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\n\n\n\nExample:\n >>> torch.randint(3, 5, (3,))\n tensor([4, 3, 4])\n\n\n >>> torch.randint(10, (2, 2))\n tensor([[0, 2],\n [5, 5]])\n\n\n >>> torch.randint(3, 10, (2, 2))\n tensor([[4, 5],\n [6, 7]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.randint.html", "category": "pytorch docs"} {"text": "torch.Tensor.hardshrink\nTensor.hardshrink(lambd=0.5) -> Tensor\nSee \"torch.nn.functional.hardshrink()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.hardshrink.html", "category": "pytorch docs"} {"text": "get_default_qconfig_mapping\nclass torch.ao.quantization.qconfig_mapping.get_default_qconfig_mapping(backend='x86', version=0)\nReturn the default QConfigMapping for post training quantization.\nParameters:\n * backend (***) -- the quantization backend for the default\n qconfig mapping, should be one of [\"x86\" (default), \"fbgemm\",\n \"qnnpack\", \"onednn\"]\n * **version** (***) -- the version for the default qconfig\n mapping\n\nReturn type:\n QConfigMapping", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.get_default_qconfig_mapping.html", "category": "pytorch docs"} {"text": "QuantStub\nclass torch.quantization.QuantStub(qconfig=None)\nQuantize stub module, before calibration, this is same as an\n observer, it will be swapped as nnq.Quantize in convert.\nParameters:\n qconfig -- quantization configuration for the tensor, if\n qconfig is not provided, we will get qconfig from parent modules", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.QuantStub.html", "category": "pytorch docs"} {"text": "torch.any\ntorch.any(input) -> Tensor\nTests if any element in \"input\" evaluates to True.\nNote:\n This function matches the behaviour of NumPy in returning output\n of dtype *bool* for all supported dtypes except *uint8*. 
For\n *uint8* the dtype of output is *uint8* itself.\n\nExample:\n >>> a = torch.rand(1, 2).bool()\n >>> a\n tensor([[False, True]], dtype=torch.bool)\n >>> torch.any(a)\n tensor(True, dtype=torch.bool)\n >>> a = torch.arange(0, 3)\n >>> a\n tensor([0, 1, 2])\n >>> torch.any(a)\n tensor(True)\n\ntorch.any(input, dim, keepdim=False, *, out=None) -> Tensor\nFor each row of \"input\" in the given dimension \"dim\", returns\n True if any element in the row evaluate to True and False\n otherwise.\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in", "source": "https://pytorch.org/docs/stable/generated/torch.any.html", "category": "pytorch docs"} {"text": "the output tensor having 1 fewer dimension than \"input\".\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to reduce.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4, 2) < 0\n >>> a\n tensor([[ True, True],\n [False, True],\n [ True, True],\n [False, False]])\n >>> torch.any(a, 1)\n tensor([ True, True, True, False])\n >>> torch.any(a, 0)\n tensor([True, True])\n", "source": "https://pytorch.org/docs/stable/generated/torch.any.html", "category": "pytorch docs"} {"text": "torch.Tensor.chunk\nTensor.chunk(chunks, dim=0) -> List of Tensors\nSee \"torch.chunk()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.chunk.html", "category": "pytorch docs"} {"text": "torch.Tensor.erfinv\nTensor.erfinv() -> Tensor\nSee \"torch.erfinv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.erfinv.html", "category": "pytorch docs"} {"text": "torch.Tensor.sparse_resize_\nTensor.sparse_resize_(size, sparse_dim, dense_dim) -> Tensor\nResizes \"self\" sparse tensor to the desired size and the number of\n sparse and dense dimensions.\nNote:\n If the number of specified elements in \"self\" is zero, then\n \"size\", \"sparse_dim\", and \"dense_dim\" can be any size and\n positive integers such that \"len(size) == sparse_dim +\n dense_dim\".If \"self\" specifies one or more elements, however,\n then each dimension in \"size\" must not be smaller than the\n corresponding dimension of \"self\", \"sparse_dim\" must equal the\n number of sparse dimensions in \"self\", and \"dense_dim\" must equal\n the number of dense dimensions in \"self\".\n\nWarning:\n Throws an error if \"self\" is not a sparse tensor.\n\nParameters:\n * size (torch.Size) -- the desired size. If \"self\" is non-\n empty sparse tensor, the desired size cannot be smaller than\n the original size.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_resize_.html", "category": "pytorch docs"} {"text": "the original size.\n * **sparse_dim** (*int*) -- the number of sparse dimensions\n\n * **dense_dim** (*int*) -- the number of dense dimensions\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sparse_resize_.html", "category": "pytorch docs"} {"text": "torch.linalg.multi_dot\ntorch.linalg.multi_dot(tensors, *, out=None)\nEfficiently multiplies two or more matrices by reordering the\n multiplications so that the fewest arithmetic operations are\n performed.\nSupports inputs of float, double, cfloat and cdouble dtypes. 
This\n function does not support batched inputs.\nEvery tensor in \"tensors\" must be 2D, except for the first and last\n which may be 1D. If the first tensor is a 1D vector of shape (n,)\n it is treated as a row vector of shape (1, n), similarly if the\n last tensor is a 1D vector of shape (n,) it is treated as a\n column vector of shape (n, 1).\nIf the first and last tensors are matrices, the output will be a\n matrix. However, if either is a 1D vector, then the output will be\n a 1D vector.\nDifferences with numpy.linalg.multi_dot:\n\nUnlike numpy.linalg.multi_dot, the first and last tensors must\n either be 1D or 2D whereas NumPy allows them to be nD\n\nWarning:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.multi_dot.html", "category": "pytorch docs"} {"text": "Warning:\n This function does not broadcast.\n\nNote:\n This function is implemented by chaining \"torch.mm()\" calls after\n computing the optimal matrix multiplication order.\n\nNote:\n The cost of multiplying two matrices with shapes *(a, b)* and\n *(b, c)* is *a * b * c*. Given matrices *A*, *B*, *C* with shapes\n *(10, 100)*, *(100, 5)*, *(5, 50)* respectively, we can calculate\n the cost of different multiplication orders as follows:\n\n \\begin{align*} \\operatorname{cost}((AB)C) &= 10 \\times 100\n \\times 5 + 10 \\times 5 \\times 50 = 7500 \\\\\n \\operatorname{cost}(A(BC)) &= 10 \\times 100 \\times 50 + 100\n \\times 5 \\times 50 = 75000 \\end{align*}\n\n In this case, multiplying *A* and *B* first followed by *C* is 10\n times faster.\n\nParameters:\n tensors (Sequence[Tensor]) -- two or more tensors to\n multiply. The first and last tensors may be 1D or 2D. Every\n other tensor must be 2D.\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.multi_dot.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. 
Default: None.\nExamples:\n >>> from torch.linalg import multi_dot\n\n >>> multi_dot([torch.tensor([1, 2]), torch.tensor([2, 3])])\n tensor(8)\n >>> multi_dot([torch.tensor([[1, 2]]), torch.tensor([2, 3])])\n tensor([8])\n >>> multi_dot([torch.tensor([[1, 2]]), torch.tensor([[2], [3]])])\n tensor([[8]])\n\n >>> A = torch.arange(2 * 3).view(2, 3)\n >>> B = torch.arange(3 * 2).view(3, 2)\n >>> C = torch.arange(2 * 2).view(2, 2)\n >>> multi_dot((A, B, C))\n tensor([[ 26, 49],\n [ 80, 148]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.multi_dot.html", "category": "pytorch docs"} {"text": "default_qconfig\ntorch.quantization.qconfig.default_qconfig\nalias of QConfig(activation=functools.partial(, quant_min=0,\n quant_max=127){}, weight=functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_qconfig.html", "category": "pytorch docs"} {"text": "torch.fake_quantize_per_tensor_affine\ntorch.fake_quantize_per_tensor_affine(input, scale, zero_point, quant_min, quant_max) -> Tensor\nReturns a new tensor with the data in \"input\" fake quantized using\n \"scale\", \"zero_point\", \"quant_min\" and \"quant_max\".\n \\text{output} = min( \\text{quant\\_max}, max(\n \\text{quant\\_min}, \\text{std::nearby\\_int}(\\text{input}\n / \\text{scale}) + \\text{zero\\_point} ) )\n\nParameters:\n * input (Tensor) -- the input value(s), \"torch.float32\"\n tensor\n * **scale** (double scalar or \"float32\" Tensor) -- quantization\n scale\n\n * **zero_point** (int64 scalar or \"int32\" Tensor) --\n quantization zero_point\n\n * **quant_min** (*int64*) -- lower bound of the quantized domain\n\n * **quant_max** (*int64*) -- upper bound of the quantized domain\n\nReturns:\n A newly fake_quantized \"torch.float32\" tensor\nReturn type:\n Tensor\nExample:", "source": "https://pytorch.org/docs/stable/generated/torch.fake_quantize_per_tensor_affine.html", "category": "pytorch docs"} {"text": "Return type:\n Tensor\nExample:\n >>> x = torch.randn(4)\n >>> x\n tensor([ 0.0552, 0.9730, 0.3973, -1.0780])\n >>> torch.fake_quantize_per_tensor_affine(x, 0.1, 0, 0, 255)\n tensor([0.1000, 1.0000, 0.4000, 0.0000])\n >>> torch.fake_quantize_per_tensor_affine(x, torch.tensor(0.1), torch.tensor(0), 0, 255)\n tensor([0.6000, 0.4000, 0.0000, 0.0000])\n", "source": "https://pytorch.org/docs/stable/generated/torch.fake_quantize_per_tensor_affine.html", "category": "pytorch docs"} {"text": "torch.Tensor.rad2deg\nTensor.rad2deg() -> Tensor\nSee \"torch.rad2deg()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.rad2deg.html", "category": "pytorch docs"} {"text": "torch.Tensor.view\nTensor.view(*shape) -> Tensor\nReturns a new tensor with the same data as the \"self\" tensor but of\n a different \"shape\".\nThe returned tensor shares the same data and must have the same\n number of elements, but may have a different size. For a tensor to\n be viewed, the new view size must be compatible with its original\n size and stride, i.e., each new view dimension must either be a\n subspace of an original dimension, or only span across original\n dimensions d, d+1, \\dots, d+k that satisfy the following\n contiguity-like condition that \\forall i = d, \\dots, d+k-1,\n \\text{stride}[i] = \\text{stride}[i+1] \\times \\text{size}[i+1]\n\nOtherwise, it will not be possible to view \"self\" tensor as \"shape\"\n without copying it (e.g., via \"contiguous()\"). 
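As a minimal sketch of the condition above (assuming ordinary eager-mode tensors; the variable names are arbitrary), flattening a transposed, non-contiguous tensor cannot be expressed as a view, while copying via "contiguous()" or falling back to "reshape()" succeeds:

    import torch

    x = torch.randn(4, 4)
    t = x.transpose(0, 1)                  # a non-contiguous view of x

    try:
        t.view(16)                         # no stride pattern satisfies the condition
    except RuntimeError as err:
        print("view failed:", err)

    print(t.contiguous().view(16).shape)   # copy first, then view: torch.Size([16])
    print(t.reshape(16).shape)             # reshape() copies only when it must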
When it is unclear\n whether a \"view()\" can be performed, it is advisable to use\n \"reshape()\", which returns a view if the shapes are compatible, and", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view.html", "category": "pytorch docs"} {"text": "copies (equivalent to calling \"contiguous()\") otherwise.\nParameters:\n shape (torch.Size or int...) -- the desired size\nExample:\n >>> x = torch.randn(4, 4)\n >>> x.size()\n torch.Size([4, 4])\n >>> y = x.view(16)\n >>> y.size()\n torch.Size([16])\n >>> z = x.view(-1, 8) # the size -1 is inferred from other dimensions\n >>> z.size()\n torch.Size([2, 8])\n\n >>> a = torch.randn(1, 2, 3, 4)\n >>> a.size()\n torch.Size([1, 2, 3, 4])\n >>> b = a.transpose(1, 2) # Swaps 2nd and 3rd dimension\n >>> b.size()\n torch.Size([1, 3, 2, 4])\n >>> c = a.view(1, 3, 2, 4) # Does not change tensor layout in memory\n >>> c.size()\n torch.Size([1, 3, 2, 4])\n >>> torch.equal(b, c)\n False\n\nview(dtype) -> Tensor\nReturns a new tensor with the same data as the \"self\" tensor but of\n a different \"dtype\".\nIf the element size of \"dtype\" is different than that of", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view.html", "category": "pytorch docs"} {"text": "\"self.dtype\", then the size of the last dimension of the output\n will be scaled proportionally. For instance, if \"dtype\" element\n size is twice that of \"self.dtype\", then each pair of elements in\n the last dimension of \"self\" will be combined, and the size of the\n last dimension of the output will be half that of \"self\". If\n \"dtype\" element size is half that of \"self.dtype\", then each\n element in the last dimension of \"self\" will be split in two, and\n the size of the last dimension of the output will be double that of\n \"self\". 
For this to be possible, the following conditions must be\n true:\n * \"self.dim()\" must be greater than 0.\n\n * \"self.stride(-1)\" must be 1.\n\nAdditionally, if the element size of \"dtype\" is greater than that\n of \"self.dtype\", the following conditions must be true as well:\n * \"self.size(-1)\" must be divisible by the ratio between the\n element sizes of the dtypes.\n\n * \"self.storage_offset()\" must be divisible by the ratio between\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view.html", "category": "pytorch docs"} {"text": "the element sizes of the dtypes.\n * The strides of all dimensions, except the last dimension, must\n be divisible by the ratio between the element sizes of the\n dtypes.\n\nIf any of the above conditions are not met, an error is thrown.\nWarning:\n This overload is not supported by TorchScript, and using it in a\n Torchscript program will cause undefined behavior.\n\nParameters:\n dtype (\"torch.dtype\") -- the desired dtype\nExample:\n >>> x = torch.randn(4, 4)\n >>> x\n tensor([[ 0.9482, -0.0310, 1.4999, -0.5316],\n [-0.1520, 0.7472, 0.5617, -0.8649],\n [-2.4724, -0.0334, -0.2976, -0.8499],\n [-0.2109, 1.9913, -0.9607, -0.6123]])\n >>> x.dtype\n torch.float32\n\n >>> y = x.view(torch.int32)\n >>> y\n tensor([[ 1064483442, -1124191867, 1069546515, -1089989247],\n [-1105482831, 1061112040, 1057999968, -1084397505],\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view.html", "category": "pytorch docs"} {"text": "[-1071760287, -1123489973, -1097310419, -1084649136],\n [-1101533110, 1073668768, -1082790149, -1088634448]],\n dtype=torch.int32)\n >>> y[0, 0] = 1000000000\n >>> x\n tensor([[ 0.0047, -0.0310, 1.4999, -0.5316],\n [-0.1520, 0.7472, 0.5617, -0.8649],\n [-2.4724, -0.0334, -0.2976, -0.8499],\n [-0.2109, 1.9913, -0.9607, -0.6123]])\n >>> x.view(torch.cfloat)\n tensor([[ 0.0047-0.0310j, 1.4999-0.5316j],\n [-0.1520+0.7472j, 0.5617-0.8649j],\n [-2.4724-0.0334j, -0.2976-0.8499j],\n [-0.2109+1.9913j, -0.9607-0.6123j]])\n >>> x.view(torch.cfloat).size()\n torch.Size([4, 2])\n\n >>> x.view(torch.uint8)\n tensor([[ 0, 202, 154, 59, 182, 243, 253, 188, 185, 252, 191, 63, 240, 22,\n 8, 191],\n [227, 165, 27, 190, 128, 72, 63, 63, 146, 203, 15, 63, 22, 106,\n 93, 191],\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view.html", "category": "pytorch docs"} {"text": "93, 191],\n [205, 59, 30, 192, 112, 206, 8, 189, 7, 95, 152, 190, 12, 147,\n 89, 191],\n [ 43, 246, 87, 190, 235, 226, 254, 63, 111, 240, 117, 191, 177, 191,\n 28, 191]], dtype=torch.uint8)\n >>> x.view(torch.uint8).size()\n torch.Size([4, 16])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.view.html", "category": "pytorch docs"} {"text": "MultiStepLR\nclass torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones, gamma=0.1, last_epoch=- 1, verbose=False)\nDecays the learning rate of each parameter group by gamma once the\n number of epoch reaches one of the milestones. Notice that such\n decay can happen simultaneously with other changes to the learning\n rate from outside this scheduler. When last_epoch=-1, sets initial\n lr as lr.\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **milestones** (*list*) -- List of epoch indices. Must be\n increasing.\n\n * **gamma** (*float*) -- Multiplicative factor of learning rate\n decay. Default: 0.1.\n\n * **last_epoch** (*int*) -- The index of last epoch. 
Default:\n -1.\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n\n-[ Example ]-\n\n\n\nAssuming optimizer uses lr = 0.05 for all groups\nlr = 0.05 if epoch < 30\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiStepLR.html", "category": "pytorch docs"} {"text": "\n\n\nlr = 0.05 if epoch < 30\nlr = 0.005 if 30 <= epoch < 80\nlr = 0.0005 if epoch >= 80\nscheduler = MultiStepLR(optimizer, milestones=[30,80], gamma=0.1)\nfor epoch in range(100):\n train(...)\n validate(...)\n scheduler.step()\n\n\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)\n Loads the schedulers state.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiStepLR.html", "category": "pytorch docs"} {"text": "torch.Tensor.sqrt_\nTensor.sqrt_() -> Tensor\nIn-place version of \"sqrt()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sqrt_.html", "category": "pytorch docs"} {"text": "torch.autograd.function.FunctionCtx.save_for_backward\nFunctionCtx.save_for_backward(*tensors)\nSaves given tensors for a future call to \"backward()\".\n\"save_for_backward\" should be called at most once, only from inside\n the \"forward()\" method, and only with tensors.\nAll tensors intended to be used in the backward pass should be\n saved with \"save_for_backward\" (as opposed to directly on \"ctx\") to\n prevent incorrect gradients and memory leaks, and enable the\n application of saved tensor hooks. See\n \"torch.autograd.graph.saved_tensors_hooks\".\nNote that if intermediary tensors, tensors that are neither inputs\n nor outputs of \"forward()\", are saved for backward, your custom\n Function may not support double backward. Custom Functions that do\n not support double backward should decorate their \"backward()\"\n method with \"@once_differentiable\" so that performing double", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.save_for_backward.html", "category": "pytorch docs"} {"text": "backward raises an error. If you'd like to support double backward,\n you can either recompute intermediaries based on the inputs during\n backward or return the intermediaries as the outputs of the custom\n Function. See the double backward tutorial for more details.\nIn \"backward()\", saved tensors can be accessed through the\n \"saved_tensors\" attribute. Before returning them to the user, a\n check is made to ensure they weren't used in any in-place operation\n that modified their content.\nArguments can also be \"None\". 
This is a no-op.\nSee Extending torch.autograd for more details on how to use this\n method.\nExample::\n >>> class Func(Function):\n >>> @staticmethod\n >>> def forward(ctx, x: torch.Tensor, y: torch.Tensor, z: int):\n >>> w = x * z\n >>> out = x * y + y * z + w * y\n >>> ctx.save_for_backward(x, y, w, out)\n >>> ctx.z = z # z is not a tensor\n >>> return out\n >>>", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.save_for_backward.html", "category": "pytorch docs"} {"text": "\n\n\n return out\n >>>\n >>> @staticmethod\n >>> @once_differentiable\n >>> def backward(ctx, grad_out):\n >>> x, y, w, out = ctx.saved_tensors\n >>> z = ctx.z\n >>> gx = grad_out * (y + y * z)\n >>> gy = grad_out * (x + z + w)\n >>> gz = None\n >>> return gx, gy, gz\n >>>\n >>> a = torch.tensor(1., requires_grad=True, dtype=torch.double)\n >>> b = torch.tensor(2., requires_grad=True, dtype=torch.double)\n >>> c = 4\n >>> d = Func.apply(a, b, c)\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.save_for_backward.html", "category": "pytorch docs"} {"text": "torch.full\ntorch.full(size, fill_value, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nCreates a tensor of size \"size\" filled with \"fill_value\". The\n tensor's dtype is inferred from \"fill_value\".\nParameters:\n * size (int...) -- a list, tuple, or \"torch.Size\" of\n integers defining the shape of the output tensor.\n * **fill_value** (*Scalar*) -- the value to fill the output\n tensor with.\n\nKeyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n", "source": "https://pytorch.org/docs/stable/generated/torch.full.html", "category": "pytorch docs"} {"text": "returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nExample:\n >>> torch.full((2, 3), 3.141592)\n tensor([[ 3.1416, 3.1416, 3.1416],\n [ 3.1416, 3.1416, 3.1416]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.full.html", "category": "pytorch docs"} {"text": "torch.Tensor.digamma\nTensor.digamma() -> Tensor\nSee \"torch.digamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.digamma.html", "category": "pytorch docs"} {"text": "default_dynamic_quant_observer\ntorch.quantization.observer.default_dynamic_quant_observer\nalias of functools.partial(,\n dtype=torch.quint8, quant_min=0, quant_max=255, is_dynamic=True){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_dynamic_quant_observer.html", "category": "pytorch docs"} {"text": "torch._foreach_floor\ntorch._foreach_floor(self: List[Tensor]) -> List[Tensor]\nApply \"torch.floor()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_floor.html", "category": "pytorch docs"} {"text": "torch.matrix_exp\ntorch.matrix_exp(A) -> Tensor\nAlias for \"torch.linalg.matrix_exp()\".", "source": "https://pytorch.org/docs/stable/generated/torch.matrix_exp.html", "category": "pytorch docs"} {"text": "torch.nanquantile\ntorch.nanquantile(input, q, dim=None, keepdim=False, *, interpolation='linear', out=None) -> Tensor\nThis is a variant of \"torch.quantile()\" that \"ignores\" \"NaN\"\n values, computing the quantiles \"q\" as if \"NaN\" values in \"input\"\n did not exist. If all values in a reduced row are \"NaN\" then the\n quantiles for that reduction will be \"NaN\". See the documentation\n for \"torch.quantile()\".\nParameters:\n * input (Tensor) -- the input tensor.\n * **q** (*float** or **Tensor*) -- a scalar or 1D tensor of\n quantile values in the range [0, 1]\n\n * **dim** (*int*) -- the dimension to reduce.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n * interpolation (str) -- interpolation method to use when\n the desired quantile lies between two data points. Can be\n \"linear\", \"lower\", \"higher\", \"midpoint\" and \"nearest\". Default\n is \"linear\".", "source": "https://pytorch.org/docs/stable/generated/torch.nanquantile.html", "category": "pytorch docs"} {"text": "is \"linear\".\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> t = torch.tensor([float('nan'), 1, 2])\n >>> t.quantile(0.5)\n tensor(nan)\n >>> t.nanquantile(0.5)\n tensor(1.5000)\n >>> t = torch.tensor([[float('nan'), float('nan')], [1, 2]])\n >>> t\n tensor([[nan, nan],\n [1., 2.]])\n >>> t.nanquantile(0.5, dim=0)\n tensor([1., 2.])\n >>> t.nanquantile(0.5, dim=1)\n tensor([ nan, 1.5000])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nanquantile.html", "category": "pytorch docs"} {"text": "torch.aminmax\ntorch.aminmax(input, *, dim=None, keepdim=False, out=None) -> (Tensor min, Tensor max)\nComputes the minimum and maximum values of the \"input\" tensor.\nParameters:\n input (Tensor) -- The input tensor\nKeyword Arguments:\n * dim (Optional[int]) -- The dimension along which\n to compute the values. If None, computes the values over the\n entire \"input\" tensor. Default is None.\n * **keepdim** (*bool*) -- If *True*, the reduced dimensions will\n be kept in the output tensor as dimensions with size 1 for\n broadcasting, otherwise they will be removed, as if calling\n (\"torch.squeeze()\"). 
Default is *False*.\n\n * **out** (*Optional**[**Tuple**[**Tensor**, **Tensor**]**]*) --\n Optional tensors on which to write the result. Must have the\n same shape and dtype as the expected output. Default is\n *None*.\n\nReturns:\n A named tuple (min, max) containing the minimum and maximum", "source": "https://pytorch.org/docs/stable/generated/torch.aminmax.html", "category": "pytorch docs"} {"text": "values.\nRaises:\n RuntimeError -- If any of the dimensions to compute the\n values over has size 0.\nNote:\n NaN values are propagated to the output if at least one value is\n NaN.\n\nSee also:\n \"torch.amin()\" computes just the minimum value \"torch.amax()\"\n computes just the maximum value\n\nExample:\n >>> torch.aminmax(torch.tensor([1, -3, 5]))\n torch.return_types.aminmax(\n min=tensor(-3),\n max=tensor(5))\n\n >>> # aminmax propagates NaNs\n >>> torch.aminmax(torch.tensor([1, -3, 5, torch.nan]))\n torch.return_types.aminmax(\n min=tensor(nan),\n max=tensor(nan))\n\n >>> t = torch.arange(10).view(2, 5)\n >>> t\n tensor([[0, 1, 2, 3, 4],\n [5, 6, 7, 8, 9]])\n >>> t.aminmax(dim=0, keepdim=True)\n torch.return_types.aminmax(\n min=tensor([[0, 1, 2, 3, 4]]),\n max=tensor([[5, 6, 7, 8, 9]]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.aminmax.html", "category": "pytorch docs"} {"text": "torch.autograd.functional.jacobian\ntorch.autograd.functional.jacobian(func, inputs, create_graph=False, strict=False, vectorize=False, strategy='reverse-mode')\nFunction that computes the Jacobian of a given function.\nParameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a tuple of Tensors or a Tensor.\n * **inputs** (*tuple of Tensors** or **Tensor*) -- inputs to the\n function \"func\".\n\n * **create_graph** (*bool**, **optional*) -- If \"True\", the\n Jacobian will be computed in a differentiable manner. Note\n that when \"strict\" is \"False\", the result can not require\n gradients or be disconnected from the inputs. Defaults to\n \"False\".\n\n * **strict** (*bool**, **optional*) -- If \"True\", an error will\n be raised when we detect that there exists an input such that\n all the outputs are independent of it. If \"False\", we return a\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jacobian.html", "category": "pytorch docs"} {"text": "Tensor of zeros as the jacobian for said inputs, which is the\n expected mathematical value. Defaults to \"False\".\n * **vectorize** (*bool**, **optional*) -- This feature is\n experimental. Please consider using \"torch.func.jacrev()\" or\n \"torch.func.jacfwd()\" instead if you are looking for something\n less experimental and more performant. When computing the\n jacobian, usually we invoke \"autograd.grad\" once per row of\n the jacobian. If this flag is \"True\", we perform only a single\n \"autograd.grad\" call with \"batched_grad=True\" which uses the\n vmap prototype feature. Though this should lead to performance\n improvements in many cases, because this feature is still\n experimental, there may be performance cliffs. See\n \"torch.autograd.grad()\"'s \"batched_grad\" parameter for more\n information.\n\n * **strategy** (*str**, **optional*) -- Set to \"\"forward-mode\"\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jacobian.html", "category": "pytorch docs"} {"text": "or \"\"reverse-mode\"\" to determine whether the Jacobian will be\n computed with forward or reverse mode AD. 
Currently,\n \"\"forward-mode\"\" requires \"vectorized=True\". Defaults to\n \"\"reverse-mode\"\". If \"func\" has more outputs than inputs,\n \"\"forward-mode\"\" tends to be more performant. Otherwise,\n prefer to use \"\"reverse-mode\"\".\nReturns:\n if there is a single input and output, this will be a single\n Tensor containing the Jacobian for the linearized inputs and\n output. If one of the two is a tuple, then the Jacobian will be\n a tuple of Tensors. If both of them are tuples, then the\n Jacobian will be a tuple of tuple of Tensors where\n \"Jacobian[i][j]\" will contain the Jacobian of the \"i\"th output\n and \"j\"th input and will have as size the concatenation of the\n sizes of the corresponding output and the corresponding input\n and will have same dtype and device as the corresponding input.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jacobian.html", "category": "pytorch docs"} {"text": "If strategy is \"forward-mode\", the dtype will be that of the\n output; otherwise, the input.\nReturn type:\n Jacobian (Tensor or nested tuple of Tensors)\n-[ Example ]-\n\n\n\ndef exp_reducer(x):\n ... return x.exp().sum(dim=1)\ninputs = torch.rand(2, 2)\njacobian(exp_reducer, inputs)\n tensor([[[1.4917, 2.4352],\n [0.0000, 0.0000]],\n [[0.0000, 0.0000],\n [2.4369, 2.3799]]])\njacobian(exp_reducer, inputs, create_graph=True)\n tensor([[[1.4917, 2.4352],\n [0.0000, 0.0000]],\n [[0.0000, 0.0000],\n [2.4369, 2.3799]]], grad_fn=)\ndef exp_adder(x, y):\n ... return 2 * x.exp() + 3 * y\ninputs = (torch.rand(2), torch.rand(2))\njacobian(exp_adder, inputs)\n (tensor([[2.8052, 0.0000],\n [0.0000, 3.3963]]),\n tensor([[3., 0.],\n [0., 3.]]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.functional.jacobian.html", "category": "pytorch docs"} {"text": "BatchNorm3d\nclass torch.ao.nn.quantized.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, device=None, dtype=None)\nThis is the quantized version of \"BatchNorm3d\".", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.BatchNorm3d.html", "category": "pytorch docs"} {"text": "torch.nonzero\ntorch.nonzero(input, *, out=None, as_tuple=False) -> LongTensor or tuple of LongTensors\nNote:\n \"torch.nonzero(..., as_tuple=False)\" (default) returns a 2-D\n tensor where each row is the index for a nonzero\n value.\"torch.nonzero(..., as_tuple=True)\" returns a tuple of 1-D\n index tensors, allowing for advanced indexing, so\n \"x[x.nonzero(as_tuple=True)]\" gives all nonzero values of tensor\n \"x\". Of the returned tuple, each index tensor contains nonzero\n indices for a certain dimension.See below for more details on the\n two behaviors.When \"input\" is on CUDA, \"torch.nonzero()\" causes\n host-device synchronization.\n\nWhen \"as_tuple\" is \"False\" (default):\nReturns a tensor containing the indices of all non-zero elements of\n \"input\". Each row in the result contains the indices of a non-zero\n element in \"input\". 
The result is sorted lexicographically, with\n the last index changing the fastest (C-style).", "source": "https://pytorch.org/docs/stable/generated/torch.nonzero.html", "category": "pytorch docs"} {"text": "the last index changing the fastest (C-style).\nIf \"input\" has n dimensions, then the resulting indices tensor\n \"out\" is of size (z \\times n), where z is the total number of non-\n zero elements in the \"input\" tensor.\nWhen \"as_tuple\" is \"True\":\nReturns a tuple of 1-D tensors, one for each dimension in \"input\",\n each containing the indices (in that dimension) of all non-zero\n elements of \"input\" .\nIf \"input\" has n dimensions, then the resulting tuple contains n\n tensors of size z, where z is the total number of non-zero elements\n in the \"input\" tensor.\nAs a special case, when \"input\" has zero dimensions and a nonzero\n scalar value, it is treated as a one-dimensional tensor with one\n element.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (LongTensor, optional) -- the output tensor\n containing indices\nReturns:\n If \"as_tuple\" is \"False\", the output tensor containing indices.", "source": "https://pytorch.org/docs/stable/generated/torch.nonzero.html", "category": "pytorch docs"} {"text": "If \"as_tuple\" is \"True\", one 1-D tensor for each dimension,\n containing the indices of each nonzero element along that\n dimension.\nReturn type:\n LongTensor or tuple of LongTensor\nExample:\n >>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]))\n tensor([[ 0],\n [ 1],\n [ 2],\n [ 4]])\n >>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],\n ... [0.0, 0.4, 0.0, 0.0],\n ... [0.0, 0.0, 1.2, 0.0],\n ... [0.0, 0.0, 0.0,-0.4]]))\n tensor([[ 0, 0],\n [ 1, 1],\n [ 2, 2],\n [ 3, 3]])\n >>> torch.nonzero(torch.tensor([1, 1, 1, 0, 1]), as_tuple=True)\n (tensor([0, 1, 2, 4]),)\n >>> torch.nonzero(torch.tensor([[0.6, 0.0, 0.0, 0.0],\n ... [0.0, 0.4, 0.0, 0.0],\n ... [0.0, 0.0, 1.2, 0.0],\n", "source": "https://pytorch.org/docs/stable/generated/torch.nonzero.html", "category": "pytorch docs"} {"text": "... [0.0, 0.0, 0.0,-0.4]]), as_tuple=True)\n (tensor([0, 1, 2, 3]), tensor([0, 1, 2, 3]))\n >>> torch.nonzero(torch.tensor(5), as_tuple=True)\n (tensor([0]),)", "source": "https://pytorch.org/docs/stable/generated/torch.nonzero.html", "category": "pytorch docs"} {"text": "torch.set_default_dtype\ntorch.set_default_dtype(d)\nSets the default floating point dtype to \"d\". Supports\n torch.float32 and torch.float64 as inputs. Other dtypes may be\n accepted without complaint but are not supported and are unlikely\n to work as expected.\nWhen PyTorch is initialized its default floating point dtype is\n torch.float32, and the intent of set_default_dtype(torch.float64)\n is to facilitate NumPy-like type inference. The default floating\n point dtype is used to:\n\n\nImplicitly determine the default complex dtype. When the default\n floating point type is float32 the default complex dtype is\n complex64, and when the default floating point type is float64\n the default complex type is complex128.\n\n\nInfer the dtype for tensors constructed using Python floats or\n complex Python numbers. See examples below.\n\n\nDetermine the result of type promotion between bool and integer\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html", "category": "pytorch docs"} {"text": "tensors and Python floats and complex Python numbers.\nParameters:\n d (\"torch.dtype\") -- the floating point dtype to make the\n default. 
Either torch.float32 or torch.float64.\n-[ Example ]-\n\n\n\ninitial default for floating point is torch.float32\nPython floats are interpreted as float32\ntorch.tensor([1.2, 3]).dtype\n torch.float32\ninitial default for floating point is torch.complex64\nComplex Python numbers are interpreted as complex64\ntorch.tensor([1.2, 3j]).dtype\n torch.complex64\ntorch.set_default_dtype(torch.float64)\nPython floats are now interpreted as float64\ntorch.tensor([1.2, 3]).dtype # a new floating point tensor\n torch.float64\nComplex Python numbers are now interpreted as complex128\ntorch.tensor([1.2, 3j]).dtype # a new complex tensor\n torch.complex128\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_default_dtype.html", "category": "pytorch docs"} {"text": "torch.arctan2\ntorch.arctan2(input, other, *, out=None) -> Tensor\nAlias for \"torch.atan2()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arctan2.html", "category": "pytorch docs"} {"text": "torch.Tensor.trunc_\nTensor.trunc_() -> Tensor\nIn-place version of \"trunc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.trunc_.html", "category": "pytorch docs"} {"text": "RandomStructured\nclass torch.nn.utils.prune.RandomStructured(amount, dim=- 1)\nPrune entire (currently unpruned) channels in a tensor at random.\nParameters:\n * amount (int or float) -- quantity of parameters to\n prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n * **dim** (*int**, **optional*) -- index of the dim along which\n we define channels to prune. Default: -1.\n\nclassmethod apply(module, name, amount, dim=- 1)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n\n Parameters:\n * **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomStructured.html", "category": "pytorch docs"} {"text": "pruning will act.\n * **amount** (*int** or **float*) -- quantity of parameters\n to prune. If \"float\", should be between 0.0 and 1.0 and\n represent the fraction of parameters to prune. If \"int\", it\n represents the absolute number of parameters to prune.\n\n * **dim** (*int**, **optional*) -- index of the dim along\n which we define channels to prune. Default: -1.\n\napply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n\n Parameters:\n **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n Returns:\n pruned version of the input tensor\n\n Return type:\n pruned_tensor (torch.Tensor)\n\ncompute_mask(t, default_mask)\n Computes and returns a mask for the input tensor \"t\". 
Starting\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomStructured.html", "category": "pytorch docs"} {"text": "from a base \"default_mask\" (which should be a mask of ones if\n the tensor has not been pruned yet), generate a random mask to\n apply on top of the \"default_mask\" by randomly zeroing out\n channels along the specified dim of the tensor.\n Parameters:\n * **t** (*torch.Tensor*) -- tensor representing the parameter\n to prune\n\n * **default_mask** (*torch.Tensor*) -- Base mask from\n previous pruning iterations, that need to be respected\n after the new mask is applied. Same dims as \"t\".\n\n Returns:\n mask to apply to \"t\", of same dims as \"t\"\n\n Return type:\n mask (torch.Tensor)\n\n Raises:\n **IndexError** -- if \"self.dim >= len(t.shape)\"\n\nprune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n\n Parameters:\n * **t** (*torch.Tensor*) -- tensor to prune (of same\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomStructured.html", "category": "pytorch docs"} {"text": "dimensions as \"default_mask\").\n * **importance_scores** (*torch.Tensor*) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n\n * **default_mask** (*torch.Tensor**, **optional*) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n\n Returns:\n pruned version of tensor \"t\".\n\nremove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomStructured.html", "category": "pytorch docs"} {"text": "list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n Note:\n\n Pruning itself is NOT undone or reversed!\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomStructured.html", "category": "pytorch docs"} {"text": "torch.cuda.get_rng_state_all\ntorch.cuda.get_rng_state_all()\nReturns a list of ByteTensor representing the random number states\n of all devices.\nReturn type:\n List[Tensor]", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_rng_state_all.html", "category": "pytorch docs"} {"text": "torch.fix\ntorch.fix(input, *, out=None) -> Tensor\nAlias for \"torch.trunc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.fix.html", "category": "pytorch docs"} {"text": "torch.cuda.seed_all\ntorch.cuda.seed_all()\nSets the seed for generating random numbers to a random number on\n all GPUs. It's safe to call this function if CUDA is not available;\n in that case, it is silently ignored.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.seed_all.html", "category": "pytorch docs"} {"text": "set_grad_enabled\nclass torch.set_grad_enabled(mode)\nContext-manager that sets gradient calculation on or off.\n\"set_grad_enabled\" will enable or disable grads based on its\n argument \"mode\". 
It can be used as a context-manager or as a\n function.\nThis context manager is thread local; it will not affect\n computation in other threads.\nParameters:\n mode (bool) -- Flag whether to enable grad (\"True\"), or\n disable (\"False\"). This can be used to conditionally enable\n gradients.\nNote:\n set_grad_enabled is one of several mechanisms that can enable or\n disable gradients locally see Locally disabling gradient\n computation for more information on how they compare.\n\nNote:\n This API does not apply to forward-mode AD.\n\nExample::\n >>> x = torch.tensor([1.], requires_grad=True)\n >>> is_train = False\n >>> with torch.set_grad_enabled(is_train):\n ... y = x * 2\n >>> y.requires_grad\n False", "source": "https://pytorch.org/docs/stable/generated/torch.set_grad_enabled.html", "category": "pytorch docs"} {"text": "\n\n\ny.requires_grad\n False\n >>> _ = torch.set_grad_enabled(True)\n >>> y = x * 2\n >>> y.requires_grad\n True\n >>> _ = torch.set_grad_enabled(False)\n >>> y = x * 2\n >>> y.requires_grad\n False\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_grad_enabled.html", "category": "pytorch docs"} {"text": "torch.nn.functional.fractional_max_pool2d\ntorch.nn.functional.fractional_max_pool2d(args, *kwargs)\nApplies 2D fractional max pooling over an input signal composed of\n several input planes.\nFractional MaxPooling is described in detail in the paper\n Fractional MaxPooling by Ben Graham\nThe max-pooling operation is applied in kH \\times kW regions by a\n stochastic step size determined by the target output size. The\n number of output features is equal to the number of input planes.\nParameters:\n * kernel_size -- the size of the window to take a max over.\n Can be a single number k (for a square kernel of k \\times k)\n or a tuple (kH, kW)\n * **output_size** -- the target output size of the image of the\n form oH \\times oW. Can be a tuple *(oH, oW)* or a single\n number oH for a square image oH \\times oH\n\n * **output_ratio** -- If one wants to have an output size as a\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.fractional_max_pool2d.html", "category": "pytorch docs"} {"text": "ratio of the input size, this option can be given. This has to\n be a number or tuple in the range (0, 1)\n * **return_indices** -- if \"True\", will return the indices along\n with the outputs. 
Useful to pass to \"max_unpool2d()\".\n\nExamples::\n >>> input = torch.randn(20, 16, 50, 32)\n >>> # pool of square window of size=3, and target output size 13x12\n >>> F.fractional_max_pool2d(input, 3, output_size=(13, 12))\n >>> # pool of square window and target output size being half of input image size\n >>> F.fractional_max_pool2d(input, 3, output_ratio=(0.5, 0.5))", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.fractional_max_pool2d.html", "category": "pytorch docs"} {"text": "torch.nn.functional.batch_norm\ntorch.nn.functional.batch_norm(input, running_mean, running_var, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05)\nApplies Batch Normalization for each channel across a batch of\n data.\nSee \"BatchNorm1d\", \"BatchNorm2d\", \"BatchNorm3d\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.batch_norm.html", "category": "pytorch docs"} {"text": "torch.jit.set_fusion_strategy\ntorch.jit.set_fusion_strategy(strategy)\nSets the type and number of specializations that can occur during\n fusion.\nUsage: provide a list of pairs (type, depth) where type is one of\n \"STATIC\" or \"DYNAMIC\" and depth is an integer.\nBehavior - static vs dynamic:\n In STATIC fusion, fused ops are compiled to have fixed input\n shapes. The shape is determined based on some initial profiling\n runs. In DYNAMIC fusion, fused ops are compiled to have variable\n input shapes, so that multiple shapes are possible.\nIn both cases, we also recompile on new striding behavior, device,\n or dtype.\nBehavior - fallback functions & depth:\n When an input doesn't match the format required by the\n specialized compiled op, it will run a fallback function.\n Fallback functions are recursively be compiled and specialized\n based on the observed tensor shapes. Since compilation can be", "source": "https://pytorch.org/docs/stable/generated/torch.jit.set_fusion_strategy.html", "category": "pytorch docs"} {"text": "slow, the \"depth\" parameter is provided to limit the number of\n specializations that can be compiled, before giving up on\n recompiling and falling back to a completely un-fused, un-\n specialized implementation.\nThe list of (type, depth) pairs controls the type of\n specializations and the number of specializations. 
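A minimal illustrative call of this form (a sketch, not the documented example; it assumes a build where the TorchScript fusers are enabled):
    >>> # allow two static-shape specializations, then two dynamic-shape specializations
    >>> _ = torch.jit.set_fusion_strategy([('STATIC', 2), ('DYNAMIC', 2)])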
For example:\n [(\"STATIC\", 2), (\"DYNAMIC\", 2)] indicates that the first two\n specializations will use static fusion, the following two\n specializations will use dynamic fusion, and any inputs that\n satisfy none of the 4 options will run an unfused implementation.\nNB: in the future, as more fusion backends are added, there may be\n more granular APIs for specific fusers.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.set_fusion_strategy.html", "category": "pytorch docs"} {"text": "torch.Tensor.lgamma_\nTensor.lgamma_() -> Tensor\nIn-place version of \"lgamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lgamma_.html", "category": "pytorch docs"} {"text": "torch.linalg.matmul\ntorch.linalg.matmul(input, other, *, out=None) -> Tensor\nAlias for \"torch.matmul()\"", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matmul.html", "category": "pytorch docs"} {"text": "AdaptiveMaxPool2d\nclass torch.nn.AdaptiveMaxPool2d(output_size, return_indices=False)\nApplies a 2D adaptive max pooling over an input signal composed of\n several input planes.\nThe output is of size H_{out} \\times W_{out}, for any input size.\n The number of output features is equal to the number of input\n planes.\nParameters:\n * output_size (Union[int, None,\n Tuple[Optional[int],\n Optional[int]]]) -- the target output size of the\n image of the form H_{out} \\times W_{out}. Can be a tuple\n (H_{out}, W_{out}) or a single H_{out} for a square image\n H_{out} \\times H_{out}. H_{out} and W_{out} can be either an\n \"int\", or \"None\" which means the size will be the same as that\n of the input.\n * **return_indices** (*bool*) -- if \"True\", will return the\n indices along with the outputs. Useful to pass to\n nn.MaxUnpool2d. 
Default: \"False\"\n\nShape:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool2d.html", "category": "pytorch docs"} {"text": "Shape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where (H_{out}, W_{out})=\\text{output\\_size}.\n\n-[ Examples ]-\n\n\n\n# target output size of 5x7\nm = nn.AdaptiveMaxPool2d((5, 7))\ninput = torch.randn(1, 64, 8, 9)\noutput = m(input)\n# target output size of 7x7 (square)\nm = nn.AdaptiveMaxPool2d(7)\ninput = torch.randn(1, 64, 10, 9)\noutput = m(input)\n# target output size of 10x7\nm = nn.AdaptiveMaxPool2d((None, 7))\ninput = torch.randn(1, 64, 10, 9)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.slice_scatter\nTensor.slice_scatter(src, dim=0, start=None, end=None, step=1) -> Tensor\nSee \"torch.slice_scatter()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.slice_scatter.html", "category": "pytorch docs"} {"text": "torch.Tensor.cov\nTensor.cov(*, correction=1, fweights=None, aweights=None) -> Tensor\nSee \"torch.cov()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cov.html", "category": "pytorch docs"} {"text": "UpsamplingBilinear2d\nclass torch.nn.UpsamplingBilinear2d(size=None, scale_factor=None)\nApplies a 2D bilinear upsampling to an input signal composed of\n several input channels.\nTo specify the scale, it takes either the \"size\" or the\n \"scale_factor\" as its constructor argument.\nWhen \"size\" is given, it is the output size of the image (h, w).\nParameters:\n * size (int or Tuple[int, int],\n optional) -- output spatial sizes\n * **scale_factor** (*float** or **Tuple**[**float**,\n **float**]**, **optional*) -- multiplier for spatial size.\n\nWarning:\n This class is deprecated in favor of \"interpolate()\". 
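As an illustrative sketch of the replacement call (hedged; the tensor x below is a placeholder and not part of the official example):
    >>> x = torch.randn(1, 3, 2, 2)
    >>> torch.nn.functional.interpolate(x, scale_factor=2, mode='bilinear', align_corners=True).shape
    torch.Size([1, 3, 4, 4])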
It is\n equivalent to \"nn.functional.interpolate(..., mode='bilinear',\n align_corners=True)\".\n\nShape:\n * Input: (N, C, H_{in}, W_{in})\n * Output: (N, C, H_{out}, W_{out}) where\n\n H_{out} = \\left\\lfloor H_{in} \\times \\text{scale\\_factor}\n \\right\\rfloor\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.UpsamplingBilinear2d.html", "category": "pytorch docs"} {"text": "\\right\\rfloor\n W_{out} = \\left\\lfloor W_{in} \\times \\text{scale\\_factor}\n \\right\\rfloor\n\nExamples:\n >>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)\n >>> input\n tensor([[[[1., 2.],\n [3., 4.]]]])\n\n >>> m = nn.UpsamplingBilinear2d(scale_factor=2)\n >>> m(input)\n tensor([[[[1.0000, 1.3333, 1.6667, 2.0000],\n [1.6667, 2.0000, 2.3333, 2.6667],\n [2.3333, 2.6667, 3.0000, 3.3333],\n [3.0000, 3.3333, 3.6667, 4.0000]]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.UpsamplingBilinear2d.html", "category": "pytorch docs"} {"text": "AvgPool2d\nclass torch.nn.AvgPool2d(kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)\nApplies a 2D average pooling over an input signal composed of\n several input planes.\nIn the simplest case, the output value of the layer with input size\n (N, C, H, W), output (N, C, H_{out}, W_{out}) and \"kernel_size\"\n (kH, kW) can be precisely described as:\n out(N_i, C_j, h, w) = \\frac{1}{kH * kW} \\sum_{m=0}^{kH-1}\n \\sum_{n=0}^{kW-1} input(N_i, C_j,\n stride[0] \\times h + m, stride[1] \\times w + n)\n\nIf \"padding\" is non-zero, then the input is implicitly zero-padded\n on both sides for \"padding\" number of points.\nNote:\n When ceil_mode=True, sliding windows are allowed to go off-bounds\n if they start within the left padding or the input. Sliding\n windows that would start in the right padded region are ignored.\n\nThe parameters \"kernel_size\", \"stride\", \"padding\" can either be:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html", "category": "pytorch docs"} {"text": "\n\na single \"int\" -- in which case the same value is used for the\n height and width dimension\n\na \"tuple\" of two ints -- in which case, the first int is\n used for the height dimension, and the second int for the\n width dimension\n\n\n\nParameters:\n * kernel_size (Union[int, Tuple[int,\n int]]) -- the size of the window\n * **stride** (*Union**[**int**, **Tuple**[**int**, **int**]**]*)\n -- the stride of the window. 
Default value is \"kernel_size\"\n\n * **padding** (*Union**[**int**, **Tuple**[**int**,\n **int**]**]*) -- implicit zero padding to be added on both\n sides\n\n * **ceil_mode** (*bool*) -- when True, will use *ceil* instead\n of *floor* to compute the output shape\n\n * **count_include_pad** (*bool*) -- when True, will include the\n zero-padding in the averaging calculation\n\n * **divisor_override** (*Optional**[**int**]*) -- if specified,\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html", "category": "pytorch docs"} {"text": "it will be used as divisor, otherwise size of the pooling\n region will be used.\nShape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where\n\n H_{out} = \\left\\lfloor\\frac{H_{in} + 2 \\times\n \\text{padding}[0] -\n \\text{kernel\\_size}[0]}{\\text{stride}[0]} + 1\\right\\rfloor\n\n W_{out} = \\left\\lfloor\\frac{W_{in} + 2 \\times\n \\text{padding}[1] -\n \\text{kernel\\_size}[1]}{\\text{stride}[1]} + 1\\right\\rfloor\n\nExamples:\n >>> # pool of square window of size=3, stride=2\n >>> m = nn.AvgPool2d(3, stride=2)\n >>> # pool of non-square window\n >>> m = nn.AvgPool2d((3, 2), stride=(2, 1))\n >>> input = torch.randn(20, 16, 50, 32)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AvgPool2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.fix\nTensor.fix() -> Tensor\nSee \"torch.fix()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fix.html", "category": "pytorch docs"} {"text": "torch.nn.functional.feature_alpha_dropout\ntorch.nn.functional.feature_alpha_dropout(input, p=0.5, training=False, inplace=False)\nRandomly masks out entire channels (a channel is a feature map,\n e.g. the j-th channel of the i-th sample in the batch input is a\n tensor \\text{input}[i, j]) of the input tensor). Instead of setting\n activations to zero, as in regular Dropout, the activations are set\n to the negative saturation value of the SELU activation function.\nEach element will be masked independently on every forward call\n with probability \"p\" using samples from a Bernoulli distribution.\n The elements to be masked are randomized on every forward call, and\n scaled and shifted to maintain zero mean and unit variance.\nSee \"FeatureAlphaDropout\" for details.\nParameters:\n * p (float) -- dropout probability of a channel to be\n zeroed. Default: 0.5\n * **training** (*bool*) -- apply dropout if is \"True\". Default:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.feature_alpha_dropout.html", "category": "pytorch docs"} {"text": "\"True\"\n * **inplace** (*bool*) -- If set to \"True\", will do this\n operation in-place. Default: \"False\"\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.feature_alpha_dropout.html", "category": "pytorch docs"} {"text": "torch.Tensor.to_sparse_csr\nTensor.to_sparse_csr(dense_dim=None) -> Tensor\nConvert a tensor to compressed row storage format (CSR). Except\n for strided tensors, only works with 2D tensors. If the \"self\" is\n strided, then the number of dense dimensions could be specified,\n and a hybrid CSR tensor will be created, with dense_dim dense\n dimensions and self.dim() - 2 - dense_dim batch dimension.\nParameters:\n dense_dim (int, optional) -- Number of dense\n dimensions of the resulting CSR tensor. 
This argument should be\n used only if \"self\" is a strided tensor, and must be a value\n between 0 and dimension of \"self\" tensor minus two.\nExample:\n >>> dense = torch.randn(5, 5)\n >>> sparse = dense.to_sparse_csr()\n >>> sparse._nnz()\n 25\n\n >>> dense = torch.zeros(3, 3, 1, 1)\n >>> dense[0, 0] = dense[1, 2] = dense[2, 1] = 1\n >>> dense.to_sparse_csr(dense_dim=2)\n tensor(crow_indices=tensor([0, 1, 2, 3]),\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_csr.html", "category": "pytorch docs"} {"text": "tensor(crow_indices=tensor([0, 1, 2, 3]),\n col_indices=tensor([0, 2, 1]),\n values=tensor([[[1.]],\n [[1.]],\n\n [[1.]]]), size=(3, 3, 1, 1), nnz=3,\n layout=torch.sparse_csr)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_csr.html", "category": "pytorch docs"} {"text": "torch.nn.functional.glu\ntorch.nn.functional.glu(input, dim=- 1) -> Tensor\nThe gated linear unit. Computes:\n \\text{GLU}(a, b) = a \\otimes \\sigma(b)\n\nwhere input is split in half along dim to form a and b,\n \\sigma is the sigmoid function and \\otimes is the element-wise\n product between matrices.\nSee Language Modeling with Gated Convolutional Networks.\nParameters:\n * input (Tensor) -- input tensor\n * **dim** (*int*) -- dimension on which to split the input.\n Default: -1\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.glu.html", "category": "pytorch docs"} {"text": "torch.nn.functional.conv1d\ntorch.nn.functional.conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1) -> Tensor\nApplies a 1D convolution over an input signal composed of several\n input planes.\nThis operator supports TensorFloat32.\nSee \"Conv1d\" for details and output shape.\nNote:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n\nNote:\n This operator supports complex data types i.e. \"complex32,\n complex64, complex128\".\n\nParameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iW)\n * **weight** -- filters of shape (\\text{out\\_channels} ,\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv1d.html", "category": "pytorch docs"} {"text": "\\frac{\\text{in_channels}}{\\text{groups}} , kW)\n * **bias** -- optional bias of shape (\\text{out\\_channels}).\n Default: \"None\"\n\n * **stride** -- the stride of the convolving kernel. Can be a\n single number or a one-element tuple *(sW,)*. Default: 1\n\n * **padding** --\n\n implicit paddings on both sides of the input. Can be a string\n {'valid', 'same'}, single number or a one-element tuple\n *(padW,)*. Default: 0 \"padding='valid'\" is the same as no\n padding. \"padding='same'\" pads the input so the output has the\n same shape as the input. However, this mode doesn't support\n any stride values other than 1.\n\n Warning:\n\n For \"padding='same'\", if the \"weight\" is even-length and\n \"dilation\" is odd in any dimension, a full \"pad()\" operation\n may be needed internally. Lowering performance.\n\n * **dilation** -- the spacing between kernel elements. 
Can be a\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv1d.html", "category": "pytorch docs"} {"text": "single number or a one-element tuple (dW,). Default: 1\n * **groups** -- split input into groups, \\text{in\\_channels}\n should be divisible by the number of groups. Default: 1\n\nExamples:\n >>> inputs = torch.randn(33, 16, 30)\n >>> filters = torch.randn(20, 16, 5)\n >>> F.conv1d(inputs, filters)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv1d.html", "category": "pytorch docs"} {"text": "default_weight_observer\ntorch.quantization.observer.default_weight_observer\nalias of functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_tensor_symmetric){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_weight_observer.html", "category": "pytorch docs"} {"text": "torch.gradient\ntorch.gradient(input, *, spacing=1, dim=None, edge_order=1) -> List of Tensors\nEstimates the gradient of a function g : \\mathbb{R}^n \\rightarrow\n \\mathbb{R} in one or more dimensions using the second-order\n accurate central differences method.\nThe gradient of g is estimated using samples. By default, when\n \"spacing\" is not specified, the samples are entirely described by\n \"input\", and the mapping of input coordinates to an output is the\n same as the tensor's mapping of indices to values. For example, for\n a three-dimensional \"input\" the function described is g :\n \\mathbb{R}^3 \\rightarrow \\mathbb{R}, and g(1, 2, 3)\\ == input[1, 2,\n 3].\nWhen \"spacing\" is specified, it modifies the relationship between\n \"input\" and input coordinates. This is detailed in the \"Keyword\n Arguments\" section below.\nThe gradient is estimated by estimating each partial derivative of\n g independently. This estimation is accurate if g is in C^3 (it has", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"} {"text": "at least 3 continuous derivatives), and the estimation can be\n improved by providing closer samples. Mathematically, the value at\n each interior point of a partial derivative is estimated using\n Taylor\u00e2\u0080\u0099s theorem with remainder. Letting x be an interior point and\n x+h_r be point neighboring it, the partial gradient at f(x+h_r) is\n estimated using:\n \\begin{aligned} f(x+h_r) = f(x) + h_r f'(x) + {h_r}^2\n \\frac{f''(x)}{2} + {h_r}^3 \\frac{f'''(x_r)}{6} \\\\ \\end{aligned}\n\nwhere x_r is a number in the interval [x, x+ h_r] and using the\n fact that f \\in C^3 we derive :\n f'(x) \\approx \\frac{ {h_l}^2 f(x+h_r) - {h_r}^2 f(x-h_l) +\n ({h_r}^2-{h_l}^2 ) f(x) }{ {h_r} {h_l}^2 + {h_r}^2 {h_l} }\n\nNote:\n We estimate the gradient of functions in complex domain g :\n \\mathbb{C}^n \\rightarrow \\mathbb{C} in the same way.\n\nThe value of each partial derivative at the boundary points is\n computed differently. See edge_order below.\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"} {"text": "Parameters:\n input (\"Tensor\") -- the tensor that represents the values of\n the function\nKeyword Arguments:\n * spacing (\"scalar\", \"list of scalar\", \"list of Tensor\",\n optional) -- \"spacing\" can be used to modify how the \"input\"\n tensor's indices relate to sample coordinates. If \"spacing\" is\n a scalar then the indices are multiplied by the scalar to\n produce the coordinates. 
For example, if \"spacing=2\" the\n indices (1, 2, 3) become coordinates (2, 4, 6). If \"spacing\"\n is a list of scalars then the corresponding indices are\n multiplied. For example, if \"spacing=(2, -1, 3)\" the indices\n (1, 2, 3) become coordinates (2, -2, 9). Finally, if \"spacing\"\n is a list of one-dimensional tensors then each tensor\n specifies the coordinates for the corresponding dimension. For\n example, if the indices are (1, 2, 3) and the tensors are (t0,\n t1, t2), then the coordinates are (t0[1], t1[2], t2[3])", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"} {"text": "\n\ndim (\"int\", \"list of int\", optional) -- the dimension or\n dimensions to approximate the gradient over. By default the\n partial gradient in every dimension is computed. Note that\n when \"dim\" is specified the elements of the \"spacing\"\n argument must correspond with the specified dims.\"\n\nedge_order (\"int\", optional) -- 1 or 2, for first-order or\n second-order estimation of the boundary (\"edge\") values,\n respectively.\n\n\n\nExamples:\n >>> # Estimates the gradient of f(x)=x^2 at points [-2, -1, 2, 4]\n >>> coordinates = (torch.tensor([-2., -1., 1., 4.]),)\n >>> values = torch.tensor([4., 1., 1., 16.], )\n >>> torch.gradient(values, spacing = coordinates)\n (tensor([-3., -2., 2., 5.]),)\n\n >>> # Estimates the gradient of the R^2 -> R function whose samples are\n >>> # described by the tensor t. Implicit coordinates are [0, 1] for the outermost\n", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"} {"text": "\n\n\ndimension and [0, 1, 2, 3] for the innermost dimension, and function estimates\n >>> # partial derivative for both dimensions.\n >>> t = torch.tensor([[1, 2, 4, 8], [10, 20, 40, 80]])\n >>> torch.gradient(t)\n (tensor([[ 9., 18., 36., 72.],\n [ 9., 18., 36., 72.]]),\n tensor([[ 1.0000, 1.5000, 3.0000, 4.0000],\n [10.0000, 15.0000, 30.0000, 40.0000]]))\n\n\n\n\n >>> # A scalar value for spacing modifies the relationship between tensor indices\n >>> # and input coordinates by multiplying the indices to find the\n >>> # coordinates. 
For example, below the indices of the innermost\n >>> # 0, 1, 2, 3 translate to coordinates of [0, 2, 4, 6], and the indices of\n >>> # the outermost dimension 0, 1 translate to coordinates of [0, 2].\n >>> torch.gradient(t, spacing = 2.0) # dim = None (implicitly [0, 1])\n (tensor([[ 4.5000, 9.0000, 18.0000, 36.0000],\n [ 4.5000, 9.0000, 18.0000, 36.0000]]),\n", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"} {"text": "tensor([[ 0.5000, 0.7500, 1.5000, 2.0000],\n [ 5.0000, 7.5000, 15.0000, 20.0000]]))\n >>> # doubling the spacing between samples halves the estimated partial gradients.\n >>>\n >>> # Estimates only the partial derivative for dimension 1\n >>> torch.gradient(t, dim = 1) # spacing = None (implicitly 1.)\n (tensor([[ 1.0000, 1.5000, 3.0000, 4.0000],\n [10.0000, 15.0000, 30.0000, 40.0000]]),)\n\n >>> # When spacing is a list of scalars, the relationship between the tensor\n >>> # indices and input coordinates changes based on dimension.\n >>> # For example, below, the indices of the innermost dimension 0, 1, 2, 3 translate\n >>> # to coordinates of [0, 3, 6, 9], and the indices of the outermost dimension\n >>> # 0, 1 translate to coordinates of [0, 2].\n >>> torch.gradient(t, spacing = [3., 2.])\n (tensor([[ 4.5000, 9.0000, 18.0000, 36.0000],\n [ 4.5000, 9.0000, 18.0000, 36.0000]]),\n", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"} {"text": "tensor([[ 0.3333, 0.5000, 1.0000, 1.3333],\n [ 3.3333, 5.0000, 10.0000, 13.3333]]))\n >>> # The following example is a replication of the previous one with explicit\n >>> # coordinates.\n >>> coords = (torch.tensor([0, 2]), torch.tensor([0, 3, 6, 9]))\n >>> torch.gradient(t, spacing = coords)\n (tensor([[ 4.5000, 9.0000, 18.0000, 36.0000],\n [ 4.5000, 9.0000, 18.0000, 36.0000]]),\n tensor([[ 0.3333, 0.5000, 1.0000, 1.3333],\n [ 3.3333, 5.0000, 10.0000, 13.3333]]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.gradient.html", "category": "pytorch docs"} {"text": "torch.select_scatter\ntorch.select_scatter(input, src, dim, index) -> Tensor\nEmbeds the values of the \"src\" tensor into \"input\" at the given\n index. This function returns a tensor with fresh storage; it does\n not create a view.\nParameters:\n * input (Tensor) -- the input tensor.\n * **src** (*Tensor*) -- The tensor to embed into \"input\"\n\n * **dim** (*int*) -- the dimension to insert the slice into.\n\n * **index** (*int*) -- the index to select with\n\nNote:\n \"src\" must be of the proper size in order to be embedded into\n \"input\". Specifically, it should have the same shape as\n \"torch.select(input, dim, index)\"\n\nExample:\n >>> a = torch.zeros(2, 2)\n >>> b = torch.ones(2)\n >>> a.select_scatter(b, 0, 0)\n tensor([[1., 1.],\n [0., 0.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.select_scatter.html", "category": "pytorch docs"} {"text": "torch.Tensor.q_per_channel_zero_points\nTensor.q_per_channel_zero_points() -> Tensor\nGiven a Tensor quantized by linear (affine) per-channel\n quantization, returns a tensor of zero_points of the underlying\n quantizer. 
It has the number of elements that matches the\n corresponding dimensions (from q_per_channel_axis) of the tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.q_per_channel_zero_points.html", "category": "pytorch docs"} {"text": "LSTM\nclass torch.nn.quantizable.LSTM(input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, device=None, dtype=None)\nA quantizable long short-term memory (LSTM).\nFor the description and the argument types, please refer to \"LSTM\"\nVariables:\n layers -- instances of the _LSTMLayer\nNote:\n To access the weights and biases, you need to access them per\n layer. See examples below.\n\nExamples:\n >>> import torch.nn.quantizable as nnqa\n >>> rnn = nnqa.LSTM(10, 20, 2)\n >>> input = torch.randn(5, 3, 10)\n >>> h0 = torch.randn(2, 3, 20)\n >>> c0 = torch.randn(2, 3, 20)\n >>> output, (hn, cn) = rnn(input, (h0, c0))\n >>> # To get the weights:\n >>> print(rnn.layers[0].weight_ih)\n tensor([[...]])\n >>> print(rnn.layers[0].weight_hh)\n AssertionError: There is no reverse path in the non-bidirectional layer\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.quantizable.LSTM.html", "category": "pytorch docs"} {"text": "torch.result_type\ntorch.result_type(tensor1, tensor2) -> dtype\nReturns the \"torch.dtype\" that would result from performing an\n arithmetic operation on the provided input tensors. See type\n promotion documentation for more information on the type promotion\n logic.\nParameters:\n * tensor1 (Tensor or Number) -- an input tensor or\n number\n * **tensor2** (*Tensor** or **Number*) -- an input tensor or\n number\n\nExample:\n >>> torch.result_type(torch.tensor([1, 2], dtype=torch.int), 1.0)\n torch.float32\n >>> torch.result_type(torch.tensor([1, 2], dtype=torch.uint8), torch.tensor(1))\n torch.uint8\n", "source": "https://pytorch.org/docs/stable/generated/torch.result_type.html", "category": "pytorch docs"} {"text": "torch._foreach_atan_\ntorch._foreach_atan_(self: List[Tensor]) -> None\nApply \"torch.atan()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_atan_.html", "category": "pytorch docs"} {"text": "torch.cuda.nvtx.mark\ntorch.cuda.nvtx.mark(msg)\nDescribe an instantaneous event that occurred at some point.\nParameters:\n msg (str) -- ASCII message to associate with the event.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.nvtx.mark.html", "category": "pytorch docs"} {"text": "torch.Tensor.cumsum\nTensor.cumsum(dim, dtype=None) -> Tensor\nSee \"torch.cumsum()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cumsum.html", "category": "pytorch docs"} {"text": "torch.resolve_neg\ntorch.resolve_neg(input) -> Tensor\nReturns a new tensor with materialized negation if \"input\"'s\n negative bit is set to True, else returns \"input\". The output\n tensor will always have its negative bit set to False. :param\n input: the input tensor. 
:type input: Tensor\nExample:\n >>> x = torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j])\n >>> y = x.conj()\n >>> z = y.imag\n >>> z.is_neg()\n True\n >>> out = y.resolve_neg()\n >>> out\n tensor([-1, -2, -3])\n >>> out.is_neg()\n False\n", "source": "https://pytorch.org/docs/stable/generated/torch.resolve_neg.html", "category": "pytorch docs"} {"text": "torch._foreach_atan\ntorch._foreach_atan(self: List[Tensor]) -> List[Tensor]\nApply \"torch.atan()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_atan.html", "category": "pytorch docs"} {"text": "torch.conj_physical\ntorch.conj_physical(input, *, out=None) -> Tensor\nComputes the element-wise conjugate of the given \"input\" tensor. If\n \"input\" has a non-complex dtype, this function just returns\n \"input\".\nNote:\n This performs the conjugate operation regardless of the fact\n conjugate bit is set or not.\n\nWarning:\n In the future, \"torch.conj_physical()\" may return a non-writeable\n view for an \"input\" of non-complex dtype. It's recommended that\n programs not modify the tensor returned by\n \"torch.conj_physical()\" when \"input\" is of non-complex dtype to\n be compatible with this change.\n\n \\text{out}_{i} = conj(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.conj_physical(torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j]))\n tensor([-1 - 1j, -2 - 2j, 3 + 3j])\n", "source": "https://pytorch.org/docs/stable/generated/torch.conj_physical.html", "category": "pytorch docs"} {"text": "torch.Tensor.new_zeros\nTensor.new_zeros(size, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor\nReturns a Tensor of size \"size\" filled with \"0\". By default, the\n returned Tensor has the same \"torch.dtype\" and \"torch.device\" as\n this tensor.\nParameters:\n size (int...) -- a list, tuple, or \"torch.Size\" of\n integers defining the shape of the output tensor.\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired type of\n returned tensor. Default: if None, same \"torch.dtype\" as this\n tensor.\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, same \"torch.device\" as this\n tensor.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_zeros.html", "category": "pytorch docs"} {"text": "returned Tensor. Default: \"torch.strided\".\n * **pin_memory** (*bool**, **optional*) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. 
Default: \"False\".\n\nExample:\n >>> tensor = torch.tensor((), dtype=torch.float64)\n >>> tensor.new_zeros((2, 3))\n tensor([[ 0., 0., 0.],\n [ 0., 0., 0.]], dtype=torch.float64)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_zeros.html", "category": "pytorch docs"} {"text": "propagate_qconfig\nclass torch.quantization.propagate_qconfig_(module, qconfig_dict=None, prepare_custom_config_dict=None)\nPropagate qconfig through the module hierarchy and assign qconfig\n attribute on each leaf module\nParameters:\n * module -- input module\n * **qconfig_dict** -- dictionary that maps from name or type of\n submodule to quantization configuration, qconfig applies to\n all submodules of a given module unless qconfig for the\n submodules are specified (when the submodule already has\n qconfig attribute)\n\n * **prepare_custom_config_dict** -- dictionary for custom\n handling of modules see docs for \"prepare_fx()\"\n\nReturns:\n None, module is modified inplace with qconfig attached", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.propagate_qconfig_.html", "category": "pytorch docs"} {"text": "torch.sign\ntorch.sign(input, *, out=None) -> Tensor\nReturns a new tensor with the signs of the elements of \"input\".\n \\text{out}_{i} = \\operatorname{sgn}(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([0.7, -1.2, 0., 2.3])\n >>> a\n tensor([ 0.7000, -1.2000, 0.0000, 2.3000])\n >>> torch.sign(a)\n tensor([ 1., -1., 0., 1.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.sign.html", "category": "pytorch docs"} {"text": "torch.Tensor.ravel\nTensor.ravel() -> Tensor\nsee \"torch.ravel()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ravel.html", "category": "pytorch docs"} {"text": "torch.swapaxes\ntorch.swapaxes(input, axis0, axis1) -> Tensor\nAlias for \"torch.transpose()\".\nThis function is equivalent to NumPy's swapaxes function.\nExamples:\n >>> x = torch.tensor([[[0,1],[2,3]],[[4,5],[6,7]]])\n >>> x\n tensor([[[0, 1],\n [2, 3]],\n\n [[4, 5],\n [6, 7]]])\n >>> torch.swapaxes(x, 0, 1)\n tensor([[[0, 1],\n [4, 5]],\n\n [[2, 3],\n [6, 7]]])\n >>> torch.swapaxes(x, 0, 2)\n tensor([[[0, 4],\n [2, 6]],\n\n [[1, 5],\n [3, 7]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.swapaxes.html", "category": "pytorch docs"} {"text": "torch.nn.utils.remove_spectral_norm\ntorch.nn.utils.remove_spectral_norm(module, name='weight')\nRemoves the spectral normalization reparameterization from a\n module.\nParameters:\n * module (Module) -- containing module\n * **name** (*str**, **optional*) -- name of weight parameter\n\nReturn type:\n T_module\n-[ Example ]-\n\n\n\nm = spectral_norm(nn.Linear(40, 10))\nremove_spectral_norm(m)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.remove_spectral_norm.html", "category": "pytorch docs"} {"text": "torch.cuda.seed\ntorch.cuda.seed()\nSets the seed for generating random numbers to a random number for\n the current GPU. It's safe to call this function if CUDA is not\n available; in that case, it is silently ignored.\nWarning:\n If you are working with a multi-GPU model, this function will\n only initialize the seed on one GPU. 
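For instance (an illustrative sketch, assuming at least one visible CUDA device):
    >>> torch.cuda.seed()  # re-seeds only the current default device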
To initialize all GPUs, use\n \"seed_all()\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.seed.html", "category": "pytorch docs"} {"text": "torch.nn.functional.cross_entropy\ntorch.nn.functional.cross_entropy(input, target, weight=None, size_average=None, ignore_index=- 100, reduce=None, reduction='mean', label_smoothing=0.0)\nThis criterion computes the cross entropy loss between input logits\n and target.\nSee \"CrossEntropyLoss\" for details.\nParameters:\n * input (Tensor) -- Predicted unnormalized logits; see\n Shape section below for supported shapes.\n * **target** (*Tensor*) -- Ground truth class indices or class\n probabilities; see Shape section below for supported shapes.\n\n * **weight** (*Tensor**, **optional*) -- a manual rescaling\n weight given to each class. If given, has to be a Tensor of\n size *C*\n\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html", "category": "pytorch docs"} {"text": "multiple elements per sample. If the field \"size_average\" is\n set to \"False\", the losses are instead summed for each\n minibatch. Ignored when reduce is \"False\". Default: \"True\"\n * **ignore_index** (*int**, **optional*) -- Specifies a target\n value that is ignored and does not contribute to the input\n gradient. When \"size_average\" is \"True\", the loss is averaged\n over non-ignored targets. Note that \"ignore_index\" is only\n applicable when the target contains class indices. Default:\n -100\n\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html", "category": "pytorch docs"} {"text": "to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n * **label_smoothing** (*float**, **optional*) -- A float in\n [0.0, 1.0]. Specifies the amount of smoothing when computing\n the loss, where 0.0 means no smoothing. The targets become a\n mixture of the original ground truth and a uniform\n distribution as described in Rethinking the Inception\n Architecture for Computer Vision. Default: 0.0.\n\nReturn type:\n Tensor\nShape:\n * Input: Shape (C), (N, C) or (N, C, d_1, d_2, ..., d_K) with K\n \\geq 1 in the case of K-dimensional loss.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html", "category": "pytorch docs"} {"text": "\n\nTarget: If containing class indices, shape (), (N) or (N, d_1,\n d_2, ..., d_K) with K \\geq 1 in the case of K-dimensional loss\n where each value should be between [0, C). 
If containing class\n probabilities, same shape as the input and each value should\n be between [0, 1].\nwhere:\n \\begin{aligned} C ={} & \\text{number of classes} \\\\ N\n ={} & \\text{batch size} \\\\ \\end{aligned}\n\n\n\nExamples:\n >>> # Example of target with class indices\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> target = torch.randint(5, (3,), dtype=torch.int64)\n >>> loss = F.cross_entropy(input, target)\n >>> loss.backward()\n >>>\n >>> # Example of target with class probabilities\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> target = torch.randn(3, 5).softmax(dim=1)\n >>> loss = F.cross_entropy(input, target)\n >>> loss.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cross_entropy.html", "category": "pytorch docs"} {"text": "torch.arctanh\ntorch.arctanh(input, *, out=None) -> Tensor\nAlias for \"torch.atanh()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arctanh.html", "category": "pytorch docs"} {"text": "torch.conj\ntorch.conj(input) -> Tensor\nReturns a view of \"input\" with a flipped conjugate bit. If \"input\"\n has a non-complex dtype, this function just returns \"input\".\nNote:\n \"torch.conj()\" performs a lazy conjugation, but the actual\n conjugated tensor can be materialized at any time using\n \"torch.resolve_conj()\".\n\nWarning:\n In the future, \"torch.conj()\" may return a non-writeable view for\n an \"input\" of non-complex dtype. It's recommended that programs\n not modify the tensor returned by \"torch.conj_physical()\" when\n \"input\" is of non-complex dtype to be compatible with this\n change.\n\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> x = torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j])\n >>> x.is_conj()\n False\n >>> y = torch.conj(x)\n >>> y.is_conj()\n True\n", "source": "https://pytorch.org/docs/stable/generated/torch.conj.html", "category": "pytorch docs"} {"text": "torch.Tensor.logical_and\nTensor.logical_and() -> Tensor\nSee \"torch.logical_and()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_and.html", "category": "pytorch docs"} {"text": "torch.Tensor.sinh\nTensor.sinh() -> Tensor\nSee \"torch.sinh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sinh.html", "category": "pytorch docs"} {"text": "DeQuantStub\nclass torch.quantization.DeQuantStub(qconfig=None)\nDequantize stub module, before calibration, this is same as\n identity, this will be swapped as nnq.DeQuantize in convert.\nParameters:\n qconfig -- quantization configuration for the tensor, if\n qconfig is not provided, we will get qconfig from parent modules", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.DeQuantStub.html", "category": "pytorch docs"} {"text": "torch.nn.utils.remove_weight_norm\ntorch.nn.utils.remove_weight_norm(module, name='weight')\nRemoves the weight normalization reparameterization from a module.\nParameters:\n * module (Module) -- containing module\n * **name** (*str**, **optional*) -- name of weight parameter\n\nReturn type:\n T_module\n-[ Example ]-\n\n\n\nm = weight_norm(nn.Linear(20, 40))\nremove_weight_norm(m)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.remove_weight_norm.html", "category": "pytorch docs"} {"text": "StepLR\nclass torch.optim.lr_scheduler.StepLR(optimizer, step_size, gamma=0.1, last_epoch=- 1, verbose=False)\nDecays the learning rate of each parameter group by gamma every\n step_size epochs. 
Notice that such decay can happen simultaneously\n with other changes to the learning rate from outside this\n scheduler. When last_epoch=-1, sets initial lr as lr.\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **step_size** (*int*) -- Period of learning rate decay.\n\n * **gamma** (*float*) -- Multiplicative factor of learning rate\n decay. Default: 0.1.\n\n * **last_epoch** (*int*) -- The index of last epoch. Default:\n -1.\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n\n-[ Example ]-\n\n\n\nAssuming optimizer uses lr = 0.05 for all groups\nlr = 0.05 if epoch < 30\nlr = 0.005 if 30 <= epoch < 60\nlr = 0.0005 if 60 <= epoch < 90\n...\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.StepLR.html", "category": "pytorch docs"} {"text": "\n\n\n...\nscheduler = StepLR(optimizer, step_size=30, gamma=0.1)\nfor epoch in range(100):\n train(...)\n validate(...)\n scheduler.step()\n\n\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)\n Loads the schedulers state.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.StepLR.html", "category": "pytorch docs"} {"text": "PReLU\nclass torch.nn.PReLU(num_parameters=1, init=0.25, device=None, dtype=None)\nApplies the element-wise function:\n \\text{PReLU}(x) = \\max(0,x) + a * \\min(0,x)\n\nor\n \\text{PReLU}(x) = \\begin{cases} x, & \\text{ if } x \\geq 0 \\\\ ax,\n & \\text{ otherwise } \\end{cases}\n\nHere a is a learnable parameter. When called without arguments,\n nn.PReLU() uses a single parameter a across all input channels.\n If called with nn.PReLU(nChannels), a separate a is used for each\n input channel.\nNote:\n weight decay should not be used when learning a for good\n performance.\n\nNote:\n Channel dim is the 2nd dim of input. When input has dims < 2,\n then there is no channel dim and the number of channels = 1.\n\nParameters:\n * num_parameters (int) -- number of a to learn. Although\n it takes an int as input, there is only two values are\n legitimate: 1, or the number of channels at input. Default: 1", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html", "category": "pytorch docs"} {"text": "\ninit (float) -- the initial value of a. Default: 0.25\n\nShape:\n * Input: ( *) where *** means, any number of additional\n dimensions.\n * Output: (*), same shape as the input.\n\nVariables:\n weight (Tensor) -- the learnable weights of shape\n (\"num_parameters\").\n[image]\nExamples:\n >>> m = nn.PReLU()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PReLU.html", "category": "pytorch docs"} {"text": "torch.transpose\ntorch.transpose(input, dim0, dim1) -> Tensor\nReturns a tensor that is a transposed version of \"input\". 
The given\n dimensions \"dim0\" and \"dim1\" are swapped.\nIf \"input\" is a strided tensor then the resulting \"out\" tensor\n shares its underlying storage with the \"input\" tensor, so changing\n the content of one would change the content of the other.\nIf \"input\" is a sparse tensor then the resulting \"out\" tensor does\n not share the underlying storage with the \"input\" tensor.\nIf \"input\" is a sparse tensor with compressed layout (SparseCSR,\n SparseBSR, SparseCSC or SparseBSC) the arguments \"dim0\" and \"dim1\"\n must be both batch dimensions, or must both be sparse dimensions.\n The batch dimensions of a sparse tensor are the dimensions\n preceding the sparse dimensions.\nNote:\n Transpositions which interchange the sparse dimensions of a\n *SparseCSR* or *SparseCSC* layout tensor will result in the\n", "source": "https://pytorch.org/docs/stable/generated/torch.transpose.html", "category": "pytorch docs"} {"text": "layout changing between the two options. Transposition of the\n sparse dimensions of a SparseBSR or SparseBSC layout tensor\n will likewise generate a result with the opposite layout.\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim0** (*int*) -- the first dimension to be transposed\n\n * **dim1** (*int*) -- the second dimension to be transposed\n\nExample:\n >>> x = torch.randn(2, 3)\n >>> x\n tensor([[ 1.0028, -0.9893, 0.5809],\n [-0.1669, 0.7299, 0.4942]])\n >>> torch.transpose(x, 0, 1)\n tensor([[ 1.0028, -0.1669],\n [-0.9893, 0.7299],\n [ 0.5809, 0.4942]])\n\nSee also \"torch.t()\".", "source": "https://pytorch.org/docs/stable/generated/torch.transpose.html", "category": "pytorch docs"} {"text": "torch.cdist\ntorch.cdist(x1, x2, p=2.0, compute_mode='use_mm_for_euclid_dist_if_necessary')\nComputes batched the p-norm distance between each pair of the two\n collections of row vectors.\nParameters:\n * x1 (Tensor) -- input tensor of shape B \\times P \\times\n M.\n * **x2** (*Tensor*) -- input tensor of shape B \\times R \\times\n M.\n\n * **p** (*float*) -- p value for the p-norm distance to\n calculate between each vector pair \\in [0, \\infty].\n\n * **compute_mode** (*str*) --\n 'use_mm_for_euclid_dist_if_necessary' - will use matrix\n multiplication approach to calculate euclidean distance (p =\n 2) if P > 25 or R > 25 'use_mm_for_euclid_dist' - will always\n use matrix multiplication approach to calculate euclidean\n distance (p = 2) 'donot_use_mm_for_euclid_dist' - will never\n use matrix multiplication approach to calculate euclidean\n distance (p = 2) Default: use_mm_for_euclid_dist_if_necessary.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cdist.html", "category": "pytorch docs"} {"text": "Return type:\n Tensor\nIf x1 has shape B \\times P \\times M and x2 has shape B \\times R\n \\times M then the output will have shape B \\times P \\times R.\nThis function is equivalent to\n scipy.spatial.distance.cdist(input,'minkowski', p=p) if p \\in (0,\n \\infty). When p = 0 it is equivalent to\n scipy.spatial.distance.cdist(input, 'hamming') * M. 
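Concretely, with p = 0 each entry of the result counts the coordinates in which the two row vectors differ, i.e. the normalized Hamming distance scaled by M; a minimal sketch with illustrative values:

    import torch

    x1 = torch.tensor([[1., 0., 2.],
                       [3., 3., 3.]])
    x2 = torch.tensor([[1., 1., 1.]])
    # Expected: tensor([[2.], [3.]]), the number of differing coordinates
    # per pair of rows (here M = 3).
    torch.cdist(x1, x2, p=0)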
When p =\n \\infty, the closest scipy function is\n scipy.spatial.distance.cdist(xn, lambda x, y: np.abs(x -\n y).max()).\n-[ Example ]-\n\n\n\na = torch.tensor([[0.9041, 0.0196], [-0.3108, -2.4423], [-0.4821, 1.059]])\na\n tensor([[ 0.9041, 0.0196],\n [-0.3108, -2.4423],\n [-0.4821, 1.0590]])\nb = torch.tensor([[-2.1763, -0.4713], [-0.6986, 1.3702]])\nb\n tensor([[-2.1763, -0.4713],\n [-0.6986, 1.3702]])\ntorch.cdist(a, b, p=2)\n tensor([[3.1193, 2.0959],\n [2.7138, 3.8322],\n [2.2830, 0.3791]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.cdist.html", "category": "pytorch docs"} {"text": "torch.split\ntorch.split(tensor, split_size_or_sections, dim=0)\nSplits the tensor into chunks. Each chunk is a view of the original\n tensor.\nIf \"split_size_or_sections\" is an integer type, then \"tensor\" will\n be split into equally sized chunks (if possible). Last chunk will\n be smaller if the tensor size along the given dimension \"dim\" is\n not divisible by \"split_size\".\nIf \"split_size_or_sections\" is a list, then \"tensor\" will be split\n into \"len(split_size_or_sections)\" chunks with sizes in \"dim\"\n according to \"split_size_or_sections\".\nParameters:\n * tensor (Tensor) -- tensor to split.\n * **split_size_or_sections** (*int**) or **(**list**(**int**)*)\n -- size of a single chunk or list of sizes for each chunk\n\n * **dim** (*int*) -- dimension along which to split the tensor.\n\nReturn type:\n List[Tensor]\nExample:\n >>> a = torch.arange(10).reshape(5, 2)\n >>> a\n tensor([[0, 1],\n [2, 3],\n", "source": "https://pytorch.org/docs/stable/generated/torch.split.html", "category": "pytorch docs"} {"text": "tensor([[0, 1],\n [2, 3],\n [4, 5],\n [6, 7],\n [8, 9]])\n >>> torch.split(a, 2)\n (tensor([[0, 1],\n [2, 3]]),\n tensor([[4, 5],\n [6, 7]]),\n tensor([[8, 9]]))\n >>> torch.split(a, [1, 4])\n (tensor([[0, 1]]),\n tensor([[2, 3],\n [4, 5],\n [6, 7],\n [8, 9]]))", "source": "https://pytorch.org/docs/stable/generated/torch.split.html", "category": "pytorch docs"} {"text": "torch.Tensor.sinh_\nTensor.sinh_() -> Tensor\nIn-place version of \"sinh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sinh_.html", "category": "pytorch docs"} {"text": "ConvReLU1d\nclass torch.ao.nn.intrinsic.ConvReLU1d(conv, relu)\nThis is a sequential container which calls the Conv1d and ReLU\n modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvReLU1d.html", "category": "pytorch docs"} {"text": "torch.cuda.jiterator._create_jit_fn\ntorch.cuda.jiterator._create_jit_fn(code_string, **kwargs)\nCreate a jiterator-generated cuda kernel for an elementwise op.\nThe code string has to be a valid CUDA function that describes the\n computation for a single element. The code string has to follow the\n c++ template pattern, as shown in the example below. This function\n will be inlined into elementwise kernel template, and compiled on\n the fly. Compiled kernel will be cached in memory, as well as local\n temp dir.\nJiterator-generated kernels accepts noncontiguous tensors, and\n supports boardcasting and type promotion.\nParameters:\n * code_string (str) -- CUDA code string to be compiled by\n jiterator. 
The entry functor must return by value.\n * **kwargs** (*Dict**, **optional*) -- Keyword arguments for\n generated function\n\nReturn type:\n Callable\nExample:", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_jit_fn.html", "category": "pytorch docs"} {"text": "Return type:\n Callable\nExample:\n code_string = \"template T my_kernel(T x, T y, T alpha) { return -x + alpha * y; }\"\n jitted_fn = create_jit_fn(code_string, alpha=1.0)\n a = torch.rand(3, device='cuda')\n b = torch.rand(3, device='cuda')\n # invoke jitted function like a regular python function\n result = jitted_fn(a, b, alpha=3.14)\n\ncode_string also allows multiple function definitions, and the last\n function will be treated as the entry function.\nExample:\n code_string = \"template T util_fn(T x, T y) { return ::sin(x) + ::cos(y); }\"\n code_string += \"template T my_kernel(T x, T y, T val) { return ::min(val, util_fn(x, y)); }\"\n jitted_fn = create_jit_fn(code_string, val=0.0)\n a = torch.rand(3, device='cuda')\n b = torch.rand(3, device='cuda')\n # invoke jitted function like a regular python function\n result = jitted_fn(a, b) # using default val=0.0\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_jit_fn.html", "category": "pytorch docs"} {"text": "Jiterator can be used together with python registration to override\n an operator's cuda kernel. Following example is overriding gelu's\n cuda kernel with relu.\nExample:\n code_string = \"template T my_gelu(T a) { return a > 0 ? a : 0; }\"\n my_gelu = create_jit_fn(code_string)\n my_lib = torch.library.Library(\"aten\", \"IMPL\")\n my_lib.impl('aten::gelu', my_gelu, \"CUDA\")\n # torch.nn.GELU and torch.nn.function.gelu are now overridden\n a = torch.rand(3, device='cuda')\n torch.allclose(torch.nn.functional.gelu(a), torch.nn.functional.relu(a))\n\nWarning:\n This API is in beta and may change in future releases.\n\nWarning:\n This API only supports up to 8 inputs and 1 output\n\nWarning:\n All input tensors must live in CUDA device\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_jit_fn.html", "category": "pytorch docs"} {"text": "default_fake_quant\ntorch.quantization.fake_quantize.default_fake_quant\nalias of functools.partial(,\n observer=,\n quant_min=0, quant_max=255, dtype=torch.quint8,\n qscheme=torch.per_tensor_affine, reduce_range=True){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_fake_quant.html", "category": "pytorch docs"} {"text": "torch.logaddexp\ntorch.logaddexp(input, other, *, out=None) -> Tensor\nLogarithm of the sum of exponentiations of the inputs.\nCalculates pointwise \\log\\left(e^x + e^y\\right). This function is\n useful in statistics where the calculated probabilities of events\n may be so small as to exceed the range of normal floating point\n numbers. In such cases the logarithm of the calculated probability\n is stored. 
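For instance, adding two probabilities of exp(-1000) each underflows to zero in the linear domain, while the log-domain result stays finite (illustrative values):

    import torch

    a = torch.tensor([-1000.0])               # a stored log-probability
    torch.log(torch.exp(a) + torch.exp(a))    # tensor([-inf]): exp() underflows to 0
    torch.logaddexp(a, a)                     # tensor([-999.3069]), i.e. -1000 + log(2)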
This function allows adding probabilities stored in such\n a fashion.\nThis op should be disambiguated with \"torch.logsumexp()\" which\n performs a reduction on a single tensor.\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.logaddexp(torch.tensor([-1.0]), torch.tensor([-1.0, -2, -3]))\n tensor([-0.3069, -0.6867, -0.8731])\n", "source": "https://pytorch.org/docs/stable/generated/torch.logaddexp.html", "category": "pytorch docs"} {"text": "tensor([-0.3069, -0.6867, -0.8731])\n >>> torch.logaddexp(torch.tensor([-100.0, -200, -300]), torch.tensor([-1.0, -2, -3]))\n tensor([-1., -2., -3.])\n >>> torch.logaddexp(torch.tensor([1.0, 2000, 30000]), torch.tensor([-1.0, -2, -3]))\n tensor([1.1269e+00, 2.0000e+03, 3.0000e+04])", "source": "https://pytorch.org/docs/stable/generated/torch.logaddexp.html", "category": "pytorch docs"} {"text": "torch.nn.utils.prune.random_structured\ntorch.nn.utils.prune.random_structured(module, name, amount, dim)\nPrunes tensor corresponding to parameter called \"name\" in \"module\"\n by removing the specified \"amount\" of (currently unpruned) channels\n along the specified \"dim\" selected at random. Modifies module in\n place (and also return the modified module) by:\n\n\nadding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n\n\nreplacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n\n\nParameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\n * **amount** (*int** or **float*) -- quantity of parameters to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.random_structured.html", "category": "pytorch docs"} {"text": "prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n * **dim** (*int*) -- index of the dim along which we define\n channels to prune.\n\nReturns:\n modified (i.e. pruned) version of the input module\nReturn type:\n module (nn.Module)\n-[ Examples ]-\n\n\n\nm = prune.random_structured(\n ... nn.Linear(5, 3), 'weight', amount=3, dim=1\n ... )\ncolumns_pruned = int(sum(torch.sum(m.weight, dim=0) == 0))\nprint(columns_pruned)\n 3\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.random_structured.html", "category": "pytorch docs"} {"text": "torch.argmax\ntorch.argmax(input) -> LongTensor\nReturns the indices of the maximum value of all elements in the\n \"input\" tensor.\nThis is the second value returned by \"torch.max()\". 
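A minimal sketch of that equivalence, with illustrative values:

    import torch

    a = torch.tensor([[1., 5.],
                      [3., 2.]])
    torch.argmax(a)                         # tensor(1): index into the flattened tensor
    torch.max(a.flatten(), dim=0).indices   # also tensor(1)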
See its\n documentation for the exact semantics of this method.\nNote:\n If there are multiple maximal values then the indices of the\n first maximal value are returned.\n\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 1.3398, 0.2663, -0.2686, 0.2450],\n [-0.7401, -0.8805, -0.3402, -1.1936],\n [ 0.4907, -1.3948, -1.0691, -0.3132],\n [-1.6092, 0.5419, -0.2993, 0.3195]])\n >>> torch.argmax(a)\n tensor(0)\n\ntorch.argmax(input, dim, keepdim=False) -> LongTensor\nReturns the indices of the maximum values of a tensor across a\n dimension.\nThis is the second value returned by \"torch.max()\". See its\n documentation for the exact semantics of this method.", "source": "https://pytorch.org/docs/stable/generated/torch.argmax.html", "category": "pytorch docs"} {"text": "Parameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to reduce. If \"None\", the\n argmax of the flattened input is returned.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not. Ignored if \"dim=None\".\n\nExample:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 1.3398, 0.2663, -0.2686, 0.2450],\n [-0.7401, -0.8805, -0.3402, -1.1936],\n [ 0.4907, -1.3948, -1.0691, -0.3132],\n [-1.6092, 0.5419, -0.2993, 0.3195]])\n >>> torch.argmax(a, dim=1)\n tensor([ 0, 2, 0, 1])\n", "source": "https://pytorch.org/docs/stable/generated/torch.argmax.html", "category": "pytorch docs"} {"text": "QuantWrapper\nclass torch.quantization.QuantWrapper(module)\nA wrapper class that wraps the input module, adds QuantStub and\n DeQuantStub and surround the call to module with call to quant and\n dequant modules.\nThis is used by the quantization utility functions to add the\n quant and dequant modules, before convert function QuantStub\n will just be observer, it observes the input tensor, after\n convert, QuantStub will be swapped to nnq.Quantize which does\n actual quantization. Similarly for DeQuantStub.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.QuantWrapper.html", "category": "pytorch docs"} {"text": "torch.Tensor.dsplit\nTensor.dsplit(split_size_or_sections) -> List of Tensors\nSee \"torch.dsplit()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.dsplit.html", "category": "pytorch docs"} {"text": "torch.Tensor.gt_\nTensor.gt_(other) -> Tensor\nIn-place version of \"gt()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.gt_.html", "category": "pytorch docs"} {"text": "torch.Tensor.sign\nTensor.sign() -> Tensor\nSee \"torch.sign()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sign.html", "category": "pytorch docs"} {"text": "AdaptiveAvgPool3d\nclass torch.nn.AdaptiveAvgPool3d(output_size)\nApplies a 3D adaptive average pooling over an input signal composed\n of several input planes.\nThe output is of size D x H x W, for any input size. The number of\n output features is equal to the number of input planes.\nParameters:\n output_size (Union[int, None,\n Tuple[Optional[int], Optional[int],\n Optional[int]]]) -- the target output size of the\n form D x H x W. Can be a tuple (D, H, W) or a single number D\n for a cube D x D x D. 
D, H and W can be either a \"int\", or\n \"None\" which means the size will be the same as that of the\n input.\nShape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, S_{0}, S_{1}, S_{2}) or (C, S_{0}, S_{1},\n S_{2}), where S=\\text{output\\_size}.\n\n-[ Examples ]-\n\n\n\ntarget output size of 5x7x9\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool3d.html", "category": "pytorch docs"} {"text": "\n\n\ntarget output size of 5x7x9\nm = nn.AdaptiveAvgPool3d((5, 7, 9))\ninput = torch.randn(1, 64, 8, 9, 10)\noutput = m(input)\ntarget output size of 7x7x7 (cube)\nm = nn.AdaptiveAvgPool3d(7)\ninput = torch.randn(1, 64, 10, 9, 8)\noutput = m(input)\ntarget output size of 7x9x8\nm = nn.AdaptiveAvgPool3d((7, None, None))\ninput = torch.randn(1, 64, 10, 9, 8)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool3d.html", "category": "pytorch docs"} {"text": "torch.bitwise_right_shift\ntorch.bitwise_right_shift(input, other, *, out=None) -> Tensor\nComputes the right arithmetic shift of \"input\" by \"other\" bits. The\n input tensor must be of integral type. This operator supports\n broadcasting to a common shape and type promotion.\nThe operation applied is:\n \\text{out}_i = \\text{input}_i >> \\text{other}_i\n\nParameters:\n * input (Tensor or Scalar) -- the first input tensor\n * **other** (*Tensor** or **Scalar*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.bitwise_right_shift(torch.tensor([-2, -7, 31], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))\n tensor([-1, -7, 3], dtype=torch.int8)\n", "source": "https://pytorch.org/docs/stable/generated/torch.bitwise_right_shift.html", "category": "pytorch docs"} {"text": "torch.Tensor.normal_\nTensor.normal_(mean=0, std=1, *, generator=None) -> Tensor\nFills \"self\" tensor with elements samples from the normal\n distribution parameterized by \"mean\" and \"std\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.normal_.html", "category": "pytorch docs"} {"text": "hardsigmoid\nclass torch.ao.nn.quantized.functional.hardsigmoid(input, inplace=False)\nThis is the quantized version of \"hardsigmoid()\".\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.hardsigmoid.html", "category": "pytorch docs"} {"text": "celu\nclass torch.ao.nn.quantized.functional.celu(input, scale, zero_point, alpha=1.)\nApplies the quantized CELU function element-wise.\n \\text{CELU}(x) = \\max(0,x) + \\min(0, \\alpha * (\\exp(x / \\alpha)\n - 1))\n\nParameters:\n * input (Tensor) -- quantized input\n * **alpha** (*float*) -- the \\alpha value for the CELU\n formulation. 
Default: 1.0\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.celu.html", "category": "pytorch docs"} {"text": "torch.nn.functional.huber_loss\ntorch.nn.functional.huber_loss(input, target, reduction='mean', delta=1.0)\nFunction that uses a squared term if the absolute element-wise\n error falls below delta and a delta-scaled L1 term otherwise.\nSee \"HuberLoss\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.huber_loss.html", "category": "pytorch docs"} {"text": "torch.linalg.lu\ntorch.linalg.lu(A, *, pivot=True, out=None)\nComputes the LU decomposition with partial pivoting of a matrix.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the LU\n decomposition with partial pivoting of a matrix A \\in\n \\mathbb{K}^{m \\times n} is defined as\n A = PLU\\mathrlap{\\qquad P \\in \\mathbb{K}^{m \\times m}, L \\in\n \\mathbb{K}^{m \\times k}, U \\in \\mathbb{K}^{k \\times n}}\n\nwhere k = min(m,n), P is a permutation matrix, L is lower\n triangular with ones on the diagonal and U is upper triangular.\nIf \"pivot\"= False and \"A\" is on GPU, then the LU decomposition\n without pivoting is computed\n A = LU\\mathrlap{\\qquad L \\in \\mathbb{K}^{m \\times k}, U \\in\n \\mathbb{K}^{k \\times n}}\n\nWhen \"pivot\"= False, the returned matrix \"P\" will be empty. The\n LU decomposition without pivoting may not exist if any of the\n principal minors of \"A\" is singular. In this case, the output\n matrix may contain inf or NaN.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu.html", "category": "pytorch docs"} {"text": "matrix may contain inf or NaN.\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nSee also:\n \"torch.linalg.solve()\" solves a system of linear equations using\n the LU decomposition with partial pivoting.\n\nWarning:\n The LU decomposition is almost never unique, as often there are\n different permutation matrices that can yield different LU\n decompositions. As such, different platforms, like SciPy, or\n inputs on different devices, may produce different valid\n decompositions.\n\nWarning:\n Gradient computations are only supported if the input matrix is\n full-rank. If this condition is not met, no error will be thrown,\n but the gradient may not be finite. This is because the LU\n decomposition with pivoting is not differentiable at these\n points.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu.html", "category": "pytorch docs"} {"text": "points.\nParameters:\n * A (Tensor) -- tensor of shape (, m, n)* where *** is\n zero or more batch dimensions.\n * **pivot** (*bool**, **optional*) -- Controls whether to\n compute the LU decomposition with partial pivoting or no\n pivoting. Default: *True*.\n\nKeyword Arguments:\n out (tuple, optional) -- output tuple of three\n tensors. Ignored if None. 
Default: None.\nReturns:\n A named tuple (P, L, U).\nExamples:\n >>> A = torch.randn(3, 2)\n >>> P, L, U = torch.linalg.lu(A)\n >>> P\n tensor([[0., 1., 0.],\n [0., 0., 1.],\n [1., 0., 0.]])\n >>> L\n tensor([[1.0000, 0.0000],\n [0.5007, 1.0000],\n [0.0633, 0.9755]])\n >>> U\n tensor([[0.3771, 0.0489],\n [0.0000, 0.9644]])\n >>> torch.dist(A, P @ L @ U)\n tensor(5.9605e-08)\n\n >>> A = torch.randn(2, 5, 7, device=\"cuda\")\n >>> P, L, U = torch.linalg.lu(A, pivot=False)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu.html", "category": "pytorch docs"} {"text": "\n\n\nP\n tensor([], device='cuda:0')\n >>> torch.dist(A, L @ U)\n tensor(1.0376e-06, device='cuda:0')\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu.html", "category": "pytorch docs"} {"text": "torch.Tensor.data_ptr\nTensor.data_ptr() -> int\nReturns the address of the first element of \"self\" tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.data_ptr.html", "category": "pytorch docs"} {"text": "quantize\nclass torch.quantization.quantize(model, run_fn, run_args, mapping=None, inplace=False)\nQuantize the input float model with post training static\n quantization.\nFirst it will prepare the model for calibration, then it calls\n run_fn which will run the calibration step, after that we will\n convert the model to a quantized model.\nParameters:\n * model -- input float model\n * **run_fn** -- a calibration function for calibrating the\n prepared model\n\n * **run_args** -- positional arguments for *run_fn*\n\n * **inplace** -- carry out model transformations in-place, the\n original module is mutated\n\n * **mapping** -- correspondence between original module types\n and quantized counterparts\n\nReturns:\n Quantized model.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize.html", "category": "pytorch docs"} {"text": "torch.log1p\ntorch.log1p(input, *, out=None) -> Tensor\nReturns a new tensor with the natural logarithm of (1 + \"input\").\n y_i = \\log_{e} (x_i + 1)\n\nNote:\n This function is more accurate than \"torch.log()\" for small\n values of \"input\"\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(5)\n >>> a\n tensor([-1.0090, -0.9923, 1.0249, -0.5372, 0.2492])\n >>> torch.log1p(a)\n tensor([ nan, -4.8653, 0.7055, -0.7705, 0.2225])\n", "source": "https://pytorch.org/docs/stable/generated/torch.log1p.html", "category": "pytorch docs"} {"text": "torch.diagflat\ntorch.diagflat(input, offset=0) -> Tensor\n\n\nIf \"input\" is a vector (1-D tensor), then returns a 2-D square\n tensor with the elements of \"input\" as the diagonal.\n\n\nIf \"input\" is a tensor with more than one dimension, then returns\n a 2-D tensor with diagonal elements equal to a flattened \"input\".\n\n\nThe argument \"offset\" controls which diagonal to consider:\n\n\nIf \"offset\" = 0, it is the main diagonal.\n\n\nIf \"offset\" > 0, it is above the main diagonal.\n\n\nIf \"offset\" < 0, it is below the main diagonal.\n\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **offset** (*int**, **optional*) -- the diagonal to consider.\n Default: 0 (main diagonal).\n\nExamples:\n >>> a = torch.randn(3)\n >>> a\n tensor([-0.2956, -0.9068, 0.1695])\n >>> torch.diagflat(a)\n tensor([[-0.2956, 0.0000, 0.0000],\n [ 0.0000, -0.9068, 0.0000],\n [ 0.0000, 0.0000, 0.1695]])\n >>> torch.diagflat(a, 1)\n", "source": 
"https://pytorch.org/docs/stable/generated/torch.diagflat.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.diagflat(a, 1)\n tensor([[ 0.0000, -0.2956, 0.0000, 0.0000],\n [ 0.0000, 0.0000, -0.9068, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 0.1695],\n [ 0.0000, 0.0000, 0.0000, 0.0000]])\n\n\n\n >>> a = torch.randn(2, 2)\n >>> a\n tensor([[ 0.2094, -0.3018],\n [-0.1516, 1.9342]])\n >>> torch.diagflat(a)\n tensor([[ 0.2094, 0.0000, 0.0000, 0.0000],\n [ 0.0000, -0.3018, 0.0000, 0.0000],\n [ 0.0000, 0.0000, -0.1516, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 1.9342]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.diagflat.html", "category": "pytorch docs"} {"text": "torch.Tensor.masked_scatter_\nTensor.masked_scatter_(mask, source)\nCopies elements from \"source\" into \"self\" tensor at positions where\n the \"mask\" is True. The shape of \"mask\" must be broadcastable with\n the shape of the underlying tensor. The \"source\" should have at\n least as many elements as the number of ones in \"mask\"\nParameters:\n * mask (BoolTensor) -- the boolean mask\n * **source** (*Tensor*) -- the tensor to copy from\n\nNote:\n The \"mask\" operates on the \"self\" tensor, not on the given\n \"source\" tensor.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.masked_scatter_.html", "category": "pytorch docs"} {"text": "dual_level\nclass torch.autograd.forward_ad.dual_level\nContext-manager that enables forward AD. All forward AD computation\n must be performed in a \"dual_level\" context.\nNote:\n The \"dual_level\" context appropriately enters and exit the dual\n level to controls the current forward AD level, which is used by\n default by the other functions in this API.We currently don't\n plan to support nested \"dual_level\" contexts, however, so only a\n single forward AD level is supported. To compute higher-order\n forward grads, one can use \"torch.func.jvp()\".\n\nExample:\n >>> x = torch.tensor([1])\n >>> x_t = torch.tensor([1])\n >>> with dual_level():\n ... inp = make_dual(x, x_t)\n ... # Do computations with inp\n ... out = your_fn(inp)\n ... _, grad = unpack_dual(out)\n >>> grad is None\n False\n >>> # After exiting the level, the grad is deleted\n >>> _, grad_after = unpack_dual(out)\n >>> grad is None\n True\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.forward_ad.dual_level.html", "category": "pytorch docs"} {"text": "\n\n\ngrad is None\n True\n\n\n\nPlease see the forward-mode AD tutorial for detailed steps on how\n to use this API.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.forward_ad.dual_level.html", "category": "pytorch docs"} {"text": "torch.Tensor.share_memory_\nTensor.share_memory_()\nMoves the underlying storage to shared memory.\nThis is a no-op if the underlying storage is already in shared\n memory and for CUDA tensors. Tensors in shared memory cannot be\n resized.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.share_memory_.html", "category": "pytorch docs"} {"text": "LBFGS\nclass torch.optim.LBFGS(params, lr=1, max_iter=20, max_eval=None, tolerance_grad=1e-07, tolerance_change=1e-09, history_size=100, line_search_fn=None)\nImplements L-BFGS algorithm, heavily inspired by minFunc.\nWarning:\n This optimizer doesn't support per-parameter options and\n parameter groups (there can be only one).\n\nWarning:\n Right now all parameters have to be on a single device. 
This will\n be improved in the future.\n\nNote:\n This is a very memory intensive optimizer (it requires additional\n \"param_bytes * (history_size + 1)\" bytes). If it doesn't fit in\n memory try reducing the history size, or use a different\n algorithm.\n\nParameters:\n * lr (float) -- learning rate (default: 1)\n * **max_iter** (*int*) -- maximal number of iterations per\n optimization step (default: 20)\n\n * **max_eval** (*int*) -- maximal number of function evaluations\n per optimization step (default: max_iter * 1.25).\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html", "category": "pytorch docs"} {"text": "\n\ntolerance_grad (float) -- termination tolerance on first\n order optimality (default: 1e-5).\n\n\ntolerance_change (float) -- termination tolerance on\n function value/parameter changes (default: 1e-9).\n\n\nhistory_size (int) -- update history size (default:\n 100).\n\n\nline_search_fn (str) -- either 'strong_wolfe' or None\n (default: None).\n\n\n\n\nadd_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n **state_dict** (*dict*) -- optimizer state. Should be an\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html", "category": "pytorch docs"} {"text": "object returned from a call to \"state_dict()\".\nregister_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None\n\n The \"optimizer\" argument is the optimizer instance being used.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html", "category": "pytorch docs"} {"text": "transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. 
Its content\n differs between optimizer classes.\n\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nstep(closure)\n Performs a single optimization step.\n\n Parameters:\n **closure** (*Callable*) -- A closure that reevaluates the\n model and returns the loss.\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html", "category": "pytorch docs"} {"text": "Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.LBFGS.html", "category": "pytorch docs"} {"text": "torch.Tensor.addr\nTensor.addr(vec1, vec2, *, beta=1, alpha=1) -> Tensor\nSee \"torch.addr()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addr.html", "category": "pytorch docs"} {"text": "torch.Tensor.type\nTensor.type(dtype=None, non_blocking=False, **kwargs) -> str or Tensor\nReturns the type if dtype is not provided, else casts this object\n to the specified type.\nIf this is already of the correct type, no copy is performed and\n the original object is returned.\nParameters:\n * dtype (dtype or string) -- The desired type\n * **non_blocking** (*bool*) -- If \"True\", and the source is in\n pinned memory and destination is on the GPU or vice versa, the\n copy is performed asynchronously with respect to the host.\n Otherwise, the argument has no effect.\n\n * ****kwargs** -- For compatibility, may contain the key \"async\"\n in place of the \"non_blocking\" argument. The \"async\" arg is\n deprecated.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.type.html", "category": "pytorch docs"} {"text": "torch.Tensor.narrow_copy\nTensor.narrow_copy(dimension, start, length) -> Tensor\nSee \"torch.narrow_copy()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.narrow_copy.html", "category": "pytorch docs"} {"text": "LazyInstanceNorm3d\nclass torch.nn.LazyInstanceNorm3d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\nA \"torch.nn.InstanceNorm3d\" module with lazy initialization of the\n \"num_features\" argument of the \"InstanceNorm3d\" that is inferred\n from the \"input.size(1)\". The attributes that will be lazily\n initialized are weight, bias, running_mean and running_var.\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * num_features -- C from an expected input of size (N, C, D,\n H, W) or (C, D, H, W)\n * **eps** (*float*) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. 
Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm3d.html", "category": "pytorch docs"} {"text": "\"True\", this module has learnable affine parameters,\n initialized the same way as done for batch normalization.\n Default: \"False\".\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics and always uses batch statistics in both\n training and eval modes. Default: \"False\"\n\nShape:\n * Input: (N, C, D, H, W) or (C, D, H, W)\n * Output: (N, C, D, H, W) or (C, D, H, W) (same shape as input)\n\ncls_to_become\n alias of \"InstanceNorm3d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm3d.html", "category": "pytorch docs"} {"text": "ConstantPad2d\nclass torch.nn.ConstantPad2d(padding, value)\nPads the input tensor boundaries with a constant value.\nFor N-dimensional padding, use \"torch.nn.functional.pad()\".\nParameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 4-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom})\nShape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where\n\n H_{out} = H_{in} + \\text{padding\\_top} +\n \\text{padding\\_bottom}\n\n W_{out} = W_{in} + \\text{padding\\_left} +\n \\text{padding\\_right}\n\nExamples:\n >>> m = nn.ConstantPad2d(2, 3.5)\n >>> input = torch.randn(1, 2, 2)\n >>> input\n tensor([[[ 1.6585, 0.4320],\n [-0.8701, -0.4649]]])\n >>> m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad2d.html", "category": "pytorch docs"} {"text": "\n\n\nm(input)\n tensor([[[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],\n [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],\n [ 3.5000, 3.5000, 1.6585, 0.4320, 3.5000, 3.5000],\n [ 3.5000, 3.5000, -0.8701, -0.4649, 3.5000, 3.5000],\n [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],\n [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]]])\n >>> # using different paddings for different sides\n >>> m = nn.ConstantPad2d((3, 0, 2, 1), 3.5)\n >>> m(input)\n tensor([[[ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],\n [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000],\n [ 3.5000, 3.5000, 3.5000, 1.6585, 0.4320],\n [ 3.5000, 3.5000, 3.5000, -0.8701, -0.4649],\n [ 3.5000, 3.5000, 3.5000, 3.5000, 3.5000]]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.polygamma_\nTensor.polygamma_(n) -> Tensor\nIn-place version of \"polygamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.polygamma_.html", "category": "pytorch docs"} {"text": "GRUCell\nclass torch.nn.GRUCell(input_size, hidden_size, bias=True, device=None, dtype=None)\nA gated recurrent unit (GRU) cell\n \\begin{array}{ll} r = \\sigma(W_{ir} x + b_{ir} + W_{hr} h +\n b_{hr}) \\\\ z = \\sigma(W_{iz} x + b_{iz} + W_{hz} h + b_{hz}) \\\\\n n = \\tanh(W_{in} x + b_{in} + r * (W_{hn} h + b_{hn})) \\\\ h' =\n (1 - z) * n + z * h \\end{array}\n\nwhere \\sigma is the sigmoid function, and * is the Hadamard\n product.\nParameters:\n * input_size (int) -- The number of expected features in\n the input x\n * **hidden_size** (*int*) -- The number of features in the\n 
hidden state *h*\n\n * **bias** (*bool*) -- If \"False\", then the layer does not use\n bias weights *b_ih* and *b_hh*. Default: \"True\"\n\nInputs: input, hidden\n * input : tensor containing input features\n * **hidden** : tensor containing the initial hidden state for\n each element in the batch. Defaults to zero if not provided.\n\nOutputs: h'", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRUCell.html", "category": "pytorch docs"} {"text": "Outputs: h'\n * h' : tensor containing the next hidden state for each\n element in the batch\nShape:\n * input: (N, H_{in}) or (H_{in}) tensor containing input\n features where H_{in} = input_size.\n * hidden: (N, H_{out}) or (H_{out}) tensor containing the\n initial hidden state where H_{out} = *hidden_size*. Defaults\n to zero if not provided.\n\n * output: (N, H_{out}) or (H_{out}) tensor containing the next\n hidden state.\n\nVariables:\n * weight_ih (torch.Tensor) -- the learnable input-hidden\n weights, of shape (3hidden_size, input_size)*\n * **weight_hh** (*torch.Tensor*) -- the learnable hidden-hidden\n weights, of shape *(3*hidden_size, hidden_size)*\n\n * **bias_ih** -- the learnable input-hidden bias, of shape\n *(3*hidden_size)*\n\n * **bias_hh** -- the learnable hidden-hidden bias, of shape\n *(3*hidden_size)*\n\nNote:\n All the weights and biases are initialized from\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRUCell.html", "category": "pytorch docs"} {"text": "\\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden_size}}\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\nExamples:\n >>> rnn = nn.GRUCell(10, 20)\n >>> input = torch.randn(6, 3, 10)\n >>> hx = torch.randn(3, 20)\n >>> output = []\n >>> for i in range(6):\n ... hx = rnn(input[i], hx)\n ... output.append(hx)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.GRUCell.html", "category": "pytorch docs"} {"text": "torch.erfinv\ntorch.erfinv(input, *, out=None) -> Tensor\nAlias for \"torch.special.erfinv()\".", "source": "https://pytorch.org/docs/stable/generated/torch.erfinv.html", "category": "pytorch docs"} {"text": "torch.Tensor.asin_\nTensor.asin_() -> Tensor\nIn-place version of \"asin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.asin_.html", "category": "pytorch docs"} {"text": "torch.Tensor.smm\nTensor.smm(mat) -> Tensor\nSee \"torch.smm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.smm.html", "category": "pytorch docs"} {"text": "torch.fft.ifftshift\ntorch.fft.ifftshift(input, dim=None) -> Tensor\nInverse of \"fftshift()\".\nParameters:\n * input (Tensor) -- the tensor in FFT order\n * **dim** (*int**, **Tuple**[**int**]**, **optional*) -- The\n dimensions to rearrange. Only dimensions specified here will\n be rearranged, any other dimensions will be left in their\n original order. 
Default: All dimensions of \"input\".\n\n-[ Example ]-\n\n\n\nf = torch.fft.fftfreq(5)\nf\n tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000])\n\n\n\nA round-trip through \"fftshift()\" and \"ifftshift()\" gives the same\n result:\n\n\n\nshifted = torch.fft.fftshift(f)\ntorch.fft.ifftshift(shifted)\n tensor([ 0.0000, 0.2000, 0.4000, -0.4000, -0.2000])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifftshift.html", "category": "pytorch docs"} {"text": "torch.Tensor.repeat\nTensor.repeat(*sizes) -> Tensor\nRepeats this tensor along the specified dimensions.\nUnlike \"expand()\", this function copies the tensor's data.\nWarning:\n \"repeat()\" behaves differently from numpy.repeat, but is more\n similar to numpy.tile. For the operator similar to\n *numpy.repeat*, see \"torch.repeat_interleave()\".\n\nParameters:\n sizes (torch.Size or int...) -- The number of times\n to repeat this tensor along each dimension\nExample:\n >>> x = torch.tensor([1, 2, 3])\n >>> x.repeat(4, 2)\n tensor([[ 1, 2, 3, 1, 2, 3],\n [ 1, 2, 3, 1, 2, 3],\n [ 1, 2, 3, 1, 2, 3],\n [ 1, 2, 3, 1, 2, 3]])\n >>> x.repeat(4, 2, 1).size()\n torch.Size([4, 2, 3])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.repeat.html", "category": "pytorch docs"} {"text": "torch.func.jacrev\ntorch.func.jacrev(func, argnums=0, *, has_aux=False, chunk_size=None, _preallocate_and_copy=False)\nComputes the Jacobian of \"func\" with respect to the arg(s) at index\n \"argnum\" using reverse mode autodiff\nNote:\n Using \"chunk_size=1\" is equivalent to computing the jacobian row-\n by-row with a for-loop i.e. the constraints of \"vmap()\" are not\n applicable.\n\nParameters:\n * func (function) -- A Python function that takes one or\n more arguments, one of which must be a Tensor, and returns one\n or more Tensors\n * **argnums** (*int** or **Tuple**[**int**]*) -- Optional,\n integer or tuple of integers, saying which arguments to get\n the Jacobian with respect to. Default: 0.\n\n * **has_aux** (*bool*) -- Flag indicating that \"func\" returns a\n \"(output, aux)\" tuple where the first element is the output of\n the function to be differentiated and the second element is\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacrev.html", "category": "pytorch docs"} {"text": "auxiliary objects that will not be differentiated. Default:\n False.\n * **chunk_size** (*None** or **int*) -- If None (default), use\n the maximum chunk size (equivalent to doing a single vmap over\n vjp to compute the jacobian). If 1, then compute the jacobian\n row-by-row with a for-loop. If not None, then compute the\n jacobian \"chunk_size\" rows at a time (equivalent to doing\n multiple vmap over vjp). If you run into memory issues\n computing the jacobian, please try to specify a non-None\n chunk_size.\n\nReturns:\n Returns a function that takes in the same inputs as \"func\" and\n returns the Jacobian of \"func\" with respect to the arg(s) at\n \"argnums\". 
If \"has_aux is True\", then the returned function\n instead returns a \"(jacobian, aux)\" tuple where \"jacobian\" is\n the Jacobian and \"aux\" is auxiliary objects returned by \"func\".\nA basic usage with a pointwise, unary operation will give a\n diagonal array as the Jacobian", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacrev.html", "category": "pytorch docs"} {"text": "diagonal array as the Jacobian\n\n\n\nfrom torch.func import jacrev\nx = torch.randn(5)\njacobian = jacrev(torch.sin)(x)\nexpected = torch.diag(torch.cos(x))\nassert torch.allclose(jacobian, expected)\n\n\n\nIf you would like to compute the output of the function as well as\n the jacobian of the function, use the \"has_aux\" flag to return the\n output as an auxiliary object:\n\n\n\nfrom torch.func import jacrev\nx = torch.randn(5)\ndef f(x):\n return x.sin()\ndef g(x):\n result = f(x)\n return result, result\njacobian_f, f_x = jacrev(g, has_aux=True)(x)\nassert torch.allclose(f_x, f(x))\n\n\n\n\"jacrev()\" can be composed with vmap to produce batched Jacobians:\n\n\n\nfrom torch.func import jacrev, vmap\nx = torch.randn(64, 5)\njacobian = vmap(jacrev(torch.sin))(x)\nassert jacobian.shape == (64, 5, 5)\n\n\n\nAdditionally, \"jacrev()\" can be composed with itself to produce\n Hessians", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacrev.html", "category": "pytorch docs"} {"text": "Hessians\n\n\n\nfrom torch.func import jacrev\ndef f(x):\n return x.sin().sum()\nx = torch.randn(5)\nhessian = jacrev(jacrev(f))(x)\nassert torch.allclose(hessian, torch.diag(-x.sin()))\n\n\n\nBy default, \"jacrev()\" computes the Jacobian with respect to the\n first input. However, it can compute the Jacboian with respect to a\n different argument by using \"argnums\":\n\n\n\nfrom torch.func import jacrev\ndef f(x, y):\n return x + y ** 2\nx, y = torch.randn(5), torch.randn(5)\njacobian = jacrev(f, argnums=1)(x, y)\nexpected = torch.diag(2 * y)\nassert torch.allclose(jacobian, expected)\n\n\n\nAdditionally, passing a tuple to \"argnums\" will compute the\n Jacobian with respect to multiple arguments\n\n\n\nfrom torch.func import jacrev\ndef f(x, y):\n return x + y ** 2\nx, y = torch.randn(5), torch.randn(5)\njacobian = jacrev(f, argnums=(0, 1))(x, y)\nexpectedX = torch.diag(torch.ones_like(x))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacrev.html", "category": "pytorch docs"} {"text": "\n\n\nexpectedX = torch.diag(torch.ones_like(x))\nexpectedY = torch.diag(2 * y)\nassert torch.allclose(jacobian[0], expectedX)\nassert torch.allclose(jacobian[1], expectedY)\n\n\n\nNote:\n Using PyTorch \"torch.no_grad\" together with \"jacrev\". Case 1:\n Using \"torch.no_grad\" inside a function:\n\n >>> def f(x):\n >>> with torch.no_grad():\n >>> c = x ** 2\n >>> return x - c\n\n In this case, \"jacrev(f)(x)\" will respect the inner\n \"torch.no_grad\".Case 2: Using \"jacrev\" inside \"torch.no_grad\"\n context manager:\n\n >>> with torch.no_grad():\n >>> jacrev(f)(x)\n\n In this case, \"jacrev\" will respect the inner \"torch.no_grad\",\n but not the outer one. 
This is because \"jacrev\" is a \"function\n transform\": its result should not depend on the result of a\n context manager outside of \"f\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacrev.html", "category": "pytorch docs"} {"text": "Conv2d\nclass torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\nApplies a 2D convolution over an input signal composed of several\n input planes.\nIn the simplest case, the output value of the layer with input size\n (N, C_{\\text{in}}, H, W) and output (N, C_{\\text{out}},\n H_{\\text{out}}, W_{\\text{out}}) can be precisely described as:\n \\text{out}(N_i, C_{\\text{out}_j}) =\n \\text{bias}(C_{\\text{out}_j}) + \\sum_{k = 0}^{C_{\\text{in}} - 1}\n \\text{weight}(C_{\\text{out}_j}, k) \\star \\text{input}(N_i, k)\n\nwhere \\star is the valid 2D cross-correlation operator, N is a\n batch size, C denotes a number of channels, H is a height of input\n planes in pixels, and W is width in pixels.\nThis module supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"} {"text": "use different precision for backward.\n\n\n\"stride\" controls the stride for the cross-correlation, a single\n number or a tuple.\n\n\n\"padding\" controls the amount of padding applied to the input. It\n can be either a string {'valid', 'same'} or an int / a tuple of\n ints giving the amount of implicit padding applied on both sides.\n\n\n\"dilation\" controls the spacing between the kernel points; also\n known as the \u00e0 trous algorithm. It is harder to describe, but\n this link has a nice visualization of what \"dilation\" does.\n\n\n\"groups\" controls the connections between inputs and outputs.\n \"in_channels\" and \"out_channels\" must both be divisible by\n \"groups\". 
For example,\n* At groups=1, all inputs are convolved to all outputs.\n\n* At groups=2, the operation becomes equivalent to having two\n conv layers side by side, each seeing half the input\n channels and producing half the output channels, and both\n subsequently concatenated.\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"} {"text": "subsequently concatenated.\n * At groups= \"in_channels\", each input channel is convolved\n with its own set of filters (of size\n \\frac{\\text{out\\_channels}}{\\text{in\\_channels}}).\n\nThe parameters \"kernel_size\", \"stride\", \"padding\", \"dilation\" can\n either be:\n * a single \"int\" -- in which case the same value is used for the\n height and width dimension\n\n * a \"tuple\" of two ints -- in which case, the first *int* is\n used for the height dimension, and the second *int* for the\n width dimension\n\nNote:\n When *groups == in_channels* and *out_channels == K *\n in_channels*, where *K* is a positive integer, this operation is\n also known as a \"depthwise convolution\".In other words, for an\n input of size (N, C_{in}, L_{in}), a depthwise convolution with a\n depthwise multiplier *K* can be performed with the arguments\n (C_\\text{in}=C_\\text{in}, C_\\text{out}=C_\\text{in} \\times\n \\text{K}, ..., \\text{groups}=C_\\text{in}).\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"} {"text": "\\text{K}, ..., \\text{groups}=C_\\text{in}).\nNote:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n\nNote:\n \"padding='valid'\" is the same as no padding. \"padding='same'\"\n pads the input so the output has the shape as the input. However,\n this mode doesn't support any stride values other than 1.\n\nNote:\n This module supports complex data types i.e. \"complex32,\n complex64, complex128\".\n\nParameters:\n * in_channels (int) -- Number of channels in the input\n image\n * **out_channels** (*int*) -- Number of channels produced by the\n convolution\n\n * **kernel_size** (*int** or **tuple*) -- Size of the convolving\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"} {"text": "kernel\n * **stride** (*int** or **tuple**, **optional*) -- Stride of the\n convolution. Default: 1\n\n * **padding** (*int**, **tuple** or **str**, **optional*) --\n Padding added to all four sides of the input. Default: 0\n\n * **padding_mode** (*str**, **optional*) -- \"'zeros'\",\n \"'reflect'\", \"'replicate'\" or \"'circular'\". Default: \"'zeros'\"\n\n * **dilation** (*int** or **tuple**, **optional*) -- Spacing\n between kernel elements. Default: 1\n\n * **groups** (*int**, **optional*) -- Number of blocked\n connections from input channels to output channels. Default: 1\n\n * **bias** (*bool**, **optional*) -- If \"True\", adds a learnable\n bias to the output. 
Default: \"True\"\n\nShape:\n * Input: (N, C_{in}, H_{in}, W_{in}) or (C_{in}, H_{in}, W_{in})\n * Output: (N, C_{out}, H_{out}, W_{out}) or (C_{out}, H_{out},\n W_{out}), where\n\n H_{out} = \\left\\lfloor\\frac{H_{in} + 2 \\times\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"} {"text": "\\text{padding}[0] - \\text{dilation}[0] \\times\n (\\text{kernel_size}[0] - 1) - 1}{\\text{stride}[0]} +\n 1\\right\\rfloor\n W_{out} = \\left\\lfloor\\frac{W_{in} + 2 \\times\n \\text{padding}[1] - \\text{dilation}[1] \\times\n (\\text{kernel\\_size}[1] - 1) - 1}{\\text{stride}[1]} +\n 1\\right\\rfloor\n\nVariables:\n * weight (Tensor) -- the learnable weights of the module\n of shape (\\text{out_channels},\n \\frac{\\text{in_channels}}{\\text{groups}},\n \\text{kernel_size[0]}, \\text{kernel_size[1]}). The values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}) where k = \\frac{groups}{C_\\text{in} *\n \\prod_{i=0}^{1}\\text{kernel_size}[i]}\n * **bias** (*Tensor*) -- the learnable bias of the module of\n shape (out_channels). If \"bias\" is \"True\", then the values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"} {"text": "\\sqrt{k}) where k = \\frac{groups}{C_\\text{in} *\n \\prod_{i=0}^{1}\\text{kernel_size}[i]}\n-[ Examples ]-\n\n\n\nWith square kernels and equal stride\nm = nn.Conv2d(16, 33, 3, stride=2)\nnon-square kernels and unequal stride and with padding\nm = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))\nnon-square kernels and unequal stride and with padding and dilation\nm = nn.Conv2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2), dilation=(3, 1))\ninput = torch.randn(20, 16, 50, 100)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.detach_\nTensor.detach_()\nDetaches the Tensor from the graph that created it, making it a\n leaf. Views cannot be detached in-place.\nThis method also affects forward mode AD gradients and the result\n will never have forward mode AD gradients.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.detach_.html", "category": "pytorch docs"} {"text": "torch.mode\ntorch.mode(input, dim=- 1, keepdim=False, *, out=None)\nReturns a namedtuple \"(values, indices)\" where \"values\" is the mode\n value of each row of the \"input\" tensor in the given dimension\n \"dim\", i.e. 
a value which appears most often in that row, and\n \"indices\" is the index location of each mode value found.\nBy default, \"dim\" is the last dimension of the \"input\" tensor.\nIf \"keepdim\" is \"True\", the output tensors are of the same size as\n \"input\" except in the dimension \"dim\" where they are of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensors having 1 fewer dimension than \"input\".\nNote:\n This function is not defined for \"torch.cuda.Tensor\" yet.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to reduce.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.mode.html", "category": "pytorch docs"} {"text": "retained or not.\nKeyword Arguments:\n out (tuple, optional) -- the result tuple of two\n output tensors (values, indices)\nExample:\n >>> a = torch.randint(10, (5,))\n >>> a\n tensor([6, 5, 1, 0, 2])\n >>> b = a + (torch.randn(50, 1) * 5).long()\n >>> torch.mode(b, 0)\n torch.return_types.mode(values=tensor([6, 5, 1, 0, 2]), indices=tensor([2, 2, 2, 2, 2]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.mode.html", "category": "pytorch docs"} {"text": "torch.signal.windows.cosine\ntorch.signal.windows.cosine(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\nComputes a window with a simple cosine waveform. Also known as the\n sine window.\nThe cosine window is defined as follows:\n w_n = \\cos{\\left(\\frac{\\pi n}{M} - \\frac{\\pi}{2}\\right)} =\n \\sin{\\left(\\frac{\\pi n}{M}\\right)}\n\nThe window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\nParameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\nKeyword Arguments:\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.cosine.html", "category": "pytorch docs"} {"text": "of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nReturn type:\n Tensor\nExamples:\n >>> # Generates a symmetric cosine window.\n >>> torch.signal.windows.cosine(10)\n tensor([0.1564, 0.4540, 0.7071, 0.8910, 0.9877, 0.9877, 0.8910, 0.7071, 0.4540, 0.1564])\n\n >>> # Generates a periodic cosine window.\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.cosine.html", "category": "pytorch docs"} {"text": "\n\n\nGenerates a periodic cosine window.\n >>> torch.signal.windows.cosine(10, sym=False)\n tensor([0.1423, 0.4154, 0.6549, 0.8413, 0.9595, 1.0000, 0.9595, 0.8413, 0.6549, 0.4154])\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.cosine.html", "category": "pytorch docs"} {"text": "torch.pow\ntorch.pow(input, exponent, *, out=None) -> Tensor\nTakes the power of each element in \"input\" with \"exponent\" and\n returns a tensor with the result.\n\"exponent\" can be either a single \"float\" number or a Tensor with\n the same number of elements as \"input\".\nWhen \"exponent\" is a scalar value, the operation applied is:\n \\text{out}_i = x_i ^ \\text{exponent}\n\nWhen \"exponent\" is a tensor, the operation applied is:\n \\text{out}_i = x_i ^ {\\text{exponent}_i}\n\nWhen \"exponent\" is a tensor, the shapes of \"input\" and \"exponent\"\n must be broadcastable.\nParameters:\n * input (Tensor) -- the input tensor.\n * **exponent** (*float** or **tensor*) -- the exponent value\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.4331, 1.2475, 0.6834, -0.2791])\n >>> torch.pow(a, 2)\n tensor([ 0.1875, 1.5561, 0.4670, 0.0779])\n", "source": "https://pytorch.org/docs/stable/generated/torch.pow.html", "category": "pytorch docs"} {"text": "tensor([ 0.1875, 1.5561, 0.4670, 0.0779])\n >>> exp = torch.arange(1., 5.)\n >>> a = torch.arange(1., 5.)\n >>> a\n tensor([ 1., 2., 3., 4.])\n >>> exp\n tensor([ 1., 2., 3., 4.])\n >>> torch.pow(a, exp)\n tensor([ 1., 4., 27., 256.])\n\ntorch.pow(self, exponent, *, out=None) -> Tensor\n\"self\" is a scalar \"float\" value, and \"exponent\" is a tensor. The\n returned tensor \"out\" is of the same shape as \"exponent\"\nThe operation applied is:\n \\text{out}_i = \\text{self} ^ {\\text{exponent}_i}\n\nParameters:\n * self (float) -- the scalar base value for the power\n operation\n * **exponent** (*Tensor*) -- the exponent tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> exp = torch.arange(1., 5.)\n >>> base = 2\n >>> torch.pow(base, exp)\n tensor([ 2., 4., 8., 16.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.pow.html", "category": "pytorch docs"} {"text": "torch.logsumexp\ntorch.logsumexp(input, dim, keepdim=False, *, out=None)\nReturns the log of summed exponentials of each row of the \"input\"\n tensor in the given dimension \"dim\". The computation is numerically\n stabilized.\nFor summation index j given by dim and other indices i, the\n result is\n \\text{logsumexp}(x)_{i} = \\log \\sum_j \\exp(x_{ij})\n\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints**, **optional*) -- the\n dimension or dimensions to reduce. 
If \"None\", all dimensions\n are reduced.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.logsumexp.html", "category": "pytorch docs"} {"text": "retained or not.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(3, 3)\n >>> torch.logsumexp(a, 1)\n tensor([1.4907, 1.0593, 1.5696])\n >>> torch.dist(torch.logsumexp(a, 1), torch.log(torch.sum(torch.exp(a), 1)))\n tensor(1.6859e-07)\n", "source": "https://pytorch.org/docs/stable/generated/torch.logsumexp.html", "category": "pytorch docs"} {"text": "torch.Tensor.clamp\nTensor.clamp(min=None, max=None) -> Tensor\nSee \"torch.clamp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.clamp.html", "category": "pytorch docs"} {"text": "torch.Tensor.cdouble\nTensor.cdouble(memory_format=torch.preserve_format) -> Tensor\n\"self.cdouble()\" is equivalent to \"self.to(torch.complex128)\". See\n \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cdouble.html", "category": "pytorch docs"} {"text": "torch.Tensor.inverse\nTensor.inverse() -> Tensor\nSee \"torch.inverse()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.inverse.html", "category": "pytorch docs"} {"text": "torch.nn.functional.triplet_margin_loss\ntorch.nn.functional.triplet_margin_loss(anchor, positive, negative, margin=1.0, p=2, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')\nSee \"TripletMarginLoss\" for details\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.triplet_margin_loss.html", "category": "pytorch docs"} {"text": "torch.Tensor.grad\nTensor.grad\nThis attribute is \"None\" by default and becomes a Tensor the first\n time a call to \"backward()\" computes gradients for \"self\". The\n attribute will then contain the gradients computed and future calls\n to \"backward()\" will accumulate (add) gradients into it.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.grad.html", "category": "pytorch docs"} {"text": "torch.Tensor.sigmoid_\nTensor.sigmoid_() -> Tensor\nIn-place version of \"sigmoid()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sigmoid_.html", "category": "pytorch docs"} {"text": "torch.Tensor.bincount\nTensor.bincount(weights=None, minlength=0) -> Tensor\nSee \"torch.bincount()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bincount.html", "category": "pytorch docs"} {"text": "torch.cuda.memory_cached\ntorch.cuda.memory_cached(device=None)\nDeprecated; see \"memory_reserved()\".\nReturn type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_cached.html", "category": "pytorch docs"} {"text": "torch.Tensor.short\nTensor.short(memory_format=torch.preserve_format) -> Tensor\n\"self.short()\" is equivalent to \"self.to(torch.int16)\". See \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. 
Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.short.html", "category": "pytorch docs"} {"text": "torch.cuda.set_rng_state\ntorch.cuda.set_rng_state(new_state, device='cuda')\nSets the random number generator state of the specified GPU.\nParameters:\n * new_state (torch.ByteTensor) -- The desired state\n * **device** (*torch.device** or **int**, **optional*) -- The\n device to set the RNG state. Default: \"'cuda'\" (i.e.,\n \"torch.device('cuda')\", the current CUDA device).\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.set_rng_state.html", "category": "pytorch docs"} {"text": "torch.unbind\ntorch.unbind(input, dim=0) -> seq\nRemoves a tensor dimension.\nReturns a tuple of all slices along a given dimension, already\n without it.\nParameters:\n * input (Tensor) -- the tensor to unbind\n * **dim** (*int*) -- dimension to remove\n\nExample:\n >>> torch.unbind(torch.tensor([[1, 2, 3],\n >>> [4, 5, 6],\n >>> [7, 8, 9]]))\n (tensor([1, 2, 3]), tensor([4, 5, 6]), tensor([7, 8, 9]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.unbind.html", "category": "pytorch docs"} {"text": "torch.Tensor.logical_xor\nTensor.logical_xor() -> Tensor\nSee \"torch.logical_xor()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_xor.html", "category": "pytorch docs"} {"text": "LnStructured\nclass torch.nn.utils.prune.LnStructured(amount, n, dim=- 1)\nPrune entire (currently unpruned) channels in a tensor based on\n their L\"n\"-norm.\nParameters:\n * amount (int or float) -- quantity of channels to\n prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n * **n** (*int**, **float**, **inf**, **-inf**, **'fro'**,\n **'nuc'*) -- See documentation of valid entries for argument\n \"p\" in \"torch.norm()\".\n\n * **dim** (*int**, **optional*) -- index of the dim along which\n we define channels to prune. Default: -1.\n\nclassmethod apply(module, name, amount, n, dim, importance_scores=None)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html", "category": "pytorch docs"} {"text": "Parameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\n * **amount** (*int** or **float*) -- quantity of parameters\n to prune. If \"float\", should be between 0.0 and 1.0 and\n represent the fraction of parameters to prune. If \"int\", it\n represents the absolute number of parameters to prune.\n\n * **n** (*int**, **float**, **inf**, **-inf**, **'fro'**,\n **'nuc'*) -- See documentation of valid entries for\n argument \"p\" in \"torch.norm()\".\n\n * **dim** (*int*) -- index of the dim along which we define\n channels to prune.\n\n * **importance_scores** (*torch.Tensor*) -- tensor of\n importance scores (of same shape as module parameter) used\n to compute mask for pruning. The values in this tensor\n indicate the importance of the corresponding elements in\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html", "category": "pytorch docs"} {"text": "the parameter being pruned. 
If unspecified or None, the\n module parameter will be used in its place.\napply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n\n Parameters:\n **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n Returns:\n pruned version of the input tensor\n\n Return type:\n pruned_tensor (torch.Tensor)\n\ncompute_mask(t, default_mask)\n Computes and returns a mask for the input tensor \"t\". Starting\n from a base \"default_mask\" (which should be a mask of ones if\n the tensor has not been pruned yet), generate a mask to apply on\n top of the \"default_mask\" by zeroing out the channels along the\n specified dim with the lowest L\"n\"-norm.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html", "category": "pytorch docs"} {"text": "Parameters:\n * t (torch.Tensor) -- tensor representing the parameter\n to prune\n * **default_mask** (*torch.Tensor*) -- Base mask from\n previous pruning iterations, that need to be respected\n after the new mask is applied. Same dims as \"t\".\n\n Returns:\n mask to apply to \"t\", of same dims as \"t\"\n\n Return type:\n mask (torch.Tensor)\n\n Raises:\n **IndexError** -- if \"self.dim >= len(t.shape)\"\n\nprune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n\n Parameters:\n * **t** (*torch.Tensor*) -- tensor to prune (of same\n dimensions as \"default_mask\").\n\n * **importance_scores** (*torch.Tensor*) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html", "category": "pytorch docs"} {"text": "the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n * **default_mask** (*torch.Tensor**, **optional*) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n\n Returns:\n pruned version of tensor \"t\".\n\nremove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n\n Note:\n\n Pruning itself is NOT undone or reversed!\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.LnStructured.html", "category": "pytorch docs"} {"text": "torch.Tensor.nanmean\nTensor.nanmean(dim=None, keepdim=False, *, dtype=None) -> Tensor\nSee \"torch.nanmean()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nanmean.html", "category": "pytorch docs"} {"text": "torch.Tensor.half\nTensor.half(memory_format=torch.preserve_format) -> Tensor\n\"self.half()\" is equivalent to \"self.to(torch.float16)\". See\n \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. 
Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.half.html", "category": "pytorch docs"} {"text": "torch.Tensor.nextafter\nTensor.nextafter(other) -> Tensor\nSee \"torch.nextafter()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nextafter.html", "category": "pytorch docs"} {"text": "torch.Tensor.acosh_\nTensor.acosh_() -> Tensor\nIn-place version of \"acosh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.acosh_.html", "category": "pytorch docs"} {"text": "LazyConv2d\nclass torch.nn.LazyConv2d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\nA \"torch.nn.Conv2d\" module with lazy initialization of the\n \"in_channels\" argument of the \"Conv2d\" that is inferred from the\n \"input.size(1)\". The attributes that will be lazily initialized are\n weight and bias.\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * out_channels (int) -- Number of channels produced by the\n convolution\n * **kernel_size** (*int** or **tuple*) -- Size of the convolving\n kernel\n\n * **stride** (*int** or **tuple**, **optional*) -- Stride of the\n convolution. Default: 1\n\n * **padding** (*int** or **tuple**, **optional*) -- Zero-padding\n added to both sides of the input. Default: 0\n\n * **padding_mode** (*str**, **optional*) -- \"'zeros'\",\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConv2d.html", "category": "pytorch docs"} {"text": "\"'reflect'\", \"'replicate'\" or \"'circular'\". Default: \"'zeros'\"\n * **dilation** (*int** or **tuple**, **optional*) -- Spacing\n between kernel elements. Default: 1\n\n * **groups** (*int**, **optional*) -- Number of blocked\n connections from input channels to output channels. Default: 1\n\n * **bias** (*bool**, **optional*) -- If \"True\", adds a learnable\n bias to the output. 
Default: \"True\"\n\nSee also:\n \"torch.nn.Conv2d\" and \"torch.nn.modules.lazy.LazyModuleMixin\"\n\ncls_to_become\n alias of \"Conv2d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConv2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.xlogy_\nTensor.xlogy_(other) -> Tensor\nIn-place version of \"xlogy()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.xlogy_.html", "category": "pytorch docs"} {"text": "torch.cuda.get_device_properties\ntorch.cuda.get_device_properties(device)\nGets the properties of a device.\nParameters:\n device (torch.device or int or str) -- device for\n which to return the properties of the device.\nReturns:\n the properties of the device\nReturn type:\n _CudaDeviceProperties", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_device_properties.html", "category": "pytorch docs"} {"text": "torch.Tensor.ldexp_\nTensor.ldexp_(other) -> Tensor\nIn-place version of \"ldexp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ldexp_.html", "category": "pytorch docs"} {"text": "torch.kron\ntorch.kron(input, other, *, out=None) -> Tensor\nComputes the Kronecker product, denoted by \\otimes, of \"input\" and\n \"other\".\nIf \"input\" is a (a_0 \\times a_1 \\times \\dots \\times a_n) tensor and\n \"other\" is a (b_0 \\times b_1 \\times \\dots \\times b_n) tensor, the\n result will be a (a_0b_0 \\times a_1b_1 \\times \\dots \\times\n a_n*b_n) tensor with the following entries:\n (\\text{input} \\otimes \\text{other})_{k_0, k_1, \\dots, k_n} =\n \\text{input}_{i_0, i_1, \\dots, i_n} * \\text{other}_{j_0, j_1,\n \\dots, j_n},\n\nwhere k_t = i_t * b_t + j_t for 0 \\leq t \\leq n. If one tensor has\n fewer dimensions than the other it is unsqueezed until it has the\n same number of dimensions.\nSupports real-valued and complex-valued inputs.\nNote:\n This function generalizes the typical definition of the Kronecker\n product for two matrices to two tensors, as described above. When\n \"input\" is a (m \\times n) matrix and \"other\" is a (p \\times q)\n", "source": "https://pytorch.org/docs/stable/generated/torch.kron.html", "category": "pytorch docs"} {"text": "matrix, the result will be a (pm \\times qn) block matrix:\n \\mathbf{A} \\otimes \\mathbf{B}=\\begin{bmatrix} a_{11}\n \\mathbf{B} & \\cdots & a_{1 n} \\mathbf{B} \\\\ \\vdots & \\ddots &\n \\vdots \\\\ a_{m 1} \\mathbf{B} & \\cdots & a_{m n} \\mathbf{B}\n \\end{bmatrix}\n\n where \"input\" is \\mathbf{A} and \"other\" is \\mathbf{B}.\n\nParameters:\n * input (Tensor) --\n * **other** (*Tensor*) --\n\nKeyword Arguments:\n out (Tensor, optional) -- The output tensor. Ignored\n if \"None\". Default: \"None\"\nExamples:\n >>> mat1 = torch.eye(2)\n >>> mat2 = torch.ones(2, 2)\n >>> torch.kron(mat1, mat2)\n tensor([[1., 1., 0., 0.],\n [1., 1., 0., 0.],\n [0., 0., 1., 1.],\n [0., 0., 1., 1.]])\n\n >>> mat1 = torch.eye(2)\n >>> mat2 = torch.arange(1, 5).reshape(2, 2)\n >>> torch.kron(mat1, mat2)\n tensor([[1., 2., 0., 0.],\n [3., 4., 0., 0.],\n [0., 0., 1., 2.],\n", "source": "https://pytorch.org/docs/stable/generated/torch.kron.html", "category": "pytorch docs"} {"text": "[0., 0., 1., 2.],\n [0., 0., 3., 4.]])", "source": "https://pytorch.org/docs/stable/generated/torch.kron.html", "category": "pytorch docs"} {"text": "torch.fft.ihfft\ntorch.fft.ihfft(input, n=None, dim=- 1, norm=None, *, out=None) -> Tensor\nComputes the inverse of \"hfft()\".\n\"input\" must be a real-valued signal, interpreted in the Fourier\n domain. 
The IFFT of a real signal is Hermitian-symmetric, \"X[i] =\n conj(X[-i])\". \"ihfft()\" represents this in the one-sided form where\n only the positive frequencies below the Nyquist frequency are\n included. To compute the full output, use \"ifft()\".\nNote:\n Supports torch.half on CUDA with GPU Architecture SM53 or\n greater. However it only supports powers of 2 signal length in\n every transformed dimension.\n\nParameters:\n * input (Tensor) -- the real input tensor\n * **n** (*int**, **optional*) -- Signal length. If given, the\n input will either be zero-padded or trimmed to this length\n before computing the Hermitian IFFT.\n\n * **dim** (*int**, **optional*) -- The dimension along which to\n take the one dimensional Hermitian IFFT.\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfft.html", "category": "pytorch docs"} {"text": "take the one dimensional Hermitian IFFT.\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the backward transform (\"ihfft()\"),\n these correspond to:\n\n * \"\"forward\"\" - no normalization\n\n * \"\"backward\"\" - normalize by \"1/n\"\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the IFFT\n orthonormal)\n\n Calling the forward transform (\"hfft()\") with the same\n normalization mode will apply an overall normalization of\n \"1/n\" between the two transforms. This is required to make\n \"ihfft()\" the exact inverse.\n\n Default is \"\"backward\"\" (normalize by \"1/n\").\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nt = torch.arange(5)\nt\n tensor([0, 1, 2, 3, 4])\ntorch.fft.ihfft(t)\n tensor([ 2.0000-0.0000j, -0.5000-0.6882j, -0.5000-0.1625j])\n\n\n\nCompare against the full output from \"ifft()\":\n\n\n\ntorch.fft.ifft(t)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfft.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.fft.ifft(t)\n tensor([ 2.0000-0.0000j, -0.5000-0.6882j, -0.5000-0.1625j, -0.5000+0.1625j,\n -0.5000+0.6882j])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfft.html", "category": "pytorch docs"} {"text": "torch.nn.modules.module.register_module_full_backward_hook\ntorch.nn.modules.module.register_module_full_backward_hook(hook)\nRegisters a backward hook common to all the modules.\nWarning:\n This adds global state to the *nn.module* module and it is only\n intended for debugging/profiling purposes.\n\nThe hook will be called every time the gradients with respect to a\n module are computed, i.e. the hook will execute if and only if the\n gradients with respect to module outputs are computed. The hook\n should have the following signature:\n hook(module, grad_input, grad_output) -> Tensor or None\n\nThe \"grad_input\" and \"grad_output\" are tuples. The hook should not\n modify its arguments, but it can optionally return a new gradient\n with respect to the input that will be used in place of\n \"grad_input\" in subsequent computations. \"grad_input\" will only\n correspond to the inputs given as positional arguments and all", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_full_backward_hook.html", "category": "pytorch docs"} {"text": "kwarg arguments will not appear in the hook. Entries in\n \"grad_input\" and \"grad_output\" will be \"None\" for all non-Tensor\n arguments.\nFor technical reasons, when this hook is applied to a Module, its\n forward function will receive a view of each Tensor passed to the\n Module. 
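To make the hook signature described above concrete, here is a minimal sketch of registering a global full backward hook; the hook body and the small "nn.Linear" model are illustrative only, not part of the original documentation:

    import torch
    import torch.nn as nn
    from torch.nn.modules.module import register_module_full_backward_hook

    def report_grads(module, grad_input, grad_output):
        # grad_input / grad_output are tuples; entries may be None for non-Tensor arguments
        shapes = [tuple(g.shape) if g is not None else None for g in grad_output]
        print(type(module).__name__, shapes)
        return None  # returning None leaves grad_input unchanged

    handle = register_module_full_backward_hook(report_grads)  # applies to all modules

    model = nn.Linear(4, 2)
    out = model(torch.randn(3, 4))
    out.sum().backward()  # the hook fires when gradients w.r.t. the module output are computed

    handle.remove()  # the global hook is removed via the returned RemovableHandle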
Similarly the caller will receive a view of each Tensor\n returned by the Module's forward function.\nGlobal hooks are called before hooks registered with\n register_backward_hook\nReturns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\nReturn type:\n \"torch.utils.hooks.RemovableHandle\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_full_backward_hook.html", "category": "pytorch docs"} {"text": "torch.Tensor.polygamma\nTensor.polygamma(n) -> Tensor\nSee \"torch.polygamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.polygamma.html", "category": "pytorch docs"} {"text": "torch.jit.annotate\ntorch.jit.annotate(the_type, the_value)\nThis method is a pass-through function that returns the_value,\n used to hint TorchScript compiler the type of the_value. It is a\n no-op when running outside of TorchScript.\nThough TorchScript can infer correct type for most Python\n expressions, there are some cases where type inference can be\n wrong, including:\n\n\nEmpty containers like [] and {}, which TorchScript assumes to\n be container of Tensor\n\n\nOptional types like Optional[T] but assigned a valid value of\n type T, TorchScript would assume it is type T rather than\n Optional[T]\n\n\nNote that annotate() does not help in init method of\n torch.nn.Module subclasses because it is executed in eager mode.\n To annotate types of torch.nn.Module attributes, use \"Annotate()\"\n instead.\nExample:\n import torch\n from typing import Dict\n\n @torch.jit.script\n def fn():\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.annotate.html", "category": "pytorch docs"} {"text": "@torch.jit.script\n def fn():\n # Telling TorchScript that this empty dictionary is a (str -> int) dictionary\n # instead of default dictionary type of (str -> Tensor).\n d = torch.jit.annotate(Dict[str, int], {})\n # Without `torch.jit.annotate` above, following statement would fail because of\n # type mismatch.\n d[\"name\"] = 20\n\nParameters:\n * the_type -- Python type that should be passed to\n TorchScript compiler as type hint for the_value\n * **the_value** -- Value or expression to hint type for.\n\nReturns:\n the_value is passed back as return value.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.annotate.html", "category": "pytorch docs"} {"text": "torch.isfinite\ntorch.isfinite(input) -> Tensor\nReturns a new tensor with boolean elements representing if each\n element is finite or not.\nReal values are finite when they are not NaN, negative infinity, or\n infinity. 
Complex values are finite when both their real and\n imaginary parts are finite.\nParameters:\n input (Tensor) -- the input tensor.\nReturns:\n A boolean tensor that is True where \"input\" is finite and False\n elsewhere\nExample:\n >>> torch.isfinite(torch.tensor([1, float('inf'), 2, float('-inf'), float('nan')]))\n tensor([True, False, True, False, False])\n", "source": "https://pytorch.org/docs/stable/generated/torch.isfinite.html", "category": "pytorch docs"} {"text": "torch.set_rng_state\ntorch.set_rng_state(new_state)\nSets the random number generator state.\nParameters:\n new_state (torch.ByteTensor) -- The desired state", "source": "https://pytorch.org/docs/stable/generated/torch.set_rng_state.html", "category": "pytorch docs"} {"text": "FixedQParamsFakeQuantize\nclass torch.quantization.fake_quantize.FixedQParamsFakeQuantize(observer)\nSimulate quantize and dequantize with fixed quantization parameters\n in training time. Only per tensor quantization is supported.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FixedQParamsFakeQuantize.html", "category": "pytorch docs"} {"text": "torch.greater\ntorch.greater(input, other, *, out=None) -> Tensor\nAlias for \"torch.gt()\".", "source": "https://pytorch.org/docs/stable/generated/torch.greater.html", "category": "pytorch docs"} {"text": "torch.Tensor.greater_equal\nTensor.greater_equal(other) -> Tensor\nSee \"torch.greater_equal()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.greater_equal.html", "category": "pytorch docs"} {"text": "torch.Tensor.sort\nTensor.sort(dim=- 1, descending=False)\nSee \"torch.sort()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sort.html", "category": "pytorch docs"} {"text": "torch.linspace\ntorch.linspace(start, end, steps, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nCreates a one-dimensional tensor of size \"steps\" whose values are\n evenly spaced from \"start\" to \"end\", inclusive. That is, the value\n are:\n (\\text{start}, \\text{start} + \\frac{\\text{end} -\n \\text{start}}{\\text{steps} - 1}, \\ldots, \\text{start} +\n (\\text{steps} - 2) * \\frac{\\text{end} -\n \\text{start}}{\\text{steps} - 1}, \\text{end})\n\nFrom PyTorch 1.11 linspace requires the steps argument. Use\n steps=100 to restore the previous behavior.\nParameters:\n * start (float) -- the starting value for the set of\n points\n * **end** (*float*) -- the ending value for the set of points\n\n * **steps** (*int*) -- size of the constructed tensor\n\nKeyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * **dtype** (*torch.dtype**, **optional*) -- the data type to\n", "source": "https://pytorch.org/docs/stable/generated/torch.linspace.html", "category": "pytorch docs"} {"text": "perform the computation in. Default: if None, uses the global\n default dtype (see torch.get_default_dtype()) when both\n \"start\" and \"end\" are real, and corresponding complex dtype\n when either is complex.\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). 
\"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\nExample:\n >>> torch.linspace(3, 10, steps=5)\n tensor([ 3.0000, 4.7500, 6.5000, 8.2500, 10.0000])\n >>> torch.linspace(-10, 10, steps=5)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linspace.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.linspace(-10, 10, steps=5)\n tensor([-10., -5., 0., 5., 10.])\n >>> torch.linspace(start=-10, end=10, steps=5)\n tensor([-10., -5., 0., 5., 10.])\n >>> torch.linspace(start=-10, end=10, steps=1)\n tensor([-10.])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linspace.html", "category": "pytorch docs"} {"text": "elu\nclass torch.ao.nn.quantized.functional.elu(input, scale, zero_point, alpha=1.0)\nThis is the quantized version of \"elu()\".\nParameters:\n * input (Tensor) -- quantized input\n * **scale** (*float*) -- quantization scale of the output tensor\n\n * **zero_point** (*int*) -- quantization zero point of the\n output tensor\n\n * **alpha** (*float*) -- the alpha constant\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.elu.html", "category": "pytorch docs"} {"text": "torch.nn.functional.pairwise_distance\ntorch.nn.functional.pairwise_distance(x1, x2, p=2.0, eps=1e-6, keepdim=False) -> Tensor\nSee \"torch.nn.PairwiseDistance\" for details", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.pairwise_distance.html", "category": "pytorch docs"} {"text": "torch.nn.functional.multi_margin_loss\ntorch.nn.functional.multi_margin_loss(input, target, p=1, margin=1, weight=None, size_average=None, reduce=None, reduction='mean') -> Tensor\nSee \"MultiMarginLoss\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.multi_margin_loss.html", "category": "pytorch docs"} {"text": "PolynomialLR\nclass torch.optim.lr_scheduler.PolynomialLR(optimizer, total_iters=5, power=1.0, last_epoch=- 1, verbose=False)\nDecays the learning rate of each parameter group using a polynomial\n function in the given total_iters. When last_epoch=-1, sets initial\n lr as lr.\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **total_iters** (*int*) -- The number of steps that the\n scheduler decays the learning rate. Default: 5.\n\n * **power** (*int*) -- The power of the polynomial. Default:\n 1.0.\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n\n-[ Example ]-\n\n\n\nAssuming optimizer uses lr = 0.001 for all groups\nlr = 0.001 if epoch == 0\nlr = 0.00075 if epoch == 1\nlr = 0.00050 if epoch == 2\nlr = 0.00025 if epoch == 3\nlr = 0.0 if epoch >= 4\nscheduler = PolynomialLR(self.opt, total_iters=4, power=1.0)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.PolynomialLR.html", "category": "pytorch docs"} {"text": "\n\n\nfor epoch in range(100):\n train(...)\n validate(...)\n scheduler.step()\n\n\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)\n Loads the schedulers state.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. 
Should be an\n object returned from a call to \"state_dict()\".\n\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.PolynomialLR.html", "category": "pytorch docs"} {"text": "torch.Tensor.flip\nTensor.flip(dims) -> Tensor\nSee \"torch.flip()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.flip.html", "category": "pytorch docs"} {"text": "ReflectionPad2d\nclass torch.nn.ReflectionPad2d(padding)\nPads the input tensor using the reflection of the input boundary.\nFor N-dimensional padding, use \"torch.nn.functional.pad()\".\nParameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 4-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom})\nShape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out})\n where\n\n H_{out} = H_{in} + \\text{padding\\_top} +\n \\text{padding\\_bottom}\n\n W_{out} = W_{in} + \\text{padding\\_left} +\n \\text{padding\\_right}\n\nExamples:\n >>> m = nn.ReflectionPad2d(2)\n >>> input = torch.arange(9, dtype=torch.float).reshape(1, 1, 3, 3)\n >>> input\n tensor([[[[0., 1., 2.],\n [3., 4., 5.],\n [6., 7., 8.]]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad2d.html", "category": "pytorch docs"} {"text": "[6., 7., 8.]]]])\n >>> m(input)\n tensor([[[[8., 7., 6., 7., 8., 7., 6.],\n [5., 4., 3., 4., 5., 4., 3.],\n [2., 1., 0., 1., 2., 1., 0.],\n [5., 4., 3., 4., 5., 4., 3.],\n [8., 7., 6., 7., 8., 7., 6.],\n [5., 4., 3., 4., 5., 4., 3.],\n [2., 1., 0., 1., 2., 1., 0.]]]])\n >>> # using different paddings for different sides\n >>> m = nn.ReflectionPad2d((1, 1, 2, 0))\n >>> m(input)\n tensor([[[[7., 6., 7., 8., 7.],\n [4., 3., 4., 5., 4.],\n [1., 0., 1., 2., 1.],\n [4., 3., 4., 5., 4.],\n [7., 6., 7., 8., 7.]]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.take\nTensor.take(indices) -> Tensor\nSee \"torch.take()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.take.html", "category": "pytorch docs"} {"text": "torch.matmul\ntorch.matmul(input, other, *, out=None) -> Tensor\nMatrix product of two tensors.\nThe behavior depends on the dimensionality of the tensors as\n follows:\n\n\nIf both tensors are 1-dimensional, the dot product (scalar) is\n returned.\n\n\nIf both arguments are 2-dimensional, the matrix-matrix product is\n returned.\n\n\nIf the first argument is 1-dimensional and the second argument is\n 2-dimensional, a 1 is prepended to its dimension for the purpose\n of the matrix multiply. After the matrix multiply, the prepended\n dimension is removed.\n\n\nIf the first argument is 2-dimensional and the second argument is\n 1-dimensional, the matrix-vector product is returned.\n\n\nIf both arguments are at least 1-dimensional and at least one\n argument is N-dimensional (where N > 2), then a batched matrix\n multiply is returned. 
If the first argument is 1-dimensional, a\n 1 is prepended to its dimension for the purpose of the batched\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.matmul.html", "category": "pytorch docs"} {"text": "matrix multiply and removed after. If the second argument is\n 1-dimensional, a 1 is appended to its dimension for the purpose\n of the batched matrix multiple and removed after. The non-matrix\n (i.e. batch) dimensions are broadcasted (and thus must be\n broadcastable). For example, if \"input\" is a (j \\times 1 \\times\n n \\times n) tensor and \"other\" is a (k \\times n \\times n) tensor,\n \"out\" will be a (j \\times k \\times n \\times n) tensor.\n Note that the broadcasting logic only looks at the batch\n dimensions when determining if the inputs are broadcastable, and\n not the matrix dimensions. For example, if \"input\" is a (j \\times\n 1 \\times n \\times m) tensor and \"other\" is a (k \\times m \\times\n p) tensor, these inputs are valid for broadcasting even though\n the final two dimensions (i.e. the matrix dimensions) are\n different. \"out\" will be a (j \\times k \\times n \\times p) tensor.\n\nThis operation has support for arguments with sparse layouts. In", "source": "https://pytorch.org/docs/stable/generated/torch.matmul.html", "category": "pytorch docs"} {"text": "particular the matrix-matrix (both arguments 2-dimensional)\n supports sparse arguments with the same restrictions as\n \"torch.mm()\"\nWarning:\n Sparse support is a beta feature and some layout(s)/dtype/device\n combinations may not be supported, or may not have autograd\n support. If you notice missing functionality please open a\n feature request.\n\nThis operator supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\nNote:\n The 1-dimensional dot product version of this function does not\n support an \"out\" parameter.\n\nParameters:\n * input (Tensor) -- the first tensor to be multiplied\n * **other** (*Tensor*) -- the second tensor to be multiplied\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> # vector x vector\n >>> tensor1 = torch.randn(3)\n >>> tensor2 = torch.randn(3)\n", "source": "https://pytorch.org/docs/stable/generated/torch.matmul.html", "category": "pytorch docs"} {"text": "\n\n\ntensor2 = torch.randn(3)\n >>> torch.matmul(tensor1, tensor2).size()\n torch.Size([])\n >>> # matrix x vector\n >>> tensor1 = torch.randn(3, 4)\n >>> tensor2 = torch.randn(4)\n >>> torch.matmul(tensor1, tensor2).size()\n torch.Size([3])\n >>> # batched matrix x broadcasted vector\n >>> tensor1 = torch.randn(10, 3, 4)\n >>> tensor2 = torch.randn(4)\n >>> torch.matmul(tensor1, tensor2).size()\n torch.Size([10, 3])\n >>> # batched matrix x batched matrix\n >>> tensor1 = torch.randn(10, 3, 4)\n >>> tensor2 = torch.randn(10, 4, 5)\n >>> torch.matmul(tensor1, tensor2).size()\n torch.Size([10, 3, 5])\n >>> # batched matrix x broadcasted matrix\n >>> tensor1 = torch.randn(10, 3, 4)\n >>> tensor2 = torch.randn(4, 5)\n >>> torch.matmul(tensor1, tensor2).size()\n torch.Size([10, 3, 5])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.matmul.html", "category": "pytorch docs"} {"text": "default_eval_fn\nclass torch.quantization.default_eval_fn(model, calib_data)\nDefault evaluation function takes a torch.utils.data.Dataset or a\n list of input Tensors and run the model on the dataset", "source": 
"https://pytorch.org/docs/stable/generated/torch.quantization.default_eval_fn.html", "category": "pytorch docs"} {"text": "Linear\nclass torch.ao.nn.quantized.Linear(in_features, out_features, bias_=True, dtype=torch.qint8)\nA quantized linear module with quantized tensor as inputs and\n outputs. We adopt the same interface as torch.nn.Linear, please\n see https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for\n documentation.\nSimilar to \"Linear\", attributes will be randomly initialized at\n module creation time and will be overwritten later\nVariables:\n * weight (Tensor) -- the non-learnable quantized weights\n of the module of shape (\\text{out_features},\n \\text{in_features}).\n * **bias** (*Tensor*) -- the non-learnable bias of the module of\n shape (\\text{out\\_features}). If \"bias\" is \"True\", the values\n are initialized to zero.\n\n * **scale** -- *scale* parameter of output Quantized Tensor,\n type: double\n\n * **zero_point** -- *zero_point* parameter for output Quantized\n Tensor, type: long\n\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Linear.html", "category": "pytorch docs"} {"text": "Tensor, type: long\nExamples:\n >>> m = nn.quantized.Linear(20, 30)\n >>> input = torch.randn(128, 20)\n >>> input = torch.quantize_per_tensor(input, 1.0, 0, torch.quint8)\n >>> output = m(input)\n >>> print(output.size())\n torch.Size([128, 30])\n\nclassmethod from_float(mod)\n Create a quantized module from an observed float module\n\n Parameters:\n **mod** (*Module*) -- a float module, either produced by\n torch.ao.quantization utilities or provided by the user\n\nclassmethod from_reference(ref_qlinear, output_scale, output_zero_point)\n Create a (fbgemm/qnnpack) quantized module from a reference\n quantized module\n\n Parameters:\n * **ref_qlinear** (*Module*) -- a reference quantized linear\n module, either produced by torch.ao.quantization utilities\n or provided by the user\n\n * **output_scale** (*float*) -- scale for output Tensor\n\n * **output_zero_point** (*int*) -- zero point for output\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Linear.html", "category": "pytorch docs"} {"text": "Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Linear.html", "category": "pytorch docs"} {"text": "torch.lerp\ntorch.lerp(input, end, weight, *, out=None)\nDoes a linear interpolation of two tensors \"start\" (given by\n \"input\") and \"end\" based on a scalar or tensor \"weight\" and returns\n the resulting \"out\" tensor.\n \\text{out}_i = \\text{start}_i + \\text{weight}_i \\times\n (\\text{end}_i - \\text{start}_i)\n\nThe shapes of \"start\" and \"end\" must be broadcastable. 
If \"weight\"\n is a tensor, then the shapes of \"weight\", \"start\", and \"end\" must\n be broadcastable.\nParameters:\n * input (Tensor) -- the tensor with the starting points\n * **end** (*Tensor*) -- the tensor with the ending points\n\n * **weight** (*float** or **tensor*) -- the weight for the\n interpolation formula\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> start = torch.arange(1., 5.)\n >>> end = torch.empty(4).fill_(10)\n >>> start\n tensor([ 1., 2., 3., 4.])\n >>> end\n tensor([ 10., 10., 10., 10.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.lerp.html", "category": "pytorch docs"} {"text": "tensor([ 10., 10., 10., 10.])\n >>> torch.lerp(start, end, 0.5)\n tensor([ 5.5000, 6.0000, 6.5000, 7.0000])\n >>> torch.lerp(start, end, torch.full_like(start, 0.5))\n tensor([ 5.5000, 6.0000, 6.5000, 7.0000])", "source": "https://pytorch.org/docs/stable/generated/torch.lerp.html", "category": "pytorch docs"} {"text": "torch.Tensor.cfloat\nTensor.cfloat(memory_format=torch.preserve_format) -> Tensor\n\"self.cfloat()\" is equivalent to \"self.to(torch.complex64)\". See\n \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cfloat.html", "category": "pytorch docs"} {"text": "torch.Tensor.atanh\nTensor.atanh() -> Tensor\nSee \"torch.atanh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.atanh.html", "category": "pytorch docs"} {"text": "torch.nn.functional.softmax\ntorch.nn.functional.softmax(input, dim=None, _stacklevel=3, dtype=None)\nApplies a softmax function.\nSoftmax is defined as:\n\\text{Softmax}(x_{i}) = \\frac{\\exp(x_i)}{\\sum_j \\exp(x_j)}\nIt is applied to all slices along dim, and will re-scale them so\n that the elements lie in the range [0, 1] and sum to 1.\nSee \"Softmax\" for more details.\nParameters:\n * input (Tensor) -- input\n * **dim** (*int*) -- A dimension along which softmax will be\n computed.\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None.\n\nReturn type:\n Tensor\nNote:\n This function doesn't work directly with NLLLoss, which expects\n the Log to be computed between the Softmax and itself. 
Use\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.softmax.html", "category": "pytorch docs"} {"text": "log_softmax instead (it's faster and has better numerical\n properties).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.softmax.html", "category": "pytorch docs"} {"text": "torch.sym_float\ntorch.sym_float(a)\nSymInt-aware utility for float casting.\nParameters:\n a (SymInt, SymFloat, or object) -- Object to cast", "source": "https://pytorch.org/docs/stable/generated/torch.sym_float.html", "category": "pytorch docs"} {"text": "torch.addr\ntorch.addr(input, vec1, vec2, *, beta=1, alpha=1, out=None) -> Tensor\nPerforms the outer-product of vectors \"vec1\" and \"vec2\" and adds it\n to the matrix \"input\".\nOptional values \"beta\" and \"alpha\" are scaling factors on the outer\n product between \"vec1\" and \"vec2\" and the added matrix \"input\"\n respectively.\n \\text{out} = \\beta\\ \\text{input} + \\alpha\\ (\\text{vec1} \\otimes\n \\text{vec2})\n\nIf \"beta\" is 0, then \"input\" will be ignored, and nan and inf\n in it will not be propagated.\nIf \"vec1\" is a vector of size n and \"vec2\" is a vector of size\n m, then \"input\" must be broadcastable with a matrix of size (n\n \\times m) and \"out\" will be a matrix of size (n \\times m).\nParameters:\n * input (Tensor) -- matrix to be added\n * **vec1** (*Tensor*) -- the first vector of the outer product\n\n * **vec2** (*Tensor*) -- the second vector of the outer product\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.addr.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * **alpha** (*Number**, **optional*) -- multiplier for\n \\text{vec1} \\otimes \\text{vec2} (\\alpha)\n\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> vec1 = torch.arange(1., 4.)\n >>> vec2 = torch.arange(1., 3.)\n >>> M = torch.zeros(3, 2)\n >>> torch.addr(M, vec1, vec2)\n tensor([[ 1., 2.],\n [ 2., 4.],\n [ 3., 6.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.addr.html", "category": "pytorch docs"} {"text": "torch.Tensor.index_select\nTensor.index_select(dim, index) -> Tensor\nSee \"torch.index_select()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_select.html", "category": "pytorch docs"} {"text": "torch.linalg.pinv\ntorch.linalg.pinv(A, *, atol=None, rtol=None, hermitian=False, out=None) -> Tensor\nComputes the pseudoinverse (Moore-Penrose inverse) of a matrix.\nThe pseudoinverse may be defined algebraically but it is more\n computationally convenient to understand it through the SVD\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nIf \"hermitian\"= True, \"A\" is assumed to be Hermitian if complex\n or symmetric if real, but this is not checked internally. 
Instead,\n just the lower triangular part of the matrix is used in the\n computations.\nThe singular values (or the norm of the eigenvalues when\n \"hermitian\"= True) that are below \\max(\\text{atol}, \\sigma_1\n \\cdot \\text{rtol}) threshold are treated as zero and discarded in\n the computation, where \\sigma_1 is the largest singular value (or\n eigenvalue).", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html", "category": "pytorch docs"} {"text": "eigenvalue).\nIf \"rtol\" is not specified and \"A\" is a matrix of dimensions (m,\n n), the relative tolerance is set to be \\text{rtol} = \\max(m, n)\n \\varepsilon and \\varepsilon is the epsilon value for the dtype of\n \"A\" (see \"finfo\"). If \"rtol\" is not specified and \"atol\" is\n specified to be larger than zero then \"rtol\" is set to zero.\nIf \"atol\" or \"rtol\" is a \"torch.Tensor\", its shape must be\n broadcastable to that of the singular values of \"A\" as returned by\n \"torch.linalg.svd()\".\nNote:\n This function uses \"torch.linalg.svd()\" if \"hermitian\"*= False*\n and \"torch.linalg.eigh()\" if \"hermitian\"*= True*. For CUDA\n inputs, this function synchronizes that device with the CPU.\n\nNote:\n Consider using \"torch.linalg.lstsq()\" if possible for multiplying\n a matrix on the left by the pseudoinverse, as:\n\n torch.linalg.lstsq(A, B).solution == A.pinv() @ B\n\n It is always preferred to use \"lstsq()\" when possible, as it is\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html", "category": "pytorch docs"} {"text": "faster and more numerically stable than computing the\n pseudoinverse explicitly.\nNote:\n This function has NumPy compatible variant *linalg.pinv(A, rcond,\n hermitian=False)*. However, use of the positional argument\n \"rcond\" is deprecated in favor of \"rtol\".\n\nWarning:\n This function uses internally \"torch.linalg.svd()\" (or\n \"torch.linalg.eigh()\" when \"hermitian\"*= True*), so its\n derivative has the same problems as those of these functions. See\n the warnings in \"torch.linalg.svd()\" and \"torch.linalg.eigh()\"\n for more details.\n\nSee also:\n \"torch.linalg.inv()\" computes the inverse of a square matrix.\n\n \"torch.linalg.lstsq()\" computes \"A\"*.pinv() @ *\"B\" with a\n numerically stable algorithm.\n\nParameters:\n * A (Tensor) -- tensor of shape (, m, n)* where *** is\n zero or more batch dimensions.\n * **rcond** (*float**, **Tensor**, **optional*) -- [NumPy\n Compat]. Alias for \"rtol\". Default: *None*.\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n * atol (float, Tensor, optional) -- the absolute\n tolerance value. When None it's considered to be zero.\n Default: None.\n * **rtol** (*float**, **Tensor**, **optional*) -- the relative\n tolerance value. See above for the value it takes when *None*.\n Default: *None*.\n\n * **hermitian** (*bool**, **optional*) -- indicates whether \"A\"\n is Hermitian if complex or symmetric if real. Default:\n *False*.\n\n * **out** (*Tensor**, **optional*) -- output tensor. Ignored if\n *None*. 
Default: *None*.\n\nExamples:\n >>> A = torch.randn(3, 5)\n >>> A\n tensor([[ 0.5495, 0.0979, -1.4092, -0.1128, 0.4132],\n [-1.1143, -0.3662, 0.3042, 1.6374, -0.9294],\n [-0.3269, -0.5745, -0.0382, -0.5922, -0.6759]])\n >>> torch.linalg.pinv(A)\n tensor([[ 0.0600, -0.1933, -0.2090],\n [-0.0903, -0.0817, -0.4752],\n [-0.7124, -0.1631, -0.2272],\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html", "category": "pytorch docs"} {"text": "[-0.7124, -0.1631, -0.2272],\n [ 0.1356, 0.3933, -0.5023],\n [-0.0308, -0.1725, -0.5216]])\n >>> A = torch.randn(2, 6, 3)\n >>> Apinv = torch.linalg.pinv(A)\n >>> torch.dist(Apinv @ A, torch.eye(3))\n tensor(8.5633e-07)\n\n >>> A = torch.randn(3, 3, dtype=torch.complex64)\n >>> A = A + A.T.conj() # creates a Hermitian matrix\n >>> Apinv = torch.linalg.pinv(A, hermitian=True)\n >>> torch.dist(Apinv @ A, torch.eye(3))\n tensor(1.0830e-06)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.pinv.html", "category": "pytorch docs"} {"text": "HingeEmbeddingLoss\nclass torch.nn.HingeEmbeddingLoss(margin=1.0, size_average=None, reduce=None, reduction='mean')\nMeasures the loss given an input tensor x and a labels tensor y\n (containing 1 or -1). This is usually used for measuring whether\n two inputs are similar or dissimilar, e.g. using the L1 pairwise\n distance as x, and is typically used for learning nonlinear\n embeddings or semi-supervised learning.\nThe loss function for n-th sample in the mini-batch is\n l_n = \\begin{cases} x_n, & \\text{if}\\; y_n = 1,\\\\ \\max\n \\{0, \\Delta - x_n\\}, & \\text{if}\\; y_n = -1, \\end{cases}\n\nand the total loss functions is\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{`mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{`sum'.}\n \\end{cases}\n\nwhere L = {l_1,\\dots,l_N}^\\top.\nParameters:\n * margin (float, optional) -- Has a default value of\n 1.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.HingeEmbeddingLoss.html", "category": "pytorch docs"} {"text": "1.\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.HingeEmbeddingLoss.html", "category": "pytorch docs"} {"text": "the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\nShape:\n * Input: (*) where * means, any number of dimensions. 
The sum\n operation operates over all the elements.\n * Target: (*), same shape as the input\n\n * Output: scalar. If \"reduction\" is \"'none'\", then same shape as\n the input\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.HingeEmbeddingLoss.html", "category": "pytorch docs"} {"text": "torch.set_default_device\ntorch.set_default_device(device)\nSets the default \"torch.Tensor\" to be allocated on \"device\". This\n does not affect factory function calls which are called with an\n explicit \"device\" argument. Factory calls will be performed as if\n they were passed \"device\" as an argument.\nTo only temporarily change the default device instead of setting it\n globally, use \"with torch.device(device):\" instead.\nThe default device is initially \"cpu\". If you set the default\n tensor device to another device (e.g., \"cuda\") without a device\n index, tensors will be allocated on whatever the current device for\n the device type, even after \"torch.cuda.set_device()\" is called.\nWarning:\n This function imposes a slight performance cost on every Python\n call to the torch API (not just factory functions). If this is\n causing problems for you, please comment on\n https://github.com/pytorch/pytorch/issues/92701\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.set_default_device.html", "category": "pytorch docs"} {"text": "Parameters:\n device (device or string) -- the device to set as\n default\nExample:\n >>> torch.tensor([1.2, 3]).device\n device(type='cpu')\n >>> torch.set_default_device('cuda') # current device is 0\n >>> torch.tensor([1.2, 3]).device\n device(type='cuda', index=0)\n >>> torch.set_default_device('cuda:1')\n >>> torch.tensor([1.2, 3]).device\n device(type='cuda', index=1)\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_default_device.html", "category": "pytorch docs"} {"text": "torch.nn.functional.max_pool1d\ntorch.nn.functional.max_pool1d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)\nApplies a 1D max pooling over an input signal composed of several\n input planes.\nNote:\n The order of \"ceil_mode\" and \"return_indices\" is different from\n what seen in \"MaxPool1d\", and will change in a future release.\n\nSee \"MaxPool1d\" for details.\nParameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iW), minibatch dim optional.\n * **kernel_size** -- the size of the window. Can be a single\n number or a tuple *(kW,)*\n\n * **stride** -- the stride of the window. Can be a single number\n or a tuple *(sW,)*. Default: \"kernel_size\"\n\n * **padding** -- Implicit negative infinity padding to be added\n on both sides, must be >= 0 and <= kernel_size / 2.\n\n * **dilation** -- The stride between elements within a sliding\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool1d.html", "category": "pytorch docs"} {"text": "window, must be > 0.\n * **ceil_mode** -- If \"True\", will use *ceil* instead of *floor*\n to compute the output shape. This ensures that every element\n in the input tensor is covered by a sliding window.\n\n * **return_indices** -- If \"True\", will return the argmax along\n with the max values. 
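A minimal sketch of the "return_indices" behaviour just described, pairing "max_pool1d" with "torch.nn.functional.max_unpool1d" (the shapes and values below are illustrative placeholders):

    >>> import torch
    >>> import torch.nn.functional as F
    >>> x = torch.randn(1, 1, 8)                       # (minibatch, channels, width)
    >>> out, idx = F.max_pool1d(x, kernel_size=2, return_indices=True)
    >>> out.shape, idx.shape
    (torch.Size([1, 1, 4]), torch.Size([1, 1, 4]))
    >>> # the returned indices let max_unpool1d scatter the max values back
    >>> F.max_unpool1d(out, idx, kernel_size=2).shape
    torch.Size([1, 1, 8])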
Useful for\n \"torch.nn.functional.max_unpool1d\" later\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool1d.html", "category": "pytorch docs"} {"text": "torch.Tensor.to_sparse_csc\nTensor.to_sparse_csc() -> Tensor\nConvert a tensor to compressed column storage (CSC) format. Except\n for strided tensors, only works with 2D tensors. If the \"self\" is\n strided, then the number of dense dimensions could be specified,\n and a hybrid CSC tensor will be created, with dense_dim dense\n dimensions and self.dim() - 2 - dense_dim batch dimension.\nParameters:\n dense_dim (int, optional) -- Number of dense\n dimensions of the resulting CSC tensor. This argument should be\n used only if \"self\" is a strided tensor, and must be a value\n between 0 and dimension of \"self\" tensor minus two.\nExample:\n >>> dense = torch.randn(5, 5)\n >>> sparse = dense.to_sparse_csc()\n >>> sparse._nnz()\n 25\n\n >>> dense = torch.zeros(3, 3, 1, 1)\n >>> dense[0, 0] = dense[1, 2] = dense[2, 1] = 1\n >>> dense.to_sparse_csc(dense_dim=2)\n tensor(ccol_indices=tensor([0, 1, 2, 3]),\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_csc.html", "category": "pytorch docs"} {"text": "tensor(ccol_indices=tensor([0, 1, 2, 3]),\n row_indices=tensor([0, 2, 1]),\n values=tensor([[[1.]],\n [[1.]],\n\n [[1.]]]), size=(3, 3, 1, 1), nnz=3,\n layout=torch.sparse_csc)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_csc.html", "category": "pytorch docs"} {"text": "torch.save\ntorch.save(obj, f, pickle_module=pickle, pickle_protocol=DEFAULT_PROTOCOL, _use_new_zipfile_serialization=True)\nSaves an object to a disk file.\nSee also: Saving and loading tensors\nParameters:\n * obj (object) -- saved object\n * **f** (*Union**[**str**, **PathLike**, **BinaryIO**,\n **IO**[**bytes**]**]*) -- a file-like object (has to implement\n write and flush) or a string or os.PathLike object containing\n a file name\n\n * **pickle_module** (*Any*) -- module used for pickling metadata\n and objects\n\n * **pickle_protocol** (*int*) -- can be specified to override\n the default protocol\n\nNote:\n A common PyTorch convention is to save tensors using .pt file\n extension.\n\nNote:\n PyTorch preserves storage sharing across serialization. See\n Saving and loading tensors preserves views for more details.\n\nNote:\n The 1.6 release of PyTorch switched \"torch.save\" to use a new\n", "source": "https://pytorch.org/docs/stable/generated/torch.save.html", "category": "pytorch docs"} {"text": "zipfile-based file format. \"torch.load\" still retains the ability\n to load files in the old format. If for any reason you want\n \"torch.save\" to use the old format, pass the kwarg\n \"_use_new_zipfile_serialization=False\".\n-[ Example ]-\n\n\n\nSave to file\nx = torch.tensor([0, 1, 2, 3, 4])\ntorch.save(x, 'tensor.pt')\nSave to io.BytesIO buffer\nbuffer = io.BytesIO()\ntorch.save(x, buffer)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.save.html", "category": "pytorch docs"} {"text": "torch.triu\ntorch.triu(input, diagonal=0, *, out=None) -> Tensor\nReturns the upper triangular part of a matrix (2-D tensor) or batch\n of matrices \"input\", the other elements of the result tensor \"out\"\n are set to 0.\nThe upper triangular part of the matrix is defined as the elements\n on and above the diagonal.\nThe argument \"diagonal\" controls which diagonal to consider. If\n \"diagonal\" = 0, all elements on and above the main diagonal are\n retained. 
A positive value excludes just as many diagonals above\n the main diagonal, and similarly a negative value includes just as\n many diagonals below the main diagonal. The main diagonal are the\n set of indices \\lbrace (i, i) \\rbrace for i \\in [0, \\min{d_{1},\n d_{2}} - 1] where d_{1}, d_{2} are the dimensions of the matrix.\nParameters:\n * input (Tensor) -- the input tensor.\n * **diagonal** (*int**, **optional*) -- the diagonal to consider\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.triu.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(3, 3)\n >>> a\n tensor([[ 0.2309, 0.5207, 2.0049],\n [ 0.2072, -1.0680, 0.6602],\n [ 0.3480, -0.5211, -0.4573]])\n >>> torch.triu(a)\n tensor([[ 0.2309, 0.5207, 2.0049],\n [ 0.0000, -1.0680, 0.6602],\n [ 0.0000, 0.0000, -0.4573]])\n >>> torch.triu(a, diagonal=1)\n tensor([[ 0.0000, 0.5207, 2.0049],\n [ 0.0000, 0.0000, 0.6602],\n [ 0.0000, 0.0000, 0.0000]])\n >>> torch.triu(a, diagonal=-1)\n tensor([[ 0.2309, 0.5207, 2.0049],\n [ 0.2072, -1.0680, 0.6602],\n [ 0.0000, -0.5211, -0.4573]])\n\n >>> b = torch.randn(4, 6)\n >>> b\n tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],\n [-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],\n [ 0.4333, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],\n", "source": "https://pytorch.org/docs/stable/generated/torch.triu.html", "category": "pytorch docs"} {"text": "[-0.9888, 1.0679, -1.3337, -1.6556, 0.4798, 0.2830]])\n >>> torch.triu(b, diagonal=1)\n tensor([[ 0.0000, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],\n [ 0.0000, 0.0000, -1.2919, 1.3378, -0.1768, -1.0857],\n [ 0.0000, 0.0000, 0.0000, -1.0432, 0.9348, -0.4410],\n [ 0.0000, 0.0000, 0.0000, 0.0000, 0.4798, 0.2830]])\n >>> torch.triu(b, diagonal=-1)\n tensor([[ 0.5876, -0.0794, -1.8373, 0.6654, 0.2604, 1.5235],\n [-0.2447, 0.9556, -1.2919, 1.3378, -0.1768, -1.0857],\n [ 0.0000, 0.3146, 0.6576, -1.0432, 0.9348, -0.4410],\n [ 0.0000, 0.0000, -1.3337, -1.6556, 0.4798, 0.2830]])", "source": "https://pytorch.org/docs/stable/generated/torch.triu.html", "category": "pytorch docs"} {"text": "torch.Tensor.select_scatter\nTensor.select_scatter(src, dim, index) -> Tensor\nSee \"torch.select_scatter()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.select_scatter.html", "category": "pytorch docs"} {"text": "torch.linalg.svdvals\ntorch.linalg.svdvals(A, *, driver=None, out=None) -> Tensor\nComputes the singular values of a matrix.\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nThe singular values are returned in descending order.\nNote:\n This function is equivalent to NumPy's *linalg.svd(A,\n compute_uv=False)*.\n\nNote:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n\nSee also:\n \"torch.linalg.svd()\" computes the full singular value\n decomposition.\n\nParameters:\n A (Tensor) -- tensor of shape (, m, n)* where *** is\n zero or more batch dimensions.\nKeyword Arguments:\n * driver (str, optional) -- name of the cuSOLVER\n method to be used. This keyword argument only works on CUDA", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svdvals.html", "category": "pytorch docs"} {"text": "inputs. Available options are: None, gesvd, gesvdj, and\n gesvda. Check \"torch.linalg.svd()\" for details. 
Default:\n None.\n * **out** (*Tensor**, **optional*) -- output tensor. Ignored if\n *None*. Default: *None*.\n\nReturns:\n A real-valued tensor, even when \"A\" is complex.\nExamples:\n >>> A = torch.randn(5, 3)\n >>> S = torch.linalg.svdvals(A)\n >>> S\n tensor([2.5139, 2.1087, 1.1066])\n\n >>> torch.dist(S, torch.linalg.svd(A, full_matrices=False).S)\n tensor(2.4576e-07)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.svdvals.html", "category": "pytorch docs"} {"text": "AdamW\nclass torch.optim.AdamW(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0.01, amsgrad=False, *, maximize=False, foreach=None, capturable=False, differentiable=False)\nImplements AdamW algorithm.\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{input} : \\gamma \\text{(lr)}, \\: \\beta_1,\n \\beta_2 \\text{(betas)}, \\: \\theta_0 \\text{(params)}, \\:\n f(\\theta) \\text{(objective)}, \\: \\epsilon \\text{\n (epsilon)} \\\\\n &\\hspace{13mm} \\lambda \\text{(weight decay)}, \\:\n \\textit{amsgrad}, \\: \\textit{maximize}\n \\\\ &\\textbf{initialize} : m_0 \\leftarrow 0 \\text{ (first\n moment)}, v_0 \\leftarrow 0 \\text{ ( second moment)}, \\:\n \\widehat{v_0}^{max}\\leftarrow 0 \\\\[-1.ex]\n &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\\\\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"} {"text": "\\textbf{do} \\\n &\\hspace{5mm}\\textbf{if} \\: \\textit{maximize}:\n \\ &\\hspace{10mm}g_t \\leftarrow\n -\\nabla_{\\theta} f_t (\\theta_{t-1}) \\\n &\\hspace{5mm}\\textbf{else}\n \\ &\\hspace{10mm}g_t \\leftarrow \\nabla_{\\theta}\n f_t (\\theta_{t-1}) \\ &\\hspace{5mm} \\theta_t\n \\leftarrow \\theta_{t-1} - \\gamma \\lambda \\theta_{t-1} \\\n &\\hspace{5mm}m_t \\leftarrow \\beta_1 m_{t-1} + (1 -\n \\beta_1) g_t \\ &\\hspace{5mm}v_t\n \\leftarrow \\beta_2 v_{t-1} + (1-\\beta_2) g^2_t \\\n &\\hspace{5mm}\\widehat{m_t} \\leftarrow m_t/\\big(1-\\beta_1^t\n \\big) \\ &\\hspace{5mm}\\widehat{v_t}\n \\leftarrow v_t/\\big(1-\\beta_2^t \\big) \\\n &\\hspace{5mm}\\textbf{if} \\: amsgrad\n \\ &\\hspace{10mm}\\widehat{v_t}^{max} \\leftarrow\n \\mathrm{max}(\\widehat{v_t}^{max}, \\widehat{v_t})", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"} {"text": "\\ &\\hspace{10mm}\\theta_t \\leftarrow \\theta_t - \\gamma\n \\widehat{m_t}/ \\big(\\sqrt{\\widehat{v_t}^{max}} +\n \\epsilon \\big) \\\n &\\hspace{5mm}\\textbf{else}\n \\ &\\hspace{10mm}\\theta_t \\leftarrow \\theta_t - \\gamma\n \\widehat{m_t}/ \\big(\\sqrt{\\widehat{v_t}} + \\epsilon\n \\big) \\\n &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\nFor further details regarding the algorithm we refer to Decoupled\n Weight Decay Regularization.\nParameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * **lr** (*float**, **optional*) -- learning rate (default:\n 1e-3)\n\n * **betas** (*Tuple**[**float**, **float**]**, **optional*) --\n coefficients used for computing running averages of gradient\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"} {"text": "and its square (default: (0.9, 0.999))\n * **eps** (*float**, **optional*) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n\n * **weight_decay** (*float**, **optional*) -- weight decay\n 
coefficient (default: 1e-2)\n\n * **amsgrad** (*bool**, **optional*) -- whether to use the\n AMSGrad variant of this algorithm from the paper On the\n Convergence of Adam and Beyond (default: False)\n\n * **maximize** (*bool**, **optional*) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n\n * **foreach** (*bool**, **optional*) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n\n * **capturable** (*bool**, **optional*) -- whether this instance\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"} {"text": "is safe to capture in a CUDA graph. Passing True can impair\n ungraphed performance, so if you don't intend to graph capture\n this instance, leave it False (default: False)\n * **differentiable** (*bool**, **optional*) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n\nadd_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"} {"text": "load_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n **state_dict** (*dict*) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n\nregister_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None\n\n The \"optimizer\" argument is the optimizer instance being used.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"} {"text": "The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. 
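A minimal AdamW training-step sketch tying together the constructor arguments and the "zero_grad()" / "step()" methods documented here (the model, data, and hyperparameters are illustrative placeholders):

    >>> import torch
    >>> model = torch.nn.Linear(10, 1)
    >>> optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-2)
    >>> for _ in range(3):
    ...     optimizer.zero_grad()
    ...     loss = model(torch.randn(8, 10)).pow(2).mean()
    ...     loss.backward()
    ...     optimizer.step()
    >>> sd = optimizer.state_dict()    # dict with the 'state' and 'param_groups' entries described here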
Its content\n differs between optimizer classes.\n\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"} {"text": "Parameters:\n set_to_none (bool) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.AdamW.html", "category": "pytorch docs"} {"text": "torch.Tensor.cummin\nTensor.cummin(dim)\nSee \"torch.cummin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cummin.html", "category": "pytorch docs"} {"text": "FuseCustomConfig\nclass torch.ao.quantization.fx.custom_config.FuseCustomConfig\nCustom configuration for \"fuse_fx()\".\nExample usage:\n fuse_custom_config = FuseCustomConfig().set_preserved_attributes([\"attr1\", \"attr2\"])\n\nclassmethod from_dict(fuse_custom_config_dict)\n Create a \"ConvertCustomConfig\" from a dictionary with the\n following items:\n\n \"preserved_attributes\": a list of attributes that persist\n even if they are not used in \"forward\"\n\n This function is primarily for backward compatibility and may be\n removed in the future.\n\n Return type:\n *FuseCustomConfig*\n\nset_preserved_attributes(attributes)\n Set the names of the attributes that will persist in the graph\n module even if they are not used in the model's \"forward\"\n method.\n\n Return type:\n *FuseCustomConfig*\n\nto_dict()\n Convert this \"FuseCustomConfig\" to a dictionary with the items\n described in \"from_dict()\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.FuseCustomConfig.html", "category": "pytorch docs"} {"text": "described in \"from_dict()\".\n Return type:\n *Dict*[str, *Any*]\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.FuseCustomConfig.html", "category": "pytorch docs"} {"text": "Adam\nclass torch.optim.Adam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, amsgrad=False, *, foreach=None, maximize=False, capturable=False, differentiable=False, fused=None)\nImplements Adam algorithm.\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{input} : \\gamma \\text{ (lr)}, \\beta_1,\n \\beta_2 \\text{ (betas)},\\theta_0 \\text{\n (params)},f(\\theta) \\text{ (objective)} \\\\\n &\\hspace{13mm} \\lambda \\text{ (weight decay)}, \\:\n \\textit{amsgrad}, \\:\\textit{maximize}\n \\\\ &\\textbf{initialize} : m_0 \\leftarrow 0 \\text{ ( first\n moment)}, v_0\\leftarrow 0 \\text{ (second moment)},\\:\n \\widehat{v_0}^{max}\\leftarrow 0\\\\[-1.ex]\n &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\\\\n &\\hspace{5mm}\\textbf{if} 
\\: \\textit{maximize}:\n \\\\ &\\hspace{10mm}g_t \\leftarrow\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"} {"text": "-\\nabla_{\\theta} f_t (\\theta_{t-1}) \\\n &\\hspace{5mm}\\textbf{else}\n \\ &\\hspace{10mm}g_t \\leftarrow \\nabla_{\\theta}\n f_t (\\theta_{t-1}) \\ &\\hspace{5mm}\\textbf{if} \\:\n \\lambda \\neq 0 \\\n &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda \\theta_{t-1}\n \\ &\\hspace{5mm}m_t \\leftarrow \\beta_1 m_{t-1}\n + (1 - \\beta_1) g_t \\ &\\hspace{5mm}v_t\n \\leftarrow \\beta_2 v_{t-1} + (1-\\beta_2) g^2_t \\\n &\\hspace{5mm}\\widehat{m_t} \\leftarrow m_t/\\big(1-\\beta_1^t\n \\big) \\ &\\hspace{5mm}\\widehat{v_t}\n \\leftarrow v_t/\\big(1-\\beta_2^t \\big) \\\n &\\hspace{5mm}\\textbf{if} \\: amsgrad\n \\ &\\hspace{10mm}\\widehat{v_t}^{max} \\leftarrow\n \\mathrm{max}(\\widehat{v_t}^{max}, \\widehat{v_t})\n \\ &\\hspace{10mm}\\theta_t \\leftarrow \\theta_{t-1} - \\gamma", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"} {"text": "\\widehat{m_t}/ \\big(\\sqrt{\\widehat{v_t}^{max}} +\n \\epsilon \\big) \\\n &\\hspace{5mm}\\textbf{else}\n \\ &\\hspace{10mm}\\theta_t \\leftarrow \\theta_{t-1} - \\gamma\n \\widehat{m_t}/ \\big(\\sqrt{\\widehat{v_t}} + \\epsilon\n \\big) \\\n &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\nFor further details regarding the algorithm we refer to Adam: A\n Method for Stochastic Optimization.\nParameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * **lr** (*float**, **optional*) -- learning rate (default:\n 1e-3)\n\n * **betas** (*Tuple**[**float**, **float**]**, **optional*) --\n coefficients used for computing running averages of gradient\n and its square (default: (0.9, 0.999))\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"} {"text": "and its square (default: (0.9, 0.999))\n * **eps** (*float**, **optional*) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n\n * **weight_decay** (*float**, **optional*) -- weight decay (L2\n penalty) (default: 0)\n\n * **amsgrad** (*bool**, **optional*) -- whether to use the\n AMSGrad variant of this algorithm from the paper On the\n Convergence of Adam and Beyond (default: False)\n\n * **foreach** (*bool**, **optional*) -- whether foreach\n implementation of optimizer is used (default: None)\n\n * **maximize** (*bool**, **optional*) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n\n * **capturable** (*bool**, **optional*) -- whether this instance\n is safe to capture in a CUDA graph. Passing True can impair\n ungraphed performance, so if you don't intend to graph capture\n this instance, leave it False (default: False)\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"} {"text": "\n\ndifferentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n\nfused (bool, optional) -- whether the fused\n implementation (CUDA only) is used. Currently,\n torch.float64, torch.float32, torch.float16, and\n torch.bfloat16 are supported. 
Since the fused implementation\n is usually significantly faster than the for-loop\n implementation, we try to use it whenever possible (all\n parameters are on CUDA and are of a supported type). Else, we\n continue with the for-loop implementation. (default: None)\n\n\n\nadd_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"} {"text": "This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n **state_dict** (*dict*) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n\nregister_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None\n\n The \"optimizer\" argument is the optimizer instance being used.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"} {"text": "\"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. Its content\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"} {"text": "differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n **set_to_none** (*bool*) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. 
\"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"} {"text": "it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adam.html", "category": "pytorch docs"} {"text": "torch.nn.utils.skip_init\ntorch.nn.utils.skip_init(module_cls, args, *kwargs)\nGiven a module class object and args / kwargs, instantiates the\n module without initializing parameters / buffers. This can be\n useful if initialization is slow or if custom initialization will\n be performed, making the default initialization unnecessary. There\n are some caveats to this, due to the way this function is\n implemented:\n\n\nThe module must accept a device arg in its constructor that is\n passed to any parameters or buffers created during construction.\n\n\nThe module must not perform any computation on parameters in its\n constructor except initialization (i.e. functions from\n \"torch.nn.init\").\n\n\nIf these conditions are satisfied, the module can be instantiated\n with parameter / buffer values uninitialized, as if having been\n created using \"torch.empty()\".\nParameters:\n * module_cls -- Class object; should be a subclass of\n \"torch.nn.Module\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.skip_init.html", "category": "pytorch docs"} {"text": "\"torch.nn.Module\"\n * **args** -- args to pass to the module's constructor\n\n * **kwargs** -- kwargs to pass to the module's constructor\n\nReturns:\n Instantiated module with uninitialized parameters / buffers\nExample:\n >>> import torch\n >>> m = torch.nn.utils.skip_init(torch.nn.Linear, 5, 1)\n >>> m.weight\n Parameter containing:\n tensor([[0.0000e+00, 1.5846e+29, 7.8307e+00, 2.5250e-29, 1.1210e-44]],\n requires_grad=True)\n >>> m2 = torch.nn.utils.skip_init(torch.nn.Linear, in_features=6, out_features=1)\n >>> m2.weight\n Parameter containing:\n tensor([[-1.4677e+24, 4.5915e-41, 1.4013e-45, 0.0000e+00, -1.4677e+24,\n 4.5915e-41]], requires_grad=True)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.skip_init.html", "category": "pytorch docs"} {"text": "QConfig\nclass torch.quantization.qconfig.QConfig(activation, weight)\nDescribes how to quantize a layer or a part of the network by\n providing settings (observer classes) for activations and weights\n respectively.\nNote that QConfig needs to contain observer classes (like\n MinMaxObserver) or a callable that returns instances on invocation,\n not the concrete observer instances themselves. 
Quantization\n preparation function will instantiate observers multiple times for\n each of the layers.\nObserver classes have usually reasonable default arguments, but\n they can be overwritten with with_args method (that behaves like\n functools.partial):\n my_qconfig = QConfig(\n activation=MinMaxObserver.with_args(dtype=torch.qint8),\n weight=default_observer.with_args(dtype=torch.qint8))\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.QConfig.html", "category": "pytorch docs"} {"text": "torch.nn.functional.gelu\ntorch.nn.functional.gelu(input, approximate='none') -> Tensor\nWhen the approximate argument is 'none', it applies element-wise\n the function \\text{GELU}(x) = x * \\Phi(x)\nwhere \\Phi(x) is the Cumulative Distribution Function for Gaussian\n Distribution.\nWhen the approximate argument is 'tanh', Gelu is estimated with\n \\text{GELU}(x) = 0.5 * x * (1 + \\text{Tanh}(\\sqrt(2 / \\pi) * (x\n + 0.044715 * x^3)))\n\nSee Gaussian Error Linear Units (GELUs).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.gelu.html", "category": "pytorch docs"} {"text": "torch.linalg.ldl_factor\ntorch.linalg.ldl_factor(A, *, hermitian=False, out=None)\nComputes a compact representation of the LDL factorization of a\n Hermitian or symmetric (possibly indefinite) matrix.\nWhen \"A\" is complex valued it can be Hermitian (\"hermitian\"=\n True) or symmetric (\"hermitian\"= False).\nThe factorization is of the form the form A = L D L^T. If\n \"hermitian\" is True then transpose operation is the conjugate\n transpose.\nL (or U) and D are stored in compact form in \"LD\". They follow the\n format specified by LAPACK's sytrf function. These tensors may be\n used in \"torch.linalg.ldl_solve()\" to solve linear systems.\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nNote:\n When inputs are on a CUDA device, this function synchronizes that\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor.html", "category": "pytorch docs"} {"text": "device with the CPU. For a version of this function that does not\n synchronize, see \"torch.linalg.ldl_factor_ex()\".\nParameters:\n A (Tensor) -- tensor of shape (*, n, n) where * is zero or\n more batch dimensions consisting of symmetric or Hermitian\n matrices. (*, n, n) where *** is one or more batch dimensions.\nKeyword Arguments:\n * hermitian (bool, optional) -- whether to consider\n the input to be Hermitian or symmetric. For real-valued\n matrices, this switch has no effect. Default: False.\n * **out** (*tuple**, **optional*) -- tuple of two tensors to\n write the output to. Ignored if *None*. 
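As noted above, the factorization returned by "ldl_factor()" can be passed to "torch.linalg.ldl_solve()" to solve A X = B; a minimal sketch with placeholder data:

    >>> import torch
    >>> A = torch.randn(3, 3)
    >>> A = A @ A.mT                    # make symmetric
    >>> B = torch.randn(3, 2)
    >>> LD, pivots = torch.linalg.ldl_factor(A)
    >>> X = torch.linalg.ldl_solve(LD, pivots, B)
    >>> torch.allclose(A @ X, B, atol=1e-4)
    True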
Default: *None*.\n\nReturns:\n A named tuple (LD, pivots).\nExamples:\n >>> A = torch.randn(3, 3)\n >>> A = A @ A.mT # make symmetric\n >>> A\n tensor([[7.2079, 4.2414, 1.9428],\n [4.2414, 3.4554, 0.3264],\n [1.9428, 0.3264, 1.3823]])\n >>> LD, pivots = torch.linalg.ldl_factor(A)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor.html", "category": "pytorch docs"} {"text": "\n\n\nLD, pivots = torch.linalg.ldl_factor(A)\n >>> LD\n tensor([[ 7.2079, 0.0000, 0.0000],\n [ 0.5884, 0.9595, 0.0000],\n [ 0.2695, -0.8513, 0.1633]])\n >>> pivots\n tensor([1, 2, 3], dtype=torch.int32)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor.html", "category": "pytorch docs"} {"text": "torch.cuda.change_current_allocator\ntorch.cuda.change_current_allocator(allocator)\nChanges the currently used memory allocator to be the one provided.\n If the current allocator has already been used/initialized, this\n function will error.\nParameters:\n allocator (torch.cuda.memory._CUDAAllocator) -- allocator\n to be set as the active one.\nNote:\n See Memory management for details on creating and using a custom\n allocator\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.change_current_allocator.html", "category": "pytorch docs"} {"text": "torch.bitwise_not\ntorch.bitwise_not(input, *, out=None) -> Tensor\nComputes the bitwise NOT of the given input tensor. The input\n tensor must be of integral or Boolean types. For bool tensors, it\n computes the logical NOT.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.bitwise_not(torch.tensor([-1, -2, 3], dtype=torch.int8))\n tensor([ 0, 1, -4], dtype=torch.int8)\n", "source": "https://pytorch.org/docs/stable/generated/torch.bitwise_not.html", "category": "pytorch docs"} {"text": "torch.nn.functional.hardshrink\ntorch.nn.functional.hardshrink(input, lambd=0.5) -> Tensor\nApplies the hard shrinkage function element-wise\nSee \"Hardshrink\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.hardshrink.html", "category": "pytorch docs"} {"text": "torch.atleast_2d\ntorch.atleast_2d(*tensors)\nReturns a 2-dimensional view of each input tensor with zero\n dimensions. Input tensors with two or more dimensions are returned\n as-is.\nParameters:\n input (Tensor or list of Tensors) --\nReturns:\n output (Tensor or tuple of Tensors)\nExample:\n >>> x = torch.tensor(1.)\n >>> x\n tensor(1.)\n >>> torch.atleast_2d(x)\n tensor([[1.]])\n >>> x = torch.arange(4).view(2, 2)\n >>> x\n tensor([[0, 1],\n [2, 3]])\n >>> torch.atleast_2d(x)\n tensor([[0, 1],\n [2, 3]])\n >>> x = torch.tensor(0.5)\n >>> y = torch.tensor(1.)\n >>> torch.atleast_2d((x, y))\n (tensor([[0.5000]]), tensor([[1.]]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.atleast_2d.html", "category": "pytorch docs"} {"text": "torch.nn.functional.dropout1d\ntorch.nn.functional.dropout1d(input, p=0.5, training=True, inplace=False)\nRandomly zero out entire channels (a channel is a 1D feature map,\n e.g., the j-th channel of the i-th sample in the batched input is a\n 1D tensor \\text{input}[i, j]) of the input tensor). 
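A minimal sketch of the channel-wise zeroing described here, using an illustrative (N, C, L) placeholder tensor:

    >>> import torch
    >>> x = torch.ones(2, 4, 6)        # (batch, channels, length)
    >>> # whole channels are zeroed; surviving channels are rescaled by 1/(1-p) = 2
    >>> torch.nn.functional.dropout1d(x, p=0.5, training=True)[0]   # one possible outcome
    tensor([[2., 2., 2., 2., 2., 2.],
            [0., 0., 0., 0., 0., 0.],
            [2., 2., 2., 2., 2., 2.],
            [0., 0., 0., 0., 0., 0.]])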
Each channel\n will be zeroed out independently on every forward call with\n probability \"p\" using samples from a Bernoulli distribution.\nSee \"Dropout1d\" for details.\nParameters:\n * p (float) -- probability of a channel to be zeroed.\n Default: 0.5\n * **training** (*bool*) -- apply dropout if is \"True\". Default:\n \"True\"\n\n * **inplace** (*bool*) -- If set to \"True\", will do this\n operation in-place. Default: \"False\"\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout1d.html", "category": "pytorch docs"} {"text": "torch.signal.windows.exponential\ntorch.signal.windows.exponential(M, *, center=None, tau=1.0, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\nComputes a window with an exponential waveform. Also known as\n Poisson window.\nThe exponential window is defined as follows:\n w_n = \\exp{\\left(-\\frac{|n - c|}{\\tau}\\right)}\n\nwhere c is the \"center\" of the window.\nThe window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\nParameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\nKeyword Arguments:\n * center (float, optional) -- where the center of the\n window will be located. Default: M / 2 if sym is False,\n else (M - 1) / 2.\n * **tau** (*float**, **optional*) -- the decay value. Tau is\n generally associated with a percentage, that means, that the\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.exponential.html", "category": "pytorch docs"} {"text": "value should vary within the interval (0, 100]. If tau is 100,\n it is considered the uniform window. Default: 1.0.\n * **sym** (*bool**, **optional*) -- If *False*, returns a\n periodic window suitable for use in spectral analysis. If\n *True*, returns a symmetric window suitable for use in filter\n design. Default: *True*.\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.exponential.html", "category": "pytorch docs"} {"text": "tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nReturn type:\n Tensor\nExamples:\n >>> # Generates a symmetric exponential window of size 10 and with a decay value of 1.0.\n >>> # The center will be at (M - 1) / 2, where M is 10.\n >>> torch.signal.windows.exponential(10)\n tensor([0.0111, 0.0302, 0.0821, 0.2231, 0.6065, 0.6065, 0.2231, 0.0821, 0.0302, 0.0111])\n\n >>> # Generates a periodic exponential window and decay factor equal to .5\n >>> torch.signal.windows.exponential(10, sym=False,tau=.5)\n tensor([4.5400e-05, 3.3546e-04, 2.4788e-03, 1.8316e-02, 1.3534e-01, 1.0000e+00, 1.3534e-01, 1.8316e-02, 2.4788e-03, 3.3546e-04])\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.exponential.html", "category": "pytorch docs"} {"text": "torch.ne\ntorch.ne(input, other, *, out=None) -> Tensor\nComputes \\text{input} \\neq \\text{other} element-wise.\nThe second argument can be a number or a tensor whose shape is\n broadcastable with the first argument.\nParameters:\n * input (Tensor) -- the tensor to compare\n * **other** (*Tensor** or **float*) -- the tensor or value to\n compare\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nReturns:\n A boolean tensor that is True where \"input\" is not equal to\n \"other\" and False elsewhere\nExample:\n >>> torch.ne(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\n tensor([[False, True], [True, False]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.ne.html", "category": "pytorch docs"} {"text": "torch.Tensor.logcumsumexp\nTensor.logcumsumexp(dim) -> Tensor\nSee \"torch.logcumsumexp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logcumsumexp.html", "category": "pytorch docs"} {"text": "default_activation_only_qconfig\ntorch.quantization.qconfig.default_activation_only_qconfig\nalias of QConfig(activation=functools.partial(,\n observer=,\n quant_min=0, quant_max=255, dtype=torch.quint8,\n qscheme=torch.per_tensor_affine, reduce_range=True){},\n weight=)", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_activation_only_qconfig.html", "category": "pytorch docs"} {"text": "torch.set_float32_matmul_precision\ntorch.set_float32_matmul_precision(precision)\nSets the internal precision of float32 matrix multiplications.\nRunning float32 matrix multiplications in lower precision may\n significantly increase performance, and in some programs the loss\n of precision has a negligible impact.\nSupports three settings:\n * \"highest\", float32 matrix multiplications use the float32\n datatype for internal computations.\n\n * \"high\", float32 matrix multiplications use the TensorFloat32\n or bfloat16_3x datatypes for internal computations, if fast\n matrix multiplication algorithms using those datatypes\n internally are available. Otherwise float32 matrix\n multiplications are computed as if the precision is \"highest\".\n\n * \"medium\", float32 matrix multiplications use the bfloat16\n datatype for internal computations, if a fast matrix\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html", "category": "pytorch docs"} {"text": "multiplication algorithm using that datatype internally is\n available. 
Otherwise float32 matrix multiplications are\n computed as if the precision is \"high\".\nNote:\n This does not change the output dtype of float32 matrix\n multiplications, it controls how the internal computation of the\n matrix multiplication is performed.\n\nNote:\n This does not change the precision of convolution operations.\n Other flags, like *torch.backends.cudnn.allow_tf32*, may control\n the precision of convolution operations.\n\nNote:\n This flag currently only affects one native device type: CUDA. If\n \"high\" or \"medium\" are set then the TensorFloat32 datatype will\n be used when computing float32 matrix multiplications, equivalent\n to setting *torch.backends.cuda.matmul.allow_tf32 = True*. When\n \"highest\" (the default) is set then the float32 datatype is used\n for internal computations, equivalent to setting\n *torch.backends.cuda.matmul.allow_tf32 = False*.\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html", "category": "pytorch docs"} {"text": "Parameters:\n precision (str) -- can be set to \"highest\" (default),\n \"high\", or \"medium\" (see above).", "source": "https://pytorch.org/docs/stable/generated/torch.set_float32_matmul_precision.html", "category": "pytorch docs"} {"text": "Linear\nclass torch.ao.nn.qat.Linear(in_features, out_features, bias=True, qconfig=None, device=None, dtype=None)\nA linear module attached with FakeQuantize modules for weight, used\n for quantization aware training.\nWe adopt the same interface as torch.nn.Linear, please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.Linear for\n documentation.\nSimilar to torch.nn.Linear, with FakeQuantize modules initialized\n to default.\nVariables:\n weight (torch.Tensor) -- fake quant module for weight\nclassmethod from_float(mod)\n Create a qat module from a float module or qparams_dict Args:\n *mod* a float module, either produced by torch.ao.quantization\n utilities or directly from user\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.qat.Linear.html", "category": "pytorch docs"} {"text": "LazyModuleMixin\nclass torch.nn.modules.lazy.LazyModuleMixin(args, *kwargs)\nA mixin for modules that lazily initialize parameters, also known\n as \"lazy modules.\"\nModules that lazily initialize parameters, or \"lazy modules\",\n derive the shapes of their parameters from the first input(s) to\n their forward method. Until that first forward they contain\n \"torch.nn.UninitializedParameter\" s that should not be accessed or\n used, and afterward they contain regular \"torch.nn.Parameter\" s.\n Lazy modules are convenient since they don't require computing some\n module arguments, like the \"in_features\" argument of a typical\n \"torch.nn.Linear\".\nAfter construction, networks with lazy modules should first be\n converted to the desired dtype and placed on the expected device.\n This is because lazy modules only perform shape inference so the\n usual dtype and device placement behavior applies. The lazy modules\n should then perform \"dry runs\" to initialize all the components in", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html", "category": "pytorch docs"} {"text": "the module. These \"dry runs\" send inputs of the correct size,\n dtype, and device through the network and to each one of its lazy\n modules. After this the network can be used as usual.\n\n\n\nclass LazyMLP(torch.nn.Module):\n ... def init(self):\n ... super().init()\n ... self.fc1 = torch.nn.LazyLinear(10)\n ... 
self.relu1 = torch.nn.ReLU()\n ... self.fc2 = torch.nn.LazyLinear(1)\n ... self.relu2 = torch.nn.ReLU()\n ...\n ... def forward(self, input):\n ... x = self.relu1(self.fc1(input))\n ... y = self.relu2(self.fc2(x))\n ... return y\nconstructs a network with lazy modules\nlazy_mlp = LazyMLP()\ntransforms the network's device and dtype\nNOTE: these transforms can and should be applied after construction and before any 'dry runs'\nlazy_mlp = lazy_mlp.cuda().double()\nlazy_mlp\n LazyMLP( (fc1): LazyLinear(in_features=0, out_features=10, bias=True)\n (relu1): ReLU()\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html", "category": "pytorch docs"} {"text": "(relu1): ReLU()\n (fc2): LazyLinear(in_features=0, out_features=1, bias=True)\n (relu2): ReLU()\n )\n\n\n\nperforms a dry run to initialize the network's lazy modules\nlazy_mlp(torch.ones(10,10).cuda())\nafter initialization, LazyLinear modules become regular Linear modules\nlazy_mlp\n LazyMLP(\n (fc1): Linear(in_features=10, out_features=10, bias=True)\n (relu1): ReLU()\n (fc2): Linear(in_features=10, out_features=1, bias=True)\n (relu2): ReLU()\n )\nattaches an optimizer, since parameters can now be used as usual\noptim = torch.optim.SGD(mlp.parameters(), lr=0.01)\n\n\n\nA final caveat when using lazy modules is that the order of\n initialization of a network's parameters may change, since the lazy\n modules are always initialized after other modules. For example, if\n the LazyMLP class defined above had a \"torch.nn.LazyLinear\" module\n first and then a regular \"torch.nn.Linear\" second, the second", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html", "category": "pytorch docs"} {"text": "module would be initialized on construction and the first module\n would be initialized during the first dry run. This can cause the\n parameters of a network using lazy modules to be initialized\n differently than the parameters of a network without lazy modules\n as the order of parameter initializations, which often depends on a\n stateful random number generator, is different. Check\n Reproducibility for more details.\nLazy modules can be serialized with a state dict like other\n modules. For example:\n\n\n\nlazy_mlp = LazyMLP()\nThe state dict shows the uninitialized parameters\nlazy_mlp.state_dict()\n OrderedDict([('fc1.weight', Uninitialized parameter),\n ('fc1.bias',\n tensor([-1.8832e+25, 4.5636e-41, -1.8832e+25, 4.5636e-41, -6.1598e-30,\n 4.5637e-41, -1.8788e+22, 4.5636e-41, -2.0042e-31, 4.5637e-41])),\n ('fc2.weight', Uninitialized parameter),\n ('fc2.bias', tensor([0.0019]))])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html", "category": "pytorch docs"} {"text": "('fc2.bias', tensor([0.0019]))])\nLazy modules can load regular \"torch.nn.Parameter\" s (i.e. 
you can\n serialize/deserialize initialized LazyModules and they will remain\n initialized)\n\n\n\nfull_mlp = LazyMLP()\nDry run to initialize another module\nfull_mlp.forward(torch.ones(10, 1))\nLoad an initialized state into a lazy module\nlazy_mlp.load_state_dict(full_mlp.state_dict())\nThe state dict now holds valid values\nlazy_mlp.state_dict()\n OrderedDict([('fc1.weight',\n tensor([[-0.3837],\n [ 0.0907],\n [ 0.6708],\n [-0.5223],\n [-0.9028],\n [ 0.2851],\n [-0.4537],\n [ 0.6813],\n [ 0.5766],\n [-0.8678]])),\n ('fc1.bias',\n tensor([-1.8832e+25, 4.5636e-41, -1.8832e+25, 4.5636e-41, -6.1598e-30,\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html", "category": "pytorch docs"} {"text": "4.5637e-41, -1.8788e+22, 4.5636e-41, -2.0042e-31, 4.5637e-41])),\n ('fc2.weight',\n tensor([[ 0.1320, 0.2938, 0.0679, 0.2793, 0.1088, -0.1795, -0.2301, 0.2807,\n 0.2479, 0.1091]])),\n ('fc2.bias', tensor([0.0019]))])\nNote, however, that the loaded parameters will not be replaced when\n doing a \"dry run\" if they are initialized when the state is loaded.\n This prevents using initialized modules in different contexts.\nhas_uninitialized_params()\n Check if a module has parameters that are not initialized\n\ninitialize_parameters(args, *kwargs)\n Initialize parameters according to the input batch properties.\n This adds an interface to isolate parameter initialization from\n the forward pass when doing parameter shape inference.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.lazy.LazyModuleMixin.html", "category": "pytorch docs"} {"text": "torch.fft.rfft\ntorch.fft.rfft(input, n=None, dim=- 1, norm=None, *, out=None) -> Tensor\nComputes the one dimensional Fourier transform of real-valued\n \"input\".\nThe FFT of a real signal is Hermitian-symmetric, \"X[i] =\n conj(X[-i])\" so the output contains only the positive frequencies\n below the Nyquist frequency. To compute the full output, use\n \"fft()\"\nNote:\n Supports torch.half on CUDA with GPU Architecture SM53 or\n greater. However it only supports powers of 2 signal length in\n every transformed dimension.\n\nParameters:\n * input (Tensor) -- the real input tensor\n * **n** (*int**, **optional*) -- Signal length. If given, the\n input will either be zero-padded or trimmed to this length\n before computing the real FFT.\n\n * **dim** (*int**, **optional*) -- The dimension along which to\n take the one dimensional real FFT.\n\n * **norm** (*str**, **optional*) --\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfft.html", "category": "pytorch docs"} {"text": "\nnorm (str, optional) --Normalization mode. For the forward transform (\"rfft()\"),\nthese correspond to:\n\n* \"\"forward\"\" - normalize by \"1/n\"\n\n* \"\"backward\"\" - no normalization\n\n* \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the FFT\n orthonormal)\n\nCalling the backward transform (\"irfft()\") with the same\nnormalization mode will apply an overall normalization of\n\"1/n\" between the two transforms. 
This is required to make\n\"irfft()\" the exact inverse.\n\nDefault is \"\"backward\"\" (no normalization).\n\n\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nt = torch.arange(4)\nt\n tensor([0, 1, 2, 3])\ntorch.fft.rfft(t)\n tensor([ 6.+0.j, -2.+2.j, -2.+0.j])\n\n\n\nCompare against the full output from \"fft()\":\n\n\n\ntorch.fft.fft(t)\n tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])\n\n\n\nNotice that the symmetric element \"T[-1] == T[1].conj()\" is", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfft.html", "category": "pytorch docs"} {"text": "omitted. At the Nyquist frequency \"T[-2] == T[2]\" is it's own\n symmetric pair, and therefore must always be real-valued.", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfft.html", "category": "pytorch docs"} {"text": "torch.nanmean\ntorch.nanmean(input, dim=None, keepdim=False, *, dtype=None, out=None) -> Tensor\nComputes the mean of all non-NaN elements along the specified\n dimensions.\nThis function is identical to \"torch.mean()\" when there are no\n NaN values in the \"input\" tensor. In the presence of NaN,\n \"torch.mean()\" will propagate the NaN to the output whereas\n \"torch.nanmean()\" will ignore the NaN values (torch.nanmean(a)\n is equivalent to torch.mean(a[~a.isnan()])).\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints**, **optional*) -- the\n dimension or dimensions to reduce. If \"None\", all dimensions\n are reduced.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nanmean.html", "category": "pytorch docs"} {"text": "are reduced.\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. 
Default: None.\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nSee also:\n \"torch.mean()\" computes the mean value, propagating *NaN*.\n\nExample:\n >>> x = torch.tensor([[torch.nan, 1, 2], [1, 2, 3]])\n >>> x.mean()\n tensor(nan)\n >>> x.nanmean()\n tensor(1.8000)\n >>> x.mean(dim=0)\n tensor([ nan, 1.5000, 2.5000])\n >>> x.nanmean(dim=0)\n tensor([1.0000, 1.5000, 2.5000])\n\n # If all elements in the reduced dimensions are NaN then the result is NaN\n >>> torch.tensor([torch.nan]).nanmean()\n tensor(nan)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nanmean.html", "category": "pytorch docs"} {"text": "Identity\nclass torch.nn.utils.prune.Identity\nUtility pruning method that does not prune any units but generates\n the pruning parametrization with a mask of ones.\nclassmethod apply(module, name)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n\n Parameters:\n * **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\napply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n\n Parameters:\n **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n Returns:\n pruned version of the input tensor\n\n Return type:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.Identity.html", "category": "pytorch docs"} {"text": "Return type:\n pruned_tensor (torch.Tensor)\nprune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n\n Parameters:\n * **t** (*torch.Tensor*) -- tensor to prune (of same\n dimensions as \"default_mask\").\n\n * **importance_scores** (*torch.Tensor*) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n\n * **default_mask** (*torch.Tensor**, **optional*) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.Identity.html", "category": "pytorch docs"} {"text": "Returns:\n pruned version of tensor \"t\".\nremove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n\n Note:\n\n Pruning itself is NOT undone or reversed!\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.Identity.html", "category": "pytorch docs"} {"text": "torch.kthvalue\ntorch.kthvalue(input, k, dim=None, keepdim=False, *, out=None)\nReturns a namedtuple \"(values, indices)\" where \"values\" is the \"k\"\n th smallest element of each row of the \"input\" tensor in the given\n dimension \"dim\". 
And \"indices\" is the index location of each\n element found.\nIf \"dim\" is not given, the last dimension of the input is chosen.\nIf \"keepdim\" is \"True\", both the \"values\" and \"indices\" tensors are\n the same size as \"input\", except in the dimension \"dim\" where they\n are of size 1. Otherwise, \"dim\" is squeezed (see\n \"torch.squeeze()\"), resulting in both the \"values\" and \"indices\"\n tensors having 1 fewer dimension than the \"input\" tensor.\nNote:\n When \"input\" is a CUDA tensor and there are multiple valid \"k\" th\n values, this function may nondeterministically return \"indices\"\n for any of them.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **k** (*int*) -- k for the k-th smallest element\n", "source": "https://pytorch.org/docs/stable/generated/torch.kthvalue.html", "category": "pytorch docs"} {"text": "\n\ndim (int, optional) -- the dimension to find the kth\n value along\n\nkeepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n\n\n\nKeyword Arguments:\n out (tuple, optional) -- the output tuple of (Tensor,\n LongTensor) can be optionally given to be used as output buffers\nExample:\n >>> x = torch.arange(1., 6.)\n >>> x\n tensor([ 1., 2., 3., 4., 5.])\n >>> torch.kthvalue(x, 4)\n torch.return_types.kthvalue(values=tensor(4.), indices=tensor(3))\n\n >>> x=torch.arange(1.,7.).resize_(2,3)\n >>> x\n tensor([[ 1., 2., 3.],\n [ 4., 5., 6.]])\n >>> torch.kthvalue(x, 2, 0, True)\n torch.return_types.kthvalue(values=tensor([[4., 5., 6.]]), indices=tensor([[1, 1, 1]]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.kthvalue.html", "category": "pytorch docs"} {"text": "torch.foreach_sinh\ntorch.foreach_sinh(self: List[Tensor]) -> None\nApply \"torch.sinh()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sinh_.html", "category": "pytorch docs"} {"text": "torch.Tensor.nanmedian\nTensor.nanmedian(dim=None, keepdim=False)\nSee \"torch.nanmedian()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nanmedian.html", "category": "pytorch docs"} {"text": "torch.Tensor.fix_\nTensor.fix_() -> Tensor\nIn-place version of \"fix()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fix_.html", "category": "pytorch docs"} {"text": "torch.Tensor.nonzero\nTensor.nonzero() -> LongTensor\nSee \"torch.nonzero()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nonzero.html", "category": "pytorch docs"} {"text": "interpolate\nclass torch.ao.nn.quantized.functional.interpolate(input, size=None, scale_factor=None, mode='nearest', align_corners=None)\nDown/up samples the input to either the given \"size\" or the given\n \"scale_factor\"\nSee \"torch.nn.functional.interpolate()\" for implementation details.\nThe input dimensions are interpreted in the form: mini-batch x\n channels x [optional depth] x [optional height] x width.\nNote:\n The input quantization parameters propagate to the output.\n\nNote:\n Only 2D/3D input is supported for quantized inputs\n\nNote:\n Only the following modes are supported for the quantized inputs:\n\n * *bilinear*\n\n * *nearest*\n\nParameters:\n * input (Tensor) -- the input tensor\n * **size** (*int** or **Tuple**[**int**] or **Tuple**[**int**,\n **int**] or **Tuple**[**int**, **int**, **int**]*) -- output\n spatial size.\n\n * **scale_factor** (*float** or **Tuple**[**float**]*) --\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.interpolate.html", 
"category": "pytorch docs"} {"text": "multiplier for spatial size. Has to match input size if it is\n a tuple.\n * **mode** (*str*) -- algorithm used for upsampling: \"'nearest'\"\n | \"'bilinear'\"\n\n * **align_corners** (*bool**, **optional*) -- Geometrically, we\n consider the pixels of the input and output as squares rather\n than points. If set to \"True\", the input and output tensors\n are aligned by the center points of their corner pixels,\n preserving the values at the corner pixels. If set to \"False\",\n the input and output tensors are aligned by the corner points\n of their corner pixels, and the interpolation uses edge value\n padding for out-of-boundary values, making this operation\n *independent* of input size when \"scale_factor\" is kept the\n same. This only has an effect when \"mode\" is \"'bilinear'\".\n Default: \"False\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.interpolate.html", "category": "pytorch docs"} {"text": "torch.mul\ntorch.mul(input, other, *, out=None) -> Tensor\nMultiplies \"input\" by \"other\".\n \\text{out}_i = \\text{input}_i \\times \\text{other}_i\n\nSupports broadcasting to a common shape, type promotion, and\n integer, float, and complex inputs.\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor** or **Number*) --\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExamples:\n >>> a = torch.randn(3)\n >>> a\n tensor([ 0.2015, -0.4255, 2.6087])\n >>> torch.mul(a, 100)\n tensor([ 20.1494, -42.5491, 260.8663])\n\n >>> b = torch.randn(4, 1)\n >>> b\n tensor([[ 1.1207],\n [-0.3137],\n [ 0.0700],\n [ 0.8378]])\n >>> c = torch.randn(1, 4)\n >>> c\n tensor([[ 0.5146, 0.1216, -0.5244, 2.2382]])\n >>> torch.mul(b, c)\n tensor([[ 0.5767, 0.1363, -0.5877, 2.5083],\n [-0.1614, -0.0382, 0.1645, -0.7021],\n", "source": "https://pytorch.org/docs/stable/generated/torch.mul.html", "category": "pytorch docs"} {"text": "[ 0.0360, 0.0085, -0.0367, 0.1567],\n [ 0.4312, 0.1019, -0.4394, 1.8753]])", "source": "https://pytorch.org/docs/stable/generated/torch.mul.html", "category": "pytorch docs"} {"text": "torch.nn.functional.adaptive_avg_pool3d\ntorch.nn.functional.adaptive_avg_pool3d(input, output_size)\nApplies a 3D adaptive average pooling over an input signal composed\n of several input planes.\nSee \"AdaptiveAvgPool3d\" for details and output shape.\nParameters:\n output_size (None) -- the target output size (single\n integer or triple-integer tuple)\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_avg_pool3d.html", "category": "pytorch docs"} {"text": "torch.use_deterministic_algorithms\ntorch.use_deterministic_algorithms(mode, *, warn_only=False)\nSets whether PyTorch operations must use \"deterministic\"\n algorithms. That is, algorithms which, given the same input, and\n when run on the same software and hardware, always produce the same\n output. When enabled, operations will use deterministic algorithms\n when available, and if only nondeterministic algorithms are\n available they will throw a \"RuntimeError\" when called.\nNote:\n This setting alone is not always enough to make an application\n reproducible. 
Refer to Reproducibility for more information.\n\nNote:\n \"torch.set_deterministic_debug_mode()\" offers an alternative\n interface for this feature.\n\nThe following normally-nondeterministic operations will act\n deterministically when \"mode=True\":\n * \"torch.nn.Conv1d\" when called on CUDA tensor\n\n * \"torch.nn.Conv2d\" when called on CUDA tensor\n", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"} {"text": "\n\n\"torch.nn.Conv3d\" when called on CUDA tensor\n\n\n\"torch.nn.ConvTranspose1d\" when called on CUDA tensor\n\n\n\"torch.nn.ConvTranspose2d\" when called on CUDA tensor\n\n\n\"torch.nn.ConvTranspose3d\" when called on CUDA tensor\n\n\n\"torch.bmm()\" when called on sparse-dense CUDA tensors\n\n\n\"torch.Tensor.getitem()\" when attempting to differentiate\n a CPU tensor and the index is a list of tensors\n\n\n\"torch.Tensor.index_put()\" with \"accumulate=False\"\n\n\n\"torch.Tensor.index_put()\" with \"accumulate=True\" when called\n on a CPU tensor\n\n\n\"torch.Tensor.put_()\" with \"accumulate=True\" when called on a\n CPU tensor\n\n\n\"torch.Tensor.scatter_add_()\" when called on a CUDA tensor\n\n\n\"torch.gather()\" when called on a CUDA tensor that requires\n grad\n\n\n\"torch.index_add()\" when called on CUDA tensor\n\n\n\"torch.index_select()\" when attempting to differentiate a CUDA\n tensor\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"} {"text": "tensor\n * \"torch.repeat_interleave()\" when attempting to differentiate a\n CUDA tensor\n\n * \"torch.Tensor.index_copy()\" when called on a CPU or CUDA\n tensor\n\nThe following normally-nondeterministic operations will throw a\n \"RuntimeError\" when \"mode=True\":\n * \"torch.nn.AvgPool3d\" when attempting to differentiate a CUDA\n tensor\n\n * \"torch.nn.AdaptiveAvgPool2d\" when attempting to differentiate\n a CUDA tensor\n\n * \"torch.nn.AdaptiveAvgPool3d\" when attempting to differentiate\n a CUDA tensor\n\n * \"torch.nn.MaxPool3d\" when attempting to differentiate a CUDA\n tensor\n\n * \"torch.nn.AdaptiveMaxPool2d\" when attempting to differentiate\n a CUDA tensor\n\n * \"torch.nn.FractionalMaxPool2d\" when attempting to\n differentiate a CUDA tensor\n\n * \"torch.nn.FractionalMaxPool3d\" when attempting to\n differentiate a CUDA tensor\n\n * \"torch.nn.MaxUnpool1d\"\n\n * \"torch.nn.MaxUnpool2d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"} {"text": "\n\n\"torch.nn.MaxUnpool2d\"\n\n\n\"torch.nn.MaxUnpool3d\"\n\n\n\"torch.nn.functional.interpolate()\" when attempting to\n differentiate a CUDA tensor and one of the following modes is\n used:\n\n\n\"linear\"\n\n\n\"bilinear\"\n\n\n\"bicubic\"\n\n\n\"trilinear\"\n\n\n\n\n\"torch.nn.ReflectionPad1d\" when attempting to differentiate a\n CUDA tensor\n\n\n\"torch.nn.ReflectionPad2d\" when attempting to differentiate a\n CUDA tensor\n\n\n\"torch.nn.ReflectionPad3d\" when attempting to differentiate a\n CUDA tensor\n\n\n\"torch.nn.ReplicationPad1d\" when attempting to differentiate a\n CUDA tensor\n\n\n\"torch.nn.ReplicationPad2d\" when attempting to differentiate a\n CUDA tensor\n\n\n\"torch.nn.ReplicationPad3d\" when attempting to differentiate a\n CUDA tensor\n\n\n\"torch.nn.NLLLoss\" when called on a CUDA tensor\n\n\n\"torch.nn.CTCLoss\" when attempting to differentiate a CUDA\n tensor\n\n\n\n", "source": 
"https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"} {"text": "tensor\n * \"torch.nn.EmbeddingBag\" when attempting to differentiate a\n CUDA tensor when \"mode='max'\"\n\n * \"torch.Tensor.put_()\" when \"accumulate=False\"\n\n * \"torch.Tensor.put_()\" when \"accumulate=True\" and called on a\n CUDA tensor\n\n * \"torch.histc()\" when called on a CUDA tensor\n\n * \"torch.bincount()\" when called on a CUDA tensor\n\n * \"torch.kthvalue()\" with called on a CUDA tensor\n\n * \"torch.median()\" with indices output when called on a CUDA\n tensor\n\n * \"torch.nn.functional.grid_sample()\" when attempting to\n differentiate a CUDA tensor\n\n * \"torch.cumsum()\" when called on a CUDA tensor when dtype is\n floating point or complex\n\nA handful of CUDA operations are nondeterministic if the CUDA\n version is 10.2 or greater, unless the environment variable\n \"CUBLAS_WORKSPACE_CONFIG=:4096:8\" or\n \"CUBLAS_WORKSPACE_CONFIG=:16:8\" is set. See the CUDA documentation", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"} {"text": "for more details: https://docs.nvidia.com/cuda/cublas/index.html#c\n ublasApi_reproducibility If one of these environment variable\n configurations is not set, a \"RuntimeError\" will be raised from\n these operations when called with CUDA tensors:\n * \"torch.mm()\"\n\n * \"torch.mv()\"\n\n * \"torch.bmm()\"\n\nNote that deterministic operations tend to have worse performance\n than nondeterministic operations.\nNote:\n This flag does not detect or prevent nondeterministic behavior\n caused by calling an inplace operation on a tensor with an\n internal memory overlap or by giving such a tensor as the \"out\"\n argument for an operation. In these cases, multiple writes of\n different data may target a single memory location, and the order\n of writes is not guaranteed.\n\nParameters:\n mode (\"bool\") -- If True, makes potentially nondeterministic\n operations switch to a deterministic algorithm or throw a", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"} {"text": "runtime error. If False, allows nondeterministic operations.\nKeyword Arguments:\n warn_only (\"bool\", optional) -- If True, operations that do\n not have a deterministic implementation will throw a warning\n instead of an error. Default: \"False\"\nExample:\n >>> torch.use_deterministic_algorithms(True)\n\n # Forward mode nondeterministic error\n >>> torch.randn(10, device='cuda').kthvalue(0)\n ...\n RuntimeError: kthvalue CUDA does not have a deterministic implementation...\n\n # Backward mode nondeterministic error\n >>> torch.nn.AvgPool3d(1)(torch.randn(3, 4, 5, 6, requires_grad=True).cuda()).sum().backward()\n ...\n RuntimeError: avg_pool3d_backward_cuda does not have a deterministic implementation...\n", "source": "https://pytorch.org/docs/stable/generated/torch.use_deterministic_algorithms.html", "category": "pytorch docs"} {"text": "torch.as_strided\ntorch.as_strided(input, size, stride, storage_offset=None) -> Tensor\nCreate a view of an existing torch.Tensor \"input\" with specified\n \"size\", \"stride\" and \"storage_offset\".\nWarning:\n Prefer using other view functions, like \"torch.Tensor.expand()\",\n to setting a view's strides manually with *as_strided*, as this\n function's behavior depends on the implementation of a tensor's\n storage. 
The constructed view of the storage must only refer to\n elements within the storage or a runtime error will be thrown,\n and if the view is \"overlapped\" (with multiple indices referring\n to the same element in memory) its behavior is undefined.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **size** (*tuple** or **ints*) -- the shape of the output\n tensor\n\n * **stride** (*tuple** or **ints*) -- the stride of the output\n tensor\n\n * **storage_offset** (*int**, **optional*) -- the offset in the\n", "source": "https://pytorch.org/docs/stable/generated/torch.as_strided.html", "category": "pytorch docs"} {"text": "underlying storage of the output tensor. If \"None\", the\n storage_offset of the output tensor will match the input\n tensor.\nExample:\n >>> x = torch.randn(3, 3)\n >>> x\n tensor([[ 0.9039, 0.6291, 1.0795],\n [ 0.1586, 2.1939, -0.4900],\n [-0.1909, -0.7503, 1.9355]])\n >>> t = torch.as_strided(x, (2, 2), (1, 2))\n >>> t\n tensor([[0.9039, 1.0795],\n [0.6291, 0.1586]])\n >>> t = torch.as_strided(x, (2, 2), (1, 2), 1)\n tensor([[0.6291, 0.1586],\n [1.0795, 2.1939]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.as_strided.html", "category": "pytorch docs"} {"text": "torch.einsum\ntorch.einsum(equation, *operands) -> Tensor\nSums the product of the elements of the input \"operands\" along\n dimensions specified using a notation based on the Einstein\n summation convention.\nEinsum allows computing many common multi-dimensional linear\n algebraic array operations by representing them in a short-hand\n format based on the Einstein summation convention, given by\n \"equation\". The details of this format are described below, but the\n general idea is to label every dimension of the input \"operands\"\n with some subscript and define which subscripts are part of the\n output. The output is then computed by summing the product of the\n elements of the \"operands\" along the dimensions whose subscripts\n are not part of the output. For example, matrix multiplication can\n be computed using einsum as torch.einsum(\"ij,jk->ik\", A, B).\n Here, j is the summation subscript and i and k the output\n subscripts (see section below for more details on why).", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"} {"text": "Equation:\n The \"equation\" string specifies the subscripts (letters in\n *[a-zA-Z]*) for each dimension of the input \"operands\" in the\n same order as the dimensions, separating subscripts for each\n operand by a comma (','), e.g. *'ij,jk'* specify subscripts for\n two 2D operands. The dimensions labeled with the same subscript\n must be broadcastable, that is, their size must either match or\n be *1*. The exception is if a subscript is repeated for the same\n input operand, in which case the dimensions labeled with this\n subscript for this operand must match in size and the operand\n will be replaced by its diagonal along these dimensions. The\n subscripts that appear exactly once in the \"equation\" will be\n part of the output, sorted in increasing alphabetical order. 
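As a quick illustration of the implicit output just described, a minimal sketch (the shapes are arbitrary):

    import torch

    A, B = torch.randn(2, 3), torch.randn(3, 4)
    # 'i' and 'k' each appear exactly once, so the implicit output is 'ik'
    # (alphabetical order) -- an ordinary matrix multiplication.
    out = torch.einsum('ij,jk', A, B)
    assert torch.allclose(out, torch.einsum('ij,jk->ik', A, B))
    assert torch.allclose(out, A @ B)
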
The\n output is computed by multiplying the input \"operands\" element-\n wise, with their dimensions aligned based on the subscripts, and\n", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"} {"text": "then summing out the dimensions whose subscripts are not part of\n the output.\n Optionally, the output subscripts can be explicitly defined by\n adding an arrow ('->') at the end of the equation followed by\n the subscripts for the output. For instance, the following\n equation computes the transpose of a matrix multiplication:\n 'ij,jk->ki'. The output subscripts must appear at least once for\n some input operand and at most once for the output.\n\n Ellipsis ('...') can be used in place of subscripts to broadcast\n the dimensions covered by the ellipsis. Each input operand may\n contain at most one ellipsis which will cover the dimensions not\n covered by subscripts, e.g. for an input operand with 5\n dimensions, the ellipsis in the equation *'ab...c'* cover the\n third and fourth dimensions. The ellipsis does not need to cover\n the same number of dimensions across the \"operands\" but the\n", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"} {"text": "'shape' of the ellipsis (the size of the dimensions covered by\n them) must broadcast together. If the output is not explicitly\n defined with the arrow ('->') notation, the ellipsis will come\n first in the output (left-most dimensions), before the subscript\n labels that appear exactly once for the input operands. e.g. the\n following equation implements batch matrix multiplication\n '...ij,...jk'.\n A few final notes: the equation may contain whitespaces between\n the different elements (subscripts, ellipsis, arrow and comma)\n but something like *'. . .'* is not valid. An empty string *''*\n is valid for scalar operands.\n\nNote:\n \"torch.einsum\" handles ellipsis ('...') differently from NumPy in\n that it allows dimensions covered by the ellipsis to be summed\n over, that is, ellipsis are not required to be part of the\n output.\n\nNote:\n This function uses opt_einsum (https://optimized-\n", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"} {"text": "einsum.readthedocs.io/en/stable/) to speed up computation or to\n consume less memory by optimizing contraction order. This\n optimization occurs when there are at least three inputs, since\n the order does not matter otherwise. Note that finding the\n optimal path is an NP-hard problem, thus, opt_einsum relies on\n different heuristics to achieve near-optimal results. If\n opt_einsum is not available, the default order is to contract\n from left to right.To bypass this default behavior, add the\n following line to disable the usage of opt_einsum and skip path\n calculation: torch.backends.opt_einsum.enabled = FalseTo\n specify which strategy you'd like for opt_einsum to compute the\n contraction path, add the following line:\n torch.backends.opt_einsum.strategy = 'auto'. The default\n strategy is 'auto', and we also support 'greedy' and 'optimal'.\n Disclaimer that the runtime of 'optimal' is factorial in the", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"} {"text": "number of inputs! 
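These switches can be set directly; a minimal sketch:

    import torch

    # Skip opt_einsum path optimization entirely (operands are contracted left to right).
    torch.backends.opt_einsum.enabled = False

    # Or keep it enabled and choose how the contraction path is searched.
    torch.backends.opt_einsum.enabled = True
    torch.backends.opt_einsum.strategy = 'greedy'   # 'auto' (default), 'greedy', or 'optimal'
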
See more details in the opt_einsum\n documentation (https://optimized-\n einsum.readthedocs.io/en/stable/path_finding.html).\nNote:\n As of PyTorch 1.10 \"torch.einsum()\" also supports the sublist\n format (see examples below). In this format, subscripts for each\n operand are specified by sublists, list of integers in the range\n [0, 52). These sublists follow their operands, and an extra\n sublist can appear at the end of the input to specify the\n output's subscripts., e.g. *torch.einsum(op1, sublist1, op2,\n sublist2, ..., [subslist_out])*. Python's *Ellipsis* object may\n be provided in a sublist to enable broadcasting as described in\n the Equation section above.\n\nParameters:\n * equation (str) -- The subscripts for the Einstein\n summation.\n * **operands** (*List**[**Tensor**]*) -- The tensors to compute\n the Einstein summation of.\n\nReturn type:\n Tensor\nExamples:\n >>> # trace\n", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"} {"text": "Tensor\nExamples:\n >>> # trace\n >>> torch.einsum('ii', torch.randn(4, 4))\n tensor(-1.2104)\n\n >>> # diagonal\n >>> torch.einsum('ii->i', torch.randn(4, 4))\n tensor([-0.1034, 0.7952, -0.2433, 0.4545])\n\n >>> # outer product\n >>> x = torch.randn(5)\n >>> y = torch.randn(4)\n >>> torch.einsum('i,j->ij', x, y)\n tensor([[ 0.1156, -0.2897, -0.3918, 0.4963],\n [-0.3744, 0.9381, 1.2685, -1.6070],\n [ 0.7208, -1.8058, -2.4419, 3.0936],\n [ 0.1713, -0.4291, -0.5802, 0.7350],\n [ 0.5704, -1.4290, -1.9323, 2.4480]])\n\n >>> # batch matrix multiplication\n >>> As = torch.randn(3, 2, 5)\n >>> Bs = torch.randn(3, 5, 4)\n >>> torch.einsum('bij,bjk->bik', As, Bs)\n tensor([[[-1.0564, -1.5904, 3.2023, 3.1271],\n [-1.6706, -0.8097, -0.8025, -2.1183]],\n\n [[ 4.2239, 0.3107, -0.5756, -0.2354],\n [-1.4558, -0.3460, 1.5087, -0.8530]],\n", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"} {"text": "[[ 2.8153, 1.8787, -4.3839, -1.2112],\n [ 0.3728, -2.1131, 0.0921, 0.8305]]])\n >>> # with sublist format and ellipsis\n >>> torch.einsum(As, [..., 0, 1], Bs, [..., 1, 2], [..., 0, 2])\n tensor([[[-1.0564, -1.5904, 3.2023, 3.1271],\n [-1.6706, -0.8097, -0.8025, -2.1183]],\n\n [[ 4.2239, 0.3107, -0.5756, -0.2354],\n [-1.4558, -0.3460, 1.5087, -0.8530]],\n\n [[ 2.8153, 1.8787, -4.3839, -1.2112],\n [ 0.3728, -2.1131, 0.0921, 0.8305]]])\n\n >>> # batch permute\n >>> A = torch.randn(2, 3, 4, 5)\n >>> torch.einsum('...ij->...ji', A).shape\n torch.Size([2, 3, 5, 4])\n\n >>> # equivalent to torch.nn.functional.bilinear\n >>> A = torch.randn(3, 5, 4)\n >>> l = torch.randn(2, 5)\n >>> r = torch.randn(2, 4)\n >>> torch.einsum('bn,anm,bm->ba', l, A, r)\n tensor([[-0.3430, -5.2405, 0.4494],\n [ 0.3311, 5.5201, -3.0356]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.einsum.html", "category": "pytorch docs"} {"text": "torch.less_equal\ntorch.less_equal(input, other, *, out=None) -> Tensor\nAlias for \"torch.le()\".", "source": "https://pytorch.org/docs/stable/generated/torch.less_equal.html", "category": "pytorch docs"} {"text": "torch.nn.functional.margin_ranking_loss\ntorch.nn.functional.margin_ranking_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') -> Tensor\nSee \"MarginRankingLoss\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.margin_ranking_loss.html", "category": "pytorch docs"} {"text": 
"torch.linalg.ldl_factor_ex\ntorch.linalg.ldl_factor_ex(A, *, hermitian=False, check_errors=False, out=None)\nThis is a version of \"ldl_factor()\" that does not perform error\n checks unless \"check_errors\"= True. It also returns the \"info\"\n tensor returned by LAPACK's sytrf. \"info\" stores integer error\n codes from the backend library. A positive integer indicates the\n diagonal element of D that is zero. Division by 0 will occur if the\n result is used for solving a system of linear equations. \"info\"\n filled with zeros indicates that the factorization was successful.\n If \"check_errors=True\" and \"info\" contains positive integers, then\n a RuntimeError is thrown.\nNote:\n When the inputs are on a CUDA device, this function synchronizes\n only when \"check_errors\"*= True*.\n\nWarning:\n This function is \"experimental\" and it may change in a future\n PyTorch release.\n\nParameters:\n A (Tensor) -- tensor of shape (*, n, n) where * is zero or", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor_ex.html", "category": "pytorch docs"} {"text": "more batch dimensions consisting of symmetric or Hermitian\n matrices. (*, n, n) where *** is one or more batch dimensions.\nKeyword Arguments:\n * hermitian (bool, optional) -- whether to consider\n the input to be Hermitian or symmetric. For real-valued\n matrices, this switch has no effect. Default: False.\n * **check_errors** (*bool**, **optional*) -- controls whether to\n check the content of \"info\" and raise an error if it is non-\n zero. Default: *False*.\n\n * **out** (*tuple**, **optional*) -- tuple of three tensors to\n write the output to. Ignored if *None*. Default: *None*.\n\nReturns:\n A named tuple (LD, pivots, info).\nExamples:\n >>> A = torch.randn(3, 3)\n >>> A = A @ A.mT # make symmetric\n >>> A\n tensor([[7.2079, 4.2414, 1.9428],\n [4.2414, 3.4554, 0.3264],\n [1.9428, 0.3264, 1.3823]])\n >>> LD, pivots, info = torch.linalg.ldl_factor_ex(A)\n >>> LD\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor_ex.html", "category": "pytorch docs"} {"text": "\n\n\nLD\n tensor([[ 7.2079, 0.0000, 0.0000],\n [ 0.5884, 0.9595, 0.0000],\n [ 0.2695, -0.8513, 0.1633]])\n >>> pivots\n tensor([1, 2, 3], dtype=torch.int32)\n >>> info\n tensor(0, dtype=torch.int32)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_factor_ex.html", "category": "pytorch docs"} {"text": "torch.nn.utils.prune.random_unstructured\ntorch.nn.utils.prune.random_unstructured(module, name, amount)\nPrunes tensor corresponding to parameter called \"name\" in \"module\"\n by removing the specified \"amount\" of (currently unpruned) units\n selected at random. Modifies module in place (and also return the\n modified module) by:\n\n\nadding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n\n\nreplacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n\n\nParameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\n * **amount** (*int** or **float*) -- quantity of parameters to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.random_unstructured.html", "category": "pytorch docs"} {"text": "prune. 
If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\nReturns:\n modified (i.e. pruned) version of the input module\nReturn type:\n module (nn.Module)\n-[ Examples ]-\n\n\n\nm = prune.random_unstructured(nn.Linear(2, 3), 'weight', amount=1)\ntorch.sum(m.weight_mask == 0)\n tensor(1)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.random_unstructured.html", "category": "pytorch docs"} {"text": "clamp\nclass torch.ao.nn.quantized.functional.clamp(input, min_, max_)\nfloat(input, min_, max_) -> Tensor\nApplies the clamp function element-wise. See \"clamp\" for more\n details.\nParameters:\n * input (Tensor) -- quantized input\n * **min** -- minimum value for clamping\n\n * **max** -- maximum value for clamping\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.clamp.html", "category": "pytorch docs"} {"text": "torch.Tensor.scatter_\nTensor.scatter_(dim, index, src, reduce=None) -> Tensor\nWrites all values from the tensor \"src\" into \"self\" at the indices\n specified in the \"index\" tensor. For each value in \"src\", its\n output index is specified by its index in \"src\" for \"dimension !=\n dim\" and by the corresponding value in \"index\" for \"dimension =\n dim\".\nFor a 3-D tensor, \"self\" is updated as:\n self[index[i][j][k]][j][k] = src[i][j][k] # if dim == 0\n self[i][index[i][j][k]][k] = src[i][j][k] # if dim == 1\n self[i][j][index[i][j][k]] = src[i][j][k] # if dim == 2\n\nThis is the reverse operation of the manner described in\n \"gather()\".\n\"self\", \"index\" and \"src\" (if it is a Tensor) should all have the\n same number of dimensions. It is also required that \"index.size(d)\n <= src.size(d)\" for all dimensions \"d\", and that \"index.size(d) <=\n self.size(d)\" for all dimensions \"d != dim\". Note that \"index\" and\n \"src\" do not broadcast.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html", "category": "pytorch docs"} {"text": "\"src\" do not broadcast.\nMoreover, as for \"gather()\", the values of \"index\" must be between\n \"0\" and \"self.size(dim) - 1\" inclusive.\nWarning:\n When indices are not unique, the behavior is non-deterministic\n (one of the values from \"src\" will be picked arbitrarily) and the\n gradient will be incorrect (it will be propagated to all\n locations in the source that correspond to the same index)!\n\nNote:\n The backward pass is implemented only for \"src.shape ==\n index.shape\".\n\nAdditionally accepts an optional \"reduce\" argument that allows\n specification of an optional reduction operation, which is applied\n to all values in the tensor \"src\" into \"self\" at the indices\n specified in the \"index\". 
For each value in \"src\", the reduction\n operation is applied to an index in \"self\" which is specified by\n its index in \"src\" for \"dimension != dim\" and by the corresponding\n value in \"index\" for \"dimension = dim\".\nGiven a 3-D tensor and reduction using the multiplication", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html", "category": "pytorch docs"} {"text": "operation, \"self\" is updated as:\n self[index[i][j][k]][j][k] *= src[i][j][k] # if dim == 0\n self[i][index[i][j][k]][k] *= src[i][j][k] # if dim == 1\n self[i][j][index[i][j][k]] *= src[i][j][k] # if dim == 2\n\nReducing with the addition operation is the same as using\n \"scatter_add_()\".\nParameters:\n * dim (int) -- the axis along which to index\n * **index** (*LongTensor*) -- the indices of elements to\n scatter, can be either empty or of the same dimensionality as\n \"src\". When empty, the operation returns \"self\" unchanged.\n\n * **src** (*Tensor** or **float*) -- the source element(s) to\n scatter.\n\n * **reduce** (*str**, **optional*) -- reduction operation to\n apply, can be either \"'add'\" or \"'multiply'\".\n\nExample:\n >>> src = torch.arange(1, 11).reshape((2, 5))\n >>> src\n tensor([[ 1, 2, 3, 4, 5],\n [ 6, 7, 8, 9, 10]])\n >>> index = torch.tensor([[0, 1, 2, 0]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html", "category": "pytorch docs"} {"text": "\n\n\nindex = torch.tensor([[0, 1, 2, 0]])\n >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(0, index, src)\n tensor([[1, 0, 0, 4, 0],\n [0, 2, 0, 0, 0],\n [0, 0, 3, 0, 0]])\n >>> index = torch.tensor([[0, 1, 2], [0, 1, 4]])\n >>> torch.zeros(3, 5, dtype=src.dtype).scatter_(1, index, src)\n tensor([[1, 2, 3, 0, 0],\n [6, 7, 0, 0, 8],\n [0, 0, 0, 0, 0]])\n\n\n\n >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]),\n ... 1.23, reduce='multiply')\n tensor([[2.0000, 2.0000, 2.4600, 2.0000],\n [2.0000, 2.0000, 2.0000, 2.4600]])\n >>> torch.full((2, 4), 2.).scatter_(1, torch.tensor([[2], [3]]),\n ... 
1.23, reduce='add')\n tensor([[2.0000, 2.0000, 3.2300, 2.0000],\n [2.0000, 2.0000, 2.0000, 3.2300]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_.html", "category": "pytorch docs"} {"text": "torch.ceil\ntorch.ceil(input, *, out=None) -> Tensor\nReturns a new tensor with the ceil of the elements of \"input\", the\n smallest integer greater than or equal to each element.\nFor integer inputs, follows the array-api convention of returning a\n copy of the input tensor.\n \\text{out}_{i} = \\left\\lceil \\text{input}_{i} \\right\\rceil\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([-0.6341, -1.4208, -1.0900, 0.5826])\n >>> torch.ceil(a)\n tensor([-0., -1., -1., 1.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.ceil.html", "category": "pytorch docs"} {"text": "torch.Tensor.remainder_\nTensor.remainder_(divisor) -> Tensor\nIn-place version of \"remainder()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.remainder_.html", "category": "pytorch docs"} {"text": "torch.real\ntorch.real(input) -> Tensor\nReturns a new tensor containing real values of the \"self\" tensor.\n The returned tensor and \"self\" share the same underlying storage.\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> x=torch.randn(4, dtype=torch.cfloat)\n >>> x\n tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])\n >>> x.real\n tensor([ 0.3100, -0.5445, -1.6492, -0.0638])\n", "source": "https://pytorch.org/docs/stable/generated/torch.real.html", "category": "pytorch docs"} {"text": "torch.jit.isinstance\ntorch.jit.isinstance(obj, target_type)\nThis function provides for container type refinement in\n TorchScript. It can refine parameterized containers of the List,\n Dict, Tuple, and Optional types. E.g. \"List[str]\", \"Dict[str,\n List[torch.Tensor]]\", \"Optional[Tuple[int,str,int]]\". 
It can also\n refine basic types such as bools and ints that are available in\n TorchScript.\nParameters:\n * obj -- object to refine the type of\n * **target_type** -- type to try to refine obj to\n\nReturns:\n True if obj was successfully refined to the type of target_type,\n False otherwise with no new type refinement\nReturn type:\n \"bool\"\nExample (using \"torch.jit.isinstance\" for type refinement): ..\n testcode:\n import torch\n from typing import Any, Dict, List\n\n class MyModule(torch.nn.Module):\n def __init__(self):\n super(MyModule, self).__init__()\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.isinstance.html", "category": "pytorch docs"} {"text": "super(MyModule, self).init()\n def forward(self, input: Any): # note the Any type\n if torch.jit.isinstance(input, List[torch.Tensor]):\n for t in input:\n y = t.clamp(0, 0.5)\n elif torch.jit.isinstance(input, Dict[str, str]):\n for val in input.values():\n print(val)\n\n m = torch.jit.script(MyModule())\n x = [torch.rand(3,3), torch.rand(4,3)]\n m(x)\n y = {\"key1\":\"val1\",\"key2\":\"val2\"}\n m(y)\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.isinstance.html", "category": "pytorch docs"} {"text": "torch.Tensor.index_fill_\nTensor.index_fill_(dim, index, value) -> Tensor\nFills the elements of the \"self\" tensor with value \"value\" by\n selecting the indices in the order given in \"index\".\nParameters:\n * dim (int) -- dimension along which to index\n * **index** (*LongTensor*) -- indices of \"self\" tensor to fill\n in\n\n * **value** (*float*) -- the value to fill with\n\nExample::\n >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype=torch.float)\n >>> index = torch.tensor([0, 2])\n >>> x.index_fill_(1, index, -1)\n tensor([[-1., 2., -1.],\n [-1., 5., -1.],\n [-1., 8., -1.]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_fill_.html", "category": "pytorch docs"} {"text": "torch.Tensor.clone\nTensor.clone(*, memory_format=torch.preserve_format) -> Tensor\nSee \"torch.clone()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.clone.html", "category": "pytorch docs"} {"text": "LPPool1d\nclass torch.nn.LPPool1d(norm_type, kernel_size, stride=None, ceil_mode=False)\nApplies a 1D power-average pooling over an input signal composed of\n several input planes.\nOn each window, the function computed is:\n f(X) = \\sqrt[p]{\\sum_{x \\in X} x^{p}}\n\n\n\nAt p = \\infty, one gets Max Pooling\n\n\nAt p = 1, one gets Sum Pooling (which is proportional to Average\n Pooling)\n\n\nNote:\n If the sum to the power of *p* is zero, the gradient of this\n function is not defined. This implementation will set the\n gradient to zero in this case.\n\nParameters:\n * kernel_size (Union[int, Tuple[int]]) --\n a single int, the size of the window\n * **stride** (*Union**[**int**, **Tuple**[**int**]**]*) -- a\n single int, the stride of the window. 
Default value is\n \"kernel_size\"\n\n * **ceil_mode** (*bool*) -- when True, will use *ceil* instead\n of *floor* to compute the output shape\n\nShape:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LPPool1d.html", "category": "pytorch docs"} {"text": "Shape:\n * Input: (N, C, L_{in}) or (C, L_{in}).\n * Output: (N, C, L_{out}) or (C, L_{out}), where\n\n L_{out} = \\left\\lfloor\\frac{L_{in} -\n \\text{kernel\\_size}}{\\text{stride}} + 1\\right\\rfloor\n\nExamples::\n >>> # power-2 pool of window of length 3, with stride 2.\n >>> m = nn.LPPool1d(2, 3, stride=2)\n >>> input = torch.randn(20, 16, 50)\n >>> output = m(input)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LPPool1d.html", "category": "pytorch docs"} {"text": "Embedding\nclass torch.ao.nn.quantized.Embedding(num_embeddings, embedding_dim, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False, _weight=None, dtype=torch.quint8)\nA quantized Embedding module with quantized packed weights as\n inputs. We adopt the same interface as torch.nn.Embedding, please\n see https://pytorch.org/docs/stable/nn.html#torch.nn.Embedding for\n documentation.\nSimilar to \"Embedding\", attributes will be randomly initialized at\n module creation time and will be overwritten later\nVariables:\n weight (Tensor) -- the non-learnable quantized weights of\n the module of shape (\\text{num_embeddings},\n \\text{embedding_dim}).\nExamples::\n >>> m = nn.quantized.Embedding(num_embeddings=10, embedding_dim=12)\n >>> indices = torch.tensor([9, 6, 5, 7, 8, 8, 9, 2, 8])\n >>> output = m(indices)\n >>> print(output.size())\n torch.Size([9, 12])\nclassmethod from_float(mod)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Embedding.html", "category": "pytorch docs"} {"text": "classmethod from_float(mod)\n Create a quantized embedding module from a float module\n\n Parameters:\n **mod** (*Module*) -- a float module, either produced by\n torch.ao.quantization utilities or provided by user\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Embedding.html", "category": "pytorch docs"} {"text": "torch.multiply\ntorch.multiply(input, other, *, out=None)\nAlias for \"torch.mul()\".", "source": "https://pytorch.org/docs/stable/generated/torch.multiply.html", "category": "pytorch docs"} {"text": "AlphaDropout\nclass torch.nn.AlphaDropout(p=0.5, inplace=False)\nApplies Alpha Dropout over the input.\nAlpha Dropout is a type of Dropout that maintains the self-\n normalizing property. For an input with zero mean and unit standard\n deviation, the output of Alpha Dropout maintains the original mean\n and standard deviation of the input. Alpha Dropout goes hand-in-\n hand with SELU activation function, which ensures that the outputs\n have zero mean and unit standard deviation.\nDuring training, it randomly masks some of the elements of the\n input tensor with probability p using samples from a bernoulli\n distribution. 
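A minimal sketch of the SELU pairing described above (the layer sizes and dropout probability are arbitrary):

    import torch
    import torch.nn as nn

    # AlphaDropout preserves the zero-mean / unit-variance statistics that SELU produces.
    model = nn.Sequential(
        nn.Linear(16, 32),
        nn.SELU(),
        nn.AlphaDropout(p=0.2),
        nn.Linear(32, 8),
    )
    out = model(torch.randn(4, 16))
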
The elements to masked are randomized on every\n forward call, and scaled and shifted to maintain zero mean and unit\n standard deviation.\nDuring evaluation the module simply computes an identity function.\nMore details can be found in the paper Self-Normalizing Neural\n Networks .\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AlphaDropout.html", "category": "pytorch docs"} {"text": "Networks .\nParameters:\n * p (float) -- probability of an element to be dropped.\n Default: 0.5\n * **inplace** (*bool**, **optional*) -- If set to \"True\", will\n do this operation in-place\n\nShape:\n * Input: (*). Input can be of any shape\n * Output: (*). Output is of the same shape as input\n\nExamples:\n >>> m = nn.AlphaDropout(p=0.2)\n >>> input = torch.randn(20, 16)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AlphaDropout.html", "category": "pytorch docs"} {"text": "torch.logdet\ntorch.logdet(input) -> Tensor\nCalculates log determinant of a square matrix or batches of square\n matrices.\nIt returns \"-inf\" if the input has a determinant of zero, and \"NaN\"\n if it has a negative determinant.\nNote:\n Backward through \"logdet()\" internally uses SVD results when\n \"input\" is not invertible. In this case, double backward through\n \"logdet()\" will be unstable in when \"input\" doesn't have distinct\n singular values. See \"torch.linalg.svd()\" for details.\n\nSee also:\n \"torch.linalg.slogdet()\" computes the sign (resp. angle) and\n natural logarithm of the absolute value of the determinant of\n real-valued (resp. complex) square matrices.\n\nParameters:\n input (Tensor) -- the input tensor of size \"(, n, n)\"\n where \"\" is zero or more batch dimensions.\nExample:\n >>> A = torch.randn(3, 3)\n >>> torch.det(A)\n tensor(0.2611)\n >>> torch.logdet(A)\n tensor(-1.3430)\n >>> A\n", "source": "https://pytorch.org/docs/stable/generated/torch.logdet.html", "category": "pytorch docs"} {"text": "tensor(-1.3430)\n >>> A\n tensor([[[ 0.9254, -0.6213],\n [-0.5787, 1.6843]],\n [[ 0.3242, -0.9665],\n [ 0.4539, -0.0887]],\n\n [[ 1.1336, -0.4025],\n [-0.7089, 0.9032]]])\n >>> A.det()\n tensor([1.1990, 0.4099, 0.7386])\n >>> A.det().log()\n tensor([ 0.1815, -0.8917, -0.3031])\n", "source": "https://pytorch.org/docs/stable/generated/torch.logdet.html", "category": "pytorch docs"} {"text": "torch.Tensor.max\nTensor.max(dim=None, keepdim=False)\nSee \"torch.max()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.max.html", "category": "pytorch docs"} {"text": "torch.abs\ntorch.abs(input, *, out=None) -> Tensor\nComputes the absolute value of each element in \"input\".\n \\text{out}_{i} = |\\text{input}_{i}|\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.abs(torch.tensor([-1, -2, 3]))\n tensor([ 1, 2, 3])\n", "source": "https://pytorch.org/docs/stable/generated/torch.abs.html", "category": "pytorch docs"} {"text": "torch.positive\ntorch.positive(input) -> Tensor\nReturns \"input\". 
Throws a runtime error if \"input\" is a bool\n tensor.\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> t = torch.randn(5)\n >>> t\n tensor([ 0.0090, -0.2262, -0.0682, -0.2866, 0.3940])\n >>> torch.positive(t)\n tensor([ 0.0090, -0.2262, -0.0682, -0.2866, 0.3940])\n", "source": "https://pytorch.org/docs/stable/generated/torch.positive.html", "category": "pytorch docs"} {"text": "prepare_fx\nclass torch.quantization.quantize_fx.prepare_fx(model, qconfig_mapping, example_inputs, prepare_custom_config=None, _equalization_config=None, backend_config=None)\nPrepare a model for post training static quantization\nParameters:\n * model (***) -- torch.nn.Module model\n * **qconfig_mapping** (***) -- QConfigMapping object to\n configure how a model is quantized, see \"QConfigMapping\" for\n more details\n\n * **example_inputs** (***) -- Example inputs for forward\n function of the model, Tuple of positional args (keyword args\n can be passed as positional args as well)\n\n * **prepare_custom_config** (***) -- customization configuration\n for quantization tool. See \"PrepareCustomConfig\" for more\n details\n\n * **_equalization_config** (***) -- config for specifying how to\n perform equalization on the model\n\n * **backend_config** (***) -- config that specifies how\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"} {"text": "operators are quantized in a backend, this includes how the\n operators are observed, supported fusion patterns, how\n quantize/dequantize ops are inserted, supported dtypes etc.\n See \"BackendConfig\" for more details\nReturns:\n A GraphModule with observer (configured by qconfig_mapping),\n ready for calibration\nReturn type:\n ObservedGraphModule\nExample:\n import torch\n from torch.ao.quantization import get_default_qconfig_mapping\n from torch.ao.quantization import prepare_fx\n\n class Submodule(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.linear = torch.nn.Linear(5, 5)\n def forward(self, x):\n x = self.linear(x)\n return x\n\n class M(torch.nn.Module):\n def __init__(self):\n super().__init__()\n self.linear = torch.nn.Linear(5, 5)\n self.sub = Submodule()\n\n def forward(self, x):\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"} {"text": "def forward(self, x):\n x = self.linear(x)\n x = self.sub(x) + x\n return x\n # initialize a floating point model\n float_model = M().eval()\n\n # define calibration function\n def calibrate(model, data_loader):\n model.eval()\n with torch.no_grad():\n for image, target in data_loader:\n model(image)\n\n # qconfig is the configuration for how we insert observers for a particular\n # operator\n # qconfig = get_default_qconfig(\"fbgemm\")\n # Example of customizing qconfig:\n # qconfig = torch.ao.quantization.QConfig(\n # activation=MinMaxObserver.with_args(dtype=torch.qint8),\n # weight=MinMaxObserver.with_args(dtype=torch.qint8))\n # `activation` and `weight` are constructors of observer module\n\n # qconfig_mapping is a collection of quantization configurations, user can\n # set the qconfig for each operator (torch op calls, functional calls, module calls)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"} {"text": "in the model through qconfig_mapping\n # the following call will get the qconfig_mapping that works best for models\n # that 
target \"fbgemm\" backend\n qconfig_mapping = get_default_qconfig_mapping(\"fbgemm\")\n\n # We can customize qconfig_mapping in different ways.\n # e.g. set the global qconfig, which means we will use the same qconfig for\n # all operators in the model, this can be overwritten by other settings\n # qconfig_mapping = QConfigMapping().set_global(qconfig)\n # e.g. quantize the linear submodule with a specific qconfig\n # qconfig_mapping = QConfigMapping().set_module_name(\"linear\", qconfig)\n # e.g. quantize all nn.Linear modules with a specific qconfig\n # qconfig_mapping = QConfigMapping().set_object_type(torch.nn.Linear, qconfig)\n # for a more complete list, please see the docstring for :class:`torch.ao.quantization.QConfigMapping`\n # argument\n\n # example_inputs is a tuple of inputs, that is used to infer the type of the\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"} {"text": "outputs in the model\n # currently it's not used, but please make sure model(*example_inputs) runs\n example_inputs = (torch.randn(1, 3, 224, 224),)\n\n # TODO: add backend_config after we split the backend_config for fbgemm and qnnpack\n # e.g. backend_config = get_default_backend_config(\"fbgemm\")\n # `prepare_fx` inserts observers in the model based on qconfig_mapping and\n # backend_config. If the configuration for an operator in qconfig_mapping\n # is supported in the backend_config (meaning it's supported by the target\n # hardware), we'll insert observer modules according to the qconfig_mapping\n # otherwise the configuration in qconfig_mapping will be ignored\n #\n # Example:\n # in qconfig_mapping, user sets linear module to be quantized with quint8 for\n # activation and qint8 for weight:\n # qconfig = torch.ao.quantization.QConfig(\n # observer=MinMaxObserver.with_args(dtype=torch.quint8),\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"} {"text": "weight=MinMaxObserver.with-args(dtype=torch.qint8))\n # Note: current qconfig api does not support setting output observer, but\n # we may extend this to support these more fine grained control in the\n # future\n #\n # qconfig_mapping = QConfigMapping().set_object_type(torch.nn.Linear, qconfig)\n # in backend config, linear module also supports in this configuration:\n # weighted_int8_dtype_config = DTypeConfig(\n # input_dtype=torch.quint8,\n # output_dtype=torch.quint8,\n # weight_dtype=torch.qint8,\n # bias_type=torch.float)\n\n # linear_pattern_config = BackendPatternConfig(torch.nn.Linear) \\\n # .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT) \\\n # .add_dtype_config(weighted_int8_dtype_config) \\\n # ...\n\n # backend_config = BackendConfig().set_backend_pattern_config(linear_pattern_config)\n # `prepare_fx` will check that the setting requested by suer in qconfig_mapping\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"} {"text": "is supported by the backend_config and insert observers and fake quant modules\n # in the model\n prepared_model = prepare_fx(float_model, qconfig_mapping, example_inputs)\n # Run calibration\n calibrate(prepared_model, sample_inference_data)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.prepare_fx.html", "category": "pytorch docs"} {"text": "ExponentialLR\nclass 
torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma, last_epoch=- 1, verbose=False)\nDecays the learning rate of each parameter group by gamma every\n epoch. When last_epoch=-1, sets initial lr as lr.\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **gamma** (*float*) -- Multiplicative factor of learning rate\n decay.\n\n * **last_epoch** (*int*) -- The index of last epoch. Default:\n -1.\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)\n Loads the schedulers state.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ExponentialLR.html", "category": "pytorch docs"} {"text": "state_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ExponentialLR.html", "category": "pytorch docs"} {"text": "torch.Tensor.logdet\nTensor.logdet() -> Tensor\nSee \"torch.logdet()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logdet.html", "category": "pytorch docs"} {"text": "torch.Tensor.log1p_\nTensor.log1p_() -> Tensor\nIn-place version of \"log1p()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log1p_.html", "category": "pytorch docs"} {"text": "torch.dsplit\ntorch.dsplit(input, indices_or_sections) -> List of Tensors\nSplits \"input\", a tensor with three or more dimensions, into\n multiple tensors depthwise according to \"indices_or_sections\". 
Each\n split is a view of \"input\".\nThis is equivalent to calling torch.tensor_split(input,\n indices_or_sections, dim=2) (the split dimension is 2), except that\n if \"indices_or_sections\" is an integer it must evenly divide the\n split dimension or a runtime error will be thrown.\nThis function is based on NumPy's \"numpy.dsplit()\".\nParameters:\n * input (Tensor) -- tensor to split.\n * **indices_or_sections** (*int** or **list** or **tuple of\n ints*) -- See argument in \"torch.tensor_split()\".\n\nExample::\n >>> t = torch.arange(16.0).reshape(2, 2, 4)\n >>> t\n tensor([[[ 0., 1., 2., 3.],\n [ 4., 5., 6., 7.]],\n [[ 8., 9., 10., 11.],\n [12., 13., 14., 15.]]])\n >>> torch.dsplit(t, 2)", "source": "https://pytorch.org/docs/stable/generated/torch.dsplit.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.dsplit(t, 2)\n (tensor([[[ 0., 1.],\n [ 4., 5.]],\n [[ 8., 9.],\n [12., 13.]]]),\n tensor([[[ 2., 3.],\n [ 6., 7.]],\n [[10., 11.],\n [14., 15.]]]))\n\n\n\n >>> torch.dsplit(t, [3, 6])\n (tensor([[[ 0., 1., 2.],\n [ 4., 5., 6.]],\n [[ 8., 9., 10.],\n [12., 13., 14.]]]),\n tensor([[[ 3.],\n [ 7.]],\n [[11.],\n [15.]]]),\n tensor([], size=(2, 2, 0)))\n", "source": "https://pytorch.org/docs/stable/generated/torch.dsplit.html", "category": "pytorch docs"} {"text": "torch._foreach_log1p\ntorch._foreach_log1p(self: List[Tensor]) -> List[Tensor]\nApply \"torch.log1p()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log1p.html", "category": "pytorch docs"} {"text": "torch.linalg.matrix_rank\ntorch.linalg.matrix_rank(A, *, atol=None, rtol=None, hermitian=False, out=None) -> Tensor\nComputes the numerical rank of a matrix.\nThe matrix rank is computed as the number of singular values (or\n eigenvalues in absolute value when \"hermitian\"= True) that are\n greater than \\max(\\text{atol}, \\sigma_1 * \\text{rtol}) threshold,\n where \\sigma_1 is the largest singular value (or eigenvalue).\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nIf \"hermitian\"= True, \"A\" is assumed to be Hermitian if complex\n or symmetric if real, but this is not checked internally. Instead,\n just the lower triangular part of the matrix is used in the\n computations.\nIf \"rtol\" is not specified and \"A\" is a matrix of dimensions (m,\n n), the relative tolerance is set to be \\text{rtol} = \\max(m, n)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_rank.html", "category": "pytorch docs"} {"text": "\\varepsilon and \\varepsilon is the epsilon value for the dtype of\n \"A\" (see \"finfo\"). If \"rtol\" is not specified and \"atol\" is\n specified to be larger than zero then \"rtol\" is set to zero.\nIf \"atol\" or \"rtol\" is a \"torch.Tensor\", its shape must be\n broadcastable to that of the singular values of \"A\" as returned by\n \"torch.linalg.svdvals()\".\nNote:\n This function has NumPy compatible variant *linalg.matrix_rank(A,\n tol, hermitian=False)*. However, use of the positional argument\n \"tol\" is deprecated in favor of \"atol\" and \"rtol\".\n\nNote:\n The matrix rank is computed using a singular value decomposition\n \"torch.linalg.svdvals()\" if \"hermitian\"*= False* (default) and\n the eigenvalue decomposition \"torch.linalg.eigvalsh()\" when\n \"hermitian\"*= True*. 
When inputs are on a CUDA device, this\n function synchronizes that device with the CPU.\n\nParameters:\n * A (Tensor) -- tensor of shape (, m, n)* where *** is\n zero or more batch dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_rank.html", "category": "pytorch docs"} {"text": "zero or more batch dimensions.\n * **tol** (*float**, **Tensor**, **optional*) -- [NumPy Compat]\n Alias for \"atol\". Default: *None*.\n\nKeyword Arguments:\n * atol (float, Tensor, optional) -- the absolute\n tolerance value. When None it's considered to be zero.\n Default: None.\n * **rtol** (*float**, **Tensor**, **optional*) -- the relative\n tolerance value. See above for the value it takes when *None*.\n Default: *None*.\n\n * **hermitian** (*bool*) -- indicates whether \"A\" is Hermitian\n if complex or symmetric if real. Default: *False*.\n\n * **out** (*Tensor**, **optional*) -- output tensor. Ignored if\n *None*. Default: *None*.\n\nExamples:\n >>> A = torch.eye(10)\n >>> torch.linalg.matrix_rank(A)\n tensor(10)\n >>> B = torch.eye(10)\n >>> B[0, 0] = 0\n >>> torch.linalg.matrix_rank(B)\n tensor(9)\n\n >>> A = torch.randn(4, 3, 2)\n >>> torch.linalg.matrix_rank(A)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_rank.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.linalg.matrix_rank(A)\n tensor([2, 2, 2, 2])\n\n\n\n >>> A = torch.randn(2, 4, 2, 3)\n >>> torch.linalg.matrix_rank(A)\n tensor([[2, 2, 2, 2],\n [2, 2, 2, 2]])\n\n >>> A = torch.randn(2, 4, 3, 3, dtype=torch.complex64)\n >>> torch.linalg.matrix_rank(A)\n tensor([[3, 3, 3, 3],\n [3, 3, 3, 3]])\n >>> torch.linalg.matrix_rank(A, hermitian=True)\n tensor([[3, 3, 3, 3],\n [3, 3, 3, 3]])\n >>> torch.linalg.matrix_rank(A, atol=1.0, rtol=0.0)\n tensor([[3, 2, 2, 2],\n [1, 2, 1, 2]])\n >>> torch.linalg.matrix_rank(A, atol=1.0, rtol=0.0, hermitian=True)\n tensor([[2, 2, 2, 1],\n [1, 2, 2, 2]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_rank.html", "category": "pytorch docs"} {"text": "torch.from_numpy\ntorch.from_numpy(ndarray) -> Tensor\nCreates a \"Tensor\" from a \"numpy.ndarray\".\nThe returned tensor and \"ndarray\" share the same memory.\n Modifications to the tensor will be reflected in the \"ndarray\" and\n vice versa. The returned tensor is not resizable.\nIt currently accepts \"ndarray\" with dtypes of \"numpy.float64\",\n \"numpy.float32\", \"numpy.float16\", \"numpy.complex64\",\n \"numpy.complex128\", \"numpy.int64\", \"numpy.int32\", \"numpy.int16\",\n \"numpy.int8\", \"numpy.uint8\", and \"numpy.bool\".\nWarning:\n Writing to a tensor created from a read-only NumPy array is not\n supported and will result in undefined behavior.\n\nExample:\n >>> a = numpy.array([1, 2, 3])\n >>> t = torch.from_numpy(a)\n >>> t\n tensor([ 1, 2, 3])\n >>> t[0] = -1\n >>> a\n array([-1, 2, 3])\n", "source": "https://pytorch.org/docs/stable/generated/torch.from_numpy.html", "category": "pytorch docs"} {"text": "torch.diag_embed\ntorch.diag_embed(input, offset=0, dim1=- 2, dim2=- 1) -> Tensor\nCreates a tensor whose diagonals of certain 2D planes (specified by\n \"dim1\" and \"dim2\") are filled by \"input\". 
To facilitate creating\n batched diagonal matrices, the 2D planes formed by the last two\n dimensions of the returned tensor are chosen by default.\nThe argument \"offset\" controls which diagonal to consider:\n\n\nIf \"offset\" = 0, it is the main diagonal.\n\n\nIf \"offset\" > 0, it is above the main diagonal.\n\n\nIf \"offset\" < 0, it is below the main diagonal.\n\n\nThe size of the new matrix will be calculated to make the specified\n diagonal of the size of the last input dimension. Note that for\n \"offset\" other than 0, the order of \"dim1\" and \"dim2\" matters.\n Exchanging them is equivalent to changing the sign of \"offset\".\nApplying \"torch.diagonal()\" to the output of this function with the\n same arguments yields a matrix identical to input. However,", "source": "https://pytorch.org/docs/stable/generated/torch.diag_embed.html", "category": "pytorch docs"} {"text": "\"torch.diagonal()\" has different default dimensions, so those need\n to be explicitly specified.\nParameters:\n * input (Tensor) -- the input tensor. Must be at least\n 1-dimensional.\n * **offset** (*int**, **optional*) -- which diagonal to\n consider. Default: 0 (main diagonal).\n\n * **dim1** (*int**, **optional*) -- first dimension with respect\n to which to take diagonal. Default: -2.\n\n * **dim2** (*int**, **optional*) -- second dimension with\n respect to which to take diagonal. Default: -1.\n\nExample:\n >>> a = torch.randn(2, 3)\n >>> torch.diag_embed(a)\n tensor([[[ 1.5410, 0.0000, 0.0000],\n [ 0.0000, -0.2934, 0.0000],\n [ 0.0000, 0.0000, -2.1788]],\n\n [[ 0.5684, 0.0000, 0.0000],\n [ 0.0000, -1.0845, 0.0000],\n [ 0.0000, 0.0000, -1.3986]]])\n\n >>> torch.diag_embed(a, offset=1, dim1=0, dim2=2)\n tensor([[[ 0.0000, 1.5410, 0.0000, 0.0000],\n", "source": "https://pytorch.org/docs/stable/generated/torch.diag_embed.html", "category": "pytorch docs"} {"text": "[ 0.0000, 0.5684, 0.0000, 0.0000]],\n [[ 0.0000, 0.0000, -0.2934, 0.0000],\n [ 0.0000, 0.0000, -1.0845, 0.0000]],\n\n [[ 0.0000, 0.0000, 0.0000, -2.1788],\n [ 0.0000, 0.0000, 0.0000, -1.3986]],\n\n [[ 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 0.0000]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.diag_embed.html", "category": "pytorch docs"} {"text": "torch.Tensor.count_nonzero\nTensor.count_nonzero(dim=None) -> Tensor\nSee \"torch.count_nonzero()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.count_nonzero.html", "category": "pytorch docs"} {"text": "torch.Tensor.take_along_dim\nTensor.take_along_dim(indices, dim) -> Tensor\nSee \"torch.take_along_dim()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.take_along_dim.html", "category": "pytorch docs"} {"text": "torch.optim.Optimizer.load_state_dict\nOptimizer.load_state_dict(state_dict)\nLoads the optimizer state.\nParameters:\n state_dict (dict) -- optimizer state. 
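In practice this dictionary comes from an earlier "state_dict()" call on an optimizer with matching parameter groups, for example when restoring a checkpoint. A minimal round-trip sketch (illustrative only; the SGD optimizer and single linear layer are assumed for the example):

    >>> model = torch.nn.Linear(2, 2)
    >>> opt = torch.optim.SGD(model.parameters(), lr=0.1)
    >>> saved = opt.state_dict()                              # capture hyperparameters and per-parameter state
    >>> opt = torch.optim.SGD(model.parameters(), lr=0.1)     # e.g. rebuilt after loading a checkpoint
    >>> opt.load_state_dict(saved)                            # restore the captured state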
Should be an object\n returned from a call to \"state_dict()\".", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.load_state_dict.html", "category": "pytorch docs"} {"text": "torch.nn.functional.relu6\ntorch.nn.functional.relu6(input, inplace=False) -> Tensor\nApplies the element-wise function \\text{ReLU6}(x) = \\min(\\max(0,x),\n 6).\nSee \"ReLU6\" for more details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.relu6.html", "category": "pytorch docs"} {"text": "torch.cuda.memory_allocated\ntorch.cuda.memory_allocated(device=None)\nReturns the current GPU memory occupied by tensors in bytes for a\n given device.\nParameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\nReturn type:\n int\nNote:\n This is likely less than the amount shown in *nvidia-smi* since\n some unused memory can be held by the caching allocator and some\n context needs to be created on GPU. See Memory management for\n more details about GPU memory management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_allocated.html", "category": "pytorch docs"} {"text": "torch.Tensor.not_equal_\nTensor.not_equal_(other) -> Tensor\nIn-place version of \"not_equal()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.not_equal_.html", "category": "pytorch docs"} {"text": "torch.atleast_3d\ntorch.atleast_3d(*tensors)\nReturns a 3-dimensional view of each input tensor with zero\n dimensions. Input tensors with three or more dimensions are\n returned as-is.\nParameters:\n input (Tensor or list of Tensors) --\nReturns:\n output (Tensor or tuple of Tensors)\n-[ Example ]-\n\n\n\nx = torch.tensor(0.5)\nx\n tensor(0.5000)\ntorch.atleast_3d(x)\n tensor([[[0.5000]]])\ny = torch.arange(4).view(2, 2)\ny\n tensor([[0, 1],\n [2, 3]])\ntorch.atleast_3d(y)\n tensor([[[0],\n [1]],\n\n\n\n [[2],\n [3]]])\n\n\n\n\nx = torch.tensor(1).view(1, 1, 1)\nx\n tensor([[[1]]])\ntorch.atleast_3d(x)\n tensor([[[1]]])\nx = torch.tensor(0.5)\ny = torch.tensor(1.)\ntorch.atleast_3d((x, y))\n (tensor([[[0.5000]]]), tensor([[[1.]]]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.atleast_3d.html", "category": "pytorch docs"} {"text": "torch.cummin\ntorch.cummin(input, dim, *, out=None)\nReturns a namedtuple \"(values, indices)\" where \"values\" is the\n cumulative minimum of elements of \"input\" in the dimension \"dim\".\n And \"indices\" is the index location of each maximum value found in\n the dimension \"dim\".\n y_i = min(x_1, x_2, x_3, \\dots, x_i)\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to do the operation over\n\nKeyword Arguments:\n out (tuple, optional) -- the result tuple of two\n output tensors (values, indices)\nExample:\n >>> a = torch.randn(10)\n >>> a\n tensor([-0.2284, -0.6628, 0.0975, 0.2680, -1.3298, -0.4220, -0.3885, 1.1762,\n 0.9165, 1.6684])\n >>> torch.cummin(a, dim=0)\n torch.return_types.cummin(\n values=tensor([-0.2284, -0.6628, -0.6628, -0.6628, -1.3298, -1.3298, -1.3298, -1.3298,\n -1.3298, -1.3298]),\n indices=tensor([0, 1, 1, 1, 4, 4, 4, 4, 4, 4]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.cummin.html", "category": "pytorch docs"} {"text": "torch.cuda.set_rng_state_all\ntorch.cuda.set_rng_state_all(new_states)\nSets the random number generator state of all devices.\nParameters:\n 
new_states (Iterable of torch.ByteTensor) -- The desired\n state for each device", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.set_rng_state_all.html", "category": "pytorch docs"} {"text": "torch.Tensor.deg2rad\nTensor.deg2rad() -> Tensor\nSee \"torch.deg2rad()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.deg2rad.html", "category": "pytorch docs"} {"text": "torch.ldexp\ntorch.ldexp(input, other, *, out=None) -> Tensor\nMultiplies \"input\" by 2 ** \"other\".\n \\text{{out}}_i = \\text{{input}}_i * 2^\\text{{other}}_i\n\nTypically this function is used to construct floating point numbers\n by multiplying mantissas in \"input\" with integral powers of two\n created from the exponents in \"other\".\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- a tensor of exponents, typically\n integers.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.ldexp(torch.tensor([1.]), torch.tensor([1]))\n tensor([2.])\n >>> torch.ldexp(torch.tensor([1.0]), torch.tensor([1, 2, 3, 4]))\n tensor([ 2., 4., 8., 16.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.ldexp.html", "category": "pytorch docs"} {"text": "Sigmoid\nclass torch.nn.Sigmoid\nApplies the element-wise function:\n \\text{Sigmoid}(x) = \\sigma(x) = \\frac{1}{1 + \\exp(-x)}\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.Sigmoid()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Sigmoid.html", "category": "pytorch docs"} {"text": "torch.cuda.graph_pool_handle\ntorch.cuda.graph_pool_handle()\nReturns an opaque token representing the id of a graph memory pool.\n See Graph memory management.\nWarning:\n This API is in beta and may change in future releases.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.graph_pool_handle.html", "category": "pytorch docs"} {"text": "torch.Tensor.roll\nTensor.roll(shifts, dims) -> Tensor\nSee \"torch.roll()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.roll.html", "category": "pytorch docs"} {"text": "torch.jit.enable_onednn_fusion\ntorch.jit.enable_onednn_fusion(enabled)\nEnables or disables onednn JIT fusion based on the parameter\n enabled.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.enable_onednn_fusion.html", "category": "pytorch docs"} {"text": "ReLU6\nclass torch.nn.ReLU6(inplace=False)\nApplies the element-wise function:\n \\text{ReLU6}(x) = \\min(\\max(0,x), 6)\n\nParameters:\n inplace (bool) -- can optionally do the operation in-\n place. 
Default: \"False\"\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.ReLU6()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReLU6.html", "category": "pytorch docs"} {"text": "adaptive_avg_pool2d\nclass torch.ao.nn.quantized.functional.adaptive_avg_pool2d(input, output_size)\nApplies a 2D adaptive average pooling over a quantized input signal\n composed of several quantized input planes.\nNote:\n The input quantization parameters propagate to the output.\n\nSee \"AdaptiveAvgPool2d\" for details and output shape.\nParameters:\n output_size (None) -- the target output size (single\n integer or double-integer tuple)\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.adaptive_avg_pool2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.atanh_\nTensor.atanh_(other) -> Tensor\nIn-place version of \"atanh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.atanh_.html", "category": "pytorch docs"} {"text": "DistributedDataParallel\nclass torch.nn.parallel.DistributedDataParallel(module, device_ids=None, output_device=None, dim=0, broadcast_buffers=True, process_group=None, bucket_cap_mb=25, find_unused_parameters=False, check_reduction=False, gradient_as_bucket_view=False, static_graph=False)\nImplements distributed data parallelism that is based on\n \"torch.distributed\" package at the module level.\nThis container provides data parallelism by synchronizing gradients\n across each model replica. The devices to synchronize across are\n specified by the input \"process_group\", which is the entire world\n by default. Note that \"DistributedDataParallel\" does not chunk or\n otherwise shard the input across participating GPUs; the user is\n responsible for defining how to do so, for example through the use\n of a \"DistributedSampler\".\nSee also: Basics and Use nn.parallel.DistributedDataParallel\n instead of multiprocessing or nn.DataParallel. The same constraints", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "on input as in \"torch.nn.DataParallel\" apply.\nCreation of this class requires that \"torch.distributed\" to be\n already initialized, by calling\n \"torch.distributed.init_process_group()\".\n\"DistributedDataParallel\" is proven to be significantly faster than\n \"torch.nn.DataParallel\" for single-node multi-GPU data parallel\n training.\nTo use \"DistributedDataParallel\" on a host with N GPUs, you should\n spawn up \"N\" processes, ensuring that each process exclusively\n works on a single GPU from 0 to N-1. This can be done by either\n setting \"CUDA_VISIBLE_DEVICES\" for every process or by calling:\n\n\n\ntorch.cuda.set_device(i)\n\n\n\nwhere i is from 0 to N-1. 
In each process, you should refer the\n following to construct this module:\n\n\n\ntorch.distributed.init_process_group(\n backend='nccl', world_size=N, init_method='...'\n)\nmodel = DistributedDataParallel(model, device_ids=[i], output_device=i)\n\n\n\nIn order to spawn up multiple processes per node, you can use", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "either \"torch.distributed.launch\" or \"torch.multiprocessing.spawn\".\nNote:\n Please refer to PyTorch Distributed Overview for a brief\n introduction to all features related to distributed training.\n\nNote:\n \"DistributedDataParallel\" can be used in conjunction with\n \"torch.distributed.optim.ZeroRedundancyOptimizer\" to reduce per-\n rank optimizer states memory footprint. Please refer to\n ZeroRedundancyOptimizer recipe for more details.\n\nNote:\n \"nccl\" backend is currently the fastest and highly recommended\n backend when using GPUs. This applies to both single-node and\n multi-node distributed training.\n\nNote:\n This module also supports mixed-precision distributed training.\n This means that your model can have different types of parameters\n such as mixed types of \"fp16\" and \"fp32\", the gradient reduction\n on these mixed types of parameters will just work fine.\n\nNote:\n If you use \"torch.save\" on one process to checkpoint the module,\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "and \"torch.load\" on some other processes to recover it, make sure\n that \"map_location\" is configured properly for every process.\n Without \"map_location\", \"torch.load\" would recover the module to\n devices where the module was saved from.\nNote:\n When a model is trained on \"M\" nodes with \"batch=N\", the gradient\n will be \"M\" times smaller when compared to the same model trained\n on a single node with \"batch=M*N\" if the loss is summed (NOT\n averaged as usual) across instances in a batch (because the\n gradients between different nodes are averaged). You should take\n this into consideration when you want to obtain a mathematically\n equivalent training process compared to the local training\n counterpart. But in most cases, you can just treat a\n DistributedDataParallel wrapped model, a DataParallel wrapped\n model and an ordinary model on a single GPU as the same (E.g.\n using the same learning rate for equivalent batch size).\n\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "Note:\n Parameters are never broadcast between processes. The module\n performs an all-reduce step on gradients and assumes that they\n will be modified by the optimizer in all processes in the same\n way. Buffers (e.g. 
BatchNorm stats) are broadcast from the module\n in process of rank 0, to all other replicas in the system in\n every iteration.\n\nNote:\n If you are using DistributedDataParallel in conjunction with the\n Distributed RPC Framework, you should always use\n \"torch.distributed.autograd.backward()\" to compute gradients and\n \"torch.distributed.optim.DistributedOptimizer\" for optimizing\n parameters.Example:\n\n >>> import torch.distributed.autograd as dist_autograd\n >>> from torch.nn.parallel import DistributedDataParallel as DDP\n >>> import torch\n >>> from torch import optim\n >>> from torch.distributed.optim import DistributedOptimizer\n >>> import torch.distributed.rpc as rpc\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "\n\n\nimport torch.distributed.rpc as rpc\n >>> from torch.distributed.rpc import RRef\n >>>\n >>> t1 = torch.rand((3, 3), requires_grad=True)\n >>> t2 = torch.rand((3, 3), requires_grad=True)\n >>> rref = rpc.remote(\"worker1\", torch.add, args=(t1, t2))\n >>> ddp_model = DDP(my_model)\n >>>\n >>> # Setup optimizer\n >>> optimizer_params = [rref]\n >>> for param in ddp_model.parameters():\n >>> optimizer_params.append(RRef(param))\n >>>\n >>> dist_optim = DistributedOptimizer(\n >>> optim.SGD,\n >>> optimizer_params,\n >>> lr=0.05,\n >>> )\n >>>\n >>> with dist_autograd.context() as context_id:\n >>> pred = ddp_model(rref.to_here())\n >>> loss = loss_func(pred, target)\n >>> dist_autograd.backward(context_id, [loss])\n >>> dist_optim.step(context_id)\n\n\n\nNote:\n DistributedDataParallel currently offers limited support for\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "gradient checkpointing with \"torch.utils.checkpoint()\". DDP will\n work as expected when there are no unused parameters in the model\n and each layer is checkpointed at most once (make sure you are\n not passing find_unused_parameters=True to DDP). We currently\n do not support the case where a layer is checkpointed multiple\n times, or when there unused parameters in the checkpointed model.\nNote:\n To let a non-DDP model load a state dict from a DDP model,\n \"consume_prefix_in_state_dict_if_present()\" needs to be applied\n to strip the prefix \"module.\" in the DDP state dict before\n loading.\n\nWarning:\n Constructor, forward method, and differentiation of the output\n (or a function of the output of this module) are distributed\n synchronization points. Take that into account in case different\n processes might be executing different code.\n\nWarning:\n This module assumes all parameters are registered in the model by\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "the time it is created. No parameters should be added nor removed\n later. Same applies to buffers.\nWarning:\n This module assumes all parameters are registered in the model of\n each distributed processes are in the same order. The module\n itself will conduct gradient \"allreduce\" following the reverse\n order of the registered parameters of the model. In other words,\n it is users' responsibility to ensure that each distributed\n process has the exact same model and thus the exact same\n parameter registration order.\n\nWarning:\n This module allows parameters with non-rowmajor-contiguous\n strides. 
For example, your model may contain some parameters\n whose \"torch.memory_format\" is \"torch.contiguous_format\" and\n others whose format is \"torch.channels_last\". However,\n corresponding parameters in different processes must have the\n same strides.\n\nWarning:\n This module doesn't work with \"torch.autograd.grad()\" (i.e. it\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "will only work if gradients are to be accumulated in \".grad\"\n attributes of parameters).\nWarning:\n If you plan on using this module with a \"nccl\" backend or a\n \"gloo\" backend (that uses Infiniband), together with a DataLoader\n that uses multiple workers, please change the multiprocessing\n start method to \"forkserver\" (Python 3 only) or \"spawn\".\n Unfortunately Gloo (that uses Infiniband) and NCCL2 are not fork\n safe, and you will likely experience deadlocks if you don't\n change this setting.\n\nWarning:\n You should never try to change your model's parameters after\n wrapping up your model with \"DistributedDataParallel\". Because,\n when wrapping up your model with \"DistributedDataParallel\", the\n constructor of \"DistributedDataParallel\" will register the\n additional gradient reduction functions on all the parameters of\n the model itself at the time of construction. If you change the\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "model's parameters afterwards, gradient reduction functions no\n longer match the correct set of parameters.\nWarning:\n Using \"DistributedDataParallel\" in conjunction with the\n Distributed RPC Framework is experimental and subject to change.\n\nParameters:\n * module (Module) -- module to be parallelized\n * **device_ids** (*list of python:int** or **torch.device*) --\n\n CUDA devices. 1) For single-device modules, \"device_ids\" can\n contain exactly one device id, which represents the only CUDA\n device where the input module corresponding to this process\n resides. Alternatively, \"device_ids\" can also be \"None\". 2)\n For multi-device modules and CPU modules, \"device_ids\" must be\n \"None\".\n\n When \"device_ids\" is \"None\" for both cases, both the input\n data for the forward pass and the actual module must be placed\n on the correct device. (default: \"None\")\n\n * **output_device** (*int** or **torch.device*) -- Device\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "location of output for single-device CUDA modules. For multi-\n device modules and CPU modules, it must be \"None\", and the\n module itself dictates the output location. (default:\n \"device_ids[0]\" for single-device modules)\n * **broadcast_buffers** (*bool*) -- Flag that enables syncing\n (broadcasting) buffers of the module at beginning of the\n \"forward\" function. (default: \"True\")\n\n * **process_group** -- The process group to be used for\n distributed data all-reduction. 
If \"None\", the default process\n group, which is created by\n \"torch.distributed.init_process_group()\", will be used.\n (default: \"None\")\n\n * **bucket_cap_mb** -- \"DistributedDataParallel\" will bucket\n parameters into multiple buckets so that gradient reduction of\n each bucket can potentially overlap with backward computation.\n \"bucket_cap_mb\" controls the bucket size in MegaBytes (MB).\n (default: 25)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "(default: 25)\n * **find_unused_parameters** (*bool*) -- Traverse the autograd\n graph from all tensors contained in the return value of the\n wrapped module's \"forward\" function. Parameters that don't\n receive gradients as part of this graph are preemptively\n marked as being ready to be reduced. In addition, parameters\n that may have been used in the wrapped module's \"forward\"\n function but were not part of loss computation and thus would\n also not receive gradients are preemptively marked as ready to\n be reduced. (default: \"False\")\n\n * **check_reduction** -- This argument is deprecated.\n\n * **gradient_as_bucket_view** (*bool*) -- When set to \"True\",\n gradients will be views pointing to different offsets of\n \"allreduce\" communication buckets. This can reduce peak memory\n usage, where the saved memory size will be equal to the total\n gradients size. Moreover, it avoids the overhead of copying\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "between gradients and \"allreduce\" communication buckets. When\n gradients are views, \"detach_()\" cannot be called on the\n gradients. If hitting such errors, please fix it by referring\n to the \"zero_grad()\" function in \"torch/optim/optimizer.py\" as\n a solution. Note that gradients will be views after first\n iteration, so the peak memory saving should be checked after\n first iteration.\n * **static_graph** (*bool*) --\n\n When set to \"True\", DDP knows the trained graph is static.\n Static graph means 1) The set of used and unused parameters\n will not change during the whole training loop; in this case,\n it does not matter whether users set \"find_unused_parameters =\n True\" or not. 2) How the graph is trained will not change\n during the whole training loop (meaning there is no control\n flow depending on iterations). When static_graph is set to be\n \"True\", DDP will support cases that can not be supported in\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "the past: 1) Reentrant backwards. 2) Activation checkpointing\n multiple times. 3) Activation checkpointing when model has\n unused parameters. 4) There are model parameters that are\n outside of forward function. 5) Potentially improve\n performance when there are unused parameters, as DDP will not\n search graph in each iteration to detect unused parameters\n when static_graph is set to be \"True\". 
To check whether you\n can set static_graph to be \"True\", one way is to check ddp\n logging data at the end of your previous model training, if\n \"ddp_logging_data.get(\"can_set_static_graph\") == True\", mostly\n you can set \"static_graph = True\" as well.\n Example::\n >>> model_DDP = torch.nn.parallel.DistributedDataParallel(model)\n >>> # Training loop\n >>> ...\n >>> ddp_logging_data = model_DDP._get_ddp_logging_data()\n >>> static_graph = ddp_logging_data.get(\"can_set_static_graph\")\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "Variables:\n module (Module) -- the module to be parallelized.\nExample:\n >>> torch.distributed.init_process_group(backend='nccl', world_size=4, init_method='...')\n >>> net = torch.nn.parallel.DistributedDataParallel(model)\n\njoin(divide_by_initial_world_size=True, enable=True, throw_on_early_termination=False)\n A context manager to be used in conjunction with an instance of\n \"torch.nn.parallel.DistributedDataParallel\" to be able to train\n with uneven inputs across participating processes.\n\n This context manager will keep track of already-joined DDP\n processes, and \"shadow\" the forward and backward passes by\n inserting collective communication operations to match with the\n ones created by non-joined DDP processes. This will ensure each\n collective call has a corresponding call by already-joined DDP\n processes, preventing hangs or errors that would otherwise\n happen when training with uneven inputs across processes.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "Alternatively, if the flag \"throw_on_early_termination\" is\n specified to be \"True\", all trainers will throw an error once\n one rank runs out of inputs, allowing these errors to be caught\n and handled according to application logic.\n Once all DDP processes have joined, the context manager will\n broadcast the model corresponding to the last joined process to\n all processes to ensure the model is the same across all\n processes (which is guaranteed by DDP).\n\n To use this to enable training with uneven inputs across\n processes, simply wrap this context manager around your training\n loop. No further modifications to the model or data loading is\n required.\n\n Warning:\n\n If the model or training loop this context manager is wrapped\n around has additional distributed collective operations, such\n as \"SyncBatchNorm\" in the model's forward pass, then the flag\n \"throw_on_early_termination\" must be enabled. This is because\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "this context manager is not aware of non-DDP collective\n communication. This flag will cause all ranks to throw when\n any one rank exhausts inputs, allowing these errors to be\n caught and recovered from across all ranks.\n Parameters:\n * **divide_by_initial_world_size** (*bool*) -- If \"True\",\n will divide gradients by the initial \"world_size\" DDP\n training was launched with. If \"False\", will compute the\n effective world size (number of ranks that have not\n depleted their inputs yet) and divide gradients by that\n during allreduce. 
Set \"divide_by_initial_world_size=True\"\n to ensure every input sample including the uneven inputs\n have equal weight in terms of how much they contribute to\n the global gradient. This is achieved by always dividing\n the gradient by the initial \"world_size\" even when we\n encounter uneven inputs. If you set this to \"False\", we\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "divide the gradient by the remaining number of nodes. This\n ensures parity with training on a smaller \"world_size\"\n although it also means the uneven inputs would contribute\n more towards the global gradient. Typically, you would want\n to set this to \"True\" for cases where the last few inputs\n of your training job are uneven. In extreme cases, where\n there is a large discrepancy in the number of inputs,\n setting this to \"False\" might provide better results.\n * **enable** (*bool*) -- Whether to enable uneven input\n detection or not. Pass in \"enable=False\" to disable in\n cases where you know that inputs are even across\n participating processes. Default is \"True\".\n\n * **throw_on_early_termination** (*bool*) -- Whether to throw\n an error or continue training when at least one rank has\n exhausted inputs. If \"True\", will throw upon the first rank\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "reaching end of data. If \"False\", will continue training\n with a smaller effective world size until all ranks are\n joined. Note that if this flag is specified, then the flag\n \"divide_by_initial_world_size\" would be ignored. Default is\n \"False\".\n Example:\n\n >>> import torch\n >>> import torch.distributed as dist\n >>> import os\n >>> import torch.multiprocessing as mp\n >>> import torch.nn as nn\n >>> # On each spawned worker\n >>> def worker(rank):\n >>> dist.init_process_group(\"nccl\", rank=rank, world_size=2)\n >>> torch.cuda.set_device(rank)\n >>> model = nn.Linear(1, 1, bias=False).to(rank)\n >>> model = torch.nn.parallel.DistributedDataParallel(\n >>> model, device_ids=[rank], output_device=rank\n >>> )\n >>> # Rank 1 gets one more input than rank 0.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "\n\n\ninputs = [torch.tensor([1]).float() for _ in range(10 + rank)]\n >>> with model.join():\n >>> for _ in range(5):\n >>> for inp in inputs:\n >>> loss = model(inp).sum()\n >>> loss.backward()\n >>> # Without the join() API, the below synchronization will hang\n >>> # blocking for rank 1's allreduce to complete.\n >>> torch.cuda.synchronize(device=rank)\n\n\n\n\njoin_hook(**kwargs)\n Returns the DDP join hook, which enables training on uneven\n inputs by shadowing the collective communications in the forward\n and backward passes.\n\n Parameters:\n **kwargs** (*dict*) -- a \"dict\" containing any keyword\n arguments to modify the behavior of the join hook at run\n time; all \"Joinable\" instances sharing the same join context\n manager are forwarded the same value for \"kwargs\".\n\n The hook supports the following keyword arguments:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "divide_by_initial_world_size (bool, optional):\n If \"True\", then gradients are divided by the initial world\n size that DDP was 
launched with. If \"False\", then\n gradients are divided by the effective world size (i.e.\n the number of non-joined processes), meaning that the\n uneven inputs contribute more toward the global gradient.\n Typically, this should be set to \"True\" if the degree of\n unevenness is small but can be set to \"False\" in extreme\n cases for possibly better results. Default is \"True\".\nno_sync()\n A context manager to disable gradient synchronizations across\n DDP processes. Within this context, gradients will be\n accumulated on module variables, which will later be\n synchronized in the first forward-backward pass exiting the\n context.\n\n Example:\n\n >>> ddp = torch.nn.parallel.DistributedDataParallel(model, pg)\n >>> with ddp.no_sync():\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "\n\n\nwith ddp.no_sync():\n >>> for input in inputs:\n >>> ddp(input).backward() # no synchronization, accumulate grads\n >>> ddp(another_input).backward() # synchronize grads\n\n\n\n Warning:\n\n The forward pass should be included inside the context\n manager, or else gradients will still be synchronized.\n\nregister_comm_hook(state, hook)\n Registers a communication hook which is an enhancement that\n provides a flexible hook to users where they can specify how DDP\n aggregates gradients across multiple workers.\n\n This hook would be very useful for researchers to try out new\n ideas. For example, this hook can be used to implement several\n algorithms like GossipGrad and gradient compression which\n involve different communication strategies for parameter syncs\n while running Distributed DataParallel training.\n\n Parameters:\n * **state** (*object*) --\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "\nstate (object) -- Passed to the hook to maintain any state information during\n the training process. Examples include error feedback in\n gradient compression, peers to communicate with next in\n GossipGrad, etc.\n\n It is locally stored by each worker and shared by all the\n gradient tensors on the worker.\n\n * **hook** (*Callable*) --\n\n Callable with the following signature: \"hook(state: object,\n bucket: dist.GradBucket) ->\n torch.futures.Future[torch.Tensor]\":\n\n This function is called once the bucket is ready. The hook\n can perform whatever processing is needed and return a\n Future indicating completion of any async work (ex:\n allreduce). If the hook doesn't perform any communication,\n it still must return a completed Future. The Future should\n hold the new value of grad bucket's tensors. Once a bucket\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "is ready, c10d reducer would call this hook and use the\n tensors returned by the Future and copy grads to individual\n parameters. Note that the future's return type must be a\n single tensor.\n We also provide an API called \"get_future\" to retrieve a\n Future associated with the completion of\n \"c10d.ProcessGroup.Work\". 
\"get_future\" is currently\n supported for NCCL and also supported for most operations\n on GLOO and MPI, except for peer to peer operations\n (send/recv).\n\n Warning:\n\n Grad bucket's tensors will not be predivided by world_size.\n User is responsible to divide by the world_size in case of\n operations like allreduce.\n\n Warning:\n\n DDP communication hook can only be registered once and should\n be registered before calling backward.\n\n Warning:\n\n The Future object that hook returns should contain a single\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "tensor that has the same shape with the tensors inside grad\n bucket.\n Warning:\n\n \"get_future\" API supports NCCL, and partially GLOO and MPI\n backends (no support for peer-to-peer operations like\n send/recv) and will return a \"torch.futures.Future\".\n\n Example::\n Below is an example of a noop hook that returns the same\n tensor.\n\n >>> def noop(state: object, bucket: dist.GradBucket) -> torch.futures.Future[torch.Tensor]:\n >>> fut = torch.futures.Future()\n >>> fut.set_result(bucket.buffer())\n >>> return fut\n >>> ddp.register_comm_hook(state=None, hook=noop)\n\n Example::\n Below is an example of a Parallel SGD algorithm where\n gradients are encoded before allreduce, and then decoded\n after allreduce.\n\n >>> def encode_and_decode(state: object, bucket: dist.GradBucket) -> torch.futures.Future[torch.Tensor]:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "\n\n\nencoded_tensor = encode(bucket.buffer()) # encode gradients\n >>> fut = torch.distributed.all_reduce(encoded_tensor).get_future()\n >>> # Define the then callback to decode.\n >>> def decode(fut):\n >>> decoded_tensor = decode(fut.value()[0]) # decode gradients\n >>> return decoded_tensor\n >>> return fut.then(decode)\n >>> ddp.register_comm_hook(state=None, hook=encode_and_decode)\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parallel.DistributedDataParallel.html", "category": "pytorch docs"} {"text": "torch.sparse_compressed_tensor\ntorch.sparse_compressed_tensor(compressed_indices, plain_indices, values, size=None, *, dtype=None, layout=None, device=None, requires_grad=False, check_invariants=None) -> Tensor\nConstructs a sparse tensor in Compressed Sparse format - CSR, CSC,\n BSR, or BSC - with specified values at the given\n \"compressed_indices\" and \"plain_indices\". Sparse matrix\n multiplication operations in Compressed Sparse format are typically\n faster than that for sparse tensors in COO format. Make you have a\n look at the note on the data type of the indices.\nNote:\n If the \"device\" argument is not specified the device of the given\n \"values\" and indices tensor(s) must match. If, however, the\n argument is specified the input Tensors will be converted to the\n given device and in turn determine the device of the constructed\n sparse tensor.\n\nParameters:\n * compressed_indices (array_like) -- (B+1)-dimensional", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_compressed_tensor.html", "category": "pytorch docs"} {"text": "array of size \"(*batchsize, compressed_dim_size + 1)\". The\n last element of each batch is the number of non-zero elements\n or blocks. This tensor encodes the index in \"values\" and\n \"plain_indices\" depending on where the given compressed\n dimension (row or column) starts. 
Each successive number in\n the tensor subtracted by the number before it denotes the\n number of elements or blocks in a given compressed dimension.\n * **plain_indices** (*array_like*) -- Plain dimension (column or\n row) co-ordinates of each element or block in values.\n (B+1)-dimensional tensor with the same length as values.\n\n * **values** (*array_list*) -- Initial values for the tensor.\n Can be a list, tuple, NumPy \"ndarray\", scalar, and other\n types. that represents a (1+K)-dimensional (for CSR and CSC\n layouts) or (1+2+K)-dimensional tensor (for BSR and BSC\n layouts) where \"K\" is the number of dense dimensions.\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_compressed_tensor.html", "category": "pytorch docs"} {"text": "\nsize (list, tuple, \"torch.Size\", optional) -- Size of the\n sparse tensor: \"(batchsize, nrows * blocksize[0], ncols *\n blocksize[1], densesize)\" where \"blocksize[0] == blocksize[1]\n == 1\" for CSR and CSC formats. If not provided, the size will\n be inferred as the minimum size big enough to hold all non-\n zero elements or blocks.\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if None, infers data type from\n \"values\".\n * **layout** (\"torch.layout\", required) -- the desired layout of\n returned tensor: \"torch.sparse_csr\", \"torch.sparse_csc\",\n \"torch.sparse_bsr\", or \"torch.sparse_bsc\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_compressed_tensor.html", "category": "pytorch docs"} {"text": "for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **check_invariants** (*bool**, **optional*) -- If sparse\n tensor invariants are checked. Default: as returned by\n \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\",\n initially False.\n\nExample::\n >>> compressed_indices = [0, 2, 4]\n >>> plain_indices = [0, 1, 0, 1]\n >>> values = [1, 2, 3, 4]\n >>> torch.sparse_compressed_tensor(torch.tensor(compressed_indices, dtype=torch.int64),\n ... torch.tensor(plain_indices, dtype=torch.int64),\n ... 
torch.tensor(values), dtype=torch.double, layout=torch.sparse_csr)\n tensor(crow_indices=tensor([0, 2, 4]),\n col_indices=tensor([0, 1, 0, 1]),\n values=tensor([1., 2., 3., 4.]), size=(2, 2), nnz=4,", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_compressed_tensor.html", "category": "pytorch docs"} {"text": "dtype=torch.float64, layout=torch.sparse_csr)", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_compressed_tensor.html", "category": "pytorch docs"} {"text": "InstanceNorm3d\nclass torch.nn.InstanceNorm3d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)\nApplies Instance Normalization over a 5D input (a mini-batch of 3D\n inputs with additional channel dimension) as described in the paper\n Instance Normalization: The Missing Ingredient for Fast\n Stylization.\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n\nThe mean and standard-deviation are calculated per-dimension\n separately for each object in a mini-batch. \\gamma and \\beta are\n learnable parameter vectors of size C (where C is the input size)\n if \"affine\" is \"True\". The standard-deviation is calculated via the\n biased estimator, equivalent to torch.var(input, unbiased=False).\nBy default, this layer uses instance statistics computed from input\n data in both training and evaluation modes.\nIf \"track_running_stats\" is set to \"True\", during training this", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm3d.html", "category": "pytorch docs"} {"text": "layer keeps running estimates of its computed mean and variance,\n which are then used for normalization during evaluation. The\n running estimates are kept with a default \"momentum\" of 0.1.\nNote:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n\nNote:\n \"InstanceNorm3d\" and \"LayerNorm\" are very similar, but have some\n subtle differences. \"InstanceNorm3d\" is applied on each channel\n of channeled data like 3D models with RGB color, but \"LayerNorm\"\n is usually applied on entire sample and often in NLP tasks.\n Additionally, \"LayerNorm\" applies elementwise affine transform,\n while \"InstanceNorm3d\" usually don't apply affine transform.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm3d.html", "category": "pytorch docs"} {"text": "Parameters:\n * num_features (int) -- C from an expected input of size\n (N, C, D, H, W) or (C, D, H, W)\n * **eps** (*float*) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters,\n initialized the same way as done for batch normalization.\n Default: \"False\".\n\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics and always uses batch statistics in both\n training and eval modes. 
Default: \"False\"\n\nShape:\n * Input: (N, C, D, H, W) or (C, D, H, W)\n * Output: (N, C, D, H, W) or (C, D, H, W) (same shape as input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm3d.html", "category": "pytorch docs"} {"text": "Examples:\n >>> # Without Learnable Parameters\n >>> m = nn.InstanceNorm3d(100)\n >>> # With Learnable Parameters\n >>> m = nn.InstanceNorm3d(100, affine=True)\n >>> input = torch.randn(20, 100, 35, 45, 10)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm3d.html", "category": "pytorch docs"} {"text": "torch.nn.functional.bilinear\ntorch.nn.functional.bilinear(input1, input2, weight, bias=None) -> Tensor\nApplies a bilinear transformation to the incoming data: y = x_1^T A\n x_2 + b\nShape:\n * input1: (N, *, H_{in1}) where H_{in1}=\\text{in1\\_features} and\n * means any number of additional dimensions. All but the last\n dimension of the inputs should be the same.\n\n * input2: (N, *, H_{in2}) where H_{in2}=\\text{in2\\_features}\n\n * weight: (\\text{out\\_features}, \\text{in1\\_features},\n \\text{in2\\_features})\n\n * bias: (\\text{out\\_features})\n\n * output: (N, *, H_{out}) where H_{out}=\\text{out\\_features} and\n all but the last dimension are the same shape as the input.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.bilinear.html", "category": "pytorch docs"} {"text": "torch.Tensor.bitwise_not\nTensor.bitwise_not() -> Tensor\nSee \"torch.bitwise_not()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_not.html", "category": "pytorch docs"} {"text": "torch.linalg.householder_product\ntorch.linalg.householder_product(A, tau, *, out=None) -> Tensor\nComputes the first n columns of a product of Householder\n matrices.\nLet \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, and let V \\in\n \\mathbb{K}^{m \\times n} be a matrix with columns v_i \\in\n \\mathbb{K}^m for i=1,\\ldots,m with m \\geq n. Denote by w_i the\n vector resulting from zeroing out the first i-1 components of v_i\n and setting to 1 the i-th. For a vector \\tau \\in \\mathbb{K}^k\n with k \\leq n, this function computes the first n columns of the\n matrix\n H_1H_2 ... H_k \\qquad\\text{with}\\qquad H_i = \\mathrm{I}_m -\n \\tau_i w_i w_i^{\\text{H}}\n\nwhere \\mathrm{I}_m is the m-dimensional identity matrix and\n w^{\\text{H}} is the conjugate transpose when w is complex, and the\n transpose when w is real-valued. The output matrix is the same size\n as the input matrix \"A\".\nSee Representation of Orthogonal or Unitary Matrices for further\n details.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.householder_product.html", "category": "pytorch docs"} {"text": "details.\nSupports inputs of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if the inputs are batches of\n matrices then the output has the same batch dimensions.\nSee also:\n \"torch.geqrf()\" can be used together with this function to form\n the *Q* from the \"qr()\" decomposition.\n\n \"torch.ormqr()\" is a related function that computes the matrix\n multiplication of a product of Householder matrices with another\n matrix. However, that function is not supported by autograd.\n\nWarning:\n Gradient computations are only well-defined if tau_i \\neq\n \\frac{1}{||v_i||^2}. 
If this condition is not met, no error will\n be thrown, but the gradient produced may contain *NaN*.\n\nParameters:\n * A (Tensor) -- tensor of shape (, m, n)* where *** is\n zero or more batch dimensions.\n * **tau** (*Tensor*) -- tensor of shape *(*, k)* where *** is\n zero or more batch dimensions.\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.householder_product.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\nRaises:\n RuntimeError -- if \"A\" doesn't satisfy the requirement m >=\n n, or \"tau\" doesn't satisfy the requirement n >= k.\nExamples:\n >>> A = torch.randn(2, 2)\n >>> h, tau = torch.geqrf(A)\n >>> Q = torch.linalg.householder_product(h, tau)\n >>> torch.dist(Q, torch.linalg.qr(A).Q)\n tensor(0.)\n\n >>> h = torch.randn(3, 2, 2, dtype=torch.complex128)\n >>> tau = torch.randn(3, 1, dtype=torch.complex128)\n >>> Q = torch.linalg.householder_product(h, tau)\n >>> Q\n tensor([[[ 1.8034+0.4184j, 0.2588-1.0174j],\n [-0.6853+0.7953j, 2.0790+0.5620j]],\n\n [[ 1.4581+1.6989j, -1.5360+0.1193j],\n [ 1.3877-0.6691j, 1.3512+1.3024j]],\n\n [[ 1.4766+0.5783j, 0.0361+0.6587j],\n [ 0.6396+0.1612j, 1.3693+0.4481j]]], dtype=torch.complex128)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.householder_product.html", "category": "pytorch docs"} {"text": "torch.set_num_interop_threads\ntorch.set_num_interop_threads(int)\nSets the number of threads used for interop parallelism (e.g. in\n JIT interpreter) on CPU.\nWarning:\n Can only be called once and before any inter-op parallel work is\n started (e.g. JIT execution).\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_num_interop_threads.html", "category": "pytorch docs"} {"text": "torch.stack\ntorch.stack(tensors, dim=0, *, out=None) -> Tensor\nConcatenates a sequence of tensors along a new dimension.\nAll tensors need to be of the same size.\nParameters:\n * tensors (sequence of Tensors) -- sequence of tensors to\n concatenate\n * **dim** (*int*) -- dimension to insert. 
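For example (an illustrative sketch, since this page shows no example), stacking two (2, 3) tensors inserts a new dimension of size 2 at the chosen position:

    >>> x = torch.randn(2, 3)
    >>> y = torch.randn(2, 3)
    >>> torch.stack((x, y), dim=0).shape    # new leading dimension
    torch.Size([2, 2, 3])
    >>> torch.stack((x, y), dim=2).shape    # new trailing dimension
    torch.Size([2, 3, 2])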
Has to be between 0\n and the number of dimensions of concatenated tensors\n (inclusive)\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.stack.html", "category": "pytorch docs"} {"text": "torch.Tensor.multiply_\nTensor.multiply_(value) -> Tensor\nIn-place version of \"multiply()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.multiply_.html", "category": "pytorch docs"} {"text": "torch.nextafter\ntorch.nextafter(input, other, *, out=None) -> Tensor\nReturn the next floating-point value after \"input\" towards \"other\",\n elementwise.\nThe shapes of \"input\" and \"other\" must be broadcastable.\nParameters:\n * input (Tensor) -- the first input tensor\n * **other** (*Tensor*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> eps = torch.finfo(torch.float32).eps\n >>> torch.nextafter(torch.tensor([1.0, 2.0]), torch.tensor([2.0, 1.0])) == torch.tensor([eps + 1, 2 - eps])\n tensor([True, True])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nextafter.html", "category": "pytorch docs"} {"text": "torch.Tensor.fmod\nTensor.fmod(divisor) -> Tensor\nSee \"torch.fmod()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fmod.html", "category": "pytorch docs"} {"text": "torch.Tensor.log\nTensor.log() -> Tensor\nSee \"torch.log()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log.html", "category": "pytorch docs"} {"text": "torch.Tensor.bitwise_or_\nTensor.bitwise_or_() -> Tensor\nIn-place version of \"bitwise_or()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_or_.html", "category": "pytorch docs"} {"text": "torch.Tensor.baddbmm_\nTensor.baddbmm_(batch1, batch2, *, beta=1, alpha=1) -> Tensor\nIn-place version of \"baddbmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.baddbmm_.html", "category": "pytorch docs"} {"text": "torch.fft.irfft2\ntorch.fft.irfft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor\nComputes the inverse of \"rfft2()\". Equivalent to \"irfftn()\" but\n IFFTs only the last two dimensions by default.\n\"input\" is interpreted as a one-sided Hermitian signal in the\n Fourier domain, as produced by \"rfft2()\". By the Hermitian\n property, the output will be real-valued.\nNote:\n Some input frequencies must be real-valued to satisfy the\n Hermitian property. In these cases the imaginary component will\n be ignored. For example, any imaginary component in the zero-\n frequency term cannot be represented in a real output and so will\n always be ignored.\n\nNote:\n The correct interpretation of the Hermitian input depends on the\n length of the original data, as given by \"s\". This is because\n each input shape could correspond to either an odd or even length\n signal. By default, the signal is assumed to be even length and\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft2.html", "category": "pytorch docs"} {"text": "odd signals will not round-trip properly. So, it is recommended\n to always pass the signal shape \"s\".\nNote:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions. 
With default arguments,\n the size of last dimension should be (2^n + 1) as argument *s*\n defaults to even output size = 2 * (last_dim_size - 1)\n\nParameters:\n * input (Tensor) -- the input tensor\n * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the real FFT. If a length \"-1\" is specified, no\n padding is done in that dimension. Defaults to even output in\n the last dimension: \"s[-1] = 2*(input.size(dim[-1]) - 1)\".\n\n * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft2.html", "category": "pytorch docs"} {"text": "transformed. The last dimension must be the half-Hermitian\n compressed dimension. Default: last two dimensions.\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the backward transform (\"irfft2()\"),\n these correspond to:\n\n * \"\"forward\"\" - no normalization\n\n * \"\"backward\"\" - normalize by \"1/n\"\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the real IFFT\n orthonormal)\n\n Where \"n = prod(s)\" is the logical IFFT size. Calling the\n forward transform (\"rfft2()\") with the same normalization mode\n will apply an overall normalization of \"1/n\" between the two\n transforms. This is required to make \"irfft2()\" the exact\n inverse.\n\n Default is \"\"backward\"\" (normalize by \"1/n\").\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nt = torch.rand(10, 9)\nT = torch.fft.rfft2(t)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft2.html", "category": "pytorch docs"} {"text": "\n\n\nT = torch.fft.rfft2(t)\n\n\n\nWithout specifying the output length to \"irfft2()\", the output will\n not round-trip properly because the input is odd-length in the last\n dimension:\n\n\n\ntorch.fft.irfft2(T).size()\n torch.Size([10, 8])\n\n\n\nSo, it is recommended to always pass the signal shape \"s\".\n\n\n\nroundtrip = torch.fft.irfft2(T, t.size())\nroundtrip.size()\n torch.Size([10, 9])\ntorch.testing.assert_close(roundtrip, t, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft2.html", "category": "pytorch docs"} {"text": "torch._foreach_ceil\ntorch._foreach_ceil(self: List[Tensor]) -> List[Tensor]\nApply \"torch.ceil()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_ceil.html", "category": "pytorch docs"} {"text": "torch.var_mean\ntorch.var_mean(input, dim=None, *, correction=1, keepdim=False, out=None)\nCalculates the variance and mean over the dimensions specified by\n \"dim\". 
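As a quick illustrative sketch (not from this page) of the "correction" keyword described below, compare the default Bessel-corrected estimate with "correction=0":

    >>> x = torch.tensor([1., 2., 3., 4.])
    >>> torch.var_mean(x)                   # correction=1: divide by N - 1
    (tensor(1.6667), tensor(2.5000))
    >>> torch.var_mean(x, correction=0)     # correction=0: divide by N
    (tensor(1.2500), tensor(2.5000))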
\"dim\" can be a single dimension, list of dimensions, or\n \"None\" to reduce over all dimensions.\nThe variance (\\sigma^2) is calculated as\n \\sigma^2 = \\frac{1}{N - \\delta N}\\sum_{i=0}^{N-1}(x_i-\\bar{x})^2\n\nwhere x is the sample set of elements, \\bar{x} is the sample mean,\n N is the number of samples and \\delta N is the \"correction\".\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints**, **optional*) -- the\n dimension or dimensions to reduce. If \"None\", all dimensions\n", "source": "https://pytorch.org/docs/stable/generated/torch.var_mean.html", "category": "pytorch docs"} {"text": "are reduced.\nKeyword Arguments:\n * correction (int) --\n difference between the sample size and sample degrees of\n freedom. Defaults to Bessel's correction, \"correction=1\".\n\n Changed in version 2.0: Previously this argument was called\n \"unbiased\" and was a boolean with \"True\" corresponding to\n \"correction=1\" and \"False\" being \"correction=0\".\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nReturns:\n A tuple (var, mean) containing the variance and mean.\n-[ Example ]-\n\n\n\na = torch.tensor(\n ... [[ 0.2035, 1.2959, 1.8101, -0.4644],\n ... [ 1.5027, -0.3270, 0.5905, 0.6538],\n ... [-1.5745, 1.3330, -0.5596, -0.6548],\n ... [ 0.1264, -0.5080, 1.6420, 0.1992]])\ntorch.var_mean(a, dim=0, keepdim=True)\n (tensor([[1.5926, 1.0056, 1.2005, 0.3646]]),\n tensor([[ 0.0645, 0.4485, 0.8707, -0.0665]]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.var_mean.html", "category": "pytorch docs"} {"text": "torch.Tensor.resize_\nTensor.resize_(*sizes, memory_format=torch.contiguous_format) -> Tensor\nResizes \"self\" tensor to the specified size. If the number of\n elements is larger than the current storage size, then the\n underlying storage is resized to fit the new number of elements. If\n the number of elements is smaller, the underlying storage is not\n changed. Existing elements are preserved but any new memory is\n uninitialized.\nWarning:\n This is a low-level method. The storage is reinterpreted as\n C-contiguous, ignoring the current strides (unless the target\n size equals the current size, in which case the tensor is left\n unchanged). For most purposes, you will instead want to use\n \"view()\", which checks for contiguity, or \"reshape()\", which\n copies data if needed. To change the size in-place with custom\n strides, see \"set_()\".\n\nParameters:\n * sizes (torch.Size or int...) -- the desired size", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.resize_.html", "category": "pytorch docs"} {"text": "\nmemory_format (\"torch.memory_format\", optional) -- the\n desired memory format of Tensor. Default:\n \"torch.contiguous_format\". 
Note that memory format of \"self\"\n is going to be unaffected if \"self.size()\" matches \"sizes\".\n\nExample:\n >>> x = torch.tensor([[1, 2], [3, 4], [5, 6]])\n >>> x.resize_(2, 2)\n tensor([[ 1, 2],\n [ 3, 4]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.resize_.html", "category": "pytorch docs"} {"text": "torch.nn.functional.lp_pool2d\ntorch.nn.functional.lp_pool2d(input, norm_type, kernel_size, stride=None, ceil_mode=False)\nApplies a 2D power-average pooling over an input signal composed of\n several input planes. If the sum of all inputs to the power of p\n is zero, the gradient is set to zero as well.\nSee \"LPPool2d\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.lp_pool2d.html", "category": "pytorch docs"} {"text": "torch.sparse_bsr_tensor\ntorch.sparse_bsr_tensor(crow_indices, col_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor\nConstructs a sparse tensor in BSR (Block Compressed Sparse Row))\n with specified 2-dimensional blocks at the given \"crow_indices\" and\n \"col_indices\". Sparse matrix multiplication operations in BSR\n format are typically faster than that for sparse tensors in COO\n format. Make you have a look at the note on the data type of the\n indices.\nNote:\n If the \"device\" argument is not specified the device of the given\n \"values\" and indices tensor(s) must match. If, however, the\n argument is specified the input Tensors will be converted to the\n given device and in turn determine the device of the constructed\n sparse tensor.\n\nParameters:\n * crow_indices (array_like) -- (B+1)-dimensional array of\n size \"(*batchsize, nrowblocks + 1)\". The last element of each", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsr_tensor.html", "category": "pytorch docs"} {"text": "batch is the number of non-zeros. This tensor encodes the\n block index in values and col_indices depending on where the\n given row block starts. Each successive number in the tensor\n subtracted by the number before it denotes the number of\n blocks in a given row.\n * **col_indices** (*array_like*) -- Column block co-ordinates of\n each block in values. (B+1)-dimensional tensor with the same\n length as values.\n\n * **values** (*array_list*) -- Initial values for the tensor.\n Can be a list, tuple, NumPy \"ndarray\", scalar, and other types\n that represents a (1 + 2 + K)-dimensional tensor where \"K\" is\n the number of dense dimensions.\n\n * **size** (list, tuple, \"torch.Size\", optional) -- Size of the\n sparse tensor: \"(*batchsize, nrows * blocksize[0], ncols *\n blocksize[1], *densesize)\" where \"blocksize ==\n values.shape[1:3]\". If not provided, the size will be inferred\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsr_tensor.html", "category": "pytorch docs"} {"text": "as the minimum size big enough to hold all non-zero blocks.\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if None, infers data type from\n \"values\".\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). 
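As a small sketch of the \"crow_indices\" description above, the difference between successive entries gives the number of blocks in each row:

    >>> crow_indices = torch.tensor([0, 2, 3])
    >>> crow_indices.diff()   # row 0 holds 2 blocks, row 1 holds 1 block
    tensor([2, 1])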
\"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **check_invariants** (*bool**, **optional*) -- If sparse\n tensor invariants are checked. Default: as returned by\n \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\",\n initially False.\n\nExample::\n >>> crow_indices = [0, 1, 2]", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsr_tensor.html", "category": "pytorch docs"} {"text": "Example::\n >>> crow_indices = [0, 1, 2]\n >>> col_indices = [0, 1]\n >>> values = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]\n >>> torch.sparse_bsr_tensor(torch.tensor(crow_indices, dtype=torch.int64),\n ... torch.tensor(col_indices, dtype=torch.int64),\n ... torch.tensor(values), dtype=torch.double)\n tensor(crow_indices=tensor([0, 1, 2]),\n col_indices=tensor([0, 1]),\n values=tensor([[[1., 2.],\n [3., 4.]],\n [[5., 6.],\n [7., 8.]]]), size=(2, 2), nnz=2, dtype=torch.float64,\n layout=torch.sparse_bsr)", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsr_tensor.html", "category": "pytorch docs"} {"text": "LogSoftmax\nclass torch.nn.LogSoftmax(dim=None)\nApplies the \\log(\\text{Softmax}(x)) function to an n-dimensional\n input Tensor. The LogSoftmax formulation can be simplified as:\n \\text{LogSoftmax}(x_{i}) = \\log\\left(\\frac{\\exp(x_i) }{ \\sum_j\n \\exp(x_j)} \\right)\n\nShape:\n * Input: (*) where *** means, any number of additional\n dimensions\n * Output: (*), same shape as the input\n\nParameters:\n dim (int) -- A dimension along which LogSoftmax will be\n computed.\nReturns:\n a Tensor of the same dimension and shape as the input with\n values in the range [-inf, 0)\nReturn type:\n None\nExamples:\n >>> m = nn.LogSoftmax(dim=1)\n >>> input = torch.randn(2, 3)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LogSoftmax.html", "category": "pytorch docs"} {"text": "torch.func.jvp\ntorch.func.jvp(func, primals, tangents, *, strict=False, has_aux=False)\nStanding for the Jacobian-vector product, returns a tuple\n containing the output of func(primals)* and the \"Jacobian of\n \"func\" evaluated at \"primals\"\" times \"tangents\". This is also known\n as forward-mode autodiff.\nParameters:\n * func (function) -- A Python function that takes one or\n more arguments, one of which must be a Tensor, and returns one\n or more Tensors\n * **primals** (*Tensors*) -- Positional arguments to \"func\" that\n must all be Tensors. The returned function will also be\n computing the derivative with respect to these arguments\n\n * **tangents** (*Tensors*) -- The \"vector\" for which Jacobian-\n vector-product is computed. 
Must be the same structure and\n sizes as the inputs to \"func\".\n\n * **has_aux** (*bool*) -- Flag indicating that \"func\" returns a\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.jvp.html", "category": "pytorch docs"} {"text": "\"(output, aux)\" tuple where the first element is the output of\n the function to be differentiated and the second element is\n other auxiliary objects that will not be differentiated.\n Default: False.\nReturns:\n Returns a \"(output, jvp_out)\" tuple containing the output of\n \"func\" evaluated at \"primals\" and the Jacobian-vector product.\n If \"has_aux is True\", then instead returns a \"(output, jvp_out,\n aux)\" tuple.\nNote:\n You may see this API error out with \"forward-mode AD not\n implemented for operator X\". If so, please file a bug report and\n we will prioritize it.\n\njvp is useful when you wish to compute gradients of a function R^1\n -> R^N\n\n\n\nfrom torch.func import jvp\nx = torch.randn([])\nf = lambda x: x * torch.tensor([1., 2., 3])\nvalue, grad = jvp(f, (x,), (torch.tensor(1.),))\nassert torch.allclose(value, f(x))\nassert torch.allclose(grad, torch.tensor([1., 2, 3]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.jvp.html", "category": "pytorch docs"} {"text": "\"jvp()\" can support functions with multiple inputs by passing in\n the tangents for each of the inputs\n\n\n\nfrom torch.func import jvp\nx = torch.randn(5)\ny = torch.randn(5)\nf = lambda x, y: (x * y)\n_, output = jvp(f, (x, y), (torch.ones(5), torch.ones(5)))\nassert torch.allclose(output, x + y)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.jvp.html", "category": "pytorch docs"} {"text": "torch.angle\ntorch.angle(input, *, out=None) -> Tensor\nComputes the element-wise angle (in radians) of the given \"input\"\n tensor.\n \\text{out}_{i} = angle(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nNote:\n Starting in PyTorch 1.8, angle returns pi for negative real\n numbers, zero for non-negative real numbers, and propagates NaNs.\n Previously the function would return zero for all real numbers\n and not propagate floating-point NaNs.\n\nExample:\n >>> torch.angle(torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j]))*180/3.14159\n tensor([ 135., 135, -45])\n", "source": "https://pytorch.org/docs/stable/generated/torch.angle.html", "category": "pytorch docs"} {"text": "torch.flipud\ntorch.flipud(input) -> Tensor\nFlip tensor in the up/down direction, returning a new tensor.\nFlip the entries in each column in the up/down direction. Rows are\n preserved, but appear in a different order than before.\nNote:\n Requires the tensor to be at least 1-D.\n\nNote:\n *torch.flipud* makes a copy of \"input\"'s data. 
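To illustrate the \"torch.angle()\" note above on real inputs (negative reals map to pi, non-negative reals to zero), a minimal sketch:

    >>> torch.angle(torch.tensor([-1.0, 0.0, 2.0]))
    tensor([3.1416, 0.0000, 0.0000])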
This is different\n from NumPy's *np.flipud*, which returns a view in constant time.\n Since copying a tensor's data is more work than viewing that\n data, *torch.flipud* is expected to be slower than *np.flipud*.\n\nParameters:\n input (Tensor) -- Must be at least 1-dimensional.\nExample:\n >>> x = torch.arange(4).view(2, 2)\n >>> x\n tensor([[0, 1],\n [2, 3]])\n >>> torch.flipud(x)\n tensor([[2, 3],\n [0, 1]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.flipud.html", "category": "pytorch docs"} {"text": "torch.foreach_abs\ntorch.foreach_abs(self: List[Tensor]) -> None\nApply \"torch.abs()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_abs_.html", "category": "pytorch docs"} {"text": "torch.cartesian_prod\ntorch.cartesian_prod(*tensors)\nDo cartesian product of the given sequence of tensors. The behavior\n is similar to python's itertools.product.\nParameters:\n tensors (Tensor*) -- any number of 1 dimensional tensors.\nReturns:\n A tensor equivalent to converting all the input tensors into\n lists, do itertools.product on these lists, and finally\n convert the resulting list into tensor.\nReturn type:\n Tensor\nExample:\n >>> import itertools\n >>> a = [1, 2, 3]\n >>> b = [4, 5]\n >>> list(itertools.product(a, b))\n [(1, 4), (1, 5), (2, 4), (2, 5), (3, 4), (3, 5)]\n >>> tensor_a = torch.tensor(a)\n >>> tensor_b = torch.tensor(b)\n >>> torch.cartesian_prod(tensor_a, tensor_b)\n tensor([[1, 4],\n [1, 5],\n [2, 4],\n [2, 5],\n [3, 4],\n [3, 5]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.cartesian_prod.html", "category": "pytorch docs"} {"text": "BNReLU2d\nclass torch.ao.nn.intrinsic.BNReLU2d(batch_norm, relu)\nThis is a sequential container which calls the BatchNorm 2d and\n ReLU modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.BNReLU2d.html", "category": "pytorch docs"} {"text": "torch.autograd.profiler.profile.key_averages\nprofile.key_averages(group_by_input_shape=False, group_by_stack_n=0)\nAverages all function events over their keys.\nParameters:\n * group_by_input_shapes -- group entries by (event name,\n input shapes) rather than just event name. This is useful to\n see which input shapes contribute to the runtime the most and\n may help with size-specific optimizations or choosing the best\n candidates for quantization (aka fitting a roof line)\n * **group_by_stack_n** -- group by top n stack trace entries\n\nReturns:\n An EventList containing FunctionEventAvg objects.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.profiler.profile.key_averages.html", "category": "pytorch docs"} {"text": "Hardsigmoid\nclass torch.nn.Hardsigmoid(inplace=False)\nApplies the Hardsigmoid function element-wise.\nHardsigmoid is defined as:\n \\text{Hardsigmoid}(x) = \\begin{cases} 0 & \\text{if~} x \\le\n -3, \\\\ 1 & \\text{if~} x \\ge +3, \\\\ x / 6 + 1 / 2 &\n \\text{otherwise} \\end{cases}\n\nParameters:\n inplace (bool) -- can optionally do the operation in-\n place. 
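A small numeric sketch of the \"Hardsigmoid\" piecewise definition above, evaluated at and between the clamp points:

    >>> m = nn.Hardsigmoid()
    >>> m(torch.tensor([-4.0, -1.5, 0.0, 1.5, 4.0]))
    tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])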
Default: \"False\"\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.Hardsigmoid()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Hardsigmoid.html", "category": "pytorch docs"} {"text": "torch._foreach_sqrt\ntorch._foreach_sqrt(self: List[Tensor]) -> List[Tensor]\nApply \"torch.sqrt()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sqrt.html", "category": "pytorch docs"} {"text": "torch.linalg.vander\ntorch.linalg.vander(x, N=None) -> Tensor\nGenerates a Vandermonde matrix.\nReturns the Vandermonde matrix V\n V = \\begin{pmatrix} 1 & x_1 & x_1^2 & \\dots &\n x_1^{N-1}\\\\ 1 & x_2 & x_2^2 & \\dots & x_2^{N-1}\\\\\n 1 & x_3 & x_3^2 & \\dots & x_3^{N-1}\\\\ \\vdots & \\vdots &\n \\vdots & \\ddots &\\vdots \\\\ 1 & x_n & x_n^2 & \\dots &\n x_n^{N-1} \\end{pmatrix}.\n\nfor N > 1. If \"N\"= None, then N = x.size(-1) so that the\n output is a square matrix.\nSupports inputs of float, double, cfloat, cdouble, and integral\n dtypes. Also supports batches of vectors, and if \"x\" is a batch of\n vectors then the output has the same batch dimensions.\nDifferences with numpy.vander:\n\nUnlike numpy.vander, this function returns the powers of \"x\" in\n ascending order. To get them in the reverse order call\n \"linalg.vander(x, N).flip(-1)\".\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.vander.html", "category": "pytorch docs"} {"text": "Parameters:\n x (Tensor) -- tensor of shape (, n)* where *** is zero\n or more batch dimensions consisting of vectors.\nKeyword Arguments:\n N (int, optional) -- Number of columns in the output.\n Default: x.size(-1)\nExample:\n >>> x = torch.tensor([1, 2, 3, 5])\n >>> linalg.vander(x)\n tensor([[ 1, 1, 1, 1],\n [ 1, 2, 4, 8],\n [ 1, 3, 9, 27],\n [ 1, 5, 25, 125]])\n >>> linalg.vander(x, N=3)\n tensor([[ 1, 1, 1],\n [ 1, 2, 4],\n [ 1, 3, 9],\n [ 1, 5, 25]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.vander.html", "category": "pytorch docs"} {"text": "torch.nn.functional.silu\ntorch.nn.functional.silu(input, inplace=False)\nApplies the Sigmoid Linear Unit (SiLU) function, element-wise. The\n SiLU function is also known as the swish function.\n \\text{silu}(x) = x * \\sigma(x), \\text{where } \\sigma(x) \\text{\n is the logistic sigmoid.}\n\nNote:\n See Gaussian Error Linear Units (GELUs) where the SiLU (Sigmoid\n Linear Unit) was originally coined, and see Sigmoid-Weighted\n Linear Units for Neural Network Function Approximation in\n Reinforcement Learning and Swish: a Self-Gated Activation\n Function where the SiLU was experimented with later.\n\nSee \"SiLU\" for more details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.silu.html", "category": "pytorch docs"} {"text": "torch.clone\ntorch.clone(input, *, memory_format=torch.preserve_format) -> Tensor\nReturns a copy of \"input\".\nNote:\n This function is differentiable, so gradients will flow back from\n the result of this operation to \"input\". To create a tensor\n without an autograd relationship to \"input\" see \"detach()\".\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned tensor. 
Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.clone.html", "category": "pytorch docs"} {"text": "LinearReLU\nclass torch.ao.nn.intrinsic.qat.LinearReLU(in_features, out_features, bias=True, qconfig=None)\nA LinearReLU module fused from Linear and ReLU modules, attached\n with FakeQuantize modules for weight, used in quantization aware\n training.\nWe adopt the same interface as \"torch.nn.Linear\".\nSimilar to torch.nn.intrinsic.LinearReLU, with FakeQuantize\n modules initialized to default.\nVariables:\n weight (torch.Tensor) -- fake quant module for weight\nExamples:\n >>> m = nn.qat.LinearReLU(20, 30)\n >>> input = torch.randn(128, 20)\n >>> output = m(input)\n >>> print(output.size())\n torch.Size([128, 30])\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.LinearReLU.html", "category": "pytorch docs"} {"text": "torch.foreach_cosh\ntorch.foreach_cosh(self: List[Tensor]) -> None\nApply \"torch.cosh()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_cosh_.html", "category": "pytorch docs"} {"text": "torch.imag\ntorch.imag(input) -> Tensor\nReturns a new tensor containing imaginary values of the \"self\"\n tensor. The returned tensor and \"self\" share the same underlying\n storage.\nWarning:\n \"imag()\" is only supported for tensors with complex dtypes.\n\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> x=torch.randn(4, dtype=torch.cfloat)\n >>> x\n tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])\n >>> x.imag\n tensor([ 0.3553, -0.7896, -0.0633, -0.8119])\n", "source": "https://pytorch.org/docs/stable/generated/torch.imag.html", "category": "pytorch docs"} {"text": "RMSprop\nclass torch.optim.RMSprop(params, lr=0.01, alpha=0.99, eps=1e-08, weight_decay=0, momentum=0, centered=False, foreach=None, maximize=False, differentiable=False)\nImplements RMSprop algorithm.\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{input} : \\alpha \\text{ (alpha)},\\: \\gamma\n \\text{ (lr)}, \\: \\theta_0 \\text{ (params)}, \\:\n f(\\theta) \\text{ (objective)} \\\\\n &\\hspace{13mm} \\lambda \\text{ (weight decay)},\\: \\mu \\text{\n (momentum)},\\: centered\\\\ &\\textbf{initialize} : v_0\n \\leftarrow 0 \\text{ (square average)}, \\: \\textbf{b}_0\n \\leftarrow 0 \\text{ (buffer)}, \\: g^{ave}_0 \\leftarrow 0\n \\\\[-1.ex] &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\\\ &\\hspace{5mm}g_t\n \\leftarrow \\nabla_{\\theta} f_t (\\theta_{t-1}) \\\\\n &\\hspace{5mm}if \\: \\lambda \\neq 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"} {"text": "&\\hspace{5mm}if \\: \\lambda \\neq 0\n \\ &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda\n \\theta_{t-1} \\ &\\hspace{5mm}v_t\n \\leftarrow \\alpha v_{t-1} + (1 - \\alpha) g^2_t\n \\hspace{8mm}\n \\ &\\hspace{5mm} \\tilde{v_t} \\leftarrow v_t\n \\ &\\hspace{5mm}if \\: centered\n \\ &\\hspace{10mm} g^{ave}t \\leftarrow g^{ave} \\alpha\n + (1-\\alpha) g_t \\ &\\hspace{10mm} \\tilde{v_t}\n \\leftarrow \\tilde{v_t} - \\big(g^{ave}{t} \\big)^2 \\\n &\\hspace{5mm}if \\: \\mu > 0\n \\ &\\hspace{10mm} \\textbf{b}_t\\leftarrow \\mu\n \\textbf{b} + g_t/ \\big(\\sqrt{\\tilde{v_t}} +\n \\epsilon \\big) \\\n &\\hspace{10mm} \\theta_t \\leftarrow \\theta_{t-1} - \\gamma\n \\textbf{b}t \\ &\\hspace{5mm} else\n \\ &\\hspace{10mm}\\theta_t \\leftarrow \\theta 
-\n \\gamma g_t/ \\big(\\sqrt{\\tilde{v_t}} + \\epsilon \\big)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"} {"text": "\\hspace{3mm} \\ &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\nFor further details regarding the algorithm we refer to lecture\n notes by G. Hinton. and centered version Generating Sequences With\n Recurrent Neural Networks. The implementation here takes the square\n root of the gradient average before adding epsilon (note that\n TensorFlow interchanges these two operations). The effective\n learning rate is thus \\gamma/(\\sqrt{v} + \\epsilon) where \\gamma is\n the scheduled learning rate and v is the weighted moving average of\n the squared gradient.\nParameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * **lr** (*float**, **optional*) -- learning rate (default:\n 1e-2)\n\n * **momentum** (*float**, **optional*) -- momentum factor\n (default: 0)\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"} {"text": "(default: 0)\n * **alpha** (*float**, **optional*) -- smoothing constant\n (default: 0.99)\n\n * **eps** (*float**, **optional*) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n\n * **centered** (*bool**, **optional*) -- if \"True\", compute the\n centered RMSProp, the gradient is normalized by an estimation\n of its variance\n\n * **weight_decay** (*float**, **optional*) -- weight decay (L2\n penalty) (default: 0)\n\n * **foreach** (*bool**, **optional*) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n\n * **maximize** (*bool**, **optional*) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"} {"text": "\ndifferentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n\nadd_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n **state_dict** (*dict*) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n\nregister_step_post_hook(hook)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"} {"text": "register_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. 
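A minimal usage sketch for \"RMSprop\" (assumes \"model\", \"input\", \"target\" and \"loss_fn\" are already defined):

    >>> optimizer = torch.optim.RMSprop(model.parameters(), lr=0.01, alpha=0.99, momentum=0.9)
    >>> optimizer.zero_grad()
    >>> loss_fn(model(input), target).backward()
    >>> optimizer.step()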
It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None\n\n The \"optimizer\" argument is the optimizer instance being used.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"} {"text": "new_args and new_kwargs.\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n **set_to_none** (*bool*) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"} {"text": "changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.RMSprop.html", "category": "pytorch docs"} {"text": "torch.qr\ntorch.qr(input, some=True, *, out=None)\nComputes the QR decomposition of a matrix or a batch of matrices\n \"input\", and returns a namedtuple (Q, R) of tensors such that\n \\text{input} = Q R with Q being an orthogonal matrix or batch of\n orthogonal matrices and R being an upper triangular matrix or batch\n of upper triangular matrices.\nIf \"some\" is \"True\", then this function returns the thin (reduced)\n QR factorization. Otherwise, if \"some\" is \"False\", this function\n returns the complete QR factorization.\nWarning:\n \"torch.qr()\" is deprecated in favor of \"torch.linalg.qr()\" and\n will be removed in a future PyTorch release. 
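A minimal sketch of the post-hook signature described above (the hook name is purely illustrative; assumes \"optimizer\" is an existing optimizer instance):

    >>> def log_step(optimizer, args, kwargs):
    ...     print('optimizer step finished')
    >>> handle = optimizer.register_step_post_hook(log_step)
    >>> # remove the hook later with handle.remove()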
The boolean\n parameter \"some\" has been replaced with a string parameter\n \"mode\".\"Q, R = torch.qr(A)\" should be replaced with\n\n Q, R = torch.linalg.qr(A)\n\n \"Q, R = torch.qr(A, some=False)\" should be replaced with\n\n Q, R = torch.linalg.qr(A, mode=\"complete\")\n\nWarning:", "source": "https://pytorch.org/docs/stable/generated/torch.qr.html", "category": "pytorch docs"} {"text": "Warning:\n If you plan to backpropagate through QR, note that the current\n backward implementation is only well-defined when the first\n \\min(input.size(-1), input.size(-2)) columns of \"input\" are\n linearly independent. This behavior will probably change once QR\n supports pivoting.\n\nNote:\n This function uses LAPACK for CPU inputs and MAGMA for CUDA\n inputs, and may produce different (valid) decompositions on\n different device types or different platforms.\n\nParameters:\n * input (Tensor) -- the input tensor of size (*, m, n)\n where *** is zero or more batch dimensions consisting of\n matrices of dimension m \\times n.\n * **some** (*bool**, **optional*) --\n\n Set to \"True\" for reduced QR decomposition and \"False\" for\n complete QR decomposition. If *k = min(m, n)* then:\n\n * \"some=True\" : returns *(Q, R)* with dimensions (m, k),\n (k, n) (default)\n", "source": "https://pytorch.org/docs/stable/generated/torch.qr.html", "category": "pytorch docs"} {"text": "(k, n) (default)\n * \"'some=False'\": returns *(Q, R)* with dimensions (m, m),\n (m, n)\n\nKeyword Arguments:\n out (tuple, optional) -- tuple of Q and R tensors.\n The dimensions of Q and R are detailed in the description of\n \"some\" above.\nExample:\n >>> a = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])\n >>> q, r = torch.qr(a)\n >>> q\n tensor([[-0.8571, 0.3943, 0.3314],\n [-0.4286, -0.9029, -0.0343],\n [ 0.2857, -0.1714, 0.9429]])\n >>> r\n tensor([[ -14.0000, -21.0000, 14.0000],\n [ 0.0000, -175.0000, 70.0000],\n [ 0.0000, 0.0000, -35.0000]])\n >>> torch.mm(q, r).round()\n tensor([[ 12., -51., 4.],\n [ 6., 167., -68.],\n [ -4., 24., -41.]])\n >>> torch.mm(q.t(), q).round()\n tensor([[ 1., 0., 0.],\n [ 0., 1., -0.],\n [ 0., -0., 1.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.qr.html", "category": "pytorch docs"} {"text": "[ 0., -0., 1.]])\n >>> a = torch.randn(3, 4, 5)\n >>> q, r = torch.qr(a, some=False)\n >>> torch.allclose(torch.matmul(q, r), a)\n True\n >>> torch.allclose(torch.matmul(q.mT, q), torch.eye(5))\n True", "source": "https://pytorch.org/docs/stable/generated/torch.qr.html", "category": "pytorch docs"} {"text": "torch.linalg.lu_factor_ex\ntorch.linalg.lu_factor_ex(A, *, pivot=True, check_errors=False, out=None)\nThis is a version of \"lu_factor()\" that does not perform error\n checks unless \"check_errors\"= True. It also returns the \"info\"\n tensor returned by LAPACK's getrf.\nNote:\n When the inputs are on a CUDA device, this function synchronizes\n only when \"check_errors\"*= True*.\n\nWarning:\n This function is \"experimental\" and it may change in a future\n PyTorch release.\n\nParameters:\n A (Tensor) -- tensor of shape (, m, n)* where *** is\n zero or more batch dimensions.\nKeyword Arguments:\n * pivot (bool, optional) -- Whether to compute the LU\n decomposition with partial pivoting, or the regular LU\n decomposition. \"pivot\"= False not supported on CPU. 
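A short round-trip sketch using the recommended \"torch.linalg.qr()\" replacement mentioned above:

    >>> A = torch.randn(3, 3)
    >>> Q, R = torch.linalg.qr(A)
    >>> torch.allclose(Q @ R, A)
    True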
Default:\n True.\n * **check_errors** (*bool**, **optional*) -- controls whether to\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor_ex.html", "category": "pytorch docs"} {"text": "check the content of \"infos\" and raise an error if it is non-\n zero. Default: False.\n * **out** (*tuple**, **optional*) -- tuple of three tensors to\n write the output to. Ignored if *None*. Default: *None*.\n\nReturns:\n A named tuple (LU, pivots, info).", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor_ex.html", "category": "pytorch docs"} {"text": "torch.Tensor.sum_to_size\nTensor.sum_to_size(*size) -> Tensor\nSum \"this\" tensor to \"size\". \"size\" must be broadcastable to \"this\"\n tensor size.\nParameters:\n size (int...) -- a sequence of integers defining the\n shape of the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sum_to_size.html", "category": "pytorch docs"} {"text": "torch.logcumsumexp\ntorch.logcumsumexp(input, dim, *, out=None) -> Tensor\nReturns the logarithm of the cumulative summation of the\n exponentiation of elements of \"input\" in the dimension \"dim\".\nFor summation index j given by dim and other indices i, the\n result is\n \\text{logcumsumexp}(x)_{ij} = \\log \\sum\\limits_{j=0}^{i}\n \\exp(x_{ij})\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to do the operation over\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(10)\n >>> torch.logcumsumexp(a, dim=0)\n tensor([-0.42296738, -0.04462666, 0.86278635, 0.94622083, 1.05277811,\n 1.39202815, 1.83525007, 1.84492621, 2.06084887, 2.06844475]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.logcumsumexp.html", "category": "pytorch docs"} {"text": "torch.Tensor.conj_physical\nTensor.conj_physical() -> Tensor\nSee \"torch.conj_physical()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.conj_physical.html", "category": "pytorch docs"} {"text": "torch.Tensor.unsqueeze\nTensor.unsqueeze(dim) -> Tensor\nSee \"torch.unsqueeze()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unsqueeze.html", "category": "pytorch docs"} {"text": "device\nclass torch.cuda.device(device)\nContext-manager that changes the selected device.\nParameters:\n device (torch.device or int) -- device index to\n select. It's a no-op if this argument is a negative integer or\n \"None\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.device.html", "category": "pytorch docs"} {"text": "torch.Tensor.fmod_\nTensor.fmod_(divisor) -> Tensor\nIn-place version of \"fmod()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fmod_.html", "category": "pytorch docs"} {"text": "torch.diagonal\ntorch.diagonal(input, offset=0, dim1=0, dim2=1) -> Tensor\nReturns a partial view of \"input\" with the its diagonal elements\n with respect to \"dim1\" and \"dim2\" appended as a dimension at the\n end of the shape.\nThe argument \"offset\" controls which diagonal to consider:\n\n\nIf \"offset\" = 0, it is the main diagonal.\n\n\nIf \"offset\" > 0, it is above the main diagonal.\n\n\nIf \"offset\" < 0, it is below the main diagonal.\n\n\nApplying \"torch.diag_embed()\" to the output of this function with\n the same arguments yields a diagonal matrix with the diagonal\n entries of the input. 
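A minimal sketch of \"Tensor.sum_to_size()\" described above, summing a (2, 3) tensor down to the broadcast-compatible shape (1, 3):

    >>> x = torch.ones(2, 3)
    >>> x.sum_to_size(1, 3)
    tensor([[2., 2., 2.]])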
However, \"torch.diag_embed()\" has different\n default dimensions, so those need to be explicitly specified.\nParameters:\n * input (Tensor) -- the input tensor. Must be at least\n 2-dimensional.\n * **offset** (*int**, **optional*) -- which diagonal to\n consider. Default: 0 (main diagonal).\n\n * **dim1** (*int**, **optional*) -- first dimension with respect\n", "source": "https://pytorch.org/docs/stable/generated/torch.diagonal.html", "category": "pytorch docs"} {"text": "to which to take diagonal. Default: 0.\n * **dim2** (*int**, **optional*) -- second dimension with\n respect to which to take diagonal. Default: 1.\n\nNote:\n To take a batch diagonal, pass in dim1=-2, dim2=-1.\n\nExamples:\n >>> a = torch.randn(3, 3)\n >>> a\n tensor([[-1.0854, 1.1431, -0.1752],\n [ 0.8536, -0.0905, 0.0360],\n [ 0.6927, -0.3735, -0.4945]])\n\n\n >>> torch.diagonal(a, 0)\n tensor([-1.0854, -0.0905, -0.4945])\n\n\n >>> torch.diagonal(a, 1)\n tensor([ 1.1431, 0.0360])\n\n\n >>> x = torch.randn(2, 5, 4, 2)\n >>> torch.diagonal(x, offset=-1, dim1=1, dim2=2)\n tensor([[[-1.2631, 0.3755, -1.5977, -1.8172],\n [-1.1065, 1.0401, -0.2235, -0.7938]],\n\n [[-1.7325, -0.3081, 0.6166, 0.2335],\n [ 1.0500, 0.7336, -0.3836, -1.1015]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.diagonal.html", "category": "pytorch docs"} {"text": "MultiLabelMarginLoss\nclass torch.nn.MultiLabelMarginLoss(size_average=None, reduce=None, reduction='mean')\nCreates a criterion that optimizes a multi-class multi-\n classification hinge loss (margin-based loss) between input x (a 2D\n mini-batch Tensor) and output y (which is a 2D Tensor of target\n class indices). For each sample in the mini-batch:\n \\text{loss}(x, y) = \\sum_{ij}\\frac{\\max(0, 1 - (x[y[j]] -\n x[i]))}{\\text{x.size}(0)}\n\nwhere x \\in \\left{0, \\; \\cdots , \\; \\text{x.size}(0) - 1\\right},\n y \\in \\left{0, \\; \\cdots , \\; \\text{y.size}(0) - 1\\right}, 0 \\leq\n y[j] \\leq \\text{x.size}(0)-1, and i \\neq y[j] for all i and j.\ny and x must have the same size.\nThe criterion only considers a contiguous block of non-negative\n targets that starts at the front.\nThis allows for different samples to have variable amounts of\n target classes.\nParameters:\n * size_average (bool, optional) -- Deprecated (see", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelMarginLoss.html", "category": "pytorch docs"} {"text": "\"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. 
Note:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelMarginLoss.html", "category": "pytorch docs"} {"text": "\"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\nShape:\n * Input: (C) or (N, C) where N is the batch size and C is\n the number of classes.\n * Target: (C) or (N, C), label targets padded by -1 ensuring\n same shape as the input.\n\n * Output: scalar. If \"reduction\" is \"'none'\", then (N).\n\nExamples:\n >>> loss = nn.MultiLabelMarginLoss()\n >>> x = torch.FloatTensor([[0.1, 0.2, 0.4, 0.8]])\n >>> # for target y, only consider labels 3 and 0, not after label -1\n >>> y = torch.LongTensor([[3, 0, -1, 1]])\n >>> # 0.25 * ((1-(0.1-0.2)) + (1-(0.1-0.4)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))\n >>> loss(x, y)\n tensor(0.85...)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiLabelMarginLoss.html", "category": "pytorch docs"} {"text": "BatchNorm3d\nclass torch.nn.BatchNorm3d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\nApplies Batch Normalization over a 5D input (a mini-batch of 3D\n inputs with additional channel dimension) as described in the paper\n Batch Normalization: Accelerating Deep Network Training by Reducing\n Internal Covariate Shift .\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n\nThe mean and standard-deviation are calculated per-dimension over\n the mini-batches and \\gamma and \\beta are learnable parameter\n vectors of size C (where C is the input size). By default, the\n elements of \\gamma are set to 1 and the elements of \\beta are set\n to 0. The standard-deviation is calculated via the biased\n estimator, equivalent to torch.var(input, unbiased=False).\nAlso by default, during training this layer keeps running estimates\n of its computed mean and variance, which are then used for", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html", "category": "pytorch docs"} {"text": "normalization during evaluation. The running estimates are kept\n with a default \"momentum\" of 0.1.\nIf \"track_running_stats\" is set to \"False\", this layer then does\n not keep running estimates, and batch statistics are instead used\n during evaluation time as well.\nNote:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n\nBecause the Batch Normalization is done over the C dimension,\n computing statistics on (N, D, H, W) slices, it's common\n terminology to call this Volumetric Batch Normalization or Spatio-\n temporal Batch Normalization.\nParameters:\n * num_features (int) -- C from an expected input of size\n (N, C, D, H, W)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html", "category": "pytorch docs"} {"text": "(N, C, D, H, W)\n * **eps** (*float*) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). 
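A minimal sketch of the running-statistics update rule described above: starting from the initial \"running_mean\" of zero, one training-mode forward pass leaves \"momentum\" times the batch mean:

    >>> m = nn.BatchNorm3d(3, momentum=0.1)
    >>> x = torch.randn(4, 3, 2, 2, 2)
    >>> _ = m(x)
    >>> torch.allclose(m.running_mean, 0.1 * x.mean(dim=(0, 2, 3, 4)))
    True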
Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"\n\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics. in both\n training and eval modes. Default: \"True\"\n\nShape:\n * Input: (N, C, D, H, W)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html", "category": "pytorch docs"} {"text": "Shape:\n * Input: (N, C, D, H, W)\n * Output: (N, C, D, H, W) (same shape as input)\n\nExamples:\n >>> # With Learnable Parameters\n >>> m = nn.BatchNorm3d(100)\n >>> # Without Learnable Parameters\n >>> m = nn.BatchNorm3d(100, affine=False)\n >>> input = torch.randn(20, 100, 35, 45, 10)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm3d.html", "category": "pytorch docs"} {"text": "Softshrink\nclass torch.nn.Softshrink(lambd=0.5)\nApplies the soft shrinkage function elementwise:\n \\text{SoftShrinkage}(x) = \\begin{cases} x - \\lambda, & \\text{ if\n } x > \\lambda \\\\ x + \\lambda, & \\text{ if } x < -\\lambda \\\\ 0, &\n \\text{ otherwise } \\end{cases}\n\nParameters:\n lambd (float) -- the \\lambda (must be no less than zero)\n value for the Softshrink formulation. Default: 0.5\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.Softshrink()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softshrink.html", "category": "pytorch docs"} {"text": "torch.Tensor.slogdet\nTensor.slogdet()\nSee \"torch.slogdet()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.slogdet.html", "category": "pytorch docs"} {"text": "torch.foreach_sigmoid\ntorch.foreach_sigmoid(self: List[Tensor]) -> None\nApply \"torch.sigmoid()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sigmoid_.html", "category": "pytorch docs"} {"text": "torch.scatter_reduce\ntorch.scatter_reduce(input, dim, index, src, reduce, *, include_self=True) -> Tensor\nOut-of-place version of \"torch.Tensor.scatter_reduce_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.scatter_reduce.html", "category": "pytorch docs"} {"text": "torch.cross\ntorch.cross(input, other, dim=None, *, out=None) -> Tensor\nReturns the cross product of vectors in dimension \"dim\" of \"input\"\n and \"other\".\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of vectors, for which it computes the product\n along the dimension \"dim\". In this case, the output has the same\n batch dimensions as the inputs.\nIf \"dim\" is not given, it defaults to the first dimension found\n with the size 3. Note that this might be unexpected.\nSee also:\n \"torch.linalg.cross()\" which requires specifying dim (defaulting\n to -1).\n\nWarning:\n This function may change in a future PyTorch release to match the\n default behaviour in \"torch.linalg.cross()\". 
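A small numeric sketch of the \"Softshrink\" definition above with the default \lambda = 0.5:

    >>> m = nn.Softshrink()
    >>> m(torch.tensor([-1.0, -0.2, 0.0, 0.2, 1.0]))
    tensor([-0.5000,  0.0000,  0.0000,  0.0000,  0.5000])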
We recommend using\n \"torch.linalg.cross()\".\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the second input tensor\n\n * **dim** (*int**, **optional*) -- the dimension to take the\n", "source": "https://pytorch.org/docs/stable/generated/torch.cross.html", "category": "pytorch docs"} {"text": "cross-product in.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4, 3)\n >>> a\n tensor([[-0.3956, 1.1455, 1.6895],\n [-0.5849, 1.3672, 0.3599],\n [-1.1626, 0.7180, -0.0521],\n [-0.1339, 0.9902, -2.0225]])\n >>> b = torch.randn(4, 3)\n >>> b\n tensor([[-0.0257, -1.4725, -1.2251],\n [-1.1479, -0.7005, -1.9757],\n [-1.3904, 0.3726, -1.1836],\n [-0.9688, -0.7153, 0.2159]])\n >>> torch.cross(a, b, dim=1)\n tensor([[ 1.0844, -0.5281, 0.6120],\n [-2.4490, -1.5687, 1.9792],\n [-0.8304, -1.3037, 0.5650],\n [-1.2329, 1.9883, 1.0551]])\n >>> torch.cross(a, b)\n tensor([[ 1.0844, -0.5281, 0.6120],\n [-2.4490, -1.5687, 1.9792],\n [-0.8304, -1.3037, 0.5650],\n [-1.2329, 1.9883, 1.0551]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.cross.html", "category": "pytorch docs"} {"text": "torch.Tensor.sinc_\nTensor.sinc_() -> Tensor\nIn-place version of \"sinc()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sinc_.html", "category": "pytorch docs"} {"text": "torch.is_inference_mode_enabled\ntorch.is_inference_mode_enabled()\nReturns True if inference mode is currently enabled.", "source": "https://pytorch.org/docs/stable/generated/torch.is_inference_mode_enabled.html", "category": "pytorch docs"} {"text": "torch.Tensor.lerp_\nTensor.lerp_(end, weight) -> Tensor\nIn-place version of \"lerp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lerp_.html", "category": "pytorch docs"} {"text": "torch.Tensor.nanquantile\nTensor.nanquantile(q, dim=None, keepdim=False, *, interpolation='linear') -> Tensor\nSee \"torch.nanquantile()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nanquantile.html", "category": "pytorch docs"} {"text": "torch.cuda.nvtx.range_pop\ntorch.cuda.nvtx.range_pop()\nPops a range off of a stack of nested range spans. 
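A minimal sketch of \"torch.is_inference_mode_enabled()\" described above:

    >>> torch.is_inference_mode_enabled()
    False
    >>> with torch.inference_mode():
    ...     print(torch.is_inference_mode_enabled())
    True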
Returns the\n zero-based depth of the range that is ended.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.nvtx.range_pop.html", "category": "pytorch docs"} {"text": "torch.dequantize\ntorch.dequantize(tensor) -> Tensor\nReturns an fp32 Tensor by dequantizing a quantized Tensor\nParameters:\n tensor (Tensor) -- A quantized Tensor\ntorch.dequantize(tensors) -> sequence of Tensors\nGiven a list of quantized Tensors, dequantize them and return a\n list of fp32 Tensors\nParameters:\n tensors (sequence of Tensors) -- A list of quantized\n Tensors", "source": "https://pytorch.org/docs/stable/generated/torch.dequantize.html", "category": "pytorch docs"} {"text": "torch.Tensor.bitwise_left_shift\nTensor.bitwise_left_shift(other) -> Tensor\nSee \"torch.bitwise_left_shift()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_left_shift.html", "category": "pytorch docs"} {"text": "LinearReLU\nclass torch.ao.nn.intrinsic.quantized.LinearReLU(in_features, out_features, bias=True, dtype=torch.qint8)\nA LinearReLU module fused from Linear and ReLU modules\nWe adopt the same interface as \"torch.ao.nn.quantized.Linear\".\nVariables:\n torch.ao.nn.quantized.Linear (Same as) --\nExamples:\n >>> m = nn.intrinsic.LinearReLU(20, 30)\n >>> input = torch.randn(128, 20)\n >>> output = m(input)\n >>> print(output.size())\n torch.Size([128, 30])\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.LinearReLU.html", "category": "pytorch docs"} {"text": "FakeQuantizeBase\nclass torch.quantization.fake_quantize.FakeQuantizeBase\nBase fake quantize module Any fake quantize implementation should\n derive from this class.\nConcrete fake quantize module should follow the same API. In\n forward, they will update the statistics of the observed Tensor and\n fake quantize the input. They should also provide a\n calculate_qparams function that computes the quantization\n parameters given the collected statistics.", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.FakeQuantizeBase.html", "category": "pytorch docs"} {"text": "torch.optim.Optimizer.add_param_group\nOptimizer.add_param_group(param_group)\nAdd a param group to the \"Optimizer\" s param_groups.\nThis can be useful when fine tuning a pre-trained network as frozen\n layers can be made trainable and added to the \"Optimizer\" as\n training progresses.\nParameters:\n param_group (dict) -- Specifies what Tensors should be\n optimized along with group specific optimization options.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.add_param_group.html", "category": "pytorch docs"} {"text": "ConvBnReLU2d\nclass torch.ao.nn.intrinsic.ConvBnReLU2d(conv, bn, relu)\nThis is a sequential container which calls the Conv 2d, Batch Norm\n 2d, and ReLU modules. During quantization this will be replaced\n with the corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBnReLU2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.new_ones\nTensor.new_ones(size, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor\nReturns a Tensor of size \"size\" filled with \"1\". By default, the\n returned Tensor has the same \"torch.dtype\" and \"torch.device\" as\n this tensor.\nParameters:\n size (int...) 
-- a list, tuple, or \"torch.Size\" of\n integers defining the shape of the output tensor.\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired type of\n returned tensor. Default: if None, same \"torch.dtype\" as this\n tensor.\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, same \"torch.device\" as this\n tensor.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_ones.html", "category": "pytorch docs"} {"text": "returned Tensor. Default: \"torch.strided\".\n * **pin_memory** (*bool**, **optional*) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n\nExample:\n >>> tensor = torch.tensor((), dtype=torch.int32)\n >>> tensor.new_ones((2, 3))\n tensor([[ 1, 1, 1],\n [ 1, 1, 1]], dtype=torch.int32)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_ones.html", "category": "pytorch docs"} {"text": "AdaptiveMaxPool3d\nclass torch.nn.AdaptiveMaxPool3d(output_size, return_indices=False)\nApplies a 3D adaptive max pooling over an input signal composed of\n several input planes.\nThe output is of size D_{out} \\times H_{out} \\times W_{out}, for\n any input size. The number of output features is equal to the\n number of input planes.\nParameters:\n * output_size (Union[int, None,\n Tuple[Optional[int], Optional[int],\n Optional[int]]]) -- the target output size of the\n image of the form D_{out} \\times H_{out} \\times W_{out}. Can\n be a tuple (D_{out}, H_{out}, W_{out}) or a single D_{out} for\n a cube D_{out} \\times D_{out} \\times D_{out}. D_{out}, H_{out}\n and W_{out} can be either a \"int\", or \"None\" which means the\n size will be the same as that of the input.\n * **return_indices** (*bool*) -- if \"True\", will return the\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool3d.html", "category": "pytorch docs"} {"text": "indices along with the outputs. Useful to pass to\n nn.MaxUnpool3d. Default: \"False\"\nShape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where (D_{out}, H_{out},\n W_{out})=\\text{output\\_size}.\n\n-[ Examples ]-\n\n\n\ntarget output size of 5x7x9\nm = nn.AdaptiveMaxPool3d((5, 7, 9))\ninput = torch.randn(1, 64, 8, 9, 10)\noutput = m(input)\ntarget output size of 7x7x7 (cube)\nm = nn.AdaptiveMaxPool3d(7)\ninput = torch.randn(1, 64, 10, 9, 8)\noutput = m(input)\ntarget output size of 7x9x8\nm = nn.AdaptiveMaxPool3d((7, None, None))\ninput = torch.randn(1, 64, 10, 9, 8)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool3d.html", "category": "pytorch docs"} {"text": "torch.optim.Optimizer.zero_grad\nOptimizer.zero_grad(set_to_none=False)\nSets the gradients of all optimized \"torch.Tensor\" s to zero.\nParameters:\n set_to_none (bool) -- instead of setting to zero, set the\n grads to None. This will in general have lower memory footprint,\n and can modestly improve performance. However, it changes\n certain behaviors. For example: 1. When the user tries to access\n a gradient and perform manual ops on it, a None attribute or a\n Tensor full of 0s will behave differently. 2. 
If the user\n requests \"zero_grad(set_to_none=True)\" followed by a backward\n pass, \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a different\n behavior if the gradient is 0 or None (in one case it does the\n step with a gradient of 0 and in the other it skips the step\n altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.zero_grad.html", "category": "pytorch docs"} {"text": "torch.nn.modules.module.register_module_backward_hook\ntorch.nn.modules.module.register_module_backward_hook(hook)\nRegisters a backward hook common to all the modules.\nThis function is deprecated in favor of\n \"torch.nn.modules.module.register_module_full_backward_hook()\" and\n the behavior of this function will change in future versions.\nReturns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\nReturn type:\n \"torch.utils.hooks.RemovableHandle\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_backward_hook.html", "category": "pytorch docs"} {"text": "torch._foreach_cosh\ntorch._foreach_cosh(self: List[Tensor]) -> List[Tensor]\nApply \"torch.cosh()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_cosh.html", "category": "pytorch docs"} {"text": "ConstantPad3d\nclass torch.nn.ConstantPad3d(padding, value)\nPads the input tensor boundaries with a constant value.\nFor N-dimensional padding, use \"torch.nn.functional.pad()\".\nParameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 6-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom},\n \\text{padding_front}, \\text{padding_back})\nShape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where\n\n D_{out} = D_{in} + \\text{padding\\_front} +\n \\text{padding\\_back}\n\n H_{out} = H_{in} + \\text{padding\\_top} +\n \\text{padding\\_bottom}\n\n W_{out} = W_{in} + \\text{padding\\_left} +\n \\text{padding\\_right}\n\nExamples:\n >>> m = nn.ConstantPad3d(3, 3.5)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad3d.html", "category": "pytorch docs"} {"text": "\n\n\nm = nn.ConstantPad3d(3, 3.5)\n >>> input = torch.randn(16, 3, 10, 20, 30)\n >>> output = m(input)\n >>> # using different paddings for different sides\n >>> m = nn.ConstantPad3d((3, 3, 6, 6, 0, 1), 3.5)\n >>> output = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad3d.html", "category": "pytorch docs"} {"text": "torch.nn.utils.weight_norm\ntorch.nn.utils.weight_norm(module, name='weight', dim=0)\nApplies weight normalization to a parameter in the given module.\n \\mathbf{w} = g \\dfrac{\\mathbf{v}}{\\|\\mathbf{v}\\|}\n\nWeight normalization is a reparameterization that decouples the\n magnitude of a weight tensor from its direction. This replaces the\n parameter specified by \"name\" (e.g. \"'weight'\") with two\n parameters: one specifying the magnitude (e.g. \"'weight_g'\") and\n one specifying the direction (e.g. \"'weight_v'\"). 
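To illustrate the magnitude/direction split described above (a minimal sketch; with the default \"dim=0\" on a Linear weight, the norm is taken over the remaining dimension):

    >>> m = torch.nn.utils.weight_norm(nn.Linear(20, 40))
    >>> v, g = m.weight_v, m.weight_g
    >>> torch.allclose(m.weight, g * v / v.norm(dim=1, keepdim=True))
    True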
Weight\n normalization is implemented via a hook that recomputes the weight\n tensor from the magnitude and direction before every \"forward()\"\n call.\nBy default, with \"dim=0\", the norm is computed independently per\n output channel/plane. To compute a norm over the entire weight\n tensor, use \"dim=None\".\nSee https://arxiv.org/abs/1602.07868\nParameters:\n * module (Module) -- containing module", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.weight_norm.html", "category": "pytorch docs"} {"text": "\n\nname (str, optional) -- name of weight parameter\n\ndim (int, optional) -- dimension over which to\n compute the norm\n\n\n\nReturns:\n The original module with the weight norm hook\nReturn type:\n T_module\nExample:\n >>> m = weight_norm(nn.Linear(20, 40), name='weight')\n >>> m\n Linear(in_features=20, out_features=40, bias=True)\n >>> m.weight_g.size()\n torch.Size([40, 1])\n >>> m.weight_v.size()\n torch.Size([40, 20])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.weight_norm.html", "category": "pytorch docs"} {"text": "torch.cuda.make_graphed_callables\ntorch.cuda.make_graphed_callables(callables, sample_args, num_warmup_iters=3, allow_unused_input=False)\nAccepts callables (functions or \"nn.Module\"s) and returns graphed\n versions.\nEach graphed callable's forward pass runs its source callable's\n forward CUDA work as a CUDA graph inside a single autograd node.\nThe graphed callable's forward pass also appends a backward node to\n the autograd graph. During backward, this node runs the callable's\n backward work as a CUDA graph.\nTherefore, each graphed callable should be a drop-in replacement\n for its source callable in an autograd-enabled training loop.\nSee Partial-network capture for detailed use and constraints.\nIf you pass a tuple of several callables, their captures will use\n the same memory pool. See Graph memory management for when this is\n appropriate.\nParameters:\n * callables (torch.nn.Module or Python function*, or", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.make_graphed_callables.html", "category": "pytorch docs"} {"text": "*tuple of these) -- Callable or callables to graph. See\n Graph memory management for when passing a tuple of callables\n is appropriate. If you pass a tuple of callables, their order\n in the tuple must be the same order they'll run in the live\n workload.\n * **sample_args** (*tuple of Tensors**, or **tuple of tuples of\n Tensors*) -- Samples args for each callable. If a single\n callable was passed, \"sample_args\" must be a single tuple of\n argument Tensors. If a tuple of callables was passed,\n \"sample_args\" must be tuple of tuples of argument Tensors.\n\n * **num_warmup_iters** (*int*) -- The number of warmup\n iterations. Currently, \"DataDistributedParallel\" needs 11\n iterations for warm up. Default: \"3\".\n\n * **allow_unused_input** (*bool*) -- If False, specifying inputs\n that were not used when computing outputs (and therefore their\n grad is always zero) is an error. Defaults to False.\n\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.make_graphed_callables.html", "category": "pytorch docs"} {"text": "Note:\n The \"requires_grad\" state of each Tensor in \"sample_args\" must\n match the state that's expected for the corresponding real input\n in the training loop.\n\nWarning:\n This API is in beta and may change in future releases.\n\nWarning:\n \"sample_args\" for each callable must contain only Tensors. 
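A minimal usage sketch following the \"make_graphed_callables()\" signature above (requires a CUDA device; the module and shapes are purely illustrative):

    >>> module = torch.nn.Linear(8, 8).cuda()
    >>> sample_args = (torch.randn(4, 8, device='cuda'),)
    >>> graphed = torch.cuda.make_graphed_callables(module, sample_args)
    >>> out = graphed(torch.randn(4, 8, device='cuda'))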
Other\n types are not allowed.\n\nWarning:\n Returned callables do not support higher order differentiation\n (e.g., double backward).\n\nWarning:\n In any \"Module\" passed to \"make_graphed_callables()\", only\n parameters may be trainable. Buffers must have\n \"requires_grad=False\".\n\nWarning:\n After you pass a \"torch.nn.Module\" through\n \"make_graphed_callables()\", you may not add or remove any of that\n Module's parameters or buffers.\n\nWarning:\n \"torch.nn.Module\"s passed to \"make_graphed_callables()\" must not\n have module hooks registered on them at the time they are passed.\n However, registering hooks on modules *after* passing them\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.make_graphed_callables.html", "category": "pytorch docs"} {"text": "through \"make_graphed_callables()\" is allowed.\nWarning:\n When running a graphed callable, you must pass its arguments in\n the same order and format they appeared in that callable's\n \"sample_args\".\n\nWarning:\n The automatic mixed precision is supported in\n \"make_graphed_callables()\" only with disabled caching. The\n context manager *torch.cuda.amp.autocast()* must have\n *cache_enabled=False*.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.make_graphed_callables.html", "category": "pytorch docs"} {"text": "torch.nn.utils.spectral_norm\ntorch.nn.utils.spectral_norm(module, name='weight', n_power_iterations=1, eps=1e-12, dim=None)\nApplies spectral normalization to a parameter in the given module.\n \\mathbf{W}_{SN} = \\dfrac{\\mathbf{W}}{\\sigma(\\mathbf{W})},\n \\sigma(\\mathbf{W}) = \\max_{\\mathbf{h}: \\mathbf{h} \\ne 0}\n \\dfrac{\\|\\mathbf{W} \\mathbf{h}\\|_2}{\\|\\mathbf{h}\\|_2}\n\nSpectral normalization stabilizes the training of discriminators\n (critics) in Generative Adversarial Networks (GANs) by rescaling\n the weight tensor with spectral norm \\sigma of the weight matrix\n calculated using power iteration method. If the dimension of the\n weight tensor is greater than 2, it is reshaped to 2D in power\n iteration method to get spectral norm. This is implemented via a\n hook that calculates spectral norm and rescales weight before every\n \"forward()\" call.\nSee Spectral Normalization for Generative Adversarial Networks .\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.spectral_norm.html", "category": "pytorch docs"} {"text": "Parameters:\n * module (nn.Module) -- containing module\n * **name** (*str**, **optional*) -- name of weight parameter\n\n * **n_power_iterations** (*int**, **optional*) -- number of\n power iterations to calculate spectral norm\n\n * **eps** (*float**, **optional*) -- epsilon for numerical\n stability in calculating norms\n\n * **dim** (*int**, **optional*) -- dimension corresponding to\n number of outputs, the default is \"0\", except for modules that\n are instances of ConvTranspose{1,2,3}d, when it is \"1\"\n\nReturns:\n The original module with the spectral norm hook\nReturn type:\n T_module\nNote:\n This function has been reimplemented as\n \"torch.nn.utils.parametrizations.spectral_norm()\" using the new\n parametrization functionality in\n \"torch.nn.utils.parametrize.register_parametrization()\". Please\n use the newer version. 
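For comparison (an illustrative sketch, not part of the original page), the newer parametrization-based API is called the same way:\n    >>> from torch.nn.utils import parametrizations\n    >>> m = parametrizations.spectral_norm(nn.Linear(20, 40))\nThe function documented on this page, by contrast, remains the older hook-based implementation.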
This function will be deprecated in a\n future version of PyTorch.\n\nExample:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.spectral_norm.html", "category": "pytorch docs"} {"text": "future version of PyTorch.\nExample:\n >>> m = spectral_norm(nn.Linear(20, 40))\n >>> m\n Linear(in_features=20, out_features=40, bias=True)\n >>> m.weight_u.size()\n torch.Size([40])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.spectral_norm.html", "category": "pytorch docs"} {"text": "torch.roll\ntorch.roll(input, shifts, dims=None) -> Tensor\nRoll the tensor \"input\" along the given dimension(s). Elements that\n are shifted beyond the last position are re-introduced at the first\n position. If \"dims\" is None, the tensor will be flattened before\n rolling and then restored to the original shape.\nParameters:\n * input (Tensor) -- the input tensor.\n * **shifts** (*int** or **tuple of ints*) -- The number of\n places by which the elements of the tensor are shifted. If\n shifts is a tuple, dims must be a tuple of the same size, and\n each dimension will be rolled by the corresponding value\n\n * **dims** (*int** or **tuple of ints*) -- Axis along which to\n roll\n\nExample:\n >>> x = torch.tensor([1, 2, 3, 4, 5, 6, 7, 8]).view(4, 2)\n >>> x\n tensor([[1, 2],\n [3, 4],\n [5, 6],\n [7, 8]])\n >>> torch.roll(x, 1)\n tensor([[8, 1],\n [2, 3],\n [4, 5],\n", "source": "https://pytorch.org/docs/stable/generated/torch.roll.html", "category": "pytorch docs"} {"text": "[2, 3],\n [4, 5],\n [6, 7]])\n >>> torch.roll(x, 1, 0)\n tensor([[7, 8],\n [1, 2],\n [3, 4],\n [5, 6]])\n >>> torch.roll(x, -1, 0)\n tensor([[3, 4],\n [5, 6],\n [7, 8],\n [1, 2]])\n >>> torch.roll(x, shifts=(2, 1), dims=(0, 1))\n tensor([[6, 5],\n [8, 7],\n [2, 1],\n [4, 3]])", "source": "https://pytorch.org/docs/stable/generated/torch.roll.html", "category": "pytorch docs"} {"text": "torch.Tensor.new_tensor\nTensor.new_tensor(data, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor\nReturns a new Tensor with \"data\" as the tensor data. By default,\n the returned Tensor has the same \"torch.dtype\" and \"torch.device\"\n as this tensor.\nWarning:\n \"new_tensor()\" always copies \"data\". If you have a Tensor \"data\"\n and want to avoid a copy, use \"torch.Tensor.requires_grad_()\" or\n \"torch.Tensor.detach()\". If you have a numpy array and want to\n avoid a copy, use \"torch.from_numpy()\".\n\nWarning:\n When data is a tensor *x*, \"new_tensor()\" reads out 'the data'\n from whatever it is passed, and constructs a leaf variable.\n Therefore \"tensor.new_tensor(x)\" is equivalent to\n \"x.clone().detach()\" and \"tensor.new_tensor(x,\n requires_grad=True)\" is equivalent to\n \"x.clone().detach().requires_grad_(True)\". The equivalents using\n \"clone()\" and \"detach()\" are recommended.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_tensor.html", "category": "pytorch docs"} {"text": "\"clone()\" and \"detach()\" are recommended.\nParameters:\n data (array_like) -- The returned Tensor copies \"data\".\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired type of\n returned tensor. Default: if None, same \"torch.dtype\" as this\n tensor.\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, same \"torch.device\" as this\n tensor.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **pin_memory** (*bool**, **optional*) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n\nExample:\n >>> tensor = torch.ones((2,), dtype=torch.int8)\n >>> data = [[0, 1], [2, 3]]\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_tensor.html", "category": "pytorch docs"} {"text": "\n\n\ndata = [[0, 1], [2, 3]]\n >>> tensor.new_tensor(data)\n tensor([[ 0, 1],\n [ 2, 3]], dtype=torch.int8)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_tensor.html", "category": "pytorch docs"} {"text": "torch.set_printoptions\ntorch.set_printoptions(precision=None, threshold=None, edgeitems=None, linewidth=None, profile=None, sci_mode=None)\nSet options for printing. Items shamelessly taken from NumPy\nParameters:\n * precision -- Number of digits of precision for floating\n point output (default = 4).\n * **threshold** -- Total number of array elements which trigger\n summarization rather than full *repr* (default = 1000).\n\n * **edgeitems** -- Number of array items in summary at beginning\n and end of each dimension (default = 3).\n\n * **linewidth** -- The number of characters per line for the\n purpose of inserting line breaks (default = 80). Thresholded\n matrices will ignore this parameter.\n\n * **profile** -- Sane defaults for pretty printing. Can override\n with any of the above options. (any one of *default*, *short*,\n *full*)\n\n * **sci_mode** -- Enable (True) or disable (False) scientific\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_printoptions.html", "category": "pytorch docs"} {"text": "notation. If None (default) is specified, the value is defined\n by torch._tensor_str._Formatter. This value is automatically\n chosen by the framework.\nExample:\n >>> # Limit the precision of elements\n >>> torch.set_printoptions(precision=2)\n >>> torch.tensor([1.12345])\n tensor([1.12])\n >>> # Limit the number of elements shown\n >>> torch.set_printoptions(threshold=5)\n >>> torch.arange(10)\n tensor([0, 1, 2, ..., 7, 8, 9])\n >>> # Restore defaults\n >>> torch.set_printoptions(profile='default')\n >>> torch.tensor([1.12345])\n tensor([1.1235])\n >>> torch.arange(10)\n tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_printoptions.html", "category": "pytorch docs"} {"text": "torch.jit.ignore\ntorch.jit.ignore(drop=False, **kwargs)\nThis decorator indicates to the compiler that a function or method\n should be ignored and left as a Python function. This allows you to\n leave code in your model that is not yet TorchScript compatible. If\n called from TorchScript, ignored functions will dispatch the call\n to the Python interpreter. 
Models with ignored functions cannot be\n exported; use \"@torch.jit.unused\" instead.\nExample (using \"@torch.jit.ignore\" on a method):\n import torch\n import torch.nn as nn\n\n class MyModule(nn.Module):\n @torch.jit.ignore\n def debugger(self, x):\n import pdb\n pdb.set_trace()\n\n def forward(self, x):\n x += 10\n # The compiler would normally try to compile `debugger`,\n # but since it is `@ignore`d, it will be left as a call\n # to Python\n self.debugger(x)\n return x\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ignore.html", "category": "pytorch docs"} {"text": "return x\n m = torch.jit.script(MyModule())\n\n # Error! The call `debugger` cannot be saved since it calls into Python\n m.save(\"m.pt\")\n\nExample (using \"@torch.jit.ignore(drop=True)\" on a method):\n import torch\n import torch.nn as nn\n\n class MyModule(nn.Module):\n @torch.jit.ignore(drop=True)\n def training_method(self, x):\n import pdb\n pdb.set_trace()\n\n def forward(self, x):\n if self.training:\n self.training_method(x)\n return x\n\n m = torch.jit.script(MyModule())\n\n # This is OK since `training_method` is not saved, the call is replaced\n # with a `raise`.\n m.save(\"m.pt\")\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ignore.html", "category": "pytorch docs"} {"text": "torch.nn.functional.adaptive_avg_pool2d\ntorch.nn.functional.adaptive_avg_pool2d(input, output_size)\nApplies a 2D adaptive average pooling over an input signal composed\n of several input planes.\nSee \"AdaptiveAvgPool2d\" for details and output shape.\nParameters:\n output_size (None) -- the target output size (single\n integer or double-integer tuple)\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_avg_pool2d.html", "category": "pytorch docs"} {"text": "torch.sparse_bsc_tensor\ntorch.sparse_bsc_tensor(ccol_indices, row_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor\nConstructs a sparse tensor in BSC (Block Compressed Sparse Column))\n with specified 2-dimensional blocks at the given \"ccol_indices\" and\n \"row_indices\". Sparse matrix multiplication operations in BSC\n format are typically faster than that for sparse tensors in COO\n format. Make you have a look at the note on the data type of the\n indices.\nNote:\n If the \"device\" argument is not specified the device of the given\n \"values\" and indices tensor(s) must match. If, however, the\n argument is specified the input Tensors will be converted to the\n given device and in turn determine the device of the constructed\n sparse tensor.\n\nParameters:\n * ccol_indices (array_like) -- (B+1)-dimensional array of\n size \"(*batchsize, ncolblocks + 1)\". The last element of each", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsc_tensor.html", "category": "pytorch docs"} {"text": "batch is the number of non-zeros. This tensor encodes the\n index in values and row_indices depending on where the given\n column starts. Each successive number in the tensor subtracted\n by the number before it denotes the number of elements in a\n given column.\n * **row_indices** (*array_like*) -- Row block co-ordinates of\n each block in values. 
(B+1)-dimensional tensor with the same\n length as values.\n\n * **values** (*array_list*) -- Initial blocks for the tensor.\n Can be a list, tuple, NumPy \"ndarray\", and other types that\n represents a (1 + 2 + K)-dimensional tensor where \"K\" is the\n number of dense dimensions.\n\n * **size** (list, tuple, \"torch.Size\", optional) -- Size of the\n sparse tensor: \"(*batchsize, nrows * blocksize[0], ncols *\n blocksize[1], *densesize)\" If not provided, the size will be\n inferred as the minimum size big enough to hold all non-zero\n blocks.\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsc_tensor.html", "category": "pytorch docs"} {"text": "blocks.\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if None, infers data type from\n \"values\".\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **check_invariants** (*bool**, **optional*) -- If sparse\n tensor invariants are checked. Default: as returned by\n \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\",\n initially False.\n\nExample::\n >>> ccol_indices = [0, 1, 2]\n >>> row_indices = [0, 1]", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsc_tensor.html", "category": "pytorch docs"} {"text": "\n\n\nrow_indices = [0, 1]\n >>> values = [[[1, 2], [3, 4]], [[5, 6], [7, 8]]]\n >>> torch.sparse_bsc_tensor(torch.tensor(ccol_indices, dtype=torch.int64),\n ... torch.tensor(row_indices, dtype=torch.int64),\n ... torch.tensor(values), dtype=torch.double)\n tensor(ccol_indices=tensor([0, 1, 2]),\n row_indices=tensor([0, 1]),\n values=tensor([[[1., 2.],\n [3., 4.]],\n [[5., 6.],\n [7., 8.]]]), size=(2, 2), nnz=2, dtype=torch.float64,\n layout=torch.sparse_bsc)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_bsc_tensor.html", "category": "pytorch docs"} {"text": "LSTM\nclass torch.nn.LSTM(args, *kwargs)\nApplies a multi-layer long short-term memory (LSTM) RNN to an input\n sequence.\nFor each element in the input sequence, each layer computes the\n following function:\n \\begin{array}{ll} \\\\ i_t = \\sigma(W_{ii} x_t + b_{ii} +\n W_{hi} h_{t-1} + b_{hi}) \\\\ f_t = \\sigma(W_{if} x_t + b_{if}\n + W_{hf} h_{t-1} + b_{hf}) \\\\ g_t = \\tanh(W_{ig} x_t +\n b_{ig} + W_{hg} h_{t-1} + b_{hg}) \\\\ o_t = \\sigma(W_{io} x_t\n + b_{io} + W_{ho} h_{t-1} + b_{ho}) \\\\ c_t = f_t \\odot\n c_{t-1} + i_t \\odot g_t \\\\ h_t = o_t \\odot \\tanh(c_t) \\\\\n \\end{array}\n\nwhere h_t is the hidden state at time t, c_t is the cell state at\n time t, x_t is the input at time t, h_{t-1} is the hidden state\n of the layer at time t-1 or the initial hidden state at time 0,\n and i_t, f_t, g_t, o_t are the input, forget, cell, and output\n gates, respectively. 
\\sigma is the sigmoid function, and \\odot is\n the Hadamard product.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"} {"text": "the Hadamard product.\nIn a multilayer LSTM, the input x^{(l)}_t of the l -th layer (l >=\n 2) is the hidden state h^{(l-1)}_t of the previous layer multiplied\n by dropout \\delta^{(l-1)}_t where each \\delta^{(l-1)}_t is a\n Bernoulli random variable which is 0 with probability \"dropout\".\nIf \"proj_size > 0\" is specified, LSTM with projections will be\n used. This changes the LSTM cell in the following way. First, the\n dimension of h_t will be changed from \"hidden_size\" to \"proj_size\"\n (dimensions of W_{hi} will be changed accordingly). Second, the\n output hidden state of each layer will be multiplied by a learnable\n projection matrix: h_t = W_{hr}h_t. Note that as a consequence of\n this, the output of LSTM network will be of different shape as\n well. See Inputs/Outputs sections below for exact dimensions of all\n variables. You can find more details in\n https://arxiv.org/abs/1402.1128.\nParameters:\n * input_size -- The number of expected features in the input", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"} {"text": "x\n * **hidden_size** -- The number of features in the hidden state\n *h*\n\n * **num_layers** -- Number of recurrent layers. E.g., setting\n \"num_layers=2\" would mean stacking two LSTMs together to form\n a *stacked LSTM*, with the second LSTM taking in outputs of\n the first LSTM and computing the final results. Default: 1\n\n * **bias** -- If \"False\", then the layer does not use bias\n weights *b_ih* and *b_hh*. Default: \"True\"\n\n * **batch_first** -- If \"True\", then the input and output\n tensors are provided as *(batch, seq, feature)* instead of\n *(seq, batch, feature)*. Note that this does not apply to\n hidden or cell states. See the Inputs/Outputs sections below\n for details. Default: \"False\"\n\n * **dropout** -- If non-zero, introduces a *Dropout* layer on\n the outputs of each LSTM layer except the last layer, with\n dropout probability equal to \"dropout\". Default: 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"} {"text": "\n\nbidirectional -- If \"True\", becomes a bidirectional LSTM.\n Default: \"False\"\n\nproj_size -- If \"> 0\", will use LSTM with projections of\n corresponding size. Default: 0\n\n\n\nInputs: input, (h_0, c_0)\n * input: tensor of shape (L, H_{in}) for unbatched input,\n (L, N, H_{in}) when \"batch_first=False\" or (N, L, H_{in}) when\n \"batch_first=True\" containing the features of the input\n sequence. The input can also be a packed variable length\n sequence. See \"torch.nn.utils.rnn.pack_padded_sequence()\" or\n \"torch.nn.utils.rnn.pack_sequence()\" for details.\n * **h_0**: tensor of shape (D * \\text{num\\_layers}, H_{out}) for\n unbatched input or (D * \\text{num\\_layers}, N, H_{out})\n containing the initial hidden state for each element in the\n input sequence. Defaults to zeros if (h_0, c_0) is not\n provided.\n\n * **c_0**: tensor of shape (D * \\text{num\\_layers}, H_{cell})\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"} {"text": "for unbatched input or (D * \\text{num_layers}, N, H_{cell})\n containing the initial cell state for each element in the\n input sequence. 
Defaults to zeros if (h_0, c_0) is not\n provided.\n where:\n\n \\begin{aligned} N ={} & \\text{batch size} \\\\ L ={} &\n \\text{sequence length} \\\\ D ={} & 2 \\text{ if\n bidirectional=True otherwise } 1 \\\\ H_{in} ={} &\n \\text{input\\_size} \\\\ H_{cell} ={} & \\text{hidden\\_size}\n \\\\ H_{out} ={} & \\text{proj\\_size if }\n \\text{proj\\_size}>0 \\text{ otherwise hidden\\_size} \\\\\n \\end{aligned}\n\nOutputs: output, (h_n, c_n)\n * output: tensor of shape (L, D * H_{out}) for unbatched\n input, (L, N, D * H_{out}) when \"batch_first=False\" or (N, L,\n D * H_{out}) when \"batch_first=True\" containing the output\n features (h_t) from the last layer of the LSTM, for each\n t. If a \"torch.nn.utils.rnn.PackedSequence\" has been given", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"} {"text": "as the input, the output will also be a packed sequence. When\n \"bidirectional=True\", output will contain a concatenation of\n the forward and reverse hidden states at each time step in the\n sequence.\n * **h_n**: tensor of shape (D * \\text{num\\_layers}, H_{out}) for\n unbatched input or (D * \\text{num\\_layers}, N, H_{out})\n containing the final hidden state for each element in the\n sequence. When \"bidirectional=True\", *h_n* will contain a\n concatenation of the final forward and reverse hidden states,\n respectively.\n\n * **c_n**: tensor of shape (D * \\text{num\\_layers}, H_{cell})\n for unbatched input or (D * \\text{num\\_layers}, N, H_{cell})\n containing the final cell state for each element in the\n sequence. When \"bidirectional=True\", *c_n* will contain a\n concatenation of the final forward and reverse cell states,\n respectively.\n\nVariables:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"} {"text": "respectively.\nVariables:\n * weight_ih_l[k] -- the learnable input-hidden weights of\n the \\text{k}^{th} layer (W_ii|W_if|W_ig|W_io), of shape\n (4hidden_size, input_size) for k = 0. Otherwise, the\n shape is (4hidden_size, num_directions * hidden_size). If\n \"proj_size > 0\" was specified, the shape will be\n (4hidden_size, num_directions * proj_size) for k > 0*\n * **weight_hh_l[k]** -- the learnable hidden-hidden weights of\n the \\text{k}^{th} layer *(W_hi|W_hf|W_hg|W_ho)*, of shape\n *(4*hidden_size, hidden_size)*. If \"proj_size > 0\" was\n specified, the shape will be *(4*hidden_size, proj_size)*.\n\n * **bias_ih_l[k]** -- the learnable input-hidden bias of the\n \\text{k}^{th} layer *(b_ii|b_if|b_ig|b_io)*, of shape\n *(4*hidden_size)*\n\n * **bias_hh_l[k]** -- the learnable hidden-hidden bias of the\n \\text{k}^{th} layer *(b_hi|b_hf|b_hg|b_ho)*, of shape\n *(4*hidden_size)*\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"} {"text": "(4hidden_size)*\n * **weight_hr_l[k]** -- the learnable projection weights of the\n \\text{k}^{th} layer of shape *(proj_size, hidden_size)*. Only\n present when \"proj_size > 0\" was specified.\n\n * **weight_ih_l[k]_reverse** -- Analogous to *weight_ih_l[k]*\n for the reverse direction. Only present when\n \"bidirectional=True\".\n\n * **weight_hh_l[k]_reverse** -- Analogous to *weight_hh_l[k]*\n for the reverse direction. Only present when\n \"bidirectional=True\".\n\n * **bias_ih_l[k]_reverse** -- Analogous to *bias_ih_l[k]* for\n the reverse direction. 
Only present when \"bidirectional=True\".\n\n * **bias_hh_l[k]_reverse** -- Analogous to *bias_hh_l[k]* for\n the reverse direction. Only present when \"bidirectional=True\".\n\n * **weight_hr_l[k]_reverse** -- Analogous to *weight_hr_l[k]*\n for the reverse direction. Only present when\n \"bidirectional=True\" and \"proj_size > 0\" was specified.\n\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"} {"text": "Note:\n All the weights and biases are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden\\_size}}\n\nNote:\n For bidirectional LSTMs, forward and backward are directions 0\n and 1 respectively. Example of splitting the output layers when\n \"batch_first=False\": \"output.view(seq_len, batch, num_directions,\n hidden_size)\".\n\nNote:\n For bidirectional LSTMs, *h_n* is not equivalent to the last\n element of *output*; the former contains the final forward and\n reverse hidden states, while the latter contains the final\n forward hidden state and the initial reverse hidden state.\n\nNote:\n \"batch_first\" argument is ignored for unbatched inputs.\n\nWarning:\n There are known non-determinism issues for RNN functions on some\n versions of cuDNN and CUDA. You can enforce deterministic\n behavior by setting the following environment variables:On CUDA\n 10.1, set environment variable \"CUDA_LAUNCH_BLOCKING=1\". This may\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"} {"text": "affect performance.On CUDA 10.2 or later, set environment\n variable (note the leading colon symbol)\n \"CUBLAS_WORKSPACE_CONFIG=:16:8\" or\n \"CUBLAS_WORKSPACE_CONFIG=:4096:2\".See the cuDNN 8 Release Notes\n for more information.\nNote:\n If the following conditions are satisfied: 1) cudnn is enabled,\n 2) input data is on the GPU 3) input data has dtype\n \"torch.float16\" 4) V100 GPU is used, 5) input data is not in\n \"PackedSequence\" format persistent algorithm can be selected to\n improve performance.\n\nExamples:\n >>> rnn = nn.LSTM(10, 20, 2)\n >>> input = torch.randn(5, 3, 10)\n >>> h0 = torch.randn(2, 3, 20)\n >>> c0 = torch.randn(2, 3, 20)\n >>> output, (hn, cn) = rnn(input, (h0, c0))\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LSTM.html", "category": "pytorch docs"} {"text": "torch.zeros_like\ntorch.zeros_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\nReturns a tensor filled with the scalar value 0, with the same\n size as \"input\". \"torch.zeros_like(input)\" is equivalent to\n \"torch.zeros(input.size(), dtype=input.dtype, layout=input.layout,\n device=input.device)\".\nWarning:\n As of 0.4, this function does not support an \"out\" keyword. As an\n alternative, the old \"torch.zeros_like(input, out=output)\" is\n equivalent to \"torch.zeros(input.size(), out=output)\".\n\nParameters:\n input (Tensor) -- the size of \"input\" will determine size\n of the output tensor.\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n", "source": "https://pytorch.org/docs/stable/generated/torch.zeros_like.html", "category": "pytorch docs"} {"text": "returned tensor. 
Default: if \"None\", defaults to the layout of\n \"input\".\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **memory_format** (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n\nExample:\n >>> input = torch.empty(2, 3)\n >>> torch.zeros_like(input)\n tensor([[ 0., 0., 0.],\n [ 0., 0., 0.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.zeros_like.html", "category": "pytorch docs"} {"text": "torch.hsplit\ntorch.hsplit(input, indices_or_sections) -> List of Tensors\nSplits \"input\", a tensor with one or more dimensions, into multiple\n tensors horizontally according to \"indices_or_sections\". Each split\n is a view of \"input\".\nIf \"input\" is one dimensional this is equivalent to calling\n torch.tensor_split(input, indices_or_sections, dim=0) (the split\n dimension is zero), and if \"input\" has two or more dimensions it's\n equivalent to calling torch.tensor_split(input,\n indices_or_sections, dim=1) (the split dimension is 1), except that\n if \"indices_or_sections\" is an integer it must evenly divide the\n split dimension or a runtime error will be thrown.\nThis function is based on NumPy's \"numpy.hsplit()\".\nParameters:\n * input (Tensor) -- tensor to split.\n * **indices_or_sections** (*int** or **list** or **tuple of\n ints*) -- See argument in \"torch.tensor_split()\".\n\nExample::\n >>> t = torch.arange(16.0).reshape(4,4)\n >>> t", "source": "https://pytorch.org/docs/stable/generated/torch.hsplit.html", "category": "pytorch docs"} {"text": "\n\n\nt\n tensor([[ 0., 1., 2., 3.],\n [ 4., 5., 6., 7.],\n [ 8., 9., 10., 11.],\n [12., 13., 14., 15.]])\n >>> torch.hsplit(t, 2)\n (tensor([[ 0., 1.],\n [ 4., 5.],\n [ 8., 9.],\n [12., 13.]]),\n tensor([[ 2., 3.],\n [ 6., 7.],\n [10., 11.],\n [14., 15.]]))\n >>> torch.hsplit(t, [3, 6])\n (tensor([[ 0., 1., 2.],\n [ 4., 5., 6.],\n [ 8., 9., 10.],\n [12., 13., 14.]]),\n tensor([[ 3.],\n [ 7.],\n [11.],\n [15.]]),\n tensor([], size=(4, 0)))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.hsplit.html", "category": "pytorch docs"} {"text": "torch.Tensor.aminmax\nTensor.aminmax(*, dim=None, keepdim=False) -> (Tensor min, Tensor max)\nSee \"torch.aminmax()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.aminmax.html", "category": "pytorch docs"} {"text": "ConvBn3d\nclass torch.ao.nn.intrinsic.qat.ConvBn3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)\nA ConvBn3d module is a module fused from Conv3d and BatchNorm3d,\n attached with FakeQuantize modules for weight, used in quantization\n aware training.\nWe combined the interface of \"torch.nn.Conv3d\" and\n \"torch.nn.BatchNorm3d\".\nSimilar to \"torch.nn.Conv3d\", with FakeQuantize modules initialized\n to default.\nVariables:\n * freeze_bn --\n * **weight_fake_quant** -- fake quant module for weight\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBn3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.retain_grad\nTensor.retain_grad() -> None\nEnables this Tensor to have their \"grad\" populated during\n \"backward()\". 
This is a no-op for leaf tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.retain_grad.html", "category": "pytorch docs"} {"text": "BCELoss\nclass torch.nn.BCELoss(weight=None, size_average=None, reduce=None, reduction='mean')\nCreates a criterion that measures the Binary Cross Entropy between\n the target and the input probabilities:\nThe unreduced (i.e. with \"reduction\" set to \"'none'\") loss can be\n described as:\n \\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad l_n = - w_n\n \\left[ y_n \\cdot \\log x_n + (1 - y_n) \\cdot \\log (1 - x_n)\n \\right],\n\nwhere N is the batch size. If \"reduction\" is not \"'none'\" (default\n \"'mean'\"), then\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{`mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{`sum'.}\n \\end{cases}\n\nThis is used for measuring the error of a reconstruction in for\n example an auto-encoder. Note that the targets y should be numbers\n between 0 and 1.\nNotice that if x_n is either 0 or 1, one of the log terms would be", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html", "category": "pytorch docs"} {"text": "mathematically undefined in the above loss equation. PyTorch\n chooses to set \\log (0) = -\\infty, since \\lim_{x\\to 0} \\log (x) =\n -\\infty. However, an infinite term in the loss equation is not\n desirable for several reasons.\nFor one, if either y_n = 0 or (1 - y_n) = 0, then we would be\n multiplying 0 with infinity. Secondly, if we have an infinite loss\n value, then we would also have an infinite term in our gradient,\n since \\lim_{x\\to 0} \\frac{d}{dx} \\log (x) = \\infty. This would make\n BCELoss's backward method nonlinear with respect to x_n, and using\n it for things like linear regression would not be straight-forward.\nOur solution is that BCELoss clamps its log function outputs to be\n greater than or equal to -100. This way, we can always have a\n finite loss value and a linear backward method.\nParameters:\n * weight (Tensor, optional) -- a manual rescaling\n weight given to the loss of each batch element. If given, has", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html", "category": "pytorch docs"} {"text": "to be a Tensor of size nbatch.\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html", "category": "pytorch docs"} {"text": "the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. 
Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\nShape:\n * Input: (*), where * means any number of dimensions.\n * Target: (*), same shape as the input.\n\n * Output: scalar. If \"reduction\" is \"'none'\", then (*), same\n shape as input.\n\nExamples:\n >>> m = nn.Sigmoid()\n >>> loss = nn.BCELoss()\n >>> input = torch.randn(3, requires_grad=True)\n >>> target = torch.empty(3).random_(2)\n >>> output = loss(m(input), target)\n >>> output.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCELoss.html", "category": "pytorch docs"} {"text": "torch.Tensor.outer\nTensor.outer(vec2) -> Tensor\nSee \"torch.outer()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.outer.html", "category": "pytorch docs"} {"text": "torch.Tensor.clip\nTensor.clip(min=None, max=None) -> Tensor\nAlias for \"clamp()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.clip.html", "category": "pytorch docs"} {"text": "torch.Tensor.square\nTensor.square() -> Tensor\nSee \"torch.square()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.square.html", "category": "pytorch docs"} {"text": "torch.hann_window\ntorch.hann_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nHann window function.\n w[n] = \\frac{1}{2}\\ \\left[1 - \\cos \\left( \\frac{2 \\pi n}{N - 1}\n \\right)\\right] = \\sin^2 \\left( \\frac{\\pi n}{N - 1}\n \\right),\n\nwhere N is the full window size.\nThe input \"window_length\" is a positive integer controlling the\n returned window size. \"periodic\" flag determines whether the\n returned window trims off the last duplicate value from the\n symmetric window and is ready to be used as a periodic window with\n functions like \"torch.stft()\". Therefore, if \"periodic\" is true,\n the N in above formula is in fact \\text{window_length} + 1. Also,\n we always have \"torch.hann_window(L, periodic=True)\" equal to\n \"torch.hann_window(L + 1, periodic=False)[:-1])\".\nNote:\n If \"window_length\" =1, the returned window contains a single\n value 1.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.hann_window.html", "category": "pytorch docs"} {"text": "value 1.\nParameters:\n * window_length (int) -- the size of returned window\n * **periodic** (*bool**, **optional*) -- If True, returns a\n window to be used as periodic function. If False, return a\n symmetric window.\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\"). Only floating point\n types are supported.\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned window tensor. Only \"torch.strided\" (dense layout) is\n supported.\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). 
\"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n", "source": "https://pytorch.org/docs/stable/generated/torch.hann_window.html", "category": "pytorch docs"} {"text": "tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\nReturns:\n A 1-D tensor of size (\\text{window_length},) containing the\n window\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.hann_window.html", "category": "pytorch docs"} {"text": "fuse_fx\nclass torch.quantization.quantize_fx.fuse_fx(model, fuse_custom_config=None, backend_config=None)\nFuse modules like conv+bn, conv+bn+relu etc, model must be in eval\n mode. Fusion rules are defined in\n torch.quantization.fx.fusion_pattern.py\nParameters:\n * model (***) -- a torch.nn.Module model\n * **fuse_custom_config** (***) -- custom configurations for\n fuse_fx. See \"FuseCustomConfig\" for more details\n\nReturn type:\n GraphModule\nExample:\n from torch.ao.quantization import fuse_fx\n m = Model().eval()\n m = fuse_fx(m)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_fx.fuse_fx.html", "category": "pytorch docs"} {"text": "torch.linalg.solve\ntorch.linalg.solve(A, B, *, left=True, out=None) -> Tensor\nComputes the solution of a square system of linear equations with a\n unique solution.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, this function\n computes the solution X \\in \\mathbb{K}^{n \\times k} of the linear\n system associated to A \\in \\mathbb{K}^{n \\times n}, B \\in\n \\mathbb{K}^{n \\times k}, which is defined as\n AX = B\n\nIf \"left\"= False, this function returns the matrix X \\in\n \\mathbb{K}^{n \\times k} that solves the system\n XA = B\\mathrlap{\\qquad A \\in \\mathbb{K}^{k \\times k}, B \\in\n \\mathbb{K}^{n \\times k}.}\n\nThis system of linear equations has one solution if and only if A\n is invertible. This function assumes that A is invertible.\nSupports inputs of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if the inputs are batches of\n matrices then the output has the same batch dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve.html", "category": "pytorch docs"} {"text": "Letting *** be zero or more batch dimensions,\n\n\nIf \"A\" has shape (, n, n) and \"B\" has shape (, n) (a batch\n of vectors) or shape (, n, k) (a batch of matrices or\n \"multiple right-hand sides\"), this function returns X of shape\n (, n) or (, n, k)* respectively.\n\n\nOtherwise, if \"A\" has shape (, n, n) and \"B\" has shape (n,)\n or (n, k), \"B\" is broadcasted to have shape (, n) or (, n,\n k)* respectively. 
This function then returns the solution of the\n resulting batch of systems of linear equations.\n\n\nNote:\n This function computes *X = *\"A\"*.inverse() @ *\"B\" in a faster\n and more numerically stable way than performing the computations\n separately.\n\nNote:\n It is possible to compute the solution of the system XA = B by\n passing the inputs \"A\" and \"B\" transposed and transposing the\n output returned by this function.\n\nNote:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve.html", "category": "pytorch docs"} {"text": "device with the CPU.\nSee also:\n \"torch.linalg.solve_triangular()\" computes the solution of a\n triangular system of linear equations with a unique solution.\n\nParameters:\n * A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions.\n * **B** (*Tensor*) -- right-hand side tensor of shape *(*, n)*\n or *(*, n, k)* or *(n,)* or *(n, k)* according to the rules\n described above\n\nKeyword Arguments:\n * left (bool, optional) -- whether to solve the system\n AX=B or XA = B. Default: True.\n * **out** (*Tensor**, **optional*) -- output tensor. Ignored if\n *None*. Default: *None*.\n\nRaises:\n RuntimeError -- if the \"A\" matrix is not invertible or any\n matrix in a batched \"A\" is not invertible.\nExamples:\n >>> A = torch.randn(3, 3)\n >>> b = torch.randn(3)\n >>> x = torch.linalg.solve(A, b)\n >>> torch.allclose(A @ x, b)\n True\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.allclose(A @ x, b)\n True\n >>> A = torch.randn(2, 3, 3)\n >>> B = torch.randn(2, 3, 4)\n >>> X = torch.linalg.solve(A, B)\n >>> X.shape\n torch.Size([2, 3, 4])\n >>> torch.allclose(A @ X, B)\n True\n\n\n\n >>> A = torch.randn(2, 3, 3)\n >>> b = torch.randn(3, 1)\n >>> x = torch.linalg.solve(A, b) # b is broadcasted to size (2, 3, 1)\n >>> x.shape\n torch.Size([2, 3, 1])\n >>> torch.allclose(A @ x, b)\n True\n >>> b = torch.randn(3)\n >>> x = torch.linalg.solve(A, b) # b is broadcasted to size (2, 3)\n >>> x.shape\n torch.Size([2, 3])\n >>> Ax = A @ x.unsqueeze(-1)\n >>> torch.allclose(Ax, b.unsqueeze(-1).expand_as(Ax))\n True\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve.html", "category": "pytorch docs"} {"text": "torch.polar\ntorch.polar(abs, angle, *, out=None) -> Tensor\nConstructs a complex tensor whose elements are Cartesian\n coordinates corresponding to the polar coordinates with absolute\n value \"abs\" and angle \"angle\".\n \\text{out} = \\text{abs} \\cdot \\cos(\\text{angle}) + \\text{abs}\n \\cdot \\sin(\\text{angle}) \\cdot j\n\nNote:\n *torch.polar* is similar to std::polar and does not compute the\n polar decomposition of a complex tensor like Python's\n *cmath.polar* and SciPy's *linalg.polar* do. The behavior of this\n function is undefined if *abs* is negative or NaN, or if *angle*\n is infinite.\n\nParameters:\n * abs (Tensor) -- The absolute value the complex tensor.\n Must be float or double.\n * **angle** (*Tensor*) -- The angle of the complex tensor. Must\n be same dtype as \"abs\".\n\nKeyword Arguments:\n out (Tensor) -- If the inputs are \"torch.float32\", must be\n \"torch.complex64\". 
If the inputs are \"torch.float64\", must be", "source": "https://pytorch.org/docs/stable/generated/torch.polar.html", "category": "pytorch docs"} {"text": "\"torch.complex128\".\nExample:\n >>> import numpy as np\n >>> abs = torch.tensor([1, 2], dtype=torch.float64)\n >>> angle = torch.tensor([np.pi / 2, 5 * np.pi / 4], dtype=torch.float64)\n >>> z = torch.polar(abs, angle)\n >>> z\n tensor([(0.0000+1.0000j), (-1.4142-1.4142j)], dtype=torch.complex128)\n", "source": "https://pytorch.org/docs/stable/generated/torch.polar.html", "category": "pytorch docs"} {"text": "torch.foreach_sqrt\ntorch.foreach_sqrt(self: List[Tensor]) -> None\nApply \"torch.sqrt()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sqrt_.html", "category": "pytorch docs"} {"text": "torch.numel\ntorch.numel(input) -> int\nReturns the total number of elements in the \"input\" tensor.\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> a = torch.randn(1, 2, 3, 4, 5)\n >>> torch.numel(a)\n 120\n >>> a = torch.zeros(4,4)\n >>> torch.numel(a)\n 16\n", "source": "https://pytorch.org/docs/stable/generated/torch.numel.html", "category": "pytorch docs"} {"text": "torch.Tensor.igammac\nTensor.igammac(other) -> Tensor\nSee \"torch.igammac()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.igammac.html", "category": "pytorch docs"} {"text": "torch.lt\ntorch.lt(input, other, *, out=None) -> Tensor\nComputes \\text{input} < \\text{other} element-wise.\nThe second argument can be a number or a tensor whose shape is\n broadcastable with the first argument.\nParameters:\n * input (Tensor) -- the tensor to compare\n * **other** (*Tensor** or **float*) -- the tensor or value to\n compare\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nReturns:\n A boolean tensor that is True where \"input\" is less than \"other\"\n and False elsewhere\nExample:\n >>> torch.lt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\n tensor([[False, False], [True, False]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.lt.html", "category": "pytorch docs"} {"text": "torch.triu_indices\ntorch.triu_indices(row, col, offset=0, *, dtype=torch.long, device='cpu', layout=torch.strided) -> Tensor\nReturns the indices of the upper triangular part of a \"row\" by\n \"col\" matrix in a 2-by-N Tensor, where the first row contains row\n coordinates of all indices and the second row contains column\n coordinates. Indices are ordered based on rows and then columns.\nThe upper triangular part of the matrix is defined as the elements\n on and above the diagonal.\nThe argument \"offset\" controls which diagonal to consider. If\n \"offset\" = 0, all elements on and above the main diagonal are\n retained. A positive value excludes just as many diagonals above\n the main diagonal, and similarly a negative value includes just as\n many diagonals below the main diagonal. 
The main diagonal are the\n set of indices \\lbrace (i, i) \\rbrace for i \\in [0, \\min{d_{1},\n d_{2}} - 1] where d_{1}, d_{2} are the dimensions of the matrix.\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.triu_indices.html", "category": "pytorch docs"} {"text": "Note:\n When running on CUDA, \"row * col\" must be less than 2^{59} to\n prevent overflow during calculation.\n\nParameters:\n * row (\"int\") -- number of rows in the 2-D matrix.\n * **col** (\"int\") -- number of columns in the 2-D matrix.\n\n * **offset** (\"int\") -- diagonal offset from the main diagonal.\n Default: if not provided, 0.\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", \"torch.long\".\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **layout** (\"torch.layout\", optional) -- currently only\n support \"torch.strided\".\n\nExample:\n >>> a = torch.triu_indices(3, 3)\n", "source": "https://pytorch.org/docs/stable/generated/torch.triu_indices.html", "category": "pytorch docs"} {"text": "\n\n\na = torch.triu_indices(3, 3)\n >>> a\n tensor([[0, 0, 0, 1, 1, 2],\n [0, 1, 2, 1, 2, 2]])\n\n\n\n >>> a = torch.triu_indices(4, 3, -1)\n >>> a\n tensor([[0, 0, 0, 1, 1, 1, 2, 2, 3],\n [0, 1, 2, 0, 1, 2, 1, 2, 2]])\n\n >>> a = torch.triu_indices(4, 3, 1)\n >>> a\n tensor([[0, 0, 1],\n [1, 2, 2]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.triu_indices.html", "category": "pytorch docs"} {"text": "LazyInstanceNorm1d\nclass torch.nn.LazyInstanceNorm1d(eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\nA \"torch.nn.InstanceNorm1d\" module with lazy initialization of the\n \"num_features\" argument of the \"InstanceNorm1d\" that is inferred\n from the \"input.size(1)\". The attributes that will be lazily\n initialized are weight, bias, running_mean and running_var.\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * num_features -- C from an expected input of size (N, C, L)\n or (C, L)\n * **eps** (*float*) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters,\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm1d.html", "category": "pytorch docs"} {"text": "initialized the same way as done for batch normalization.\n Default: \"False\".\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics and always uses batch statistics in both\n training and eval modes. 
Default: \"False\"\n\nShape:\n * Input: (N, C, L) or (C, L)\n * Output: (N, C, L) or (C, L) (same shape as input)\n\ncls_to_become\n alias of \"InstanceNorm1d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyInstanceNorm1d.html", "category": "pytorch docs"} {"text": "enable_observer\nclass torch.quantization.fake_quantize.enable_observer(mod)\nEnable observation for this module, if applicable. Example usage:\n # model is any PyTorch model\n model.apply(torch.ao.quantization.enable_observer)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.enable_observer.html", "category": "pytorch docs"} {"text": "torch.fake_quantize_per_channel_affine\ntorch.fake_quantize_per_channel_affine(input, scale, zero_point, quant_min, quant_max) -> Tensor\nReturns a new tensor with the data in \"input\" fake quantized per\n channel using \"scale\", \"zero_point\", \"quant_min\" and \"quant_max\",\n across the channel specified by \"axis\".\n \\text{output} = min( \\text{quant\\_max}, max(\n \\text{quant\\_min}, \\text{std::nearby\\_int}(\\text{input}\n / \\text{scale}) + \\text{zero\\_point} ) )\n\nParameters:\n * input (Tensor) -- the input value(s), in \"torch.float32\"\n * **scale** (*Tensor*) -- quantization scale, per channel in\n \"torch.float32\"\n\n * **zero_point** (*Tensor*) -- quantization zero_point, per\n channel in \"torch.int32\" or \"torch.half\" or \"torch.float32\"\n\n * **axis** (*int32*) -- channel axis\n\n * **quant_min** (*int64*) -- lower bound of the quantized domain\n", "source": "https://pytorch.org/docs/stable/generated/torch.fake_quantize_per_channel_affine.html", "category": "pytorch docs"} {"text": "\nquant_max (int64) -- upper bound of the quantized domain\n\nReturns:\n A newly fake_quantized per channel \"torch.float32\" tensor\nReturn type:\n Tensor\nExample:\n >>> x = torch.randn(2, 2, 2)\n >>> x\n tensor([[[-0.2525, -0.0466],\n [ 0.3491, -0.2168]],\n\n [[-0.5906, 1.6258],\n [ 0.6444, -0.0542]]])\n >>> scales = (torch.randn(2) + 1) * 0.05\n >>> scales\n tensor([0.0475, 0.0486])\n >>> zero_points = torch.zeros(2).to(torch.int32)\n >>> zero_points\n tensor([0, 0])\n >>> torch.fake_quantize_per_channel_affine(x, scales, zero_points, 1, 0, 255)\n tensor([[[0.0000, 0.0000],\n [0.3405, 0.0000]],\n\n [[0.0000, 1.6134],\n [0.6323, 0.0000]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.fake_quantize_per_channel_affine.html", "category": "pytorch docs"} {"text": "torch.Tensor.subtract\nTensor.subtract(other, *, alpha=1) -> Tensor\nSee \"torch.subtract()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.subtract.html", "category": "pytorch docs"} {"text": "torch.nn.functional.instance_norm\ntorch.nn.functional.instance_norm(input, running_mean=None, running_var=None, weight=None, bias=None, use_input_stats=True, momentum=0.1, eps=1e-05)\nApplies Instance Normalization for each channel in each data sample\n in a batch.\nSee \"InstanceNorm1d\", \"InstanceNorm2d\", \"InstanceNorm3d\" for\n details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.instance_norm.html", "category": "pytorch docs"} {"text": "torch.Tensor.random_\nTensor.random_(from=0, to=None, *, generator=None) -> Tensor\nFills \"self\" tensor with numbers sampled from the discrete uniform\n distribution over \"[from, to - 1]\". If not specified, the values\n are usually only bounded by \"self\" tensor's data type. 
However, for\n floating point types, if unspecified, range will be \"[0,\n 2^mantissa]\" to ensure that every value is representable. For\n example, torch.tensor(1, dtype=torch.double).random_() will be\n uniform in \"[0, 2^53]\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.random_.html", "category": "pytorch docs"} {"text": "per_channel_dynamic_qconfig\ntorch.quantization.qconfig.per_channel_dynamic_qconfig\nalias of QConfig(activation=functools.partial(,\n dtype=torch.quint8, quant_min=0, quant_max=255, is_dynamic=True){},\n weight=functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_channel_symmetric){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.per_channel_dynamic_qconfig.html", "category": "pytorch docs"} {"text": "torch.fft.ihfftn\ntorch.fft.ihfftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor\nComputes the N-dimensional inverse discrete Fourier transform of\n real \"input\".\n\"input\" must be a real-valued signal, interpreted in the Fourier\n domain. The n-dimensional IFFT of a real signal is Hermitian-\n symmetric, \"X[i, j, ...] = conj(X[-i, -j, ...])\". \"ihfftn()\"\n represents this in the one-sided form where only the positive\n frequencies below the Nyquist frequency are included in the last\n signal dimension. To compute the full output, use \"ifftn()\".\nNote:\n Supports torch.half on CUDA with GPU Architecture SM53 or\n greater. However it only supports powers of 2 signal length in\n every transformed dimensions.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfftn.html", "category": "pytorch docs"} {"text": "either be zero-padded or trimmed to the length \"s[i]\" before\n computing the Hermitian IFFT. If a length \"-1\" is specified,\n no padding is done in that dimension. Default: \"s =\n [input.size(d) for d in dim]\"\n * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be\n transformed. Default: all dimensions, or the last \"len(s)\"\n dimensions if \"s\" is given.\n\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the backward transform (\"ihfftn()\"),\n these correspond to:\n\n * \"\"forward\"\" - no normalization\n\n * \"\"backward\"\" - normalize by \"1/n\"\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the Hermitian\n IFFT orthonormal)\n\n Where \"n = prod(s)\" is the logical IFFT size. Calling the\n forward transform (\"hfftn()\") with the same normalization mode\n will apply an overall normalization of \"1/n\" between the two\n transforms. 
This is required to make \"ihfftn()\" the exact\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfftn.html", "category": "pytorch docs"} {"text": "inverse.\n Default is \"\"backward\"\" (normalize by \"1/n\").\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nT = torch.rand(10, 10)\nihfftn = torch.fft.ihfftn(T)\nihfftn.size()\n torch.Size([10, 6])\n\n\n\nCompared against the full output from \"ifftn()\", we have all\n elements up to the Nyquist frequency.\n\n\n\nifftn = torch.fft.ifftn(t)\ntorch.allclose(ifftn[..., :6], ihfftn)\n True\n\n\n\nThe discrete Fourier transform is separable, so \"ihfftn()\" here is\n equivalent to a combination of \"ihfft()\" and \"ifft()\":\n\n\n\ntwo_iffts = torch.fft.ifft(torch.fft.ihfft(t, dim=1), dim=0)\ntorch.allclose(ihfftn, two_iffts)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfftn.html", "category": "pytorch docs"} {"text": "torch.isnan\ntorch.isnan(input) -> Tensor\nReturns a new tensor with boolean elements representing if each\n element of \"input\" is NaN or not. Complex values are considered NaN\n when either their real and/or imaginary part is NaN.\nParameters:\n input (Tensor) -- the input tensor.\nReturns:\n A boolean tensor that is True where \"input\" is NaN and False\n elsewhere\nExample:\n >>> torch.isnan(torch.tensor([1, float('nan'), 2]))\n tensor([False, True, False])\n", "source": "https://pytorch.org/docs/stable/generated/torch.isnan.html", "category": "pytorch docs"} {"text": "torch.linalg.eigvals\ntorch.linalg.eigvals(A, *, out=None) -> Tensor\nComputes the eigenvalues of a square matrix.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the eigenvalues\n of a square matrix A \\in \\mathbb{K}^{n \\times n} are defined as the\n roots (counted with multiplicity) of the polynomial p of degree\n n given by\n p(\\lambda) = \\operatorname{det}(A - \\lambda\n \\mathrm{I}_n)\\mathrlap{\\qquad \\lambda \\in \\mathbb{C}}\n\nwhere \\mathrm{I}_n is the n-dimensional identity matrix.\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nNote:\n The eigenvalues of a real matrix may be complex, as the roots of\n a real polynomial may be complex.The eigenvalues of a matrix are\n always well-defined, even when the matrix is not diagonalizable.\n\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigvals.html", "category": "pytorch docs"} {"text": "Note:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n\nSee also:\n \"torch.linalg.eig()\" computes the full eigenvalue decomposition.\n\nParameters:\n A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions.\nKeyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. 
Default: None.\nReturns:\n A complex-valued tensor containing the eigenvalues even when \"A\"\n is real.\nExamples:\n >>> A = torch.randn(2, 2, dtype=torch.complex128)\n >>> L = torch.linalg.eigvals(A)\n >>> L\n tensor([ 1.1226+0.5738j, -0.7537-0.1286j], dtype=torch.complex128)\n\n >>> torch.dist(L, torch.linalg.eig(A).eigenvalues)\n tensor(2.4576e-07)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigvals.html", "category": "pytorch docs"} {"text": "disable_fake_quant\nclass torch.quantization.fake_quantize.disable_fake_quant(mod)\nDisable fake quantization for this module, if applicable. Example\n usage:\n # model is any PyTorch model\n model.apply(torch.ao.quantization.disable_fake_quant)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.disable_fake_quant.html", "category": "pytorch docs"} {"text": "torch.Tensor.clip_\nTensor.clip_(min=None, max=None) -> Tensor\nAlias for \"clamp_()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.clip_.html", "category": "pytorch docs"} {"text": "torch.amin\ntorch.amin(input, dim, keepdim=False, *, out=None) -> Tensor\nReturns the minimum value of each slice of the \"input\" tensor in\n the given dimension(s) \"dim\".\nNote:\n The difference between \"max\"/\"min\" and \"amax\"/\"amin\" is:\n * \"amax\"/\"amin\" supports reducing on multiple dimensions,\n\n * \"amax\"/\"amin\" does not return indices,\n\n * \"amax\"/\"amin\" evenly distributes gradient between equal\n values, while \"max(dim)\"/\"min(dim)\" propagates gradient only\n to a single index in the source tensor.\n\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints*) -- the dimension or\n dimensions to reduce.\n", "source": "https://pytorch.org/docs/stable/generated/torch.amin.html", "category": "pytorch docs"} {"text": "dimensions to reduce.\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 0.6451, -0.4866, 0.2987, -1.3312],\n [-0.5744, 1.2980, 1.8397, -0.2713],\n [ 0.9128, 0.9214, -1.7268, -0.2995],\n [ 0.9023, 0.4853, 0.9075, -1.6165]])\n >>> torch.amin(a, 1)\n tensor([-1.3312, -0.5744, -1.7268, -1.6165])\n", "source": "https://pytorch.org/docs/stable/generated/torch.amin.html", "category": "pytorch docs"} {"text": "torch.ones_like\ntorch.ones_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\nReturns a tensor filled with the scalar value 1, with the same\n size as \"input\". \"torch.ones_like(input)\" is equivalent to\n \"torch.ones(input.size(), dtype=input.dtype, layout=input.layout,\n device=input.device)\".\nWarning:\n As of 0.4, this function does not support an \"out\" keyword. As an\n alternative, the old \"torch.ones_like(input, out=output)\" is\n equivalent to \"torch.ones(input.size(), out=output)\".\n\nParameters:\n input (Tensor) -- the size of \"input\" will determine size\n of the output tensor.\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. 
Default: if \"None\", defaults to the dtype\n of \"input\".\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n", "source": "https://pytorch.org/docs/stable/generated/torch.ones_like.html", "category": "pytorch docs"} {"text": "returned tensor. Default: if \"None\", defaults to the layout of\n \"input\".\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **memory_format** (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n\nExample:\n >>> input = torch.empty(2, 3)\n >>> torch.ones_like(input)\n tensor([[ 1., 1., 1.],\n [ 1., 1., 1.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.ones_like.html", "category": "pytorch docs"} {"text": "torch.sub\ntorch.sub(input, other, *, alpha=1, out=None) -> Tensor\nSubtracts \"other\", scaled by \"alpha\", from \"input\".\n \\text{{out}}_i = \\text{{input}}_i - \\text{{alpha}} \\times\n \\text{{other}}_i\n\nSupports broadcasting to a common shape, type promotion, and\n integer, float, and complex inputs.\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor** or **Number*) -- the tensor or number to\n subtract from \"input\".\n\nKeyword Arguments:\n * alpha (Number) -- the multiplier for \"other\".\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> a = torch.tensor((1, 2))\n >>> b = torch.tensor((0, 1))\n >>> torch.sub(a, b, alpha=2)\n tensor([1, 0])\n", "source": "https://pytorch.org/docs/stable/generated/torch.sub.html", "category": "pytorch docs"} {"text": "QConfigMapping\nclass torch.ao.quantization.qconfig_mapping.QConfigMapping\nMapping from model ops to \"torch.ao.quantization.QConfig\" s.\nThe user can specify QConfigs using the following methods (in\n increasing match priority):\n \"set_global\" : sets the global (default) QConfig\n\n \"set_object_type\" : sets the QConfig for a given module type,\n function, or method name\n\n \"set_module_name_regex\" : sets the QConfig for modules matching\n the given regex string\n\n \"set_module_name\" : sets the QConfig for modules matching the\n given module name\n\n \"set_module_name_object_type_order\" : sets the QConfig for\n modules matching a combination of the given module name, object\n type, and the index at which the module appears\n\nExample usage:\n qconfig_mapping = QConfigMapping()\n .set_global(global_qconfig)\n .set_object_type(torch.nn.Linear, qconfig1)\n .set_object_type(torch.nn.ReLU, qconfig1)\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.QConfigMapping.html", "category": "pytorch docs"} {"text": ".set_module_name_regex(\"foo.bar.conv[0-9]+\", qconfig1)\n .set_module_name_regex(\"foo.*\", qconfig2)\n .set_module_name(\"module1\", qconfig1)\n .set_module_name(\"module2\", qconfig2)\n .set_module_name_object_type_order(\"foo.bar\", torch.nn.functional.linear, 0, qconfig3)\nclassmethod from_dict(qconfig_dict)\n Create a \"QConfigMapping\" from a dictionary with the following\n keys (all optional):\n\n \"\" (for global QConfig)\n\n \"object_type\"\n\n \"module_name_regex\"\n\n \"module_name\"\n\n \"module_name_object_type_order\"\n\n The values of this dictionary are expected to be lists of\n tuples.\n\n Return type:\n 
*QConfigMapping*\n\nset_global(global_qconfig)\n Set the global (default) QConfig.\n\n Return type:\n *QConfigMapping*\n\nset_module_name(module_name, qconfig)\n Set the QConfig for modules matching the given module name. If\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.QConfigMapping.html", "category": "pytorch docs"} {"text": "the QConfig for an existing module name was already set, the new\n QConfig will override the old one.\n Return type:\n *QConfigMapping*\n\nset_module_name_object_type_order(module_name, object_type, index, qconfig)\n Set the QConfig for modules matching a combination of the given\n module name, object type, and the index at which the module\n appears.\n\n If the QConfig for an existing (module name, object type, index)\n was already set, the new QConfig will override the old one.\n\n Return type:\n *QConfigMapping*\n\nset_module_name_regex(module_name_regex, qconfig)\n Set the QConfig for modules matching the given regex string.\n\n Regexes will be matched in the order in which they are\n registered through this method. Thus, the caller should register\n more specific patterns first, e.g.:\n\n qconfig_mapping = QConfigMapping()\n .set_module_name_regex(\"foo.*bar.*conv[0-9]+\", qconfig1)\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.QConfigMapping.html", "category": "pytorch docs"} {"text": ".set_module_name_regex(\"foo.bar.\", qconfig2)\n .set_module_name_regex(\"foo.*\", qconfig3)\n In this example, \"foo.bar.conv0\" would match qconfig1,\n \"foo.bar.linear\" would match qconfig2, and \"foo.baz.relu\" would\n match qconfig3.\n\n If the QConfig for an existing module name regex was already\n set, the new QConfig will override the old one while preserving\n the order in which the regexes were originally registered.\n\n Return type:\n *QConfigMapping*\n\nset_object_type(object_type, qconfig)\n Set the QConfig for a given module type, function, or method\n name. 
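As a concrete end-to-end sketch (the tiny Sequential model, the 'fbgemm' backend choice, and the example input shape below are illustrative assumptions, not taken from these docs), a mapping built with these setters is what gets handed to the FX graph mode quantization entry points:
    import torch
    from torch.ao.quantization import QConfigMapping, get_default_qconfig
    from torch.ao.quantization.quantize_fx import prepare_fx, convert_fx

    # stand-in float model; any eval-mode nn.Module works here
    float_model = torch.nn.Sequential(torch.nn.Linear(5, 5), torch.nn.ReLU()).eval()
    qconfig = get_default_qconfig('fbgemm')

    # global default plus an explicit per-type entry for Linear
    qconfig_mapping = (QConfigMapping()
                       .set_global(qconfig)
                       .set_object_type(torch.nn.Linear, qconfig))

    example_inputs = (torch.randn(1, 5),)
    prepared = prepare_fx(float_model, qconfig_mapping, example_inputs)  # insert observers
    prepared(*example_inputs)                                            # calibrate
    quantized = convert_fx(prepared)                                     # quantized model
Here \"set_object_type\" pins \"torch.nn.Linear\" to an explicit QConfig while \"set_global\" covers everything else.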
If the QConfig for an existing object type was already\n set, the new QConfig will override the old one.\n\n Return type:\n *QConfigMapping*\n\nto_dict()\n Convert this \"QConfigMapping\" to a dictionary with the following\n keys:\n\n \"\" (for global QConfig)\n\n \"object_type\"\n\n \"module_name_regex\"\n\n \"module_name\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.QConfigMapping.html", "category": "pytorch docs"} {"text": "\"module_name\"\n \"module_name_object_type_order\"\n\n The values of this dictionary are lists of tuples.\n\n Return type:\n *Dict*[str, *Any*]\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.qconfig_mapping.QConfigMapping.html", "category": "pytorch docs"} {"text": "torch.var\ntorch.var(input, dim=None, *, correction=1, keepdim=False, out=None) -> Tensor\nCalculates the variance over the dimensions specified by \"dim\".\n \"dim\" can be a single dimension, list of dimensions, or \"None\" to\n reduce over all dimensions.\nThe variance (\\sigma^2) is calculated as\n \\sigma^2 = \\frac{1}{N - \\delta N}\\sum_{i=0}^{N-1}(x_i-\\bar{x})^2\n\nwhere x is the sample set of elements, \\bar{x} is the sample mean,\n N is the number of samples and \\delta N is the \"correction\".\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints**, **optional*) -- the\n dimension or dimensions to reduce. If \"None\", all dimensions\n are reduced.\n", "source": "https://pytorch.org/docs/stable/generated/torch.var.html", "category": "pytorch docs"} {"text": "are reduced.\nKeyword Arguments:\n * correction (int) --\n difference between the sample size and sample degrees of\n freedom. Defaults to Bessel's correction, \"correction=1\".\n\n Changed in version 2.0: Previously this argument was called\n \"unbiased\" and was a boolean with \"True\" corresponding to\n \"correction=1\" and \"False\" being \"correction=0\".\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\n-[ Example ]-\n\n\n\na = torch.tensor(\n ... [[ 0.2035, 1.2959, 1.8101, -0.4644],\n ... [ 1.5027, -0.3270, 0.5905, 0.6538],\n ... [-1.5745, 1.3330, -0.5596, -0.6548],\n ... 
[ 0.1264, -0.5080, 1.6420, 0.1992]])\ntorch.var(a, dim=1, keepdim=True)\n tensor([[1.0631],\n [0.5590],\n [1.4893],\n [0.8258]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.var.html", "category": "pytorch docs"} {"text": "torch.multinomial\ntorch.multinomial(input, num_samples, replacement=False, *, generator=None, out=None) -> LongTensor\nReturns a tensor where each row contains \"num_samples\" indices\n sampled from the multinomial probability distribution located in\n the corresponding row of tensor \"input\".\nNote:\n The rows of \"input\" do not need to sum to one (in which case we\n use the values as weights), but must be non-negative, finite and\n have a non-zero sum.\n\nIndices are ordered from left to right according to when each was\n sampled (first samples are placed in first column).\nIf \"input\" is a vector, \"out\" is a vector of size \"num_samples\".\nIf \"input\" is a matrix with m rows, \"out\" is an matrix of shape\n (m \\times \\text{num_samples}).\nIf replacement is \"True\", samples are drawn with replacement.\nIf not, they are drawn without replacement, which means that when a\n sample index is drawn for a row, it cannot be drawn again for that\n row.\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.multinomial.html", "category": "pytorch docs"} {"text": "row.\nNote:\n When drawn without replacement, \"num_samples\" must be lower than\n number of non-zero elements in \"input\" (or the min number of non-\n zero elements in each row of \"input\" if it is a matrix).\n\nParameters:\n * input (Tensor) -- the input tensor containing\n probabilities\n * **num_samples** (*int*) -- number of samples to draw\n\n * **replacement** (*bool**, **optional*) -- whether to draw with\n replacement or not\n\nKeyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> weights = torch.tensor([0, 10, 3, 0], dtype=torch.float) # create a tensor of weights\n >>> torch.multinomial(weights, 2)\n tensor([1, 2])\n >>> torch.multinomial(weights, 4) # ERROR!\n RuntimeError: invalid argument 2: invalid multinomial distribution (with replacement=False,\n", "source": "https://pytorch.org/docs/stable/generated/torch.multinomial.html", "category": "pytorch docs"} {"text": "not enough non-negative category to sample) at ../aten/src/TH/generic/THTensorRandom.cpp:320\n >>> torch.multinomial(weights, 4, replacement=True)\n tensor([ 2, 1, 1, 1])", "source": "https://pytorch.org/docs/stable/generated/torch.multinomial.html", "category": "pytorch docs"} {"text": "torch.histogramdd\ntorch.histogramdd(input, bins, *, range=None, weight=None, density=False, out=None) -> (Tensor, Tensor[])\nComputes a multi-dimensional histogram of the values in a tensor.\nInterprets the elements of an input tensor whose innermost\n dimension has size N as a collection of N-dimensional points. Maps\n each of the points into a set of N-dimensional bins and returns the\n number of points (or total weight) in each bin.\n\"input\" must be a tensor with at least 2 dimensions. If input has\n shape (M, N), each of its M rows defines a point in N-dimensional\n space. If input has three or more dimensions, all but the last\n dimension are flattened.\nEach dimension is independently associated with its own strictly\n increasing sequence of bin edges. Bin edges may be specified\n explicitly by passing a sequence of 1D tensors. 
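For example, with hand-picked points and explicit edges (a small sketch; the values are arbitrary):
    >>> pts = torch.tensor([[0.2, 0.3], [0.4, 1.6], [1.7, 0.2], [1.9, 1.1]])
    >>> edges = [torch.tensor([0., 1., 2.]), torch.tensor([0., 1., 2.])]
    >>> hist, bin_edges = torch.histogramdd(pts, bins=edges)
    >>> hist
    tensor([[1., 1.],
            [1., 1.]])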
Alternatively, bin\n edges may be constructed automatically by passing a sequence of", "source": "https://pytorch.org/docs/stable/generated/torch.histogramdd.html", "category": "pytorch docs"} {"text": "integers specifying the number of equal-width bins in each\n dimension.\nFor each N-dimensional point in input:\n * Each of its coordinates is binned independently among the bin\n edges\n corresponding to its dimension\n * Binning results are combined to identify the N-dimensional bin\n (if any)\n into which the point falls\n\n * If the point falls into a bin, the bin's count (or total\n weight) is incremented\n\n * Points which do not fall into any bin do not contribute to the\n output\n\n\"bins\" can be a sequence of N 1D tensors, a sequence of N ints, or\n a single int.\nIf \"bins\" is a sequence of N 1D tensors, it explicitly specifies\n the N sequences of bin edges. Each 1D tensor should contain a\n strictly increasing sequence with at least one element. A sequence\n of K bin edges defines K-1 bins, explicitly specifying the left and\n right edges of all bins. Every bin is exclusive of its left edge.", "source": "https://pytorch.org/docs/stable/generated/torch.histogramdd.html", "category": "pytorch docs"} {"text": "Only the rightmost bin is inclusive of its right edge.\nIf \"bins\" is a sequence of N ints, it specifies the number of\n equal-width bins in each dimension. By default, the leftmost and\n rightmost bin edges in each dimension are determined by the minimum\n and maximum elements of the input tensor in the corresponding\n dimension. The \"range\" argument can be provided to manually specify\n the leftmost and rightmost bin edges in each dimension.\nIf \"bins\" is an int, it specifies the number of equal-width bins\n for all dimensions.\nNote:\n See also \"torch.histogram()\", which specifically computes 1D\n histograms. While \"torch.histogramdd()\" infers the dimensionality\n of its bins and binned values from the shape of \"input\",\n \"torch.histogram()\" accepts and flattens \"input\" of any shape.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **bins** -- Tensor[], int[], or int. If Tensor[], defines the\n", "source": "https://pytorch.org/docs/stable/generated/torch.histogramdd.html", "category": "pytorch docs"} {"text": "sequences of bin edges. If int[], defines the number of equal-\n width bins in each dimension. If int, defines the number of\n equal-width bins for all dimensions.\nKeyword Arguments:\n * range (sequence of python:float) -- Defines the leftmost\n and rightmost bin edges in each dimension.\n * **weight** (*Tensor*) -- By default, each value in the input\n has weight 1. If a weight tensor is passed, each N-dimensional\n coordinate in input contributes its associated weight towards\n its bin's result. The weight tensor should have the same shape\n as the \"input\" tensor excluding its innermost dimension N.\n\n * **density** (*bool*) -- If False (default), the result will\n contain the count (or total weight) in each bin. 
If True, each\n count (weight) is divided by the total count (total weight),\n then divided by the volume of its associated bin.\n\nReturns:\n N-dimensional Tensor containing the values of the histogram.", "source": "https://pytorch.org/docs/stable/generated/torch.histogramdd.html", "category": "pytorch docs"} {"text": "bin_edges(Tensor[]): sequence of N 1D Tensors containing the bin\n edges.\nReturn type:\n hist (Tensor)\nExample::\n >>> torch.histogramdd(torch.tensor([[0., 1.], [1., 0.], [2., 0.], [2., 2.]]), bins=[3, 3],\n ... weight=torch.tensor([1., 2., 4., 8.]))\n torch.return_types.histogramdd(\n hist=tensor([[0., 1., 0.],\n [2., 0., 0.],\n [4., 0., 8.]]),\n bin_edges=(tensor([0.0000, 0.6667, 1.3333, 2.0000]),\n tensor([0.0000, 0.6667, 1.3333, 2.0000])))\n >>> torch.histogramdd(torch.tensor([[0., 0.], [1., 1.], [2., 2.]]), bins=[2, 2],\n ... range=[0., 1., 0., 1.], density=True)\n torch.return_types.histogramdd(\n hist=tensor([[2., 0.],\n [0., 2.]]),\n bin_edges=(tensor([0.0000, 0.5000, 1.0000]),\n tensor([0.0000, 0.5000, 1.0000])))\n", "source": "https://pytorch.org/docs/stable/generated/torch.histogramdd.html", "category": "pytorch docs"} {"text": "torch.Tensor.logaddexp\nTensor.logaddexp(other) -> Tensor\nSee \"torch.logaddexp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logaddexp.html", "category": "pytorch docs"} {"text": "torch.row_stack\ntorch.row_stack(tensors, *, out=None) -> Tensor\nAlias of \"torch.vstack()\".", "source": "https://pytorch.org/docs/stable/generated/torch.row_stack.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_conj\nTensor.is_conj() -> bool\nReturns True if the conjugate bit of \"self\" is set to true.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_conj.html", "category": "pytorch docs"} {"text": "torch.empty\ntorch.empty(size, , out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False, memory_format=torch.contiguous_format) -> Tensor\nReturns a tensor filled with uninitialized data. The shape of the\n tensor is defined by the variable argument \"size\".\nParameters:\n size (int...) -- a sequence of integers defining the\n shape of the output tensor. Can be a variable number of\n arguments or a collection like a list or tuple.\nKeyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n", "source": "https://pytorch.org/docs/stable/generated/torch.empty.html", "category": "pytorch docs"} {"text": "returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **pin_memory** (*bool**, **optional*) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n\n * **memory_format** (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. 
Default:\n \"torch.contiguous_format\".\n\nExample:\n >>> torch.empty((2,3), dtype=torch.int64)\n tensor([[ 9.4064e+13, 2.8000e+01, 9.3493e+13],\n [ 7.5751e+18, 7.1428e+18, 7.5955e+18]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.empty.html", "category": "pytorch docs"} {"text": "torch.nn.functional.elu\ntorch.nn.functional.elu(input, alpha=1.0, inplace=False)\nApplies the Exponential Linear Unit (ELU) function element-wise.\nSee \"ELU\" for more details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.elu.html", "category": "pytorch docs"} {"text": "ConvTranspose2d\nclass torch.ao.nn.quantized.ConvTranspose2d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\nApplies a 2D transposed convolution operator over an input image\n composed of several input planes. For details on input arguments,\n parameters, and implementation see \"ConvTranspose2d\".\nFor special notes, please, see \"Conv2d\"\nVariables:\n * weight (Tensor) -- packed tensor derived from the\n learnable weight parameter.\n * **scale** (*Tensor*) -- scalar for the output scale\n\n * **zero_point** (*Tensor*) -- scalar for the output zero point\n\nSee \"ConvTranspose2d\" for other attributes.\nExamples:\n >>> # QNNPACK or FBGEMM as backend\n >>> torch.backends.quantized.engine = 'qnnpack'\n >>> # With square kernels and equal stride\n >>> import torch.nn.quantized as nnq\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose2d.html", "category": "pytorch docs"} {"text": "\n\n\nimport torch.nn.quantized as nnq\n >>> m = nnq.ConvTranspose2d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding\n >>> m = nnq.ConvTranspose2d(16, 33, (3, 5), stride=(2, 1), padding=(4, 2))\n >>> input = torch.randn(20, 16, 50, 100)\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> output = m(q_input)\n >>> # exact output size can be also specified as an argument\n >>> input = torch.randn(1, 16, 12, 12)\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0, dtype=torch.quint8)\n >>> downsample = nnq.Conv2d(16, 16, 3, stride=2, padding=1)\n >>> upsample = nnq.ConvTranspose2d(16, 16, 3, stride=2, padding=1)\n >>> h = downsample(q_input)\n >>> h.size()\n torch.Size([1, 16, 6, 6])\n >>> output = upsample(h, output_size=input.size())\n >>> output.size()\n torch.Size([1, 16, 12, 12])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ConvTranspose2d.html", "category": "pytorch docs"} {"text": "torch.vdot\ntorch.vdot(input, other, *, out=None) -> Tensor\nComputes the dot product of two 1D vectors along a dimension.\nIn symbols, this function computes\n \\sum_{i=1}^n \\overline{x_i}y_i.\n\nwhere \\overline{x_i} denotes the conjugate for complex vectors, and\n it is the identity for real vectors.\nNote:\n Unlike NumPy's vdot, torch.vdot intentionally only supports\n computing the dot product of two 1D tensors with the same number\n of elements.\n\nSee also:\n \"torch.linalg.vecdot()\" computes the dot product of two batches\n of vectors along a dimension.\n\nParameters:\n * input (Tensor) -- first tensor in the dot product, must\n be 1D. 
Its conjugate is used if it's complex.\n * **other** (*Tensor*) -- second tensor in the dot product, must\n be 1D.\n\nKeyword args:\nNote:\n out (Tensor, optional): the output tensor.\n\nExample:\n >>> torch.vdot(torch.tensor([2, 3]), torch.tensor([2, 1]))\n tensor(7)\n", "source": "https://pytorch.org/docs/stable/generated/torch.vdot.html", "category": "pytorch docs"} {"text": "tensor(7)\n >>> a = torch.tensor((1 +2j, 3 - 1j))\n >>> b = torch.tensor((2 +1j, 4 - 0j))\n >>> torch.vdot(a, b)\n tensor([16.+1.j])\n >>> torch.vdot(b, a)\n tensor([16.-1.j])", "source": "https://pytorch.org/docs/stable/generated/torch.vdot.html", "category": "pytorch docs"} {"text": "torch.nn.utils.vector_to_parameters\ntorch.nn.utils.vector_to_parameters(vec, parameters)\nConvert one vector to the parameters\nParameters:\n * vec (Tensor) -- a single vector represents the\n parameters of a model.\n * **parameters** (*Iterable**[**Tensor**]*) -- an iterator of\n Tensors that are the parameters of a model.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.vector_to_parameters.html", "category": "pytorch docs"} {"text": "torch.svd\ntorch.svd(input, some=True, compute_uv=True, *, out=None)\nComputes the singular value decomposition of either a matrix or\n batch of matrices \"input\". The singular value decomposition is\n represented as a namedtuple (U, S, V), such that \"input\" = U\n \\text{diag}(S) V^{\\text{H}}. where V^{\\text{H}} is the transpose of\n V for real inputs, and the conjugate transpose of V for complex\n inputs. If \"input\" is a batch of matrices, then U, S, and V\n are also batched with the same batch dimensions as \"input\".\nIf \"some\" is True (default), the method returns the reduced\n singular value decomposition. In this case, if the last two\n dimensions of \"input\" are m and n, then the returned U and\n V matrices will contain only min(n, m) orthonormal columns.\nIf \"compute_uv\" is False, the returned U and V will be zero-\n filled matrices of shape (m, m) and (n, n) respectively, and", "source": "https://pytorch.org/docs/stable/generated/torch.svd.html", "category": "pytorch docs"} {"text": "the same device as \"input\". The argument \"some\" has no effect when\n \"compute_uv\" is False.\nSupports \"input\" of float, double, cfloat and cdouble data types.\n The dtypes of U and V are the same as \"input\"'s. S will\n always be real-valued, even if \"input\" is complex.\nWarning:\n \"torch.svd()\" is deprecated in favor of \"torch.linalg.svd()\" and\n will be removed in a future PyTorch release.\"U, S, V =\n torch.svd(A, some=some, compute_uv=True)\" (default) should be\n replaced with\n\n U, S, Vh = torch.linalg.svd(A, full_matrices=not some)\n V = Vh.mH\n\n \"_, S, _ = torch.svd(A, some=some, compute_uv=False)\" should be\n replaced with\n\n S = torch.linalg.svdvals(A)\n\nNote:\n Differences with \"torch.linalg.svd()\":\n\n * \"some\" is the opposite of \"torch.linalg.svd()\"'s\n \"full_matrices\". Note that default value for both is *True*, so\n the default behavior is effectively the opposite.\n", "source": "https://pytorch.org/docs/stable/generated/torch.svd.html", "category": "pytorch docs"} {"text": "\n\n\"torch.svd()\" returns V, whereas \"torch.linalg.svd()\" returns\n Vh, that is, V^{\\text{H}}.\n\nIf \"compute_uv\" is False, \"torch.svd()\" returns zero-filled\n tensors for U and Vh, whereas \"torch.linalg.svd()\" returns\n empty tensors.\n\n\n\nNote:\n The singular values are returned in descending order. 
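This is straightforward to verify on a random matrix (a minimal sketch; it also shows that the values agree with the non-deprecated \"torch.linalg.svdvals()\"):
    >>> A = torch.randn(4, 5)
    >>> _, S, _ = torch.svd(A)
    >>> bool((S[:-1] >= S[1:]).all())   # descending order
    True
    >>> torch.allclose(S, torch.linalg.svdvals(A))
    True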
If \"input\"\n is a batch of matrices, then the singular values of each matrix\n in the batch are returned in descending order.\n\nNote:\n The *S* tensor can only be used to compute gradients if\n \"compute_uv\" is *True*.\n\nNote:\n When \"some\" is *False*, the gradients on *U[..., :, min(m, n):]*\n and *V[..., :, min(m, n):]* will be ignored in the backward pass,\n as those vectors can be arbitrary bases of the corresponding\n subspaces.\n\nNote:\n The implementation of \"torch.linalg.svd()\" on CPU uses LAPACK's\n routine *?gesdd* (a divide-and-conquer algorithm) instead of\n *?gesvd* for speed. Analogously, on GPU, it uses cuSOLVER's\n", "source": "https://pytorch.org/docs/stable/generated/torch.svd.html", "category": "pytorch docs"} {"text": "routines gesvdj and gesvdjBatched on CUDA 10.1.243 and later,\n and MAGMA's routine gesdd on earlier versions of CUDA.\nNote:\n The returned *U* will not be contiguous. The matrix (or batch of\n matrices) will be represented as a column-major matrix (i.e.\n Fortran-contiguous).\n\nWarning:\n The gradients with respect to *U* and *V* will only be finite\n when the input does not have zero nor repeated singular values.\n\nWarning:\n If the distance between any two singular values is close to zero,\n the gradients with respect to *U* and *V* will be numerically\n unstable, as they depends on \\frac{1}{\\min_{i \\neq j} \\sigma_i^2\n - \\sigma_j^2}. The same happens when the matrix has small\n singular values, as these gradients also depend on *S\u00e2\u0081\u00bb\u00c2\u00b9*.\n\nWarning:\n For complex-valued \"input\" the singular value decomposition is\n not unique, as *U* and *V* may be multiplied by an arbitrary\n phase factor e^{i \\phi} on every column. The same happens when\n", "source": "https://pytorch.org/docs/stable/generated/torch.svd.html", "category": "pytorch docs"} {"text": "\"input\" has repeated singular values, where one may multiply the\n columns of the spanning subspace in U and V by a rotation\n matrix and the resulting vectors will span the same subspace.\n Different platforms, like NumPy, or inputs on different device\n types, may produce different U and V tensors.\nParameters:\n * input (Tensor) -- the input tensor of size (, m, n)\n where *** is zero or more batch dimensions consisting of (m,\n n)* matrices.\n * **some** (*bool**, **optional*) -- controls whether to compute\n the reduced or full decomposition, and consequently, the shape\n of returned *U* and *V*. Default: *True*.\n\n * **compute_uv** (*bool**, **optional*) -- controls whether to\n compute *U* and *V*. 
Default: *True*.\n\nKeyword Arguments:\n out (tuple, optional) -- the output tuple of tensors\nExample:\n >>> a = torch.randn(5, 3)\n >>> a\n tensor([[ 0.2364, -0.7752, 0.6372],\n", "source": "https://pytorch.org/docs/stable/generated/torch.svd.html", "category": "pytorch docs"} {"text": "tensor([[ 0.2364, -0.7752, 0.6372],\n [ 1.7201, 0.7394, -0.0504],\n [-0.3371, -1.0584, 0.5296],\n [ 0.3550, -0.4022, 1.5569],\n [ 0.2445, -0.0158, 1.1414]])\n >>> u, s, v = torch.svd(a)\n >>> u\n tensor([[ 0.4027, 0.0287, 0.5434],\n [-0.1946, 0.8833, 0.3679],\n [ 0.4296, -0.2890, 0.5261],\n [ 0.6604, 0.2717, -0.2618],\n [ 0.4234, 0.2481, -0.4733]])\n >>> s\n tensor([2.3289, 2.0315, 0.7806])\n >>> v\n tensor([[-0.0199, 0.8766, 0.4809],\n [-0.5080, 0.4054, -0.7600],\n [ 0.8611, 0.2594, -0.4373]])\n >>> torch.dist(a, torch.mm(torch.mm(u, torch.diag(s)), v.t()))\n tensor(8.6531e-07)\n >>> a_big = torch.randn(7, 5, 3)\n >>> u, s, v = torch.svd(a_big)\n >>> torch.dist(a_big, torch.matmul(torch.matmul(u, torch.diag_embed(s)), v.mT))\n tensor(2.6503e-06)", "source": "https://pytorch.org/docs/stable/generated/torch.svd.html", "category": "pytorch docs"} {"text": "RNNBase\nclass torch.nn.RNNBase(mode, input_size, hidden_size, num_layers=1, bias=True, batch_first=False, dropout=0.0, bidirectional=False, proj_size=0, device=None, dtype=None)\nflatten_parameters()\n Resets parameter data pointer so that they can use faster code\n paths.\n\n Right now, this works only if the module is on the GPU and cuDNN\n is enabled. Otherwise, it's a no-op.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNNBase.html", "category": "pytorch docs"} {"text": "torch.tril_indices\ntorch.tril_indices(row, col, offset=0, *, dtype=torch.long, device='cpu', layout=torch.strided) -> Tensor\nReturns the indices of the lower triangular part of a \"row\"-by-\n \"col\" matrix in a 2-by-N Tensor, where the first row contains row\n coordinates of all indices and the second row contains column\n coordinates. Indices are ordered based on rows and then columns.\nThe lower triangular part of the matrix is defined as the elements\n on and below the diagonal.\nThe argument \"offset\" controls which diagonal to consider. If\n \"offset\" = 0, all elements on and below the main diagonal are\n retained. A positive value includes just as many diagonals above\n the main diagonal, and similarly a negative value excludes just as\n many diagonals below the main diagonal. The main diagonal are the\n set of indices \\lbrace (i, i) \\rbrace for i \\in [0, \\min{d_{1},\n d_{2}} - 1] where d_{1}, d_{2} are the dimensions of the matrix.\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.tril_indices.html", "category": "pytorch docs"} {"text": "Note:\n When running on CUDA, \"row * col\" must be less than 2^{59} to\n prevent overflow during calculation.\n\nParameters:\n * row (\"int\") -- number of rows in the 2-D matrix.\n * **col** (\"int\") -- number of columns in the 2-D matrix.\n\n * **offset** (\"int\") -- diagonal offset from the main diagonal.\n Default: if not provided, 0.\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", \"torch.long\".\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). 
\"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **layout** (\"torch.layout\", optional) -- currently only\n support \"torch.strided\".\n\nExample:\n >>> a = torch.tril_indices(3, 3)\n", "source": "https://pytorch.org/docs/stable/generated/torch.tril_indices.html", "category": "pytorch docs"} {"text": "\n\n\na = torch.tril_indices(3, 3)\n >>> a\n tensor([[0, 1, 1, 2, 2, 2],\n [0, 0, 1, 0, 1, 2]])\n\n\n\n >>> a = torch.tril_indices(4, 3, -1)\n >>> a\n tensor([[1, 2, 2, 3, 3, 3],\n [0, 0, 1, 0, 1, 2]])\n\n >>> a = torch.tril_indices(4, 3, 1)\n >>> a\n tensor([[0, 0, 1, 1, 1, 2, 2, 2, 3, 3, 3],\n [0, 1, 0, 1, 2, 0, 1, 2, 0, 1, 2]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.tril_indices.html", "category": "pytorch docs"} {"text": "torch.Tensor.copy_\nTensor.copy_(src, non_blocking=False) -> Tensor\nCopies the elements from \"src\" into \"self\" tensor and returns\n \"self\".\nThe \"src\" tensor must be broadcastable with the \"self\" tensor. It\n may be of a different data type or reside on a different device.\nParameters:\n * src (Tensor) -- the source tensor to copy from\n * **non_blocking** (*bool*) -- if \"True\" and this copy is\n between CPU and GPU, the copy may occur asynchronously with\n respect to the host. For other cases, this argument has no\n effect.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.copy_.html", "category": "pytorch docs"} {"text": "torch.Tensor.chalf\nTensor.chalf(memory_format=torch.preserve_format) -> Tensor\n\"self.chalf()\" is equivalent to \"self.to(torch.complex32)\". See\n \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.chalf.html", "category": "pytorch docs"} {"text": "torch.nn.functional.fractional_max_pool3d\ntorch.nn.functional.fractional_max_pool3d(args, *kwargs)\nApplies 3D fractional max pooling over an input signal composed of\n several input planes.\nFractional MaxPooling is described in detail in the paper\n Fractional MaxPooling by Ben Graham\nThe max-pooling operation is applied in kT \\times kH \\times kW\n regions by a stochastic step size determined by the target output\n size. The number of output features is equal to the number of input\n planes.\nParameters:\n * kernel_size -- the size of the window to take a max over.\n Can be a single number k (for a square kernel of k \\times k\n \\times k) or a tuple (kT, kH, kW)\n * **output_size** -- the target output size of the form oT\n \\times oH \\times oW. Can be a tuple *(oT, oH, oW)* or a single\n number oH for a cubic output oH \\times oH \\times oH\n\n * **output_ratio** -- If one wants to have an output size as a\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.fractional_max_pool3d.html", "category": "pytorch docs"} {"text": "ratio of the input size, this option can be given. This has to\n be a number or tuple in the range (0, 1)\n * **return_indices** -- if \"True\", will return the indices along\n with the outputs. 
Useful to pass to \"max_unpool3d()\".\n\nShape:\n * Input: (N, C, T_{in}, H_{in}, W_{in}) or (C, T_{in}, H_{in},\n W_{in}).\n * Output: (N, C, T_{out}, H_{out}, W_{out}) or (C, T_{out},\n H_{out}, W_{out}), where (T_{out}, H_{out},\n W_{out})=\\text{output\\_size} or (T_{out}, H_{out},\n W_{out})=\\text{output\\_ratio} \\times (T_{in}, H_{in}, W_{in})\n\nExamples::\n >>> input = torch.randn(20, 16, 50, 32, 16)\n >>> # pool of cubic window of size=3, and target output size 13x12x11\n >>> F.fractional_max_pool3d(input, 3, output_size=(13, 12, 11))\n >>> # pool of cubic window and target output size being half of input size\n >>> F.fractional_max_pool3d(input, 3, output_ratio=(0.5, 0.5, 0.5))", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.fractional_max_pool3d.html", "category": "pytorch docs"} {"text": "ConvBnReLU2d\nclass torch.ao.nn.intrinsic.qat.ConvBnReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)\nA ConvBnReLU2d module is a module fused from Conv2d, BatchNorm2d\n and ReLU, attached with FakeQuantize modules for weight, used in\n quantization aware training.\nWe combined the interface of \"torch.nn.Conv2d\" and\n \"torch.nn.BatchNorm2d\" and \"torch.nn.ReLU\".\nSimilar to torch.nn.Conv2d, with FakeQuantize modules initialized\n to default.\nVariables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBnReLU2d.html", "category": "pytorch docs"} {"text": "ParametrizationList\nclass torch.nn.utils.parametrize.ParametrizationList(modules, original, unsafe=False)\nA sequential container that holds and manages the \"original\" or\n \"original0\", \"original1\", ... parameters or buffers of a\n parametrized \"torch.nn.Module\".\nIt is the type of \"module.parametrizations[tensor_name]\" when\n \"module[tensor_name]\" has been parametrized with\n \"register_parametrization()\".\nIf the first registered parametrization has a \"right_inverse\" that\n returns one tensor or does not have a \"right_inverse\" (in which\n case we assume that \"right_inverse\" is the identity), it will hold\n the tensor under the name \"original\". If it has a \"right_inverse\"\n that returns more than one tensor, these will be registered as\n \"original0\", \"original1\", ...\nWarning:\n This class is used internally by \"register_parametrization()\". It\n is documented here for completeness. It shall not be instantiated\n by the user.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.ParametrizationList.html", "category": "pytorch docs"} {"text": "by the user.\nParameters:\n * modules (sequence) -- sequence of modules representing\n the parametrizations\n * **original** (*Parameter** or **Tensor*) -- parameter or\n buffer that is parametrized\n\n * **unsafe** (*bool*) -- a boolean flag that denotes whether the\n parametrization may change the dtype and shape of the tensor.\n Default: *False* Warning: the parametrization is not checked\n for consistency upon registration. Enable this flag at your\n own risk.\n\nright_inverse(value)\n Calls the methods \"right_inverse\" (see\n \"register_parametrization()\") of the parametrizations in the\n inverse order they were registered in. Then, it stores the\n result in \"self.original\" if \"right_inverse\" outputs one tensor\n or in \"self.original0\", \"self.original1\", ... 
if it outputs\n several.\n\n Parameters:\n **value** (*Tensor*) -- Value to which initialize the module\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.ParametrizationList.html", "category": "pytorch docs"} {"text": "torch.nn.functional.cosine_embedding_loss\ntorch.nn.functional.cosine_embedding_loss(input1, input2, target, margin=0, size_average=None, reduce=None, reduction='mean') -> Tensor\nSee \"CosineEmbeddingLoss\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cosine_embedding_loss.html", "category": "pytorch docs"} {"text": "torch._foreach_frac\ntorch._foreach_frac(self: List[Tensor]) -> List[Tensor]\nApply \"torch.frac()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_frac.html", "category": "pytorch docs"} {"text": "torch.stft\ntorch.stft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=None, return_complex=None)\nShort-time Fourier transform (STFT).\nWarning:\n From version 1.8.0, \"return_complex\" must always be given\n explicitly for real inputs and *return_complex=False* has been\n deprecated. Strongly prefer *return_complex=True* as in a future\n pytorch release, this function will only return complex\n tensors.Note that \"torch.view_as_real()\" can be used to recover a\n real tensor with an extra last dimension for real and imaginary\n components.\n\nThe STFT computes the Fourier transform of short overlapping\n windows of the input. This giving frequency components of the\n signal as they change over time. The interface of this function is\n modeled after (but not a drop-in replacement for) librosa stft\n function.\nIgnoring the optional batch dimension, this method computes the", "source": "https://pytorch.org/docs/stable/generated/torch.stft.html", "category": "pytorch docs"} {"text": "following expression:\n X[\\omega, m] = \\sum_{k = 0}^{\\text{win\\_length-1}}%\n \\text{window}[k]\\ \\text{input}[m \\times \\text{hop\\_length} + k]\\\n % \\exp\\left(- j \\frac{2 \\pi \\cdot \\omega\n k}{\\text{win\\_length}}\\right),\n\nwhere m is the index of the sliding window, and \\omega is the\n frequency 0 \\leq \\omega < \\text{n_fft} for \"onesided=False\", or 0\n \\leq \\omega < \\lfloor \\text{n_fft} / 2 \\rfloor + 1 for\n \"onesided=True\".\n\n\n\"input\" must be either a 1-D time sequence or a 2-D batch of time\n sequences.\n\n\nIf \"hop_length\" is \"None\" (default), it is treated as equal to\n \"floor(n_fft / 4)\".\n\n\nIf \"win_length\" is \"None\" (default), it is treated as equal to\n \"n_fft\".\n\n\n\"window\" can be a 1-D tensor of size \"win_length\", e.g., from\n \"torch.hann_window()\". If \"window\" is \"None\" (default), it is\n treated as if having 1 everywhere in the window. If\n \\text{win_length} < \\text{n_fft}, \"window\" will be padded on\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.stft.html", "category": "pytorch docs"} {"text": "both sides to length \"n_fft\" before being applied.\n\n\nIf \"center\" is \"True\" (default), \"input\" will be padded on both\n sides so that the t-th frame is centered at time t \\times\n \\text{hop_length}. Otherwise, the t-th frame begins at time t\n \\times \\text{hop_length}.\n\n\n\"pad_mode\" determines the padding method used on \"input\" when\n \"center\" is \"True\". See \"torch.nn.functional.pad()\" for all\n available options. 
Default is \"\"reflect\"\".\n\n\nIf \"onesided\" is \"True\" (default for real input), only values for\n \\omega in \\left[0, 1, 2, \\dots, \\left\\lfloor\n \\frac{\\text{n_fft}}{2} \\right\\rfloor + 1\\right] are returned\n because the real-to-complex Fourier transform satisfies the\n conjugate symmetry, i.e., X[m, \\omega] = X[m, \\text{n_fft} -\n \\omega]^*. Note if the input or window tensors are complex, then\n \"onesided\" output is not possible.\n\n\nIf \"normalized\" is \"True\" (default is \"False\"), the function\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.stft.html", "category": "pytorch docs"} {"text": "returns the normalized STFT results, i.e., multiplied by\n (\\text{frame_length})^{-0.5}.\n\nIf \"return_complex\" is \"True\" (default if input is complex), the\n return is a \"input.dim() + 1\" dimensional complex tensor. If\n \"False\", the output is a \"input.dim() + 2\" dimensional real\n tensor where the last dimension represents the real and imaginary\n components.\n\nReturns either a complex tensor of size ( \\times N \\times T) if\n \"return_complex\" is true, or a real tensor of size ( \\times N\n \\times T \\times 2). Where * is the optional batch size of \"input\",\n N is the number of frequencies where STFT is applied and T is the\n total number of frames used.\nWarning:\n This function changed signature at version 0.4.1. Calling with\n the previous signature may cause error or return incorrect\n result.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **n_fft** (*int*) -- size of Fourier transform\n", "source": "https://pytorch.org/docs/stable/generated/torch.stft.html", "category": "pytorch docs"} {"text": "\n\nhop_length (int, optional) -- the distance between\n neighboring sliding window frames. Default: \"None\" (treated as\n equal to \"floor(n_fft / 4)\")\n\n\nwin_length (int, optional) -- the size of window\n frame and STFT filter. Default: \"None\" (treated as equal to\n \"n_fft\")\n\n\nwindow (Tensor, optional) -- the optional window\n function. Default: \"None\" (treated as window of all 1 s)\n\n\ncenter (bool, optional) -- whether to pad \"input\" on\n both sides so that the t-th frame is centered at time t \\times\n \\text{hop_length}. Default: \"True\"\n\n\npad_mode (str, optional) -- controls the padding\n method used when \"center\" is \"True\". 
Default: \"\"reflect\"\"\n\n\nnormalized (bool, optional) -- controls whether to\n return the normalized STFT results Default: \"False\"\n\n\nonesided (bool, optional) -- controls whether to\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.stft.html", "category": "pytorch docs"} {"text": "return half of results to avoid redundancy for real inputs.\n Default: \"True\" for real \"input\" and \"window\", \"False\"\n otherwise.\n * **return_complex** (*bool**, **optional*) --\n\n whether to return a complex tensor, or a real tensor with an\n extra last dimension for the real and imaginary components.\n\n Changed in version 2.0: \"return_complex\" is now a required\n argument for real inputs, as the default is being transitioned\n to \"True\".\n\n Deprecated since version 2.0: \"return_complex=False\" is\n deprecated, instead use \"return_complex=True\" Note that\n calling \"torch.view_as_real()\" on the output will recover the\n deprecated output format.\n\nReturns:\n A tensor containing the STFT result with shape described above\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.stft.html", "category": "pytorch docs"} {"text": "upsample_nearest\nclass torch.ao.nn.quantized.functional.upsample_nearest(input, size=None, scale_factor=None)\nUpsamples the input, using nearest neighbours' pixel values.\nWarning:\n This function is deprecated in favor of\n \"torch.nn.quantized.functional.interpolate()\". This is equivalent\n with \"nn.quantized.functional.interpolate(..., mode='nearest')\".\n\nNote:\n The input quantization parameters propagate to the output.\n\nNote:\n Only 2D inputs are supported\n\nParameters:\n * input (Tensor) -- quantized input\n * **size** (*int** or **Tuple**[**int**, **int**] or\n **Tuple**[**int**, **int**, **int**]*) -- output spatial size.\n\n * **scale_factor** (*int*) -- multiplier for spatial size. Has\n to be an integer.\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.upsample_nearest.html", "category": "pytorch docs"} {"text": "torch.addmm\ntorch.addmm(input, mat1, mat2, *, beta=1, alpha=1, out=None) -> Tensor\nPerforms a matrix multiplication of the matrices \"mat1\" and \"mat2\".\n The matrix \"input\" is added to the final result.\nIf \"mat1\" is a (n \\times m) tensor, \"mat2\" is a (m \\times p)\n tensor, then \"input\" must be broadcastable with a (n \\times p)\n tensor and \"out\" will be a (n \\times p) tensor.\n\"alpha\" and \"beta\" are scaling factors on matrix-vector product\n between \"mat1\" and \"mat2\" and the added matrix \"input\"\n respectively.\n \\text{out} = \\beta\\ \\text{input} + \\alpha\\ (\\text{mat1}_i\n \\mathbin{@} \\text{mat2}_i)\n\nIf \"beta\" is 0, then \"input\" will be ignored, and nan and inf\n in it will not be propagated.\nFor inputs of type FloatTensor or DoubleTensor, arguments\n \"beta\" and \"alpha\" must be real numbers, otherwise they should be\n integers.\nThis operation has support for arguments with sparse layouts. If", "source": "https://pytorch.org/docs/stable/generated/torch.addmm.html", "category": "pytorch docs"} {"text": "\"input\" is sparse the result will have the same layout and if \"out\"\n is provided it must have the same layout as \"input\".\nWarning:\n Sparse support is a beta feature and some layout(s)/dtype/device\n combinations may not be supported, or may not have autograd\n support. 
If you notice missing functionality please open a\n feature request.\n\nThis operator supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\nParameters:\n * input (Tensor) -- matrix to be added\n * **mat1** (*Tensor*) -- the first matrix to be matrix\n multiplied\n\n * **mat2** (*Tensor*) -- the second matrix to be matrix\n multiplied\n\nKeyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * **alpha** (*Number**, **optional*) -- multiplier for mat1 @\n mat2 (\\alpha)\n\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:", "source": "https://pytorch.org/docs/stable/generated/torch.addmm.html", "category": "pytorch docs"} {"text": "Example:\n >>> M = torch.randn(2, 3)\n >>> mat1 = torch.randn(2, 3)\n >>> mat2 = torch.randn(3, 3)\n >>> torch.addmm(M, mat1, mat2)\n tensor([[-4.8716, 1.4671, -1.3746],\n [ 0.7573, -3.9555, -2.8681]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.addmm.html", "category": "pytorch docs"} {"text": "torch.Tensor.subtract_\nTensor.subtract_(other, *, alpha=1) -> Tensor\nIn-place version of \"subtract()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.subtract_.html", "category": "pytorch docs"} {"text": "torch.Tensor.arcsin\nTensor.arcsin() -> Tensor\nSee \"torch.arcsin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arcsin.html", "category": "pytorch docs"} {"text": "torch.quantized_max_pool2d\ntorch.quantized_max_pool2d(input, kernel_size, stride=[], padding=0, dilation=1, ceil_mode=False) -> Tensor\nApplies a 2D max pooling over an input quantized tensor composed of\n several input planes.\nParameters:\n * input (Tensor) -- quantized tensor\n * **kernel_size** (\"list of int\") -- the size of the sliding\n window\n\n * **stride** (\"list of int\", optional) -- the stride of the\n sliding window\n\n * **padding** (\"list of int\", optional) -- padding to be added\n on both sides, must be >= 0 and <= kernel_size / 2\n\n * **dilation** (\"list of int\", optional) -- The stride between\n elements within a sliding window, must be > 0. Default 1\n\n * **ceil_mode** (*bool**, **optional*) -- If True, will use ceil\n instead of floor to compute the output shape. Defaults to\n False.\n\nReturns:\n A quantized tensor with max_pool2d applied.\nReturn type:\n Tensor\nExample:", "source": "https://pytorch.org/docs/stable/generated/torch.quantized_max_pool2d.html", "category": "pytorch docs"} {"text": "Return type:\n Tensor\nExample:\n >>> qx = torch.quantize_per_tensor(torch.rand(2, 2, 2, 2), 1.5, 3, torch.quint8)\n >>> torch.quantized_max_pool2d(qx, [2,2])\n tensor([[[[1.5000]],\n\n [[1.5000]]],\n\n\n [[[0.0000]],\n\n [[0.0000]]]], size=(2, 2, 1, 1), dtype=torch.quint8,\n quantization_scheme=torch.per_tensor_affine, scale=1.5, zero_point=3)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantized_max_pool2d.html", "category": "pytorch docs"} {"text": "ReLU\nclass torch.nn.ReLU(inplace=False)\nApplies the rectified linear unit function element-wise:\n\\text{ReLU}(x) = (x)^+ = \\max(0, x)\nParameters:\n inplace (bool) -- can optionally do the operation in-\n place. 
Default: \"False\"\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.ReLU()\n >>> input = torch.randn(2)\n >>> output = m(input)\n\n\n An implementation of CReLU - https://arxiv.org/abs/1603.05201\n\n >>> m = nn.ReLU()\n >>> input = torch.randn(2).unsqueeze(0)\n >>> output = torch.cat((m(input), m(-input)))\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReLU.html", "category": "pytorch docs"} {"text": "hardtanh\nclass torch.ao.nn.quantized.functional.hardtanh(input, min_val=- 1.0, max_val=1.0, inplace=False)\nThis is the quantized version of \"hardtanh()\".\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.hardtanh.html", "category": "pytorch docs"} {"text": "torch.Tensor.put_\nTensor.put_(index, source, accumulate=False) -> Tensor\nCopies the elements from \"source\" into the positions specified by\n \"index\". For the purpose of indexing, the \"self\" tensor is treated\n as if it were a 1-D tensor.\n\"index\" and \"source\" need to have the same number of elements, but\n not necessarily the same shape.\nIf \"accumulate\" is \"True\", the elements in \"source\" are added to\n \"self\". If accumulate is \"False\", the behavior is undefined if\n \"index\" contain duplicate elements.\nParameters:\n * index (LongTensor) -- the indices into self\n * **source** (*Tensor*) -- the tensor containing values to copy\n from\n\n * **accumulate** (*bool*) -- whether to accumulate into self\n\nExample:\n >>> src = torch.tensor([[4, 3, 5],\n ... [6, 7, 8]])\n >>> src.put_(torch.tensor([1, 3]), torch.tensor([9, 10]))\n tensor([[ 4, 9, 5],\n [ 10, 7, 8]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.put_.html", "category": "pytorch docs"} {"text": "torch._foreach_trunc\ntorch._foreach_trunc(self: List[Tensor]) -> List[Tensor]\nApply \"torch.trunc()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_trunc.html", "category": "pytorch docs"} {"text": "torch.Tensor.acosh\nTensor.acosh() -> Tensor\nSee \"torch.acosh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.acosh.html", "category": "pytorch docs"} {"text": "torch.Tensor.backward\nTensor.backward(gradient=None, retain_graph=None, create_graph=False, inputs=None)\nComputes the gradient of current tensor w.r.t. graph leaves.\nThe graph is differentiated using the chain rule. If the tensor is\n non-scalar (i.e. its data has more than one element) and requires\n gradient, the function additionally requires specifying \"gradient\".\n It should be a tensor of matching type and location, that contains\n the gradient of the differentiated function w.r.t. 
\"self\".\nThis function accumulates gradients in the leaves - you might need\n to zero \".grad\" attributes or set them to \"None\" before calling it.\n See Default gradient layouts for details on the memory layout of\n accumulated gradients.\nNote:\n If you run any forward ops, create \"gradient\", and/or call\n \"backward\" in a user-specified CUDA stream context, see Stream\n semantics of backward passes.\n\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.backward.html", "category": "pytorch docs"} {"text": "semantics of backward passes.\nNote:\n When \"inputs\" are provided and a given input is not a leaf, the\n current implementation will call its grad_fn (though it is not\n strictly needed to get this gradients). It is an implementation\n detail on which the user should not rely. See https://github.com\n /pytorch/pytorch/pull/60521#issuecomment-867061780 for more\n details.\n\nParameters:\n * gradient (Tensor or None) -- Gradient w.r.t. the\n tensor. If it is a tensor, it will be automatically converted\n to a Tensor that does not require grad unless \"create_graph\"\n is True. None values can be specified for scalar Tensors or\n ones that don't require grad. If a None value would be\n acceptable then this argument is optional.\n * **retain_graph** (*bool**, **optional*) -- If \"False\", the\n graph used to compute the grads will be freed. Note that in\n nearly all cases setting this option to True is not needed and\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.backward.html", "category": "pytorch docs"} {"text": "often can be worked around in a much more efficient way.\n Defaults to the value of \"create_graph\".\n * **create_graph** (*bool**, **optional*) -- If \"True\", graph of\n the derivative will be constructed, allowing to compute higher\n order derivative products. Defaults to \"False\".\n\n * **inputs** (*sequence of Tensor*) -- Inputs w.r.t. which the\n gradient will be accumulated into \".grad\". All other Tensors\n will be ignored. If not provided, the gradient is accumulated\n into all the leaf Tensors that were used to compute the\n attr::tensors.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.backward.html", "category": "pytorch docs"} {"text": "torch.pinverse\ntorch.pinverse(input, rcond=1e-15) -> Tensor\nAlias for \"torch.linalg.pinv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.pinverse.html", "category": "pytorch docs"} {"text": "torch.Tensor.reshape_as\nTensor.reshape_as(other) -> Tensor\nReturns this tensor as the same shape as \"other\".\n \"self.reshape_as(other)\" is equivalent to\n \"self.reshape(other.sizes())\". This method returns a view if\n \"other.sizes()\" is compatible with the current shape. 
See\n \"torch.Tensor.view()\" on when it is possible to return a view.\nPlease see \"reshape()\" for more information about \"reshape\".\nParameters:\n other (\"torch.Tensor\") -- The result tensor has the same\n shape as \"other\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.reshape_as.html", "category": "pytorch docs"} {"text": "torch.nn.modules.module.register_module_forward_pre_hook\ntorch.nn.modules.module.register_module_forward_pre_hook(hook)\nRegisters a forward pre-hook common to all modules.\nWarning:\n This adds global state to the *nn.module* module and it is only\n intended for debugging/profiling purposes.\n\nThe hook will be called every time before \"forward()\" is invoked.\n It should have the following signature:\n hook(module, input) -> None or modified input\n\nThe input contains only the positional arguments given to the\n module. Keyword arguments won't be passed to the hooks and only to\n the \"forward\". The hook can modify the input. User can either\n return a tuple or a single modified value in the hook. We will wrap\n the value into a tuple if a single value is returned(unless that\n value is already a tuple).\nThis hook has precedence over the specific module hooks registered\n with \"register_forward_pre_hook\".\nReturns:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_pre_hook.html", "category": "pytorch docs"} {"text": "with \"register_forward_pre_hook\".\nReturns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\nReturn type:\n \"torch.utils.hooks.RemovableHandle\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.modules.module.register_module_forward_pre_hook.html", "category": "pytorch docs"} {"text": "torch.nn.functional.prelu\ntorch.nn.functional.prelu(input, weight) -> Tensor\nApplies element-wise the function \\text{PReLU}(x) = \\max(0,x) +\n \\text{weight} * \\min(0,x) where weight is a learnable parameter.\nNote:\n *weight* is expected to be a scalar or 1-D tensor. If *weight* is\n 1-D, its size must match the number of input channels, determined\n by *input.size(1)* when *input.dim() >= 2*, otherwise 1. In the\n 1-D case, note that when *input* has dim > 2, *weight* can be\n expanded to the shape of *input* in a way that is not possible\n using normal broadcasting semantics.\n\nSee \"PReLU\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.prelu.html", "category": "pytorch docs"} {"text": "default_per_channel_weight_fake_quant\ntorch.quantization.fake_quantize.default_per_channel_weight_fake_quant\nalias of functools.partial(,\n observer=, quant_min=-128, quant_max=127,\n dtype=torch.qint8, qscheme=torch.per_channel_symmetric,\n reduce_range=False, ch_axis=0){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_per_channel_weight_fake_quant.html", "category": "pytorch docs"} {"text": "torch.fft.irfft\ntorch.fft.irfft(input, n=None, dim=- 1, norm=None, *, out=None) -> Tensor\nComputes the inverse of \"rfft()\".\n\"input\" is interpreted as a one-sided Hermitian signal in the\n Fourier domain, as produced by \"rfft()\". By the Hermitian property,\n the output will be real-valued.\nNote:\n Some input frequencies must be real-valued to satisfy the\n Hermitian property. In these cases the imaginary component will\n be ignored. 
For example, any imaginary component in the zero-\n frequency term cannot be represented in a real output and so will\n always be ignored.\n\nNote:\n The correct interpretation of the Hermitian input depends on the\n length of the original data, as given by \"n\". This is because\n each input shape could correspond to either an odd or even length\n signal. By default, the signal is assumed to be even length and\n odd signals will not round-trip properly. So, it is recommended\n to always pass the signal length \"n\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft.html", "category": "pytorch docs"} {"text": "to always pass the signal length \"n\".\nNote:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimension. With default arguments,\n size of the transformed dimension should be (2^n + 1) as argument\n *n* defaults to even output size = 2 * (transformed_dim_size - 1)\n\nParameters:\n * input (Tensor) -- the input tensor representing a half-\n Hermitian signal\n * **n** (*int**, **optional*) -- Output signal length. This\n determines the length of the output signal. If given, the\n input will either be zero-padded or trimmed to this length\n before computing the real IFFT. Defaults to even output:\n \"n=2*(input.size(dim) - 1)\".\n\n * **dim** (*int**, **optional*) -- The dimension along which to\n take the one dimensional real IFFT.\n\n * **norm** (*str**, **optional*) --\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft.html", "category": "pytorch docs"} {"text": "\nnorm (str, optional) --Normalization mode. For the backward transform (\"irfft()\"),\nthese correspond to:\n\n* \"\"forward\"\" - no normalization\n\n* \"\"backward\"\" - normalize by \"1/n\"\n\n* \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the real IFFT\n orthonormal)\n\nCalling the forward transform (\"rfft()\") with the same\nnormalization mode will apply an overall normalization of\n\"1/n\" between the two transforms. This is required to make\n\"irfft()\" the exact inverse.\n\nDefault is \"\"backward\"\" (normalize by \"1/n\").\n\n\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nt = torch.linspace(0, 1, 5)\nt\n tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])\nT = torch.fft.rfft(t)\nT\n tensor([ 2.5000+0.0000j, -0.6250+0.8602j, -0.6250+0.2031j])\n\n\n\nWithout specifying the output length to \"irfft()\", the output will", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft.html", "category": "pytorch docs"} {"text": "not round-trip properly because the input is odd-length:\n\n\n\ntorch.fft.irfft(T)\n tensor([0.1562, 0.3511, 0.7812, 1.2114])\n\n\n\nSo, it is recommended to always pass the signal length \"n\":\n\n\n\nroundtrip = torch.fft.irfft(T, t.numel())\ntorch.testing.assert_close(roundtrip, t, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfft.html", "category": "pytorch docs"} {"text": "torch.hamming_window\ntorch.hamming_window(window_length, periodic=True, alpha=0.54, beta=0.46, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nHamming window function.\n w[n] = \\alpha - \\beta\\ \\cos \\left( \\frac{2 \\pi n}{N - 1}\n \\right),\n\nwhere N is the full window size.\nThe input \"window_length\" is a positive integer controlling the\n returned window size. 
\"periodic\" flag determines whether the\n returned window trims off the last duplicate value from the\n symmetric window and is ready to be used as a periodic window with\n functions like \"torch.stft()\". Therefore, if \"periodic\" is true,\n the N in above formula is in fact \\text{window_length} + 1. Also,\n we always have \"torch.hamming_window(L, periodic=True)\" equal to\n \"torch.hamming_window(L + 1, periodic=False)[:-1])\".\nNote:\n If \"window_length\" =1, the returned window contains a single\n value 1.\n\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.hamming_window.html", "category": "pytorch docs"} {"text": "value 1.\nNote:\n This is a generalized version of \"torch.hann_window()\".\n\nParameters:\n * window_length (int) -- the size of returned window\n * **periodic** (*bool**, **optional*) -- If True, returns a\n window to be used as periodic function. If False, return a\n symmetric window.\n\n * **alpha** (*float**, **optional*) -- The coefficient \\alpha in\n the equation above\n\n * **beta** (*float**, **optional*) -- The coefficient \\beta in\n the equation above\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\"). Only floating point\n types are supported.\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned window tensor. Only \"torch.strided\" (dense layout) is\n supported.\n\n * **device** (\"torch.device\", optional) -- the desired device of\n", "source": "https://pytorch.org/docs/stable/generated/torch.hamming_window.html", "category": "pytorch docs"} {"text": "returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\nReturns:\n A 1-D tensor of size (\\text{window_length},) containing the\n window.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.hamming_window.html", "category": "pytorch docs"} {"text": "torch.Tensor.matrix_power\nTensor.matrix_power(n) -> Tensor\nNote:\n \"matrix_power()\" is deprecated, use \"torch.linalg.matrix_power()\"\n instead.\n\nAlias for \"torch.linalg.matrix_power()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.matrix_power.html", "category": "pytorch docs"} {"text": "BackendConfig\nclass torch.ao.quantization.backend_config.BackendConfig(name='')\nConfig that defines the set of patterns that can be quantized on a\n given backend, and how reference quantized models can be produced\n from these patterns.\nA pattern in this context refers to a module, a functional, an\n operator, or a directed acyclic graph of the above. 
Each pattern\n supported on the target backend can be individually configured\n through \"BackendPatternConfig\" in terms of:\n\n\nThe supported input/output activation, weight, and bias data\n types\n\n\nHow observers and quant/dequant ops are inserted in order to\n construct the reference pattern, and\n\n\n(Optionally) Fusion, QAT, and reference module mappings.\n\n\nThe format of the patterns is described in: https://github.com/pyt\n orch/pytorch/blob/master/torch/ao/quantization/backend_config/READ\n ME.md\nExample usage:\n import torch\n from torch.ao.quantization.backend_config import (\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html", "category": "pytorch docs"} {"text": "BackendConfig,\n BackendPatternConfig,\n DTypeConfig,\n ObservationType,\n )\n weighted_int8_dtype_config = DTypeConfig(\n input_dtype=torch.quint8,\n output_dtype=torch.quint8,\n weight_dtype=torch.qint8,\n bias_dtype=torch.float)\n\n def fuse_conv2d_relu(is_qat, conv, relu):\n return torch.ao.nn.intrinsic.ConvReLU2d(conv, relu)\n\n # For quantizing Linear\n linear_config = BackendPatternConfig(torch.nn.Linear) .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT) .add_dtype_config(weighted_int8_dtype_config) .set_root_module(torch.nn.Linear) .set_qat_module(torch.ao.nn.qat.Linear) .set_reference_quantized_module(torch.ao.nn.quantized.reference.Linear)\n\n # For fusing Conv2d + ReLU into ConvReLU2d\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html", "category": "pytorch docs"} {"text": "For fusing Conv2d + ReLU into ConvReLU2d\n conv_relu_config = BackendPatternConfig((torch.nn.Conv2d, torch.nn.ReLU)) .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT) .add_dtype_config(weighted_int8_dtype_config) .set_fused_module(torch.ao.nn.intrinsic.ConvReLU2d) .set_fuser_method(fuse_conv2d_relu)\n\n # For quantizing ConvReLU2d\n fused_conv_relu_config = BackendPatternConfig(torch.ao.nn.intrinsic.ConvReLU2d) .set_observation_type(ObservationType.OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT) .add_dtype_config(weighted_int8_dtype_config) .set_root_module(torch.nn.Conv2d) .set_qat_module(torch.ao.nn.intrinsic.qat.ConvReLU2d) .set_reference_quantized_module(torch.ao.nn.quantized.reference.Conv2d)\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html", "category": "pytorch docs"} {"text": "backend_config = BackendConfig(\"my_backend\") .set_backend_pattern_config(linear_config) .set_backend_pattern_config(conv_relu_config) .set_backend_pattern_config(fused_conv_relu_config)\nproperty configs: List[BackendPatternConfig]\n Return a copy of the list of configs set in this\n *BackendConfig*.\n\nclassmethod from_dict(backend_config_dict)\n Create a \"BackendConfig\" from a dictionary with the following\n items:\n\n \"name\": the name of the target backend\n\n \"configs\": a list of dictionaries that each represents a\n *BackendPatternConfig*\n\n Return type:\n *BackendConfig*\n\nset_backend_pattern_config(config)\n Set the config for an pattern that can be run on the target\n backend. 
This overrides any existing config for the given\n pattern.\n\n Return type:\n *BackendConfig*\n\nset_backend_pattern_configs(configs)\n Set the configs for patterns that can be run on the target\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html", "category": "pytorch docs"} {"text": "backend. This overrides any existing config for a given pattern\n if it was previously registered already.\n Return type:\n *BackendConfig*\n\nset_name(name)\n Set the name of the target backend.\n\n Return type:\n *BackendConfig*\n\nto_dict()\n Convert this \"BackendConfig\" to a dictionary with the items\n described in \"from_dict()\".\n\n Return type:\n *Dict*[str, *Any*]\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendConfig.html", "category": "pytorch docs"} {"text": "torch.Tensor.as_subclass\nTensor.as_subclass(cls) -> Tensor\nMakes a \"cls\" instance with the same data pointer as \"self\".\n Changes in the output mirror changes in \"self\", and the output\n stays attached to the autograd graph. \"cls\" must be a subclass of\n \"Tensor\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.as_subclass.html", "category": "pytorch docs"} {"text": "torch.Tensor.cumprod_\nTensor.cumprod_(dim, dtype=None) -> Tensor\nIn-place version of \"cumprod()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cumprod_.html", "category": "pytorch docs"} {"text": "torch.Tensor.flipud\nTensor.flipud() -> Tensor\nSee \"torch.flipud()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.flipud.html", "category": "pytorch docs"} {"text": "torch.zeros\ntorch.zeros(size, , out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nReturns a tensor filled with the scalar value 0, with the shape\n defined by the variable argument \"size\".\nParameters:\n size (int...) -- a sequence of integers defining the\n shape of the output tensor. Can be a variable number of\n arguments or a collection like a list or tuple.\nKeyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n", "source": "https://pytorch.org/docs/stable/generated/torch.zeros.html", "category": "pytorch docs"} {"text": "for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
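A small sketch of "Tensor.as_subclass()" described above; the subclass name is purely illustrative:

    import torch

    class MyTensor(torch.Tensor):
        pass

    t = torch.randn(3, requires_grad=True)
    m = t.as_subclass(MyTensor)
    print(type(m) is MyTensor)            # True
    print(m.data_ptr() == t.data_ptr())   # True: same underlying storage
    print(m.requires_grad)                # True: still attached to the autograd graph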
Default: \"False\".\n\nExample:\n >>> torch.zeros(2, 3)\n tensor([[ 0., 0., 0.],\n [ 0., 0., 0.]])\n\n >>> torch.zeros(5)\n tensor([ 0., 0., 0., 0., 0.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.zeros.html", "category": "pytorch docs"} {"text": "torch.Tensor.swapaxes\nTensor.swapaxes(axis0, axis1) -> Tensor\nSee \"torch.swapaxes()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.swapaxes.html", "category": "pytorch docs"} {"text": "torch.jit.save\ntorch.jit.save(m, f, _extra_files=None)\nSave an offline version of this module for use in a separate\n process. The saved module serializes all of the methods,\n submodules, parameters, and attributes of this module. It can be\n loaded into the C++ API using \"torch::jit::load(filename)\" or into\n the Python API with \"torch.jit.load\".\nTo be able to save a module, it must not make any calls to native\n Python functions. This means that all submodules must be\n subclasses of \"ScriptModule\" as well.\nDanger:\n All modules, no matter their device, are always loaded onto the\n CPU during loading. This is different from \"torch.load()\"'s\n semantics and may change in the future.\n\nParameters:\n * m -- A \"ScriptModule\" to save.\n * **f** -- A file-like object (has to implement write and flush)\n or a string containing a file name.\n\n * **_extra_files** -- Map from filename to contents which will\n be stored as part of *f*.\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.save.html", "category": "pytorch docs"} {"text": "be stored as part of f.\nNote:\n torch.jit.save attempts to preserve the behavior of some\n operators across versions. For example, dividing two integer\n tensors in PyTorch 1.5 performed floor division, and if the\n module containing that code is saved in PyTorch 1.5 and loaded in\n PyTorch 1.6 its division behavior will be preserved. The same\n module saved in PyTorch 1.6 will fail to load in PyTorch 1.5,\n however, since the behavior of division changed in 1.6, and 1.5\n does not know how to replicate the 1.6 behavior.\n\nExample:\n import torch\n import io\n\n class MyModule(torch.nn.Module):\n def forward(self, x):\n return x + 10\n\n m = torch.jit.script(MyModule())\n\n # Save to file\n torch.jit.save(m, 'scriptmodule.pt')\n # This line is equivalent to the previous\n m.save(\"scriptmodule.pt\")\n\n # Save to io.BytesIO buffer\n buffer = io.BytesIO()\n torch.jit.save(m, buffer)\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.save.html", "category": "pytorch docs"} {"text": "torch.jit.save(m, buffer)\n # Save with extra files\n extra_files = {'foo.txt': b'bar'}\n torch.jit.save(m, 'scriptmodule.pt', _extra_files=extra_files)\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.save.html", "category": "pytorch docs"} {"text": "Sequential\nclass torch.nn.Sequential(*args: Module)\nclass torch.nn.Sequential(arg: OrderedDict[str, Module])\nA sequential container. Modules will be added to it in the order\n they are passed in the constructor. Alternatively, an \"OrderedDict\"\n of modules can be passed in. 
The \"forward()\" method of \"Sequential\"\n accepts any input and forwards it to the first module it contains.\n It then \"chains\" outputs to inputs sequentially for each subsequent\n module, finally returning the output of the last module.\nThe value a \"Sequential\" provides over manually calling a sequence\n of modules is that it allows treating the whole container as a\n single module, such that performing a transformation on the\n \"Sequential\" applies to each of the modules it stores (which are\n each a registered submodule of the \"Sequential\").\nWhat's the difference between a \"Sequential\" and a\n \"torch.nn.ModuleList\"? A \"ModuleList\" is exactly what it sounds", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html", "category": "pytorch docs"} {"text": "like--a list for storing \"Module\" s! On the other hand, the layers\n in a \"Sequential\" are connected in a cascading way.\nExample:\n # Using Sequential to create a small model. When `model` is run,\n # input will first be passed to `Conv2d(1,20,5)`. The output of\n # `Conv2d(1,20,5)` will be used as the input to the first\n # `ReLU`; the output of the first `ReLU` will become the input\n # for `Conv2d(20,64,5)`. Finally, the output of\n # `Conv2d(20,64,5)` will be used as input to the second `ReLU`\n model = nn.Sequential(\n nn.Conv2d(1,20,5),\n nn.ReLU(),\n nn.Conv2d(20,64,5),\n nn.ReLU()\n )\n\n # Using Sequential with OrderedDict. This is functionally the\n # same as the above code\n model = nn.Sequential(OrderedDict([\n ('conv1', nn.Conv2d(1,20,5)),\n ('relu1', nn.ReLU()),\n ('conv2', nn.Conv2d(20,64,5)),\n ('relu2', nn.ReLU())\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html", "category": "pytorch docs"} {"text": "('relu2', nn.ReLU())\n ]))\nappend(module)\n Appends a given module to the end.\n\n Parameters:\n **module** (*nn.Module*) -- module to append\n\n Return type:\n *Sequential*\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Sequential.html", "category": "pytorch docs"} {"text": "torch.ones\ntorch.ones(size, , out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nReturns a tensor filled with the scalar value 1, with the shape\n defined by the variable argument \"size\".\nParameters:\n size (int...) -- a sequence of integers defining the\n shape of the output tensor. Can be a variable number of\n arguments or a collection like a list or tuple.\nKeyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n", "source": "https://pytorch.org/docs/stable/generated/torch.ones.html", "category": "pytorch docs"} {"text": "for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nExample:\n >>> torch.ones(2, 3)\n tensor([[ 1., 1., 1.],\n [ 1., 1., 1.]])\n\n >>> torch.ones(5)\n tensor([ 1., 1., 1., 1., 1.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.ones.html", "category": "pytorch docs"} {"text": "torch.arcsin\ntorch.arcsin(input, *, out=None) -> Tensor\nAlias for \"torch.asin()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arcsin.html", "category": "pytorch docs"} {"text": "torch.mean\ntorch.mean(input, *, dtype=None) -> Tensor\nReturns the mean value of all elements in the \"input\" tensor.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\nExample:\n >>> a = torch.randn(1, 3)\n >>> a\n tensor([[ 0.2294, -0.5481, 1.3288]])\n >>> torch.mean(a)\n tensor(0.3367)\n\ntorch.mean(input, dim, keepdim=False, *, dtype=None, out=None) -> Tensor\nReturns the mean value of each row of the \"input\" tensor in the\n given dimension \"dim\". If \"dim\" is a list of dimensions, reduce\n over all of them.\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.", "source": "https://pytorch.org/docs/stable/generated/torch.mean.html", "category": "pytorch docs"} {"text": "Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints*) -- the dimension or\n dimensions to reduce.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None.\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nSee also:\n \"torch.nanmean()\" computes the mean value of *non-NaN* elements.\n\nExample:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[-0.3841, 0.6320, 0.4254, -0.7384],\n [-0.9644, 1.0131, -0.6549, -1.4279],\n", "source": "https://pytorch.org/docs/stable/generated/torch.mean.html", "category": "pytorch docs"} {"text": "[-0.2951, -1.3350, -0.7694, 0.5600],\n [ 1.0842, -0.9580, 0.3623, 0.2343]])\n >>> torch.mean(a, 1)\n tensor([-0.0163, -0.5085, -0.4599, 0.1807])\n >>> torch.mean(a, 1, True)\n tensor([[-0.0163],\n [-0.5085],\n [-0.4599],\n [ 0.1807]])", "source": "https://pytorch.org/docs/stable/generated/torch.mean.html", "category": "pytorch docs"} {"text": "torch.fft.fft\ntorch.fft.fft(input, n=None, dim=- 1, norm=None, *, out=None) -> Tensor\nComputes the one dimensional discrete Fourier transform of \"input\".\nNote:\n The Fourier domain representation of any real signal satisfies\n the Hermitian property: *X[i] = conj(X[-i])*. This function\n always returns both the positive and negative frequency terms\n even though, for real inputs, the negative frequencies are\n redundant. \"rfft()\" returns the more compact one-sided\n representation where only the positive frequencies are returned.\n\nNote:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. 
However it only supports powers of 2 signal\n length in every transformed dimension.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **n** (*int**, **optional*) -- Signal length. If given, the\n input will either be zero-padded or trimmed to this length\n before computing the FFT.\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fft.html", "category": "pytorch docs"} {"text": "before computing the FFT.\n * **dim** (*int**, **optional*) -- The dimension along which to\n take the one dimensional FFT.\n\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the forward transform (\"fft()\"), these\n correspond to:\n\n * \"\"forward\"\" - normalize by \"1/n\"\n\n * \"\"backward\"\" - no normalization\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the FFT\n orthonormal)\n\n Calling the backward transform (\"ifft()\") with the same\n normalization mode will apply an overall normalization of\n \"1/n\" between the two transforms. This is required to make\n \"ifft()\" the exact inverse.\n\n Default is \"\"backward\"\" (no normalization).\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nt = torch.arange(4)\nt\n tensor([0, 1, 2, 3])\ntorch.fft.fft(t)\n tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fft.html", "category": "pytorch docs"} {"text": "tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])\n\n\n\nt = torch.tensor([0.+1.j, 2.+3.j, 4.+5.j, 6.+7.j])\ntorch.fft.fft(t)\n tensor([12.+16.j, -8.+0.j, -4.-4.j, 0.-8.j])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fft.html", "category": "pytorch docs"} {"text": "torch.Tensor.var\nTensor.var(dim=None, *, correction=1, keepdim=False) -> Tensor\nSee \"torch.var()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.var.html", "category": "pytorch docs"} {"text": "torch.erfc\ntorch.erfc(input, *, out=None) -> Tensor\nAlias for \"torch.special.erfc()\".", "source": "https://pytorch.org/docs/stable/generated/torch.erfc.html", "category": "pytorch docs"} {"text": "torch.nn.functional.one_hot\ntorch.nn.functional.one_hot(tensor, num_classes=- 1) -> LongTensor\nTakes LongTensor with index values of shape \"()\" and returns a\n tensor of shape \"(, num_classes)\" that have zeros everywhere\n except where the index of last dimension matches the corresponding\n value of the input tensor, in which case it will be 1.\nSee also One-hot on Wikipedia .\nParameters:\n * tensor (LongTensor) -- class values of any shape.\n * **num_classes** (*int*) -- Total number of classes. 
If set to\n -1, the number of classes will be inferred as one greater than\n the largest class value in the input tensor.\n\nReturns:\n LongTensor that has one more dimension with 1 values at the\n index of last dimension indicated by the input, and 0 everywhere\n else.\n-[ Examples ]-\n\n\n\nF.one_hot(torch.arange(0, 5) % 3)\n tensor([[1, 0, 0],\n [0, 1, 0],\n [0, 0, 1],\n [1, 0, 0],\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.one_hot.html", "category": "pytorch docs"} {"text": "[0, 0, 1],\n [1, 0, 0],\n [0, 1, 0]])\n\n\n\nF.one_hot(torch.arange(0, 5) % 3, num_classes=5)\n tensor([[1, 0, 0, 0, 0],\n [0, 1, 0, 0, 0],\n [0, 0, 1, 0, 0],\n [1, 0, 0, 0, 0],\n [0, 1, 0, 0, 0]])\nF.one_hot(torch.arange(0, 6).view(3,2) % 3)\n tensor([[[1, 0, 0],\n [0, 1, 0]],\n [[0, 0, 1],\n [1, 0, 0]],\n [[0, 1, 0],\n [0, 0, 1]]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.one_hot.html", "category": "pytorch docs"} {"text": "torch.Tensor.tile\nTensor.tile(*reps) -> Tensor\nSee \"torch.tile()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.tile.html", "category": "pytorch docs"} {"text": "torch.Tensor.log2\nTensor.log2() -> Tensor\nSee \"torch.log2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log2.html", "category": "pytorch docs"} {"text": "torch.Tensor.lcm_\nTensor.lcm_(other) -> Tensor\nIn-place version of \"lcm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lcm_.html", "category": "pytorch docs"} {"text": "torch.cholesky_solve\ntorch.cholesky_solve(input, input2, upper=False, *, out=None) -> Tensor\nSolves a linear system of equations with a positive semidefinite\n matrix to be inverted given its Cholesky factor matrix u.\nIf \"upper\" is \"False\", u is and lower triangular and c is\n returned such that:\n c = (u u^T)^{{-1}} b\n\nIf \"upper\" is \"True\" or not provided, u is upper triangular and c\n is returned such that:\n c = (u^T u)^{{-1}} b\n\ntorch.cholesky_solve(b, u) can take in 2D inputs b, u or inputs\n that are batches of 2D matrices. If the inputs are batches, then\n returns batched outputs c\nSupports real-valued and complex-valued inputs. 
For the complex-\n valued inputs the transpose operator above is the conjugate\n transpose.\nParameters:\n * input (Tensor) -- input matrix b of size (*, m, k),\n where * is zero or more batch dimensions\n * **input2** (*Tensor*) -- input matrix u of size (*, m, m),\n", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky_solve.html", "category": "pytorch docs"} {"text": "where * is zero of more batch dimensions composed of upper or\n lower triangular Cholesky factor\n * **upper** (*bool**, **optional*) -- whether to consider the\n Cholesky factor as a lower or upper triangular matrix.\n Default: \"False\".\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor for c\nExample:\n >>> a = torch.randn(3, 3)\n >>> a = torch.mm(a, a.t()) # make symmetric positive definite\n >>> u = torch.linalg.cholesky(a)\n >>> a\n tensor([[ 0.7747, -1.9549, 1.3086],\n [-1.9549, 6.7546, -5.4114],\n [ 1.3086, -5.4114, 4.8733]])\n >>> b = torch.randn(3, 2)\n >>> b\n tensor([[-0.6355, 0.9891],\n [ 0.1974, 1.4706],\n [-0.4115, -0.6225]])\n >>> torch.cholesky_solve(b, u)\n tensor([[ -8.1625, 19.6097],\n [ -5.8398, 14.2387],\n [ -4.3771, 10.4173]])\n >>> torch.mm(a.inverse(), b)\n tensor([[ -8.1626, 19.6097],\n", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky_solve.html", "category": "pytorch docs"} {"text": "tensor([[ -8.1626, 19.6097],\n [ -5.8398, 14.2387],\n [ -4.3771, 10.4173]])", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky_solve.html", "category": "pytorch docs"} {"text": "torch.tensor_split\ntorch.tensor_split(input, indices_or_sections, dim=0) -> List of Tensors\nSplits a tensor into multiple sub-tensors, all of which are views\n of \"input\", along dimension \"dim\" according to the indices or\n number of sections specified by \"indices_or_sections\". This\n function is based on NumPy's \"numpy.array_split()\".\nParameters:\n * input (Tensor) -- the tensor to split\n * **indices_or_sections** (*Tensor**, **int** or **list** or\n **tuple of ints*) --\n\n If \"indices_or_sections\" is an integer \"n\" or a zero\n dimensional long tensor with value \"n\", \"input\" is split into\n \"n\" sections along dimension \"dim\". If \"input\" is divisible by\n \"n\" along dimension \"dim\", each section will be of equal size,\n \"input.size(dim) / n\". If \"input\" is not divisible by \"n\", the\n sizes of the first \"int(input.size(dim) % n)\" sections will\n have size \"int(input.size(dim) / n) + 1\", and the rest will\n", "source": "https://pytorch.org/docs/stable/generated/torch.tensor_split.html", "category": "pytorch docs"} {"text": "have size \"int(input.size(dim) / n)\".\n If \"indices_or_sections\" is a list or tuple of ints, or a one-\n dimensional long tensor, then \"input\" is split along dimension\n \"dim\" at each of the indices in the list, tuple or tensor. For\n instance, \"indices_or_sections=[2, 3]\" and \"dim=0\" would\n result in the tensors \"input[:2]\", \"input[2:3]\", and\n \"input[3:]\".\n\n If \"indices_or_sections\" is a tensor, it must be a zero-\n dimensional or one-dimensional long tensor on the CPU.\n\n * **dim** (*int**, **optional*) -- dimension along which to\n split the tensor. 
Default: \"0\"\n\nExample:\n >>> x = torch.arange(8)\n >>> torch.tensor_split(x, 3)\n (tensor([0, 1, 2]), tensor([3, 4, 5]), tensor([6, 7]))\n\n >>> x = torch.arange(7)\n >>> torch.tensor_split(x, 3)\n (tensor([0, 1, 2]), tensor([3, 4]), tensor([5, 6]))\n >>> torch.tensor_split(x, (1, 6))\n (tensor([0]), tensor([1, 2, 3, 4, 5]), tensor([6]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.tensor_split.html", "category": "pytorch docs"} {"text": "\n\n\nx = torch.arange(14).reshape(2, 7)\n >>> x\n tensor([[ 0, 1, 2, 3, 4, 5, 6],\n [ 7, 8, 9, 10, 11, 12, 13]])\n >>> torch.tensor_split(x, 3, dim=1)\n (tensor([[0, 1, 2],\n [7, 8, 9]]),\n tensor([[ 3, 4],\n [10, 11]]),\n tensor([[ 5, 6],\n [12, 13]]))\n >>> torch.tensor_split(x, (1, 6), dim=1)\n (tensor([[0],\n [7]]),\n tensor([[ 1, 2, 3, 4, 5],\n [ 8, 9, 10, 11, 12]]),\n tensor([[ 6],\n [13]]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.tensor_split.html", "category": "pytorch docs"} {"text": "torch.fft.rfftn\ntorch.fft.rfftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor\nComputes the N-dimensional discrete Fourier transform of real\n \"input\".\nThe FFT of a real signal is Hermitian-symmetric, \"X[i_1, ..., i_n]\n = conj(X[-i_1, ..., -i_n])\" so the full \"fftn()\" output contains\n redundant information. \"rfftn()\" instead omits the negative\n frequencies in the last dimension.\nNote:\n Supports torch.half on CUDA with GPU Architecture SM53 or\n greater. However it only supports powers of 2 signal length in\n every transformed dimensions.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the real FFT. If a length \"-1\" is specified, no\n padding is done in that dimension. Default: \"s =\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfftn.html", "category": "pytorch docs"} {"text": "[input.size(d) for d in dim]\"\n * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be\n transformed. Default: all dimensions, or the last \"len(s)\"\n dimensions if \"s\" is given.\n\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the forward transform (\"rfftn()\"),\n these correspond to:\n\n * \"\"forward\"\" - normalize by \"1/n\"\n\n * \"\"backward\"\" - no normalization\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the real FFT\n orthonormal)\n\n Where \"n = prod(s)\" is the logical FFT size. Calling the\n backward transform (\"irfftn()\") with the same normalization\n mode will apply an overall normalization of \"1/n\" between the\n two transforms. 
This is required to make \"irfftn()\" the exact\n inverse.\n\n Default is \"\"backward\"\" (no normalization).\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nt = torch.rand(10, 10)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfftn.html", "category": "pytorch docs"} {"text": "-[ Example ]-\n\n\n\nt = torch.rand(10, 10)\nrfftn = torch.fft.rfftn(t)\nrfftn.size()\n torch.Size([10, 6])\n\n\n\nCompared against the full output from \"fftn()\", we have all\n elements up to the Nyquist frequency.\n\n\n\nfftn = torch.fft.fftn(t)\ntorch.testing.assert_close(fftn[..., :6], rfftn, check_stride=False)\n\n\n\nThe discrete Fourier transform is separable, so \"rfftn()\" here is\n equivalent to a combination of \"fft()\" and \"rfft()\":\n\n\n\ntwo_ffts = torch.fft.fft(torch.fft.rfft(t, dim=1), dim=0)\ntorch.testing.assert_close(rfftn, two_ffts, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.rfftn.html", "category": "pytorch docs"} {"text": "torch.randperm\ntorch.randperm(n, *, generator=None, out=None, dtype=torch.int64, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) -> Tensor\nReturns a random permutation of integers from \"0\" to \"n - 1\".\nParameters:\n n (int) -- the upper bound (exclusive)\nKeyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: \"torch.int64\".\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n", "source": "https://pytorch.org/docs/stable/generated/torch.randperm.html", "category": "pytorch docs"} {"text": "for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **pin_memory** (*bool**, **optional*) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. 
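A brief sketch showing "torch.randperm()" with an explicit generator and a common use, shuffling the rows of a tensor; the seed and shapes are arbitrary:

    import torch

    g = torch.Generator().manual_seed(0)
    idx = torch.randperm(5, generator=g)    # reproducible permutation of 0..4

    x = torch.arange(10).reshape(5, 2)
    shuffled = x[idx]                       # rows of x in permuted order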
Default: \"False\".\n\nExample:\n >>> torch.randperm(4)\n tensor([2, 1, 0, 3])\n", "source": "https://pytorch.org/docs/stable/generated/torch.randperm.html", "category": "pytorch docs"} {"text": "torch.nn.functional.tanhshrink\ntorch.nn.functional.tanhshrink(input) -> Tensor\nApplies element-wise, \\text{Tanhshrink}(x) = x - \\text{Tanh}(x)\nSee \"Tanhshrink\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.tanhshrink.html", "category": "pytorch docs"} {"text": "torch.func.replace_all_batch_norm_modules_\ntorch.func.replace_all_batch_norm_modules_(root)\nIn place updates \"root\" by setting the \"running_mean\" and\n \"running_var\" to be None and setting track_running_stats to be\n False for any nn.BatchNorm module in \"root\"\nReturn type:\n Module", "source": "https://pytorch.org/docs/stable/generated/torch.func.replace_all_batch_norm_modules_.html", "category": "pytorch docs"} {"text": "hardswish\nclass torch.ao.nn.quantized.functional.hardswish(input, scale, zero_point)\nThis is the quantized version of \"hardswish()\".\nParameters:\n * input (Tensor) -- quantized input\n * **scale** (*float*) -- quantization scale of the output tensor\n\n * **zero_point** (*int*) -- quantization zero point of the\n output tensor\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.hardswish.html", "category": "pytorch docs"} {"text": "Transformer\nclass torch.nn.Transformer(d_model=512, nhead=8, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1, activation=, custom_encoder=None, custom_decoder=None, layer_norm_eps=1e-05, batch_first=False, norm_first=False, device=None, dtype=None)\nA transformer model. User is able to modify the attributes as\n needed. The architecture is based on the paper \"Attention Is All\n You Need\". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob\n Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia\n Polosukhin. 2017. Attention is all you need. In Advances in Neural\n Information Processing Systems, pages 6000-6010.\nParameters:\n * d_model (int) -- the number of expected features in the\n encoder/decoder inputs (default=512).\n * **nhead** (*int*) -- the number of heads in the\n multiheadattention models (default=8).\n\n * **num_encoder_layers** (*int*) -- the number of sub-encoder-\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"} {"text": "layers in the encoder (default=6).\n * **num_decoder_layers** (*int*) -- the number of sub-decoder-\n layers in the decoder (default=6).\n\n * **dim_feedforward** (*int*) -- the dimension of the\n feedforward network model (default=2048).\n\n * **dropout** (*float*) -- the dropout value (default=0.1).\n\n * **activation** (*Union**[**str**,\n **Callable**[**[**Tensor**]**, **Tensor**]**]*) -- the\n activation function of encoder/decoder intermediate layer, can\n be a string (\"relu\" or \"gelu\") or a unary callable. 
Default:\n relu\n\n * **custom_encoder** (*Optional**[**Any**]*) -- custom encoder\n (default=None).\n\n * **custom_decoder** (*Optional**[**Any**]*) -- custom decoder\n (default=None).\n\n * **layer_norm_eps** (*float*) -- the eps value in layer\n normalization components (default=1e-5).\n\n * **batch_first** (*bool*) -- If \"True\", then the input and\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"} {"text": "output tensors are provided as (batch, seq, feature). Default:\n \"False\" (seq, batch, feature).\n * **norm_first** (*bool*) -- if \"True\", encoder and decoder\n layers will perform LayerNorms before other attention and\n feedforward operations, otherwise after. Default: \"False\"\n (after).\n\nExamples::\n >>> transformer_model = nn.Transformer(nhead=16, num_encoder_layers=12)\n >>> src = torch.rand((10, 32, 512))\n >>> tgt = torch.rand((20, 32, 512))\n >>> out = transformer_model(src, tgt)\nNote: A full example to apply nn.Transformer module for the word\n language model is available in\n https://github.com/pytorch/examples/tree/master/word_language_model\nforward(src, tgt, src_mask=None, tgt_mask=None, memory_mask=None, src_key_padding_mask=None, tgt_key_padding_mask=None, memory_key_padding_mask=None)\n Take in and process masked source/target sequences.\n\n Parameters:\n * **src** (*Tensor*) -- the sequence to the encoder\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"} {"text": "(required).\n * **tgt** (*Tensor*) -- the sequence to the decoder\n (required).\n\n * **src_mask** (*Optional**[**Tensor**]*) -- the additive\n mask for the src sequence (optional).\n\n * **tgt_mask** (*Optional**[**Tensor**]*) -- the additive\n mask for the tgt sequence (optional).\n\n * **memory_mask** (*Optional**[**Tensor**]*) -- the additive\n mask for the encoder output (optional).\n\n * **src_key_padding_mask** (*Optional**[**Tensor**]*) -- the\n ByteTensor mask for src keys per batch (optional).\n\n * **tgt_key_padding_mask** (*Optional**[**Tensor**]*) -- the\n ByteTensor mask for tgt keys per batch (optional).\n\n * **memory_key_padding_mask** (*Optional**[**Tensor**]*) --\n the ByteTensor mask for memory keys per batch (optional).\n\n Return type:\n *Tensor*\n\n Shape:\n * src: (S, E) for unbatched input, (S, N, E) if\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"} {"text": "batch_first=False or (N, S, E) if batch_first=True.\n * tgt: (T, E) for unbatched input, (T, N, E) if\n *batch_first=False* or *(N, T, E)* if *batch_first=True*.\n\n * src_mask: (S, S) or (N\\cdot\\text{num\\_heads}, S, S).\n\n * tgt_mask: (T, T) or (N\\cdot\\text{num\\_heads}, T, T).\n\n * memory_mask: (T, S).\n\n * src_key_padding_mask: (S) for unbatched input otherwise (N,\n S).\n\n * tgt_key_padding_mask: (T) for unbatched input otherwise (N,\n T).\n\n * memory_key_padding_mask: (S) for unbatched input otherwise\n (N, S).\n\n Note: [src/tgt/memory]_mask ensures that position i is\n allowed to attend the unmasked positions. If a ByteTensor is\n provided, the non-zero positions are not allowed to attend\n while the zero positions will be unchanged. If a BoolTensor\n is provided, positions with \"True\" are not allowed to attend\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"} {"text": "while \"False\" values will be unchanged. 
If a FloatTensor is\n provided, it will be added to the attention weight.\n [src/tgt/memory]_key_padding_mask provides specified elements\n in the key to be ignored by the attention. If a ByteTensor is\n provided, the non-zero positions will be ignored while the\n zero positions will be unchanged. If a BoolTensor is\n provided, the positions with the value of \"True\" will be\n ignored while the position with the value of \"False\" will be\n unchanged.\n * output: (T, E) for unbatched input, (T, N, E) if\n *batch_first=False* or *(N, T, E)* if *batch_first=True*.\n\n Note: Due to the multi-head attention architecture in the\n transformer model, the output sequence length of a\n transformer is same as the input sequence (i.e. target)\n length of the decoder.\n\n where S is the source sequence length, T is the target\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"} {"text": "sequence length, N is the batch size, E is the feature number\n -[ Examples ]-\n\n >>> output = transformer_model(src, tgt, src_mask=src_mask, tgt_mask=tgt_mask)\n\nstatic generate_square_subsequent_mask(sz, device='cpu')\n Generate a square mask for the sequence. The masked positions\n are filled with float('-inf'). Unmasked positions are filled\n with float(0.0).\n\n Return type:\n *Tensor*\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Transformer.html", "category": "pytorch docs"} {"text": "default_placeholder_observer\ntorch.quantization.observer.default_placeholder_observer\nalias of \"PlaceholderObserver\"", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_placeholder_observer.html", "category": "pytorch docs"} {"text": "torch.sparse_csr_tensor\ntorch.sparse_csr_tensor(crow_indices, col_indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor\nConstructs a sparse tensor in CSR (Compressed Sparse Row) with\n specified values at the given \"crow_indices\" and \"col_indices\".\n Sparse matrix multiplication operations in CSR format are typically\n faster than that for sparse tensors in COO format. Make you have a\n look at the note on the data type of the indices.\nNote:\n If the \"device\" argument is not specified the device of the given\n \"values\" and indices tensor(s) must match. If, however, the\n argument is specified the input Tensors will be converted to the\n given device and in turn determine the device of the constructed\n sparse tensor.\n\nParameters:\n * crow_indices (array_like) -- (B+1)-dimensional array of\n size \"(*batchsize, nrows + 1)\". The last element of each", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csr_tensor.html", "category": "pytorch docs"} {"text": "batch is the number of non-zeros. This tensor encodes the\n index in values and col_indices depending on where the given\n row starts. Each successive number in the tensor subtracted by\n the number before it denotes the number of elements in a given\n row.\n * **col_indices** (*array_like*) -- Column co-ordinates of each\n element in values. 
(B+1)-dimensional tensor with the same\n length as values.\n\n * **values** (*array_list*) -- Initial values for the tensor.\n Can be a list, tuple, NumPy \"ndarray\", scalar, and other types\n that represents a (1+K)-dimensional tensor where \"K\" is the\n number of dense dimensions.\n\n * **size** (list, tuple, \"torch.Size\", optional) -- Size of the\n sparse tensor: \"(*batchsize, nrows, ncols, *densesize)\". If\n not provided, the size will be inferred as the minimum size\n big enough to hold all non-zero elements.\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csr_tensor.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if None, infers data type from\n \"values\".\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **check_invariants** (*bool**, **optional*) -- If sparse\n tensor invariants are checked. Default: as returned by\n \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\",\n initially False.\n\nExample::\n >>> crow_indices = [0, 2, 4]\n >>> col_indices = [0, 1, 0, 1]\n >>> values = [1, 2, 3, 4]", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csr_tensor.html", "category": "pytorch docs"} {"text": "\n\n\nvalues = [1, 2, 3, 4]\n >>> torch.sparse_csr_tensor(torch.tensor(crow_indices, dtype=torch.int64),\n ... torch.tensor(col_indices, dtype=torch.int64),\n ... 
torch.tensor(values), dtype=torch.double)\n tensor(crow_indices=tensor([0, 2, 4]),\n col_indices=tensor([0, 1, 0, 1]),\n values=tensor([1., 2., 3., 4.]), size=(2, 2), nnz=4,\n dtype=torch.float64, layout=torch.sparse_csr)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_csr_tensor.html", "category": "pytorch docs"} {"text": "torch.column_stack\ntorch.column_stack(tensors, *, out=None) -> Tensor\nCreates a new tensor by horizontally stacking the tensors in\n \"tensors\".\nEquivalent to \"torch.hstack(tensors)\", except each zero or one\n dimensional tensor \"t\" in \"tensors\" is first reshaped into a\n \"(t.numel(), 1)\" column before being stacked horizontally.\nParameters:\n tensors (sequence of Tensors) -- sequence of tensors to\n concatenate\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([1, 2, 3])\n >>> b = torch.tensor([4, 5, 6])\n >>> torch.column_stack((a, b))\n tensor([[1, 4],\n [2, 5],\n [3, 6]])\n >>> a = torch.arange(5)\n >>> b = torch.arange(10).reshape(5, 2)\n >>> torch.column_stack((a, b, b))\n tensor([[0, 0, 1, 0, 1],\n [1, 2, 3, 2, 3],\n [2, 4, 5, 4, 5],\n [3, 6, 7, 6, 7],\n [4, 8, 9, 8, 9]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.column_stack.html", "category": "pytorch docs"} {"text": "torch.index_reduce\ntorch.index_reduce(input, dim, index, source, reduce, *, include_self=True, out=None) -> Tensor\nSee \"index_reduce_()\" for function description.", "source": "https://pytorch.org/docs/stable/generated/torch.index_reduce.html", "category": "pytorch docs"} {"text": "torch.Tensor.neg\nTensor.neg() -> Tensor\nSee \"torch.neg()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.neg.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_complex\nTensor.is_complex() -> bool\nReturns True if the data type of \"self\" is a complex data type.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_complex.html", "category": "pytorch docs"} {"text": "torch.nn.utils.parametrizations.orthogonal\ntorch.nn.utils.parametrizations.orthogonal(module, name='weight', orthogonal_map=None, *, use_trivialization=True)\nApplies an orthogonal or unitary parametrization to a matrix or a\n batch of matrices.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the parametrized\n matrix Q \\in \\mathbb{K}^{m \\times n} is orthogonal as\n \\begin{align*} Q^{\\text{H}}Q &= \\mathrm{I}_n\n \\mathrlap{\\qquad \\text{if }m \\geq n}\\\\ QQ^{\\text{H}} &=\n \\mathrm{I}_m \\mathrlap{\\qquad \\text{if }m < n} \\end{align*}\n\nwhere Q^{\\text{H}} is the conjugate transpose when Q is complex and\n the transpose when Q is real-valued, and \\mathrm{I}_n is the\n n-dimensional identity matrix. 
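For "torch.index_reduce()" listed above, a small sketch using the "prod" reduction; the values are illustrative, and "index_reduce_()" has the full description:

    import torch

    x = torch.ones(3)
    src = torch.tensor([2.0, 3.0, 4.0, 5.0])
    index = torch.tensor([0, 0, 1, 2])
    out = torch.index_reduce(x, 0, index, src, "prod")   # include_self=True by default
    print(out)   # tensor([6., 4., 5.])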
In plain words, Q will have\n orthonormal columns whenever m \\geq n and orthonormal rows\n otherwise.\nIf the tensor has more than two dimensions, we consider it as a\n batch of matrices of shape (..., m, n).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html", "category": "pytorch docs"} {"text": "batch of matrices of shape (..., m, n).\nThe matrix Q may be parametrized via three different\n \"orthogonal_map\" in terms of the original tensor:\n\n\n\"\"matrix_exp\"\"/\"\"cayley\"\": the \"matrix_exp()\" Q = \\exp(A) and the\n Cayley map Q = (\\mathrm{I}_n + A/2)(\\mathrm{I}_n - A/2)^{-1} are\n applied to a skew-symmetric A to give an orthogonal matrix.\n\n\n\"\"householder\"\": computes a product of Householder reflectors\n (\"householder_product()\").\n\n\n\"\"matrix_exp\"\"/\"\"cayley\"\" often make the parametrized weight\n converge faster than \"\"householder\"\", but they are slower to\n compute for very thin or very wide matrices.\nIf \"use_trivialization=True\" (default), the parametrization\n implements the \"Dynamic Trivialization Framework\", where an extra\n matrix B \\in \\mathbb{K}^{n \\times n} is stored under\n \"module.parametrizations.weight[0].base\". This helps the\n convergence of the parametrized layer at the expense of some extra", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html", "category": "pytorch docs"} {"text": "memory use. See Trivializations for Gradient-Based Optimization on\n Manifolds .\nInitial value of Q: If the original tensor is not parametrized and\n \"use_trivialization=True\" (default), the initial value of Q is that\n of the original tensor if it is orthogonal (or unitary in the\n complex case) and it is orthogonalized via the QR decomposition\n otherwise (see \"torch.linalg.qr()\"). Same happens when it is not\n parametrized and \"orthogonal_map=\"householder\"\" even when\n \"use_trivialization=False\". Otherwise, the initial value is the\n result of the composition of all the registered parametrizations\n applied to the original tensor.\nNote:\n This function is implemented using the parametrization\n functionality in \"register_parametrization()\".\n\nParameters:\n * module (nn.Module) -- module on which to register the\n parametrization.\n * **name** (*str**, **optional*) -- name of the tensor to make\n orthogonal. Default: \"\"weight\"\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html", "category": "pytorch docs"} {"text": "orthogonal. Default: \"\"weight\"\".\n * **orthogonal_map** (*str**, **optional*) -- One of the\n following: \"\"matrix_exp\"\", \"\"cayley\"\", \"\"householder\"\".\n Default: \"\"matrix_exp\"\" if the matrix is square or complex,\n \"\"householder\"\" otherwise.\n\n * **use_trivialization** (*bool**, **optional*) -- whether to\n use the dynamic trivialization framework. 
Default: \"True\".\n\nReturns:\n The original module with an orthogonal parametrization\n registered to the specified weight\nReturn type:\n Module\nExample:\n >>> orth_linear = orthogonal(nn.Linear(20, 40))\n >>> orth_linear\n ParametrizedLinear(\n in_features=20, out_features=40, bias=True\n (parametrizations): ModuleDict(\n (weight): ParametrizationList(\n (0): _Orthogonal()\n )\n )\n )\n >>> Q = orth_linear.weight\n >>> torch.dist(Q.T @ Q, torch.eye(20))\n tensor(4.9332e-07)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.orthogonal.html", "category": "pytorch docs"} {"text": "torch.Tensor.any\nTensor.any(dim=None, keepdim=False) -> Tensor\nSee \"torch.any()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.any.html", "category": "pytorch docs"} {"text": "torch.Tensor.dist\nTensor.dist(other, p=2) -> Tensor\nSee \"torch.dist()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.dist.html", "category": "pytorch docs"} {"text": "torch.cuda.device_count\ntorch.cuda.device_count()\nReturns the number of GPUs available.\nReturn type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.device_count.html", "category": "pytorch docs"} {"text": "SyncBatchNorm\nclass torch.nn.SyncBatchNorm(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, process_group=None, device=None, dtype=None)\nApplies Batch Normalization over a N-Dimensional input (a mini-\n batch of [N-2]D inputs with additional channel dimension) as\n described in the paper Batch Normalization: Accelerating Deep\n Network Training by Reducing Internal Covariate Shift .\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n\nThe mean and standard-deviation are calculated per-dimension over\n all mini-batches of the same process groups. \\gamma and \\beta are\n learnable parameter vectors of size C (where C is the input\n size). By default, the elements of \\gamma are sampled from\n \\mathcal{U}(0, 1) and the elements of \\beta are set to 0. The\n standard-deviation is calculated via the biased estimator,\n equivalent to torch.var(input, unbiased=False).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"} {"text": "Also by default, during training this layer keeps running estimates\n of its computed mean and variance, which are then used for\n normalization during evaluation. The running estimates are kept\n with a default \"momentum\" of 0.1.\nIf \"track_running_stats\" is set to \"False\", this layer then does\n not keep running estimates, and batch statistics are instead used\n during evaluation time as well.\nNote:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. 
Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n\nBecause the Batch Normalization is done for each channel in the \"C\"\n dimension, computing statistics on \"(N, +)\" slices, it's common\n terminology to call this Volumetric Batch Normalization or Spatio-", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"} {"text": "temporal Batch Normalization.\nCurrently \"SyncBatchNorm\" only supports \"DistributedDataParallel\"\n (DDP) with single GPU per process. Use\n \"torch.nn.SyncBatchNorm.convert_sync_batchnorm()\" to convert\n \"BatchNorm*D\" layer to \"SyncBatchNorm\" before wrapping Network with\n DDP.\nParameters:\n * num_features (int) -- C from an expected input of size\n (N, C, +)\n * **eps** (*float*) -- a value added to the denominator for\n numerical stability. Default: \"1e-5\"\n\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"\n\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"} {"text": "variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics. in both\n training and eval modes. Default: \"True\"\n * **process_group** (*Optional**[**Any**]*) -- synchronization\n of stats happen within each process group individually.\n Default behavior is synchronization across the whole world\n\nShape:\n * Input: (N, C, +)\n * Output: (N, C, +) (same shape as input)\n\nNote:\n Synchronization of batchnorm statistics occurs only while\n training, i.e. 
synchronization is disabled when \"model.eval()\" is\n set or if \"self.training\" is otherwise \"False\".\n\nExamples:\n >>> # With Learnable Parameters\n >>> m = nn.SyncBatchNorm(100)\n >>> # creating process group (optional)\n >>> # ranks is a list of int identifying rank ids.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"} {"text": "\n\n\nranks = list(range(8))\n >>> r1, r2 = ranks[:4], ranks[4:]\n >>> # Note: every rank calls into new_group for every\n >>> # process group created, even if that rank is not\n >>> # part of the group.\n >>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]]\n >>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1]\n >>> # Without Learnable Parameters\n >>> m = nn.BatchNorm3d(100, affine=False, process_group=process_group)\n >>> input = torch.randn(20, 100, 35, 45, 10)\n >>> output = m(input)\n\n\n\n >>> # network is nn.BatchNorm layer\n >>> sync_bn_network = nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group)\n >>> # only single gpu per process is currently supported\n >>> ddp_sync_bn_network = torch.nn.parallel.DistributedDataParallel(\n >>> sync_bn_network,\n >>> device_ids=[args.local_rank],\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"} {"text": "\n\n\n output_device=args.local_rank)\n\n\n\n\nclassmethod convert_sync_batchnorm(module, process_group=None)\n Helper function to convert all \"BatchNorm*D\" layers in the model\n to \"torch.nn.SyncBatchNorm\" layers.\n\n Parameters:\n * **module** (*nn.Module*) -- module containing one or more\n \"BatchNorm*D\" layers\n\n * **process_group** (*optional*) -- process group to scope\n synchronization, default is the whole world\n\n Returns:\n The original \"module\" with the converted\n \"torch.nn.SyncBatchNorm\" layers. If the original \"module\" is\n a \"BatchNorm*D\" layer, a new \"torch.nn.SyncBatchNorm\" layer\n object will be returned instead.\n\n Example:\n\n >>> # Network with nn.BatchNorm layer\n >>> module = torch.nn.Sequential(\n >>> torch.nn.Linear(20, 100),\n >>> torch.nn.BatchNorm1d(100),\n >>> ).cuda()\n >>> # creating process group (optional)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"} {"text": "\n\n\ncreating process group (optional)\n >>> # ranks is a list of int identifying rank ids.\n >>> ranks = list(range(8))\n >>> r1, r2 = ranks[:4], ranks[4:]\n >>> # Note: every rank calls into new_group for every\n >>> # process group created, even if that rank is not\n >>> # part of the group.\n >>> process_groups = [torch.distributed.new_group(pids) for pids in [r1, r2]]\n >>> process_group = process_groups[0 if dist.get_rank() <= 3 else 1]\n >>> sync_bn_module = torch.nn.SyncBatchNorm.convert_sync_batchnorm(module, process_group)\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SyncBatchNorm.html", "category": "pytorch docs"} {"text": "LambdaLR\nclass torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda, last_epoch=- 1, verbose=False)\nSets the learning rate of each parameter group to the initial lr\n times a given function. 
When last_epoch=-1, sets initial lr as lr.\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **lr_lambda** (*function** or **list*) -- A function which\n computes a multiplicative factor given an integer parameter\n epoch, or a list of such functions, one for each group in\n optimizer.param_groups.\n\n * **last_epoch** (*int*) -- The index of last epoch. Default:\n -1.\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n\n-[ Example ]-\n\n\n\nAssuming optimizer has two groups.\nlambda1 = lambda epoch: epoch // 30\nlambda2 = lambda epoch: 0.95 ** epoch\nscheduler = LambdaLR(optimizer, lr_lambda=[lambda1, lambda2])\nfor epoch in range(100):\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LambdaLR.html", "category": "pytorch docs"} {"text": "\n\n\nfor epoch in range(100):\n train(...)\n validate(...)\n scheduler.step()\n\n\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)\n Loads the schedulers state.\n\n When saving or loading the scheduler, please make sure to also\n save or load the state of the optimizer.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer. The learning rate lambda functions will\n only be saved if they are callable objects and not if they are\n functions or lambdas.\n\n When saving or loading the scheduler, please make sure to also\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LambdaLR.html", "category": "pytorch docs"} {"text": "save or load the state of the optimizer.", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.LambdaLR.html", "category": "pytorch docs"} {"text": "torch.eye\ntorch.eye(n, m=None, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nReturns a 2-D tensor with ones on the diagonal and zeros elsewhere.\nParameters:\n * n (int) -- the number of rows\n * **m** (*int**, **optional*) -- the number of columns with\n default being \"n\"\n\nKeyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n", "source": "https://pytorch.org/docs/stable/generated/torch.eye.html", "category": "pytorch docs"} {"text": "for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nReturns:\n A 2-D tensor with ones on the diagonal and zeros elsewhere\nReturn type:\n Tensor\nExample:\n >>> torch.eye(3)\n tensor([[ 1., 0., 0.],\n [ 0., 1., 0.],\n [ 0., 0., 1.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.eye.html", "category": "pytorch docs"} {"text": "Adagrad\nclass torch.optim.Adagrad(params, lr=0.01, lr_decay=0, weight_decay=0, initial_accumulator_value=0, eps=1e-10, foreach=None, *, maximize=False, differentiable=False)\nImplements Adagrad algorithm.\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{input} : \\gamma \\text{ (lr)}, \\: \\theta_0\n \\text{ (params)}, \\: f(\\theta) \\text{ (objective)}, \\:\n \\lambda \\text{ (weight decay)}, \\\\\n &\\hspace{12mm} \\tau \\text{ (initial accumulator value)}, \\:\n \\eta\\text{ (lr decay)}\\\\ &\\textbf{initialize} :\n state\\_sum_0 \\leftarrow 0 \\\\[-1.ex]\n &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\\\ &\\hspace{5mm}g_t\n \\leftarrow \\nabla_{\\theta} f_t (\\theta_{t-1}) \\\\\n &\\hspace{5mm} \\tilde{\\gamma} \\leftarrow \\gamma / (1 +(t-1)\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"} {"text": "\\eta) \\ &\\hspace{5mm} \\textbf{if} \\:\n \\lambda \\neq 0 \\\n &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda \\theta_{t-1}\n \\ &\\hspace{5mm}state_sum_t \\leftarrow state_sum_{t-1}\n + g^2_t \\ &\\hspace{5mm}\\theta_t\n \\leftarrow \\theta_{t-1}- \\tilde{\\gamma}\n \\frac{g_t}{\\sqrt{state_sum_t}+\\epsilon} \\\n &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\nFor further details regarding the algorithm we refer to Adaptive\n Subgradient Methods for Online Learning and Stochastic\n Optimization.\nParameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * **lr** (*float**, **optional*) -- learning rate (default:\n 1e-2)\n\n * **lr_decay** (*float**, **optional*) -- learning rate decay\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"} {"text": "(default: 0)\n * **weight_decay** (*float**, **optional*) -- weight decay (L2\n penalty) (default: 0)\n\n * **eps** (*float**, **optional*) -- term added to the\n denominator to improve numerical stability (default: 1e-10)\n\n * **foreach** (*bool**, **optional*) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n\n * **maximize** (*bool**, **optional*) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n\n * **differentiable** (*bool**, **optional*) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. 
Setting to True can impair performance, so leave it\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"} {"text": "False if you don't intend to run autograd through this\n instance (default: False)\nadd_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n **state_dict** (*dict*) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n\nregister_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None\n\n The \"optimizer\" argument is the optimizer instance being used.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"} {"text": "Parameters:\n hook (Callable) -- The user defined hook to be\n registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"} {"text": "\"handle.remove()\"\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n **set_to_none** (*bool*) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"} {"text": "differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. 
\"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adagrad.html", "category": "pytorch docs"} {"text": "torch.nn.functional.tanh\ntorch.nn.functional.tanh(input) -> Tensor\nApplies element-wise, \\text{Tanh}(x) = \\tanh(x) = \\frac{\\exp(x) -\n \\exp(-x)}{\\exp(x) + \\exp(-x)}\nSee \"Tanh\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.tanh.html", "category": "pytorch docs"} {"text": "torch.Tensor.cholesky_inverse\nTensor.cholesky_inverse(upper=False) -> Tensor\nSee \"torch.cholesky_inverse()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cholesky_inverse.html", "category": "pytorch docs"} {"text": "torch.Tensor.new_empty\nTensor.new_empty(size, *, dtype=None, device=None, requires_grad=False, layout=torch.strided, pin_memory=False) -> Tensor\nReturns a Tensor of size \"size\" filled with uninitialized data. By\n default, the returned Tensor has the same \"torch.dtype\" and\n \"torch.device\" as this tensor.\nParameters:\n size (int...) -- a list, tuple, or \"torch.Size\" of\n integers defining the shape of the output tensor.\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired type of\n returned tensor. Default: if None, same \"torch.dtype\" as this\n tensor.\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, same \"torch.device\" as this\n tensor.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_empty.html", "category": "pytorch docs"} {"text": "returned Tensor. Default: \"torch.strided\".\n * **pin_memory** (*bool**, **optional*) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. 
Default: \"False\".\n\nExample:\n >>> tensor = torch.ones(())\n >>> tensor.new_empty((2, 3))\n tensor([[ 5.8182e-18, 4.5765e-41, -1.0545e+30],\n [ 3.0949e-41, 4.4842e-44, 0.0000e+00]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.new_empty.html", "category": "pytorch docs"} {"text": "default_debug_observer\ntorch.quantization.observer.default_debug_observer\nalias of \"RecordingObserver\"", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_debug_observer.html", "category": "pytorch docs"} {"text": "default_per_channel_weight_observer\ntorch.quantization.observer.default_per_channel_weight_observer\nalias of functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_channel_symmetric){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.default_per_channel_weight_observer.html", "category": "pytorch docs"} {"text": "torch.Tensor.transpose\nTensor.transpose(dim0, dim1) -> Tensor\nSee \"torch.transpose()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.transpose.html", "category": "pytorch docs"} {"text": "InstanceNorm2d\nclass torch.ao.nn.quantized.InstanceNorm2d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)\nThis is the quantized version of \"InstanceNorm2d\".\nAdditional args:\n * scale - quantization scale of the output, type: double.\n * **zero_point** - quantization zero point of the output, type:\n long.\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.InstanceNorm2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.lerp\nTensor.lerp(end, weight) -> Tensor\nSee \"torch.lerp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lerp.html", "category": "pytorch docs"} {"text": "torch.Tensor.div_\nTensor.div_(value, *, rounding_mode=None) -> Tensor\nIn-place version of \"div()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.div_.html", "category": "pytorch docs"} {"text": "torch.diag\ntorch.diag(input, diagonal=0, *, out=None) -> Tensor\n\n\nIf \"input\" is a vector (1-D tensor), then returns a 2-D square\n tensor with the elements of \"input\" as the diagonal.\n\n\nIf \"input\" is a matrix (2-D tensor), then returns a 1-D tensor\n with the diagonal elements of \"input\".\n\n\nThe argument \"diagonal\" controls which diagonal to consider:\n\n\nIf \"diagonal\" = 0, it is the main diagonal.\n\n\nIf \"diagonal\" > 0, it is above the main diagonal.\n\n\nIf \"diagonal\" < 0, it is below the main diagonal.\n\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **diagonal** (*int**, **optional*) -- the diagonal to consider\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nSee also:\n \"torch.diagonal()\" always returns the diagonal of its input.\n\n \"torch.diagflat()\" always constructs a tensor with diagonal\n elements specified by the input.\n\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.diag.html", "category": "pytorch docs"} {"text": "Examples:\nGet the square matrix where the input vector is the diagonal:\n >>> a = torch.randn(3)\n >>> a\n tensor([ 0.5950,-0.0872, 2.3298])\n >>> torch.diag(a)\n tensor([[ 0.5950, 0.0000, 0.0000],\n [ 0.0000,-0.0872, 0.0000],\n [ 0.0000, 0.0000, 2.3298]])\n >>> torch.diag(a, 1)\n tensor([[ 0.0000, 0.5950, 0.0000, 0.0000],\n [ 0.0000, 0.0000,-0.0872, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 2.3298],\n [ 0.0000, 0.0000, 0.0000, 
0.0000]])\n\nGet the k-th diagonal of a given matrix:\n >>> a = torch.randn(3, 3)\n >>> a\n tensor([[-0.4264, 0.0255,-0.1064],\n [ 0.8795,-0.2429, 0.1374],\n [ 0.1029,-0.6482,-1.6300]])\n >>> torch.diag(a, 0)\n tensor([-0.4264,-0.2429,-1.6300])\n >>> torch.diag(a, 1)\n tensor([ 0.0255, 0.1374])\n", "source": "https://pytorch.org/docs/stable/generated/torch.diag.html", "category": "pytorch docs"} {"text": "torch.nn.functional.multilabel_soft_margin_loss\ntorch.nn.functional.multilabel_soft_margin_loss(input, target, weight=None, size_average=None, reduce=None, reduction='mean') -> Tensor\nSee \"MultiLabelSoftMarginLoss\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.multilabel_soft_margin_loss.html", "category": "pytorch docs"} {"text": "ConvBnReLU1d\nclass torch.ao.nn.intrinsic.ConvBnReLU1d(conv, bn, relu)\nThis is a sequential container which calls the Conv 1d, Batch Norm\n 1d, and ReLU modules. During quantization this will be replaced\n with the corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBnReLU1d.html", "category": "pytorch docs"} {"text": "torch.Tensor.isclose\nTensor.isclose(other, rtol=1e-05, atol=1e-08, equal_nan=False) -> Tensor\nSee \"torch.isclose()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isclose.html", "category": "pytorch docs"} {"text": "torch.fft.hfft2\ntorch.fft.hfft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor\nComputes the 2-dimensional discrete Fourier transform of a\n Hermitian symmetric \"input\" signal. Equivalent to \"hfftn()\" but\n only transforms the last two dimensions by default.\n\"input\" is interpreted as a one-sided Hermitian signal in the time\n domain. By the Hermitian property, the Fourier transform will be\n real-valued.\nNote:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions. With default arguments,\n the size of last dimension should be (2^n + 1) as argument *s*\n defaults to even output size = 2 * (last_dim_size - 1)\n\nParameters:\n * input (Tensor) -- the input tensor\n * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft2.html", "category": "pytorch docs"} {"text": "either be zero-padded or trimmed to the length \"s[i]\" before\n computing the Hermitian FFT. If a length \"-1\" is specified, no\n padding is done in that dimension. Defaults to even output in\n the last dimension: \"s[-1] = 2*(input.size(dim[-1]) - 1)\".\n * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be\n transformed. The last dimension must be the half-Hermitian\n compressed dimension. Default: last two dimensions.\n\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the forward transform (\"hfft2()\"),\n these correspond to:\n\n * \"\"forward\"\" - normalize by \"1/n\"\n\n * \"\"backward\"\" - no normalization\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the Hermitian\n FFT orthonormal)\n\n Where \"n = prod(s)\" is the logical FFT size. 
Calling the\n backward transform (\"ihfft2()\") with the same normalization\n mode will apply an overall normalization of \"1/n\" between the\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft2.html", "category": "pytorch docs"} {"text": "two transforms. This is required to make \"ihfft2()\" the exact\n inverse.\n Default is \"\"backward\"\" (no normalization).\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\nStarting from a real frequency-space signal, we can generate a\n Hermitian-symmetric time-domain signal: >>> T = torch.rand(10, 9)\n\n\n\nt = torch.fft.ihfft2(T)\n\n\n\nWithout specifying the output length to \"hfftn()\", the output will\n not round-trip properly because the input is odd-length in the last\n dimension:\n\n\n\ntorch.fft.hfft2(t).size()\n torch.Size([10, 10])\n\n\n\nSo, it is recommended to always pass the signal shape \"s\".\n\n\n\nroundtrip = torch.fft.hfft2(t, T.size())\nroundtrip.size()\n torch.Size([10, 9])\ntorch.allclose(roundtrip, T)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft2.html", "category": "pytorch docs"} {"text": "torch.Tensor.bitwise_and\nTensor.bitwise_and() -> Tensor\nSee \"torch.bitwise_and()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_and.html", "category": "pytorch docs"} {"text": "torch.topk\ntorch.topk(input, k, dim=None, largest=True, sorted=True, *, out=None)\nReturns the \"k\" largest elements of the given \"input\" tensor along\n a given dimension.\nIf \"dim\" is not given, the last dimension of the input is chosen.\nIf \"largest\" is \"False\" then the k smallest elements are\n returned.\nA namedtuple of (values, indices) is returned with the values\n and indices of the largest k elements of each row of the\n input tensor in the given dimension dim.\nThe boolean option \"sorted\" if \"True\", will make sure that the\n returned k elements are themselves sorted\nParameters:\n * input (Tensor) -- the input tensor.\n * **k** (*int*) -- the k in \"top-k\"\n\n * **dim** (*int**, **optional*) -- the dimension to sort along\n\n * **largest** (*bool**, **optional*) -- controls whether to\n return largest or smallest elements\n\n * **sorted** (*bool**, **optional*) -- controls whether to\n", "source": "https://pytorch.org/docs/stable/generated/torch.topk.html", "category": "pytorch docs"} {"text": "return the elements in sorted order\nKeyword Arguments:\n out (tuple, optional) -- the output tuple of (Tensor,\n LongTensor) that can be optionally given to be used as output\n buffers\nExample:\n >>> x = torch.arange(1., 6.)\n >>> x\n tensor([ 1., 2., 3., 4., 5.])\n >>> torch.topk(x, 3)\n torch.return_types.topk(values=tensor([5., 4., 3.]), indices=tensor([4, 3, 2]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.topk.html", "category": "pytorch docs"} {"text": "SmoothL1Loss\nclass torch.nn.SmoothL1Loss(size_average=None, reduce=None, reduction='mean', beta=1.0)\nCreates a criterion that uses a squared term if the absolute\n element-wise error falls below beta and an L1 term otherwise. It is\n less sensitive to outliers than \"torch.nn.MSELoss\" and in some\n cases prevents exploding gradients (e.g. 
see the paper Fast R-CNN\n by Ross Girshick).\nFor a batch of size N, the unreduced loss can be described as:\n \\ell(x, y) = L = \\{l_1, ..., l_N\\}^T\n\nwith\n l_n = \\begin{cases} 0.5 (x_n - y_n)^2 / beta, & \\text{if } |x_n\n - y_n| < beta \\\\ |x_n - y_n| - 0.5 * beta, & \\text{otherwise }\n \\end{cases}\n\nIf reduction is not none, then:\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{`mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{`sum'.}\n \\end{cases}\n\nNote:\n Smooth L1 loss can be seen as exactly \"L1Loss\", but with the |x -\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html", "category": "pytorch docs"} {"text": "y| < beta portion replaced with a quadratic function such that\n its slope is 1 at |x - y| = beta. The quadratic segment smooths\n the L1 loss near |x - y| = 0.\nNote:\n Smooth L1 loss is closely related to \"HuberLoss\", being\n equivalent to huber(x, y) / beta (note that Smooth L1's beta\n hyper-parameter is also known as delta for Huber). This leads to\n the following differences:\n\n * As beta -> 0, Smooth L1 loss converges to \"L1Loss\", while\n \"HuberLoss\" converges to a constant 0 loss. When beta is 0,\n Smooth L1 loss is equivalent to L1 loss.\n\n * As beta -> +\\infty, Smooth L1 loss converges to a constant 0\n loss, while \"HuberLoss\" converges to \"MSELoss\".\n\n * For Smooth L1 loss, as beta varies, the L1 segment of the loss\n has a constant slope of 1. For \"HuberLoss\", the slope of the L1\n segment is beta.\n\nParameters:\n * size_average (bool, optional) -- Deprecated (see", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html", "category": "pytorch docs"} {"text": "\"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html", "category": "pytorch docs"} {"text": "\"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n * **beta** (*float**, **optional*) -- Specifies the threshold at\n which to change between L1 and L2 loss. The value must be non-\n negative. Default: 1.0\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Target: (*), same shape as the input.\n\n * Output: scalar. 
If \"reduction\" is \"'none'\", then (*), same\n shape as the input.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html", "category": "pytorch docs"} {"text": "AdaptiveAvgPool2d\nclass torch.nn.AdaptiveAvgPool2d(output_size)\nApplies a 2D adaptive average pooling over an input signal composed\n of several input planes.\nThe output is of size H x W, for any input size. The number of\n output features is equal to the number of input planes.\nParameters:\n output_size (Union[int, None,\n Tuple[Optional[int], Optional[int]]])\n -- the target output size of the image of the form H x W. Can be\n a tuple (H, W) or a single H for a square image H x H. H and W\n can be either a \"int\", or \"None\" which means the size will be\n the same as that of the input.\nShape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, S_{0}, S_{1}) or (C, S_{0}, S_{1}), where\n S=\\text{output\\_size}.\n\n-[ Examples ]-\n\n\n\ntarget output size of 5x7\nm = nn.AdaptiveAvgPool2d((5, 7))\ninput = torch.randn(1, 64, 8, 9)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool2d.html", "category": "pytorch docs"} {"text": "\n\n\noutput = m(input)\ntarget output size of 7x7 (square)\nm = nn.AdaptiveAvgPool2d(7)\ninput = torch.randn(1, 64, 10, 9)\noutput = m(input)\ntarget output size of 10x7\nm = nn.AdaptiveAvgPool2d((None, 7))\ninput = torch.randn(1, 64, 10, 9)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool2d.html", "category": "pytorch docs"} {"text": "torch.quantized_max_pool1d\ntorch.quantized_max_pool1d(input, kernel_size, stride=[], padding=0, dilation=1, ceil_mode=False) -> Tensor\nApplies a 1D max pooling over an input quantized tensor composed of\n several input planes.\nParameters:\n * input (Tensor) -- quantized tensor\n * **kernel_size** (*list of python:int*) -- the size of the\n sliding window\n\n * **stride** (\"list of int\", optional) -- the stride of the\n sliding window\n\n * **padding** (\"list of int\", optional) -- padding to be added\n on both sides, must be >= 0 and <= kernel_size / 2\n\n * **dilation** (\"list of int\", optional) -- The stride between\n elements within a sliding window, must be > 0. Default 1\n\n * **ceil_mode** (*bool**, **optional*) -- If True, will use ceil\n instead of floor to compute the output shape. 
Defaults to\n False.\n\nReturns:\n A quantized tensor with max_pool1d applied.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.quantized_max_pool1d.html", "category": "pytorch docs"} {"text": "Return type:\n Tensor\nExample:\n >>> qx = torch.quantize_per_tensor(torch.rand(2, 2), 1.5, 3, torch.quint8)\n >>> torch.quantized_max_pool1d(qx, [2])\n tensor([[0.0000],\n [1.5000]], size=(2, 1), dtype=torch.quint8,\n quantization_scheme=torch.per_tensor_affine, scale=1.5, zero_point=3)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantized_max_pool1d.html", "category": "pytorch docs"} {"text": "torch.Tensor.rsqrt_\nTensor.rsqrt_() -> Tensor\nIn-place version of \"rsqrt()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.rsqrt_.html", "category": "pytorch docs"} {"text": "torch.sort\ntorch.sort(input, dim=- 1, descending=False, stable=False, *, out=None)\nSorts the elements of the \"input\" tensor along a given dimension in\n ascending order by value.\nIf \"dim\" is not given, the last dimension of the input is chosen.\nIf \"descending\" is \"True\" then the elements are sorted in\n descending order by value.\nIf \"stable\" is \"True\" then the sorting routine becomes stable,\n preserving the order of equivalent elements.\nA namedtuple of (values, indices) is returned, where the values\n are the sorted values and indices are the indices of the elements\n in the original input tensor.\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int**, **optional*) -- the dimension to sort along\n\n * **descending** (*bool**, **optional*) -- controls the sorting\n order (ascending or descending)\n\n * **stable** (*bool**, **optional*) -- makes the sorting routine\n", "source": "https://pytorch.org/docs/stable/generated/torch.sort.html", "category": "pytorch docs"} {"text": "stable, which guarantees that the order of equivalent elements\n is preserved.\nKeyword Arguments:\n out (tuple, optional) -- the output tuple of\n (Tensor, LongTensor) that can be optionally given to be used\n as output buffers\nExample:\n >>> x = torch.randn(3, 4)\n >>> sorted, indices = torch.sort(x)\n >>> sorted\n tensor([[-0.2162, 0.0608, 0.6719, 2.3332],\n [-0.5793, 0.0061, 0.6058, 0.9497],\n [-0.5071, 0.3343, 0.9553, 1.0960]])\n >>> indices\n tensor([[ 1, 0, 2, 3],\n [ 3, 1, 0, 2],\n [ 0, 3, 1, 2]])\n\n >>> sorted, indices = torch.sort(x, 0)\n >>> sorted\n tensor([[-0.5071, -0.2162, 0.6719, -0.5793],\n [ 0.0608, 0.0061, 0.9497, 0.3343],\n [ 0.6058, 0.9553, 1.0960, 2.3332]])\n >>> indices\n tensor([[ 2, 0, 0, 1],\n [ 0, 1, 1, 2],\n [ 1, 2, 2, 0]])\n >>> x = torch.tensor([0, 1] * 9)\n", "source": "https://pytorch.org/docs/stable/generated/torch.sort.html", "category": "pytorch docs"} {"text": "\n\n\nx = torch.tensor([0, 1] * 9)\n >>> x.sort()\n torch.return_types.sort(\n values=tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]),\n indices=tensor([ 2, 16, 4, 6, 14, 8, 0, 10, 12, 9, 17, 15, 13, 11, 7, 5, 3, 1]))\n >>> x.sort(stable=True)\n torch.return_types.sort(\n values=tensor([0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1]),\n indices=tensor([ 0, 2, 4, 6, 8, 10, 12, 14, 16, 1, 3, 5, 7, 9, 11, 13, 15, 17]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.sort.html", "category": "pytorch docs"} {"text": "torch.sparse.addmm\ntorch.sparse.addmm(mat, mat1, mat2, *, beta=1., alpha=1.) 
-> Tensor\nThis function does exact same thing as \"torch.addmm()\" in the\n forward, except that it supports backward for sparse COO matrix\n \"mat1\". When \"mat1\" is a COO tensor it must have sparse_dim = 2.\n When inputs are COO tensors, this function also supports backward\n for both inputs.\nSupports both CSR and COO storage formats.\nNote:\n This function doesn't support computing derivaties with respect\n to CSR matrices.\n\nParameters:\n * mat (Tensor) -- a dense matrix to be added\n * **mat1** (*Tensor*) -- a sparse matrix to be multiplied\n\n * **mat2** (*Tensor*) -- a dense matrix to be multiplied\n\n * **beta** (*Number**, **optional*) -- multiplier for \"mat\"\n (\\beta)\n\n * **alpha** (*Number**, **optional*) -- multiplier for mat1 @\n mat2 (\\alpha)\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.addmm.html", "category": "pytorch docs"} {"text": "torch.is_nonzero\ntorch.is_nonzero(input)\nReturns True if the \"input\" is a single element tensor which is not\n equal to zero after type conversions. i.e. not equal to\n \"torch.tensor([0.])\" or \"torch.tensor([0])\" or\n \"torch.tensor([False])\". Throws a \"RuntimeError\" if \"torch.numel()\n != 1\" (even in case of sparse tensors).\nParameters:\n input (Tensor) -- the input tensor.\nExamples:\n >>> torch.is_nonzero(torch.tensor([0.]))\n False\n >>> torch.is_nonzero(torch.tensor([1.5]))\n True\n >>> torch.is_nonzero(torch.tensor([False]))\n False\n >>> torch.is_nonzero(torch.tensor([3]))\n True\n >>> torch.is_nonzero(torch.tensor([1, 3, 5]))\n Traceback (most recent call last):\n ...\n RuntimeError: bool value of Tensor with more than one value is ambiguous\n >>> torch.is_nonzero(torch.tensor([]))\n Traceback (most recent call last):\n ...\n RuntimeError: bool value of Tensor with no values is ambiguous\n", "source": "https://pytorch.org/docs/stable/generated/torch.is_nonzero.html", "category": "pytorch docs"} {"text": "torch.signbit\ntorch.signbit(input, *, out=None) -> Tensor\nTests if each element of \"input\" has its sign bit set or not.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([0.7, -1.2, 0., 2.3])\n >>> torch.signbit(a)\n tensor([ False, True, False, False])\n >>> a = torch.tensor([-0.0, 0.0])\n >>> torch.signbit(a)\n tensor([ True, False])\n\nNote:\n signbit handles signed zeros, so negative zero (-0) returns True.\n", "source": "https://pytorch.org/docs/stable/generated/torch.signbit.html", "category": "pytorch docs"} {"text": "torch.kaiser_window\ntorch.kaiser_window(window_length, periodic=True, beta=12.0, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nComputes the Kaiser window with window length \"window_length\" and\n shape parameter \"beta\".\nLet I_0 be the zeroth order modified Bessel function of the first\n kind (see \"torch.i0()\") and \"N = L - 1\" if \"periodic\" is False and\n \"L\" if \"periodic\" is True, where \"L\" is the \"window_length\". This\n function computes:\n out_i = I_0 \\left( \\beta \\sqrt{1 - \\left( {\\frac{i - N/2}{N/2}}\n \\right) ^2 } \\right) / I_0( \\beta )\n\nCalling \"torch.kaiser_window(L, B, periodic=True)\" is equivalent to\n calling \"torch.kaiser_window(L + 1, B, periodic=False)[:-1])\". 
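A doctest-style sketch of this equivalence (the window length and beta below are arbitrary choices, not values from the official docs):

    >>> import torch
    >>> L, beta = 8, 12.0
    >>> torch.allclose(torch.kaiser_window(L, periodic=True, beta=beta),
    ...                torch.kaiser_window(L + 1, periodic=False, beta=beta)[:-1])
    True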
The\n \"periodic\" argument is intended as a helpful shorthand to produce a\n periodic window as input to functions like \"torch.stft()\".\nNote:\n If \"window_length\" is one, then the returned window is a single\n element tensor containing a one.\n", "source": "https://pytorch.org/docs/stable/generated/torch.kaiser_window.html", "category": "pytorch docs"} {"text": "element tensor containing a one.\nParameters:\n * window_length (int) -- length of the window.\n * **periodic** (*bool**, **optional*) -- If True, returns a\n periodic window suitable for use in spectral analysis. If\n False, returns a symmetric window suitable for use in filter\n design.\n\n * **beta** (*float**, **optional*) -- shape parameter for the\n window.\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned window tensor. Only \"torch.strided\" (dense layout) is\n supported.\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n", "source": "https://pytorch.org/docs/stable/generated/torch.kaiser_window.html", "category": "pytorch docs"} {"text": "for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.kaiser_window.html", "category": "pytorch docs"} {"text": "torch.prod\ntorch.prod(input, *, dtype=None) -> Tensor\nReturns the product of all elements in the \"input\" tensor.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. Default: None.\nExample:\n >>> a = torch.randn(1, 3)\n >>> a\n tensor([[-0.8020, 0.5428, -1.5854]])\n >>> torch.prod(a)\n tensor(0.6902)\n\ntorch.prod(input, dim, keepdim=False, *, dtype=None) -> Tensor\nReturns the product of each row of the \"input\" tensor in the given\n dimension \"dim\".\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in", "source": "https://pytorch.org/docs/stable/generated/torch.prod.html", "category": "pytorch docs"} {"text": "the output tensor having 1 fewer dimension than \"input\".\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to reduce.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n dtype (\"torch.dtype\", optional) -- the desired data type of\n returned tensor. If specified, the input tensor is casted to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. 
Default: None.\nExample:\n >>> a = torch.randn(4, 2)\n >>> a\n tensor([[ 0.5261, -0.3837],\n [ 1.1857, -0.2498],\n [-1.1646, 0.0705],\n [ 1.1131, -1.0629]])\n >>> torch.prod(a, 1)\n tensor([-0.2018, -0.2962, -0.0821, -1.1831])\n", "source": "https://pytorch.org/docs/stable/generated/torch.prod.html", "category": "pytorch docs"} {"text": "torch.Tensor.stft\nTensor.stft(n_fft, hop_length=None, win_length=None, window=None, center=True, pad_mode='reflect', normalized=False, onesided=None, return_complex=None)\nSee \"torch.stft()\"\nWarning:\n This function changed signature at version 0.4.1. Calling with\n the previous signature may cause error or return incorrect\n result.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.stft.html", "category": "pytorch docs"} {"text": "torch.fft.hfftn\ntorch.fft.hfftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor\nComputes the n-dimensional discrete Fourier transform of a\n Hermitian symmetric \"input\" signal.\n\"input\" is interpreted as a one-sided Hermitian signal in the time\n domain. By the Hermitian property, the Fourier transform will be\n real-valued.\nNote:\n \"hfftn()\"/\"ihfftn()\" are analogous to \"rfftn()\"/\"irfftn()\". The\n real FFT expects a real signal in the time-domain and gives\n Hermitian symmetry in the frequency-domain. The Hermitian FFT is\n the opposite; Hermitian symmetric in the time-domain and real-\n valued in the frequency-domain. For this reason, special care\n needs to be taken with the shape argument \"s\", in the same way as\n with \"irfftn()\".\n\nNote:\n Some input frequencies must be real-valued to satisfy the\n Hermitian property. In these cases the imaginary component will\n be ignored. For example, any imaginary component in the zero-\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfftn.html", "category": "pytorch docs"} {"text": "frequency term cannot be represented in a real output and so will\n always be ignored.\nNote:\n The correct interpretation of the Hermitian input depends on the\n length of the original data, as given by \"s\". This is because\n each input shape could correspond to either an odd or even length\n signal. By default, the signal is assumed to be even length and\n odd signals will not round-trip properly. It is recommended to\n always pass the signal shape \"s\".\n\nNote:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions. With default arguments,\n the size of last dimension should be (2^n + 1) as argument *s*\n defaults to even output size = 2 * (last_dim_size - 1)\n\nParameters:\n * input (Tensor) -- the input tensor\n * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfftn.html", "category": "pytorch docs"} {"text": "transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the real FFT. If a length \"-1\" is specified, no\n padding is done in that dimension. Defaults to even output in\n the last dimension: \"s[-1] = 2*(input.size(dim[-1]) - 1)\".\n * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be\n transformed. The last dimension must be the half-Hermitian\n compressed dimension. Default: all dimensions, or the last\n \"len(s)\" dimensions if \"s\" is given.\n\n * **norm** (*str**, **optional*) --\n\n Normalization mode. 
For the forward transform (\"hfftn()\"),\n these correspond to:\n\n * \"\"forward\"\" - normalize by \"1/n\"\n\n * \"\"backward\"\" - no normalization\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the Hermitian\n FFT orthonormal)\n\n Where \"n = prod(s)\" is the logical FFT size. Calling the\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfftn.html", "category": "pytorch docs"} {"text": "backward transform (\"ihfftn()\") with the same normalization\n mode will apply an overall normalization of \"1/n\" between the\n two transforms. This is required to make \"ihfftn()\" the exact\n inverse.\n Default is \"\"backward\"\" (no normalization).\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\nStarting from a real frequency-space signal, we can generate a\n Hermitian-symmetric time-domain signal: >>> T = torch.rand(10, 9)\n\n\n\nt = torch.fft.ihfftn(T)\n\n\n\nWithout specifying the output length to \"hfftn()\", the output will\n not round-trip properly because the input is odd-length in the last\n dimension:\n\n\n\ntorch.fft.hfftn(t).size()\n torch.Size([10, 10])\n\n\n\nSo, it is recommended to always pass the signal shape \"s\".\n\n\n\nroundtrip = torch.fft.hfftn(t, T.size())\nroundtrip.size()\n torch.Size([10, 9])\ntorch.allclose(roundtrip, T)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfftn.html", "category": "pytorch docs"} {"text": "MultiMarginLoss\nclass torch.nn.MultiMarginLoss(p=1, margin=1.0, weight=None, size_average=None, reduce=None, reduction='mean')\nCreates a criterion that optimizes a multi-class classification\n hinge loss (margin-based loss) between input x (a 2D mini-batch\n Tensor) and output y (which is a 1D tensor of target class\n indices, 0 \\leq y \\leq \\text{x.size}(1)-1):\nFor each mini-batch sample, the loss in terms of the 1D input x and\n scalar output y is:\n \\text{loss}(x, y) = \\frac{\\sum_i \\max(0, \\text{margin} - x[y] +\n x[i])^p}{\\text{x.size}(0)}\n\nwhere i \\in \\left{0, \\; \\cdots , \\; \\text{x.size}(0) - 1\\right}\n and i \\neq y.\nOptionally, you can give non-equal weighting on the classes by\n passing a 1D \"weight\" tensor into the constructor.\nThe loss function then becomes:\n \\text{loss}(x, y) = \\frac{\\sum_i \\max(0, w[y] * (\\text{margin} -\n x[y] + x[i]))^p}{\\text{x.size}(0)}\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiMarginLoss.html", "category": "pytorch docs"} {"text": "Parameters:\n * p (int, optional) -- Has a default value of 1. 1 and\n 2 are the only supported values.\n * **margin** (*float**, **optional*) -- Has a default value of\n 1.\n\n * **weight** (*Tensor**, **optional*) -- a manual rescaling\n weight given to each class. If given, it has to be a Tensor of\n size *C*. Otherwise, it is treated as if having all ones.\n\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). 
By default, the losses are averaged or summed\n over observations for each minibatch depending on\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiMarginLoss.html", "category": "pytorch docs"} {"text": "\"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n\nShape:\n * Input: (N, C) or (C), where N is the batch size and C is the\n number of classes.\n * Target: (N) or (), where each value is 0 \\leq\n \\text{targets}[i] \\leq C-1.\n\n * Output: scalar. If \"reduction\" is \"'none'\", then same shape as\n the target.\n\nExamples:\n >>> loss = nn.MultiMarginLoss()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiMarginLoss.html", "category": "pytorch docs"} {"text": "\n\n\nloss = nn.MultiMarginLoss()\n >>> x = torch.tensor([[0.1, 0.2, 0.4, 0.8]])\n >>> y = torch.tensor([3])\n >>> # 0.25 * ((1-(0.8-0.1)) + (1-(0.8-0.2)) + (1-(0.8-0.4)))\n >>> loss(x, y)\n tensor(0.32...)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MultiMarginLoss.html", "category": "pytorch docs"} {"text": "torch.slice_scatter\ntorch.slice_scatter(input, src, dim=0, start=None, end=None, step=1) -> Tensor\nEmbeds the values of the \"src\" tensor into \"input\" at the given\n dimension. 
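As a minimal sketch of the idea (the shapes and the fill value 7. are chosen arbitrarily; fuller examples follow below), a (1, 4) block is written into row 2 of a zero matrix:

    >>> import torch
    >>> a = torch.zeros(4, 4)
    >>> src = torch.full((1, 4), 7.)          # must match the shape of the target slice a[2:3]
    >>> torch.slice_scatter(a, src, dim=0, start=2, end=3)
    tensor([[0., 0., 0., 0.],
            [0., 0., 0., 0.],
            [7., 7., 7., 7.],
            [0., 0., 0., 0.]])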
This function returns a tensor with fresh storage; it\n does not create a view.\nParameters:\n * input (Tensor) -- the input tensor.\n * **src** (*Tensor*) -- The tensor to embed into \"input\"\n\n * **dim** (*int*) -- the dimension to insert the slice into\n\n * **start** (*Optional**[**int**]*) -- the start index of where\n to insert the slice\n\n * **end** (*Optional**[**int**]*) -- the end index of where to\n insert the slice\n\n * **step** (*int*) -- the how many elements to skip in\n\nExample:\n >>> a = torch.zeros(8, 8)\n >>> b = torch.ones(8)\n >>> a.slice_scatter(b, start=6)\n tensor([[0., 0., 0., 0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0., 0., 0., 0.],\n", "source": "https://pytorch.org/docs/stable/generated/torch.slice_scatter.html", "category": "pytorch docs"} {"text": "[0., 0., 0., 0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0., 0., 0., 0.],\n [0., 0., 0., 0., 0., 0., 0., 0.],\n [1., 1., 1., 1., 1., 1., 1., 1.],\n [1., 1., 1., 1., 1., 1., 1., 1.]])\n >>> b = torch.ones(2)\n >>> a.slice_scatter(b, dim=1, start=2, end=6, step=2)\n tensor([[0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.],\n [0., 0., 1., 0., 1., 0., 0., 0.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.slice_scatter.html", "category": "pytorch docs"} {"text": "torch.det\ntorch.det(input) -> Tensor\nAlias for \"torch.linalg.det()\"", "source": "https://pytorch.org/docs/stable/generated/torch.det.html", "category": "pytorch docs"} {"text": "torch.linalg.matrix_power\ntorch.linalg.matrix_power(A, n, *, out=None) -> Tensor\nComputes the n-th power of a square matrix for an integer n.\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nIf \"n\"= 0, it returns the identity matrix (or batch) of the same\n shape as \"A\". If \"n\" is negative, it returns the inverse of each\n matrix (if invertible) raised to the power of abs(n).\nNote:\n Consider using \"torch.linalg.solve()\" if possible for multiplying\n a matrix on the left by a negative power as, if \"n\"*> 0*:\n\n matrix_power(torch.linalg.solve(A, B), n) == matrix_power(A, -n) @ B\n\n It is always preferred to use \"solve()\" when possible, as it is\n faster and more numerically stable than computing A^{-n}\n explicitly.\n\nSee also:\n \"torch.linalg.solve()\" computes \"A\"*.inverse() @ *\"B\" with a\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_power.html", "category": "pytorch docs"} {"text": "numerically stable algorithm.\nParameters:\n * A (Tensor) -- tensor of shape (, m, m)* where *** is\n zero or more batch dimensions.\n * **n** (*int*) -- the exponent.\n\nKeyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. 
Default: None.\nRaises:\n RuntimeError -- if \"n\"< 0 and the matrix \"A\" or any matrix\n in the batch of matrices \"A\" is not invertible.\nExamples:\n >>> A = torch.randn(3, 3)\n >>> torch.linalg.matrix_power(A, 0)\n tensor([[1., 0., 0.],\n [0., 1., 0.],\n [0., 0., 1.]])\n >>> torch.linalg.matrix_power(A, 3)\n tensor([[ 1.0756, 0.4980, 0.0100],\n [-1.6617, 1.4994, -1.9980],\n [-0.4509, 0.2731, 0.8001]])\n >>> torch.linalg.matrix_power(A.expand(2, -1, -1), -2)\n tensor([[[ 0.2640, 0.4571, -0.5511],\n [-1.0163, 0.3491, -1.5292],\n [-0.4899, 0.0822, 0.2773]],\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_power.html", "category": "pytorch docs"} {"text": "[-0.4899, 0.0822, 0.2773]],\n [[ 0.2640, 0.4571, -0.5511],\n [-1.0163, 0.3491, -1.5292],\n [-0.4899, 0.0822, 0.2773]]])", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_power.html", "category": "pytorch docs"} {"text": "Adadelta\nclass torch.optim.Adadelta(params, lr=1.0, rho=0.9, eps=1e-06, weight_decay=0, foreach=None, *, maximize=False, differentiable=False)\nImplements Adadelta algorithm.\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{input} : \\gamma \\text{ (lr)}, \\: \\theta_0\n \\text{ (params)}, \\: f(\\theta) \\text{ (objective)}, \\:\n \\rho \\text{ (decay)}, \\: \\lambda \\text{ (weight decay)}\n \\\\ &\\textbf{initialize} : v_0 \\leftarrow 0 \\: \\text{\n (square avg)}, \\: u_0 \\leftarrow 0 \\: \\text{\n (accumulate variables)} \\\\[-1.ex]\n &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\\\ &\\hspace{5mm}g_t\n \\leftarrow \\nabla_{\\theta} f_t (\\theta_{t-1}) \\\\\n &\\hspace{5mm}if \\: \\lambda \\neq 0\n \\\\ &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda\n \\theta_{t-1} \\\\ &\\hspace{5mm}\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"} {"text": "v_t \\leftarrow v_{t-1} \\rho + g^2_t (1 - \\rho)\n \\ &\\hspace{5mm}\\Delta x_t \\leftarrow\n \\frac{\\sqrt{u_{t-1} + \\epsilon }}{ \\sqrt{v_t +\n \\epsilon} }g_t \\hspace{21mm} \\\n &\\hspace{5mm} u_t \\leftarrow u_{t-1} \\rho + \\Delta\n x^2_t (1 - \\rho)\n \\ &\\hspace{5mm}\\theta_t \\leftarrow \\theta_{t-1} -\n \\gamma \\Delta x_t \\ &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\nFor further details regarding the algorithm we refer to ADADELTA:\n An Adaptive Learning Rate Method.\nParameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * **rho** (*float**, **optional*) -- coefficient used for\n computing a running average of squared gradients (default:\n 0.9)\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"} {"text": "0.9)\n * **eps** (*float**, **optional*) -- term added to the\n denominator to improve numerical stability (default: 1e-6)\n\n * **lr** (*float**, **optional*) -- coefficient that scale delta\n before it is applied to the parameters (default: 1.0)\n\n * **weight_decay** (*float**, **optional*) -- weight decay (L2\n penalty) (default: 0)\n\n * **foreach** (*bool**, **optional*) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. 
(default: None)\n\n * **maximize** (*bool**, **optional*) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n\n * **differentiable** (*bool**, **optional*) -- whether autograd\n should occur through the optimizer step in training.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"} {"text": "Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\nadd_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n **state_dict** (*dict*) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n\nregister_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"} {"text": "hook(optimizer, args, kwargs) -> None\n The \"optimizer\" argument is the optimizer instance being used.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"} {"text": "registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n **set_to_none** (*bool*) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"} {"text": "None attribute or a Tensor full of 0s will behave\n differently. 2. 
If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Adadelta.html", "category": "pytorch docs"} {"text": "torch.autograd.function.FunctionCtx.mark_non_differentiable\nFunctionCtx.mark_non_differentiable(*args)\nMarks outputs as non-differentiable.\nThis should be called at most once, only from inside the\n \"forward()\" method, and all arguments should be tensor outputs.\nThis will mark outputs as not requiring gradients, increasing the\n efficiency of backward computation. You still need to accept a\n gradient for each output in \"backward()\", but it's always going to\n be a zero tensor with the same shape as the shape of a\n corresponding output.\nThis is used e.g. for indices returned from a sort. See example::\n >>> class Func(Function):\n >>> @staticmethod\n >>> def forward(ctx, x):\n >>> sorted, idx = x.sort()\n >>> ctx.mark_non_differentiable(idx)\n >>> ctx.save_for_backward(x, idx)\n >>> return sorted, idx\n >>>\n >>> @staticmethod", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_non_differentiable.html", "category": "pytorch docs"} {"text": "\n\n\n >>> @staticmethod\n >>> @once_differentiable\n >>> def backward(ctx, g1, g2): # still need to accept g2\n >>> x, idx = ctx.saved_tensors\n >>> grad_input = torch.zeros_like(x)\n >>> grad_input.index_add_(0, idx, g1)\n >>> return grad_input\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.mark_non_differentiable.html", "category": "pytorch docs"} {"text": "torch.sparse.mm\ntorch.sparse.mm()\n Performs a matrix multiplication of the sparse matrix \"mat1\" and\n the (sparse or strided) matrix \"mat2\". Similar to \"torch.mm()\",\n if \"mat1\" is a (n \\times m) tensor, \"mat2\" is a (m \\times p)\n tensor, out will be a (n \\times p) tensor. When \"mat1\" is a COO\n tensor it must have *sparse_dim = 2*. 
When inputs are COO\n tensors, this function also supports backward for both inputs.\n\n Supports both CSR and COO storage formats.\n\nNote:\n This function doesn't support computing derivatives with respect\n to CSR matrices.\n\n Args:\n mat1 (Tensor): the first sparse matrix to be multiplied\n mat2 (Tensor): the second matrix to be multiplied, which could be\n sparse or dense\n\n Shape:\n The format of the output tensor of this function follows:\n - sparse x sparse -> sparse\n - sparse x dense -> dense\n\n Example:\n\n >>> a = torch.randn(2, 3).to_sparse().requires_grad_(True)\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.mm.html", "category": "pytorch docs"} {"text": "\n\n\na\n tensor(indices=tensor([[0, 0, 0, 1, 1, 1],\n [0, 1, 2, 0, 1, 2]]),\n values=tensor([ 1.5901, 0.0183, -0.6146, 1.8061, -0.0112, 0.6302]),\n size=(2, 3), nnz=6, layout=torch.sparse_coo, requires_grad=True)\n\n\n\n >>> b = torch.randn(3, 2, requires_grad=True)\n >>> b\n tensor([[-0.6479, 0.7874],\n [-1.2056, 0.5641],\n [-1.1716, -0.9923]], requires_grad=True)\n\n >>> y = torch.sparse.mm(a, b)\n >>> y\n tensor([[-0.3323, 1.8723],\n [-1.8951, 0.7904]], grad_fn=)\n >>> y.sum().backward()\n >>> a.grad\n tensor(indices=tensor([[0, 0, 0, 1, 1, 1],\n [0, 1, 2, 0, 1, 2]]),\n values=tensor([ 0.1394, -0.6415, -2.1639, 0.1394, -0.6415, -2.1639]),\n size=(2, 3), nnz=6, layout=torch.sparse_coo)\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.mm.html", "category": "pytorch docs"} {"text": "torch.Tensor.get_device\nTensor.get_device() -> Device ordinal (Integer)\nFor CUDA tensors, this function returns the device ordinal of the\n GPU on which the tensor resides. For CPU tensors, this function\n returns -1.\nExample:\n >>> x = torch.randn(3, 4, 5, device='cuda:0')\n >>> x.get_device()\n 0\n >>> x.cpu().get_device()\n -1\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.get_device.html", "category": "pytorch docs"} {"text": "torch.cuda.set_per_process_memory_fraction\ntorch.cuda.set_per_process_memory_fraction(fraction, device=None)\nSet memory fraction for a process. The fraction is used to limit the\n caching allocator's allocated memory on a CUDA device. The allowed\n value equals the total visible memory multiplied by fraction. If a\n process tries to allocate more than the allowed value, the allocator\n will raise an out of memory error.\nParameters:\n * fraction (float) -- Range: 0~1. Allowed memory equals\n total_memory * fraction.\n * **device** (*torch.device** or **int**, **optional*) --\n selected device. If it is \"None\" the default CUDA device is\n used.\n\nNote:\n In general, the total available free memory is less than the\n total capacity.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.set_per_process_memory_fraction.html", "category": "pytorch docs"} {"text": "ConvBn2d\nclass torch.ao.nn.intrinsic.ConvBn2d(conv, bn)\nThis is a sequential container which calls the Conv 2d and Batch\n Norm 2d modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvBn2d.html", "category": "pytorch docs"} {"text": "torch.func.grad\ntorch.func.grad(func, argnums=0, has_aux=False)\nThe \"grad\" operator helps compute gradients of \"func\" with respect to\n the input(s) specified by \"argnums\". 
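Returning to torch.cuda.set_per_process_memory_fraction above, a minimal usage sketch; the 0.5 cap and device index 0 are illustrative choices, not values prescribed by the entry:

    import torch

    if torch.cuda.is_available():
        # Cap this process's caching allocator at roughly half of device 0's memory.
        torch.cuda.set_per_process_memory_fraction(0.5, device=0)

        total = torch.cuda.get_device_properties(0).total_memory
        # Allocations beyond roughly 0.5 * total will now raise a CUDA out of memory error.
        print(f"allowed bytes: {int(0.5 * total)}")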
This operator can be nested to\n compute higher-order gradients.\nParameters:\n * func (Callable) -- A Python function that takes one or\n more arguments. Must return a single-element Tensor. If\n specified \"has_aux\" equals \"True\", function can return a tuple\n of single-element Tensor and other auxiliary objects:\n \"(output, aux)\".\n * **argnums** (*int** or **Tuple**[**int**]*) -- Specifies\n arguments to compute gradients with respect to. \"argnums\" can\n be single integer or tuple of integers. Default: 0.\n\n * **has_aux** (*bool*) -- Flag indicating that \"func\" returns a\n tensor and other auxiliary objects: \"(output, aux)\". Default:\n False.\n\nReturns:\n Function to compute gradients with respect to its inputs. By", "source": "https://pytorch.org/docs/stable/generated/torch.func.grad.html", "category": "pytorch docs"} {"text": "default, the output of the function is the gradient tensor(s)\n with respect to the first argument. If specified \"has_aux\"\n equals \"True\", tuple of gradients and output auxiliary objects\n is returned. If \"argnums\" is a tuple of integers, a tuple of\n output gradients with respect to each \"argnums\" value is\n returned.\nReturn type:\n Callable\nExample of using \"grad\":\n\n\n\nfrom torch.func import grad\nx = torch.randn([])\ncos_x = grad(lambda x: torch.sin(x))(x)\nassert torch.allclose(cos_x, x.cos())\nSecond-order gradients\nneg_sin_x = grad(grad(lambda x: torch.sin(x)))(x)\nassert torch.allclose(neg_sin_x, -x.sin())\n\n\n\nWhen composed with \"vmap\", \"grad\" can be used to compute per-\n sample-gradients:\n\n\n\nfrom torch.func import grad, vmap\nbatch_size, feature_size = 3, 5\ndef model(weights, feature_vec):\n # Very simple linear model with activation\n assert feature_vec.dim() == 1\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.grad.html", "category": "pytorch docs"} {"text": "\n\n\nassert feature_vec.dim() == 1\nreturn feature_vec.dot(weights).relu()\n\ndef compute_loss(weights, example, target):\n y = model(weights, example)\n return ((y - target) ** 2).mean() # MSELoss\nweights = torch.randn(feature_size, requires_grad=True)\nexamples = torch.randn(batch_size, feature_size)\ntargets = torch.randn(batch_size)\ninputs = (weights, examples, targets)\ngrad_weight_per_example = vmap(grad(compute_loss), in_dims=(None, 0, 0))(*inputs)\n\n\n\nExample of using \"grad\" with \"has_aux\" and \"argnums\":\n\n\n\nfrom torch.func import grad\ndef my_loss_func(y, y_pred):\n loss_per_sample = (0.5 * y_pred - y) ** 2\n loss = loss_per_sample.mean()\n return loss, (y_pred, loss_per_sample)\nfn = grad(my_loss_func, argnums=(0, 1), has_aux=True)\ny_true = torch.rand(4)\ny_preds = torch.rand(4, requires_grad=True)\nout = fn(y_true, y_preds)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.grad.html", "category": "pytorch docs"} {"text": "\n\n\nout = fn(y_true, y_preds)\n> output is ((grads w.r.t y_true, grads w.r.t y_preds), (y_pred, loss_per_sample))\n\n\n\nNote:\n Using PyTorch \"torch.no_grad\" together with \"grad\".Case 1: Using\n \"torch.no_grad\" inside a function:\n\n >>> def f(x):\n >>> with torch.no_grad():\n >>> c = x ** 2\n >>> return x - c\n\n In this case, \"grad(f)(x)\" will respect the inner\n \"torch.no_grad\".Case 2: Using \"grad\" inside \"torch.no_grad\"\n context manager:\n\n >>> with torch.no_grad():\n >>> grad(f)(x)\n\n In this case, \"grad\" will respect the inner \"torch.no_grad\", but\n not the outer one. 
This is because \"grad\" is a \"function\n transform\": its result should not depend on the result of a\n context manager outside of \"f\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.grad.html", "category": "pytorch docs"} {"text": "torch.sparse_coo_tensor\ntorch.sparse_coo_tensor(indices, values, size=None, *, dtype=None, device=None, requires_grad=False, check_invariants=None) -> Tensor\nConstructs a sparse tensor in COO(rdinate) format with specified\n values at the given \"indices\".\nNote:\n This function returns an uncoalesced tensor.\n\nNote:\n If the \"device\" argument is not specified the device of the given\n \"values\" and indices tensor(s) must match. If, however, the\n argument is specified the input Tensors will be converted to the\n given device and in turn determine the device of the constructed\n sparse tensor.\n\nParameters:\n * indices (array_like) -- Initial data for the tensor. Can\n be a list, tuple, NumPy \"ndarray\", scalar, and other types.\n Will be cast to a \"torch.LongTensor\" internally. The indices\n are the coordinates of the non-zero values in the matrix, and\n thus should be two-dimensional where the first dimension is", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html", "category": "pytorch docs"} {"text": "the number of tensor dimensions and the second dimension is\n the number of non-zero values.\n * **values** (*array_like*) -- Initial values for the tensor.\n Can be a list, tuple, NumPy \"ndarray\", scalar, and other\n types.\n\n * **size** (list, tuple, or \"torch.Size\", optional) -- Size of\n the sparse tensor. If not provided the size will be inferred\n as the minimum size big enough to hold all non-zero elements.\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if None, infers data type from\n \"values\".\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if None, uses the current device for\n the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html", "category": "pytorch docs"} {"text": "tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **check_invariants** (*bool**, **optional*) -- If sparse\n tensor invariants are checked. Default: as returned by\n \"torch.sparse.check_sparse_tensor_invariants.is_enabled()\",\n initially False.\n\nExample:\n >>> i = torch.tensor([[0, 1, 1],\n ... [2, 0, 2]])\n >>> v = torch.tensor([3, 4, 5], dtype=torch.float32)\n >>> torch.sparse_coo_tensor(i, v, [2, 4])\n tensor(indices=tensor([[0, 1, 1],\n [2, 0, 2]]),\n values=tensor([3., 4., 5.]),\n size=(2, 4), nnz=3, layout=torch.sparse_coo)\n\n >>> torch.sparse_coo_tensor(i, v) # Shape inference\n tensor(indices=tensor([[0, 1, 1],\n [2, 0, 2]]),\n values=tensor([3., 4., 5.]),\n size=(2, 3), nnz=3, layout=torch.sparse_coo)\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.sparse_coo_tensor(i, v, [2, 4],\n ... dtype=torch.float64,\n ... 
device=torch.device('cuda:0'))\n tensor(indices=tensor([[0, 1, 1],\n [2, 0, 2]]),\n values=tensor([3., 4., 5.]),\n device='cuda:0', size=(2, 4), nnz=3, dtype=torch.float64,\n layout=torch.sparse_coo)\n\n\n\n # Create an empty sparse tensor with the following invariants:\n # 1. sparse_dim + dense_dim = len(SparseTensor.shape)\n # 2. SparseTensor._indices().shape = (sparse_dim, nnz)\n # 3. SparseTensor._values().shape = (nnz, SparseTensor.shape[sparse_dim:])\n #\n # For instance, to create an empty sparse tensor with nnz = 0, dense_dim = 0 and\n # sparse_dim = 1 (hence indices is a 2D tensor of shape = (1, 0))\n >>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), [], [1])\n tensor(indices=tensor([], size=(1, 0)),\n values=tensor([], size=(0,)),\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html", "category": "pytorch docs"} {"text": "values=tensor([], size=(0,)),\n size=(1,), nnz=0, layout=torch.sparse_coo)\n # and to create an empty sparse tensor with nnz = 0, dense_dim = 1 and\n # sparse_dim = 1\n >>> S = torch.sparse_coo_tensor(torch.empty([1, 0]), torch.empty([0, 2]), [1, 2])\n tensor(indices=tensor([], size=(1, 0)),\n values=tensor([], size=(0, 2)),\n size=(1, 2), nnz=0, layout=torch.sparse_coo)\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse_coo_tensor.html", "category": "pytorch docs"} {"text": "torch._foreach_round\ntorch._foreach_round(self: List[Tensor]) -> List[Tensor]\nApply \"torch.round()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_round.html", "category": "pytorch docs"} {"text": "LeakyReLU\nclass torch.nn.LeakyReLU(negative_slope=0.01, inplace=False)\nApplies the element-wise function:\n \\text{LeakyReLU}(x) = \\max(0, x) + \\text{negative\\_slope} *\n \\min(0, x)\n\nor\n \\text{LeakyReLU}(x) = \\begin{cases} x, & \\text{ if } x \\geq 0 \\\\\n \\text{negative\\_slope} \\times x, & \\text{ otherwise }\n \\end{cases}\n\nParameters:\n * negative_slope (float) -- Controls the angle of the\n negative slope. Default: 1e-2\n * **inplace** (*bool*) -- can optionally do the operation in-\n place. Default: \"False\"\n\nShape:\n * Input: (*) where *** means, any number of additional\n dimensions\n * Output: (*), same shape as the input\n\n[image]\nExamples:\n >>> m = nn.LeakyReLU(0.1)\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LeakyReLU.html", "category": "pytorch docs"} {"text": "torch.Tensor.isreal\nTensor.isreal() -> Tensor\nSee \"torch.isreal()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isreal.html", "category": "pytorch docs"} {"text": "torch.nn.functional.leaky_relu\ntorch.nn.functional.leaky_relu(input, negative_slope=0.01, inplace=False) -> Tensor\nApplies element-wise, \\text{LeakyReLU}(x) = \\max(0, x) +\n \\text{negative_slope} * \\min(0, x)\nSee \"LeakyReLU\" for more details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.leaky_relu.html", "category": "pytorch docs"} {"text": "torch.Tensor.fill_diagonal_\nTensor.fill_diagonal_(fill_value, wrap=False) -> Tensor\nFill the main diagonal of a tensor that has at least 2-dimensions.\n When dims>2, all dimensions of input must be of equal length. 
This\n function modifies the input tensor in-place, and returns the input\n tensor.\nParameters:\n * fill_value (Scalar) -- the fill value\n * **wrap** (*bool*) -- the diagonal 'wrapped' after N columns\n for tall matrices.\n\nExample:\n >>> a = torch.zeros(3, 3)\n >>> a.fill_diagonal_(5)\n tensor([[5., 0., 0.],\n [0., 5., 0.],\n [0., 0., 5.]])\n >>> b = torch.zeros(7, 3)\n >>> b.fill_diagonal_(5)\n tensor([[5., 0., 0.],\n [0., 5., 0.],\n [0., 0., 5.],\n [0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]])\n >>> c = torch.zeros(7, 3)\n >>> c.fill_diagonal_(5, wrap=True)\n tensor([[5., 0., 0.],\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fill_diagonal_.html", "category": "pytorch docs"} {"text": "tensor([[5., 0., 0.],\n [0., 5., 0.],\n [0., 0., 5.],\n [0., 0., 0.],\n [5., 0., 0.],\n [0., 5., 0.],\n [0., 0., 5.]])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fill_diagonal_.html", "category": "pytorch docs"} {"text": "torch.Tensor.acos\nTensor.acos() -> Tensor\nSee \"torch.acos()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.acos.html", "category": "pytorch docs"} {"text": "ConstantLR\nclass torch.optim.lr_scheduler.ConstantLR(optimizer, factor=0.3333333333333333, total_iters=5, last_epoch=- 1, verbose=False)\nDecays the learning rate of each parameter group by a small\n constant factor until the number of epoch reaches a pre-defined\n milestone: total_iters. Notice that such decay can happen\n simultaneously with other changes to the learning rate from outside\n this scheduler. When last_epoch=-1, sets initial lr as lr.\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **factor** (*float*) -- The number we multiply learning rate\n until the milestone. Default: 1./3.\n\n * **total_iters** (*int*) -- The number of steps that the\n scheduler decays the learning rate. Default: 5.\n\n * **last_epoch** (*int*) -- The index of the last epoch.\n Default: -1.\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n\n-[ Example ]-", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ConstantLR.html", "category": "pytorch docs"} {"text": "-[ Example ]-\n\n\n\nAssuming optimizer uses lr = 0.05 for all groups\nlr = 0.025 if epoch == 0\nlr = 0.025 if epoch == 1\nlr = 0.025 if epoch == 2\nlr = 0.025 if epoch == 3\nlr = 0.05 if epoch >= 4\nscheduler = ConstantLR(self.opt, factor=0.5, total_iters=4)\nfor epoch in range(100):\n train(...)\n validate(...)\n scheduler.step()\n\n\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)\n Loads the schedulers state.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. 
Should be an\n object returned from a call to \"state_dict()\".\n\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.ConstantLR.html", "category": "pytorch docs"} {"text": "BNReLU2d\nclass torch.ao.nn.intrinsic.quantized.BNReLU2d(num_features, eps=1e-05, momentum=0.1, device=None, dtype=None)\nA BNReLU2d module is a fused module of BatchNorm2d and ReLU\nWe adopt the same interface as \"torch.ao.nn.quantized.BatchNorm2d\".\nVariables:\n torch.ao.nn.quantized.BatchNorm2d (Same as) --", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.BNReLU2d.html", "category": "pytorch docs"} {"text": "freeze_bn_stats\nclass torch.ao.nn.intrinsic.qat.freeze_bn_stats(mod)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.freeze_bn_stats.html", "category": "pytorch docs"} {"text": "ConvertCustomConfig\nclass torch.ao.quantization.fx.custom_config.ConvertCustomConfig\nCustom configuration for \"convert_fx()\".\nExample usage:\n convert_custom_config = ConvertCustomConfig() .set_observed_to_quantized_mapping(ObservedCustomModule, QuantizedCustomModule) .set_preserved_attributes([\"attr1\", \"attr2\"])\n\nclassmethod from_dict(convert_custom_config_dict)\n Create a \"ConvertCustomConfig\" from a dictionary with the\n following items:\n\n \"observed_to_quantized_custom_module_class\": a nested\n dictionary mapping from quantization mode to an inner mapping\n from observed module classes to quantized module classes,\n e.g.:: { \"static\": {FloatCustomModule: ObservedCustomModule},\n \"dynamic\": {FloatCustomModule: ObservedCustomModule},\n \"weight_only\": {FloatCustomModule: ObservedCustomModule} }\n \"preserved_attributes\": a list of attributes that persist\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.ConvertCustomConfig.html", "category": "pytorch docs"} {"text": "even if they are not used in \"forward\"\n This function is primarily for backward compatibility and may be\n removed in the future.\n\n Return type:\n *ConvertCustomConfig*\n\nset_observed_to_quantized_mapping(observed_class, quantized_class, quant_type=QuantType.STATIC)\n Set the mapping from a custom observed module class to a custom\n quantized module class.\n\n The quantized module class must have a \"from_observed\" class\n method that converts the observed module class to the quantized\n module class.\n\n Return type:\n *ConvertCustomConfig*\n\nset_preserved_attributes(attributes)\n Set the names of the attributes that will persist in the graph\n module even if they are not used in the model's \"forward\"\n method.\n\n Return type:\n *ConvertCustomConfig*\n\nto_dict()\n Convert this \"ConvertCustomConfig\" to a dictionary with the\n items described in \"from_dict()\".\n\n Return type:\n *Dict*[str, *Any*]\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.ConvertCustomConfig.html", "category": "pytorch docs"} {"text": "torch.Tensor.angle\nTensor.angle() -> Tensor\nSee \"torch.angle()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.angle.html", "category": "pytorch docs"} {"text": "torch.set_default_tensor_type\ntorch.set_default_tensor_type(t)\nSets the default \"torch.Tensor\" type to floating 
point tensor type\n \"t\". This type will also be used as default floating point type for\n type inference in \"torch.tensor()\".\nThe default floating point tensor type is initially\n \"torch.FloatTensor\".\nParameters:\n t (type or string) -- the floating point tensor type\n or its name\nExample:\n >>> torch.tensor([1.2, 3]).dtype # initial default for floating point is torch.float32\n torch.float32\n >>> torch.set_default_tensor_type(torch.DoubleTensor)\n >>> torch.tensor([1.2, 3]).dtype # a new floating point tensor\n torch.float64\n", "source": "https://pytorch.org/docs/stable/generated/torch.set_default_tensor_type.html", "category": "pytorch docs"} {"text": "PairwiseDistance\nclass torch.nn.PairwiseDistance(p=2.0, eps=1e-06, keepdim=False)\nComputes the pairwise distance between input vectors, or between\n columns of input matrices.\nDistances are computed using \"p\"-norm, with constant \"eps\" added to\n avoid division by zero if \"p\" is negative, i.e.:\n \\mathrm{dist}\\left(x, y\\right) = \\left\\Vert x-y + \\epsilon e\n \\right\\Vert_p,\n\nwhere e is the vector of ones and the \"p\"-norm is given by.\n \\Vert x \\Vert _p = \\left( \\sum_{i=1}^n \\vert x_i \\vert ^ p\n \\right) ^ {1/p}.\n\nParameters:\n * p (real, optional) -- the norm degree. Can be\n negative. Default: 2\n * **eps** (*float**, **optional*) -- Small value to avoid\n division by zero. Default: 1e-6\n\n * **keepdim** (*bool**, **optional*) -- Determines whether or\n not to keep the vector dimension. Default: False\n\nShape:\n * Input1: (N, D) or (D) where N = batch dimension and D =\n vector dimension", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PairwiseDistance.html", "category": "pytorch docs"} {"text": "vector dimension*\n * Input2: (N, D) or (D), same shape as the Input1\n\n * Output: (N) or () based on input dimension. If \"keepdim\" is\n \"True\", then (N, 1) or (1) based on input dimension.\n\nExamples::\n >>> pdist = nn.PairwiseDistance(p=2)\n >>> input1 = torch.randn(100, 128)\n >>> input2 = torch.randn(100, 128)\n >>> output = pdist(input1, input2)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.PairwiseDistance.html", "category": "pytorch docs"} {"text": "torch.fft.ifftn\ntorch.fft.ifftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor\nComputes the N dimensional inverse discrete Fourier transform of\n \"input\".\nNote:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the IFFT. If a length \"-1\" is specified, no padding\n is done in that dimension. Default: \"s = [input.size(d) for d\n in dim]\"\n\n * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be\n transformed. Default: all dimensions, or the last \"len(s)\"\n dimensions if \"s\" is given.\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifftn.html", "category": "pytorch docs"} {"text": "dimensions if \"s\" is given.\n * **norm** (*str**, **optional*) --\n\n Normalization mode. 
For the backward transform (\"ifftn()\"),\n these correspond to:\n\n * \"\"forward\"\" - no normalization\n\n * \"\"backward\"\" - normalize by \"1/n\"\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the IFFT\n orthonormal)\n\n Where \"n = prod(s)\" is the logical IFFT size. Calling the\n forward transform (\"fftn()\") with the same normalization mode\n will apply an overall normalization of \"1/n\" between the two\n transforms. This is required to make \"ifftn()\" the exact\n inverse.\n\n Default is \"\"backward\"\" (normalize by \"1/n\").\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nx = torch.rand(10, 10, dtype=torch.complex64)\nifftn = torch.fft.ifftn(x)\n\n\n\nThe discrete Fourier transform is separable, so \"ifftn()\" here is\n equivalent to two one-dimensional \"ifft()\" calls:", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifftn.html", "category": "pytorch docs"} {"text": "\n\n\ntwo_iffts = torch.fft.ifft(torch.fft.ifft(x, dim=0), dim=1)\ntorch.testing.assert_close(ifftn, two_iffts, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifftn.html", "category": "pytorch docs"} {"text": "torch.Tensor.ldexp\nTensor.ldexp(other) -> Tensor\nSee \"torch.ldexp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ldexp.html", "category": "pytorch docs"} {"text": "torch.nn.functional.lp_pool1d\ntorch.nn.functional.lp_pool1d(input, norm_type, kernel_size, stride=None, ceil_mode=False)\nApplies a 1D power-average pooling over an input signal composed of\n several input planes. If the sum of all inputs to the power of p\n is zero, the gradient is set to zero as well.\nSee \"LPPool1d\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.lp_pool1d.html", "category": "pytorch docs"} {"text": "torch.frexp\ntorch.frexp(input, *, out=None) -> (Tensor mantissa, Tensor exponent)\nDecomposes \"input\" into mantissa and exponent tensors such that\n \\text{input} = \\text{mantissa} \\times 2^{\\text{exponent}}.\nThe range of mantissa is the open interval (-1, 1).\nSupports float inputs.\nParameters:\n input (Tensor) -- the input tensor\nKeyword Arguments:\n out (tuple, optional) -- the output tensors\nExample:\n >>> x = torch.arange(9.)\n >>> mantissa, exponent = torch.frexp(x)\n >>> mantissa\n tensor([0.0000, 0.5000, 0.5000, 0.7500, 0.5000, 0.6250, 0.7500, 0.8750, 0.5000])\n >>> exponent\n tensor([0, 1, 2, 2, 3, 3, 3, 3, 4], dtype=torch.int32)\n >>> torch.ldexp(mantissa, exponent)\n tensor([0., 1., 2., 3., 4., 5., 6., 7., 8.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.frexp.html", "category": "pytorch docs"} {"text": "torch.vsplit\ntorch.vsplit(input, indices_or_sections) -> List of Tensors\nSplits \"input\", a tensor with two or more dimensions, into multiple\n tensors vertically according to \"indices_or_sections\". 
Each split\n is a view of \"input\".\nThis is equivalent to calling torch.tensor_split(input,\n indices_or_sections, dim=0) (the split dimension is 0), except that\n if \"indices_or_sections\" is an integer it must evenly divide the\n split dimension or a runtime error will be thrown.\nThis function is based on NumPy's \"numpy.vsplit()\".\nParameters:\n * input (Tensor) -- tensor to split.\n * **indices_or_sections** (*int** or **list** or **tuple of\n ints*) -- See argument in \"torch.tensor_split()\".\n\nExample::\n >>> t = torch.arange(16.0).reshape(4,4)\n >>> t\n tensor([[ 0., 1., 2., 3.],\n [ 4., 5., 6., 7.],\n [ 8., 9., 10., 11.],\n [12., 13., 14., 15.]])\n >>> torch.vsplit(t, 2)", "source": "https://pytorch.org/docs/stable/generated/torch.vsplit.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.vsplit(t, 2)\n (tensor([[0., 1., 2., 3.],\n [4., 5., 6., 7.]]),\n tensor([[ 8., 9., 10., 11.],\n [12., 13., 14., 15.]]))\n >>> torch.vsplit(t, [3, 6])\n (tensor([[ 0., 1., 2., 3.],\n [ 4., 5., 6., 7.],\n [ 8., 9., 10., 11.]]),\n tensor([[12., 13., 14., 15.]]),\n tensor([], size=(0, 4)))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.vsplit.html", "category": "pytorch docs"} {"text": "no_grad\nclass torch.no_grad\nContext-manager that disabled gradient calculation.\nDisabling gradient calculation is useful for inference, when you\n are sure that you will not call \"Tensor.backward()\". It will reduce\n memory consumption for computations that would otherwise have\n requires_grad=True.\nIn this mode, the result of every computation will have\n requires_grad=False, even when the inputs have\n requires_grad=True.\nThis context manager is thread local; it will not affect\n computation in other threads.\nAlso functions as a decorator. (Make sure to instantiate with\n parenthesis.)\nNote:\n No-grad is one of several mechanisms that can enable or disable\n gradients locally see Locally disabling gradient computation for\n more information on how they compare.\n\nNote:\n This API does not apply to forward-mode AD. If you want to\n disable forward AD for a computation, you can unpack your dual\n tensors.\n\nExample::", "source": "https://pytorch.org/docs/stable/generated/torch.no_grad.html", "category": "pytorch docs"} {"text": "tensors.\nExample::\n >>> x = torch.tensor([1.], requires_grad=True)\n >>> with torch.no_grad():\n ... y = x * 2\n >>> y.requires_grad\n False\n >>> @torch.no_grad()\n ... def doubler(x):\n ... return x * 2\n >>> z = doubler(x)\n >>> z.requires_grad\n False", "source": "https://pytorch.org/docs/stable/generated/torch.no_grad.html", "category": "pytorch docs"} {"text": "ConvReLU1d\nclass torch.ao.nn.intrinsic.quantized.ConvReLU1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\nA ConvReLU1d module is a fused module of Conv1d and ReLU\nWe adopt the same interface as \"torch.ao.nn.quantized.Conv1d\".\nVariables:\n torch.ao.nn.quantized.Conv1d (Same as) --", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.ConvReLU1d.html", "category": "pytorch docs"} {"text": "torch.nn.functional.rrelu\ntorch.nn.functional.rrelu(input, lower=1. / 8, upper=1. 
/ 3, training=False, inplace=False) -> Tensor\nRandomized leaky ReLU.\nSee \"RReLU\" for more details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.rrelu.html", "category": "pytorch docs"} {"text": "InstanceNorm1d\nclass torch.ao.nn.quantized.InstanceNorm1d(num_features, weight, bias, scale, zero_point, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)\nThis is the quantized version of \"InstanceNorm1d\".\nAdditional args:\n * scale - quantization scale of the output, type: double.\n * **zero_point** - quantization zero point of the output, type:\n long.\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.InstanceNorm1d.html", "category": "pytorch docs"} {"text": "torch.cuda.max_memory_reserved\ntorch.cuda.max_memory_reserved(device=None)\nReturns the maximum GPU memory managed by the caching allocator in\n bytes for a given device.\nBy default, this returns the peak cached memory since the beginning\n of this program. \"reset_peak_memory_stats()\" can be used to reset\n the starting point in tracking this metric. For example, these two\n functions can measure the peak cached memory amount of each\n iteration in a training loop.\nParameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\nReturn type:\n int\nNote:\n See Memory management for more details about GPU memory\n management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.max_memory_reserved.html", "category": "pytorch docs"} {"text": "torch.nn.functional.affine_grid\ntorch.nn.functional.affine_grid(theta, size, align_corners=None)\nGenerates a 2D or 3D flow field (sampling grid), given a batch of\n affine matrices \"theta\".\nNote:\n This function is often used in conjunction with \"grid_sample()\"\n to build Spatial Transformer Networks .\n\nParameters:\n * theta (Tensor) -- input batch of affine matrices with\n shape (N \\times 2 \\times 3) for 2D or (N \\times 3 \\times 4)\n for 3D\n * **size** (*torch.Size*) -- the target output image size. (N\n \\times C \\times H \\times W for 2D or N \\times C \\times D\n \\times H \\times W for 3D) Example: torch.Size((32, 3, 24, 24))\n\n * **align_corners** (*bool**, **optional*) -- if \"True\",\n consider \"-1\" and \"1\" to refer to the centers of the corner\n pixels rather than the image corners. Refer to \"grid_sample()\"\n for a more complete description. A grid generated by\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.affine_grid.html", "category": "pytorch docs"} {"text": "\"affine_grid()\" should be passed to \"grid_sample()\" with the\n same setting for this option. Default: \"False\"\nReturns:\n output Tensor of size (N \\times H \\times W \\times 2)\nReturn type:\n output (Tensor)\nWarning:\n When \"align_corners = True\", the grid positions depend on the\n pixel size relative to the input image size, and so the locations\n sampled by \"grid_sample()\" will differ for the same input given\n at different resolutions (that is, after being upsampled or\n downsampled). The default behavior up to version 1.2.0 was\n \"align_corners = True\". 
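As a supplement to the torch.cuda.max_memory_reserved entry above, a hedged sketch of the per-iteration peak-memory measurement it describes; the model, loss function, and synthetic batches are hypothetical stand-ins, not part of the API:

    import torch

    model = torch.nn.Linear(512, 10).cuda()
    loss_fn = torch.nn.CrossEntropyLoss()
    batches = [(torch.randn(64, 512, device="cuda"),
                torch.randint(0, 10, (64,), device="cuda")) for _ in range(3)]

    for i, (x, y) in enumerate(batches):
        torch.cuda.reset_peak_memory_stats()       # restart peak tracking for this iteration
        loss = loss_fn(model(x), y)
        loss.backward()
        peak = torch.cuda.max_memory_reserved()    # bytes held by the caching allocator
        print(f"iter {i}: peak reserved = {peak / 2**20:.1f} MiB")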
Since then, the default behavior has been\n changed to \"align_corners = False\", in order to bring it in line\n with the default for \"interpolate()\".\n\nWarning:\n When \"align_corners = True\", 2D affine transforms on 1D data and\n 3D affine transforms on 2D data (that is, when one of the spatial\n dimensions has unit size) are ill-defined, and not an intended\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.affine_grid.html", "category": "pytorch docs"} {"text": "use case. This is not a problem when \"align_corners = False\". Up\n to version 1.2.0, all grid points along a unit dimension were\n considered arbitrarily to be at \"-1\". From version 1.3.0, under\n \"align_corners = True\" all grid points along a unit dimension are\n considered to be at \"0\" (the center of the input image).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.affine_grid.html", "category": "pytorch docs"} {"text": "torch.Tensor.to_sparse_coo\nTensor.to_sparse_coo()\nConvert a tensor to coordinate format.\nExamples:\n >>> dense = torch.randn(5, 5)\n >>> sparse = dense.to_sparse_coo()\n >>> sparse._nnz()\n 25\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_coo.html", "category": "pytorch docs"} {"text": "torch.Tensor.negative_\nTensor.negative_() -> Tensor\nIn-place version of \"negative()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.negative_.html", "category": "pytorch docs"} {"text": "torch.Tensor.expand_as\nTensor.expand_as(other) -> Tensor\nExpand this tensor to the same size as \"other\".\n \"self.expand_as(other)\" is equivalent to\n \"self.expand(other.size())\".\nPlease see \"expand()\" for more information about \"expand\".\nParameters:\n other (\"torch.Tensor\") -- The result tensor has the same\n size as \"other\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.expand_as.html", "category": "pytorch docs"} {"text": "torch.erf\ntorch.erf(input, *, out=None) -> Tensor\nAlias for \"torch.special.erf()\".", "source": "https://pytorch.org/docs/stable/generated/torch.erf.html", "category": "pytorch docs"} {"text": "torch.cuda.get_rng_state\ntorch.cuda.get_rng_state(device='cuda')\nReturns the random number generator state of the specified GPU as a\n ByteTensor.\nParameters:\n device (torch.device or int, optional) -- The\n device to return the RNG state of. Default: \"'cuda'\" (i.e.,\n \"torch.device('cuda')\", the current CUDA device).\nReturn type:\n Tensor\nWarning:\n This function eagerly initializes CUDA.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_rng_state.html", "category": "pytorch docs"} {"text": "torch.Tensor.diff\nTensor.diff(n=1, dim=- 1, prepend=None, append=None) -> Tensor\nSee \"torch.diff()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.diff.html", "category": "pytorch docs"} {"text": "torch.range\ntorch.range(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nReturns a 1-D tensor of size \\left\\lfloor \\frac{\\text{end} -\n \\text{start}}{\\text{step}} \\right\\rfloor + 1 with values from\n \"start\" to \"end\" with step \"step\". Step is the gap between two\n values in the tensor.\n \\text{out}_{i+1} = \\text{out}_i + \\text{step}.\n\nWarning:\n This function is deprecated and will be removed in a future\n release because its behavior is inconsistent with Python's range\n builtin. 
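Tying back to torch.nn.functional.affine_grid above, a small sketch that pairs it with grid_sample() for an identity warp; shapes are illustrative, and the round trip should reproduce the input:

    import torch
    import torch.nn.functional as F

    N, C, H, W = 2, 3, 8, 8
    imgs = torch.randn(N, C, H, W)

    # Identity 2D affine matrix, one copy per sample in the batch.
    theta = torch.tensor([[1., 0., 0.],
                          [0., 1., 0.]]).expand(N, 2, 3)

    grid = F.affine_grid(theta, size=(N, C, H, W), align_corners=False)
    out = F.grid_sample(imgs, grid, align_corners=False)

    print(torch.allclose(out, imgs, atol=1e-6))   # True: the identity grid returns the input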
Instead, use \"torch.arange()\", which produces values in\n [start, end).\n\nParameters:\n * start (float) -- the starting value for the set of\n points. Default: \"0\".\n * **end** (*float*) -- the ending value for the set of points\n\n * **step** (*float*) -- the gap between each pair of adjacent\n points. Default: \"1\".\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.range.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\"). If *dtype* is not\n given, infer the data type from the other input arguments. If\n any of *start*, *end*, or *stop* are floating-point, the\n *dtype* is inferred to be the default dtype, see\n \"get_default_dtype()\". Otherwise, the *dtype* is inferred to\n be *torch.int64*.\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n", "source": "https://pytorch.org/docs/stable/generated/torch.range.html", "category": "pytorch docs"} {"text": "tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\nExample:\n >>> torch.range(1, 4)\n tensor([ 1., 2., 3., 4.])\n >>> torch.range(1, 4, 0.5)\n tensor([ 1.0000, 1.5000, 2.0000, 2.5000, 3.0000, 3.5000, 4.0000])\n", "source": "https://pytorch.org/docs/stable/generated/torch.range.html", "category": "pytorch docs"} {"text": "ConvBnReLU1d\nclass torch.ao.nn.intrinsic.qat.ConvBnReLU1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=None, padding_mode='zeros', eps=1e-05, momentum=0.1, freeze_bn=False, qconfig=None)\nA ConvBnReLU1d module is a module fused from Conv1d, BatchNorm1d\n and ReLU, attached with FakeQuantize modules for weight, used in\n quantization aware training.\nWe combined the interface of \"torch.nn.Conv1d\" and\n \"torch.nn.BatchNorm1d\" and \"torch.nn.ReLU\".\nSimilar to torch.nn.Conv1d, with FakeQuantize modules initialized\n to default.\nVariables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvBnReLU1d.html", "category": "pytorch docs"} {"text": "torch.rand_like\ntorch.rand_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\nReturns a tensor with the same size as \"input\" that is filled with\n random numbers from a uniform distribution on the interval [0, 1).\n \"torch.rand_like(input)\" is equivalent to \"torch.rand(input.size(),\n dtype=input.dtype, layout=input.layout, device=input.device)\".\nParameters:\n input (Tensor) -- the size of \"input\" will determine size\n of the output tensor.\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned tensor. 
Default: if \"None\", defaults to the layout of\n \"input\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n", "source": "https://pytorch.org/docs/stable/generated/torch.rand_like.html", "category": "pytorch docs"} {"text": "returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **memory_format** (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.rand_like.html", "category": "pytorch docs"} {"text": "torch.orgqr\ntorch.orgqr(input, tau) -> Tensor\nAlias for \"torch.linalg.householder_product()\".", "source": "https://pytorch.org/docs/stable/generated/torch.orgqr.html", "category": "pytorch docs"} {"text": "torch.Tensor.detach\nTensor.detach()\nReturns a new Tensor, detached from the current graph.\nThe result will never require gradient.\nThis method also affects forward mode AD gradients and the result\n will never have forward mode AD gradients.\nNote:\n Returned Tensor shares the same storage with the original one.\n In-place modifications on either of them will be seen, and may\n trigger errors in correctness checks. IMPORTANT NOTE: Previously,\n in-place size / stride / storage changes (such as *resize_* /\n *resize_as_* / *set_* / *transpose_*) to the returned tensor also\n update the original tensor. Now, these in-place changes will not\n update the original tensor anymore, and will instead trigger an\n error. For sparse tensors: In-place indices / values changes\n (such as *zero_* / *copy_* / *add_*) to the returned tensor will\n not update the original tensor anymore, and will instead trigger\n an error.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.detach.html", "category": "pytorch docs"} {"text": "torch.clip\ntorch.clip(input, min=None, max=None, *, out=None) -> Tensor\nAlias for \"torch.clamp()\".", "source": "https://pytorch.org/docs/stable/generated/torch.clip.html", "category": "pytorch docs"} {"text": "torch.Tensor.histogram\nTensor.histogram(input, bins, *, range=None, weight=None, density=False)\nSee \"torch.histogram()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.histogram.html", "category": "pytorch docs"} {"text": "CUDAGraph\nclass torch.cuda.CUDAGraph\nWrapper around a CUDA graph.\nWarning:\n This API is in beta and may change in future releases.\n\ncapture_begin(pool=None)\n Begins capturing CUDA work on the current stream.\n\n Typically, you shouldn't call \"capture_begin\" yourself. Use\n \"graph\" or \"make_graphed_callables()\", which call\n \"capture_begin\" internally.\n\n Parameters:\n **pool** (*optional*) -- Token (returned by\n \"graph_pool_handle()\" or \"other_Graph_instance.pool()\") that\n hints this graph may share memory with the indicated pool.\n See Graph memory management.\n\ncapture_end()\n Ends CUDA graph capture on the current stream. After\n \"capture_end\", \"replay\" may be called on this instance.\n\n Typically, you shouldn't call \"capture_end\" yourself. 
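As a supplement to the CUDAGraph entry, a hedged sketch of the usual capture-and-replay pattern through the higher-level torch.cuda.graph context manager (which calls capture_begin()/capture_end() internally). It assumes a CUDA device; the warm-up on a side stream is common practice rather than something this entry mandates:

    import torch

    static_x = torch.zeros(8, device="cuda")
    g = torch.cuda.CUDAGraph()

    # Warm-up on a side stream before capture.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        _ = static_x * 2
    torch.cuda.current_stream().wait_stream(s)

    # Capture: record the kernels launched in this block into the graph.
    with torch.cuda.graph(g):
        static_y = static_x * 2

    # Replay the captured work against the same static buffers.
    static_x.fill_(3.0)
    g.replay()
    torch.cuda.synchronize()
    print(static_y)   # tensor of 6.0s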
Use\n \"graph\" or \"make_graphed_callables()\", which call \"capture_end\"\n internally.\n\ndebug_dump(debug_path)\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.CUDAGraph.html", "category": "pytorch docs"} {"text": "debug_dump(debug_path)\n Parameters:\n **debug_path** (*required*) -- Path to dump the graph to.\n\n Calls a debugging function to dump the graph if the debugging is\n enabled via CUDAGraph.enable_debug_mode()\n\nenable_debug_mode()\n Enables debugging mode for CUDAGraph.debug_dump.\n\npool()\n Returns an opaque token representing the id of this graph's\n memory pool. This id can optionally be passed to another graph's\n \"capture_begin\", which hints the other graph may share the same\n memory pool.\n\nreplay()\n Replays the CUDA work captured by this graph.\n\nreset()\n Deletes the graph currently held by this instance.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.CUDAGraph.html", "category": "pytorch docs"} {"text": "torch.are_deterministic_algorithms_enabled\ntorch.are_deterministic_algorithms_enabled()\nReturns True if the global deterministic flag is turned on. Refer\n to \"torch.use_deterministic_algorithms()\" documentation for more\n details.", "source": "https://pytorch.org/docs/stable/generated/torch.are_deterministic_algorithms_enabled.html", "category": "pytorch docs"} {"text": "torch.Tensor.squeeze\nTensor.squeeze(dim=None) -> Tensor\nSee \"torch.squeeze()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.squeeze.html", "category": "pytorch docs"} {"text": "Softmax2d\nclass torch.nn.Softmax2d\nApplies SoftMax over features to each spatial location.\nWhen given an image of \"Channels x Height x Width\", it will apply\n Softmax to each location (Channels, h_i, w_j)\nShape:\n * Input: (N, C, H, W) or (C, H, W).\n * Output: (N, C, H, W) or (C, H, W) (same shape as input)\n\nReturns:\n a Tensor of the same dimension and shape as the input with\n values in the range [0, 1]\nReturn type:\n None\nExamples:\n >>> m = nn.Softmax2d()\n >>> # you softmax over the 2nd dimension\n >>> input = torch.randn(2, 3, 12, 13)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softmax2d.html", "category": "pytorch docs"} {"text": "torch.chunk\ntorch.chunk(input, chunks, dim=0) -> List of Tensors\nAttempts to split a tensor into the specified number of chunks.\n Each chunk is a view of the input tensor.\nNote:\n This function may return fewer than the specified number of\n chunks!\n\nSee also:\n \"torch.tensor_split()\" a function that always returns exactly the\n specified number of chunks\n\nIf the tensor size along the given dimension \"dim\" is divisible by\n \"chunks\", all returned chunks will be the same size. If the tensor\n size along the given dimension \"dim\" is not divisible by \"chunks\",\n all returned chunks will be the same size, except the last one. 
If\n such division is not possible, this function may return fewer than\n the specified number of chunks.\nParameters:\n * input (Tensor) -- the tensor to split\n * **chunks** (*int*) -- number of chunks to return\n\n * **dim** (*int*) -- dimension along which to split the tensor\n\n-[ Example ]-", "source": "https://pytorch.org/docs/stable/generated/torch.chunk.html", "category": "pytorch docs"} {"text": "-[ Example ]-\n\n\n\ntorch.arange(11).chunk(6)\n (tensor([0, 1]),\n tensor([2, 3]),\n tensor([4, 5]),\n tensor([6, 7]),\n tensor([8, 9]),\n tensor([10]))\ntorch.arange(12).chunk(6)\n (tensor([0, 1]),\n tensor([2, 3]),\n tensor([4, 5]),\n tensor([6, 7]),\n tensor([8, 9]),\n tensor([10, 11]))\ntorch.arange(13).chunk(6)\n (tensor([0, 1, 2]),\n tensor([3, 4, 5]),\n tensor([6, 7, 8]),\n tensor([ 9, 10, 11]),\n tensor([12]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.chunk.html", "category": "pytorch docs"} {"text": "torch.nn.functional.group_norm\ntorch.nn.functional.group_norm(input, num_groups, weight=None, bias=None, eps=1e-05)\nApplies Group Normalization for last certain number of dimensions.\nSee \"GroupNorm\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.group_norm.html", "category": "pytorch docs"} {"text": "SGD\nclass torch.optim.SGD(params, lr=, momentum=0, dampening=0, weight_decay=0, nesterov=False, *, maximize=False, foreach=None, differentiable=False)\nImplements stochastic gradient descent (optionally with momentum).\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{input} : \\gamma \\text{ (lr)}, \\: \\theta_0\n \\text{ (params)}, \\: f(\\theta) \\text{ (objective)}, \\:\n \\lambda \\text{ (weight decay)}, \\\\\n &\\hspace{13mm} \\:\\mu \\text{ (momentum)}, \\:\\tau \\text{\n (dampening)}, \\:\\textit{ nesterov,}\\:\\textit{ maximize}\n \\\\[-1.ex] &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\\\ &\\hspace{5mm}g_t\n \\leftarrow \\nabla_{\\theta} f_t (\\theta_{t-1}) \\\\\n &\\hspace{5mm}\\textbf{if} \\: \\lambda \\neq 0\n \\\\ &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda\n \\theta_{t-1} \\\\\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"} {"text": "\\theta_{t-1} \\\n &\\hspace{5mm}\\textbf{if} \\: \\mu \\neq 0\n \\ &\\hspace{10mm}\\textbf{if} \\: t > 1\n \\ &\\hspace{15mm} \\textbf{b}t \\leftarrow \\mu\n \\textbf{b} + (1-\\tau) g_t \\\n &\\hspace{10mm}\\textbf{else}\n \\ &\\hspace{15mm} \\textbf{b}t \\leftarrow g_t\n \\ &\\hspace{10mm}\\textbf{if} \\: \\textit{nesterov}\n \\ &\\hspace{15mm} g_t \\leftarrow g + \\mu \\textbf{b}t\n \\ &\\hspace{10mm}\\textbf{else}\n \\[-1.ex] &\\hspace{15mm} g_t \\leftarrow \\textbf{b}_t\n \\ &\\hspace{5mm}\\textbf{if} \\: \\textit{maximize}\n \\ &\\hspace{10mm}\\theta_t \\leftarrow \\theta + \\gamma\n g_t \\[-1.ex] &\\hspace{5mm}\\textbf{else}\n \\[-1.ex] &\\hspace{10mm}\\theta_t \\leftarrow \\theta_{t-1} -\n \\gamma g_t \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"} {"text": "\\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\nNesterov momentum is based on the formula from On the importance of\n initialization and momentum in deep learning.\nParameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * **lr** 
(*float*) -- learning rate\n\n * **momentum** (*float**, **optional*) -- momentum factor\n (default: 0)\n\n * **weight_decay** (*float**, **optional*) -- weight decay (L2\n penalty) (default: 0)\n\n * **dampening** (*float**, **optional*) -- dampening for\n momentum (default: 0)\n\n * **nesterov** (*bool**, **optional*) -- enables Nesterov\n momentum (default: False)\n\n * **maximize** (*bool**, **optional*) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n\n * **foreach** (*bool**, **optional*) -- whether foreach\n implementation of optimizer is used (default: None)\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"} {"text": "\ndifferentiable (bool, optional) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n\n-[ Example ]-\n\n\n\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)\noptimizer.zero_grad()\nloss_fn(model(input), target).backward()\noptimizer.step()\n\n\n\nNote:\n The implementation of SGD with Momentum/Nesterov subtly differs\n from Sutskever et. al. and implementations in some other\n frameworks.Considering the specific case of Momentum, the update\n can be written as\n\n \\begin{aligned} v_{t+1} & = \\mu * v_{t} + g_{t+1}, \\\\\n p_{t+1} & = p_{t} - \\text{lr} * v_{t+1}, \\end{aligned}\n\n where p, g, v and \\mu denote the parameters, gradient, velocity,\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"} {"text": "and momentum respectively.This is in contrast to Sutskever et.\n al. and other frameworks which employ an update of the form\n \\begin{aligned} v_{t+1} & = \\mu * v_{t} + \\text{lr} *\n g_{t+1}, \\\\ p_{t+1} & = p_{t} - v_{t+1}. \\end{aligned}\n\n The Nesterov version is analogously modified.Moreover, the\n initial value of the momentum buffer is set to the gradient value\n at the first step. This is in contrast to some other frameworks\n that initialize it to all zeros.\n\nadd_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"} {"text": "Parameters:\n state_dict (dict) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\nregister_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. 
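A minimal sketch of registering the post-step hook described here, using SGD; the call counting is purely illustrative:

    import torch

    model = torch.nn.Linear(4, 2)
    opt = torch.optim.SGD(model.parameters(), lr=0.1)

    calls = {"n": 0}

    def post_step(optimizer, args, kwargs):
        # Runs after every optimizer.step(); signature follows the docs above.
        calls["n"] += 1

    handle = opt.register_step_post_hook(post_step)

    loss = model(torch.randn(8, 4)).sum()
    loss.backward()
    opt.step()              # post_step fires here

    print(calls["n"])       # 1
    handle.remove()         # detach the hook when no longer needed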
It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None\n\n The \"optimizer\" argument is the optimizer instance being used.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n\n The \"optimizer\" argument is the optimizer instance being used.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"} {"text": "If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n **set_to_none** (*bool*) -- instead of setting to zero, set\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"} {"text": "the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SGD.html", "category": "pytorch docs"} {"text": "torch.std\ntorch.std(input, dim=None, *, correction=1, keepdim=False, out=None) -> Tensor\nCalculates the standard deviation over the dimensions specified by\n \"dim\". 
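To make the set_to_none behavior of zero_grad described above concrete, a small sketch in which a single parameter stands in for a model (names are illustrative):

    import torch

    w = torch.nn.Parameter(torch.ones(3))
    opt = torch.optim.SGD([w], lr=0.1)

    w.sum().backward()
    opt.zero_grad(set_to_none=False)
    print(w.grad)      # tensor([0., 0., 0.]) -- zeroed in place

    w.sum().backward()
    opt.zero_grad(set_to_none=True)
    print(w.grad)      # None -- the gradient tensor is released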
\"dim\" can be a single dimension, list of dimensions, or\n \"None\" to reduce over all dimensions.\nThe standard deviation (\\sigma) is calculated as\n \\sigma = \\sqrt{\\frac{1}{N - \\delta\n N}\\sum_{i=0}^{N-1}(x_i-\\bar{x})^2}\n\nwhere x is the sample set of elements, \\bar{x} is the sample mean,\n N is the number of samples and \\delta N is the \"correction\".\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints*) -- the dimension or\n dimensions to reduce.\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.std.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n * correction (int) --\n difference between the sample size and sample degrees of\n freedom. Defaults to Bessel's correction, \"correction=1\".\n\n Changed in version 2.0: Previously this argument was called\n \"unbiased\" and was a boolean with \"True\" corresponding to\n \"correction=1\" and \"False\" being \"correction=0\".\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\n-[ Example ]-\n\n\n\na = torch.tensor(\n ... [[ 0.2035, 1.2959, 1.8101, -0.4644],\n ... [ 1.5027, -0.3270, 0.5905, 0.6538],\n ... [-1.5745, 1.3330, -0.5596, -0.6548],\n ... [ 0.1264, -0.5080, 1.6420, 0.1992]])\ntorch.std(a, dim=1, keepdim=True)\n tensor([[1.0311],\n [0.7477],\n [1.2204],\n [0.9087]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.std.html", "category": "pytorch docs"} {"text": "torch.linalg.inv\ntorch.linalg.inv(A, *, out=None) -> Tensor\nComputes the inverse of a square matrix if it exists. Throws a\n RuntimeError if the matrix is not invertible.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, for a matrix A \\in\n \\mathbb{K}^{n \\times n}, its inverse matrix A^{-1} \\in\n \\mathbb{K}^{n \\times n} (if it exists) is defined as\n A^{-1}A = AA^{-1} = \\mathrm{I}_n\n\nwhere \\mathrm{I}_n is the n-dimensional identity matrix.\nThe inverse matrix exists if and only if A is invertible. In this\n case, the inverse is unique.\nSupports input of float, double, cfloat and cdouble dtypes. 
Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nNote:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n\nNote:\n Consider using \"torch.linalg.solve()\" if possible for multiplying\n a matrix on the left by the inverse, as:\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.inv.html", "category": "pytorch docs"} {"text": "a matrix on the left by the inverse, as:\n linalg.solve(A, B) == linalg.inv(A) @ B # When B is a matrix\n\n It is always preferred to use \"solve()\" when possible, as it is\n faster and more numerically stable than computing the inverse\n explicitly.\n\nSee also:\n \"torch.linalg.pinv()\" computes the pseudoinverse (Moore-Penrose\n inverse) of matrices of any shape.\n\n \"torch.linalg.solve()\" computes \"A\"*.inv() @ *\"B\" with a\n numerically stable algorithm.\n\nParameters:\n A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions consisting of invertible matrices.\nKeyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\nRaises:\n RuntimeError -- if the matrix \"A\" or any matrix in the batch\n of matrices \"A\" is not invertible.\nExamples:\n >>> A = torch.randn(4, 4)\n >>> Ainv = torch.linalg.inv(A)\n >>> torch.dist(A @ Ainv, torch.eye(4))\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.inv.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.dist(A @ Ainv, torch.eye(4))\n tensor(1.1921e-07)\n\n\n\n >>> A = torch.randn(2, 3, 4, 4) # Batch of matrices\n >>> Ainv = torch.linalg.inv(A)\n >>> torch.dist(A @ Ainv, torch.eye(4))\n tensor(1.9073e-06)\n\n >>> A = torch.randn(4, 4, dtype=torch.complex128) # Complex matrix\n >>> Ainv = torch.linalg.inv(A)\n >>> torch.dist(A @ Ainv, torch.eye(4))\n tensor(7.5107e-16, dtype=torch.float64)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.inv.html", "category": "pytorch docs"} {"text": "ChannelShuffle\nclass torch.nn.ChannelShuffle(groups)\nDivide the channels in a tensor of shape (, C , H, W) into g\n groups and rearrange them as (, C \\frac g, g, H, W), while keeping\n the original tensor shape.\nParameters:\n groups (int) -- number of groups to divide channels in.\nExamples:\n >>> channel_shuffle = nn.ChannelShuffle(2)\n >>> input = torch.randn(1, 4, 2, 2)\n >>> print(input)\n [[[[1, 2],\n [3, 4]],\n [[5, 6],\n [7, 8]],\n [[9, 10],\n [11, 12]],\n [[13, 14],\n [15, 16]],\n ]]\n >>> output = channel_shuffle(input)\n >>> print(output)\n [[[[1, 2],\n [3, 4]],\n [[9, 10],\n [11, 12]],\n [[5, 6],\n [7, 8]],\n [[13, 14],\n [15, 16]],\n ]]\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ChannelShuffle.html", "category": "pytorch docs"} {"text": "torch.gt\ntorch.gt(input, other, *, out=None) -> Tensor\nComputes \\text{input} > \\text{other} element-wise.\nThe second argument can be a number or a tensor whose shape is\n broadcastable with the first argument.\nParameters:\n * input (Tensor) -- the tensor to compare\n * **other** (*Tensor** or **float*) -- the tensor or value to\n compare\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nReturns:\n A boolean tensor that is True where \"input\" is greater than\n \"other\" and False elsewhere\nExample:\n >>> torch.gt(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\n tensor([[False, True], [False, False]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.gt.html", 
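Following the note above that "torch.linalg.solve()" is preferred over materializing the inverse, a hedged sketch of the two equivalent ways to apply A^{-1} to B (the well-conditioned A is constructed only for illustration):

    import torch

    A = torch.randn(4, 4) + 4 * torch.eye(4)   # reasonably well-conditioned
    B = torch.randn(4, 2)
    X1 = torch.linalg.solve(A, B)              # preferred: faster, more stable
    X2 = torch.linalg.inv(A) @ B               # explicit inverse, same result
    print(torch.allclose(X1, X2, atol=1e-4))   # True up to numerical error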
"category": "pytorch docs"} {"text": "Bilinear\nclass torch.nn.Bilinear(in1_features, in2_features, out_features, bias=True, device=None, dtype=None)\nApplies a bilinear transformation to the incoming data: y = x_1^T A\n x_2 + b\nParameters:\n * in1_features (int) -- size of each first input sample\n * **in2_features** (*int*) -- size of each second input sample\n\n * **out_features** (*int*) -- size of each output sample\n\n * **bias** (*bool*) -- If set to False, the layer will not learn\n an additive bias. Default: \"True\"\n\nShape:\n * Input1: (*, H_{in1}) where H_{in1}=\\text{in1_features} and *\n means any number of additional dimensions including none. All\n but the last dimension of the inputs should be the same.\n * Input2: (*, H_{in2}) where H_{in2}=\\text{in2\\_features}.\n\n * Output: (*, H_{out}) where H_{out}=\\text{out\\_features} and\n all but the last dimension are the same shape as the input.\n\nVariables:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Bilinear.html", "category": "pytorch docs"} {"text": "Variables:\n * weight (torch.Tensor) -- the learnable weights of the\n module of shape (\\text{out_features}, \\text{in1_features},\n \\text{in2_features}). The values are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}), where k =\n \\frac{1}{\\text{in1_features}}\n * **bias** -- the learnable bias of the module of shape\n (\\text{out\\_features}). If \"bias\" is \"True\", the values are\n initialized from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}), where k =\n \\frac{1}{\\text{in1\\_features}}\n\nExamples:\n >>> m = nn.Bilinear(20, 30, 40)\n >>> input1 = torch.randn(128, 20)\n >>> input2 = torch.randn(128, 30)\n >>> output = m(input1, input2)\n >>> print(output.size())\n torch.Size([128, 40])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Bilinear.html", "category": "pytorch docs"} {"text": "torch.Tensor.logical_and_\nTensor.logical_and_() -> Tensor\nIn-place version of \"logical_and()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_and_.html", "category": "pytorch docs"} {"text": "torch.arange\ntorch.arange(start=0, end, step=1, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nReturns a 1-D tensor of size \\left\\lceil \\frac{\\text{end} -\n \\text{start}}{\\text{step}} \\right\\rceil with values from the\n interval \"[start, end)\" taken with common difference \"step\"\n beginning from start.\nNote that non-integer \"step\" is subject to floating point rounding\n errors when comparing against \"end\"; to avoid inconsistency, we\n advise adding a small epsilon to \"end\" in such cases.\n \\text{out}_{{i+1}} = \\text{out}_{i} + \\text{step}\n\nParameters:\n * start (Number) -- the starting value for the set of\n points. Default: \"0\".\n * **end** (*Number*) -- the ending value for the set of points\n\n * **step** (*Number*) -- the gap between each pair of adjacent\n points. Default: \"1\".\n\nKeyword Arguments:\n * out (Tensor, optional) -- the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.arange.html", "category": "pytorch docs"} {"text": "\n\ndtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\"). If dtype is not\n given, infer the data type from the other input arguments. If\n any of start, end, or stop are floating-point, the\n dtype is inferred to be the default dtype, see\n \"get_default_dtype()\". 
Otherwise, the dtype is inferred to\n be torch.int64.\n\n\nlayout (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n\ndevice (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.arange.html", "category": "pytorch docs"} {"text": "tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\nExample:\n >>> torch.arange(5)\n tensor([ 0, 1, 2, 3, 4])\n >>> torch.arange(1, 4)\n tensor([ 1, 2, 3])\n >>> torch.arange(1, 2.5, 0.5)\n tensor([ 1.0000, 1.5000, 2.0000])\n", "source": "https://pytorch.org/docs/stable/generated/torch.arange.html", "category": "pytorch docs"} {"text": "torch.signal.windows.hann\ntorch.signal.windows.hann(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\nComputes the Hann window.\nThe Hann window is defined as follows:\n w_n = \\frac{1}{2}\\ \\left[1 - \\cos \\left( \\frac{2 \\pi n}{M - 1}\n \\right)\\right] = \\sin^2 \\left( \\frac{\\pi n}{M - 1} \\right)\n\nThe window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\nParameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\nKeyword Arguments:\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.hann.html", "category": "pytorch docs"} {"text": "(see \"torch.set_default_tensor_type()\").\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nReturn type:\n Tensor\nExamples:\n >>> # Generates a symmetric Hann window.\n >>> torch.signal.windows.hann(10)\n tensor([0.0000, 0.1170, 0.4132, 0.7500, 0.9698, 0.9698, 0.7500, 0.4132, 0.1170, 0.0000])\n\n >>> # Generates a periodic Hann window.\n >>> torch.signal.windows.hann(10, sym=False)\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.hann.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.signal.windows.hann(10, sym=False)\n tensor([0.0000, 0.0955, 0.3455, 0.6545, 0.9045, 1.0000, 0.9045, 0.6545, 0.3455, 0.0955])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.hann.html", "category": "pytorch docs"} {"text": "torch.maximum\ntorch.maximum(input, other, *, out=None) -> Tensor\nComputes the element-wise maximum of \"input\" and \"other\".\nNote:\n If one of the elements being compared is a NaN, then that element\n is returned. \"maximum()\" is not supported for tensors with\n complex dtypes.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **other** (*Tensor*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor((1, 2, -1))\n >>> b = torch.tensor((3, 0, 4))\n >>> torch.maximum(a, b)\n tensor([3, 2, 4])\n", "source": "https://pytorch.org/docs/stable/generated/torch.maximum.html", "category": "pytorch docs"} {"text": "strict_fusion\nclass torch.jit.strict_fusion\nThis class errors if not all nodes have been fused in inference, or\n symbolically differentiated in training.\nExample:\nForcing fusion of additions.\n @torch.jit.script\n def foo(x):\n with torch.jit.strict_fusion():\n return x + x + x\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.strict_fusion.html", "category": "pytorch docs"} {"text": "Linear\nclass torch.nn.Linear(in_features, out_features, bias=True, device=None, dtype=None)\nApplies a linear transformation to the incoming data: y = xA^T + b\nThis module supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\nParameters:\n * in_features (int) -- size of each input sample\n * **out_features** (*int*) -- size of each output sample\n\n * **bias** (*bool*) -- If set to \"False\", the layer will not\n learn an additive bias. Default: \"True\"\n\nShape:\n * Input: (*, H_{in}) where * means any number of dimensions\n including none and H_{in} = \\text{in_features}.\n * Output: (*, H_{out}) where all but the last dimension are the\n same shape as the input and H_{out} = \\text{out\\_features}.\n\nVariables:\n * weight (torch.Tensor) -- the learnable weights of the\n module of shape (\\text{out_features}, \\text{in_features}).", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Linear.html", "category": "pytorch docs"} {"text": "The values are initialized from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}), where k = \\frac{1}{\\text{in_features}}\n * **bias** -- the learnable bias of the module of shape\n (\\text{out\\_features}). 
If \"bias\" is \"True\", the values are\n initialized from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{in\\_features}}\n\nExamples:\n >>> m = nn.Linear(20, 30)\n >>> input = torch.randn(128, 20)\n >>> output = m(input)\n >>> print(output.size())\n torch.Size([128, 30])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Linear.html", "category": "pytorch docs"} {"text": "torch.cumulative_trapezoid\ntorch.cumulative_trapezoid(y, x=None, *, dx=None, dim=- 1) -> Tensor\nCumulatively computes the trapezoidal rule along \"dim\". By default\n the spacing between elements is assumed to be 1, but \"dx\" can be\n used to specify a different constant spacing, and \"x\" can be used\n to specify arbitrary spacing along \"dim\".\nFor more details, please read \"torch.trapezoid()\". The difference\n between \"torch.trapezoid()\" and this function is that,\n \"torch.trapezoid()\" returns a value for each integration, where as\n this function returns a cumulative value for every spacing within\n the integration. This is analogous to how .sum returns a value\n and .cumsum returns a cumulative sum.\nParameters:\n * y (Tensor) -- Values to use when computing the\n trapezoidal rule.\n * **x** (*Tensor*) -- If specified, defines spacing between\n values as specified above.\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.cumulative_trapezoid.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n * dx (float) -- constant spacing between values. If\n neither \"x\" or \"dx\" are specified then this defaults to 1.\n Effectively multiplies the result by its value.\n * **dim** (*int*) -- The dimension along which to compute the\n trapezoidal rule. The last (inner-most) dimension by default.\n\nExamples:\n >>> # Cumulatively computes the trapezoidal rule in 1D, spacing is implicitly 1.\n >>> y = torch.tensor([1, 5, 10])\n >>> torch.cumulative_trapezoid(y)\n tensor([3., 10.5])\n\n >>> # Computes the same trapezoidal rule directly up to each element to verify\n >>> (1 + 5) / 2\n 3.0\n >>> (1 + 10 + 10) / 2\n 10.5\n\n >>> # Cumulatively computes the trapezoidal rule in 1D with constant spacing of 2\n >>> # NOTE: the result is the same as before, but multiplied by 2\n >>> torch.cumulative_trapezoid(y, dx=2)\n tensor([6., 21.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.cumulative_trapezoid.html", "category": "pytorch docs"} {"text": "tensor([6., 21.])\n >>> # Cumulatively computes the trapezoidal rule in 1D with arbitrary spacing\n >>> x = torch.tensor([1, 3, 6])\n >>> torch.cumulative_trapezoid(y, x)\n tensor([6., 28.5])\n\n >>> # Computes the same trapezoidal rule directly up to each element to verify\n >>> ((3 - 1) * (1 + 5)) / 2\n 6.0\n >>> ((3 - 1) * (1 + 5) + (6 - 3) * (5 + 10)) / 2\n 28.5\n\n >>> # Cumulatively computes the trapezoidal rule for each row of a 3x3 matrix\n >>> y = torch.arange(9).reshape(3, 3)\n tensor([[0, 1, 2],\n [3, 4, 5],\n [6, 7, 8]])\n >>> torch.cumulative_trapezoid(y)\n tensor([[ 0.5, 2.],\n [ 3.5, 8.],\n [ 6.5, 14.]])\n\n >>> # Cumulatively computes the trapezoidal rule for each column of the matrix\n >>> torch.cumulative_trapezoid(y, dim=0)\n tensor([[ 1.5, 2.5, 3.5],\n [ 6.0, 8.0, 10.0]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.cumulative_trapezoid.html", "category": "pytorch docs"} {"text": "[ 6.0, 8.0, 10.0]])\n >>> # Cumulatively computes the trapezoidal rule for each row of a 3x3 ones matrix\n >>> # with the same arbitrary spacing\n >>> y = torch.ones(3, 3)\n >>> 
x = torch.tensor([1, 3, 6])\n >>> torch.cumulative_trapezoid(y, x)\n tensor([[2., 5.],\n [2., 5.],\n [2., 5.]])\n\n >>> # Cumulatively computes the trapezoidal rule for each row of a 3x3 ones matrix\n >>> # with different arbitrary spacing per row\n >>> y = torch.ones(3, 3)\n >>> x = torch.tensor([[1, 2, 3], [1, 3, 5], [1, 4, 7]])\n >>> torch.cumulative_trapezoid(y, x)\n tensor([[1., 2.],\n [2., 4.],\n [3., 6.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.cumulative_trapezoid.html", "category": "pytorch docs"} {"text": "BatchNorm2d\nclass torch.ao.nn.quantized.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, device=None, dtype=None)\nThis is the quantized version of \"BatchNorm2d\".", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.BatchNorm2d.html", "category": "pytorch docs"} {"text": "ParameterDict\nclass torch.nn.ParameterDict(parameters=None)\nHolds parameters in a dictionary.\nParameterDict can be indexed like a regular Python dictionary, but\n Parameters it contains are properly registered, and will be visible\n by all Module methods. Other objects are treated as would be done\n by a regular Python dictionary\n\"ParameterDict\" is an ordered dictionary. \"update()\" with other\n unordered mapping types (e.g., Python's plain \"dict\") does not\n preserve the order of the merged mapping. On the other hand,\n \"OrderedDict\" or another \"ParameterDict\" will preserve their\n ordering.\nNote that the constructor, assigning an element of the dictionary\n and the \"update()\" method will convert any \"Tensor\" into\n \"Parameter\".\nParameters:\n values (iterable, optional) -- a mapping (dictionary)\n of (string : Any) or an iterable of key-value pairs of type\n (string, Any)\nExample:\n class MyModule(nn.Module):\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ParameterDict.html", "category": "pytorch docs"} {"text": "Example:\n class MyModule(nn.Module):\n def __init__(self):\n super(MyModule, self).__init__()\n self.params = nn.ParameterDict({\n 'left': nn.Parameter(torch.randn(5, 10)),\n 'right': nn.Parameter(torch.randn(5, 10))\n })\n\n def forward(self, x, choice):\n x = self.params[choice].mm(x)\n return x\n\nclear()\n Remove all items from the ParameterDict.\n\ncopy()\n Returns a copy of this \"ParameterDict\" instance.\n\n Return type:\n *ParameterDict*\n\nfromkeys(keys, default=None)\n Return a new ParameterDict with the keys provided\n\n Parameters:\n * **keys** (*iterable**, **string*) -- keys to make the new\n ParameterDict from\n\n * **default** (*Parameter**, **optional*) -- value to set for\n all keys\n\n Return type:\n *ParameterDict*\n\nget(key, default=None)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ParameterDict.html", "category": "pytorch docs"} {"text": "get(key, default=None)\n Return the parameter associated with key if present. 
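To make the .sum/.cumsum analogy in the torch.cumulative_trapezoid entry above concrete, a short sketch showing that its last element matches torch.trapezoid over the same samples:

    import torch

    y = torch.tensor([1.0, 5.0, 10.0])
    print(torch.trapezoid(y))              # tensor(10.5000) -- one value
    print(torch.cumulative_trapezoid(y))   # tensor([ 3.0000, 10.5000]) -- running value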
Otherwise\n return default if provided, None if not.\n\n Parameters:\n * **key** (*str*) -- key to get from the ParameterDict\n\n * **default** (*Parameter**, **optional*) -- value to return\n if key not present\n\n Return type:\n *Any*\n\nitems()\n Return an iterable of the ParameterDict key/value pairs.\n\n Return type:\n *Iterable*[*Tuple*[str, *Any*]]\n\nkeys()\n Return an iterable of the ParameterDict keys.\n\n Return type:\n *Iterable*[str]\n\npop(key)\n Remove key from the ParameterDict and return its parameter.\n\n Parameters:\n **key** (*str*) -- key to pop from the ParameterDict\n\n Return type:\n *Any*\n\npopitem()\n Remove and return the last inserted *(key, parameter)* pair from\n the ParameterDict\n\n Return type:\n *Tuple*[str, *Any*]\n\nsetdefault(key, default=None)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ParameterDict.html", "category": "pytorch docs"} {"text": "setdefault(key, default=None)\n If key is in the ParameterDict, return its value. If not, insert\n *key* with a parameter *default* and return *default*. *default*\n defaults to *None*.\n\n Parameters:\n * **key** (*str*) -- key to set default for\n\n * **default** (*Any*) -- the parameter set to the key\n\n Return type:\n *Any*\n\nupdate(parameters)\n Update the \"ParameterDict\" with the key-value pairs from a\n mapping or an iterable, overwriting existing keys.\n\n Note:\n\n If \"parameters\" is an \"OrderedDict\", a \"ParameterDict\", or an\n iterable of key-value pairs, the order of new elements in it\n is preserved.\n\n Parameters:\n **parameters** (*iterable*) -- a mapping (dictionary) from\n string to \"Parameter\", or an iterable of key-value pairs of\n type (string, \"Parameter\")\n\nvalues()\n Return an iterable of the ParameterDict values.\n\n Return type:\n *Iterable*[*Any*]\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ParameterDict.html", "category": "pytorch docs"} {"text": "torch.bitwise_and\ntorch.bitwise_and(input, other, *, out=None) -> Tensor\nComputes the bitwise AND of \"input\" and \"other\". The input tensor\n must be of integral or Boolean types. For bool tensors, it computes\n the logical AND.\nParameters:\n * input -- the first input tensor\n * **other** -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.bitwise_and(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))\n tensor([1, 0, 3], dtype=torch.int8)\n >>> torch.bitwise_and(torch.tensor([True, True, False]), torch.tensor([False, True, False]))\n tensor([ False, True, False])\n", "source": "https://pytorch.org/docs/stable/generated/torch.bitwise_and.html", "category": "pytorch docs"} {"text": "torch.nn.functional.selu\ntorch.nn.functional.selu(input, inplace=False) -> Tensor\nApplies element-wise, \\text{SELU}(x) = scale * (\\max(0,x) + \\min(0,\n \\alpha * (\\exp(x) - 1))), with\n \\alpha=1.6732632423543772848170429916717 and\n scale=1.0507009873554804934193349852946.\nSee \"SELU\" for more details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.selu.html", "category": "pytorch docs"} {"text": "torch.view_as_real\ntorch.view_as_real(input) -> Tensor\nReturns a view of \"input\" as a real tensor. 
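The selu functional entry above carries no usage example; a minimal sketch showing the fixed alpha/scale behavior:

    import torch
    import torch.nn.functional as F

    x = torch.tensor([-1.0, 0.0, 1.0])
    print(F.selu(x))   # tensor([-1.1113,  0.0000,  1.0507])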
For an input complex\n tensor of \"size\" m1, m2, \\dots, mi, this function returns a new\n real tensor of size m1, m2, \\dots, mi, 2, where the last dimension\n of size 2 represents the real and imaginary components of complex\n numbers.\nWarning:\n \"view_as_real()\" is only supported for tensors with \"complex\n dtypes\".\n\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> x=torch.randn(4, dtype=torch.cfloat)\n >>> x\n tensor([(0.4737-0.3839j), (-0.2098-0.6699j), (0.3470-0.9451j), (-0.5174-1.3136j)])\n >>> torch.view_as_real(x)\n tensor([[ 0.4737, -0.3839],\n [-0.2098, -0.6699],\n [ 0.3470, -0.9451],\n [-0.5174, -1.3136]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.view_as_real.html", "category": "pytorch docs"} {"text": "torch.Tensor.sspaddmm\nTensor.sspaddmm(mat1, mat2, *, beta=1, alpha=1) -> Tensor\nSee \"torch.sspaddmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sspaddmm.html", "category": "pytorch docs"} {"text": "torch.less\ntorch.less(input, other, *, out=None) -> Tensor\nAlias for \"torch.lt()\".", "source": "https://pytorch.org/docs/stable/generated/torch.less.html", "category": "pytorch docs"} {"text": "torch.Tensor.bitwise_right_shift\nTensor.bitwise_right_shift(other) -> Tensor\nSee \"torch.bitwise_right_shift()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_right_shift.html", "category": "pytorch docs"} {"text": "StreamContext\nclass torch.cuda.StreamContext(stream)\nContext-manager that selects a given stream.\nAll CUDA kernels queued within its context will be enqueued on a\n selected stream.\nParameters:\n Stream (Stream) -- selected stream. This manager is a no-\n op if it's \"None\".\nNote:\n Streams are per-device.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.StreamContext.html", "category": "pytorch docs"} {"text": "torch.Tensor.sgn\nTensor.sgn() -> Tensor\nSee \"torch.sgn()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sgn.html", "category": "pytorch docs"} {"text": "torch.fft.ifft\ntorch.fft.ifft(input, n=None, dim=- 1, norm=None, *, out=None) -> Tensor\nComputes the one dimensional inverse discrete Fourier transform of\n \"input\".\nNote:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimension.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **n** (*int**, **optional*) -- Signal length. If given, the\n input will either be zero-padded or trimmed to this length\n before computing the IFFT.\n\n * **dim** (*int**, **optional*) -- The dimension along which to\n take the one dimensional IFFT.\n\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the backward transform (\"ifft()\"),\n these correspond to:\n\n * \"\"forward\"\" - no normalization\n\n * \"\"backward\"\" - normalize by \"1/n\"\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the IFFT\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifft.html", "category": "pytorch docs"} {"text": "orthonormal)\n Calling the forward transform (\"fft()\") with the same\n normalization mode will apply an overall normalization of\n \"1/n\" between the two transforms. 
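The StreamContext entry above has no usage example; a hedged sketch, guarded so it only runs where CUDA is available, using torch.cuda.stream() which wraps this context manager:

    import torch

    if torch.cuda.is_available():
        s = torch.cuda.Stream()
        with torch.cuda.stream(s):                  # kernels below enqueue on s
            y = torch.ones(1000, device="cuda") * 2
        torch.cuda.current_stream().wait_stream(s)  # sync before using y elsewhere
        print(y.sum())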
This is required to make\n \"ifft()\" the exact inverse.\n\n Default is \"\"backward\"\" (normalize by \"1/n\").\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nt = torch.tensor([ 6.+0.j, -2.+2.j, -2.+0.j, -2.-2.j])\ntorch.fft.ifft(t)\n tensor([0.+0.j, 1.+0.j, 2.+0.j, 3.+0.j])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ifft.html", "category": "pytorch docs"} {"text": "torch.Tensor.frac_\nTensor.frac_() -> Tensor\nIn-place version of \"frac()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.frac_.html", "category": "pytorch docs"} {"text": "InstanceNorm1d\nclass torch.nn.InstanceNorm1d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)\nApplies Instance Normalization over a 2D (unbatched) or 3D\n (batched) input as described in the paper Instance Normalization:\n The Missing Ingredient for Fast Stylization.\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n\nThe mean and standard-deviation are calculated per-dimension\n separately for each object in a mini-batch. \\gamma and \\beta are\n learnable parameter vectors of size C (where C is the number of\n features or channels of the input) if \"affine\" is \"True\". The\n standard-deviation is calculated via the biased estimator,\n equivalent to torch.var(input, unbiased=False).\nBy default, this layer uses instance statistics computed from input\n data in both training and evaluation modes.\nIf \"track_running_stats\" is set to \"True\", during training this", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm1d.html", "category": "pytorch docs"} {"text": "layer keeps running estimates of its computed mean and variance,\n which are then used for normalization during evaluation. The\n running estimates are kept with a default \"momentum\" of 0.1.\nNote:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n\nNote:\n \"InstanceNorm1d\" and \"LayerNorm\" are very similar, but have some\n subtle differences. \"InstanceNorm1d\" is applied on each channel\n of channeled data like multidimensional time series, but\n \"LayerNorm\" is usually applied on entire sample and often in NLP\n tasks. Additionally, \"LayerNorm\" applies elementwise affine\n transform, while \"InstanceNorm1d\" usually don't apply affine\n transform.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm1d.html", "category": "pytorch docs"} {"text": "transform.\nParameters:\n * num_features (int) -- number of features or channels C\n of the input\n * **eps** (*float*) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. 
Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters,\n initialized the same way as done for batch normalization.\n Default: \"False\".\n\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics and always uses batch statistics in both\n training and eval modes. Default: \"False\"\n\nShape:\n * Input: (N, C, L) or (C, L)\n * Output: (N, C, L) or (C, L) (same shape as input)\n\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm1d.html", "category": "pytorch docs"} {"text": "Examples:\n >>> # Without Learnable Parameters\n >>> m = nn.InstanceNorm1d(100)\n >>> # With Learnable Parameters\n >>> m = nn.InstanceNorm1d(100, affine=True)\n >>> input = torch.randn(20, 100, 40)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm1d.html", "category": "pytorch docs"} {"text": "TransformerEncoder\nclass torch.nn.TransformerEncoder(encoder_layer, num_layers, norm=None, enable_nested_tensor=True, mask_check=True)\nTransformerEncoder is a stack of N encoder layers. Users can build\n the BERT(https://arxiv.org/abs/1810.04805) model with corresponding\n parameters.\nParameters:\n * encoder_layer -- an instance of the\n TransformerEncoderLayer() class (required).\n * **num_layers** -- the number of sub-encoder-layers in the\n encoder (required).\n\n * **norm** -- the layer normalization component (optional).\n\n * **enable_nested_tensor** -- if True, input will automatically\n convert to nested tensor (and convert back on output). This\n will improve the overall performance of TransformerEncoder\n when padding rate is high. 
Default: \"True\" (enabled).\n\nExamples::\n >>> encoder_layer = nn.TransformerEncoderLayer(d_model=512, nhead=8)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html", "category": "pytorch docs"} {"text": "\n\n\ntransformer_encoder = nn.TransformerEncoder(encoder_layer, num_layers=6)\n >>> src = torch.rand(10, 32, 512)\n >>> out = transformer_encoder(src)\n\n\n\nforward(src, mask=None, src_key_padding_mask=None, is_causal=None)\n Pass the input through the encoder layers in turn.\n\n Parameters:\n * **src** (*Tensor*) -- the sequence to the encoder\n (required).\n\n * **mask** (*Optional**[**Tensor**]*) -- the mask for the src\n sequence (optional).\n\n * **is_causal** (*Optional**[**bool**]*) -- If specified,\n applies a causal mask as mask (optional) and ignores\n attn_mask for computing scaled dot product attention.\n Default: \"False\".\n\n * **src_key_padding_mask** (*Optional**[**Tensor**]*) -- the\n mask for the src keys per batch (optional).\n\n Return type:\n *Tensor*\n\n Shape:\n see the docs in Transformer class.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TransformerEncoder.html", "category": "pytorch docs"} {"text": "torch.atan\ntorch.atan(input, *, out=None) -> Tensor\nReturns a new tensor with the arctangent of the elements of\n \"input\".\n \\text{out}_{i} = \\tan^{-1}(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.2341, 0.2539, -0.6256, -0.6448])\n >>> torch.atan(a)\n tensor([ 0.2299, 0.2487, -0.5591, -0.5727])\n", "source": "https://pytorch.org/docs/stable/generated/torch.atan.html", "category": "pytorch docs"} {"text": "LayerNorm\nclass torch.ao.nn.quantized.LayerNorm(normalized_shape, weight, bias, scale, zero_point, eps=1e-05, elementwise_affine=True, device=None, dtype=None)\nThis is the quantized version of \"LayerNorm\".\nAdditional args:\n * scale - quantization scale of the output, type: double.\n * **zero_point** - quantization zero point of the output, type:\n long.\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.LayerNorm.html", "category": "pytorch docs"} {"text": "torch.nn.functional.embedding_bag\ntorch.nn.functional.embedding_bag(input, weight, offsets=None, max_norm=None, norm_type=2, scale_grad_by_freq=False, mode='mean', sparse=False, per_sample_weights=None, include_last_offset=False, padding_idx=None)\nComputes sums, means or maxes of bags of embeddings, without\n instantiating the intermediate embeddings.\nSee \"torch.nn.EmbeddingBag\" for more details.\nNote:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n information.\n\nParameters:\n * input (LongTensor) -- Tensor containing bags of indices\n into the embedding matrix\n * **weight** (*Tensor*) -- The embedding matrix with number of\n rows equal to the maximum possible index + 1, and number of\n columns equal to the embedding size\n\n * **offsets** (*LongTensor**, **optional*) -- Only used when\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding_bag.html", "category": "pytorch docs"} {"text": "\"input\" is 1D. \"offsets\" determines the starting index\n position of each bag (sequence) in \"input\".\n * **max_norm** (*float**, **optional*) -- If given, each\n embedding vector with norm larger than \"max_norm\" is\n renormalized to have norm \"max_norm\". 
Note: this will modify\n \"weight\" in-place.\n\n * **norm_type** (*float**, **optional*) -- The \"p\" in the\n \"p\"-norm to compute for the \"max_norm\" option. Default \"2\".\n\n * **scale_grad_by_freq** (*bool**, **optional*) -- if given,\n this will scale gradients by the inverse of frequency of the\n words in the mini-batch. Default \"False\". Note: this option is\n not supported when \"mode=\"max\"\".\n\n * **mode** (*str**, **optional*) -- \"\"sum\"\", \"\"mean\"\" or\n \"\"max\"\". Specifies the way to reduce the bag. Default:\n \"\"mean\"\"\n\n * **sparse** (*bool**, **optional*) -- if \"True\", gradient\n w.r.t. \"weight\" will be a sparse tensor. See Notes under\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding_bag.html", "category": "pytorch docs"} {"text": "\"torch.nn.Embedding\" for more details regarding sparse\n gradients. Note: this option is not supported when\n \"mode=\"max\"\".\n * **per_sample_weights** (*Tensor**, **optional*) -- a tensor of\n float / double weights, or None to indicate all weights should\n be taken to be 1. If specified, \"per_sample_weights\" must have\n exactly the same shape as input and is treated as having the\n same \"offsets\", if those are not None.\n\n * **include_last_offset** (*bool**, **optional*) -- if \"True\",\n the size of offsets is equal to the number of bags + 1. The\n last element is the size of the input, or the ending index\n position of the last bag (sequence).\n\n * **padding_idx** (*int**, **optional*) -- If specified, the\n entries at \"padding_idx\" do not contribute to the gradient;\n therefore, the embedding vector at \"padding_idx\" is not\n updated during training, i.e. it remains as a fixed \"pad\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding_bag.html", "category": "pytorch docs"} {"text": "Note that the embedding vector at \"padding_idx\" is excluded\n from the reduction.\nReturn type:\n Tensor\nShape:\n * \"input\" (LongTensor) and \"offsets\" (LongTensor, optional)\n * If \"input\" is 2D of shape *(B, N)*, it will be treated as\n \"B\" bags (sequences) each of fixed length \"N\", and this will\n return \"B\" values aggregated in a way depending on the\n \"mode\". \"offsets\" is ignored and required to be \"None\" in\n this case.\n\n * If \"input\" is 1D of shape *(N)*, it will be treated as a\n concatenation of multiple bags (sequences). \"offsets\" is\n required to be a 1D tensor containing the starting index\n positions of each bag in \"input\". Therefore, for \"offsets\"\n of shape *(B)*, \"input\" will be viewed as having \"B\" bags.\n Empty bags (i.e., having 0-length) will have returned\n vectors filled by zeros.\n\n * \"weight\" (Tensor): the learnable weights of the module of\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding_bag.html", "category": "pytorch docs"} {"text": "shape (num_embeddings, embedding_dim)\n * \"per_sample_weights\" (Tensor, optional). 
Has the same shape as\n \"input\".\n\n * \"output\": aggregated embedding values of shape *(B,\n embedding_dim)*\n\nExamples:\n >>> # an Embedding module containing 10 tensors of size 3\n >>> embedding_matrix = torch.rand(10, 3)\n >>> # a batch of 2 samples of 4 indices each\n >>> input = torch.tensor([1, 2, 4, 5, 4, 3, 2, 9])\n >>> offsets = torch.tensor([0, 4])\n >>> F.embedding_bag(input, embedding_matrix, offsets)\n tensor([[ 0.3397, 0.3552, 0.5545],\n [ 0.5893, 0.4386, 0.5882]])\n\n >>> # example with padding_idx\n >>> embedding_matrix = torch.rand(10, 3)\n >>> input = torch.tensor([2, 2, 2, 2, 4, 3, 2, 9])\n >>> offsets = torch.tensor([0, 4])\n >>> F.embedding_bag(input, embedding_matrix, offsets, padding_idx=2, mode='sum')\n tensor([[ 0.0000, 0.0000, 0.0000],\n [-0.7082, 3.2145, -2.6251]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding_bag.html", "category": "pytorch docs"} {"text": "torch.Tensor.cos_\nTensor.cos_() -> Tensor\nIn-place version of \"cos()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cos_.html", "category": "pytorch docs"} {"text": "torch.Tensor.logaddexp2\nTensor.logaddexp2(other) -> Tensor\nSee \"torch.logaddexp2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logaddexp2.html", "category": "pytorch docs"} {"text": "Identity\nclass torch.nn.Identity(args, *kwargs)\nA placeholder identity operator that is argument-insensitive.\nParameters:\n * args (Any) -- any argument (unused)\n * **kwargs** (*Any*) -- any keyword argument (unused)\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\nExamples:\n >>> m = nn.Identity(54, unused_argument1=0.1, unused_argument2=False)\n >>> input = torch.randn(128, 20)\n >>> output = m(input)\n >>> print(output.size())\n torch.Size([128, 20])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Identity.html", "category": "pytorch docs"} {"text": "torch.Tensor.positive\nTensor.positive() -> Tensor\nSee \"torch.positive()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.positive.html", "category": "pytorch docs"} {"text": "torch.Tensor.logit_\nTensor.logit_() -> Tensor\nIn-place version of \"logit()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logit_.html", "category": "pytorch docs"} {"text": "torch.jit.script_if_tracing\ntorch.jit.script_if_tracing(fn)\nCompiles \"fn\" when it is first called during tracing.\n \"torch.jit.script\" has a non-negligible start up time when it is\n first called due to lazy-initializations of many compiler builtins.\n Therefore you should not use it in library code. However, you may\n want to have parts of your library work in tracing even if they use\n control flow. In these cases, you should use\n \"@torch.jit.script_if_tracing\" to substitute for\n \"torch.jit.script\".\nParameters:\n fn -- A function to compile.\nReturns:\n If called during tracing, a \"ScriptFunction\" created by\n torch.jit.script is returned. 
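A hedged sketch of the torch.jit.script_if_tracing behavior described above: the decorated helper is scripted (not traced) when the outer function is traced, so its data-dependent branch is preserved; the function names are illustrative only:

    import torch

    @torch.jit.script_if_tracing
    def clipped_sum(x):
        # scripted when f is traced, so this branch survives tracing
        s = x.sum()
        if s < 0:
            return torch.zeros_like(s)
        return s

    def f(x):
        return clipped_sum(x) * 2

    traced = torch.jit.trace(f, torch.randn(3))
    print(traced(torch.ones(3)))   # tensor(6.)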
Otherwise, the original function\n fn is returned.", "source": "https://pytorch.org/docs/stable/generated/torch.jit.script_if_tracing.html", "category": "pytorch docs"} {"text": "torch._foreach_sigmoid\ntorch._foreach_sigmoid(self: List[Tensor]) -> List[Tensor]\nApply \"torch.sigmoid()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sigmoid.html", "category": "pytorch docs"} {"text": "torch.Tensor.eq\nTensor.eq(other) -> Tensor\nSee \"torch.eq()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.eq.html", "category": "pytorch docs"} {"text": "torch.Tensor.zero_\nTensor.zero_() -> Tensor\nFills \"self\" tensor with zeros.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.zero_.html", "category": "pytorch docs"} {"text": "torch.Tensor.split\nTensor.split(split_size, dim=0)\nSee \"torch.split()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.split.html", "category": "pytorch docs"} {"text": "Dropout3d\nclass torch.nn.Dropout3d(p=0.5, inplace=False)\nRandomly zero out entire channels (a channel is a 3D feature map,\n e.g., the j-th channel of the i-th sample in the batched input is a\n 3D tensor \\text{input}[i, j]). Each channel will be zeroed out\n independently on every forward call with probability \"p\" using\n samples from a Bernoulli distribution.\nUsually the input comes from \"nn.Conv3d\" modules.\nAs described in the paper Efficient Object Localization Using\n Convolutional Networks , if adjacent pixels within feature maps are\n strongly correlated (as is normally the case in early convolution\n layers) then i.i.d. dropout will not regularize the activations and\n will otherwise just result in an effective learning rate decrease.\nIn this case, \"nn.Dropout3d()\" will help promote independence\n between feature maps and should be used instead.\nParameters:\n * p (float, optional) -- probability of an element to\n be zeroed.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout3d.html", "category": "pytorch docs"} {"text": "be zeroed.\n * **inplace** (*bool**, **optional*) -- If set to \"True\", will\n do this operation in-place\n\nShape:\n * Input: (N, C, D, H, W) or (C, D, H, W).\n * Output: (N, C, D, H, W) or (C, D, H, W) (same shape as input).\n\nExamples:\n >>> m = nn.Dropout3d(p=0.2)\n >>> input = torch.randn(20, 16, 4, 32, 32)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout3d.html", "category": "pytorch docs"} {"text": "ASGD\nclass torch.optim.ASGD(params, lr=0.01, lambd=0.0001, alpha=0.75, t0=1000000.0, weight_decay=0, foreach=None, maximize=False, differentiable=False)\nImplements Averaged Stochastic Gradient Descent.\nIt has been proposed in Acceleration of stochastic approximation by\n averaging.\nParameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * **lr** (*float**, **optional*) -- learning rate (default:\n 1e-2)\n\n * **lambd** (*float**, **optional*) -- decay term (default:\n 1e-4)\n\n * **alpha** (*float**, **optional*) -- power for eta update\n (default: 0.75)\n\n * **t0** (*float**, **optional*) -- point at which to start\n averaging (default: 1e6)\n\n * **weight_decay** (*float**, **optional*) -- weight decay (L2\n penalty) (default: 0)\n\n * **foreach** (*bool**, **optional*) -- whether foreach\n implementation of optimizer is used. 
If unspecified by the\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.ASGD.html", "category": "pytorch docs"} {"text": "user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n * **maximize** (*bool**, **optional*) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n\n * **differentiable** (*bool**, **optional*) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n\nadd_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.ASGD.html", "category": "pytorch docs"} {"text": "as training progresses.\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n **state_dict** (*dict*) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n\nregister_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None\n\n The \"optimizer\" argument is the optimizer instance being used.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.ASGD.html", "category": "pytorch docs"} {"text": "register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.ASGD.html", "category": "pytorch docs"} {"text": "differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n **set_to_none** (*bool*) -- instead of setting to zero, set\n the grads to None. 
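The ASGD entry above has no usage example; a minimal hedged sketch of a training loop (the model and data are placeholders):

    import torch

    model = torch.nn.Linear(10, 1)
    optimizer = torch.optim.ASGD(model.parameters(), lr=0.01, t0=1e6)
    for _ in range(3):
        optimizer.zero_grad(set_to_none=True)
        loss = model(torch.randn(8, 10)).pow(2).mean()
        loss.backward()
        optimizer.step()
    print(loss.item())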
This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.ASGD.html", "category": "pytorch docs"} {"text": "it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.ASGD.html", "category": "pytorch docs"} {"text": "torch.squeeze\ntorch.squeeze(input, dim=None) -> Tensor\nReturns a tensor with all specified dimensions of \"input\" of size\n 1 removed.\nFor example, if input is of shape: (A \\times 1 \\times B \\times C\n \\times 1 \\times D) then the input.squeeze() will be of shape: (A\n \\times B \\times C \\times D).\nWhen \"dim\" is given, a squeeze operation is done only in the given\n dimension(s). If input is of shape: (A \\times 1 \\times B),\n \"squeeze(input, 0)\" leaves the tensor unchanged, but\n \"squeeze(input, 1)\" will squeeze the tensor to the shape (A \\times\n B).\nNote:\n The returned tensor shares the storage with the input tensor, so\n changing the contents of one will change the contents of the\n other.\n\nWarning:\n If the tensor has a batch dimension of size 1, then\n *squeeze(input)* will also remove the batch dimension, which can\n lead to unexpected errors. Consider specifying only the dims you\n wish to be squeezed.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.squeeze.html", "category": "pytorch docs"} {"text": "wish to be squeezed.\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints**, **optional*) --\n\n if given, the input will be squeezed\n only in the specified dimensions.\n\n Changed in version 2.0: \"dim\" now accepts tuples of\n dimensions.\n\nExample:\n >>> x = torch.zeros(2, 1, 2, 1, 2)\n >>> x.size()\n torch.Size([2, 1, 2, 1, 2])\n >>> y = torch.squeeze(x)\n >>> y.size()\n torch.Size([2, 2, 2])\n >>> y = torch.squeeze(x, 0)\n >>> y.size()\n torch.Size([2, 1, 2, 1, 2])\n >>> y = torch.squeeze(x, 1)\n >>> y.size()\n torch.Size([2, 2, 1, 2])\n >>> y = torch.squeeze(x, (1, 2, 3))\n torch.Size([2, 2, 2])\n", "source": "https://pytorch.org/docs/stable/generated/torch.squeeze.html", "category": "pytorch docs"} {"text": "torch.cuda.empty_cache\ntorch.cuda.empty_cache()\nReleases all unoccupied cached memory currently held by the caching\n allocator so that those can be used in other GPU application and\n visible in nvidia-smi.\nNote:\n \"empty_cache()\" doesn't increase the amount of GPU memory\n available for PyTorch. However, it may help reduce fragmentation\n of GPU memory in certain cases. 
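Illustrating the batch-dimension warning in the torch.squeeze entry above:

    import torch

    x = torch.zeros(1, 3, 1)          # a batch of one sample
    print(torch.squeeze(x).shape)     # torch.Size([3]) -- batch dim removed too
    print(torch.squeeze(x, 2).shape)  # torch.Size([1, 3]) -- only the intended dim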
See Memory management for more\n details about GPU memory management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.empty_cache.html", "category": "pytorch docs"} {"text": "torch.is_deterministic_algorithms_warn_only_enabled\ntorch.is_deterministic_algorithms_warn_only_enabled()\nReturns True if the global deterministic flag is set to warn only.\n Refer to \"torch.use_deterministic_algorithms()\" documentation for\n more details.", "source": "https://pytorch.org/docs/stable/generated/torch.is_deterministic_algorithms_warn_only_enabled.html", "category": "pytorch docs"} {"text": "torch.Tensor.sub_\nTensor.sub_(other, *, alpha=1) -> Tensor\nIn-place version of \"sub()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sub_.html", "category": "pytorch docs"} {"text": "torch.nn.utils.rnn.pack_sequence\ntorch.nn.utils.rnn.pack_sequence(sequences, enforce_sorted=True)\nPacks a list of variable length Tensors\nConsecutive call of the next functions: \"pad_sequence\",\n \"pack_padded_sequence\".\n\"sequences\" should be a list of Tensors of size \"L x \", where L*\n is the length of a sequence and *** is any number of trailing\n dimensions, including zero.\nFor unsorted sequences, use enforce_sorted = False. If\n \"enforce_sorted\" is \"True\", the sequences should be sorted in the\n order of decreasing length. \"enforce_sorted = True\" is only\n necessary for ONNX export.\n-[ Example ]-\n\n\n\nfrom torch.nn.utils.rnn import pack_sequence\na = torch.tensor([1, 2, 3])\nb = torch.tensor([4, 5])\nc = torch.tensor([6])\npack_sequence([a, b, c])\n PackedSequence(data=tensor([1, 4, 6, 2, 5, 3]), batch_sizes=tensor([3, 2, 1]), sorted_indices=None, unsorted_indices=None)\n\n\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_sequence.html", "category": "pytorch docs"} {"text": "Parameters:\n * sequences (list[Tensor]) -- A list of sequences of\n decreasing length.\n * **enforce_sorted** (*bool**, **optional*) -- if \"True\", checks\n that the input contains sequences sorted by length in a\n decreasing order. 
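Complementing the pack_sequence example above, a hedged sketch of the unsorted case via enforce_sorted=False:

    import torch
    from torch.nn.utils.rnn import pack_sequence

    a, b, c = torch.tensor([1, 2]), torch.tensor([3, 4, 5]), torch.tensor([6])
    packed = pack_sequence([a, b, c], enforce_sorted=False)  # lengths 2, 3, 1
    print(packed.batch_sizes)      # tensor([3, 2, 1])
    print(packed.sorted_indices)   # records how the inputs were reordered by length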
If \"False\", this condition is not checked.\n Default: \"True\".\n\nReturns:\n a \"PackedSequence\" object\nReturn type:\n PackedSequence", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.rnn.pack_sequence.html", "category": "pytorch docs"} {"text": "Sigmoid\nclass torch.ao.nn.quantized.Sigmoid(output_scale, output_zero_point)\nThis is the quantized equivalent of \"Sigmoid\".\nParameters:\n * scale -- quantization scale of the output tensor\n * **zero_point** -- quantization zero point of the output tensor\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Sigmoid.html", "category": "pytorch docs"} {"text": "torch.Tensor.moveaxis\nTensor.moveaxis(source, destination) -> Tensor\nSee \"torch.moveaxis()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.moveaxis.html", "category": "pytorch docs"} {"text": "torch.Tensor.gcd_\nTensor.gcd_(other) -> Tensor\nIn-place version of \"gcd()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.gcd_.html", "category": "pytorch docs"} {"text": "torch.Tensor.mean\nTensor.mean(dim=None, keepdim=False, *, dtype=None) -> Tensor\nSee \"torch.mean()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mean.html", "category": "pytorch docs"} {"text": "torch.Tensor.resize_as_\nTensor.resize_as_(tensor, memory_format=torch.contiguous_format) -> Tensor\nResizes the \"self\" tensor to be the same size as the specified\n \"tensor\". This is equivalent to \"self.resize_(tensor.size())\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of Tensor. Default:\n \"torch.contiguous_format\". Note that memory format of \"self\" is\n going to be unaffected if \"self.size()\" matches \"tensor.size()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.resize_as_.html", "category": "pytorch docs"} {"text": "torch.Tensor.round\nTensor.round(decimals=0) -> Tensor\nSee \"torch.round()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.round.html", "category": "pytorch docs"} {"text": "torch.empty_strided\ntorch.empty_strided(size, stride, *, dtype=None, layout=None, device=None, requires_grad=False, pin_memory=False) -> Tensor\nCreates a tensor with the specified \"size\" and \"stride\" and filled\n with undefined data.\nWarning:\n If the constructed tensor is \"overlapped\" (with multiple indices\n referring to the same element in memory) its behavior is\n undefined.\n\nParameters:\n * size (tuple of python:int) -- the shape of the output\n tensor\n * **stride** (*tuple of python:int*) -- the strides of the\n output tensor\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n", "source": "https://pytorch.org/docs/stable/generated/torch.empty_strided.html", "category": "pytorch docs"} {"text": "returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\n * **pin_memory** (*bool**, **optional*) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n\nExample:\n >>> a = torch.empty_strided((2, 3), (1, 2))\n >>> a\n tensor([[8.9683e-44, 4.4842e-44, 5.1239e+07],\n [0.0000e+00, 0.0000e+00, 3.0705e-41]])\n >>> a.stride()\n (1, 2)\n >>> a.size()\n torch.Size([2, 3])\n", "source": "https://pytorch.org/docs/stable/generated/torch.empty_strided.html", "category": "pytorch docs"} {"text": "torch.Tensor.absolute\nTensor.absolute() -> Tensor\nAlias for \"abs()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.absolute.html", "category": "pytorch docs"} {"text": "torch.nn.functional.ctc_loss\ntorch.nn.functional.ctc_loss(log_probs, targets, input_lengths, target_lengths, blank=0, reduction='mean', zero_infinity=False)\nThe Connectionist Temporal Classification loss.\nSee \"CTCLoss\" for details.\nNote:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n\nNote:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n information.\n\nParameters:\n * log_probs (Tensor) -- (T, N, C) or (T, C) where C =\n number of characters in alphabet including blank, *T = input", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.ctc_loss.html", "category": "pytorch docs"} {"text": "length, and N = batch size*. The logarithmized probabilities\n of the outputs (e.g. obtained with\n \"torch.nn.functional.log_softmax()\").\n * **targets** (*Tensor*) -- (N, S) or *(sum(target_lengths))*.\n Targets cannot be blank. In the second form, the targets are\n assumed to be concatenated.\n\n * **input_lengths** (*Tensor*) -- (N) or (). Lengths of the\n inputs (must each be \\leq T)\n\n * **target_lengths** (*Tensor*) -- (N) or (). Lengths of the\n targets\n\n * **blank** (*int**, **optional*) -- Blank label. Default 0.\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the output\n losses will be divided by the target lengths and then the mean\n over the batch is taken, \"'sum'\": the output will be summed.\n Default: \"'mean'\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.ctc_loss.html", "category": "pytorch docs"} {"text": "Default: \"'mean'\"\n * **zero_infinity** (*bool**, **optional*) -- Whether to zero\n infinite losses and the associated gradients. 
Default: \"False\"\n Infinite losses mainly occur when the inputs are too short to\n be aligned to the targets.\n\nReturn type:\n Tensor\nExample:\n >>> log_probs = torch.randn(50, 16, 20).log_softmax(2).detach().requires_grad_()\n >>> targets = torch.randint(1, 20, (16, 30), dtype=torch.long)\n >>> input_lengths = torch.full((16,), 50, dtype=torch.long)\n >>> target_lengths = torch.randint(10, 30, (16,), dtype=torch.long)\n >>> loss = F.ctc_loss(log_probs, targets, input_lengths, target_lengths)\n >>> loss.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.ctc_loss.html", "category": "pytorch docs"} {"text": "torch.mm\ntorch.mm(input, mat2, *, out=None) -> Tensor\nPerforms a matrix multiplication of the matrices \"input\" and\n \"mat2\".\nIf \"input\" is a (n \\times m) tensor, \"mat2\" is a (m \\times p)\n tensor, \"out\" will be a (n \\times p) tensor.\nNote:\n This function does not broadcast. For broadcasting matrix\n products, see \"torch.matmul()\".\n\nSupports strided and sparse 2-D tensors as inputs, autograd with\n respect to strided inputs.\nThis operation has support for arguments with sparse layouts. If\n \"out\" is provided it's layout will be used. Otherwise, the result\n layout will be deduced from that of \"input\".\nWarning:\n Sparse support is a beta feature and some layout(s)/dtype/device\n combinations may not be supported, or may not have autograd\n support. If you notice missing functionality please open a\n feature request.\n\nThis operator supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will", "source": "https://pytorch.org/docs/stable/generated/torch.mm.html", "category": "pytorch docs"} {"text": "use different precision for backward.\nParameters:\n * input (Tensor) -- the first matrix to be matrix\n multiplied\n * **mat2** (*Tensor*) -- the second matrix to be matrix\n multiplied\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> mat1 = torch.randn(2, 3)\n >>> mat2 = torch.randn(3, 3)\n >>> torch.mm(mat1, mat2)\n tensor([[ 0.4851, 0.5037, -0.3633],\n [-0.0760, -3.6705, 2.4784]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.mm.html", "category": "pytorch docs"} {"text": "torch.le\ntorch.le(input, other, *, out=None) -> Tensor\nComputes \\text{input} \\leq \\text{other} element-wise.\nThe second argument can be a number or a tensor whose shape is\n broadcastable with the first argument.\nParameters:\n * input (Tensor) -- the tensor to compare\n * **other** (*Tensor** or **Scalar*) -- the tensor or value to\n compare\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nReturns:\n A boolean tensor that is True where \"input\" is less than or\n equal to \"other\" and False elsewhere\nExample:\n >>> torch.le(torch.tensor([[1, 2], [3, 4]]), torch.tensor([[1, 1], [4, 4]]))\n tensor([[True, False], [True, True]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.le.html", "category": "pytorch docs"} {"text": "torch.Tensor.imag\nTensor.imag\nReturns a new tensor containing imaginary values of the \"self\"\n tensor. 
The returned tensor and \"self\" share the same underlying\n storage.\nWarning:\n \"imag()\" is only supported for tensors with complex dtypes.\n\nExample::\n >>> x=torch.randn(4, dtype=torch.cfloat)\n >>> x\n tensor([(0.3100+0.3553j), (-0.5445-0.7896j), (-1.6492-0.0633j), (-0.0638-0.8119j)])\n >>> x.imag\n tensor([ 0.3553, -0.7896, -0.0633, -0.8119])", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.imag.html", "category": "pytorch docs"} {"text": "torch.nn.utils.prune.identity\ntorch.nn.utils.prune.identity(module, name)\nApplies pruning reparametrization to the tensor corresponding to\n the parameter called \"name\" in \"module\" without actually pruning\n any units. Modifies module in place (and also return the modified\n module) by:\n\n\nadding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n\n\nreplacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n\n\nNote:\n The mask is a tensor of ones.\n\nParameters:\n * module (nn.Module) -- module containing the tensor to\n prune.\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\nReturns:\n modified (i.e. pruned) version of the input module\nReturn type:\n module (nn.Module)\n-[ Examples ]-", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.identity.html", "category": "pytorch docs"} {"text": "module (nn.Module)\n-[ Examples ]-\n\n\n\nm = prune.identity(nn.Linear(2, 3), 'bias')\nprint(m.bias_mask)\n tensor([1., 1., 1.])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.identity.html", "category": "pytorch docs"} {"text": "torch.Tensor.not_equal\nTensor.not_equal(other) -> Tensor\nSee \"torch.not_equal()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.not_equal.html", "category": "pytorch docs"} {"text": "Mish\nclass torch.nn.Mish(inplace=False)\nApplies the Mish function, element-wise. Mish: A Self Regularized\n Non-Monotonic Neural Activation Function.\n \\text{Mish}(x) = x * \\text{Tanh}(\\text{Softplus}(x))\n\nNote:\n See Mish: A Self Regularized Non-Monotonic Neural Activation\n Function\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.Mish()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Mish.html", "category": "pytorch docs"} {"text": "torch.nn.functional.elu_\ntorch.nn.functional.elu_(input, alpha=1.) 
-> Tensor\nIn-place version of \"elu()\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.elu_.html", "category": "pytorch docs"} {"text": "torch.cos\ntorch.cos(input, *, out=None) -> Tensor\nReturns a new tensor with the cosine of the elements of \"input\".\n \\text{out}_{i} = \\cos(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 1.4309, 1.2706, -0.8562, 0.9796])\n >>> torch.cos(a)\n tensor([ 0.1395, 0.2957, 0.6553, 0.5574])\n", "source": "https://pytorch.org/docs/stable/generated/torch.cos.html", "category": "pytorch docs"} {"text": "torch.Tensor.addr_\nTensor.addr_(vec1, vec2, *, beta=1, alpha=1) -> Tensor\nIn-place version of \"addr()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addr_.html", "category": "pytorch docs"} {"text": "torch.logit\ntorch.logit(input, eps=None, *, out=None) -> Tensor\nAlias for \"torch.special.logit()\".", "source": "https://pytorch.org/docs/stable/generated/torch.logit.html", "category": "pytorch docs"} {"text": "torch.Tensor.ne_\nTensor.ne_(other) -> Tensor\nIn-place version of \"ne()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ne_.html", "category": "pytorch docs"} {"text": "torch.Tensor.renorm\nTensor.renorm(p, dim, maxnorm) -> Tensor\nSee \"torch.renorm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.renorm.html", "category": "pytorch docs"} {"text": "torch.nn.functional.l1_loss\ntorch.nn.functional.l1_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor\nFunction that takes the mean element-wise absolute value\n difference.\nSee \"L1Loss\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.l1_loss.html", "category": "pytorch docs"} {"text": "torch.nn.functional.celu\ntorch.nn.functional.celu(input, alpha=1., inplace=False) -> Tensor\nApplies element-wise, \\text{CELU}(x) = \\max(0,x) + \\min(0, \\alpha *\n (\\exp(x/\\alpha) - 1)).\nSee \"CELU\" for more details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.celu.html", "category": "pytorch docs"} {"text": "torch.cuda.jiterator._create_multi_output_jit_fn\ntorch.cuda.jiterator._create_multi_output_jit_fn(code_string, num_outputs, **kwargs)\nCreate a jiterator-generated cuda kernel for an elementwise op that\n supports returning one or more outputs.\nParameters:\n * code_string (str) -- CUDA code string to be compiled by\n jiterator. 
The entry functor must return value by reference.\n * **num_outputs** (*int*) -- number of outputs return by the\n kernel\n\n * **kwargs** (*Dict**, **optional*) -- Keyword arguments for\n generated function\n\nReturn type:\n Callable\nExample:\n code_string = \"template void my_kernel(T x, T y, T alpha, T& out) { out = -x + alpha * y; }\"\n jitted_fn = create_jit_fn(code_string, alpha=1.0)\n a = torch.rand(3, device='cuda')\n b = torch.rand(3, device='cuda')\n # invoke jitted function like a regular python function\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_multi_output_jit_fn.html", "category": "pytorch docs"} {"text": "result = jitted_fn(a, b, alpha=3.14)\nWarning:\n This API is in beta and may change in future releases.\n\nWarning:\n This API only supports up to 8 inputs and 8 outputs\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.jiterator._create_multi_output_jit_fn.html", "category": "pytorch docs"} {"text": "torch.Tensor.sin\nTensor.sin() -> Tensor\nSee \"torch.sin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sin.html", "category": "pytorch docs"} {"text": "LazyConv3d\nclass torch.nn.LazyConv3d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\nA \"torch.nn.Conv3d\" module with lazy initialization of the\n \"in_channels\" argument of the \"Conv3d\" that is inferred from the\n \"input.size(1)\". The attributes that will be lazily initialized are\n weight and bias.\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * out_channels (int) -- Number of channels produced by the\n convolution\n * **kernel_size** (*int** or **tuple*) -- Size of the convolving\n kernel\n\n * **stride** (*int** or **tuple**, **optional*) -- Stride of the\n convolution. Default: 1\n\n * **padding** (*int** or **tuple**, **optional*) -- Zero-padding\n added to both sides of the input. Default: 0\n\n * **padding_mode** (*str**, **optional*) -- \"'zeros'\",\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConv3d.html", "category": "pytorch docs"} {"text": "\"'reflect'\", \"'replicate'\" or \"'circular'\". Default: \"'zeros'\"\n * **dilation** (*int** or **tuple**, **optional*) -- Spacing\n between kernel elements. Default: 1\n\n * **groups** (*int**, **optional*) -- Number of blocked\n connections from input channels to output channels. Default: 1\n\n * **bias** (*bool**, **optional*) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n\nSee also:\n \"torch.nn.Conv3d\" and \"torch.nn.modules.lazy.LazyModuleMixin\"\n\ncls_to_become\n alias of \"Conv3d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConv3d.html", "category": "pytorch docs"} {"text": "torch.ger\ntorch.ger(input, vec2, *, out=None) -> Tensor\nAlias of \"torch.outer()\".\nWarning:\n This function is deprecated and will be removed in a future\n PyTorch release. 
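A quick illustrative sketch (not part of the documentation; the input vectors are arbitrary) showing that the replacement is a drop-in rename:

    import torch

    v1 = torch.arange(1.0, 4.0)   # [1., 2., 3.]
    v2 = torch.arange(1.0, 3.0)   # [1., 2.]
    old = torch.ger(v1, v2)       # deprecated spelling
    new = torch.outer(v1, v2)     # preferred spelling
    assert torch.equal(old, new)  # identical (3, 2) outer-product matrix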
Use \"torch.outer()\" instead.\n", "source": "https://pytorch.org/docs/stable/generated/torch.ger.html", "category": "pytorch docs"} {"text": "torch.Tensor.expm1\nTensor.expm1() -> Tensor\nSee \"torch.expm1()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.expm1.html", "category": "pytorch docs"} {"text": "torch.nn.functional.conv_transpose2d\ntorch.nn.functional.conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor\nApplies a 2D transposed convolution operator over an input image\n composed of several input planes, sometimes also called\n \"deconvolution\".\nThis operator supports TensorFloat32.\nSee \"ConvTranspose2d\" for details and output shape.\nNote:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\n\nParameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iH , iW)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose2d.html", "category": "pytorch docs"} {"text": "\\text{in_channels} , iH , iW)\n * **weight** -- filters of shape (\\text{in\\_channels} ,\n \\frac{\\text{out\\_channels}}{\\text{groups}} , kH , kW)\n\n * **bias** -- optional bias of shape (\\text{out\\_channels}).\n Default: None\n\n * **stride** -- the stride of the convolving kernel. Can be a\n single number or a tuple \"(sH, sW)\". Default: 1\n\n * **padding** -- \"dilation * (kernel_size - 1) - padding\" zero-\n padding will be added to both sides of each dimension in the\n input. Can be a single number or a tuple \"(padH, padW)\".\n Default: 0\n\n * **output_padding** -- additional size added to one side of\n each dimension in the output shape. Can be a single number or\n a tuple \"(out_padH, out_padW)\". Default: 0\n\n * **groups** -- split input into groups, \\text{in\\_channels}\n should be divisible by the number of groups. Default: 1\n\n * **dilation** -- the spacing between kernel elements. Can be a\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose2d.html", "category": "pytorch docs"} {"text": "single number or a tuple \"(dH, dW)\". Default: 1\nExamples:\n >>> # With square kernels and equal stride\n >>> inputs = torch.randn(1, 4, 5, 5)\n >>> weights = torch.randn(4, 8, 3, 3)\n >>> F.conv_transpose2d(inputs, weights, padding=1)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose2d.html", "category": "pytorch docs"} {"text": "torch.nn.utils.parametrize.remove_parametrizations\ntorch.nn.utils.parametrize.remove_parametrizations(module, tensor_name, leave_parametrized=True)\nRemoves the parametrizations on a tensor in a module.\n\n\nIf \"leave_parametrized=True\", \"module[tensor_name]\" will be set\n to its current output. In this case, the parametrization shall\n not change the \"dtype\" of the tensor.\n\n\nIf \"leave_parametrized=False\", \"module[tensor_name]\" will be set\n to the unparametrised tensor in\n \"module.parametrizations[tensor_name].original\". 
This is only\n possible when the parametrization depends on just one tensor.\n\n\nParameters:\n * module (nn.Module) -- module from which remove the\n parametrization\n * **tensor_name** (*str*) -- name of the parametrization to be\n removed\n\n * **leave_parametrized** (*bool**, **optional*) -- leave the\n attribute \"tensor_name\" parametrized. Default: \"True\"\n\nReturns:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.remove_parametrizations.html", "category": "pytorch docs"} {"text": "Returns:\n module\nReturn type:\n Module\nRaises:\n * ValueError -- if \"module[tensor_name]\" is not parametrized\n * **ValueError** -- if \"leave_parametrized=False\" and the\n parametrization depends on several tensors\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.remove_parametrizations.html", "category": "pytorch docs"} {"text": "torch.Tensor.q_per_channel_axis\nTensor.q_per_channel_axis() -> int\nGiven a Tensor quantized by linear (affine) per-channel\n quantization, returns the index of dimension on which per-channel\n quantization is applied.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.q_per_channel_axis.html", "category": "pytorch docs"} {"text": "torch.Tensor.triu_\nTensor.triu_(diagonal=0) -> Tensor\nIn-place version of \"triu()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.triu_.html", "category": "pytorch docs"} {"text": "RandomUnstructured\nclass torch.nn.utils.prune.RandomUnstructured(amount)\nPrune (currently unpruned) units in a tensor at random.\nParameters:\n * name (str) -- parameter name within \"module\" on which\n pruning will act.\n * **amount** (*int** or **float*) -- quantity of parameters to\n prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n\nclassmethod apply(module, name, amount)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n\n Parameters:\n * **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\n * **amount** (*int** or **float*) -- quantity of parameters\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomUnstructured.html", "category": "pytorch docs"} {"text": "to prune. If \"float\", should be between 0.0 and 1.0 and\n represent the fraction of parameters to prune. If \"int\", it\n represents the absolute number of parameters to prune.\napply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. 
Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n\n Parameters:\n **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n Returns:\n pruned version of the input tensor\n\n Return type:\n pruned_tensor (torch.Tensor)\n\nprune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n\n Parameters:\n * **t** (*torch.Tensor*) -- tensor to prune (of same\n dimensions as \"default_mask\").\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomUnstructured.html", "category": "pytorch docs"} {"text": "dimensions as \"default_mask\").\n * **importance_scores** (*torch.Tensor*) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n\n * **default_mask** (*torch.Tensor**, **optional*) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n\n Returns:\n pruned version of tensor \"t\".\n\nremove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomUnstructured.html", "category": "pytorch docs"} {"text": "list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n Note:\n\n Pruning itself is NOT undone or reversed!\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.RandomUnstructured.html", "category": "pytorch docs"} {"text": "RNNCell\nclass torch.nn.RNNCell(input_size, hidden_size, bias=True, nonlinearity='tanh', device=None, dtype=None)\nAn Elman RNN cell with tanh or ReLU non-linearity.\n h' = \\tanh(W_{ih} x + b_{ih} + W_{hh} h + b_{hh})\n\nIf \"nonlinearity\" is 'relu', then ReLU is used in place of tanh.\nParameters:\n * input_size (int) -- The number of expected features in\n the input x\n * **hidden_size** (*int*) -- The number of features in the\n hidden state *h*\n\n * **bias** (*bool*) -- If \"False\", then the layer does not use\n bias weights *b_ih* and *b_hh*. Default: \"True\"\n\n * **nonlinearity** (*str*) -- The non-linearity to use. Can be\n either \"'tanh'\" or \"'relu'\". Default: \"'tanh'\"\n\nInputs: input, hidden\n * input: tensor containing input features\n * **hidden**: tensor containing the initial hidden state\n Defaults to zero if not provided.\n\nOutputs: h'", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNNCell.html", "category": "pytorch docs"} {"text": "Outputs: h'\n * h' of shape (batch, hidden_size): tensor containing the\n next hidden state for each element in the batch\nShape:\n * input: (N, H_{in}) or (H_{in}) tensor containing input\n features where H_{in} = input_size.\n * hidden: (N, H_{out}) or (H_{out}) tensor containing the\n initial hidden state where H_{out} = *hidden_size*. 
Defaults\n to zero if not provided.\n\n * output: (N, H_{out}) or (H_{out}) tensor containing the next\n hidden state.\n\nVariables:\n * weight_ih (torch.Tensor) -- the learnable input-hidden\n weights, of shape (hidden_size, input_size)\n * **weight_hh** (*torch.Tensor*) -- the learnable hidden-hidden\n weights, of shape *(hidden_size, hidden_size)*\n\n * **bias_ih** -- the learnable input-hidden bias, of shape\n *(hidden_size)*\n\n * **bias_hh** -- the learnable hidden-hidden bias, of shape\n *(hidden_size)*\n\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNNCell.html", "category": "pytorch docs"} {"text": "(hidden_size)\nNote:\n All the weights and biases are initialized from\n \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{1}{\\text{hidden\\_size}}\n\nExamples:\n >>> rnn = nn.RNNCell(10, 20)\n >>> input = torch.randn(6, 3, 10)\n >>> hx = torch.randn(3, 20)\n >>> output = []\n >>> for i in range(6):\n ... hx = rnn(input[i], hx)\n ... output.append(hx)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.RNNCell.html", "category": "pytorch docs"} {"text": "torch.Tensor.rsqrt\nTensor.rsqrt() -> Tensor\nSee \"torch.rsqrt()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.rsqrt.html", "category": "pytorch docs"} {"text": "torch.diagonal_scatter\ntorch.diagonal_scatter(input, src, offset=0, dim1=0, dim2=1) -> Tensor\nEmbeds the values of the \"src\" tensor into \"input\" along the\n diagonal elements of \"input\", with respect to \"dim1\" and \"dim2\".\nThis function returns a tensor with fresh storage; it does not\n return a view.\nThe argument \"offset\" controls which diagonal to consider:\n\n\nIf \"offset\" = 0, it is the main diagonal.\n\n\nIf \"offset\" > 0, it is above the main diagonal.\n\n\nIf \"offset\" < 0, it is below the main diagonal.\n\n\nParameters:\n * input (Tensor) -- the input tensor. Must be at least\n 2-dimensional.\n * **src** (*Tensor*) -- the tensor to embed into \"input\".\n\n * **offset** (*int**, **optional*) -- which diagonal to\n consider. Default: 0 (main diagonal).\n\n * **dim1** (*int**, **optional*) -- first dimension with respect\n to which to take diagonal. Default: 0.\n\n * **dim2** (*int**, **optional*) -- second dimension with\n", "source": "https://pytorch.org/docs/stable/generated/torch.diagonal_scatter.html", "category": "pytorch docs"} {"text": "respect to which to take diagonal. Default: 1.\nNote:\n \"src\" must be of the proper size in order to be embedded into\n \"input\". Specifically, it should have the same shape as\n \"torch.diagonal(input, offset, dim1, dim2)\"\n\nExamples:\n >>> a = torch.zeros(3, 3)\n >>> a\n tensor([[0., 0., 0.],\n [0., 0., 0.],\n [0., 0., 0.]])\n\n >>> torch.diagonal_scatter(a, torch.ones(3), 0)\n tensor([[1., 0., 0.],\n [0., 1., 0.],\n [0., 0., 1.]])\n\n >>> torch.diagonal_scatter(a, torch.ones(2), 1)\n tensor([[0., 1., 0.],\n [0., 0., 1.],\n [0., 0., 0.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.diagonal_scatter.html", "category": "pytorch docs"} {"text": "torch.nn.utils.prune.l1_unstructured\ntorch.nn.utils.prune.l1_unstructured(module, name, amount, importance_scores=None)\nPrunes tensor corresponding to parameter called \"name\" in \"module\"\n by removing the specified amount of (currently unpruned) units\n with the lowest L1-norm. 
Modifies module in place (and also return\n the modified module) by:\n\n\nadding a named buffer called \"name+'_mask'\" corresponding to the\n binary mask applied to the parameter \"name\" by the pruning\n method.\n\n\nreplacing the parameter \"name\" by its pruned version, while the\n original (unpruned) parameter is stored in a new parameter named\n \"name+'_orig'\".\n\n\nParameters:\n * module (nn.Module) -- module containing the tensor to\n prune\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\n * **amount** (*int** or **float*) -- quantity of parameters to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.l1_unstructured.html", "category": "pytorch docs"} {"text": "prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents\n the absolute number of parameters to prune.\n * **importance_scores** (*torch.Tensor*) -- tensor of importance\n scores (of same shape as module parameter) used to compute\n mask for pruning. The values in this tensor indicate the\n importance of the corresponding elements in the parameter\n being pruned. If unspecified or None, the module parameter\n will be used in its place.\n\nReturns:\n modified (i.e. pruned) version of the input module\nReturn type:\n module (nn.Module)\n-[ Examples ]-\n\n\n\nm = prune.l1_unstructured(nn.Linear(2, 3), 'weight', amount=0.2)\nm.state_dict().keys()\n odict_keys(['bias', 'weight_orig', 'weight_mask'])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.l1_unstructured.html", "category": "pytorch docs"} {"text": "torch.Tensor.matrix_exp\nTensor.matrix_exp() -> Tensor\nSee \"torch.matrix_exp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.matrix_exp.html", "category": "pytorch docs"} {"text": "torch.lgamma\ntorch.lgamma(input, *, out=None) -> Tensor\nComputes the natural logarithm of the absolute value of the gamma\n function on \"input\".\n \\text{out}_{i} = \\ln \\Gamma(|\\text{input}_{i}|)\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.arange(0.5, 2, 0.5)\n >>> torch.lgamma(a)\n tensor([ 0.5724, 0.0000, -0.1208])\n", "source": "https://pytorch.org/docs/stable/generated/torch.lgamma.html", "category": "pytorch docs"} {"text": "torch.gather\ntorch.gather(input, dim, index, *, sparse_grad=False, out=None) -> Tensor\nGathers values along an axis specified by dim.\nFor a 3-D tensor the output is specified by:\n out[i][j][k] = input[index[i][j][k]][j][k] # if dim == 0\n out[i][j][k] = input[i][index[i][j][k]][k] # if dim == 1\n out[i][j][k] = input[i][j][index[i][j][k]] # if dim == 2\n\n\"input\" and \"index\" must have the same number of dimensions. It is\n also required that \"index.size(d) <= input.size(d)\" for all\n dimensions \"d != dim\". \"out\" will have the same shape as \"index\".\n Note that \"input\" and \"index\" do not broadcast against each other.\nParameters:\n * input (Tensor) -- the source tensor\n * **dim** (*int*) -- the axis along which to index\n\n * **index** (*LongTensor*) -- the indices of elements to gather\n\nKeyword Arguments:\n * sparse_grad (bool, optional) -- If \"True\", gradient\n w.r.t. \"input\" will be a sparse tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.gather.html", "category": "pytorch docs"} {"text": "w.r.t. 
\"input\" will be a sparse tensor.\n * **out** (*Tensor**, **optional*) -- the destination tensor\n\nExample:\n >>> t = torch.tensor([[1, 2], [3, 4]])\n >>> torch.gather(t, 1, torch.tensor([[0, 0], [1, 0]]))\n tensor([[ 1, 1],\n [ 4, 3]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.gather.html", "category": "pytorch docs"} {"text": "torch.quantize_per_channel\ntorch.quantize_per_channel(input, scales, zero_points, axis, dtype) -> Tensor\nConverts a float tensor to a per-channel quantized tensor with\n given scales and zero points.\nParameters:\n * input (Tensor) -- float tensor to quantize\n * **scales** (*Tensor*) -- float 1D tensor of scales to use,\n size should match \"input.size(axis)\"\n\n * **zero_points** (*int*) -- integer 1D tensor of offset to use,\n size should match \"input.size(axis)\"\n\n * **axis** (*int*) -- dimension on which apply per-channel\n quantization\n\n * **dtype** (\"torch.dtype\") -- the desired data type of returned\n tensor. Has to be one of the quantized dtypes: \"torch.quint8\",\n \"torch.qint8\", \"torch.qint32\"\n\nReturns:\n A newly quantized tensor\nReturn type:\n Tensor\nExample:\n >>> x = torch.tensor([[-1.0, 0.0], [1.0, 2.0]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantize_per_channel.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8)\n tensor([[-1., 0.],\n [ 1., 2.]], size=(2, 2), dtype=torch.quint8,\n quantization_scheme=torch.per_channel_affine,\n scale=tensor([0.1000, 0.0100], dtype=torch.float64),\n zero_point=tensor([10, 0]), axis=0)\n >>> torch.quantize_per_channel(x, torch.tensor([0.1, 0.01]), torch.tensor([10, 0]), 0, torch.quint8).int_repr()\n tensor([[ 0, 10],\n [100, 200]], dtype=torch.uint8)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantize_per_channel.html", "category": "pytorch docs"} {"text": "torch.signal.windows.blackman\ntorch.signal.windows.blackman(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\nComputes the Blackman window.\nThe Blackman window is defined as follows:\n w_n = 0.42 - 0.5 \\cos \\left( \\frac{2 \\pi n}{M - 1} \\right) +\n 0.08 \\cos \\left( \\frac{4 \\pi n}{M - 1} \\right)\n\nThe window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\nParameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\nKeyword Arguments:\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.blackman.html", "category": "pytorch docs"} {"text": "of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). 
\"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\nReturn type:\n Tensor\nExamples:\n >>> # Generates a symmetric Blackman window.\n >>> torch.signal.windows.blackman(5)\n tensor([-1.4901e-08, 3.4000e-01, 1.0000e+00, 3.4000e-01, -1.4901e-08])\n\n >>> # Generates a periodic Blackman window.\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.blackman.html", "category": "pytorch docs"} {"text": "\n\n\nGenerates a periodic Blackman window.\n >>> torch.signal.windows.blackman(5, sym=False)\n tensor([-1.4901e-08, 2.0077e-01, 8.4923e-01, 8.4923e-01, 2.0077e-01])\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.blackman.html", "category": "pytorch docs"} {"text": "torch.nn.functional.softsign\ntorch.nn.functional.softsign(input) -> Tensor\nApplies element-wise, the function \\text{SoftSign}(x) = \\frac{x}{1\n + |x|}\nSee \"Softsign\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.softsign.html", "category": "pytorch docs"} {"text": "Event\nclass torch.cuda.Event(enable_timing=False, blocking=False, interprocess=False)\nWrapper around a CUDA event.\nCUDA events are synchronization markers that can be used to monitor\n the device's progress, to accurately measure timing, and to\n synchronize CUDA streams.\nThe underlying CUDA events are lazily initialized when the event is\n first recorded or exported to another process. After creation, only\n streams on the same device may record the event. However, streams\n on any device can wait on the event.\nParameters:\n * enable_timing (bool, optional) -- indicates if the\n event should measure time (default: \"False\")\n * **blocking** (*bool**, **optional*) -- if \"True\", \"wait()\"\n will be blocking (default: \"False\")\n\n * **interprocess** (*bool*) -- if \"True\", the event can be\n shared between processes (default: \"False\")\n\nelapsed_time(end_event)\n Returns the time elapsed in milliseconds after the event was\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.Event.html", "category": "pytorch docs"} {"text": "recorded and before the end_event was recorded.\nclassmethod from_ipc_handle(device, handle)\n Reconstruct an event from an IPC handle on the given device.\n\nipc_handle()\n Returns an IPC handle of this event. If not recorded yet, the\n event will use the current device.\n\nquery()\n Checks if all work currently captured by event has completed.\n\n Returns:\n A boolean indicating if all work currently captured by event\n has completed.\n\nrecord(stream=None)\n Records the event in a given stream.\n\n Uses \"torch.cuda.current_stream()\" if no stream is specified.\n The stream's device must match the event's device.\n\nsynchronize()\n Waits for the event to complete.\n\n Waits until the completion of all work currently captured in\n this event. 
This prevents the CPU thread from proceeding until\n the event completes.\n\n Note:\n\n This is a wrapper around \"cudaEventSynchronize()\": see CUDA\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.Event.html", "category": "pytorch docs"} {"text": "Event documentation for more info.\nwait(stream=None)\n Makes all future work submitted to the given stream wait for\n this event.\n\n Use \"torch.cuda.current_stream()\" if no stream is specified.\n\n Note:\n\n This is a wrapper around \"cudaStreamWaitEvent()\": see CUDA\n Event documentation for more info.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.Event.html", "category": "pytorch docs"} {"text": "torch.argsort\ntorch.argsort(input, dim=- 1, descending=False, stable=False) -> Tensor\nReturns the indices that sort a tensor along a given dimension in\n ascending order by value.\nThis is the second value returned by \"torch.sort()\". See its\n documentation for the exact semantics of this method.\nIf \"stable\" is \"True\" then the sorting routine becomes stable,\n preserving the order of equivalent elements. If \"False\", the\n relative order of values which compare equal is not guaranteed.\n \"True\" is slower.\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int**, **optional*) -- the dimension to sort along\n\n * **descending** (*bool**, **optional*) -- controls the sorting\n order (ascending or descending)\n\n * **stable** (*bool**, **optional*) -- controls the relative\n order of equivalent elements\n\nExample:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 0.0785, 1.5267, -0.8521, 0.4065],\n", "source": "https://pytorch.org/docs/stable/generated/torch.argsort.html", "category": "pytorch docs"} {"text": "[ 0.1598, 0.0788, -0.0745, -1.2700],\n [ 1.2208, 1.0722, -0.7064, 1.2564],\n [ 0.0669, -0.2318, -0.8229, -0.9280]])\n >>> torch.argsort(a, dim=1)\n tensor([[2, 0, 3, 1],\n [3, 2, 1, 0],\n [2, 1, 0, 3],\n [3, 2, 1, 0]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.argsort.html", "category": "pytorch docs"} {"text": "torch.is_grad_enabled\ntorch.is_grad_enabled()\nReturns True if grad mode is currently enabled.", "source": "https://pytorch.org/docs/stable/generated/torch.is_grad_enabled.html", "category": "pytorch docs"} {"text": "CosineEmbeddingLoss\nclass torch.nn.CosineEmbeddingLoss(margin=0.0, size_average=None, reduce=None, reduction='mean')\nCreates a criterion that measures the loss given input tensors x_1,\n x_2 and a Tensor label y with values 1 or -1. This is used for\n measuring whether two inputs are similar or dissimilar, using the\n cosine similarity, and is typically used for learning nonlinear\n embeddings or semi-supervised learning.\nThe loss function for each sample is:\n \\text{loss}(x, y) = \\begin{cases} 1 - \\cos(x_1, x_2), & \\text{if\n } y = 1 \\\\ \\max(0, \\cos(x_1, x_2) - \\text{margin}), & \\text{if }\n y = -1 \\end{cases}\n\nParameters:\n * margin (float, optional) -- Should be a number from\n -1 to 1, 0 to 0.5 is suggested. If \"margin\" is missing, the\n default value is 0.\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CosineEmbeddingLoss.html", "category": "pytorch docs"} {"text": "loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. 
Ignored when \"reduce\" is \"False\". Default: \"True\"\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CosineEmbeddingLoss.html", "category": "pytorch docs"} {"text": "deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\nShape:\n * Input1: (N, D) or (D), where N is the batch size and D is\n the embedding dimension.\n * Input2: (N, D) or (D), same shape as Input1.\n\n * Target: (N) or ().\n\n * Output: If \"reduction\" is \"'none'\", then (N), otherwise\n scalar.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CosineEmbeddingLoss.html", "category": "pytorch docs"} {"text": "torch.arccosh\ntorch.arccosh(input, *, out=None) -> Tensor\nAlias for \"torch.acosh()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arccosh.html", "category": "pytorch docs"} {"text": "ConvReLU2d\nclass torch.ao.nn.intrinsic.qat.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', qconfig=None)\nA ConvReLU2d module is a fused module of Conv2d and ReLU, attached\n with FakeQuantize modules for weight for quantization aware\n training.\nWe combined the interface of \"Conv2d\" and \"BatchNorm2d\".\nVariables:\n weight_fake_quant -- fake quant module for weight", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.qat.ConvReLU2d.html", "category": "pytorch docs"} {"text": "torch.vstack\ntorch.vstack(tensors, *, out=None) -> Tensor\nStack tensors in sequence vertically (row wise).\nThis is equivalent to concatenation along the first axis after all\n 1-D tensors have been reshaped by \"torch.atleast_2d()\".\nParameters:\n tensors (sequence of Tensors) -- sequence of tensors to\n concatenate\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([1, 2, 3])\n >>> b = torch.tensor([4, 5, 6])\n >>> torch.vstack((a,b))\n tensor([[1, 2, 3],\n [4, 5, 6]])\n >>> a = torch.tensor([[1],[2],[3]])\n >>> b = torch.tensor([[4],[5],[6]])\n >>> torch.vstack((a,b))\n tensor([[1],\n [2],\n [3],\n [4],\n [5],\n [6]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.vstack.html", "category": "pytorch docs"} {"text": "torch.flip\ntorch.flip(input, dims) -> Tensor\nReverse the order of an n-D tensor along given axis in dims.\nNote:\n *torch.flip* makes a copy of \"input\"'s data. 
This is different\n from NumPy's *np.flip*, which returns a view in constant time.\n Since copying a tensor's data is more work than viewing that\n data, *torch.flip* is expected to be slower than *np.flip*.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dims** (*a list** or **tuple*) -- axis to flip on\n\nExample:\n >>> x = torch.arange(8).view(2, 2, 2)\n >>> x\n tensor([[[ 0, 1],\n [ 2, 3]],\n\n [[ 4, 5],\n [ 6, 7]]])\n >>> torch.flip(x, [0, 1])\n tensor([[[ 6, 7],\n [ 4, 5]],\n\n [[ 2, 3],\n [ 0, 1]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.flip.html", "category": "pytorch docs"} {"text": "torch.Tensor.frexp\nTensor.frexp(input) -> (Tensor mantissa, Tensor exponent)\nSee \"torch.frexp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.frexp.html", "category": "pytorch docs"} {"text": "torch.median\ntorch.median(input) -> Tensor\nReturns the median of the values in \"input\".\nNote:\n The median is not unique for \"input\" tensors with an even number\n of elements. In this case the lower of the two medians is\n returned. To compute the mean of both medians, use\n \"torch.quantile()\" with \"q=0.5\" instead.\n\nWarning:\n This function produces deterministic (sub)gradients unlike\n \"median(dim=0)\"\n\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> a = torch.randn(1, 3)\n >>> a\n tensor([[ 1.5219, -1.5212, 0.2202]])\n >>> torch.median(a)\n tensor(0.2202)\n\ntorch.median(input, dim=- 1, keepdim=False, *, out=None)\nReturns a namedtuple \"(values, indices)\" where \"values\" contains\n the median of each row of \"input\" in the dimension \"dim\", and\n \"indices\" contains the index of the median values found in the\n dimension \"dim\".\nBy default, \"dim\" is the last dimension of the \"input\" tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.median.html", "category": "pytorch docs"} {"text": "If \"keepdim\" is \"True\", the output tensors are of the same size as\n \"input\" except in the dimension \"dim\" where they are of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the outputs tensor having 1 fewer dimension than \"input\".\nNote:\n The median is not unique for \"input\" tensors with an even number\n of elements in the dimension \"dim\". In this case the lower of the\n two medians is returned. To compute the mean of both medians in\n \"input\", use \"torch.quantile()\" with \"q=0.5\" instead.\n\nWarning:\n \"indices\" does not necessarily contain the first occurrence of\n each median value found, unless it is unique. The exact\n implementation details are device-specific. Do not expect the\n same result when run on CPU and GPU in general. 
For the same\n reason do not expect the gradients to be deterministic.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to reduce.\n", "source": "https://pytorch.org/docs/stable/generated/torch.median.html", "category": "pytorch docs"} {"text": "\nkeepdim (bool) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n out ((Tensor, Tensor), optional) -- The first\n tensor will be populated with the median values and the second\n tensor, which must have dtype long, with their indices in the\n dimension \"dim\" of \"input\".\nExample:\n >>> a = torch.randn(4, 5)\n >>> a\n tensor([[ 0.2505, -0.3982, -0.9948, 0.3518, -1.3131],\n [ 0.3180, -0.6993, 1.0436, 0.0438, 0.2270],\n [-0.2751, 0.7303, 0.2192, 0.3321, 0.2488],\n [ 1.0778, -1.9510, 0.7048, 0.4742, -0.7125]])\n >>> torch.median(a, 1)\n torch.return_types.median(values=tensor([-0.3982, 0.2270, 0.2488, 0.4742]), indices=tensor([1, 4, 4, 3]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.median.html", "category": "pytorch docs"} {"text": "ConstantPad1d\nclass torch.nn.ConstantPad1d(padding, value)\nPads the input tensor boundaries with a constant value.\nFor N-dimensional padding, use \"torch.nn.functional.pad()\".\nParameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in both boundaries. If a 2-tuple,\n uses (\\text{padding_left}, \\text{padding_right})\nShape:\n * Input: (C, W_{in}) or (N, C, W_{in}).\n * Output: (C, W_{out}) or (N, C, W_{out}), where\n\n W_{out} = W_{in} + \\text{padding\\_left} +\n \\text{padding\\_right}\n\nExamples:\n >>> m = nn.ConstantPad1d(2, 3.5)\n >>> input = torch.randn(1, 2, 4)\n >>> input\n tensor([[[-1.0491, -0.7152, -0.0749, 0.8530],\n [-1.3287, 1.8966, 0.1466, -0.2771]]])\n >>> m(input)\n tensor([[[ 3.5000, 3.5000, -1.0491, -0.7152, -0.0749, 0.8530, 3.5000,\n 3.5000],\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad1d.html", "category": "pytorch docs"} {"text": "3.5000],\n [ 3.5000, 3.5000, -1.3287, 1.8966, 0.1466, -0.2771, 3.5000,\n 3.5000]]])\n >>> m = nn.ConstantPad1d(2, 3.5)\n >>> input = torch.randn(1, 2, 3)\n >>> input\n tensor([[[ 1.6616, 1.4523, -1.1255],\n [-3.6372, 0.1182, -1.8652]]])\n >>> m(input)\n tensor([[[ 3.5000, 3.5000, 1.6616, 1.4523, -1.1255, 3.5000, 3.5000],\n [ 3.5000, 3.5000, -3.6372, 0.1182, -1.8652, 3.5000, 3.5000]]])\n >>> # using different paddings for different sides\n >>> m = nn.ConstantPad1d((3, 1), 3.5)\n >>> m(input)\n tensor([[[ 3.5000, 3.5000, 3.5000, 1.6616, 1.4523, -1.1255, 3.5000],\n [ 3.5000, 3.5000, 3.5000, -3.6372, 0.1182, -1.8652, 3.5000]]])", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConstantPad1d.html", "category": "pytorch docs"} {"text": "ZeroPad2d\nclass torch.nn.ZeroPad2d(padding)\nPads the input tensor boundaries with zero.\nFor N-dimensional padding, use \"torch.nn.functional.pad()\".\nParameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. 
If a 4-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom})\nShape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where\n\n H_{out} = H_{in} + \\text{padding\\_top} +\n \\text{padding\\_bottom}\n\n W_{out} = W_{in} + \\text{padding\\_left} +\n \\text{padding\\_right}\n\nExamples:\n >>> m = nn.ZeroPad2d(2)\n >>> input = torch.randn(1, 1, 3, 3)\n >>> input\n tensor([[[[-0.1678, -0.4418, 1.9466],\n [ 0.9604, -0.4219, -0.5241],\n [-0.9162, -0.5436, -0.6446]]]])\n >>> m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ZeroPad2d.html", "category": "pytorch docs"} {"text": "\n\n\nm(input)\n tensor([[[[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.0000, 0.0000, -0.1678, -0.4418, 1.9466, 0.0000, 0.0000],\n [ 0.0000, 0.0000, 0.9604, -0.4219, -0.5241, 0.0000, 0.0000],\n [ 0.0000, 0.0000, -0.9162, -0.5436, -0.6446, 0.0000, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000, 0.0000]]]])\n >>> # using different paddings for different sides\n >>> m = nn.ZeroPad2d((1, 1, 2, 0))\n >>> m(input)\n tensor([[[[ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.0000, 0.0000, 0.0000, 0.0000, 0.0000],\n [ 0.0000, -0.1678, -0.4418, 1.9466, 0.0000],\n [ 0.0000, 0.9604, -0.4219, -0.5241, 0.0000],\n [ 0.0000, -0.9162, -0.5436, -0.6446, 0.0000]]]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ZeroPad2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.copysign\nTensor.copysign(other) -> Tensor\nSee \"torch.copysign()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.copysign.html", "category": "pytorch docs"} {"text": "torch.true_divide\ntorch.true_divide(dividend, divisor, *, out) -> Tensor\nAlias for \"torch.div()\" with \"rounding_mode=None\".", "source": "https://pytorch.org/docs/stable/generated/torch.true_divide.html", "category": "pytorch docs"} {"text": "torch.Tensor.scatter_reduce\nTensor.scatter_reduce(dim, index, src, reduce, *, include_self=True) -> Tensor\nOut-of-place version of \"torch.Tensor.scatter_reduce_()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce.html", "category": "pytorch docs"} {"text": "torch.linalg.ldl_solve\ntorch.linalg.ldl_solve(LD, pivots, B, *, hermitian=False, out=None) -> Tensor\nComputes the solution of a system of linear equations using the LDL\n factorization.\n\"LD\" and \"pivots\" are the compact representation of the LDL\n factorization and are expected to be computed by\n \"torch.linalg.ldl_factor_ex()\". \"hermitian\" argument to this\n function should be the same as the corresponding arguments in\n \"torch.linalg.ldl_factor_ex()\".\nSupports input of float, double, cfloat and cdouble dtypes. 
Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nWarning:\n This function is \"experimental\" and it may change in a future\n PyTorch release.\n\nParameters:\n * LD (Tensor) -- the n times n matrix or the batch of\n such matrices of size (, n, n)* where *** is one or more\n batch dimensions.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_solve.html", "category": "pytorch docs"} {"text": "batch dimensions.\n * **pivots** (*Tensor*) -- the pivots corresponding to the LDL\n factorization of \"LD\".\n\n * **B** (*Tensor*) -- right-hand side tensor of shape *(*, n,\n k)*.\n\nKeyword Arguments:\n * hermitian (bool, optional) -- whether to consider\n the decomposed matrix to be Hermitian or symmetric. For real-\n valued matrices, this switch has no effect. Default: False.\n * **out** (*tuple**, **optional*) -- output tensor. *B* may be\n passed as *out* and the result is computed in-place on *B*.\n Ignored if *None*. Default: *None*.\n\nExamples:\n >>> A = torch.randn(2, 3, 3)\n >>> A = A @ A.mT # make symmetric\n >>> LD, pivots, info = torch.linalg.ldl_factor_ex(A)\n >>> B = torch.randn(2, 3, 4)\n >>> X = torch.linalg.ldl_solve(LD, pivots, B)\n >>> torch.linalg.norm(A @ X - B)\n >>> tensor(0.0001)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.ldl_solve.html", "category": "pytorch docs"} {"text": "torch.tan\ntorch.tan(input, *, out=None) -> Tensor\nReturns a new tensor with the tangent of the elements of \"input\".\n \\text{out}_{i} = \\tan(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([-1.2027, -1.7687, 0.4412, -1.3856])\n >>> torch.tan(a)\n tensor([-2.5930, 4.9859, 0.4722, -5.3366])\n", "source": "https://pytorch.org/docs/stable/generated/torch.tan.html", "category": "pytorch docs"} {"text": "torch.Tensor.greater_equal_\nTensor.greater_equal_(other) -> Tensor\nIn-place version of \"greater_equal()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.greater_equal_.html", "category": "pytorch docs"} {"text": "default_fused_per_channel_wt_fake_quant\ntorch.quantization.fake_quantize.default_fused_per_channel_wt_fake_quant\nalias of functools.partial(, observer=,\n quant_min=-128, quant_max=127, dtype=torch.qint8,\n qscheme=torch.per_channel_symmetric){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_fused_per_channel_wt_fake_quant.html", "category": "pytorch docs"} {"text": "torch.optim.Optimizer.state_dict\nOptimizer.state_dict()\nReturns the state of the optimizer as a \"dict\".\nIt contains two entries:\n\n\nstate - a dict holding current optimization state. Its content\n differs between optimizer classes.\n\n\nparam_groups - a list containing all parameter groups where each\n parameter group is a dict\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Optimizer.state_dict.html", "category": "pytorch docs"} {"text": "leaky_relu\nclass torch.ao.nn.quantized.functional.leaky_relu(input, negative_slope=0.01, inplace=False, scale=None, zero_point=None)\nQuantized version of the. 
leaky_relu(input, negative_slope=0.01,\n inplace=False, scale, zero_point) -> Tensor\nApplies element-wise, \\text{LeakyReLU}(x) = \\max(0, x) +\n \\text{negative_slope} * \\min(0, x)\nParameters:\n * input (Tensor) -- Quantized input\n * **negative_slope** (*float*) -- The slope of the negative\n input\n\n * **inplace** (*bool*) -- Inplace modification of the input\n tensor\n\n * **scale** (*Optional**[**float**]*) -- Scale and zero point of\n the output tensor.\n\n * **zero_point** (*Optional**[**int**]*) -- Scale and zero point\n of the output tensor.\n\nSee \"LeakyReLU\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.leaky_relu.html", "category": "pytorch docs"} {"text": "torch.func.jacfwd\ntorch.func.jacfwd(func, argnums=0, has_aux=False, *, randomness='error')\nComputes the Jacobian of \"func\" with respect to the arg(s) at index\n \"argnum\" using forward-mode autodiff\nParameters:\n * func (function) -- A Python function that takes one or\n more arguments, one of which must be a Tensor, and returns one\n or more Tensors\n * **argnums** (*int** or **Tuple**[**int**]*) -- Optional,\n integer or tuple of integers, saying which arguments to get\n the Jacobian with respect to. Default: 0.\n\n * **has_aux** (*bool*) -- Flag indicating that \"func\" returns a\n \"(output, aux)\" tuple where the first element is the output of\n the function to be differentiated and the second element is\n auxiliary objects that will not be differentiated. Default:\n False.\n\n * **randomness** (*str*) -- Flag indicating what type of\n randomness to use. See \"vmap()\" for more detail. Allowed:\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacfwd.html", "category": "pytorch docs"} {"text": "\"different\", \"same\", \"error\". Default: \"error\"\nReturns:\n Returns a function that takes in the same inputs as \"func\" and\n returns the Jacobian of \"func\" with respect to the arg(s) at\n \"argnums\". If \"has_aux is True\", then the returned function\n instead returns a \"(jacobian, aux)\" tuple where \"jacobian\" is\n the Jacobian and \"aux\" is auxiliary objects returned by \"func\".\nNote:\n You may see this API error out with \"forward-mode AD not\n implemented for operator X\". If so, please file a bug report and\n we will prioritize it. 
An alternative is to use \"jacrev()\", which\n has better operator coverage.\n\nA basic usage with a pointwise, unary operation will give a\n diagonal array as the Jacobian\n\n\n\nfrom torch.func import jacfwd\nx = torch.randn(5)\njacobian = jacfwd(torch.sin)(x)\nexpected = torch.diag(torch.cos(x))\nassert torch.allclose(jacobian, expected)\n\n\n\n\"jacfwd()\" can be composed with vmap to produce batched Jacobians:", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacfwd.html", "category": "pytorch docs"} {"text": "\n\n\nfrom torch.func import jacfwd, vmap\nx = torch.randn(64, 5)\njacobian = vmap(jacfwd(torch.sin))(x)\nassert jacobian.shape == (64, 5, 5)\n\n\n\nIf you would like to compute the output of the function as well as\n the jacobian of the function, use the \"has_aux\" flag to return the\n output as an auxiliary object:\n\n\n\nfrom torch.func import jacfwd\nx = torch.randn(5)\ndef f(x):\n return x.sin()\ndef g(x):\n result = f(x)\n return result, result\njacobian_f, f_x = jacfwd(g, has_aux=True)(x)\nassert torch.allclose(f_x, f(x))\n\n\n\nAdditionally, \"jacrev()\" can be composed with itself or \"jacrev()\"\n to produce Hessians\n\n\n\nfrom torch.func import jacfwd, jacrev\ndef f(x):\n return x.sin().sum()\nx = torch.randn(5)\nhessian = jacfwd(jacrev(f))(x)\nassert torch.allclose(hessian, torch.diag(-x.sin()))\n\n\n\nBy default, \"jacfwd()\" computes the Jacobian with respect to the", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacfwd.html", "category": "pytorch docs"} {"text": "first input. However, it can compute the Jacboian with respect to a\n different argument by using \"argnums\":\n\n\n\nfrom torch.func import jacfwd\ndef f(x, y):\n return x + y ** 2\nx, y = torch.randn(5), torch.randn(5)\njacobian = jacfwd(f, argnums=1)(x, y)\nexpected = torch.diag(2 * y)\nassert torch.allclose(jacobian, expected)\n\n\n\nAdditionally, passing a tuple to \"argnums\" will compute the\n Jacobian with respect to multiple arguments\n\n\n\nfrom torch.func import jacfwd\ndef f(x, y):\n return x + y ** 2\nx, y = torch.randn(5), torch.randn(5)\njacobian = jacfwd(f, argnums=(0, 1))(x, y)\nexpectedX = torch.diag(torch.ones_like(x))\nexpectedY = torch.diag(2 * y)\nassert torch.allclose(jacobian[0], expectedX)\nassert torch.allclose(jacobian[1], expectedY)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.jacfwd.html", "category": "pytorch docs"} {"text": "torch._foreach_exp\ntorch._foreach_exp(self: List[Tensor]) -> List[Tensor]\nApply \"torch.exp()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_exp.html", "category": "pytorch docs"} {"text": "torch.linalg.solve_ex\ntorch.linalg.solve_ex(A, B, *, left=True, check_errors=False, out=None)\nA version of \"solve()\" that does not perform error checks unless\n \"check_errors\"= True. It also returns the \"info\" tensor returned\n by LAPACK's getrf.\nNote:\n When the inputs are on a CUDA device, this function synchronizes\n only when \"check_errors\"*= True*.\n\nWarning:\n This function is \"experimental\" and it may change in a future\n PyTorch release.\n\nParameters:\n A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions.\nKeyword Arguments:\n * left (bool, optional) -- whether to solve the system\n AX=B or XA = B. Default: True.\n * **check_errors** (*bool**, **optional*) -- controls whether to\n check the content of \"infos\" and raise an error if it is non-\n zero. 
Default: *False*.\n\n * **out** (*tuple**, **optional*) -- tuple of two tensors to\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve_ex.html", "category": "pytorch docs"} {"text": "write the output to. Ignored if None. Default: None.\nReturns:\n A named tuple (result, info).\nExamples:\n >>> A = torch.randn(3, 3)\n >>> Ainv, info = torch.linalg.solve_ex(A)\n >>> torch.dist(torch.linalg.inv(A), Ainv)\n tensor(0.)\n >>> info\n tensor(0, dtype=torch.int32)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.solve_ex.html", "category": "pytorch docs"} {"text": "torch.nn.functional.smooth_l1_loss\ntorch.nn.functional.smooth_l1_loss(input, target, size_average=None, reduce=None, reduction='mean', beta=1.0)\nFunction that uses a squared term if the absolute element-wise\n error falls below beta and an L1 term otherwise.\nSee \"SmoothL1Loss\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.smooth_l1_loss.html", "category": "pytorch docs"} {"text": "MaxPool2d\nclass torch.nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)\nApplies a 2D max pooling over an input signal composed of several\n input planes.\nIn the simplest case, the output value of the layer with input size\n (N, C, H, W), output (N, C, H_{out}, W_{out}) and \"kernel_size\"\n (kH, kW) can be precisely described as:\n \\begin{aligned} out(N_i, C_j, h, w) ={} & \\max_{m=0, \\ldots,\n kH-1} \\max_{n=0, \\ldots, kW-1} \\\\ &\n \\text{input}(N_i, C_j, \\text{stride[0]} \\times h + m,\n \\text{stride[1]} \\times w + n) \\end{aligned}\n\nIf \"padding\" is non-zero, then the input is implicitly padded with\n negative infinity on both sides for \"padding\" number of points.\n \"dilation\" controls the spacing between the kernel points. It is\n harder to describe, but this link has a nice visualization of what\n \"dilation\" does.\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html", "category": "pytorch docs"} {"text": "\"dilation\" does.\nNote:\n When ceil_mode=True, sliding windows are allowed to go off-bounds\n if they start within the left padding or the input. Sliding\n windows that would start in the right padded region are ignored.\n\nThe parameters \"kernel_size\", \"stride\", \"padding\", \"dilation\" can\n either be:\n * a single \"int\" -- in which case the same value is used for the\n height and width dimension\n\n * a \"tuple\" of two ints -- in which case, the first *int* is\n used for the height dimension, and the second *int* for the\n width dimension\n\nParameters:\n * kernel_size (Union[int, Tuple[int,\n int]]) -- the size of the window to take a max over\n * **stride** (*Union**[**int**, **Tuple**[**int**, **int**]**]*)\n -- the stride of the window. Default value is \"kernel_size\"\n\n * **padding** (*Union**[**int**, **Tuple**[**int**,\n **int**]**]*) -- Implicit negative infinity padding to be\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html", "category": "pytorch docs"} {"text": "added on both sides\n * **dilation** (*Union**[**int**, **Tuple**[**int**,\n **int**]**]*) -- a parameter that controls the stride of\n elements in the window\n\n * **return_indices** (*bool*) -- if \"True\", will return the max\n indices along with the outputs. 
Useful for\n \"torch.nn.MaxUnpool2d\" later\n\n * **ceil_mode** (*bool*) -- when True, will use *ceil* instead\n of *floor* to compute the output shape\n\nShape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in})\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where\n\n H_{out} = \\left\\lfloor\\frac{H_{in} + 2 * \\text{padding[0]}\n - \\text{dilation[0]} \\times (\\text{kernel\\_size[0]} -\n 1) - 1}{\\text{stride[0]}} + 1\\right\\rfloor\n\n W_{out} = \\left\\lfloor\\frac{W_{in} + 2 * \\text{padding[1]}\n - \\text{dilation[1]} \\times (\\text{kernel\\_size[1]} -\n 1) - 1}{\\text{stride[1]}} + 1\\right\\rfloor\n\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html", "category": "pytorch docs"} {"text": "Examples:\n >>> # pool of square window of size=3, stride=2\n >>> m = nn.MaxPool2d(3, stride=2)\n >>> # pool of non-square window\n >>> m = nn.MaxPool2d((3, 2), stride=(2, 1))\n >>> input = torch.randn(20, 16, 50, 32)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool2d.html", "category": "pytorch docs"} {"text": "torch.jit.fork\ntorch.jit.fork(func, args, *kwargs)\nCreates an asynchronous task executing func and a reference to\n the value of the result of this execution. fork will return\n immediately, so the return value of func may not have been\n computed yet. To force completion of the task and access the return\n value invoke torch.jit.wait on the Future. fork invoked with a\n func which returns T is typed as torch.jit.Future[T]. fork\n calls can be arbitrarily nested, and may be invoked with positional\n and keyword arguments. Asynchronous execution will only occur when\n run in TorchScript. If run in pure python, fork will not execute\n in parallel. fork will also not execute in parallel when invoked\n while tracing, however the fork and wait calls will be captured\n in the exported IR Graph.\nWarning:\n *fork* tasks will execute non-deterministically. We recommend\n only spawning parallel fork tasks for pure functions that do not\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.fork.html", "category": "pytorch docs"} {"text": "modify their inputs, module attributes, or global state.\nParameters:\n * func (callable or torch.nn.Module) -- A Python\n function or torch.nn.Module that will be invoked. If\n executed in TorchScript, it will execute asynchronously,\n otherwise it will not. Traced invocations of fork will be\n captured in the IR.\n * ***args** -- arguments to invoke *func* with.\n\n * ****kwargs** -- arguments to invoke *func* with.\n\nReturns:\n a reference to the execution of func. 
The value T can only\n be accessed by forcing completion of func through\n torch.jit.wait.\nReturn type:\n torch.jit.Future[T]\nExample (fork a free function):\n import torch\n from torch import Tensor\n def foo(a : Tensor, b : int) -> Tensor:\n return a + b\n def bar(a):\n fut : torch.jit.Future[Tensor] = torch.jit.fork(foo, a, b=2)\n return torch.jit.wait(fut)\n script_bar = torch.jit.script(bar)\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.fork.html", "category": "pytorch docs"} {"text": "script_bar = torch.jit.script(bar)\n input = torch.tensor(2)\n # only the scripted version executes asynchronously\n assert script_bar(input) == bar(input)\n # trace is not run asynchronously, but fork is captured in IR\n graph = torch.jit.trace(bar, (input,)).graph\n assert \"fork\" in str(graph)\nExample (fork a module method):\n import torch\n from torch import Tensor\n class AddMod(torch.nn.Module):\n def forward(self, a: Tensor, b : int):\n return a + b\n class Mod(torch.nn.Module):\n def __init__(self):\n super(self).__init__()\n self.mod = AddMod()\n def forward(self, input):\n fut = torch.jit.fork(self.mod, a, b=2)\n return torch.jit.wait(fut)\n input = torch.tensor(2)\n mod = Mod()\n assert mod(input) == torch.jit.script(mod).forward(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.fork.html", "category": "pytorch docs"} {"text": "torch.Tensor.conj\nTensor.conj() -> Tensor\nSee \"torch.conj()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.conj.html", "category": "pytorch docs"} {"text": "torch.nn.functional.logsigmoid\ntorch.nn.functional.logsigmoid(input) -> Tensor\nApplies element-wise \\text{LogSigmoid}(x_i) = \\log \\left(\\frac{1}{1\n + \\exp(-x_i)}\\right)\nSee \"LogSigmoid\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.logsigmoid.html", "category": "pytorch docs"} {"text": "Parameter\nclass torch.nn.parameter.Parameter(data=None, requires_grad=True)\nA kind of Tensor that is to be considered a module parameter.\nParameters are \"Tensor\" subclasses, that have a very special\n property when used with \"Module\" s - when they're assigned as\n Module attributes they are automatically added to the list of its\n parameters, and will appear e.g. in \"parameters()\" iterator.\n Assigning a Tensor doesn't have such effect. This is because one\n might want to cache some temporary state, like last hidden state of\n the RNN, in the model. If there was no such class as \"Parameter\",\n these temporaries would get registered too.\nParameters:\n * data (Tensor) -- parameter tensor.\n * **requires_grad** (*bool**, **optional*) -- if the parameter\n requires gradient. See Locally disabling gradient computation\n for more details. 
Default: *True*\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.parameter.Parameter.html", "category": "pytorch docs"} {"text": "torch._foreach_lgamma_\ntorch._foreach_lgamma_(self: List[Tensor]) -> None\nApply \"torch.lgamma()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_lgamma_.html", "category": "pytorch docs"} {"text": "torch.Tensor.q_zero_point\nTensor.q_zero_point() -> int\nGiven a Tensor quantized by linear (affine) quantization, returns\n the zero_point of the underlying quantizer().", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.q_zero_point.html", "category": "pytorch docs"} {"text": "torch.Tensor.dim\nTensor.dim() -> int\nReturns the number of dimensions of \"self\" tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.dim.html", "category": "pytorch docs"} {"text": "PlaceholderObserver\nclass torch.quantization.observer.PlaceholderObserver(dtype=torch.float32, custom_op_name='', compute_dtype=None, quant_min=None, quant_max=None, is_dynamic=False)\nObserver that doesn't do anything and just passes its configuration\n to the quantized module's \".from_float()\".\nCan be used for quantization to float16 which doesn't require\n determining ranges.\nParameters:\n * dtype -- dtype argument to the quantize node needed to\n implement the reference model spec.\n * **quant_min** -- minimum value in quantized domain (TODO:\n align behavior with other observers)\n\n * **quant_max** -- maximum value in quantized domain\n\n * **custom_op_name** -- (temporary) specify this observer for an\n operator that doesn't require any observation (Can be used in\n Graph Mode Passes for special case ops).\n\n * **compute_dtype** (*deprecated*) -- if set, marks the future\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.PlaceholderObserver.html", "category": "pytorch docs"} {"text": "quantize function to use dynamic quantization instead of\n static quantization.
This field is deprecated, use\n is_dynamic=True instead.\n * **is_dynamic** -- if True, the *quantize* function in the\n reference model representation taking stats from this observer\n instance will use dynamic quantization.\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.PlaceholderObserver.html", "category": "pytorch docs"} {"text": "torch.Tensor.element_size\nTensor.element_size() -> int\nReturns the size in bytes of an individual element.\nExample:\n >>> torch.tensor([]).element_size()\n 4\n >>> torch.tensor([], dtype=torch.uint8).element_size()\n 1\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.element_size.html", "category": "pytorch docs"} {"text": "torch.Tensor.sin_\nTensor.sin_() -> Tensor\nIn-place version of \"sin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sin_.html", "category": "pytorch docs"} {"text": "torch.Tensor.lcm\nTensor.lcm(other) -> Tensor\nSee \"torch.lcm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lcm.html", "category": "pytorch docs"} {"text": "torch.nn.utils.parametrize.is_parametrized\ntorch.nn.utils.parametrize.is_parametrized(module, tensor_name=None)\nReturns \"True\" if module has an active parametrization.\nIf the argument \"tensor_name\" is specified, returns \"True\" if\n \"module[tensor_name]\" is parametrized.\nParameters:\n * module (nn.Module) -- module to query\n * **tensor_name** (*str**, **optional*) -- attribute in the\n module to query Default: \"None\"\n\nReturn type:\n bool", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.is_parametrized.html", "category": "pytorch docs"} {"text": "torch.Tensor.scatter_reduce_\nTensor.scatter_reduce_(dim, index, src, reduce, *, include_self=True) -> Tensor\nReduces all values from the \"src\" tensor to the indices specified\n in the \"index\" tensor in the \"self\" tensor using the applied\n reduction defined via the \"reduce\" argument (\"\"sum\"\", \"\"prod\"\",\n \"\"mean\"\", \"\"amax\"\", \"\"amin\"\"). For each value in \"src\", it is\n reduced to an index in \"self\" which is specified by its index in\n \"src\" for \"dimension != dim\" and by the corresponding value in\n \"index\" for \"dimension = dim\". If \"include_self=\"True\"\", the values\n in the \"self\" tensor are included in the reduction.\n\"self\", \"index\" and \"src\" should all have the same number of\n dimensions. It is also required that \"index.size(d) <= src.size(d)\"\n for all dimensions \"d\", and that \"index.size(d) <= self.size(d)\"\n for all dimensions \"d != dim\". Note that \"index\" and \"src\" do not\n broadcast.\nFor a 3-D tensor with \"reduce=\"sum\"\" and \"include_self=True\" the", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce_.html", "category": "pytorch docs"} {"text": "output is given as:\n self[index[i][j][k]][j][k] += src[i][j][k] # if dim == 0\n self[i][index[i][j][k]][k] += src[i][j][k] # if dim == 1\n self[i][j][index[i][j][k]] += src[i][j][k] # if dim == 2\n\nNote:\n This operation may behave nondeterministically when given tensors\n on a CUDA device. 
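As a hedged aside (not part of the original note), deterministic behavior can typically be requested globally before running such ops; operations that lack a deterministic implementation will then raise an error rather than silently varying across runs:
    >>> torch.use_deterministic_algorithms(True)   # opt in to deterministic kernels where available
    >>> # ... run scatter_reduce_ and related ops here ...
    >>> torch.use_deterministic_algorithms(False)  # restore the default behavior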
See Reproducibility for more information.\n\nNote:\n The backward pass is implemented only for \"src.shape ==\n index.shape\".\n\nWarning:\n This function is in beta and may change in the near future.\n\nParameters:\n * dim (int) -- the axis along which to index\n * **index** (*LongTensor*) -- the indices of elements to scatter\n and reduce.\n\n * **src** (*Tensor*) -- the source elements to scatter and\n reduce\n\n * **reduce** (*str*) -- the reduction operation to apply for\n non-unique indices (\"\"sum\"\", \"\"prod\"\", \"\"mean\"\", \"\"amax\"\",\n \"\"amin\"\")\n\n * **include_self** (*bool*) -- whether elements from the \"self\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce_.html", "category": "pytorch docs"} {"text": "tensor are included in the reduction\nExample:\n >>> src = torch.tensor([1., 2., 3., 4., 5., 6.])\n >>> index = torch.tensor([0, 1, 0, 1, 2, 1])\n >>> input = torch.tensor([1., 2., 3., 4.])\n >>> input.scatter_reduce(0, index, src, reduce=\"sum\")\n tensor([5., 14., 8., 4.])\n >>> input.scatter_reduce(0, index, src, reduce=\"sum\", include_self=False)\n tensor([4., 12., 5., 4.])\n >>> input2 = torch.tensor([5., 4., 3., 2.])\n >>> input2.scatter_reduce(0, index, src, reduce=\"amax\")\n tensor([5., 6., 5., 2.])\n >>> input2.scatter_reduce(0, index, src, reduce=\"amax\", include_self=False)\n tensor([3., 6., 5., 2.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.scatter_reduce_.html", "category": "pytorch docs"} {"text": "torch._foreach_sinh\ntorch._foreach_sinh(self: List[Tensor]) -> List[Tensor]\nApply \"torch.sinh()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sinh.html", "category": "pytorch docs"} {"text": "torch.negative\ntorch.negative(input, *, out=None) -> Tensor\nAlias for \"torch.neg()\"", "source": "https://pytorch.org/docs/stable/generated/torch.negative.html", "category": "pytorch docs"} {"text": "ReflectionPad3d\nclass torch.nn.ReflectionPad3d(padding)\nPads the input tensor using the reflection of the input boundary.\nFor N-dimensional padding, use \"torch.nn.functional.pad()\".\nParameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. 
If a 6-tuple,\n uses (\\text{padding_left}, \\text{padding_right},\n \\text{padding_top}, \\text{padding_bottom},\n \\text{padding_front}, \\text{padding_back})\nShape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where\n\n D_{out} = D_{in} + \\text{padding\\_front} +\n \\text{padding\\_back}\n\n H_{out} = H_{in} + \\text{padding\\_top} +\n \\text{padding\\_bottom}\n\n W_{out} = W_{in} + \\text{padding\\_left} +\n \\text{padding\\_right}\n\nExamples:\n >>> m = nn.ReflectionPad3d(1)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad3d.html", "category": "pytorch docs"} {"text": "Examples:\n >>> m = nn.ReflectionPad3d(1)\n >>> input = torch.arange(8, dtype=torch.float).reshape(1, 1, 2, 2, 2)\n >>> m(input)\n tensor([[[[[7., 6., 7., 6.],\n [5., 4., 5., 4.],\n [7., 6., 7., 6.],\n [5., 4., 5., 4.]],\n [[3., 2., 3., 2.],\n [1., 0., 1., 0.],\n [3., 2., 3., 2.],\n [1., 0., 1., 0.]],\n [[7., 6., 7., 6.],\n [5., 4., 5., 4.],\n [7., 6., 7., 6.],\n [5., 4., 5., 4.]],\n [[3., 2., 3., 2.],\n [1., 0., 1., 0.],\n [3., 2., 3., 2.],\n [1., 0., 1., 0.]]]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReflectionPad3d.html", "category": "pytorch docs"} {"text": "torch.nn.functional.grid_sample\ntorch.nn.functional.grid_sample(input, grid, mode='bilinear', padding_mode='zeros', align_corners=None)\nGiven an \"input\" and a flow-field \"grid\", computes the \"output\"\n using \"input\" values and pixel locations from \"grid\".\nCurrently, only spatial (4-D) and volumetric (5-D) \"input\" are\n supported.\nIn the spatial (4-D) case, for \"input\" with shape (N, C,\n H_\\text{in}, W_\\text{in}) and \"grid\" with shape (N, H_\\text{out},\n W_\\text{out}, 2), the output will have shape (N, C, H_\\text{out},\n W_\\text{out}).\nFor each output location \"output[n, :, h, w]\", the size-2 vector\n \"grid[n, h, w]\" specifies \"input\" pixel locations \"x\" and \"y\",\n which are used to interpolate the output value \"output[n, :, h,\n w]\". In the case of 5D inputs, \"grid[n, d, h, w]\" specifies the\n \"x\", \"y\", \"z\" pixel locations for interpolating \"output[n, :, d, h,\n w]\". \"mode\" argument specifies \"nearest\" or \"bilinear\"", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html", "category": "pytorch docs"} {"text": "interpolation method to sample the input pixels.\n\"grid\" specifies the sampling pixel locations normalized by the\n \"input\" spatial dimensions. Therefore, it should have most values\n in the range of \"[-1, 1]\". For example, values \"x = -1, y = -1\" is\n the left-top pixel of \"input\", and values \"x = 1, y = 1\" is the\n right-bottom pixel of \"input\".\nIf \"grid\" has values outside the range of \"[-1, 1]\", the\n corresponding outputs are handled as defined by \"padding_mode\".\n Options are\n * \"padding_mode=\"zeros\"\": use \"0\" for out-of-bound grid\n locations,\n\n * \"padding_mode=\"border\"\": use border values for out-of-bound\n grid locations,\n\n * \"padding_mode=\"reflection\"\": use values at locations reflected\n by the border for out-of-bound grid locations. 
For location\n far away from the border, it will keep being reflected until\n becoming in bound, e.g., (normalized) pixel location \"x =\n -3.5\" reflects by border \"-1\" and becomes \"x' = 1.5\", then\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html", "category": "pytorch docs"} {"text": "reflects by border \"1\" and becomes \"x'' = -0.5\".\nNote:\n This function is often used in conjunction with \"affine_grid()\"\n to build Spatial Transformer Networks .\n\nNote:\n When using the CUDA backend, this operation may induce\n nondeterministic behaviour in its backward pass that is not\n easily switched off. Please see the notes on Reproducibility for\n background.\n\nNote:\n NaN values in \"grid\" would be interpreted as \"-1\".\n\nParameters:\n * input (Tensor) -- input of shape (N, C, H_\\text{in},\n W_\\text{in}) (4-D case) or (N, C, D_\\text{in}, H_\\text{in},\n W_\\text{in}) (5-D case)\n * **grid** (*Tensor*) -- flow-field of shape (N, H_\\text{out},\n W_\\text{out}, 2) (4-D case) or (N, D_\\text{out}, H_\\text{out},\n W_\\text{out}, 3) (5-D case)\n\n * **mode** (*str*) -- interpolation mode to calculate output\n values \"'bilinear'\" | \"'nearest'\" | \"'bicubic'\". Default:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html", "category": "pytorch docs"} {"text": "\"'bilinear'\" Note: \"mode='bicubic'\" supports only 4-D input.\n When \"mode='bilinear'\" and the input is 5-D, the interpolation\n mode used internally will actually be trilinear. However, when\n the input is 4-D, the interpolation mode will legitimately be\n bilinear.\n * **padding_mode** (*str*) -- padding mode for outside grid\n values \"'zeros'\" | \"'border'\" | \"'reflection'\". Default:\n \"'zeros'\"\n\n * **align_corners** (*bool**, **optional*) -- Geometrically, we\n consider the pixels of the input as squares rather than\n points. If set to \"True\", the extrema (\"-1\" and \"1\") are\n considered as referring to the center points of the input's\n corner pixels. If set to \"False\", they are instead considered\n as referring to the corner points of the input's corner\n pixels, making the sampling more resolution agnostic. This\n option parallels the \"align_corners\" option in\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html", "category": "pytorch docs"} {"text": "\"interpolate()\", and so whichever option is used here should\n also be used there to resize the input image before grid\n sampling. Default: \"False\"\nReturns:\n output Tensor\nReturn type:\n output (Tensor)\nWarning:\n When \"align_corners = True\", the grid positions depend on the\n pixel size relative to the input image size, and so the locations\n sampled by \"grid_sample()\" will differ for the same input given\n at different resolutions (that is, after being upsampled or\n downsampled). The default behavior up to version 1.2.0 was\n \"align_corners = True\". Since then, the default behavior has been\n changed to \"align_corners = False\", in order to bring it in line\n with the default for \"interpolate()\".\n\nNote:\n \"mode='bicubic'\" is implemented using the cubic convolution\n algorithm with \\alpha=-0.75. The constant \\alpha might be\n different from packages to packages. For example, PIL and OpenCV\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html", "category": "pytorch docs"} {"text": "use -0.5 and -0.75 respectively. 
This algorithm may \"overshoot\"\n the range of values it's interpolating. For example, it may\n produce negative values or values greater than 255 when\n interpolating input in [0, 255]. Clamp the results with :func:\n torch.clamp to ensure they are within the valid range.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.grid_sample.html", "category": "pytorch docs"} {"text": "torch.isin\ntorch.isin(elements, test_elements, *, assume_unique=False, invert=False) -> Tensor\nTests if each element of \"elements\" is in \"test_elements\". Returns\n a boolean tensor of the same shape as \"elements\" that is True for\n elements in \"test_elements\" and False otherwise.\nNote:\n One of \"elements\" or \"test_elements\" can be a scalar, but not\n both.\n\nParameters:\n * elements (Tensor or Scalar) -- Input elements\n * **test_elements** (*Tensor** or **Scalar*) -- Values against\n which to test for each input element\n\n * **assume_unique** (*bool**, **optional*) -- If True, assumes\n both \"elements\" and \"test_elements\" contain unique elements,\n which can speed up the calculation. Default: False\n\n * **invert** (*bool**, **optional*) -- If True, inverts the\n boolean return tensor, resulting in True values for elements\n *not* in \"test_elements\". Default: False\n\nReturns:", "source": "https://pytorch.org/docs/stable/generated/torch.isin.html", "category": "pytorch docs"} {"text": "Returns:\n A boolean tensor of the same shape as \"elements\" that is True\n for elements in \"test_elements\" and False otherwise\n-[ Example ]-\n\n\n\ntorch.isin(torch.tensor([[1, 2], [3, 4]]), torch.tensor([2, 3]))\n tensor([[False, True],\n [ True, False]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.isin.html", "category": "pytorch docs"} {"text": "BackendPatternConfig\nclass torch.ao.quantization.backend_config.BackendPatternConfig(pattern=None)\nConfig object that specifies quantization behavior for a given\n operator pattern. 
For a detailed example usage, see\n \"BackendConfig\".\nadd_dtype_config(dtype_config)\n Add a set of supported data types passed as arguments to\n quantize ops in the reference model spec.\n\n Return type:\n *BackendPatternConfig*\n\nclassmethod from_dict(backend_pattern_config_dict)\n Create a \"BackendPatternConfig\" from a dictionary with the\n following items:\n\n \"pattern\": the pattern being configured \"observation_type\":\n the \"ObservationType\" that specifies how observers should be\n inserted for this pattern \"dtype_configs\": a list of\n dictionaries that represents \"DTypeConfig\" s \"root_module\": a\n \"torch.nn.Module\" that represents the root for this pattern\n \"qat_module\": a \"torch.nn.Module\" that represents the QAT\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html", "category": "pytorch docs"} {"text": "implementation for this pattern \"reference_quantized_module\":\n a \"torch.nn.Module\" that represents the reference quantized\n implementation for this pattern's root module.\n \"fused_module\": a \"torch.nn.Module\" that represents the fused\n implementation for this pattern \"fuser_method\": a function\n that specifies how to fuse the pattern for this pattern\n \"pattern_complex_format\": the pattern specified in the\n reversed nested tuple format (deprecated)\n Return type:\n *BackendPatternConfig*\n\nset_dtype_configs(dtype_configs)\n Set the supported data types passed as arguments to quantize ops\n in the reference model spec, overriding all previously\n registered data types.\n\n Return type:\n *BackendPatternConfig*\n\nset_fused_module(fused_module)\n Set the module that represents the fused implementation for this\n pattern.\n\n Return type:\n *BackendPatternConfig*\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html", "category": "pytorch docs"} {"text": "BackendPatternConfig\nset_fuser_method(fuser_method)\n Set the function that specifies how to fuse this\n BackendPatternConfig's pattern.\n\n The first argument of this function should be *is_qat*, and the\n rest of the arguments should be the items in the tuple pattern.\n The return value of this function should be the resulting fused\n module.\n\n For example, the fuser method for the pattern *(torch.nn.Linear,\n torch.nn.ReLU)* can be:\n\n def fuse_linear_relu(is_qat, linear, relu):\n return torch.ao.nn.intrinsic.LinearReLU(linear, relu)\n\n For a more complicated example, see https://gist.github.com/jer\n ryzh168/8bea7180a8ba3c279f2c9b050f2a69a6.\n\n Return type:\n *BackendPatternConfig*\n\nset_observation_type(observation_type)\n Set how observers should be inserted in the graph for this\n pattern.\n\n Observation type here refers to how observers (or quant-dequant\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html", "category": "pytorch docs"} {"text": "ops) will be placed in the graph. This is used to produce the\n desired reference patterns understood by the backend. 
Weighted\n ops such as linear and conv require different observers (or\n quantization parameters passed to quantize ops in the reference\n model) for the input and the output.\n There are two observation types:\n\n *OUTPUT_USE_DIFFERENT_OBSERVER_AS_INPUT* (default): the\n output observer instance will be different from the input.\n This is the most common observation type.\n\n *OUTPUT_SHARE_OBSERVER_WITH_INPUT*: the output observer\n instance will be the same as the input. This is useful for\n operators like *cat*.\n\n Note: This will be renamed in the near future, since we will\n soon insert QuantDeQuantStubs with observers (and fake\n quantizes) attached instead of observers themselves.\n\n Return type:\n *BackendPatternConfig*\n\nset_pattern(pattern)\n Set the pattern to configure.\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html", "category": "pytorch docs"} {"text": "Set the pattern to configure.\n The pattern can be a float module, functional operator, pytorch\n operator, or a tuple combination of the above. Tuple patterns\n are treated as sequential patterns, and currently only tuples of\n 2 or 3 elements are supported.\n\n Return type:\n *BackendPatternConfig*\n\nset_qat_module(qat_module)\n Set the module that represents the QAT implementation for this\n pattern.\n\n Return type:\n *BackendPatternConfig*\n\nset_reference_quantized_module(reference_quantized_module)\n Set the module that represents the reference quantized\n implementation for this pattern's root module.\n\n For more detail, see \"set_root_module()\".\n\n Return type:\n *BackendPatternConfig*\n\nset_root_module(root_module)\n Set the module that represents the root for this pattern.\n\n When we construct the reference quantized model during the\n convert phase, the root modules (e.g. torch.nn.Linear for\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html", "category": "pytorch docs"} {"text": "torch.ao.nn.intrinsic.LinearReLU) will be swapped to the\n corresponding reference quantized modules (e.g.\n torch.ao.nn.reference.quantized.Linear). This allows custom\n backends to specify custom reference quantized module\n implementations to match the numerics of their lowered\n operators. Since this is a one-to-one mapping, both the root\n module and the reference quantized module must be specified in\n the same BackendPatternConfig in order for the conversion to\n take place.\n Return type:\n *BackendPatternConfig*\n\nto_dict()\n Convert this \"BackendPatternConfig\" to a dictionary with the\n items described in \"from_dict()\".\n\n Return type:\n *Dict*[str, *Any*]\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.BackendPatternConfig.html", "category": "pytorch docs"} {"text": "torch.randn\ntorch.randn(size, , out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False) -> Tensor\nReturns a tensor filled with random numbers from a normal\n distribution with mean 0 and variance 1 (also called the\n standard normal distribution).\n \\text{out}_{i} \\sim \\mathcal{N}(0, 1)\n\nThe shape of the tensor is defined by the variable argument \"size\".\nParameters:\n size (int...) -- a sequence of integers defining the\n shape of the output tensor. 
Can be a variable number of\n arguments or a collection like a list or tuple.\nKeyword Arguments:\n * generator (\"torch.Generator\", optional) -- a pseudorandom\n number generator for sampling\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n", "source": "https://pytorch.org/docs/stable/generated/torch.randn.html", "category": "pytorch docs"} {"text": "(see \"torch.set_default_tensor_type()\").\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **pin_memory** (*bool**, **optional*) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n\nExample:\n >>> torch.randn(4)\n tensor([-2.1436, 0.9966, 2.3426, -0.6366])\n >>> torch.randn(2, 3)\n tensor([[ 1.5954, 2.8929, -1.0923],\n", "source": "https://pytorch.org/docs/stable/generated/torch.randn.html", "category": "pytorch docs"} {"text": "tensor([[ 1.5954, 2.8929, -1.0923],\n [ 1.1719, -0.4709, -0.1996]])", "source": "https://pytorch.org/docs/stable/generated/torch.randn.html", "category": "pytorch docs"} {"text": "torch.linalg.lu_solve\ntorch.linalg.lu_solve(LU, pivots, B, *, left=True, adjoint=False, out=None) -> Tensor\nComputes the solution of a square system of linear equations with a\n unique solution given an LU decomposition.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, this function\n computes the solution X \\in \\mathbb{K}^{n \\times k} of the linear\n system associated to A \\in \\mathbb{K}^{n \\times n}, B \\in\n \\mathbb{K}^{n \\times k}, which is defined as\n AX = B\n\nwhere A is given factorized as returned by \"lu_factor()\".\nIf \"left\"= False, this function returns the matrix X \\in\n \\mathbb{K}^{n \\times k} that solves the system\n XA = B\\mathrlap{\\qquad A \\in \\mathbb{K}^{k \\times k}, B \\in\n \\mathbb{K}^{n \\times k}.}\n\nIf \"adjoint\"= True (and \"left\"= True), given an LU\n factorization of :math:`A this function function returns the X \\in\n \\mathbb{K}^{n \\times k} that solves the system", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_solve.html", "category": "pytorch docs"} {"text": "\\mathbb{K}^{n \\times k} that solves the system\n A^{\\text{H}}X = B\\mathrlap{\\qquad A \\in \\mathbb{K}^{k \\times k},\n B \\in \\mathbb{K}^{n \\times k}.}\n\nwhere A^{\\text{H}} is the conjugate transpose when A is complex,\n and the transpose when A is real-valued. The \"left\"= False case\n is analogous.\nSupports inputs of float, double, cfloat and cdouble dtypes. 
Also\n supports batches of matrices, and if the inputs are batches of\n matrices then the output has the same batch dimensions.\nParameters:\n * LU (Tensor) -- tensor of shape (, n, n) (or (, k,\n k) if \"left\"= True) where *** is zero or more batch\n dimensions as returned by \"lu_factor()\".\n * **pivots** (*Tensor*) -- tensor of shape *(*, n)* (or *(*, k)*\n if \"left\"*= True*) where *** is zero or more batch dimensions\n as returned by \"lu_factor()\".\n\n * **B** (*Tensor*) -- right-hand side tensor of shape *(*, n,\n k)*.\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_solve.html", "category": "pytorch docs"} {"text": "k)*.\nKeyword Arguments:\n * left (bool, optional) -- whether to solve the system\n AX=B or XA = B. Default: True.\n * **adjoint** (*bool**, **optional*) -- whether to solve the\n system AX=B or A^{\\text{H}}X = B. Default: *False*.\n\n * **out** (*Tensor**, **optional*) -- output tensor. Ignored if\n *None*. Default: *None*.\n\nExamples:\n >>> A = torch.randn(3, 3)\n >>> LU, pivots = torch.linalg.lu_factor(A)\n >>> B = torch.randn(3, 2)\n >>> X = torch.linalg.lu_solve(LU, pivots, B)\n >>> torch.allclose(A @ X, B)\n True\n\n >>> B = torch.randn(3, 3, 2) # Broadcasting rules apply: A is broadcasted\n >>> X = torch.linalg.lu_solve(LU, pivots, B)\n >>> torch.allclose(A @ X, B)\n True\n\n >>> B = torch.randn(3, 5, 3)\n >>> X = torch.linalg.lu_solve(LU, pivots, B, left=False)\n >>> torch.allclose(X @ A, B)\n True\n\n >>> B = torch.randn(3, 3, 4) # Now solve for A^T\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_solve.html", "category": "pytorch docs"} {"text": "\n\n\nX = torch.linalg.lu_solve(LU, pivots, B, adjoint=True)\n >>> torch.allclose(A.mT @ X, B)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_solve.html", "category": "pytorch docs"} {"text": "torch.sgn\ntorch.sgn(input, *, out=None) -> Tensor\nThis function is an extension of torch.sign() to complex tensors.\n It computes a new tensor whose elements have the same angles as the\n corresponding elements of \"input\" and absolute values (i.e.\n magnitudes) of one for complex tensors and is equivalent to\n torch.sign() for non-complex tensors.\n \\text{out}_{i} = \\begin{cases} 0 &\n |\\text{{input}}_i| == 0 \\\\\n \\frac{{\\text{{input}}_i}}{|{\\text{{input}}_i}|} &\n \\text{otherwise} \\end{cases}\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> t = torch.tensor([3+4j, 7-24j, 0, 1+2j])\n >>> t.sgn()\n tensor([0.6000+0.8000j, 0.2800-0.9600j, 0.0000+0.0000j, 0.4472+0.8944j])\n", "source": "https://pytorch.org/docs/stable/generated/torch.sgn.html", "category": "pytorch docs"} {"text": "torch.matrix_power\ntorch.matrix_power(input, n, *, out=None) -> Tensor\nAlias for \"torch.linalg.matrix_power()\"", "source": "https://pytorch.org/docs/stable/generated/torch.matrix_power.html", "category": "pytorch docs"} {"text": "torch.Tensor.storage_type\nTensor.storage_type() -> type\nReturns the type of the underlying storage.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.storage_type.html", "category": "pytorch docs"} {"text": "torch.cuda.OutOfMemoryError\nexception torch.cuda.OutOfMemoryError\nException raised when CUDA is out of memory", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.OutOfMemoryError.html", "category": "pytorch docs"} {"text": "torch.as_tensor\ntorch.as_tensor(data, dtype=None, 
device=None) -> Tensor\nConverts \"data\" into a tensor, sharing data and preserving autograd\n history if possible.\nIf \"data\" is already a tensor with the requested dtype and device\n then \"data\" itself is returned, but if \"data\" is a tensor with a\n different dtype or device then it's copied as if using\n data.to(dtype=dtype, device=device).\nIf \"data\" is a NumPy array (an ndarray) with the same dtype and\n device then a tensor is constructed using \"torch.from_numpy()\".\nSee also:\n \"torch.tensor()\" never shares its data and creates a new \"leaf\n tensor\" (see Autograd mechanics).\n\nParameters:\n * data (array_like) -- Initial data for the tensor. Can be\n a list, tuple, NumPy \"ndarray\", scalar, and other types.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", infers data type from\n \"data\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.as_tensor.html", "category": "pytorch docs"} {"text": "\"data\".\n * **device** (\"torch.device\", optional) -- the device of the\n constructed tensor. If None and data is a tensor then the\n device of data is used. If None and data is not a tensor then\n the result tensor is constructed on the CPU.\n\nExample:\n >>> a = numpy.array([1, 2, 3])\n >>> t = torch.as_tensor(a)\n >>> t\n tensor([ 1, 2, 3])\n >>> t[0] = -1\n >>> a\n array([-1, 2, 3])\n\n >>> a = numpy.array([1, 2, 3])\n >>> t = torch.as_tensor(a, device=torch.device('cuda'))\n >>> t\n tensor([ 1, 2, 3])\n >>> t[0] = -1\n >>> a\n array([1, 2, 3])\n", "source": "https://pytorch.org/docs/stable/generated/torch.as_tensor.html", "category": "pytorch docs"} {"text": "torch.nn.functional.softplus\ntorch.nn.functional.softplus(input, beta=1, threshold=20) -> Tensor\nApplies element-wise, the function \\text{Softplus}(x) =\n \\frac{1}{\\beta} * \\log(1 + \\exp(\\beta * x)).\nFor numerical stability the implementation reverts to the linear\n function when input \\times \\beta > threshold.\nSee \"Softplus\" for more details.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.softplus.html", "category": "pytorch docs"} {"text": "torch.tile\ntorch.tile(input, dims) -> Tensor\nConstructs a tensor by repeating the elements of \"input\". The\n \"dims\" argument specifies the number of repetitions in each\n dimension.\nIf \"dims\" specifies fewer dimensions than \"input\" has, then ones\n are prepended to \"dims\" until all dimensions are specified. For\n example, if \"input\" has shape (8, 6, 4, 2) and \"dims\" is (2, 2),\n then \"dims\" is treated as (1, 1, 2, 2).\nAnalogously, if \"input\" has fewer dimensions than \"dims\" specifies,\n then \"input\" is treated as if it were unsqueezed at dimension zero\n until it has as many dimensions as \"dims\" specifies. 
For example,\n if \"input\" has shape (4, 2) and \"dims\" is (3, 3, 2, 2), then\n \"input\" is treated as if it had the shape (1, 1, 4, 2).\nNote:\n This function is similar to NumPy's tile function.\n\nParameters:\n * input (Tensor) -- the tensor whose elements to repeat.\n * **dims** (*tuple*) -- the number of repetitions per dimension.\n\nExample:", "source": "https://pytorch.org/docs/stable/generated/torch.tile.html", "category": "pytorch docs"} {"text": "Example:\n >>> x = torch.tensor([1, 2, 3])\n >>> x.tile((2,))\n tensor([1, 2, 3, 1, 2, 3])\n >>> y = torch.tensor([[1, 2], [3, 4]])\n >>> torch.tile(y, (2, 2))\n tensor([[1, 2, 1, 2],\n [3, 4, 3, 4],\n [1, 2, 1, 2],\n [3, 4, 3, 4]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.tile.html", "category": "pytorch docs"} {"text": "Conv3d\nclass torch.nn.Conv3d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\nApplies a 3D convolution over an input signal composed of several\n input planes.\nIn the simplest case, the output value of the layer with input size\n (N, C_{in}, D, H, W) and output (N, C_{out}, D_{out}, H_{out},\n W_{out}) can be precisely described as:\n out(N_i, C_{out_j}) = bias(C_{out_j}) +\n \\sum_{k = 0}^{C_{in} - 1} weight(C_{out_j}, k) \\star input(N_i,\n k)\n\nwhere \\star is the valid 3D cross-correlation operator\nThis module supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n\n\n\"stride\" controls the stride for the cross-correlation.\n\n\n\"padding\" controls the amount of padding applied to the input. It\n can be either a string {'valid', 'same'} or a tuple of ints\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"} {"text": "giving the amount of implicit padding applied on both sides.\n\n\n\"dilation\" controls the spacing between the kernel points; also\n known as the \u00c3\u00a0 trous algorithm. It is harder to describe, but\n this link has a nice visualization of what \"dilation\" does.\n\n\n\"groups\" controls the connections between inputs and outputs.\n \"in_channels\" and \"out_channels\" must both be divisible by\n \"groups\". 
For example,\n* At groups=1, all inputs are convolved to all outputs.\n\n* At groups=2, the operation becomes equivalent to having two\n conv layers side by side, each seeing half the input\n channels and producing half the output channels, and both\n subsequently concatenated.\n\n* At groups= \"in_channels\", each input channel is convolved\n with its own set of filters (of size\n \\frac{\\text{out\\_channels}}{\\text{in\\_channels}}).\n\n\n\nThe parameters \"kernel_size\", \"stride\", \"padding\", \"dilation\" can\n either be:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"} {"text": "either be:\n * a single \"int\" -- in which case the same value is used for the\n depth, height and width dimension\n\n * a \"tuple\" of three ints -- in which case, the first *int* is\n used for the depth dimension, the second *int* for the height\n dimension and the third *int* for the width dimension\n\nNote:\n When *groups == in_channels* and *out_channels == K *\n in_channels*, where *K* is a positive integer, this operation is\n also known as a \"depthwise convolution\".In other words, for an\n input of size (N, C_{in}, L_{in}), a depthwise convolution with a\n depthwise multiplier *K* can be performed with the arguments\n (C_\\text{in}=C_\\text{in}, C_\\text{out}=C_\\text{in} \\times\n \\text{K}, ..., \\text{groups}=C_\\text{in}).\n\nNote:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"} {"text": "can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". See Reproducibility for more information.\nNote:\n \"padding='valid'\" is the same as no padding. \"padding='same'\"\n pads the input so the output has the shape as the input. However,\n this mode doesn't support any stride values other than 1.\n\nNote:\n This module supports complex data types i.e. \"complex32,\n complex64, complex128\".\n\nParameters:\n * in_channels (int) -- Number of channels in the input\n image\n * **out_channels** (*int*) -- Number of channels produced by the\n convolution\n\n * **kernel_size** (*int** or **tuple*) -- Size of the convolving\n kernel\n\n * **stride** (*int** or **tuple**, **optional*) -- Stride of the\n convolution. Default: 1\n\n * **padding** (*int**, **tuple** or **str**, **optional*) --\n Padding added to all six sides of the input. Default: 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"} {"text": "\n\npadding_mode (str, optional) -- \"'zeros'\",\n \"'reflect'\", \"'replicate'\" or \"'circular'\". Default: \"'zeros'\"\n\n\ndilation (int or tuple, optional) -- Spacing\n between kernel elements. Default: 1\n\n\ngroups (int, optional) -- Number of blocked\n connections from input channels to output channels. Default: 1\n\n\nbias (bool, optional) -- If \"True\", adds a learnable\n bias to the output. 
Default: \"True\"\n\n\n\n\nShape:\n * Input: (N, C_{in}, D_{in}, H_{in}, W_{in}) or (C_{in}, D_{in},\n H_{in}, W_{in})\n * Output: (N, C_{out}, D_{out}, H_{out}, W_{out}) or (C_{out},\n D_{out}, H_{out}, W_{out}), where\n\n D_{out} = \\left\\lfloor\\frac{D_{in} + 2 \\times\n \\text{padding}[0] - \\text{dilation}[0] \\times\n (\\text{kernel\\_size}[0] - 1) - 1}{\\text{stride}[0]} +\n 1\\right\\rfloor\n\n H_{out} = \\left\\lfloor\\frac{H_{in} + 2 \\times\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"} {"text": "\\text{padding}[1] - \\text{dilation}[1] \\times\n (\\text{kernel_size}[1] - 1) - 1}{\\text{stride}[1]} +\n 1\\right\\rfloor\n W_{out} = \\left\\lfloor\\frac{W_{in} + 2 \\times\n \\text{padding}[2] - \\text{dilation}[2] \\times\n (\\text{kernel\\_size}[2] - 1) - 1}{\\text{stride}[2]} +\n 1\\right\\rfloor\n\nVariables:\n * weight (Tensor) -- the learnable weights of the module\n of shape (\\text{out_channels},\n \\frac{\\text{in_channels}}{\\text{groups}},\n \\text{kernel_size[0]}, \\text{kernel_size[1]},\n \\text{kernel_size[2]}). The values of these weights are\n sampled from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{groups}{C_\\text{in} *\n \\prod_{i=0}^{2}\\text{kernel_size}[i]}\n * **bias** (*Tensor*) -- the learnable bias of the module of\n shape (out_channels). If \"bias\" is \"True\", then the values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"} {"text": "\\sqrt{k}) where k = \\frac{groups}{C_\\text{in} *\n \\prod_{i=0}^{2}\\text{kernel_size}[i]}\nExamples:\n >>> # With square kernels and equal stride\n >>> m = nn.Conv3d(16, 33, 3, stride=2)\n >>> # non-square kernels and unequal stride and with padding\n >>> m = nn.Conv3d(16, 33, (3, 5, 2), stride=(2, 1, 1), padding=(4, 2, 0))\n >>> input = torch.randn(20, 16, 10, 50, 100)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Conv3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.cuda\nTensor.cuda(device=None, non_blocking=False, memory_format=torch.preserve_format) -> Tensor\nReturns a copy of this object in CUDA memory.\nIf this object is already in CUDA memory and on the correct device,\n then no copy is performed and the original object is returned.\nParameters:\n * device (\"torch.device\") -- The destination GPU device.\n Defaults to the current CUDA device.\n * **non_blocking** (*bool*) -- If \"True\" and the source is in\n pinned memory, the copy will be asynchronous with respect to\n the host. Otherwise, the argument has no effect. Default:\n \"False\".\n\n * **memory_format** (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. 
Default:\n \"torch.preserve_format\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cuda.html", "category": "pytorch docs"} {"text": "torch.Tensor.exponential_\nTensor.exponential_(lambd=1, *, generator=None) -> Tensor\nFills \"self\" tensor with elements drawn from the exponential\n distribution:\n f(x) = \\lambda e^{-\\lambda x}\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.exponential_.html", "category": "pytorch docs"} {"text": "torch.randn_like\ntorch.randn_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\nReturns a tensor with the same size as \"input\" that is filled with\n random numbers from a normal distribution with mean 0 and variance\n 1. \"torch.randn_like(input)\" is equivalent to\n \"torch.randn(input.size(), dtype=input.dtype, layout=input.layout,\n device=input.device)\".\nParameters:\n input (Tensor) -- the size of \"input\" will determine size\n of the output tensor.\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned tensor. Default: if \"None\", defaults to the layout of\n \"input\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n", "source": "https://pytorch.org/docs/stable/generated/torch.randn_like.html", "category": "pytorch docs"} {"text": "returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **memory_format** (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.randn_like.html", "category": "pytorch docs"} {"text": "torch.nn.functional.poisson_nll_loss\ntorch.nn.functional.poisson_nll_loss(input, target, log_input=True, full=False, size_average=None, eps=1e-08, reduce=None, reduction='mean')\nPoisson negative log likelihood loss.\nSee \"PoissonNLLLoss\" for details.\nParameters:\n * input (Tensor) -- expectation of underlying Poisson\n distribution.\n * **target** (*Tensor*) -- random sample target \\sim\n \\text{Poisson}(input).\n\n * **log_input** (*bool*) -- if \"True\" the loss is computed as\n \\exp(\\text{input}) - \\text{target} * \\text{input}, if \"False\"\n then loss is \\text{input} - \\text{target} *\n \\log(\\text{input}+\\text{eps}). Default: \"True\"\n\n * **full** (*bool*) -- whether to compute full loss, i. e. to\n add the Stirling approximation term. Default: \"False\"\n \\text{target} * \\log(\\text{target}) - \\text{target} + 0.5 *\n \\log(2 * \\pi * \\text{target}).\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.poisson_nll_loss.html", "category": "pytorch docs"} {"text": "\\log(2 * \\pi * \\text{target}).\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n multiple elements per sample. If the field \"size_average\" is\n set to \"False\", the losses are instead summed for each\n minibatch. Ignored when reduce is \"False\". 
Default: \"True\"\n\n * **eps** (*float**, **optional*) -- Small value to avoid\n evaluation of \\log(0) when \"log_input\"=\"False\". Default: 1e-8\n\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.poisson_nll_loss.html", "category": "pytorch docs"} {"text": "to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.poisson_nll_loss.html", "category": "pytorch docs"} {"text": "torch.foreach_log1p\ntorch.foreach_log1p(self: List[Tensor]) -> None\nApply \"torch.log1p()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log1p_.html", "category": "pytorch docs"} {"text": "torch.max\ntorch.max(input) -> Tensor\nReturns the maximum value of all elements in the \"input\" tensor.\nWarning:\n This function produces deterministic (sub)gradients unlike\n \"max(dim=0)\"\n\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> a = torch.randn(1, 3)\n >>> a\n tensor([[ 0.6763, 0.7445, -2.2369]])\n >>> torch.max(a)\n tensor(0.7445)\n\ntorch.max(input, dim, keepdim=False, *, out=None)\nReturns a namedtuple \"(values, indices)\" where \"values\" is the\n maximum value of each row of the \"input\" tensor in the given\n dimension \"dim\". And \"indices\" is the index location of each\n maximum value found (argmax).\nIf \"keepdim\" is \"True\", the output tensors are of the same size as\n \"input\" except in the dimension \"dim\" where they are of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensors having 1 fewer dimension than \"input\".\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.max.html", "category": "pytorch docs"} {"text": "Note:\n If there are multiple maximal values in a reduced row then the\n indices of the first maximal value are returned.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to reduce.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not. 
Default: \"False\".\n\nKeyword Arguments:\n out (tuple, optional) -- the result tuple of two\n output tensors (max, max_indices)\nExample:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[-1.2360, -0.2942, -0.1222, 0.8475],\n [ 1.1949, -1.1127, -2.2379, -0.6702],\n [ 1.5717, -0.9207, 0.1297, -1.8768],\n [-0.6172, 1.0036, -0.6060, -0.2432]])\n >>> torch.max(a, 1)\n torch.return_types.max(values=tensor([0.8475, 1.1949, 1.5717, 1.0036]), indices=tensor([3, 0, 0, 1]))\n\ntorch.max(input, other, *, out=None) -> Tensor\nSee \"torch.maximum()\".", "source": "https://pytorch.org/docs/stable/generated/torch.max.html", "category": "pytorch docs"} {"text": "torch.Tensor.storage\nTensor.storage() -> torch.TypedStorage\nReturns the underlying \"TypedStorage\".\nWarning:\n \"TypedStorage\" is deprecated. It will be removed in the future,\n and \"UntypedStorage\" will be the only storage class. To access\n the \"UntypedStorage\" directly, use \"Tensor.untyped_storage()\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.storage.html", "category": "pytorch docs"} {"text": "torch.Tensor.cross\nTensor.cross(other, dim=None) -> Tensor\nSee \"torch.cross()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cross.html", "category": "pytorch docs"} {"text": "torch.corrcoef\ntorch.corrcoef(input) -> Tensor\nEstimates the Pearson product-moment correlation coefficient matrix\n of the variables given by the \"input\" matrix, where rows are the\n variables and columns are the observations.\nNote:\n The correlation coefficient matrix R is computed using the\n covariance matrix C as given by R_{ij} = \\frac{ C_{ij} } { \\sqrt{\n C_{ii} * C_{jj} } }\n\nNote:\n Due to floating point rounding, the resulting array may not be\n Hermitian and its diagonal elements may not be 1. The real and\n imaginary values are clipped to the interval [-1, 1] in an\n attempt to improve this situation.\n\nParameters:\n input (Tensor) -- A 2D matrix containing multiple\n variables and observations, or a Scalar or 1D vector\n representing a single variable.\nReturns:\n (Tensor) The correlation coefficient matrix of the variables.\nSee also: \"torch.cov()\" covariance matrix.\nExample:", "source": "https://pytorch.org/docs/stable/generated/torch.corrcoef.html", "category": "pytorch docs"} {"text": "Example:\n >>> x = torch.tensor([[0, 1, 2], [2, 1, 0]])\n >>> torch.corrcoef(x)\n tensor([[ 1., -1.],\n [-1., 1.]])\n >>> x = torch.randn(2, 4)\n >>> x\n tensor([[-0.2678, -0.0908, -0.3766, 0.2780],\n [-0.5812, 0.1535, 0.2387, 0.2350]])\n >>> torch.corrcoef(x)\n tensor([[1.0000, 0.3582],\n [0.3582, 1.0000]])\n >>> torch.corrcoef(x[0])\n tensor(1.)\n", "source": "https://pytorch.org/docs/stable/generated/torch.corrcoef.html", "category": "pytorch docs"} {"text": "torch.bitwise_left_shift\ntorch.bitwise_left_shift(input, other, *, out=None) -> Tensor\nComputes the left arithmetic shift of \"input\" by \"other\" bits. The\n input tensor must be of integral type. 
This operator supports\n broadcasting to a common shape and type promotion.\nThe operation applied is:\n \\text{out}_i = \\text{input}_i << \\text{other}_i\n\nParameters:\n * input (Tensor or Scalar) -- the first input tensor\n * **other** (*Tensor** or **Scalar*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.bitwise_left_shift(torch.tensor([-1, -2, 3], dtype=torch.int8), torch.tensor([1, 0, 3], dtype=torch.int8))\n tensor([-2, -2, 24], dtype=torch.int8)\n", "source": "https://pytorch.org/docs/stable/generated/torch.bitwise_left_shift.html", "category": "pytorch docs"} {"text": "torch.heaviside\ntorch.heaviside(input, values, *, out=None) -> Tensor\nComputes the Heaviside step function for each element in \"input\".\n The Heaviside step function is defined as:\n \\text{{heaviside}}(input, values) = \\begin{cases} 0, &\n \\text{if input < 0}\\\\ values, & \\text{if input == 0}\\\\\n 1, & \\text{if input > 0} \\end{cases}\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **values** (*Tensor*) -- The values to use where \"input\" is\n zero.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> input = torch.tensor([-1.5, 0, 2.0])\n >>> values = torch.tensor([0.5])\n >>> torch.heaviside(input, values)\n tensor([0.0000, 0.5000, 1.0000])\n >>> values = torch.tensor([1.2, -2.0, 3.5])\n >>> torch.heaviside(input, values)\n tensor([0., -2., 1.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.heaviside.html", "category": "pytorch docs"} {"text": "float16_dynamic_qconfig\ntorch.quantization.qconfig.float16_dynamic_qconfig\nalias of QConfig(activation=functools.partial(,\n dtype=torch.float16, is_dynamic=True){},\n weight=functools.partial(,\n dtype=torch.float16){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.float16_dynamic_qconfig.html", "category": "pytorch docs"} {"text": "torch.cuda.get_allocator_backend\ntorch.cuda.get_allocator_backend()\nReturns a string describing the active allocator backend as set by\n \"PYTORCH_CUDA_ALLOC_CONF\". Currently available backends are\n \"native\" (PyTorch's native caching allocator) and\n cudaMallocAsync` (CUDA's built-in asynchronous allocator).\nNote:\n See Memory management for details on choosing the allocator\n backend.\n\nReturn type:\n str", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_allocator_backend.html", "category": "pytorch docs"} {"text": "torch.Tensor.cholesky_solve\nTensor.cholesky_solve(input2, upper=False) -> Tensor\nSee \"torch.cholesky_solve()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cholesky_solve.html", "category": "pytorch docs"} {"text": "torch.nn.functional.upsample\ntorch.nn.functional.upsample(input, size=None, scale_factor=None, mode='nearest', align_corners=None)\nUpsamples the input to either the given \"size\" or the given\n \"scale_factor\"\nWarning:\n This function is deprecated in favor of\n \"torch.nn.functional.interpolate()\". This is equivalent with\n \"nn.functional.interpolate(...)\".\n\nNote:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n information.\n\nThe algorithm used for upsampling is determined by \"mode\".\nCurrently temporal, spatial and volumetric upsampling are\n supported, i.e. 
expected inputs are 3-D, 4-D or 5-D in shape.\nThe input dimensions are interpreted in the form: mini-batch x\n channels x [optional depth] x [optional height] x width.\nThe modes available for upsampling are: nearest, linear (3D-\n only), bilinear, bicubic (4D-only), trilinear (5D-only)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample.html", "category": "pytorch docs"} {"text": "Parameters:\n * input (Tensor) -- the input tensor\n * **size** (*int** or **Tuple**[**int**] or **Tuple**[**int**,\n **int**] or **Tuple**[**int**, **int**, **int**]*) -- output\n spatial size.\n\n * **scale_factor** (*float** or **Tuple**[**float**]*) --\n multiplier for spatial size. Has to match input size if it is\n a tuple.\n\n * **mode** (*str*) -- algorithm used for upsampling: \"'nearest'\"\n | \"'linear'\" | \"'bilinear'\" | \"'bicubic'\" | \"'trilinear'\".\n Default: \"'nearest'\"\n\n * **align_corners** (*bool**, **optional*) -- Geometrically, we\n consider the pixels of the input and output as squares rather\n than points. If set to \"True\", the input and output tensors\n are aligned by the center points of their corner pixels,\n preserving the values at the corner pixels. If set to \"False\",\n the input and output tensors are aligned by the corner points\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample.html", "category": "pytorch docs"} {"text": "of their corner pixels, and the interpolation uses edge value\n padding for out-of-boundary values, making this operation\n independent of input size when \"scale_factor\" is kept the\n same. This only has an effect when \"mode\" is \"'linear'\",\n \"'bilinear'\", \"'bicubic'\" or \"'trilinear'\". Default: \"False\"\nNote:\n With \"mode='bicubic'\", it's possible to cause overshoot, in other\n words it can produce negative values or values greater than 255\n for images. Explicitly call \"result.clamp(min=0, max=255)\" if you\n want to reduce the overshoot when displaying the image.\n\nWarning:\n With \"align_corners = True\", the linearly interpolating modes\n (*linear*, *bilinear*, and *trilinear*) don't proportionally\n align the output and input pixels, and thus the output values can\n depend on the input size. This was the default behavior for these\n modes up to version 0.3.1. Since then, the default behavior is\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample.html", "category": "pytorch docs"} {"text": "\"align_corners = False\". 
See \"Upsample\" for concrete examples on\n how this affects the outputs.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample.html", "category": "pytorch docs"} {"text": "torch.nn.functional.relu_\ntorch.nn.functional.relu_(input) -> Tensor\nIn-place version of \"relu()\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.relu_.html", "category": "pytorch docs"} {"text": "torch.Tensor.storage_offset\nTensor.storage_offset() -> int\nReturns \"self\" tensor's offset in the underlying storage in terms\n of number of storage elements (not bytes).\nExample:\n >>> x = torch.tensor([1, 2, 3, 4, 5])\n >>> x.storage_offset()\n 0\n >>> x[3:].storage_offset()\n 3\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.storage_offset.html", "category": "pytorch docs"} {"text": "Hardswish\nclass torch.ao.nn.quantized.Hardswish(scale, zero_point)\nThis is the quantized version of \"Hardswish\".\nParameters:\n * scale -- quantization scale of the output tensor\n * **zero_point** -- quantization zero point of the output tensor\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Hardswish.html", "category": "pytorch docs"} {"text": "torch.linalg.vecdot\ntorch.linalg.vecdot(x, y, *, dim=- 1, out=None) -> Tensor\nComputes the dot product of two batches of vectors along a\n dimension.\nIn symbols, this function computes\n \\sum_{i=1}^n \\overline{x_i}y_i.\n\nover the dimension \"dim\" where \\overline{x_i} denotes the conjugate\n for complex vectors, and it is the identity for real vectors.\nSupports input of half, bfloat16, float, double, cfloat, cdouble\n and integral dtypes. It also supports broadcasting.\nParameters:\n * x (Tensor) -- first batch of vectors of shape (, n)*.\n * **y** (*Tensor*) -- second batch of vectors of shape *(*, n)*.\n\nKeyword Arguments:\n * dim (int) -- Dimension along which to compute the dot\n product. Default: -1.\n * **out** (*Tensor**, **optional*) -- output tensor. Ignored if\n *None*. Default: *None*.\n\nExamples:\n >>> v1 = torch.randn(3, 2)\n >>> v2 = torch.randn(3, 2)\n >>> linalg.vecdot(v1, v2)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.vecdot.html", "category": "pytorch docs"} {"text": "\n\n\nlinalg.vecdot(v1, v2)\n tensor([ 0.3223, 0.2815, -0.1944])\n >>> torch.vdot(v1[0], v2[0])\n tensor(0.3223)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.vecdot.html", "category": "pytorch docs"} {"text": "torch.Tensor.stride\nTensor.stride(dim) -> tuple or int\nReturns the stride of \"self\" tensor.\nStride is the jump necessary to go from one element to the next one\n in the specified dimension \"dim\". A tuple of all strides is\n returned when no argument is passed in. 
Otherwise, an integer value\n is returned as the stride in the particular dimension \"dim\".\nParameters:\n dim (int, optional) -- the desired dimension in which\n stride is required\nExample:\n >>> x = torch.tensor([[1, 2, 3, 4, 5], [6, 7, 8, 9, 10]])\n >>> x.stride()\n (5, 1)\n >>> x.stride(0)\n 5\n >>> x.stride(-1)\n 1\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.stride.html", "category": "pytorch docs"} {"text": "torch.Tensor.bitwise_left_shift_\nTensor.bitwise_left_shift_(other) -> Tensor\nIn-place version of \"bitwise_left_shift()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_left_shift_.html", "category": "pytorch docs"} {"text": "torch.Tensor.logsumexp\nTensor.logsumexp(dim, keepdim=False) -> Tensor\nSee \"torch.logsumexp()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logsumexp.html", "category": "pytorch docs"} {"text": "ReplicationPad1d\nclass torch.nn.ReplicationPad1d(padding)\nPads the input tensor using replication of the input boundary.\nFor N-dimensional padding, use \"torch.nn.functional.pad()\".\nParameters:\n padding (int, tuple) -- the size of the padding. If is\n int, uses the same padding in all boundaries. If a 2-tuple,\n uses (\\text{padding_left}, \\text{padding_right})\nShape:\n * Input: (C, W_{in}) or (N, C, W_{in}).\n * Output: (C, W_{out}) or (N, C, W_{out}), where\n\n W_{out} = W_{in} + \\text{padding\\_left} +\n \\text{padding\\_right}\n\nExamples:\n >>> m = nn.ReplicationPad1d(2)\n >>> input = torch.arange(8, dtype=torch.float).reshape(1, 2, 4)\n >>> input\n tensor([[[0., 1., 2., 3.],\n [4., 5., 6., 7.]]])\n >>> m(input)\n tensor([[[0., 0., 0., 1., 2., 3., 3., 3.],\n [4., 4., 4., 5., 6., 7., 7., 7.]]])\n >>> # using different paddings for different sides\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad1d.html", "category": "pytorch docs"} {"text": "\n\n\nm = nn.ReplicationPad1d((3, 1))\n >>> m(input)\n tensor([[[0., 0., 0., 0., 1., 2., 3., 3.],\n [4., 4., 4., 4., 5., 6., 7., 7.]]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ReplicationPad1d.html", "category": "pytorch docs"} {"text": "torch.linalg.qr\ntorch.linalg.qr(A, mode='reduced', *, out=None)\nComputes the QR decomposition of a matrix.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the full QR\n decomposition of a matrix A \\in \\mathbb{K}^{m \\times n} is\n defined as\n A = QR\\mathrlap{\\qquad Q \\in \\mathbb{K}^{m \\times m}, R \\in\n \\mathbb{K}^{m \\times n}}\n\nwhere Q is orthogonal in the real case and unitary in the complex\n case, and R is upper triangular with real diagonal (even in the\n complex case).\nWhen m > n (tall matrix), as R is upper triangular, its last m\n - n rows are zero. In this case, we can drop the last m - n\n columns of Q to form the reduced QR decomposition:\n A = QR\\mathrlap{\\qquad Q \\in \\mathbb{K}^{m \\times n}, R \\in\n \\mathbb{K}^{n \\times n}}\n\nThe reduced QR decomposition agrees with the full QR decomposition\n when n >= m (wide matrix).\nSupports input of float, double, cfloat and cdouble dtypes. Also", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.qr.html", "category": "pytorch docs"} {"text": "supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nThe parameter \"mode\" chooses between the full and reduced QR\n decomposition. 
If \"A\" has shape (, m, n), denoting k = min(m,\n n)*\n\n\n\"mode\"= 'reduced' (default): Returns (Q, R) of shapes (, m,\n k), (, k, n) respectively. It is always differentiable.\n\n\n\"mode\"= 'complete': Returns (Q, R) of shapes (, m, m),\n (, m, n) respectively. It is differentiable for m <= n.\n\n\n\"mode\"= 'r': Computes only the reduced R. Returns (Q, R)\n with Q empty and R of shape (, k, n)*. It is never\n differentiable.\n\n\nDifferences with numpy.linalg.qr:\n\n\n\"mode\"= 'raw' is not implemented.\n\n\nUnlike numpy.linalg.qr, this function always returns a tuple of\n two tensors. When \"mode\"= 'r', the Q tensor is an empty\n tensor.\n\n\nWarning:\n The elements in the diagonal of *R* are not necessarily positive.\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.qr.html", "category": "pytorch docs"} {"text": "As such, the returned QR decomposition is only unique up to the\n sign of the diagonal of R. Therefore, different platforms, like\n NumPy, or inputs on different devices, may produce different\n valid decompositions.\nWarning:\n The QR decomposition is only well-defined if the first *k =\n min(m, n)* columns of every matrix in \"A\" are linearly\n independent. If this condition is not met, no error will be\n thrown, but the QR produced may be incorrect and its autodiff may\n fail or produce incorrect results.\n\nParameters:\n * A (Tensor) -- tensor of shape (, m, n)* where *** is\n zero or more batch dimensions.\n * **mode** (*str**, **optional*) -- one of *'reduced'*,\n *'complete'*, *'r'*. Controls the shape of the returned\n tensors. Default: *'reduced'*.\n\nKeyword Arguments:\n out (tuple, optional) -- output tuple of two tensors.\n Ignored if None. Default: None.\nReturns:\n A named tuple (Q, R).", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.qr.html", "category": "pytorch docs"} {"text": "Returns:\n A named tuple (Q, R).\nExamples:\n >>> A = torch.tensor([[12., -51, 4], [6, 167, -68], [-4, 24, -41]])\n >>> Q, R = torch.linalg.qr(A)\n >>> Q\n tensor([[-0.8571, 0.3943, 0.3314],\n [-0.4286, -0.9029, -0.0343],\n [ 0.2857, -0.1714, 0.9429]])\n >>> R\n tensor([[ -14.0000, -21.0000, 14.0000],\n [ 0.0000, -175.0000, 70.0000],\n [ 0.0000, 0.0000, -35.0000]])\n >>> (Q @ R).round()\n tensor([[ 12., -51., 4.],\n [ 6., 167., -68.],\n [ -4., 24., -41.]])\n >>> (Q.T @ Q).round()\n tensor([[ 1., 0., 0.],\n [ 0., 1., -0.],\n [ 0., -0., 1.]])\n >>> Q2, R2 = torch.linalg.qr(A, mode='r')\n >>> Q2\n tensor([])\n >>> torch.equal(R, R2)\n True\n >>> A = torch.randn(3, 4, 5)\n >>> Q, R = torch.linalg.qr(A, mode='complete')\n >>> torch.dist(Q @ R, A)\n tensor(1.6099e-06)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.qr.html", "category": "pytorch docs"} {"text": "tensor(1.6099e-06)\n >>> torch.dist(Q.mT @ Q, torch.eye(4))\n tensor(6.2158e-07)", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.qr.html", "category": "pytorch docs"} {"text": "torch.cuda.get_device_capability\ntorch.cuda.get_device_capability(device=None)\nGets the cuda capability of a device.\nParameters:\n device (torch.device or int, optional) -- device\n for which to return the device capability. This function is a\n no-op if this argument is a negative integer. 
It uses the\n current device, given by \"current_device()\", if \"device\" is\n \"None\" (default).\nReturns:\n the major and minor cuda capability of the device\nReturn type:\n tuple(int, int)", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_device_capability.html", "category": "pytorch docs"} {"text": "torch.Tensor.fliplr\nTensor.fliplr() -> Tensor\nSee \"torch.fliplr()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.fliplr.html", "category": "pytorch docs"} {"text": "torch.Tensor.addmm_\nTensor.addmm_(mat1, mat2, *, beta=1, alpha=1) -> Tensor\nIn-place version of \"addmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addmm_.html", "category": "pytorch docs"} {"text": "torch.Tensor.logical_or_\nTensor.logical_or_() -> Tensor\nIn-place version of \"logical_or()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_or_.html", "category": "pytorch docs"} {"text": "torch.cuda.get_arch_list\ntorch.cuda.get_arch_list()\nReturns list CUDA architectures this library was compiled for.\nReturn type:\n List[str]", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.get_arch_list.html", "category": "pytorch docs"} {"text": "torch.Tensor.bitwise_right_shift_\nTensor.bitwise_right_shift_(other) -> Tensor\nIn-place version of \"bitwise_right_shift()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_right_shift_.html", "category": "pytorch docs"} {"text": "torch._foreach_lgamma\ntorch._foreach_lgamma(self: List[Tensor]) -> List[Tensor]\nApply \"torch.lgamma()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_lgamma.html", "category": "pytorch docs"} {"text": "torch.is_complex\ntorch.is_complex(input)\nReturns True if the data type of \"input\" is a complex data type\n i.e., one of \"torch.complex64\", and \"torch.complex128\".\nParameters:\n input (Tensor) -- the input tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.is_complex.html", "category": "pytorch docs"} {"text": "torch.foreach_erfc\ntorch.foreach_erfc(self: List[Tensor]) -> None\nApply \"torch.erfc()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_erfc_.html", "category": "pytorch docs"} {"text": "CosineAnnealingLR\nclass torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max, eta_min=0, last_epoch=- 1, verbose=False)\nSet the learning rate of each parameter group using a cosine\n annealing schedule, where \\eta_{max} is set to the initial lr and\n T_{cur} is the number of epochs since the last restart in SGDR:\n \\begin{aligned} \\eta_t & = \\eta_{min} +\n \\frac{1}{2}(\\eta_{max} - \\eta_{min})\\left(1 +\n \\cos\\left(\\frac{T_{cur}}{T_{max}}\\pi\\right)\\right), &\n T_{cur} \\neq (2k+1)T_{max}; \\\\ \\eta_{t+1} & = \\eta_{t} +\n \\frac{1}{2}(\\eta_{max} - \\eta_{min}) \\left(1 -\n \\cos\\left(\\frac{1}{T_{max}}\\pi\\right)\\right), & T_{cur} =\n (2k+1)T_{max}. \\end{aligned}\n\nWhen last_epoch=-1, sets initial lr as lr. 
Notice that because the\n schedule is defined recursively, the learning rate can be\n simultaneously modified outside this scheduler by other operators.\n If the learning rate is set solely by this scheduler, the learning", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingLR.html", "category": "pytorch docs"} {"text": "rate at each step becomes:\n \\eta_t = \\eta_{min} + \\frac{1}{2}(\\eta_{max} -\n \\eta_{min})\\left(1 +\n \\cos\\left(\\frac{T_{cur}}{T_{max}}\\pi\\right)\\right)\n\nIt has been proposed in SGDR: Stochastic Gradient Descent with Warm\n Restarts. Note that this only implements the cosine annealing part\n of SGDR, and not the restarts.\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **T_max** (*int*) -- Maximum number of iterations.\n\n * **eta_min** (*float*) -- Minimum learning rate. Default: 0.\n\n * **last_epoch** (*int*) -- The index of last epoch. Default:\n -1.\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)\n Loads the schedulers state.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. Should be an\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingLR.html", "category": "pytorch docs"} {"text": "object returned from a call to \"state_dict()\".\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CosineAnnealingLR.html", "category": "pytorch docs"} {"text": "torch.foreach_tan\ntorch.foreach_tan(self: List[Tensor]) -> None\nApply \"torch.tan()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_tan_.html", "category": "pytorch docs"} {"text": "torch.is_floating_point\ntorch.is_floating_point(input)\nReturns True if the data type of \"input\" is a floating point data\n type i.e., one of \"torch.float64\", \"torch.float32\",\n \"torch.float16\", and \"torch.bfloat16\".\nParameters:\n input (Tensor) -- the input tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.is_floating_point.html", "category": "pytorch docs"} {"text": "Conv1d\nclass torch.ao.nn.quantized.Conv1d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\nApplies a 1D convolution over a quantized input signal composed of\n several quantized input planes.\nFor details on input arguments, parameters, and implementation see\n \"Conv1d\".\nNote:\n Only *zeros* is supported for the \"padding_mode\" argument.\n\nNote:\n Only *torch.quint8* is supported for the input data type.\n\nVariables:\n * weight (Tensor) -- packed tensor derived from the\n learnable weight parameter.\n * **scale** (*Tensor*) -- scalar for the output scale\n\n * **zero_point** (*Tensor*) -- scalar for the output zero point\n\nSee \"Conv1d\" for other attributes.\nExamples:\n >>> m = nn.quantized.Conv1d(16, 33, 3, stride=2)\n >>> input = torch.randn(20, 16, 100)\n >>> # quantize input to quint8\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv1d.html", "category": "pytorch docs"} {"text": "\n\n\nquantize 
input to quint8\n >>> q_input = torch.quantize_per_tensor(input, scale=1.0, zero_point=0,\n ... dtype=torch.quint8)\n >>> output = m(q_input)\n\n\n\n\nclassmethod from_float(mod)\n Creates a quantized module from a float module or qparams_dict.\n\n Parameters:\n **mod** (*Module*) -- a float module, either produced by\n torch.ao.quantization utilities or provided by the user\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.Conv1d.html", "category": "pytorch docs"} {"text": "disable_observer\nclass torch.quantization.fake_quantize.disable_observer(mod)\nDisable observation for this module, if applicable. Example usage:\n # model is any PyTorch model\n model.apply(torch.ao.quantization.disable_observer)\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.disable_observer.html", "category": "pytorch docs"} {"text": "torch.autograd.graph.Node.metadata\nabstract Node.metadata()\nReturns the metadata.\nReturn type:\n dict", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.metadata.html", "category": "pytorch docs"} {"text": "torch.Tensor.arccosh_\nTensor.arccosh_()\nacosh_() -> Tensor\nIn-place version of \"arccosh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arccosh_.html", "category": "pytorch docs"} {"text": "DTypeConfig\nclass torch.ao.quantization.backend_config.DTypeConfig(input_dtype=None, output_dtype=None, weight_dtype=None, bias_dtype=None, is_dynamic=None)\nConfig object that specifies the supported data types passed as\n arguments to quantize ops in the reference model spec, for input\n and output activations, weights, and biases.\nFor example, consider the following reference model:\n quant1 - [dequant1 - fp32_linear - quant2] - dequant2\n\nThe pattern in the square brackets refers to the reference pattern\n of statically quantized linear. Setting the input dtype as\n torch.quint8 in the DTypeConfig means we pass in torch.quint8\n as the dtype argument to the first quantize op (quant1). Similarly,\n setting the output dtype as torch.quint8 means we pass in\n torch.quint8 as the dtype argument to the second quantize op\n (quant2).\nNote that the dtype here does not refer to the interface dtypes of", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeConfig.html", "category": "pytorch docs"} {"text": "the op. For example, the \"input dtype\" here is not the dtype of the\n input tensor passed to the quantized linear op. Though it can still\n be the same as the interface dtype, this is not always the case,\n e.g. the interface dtype is fp32 in dynamic quantization but the\n \"input dtype\" specified in the DTypeConfig would still be quint8.\n The semantics of dtypes here are the same as the semantics of the\n dtypes specified in the observers.\nThese dtypes are matched against the ones specified in the user\u00e2\u0080\u0099s\n QConfig. If there is a match, and the QConfig satisfies the\n constraints specified in the DTypeConfig (if any), then we will\n quantize the given pattern using this DTypeConfig. Otherwise, the\n QConfig is ignored and the pattern will not be quantized.\nExample usage:\n >>> dtype_config1 = DTypeConfig(\n ... input_dtype=torch.quint8,\n ... output_dtype=torch.quint8,\n ... weight_dtype=torch.qint8,\n ... bias_dtype=torch.float)\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeConfig.html", "category": "pytorch docs"} {"text": "... 
bias_dtype=torch.float)\n >>> dtype_config2 = DTypeConfig(\n ... input_dtype=DTypeWithConstraints(\n ... dtype=torch.quint8,\n ... quant_min_lower_bound=0,\n ... quant_max_upper_bound=255,\n ... ),\n ... output_dtype=DTypeWithConstraints(\n ... dtype=torch.quint8,\n ... quant_min_lower_bound=0,\n ... quant_max_upper_bound=255,\n ... ),\n ... weight_dtype=DTypeWithConstraints(\n ... dtype=torch.qint8,\n ... quant_min_lower_bound=-128,\n ... quant_max_upper_bound=127,\n ... ),\n ... bias_dtype=torch.float)\n\n >>> dtype_config1.input_dtype\n torch.quint8\n\n >>> dtype_config2.input_dtype\n torch.quint8\n\n >>> dtype_config2.input_dtype_with_constraints\n DTypeWithConstraints(dtype=torch.quint8, quant_min_lower_bound=0, quant_max_upper_bound=255, scale_min_lower_bound=None, scale_max_upper_bound=None)\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeConfig.html", "category": "pytorch docs"} {"text": "classmethod from_dict(dtype_config_dict)\n Create a \"DTypeConfig\" from a dictionary with the following\n items (all optional):\n \"input_dtype\": torch.dtype or \"DTypeWithConstraints\"\n \"output_dtype\": torch.dtype or \"DTypeWithConstraints\"\n \"weight_dtype\": torch.dtype or \"DTypeWithConstraints\"\n \"bias_type\": torch.dtype \"is_dynamic\": bool\n\n Return type:\n *DTypeConfig*\n\nto_dict()\n Convert this \"DTypeConfig\" to a dictionary with the items\n described in \"from_dict()\".\n\n Return type:\n *Dict*[str, *Any*]\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.backend_config.DTypeConfig.html", "category": "pytorch docs"} {"text": "BCEWithLogitsLoss\nclass torch.nn.BCEWithLogitsLoss(weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)\nThis loss combines a Sigmoid layer and the BCELoss in one\n single class. This version is more numerically stable than using a\n plain Sigmoid followed by a BCELoss as, by combining the\n operations into one layer, we take advantage of the log-sum-exp\n trick for numerical stability.\nThe unreduced (i.e. with \"reduction\" set to \"'none'\") loss can be\n described as:\n \\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad l_n = - w_n\n \\left[ y_n \\cdot \\log \\sigma(x_n) + (1 - y_n) \\cdot \\log (1 -\n \\sigma(x_n)) \\right],\n\nwhere N is the batch size. If \"reduction\" is not \"'none'\" (default\n \"'mean'\"), then\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{`mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{`sum'.}\n \\end{cases}\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html", "category": "pytorch docs"} {"text": "\\end{cases}\nThis is used for measuring the error of a reconstruction in for\n example an auto-encoder. Note that the targets t[i] should be\n numbers between 0 and 1.\nIt's possible to trade off recall and precision by adding weights\n to positive examples. 
In the case of multi-label classification the\n loss can be described as:\n \\ell_c(x, y) = L_c = \\{l_{1,c},\\dots,l_{N,c}\\}^\\top, \\quad\n l_{n,c} = - w_{n,c} \\left[ p_c y_{n,c} \\cdot \\log\n \\sigma(x_{n,c}) + (1 - y_{n,c}) \\cdot \\log (1 - \\sigma(x_{n,c}))\n \\right],\n\nwhere c is the class number (c > 1 for multi-label binary\n classification, c = 1 for single-label binary classification), n is\n the number of the sample in the batch and p_c is the weight of the\n positive answer for the class c.\np_c > 1 increases the recall, p_c < 1 increases the precision.\nFor example, if a dataset contains 100 positive and 300 negative\n examples of a single class, then pos_weight for the class should", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html", "category": "pytorch docs"} {"text": "be equal to \\frac{300}{100}=3. The loss would act as if the dataset\n contains 3\\times 100=300 positive examples.\nExamples:\n >>> target = torch.ones([10, 64], dtype=torch.float32) # 64 classes, batch size = 10\n >>> output = torch.full([10, 64], 1.5) # A prediction (logit)\n >>> pos_weight = torch.ones([64]) # All weights are equal to 1\n >>> criterion = torch.nn.BCEWithLogitsLoss(pos_weight=pos_weight)\n >>> criterion(output, target) # -log(sigmoid(1.5))\n tensor(0.20...)\n\nParameters:\n * weight (Tensor, optional) -- a manual rescaling\n weight given to the loss of each batch element. If given, has\n to be a Tensor of size nbatch.\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html", "category": "pytorch docs"} {"text": "is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html", "category": "pytorch docs"} {"text": "\npos_weight (Tensor, optional) -- a weight of\n positive examples. Must be a vector with length equal to the\n number of classes.\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Target: (*), same shape as the input.\n\n * Output: scalar. 
If \"reduction\" is \"'none'\", then (*), same\n shape as input.\n\n Examples:\n\n >>> loss = nn.BCEWithLogitsLoss()\n >>> input = torch.randn(3, requires_grad=True)\n >>> target = torch.empty(3).random_(2)\n >>> output = loss(input, target)\n >>> output.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html", "category": "pytorch docs"} {"text": "torch.Tensor.nextafter_\nTensor.nextafter_(other) -> Tensor\nIn-place version of \"nextafter()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nextafter_.html", "category": "pytorch docs"} {"text": "torch.Tensor.qscheme\nTensor.qscheme() -> torch.qscheme\nReturns the quantization scheme of a given QTensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.qscheme.html", "category": "pytorch docs"} {"text": "torch.autograd.gradcheck\ntorch.autograd.gradcheck(func, inputs, *, eps=1e-06, atol=1e-05, rtol=0.001, raise_exception=True, check_sparse_nnz=False, nondet_tol=0.0, check_undefined_grad=True, check_grad_dtypes=False, check_batched_grad=False, check_batched_forward_grad=False, check_forward_ad=False, check_backward_ad=True, fast_mode=False)\nCheck gradients computed via small finite differences against\n analytical gradients w.r.t. tensors in \"inputs\" that are of\n floating point or complex type and with \"requires_grad=True\".\nThe check between numerical and analytical gradients uses\n \"allclose()\".\nFor most of the complex functions we consider for optimization\n purposes, no notion of Jacobian exists. Instead, gradcheck verifies\n if the numerical and analytical values of the Wirtinger and\n Conjugate Wirtinger derivatives are consistent. Because the\n gradient computation is done under the assumption that the overall", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradcheck.html", "category": "pytorch docs"} {"text": "function has a real-valued output, we treat functions with complex\n output in a special way. For these functions, gradcheck is applied\n to two real-valued functions corresponding to taking the real\n components of the complex outputs for the first, and taking the\n imaginary components of the complex outputs for the second. For\n more details, check out Autograd for Complex Numbers.\nNote:\n The default values are designed for \"input\" of double precision.\n This check will likely fail if \"input\" is of less precision,\n e.g., \"FloatTensor\".\n\nWarning:\n If any checked tensor in \"input\" has overlapping memory, i.e.,\n different indices pointing to the same memory address (e.g., from\n \"torch.expand()\"), this check will likely fail because the\n numerical gradients computed by point perturbation at such\n indices will change values at all other indices that share the\n same memory address.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradcheck.html", "category": "pytorch docs"} {"text": "same memory address.\nParameters:\n * func (function) -- a Python function that takes Tensor\n inputs and returns a Tensor or a tuple of Tensors\n * **inputs** (*tuple of Tensor** or **Tensor*) -- inputs to the\n function\n\n * **eps** (*float**, **optional*) -- perturbation for finite\n differences\n\n * **atol** (*float**, **optional*) -- absolute tolerance\n\n * **rtol** (*float**, **optional*) -- relative tolerance\n\n * **raise_exception** (*bool**, **optional*) -- indicating\n whether to raise an exception if the check fails. 
The\n exception gives more information about the exact nature of the\n failure. This is helpful when debugging gradchecks.\n\n * **check_sparse_nnz** (*bool**, **optional*) -- if True,\n gradcheck allows for SparseTensor input, and for any\n SparseTensor at input, gradcheck will perform check at nnz\n positions only.\n\n * **nondet_tol** (*float**, **optional*) -- tolerance for non-\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradcheck.html", "category": "pytorch docs"} {"text": "determinism. When running identical inputs through the\n differentiation, the results must either match exactly\n (default, 0.0) or be within this tolerance.\n * **check_undefined_grad** (*bool**, **optional*) -- if True,\n check if undefined output grads are supported and treated as\n zeros, for \"Tensor\" outputs.\n\n * **check_batched_grad** (*bool**, **optional*) -- if True,\n check if we can compute batched gradients using prototype vmap\n support. Defaults to False.\n\n * **check_batched_forward_grad** (*bool**, **optional*) -- if\n True, checks if we can compute batched forward gradients using\n forward ad and prototype vmap support. Defaults to False.\n\n * **check_forward_ad** (*bool**, **optional*) -- if True, check\n that the gradients computed with forward mode AD match the\n numerical ones. Defaults to False.\n\n * **check_backward_ad** (*bool**, **optional*) -- if False, do\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradcheck.html", "category": "pytorch docs"} {"text": "not perform any checks that rely on backward mode AD to be\n implemented. Defaults to True.\n * **fast_mode** (*bool**, **optional*) -- Fast mode for\n gradcheck and gradgradcheck is currently only implemented for\n R to R functions. If none of the inputs and outputs are\n complex a faster implementation of gradcheck that no longer\n computes the entire jacobian is run; otherwise, we fall back\n to the slow implementation.\n\nReturns:\n True if all differences satisfy allclose condition\nReturn type:\n bool", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.gradcheck.html", "category": "pytorch docs"} {"text": "MaxPool3d\nclass torch.nn.MaxPool3d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)\nApplies a 3D max pooling over an input signal composed of several\n input planes.\nIn the simplest case, the output value of the layer with input size\n (N, C, D, H, W), output (N, C, D_{out}, H_{out}, W_{out}) and\n \"kernel_size\" (kD, kH, kW) can be precisely described as:\n \\begin{aligned} \\text{out}(N_i, C_j, d, h, w) ={} &\n \\max_{k=0, \\ldots, kD-1} \\max_{m=0, \\ldots, kH-1} \\max_{n=0,\n \\ldots, kW-1} \\\\ &\n \\text{input}(N_i, C_j, \\text{stride[0]} \\times d + k,\n \\text{stride[1]} \\times h + m, \\text{stride[2]} \\times w + n)\n \\end{aligned}\n\nIf \"padding\" is non-zero, then the input is implicitly padded with\n negative infinity on both sides for \"padding\" number of points.\n \"dilation\" controls the spacing between the kernel points. It is", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool3d.html", "category": "pytorch docs"} {"text": "harder to describe, but this link has a nice visualization of what\n \"dilation\" does.\nNote:\n When ceil_mode=True, sliding windows are allowed to go off-bounds\n if they start within the left padding or the input. 
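To make the effect concrete, a small sketch (shapes chosen arbitrarily) comparing the output size with and without ceil_mode:

    >>> import torch
    >>> x = torch.randn(1, 1, 5, 5, 5)
    >>> torch.nn.MaxPool3d(2, stride=2, ceil_mode=False)(x).shape
    torch.Size([1, 1, 2, 2, 2])
    >>> torch.nn.MaxPool3d(2, stride=2, ceil_mode=True)(x).shape
    torch.Size([1, 1, 3, 3, 3])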
Sliding\n windows that would start in the right padded region are ignored.\n\nThe parameters \"kernel_size\", \"stride\", \"padding\", \"dilation\" can\n either be:\n * a single \"int\" -- in which case the same value is used for the\n depth, height and width dimension\n\n * a \"tuple\" of three ints -- in which case, the first *int* is\n used for the depth dimension, the second *int* for the height\n dimension and the third *int* for the width dimension\n\nParameters:\n * kernel_size (Union[int, Tuple[int, int,\n int]]) -- the size of the window to take a max over\n * **stride** (*Union**[**int**, **Tuple**[**int**, **int**,\n **int**]**]*) -- the stride of the window. Default value is\n \"kernel_size\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool3d.html", "category": "pytorch docs"} {"text": "\"kernel_size\"\n * **padding** (*Union**[**int**, **Tuple**[**int**, **int**,\n **int**]**]*) -- Implicit negative infinity padding to be\n added on all three sides\n\n * **dilation** (*Union**[**int**, **Tuple**[**int**, **int**,\n **int**]**]*) -- a parameter that controls the stride of\n elements in the window\n\n * **return_indices** (*bool*) -- if \"True\", will return the max\n indices along with the outputs. Useful for\n \"torch.nn.MaxUnpool3d\" later\n\n * **ceil_mode** (*bool*) -- when True, will use *ceil* instead\n of *floor* to compute the output shape\n\nShape:\n * Input: (N, C, D_{in}, H_{in}, W_{in}) or (C, D_{in}, H_{in},\n W_{in}).\n * Output: (N, C, D_{out}, H_{out}, W_{out}) or (C, D_{out},\n H_{out}, W_{out}), where\n\n D_{out} = \\left\\lfloor\\frac{D_{in} + 2 \\times\n \\text{padding}[0] - \\text{dilation}[0] \\times\n (\\text{kernel\\_size}[0] - 1) - 1}{\\text{stride}[0]} +\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool3d.html", "category": "pytorch docs"} {"text": "1\\right\\rfloor\n H_{out} = \\left\\lfloor\\frac{H_{in} + 2 \\times\n \\text{padding}[1] - \\text{dilation}[1] \\times\n (\\text{kernel\\_size}[1] - 1) - 1}{\\text{stride}[1]} +\n 1\\right\\rfloor\n\n W_{out} = \\left\\lfloor\\frac{W_{in} + 2 \\times\n \\text{padding}[2] - \\text{dilation}[2] \\times\n (\\text{kernel\\_size}[2] - 1) - 1}{\\text{stride}[2]} +\n 1\\right\\rfloor\n\nExamples:\n >>> # pool of square window of size=3, stride=2\n >>> m = nn.MaxPool3d(3, stride=2)\n >>> # pool of non-square window\n >>> m = nn.MaxPool3d((3, 2, 2), stride=(2, 1, 2))\n >>> input = torch.randn(20, 16, 50, 44, 31)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.index_reduce\nTensor.index_reduce()", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_reduce.html", "category": "pytorch docs"} {"text": "torch.hspmm\ntorch.hspmm(mat1, mat2, *, out=None) -> Tensor\nPerforms a matrix multiplication of a sparse COO matrix \"mat1\" and\n a strided matrix \"mat2\". 
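A minimal sketch (shapes picked arbitrarily for illustration):

    >>> import torch
    >>> mat1 = torch.eye(3).to_sparse()   # sparse COO matrix
    >>> mat2 = torch.randn(3, 4)          # strided (dense) matrix
    >>> out = torch.hspmm(mat1, mat2)     # hybrid COO result of shape (3, 4)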
The result is a (1 + 1)-dimensional hybrid\n COO matrix.\nParameters:\n * mat1 (Tensor) -- the first sparse matrix to be matrix\n multiplied\n * **mat2** (*Tensor*) -- the second strided matrix to be matrix\n multiplied\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.hspmm.html", "category": "pytorch docs"} {"text": "torch.sparse.sampled_addmm\ntorch.sparse.sampled_addmm(input, mat1, mat2, *, beta=1., alpha=1., out=None) -> Tensor\nPerforms a matrix multiplication of the dense matrices \"mat1\" and\n \"mat2\" at the locations specified by the sparsity pattern of\n \"input\". The matrix \"input\" is added to the final result.\nMathematically this performs the following operation:\n \\text{out} = \\alpha\\ (\\text{mat1} \\mathbin{@}\n \\text{mat2})*\\text{spy}(\\text{input}) + \\beta\\ \\text{input}\n\nwhere \\text{spy}(\\text{input}) is the sparsity pattern matrix of\n \"input\", \"alpha\" and \"beta\" are the scaling factors.\n \\text{spy}(\\text{input}) has value 1 at the positions where \"input\"\n has non-zero values, and 0 elsewhere.\nNote:\n \"input\" must be a sparse CSR tensor. \"mat1\" and \"mat2\" must be\n dense tensors.\n\nParameters:\n * input (Tensor) -- a sparse CSR matrix of shape (m, n)\n to be added and used to compute the sampled matrix\n multiplication", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.sampled_addmm.html", "category": "pytorch docs"} {"text": "multiplication\n * **mat1** (*Tensor*) -- a dense matrix of shape *(m, k)* to be\n multiplied\n\n * **mat2** (*Tensor*) -- a dense matrix of shape *(k, n)* to be\n multiplied\n\nKeyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * **alpha** (*Number**, **optional*) -- multiplier for mat1 @\n mat2 (\\alpha)\n\n * **out** (*Tensor**, **optional*) -- output tensor. Ignored if\n *None*. Default: *None*.\n\nExamples:\n >>> input = torch.eye(3, device='cuda').to_sparse_csr()\n >>> mat1 = torch.randn(3, 5, device='cuda')\n >>> mat2 = torch.randn(5, 3, device='cuda')\n >>> torch.sparse.sampled_addmm(input, mat1, mat2)\n tensor(crow_indices=tensor([0, 1, 2, 3]),\n col_indices=tensor([0, 1, 2]),\n values=tensor([ 0.2847, -0.7805, -0.1900]), device='cuda:0',\n size=(3, 3), nnz=3, layout=torch.sparse_csr)\n >>> torch.sparse.sampled_addmm(input, mat1, mat2).to_dense()\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.sampled_addmm.html", "category": "pytorch docs"} {"text": "tensor([[ 0.2847, 0.0000, 0.0000],\n [ 0.0000, -0.7805, 0.0000],\n [ 0.0000, 0.0000, -0.1900]], device='cuda:0')\n >>> torch.sparse.sampled_addmm(input, mat1, mat2, beta=0.5, alpha=0.5)\n tensor(crow_indices=tensor([0, 1, 2, 3]),\n col_indices=tensor([0, 1, 2]),\n values=tensor([ 0.1423, -0.3903, -0.0950]), device='cuda:0',\n size=(3, 3), nnz=3, layout=torch.sparse_csr)", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.sampled_addmm.html", "category": "pytorch docs"} {"text": "torch.take\ntorch.take(input, index) -> Tensor\nReturns a new tensor with the elements of \"input\" at the given\n indices. The input tensor is treated as if it were viewed as a 1-D\n tensor. The result takes the same shape as the indices.\nParameters:\n * input (Tensor) -- the input tensor.\n * **index** (*LongTensor*) -- the indices into tensor\n\nExample:\n >>> src = torch.tensor([[4, 3, 5],\n ... 
[6, 7, 8]])\n >>> torch.take(src, torch.tensor([0, 2, 5]))\n tensor([ 4, 5, 8])\n", "source": "https://pytorch.org/docs/stable/generated/torch.take.html", "category": "pytorch docs"} {"text": "torch.Tensor.equal\nTensor.equal(other) -> bool\nSee \"torch.equal()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.equal.html", "category": "pytorch docs"} {"text": "default_weight_only_qconfig\ntorch.quantization.qconfig.default_weight_only_qconfig\nalias of QConfig(activation=,\n weight=functools.partial(,\n observer=,\n quant_min=-128, quant_max=127, dtype=torch.qint8,\n qscheme=torch.per_tensor_symmetric, reduce_range=False){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_weight_only_qconfig.html", "category": "pytorch docs"} {"text": "torch.vander\ntorch.vander(x, N=None, increasing=False) -> Tensor\nGenerates a Vandermonde matrix.\nThe columns of the output matrix are elementwise powers of the\n input vector x^{(N-1)}, x^{(N-2)}, ..., x^0. If increasing is True,\n the order of the columns is reversed x^0, x^1, ..., x^{(N-1)}. Such\n a matrix with a geometric progression in each row is named for\n Alexandre-Theophile Vandermonde.\nParameters:\n * x (Tensor) -- 1-D input tensor.\n * **N** (*int**, **optional*) -- Number of columns in the\n output. If N is not specified, a square array is returned (N =\n len(x)).\n\n * **increasing** (*bool**, **optional*) -- Order of the powers\n of the columns. If True, the powers increase from left to\n right, if False (the default) they are reversed.\n\nReturns:\n Vandermonde matrix. If increasing is False, the first column is\n x^{(N-1)}, the second x^{(N-2)} and so forth. If increasing is", "source": "https://pytorch.org/docs/stable/generated/torch.vander.html", "category": "pytorch docs"} {"text": "True, the columns are x^0, x^1, ..., x^{(N-1)}.\nReturn type:\n Tensor\nExample:\n >>> x = torch.tensor([1, 2, 3, 5])\n >>> torch.vander(x)\n tensor([[ 1, 1, 1, 1],\n [ 8, 4, 2, 1],\n [ 27, 9, 3, 1],\n [125, 25, 5, 1]])\n >>> torch.vander(x, N=3)\n tensor([[ 1, 1, 1],\n [ 4, 2, 1],\n [ 9, 3, 1],\n [25, 5, 1]])\n >>> torch.vander(x, N=3, increasing=True)\n tensor([[ 1, 1, 1],\n [ 1, 2, 4],\n [ 1, 3, 9],\n [ 1, 5, 25]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.vander.html", "category": "pytorch docs"} {"text": "NLLLoss\nclass torch.nn.NLLLoss(weight=None, size_average=None, ignore_index=- 100, reduce=None, reduction='mean')\nThe negative log likelihood loss. It is useful to train a\n classification problem with C classes.\nIf provided, the optional argument \"weight\" should be a 1D Tensor\n assigning weight to each of the classes. This is particularly\n useful when you have an unbalanced training set.\nThe input given through a forward call is expected to contain\n log-probabilities of each class. input has to be a Tensor of size\n either (minibatch, C) or (minibatch, C, d_1, d_2, ..., d_K) with K\n \\geq 1 for the K-dimensional case. 
The latter is useful for\n higher dimension inputs, such as computing NLL loss per-pixel for\n 2D images.\nObtaining log-probabilities in a neural network is easily achieved\n by adding a LogSoftmax layer in the last layer of your network.\n You may use CrossEntropyLoss instead, if you prefer not to add an\n extra layer.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html", "category": "pytorch docs"} {"text": "extra layer.\nThe target that this loss expects should be a class index in the\n range [0, C-1] where C = number of classes; if ignore_index is\n specified, this loss also accepts this class index (this index may\n not necessarily be in the class range).\nThe unreduced (i.e. with \"reduction\" set to \"'none'\") loss can be\n described as:\n \\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad l_n = - w_{y_n}\n x_{n,y_n}, \\quad w_{c} = \\text{weight}[c] \\cdot \\mathbb{1}\\{c\n \\not= \\text{ignore\\_index}\\},\n\nwhere x is the input, y is the target, w is the weight, and N is\n the batch size. If \"reduction\" is not \"'none'\" (default \"'mean'\"),\n then\n \\ell(x, y) = \\begin{cases} \\sum_{n=1}^N\n \\frac{1}{\\sum_{n=1}^N w_{y_n}} l_n, & \\text{if reduction} =\n \\text{`mean';}\\\\ \\sum_{n=1}^N l_n, & \\text{if\n reduction} = \\text{`sum'.} \\end{cases}\n\nParameters:\n * weight (Tensor, optional) -- a manual rescaling", "source": "https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html", "category": "pytorch docs"} {"text": "weight given to each class. If given, it has to be a Tensor of\n size C. Otherwise, it is treated as if having all ones.\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"None\"\n\n * **ignore_index** (*int**, **optional*) -- Specifies a target\n value that is ignored and does not contribute to the input\n gradient. When \"size_average\" is \"True\", the loss is averaged\n over non-ignored targets.\n\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html", "category": "pytorch docs"} {"text": "\"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"None\"\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the weighted\n mean of the output is taken, \"'sum'\": the output will be\n summed. Note: \"size_average\" and \"reduce\" are in the process\n of being deprecated, and in the meantime, specifying either of\n those two args will override \"reduction\". 
Default: \"'mean'\"\n\nShape:\n * Input: (N, C) or (C), where C = number of classes, or (N, C,\n d_1, d_2, ..., d_K) with K \\geq 1 in the case of\n K-dimensional loss.\n * Target: (N) or (), where each value is 0 \\leq\n \\text{targets}[i] \\leq C-1, or (N, d_1, d_2, ..., d_K) with K\n \\geq 1 in the case of K-dimensional loss.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html", "category": "pytorch docs"} {"text": "\\geq 1 in the case of K-dimensional loss.\n * Output: If \"reduction\" is \"'none'\", shape (N) or (N, d_1, d_2,\n ..., d_K) with K \\geq 1 in the case of K-dimensional loss.\n Otherwise, scalar.\n\nExamples:\n >>> m = nn.LogSoftmax(dim=1)\n >>> loss = nn.NLLLoss()\n >>> # input is of size N x C = 3 x 5\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> # each element in target has to have 0 <= value < C\n >>> target = torch.tensor([1, 0, 4])\n >>> output = loss(m(input), target)\n >>> output.backward()\n >>>\n >>>\n >>> # 2D loss example (used, for example, with image inputs)\n >>> N, C = 5, 4\n >>> loss = nn.NLLLoss()\n >>> # input is of size N x C x height x width\n >>> data = torch.randn(N, 16, 10, 10)\n >>> conv = nn.Conv2d(16, C, (3, 3))\n >>> m = nn.LogSoftmax(dim=1)\n >>> # each element in target has to have 0 <= value < C\n >>> target = torch.empty(N, 8, 8, dtype=torch.long).random_(0, C)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html", "category": "pytorch docs"} {"text": "\n\n\noutput = loss(m(conv(data)), target)\n >>> output.backward()\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.NLLLoss.html", "category": "pytorch docs"} {"text": "CUDAPluggableAllocator\nclass torch.cuda.CUDAPluggableAllocator(path_to_so_file, alloc_fn_name, free_fn_name)\nCUDA memory allocator loaded from a so file.\nMemory allocators are compiled in .so files and loaded dynamically\n using ctypes. To change the active allocator use the\n \"torch.memory.cuda.change_current_allocator()\" function.\nParameters:\n * path_to_so_file (str) -- Path in the filesystem to the\n .so file containing the allocator functions\n * **alloc_fn_name** (*str*) -- Name of the function to perform\n the memory allocation in the so file. The signature must be:\n void* alloc_fn_name(ssize_t size, int device, cudaStream_t\n stream);\n\n * **free_fn_name** (*str*) -- Name of the function to perform\n the memory release in the so file. The signature must be: void\n free_fn_name(void* ptr, size_t size, cudaStream_t stream);\n\nWarning:\n This is currently supported only in unix OSs\n\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.CUDAPluggableAllocator.html", "category": "pytorch docs"} {"text": "Note:\n See Memory management for details on creating and using a custom\n allocator\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.CUDAPluggableAllocator.html", "category": "pytorch docs"} {"text": "torch.set_deterministic_debug_mode\ntorch.set_deterministic_debug_mode(debug_mode)\nSets the debug mode for deterministic operations.\nNote:\n This is an alternative interface for\n \"torch.use_deterministic_algorithms()\". Refer to that function's\n documentation for details about affected operations.\n\nParameters:\n debug_mode (str or int) -- If \"default\" or 0, don't\n error or warn on nondeterministic operations. If \"warn\" or 1,\n warn on nondeterministic operations. 
If \"error\" or 2, error on\n nondeterministic operations.", "source": "https://pytorch.org/docs/stable/generated/torch.set_deterministic_debug_mode.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_coalesced\nTensor.is_coalesced() -> bool\nReturns \"True\" if \"self\" is a sparse COO tensor that is coalesced,\n \"False\" otherwise.\nWarning:\n Throws an error if \"self\" is not a sparse COO tensor.\n\nSee \"coalesce()\" and uncoalesced tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_coalesced.html", "category": "pytorch docs"} {"text": "ReLU6\nclass torch.ao.nn.quantized.ReLU6(inplace=False)\nApplies the element-wise function:\n\\text{ReLU6}(x) = \\min(\\max(x_0, x), q(6)), where x_0 is the\n zero_point, and q(6) is the quantized representation of number 6.\nParameters:\n inplace (bool) -- can optionally do the operation in-\n place. Default: \"False\"\nShape:\n * Input: (N, *) where *** means, any number of additional\n dimensions\n * Output: (N, *), same shape as the input\n\n[image]\nExamples:\n >>> m = nn.quantized.ReLU6()\n >>> input = torch.randn(2)\n >>> input = torch.quantize_per_tensor(input, 1.0, 0, dtype=torch.qint32)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.ReLU6.html", "category": "pytorch docs"} {"text": "torch.cumsum\ntorch.cumsum(input, dim, *, dtype=None, out=None) -> Tensor\nReturns the cumulative sum of elements of \"input\" in the dimension\n \"dim\".\nFor example, if \"input\" is a vector of size N, the result will also\n be a vector of size N, with elements.\n y_i = x_1 + x_2 + x_3 + \\dots + x_i\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to do the operation over\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None.\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> a = torch.randn(10)\n >>> a\n tensor([-0.8286, -0.4890, 0.5155, 0.8443, 0.1865, -0.1752, -2.0595,\n 0.1850, -1.1571, -0.4243])\n >>> torch.cumsum(a, dim=0)\n", "source": "https://pytorch.org/docs/stable/generated/torch.cumsum.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.cumsum(a, dim=0)\n tensor([-0.8286, -1.3175, -0.8020, 0.0423, 0.2289, 0.0537, -2.0058,\n -1.8209, -2.9780, -3.4022])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.cumsum.html", "category": "pytorch docs"} {"text": "torch.autograd.graph.Node.name\nabstract Node.name()\nReturns the name.\nExample:\n >>> import torch\n >>> a = torch.tensor([0., 0., 0.], requires_grad=True)\n >>> b = a.clone()\n >>> assert isinstance(b.grad_fn, torch.autograd.graph.Node)\n >>> print(b.grad_fn.name())\n CloneBackward0\n\nReturn type:\n str", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.graph.Node.name.html", "category": "pytorch docs"} {"text": "set_multithreading_enabled\nclass torch.autograd.set_multithreading_enabled(mode)\nContext-manager that sets multithreaded backwards on or off.\n\"set_multithreading_enabled\" will enable or disable multithreaded\n backwards based on its argument \"mode\". 
It can be used as a\n context-manager or as a function.\nThis context manager is thread local; it will not affect\n computation in other threads.\nParameters:\n mode (bool) -- Flag whether to enable multithreaded\n backwards (\"True\"), or disable (\"False\").\nNote:\n This API does not apply to forward-mode AD.\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.set_multithreading_enabled.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_pinned\nTensor.is_pinned()\nReturns true if this tensor resides in pinned memory.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_pinned.html", "category": "pytorch docs"} {"text": "torch.signal.windows.gaussian\ntorch.signal.windows.gaussian(M, *, std=1.0, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\nComputes a window with a gaussian waveform.\nThe gaussian window is defined as follows:\n w_n = \\exp{\\left(-\\left(\\frac{n}{2\\sigma}\\right)^2\\right)}\n\nThe window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\nParameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\nKeyword Arguments:\n * std (float, optional) -- the standard deviation of\n the gaussian. It controls how narrow or wide the window is.\n Default: 1.0.\n * **sym** (*bool**, **optional*) -- If *False*, returns a\n periodic window suitable for use in spectral analysis. If\n *True*, returns a symmetric window suitable for use in filter\n design. Default: *True*.\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.gaussian.html", "category": "pytorch docs"} {"text": "design. Default: True.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nReturn type:\n Tensor\nExamples:\n >>> # Generates a symmetric gaussian window with a standard deviation of 1.0.\n >>> torch.signal.windows.gaussian(10)\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.gaussian.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.signal.windows.gaussian(10)\n tensor([4.0065e-05, 2.1875e-03, 4.3937e-02, 3.2465e-01, 8.8250e-01, 8.8250e-01, 3.2465e-01, 4.3937e-02, 2.1875e-03, 4.0065e-05])\n\n\n\n >>> # Generates a periodic gaussian window and standard deviation equal to 0.9.\n >>> torch.signal.windows.gaussian(10, sym=False,std=0.9)\n tensor([1.9858e-07, 5.1365e-05, 3.8659e-03, 8.4658e-02, 5.3941e-01, 1.0000e+00, 5.3941e-01, 8.4658e-02, 3.8659e-03, 5.1365e-05])\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.gaussian.html", "category": "pytorch docs"} {"text": "torch.Tensor.isposinf\nTensor.isposinf() -> Tensor\nSee \"torch.isposinf()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isposinf.html", "category": "pytorch docs"} {"text": "torch.Tensor.gather\nTensor.gather(dim, index) -> Tensor\nSee \"torch.gather()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.gather.html", "category": "pytorch docs"} {"text": "torch.linalg.lu_factor\ntorch.linalg.lu_factor(A, *, bool pivot=True, out=None) -> (Tensor, Tensor)\nComputes a compact representation of the LU factorization with\n partial pivoting of a matrix.\nThis function computes a compact representation of the\n decomposition given by \"torch.linalg.lu()\". If the matrix is\n square, this representation may be used in\n \"torch.linalg.lu_solve()\" to solve system of linear equations that\n share the matrix \"A\".\nThe returned decomposition is represented as a named tuple (LU,\n pivots). The \"LU\" matrix has the same shape as the input matrix\n \"A\". Its upper and lower triangular parts encode the non-constant\n elements of \"L\" and \"U\" of the LU decomposition of \"A\".\nThe returned permutation matrix is represented by a 1-indexed\n vector. pivots[i] == j represents that in the i-th step of the\n algorithm, the i-th row was permuted with the j-1-th row.\nOn CUDA, one may use \"pivot\"= False. In this case, this function", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor.html", "category": "pytorch docs"} {"text": "returns the LU decomposition without pivoting if it exists.\nSupports inputs of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if the inputs are batches of\n matrices then the output has the same batch dimensions.\nNote:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU. For a version of this function that does not\n synchronize, see \"torch.linalg.lu_factor_ex()\".\n\nWarning:\n The LU decomposition is almost never unique, as often there are\n different permutation matrices that can yield different LU\n decompositions. As such, different platforms, like SciPy, or\n inputs on different devices, may produce different valid\n decompositions.Gradient computations are only supported if the\n input matrix is full-rank. If this condition is not met, no error\n will be thrown, but the gradient may not be finite. 
This is\n because the LU decomposition with pivoting is not differentiable\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor.html", "category": "pytorch docs"} {"text": "at these points.\nSee also:\n \"torch.linalg.lu_solve()\" solves a system of linear equations\n given the output of this function provided the input matrix was\n square and invertible.\n\n \"torch.lu_unpack()\" unpacks the tensors returned by \"lu_factor()\"\n into the three matrices *P, L, U* that form the decomposition.\n\n \"torch.linalg.lu()\" computes the LU decomposition with partial\n pivoting of a possibly non-square matrix. It is a composition of\n \"lu_factor()\" and \"torch.lu_unpack()\".\n\n \"torch.linalg.solve()\" solves a system of linear equations. It is\n a composition of \"lu_factor()\" and \"lu_solve()\".\n\nParameters:\n A (Tensor) -- tensor of shape (, m, n)* where *** is\n zero or more batch dimensions.\nKeyword Arguments:\n * pivot (bool, optional) -- Whether to compute the LU\n decomposition with partial pivoting, or the regular LU\n decomposition. \"pivot\"= False not supported on CPU. Default:\n True.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor.html", "category": "pytorch docs"} {"text": "True.\n * **out** (*tuple**, **optional*) -- tuple of two tensors to\n write the output to. Ignored if *None*. Default: *None*.\n\nReturns:\n A named tuple (LU, pivots).\nRaises:\n RuntimeError -- if the \"A\" matrix is not invertible or any\n matrix in a batched \"A\" is not invertible.\nExamples:\n >>> A = torch.randn(2, 3, 3)\n >>> B1 = torch.randn(2, 3, 4)\n >>> B2 = torch.randn(2, 3, 7)\n >>> A_factor = torch.linalg.lu_factor(A)\n >>> X1 = torch.linalg.lu_solve(A_factor, B1)\n >>> X2 = torch.linalg.lu_solve(A_factor, B2)\n >>> torch.allclose(A @ X1, B1)\n True\n >>> torch.allclose(A @ X2, B2)\n True\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.lu_factor.html", "category": "pytorch docs"} {"text": "torch.logical_not\ntorch.logical_not(input, *, out=None) -> Tensor\nComputes the element-wise logical NOT of the given input tensor. If\n not specified, the output tensor will have the bool dtype. If the\n input tensor is not a bool tensor, zeros are treated as \"False\" and\n non-zeros are treated as \"True\".\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.logical_not(torch.tensor([True, False]))\n tensor([False, True])\n >>> torch.logical_not(torch.tensor([0, 1, -10], dtype=torch.int8))\n tensor([ True, False, False])\n >>> torch.logical_not(torch.tensor([0., 1.5, -10.], dtype=torch.double))\n tensor([ True, False, False])\n >>> torch.logical_not(torch.tensor([0., 1., -10.], dtype=torch.double), out=torch.empty(3, dtype=torch.int16))\n tensor([1, 0, 0], dtype=torch.int16)\n", "source": "https://pytorch.org/docs/stable/generated/torch.logical_not.html", "category": "pytorch docs"} {"text": "LazyConvTranspose3d\nclass torch.nn.LazyConvTranspose3d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\nA \"torch.nn.ConvTranspose3d\" module with lazy initialization of the\n \"in_channels\" argument of the \"ConvTranspose3d\" that is inferred\n from the \"input.size(1)\". 
The attributes that will be lazily\n initialized are weight and bias.\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * out_channels (int) -- Number of channels produced by the\n convolution\n * **kernel_size** (*int** or **tuple*) -- Size of the convolving\n kernel\n\n * **stride** (*int** or **tuple**, **optional*) -- Stride of the\n convolution. Default: 1\n\n * **padding** (*int** or **tuple**, **optional*) -- \"dilation *\n (kernel_size - 1) - padding\" zero-padding will be added to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose3d.html", "category": "pytorch docs"} {"text": "both sides of each dimension in the input. Default: 0\n * **output_padding** (*int** or **tuple**, **optional*) --\n Additional size added to one side of each dimension in the\n output shape. Default: 0\n\n * **groups** (*int**, **optional*) -- Number of blocked\n connections from input channels to output channels. Default: 1\n\n * **bias** (*bool**, **optional*) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n\n * **dilation** (*int** or **tuple**, **optional*) -- Spacing\n between kernel elements. Default: 1\n\nSee also:\n \"torch.nn.ConvTranspose3d\" and\n \"torch.nn.modules.lazy.LazyModuleMixin\"\n\ncls_to_become\n alias of \"ConvTranspose3d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.logit\nTensor.logit() -> Tensor\nSee \"torch.logit()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logit.html", "category": "pytorch docs"} {"text": "torch.nn.functional.hardtanh_\ntorch.nn.functional.hardtanh_(input, min_val=- 1., max_val=1.) -> Tensor\nIn-place version of \"hardtanh()\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.hardtanh_.html", "category": "pytorch docs"} {"text": "torch.cuda.reset_max_memory_allocated\ntorch.cuda.reset_max_memory_allocated(device=None)\nResets the starting point in tracking maximum GPU memory occupied\n by tensors for a given device.\nSee \"max_memory_allocated()\" for details.\nParameters:\n device (torch.device or int, optional) -- selected\n device. 
Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\nWarning:\n This function now calls \"reset_peak_memory_stats()\", which resets\n /all/ peak memory stats.\n\nNote:\n See Memory management for more details about GPU memory\n management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.reset_max_memory_allocated.html", "category": "pytorch docs"} {"text": "torch.Tensor.dense_dim\nTensor.dense_dim() -> int\nReturn the number of dense dimensions in a sparse tensor \"self\".\nNote:\n Returns \"len(self.shape)\" if \"self\" is not a sparse tensor.\n\nSee also \"Tensor.sparse_dim()\" and hybrid tensors.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.dense_dim.html", "category": "pytorch docs"} {"text": "torch.Tensor.expm1_\nTensor.expm1_() -> Tensor\nIn-place version of \"expm1()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.expm1_.html", "category": "pytorch docs"} {"text": "torch.cuda.initial_seed\ntorch.cuda.initial_seed()\nReturns the current random seed of the current GPU.\nWarning:\n This function eagerly initializes CUDA.\n\nReturn type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.initial_seed.html", "category": "pytorch docs"} {"text": "torch.Tensor.pow_\nTensor.pow_(exponent) -> Tensor\nIn-place version of \"pow()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.pow_.html", "category": "pytorch docs"} {"text": "PruningContainer\nclass torch.nn.utils.prune.PruningContainer(*args)\nContainer holding a sequence of pruning methods for iterative\n pruning. Keeps track of the order in which pruning methods are\n applied and handles combining successive pruning calls.\nAccepts as argument an instance of a BasePruningMethod or an\n iterable of them.\nadd_pruning_method(method)\n Adds a child pruning \"method\" to the container.\n\n Parameters:\n **method** (*subclass of BasePruningMethod*) -- child pruning\n method to be added to the container.\n\nclassmethod apply(module, name, args, importance_scores=None, *kwargs)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n\n Parameters:\n * **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n * **name** (*str*) -- parameter name within \"module\" on which\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.PruningContainer.html", "category": "pytorch docs"} {"text": "pruning will act.\n * **args** -- arguments passed on to a subclass of\n \"BasePruningMethod\"\n\n * **importance_scores** (*torch.Tensor*) -- tensor of\n importance scores (of same shape as module parameter) used\n to compute mask for pruning. The values in this tensor\n indicate the importance of the corresponding elements in\n the parameter being pruned. If unspecified or None, the\n parameter will be used in its place.\n\n * **kwargs** -- keyword arguments passed on to a subclass of\n a \"BasePruningMethod\"\n\napply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. 
Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n\n Parameters:\n **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n Returns:\n pruned version of the input tensor\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.PruningContainer.html", "category": "pytorch docs"} {"text": "pruned version of the input tensor\n Return type:\n pruned_tensor (torch.Tensor)\n\ncompute_mask(t, default_mask)\n Applies the latest \"method\" by computing the new partial masks\n and returning its combination with the \"default_mask\". The new\n partial mask should be computed on the entries or channels that\n were not zeroed out by the \"default_mask\". Which portions of the\n tensor \"t\" the new mask will be calculated from depends on the\n \"PRUNING_TYPE\" (handled by the type handler):\n\n * for 'unstructured', the mask will be computed from the raveled\n list of nonmasked entries;\n\n * for 'structured', the mask will be computed from the nonmasked\n channels in the tensor;\n\n * for 'global', the mask will be computed across all entries.\n\n Parameters:\n * **t** (*torch.Tensor*) -- tensor representing the parameter\n to prune (of same dimensions as \"default_mask\").\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.PruningContainer.html", "category": "pytorch docs"} {"text": "\n\ndefault_mask (torch.Tensor) -- mask from previous\n pruning iteration.\nReturns:\n new mask that combines the effects of the \"default_mask\" and\n the new mask from the current pruning \"method\" (of same\n dimensions as \"default_mask\" and \"t\").\nReturn type:\n mask (torch.Tensor)\n\n\nprune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n\n Parameters:\n * **t** (*torch.Tensor*) -- tensor to prune (of same\n dimensions as \"default_mask\").\n\n * **importance_scores** (*torch.Tensor*) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.PruningContainer.html", "category": "pytorch docs"} {"text": "\"t\" will be used in its place.\n * **default_mask** (*torch.Tensor**, **optional*) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n\n Returns:\n pruned version of tensor \"t\".\n\nremove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n list. 
Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n\n Note:\n\n Pruning itself is NOT undone or reversed!\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.PruningContainer.html", "category": "pytorch docs"} {"text": "torch.permute\ntorch.permute(input, dims) -> Tensor\nReturns a view of the original tensor \"input\" with its dimensions\n permuted.\nParameters:\n * input (Tensor) -- the input tensor.\n * **dims** (*tuple of python:int*) -- The desired ordering of\n dimensions\n\n-[ Example ]-\n\n\n\nx = torch.randn(2, 3, 5)\nx.size()\n torch.Size([2, 3, 5])\ntorch.permute(x, (2, 0, 1)).size()\n torch.Size([5, 2, 3])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.permute.html", "category": "pytorch docs"} {"text": "torch.Tensor.le_\nTensor.le_(other) -> Tensor\nIn-place version of \"le()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.le_.html", "category": "pytorch docs"} {"text": "torch.movedim\ntorch.movedim(input, source, destination) -> Tensor\nMoves the dimension(s) of \"input\" at the position(s) in \"source\" to\n the position(s) in \"destination\".\nOther dimensions of \"input\" that are not explicitly moved remain in\n their original order and appear at the positions not specified in\n \"destination\".\nParameters:\n * input (Tensor) -- the input tensor.\n * **source** (*int** or **tuple of ints*) -- Original positions\n of the dims to move. These must be unique.\n\n * **destination** (*int** or **tuple of ints*) -- Destination\n positions for each of the original dims. These must also be\n unique.\n\nExamples:\n >>> t = torch.randn(3,2,1)\n >>> t\n tensor([[[-0.3362],\n [-0.8437]],\n\n [[-0.9627],\n [ 0.1727]],\n\n [[ 0.5173],\n [-0.1398]]])\n >>> torch.movedim(t, 1, 0).shape\n torch.Size([2, 3, 1])\n >>> torch.movedim(t, 1, 0)\n", "source": "https://pytorch.org/docs/stable/generated/torch.movedim.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.movedim(t, 1, 0)\n tensor([[[-0.3362],\n [-0.9627],\n [ 0.5173]],\n\n\n\n [[-0.8437],\n [ 0.1727],\n [-0.1398]]])\n >>> torch.movedim(t, (1, 2), (0, 1)).shape\n torch.Size([2, 1, 3])\n >>> torch.movedim(t, (1, 2), (0, 1))\n tensor([[[-0.3362, -0.9627, 0.5173]],\n\n [[-0.8437, 0.1727, -0.1398]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.movedim.html", "category": "pytorch docs"} {"text": "CustomFromMask\nclass torch.nn.utils.prune.CustomFromMask(mask)\nclassmethod apply(module, name, mask)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n\n Parameters:\n * **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\napply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. 
Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n\n Parameters:\n **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n Returns:\n pruned version of the input tensor\n\n Return type:\n pruned_tensor (torch.Tensor)\n\nprune(t, default_mask=None, importance_scores=None)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.CustomFromMask.html", "category": "pytorch docs"} {"text": "Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n Parameters:\n * **t** (*torch.Tensor*) -- tensor to prune (of same\n dimensions as \"default_mask\").\n\n * **importance_scores** (*torch.Tensor*) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n\n * **default_mask** (*torch.Tensor**, **optional*) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n\n Returns:\n pruned version of tensor \"t\".\n\nremove(module)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.CustomFromMask.html", "category": "pytorch docs"} {"text": "remove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n\n Note:\n\n Pruning itself is NOT undone or reversed!\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.CustomFromMask.html", "category": "pytorch docs"} {"text": "torch.foreach_expm1\ntorch.foreach_expm1(self: List[Tensor]) -> None\nApply \"torch.expm1()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_expm1_.html", "category": "pytorch docs"} {"text": "torch.Tensor.greater\nTensor.greater(other) -> Tensor\nSee \"torch.greater()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.greater.html", "category": "pytorch docs"} {"text": "torch.linalg.eigvalsh\ntorch.linalg.eigvalsh(A, UPLO='L', *, out=None) -> Tensor\nComputes the eigenvalues of a complex Hermitian or real symmetric\n matrix.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, the eigenvalues\n of a complex Hermitian or real symmetric matrix A \\in\n \\mathbb{K}^{n \\times n} are defined as the roots (counted with\n multiplicity) of the polynomial p of degree n given by\n p(\\lambda) = \\operatorname{det}(A - \\lambda\n \\mathrm{I}_n)\\mathrlap{\\qquad \\lambda \\in \\mathbb{R}}\n\nwhere \\mathrm{I}_n is the n-dimensional identity matrix. The\n eigenvalues of a real symmetric or complex Hermitian matrix are\n always real.\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nThe eigenvalues are returned in ascending order.\n\"A\" is assumed to be Hermitian (resp. 
symmetric), but this is not", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigvalsh.html", "category": "pytorch docs"} {"text": "checked internally, instead:\n\n\nIf \"UPLO\"= 'L' (default), only the lower triangular part of the\n matrix is used in the computation.\n\n\nIf \"UPLO\"= 'U', only the upper triangular part of the matrix is\n used.\n\n\nNote:\n When inputs are on a CUDA device, this function synchronizes that\n device with the CPU.\n\nSee also:\n \"torch.linalg.eigh()\" computes the full eigenvalue decomposition.\n\nParameters:\n * A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions consisting of symmetric or\n Hermitian matrices.\n * **UPLO** (*'L'**, **'U'**, **optional*) -- controls whether to\n use the upper or lower triangular part of \"A\" in the\n computations. Default: *'L'*.\n\nKeyword Arguments:\n out (Tensor, optional) -- output tensor. Ignored if\n None. Default: None.\nReturns:\n A real-valued tensor containing the eigenvalues even when \"A\" is", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigvalsh.html", "category": "pytorch docs"} {"text": "complex. The eigenvalues are returned in ascending order.\nExamples:\n >>> A = torch.randn(2, 2, dtype=torch.complex128)\n >>> A = A + A.T.conj() # creates a Hermitian matrix\n >>> A\n tensor([[2.9228+0.0000j, 0.2029-0.0862j],\n [0.2029+0.0862j, 0.3464+0.0000j]], dtype=torch.complex128)\n >>> torch.linalg.eigvalsh(A)\n tensor([0.3277, 2.9415], dtype=torch.float64)\n\n >>> A = torch.randn(3, 2, 2, dtype=torch.float64)\n >>> A = A + A.mT # creates a batch of symmetric matrices\n >>> torch.linalg.eigvalsh(A)\n tensor([[ 2.5797, 3.4629],\n [-4.1605, 1.3780],\n [-3.1113, 2.7381]], dtype=torch.float64)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.eigvalsh.html", "category": "pytorch docs"} {"text": "torch.nn.functional.adaptive_max_pool3d\ntorch.nn.functional.adaptive_max_pool3d(args, *kwargs)\nApplies a 3D adaptive max pooling over an input signal composed of\n several input planes.\nSee \"AdaptiveMaxPool3d\" for details and output shape.\nParameters:\n * output_size -- the target output size (single integer or\n triple-integer tuple)\n * **return_indices** -- whether to return pooling indices.\n Default: \"False\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.adaptive_max_pool3d.html", "category": "pytorch docs"} {"text": "torch.mv\ntorch.mv(input, vec, *, out=None) -> Tensor\nPerforms a matrix-vector product of the matrix \"input\" and the\n vector \"vec\".\nIf \"input\" is a (n \\times m) tensor, \"vec\" is a 1-D tensor of size\n m, \"out\" will be 1-D of size n.\nNote:\n This function does not broadcast.\n\nParameters:\n * input (Tensor) -- matrix to be multiplied\n * **vec** (*Tensor*) -- vector to be multiplied\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> mat = torch.randn(2, 3)\n >>> vec = torch.randn(3)\n >>> torch.mv(mat, vec)\n tensor([ 1.0404, -0.6361])\n", "source": "https://pytorch.org/docs/stable/generated/torch.mv.html", "category": "pytorch docs"} {"text": "torch.Tensor.median\nTensor.median(dim=None, keepdim=False)\nSee \"torch.median()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.median.html", "category": "pytorch docs"} {"text": "default_qat_qconfig\ntorch.quantization.qconfig.default_qat_qconfig\nalias of QConfig(activation=functools.partial(,\n observer=,\n quant_min=0, quant_max=255, dtype=torch.quint8,\n 
qscheme=torch.per_tensor_affine, reduce_range=True){},\n weight=functools.partial(,\n observer=,\n quant_min=-128, quant_max=127, dtype=torch.qint8,\n qscheme=torch.per_tensor_symmetric, reduce_range=False){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_qat_qconfig.html", "category": "pytorch docs"} {"text": "torch.Tensor.set_\nTensor.set_(source=None, storage_offset=0, size=None, stride=None) -> Tensor\nSets the underlying storage, size, and strides. If \"source\" is a\n tensor, \"self\" tensor will share the same storage and have the same\n size and strides as \"source\". Changes to elements in one tensor\n will be reflected in the other.\nIf \"source\" is a \"Storage\", the method sets the underlying storage,\n offset, size, and stride.\nParameters:\n * source (Tensor or Storage) -- the tensor or storage\n to use\n * **storage_offset** (*int**, **optional*) -- the offset in the\n storage\n\n * **size** (*torch.Size**, **optional*) -- the desired size.\n Defaults to the size of the source.\n\n * **stride** (*tuple**, **optional*) -- the desired stride.\n Defaults to C-contiguous strides.\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.set_.html", "category": "pytorch docs"} {"text": "torch.amax\ntorch.amax(input, dim, keepdim=False, *, out=None) -> Tensor\nReturns the maximum value of each slice of the \"input\" tensor in\n the given dimension(s) \"dim\".\nNote:\n The difference between \"max\"/\"min\" and \"amax\"/\"amin\" is:\n * \"amax\"/\"amin\" supports reducing on multiple dimensions,\n\n * \"amax\"/\"amin\" does not return indices,\n\n * \"amax\"/\"amin\" evenly distributes gradient between equal\n values, while \"max(dim)\"/\"min(dim)\" propagates gradient only\n to a single index in the source tensor.\n\nIf \"keepdim\" is \"True\", the output tensor is of the same size as\n \"input\" except in the dimension(s) \"dim\" where it is of size 1.\n Otherwise, \"dim\" is squeezed (see \"torch.squeeze()\"), resulting in\n the output tensor having 1 (or \"len(dim)\") fewer dimension(s).\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints*) -- the dimension or\n dimensions to reduce.\n", "source": "https://pytorch.org/docs/stable/generated/torch.amax.html", "category": "pytorch docs"} {"text": "dimensions to reduce.\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4, 4)\n >>> a\n tensor([[ 0.8177, 1.4878, -0.2491, 0.9130],\n [-0.7158, 1.1775, 2.0992, 0.4817],\n [-0.0053, 0.0164, -1.3738, -0.0507],\n [ 1.9700, 1.1106, -1.0318, -1.0816]])\n >>> torch.amax(a, 1)\n tensor([1.4878, 2.0992, 0.0164, 1.9700])\n", "source": "https://pytorch.org/docs/stable/generated/torch.amax.html", "category": "pytorch docs"} {"text": "torch.cuda.manual_seed\ntorch.cuda.manual_seed(seed)\nSets the seed for generating random numbers for the current GPU.\n It's safe to call this function if CUDA is not available; in that\n case, it is silently ignored.\nParameters:\n seed (int) -- The desired seed.\nWarning:\n If you are working with a multi-GPU model, this function is\n insufficient to get determinism. 
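A short usage sketch; both calls are silently ignored when CUDA is unavailable, so this is safe to run on any machine:

    >>> import torch
    >>> torch.cuda.manual_seed(42)      # seeds the current GPU only
    >>> torch.cuda.manual_seed_all(42)  # seeds every visible GPU, needed for multi-GPU determinism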
To seed all GPUs, use\n \"manual_seed_all()\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.manual_seed.html", "category": "pytorch docs"} {"text": "torch.lobpcg\ntorch.lobpcg(A, k=None, B=None, X=None, n=None, iK=None, niter=None, tol=None, largest=None, method=None, tracker=None, ortho_iparams=None, ortho_fparams=None, ortho_bparams=None)\nFind the k largest (or smallest) eigenvalues and the corresponding\n eigenvectors of a symmetric positive definite generalized\n eigenvalue problem using matrix-free LOBPCG methods.\nThis function is a front-end to the following LOBPCG algorithms\n selectable via method argument:\n *method=\"basic\"* - the LOBPCG method introduced by Andrew\n Knyazev, see [Knyazev2001]. A less robust method, may fail when\n Cholesky is applied to singular input.\n\n *method=\"ortho\"* - the LOBPCG method with orthogonal basis\n selection [StathopoulosEtal2002]. A robust method.\n\nSupported inputs are dense, sparse, and batches of dense matrices.\nNote:\n In general, the basic method spends least time per iteration.\n However, the robust methods converge much faster and are more\n", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"} {"text": "stable. So, the usage of the basic method is generally not\n recommended but there exist cases where the usage of the basic\n method may be preferred.\nWarning:\n The backward method does not support sparse and complex inputs.\n It works only when *B* is not provided (i.e. *B == None*). We are\n actively working on extensions, and the details of the algorithms\n are going to be published promptly.\n\nWarning:\n While it is assumed that *A* is symmetric, *A.grad* is not. To\n make sure that *A.grad* is symmetric, so that *A - t * A.grad* is\n symmetric in first-order optimization routines, prior to running\n *lobpcg* we do the following symmetrization map: *A -> (A +\n A.t()) / 2*. The map is performed only when the *A* requires\n gradients.\n\nParameters:\n * A (Tensor) -- the input tensor of size (*, m, m)\n * **B** (*Tensor**, **optional*) -- the input tensor of size (*,\n m, m). When not specified, *B* is interpreted as identity\n", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"} {"text": "matrix.\n * **X** (*tensor**, **optional*) -- the input tensor of size (*,\n m, n) where *k <= n <= m*. When specified, it is used as\n initial approximation of eigenvectors. X must be a dense\n tensor.\n\n * **iK** (*tensor**, **optional*) -- the input tensor of size\n (*, m, m). When specified, it will be used as preconditioner.\n\n * **k** (*integer**, **optional*) -- the number of requested\n eigenpairs. Default is the number of X columns (when\n specified) or *1*.\n\n * **n** (*integer**, **optional*) -- if X is not specified then\n *n* specifies the size of the generated random approximation\n of eigenvectors. Default value for *n* is *k*. If X is\n specified, the value of *n* (when specified) must be the\n number of X columns.\n\n * **tol** (*float**, **optional*) -- residual tolerance for\n stopping criterion. Default is *feps ** 0.5* where *feps* is\n", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"} {"text": "smallest non-zero floating-point number of the given input\n tensor A data type.\n * **largest** (*bool**, **optional*) -- when True, solve the\n eigenproblem for the largest eigenvalues. Otherwise, solve the\n eigenproblem for smallest eigenvalues. 
Default is *True*.\n\n * **method** (*str**, **optional*) -- select LOBPCG method. See\n the description of the function above. Default is \"ortho\".\n\n * **niter** (*int**, **optional*) -- maximum number of\n iterations. When reached, the iteration process is hard-\n stopped and the current approximation of eigenpairs is\n returned. For infinite iteration but until convergence\n criteria is met, use *-1*.\n\n * **tracker** (*callable**, **optional*) --\n\n a function for tracing the iteration process. When specified,\n it is called at each iteration step with LOBPCG instance as an\n argument. The LOBPCG instance holds the full state of the\n", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"} {"text": "iteration process in the following attributes:\n *iparams*, *fparams*, *bparams* - dictionaries of integer,\n float, and boolean valued input parameters, respectively\n\n *ivars*, *fvars*, *bvars*, *tvars* - dictionaries of\n integer, float, boolean, and Tensor valued iteration\n variables, respectively.\n\n *A*, *B*, *iK* - input Tensor arguments.\n\n *E*, *X*, *S*, *R* - iteration Tensor variables.\n\n For instance:\n\n *ivars[\"istep\"]* - the current iteration step *X* - the\n current approximation of eigenvectors *E* - the current\n approximation of eigenvalues *R* - the current residual\n *ivars[\"converged_count\"]* - the current number of\n converged eigenpairs *tvars[\"rerr\"]* - the current state of\n convergence criteria\n\n Note that when *tracker* stores Tensor objects from the LOBPCG\n instance, it must make copies of these.\n", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"} {"text": "instance, it must make copies of these.\n If *tracker* sets *bvars[\"force_stop\"] = True*, the iteration\n process will be hard-stopped.\n\n * **ortho_iparams** (*dict**, **optional*) -- various parameters\n to LOBPCG algorithm when using *method=\"ortho\"*.\n\n * **ortho_fparams** (*dict**, **optional*) -- various parameters\n to LOBPCG algorithm when using *method=\"ortho\"*.\n\n * **ortho_bparams** (*dict**, **optional*) -- various parameters\n to LOBPCG algorithm when using *method=\"ortho\"*.\n\nReturns:\n tensor of eigenvalues of size (*, k)\n X (Tensor): tensor of eigenvectors of size (*, m, k)\n\nReturn type:\n E (Tensor)\n-[ References ]-\n[Knyazev2001] Andrew V. Knyazev. (2001) Toward the Optimal\n Preconditioned Eigensolver: Locally Optimal Block Preconditioned\n Conjugate Gradient Method. SIAM J. Sci. Comput., 23(2), 517-541.\n (25 pages) https://epubs.siam.org/doi/abs/10.1137/S1064827500366124", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"} {"text": "[StathopoulosEtal2002] Andreas Stathopoulos and Kesheng Wu. (2002)\n A Block Orthogonalization Procedure with Constant Synchronization\n Requirements. SIAM J. Sci. Comput., 23(6), 2165-2182. (18 pages)\n https://epubs.siam.org/doi/10.1137/S1064827500370883\n[DuerschEtal2018] Jed A. Duersch, Meiyue Shao, Chao Yang, Ming Gu.\n (2018) A Robust and Efficient Implementation of LOBPCG. SIAM J.\n Sci. Comput., 40(5), C655-C676. 
(22 pages)\n https://epubs.siam.org/doi/abs/10.1137/17M1129830", "source": "https://pytorch.org/docs/stable/generated/torch.lobpcg.html", "category": "pytorch docs"} {"text": "torch.Tensor.movedim\nTensor.movedim(source, destination) -> Tensor\nSee \"torch.movedim()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.movedim.html", "category": "pytorch docs"} {"text": "torch.signal.windows.general_hamming\ntorch.signal.windows.general_hamming(M, *, alpha=0.54, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\nComputes the general Hamming window.\nThe general Hamming window is defined as follows:\n w_n = \\alpha - (1 - \\alpha) \\cos{ \\left( \\frac{2 \\pi n}{M-1}\n \\right)}\n\nThe window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\nParameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\nKeyword Arguments:\n * alpha (float, optional) -- the window coefficient.\n Default: 0.54.\n * **sym** (*bool**, **optional*) -- If *False*, returns a\n periodic window suitable for use in spectral analysis. If\n *True*, returns a symmetric window suitable for use in filter\n design. Default: *True*.\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.general_hamming.html", "category": "pytorch docs"} {"text": "design. Default: True.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nReturn type:\n Tensor\nExamples:\n >>> # Generates a symmetric Hamming window with the general Hamming window.\n >>> torch.signal.windows.general_hamming(10, sym=True)\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.general_hamming.html", "category": "pytorch docs"} {"text": "tensor([0.0800, 0.1876, 0.4601, 0.7700, 0.9723, 0.9723, 0.7700, 0.4601, 0.1876, 0.0800])\n >>> # Generates a periodic Hann window with the general Hamming window.\n >>> torch.signal.windows.general_hamming(10, alpha=0.5, sym=False)\n tensor([0.0000, 0.0955, 0.3455, 0.6545, 0.9045, 1.0000, 0.9045, 0.6545, 0.3455, 0.0955])\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.general_hamming.html", "category": "pytorch docs"} {"text": "torch.Tensor.arctanh\nTensor.arctanh() -> Tensor\nSee \"torch.arctanh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arctanh.html", "category": "pytorch docs"} {"text": "torch.Tensor.less_equal_\nTensor.less_equal_(other) -> Tensor\nIn-place version of \"less_equal()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.less_equal_.html", "category": "pytorch docs"} {"text": "torch.Tensor.lu_solve\nTensor.lu_solve(LU_data, LU_pivots) -> Tensor\nSee \"torch.lu_solve()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lu_solve.html", "category": "pytorch docs"} {"text": "torch.addcdiv\ntorch.addcdiv(input, tensor1, tensor2, *, value=1, out=None) -> Tensor\nPerforms the element-wise division of \"tensor1\" by \"tensor2\",\n multiplies the result by the scalar \"value\" and adds it to \"input\".\nWarning:\n Integer division with addcdiv is no longer supported, and in a\n future release addcdiv will perform a true division of tensor1\n and tensor2. The historic addcdiv behavior can be implemented as\n (input + value * torch.trunc(tensor1 / tensor2)).to(input.dtype)\n for integer inputs and as (input + value * tensor1 / tensor2) for\n float inputs. 
The future addcdiv behavior is just the latter\n implementation: (input + value * tensor1 / tensor2), for all\n dtypes.\n\n \\text{out}_i = \\text{input}_i + \\text{value} \\times\n \\frac{\\text{tensor1}_i}{\\text{tensor2}_i}\n\nThe shapes of \"input\", \"tensor1\", and \"tensor2\" must be\n broadcastable.\nFor inputs of type FloatTensor or DoubleTensor, \"value\" must be", "source": "https://pytorch.org/docs/stable/generated/torch.addcdiv.html", "category": "pytorch docs"} {"text": "a real number, otherwise an integer.\nParameters:\n * input (Tensor) -- the tensor to be added\n * **tensor1** (*Tensor*) -- the numerator tensor\n\n * **tensor2** (*Tensor*) -- the denominator tensor\n\nKeyword Arguments:\n * value (Number, optional) -- multiplier for\n \\text{tensor1} / \\text{tensor2}\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> t = torch.randn(1, 3)\n >>> t1 = torch.randn(3, 1)\n >>> t2 = torch.randn(1, 3)\n >>> torch.addcdiv(t, t1, t2, value=0.1)\n tensor([[-0.2312, -3.6496, 0.1312],\n [-1.0428, 3.4292, -0.1030],\n [-0.5369, -0.9829, 0.0430]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.addcdiv.html", "category": "pytorch docs"} {"text": "VerificationOptions\nclass torch.onnx.verification.VerificationOptions(flatten=True, ignore_none=True, check_shape=True, check_dtype=True, backend=OnnxBackend.ONNX_RUNTIME_CPU, rtol=0.001, atol=1e-07, remained_onnx_input_idx=None, acceptable_error_percentage=None)\nOptions for ONNX export verification.\nVariables:\n * flatten (bool) -- If True, unpack nested list/tuple/dict\n inputs into a flattened list of Tensors for ONNX. Set this to\n False if nested structures are to be preserved for ONNX, which\n is usually the case with exporting ScriptModules. Default\n True.\n * **ignore_none** (*bool*) -- Whether to ignore None type in\n torch output, which is usually the case with tracing. Set this\n to False, if torch output should keep None type, which is\n usually the case with exporting ScriptModules. Default to\n True.\n\n * **check_shape** (*bool*) -- Whether to check the shapes\n", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.VerificationOptions.html", "category": "pytorch docs"} {"text": "between PyTorch and ONNX Runtime outputs are exactly the same.\n Set this to False to allow output shape broadcasting. Default\n to True.\n * **check_dtype** (*bool*) -- Whether to check the dtypes\n between PyTorch and ONNX Runtime outputs are consistent.\n Default to True.\n\n * **backend** (*torch.onnx.verification.OnnxBackend*) -- ONNX\n backend for verification. Default to\n OnnxBackend.ONNX_RUNTIME_CPU.\n\n * **rtol** (*float*) -- relative tolerance in comparison between\n ONNX and PyTorch outputs.\n\n * **atol** (*float*) -- absolute tolerance in comparison between\n ONNX and PyTorch outputs.\n\n * **remained_onnx_input_idx**\n (*Optional**[**Sequence**[**int**]**]*) -- If provided, only\n the specified inputs will be passed to the ONNX model. Supply\n a list when there are unused inputs in the model. Since unused\n inputs will be removed in the exported ONNX model, supplying\n", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.VerificationOptions.html", "category": "pytorch docs"} {"text": "all inputs will cause an error on unexpected inputs. This\n parameter tells the verifier which inputs to pass into the\n ONNX model.\n * **acceptable_error_percentage** (*Optional**[**float**]*) --\n acceptable percentage of element mismatches in comparison. 
It\n should be a float of value between 0.0 and 1.0.\n", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.VerificationOptions.html", "category": "pytorch docs"} {"text": "torch.foreach_floor\ntorch.foreach_floor(self: List[Tensor]) -> None\nApply \"torch.floor()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_floor_.html", "category": "pytorch docs"} {"text": "torch.Tensor.true_divide\nTensor.true_divide(value) -> Tensor\nSee \"torch.true_divide()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.true_divide.html", "category": "pytorch docs"} {"text": "torch.Tensor.isinf\nTensor.isinf() -> Tensor\nSee \"torch.isinf()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isinf.html", "category": "pytorch docs"} {"text": "torch.sqrt\ntorch.sqrt(input, *, out=None) -> Tensor\nReturns a new tensor with the square-root of the elements of\n \"input\".\n \\text{out}_{i} = \\sqrt{\\text{input}_{i}}\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([-2.0755, 1.0226, 0.0831, 0.4806])\n >>> torch.sqrt(a)\n tensor([ nan, 1.0112, 0.2883, 0.6933])\n", "source": "https://pytorch.org/docs/stable/generated/torch.sqrt.html", "category": "pytorch docs"} {"text": "torch.func.stack_module_state\ntorch.func.stack_module_state(models) -> params, buffers\nPrepares a list of torch.nn.Modules for ensembling with \"vmap()\".\nGiven a list of \"M\" \"nn.Modules\" of the same class, returns two\n dictionaries that stack all of their parameters and buffers\n together, indexed by name. The stacked parameters are optimizable\n (i.e. they are new leaf nodes in the autograd history that are\n unrelated to the original parameters and can be passed directly to\n an optimizer).\nHere's an example of how to ensemble over a very simple model:\n num_models = 5\n batch_size = 64\n in_features, out_features = 3, 3\n models = [torch.nn.Linear(in_features, out_features) for i in range(num_models)]\n data = torch.randn(batch_size, 3)\n\n def wrapper(params, buffers, data):\n return torch.func.functional_call(model[0], (params, buffers), data)\n\n params, buffers = stack_module_state(models)\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.stack_module_state.html", "category": "pytorch docs"} {"text": "params, buffers = stack_module_state(models)\n output = vmap(wrapper, (0, 0, None))(params, buffers, data)\n assert output.shape == (num_models, batch_size, out_features)\n\nWhen there's submodules, this follows state dict naming conventions\n import torch.nn as nn\n class Foo(nn.Module):\n def __init__(self, in_features, out_features):\n super().__init__()\n hidden = 4\n self.l1 = nn.Linear(in_features, hidden)\n self.l2 = nn.Linear(hidden, out_features)\n\n def forward(self, x):\n return self.l2(self.l1(x))\n\n num_models = 5\n in_features, out_features = 3, 3\n models = [Foo(in_features, out_features) for i in range(num_models)]\n params, buffers = stack_module_state(models)\n print(list(params.keys())) # \"l1.weight\", \"l1.bias\", \"l2.weight\", \"l2.bias\"\n\nWarning:\n All of the modules being stacked together must be the same\n", "source": "https://pytorch.org/docs/stable/generated/torch.func.stack_module_state.html", "category": "pytorch docs"} {"text": "(except for the values of their parameters/buffers). 
For example,\n they should be in the same mode (training vs eval).\nReturn type:\n Tuple[Dict[str, Any], Dict[str, Any]]", "source": "https://pytorch.org/docs/stable/generated/torch.func.stack_module_state.html", "category": "pytorch docs"} {"text": "swap_module\nclass torch.quantization.swap_module(mod, mapping, custom_module_class_mapping)\nSwaps the module if it has a quantized counterpart and it has an\n observer attached.\nParameters:\n * mod -- input module\n * **mapping** -- a dictionary that maps from nn module to nnq\n module\n\nReturns:\n The corresponding quantized module of mod", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.swap_module.html", "category": "pytorch docs"} {"text": "ConvReLU2d\nclass torch.ao.nn.intrinsic.quantized.ConvReLU2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\nA ConvReLU2d module is a fused module of Conv2d and ReLU\nWe adopt the same interface as \"torch.ao.nn.quantized.Conv2d\".\nVariables:\n torch.ao.nn.quantized.Conv2d (Same as) --", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.quantized.ConvReLU2d.html", "category": "pytorch docs"} {"text": "torch.narrow\ntorch.narrow(input, dim, start, length) -> Tensor\nReturns a new tensor that is a narrowed version of \"input\" tensor.\n The dimension \"dim\" is input from \"start\" to \"start + length\". The\n returned tensor and \"input\" tensor share the same underlying\n storage.\nParameters:\n * input (Tensor) -- the tensor to narrow\n * **dim** (*int*) -- the dimension along which to narrow\n\n * **start** (*int** or **Tensor*) -- index of the element to\n start the narrowed dimension from. Can be negative, which\n means indexing from the end of *dim*. If *Tensor*, it must be\n an 0-dim integral *Tensor* (bools not allowed)\n\n * **length** (*int*) -- length of the narrowed dimension, must\n be weakly positive\n\nExample:\n >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\n >>> torch.narrow(x, 0, 0, 2)\n tensor([[ 1, 2, 3],\n [ 4, 5, 6]])\n >>> torch.narrow(x, 1, 1, 2)\n tensor([[ 2, 3],\n", "source": "https://pytorch.org/docs/stable/generated/torch.narrow.html", "category": "pytorch docs"} {"text": "tensor([[ 2, 3],\n [ 5, 6],\n [ 8, 9]])\n >>> torch.narrow(x, -1, torch.tensor(-1), 1)\n tensor([[3],\n [6],\n [9]])", "source": "https://pytorch.org/docs/stable/generated/torch.narrow.html", "category": "pytorch docs"} {"text": "float16_static_qconfig\ntorch.quantization.qconfig.float16_static_qconfig\nalias of QConfig(activation=functools.partial(,\n dtype=torch.float16){}, weight=functools.partial(,\n dtype=torch.float16){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.float16_static_qconfig.html", "category": "pytorch docs"} {"text": "torch.nn.functional.dropout2d\ntorch.nn.functional.dropout2d(input, p=0.5, training=True, inplace=False)\nRandomly zero out entire channels (a channel is a 2D feature map,\n e.g., the j-th channel of the i-th sample in the batched input is a\n 2D tensor \\text{input}[i, j]) of the input tensor). Each channel\n will be zeroed out independently on every forward call with\n probability \"p\" using samples from a Bernoulli distribution.\nSee \"Dropout2d\" for details.\nParameters:\n * p (float) -- probability of a channel to be zeroed.\n Default: 0.5\n * **training** (*bool*) -- apply dropout if is \"True\". 
Default:\n \"True\"\n\n * **inplace** (*bool*) -- If set to \"True\", will do this\n operation in-place. Default: \"False\"\n\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.dropout2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.cummax\nTensor.cummax(dim)\nSee \"torch.cummax()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cummax.html", "category": "pytorch docs"} {"text": "torch.nn.functional.upsample_bilinear\ntorch.nn.functional.upsample_bilinear(input, size=None, scale_factor=None)\nUpsamples the input, using bilinear upsampling.\nWarning:\n This function is deprecated in favor of\n \"torch.nn.functional.interpolate()\". This is equivalent with\n \"nn.functional.interpolate(..., mode='bilinear',\n align_corners=True)\".\n\nExpected inputs are spatial (4 dimensional). Use\n upsample_trilinear for volumetric (5 dimensional) inputs.\nParameters:\n * input (Tensor) -- input\n * **size** (*int** or **Tuple**[**int**, **int**]*) -- output\n spatial size.\n\n * **scale_factor** (*int** or **Tuple**[**int**, **int**]*) --\n multiplier for spatial size\n\nNote:\n This operation may produce nondeterministic gradients when given\n tensors on a CUDA device. See Reproducibility for more\n information.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.upsample_bilinear.html", "category": "pytorch docs"} {"text": "torch.reciprocal\ntorch.reciprocal(input, *, out=None) -> Tensor\nReturns a new tensor with the reciprocal of the elements of \"input\"\n \\text{out}_{i} = \\frac{1}{\\text{input}_{i}}\n\nNote:\n Unlike NumPy's reciprocal, torch.reciprocal supports integral\n inputs. Integral inputs to reciprocal are automatically promoted\n to the default scalar type.\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([-0.4595, -2.1219, -1.4314, 0.7298])\n >>> torch.reciprocal(a)\n tensor([-2.1763, -0.4713, -0.6986, 1.3702])\n", "source": "https://pytorch.org/docs/stable/generated/torch.reciprocal.html", "category": "pytorch docs"} {"text": "torch.cuda.reset_max_memory_cached\ntorch.cuda.reset_max_memory_cached(device=None)\nResets the starting point in tracking maximum GPU memory managed by\n the caching allocator for a given device.\nSee \"max_memory_cached()\" for details.\nParameters:\n device (torch.device or int, optional) -- selected\n device. 
Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\nWarning:\n This function now calls \"reset_peak_memory_stats()\", which resets\n /all/ peak memory stats.\n\nNote:\n See Memory management for more details about GPU memory\n management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.reset_max_memory_cached.html", "category": "pytorch docs"} {"text": "torch.Tensor.lgamma\nTensor.lgamma() -> Tensor\nSee \"torch.lgamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.lgamma.html", "category": "pytorch docs"} {"text": "SparseAdam\nclass torch.optim.SparseAdam(params, lr=0.001, betas=(0.9, 0.999), eps=1e-08, maximize=False)\nImplements lazy version of Adam algorithm suitable for sparse\n tensors.\nIn this variant, only moments that show up in the gradient get\n updated, and only those portions of the gradient get applied to the\n parameters.\nParameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * **lr** (*float**, **optional*) -- learning rate (default:\n 1e-3)\n\n * **betas** (*Tuple**[**float**, **float**]**, **optional*) --\n coefficients used for computing running averages of gradient\n and its square (default: (0.9, 0.999))\n\n * **eps** (*float**, **optional*) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n\n * **maximize** (*bool**, **optional*) -- maximize the params\n based on the objective, instead of minimizing (default: False)\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SparseAdam.html", "category": "pytorch docs"} {"text": "add_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n **state_dict** (*dict*) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n\nregister_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None\n\n The \"optimizer\" argument is the optimizer instance being used.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SparseAdam.html", "category": "pytorch docs"} {"text": "registered.\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. 
It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SparseAdam.html", "category": "pytorch docs"} {"text": "state_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nstep(closure=None)\n Performs a single optimization step.\n\n Parameters:\n **closure** (*Callable**, **optional*) -- A closure that\n reevaluates the model and returns the loss.\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n **set_to_none** (*bool*) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SparseAdam.html", "category": "pytorch docs"} {"text": "None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.SparseAdam.html", "category": "pytorch docs"} {"text": "torch.Tensor.addbmm\nTensor.addbmm(batch1, batch2, *, beta=1, alpha=1) -> Tensor\nSee \"torch.addbmm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addbmm.html", "category": "pytorch docs"} {"text": "torch.is_conj\ntorch.is_conj(input)\nReturns True if the \"input\" is a conjugated tensor, i.e. 
its\n conjugate bit is set to True.\nParameters:\n input (Tensor) -- the input tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.is_conj.html", "category": "pytorch docs"} {"text": "torch.log\ntorch.log(input, *, out=None) -> Tensor\nReturns a new tensor with the natural logarithm of the elements of\n \"input\".\n y_{i} = \\log_{e} (x_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.rand(5) * 5\n >>> a\n tensor([4.7767, 4.3234, 1.2156, 0.2411, 4.5739])\n >>> torch.log(a)\n tensor([ 1.5637, 1.4640, 0.1952, -1.4226, 1.5204])\n", "source": "https://pytorch.org/docs/stable/generated/torch.log.html", "category": "pytorch docs"} {"text": "torch.nn.functional.alpha_dropout\ntorch.nn.functional.alpha_dropout(input, p=0.5, training=False, inplace=False)\nApplies alpha dropout to the input.\nSee \"AlphaDropout\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.alpha_dropout.html", "category": "pytorch docs"} {"text": "torch.count_nonzero\ntorch.count_nonzero(input, dim=None) -> Tensor\nCounts the number of non-zero values in the tensor \"input\" along\n the given \"dim\". If no dim is specified then all non-zeros in the\n tensor are counted.\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int** or **tuple of ints**, **optional*) -- Dim or\n tuple of dims along which to count non-zeros.\n\nExample:\n >>> x = torch.zeros(3,3)\n >>> x[torch.randn(3,3) > 0.5] = 1\n >>> x\n tensor([[0., 1., 1.],\n [0., 0., 0.],\n [0., 0., 1.]])\n >>> torch.count_nonzero(x)\n tensor(3)\n >>> torch.count_nonzero(x, dim=0)\n tensor([0, 1, 2])\n", "source": "https://pytorch.org/docs/stable/generated/torch.count_nonzero.html", "category": "pytorch docs"} {"text": "NAdam\nclass torch.optim.NAdam(params, lr=0.002, betas=(0.9, 0.999), eps=1e-08, weight_decay=0, momentum_decay=0.004, *, foreach=None, differentiable=False)\nImplements NAdam algorithm.\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{input} : \\gamma_t \\text{ (lr)}, \\:\n \\beta_1,\\beta_2 \\text{ (betas)}, \\: \\theta_0 \\text{\n (params)}, \\: f(\\theta) \\text{ (objective)} \\\\\n &\\hspace{13mm} \\: \\lambda \\text{ (weight decay)}, \\:\\psi \\text{\n (momentum decay)} \\\\ &\\textbf{initialize} : m_0\n \\leftarrow 0 \\text{ ( first moment)}, v_0 \\leftarrow 0\n \\text{ ( second moment)}\n \\\\[-1.ex] &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\\\ &\\hspace{5mm}g_t\n \\leftarrow \\nabla_{\\theta} f_t (\\theta_{t-1}) \\\\\n &\\hspace{5mm}if \\: \\lambda \\neq 0\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"} {"text": "&\\hspace{5mm}if \\: \\lambda \\neq 0\n \\ &\\hspace{10mm} g_t \\leftarrow g_t + \\lambda \\theta_{t-1}\n \\ &\\hspace{5mm} \\mu_t \\leftarrow \\beta_1 \\big(1 -\n \\frac{1}{2} 0.96^{t \\psi} \\big) \\ &\\hspace{5mm}\n \\mu_{t+1} \\leftarrow \\beta_1 \\big(1 - \\frac{1}{2}\n 0.96^{(t+1)\\psi}\\big)\\ &\\hspace{5mm}m_t\n \\leftarrow \\beta_1 m_{t-1} + (1 - \\beta_1) g_t \\\n &\\hspace{5mm}v_t \\leftarrow \\beta_2 v_{t-1} +\n (1-\\beta_2) g^2_t \\ &\\hspace{5mm}\\widehat{m_t}\n \\leftarrow \\mu_{t+1} m_t/(1-\\prod_{i=1}^{t+1}\\mu_i)\\[-1.ex]\n & \\hspace{11mm} + (1-\\mu_t) g_t /(1-\\prod_{i=1}^{t} \\mu_{i})\n \\ &\\hspace{5mm}\\widehat{v_t} \\leftarrow\n v_t/\\big(1-\\beta_2^t \\big) \\\n &\\hspace{5mm}\\theta_t \\leftarrow 
\\theta_{t-1} - \\gamma\n \\widehat{m_t}/ \\big(\\sqrt{\\widehat{v_t}} + \\epsilon\n \\big) \\\n &\\rule{110mm}{0.4pt}", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"} {"text": "&\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\nFor further details regarding the algorithm we refer to\n Incorporating Nesterov Momentum into Adam.\nParameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * **lr** (*float**, **optional*) -- learning rate (default:\n 2e-3)\n\n * **betas** (*Tuple**[**float**, **float**]**, **optional*) --\n coefficients used for computing running averages of gradient\n and its square (default: (0.9, 0.999))\n\n * **eps** (*float**, **optional*) -- term added to the\n denominator to improve numerical stability (default: 1e-8)\n\n * **weight_decay** (*float**, **optional*) -- weight decay (L2\n penalty) (default: 0)\n\n * **momentum_decay** (*float**, **optional*) -- momentum\n momentum_decay (default: 4e-3)\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"} {"text": "momentum_decay (default: 4e-3)\n * **foreach** (*bool**, **optional*) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n\n * **differentiable** (*bool**, **optional*) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n\nadd_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"} {"text": "as training progresses.\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n **state_dict** (*dict*) -- optimizer state. Should be an\n object returned from a call to \"state_dict()\".\n\nregister_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None\n\n The \"optimizer\" argument is the optimizer instance being used.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"} {"text": "register_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. 
It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"} {"text": "differs between optimizer classes.\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n **set_to_none** (*bool*) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"} {"text": "it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.NAdam.html", "category": "pytorch docs"} {"text": "torch.Tensor.bitwise_or\nTensor.bitwise_or() -> Tensor\nSee \"torch.bitwise_or()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.bitwise_or.html", "category": "pytorch docs"} {"text": "torch.Tensor.floor_divide\nTensor.floor_divide(value) -> Tensor\nSee \"torch.floor_divide()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.floor_divide.html", "category": "pytorch docs"} {"text": "torch.Tensor.all\nTensor.all(dim=None, keepdim=False) -> Tensor\nSee \"torch.all()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.all.html", "category": "pytorch docs"} {"text": "torch.rad2deg\ntorch.rad2deg(input, *, out=None) -> Tensor\nReturns a new tensor with each of the elements of \"input\" converted\n from angles in radians to degrees.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([[3.142, -3.142], [6.283, -6.283], [1.570, -1.570]])\n >>> torch.rad2deg(a)\n tensor([[ 180.0233, -180.0233],\n [ 359.9894, -359.9894],\n [ 89.9544, -89.9544]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.rad2deg.html", "category": "pytorch docs"} {"text": "LSTM\nclass torch.ao.nn.quantized.dynamic.LSTM(args, *kwargs)\nA dynamic quantized LSTM module with floating point tensor as\n inputs and outputs. 
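In practice such a module is usually produced by converting a float model with "torch.ao.quantization.quantize_dynamic()" rather than constructed by hand. A minimal sketch follows; the wrapper module and tensor shapes are illustrative placeholders:

    import torch
    import torch.nn as nn

    class Seq(nn.Module):
        def __init__(self):
            super().__init__()
            self.lstm = nn.LSTM(10, 20, 2)

        def forward(self, x, hidden):
            return self.lstm(x, hidden)

    float_model = Seq()
    # Swap every nn.LSTM submodule for its dynamically quantized
    # counterpart (int8 weights, float activations).
    qmodel = torch.ao.quantization.quantize_dynamic(
        float_model, {nn.LSTM}, dtype=torch.qint8)

    x = torch.randn(5, 3, 10)
    h0, c0 = torch.randn(2, 3, 20), torch.randn(2, 3, 20)
    output, (hn, cn) = qmodel(x, (h0, c0))  # same call convention as the float LSTM
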
We adopt the same interface as torch.nn.LSTM,\n please see https://pytorch.org/docs/stable/nn.html#torch.nn.LSTM\n for documentation.\nExamples:\n >>> rnn = nn.LSTM(10, 20, 2)\n >>> input = torch.randn(5, 3, 10)\n >>> h0 = torch.randn(2, 3, 20)\n >>> c0 = torch.randn(2, 3, 20)\n >>> output, (hn, cn) = rnn(input, (h0, c0))\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.dynamic.LSTM.html", "category": "pytorch docs"} {"text": "torch.can_cast\ntorch.can_cast(from, to) -> bool\nDetermines if a type conversion is allowed under PyTorch casting\n rules described in the type promotion documentation.\nParameters:\n * from (dtype) -- The original \"torch.dtype\".\n * **to** (*dtype*) -- The target \"torch.dtype\".\n\nExample:\n >>> torch.can_cast(torch.double, torch.float)\n True\n >>> torch.can_cast(torch.float, torch.int)\n False\n", "source": "https://pytorch.org/docs/stable/generated/torch.can_cast.html", "category": "pytorch docs"} {"text": "torch.autograd.function.FunctionCtx.set_materialize_grads\nFunctionCtx.set_materialize_grads(value)\nSets whether to materialize grad tensors. Default is \"True\".\nThis should be called only from inside the \"forward()\"\n method\nIf \"True\", undefined grad tensors will be expanded to tensors full\n of zeros prior to calling the \"backward()\" and \"jvp()\" methods.\nExample::\n >>> class SimpleFunc(Function):\n >>> @staticmethod\n >>> def forward(ctx, x):\n >>> return x.clone(), x.clone()\n >>>\n >>> @staticmethod\n >>> @once_differentiable\n >>> def backward(ctx, g1, g2):\n >>> return g1 + g2 # No check for None necessary\n >>>\n >>> # We modify SimpleFunc to handle non-materialized grad outputs\n >>> class Func(Function):\n >>> @staticmethod\n >>> def forward(ctx, x):\n >>> ctx.set_materialize_grads(False)", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.set_materialize_grads.html", "category": "pytorch docs"} {"text": "\n\n\n ctx.set_materialize_grads(False)\n >>> ctx.save_for_backward(x)\n >>> return x.clone(), x.clone()\n >>>\n >>> @staticmethod\n >>> @once_differentiable\n >>> def backward(ctx, g1, g2):\n >>> x, = ctx.saved_tensors\n >>> grad_input = torch.zeros_like(x)\n >>> if g1 is not None: # We must check for None now\n >>> grad_input += g1\n >>> if g2 is not None:\n >>> grad_input += g2\n >>> return grad_input\n >>>\n >>> a = torch.tensor(1., requires_grad=True)\n >>> b, _ = Func.apply(a) # induces g2 to be undefined\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.function.FunctionCtx.set_materialize_grads.html", "category": "pytorch docs"} {"text": "torch.Tensor.unique_consecutive\nTensor.unique_consecutive(return_inverse=False, return_counts=False, dim=None)\nEliminates all but the first element from every consecutive group\n of equivalent elements.\nSee \"torch.unique_consecutive()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unique_consecutive.html", "category": "pytorch docs"} {"text": "torch.foreach_neg\ntorch.foreach_neg(self: List[Tensor]) -> None\nApply \"torch.neg()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_neg_.html", "category": "pytorch docs"} {"text": "torch.bmm\ntorch.bmm(input, mat2, *, out=None) -> Tensor\nPerforms a batch matrix-matrix product of matrices stored in\n \"input\" and \"mat2\".\n\"input\" and \"mat2\" must be 3-D tensors each containing the same\n number of matrices.\nIf \"input\" is a (b \\times n \\times m) tensor, 
\"mat2\" is a (b \\times\n m \\times p) tensor, \"out\" will be a (b \\times n \\times p) tensor.\n \\text{out}_i = \\text{input}_i \\mathbin{@} \\text{mat2}_i\n\nThis operator supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\nNote:\n This function does not broadcast. For broadcasting matrix\n products, see \"torch.matmul()\".\n\nParameters:\n * input (Tensor) -- the first batch of matrices to be\n multiplied\n * **mat2** (*Tensor*) -- the second batch of matrices to be\n multiplied\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:", "source": "https://pytorch.org/docs/stable/generated/torch.bmm.html", "category": "pytorch docs"} {"text": "Example:\n >>> input = torch.randn(10, 3, 4)\n >>> mat2 = torch.randn(10, 4, 5)\n >>> res = torch.bmm(input, mat2)\n >>> res.size()\n torch.Size([10, 3, 5])\n", "source": "https://pytorch.org/docs/stable/generated/torch.bmm.html", "category": "pytorch docs"} {"text": "torch.cuda.memory_stats\ntorch.cuda.memory_stats(device=None)\nReturns a dictionary of CUDA memory allocator statistics for a\n given device.\nThe return value of this function is a dictionary of statistics,\n each of which is a non-negative integer.\nCore statistics:\n\n\n\"\"allocated.{all,large_pool,small_pool}.{current,peak,allocated,\n freed}\"\": number of allocation requests received by the memory\n allocator.\n\n\n\"\"allocated_bytes.{all,large_pool,small_pool}.{current,peak,allo\n cated,freed}\"\": amount of allocated memory.\n\n\n\"\"segment.{all,large_pool,small_pool}.{current,peak,allocated,fr\n eed}\"\": number of reserved segments from \"cudaMalloc()\".\n\n\n\"\"reserved_bytes.{all,large_pool,small_pool}.{current,peak,alloc\n ated,freed}\"\": amount of reserved memory.\n\n\n\"\"active.{all,large_pool,small_pool}.{current,peak,allocated,fre\n ed}\"\": number of active memory blocks.\n\n\n\"\"active_bytes.{all,large_pool,small_pool}.{current,peak,allocat\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_stats.html", "category": "pytorch docs"} {"text": "ed,freed}\"\": amount of active memory.\n\n\n\"\"inactive_split.{all,large_pool,small_pool}.{current,peak,alloc\n ated,freed}\"\": number of inactive, non-releasable memory blocks.\n\n\n\"\"inactive_split_bytes.{all,large_pool,small_pool}.{current,peak\n ,allocated,freed}\"\": amount of inactive, non-releasable memory.\n\n\nFor these core statistics, values are broken down as follows.\nPool type:\n\n\n\"all\": combined statistics across all memory pools.\n\n\n\"large_pool\": statistics for the large allocation pool (as of\n October 2019, for size >= 1MB allocations).\n\n\n\"small_pool\": statistics for the small allocation pool (as of\n October 2019, for size < 1MB allocations).\n\n\nMetric type:\n\n\n\"current\": current value of this metric.\n\n\n\"peak\": maximum value of this metric.\n\n\n\"allocated\": historical total increase in this metric.\n\n\n\"freed\": historical total decrease in this metric.\n\n\nIn addition to the core statistics, we also provide some simple\n event counters:", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_stats.html", "category": "pytorch docs"} {"text": "event counters:\n\n\n\"\"num_alloc_retries\"\": number of failed \"cudaMalloc\" calls that\n result in a cache flush and retry.\n\n\n\"\"num_ooms\"\": number of out-of-memory errors thrown.\n\n\nThe caching allocator can be configured via ENV to not split blocks\n larger than a 
defined size (see Memory Management section of the\n Cuda Semantics documentation). This helps avoid memory\n fragmentation but may have a performance penalty. Additional\n outputs to assist with tuning and evaluating impact:\n\n\n\"\"max_split_size\"\": blocks above this size will not be split.\n\n\n\"\"oversize_allocations.{current,peak,allocated,freed}\"\": number\n of over-size allocation requests received by the memory\n allocator.\n\n\n\"\"oversize_segments.{current,peak,allocated,freed}\"\": number of\n over-size reserved segments from \"cudaMalloc()\".\n\n\nParameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistics for the current device, given by", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_stats.html", "category": "pytorch docs"} {"text": "\"current_device()\", if \"device\" is \"None\" (default).\nReturn type:\n Dict[str, Any]\nNote:\n See Memory management for more details about GPU memory\n management.\n\nNote:\n With backend:cudaMallocAsync, some stats are not meaningful, and\n are always reported as zero.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.memory_stats.html", "category": "pytorch docs"} {"text": "BatchNorm2d\nclass torch.nn.BatchNorm2d(num_features, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True, device=None, dtype=None)\nApplies Batch Normalization over a 4D input (a mini-batch of 2D\n inputs with additional channel dimension) as described in the paper\n Batch Normalization: Accelerating Deep Network Training by Reducing\n Internal Covariate Shift .\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n\nThe mean and standard-deviation are calculated per-dimension over\n the mini-batches and \\gamma and \\beta are learnable parameter\n vectors of size C (where C is the input size). By default, the\n elements of \\gamma are set to 1 and the elements of \\beta are set\n to 0. The standard-deviation is calculated via the biased\n estimator, equivalent to torch.var(input, unbiased=False).\nAlso by default, during training this layer keeps running estimates\n of its computed mean and variance, which are then used for", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html", "category": "pytorch docs"} {"text": "normalization during evaluation. The running estimates are kept\n with a default \"momentum\" of 0.1.\nIf \"track_running_stats\" is set to \"False\", this layer then does\n not keep running estimates, and batch statistics are instead used\n during evaluation time as well.\nNote:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n\nBecause the Batch Normalization is done over the C dimension,\n computing statistics on (N, H, W) slices, it's common terminology\n to call this Spatial Batch Normalization.\nParameters:\n * num_features (int) -- C from an expected input of size\n (N, C, H, W)\n * **eps** (*float*) -- a value added to the denominator for\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html", "category": "pytorch docs"} {"text": "numerical stability. 
Default: 1e-5\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. Can be set to \"None\" for\n cumulative moving average (i.e. simple average). Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters. Default:\n \"True\"\n\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics, and initializes statistics buffers\n \"running_mean\" and \"running_var\" as \"None\". When these buffers\n are \"None\", this module always uses batch statistics. in both\n training and eval modes. Default: \"True\"\n\nShape:\n * Input: (N, C, H, W)\n * Output: (N, C, H, W) (same shape as input)\n\nExamples:\n >>> # With Learnable Parameters\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html", "category": "pytorch docs"} {"text": "\n\n\nWith Learnable Parameters\n >>> m = nn.BatchNorm2d(100)\n >>> # Without Learnable Parameters\n >>> m = nn.BatchNorm2d(100, affine=False)\n >>> input = torch.randn(20, 100, 35, 45)\n >>> output = m(input)\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.BatchNorm2d.html", "category": "pytorch docs"} {"text": "torch.triangular_solve\ntorch.triangular_solve(b, A, upper=True, transpose=False, unitriangular=False, *, out=None)\nSolves a system of equations with a square upper or lower\n triangular invertible matrix A and multiple right-hand sides b.\nIn symbols, it solves AX = b and assumes A is square upper-\n triangular (or lower-triangular if \"upper\"= False) and does not\n have zeros on the diagonal.\ntorch.triangular_solve(b, A) can take in 2D inputs b, A or\n inputs that are batches of 2D matrices. If the inputs are batches,\n then returns batched outputs X\nIf the diagonal of \"A\" contains zeros or elements that are very\n close to zero and \"unitriangular\"= False (default) or if the\n input matrix is badly conditioned, the result may contain NaN s.\nSupports input of float, double, cfloat and cdouble data types.\nWarning:\n \"torch.triangular_solve()\" is deprecated in favor of\n \"torch.linalg.solve_triangular()\" and will be removed in a future\n", "source": "https://pytorch.org/docs/stable/generated/torch.triangular_solve.html", "category": "pytorch docs"} {"text": "PyTorch release. \"torch.linalg.solve_triangular()\" has its\n arguments reversed and does not return a copy of one of the\n inputs.\"X = torch.triangular_solve(B, A).solution\" should be\n replaced with\n X = torch.linalg.solve_triangular(A, B)\n\nParameters:\n * b (Tensor) -- multiple right-hand sides of size (*, m,\n k) where * is zero of more batch dimensions\n * **A** (*Tensor*) -- the input triangular coefficient matrix of\n size (*, m, m) where * is zero or more batch dimensions\n\n * **upper** (*bool**, **optional*) -- whether A is upper or\n lower triangular. Default: \"True\".\n\n * **transpose** (*bool**, **optional*) -- solves *op(A)X = b*\n where *op(A) = A^T* if this flag is \"True\", and *op(A) = A* if\n it is \"False\". Default: \"False\".\n\n * **unitriangular** (*bool**, **optional*) -- whether A is unit\n triangular. If True, the diagonal elements of A are assumed to\n", "source": "https://pytorch.org/docs/stable/generated/torch.triangular_solve.html", "category": "pytorch docs"} {"text": "be 1 and not referenced from A. 
Default: \"False\".\nKeyword Arguments:\n out ((Tensor, Tensor), optional) -- tuple of\n two tensors to write the output to. Ignored if None. Default:\n None.\nReturns:\n A namedtuple (solution, cloned_coefficient) where\n cloned_coefficient is a clone of A and solution is the\n solution X to AX = b (or whatever variant of the system of\n equations, depending on the keyword arguments.)\nExamples:\n >>> A = torch.randn(2, 2).triu()\n >>> A\n tensor([[ 1.1527, -1.0753],\n [ 0.0000, 0.7986]])\n >>> b = torch.randn(2, 3)\n >>> b\n tensor([[-0.0210, 2.3513, -1.5492],\n [ 1.5429, 0.7403, -1.0243]])\n >>> torch.triangular_solve(b, A)\n torch.return_types.triangular_solve(\n solution=tensor([[ 1.7841, 2.9046, -2.5405],\n [ 1.9320, 0.9270, -1.2826]]),\n cloned_coefficient=tensor([[ 1.1527, -1.0753],\n [ 0.0000, 0.7986]]))\n", "source": "https://pytorch.org/docs/stable/generated/torch.triangular_solve.html", "category": "pytorch docs"} {"text": "torch.frombuffer\ntorch.frombuffer(buffer, *, dtype, count=- 1, offset=0, requires_grad=False) -> Tensor\nCreates a 1-dimensional \"Tensor\" from an object that implements the\n Python buffer protocol.\nSkips the first \"offset\" bytes in the buffer, and interprets the\n rest of the raw bytes as a 1-dimensional tensor of type \"dtype\"\n with \"count\" elements.\nNote that either of the following must be true:\n\n\n\"count\" is a positive non-zero number, and the total number of\n bytes in the buffer is less than \"offset\" plus \"count\" times the\n size (in bytes) of \"dtype\".\n\n\n\"count\" is negative, and the length (number of bytes) of the\n buffer subtracted by the \"offset\" is a multiple of the size (in\n bytes) of \"dtype\".\n\n\nThe returned tensor and buffer share the same memory. Modifications\n to the tensor will be reflected in the buffer and vice versa. The\n returned tensor is not resizable.\nNote:\n This function increments the reference count for the object that\n", "source": "https://pytorch.org/docs/stable/generated/torch.frombuffer.html", "category": "pytorch docs"} {"text": "owns the shared memory. Therefore, such memory will not be\n deallocated before the returned tensor goes out of scope.\nWarning:\n This function's behavior is undefined when passed an object\n implementing the buffer protocol whose data is not on the CPU.\n Doing so is likely to cause a segmentation fault.\n\nWarning:\n This function does not try to infer the \"dtype\" (hence, it is not\n optional). Passing a different \"dtype\" than its source may result\n in unexpected behavior.\n\nParameters:\n buffer (object) -- a Python object that exposes the buffer\n interface.\nKeyword Arguments:\n * dtype (\"torch.dtype\") -- the desired data type of returned\n tensor.\n * **count** (*int**, **optional*) -- the number of desired\n elements to be read. If negative, all the elements (until the\n end of the buffer) will be read. Default: -1.\n\n * **offset** (*int**, **optional*) -- the number of bytes to\n", "source": "https://pytorch.org/docs/stable/generated/torch.frombuffer.html", "category": "pytorch docs"} {"text": "skip at the start of the buffer. Default: 0.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nExample:\n >>> import array\n >>> a = array.array('i', [1, 2, 3])\n >>> t = torch.frombuffer(a, dtype=torch.int32)\n >>> t\n tensor([ 1, 2, 3])\n >>> t[0] = -1\n >>> a\n array([-1, 2, 3])\n\n >>> # Interprets the signed char bytes as 32-bit integers.\n >>> # Each 4 signed char elements will be interpreted as\n >>> # 1 signed 32-bit integer.\n >>> import array\n >>> a = array.array('b', [-1, 0, 0, 0])\n >>> torch.frombuffer(a, dtype=torch.int32)\n tensor([255], dtype=torch.int32)\n", "source": "https://pytorch.org/docs/stable/generated/torch.frombuffer.html", "category": "pytorch docs"} {"text": "StandaloneModuleConfigEntry\nclass torch.ao.quantization.fx.custom_config.StandaloneModuleConfigEntry(qconfig_mapping: 'Optional[QConfigMapping]', example_inputs: 'Tuple[Any, ...]', prepare_custom_config: 'Optional[PrepareCustomConfig]', backend_config: 'Optional[BackendConfig]')", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.StandaloneModuleConfigEntry.html", "category": "pytorch docs"} {"text": "torch.foreach_erf\ntorch.foreach_erf(self: List[Tensor]) -> None\nApply \"torch.erf()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_erf_.html", "category": "pytorch docs"} {"text": "torch.Tensor.det\nTensor.det() -> Tensor\nSee \"torch.det()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.det.html", "category": "pytorch docs"} {"text": "torch.autograd.Function.forward\nstatic Function.forward(ctx, args, *kwargs)\nThis function is to be overridden by all subclasses. There are two\n ways to define forward:\nUsage 1 (Combined forward and ctx):\n @staticmethod\n def forward(ctx: Any, *args: Any, **kwargs: Any) -> Any:\n pass\n\n\n\nIt must accept a context ctx as the first argument, followed by\n any number of arguments (tensors or other types).\n\n\nSee Combined or separate forward() and setup_context() for more\n details\n\n\nUsage 2 (Separate forward and ctx):\n @staticmethod\n def forward(*args: Any, **kwargs: Any) -> Any:\n pass\n\n @staticmethod\n def setup_context(ctx: Any, inputs: Tuple[Any, ...], output: Any) -> None:\n pass\n\n\n\nThe forward no longer accepts a ctx argument.\n\n\nInstead, you must also override the\n \"torch.autograd.Function.setup_context()\" staticmethod to handle\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.forward.html", "category": "pytorch docs"} {"text": "setting up the \"ctx\" object. \"output\" is the output of the\n forward, \"inputs\" are a Tuple of inputs to the forward.\n\nSee Extending torch.autograd for more details\n\nThe context can be used to store arbitrary data that can be then\n retrieved during the backward pass. Tensors should not be stored\n directly on ctx (though this is not currently enforced for\n backward compatibility). 
Instead, tensors should be saved either\n with \"ctx.save_for_backward()\" if they are intended to be used in\n \"backward\" (equivalently, \"vjp\") or \"ctx.save_for_forward()\" if\n they are intended to be used for in \"jvp\".\nReturn type:\n Any", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.forward.html", "category": "pytorch docs"} {"text": "torch.nn.functional.max_unpool3d\ntorch.nn.functional.max_unpool3d(input, indices, kernel_size, stride=None, padding=0, output_size=None)\nComputes a partial inverse of \"MaxPool3d\".\nSee \"MaxUnpool3d\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_unpool3d.html", "category": "pytorch docs"} {"text": "torch.Tensor.add_\nTensor.add_(other, *, alpha=1) -> Tensor\nIn-place version of \"add()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.add_.html", "category": "pytorch docs"} {"text": "PrepareCustomConfig\nclass torch.ao.quantization.fx.custom_config.PrepareCustomConfig\nCustom configuration for \"prepare_fx()\" and \"prepare_qat_fx()\".\nExample usage:\n prepare_custom_config = PrepareCustomConfig() .set_standalone_module_name(\"module1\", qconfig_mapping, example_inputs, child_prepare_custom_config, backend_config) .set_standalone_module_class(MyStandaloneModule, qconfig_mapping, example_inputs, child_prepare_custom_config, backend_config) .set_float_to_observed_mapping(FloatCustomModule, ObservedCustomModule) .set_non_traceable_module_names([\"module2\", \"module3\"]) .set_non_traceable_module_classes([NonTraceableModule1, NonTraceableModule2]) .set_input_quantized_indexes([0]) .set_output_quantized_indexes([0]) .set_preserved_attributes([\"attr1\", \"attr2\"])\n\nclassmethod from_dict(prepare_custom_config_dict)", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html", "category": "pytorch docs"} {"text": "Create a \"PrepareCustomConfig\" from a dictionary with the\n following items:\n \"standalone_module_name\": a list of (module_name,\n qconfig_mapping, example_inputs, child_prepare_custom_config,\n backend_config) tuples\n\n \"standalone_module_class\" a list of (module_class,\n qconfig_mapping, example_inputs, child_prepare_custom_config,\n backend_config) tuples\n\n \"float_to_observed_custom_module_class\": a nested dictionary\n mapping from quantization mode to an inner mapping from float\n module classes to observed module classes, e.g. 
{\"static\":\n {FloatCustomModule: ObservedCustomModule}}\n\n \"non_traceable_module_name\": a list of modules names that are\n not symbolically traceable \"non_traceable_module_class\": a\n list of module classes that are not symbolically traceable\n \"input_quantized_idxs\": a list of indexes of graph inputs\n that should be quantized \"output_quantized_idxs\": a list of\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html", "category": "pytorch docs"} {"text": "indexes of graph outputs that should be quantized\n \"preserved_attributes\": a list of attributes that persist\n even if they are not used in \"forward\"\n This function is primarily for backward compatibility and may be\n removed in the future.\n\n Return type:\n *PrepareCustomConfig*\n\nset_float_to_observed_mapping(float_class, observed_class, quant_type=QuantType.STATIC)\n Set the mapping from a custom float module class to a custom\n observed module class.\n\n The observed module class must have a \"from_float\" class method\n that converts the float module class to the observed module\n class. This is currently only supported for static quantization.\n\n Return type:\n *PrepareCustomConfig*\n\nset_input_quantized_indexes(indexes)\n Set the indexes of the inputs of the graph that should be\n quantized. Inputs are otherwise assumed to be in fp32 by default\n instead.\n\n Return type:\n *PrepareCustomConfig*\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html", "category": "pytorch docs"} {"text": "Return type:\n PrepareCustomConfig\nset_non_traceable_module_classes(module_classes)\n Set the modules that are not symbolically traceable, identified\n by class.\n\n Return type:\n *PrepareCustomConfig*\n\nset_non_traceable_module_names(module_names)\n Set the modules that are not symbolically traceable, identified\n by name.\n\n Return type:\n *PrepareCustomConfig*\n\nset_output_quantized_indexes(indexes)\n Set the indexes of the outputs of the graph that should be\n quantized. Outputs are otherwise assumed to be in fp32 by\n default instead.\n\n Return type:\n *PrepareCustomConfig*\n\nset_preserved_attributes(attributes)\n Set the names of the attributes that will persist in the graph\n module even if they are not used in the model's \"forward\"\n method.\n\n Return type:\n *PrepareCustomConfig*\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html", "category": "pytorch docs"} {"text": "Return type:\n PrepareCustomConfig\nset_standalone_module_class(module_class, qconfig_mapping, example_inputs, prepare_custom_config, backend_config)\n Set the configuration for running a standalone module identified\n by \"module_class\".\n\n If \"qconfig_mapping\" is None, the parent \"qconfig_mapping\" will\n be used instead. If \"prepare_custom_config\" is None, an empty\n \"PrepareCustomConfig\" will be used. If \"backend_config\" is None,\n the parent \"backend_config\" will be used instead.\n\n Return type:\n *PrepareCustomConfig*\n\nset_standalone_module_name(module_name, qconfig_mapping, example_inputs, prepare_custom_config, backend_config)\n Set the configuration for running a standalone module identified\n by \"module_name\".\n\n If \"qconfig_mapping\" is None, the parent \"qconfig_mapping\" will\n be used instead. If \"prepare_custom_config\" is None, an empty\n \"PrepareCustomConfig\" will be used. 
If \"backend_config\" is None,\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html", "category": "pytorch docs"} {"text": "the parent \"backend_config\" will be used instead.\n Return type:\n *PrepareCustomConfig*\n\nto_dict()\n Convert this \"PrepareCustomConfig\" to a dictionary with the\n items described in \"from_dict()\".\n\n Return type:\n *Dict*[str, *Any*]\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.quantization.fx.custom_config.PrepareCustomConfig.html", "category": "pytorch docs"} {"text": "torch.renorm\ntorch.renorm(input, p, dim, maxnorm, *, out=None) -> Tensor\nReturns a tensor where each sub-tensor of \"input\" along dimension\n \"dim\" is normalized such that the p-norm of the sub-tensor is\n lower than the value \"maxnorm\"\nNote:\n If the norm of a row is lower than *maxnorm*, the row is\n unchanged\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **p** (*float*) -- the power for the norm computation\n\n * **dim** (*int*) -- the dimension to slice over to get the sub-\n tensors\n\n * **maxnorm** (*float*) -- the maximum norm to keep each sub-\n tensor under\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> x = torch.ones(3, 3)\n >>> x[1].fill_(2)\n tensor([ 2., 2., 2.])\n >>> x[2].fill_(3)\n tensor([ 3., 3., 3.])\n >>> x\n tensor([[ 1., 1., 1.],\n [ 2., 2., 2.],\n [ 3., 3., 3.]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.renorm.html", "category": "pytorch docs"} {"text": "[ 3., 3., 3.]])\n >>> torch.renorm(x, 1, 0, 5)\n tensor([[ 1.0000, 1.0000, 1.0000],\n [ 1.6667, 1.6667, 1.6667],\n [ 1.6667, 1.6667, 1.6667]])", "source": "https://pytorch.org/docs/stable/generated/torch.renorm.html", "category": "pytorch docs"} {"text": "torch._foreach_cos\ntorch._foreach_cos(self: List[Tensor]) -> List[Tensor]\nApply \"torch.cos()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_cos.html", "category": "pytorch docs"} {"text": "torch.Tensor.numel\nTensor.numel() -> int\nSee \"torch.numel()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.numel.html", "category": "pytorch docs"} {"text": "CosineSimilarity\nclass torch.nn.CosineSimilarity(dim=1, eps=1e-08)\nReturns cosine similarity between x_1 and x_2, computed along\n dim.\n \\text{similarity} = \\dfrac{x_1 \\cdot x_2}{\\max(\\Vert x_1 \\Vert\n _2 \\cdot \\Vert x_2 \\Vert _2, \\epsilon)}.\n\nParameters:\n * dim (int, optional) -- Dimension where cosine\n similarity is computed. Default: 1\n * **eps** (*float**, **optional*) -- Small value to avoid\n division by zero. 
Default: 1e-8\n\nShape:\n * Input1: (\\ast_1, D, \\ast_2) where D is at position dim\n * Input2: (\\ast_1, D, \\ast_2), same number of dimensions as x1,\n matching x1 size at dimension *dim*,\n and broadcastable with x1 at other dimensions.\n\n * Output: (\\ast_1, \\ast_2)\n\nExamples::\n >>> input1 = torch.randn(100, 128)\n >>> input2 = torch.randn(100, 128)\n >>> cos = nn.CosineSimilarity(dim=1, eps=1e-6)\n >>> output = cos(input1, input2)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.CosineSimilarity.html", "category": "pytorch docs"} {"text": "torch.Tensor.cosh_\nTensor.cosh_() -> Tensor\nIn-place version of \"cosh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.cosh_.html", "category": "pytorch docs"} {"text": "torch.tensor\ntorch.tensor(data, *, dtype=None, device=None, requires_grad=False, pin_memory=False) -> Tensor\nConstructs a tensor with no autograd history (also known as a \"leaf\n tensor\", see Autograd mechanics) by copying \"data\".\nWarning:\n When working with tensors prefer using \"torch.Tensor.clone()\",\n \"torch.Tensor.detach()\", and \"torch.Tensor.requires_grad_()\" for\n readability. Letting *t* be a tensor, \"torch.tensor(t)\" is\n equivalent to \"t.clone().detach()\", and \"torch.tensor(t,\n requires_grad=True)\" is equivalent to\n \"t.clone().detach().requires_grad_(True)\".\n\nSee also:\n \"torch.as_tensor()\" preserves autograd history and avoids copies\n where possible. \"torch.from_numpy()\" creates a tensor that shares\n storage with a NumPy array.\n\nParameters:\n data (array_like) -- Initial data for the tensor. Can be a\n list, tuple, NumPy \"ndarray\", scalar, and other types.\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/generated/torch.tensor.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", infers data type from\n \"data\".\n * **device** (\"torch.device\", optional) -- the device of the\n constructed tensor. If None and data is a tensor then the\n device of data is used. If None and data is not a tensor then\n the result tensor is constructed on the CPU.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **pin_memory** (*bool**, **optional*) -- If set, returned\n tensor would be allocated in the pinned memory. Works only for\n CPU tensors. Default: \"False\".\n\nExample:\n >>> torch.tensor([[0.1, 1.2], [2.2, 3.1], [4.9, 5.2]])\n tensor([[ 0.1000, 1.2000],\n [ 2.2000, 3.1000],\n [ 4.9000, 5.2000]])\n\n >>> torch.tensor([0, 1]) # Type inference on data\n tensor([ 0, 1])\n", "source": "https://pytorch.org/docs/stable/generated/torch.tensor.html", "category": "pytorch docs"} {"text": "tensor([ 0, 1])\n >>> torch.tensor([[0.11111, 0.222222, 0.3333333]],\n ... dtype=torch.float64,\n ... 
device=torch.device('cuda:0')) # creates a double tensor on a CUDA device\n tensor([[ 0.1111, 0.2222, 0.3333]], dtype=torch.float64, device='cuda:0')\n\n >>> torch.tensor(3.14159) # Create a zero-dimensional (scalar) tensor\n tensor(3.1416)\n\n >>> torch.tensor([]) # Create an empty tensor (of size (0,))\n tensor([])\n", "source": "https://pytorch.org/docs/stable/generated/torch.tensor.html", "category": "pytorch docs"} {"text": "Fold\nclass torch.nn.Fold(output_size, kernel_size, dilation=1, padding=0, stride=1)\nCombines an array of sliding local blocks into a large containing\n tensor.\nConsider a batched \"input\" tensor containing sliding local blocks,\n e.g., patches of images, of shape (N, C \\times\n \\prod(\\text{kernel_size}), L), where N is batch dimension, C\n \\times \\prod(\\text{kernel_size}) is the number of values within a\n block (a block has \\prod(\\text{kernel_size}) spatial locations\n each containing a C-channeled vector), and L is the total number of\n blocks. (This is exactly the same specification as the output shape\n of \"Unfold\".) This operation combines these local blocks into the\n large \"output\" tensor of shape (N, C, \\text{output_size}[0],\n \\text{output_size}[1], \\dots) by summing the overlapping values.\n Similar to \"Unfold\", the arguments must satisfy\n L = \\prod_d \\left\\lfloor\\frac{\\text{output\\_size}[d] + 2 \\times\n \\text{padding}[d] % - \\text{dilation}[d] \\times\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Fold.html", "category": "pytorch docs"} {"text": "(\\text{kernel_size}[d] - 1) - 1}{\\text{stride}[d]} +\n 1\\right\\rfloor,\nwhere d is over all spatial dimensions.\n\n\"output_size\" describes the spatial shape of the large containing\n tensor of the sliding local blocks. It is useful to resolve the\n ambiguity when multiple input shapes map to same number of\n sliding blocks, e.g., with \"stride > 0\".\n\nThe \"padding\", \"stride\" and \"dilation\" arguments specify how the\n sliding blocks are retrieved.\n\n\n\"stride\" controls the stride for the sliding blocks.\n\n\n\"padding\" controls the amount of implicit zero-paddings on both\n sides for \"padding\" number of points for each dimension before\n reshaping.\n\n\n\"dilation\" controls the spacing between the kernel points; also\n known as the \u00c3\u00a0 trous algorithm. It is harder to describe, but\n this link has a nice visualization of what \"dilation\" does.\n\n\nParameters:\n * output_size (int or tuple) -- the shape of the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Fold.html", "category": "pytorch docs"} {"text": "spatial dimensions of the output (i.e., \"output.sizes()[2:]\")\n * **kernel_size** (*int** or **tuple*) -- the size of the\n sliding blocks\n\n * **dilation** (*int** or **tuple**, **optional*) -- a parameter\n that controls the stride of elements within the neighborhood.\n Default: 1\n\n * **padding** (*int** or **tuple**, **optional*) -- implicit\n zero padding to be added on both sides of input. Default: 0\n\n * **stride** (*int** or **tuple*) -- the stride of the sliding\n blocks in the input spatial dimensions. 
Default: 1\n\n\n\nIf \"output_size\", \"kernel_size\", \"dilation\", \"padding\" or\n \"stride\" is an int or a tuple of length 1 then their values will\n be replicated across all spatial dimensions.\n\n\nFor the case of two output spatial dimensions this operation is\n sometimes called \"col2im\".\n\n\nNote:\n \"Fold\" calculates each combined value in the resulting large\n tensor by summing all values from all containing blocks. \"Unfold\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Fold.html", "category": "pytorch docs"} {"text": "extracts the values in the local blocks by copying from the large\n tensor. So, if the blocks overlap, they are not inverses of each\n other.In general, folding and unfolding operations are related as\n follows. Consider \"Fold\" and \"Unfold\" instances created with the\n same parameters:\n >>> fold_params = dict(kernel_size=..., dilation=..., padding=..., stride=...)\n >>> fold = nn.Fold(output_size=..., **fold_params)\n >>> unfold = nn.Unfold(**fold_params)\n\n Then for any (supported) \"input\" tensor the following equality\n holds:\n\n fold(unfold(input)) == divisor * input\n\n where \"divisor\" is a tensor that depends only on the shape and\n dtype of the \"input\":\n\n >>> input_ones = torch.ones(input.shape, dtype=input.dtype)\n >>> divisor = fold(unfold(input_ones))\n\n When the \"divisor\" tensor contains no zero elements, then \"fold\"\n and \"unfold\" operations are inverses of each other (up to\n constant divisor).\n\nWarning:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Fold.html", "category": "pytorch docs"} {"text": "constant divisor).\nWarning:\n Currently, only unbatched (3D) or batched (4D) image-like output\n tensors are supported.\n\nShape:\n * Input: (N, C \\times \\prod(\\text{kernel_size}), L) or (C\n \\times \\prod(\\text{kernel_size}), L)\n * Output: (N, C, \\text{output\\_size}[0], \\text{output\\_size}[1],\n \\dots) or (C, \\text{output\\_size}[0], \\text{output\\_size}[1],\n \\dots) as described above\n\nExamples:\n >>> fold = nn.Fold(output_size=(4, 5), kernel_size=(2, 2))\n >>> input = torch.randn(1, 3 * 2 * 2, 12)\n >>> output = fold(input)\n >>> output.size()\n torch.Size([1, 3, 4, 5])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Fold.html", "category": "pytorch docs"} {"text": "torch.nanmedian\ntorch.nanmedian(input) -> Tensor\nReturns the median of the values in \"input\", ignoring \"NaN\" values.\nThis function is identical to \"torch.median()\" when there are no\n \"NaN\" values in \"input\". When \"input\" has one or more \"NaN\" values,\n \"torch.median()\" will always return \"NaN\", while this function will\n return the median of the non-\"NaN\" elements in \"input\". 
If all the\n elements in \"input\" are \"NaN\" it will also return \"NaN\".\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> a = torch.tensor([1, float('nan'), 3, 2])\n >>> a.median()\n tensor(nan)\n >>> a.nanmedian()\n tensor(2.)\n\ntorch.nanmedian(input, dim=- 1, keepdim=False, *, out=None)\nReturns a namedtuple \"(values, indices)\" where \"values\" contains\n the median of each row of \"input\" in the dimension \"dim\", ignoring\n \"NaN\" values, and \"indices\" contains the index of the median values\n found in the dimension \"dim\".", "source": "https://pytorch.org/docs/stable/generated/torch.nanmedian.html", "category": "pytorch docs"} {"text": "found in the dimension \"dim\".\nThis function is identical to \"torch.median()\" when there are no\n \"NaN\" values in a reduced row. When a reduced row has one or more\n \"NaN\" values, \"torch.median()\" will always reduce it to \"NaN\",\n while this function will reduce it to the median of the non-\"NaN\"\n elements. If all the elements in a reduced row are \"NaN\" then it\n will be reduced to \"NaN\", too.\nParameters:\n * input (Tensor) -- the input tensor.\n * **dim** (*int*) -- the dimension to reduce.\n\n * **keepdim** (*bool*) -- whether the output tensor has \"dim\"\n retained or not.\n\nKeyword Arguments:\n out ((Tensor, Tensor), optional) -- The first\n tensor will be populated with the median values and the second\n tensor, which must have dtype long, with their indices in the\n dimension \"dim\" of \"input\".\nExample:\n >>> a = torch.tensor([[2, 3, 1], [float('nan'), 1, float('nan')]])\n >>> a\n tensor([[2., 3., 1.],\n", "source": "https://pytorch.org/docs/stable/generated/torch.nanmedian.html", "category": "pytorch docs"} {"text": "\n\n\na\n tensor([[2., 3., 1.],\n [nan, 1., nan]])\n >>> a.median(0)\n torch.return_types.median(values=tensor([nan, 1., nan]), indices=tensor([1, 1, 1]))\n >>> a.nanmedian(0)\n torch.return_types.nanmedian(values=tensor([2., 1., 1.]), indices=tensor([0, 1, 0]))\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nanmedian.html", "category": "pytorch docs"} {"text": "EmbeddingBag\nclass torch.ao.nn.quantized.EmbeddingBag(num_embeddings, embedding_dim, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, mode='sum', sparse=False, _weight=None, include_last_offset=False, dtype=torch.quint8)\nA quantized EmbeddingBag module with quantized packed weights as\n inputs. 
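One common way to obtain such a module from an existing float "torch.nn.EmbeddingBag" is the "from_float" classmethod documented below. A minimal sketch, assuming the float module carries the weight-only "float_qparams_weight_only_qconfig"; the sizes are placeholders:

    import torch
    import torch.nn as nn
    from torch.ao.quantization import float_qparams_weight_only_qconfig

    float_eb = nn.EmbeddingBag(num_embeddings=10, embedding_dim=12, mode='sum')
    # from_float expects a weight-only float-qparams qconfig on the float module.
    float_eb.qconfig = float_qparams_weight_only_qconfig
    q_eb = torch.ao.nn.quantized.EmbeddingBag.from_float(float_eb)

    indices = torch.tensor([9, 6, 5, 7, 8, 8, 9, 2])
    offsets = torch.tensor([0, 4])         # two bags of four indices each
    print(q_eb(indices, offsets).size())   # expected: torch.Size([2, 12])
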
We adopt the same interface as torch.nn.EmbeddingBag,\n please see\n https://pytorch.org/docs/stable/nn.html#torch.nn.EmbeddingBag for\n documentation.\nSimilar to \"EmbeddingBag\", attributes will be randomly initialized\n at module creation time and will be overwritten later\nVariables:\n weight (Tensor) -- the non-learnable quantized weights of\n the module of shape (\\text{num_embeddings},\n \\text{embedding_dim}).\nExamples::\n >>> m = nn.quantized.EmbeddingBag(num_embeddings=10, embedding_dim=12, include_last_offset=True, mode='sum')", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.EmbeddingBag.html", "category": "pytorch docs"} {"text": "\n\n\nindices = torch.tensor([9, 6, 5, 7, 8, 8, 9, 2, 8, 6, 6, 9, 1, 6, 8, 8, 3, 2, 3, 6, 3, 6, 5, 7, 0, 8, 4, 6, 5, 8, 2, 3])\n >>> offsets = torch.tensor([0, 19, 20, 28, 28, 32])\n >>> output = m(indices, offsets)\n >>> print(output.size())\n torch.Size([5, 12])\n\n\n\nclassmethod from_float(mod)\n Create a quantized embedding_bag module from a float module\n\n Parameters:\n **mod** (*Module*) -- a float module, either produced by\n torch.ao.quantization utilities or provided by user\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.EmbeddingBag.html", "category": "pytorch docs"} {"text": "torch.linalg.slogdet\ntorch.linalg.slogdet(A, *, out=None)\nComputes the sign and natural logarithm of the absolute value of\n the determinant of a square matrix.\nFor complex \"A\", it returns the sign and the natural logarithm of\n the modulus of the determinant, that is, a logarithmic polar\n decomposition of the determinant.\nThe determinant can be recovered as sign * exp(logabsdet). When a\n matrix has a determinant of zero, it returns (0, -inf).\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nSee also:\n \"torch.linalg.det()\" computes the determinant of square matrices.\n\nParameters:\n A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions.\nKeyword Arguments:\n out (tuple, optional) -- output tuple of two tensors.\n Ignored if None. Default: None.", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.slogdet.html", "category": "pytorch docs"} {"text": "Ignored if None. Default: None.\nReturns:\n A named tuple (sign, logabsdet).\n *sign* will have the same dtype as \"A\".\n\n *logabsdet* will always be real-valued, even when \"A\" is\n complex.\n\nExamples:\n >>> A = torch.randn(3, 3)\n >>> A\n tensor([[ 0.0032, -0.2239, -1.1219],\n [-0.6690, 0.1161, 0.4053],\n [-1.6218, -0.9273, -0.0082]])\n >>> torch.linalg.det(A)\n tensor(-0.7576)\n >>> torch.logdet(A)\n tensor(nan)\n >>> torch.linalg.slogdet(A)\n torch.return_types.linalg_slogdet(sign=tensor(-1.), logabsdet=tensor(-0.2776))\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.slogdet.html", "category": "pytorch docs"} {"text": "torch.float_power\ntorch.float_power(input, exponent, *, out=None) -> Tensor\nRaises \"input\" to the power of \"exponent\", elementwise, in double\n precision. If neither input is complex returns a \"torch.float64\"\n tensor, and if one or more inputs is complex returns a\n \"torch.complex128\" tensor.\nNote:\n This function always computes in double precision, unlike\n \"torch.pow()\", which implements more typical type promotion. 
This\n is useful when the computation needs to be performed in a wider\n or more precise dtype, or the results of the computation may\n contain fractional values not representable in the input dtypes,\n like when an integer base is raised to a negative integer\n exponent.\n\nParameters:\n * input (Tensor or Number) -- the base value(s)\n * **exponent** (*Tensor** or **Number*) -- the exponent value(s)\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:", "source": "https://pytorch.org/docs/stable/generated/torch.float_power.html", "category": "pytorch docs"} {"text": "Example:\n >>> a = torch.randint(10, (4,))\n >>> a\n tensor([6, 4, 7, 1])\n >>> torch.float_power(a, 2)\n tensor([36., 16., 49., 1.], dtype=torch.float64)\n\n >>> a = torch.arange(1, 5)\n >>> a\n tensor([ 1, 2, 3, 4])\n >>> exp = torch.tensor([2, -3, 4, -5])\n >>> exp\n tensor([ 2, -3, 4, -5])\n >>> torch.float_power(a, exp)\n tensor([1.0000e+00, 1.2500e-01, 8.1000e+01, 9.7656e-04], dtype=torch.float64)\n", "source": "https://pytorch.org/docs/stable/generated/torch.float_power.html", "category": "pytorch docs"} {"text": "ConvReLU2d\nclass torch.ao.nn.intrinsic.ConvReLU2d(conv, relu)\nThis is a sequential container which calls the Conv2d and ReLU\n modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvReLU2d.html", "category": "pytorch docs"} {"text": "torch.istft\ntorch.istft(input, n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False) -> Tensor:\nInverse short time Fourier Transform. This is expected to be the\n inverse of \"stft()\".\nIt has the same parameters (+ additional optional parameter of\n \"length\") and it should return the least squares estimation of the\n original signal. The algorithm will check using the NOLA condition\n ( nonzero overlap).\nImportant consideration in the parameters \"window\" and \"center\" so\n that the envelop created by the summation of all the windows is\n never zero at certain point in time. Specifically,\n \\sum_{t=-\\infty}^{\\infty} |w|^2[n-t\\times hop_length] \\cancel{=}\n 0.\nSince \"stft()\" discards elements at the end of the signal if they\n do not fit in a frame, \"istft\" may return a shorter signal than the\n original signal (can occur if \"center\" is False since the signal", "source": "https://pytorch.org/docs/stable/generated/torch.istft.html", "category": "pytorch docs"} {"text": "isn't padded). If length is given in the arguments and is longer\n than expected, \"istft\" will pad zeros to the end of the returned\n signal.\nIf \"center\" is \"True\", then there will be padding e.g.\n \"'constant'\", \"'reflect'\", etc. Left padding can be trimmed off\n exactly because they can be calculated but right padding cannot be\n calculated without additional information.\nExample: Suppose the last window is: \"[17, 18, 0, 0, 0]\" vs \"[18,\n 0, 0, 0, 0]\"\nThe \"n_fft\", \"hop_length\", \"win_length\" are all the same which\n prevents the calculation of right padding. These additional values\n could be zeros or a reflection of the signal so providing \"length\"\n could be useful. If \"length\" is \"None\" then padding will be\n aggressively removed (some loss of signal).\n[1] D. W. Griffin and J. S. Lim, \"Signal estimation from modified\n short-time Fourier transform,\" IEEE Trans. ASSP, vol.32, no.2,\n pp.236-243, Apr. 
1984.\nParameters:\n * input (Tensor) --", "source": "https://pytorch.org/docs/stable/generated/torch.istft.html", "category": "pytorch docs"} {"text": "Parameters:\n * input (Tensor) --\n The input tensor. Expected to be in the format of \"stft()\",\n output. That is a complex tensor of shape (\"channel\",\n \"fft_size\", \"n_frame\"), where the \"channel\" dimension is\n optional.\n\n Changed in version 2.0: Real datatype inputs are no longer\n supported. Input must now have a complex datatype, as returned\n by \"stft(..., return_complex=True)\".\n\n * **n_fft** (*int*) -- Size of Fourier transform\n\n * **hop_length** (*Optional**[**int**]*) -- The distance between\n neighboring sliding window frames. (Default: \"n_fft // 4\")\n\n * **win_length** (*Optional**[**int**]*) -- The size of window\n frame and STFT filter. (Default: \"n_fft\")\n\n * **window** (*Optional**[**torch.Tensor**]*) -- The optional\n window function. (Default: \"torch.ones(win_length)\")\n\n * **center** (*bool*) -- Whether \"input\" was padded on both\n sides so that the t-th frame is centered at time t \\times\n", "source": "https://pytorch.org/docs/stable/generated/torch.istft.html", "category": "pytorch docs"} {"text": "\\text{hop_length}. (Default: \"True\")\n * **normalized** (*bool*) -- Whether the STFT was normalized.\n (Default: \"False\")\n\n * **onesided** (*Optional**[**bool**]*) -- Whether the STFT was\n onesided. (Default: \"True\" if \"n_fft != fft_size\" in the input\n size)\n\n * **length** (*Optional**[**int**]*) -- The amount to trim the\n signal by (i.e. the original signal length). (Default: whole\n signal)\n\n * **return_complex** (*Optional**[**bool**]*) -- Whether the\n output should be complex, or if the input should be assumed to\n derive from a real signal and window. Note that this is\n incompatible with \"onesided=True\". (Default: \"False\")\n\nReturns:\n Least squares estimation of the original signal of size (...,\n signal_length)\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.istft.html", "category": "pytorch docs"} {"text": "Softmax\nclass torch.nn.Softmax(dim=None)\nApplies the Softmax function to an n-dimensional input Tensor\n rescaling them so that the elements of the n-dimensional output\n Tensor lie in the range [0,1] and sum to 1.\nSoftmax is defined as:\n \\text{Softmax}(x_{i}) = \\frac{\\exp(x_i)}{\\sum_j \\exp(x_j)}\n\nWhen the input Tensor is a sparse tensor then the unspecified\n values are treated as \"-inf\".\nShape:\n * Input: (*) where *** means, any number of additional\n dimensions\n * Output: (*), same shape as the input\n\nReturns:\n a Tensor of the same dimension and shape as the input with\n values in the range [0, 1]\nParameters:\n dim (int) -- A dimension along which Softmax will be\n computed (so every slice along dim will sum to 1).\nReturn type:\n None\nNote:\n This module doesn't work directly with NLLLoss, which expects the\n Log to be computed between the Softmax and itself. 
Use\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html", "category": "pytorch docs"} {"text": "LogSoftmax instead (it's faster and has better numerical\n properties).\nExamples:\n >>> m = nn.Softmax(dim=1)\n >>> input = torch.randn(2, 3)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Softmax.html", "category": "pytorch docs"} {"text": "torch.empty_like\ntorch.empty_like(input, *, dtype=None, layout=None, device=None, requires_grad=False, memory_format=torch.preserve_format) -> Tensor\nReturns an uninitialized tensor with the same size as \"input\".\n \"torch.empty_like(input)\" is equivalent to\n \"torch.empty(input.size(), dtype=input.dtype, layout=input.layout,\n device=input.device)\".\nParameters:\n input (Tensor) -- the size of \"input\" will determine size\n of the output tensor.\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned Tensor. Default: if \"None\", defaults to the dtype\n of \"input\".\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned tensor. Default: if \"None\", defaults to the layout of\n \"input\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", defaults to the device of\n \"input\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.empty_like.html", "category": "pytorch docs"} {"text": "\"input\".\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. Default: \"False\".\n\n * **memory_format** (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. Default:\n \"torch.preserve_format\".\n\nExample:\n >>> a=torch.empty((2,3), dtype=torch.int32, device = 'cuda')\n >>> torch.empty_like(a)\n tensor([[0, 0, 0],\n [0, 0, 0]], device='cuda:0', dtype=torch.int32)\n", "source": "https://pytorch.org/docs/stable/generated/torch.empty_like.html", "category": "pytorch docs"} {"text": "torch.linalg.matrix_exp\ntorch.linalg.matrix_exp(A) -> Tensor\nComputes the matrix exponential of a square matrix.\nLetting \\mathbb{K} be \\mathbb{R} or \\mathbb{C}, this function\n computes the matrix exponential of A \\in \\mathbb{K}^{n \\times\n n}, which is defined as\n \\mathrm{matrix_exp}(A) = \\sum_{k=0}^\\infty \\frac{1}{k!}A^k \\in\n \\mathbb{K}^{n \\times n}.\n\nIf the matrix A has eigenvalues \\lambda_i \\in \\mathbb{C}, the\n matrix \\mathrm{matrix_exp}(A) has eigenvalues e^{\\lambda_i} \\in\n \\mathbb{C}.\nSupports input of bfloat16, float, double, cfloat and cdouble\n dtypes. 
Also supports batches of matrices, and if \"A\" is a batch of\n matrices then the output has the same batch dimensions.\nParameters:\n A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions.\nExample:\n >>> A = torch.empty(2, 2, 2)\n >>> A[0, :, :] = torch.eye(2, 2)\n >>> A[1, :, :] = 2 * torch.eye(2, 2)\n >>> A\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_exp.html", "category": "pytorch docs"} {"text": "\n\n\nA\n tensor([[[1., 0.],\n [0., 1.]],\n\n\n\n [[2., 0.],\n [0., 2.]]])\n >>> torch.linalg.matrix_exp(A)\n tensor([[[2.7183, 0.0000],\n [0.0000, 2.7183]],\n\n [[7.3891, 0.0000],\n [0.0000, 7.3891]]])\n\n >>> import math\n >>> A = torch.tensor([[0, math.pi/3], [-math.pi/3, 0]]) # A is skew-symmetric\n >>> torch.linalg.matrix_exp(A) # matrix_exp(A) = [[cos(pi/3), sin(pi/3)], [-sin(pi/3), cos(pi/3)]]\n tensor([[ 0.5000, 0.8660],\n [-0.8660, 0.5000]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.matrix_exp.html", "category": "pytorch docs"} {"text": "torch.jit.unused\ntorch.jit.unused(fn)\nThis decorator indicates to the compiler that a function or method\n should be ignored and replaced with the raising of an exception.\n This allows you to leave code in your model that is not yet\n TorchScript compatible and still export your model.\n Example (using \"@torch.jit.unused\" on a method):\n\n import torch\n import torch.nn as nn\n\n class MyModule(nn.Module):\n def __init__(self, use_memory_efficient):\n super(MyModule, self).__init__()\n self.use_memory_efficient = use_memory_efficient\n\n @torch.jit.unused\n def memory_efficient(self, x):\n import pdb\n pdb.set_trace()\n return x + 10\n\n def forward(self, x):\n # Use not-yet-scriptable memory efficient mode\n if self.use_memory_efficient:\n return self.memory_efficient(x)\n else:\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.unused.html", "category": "pytorch docs"} {"text": "else:\n return x + 10\n m = torch.jit.script(MyModule(use_memory_efficient=False))\n m.save(\"m.pt\")\n\n m = torch.jit.script(MyModule(use_memory_efficient=True))\n # exception raised\n m(torch.rand(100))\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.unused.html", "category": "pytorch docs"} {"text": "FXFloatFunctional\nclass torch.ao.nn.quantized.FXFloatFunctional\nmodule to replace FloatFunctional module before FX graph mode\n quantization, since activation_post_process will be inserted in top\n level module directly\nValid operation names:\n * add\n * cat\n\n * mul\n\n * add_relu\n\n * add_scalar\n\n * mul_scalar\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.FXFloatFunctional.html", "category": "pytorch docs"} {"text": "fuse_modules\nclass torch.quantization.fuse_modules(model, modules_to_fuse, inplace=False, fuser_func=, fuse_custom_config_dict=None)\nFuses a list of modules into a single module\nFuses only the following sequence of modules: conv, bn conv, bn,\n relu conv, relu linear, relu bn, relu All other sequences are left\n unchanged. 
For these sequences, replaces the first item in the list\n with the fused module, replacing the rest of the modules with\n identity.\nParameters:\n * model -- Model containing the modules to be fused\n * **modules_to_fuse** -- list of list of module names to fuse.\n Can also be a list of strings if there is only a single list\n of modules to fuse.\n\n * **inplace** -- bool specifying if fusion happens in place on\n the model, by default a new model is returned\n\n * **fuser_func** -- Function that takes in a list of modules and\n outputs a list of fused modules of the same length. For\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fuse_modules.html", "category": "pytorch docs"} {"text": "example, fuser_func([convModule, BNModule]) returns the list\n [ConvBNModule, nn.Identity()] Defaults to\n torch.ao.quantization.fuse_known_modules\n * **fuse_custom_config_dict** -- custom configuration for fusion\n\n # Example of fuse_custom_config_dict\n fuse_custom_config_dict = {\n # Additional fuser_method mapping\n \"additional_fuser_method_mapping\": {\n (torch.nn.Conv2d, torch.nn.BatchNorm2d): fuse_conv_bn\n },\n }\n\nReturns:\n model with fused modules. A new copy is created if inplace=True.\nExamples:\n >>> m = M().eval()\n >>> # m is a module containing the sub-modules below\n >>> modules_to_fuse = [ ['conv1', 'bn1', 'relu1'], ['submodule.conv', 'submodule.relu']]\n >>> fused_m = torch.ao.quantization.fuse_modules(m, modules_to_fuse)\n >>> output = fused_m(input)\n\n >>> m = M().eval()\n >>> # Alternately provide a single list of modules to fuse\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fuse_modules.html", "category": "pytorch docs"} {"text": "\n\n\nmodules_to_fuse = ['conv1', 'bn1', 'relu1']\n >>> fused_m = torch.ao.quantization.fuse_modules(m, modules_to_fuse)\n >>> output = fused_m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fuse_modules.html", "category": "pytorch docs"} {"text": "torch.round\ntorch.round(input, *, decimals=0, out=None) -> Tensor\nRounds elements of \"input\" to the nearest integer.\nFor integer inputs, follows the array-api convention of returning a\n copy of the input tensor.\nNote:\n This function implements the \"round half to even\" to break ties\n when a number is equidistant from two integers (e.g. *round(2.5)*\n is 2).When the :attr:`decimals` argument is specified the\n algorithm used is similar to NumPy's *around*. This algorithm is\n fast but inexact and it can easily overflow for low precision\n dtypes. Eg. *round(tensor([10000], dtype=torch.float16),\n decimals=3)* is *inf*.\n\nSee also:\n \"torch.ceil()\", which rounds up. \"torch.floor()\", which rounds\n down. \"torch.trunc()\", which rounds towards zero.\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **decimals** (*int*) -- Number of decimal places to round to\n (default: 0). 
If decimals is negative, it specifies the number\n", "source": "https://pytorch.org/docs/stable/generated/torch.round.html", "category": "pytorch docs"} {"text": "of positions to the left of the decimal point.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.round(torch.tensor((4.7, -2.3, 9.1, -7.7)))\n tensor([ 5., -2., 9., -8.])\n\n >>> # Values equidistant from two integers are rounded towards the\n >>> # the nearest even value (zero is treated as even)\n >>> torch.round(torch.tensor([-0.5, 0.5, 1.5, 2.5]))\n tensor([-0., 0., 2., 2.])\n\n >>> # A positive decimals argument rounds to the to that decimal place\n >>> torch.round(torch.tensor([0.1234567]), decimals=3)\n tensor([0.1230])\n\n >>> # A negative decimals argument rounds to the left of the decimal\n >>> torch.round(torch.tensor([1200.1234567]), decimals=-3)\n tensor([1000.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.round.html", "category": "pytorch docs"} {"text": "torch.is_tensor\ntorch.is_tensor(obj)\nReturns True if obj is a PyTorch tensor.\nNote that this function is simply doing \"isinstance(obj, Tensor)\".\n Using that \"isinstance\" check is better for typechecking with mypy,\n and more explicit - so it's recommended to use that instead of\n \"is_tensor\".\nParameters:\n obj (Object) -- Object to test\nExample:\n >>> x = torch.tensor([1, 2, 3])\n >>> torch.is_tensor(x)\n True\n", "source": "https://pytorch.org/docs/stable/generated/torch.is_tensor.html", "category": "pytorch docs"} {"text": "torch._foreach_sin\ntorch._foreach_sin(self: List[Tensor]) -> List[Tensor]\nApply \"torch.sin()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_sin.html", "category": "pytorch docs"} {"text": "FractionalMaxPool2d\nclass torch.nn.FractionalMaxPool2d(kernel_size, output_size=None, output_ratio=None, return_indices=False, _random_samples=None)\nApplies a 2D fractional max pooling over an input signal composed\n of several input planes.\nFractional MaxPooling is described in detail in the paper\n Fractional MaxPooling by Ben Graham\nThe max-pooling operation is applied in kH \\times kW regions by a\n stochastic step size determined by the target output size. The\n number of output features is equal to the number of input planes.\nParameters:\n * kernel_size (Union[int, Tuple[int,\n int]]) -- the size of the window to take a max over.\n Can be a single number k (for a square kernel of k x k) or a\n tuple (kh, kw)\n * **output_size** (*Union**[**int**, **Tuple**[**int**,\n **int**]**]*) -- the target output size of the image of the\n form *oH x oW*. Can be a tuple *(oH, oW)* or a single number\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool2d.html", "category": "pytorch docs"} {"text": "oH for a square image oH x oH\n * **output_ratio** (*Union**[**float**, **Tuple**[**float**,\n **float**]**]*) -- If one wants to have an output size as a\n ratio of the input size, this option can be given. This has to\n be a number or tuple in the range (0, 1)\n\n * **return_indices** (*bool*) -- if \"True\", will return the\n indices along with the outputs. Useful to pass to\n \"nn.MaxUnpool2d()\". 
Default: \"False\"\n\nShape:\n * Input: (N, C, H_{in}, W_{in}) or (C, H_{in}, W_{in}).\n * Output: (N, C, H_{out}, W_{out}) or (C, H_{out}, W_{out}),\n where (H_{out}, W_{out})=\\text{output\\_size} or (H_{out},\n W_{out})=\\text{output\\_ratio} \\times (H_{in}, W_{in}).\n\n-[ Examples ]-\n\n\n\npool of square window of size=3, and target output size 13x12\nm = nn.FractionalMaxPool2d(3, output_size=(13, 12))\npool of square window and target output size being half of input image size\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool2d.html", "category": "pytorch docs"} {"text": "\n\n\nm = nn.FractionalMaxPool2d(3, output_ratio=(0.5, 0.5))\ninput = torch.randn(20, 16, 50, 32)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.FractionalMaxPool2d.html", "category": "pytorch docs"} {"text": "torch.repeat_interleave\ntorch.repeat_interleave(input, repeats, dim=None, *, output_size=None) -> Tensor\nRepeat elements of a tensor.\nWarning:\n This is different from \"torch.Tensor.repeat()\" but similar to\n \"numpy.repeat\".\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **repeats** (*Tensor** or **int*) -- The number of repetitions\n for each element. repeats is broadcasted to fit the shape of\n the given axis.\n\n * **dim** (*int**, **optional*) -- The dimension along which to\n repeat values. By default, use the flattened input array, and\n return a flat output array.\n\nKeyword Arguments:\n output_size (int, optional) -- Total output size for\n the given axis ( e.g. sum of repeats). If given, it will avoid\n stream synchronization needed to calculate output shape of the\n tensor.\nReturns:\n Repeated tensor which has the same shape as input, except along\n the given axis.", "source": "https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html", "category": "pytorch docs"} {"text": "the given axis.\nReturn type:\n Tensor\nExample:\n >>> x = torch.tensor([1, 2, 3])\n >>> x.repeat_interleave(2)\n tensor([1, 1, 2, 2, 3, 3])\n >>> y = torch.tensor([[1, 2], [3, 4]])\n >>> torch.repeat_interleave(y, 2)\n tensor([1, 1, 2, 2, 3, 3, 4, 4])\n >>> torch.repeat_interleave(y, 3, dim=1)\n tensor([[1, 1, 1, 2, 2, 2],\n [3, 3, 3, 4, 4, 4]])\n >>> torch.repeat_interleave(y, torch.tensor([1, 2]), dim=0)\n tensor([[1, 2],\n [3, 4],\n [3, 4]])\n >>> torch.repeat_interleave(y, torch.tensor([1, 2]), dim=0, output_size=3)\n tensor([[1, 2],\n [3, 4],\n [3, 4]])\n\ntorch.repeat_interleave(repeats, *, output_size=None) -> Tensor\nIf the repeats is tensor([n1, n2, n3, ...]), then the output\n will be tensor([0, 0, ..., 1, 1, ..., 2, 2, ..., ...]) where 0\n appears n1 times, 1 appears n2 times, 2 appears n3 times,\n etc.", "source": "https://pytorch.org/docs/stable/generated/torch.repeat_interleave.html", "category": "pytorch docs"} {"text": "torch.cuda.is_available\ntorch.cuda.is_available()\nReturns a bool indicating if CUDA is currently available.\nReturn type:\n bool", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.is_available.html", "category": "pytorch docs"} {"text": "torch.Tensor.norm\nTensor.norm(p='fro', dim=None, keepdim=False, dtype=None)\nSee \"torch.norm()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.norm.html", "category": "pytorch docs"} {"text": "torch.Tensor.arccosh\nTensor.arccosh()\nacosh() -> Tensor\nSee \"torch.arccosh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.arccosh.html", "category": "pytorch docs"} {"text": 
"torch.Tensor.nelement\nTensor.nelement() -> int\nAlias for \"numel()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.nelement.html", "category": "pytorch docs"} {"text": "torch.nn.functional.relu\ntorch.nn.functional.relu(input, inplace=False) -> Tensor\nApplies the rectified linear unit function element-wise. See \"ReLU\"\n for more details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.relu.html", "category": "pytorch docs"} {"text": "torch.sym_max\ntorch.sym_max(a, b)\nSymInt-aware utility for max().", "source": "https://pytorch.org/docs/stable/generated/torch.sym_max.html", "category": "pytorch docs"} {"text": "torch.clamp\ntorch.clamp(input, min=None, max=None, *, out=None) -> Tensor\nClamps all elements in \"input\" into the range [ \"min\", \"max\" ].\n Letting min_value and max_value be \"min\" and \"max\", respectively,\n this returns:\n y_i = \\min(\\max(x_i, \\text{min\\_value}_i), \\text{max\\_value}_i)\n\nIf \"min\" is \"None\", there is no lower bound. Or, if \"max\" is \"None\"\n there is no upper bound.\nNote:\n If \"min\" is greater than \"max\" \"torch.clamp(..., min, max)\" sets\n all elements in \"input\" to the value of \"max\".\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **min** (*Number** or **Tensor**, **optional*) -- lower-bound\n of the range to be clamped to\n\n * **max** (*Number** or **Tensor**, **optional*) -- upper-bound\n of the range to be clamped to\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([-1.7120, 0.1734, -0.0478, -0.0922])\n", "source": "https://pytorch.org/docs/stable/generated/torch.clamp.html", "category": "pytorch docs"} {"text": "tensor([-1.7120, 0.1734, -0.0478, -0.0922])\n >>> torch.clamp(a, min=-0.5, max=0.5)\n tensor([-0.5000, 0.1734, -0.0478, -0.0922])\n >>> min = torch.linspace(-1, 1, steps=4)\n >>> torch.clamp(a, min=min)\n tensor([-1.0000, 0.1734, 0.3333, 1.0000])\n", "source": "https://pytorch.org/docs/stable/generated/torch.clamp.html", "category": "pytorch docs"} {"text": "torch.Tensor.mode\nTensor.mode(dim=None, keepdim=False)\nSee \"torch.mode()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mode.html", "category": "pytorch docs"} {"text": "L1Unstructured\nclass torch.nn.utils.prune.L1Unstructured(amount)\nPrune (currently unpruned) units in a tensor by zeroing out the\n ones with the lowest L1-norm.\nParameters:\n amount (int or float) -- quantity of parameters to\n prune. If \"float\", should be between 0.0 and 1.0 and represent\n the fraction of parameters to prune. If \"int\", it represents the\n absolute number of parameters to prune.\nclassmethod apply(module, name, amount, importance_scores=None)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n\n Parameters:\n * **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\n * **amount** (*int** or **float*) -- quantity of parameters\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.L1Unstructured.html", "category": "pytorch docs"} {"text": "to prune. If \"float\", should be between 0.0 and 1.0 and\n represent the fraction of parameters to prune. 
If \"int\", it\n represents the absolute number of parameters to prune.\n * **importance_scores** (*torch.Tensor*) -- tensor of\n importance scores (of same shape as module parameter) used\n to compute mask for pruning. The values in this tensor\n indicate the importance of the corresponding elements in\n the parameter being pruned. If unspecified or None, the\n module parameter will be used in its place.\n\napply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n\n Parameters:\n **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n Returns:\n pruned version of the input tensor\n\n Return type:\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.L1Unstructured.html", "category": "pytorch docs"} {"text": "Return type:\n pruned_tensor (torch.Tensor)\nprune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n\n Parameters:\n * **t** (*torch.Tensor*) -- tensor to prune (of same\n dimensions as \"default_mask\").\n\n * **importance_scores** (*torch.Tensor*) -- tensor of\n importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n\n * **default_mask** (*torch.Tensor**, **optional*) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.L1Unstructured.html", "category": "pytorch docs"} {"text": "Returns:\n pruned version of tensor \"t\".\nremove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n\n Note:\n\n Pruning itself is NOT undone or reversed!\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.L1Unstructured.html", "category": "pytorch docs"} {"text": "torch.cuda.set_device\ntorch.cuda.set_device(device)\nSets the current device.\nUsage of this function is discouraged in favor of \"device\". In most\n cases it's better to use \"CUDA_VISIBLE_DEVICES\" environmental\n variable.\nParameters:\n device (torch.device or int) -- selected device. 
This\n function is a no-op if this argument is negative.", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.set_device.html", "category": "pytorch docs"} {"text": "torch.Tensor.i0\nTensor.i0() -> Tensor\nSee \"torch.i0()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.i0.html", "category": "pytorch docs"} {"text": "torch.Tensor.orgqr\nTensor.orgqr(input2) -> Tensor\nSee \"torch.orgqr()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.orgqr.html", "category": "pytorch docs"} {"text": "torch.Tensor.signbit\nTensor.signbit() -> Tensor\nSee \"torch.signbit()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.signbit.html", "category": "pytorch docs"} {"text": "torch.Tensor.dequantize\nTensor.dequantize() -> Tensor\nGiven a quantized Tensor, dequantize it and return the dequantized\n float Tensor.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.dequantize.html", "category": "pytorch docs"} {"text": "torch.fft.fft2\ntorch.fft.fft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor\nComputes the 2 dimensional discrete Fourier transform of \"input\".\n Equivalent to \"fftn()\" but FFTs only the last two dimensions by\n default.\nNote:\n The Fourier domain representation of any real signal satisfies\n the Hermitian property: \"X[i, j] = conj(X[-i, -j])\". This\n function always returns all positive and negative frequency terms\n even though, for real inputs, half of these values are redundant.\n \"rfft2()\" returns the more compact one-sided representation where\n only the positive frequencies of the last dimension are returned.\n\nNote:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fft2.html", "category": "pytorch docs"} {"text": "transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the FFT. If a length \"-1\" is specified, no padding\n is done in that dimension. Default: \"s = [input.size(d) for d\n in dim]\"\n * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be\n transformed. Default: last two dimensions.\n\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the forward transform (\"fft2()\"),\n these correspond to:\n\n * \"\"forward\"\" - normalize by \"1/n\"\n\n * \"\"backward\"\" - no normalization\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the FFT\n orthonormal)\n\n Where \"n = prod(s)\" is the logical FFT size. Calling the\n backward transform (\"ifft2()\") with the same normalization\n mode will apply an overall normalization of \"1/n\" between the\n two transforms. 
This is required to make \"ifft2()\" the exact\n inverse.\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fft2.html", "category": "pytorch docs"} {"text": "inverse.\n Default is \"\"backward\"\" (no normalization).\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nx = torch.rand(10, 10, dtype=torch.complex64)\nfft2 = torch.fft.fft2(x)\n\n\n\nThe discrete Fourier transform is separable, so \"fft2()\" here is\n equivalent to two one-dimensional \"fft()\" calls:\n\n\n\ntwo_ffts = torch.fft.fft(torch.fft.fft(x, dim=0), dim=1)\ntorch.testing.assert_close(fft2, two_ffts, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fft2.html", "category": "pytorch docs"} {"text": "LazyConvTranspose2d\nclass torch.nn.LazyConvTranspose2d(out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\nA \"torch.nn.ConvTranspose2d\" module with lazy initialization of the\n \"in_channels\" argument of the \"ConvTranspose2d\" that is inferred\n from the \"input.size(1)\". The attributes that will be lazily\n initialized are weight and bias.\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * out_channels (int) -- Number of channels produced by the\n convolution\n * **kernel_size** (*int** or **tuple*) -- Size of the convolving\n kernel\n\n * **stride** (*int** or **tuple**, **optional*) -- Stride of the\n convolution. Default: 1\n\n * **padding** (*int** or **tuple**, **optional*) -- \"dilation *\n (kernel_size - 1) - padding\" zero-padding will be added to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose2d.html", "category": "pytorch docs"} {"text": "both sides of each dimension in the input. Default: 0\n * **output_padding** (*int** or **tuple**, **optional*) --\n Additional size added to one side of each dimension in the\n output shape. Default: 0\n\n * **groups** (*int**, **optional*) -- Number of blocked\n connections from input channels to output channels. Default: 1\n\n * **bias** (*bool**, **optional*) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n\n * **dilation** (*int** or **tuple**, **optional*) -- Spacing\n between kernel elements. 
Default: 1\n\nSee also:\n \"torch.nn.ConvTranspose2d\" and\n \"torch.nn.modules.lazy.LazyModuleMixin\"\n\ncls_to_become\n alias of \"ConvTranspose2d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConvTranspose2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.ndimension\nTensor.ndimension() -> int\nAlias for \"dim()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ndimension.html", "category": "pytorch docs"} {"text": "torch.Tensor.reciprocal_\nTensor.reciprocal_() -> Tensor\nIn-place version of \"reciprocal()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.reciprocal_.html", "category": "pytorch docs"} {"text": "torch.Tensor.minimum\nTensor.minimum(other) -> Tensor\nSee \"torch.minimum()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.minimum.html", "category": "pytorch docs"} {"text": "torch._foreach_erf\ntorch._foreach_erf(self: List[Tensor]) -> List[Tensor]\nApply \"torch.erf()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_erf.html", "category": "pytorch docs"} {"text": "torch.jit.freeze\ntorch.jit.freeze(mod, preserved_attrs=None, optimize_numerics=True)\nFreezing a \"ScriptModule\" will clone it and attempt to inline the\n cloned module's submodules, parameters, and attributes as constants\n in the TorchScript IR Graph. By default, forward will be\n preserved, as well as attributes & methods specified in\n preserved_attrs. Additionally, any attribute that is modified\n within a preserved method will be preserved.\nFreezing currently only accepts ScriptModules that are in eval\n mode.\nFreezing applies generic optimization that will speed up your model\n regardless of machine. To further optimize using server-specific\n settings, run optimize_for_inference after freezing.\nParameters:\n * mod (\"ScriptModule\") -- a module to be frozen\n * **preserved_attrs** (*Optional**[**List**[**str**]**]*) -- a\n list of attributes to preserve in addition to the forward\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.freeze.html", "category": "pytorch docs"} {"text": "method. Attributes modified in preserved methods will also be\n preserved.\n * **optimize_numerics** (*bool*) -- If \"True\", a set of\n optimization passes will be run that does not strictly\n preserve numerics. 
Full details of optimization can be found\n at *torch.jit.run_frozen_optimizations*.\n\nReturns:\n Frozen \"ScriptModule\".\nExample (Freezing a simple module with a Parameter):\n def forward(self, input):\n output = self.weight.mm(input)\n output = self.linear(output)\n return output\n\n scripted_module = torch.jit.script(MyModule(2, 3).eval())\n frozen_module = torch.jit.freeze(scripted_module)\n # parameters have been removed and inlined into the Graph as constants\n assert len(list(frozen_module.named_parameters())) == 0\n # See the compiled graph as Python code\n print(frozen_module.code)\n\nExample (Freezing a module with preserved attributes)\n def forward(self, input):\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.freeze.html", "category": "pytorch docs"} {"text": "def forward(self, input):\n self.modified_tensor += 1\n return input + self.modified_tensor\n scripted_module = torch.jit.script(MyModule2().eval())\n frozen_module = torch.jit.freeze(scripted_module, preserved_attrs=[\"version\"])\n # we've manually preserved `version`, so it still exists on the frozen module and can be modified\n assert frozen_module.version == 1\n frozen_module.version = 2\n # `modified_tensor` is detected as being mutated in the forward, so freezing preserves\n # it to retain model semantics\n assert frozen_module(torch.tensor(1)) == torch.tensor(12)\n # now that we've run it once, the next result will be incremented by one\n assert frozen_module(torch.tensor(1)) == torch.tensor(13)\n\nNote:\n Freezing submodule attributes is also supported: frozen_module =\n torch.jit.freeze(scripted_module,\n preserved_attrs=[\"submodule.version\"])\n\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.jit.freeze.html", "category": "pytorch docs"} {"text": "Note:\n If you're not sure why an attribute is not being inlined as a\n constant, you can run *dump_alias_db* on\n frozen_module.forward.graph to see if freezing has detected the\n attribute is being modified.\n\nNote:\n Because freezing makes weights constants and removes module\n hierarchy, *to* and other nn.Module methods to manipulate device\n or dtype no longer work. As a workaround, You can remap devices\n by specifying *map_location* in *torch.jit.load*, however device-\n specific logic may have been baked into the model.\n", "source": "https://pytorch.org/docs/stable/generated/torch.jit.freeze.html", "category": "pytorch docs"} {"text": "torch.cuda.comm.gather\ntorch.cuda.comm.gather(tensors, dim=0, destination=None, *, out=None)\nGathers tensors from multiple GPU devices.\nParameters:\n * tensors (Iterable[Tensor]) -- an iterable of\n tensors to gather. Tensor sizes in all dimensions other than\n \"dim\" have to match.\n * **dim** (*int**, **optional*) -- a dimension along which the\n tensors will be concatenated. Default: \"0\".\n\n * **destination** (*torch.device**, **str**, or **int**,\n **optional*) -- the output device. Can be CPU or CUDA.\n Default: the current CUDA device.\n\n * **out** (*Tensor**, **optional**, **keyword-only*) -- the\n tensor to store gather result. Its sizes must match those of\n \"tensors\", except for \"dim\", where the size must equal\n \"sum(tensor.size(dim) for tensor in tensors)\". 
Can be on CPU\n or CUDA.\n\nNote:\n \"destination\" must not be specified when \"out\" is specified.\n\nReturns:", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.gather.html", "category": "pytorch docs"} {"text": "Returns:\n * If \"destination\" is specified,\n a tensor located on \"destination\" device, that is a result\n of concatenating \"tensors\" along \"dim\".\n * If \"out\" is specified,\n the \"out\" tensor, now containing results of concatenating\n \"tensors\" along \"dim\".\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.comm.gather.html", "category": "pytorch docs"} {"text": "GraphInfo\nclass torch.onnx.verification.GraphInfo(graph, input_args, params_dict, export_options=, id='', _EXCLUDED_NODE_KINDS=frozenset({'aten::ScalarImplicit', 'prim::Constant', 'prim::ListConstruct'}))\nGraphInfo contains validation information of a TorchScript graph\n and its converted ONNX graph.\nall_mismatch_leaf_graph_info()\n Return a list of all leaf *GraphInfo* objects that have\n mismatch.\n\n Return type:\n *List*[*GraphInfo*]\n\nclear()\n Clear states and results of previous verification.\n\nessential_node_count()\n Return the number of nodes in the subgraph excluding those in\n *_EXCLUDED_NODE_KINDS*.\n\n Return type:\n int\n\nessential_node_kinds()\n Return the set of node kinds in the subgraph excluding those in\n *_EXCLUDED_NODE_KINDS*.\n\n Return type:\n *Set*[str]\n\nexport_repro(repro_dir=None, name=None)\n Export the subgraph to ONNX along with the input/output data for\n repro.\n", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html", "category": "pytorch docs"} {"text": "repro.\n The repro directory will contain the following files:\n\n dir\n \u251c\u2500\u2500 test_\n \u2502   \u251c\u2500\u2500 model.onnx\n \u2502   \u2514\u2500\u2500 test_data_set_0\n \u2502       \u251c\u2500\u2500 input_0.pb\n \u2502       \u251c\u2500\u2500 input_1.pb\n \u2502       \u251c\u2500\u2500 output_0.pb\n \u2502       \u2514\u2500\u2500 output_1.pb\n\n Parameters:\n * **repro_dir** (*Optional**[**str**]*) -- The directory to\n export the repro files to. Defaults to current working\n directory if None.\n\n * **name** (*Optional**[**str**]*) -- An optional name for\n the test case folder: \"test_{name}\".\n\n Returns:\n The path to the exported repro directory.\n\n Return type:\n str\n\nfind_mismatch(options=None)\n Find all mismatches between the TorchScript IR graph and the\n exported onnx model.\n\n Binary searches the model graph to find the minimal subgraph\n that exhibits the mismatch. 
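For orientation, a rough end-to-end sketch follows. It assumes the companion \"torch.onnx.verification.find_mismatch()\" entry point (which builds and returns the root *GraphInfo*) and an installed ONNX Runtime backend; the model, input shape, and call pattern are illustrative only, not a prescribed recipe.

    import torch
    from torch.onnx import verification

    class TinyModel(torch.nn.Module):
        def forward(self, x):
            # stand-in graph; a real debugging session would use the failing model
            return torch.nn.functional.relu(x) + 1.0

    # Hypothetical driver: export the model, compare torch outputs against the
    # ONNX backend, and let the binary search isolate mismatching subgraphs.
    graph_info = verification.find_mismatch(TinyModel(), (torch.randn(2, 3),))
    graph_info.pretty_print_tree()
    for leaf in graph_info.all_mismatch_leaf_graph_info():
        leaf.export_repro()  # write a per-leaf ONNX repro directory

When the export verifies cleanly there are no mismatch leaves and the loop above is a no-op.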
A *GraphInfo* object is created for\n", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html", "category": "pytorch docs"} {"text": "each subgraph, recording the test inputs and export options, as\n well as the validation results.\n Parameters:\n **options** (*Optional**[**VerificationOptions**]*) -- The\n verification options.\n\nfind_partition(id)\n Find the *GraphInfo* object with the given id.\n\n Return type:\n *Optional*[*GraphInfo*]\n\nhas_mismatch()\n Return True if the subgraph has output mismatch between torch\n and ONNX.\n\n Return type:\n bool\n\npretty_print_mismatch(graph=False)\n Pretty print details of the mismatch between torch and ONNX.\n\n Parameters:\n **graph** (*bool*) -- If True, print the ATen JIT graph and\n ONNX graph.\n\npretty_print_tree()\n Pretty print *GraphInfo* tree.\n\n Each node represents a subgraph, showing the number of nodes in\n the subgraph and a check mark if the subgraph has output\n mismatch between torch and ONNX.\n\n The id of the subgraph is shown under the node. The *GraphInfo*\n", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html", "category": "pytorch docs"} {"text": "object for any subgraph can be retrieved by calling\n graph_info.find_partition(id).\n Example:\n\n ==================================== Tree: =====================================\n 5 X __2 X __1 \u2713\n id: | id: 0 | id: 00\n | |\n | |__1 X (aten::relu)\n | id: 01\n |\n |__3 X __1 \u2713\n id: 1 | id: 10\n |\n |__2 X __1 X (aten::relu)\n id: 11 | id: 110\n |\n |__1 \u2713\n id: 111\n =========================== Mismatch leaf subgraphs: ===========================\n ['01', '110']\n ============================= Mismatch node kinds: =============================\n {'aten::relu': 2}\n\nverify_export(options)\n Verify the export from TorchScript IR graph to ONNX.\n", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html", "category": "pytorch docs"} {"text": "Export the TorchScript IR graph to ONNX, with the inputs,\n parameters and export options recorded in this object. Then\n verify the exported ONNX graph against the original TorchScript\n IR graph under the provided verification options.\n Parameters:\n **options** (*VerificationOptions*) -- The verification\n options.\n\n Returns:\n The AssertionError raised during the verification. Returns\n None if no error is raised. onnx_graph: The exported ONNX\n graph in TorchScript IR format. onnx_outs: The outputs from\n running exported ONNX model under the onnx backend in\n *options*. pt_outs: The outputs from running the TorchScript\n IR graph.\n\n Return type:\n error\n", "source": "https://pytorch.org/docs/stable/generated/torch.onnx.verification.GraphInfo.html", "category": "pytorch docs"} {"text": "Threshold\nclass torch.nn.Threshold(threshold, value, inplace=False)\nThresholds each element of the input Tensor.\nThreshold is defined as:\n y = \\begin{cases} x, &\\text{ if } x > \\text{threshold} \\\\\n \\text{value}, &\\text{ otherwise } \\end{cases}\n\nParameters:\n * threshold (float) -- The value to threshold at\n * **value** (*float*) -- The value to replace with\n\n * **inplace** (*bool*) -- can optionally do the operation in-\n place. 
Default: \"False\"\n\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\nExamples:\n >>> m = nn.Threshold(0.1, 20)\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Threshold.html", "category": "pytorch docs"} {"text": "torch.addmv\ntorch.addmv(input, mat, vec, *, beta=1, alpha=1, out=None) -> Tensor\nPerforms a matrix-vector product of the matrix \"mat\" and the vector\n \"vec\". The vector \"input\" is added to the final result.\nIf \"mat\" is a (n \\times m) tensor, \"vec\" is a 1-D tensor of size\n m, then \"input\" must be broadcastable with a 1-D tensor of size\n n and \"out\" will be 1-D tensor of size n.\n\"alpha\" and \"beta\" are scaling factors on matrix-vector product\n between \"mat\" and \"vec\" and the added tensor \"input\" respectively.\n \\text{out} = \\beta\\ \\text{input} + \\alpha\\ (\\text{mat}\n \\mathbin{@} \\text{vec})\n\nIf \"beta\" is 0, then \"input\" will be ignored, and nan and inf\n in it will not be propagated.\nFor inputs of type FloatTensor or DoubleTensor, arguments\n \"beta\" and \"alpha\" must be real numbers, otherwise they should be\n integers.\nParameters:\n * input (Tensor) -- vector to be added\n * **mat** (*Tensor*) -- matrix to be matrix multiplied\n", "source": "https://pytorch.org/docs/stable/generated/torch.addmv.html", "category": "pytorch docs"} {"text": "\nvec (Tensor) -- vector to be matrix multiplied\n\nKeyword Arguments:\n * beta (Number, optional) -- multiplier for \"input\"\n (\\beta)\n * **alpha** (*Number**, **optional*) -- multiplier for mat @ vec\n (\\alpha)\n\n * **out** (*Tensor**, **optional*) -- the output tensor.\n\nExample:\n >>> M = torch.randn(2)\n >>> mat = torch.randn(2, 3)\n >>> vec = torch.randn(3)\n >>> torch.addmv(M, mat, vec)\n tensor([-0.3768, -5.5565])\n", "source": "https://pytorch.org/docs/stable/generated/torch.addmv.html", "category": "pytorch docs"} {"text": "torch.lu_unpack\ntorch.lu_unpack(LU_data, LU_pivots, unpack_data=True, unpack_pivots=True, *, out=None)\nUnpacks the LU decomposition returned by \"lu_factor()\" into the P,\n L, U matrices.\nSee also:\n \"lu()\" returns the matrices from the LU decomposition. Its\n gradient formula is more efficient than that of doing\n \"lu_factor()\" followed by \"lu_unpack()\".\n\nParameters:\n * LU_data (Tensor) -- the packed LU factorization data\n * **LU_pivots** (*Tensor*) -- the packed LU factorization pivots\n\n * **unpack_data** (*bool*) -- flag indicating if the data should\n be unpacked. If \"False\", then the returned \"L\" and \"U\" are\n empty tensors. Default: \"True\"\n\n * **unpack_pivots** (*bool*) -- flag indicating if the pivots\n should be unpacked into a permutation matrix \"P\". If \"False\",\n then the returned \"P\" is an empty tensor. Default: \"True\"\n\nKeyword Arguments:\n out (tuple, optional) -- output tuple of three", "source": "https://pytorch.org/docs/stable/generated/torch.lu_unpack.html", "category": "pytorch docs"} {"text": "tensors. 
Ignored if None.\nReturns:\n A namedtuple \"(P, L, U)\"\nExamples:\n >>> A = torch.randn(2, 3, 3)\n >>> LU, pivots = torch.linalg.lu_factor(A)\n >>> P, L, U = torch.lu_unpack(LU, pivots)\n >>> # We can recover A from the factorization\n >>> A_ = P @ L @ U\n >>> torch.allclose(A, A_)\n True\n\n >>> # LU factorization of a rectangular matrix:\n >>> A = torch.randn(2, 3, 2)\n >>> LU, pivots = torch.linalg.lu_factor(A)\n >>> P, L, U = torch.lu_unpack(LU, pivots)\n >>> # P, L, U are the same as returned by linalg.lu\n >>> P_, L_, U_ = torch.linalg.lu(A)\n >>> torch.allclose(P, P_) and torch.allclose(L, L_) and torch.allclose(U, U_)\n True\n", "source": "https://pytorch.org/docs/stable/generated/torch.lu_unpack.html", "category": "pytorch docs"} {"text": "LayerNorm\nclass torch.nn.LayerNorm(normalized_shape, eps=1e-05, elementwise_affine=True, device=None, dtype=None)\nApplies Layer Normalization over a mini-batch of inputs as\n described in the paper Layer Normalization\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n\nThe mean and standard-deviation are calculated over the last D\n dimensions, where D is the dimension of \"normalized_shape\". For\n example, if \"normalized_shape\" is \"(3, 5)\" (a 2-dimensional shape),\n the mean and standard-deviation are computed over the last 2\n dimensions of the input (i.e. \"input.mean((-2, -1))\"). \\gamma and\n \\beta are learnable affine transform parameters of\n \"normalized_shape\" if \"elementwise_affine\" is \"True\". The standard-\n deviation is calculated via the biased estimator, equivalent to\n torch.var(input, unbiased=False).\nNote:\n Unlike Batch Normalization and Instance Normalization, which\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html", "category": "pytorch docs"} {"text": "applies scalar scale and bias for each entire channel/plane with\n the \"affine\" option, Layer Normalization applies per-element\n scale and bias with \"elementwise_affine\".\nThis layer uses statistics computed from input data in both\n training and evaluation modes.\nParameters:\n * normalized_shape (int or list or torch.Size) --\n input shape from an expected input of size\n\n [* \\times \\text{normalized\\_shape}[0] \\times\n \\text{normalized\\_shape}[1] \\times \\ldots \\times\n \\text{normalized\\_shape}[-1]]\n\n If a single integer is used, it is treated as a singleton\n list, and this module will normalize over the last dimension\n which is expected to be of that specific size.\n\n * **eps** (*float*) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n\n * **elementwise_affine** (*bool*) -- a boolean value that when\n set to \"True\", this module has learnable per-element affine\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html", "category": "pytorch docs"} {"text": "parameters initialized to ones (for weights) and zeros (for\n biases). Default: \"True\".\nVariables:\n * weight -- the learnable weights of the module of shape\n \\text{normalized_shape} when \"elementwise_affine\" is set to\n \"True\". The values are initialized to 1.\n * **bias** -- the learnable bias of the module of shape\n \\text{normalized\\_shape} when \"elementwise_affine\" is set to\n \"True\". 
The values are initialized to 0.\n\nShape:\n * Input: (N, *)\n * Output: (N, *) (same shape as input)\n\nExamples:\n >>> # NLP Example\n >>> batch, sentence_length, embedding_dim = 20, 5, 10\n >>> embedding = torch.randn(batch, sentence_length, embedding_dim)\n >>> layer_norm = nn.LayerNorm(embedding_dim)\n >>> # Activate module\n >>> layer_norm(embedding)\n >>>\n >>> # Image Example\n >>> N, C, H, W = 20, 5, 10, 10\n >>> input = torch.randn(N, C, H, W)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html", "category": "pytorch docs"} {"text": "\n\n\ninput = torch.randn(N, C, H, W)\n >>> # Normalize over the last three dimensions (i.e. the channel and spatial dimensions)\n >>> # as shown in the image below\n >>> layer_norm = nn.LayerNorm([C, H, W])\n >>> output = layer_norm(input)\n\n\n\n[image]", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LayerNorm.html", "category": "pytorch docs"} {"text": "torch.autograd.forward_ad.make_dual\ntorch.autograd.forward_ad.make_dual(tensor, tangent, *, level=None)\nAssociates a tensor value with a forward gradient, the tangent, to\n create a \"dual tensor\", which is used to compute forward AD\n gradients. The result is a new tensor aliased to \"tensor\" with\n \"tangent\" embedded as an attribute as-is if it has the same storage\n layout or copied otherwise. The tangent attribute can be recovered\n with \"unpack_dual()\".\nThis function is backward differentiable.\nGiven a function f whose jacobian is J, it allows one to\n compute the Jacobian-vector product (jvp) between J and a given\n vector v as follows.\nExample:\n >>> with dual_level():\n ... inp = make_dual(x, v)\n ... out = f(inp)\n ... y, jvp = unpack_dual(out)\n\nPlease see the forward-mode AD tutorial for detailed steps on how\n to use this API.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.forward_ad.make_dual.html", "category": "pytorch docs"} {"text": "AdaptiveAvgPool1d\nclass torch.nn.AdaptiveAvgPool1d(output_size)\nApplies a 1D adaptive average pooling over an input signal composed\n of several input planes.\nThe output size is L_{out}, for any input size. The number of\n output features is equal to the number of input planes.\nParameters:\n output_size (Union[int, Tuple[int]]) --\n the target output size L_{out}.\nShape:\n * Input: (N, C, L_{in}) or (C, L_{in}).\n * Output: (N, C, L_{out}) or (C, L_{out}), where\n L_{out}=\\text{output\\_size}.\n\n-[ Examples ]-\n\n\n\ntarget output size of 5\nm = nn.AdaptiveAvgPool1d(5)\ninput = torch.randn(1, 64, 8)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveAvgPool1d.html", "category": "pytorch docs"} {"text": "torch.Tensor.atan_\nTensor.atan_() -> Tensor\nIn-place version of \"atan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.atan_.html", "category": "pytorch docs"} {"text": "torch.fft.ihfft2\ntorch.fft.ihfft2(input, s=None, dim=(- 2, - 1), norm=None, *, out=None) -> Tensor\nComputes the 2-dimensional inverse discrete Fourier transform of\n real \"input\". Equivalent to \"ihfftn()\" but transforms only the two\n last dimensions by default.\nNote:\n Supports torch.half on CUDA with GPU Architecture SM53 or\n greater. However it only supports powers of 2 signal length in\n every transformed dimensions.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the\n transformed dimensions. 
If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the Hermitian IFFT. If a length \"-1\" is specified,\n no padding is done in that dimension. Default: \"s =\n [input.size(d) for d in dim]\"\n\n * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfft2.html", "category": "pytorch docs"} {"text": "transformed. Default: last two dimensions.\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the backward transform (\"ihfft2()\"),\n these correspond to:\n\n * \"\"forward\"\" - no normalization\n\n * \"\"backward\"\" - normalize by \"1/n\"\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the Hermitian\n IFFT orthonormal)\n\n Where \"n = prod(s)\" is the logical IFFT size. Calling the\n forward transform (\"hfft2()\") with the same normalization mode\n will apply an overall normalization of \"1/n\" between the two\n transforms. This is required to make \"ihfft2()\" the exact\n inverse.\n\n Default is \"\"backward\"\" (normalize by \"1/n\").\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nT = torch.rand(10, 10)\nt = torch.fft.ihfft2(t)\nt.size()\n torch.Size([10, 6])\n\n\n\nCompared against the full output from \"ifft2()\", the Hermitian", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfft2.html", "category": "pytorch docs"} {"text": "time-space signal takes up only half the space.\n\n\n\nfftn = torch.fft.ifft2(t)\ntorch.allclose(fftn[..., :6], rfftn)\n True\n\n\n\nThe discrete Fourier transform is separable, so \"ihfft2()\" here is\n equivalent to a combination of \"ifft()\" and \"ihfft()\":\n\n\n\ntwo_ffts = torch.fft.ifft(torch.fft.ihfft(t, dim=1), dim=0)\ntorch.allclose(t, two_ffts)\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.ihfft2.html", "category": "pytorch docs"} {"text": "BNReLU3d\nclass torch.ao.nn.intrinsic.BNReLU3d(batch_norm, relu)\nThis is a sequential container which calls the BatchNorm 3d and\n ReLU modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.BNReLU3d.html", "category": "pytorch docs"} {"text": "torch.nn.functional.rrelu_\ntorch.nn.functional.rrelu_(input, lower=1. / 8, upper=1. / 3, training=False) -> Tensor\nIn-place version of \"rrelu()\".", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.rrelu_.html", "category": "pytorch docs"} {"text": "torch.arcsinh\ntorch.arcsinh(input, *, out=None) -> Tensor\nAlias for \"torch.asinh()\".", "source": "https://pytorch.org/docs/stable/generated/torch.arcsinh.html", "category": "pytorch docs"} {"text": "LazyConv1d\nclass torch.nn.LazyConv1d(out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True, padding_mode='zeros', device=None, dtype=None)\nA \"torch.nn.Conv1d\" module with lazy initialization of the\n \"in_channels\" argument of the \"Conv1d\" that is inferred from the\n \"input.size(1)\". 
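For illustration, a minimal sketch of the lazy initialization described here (the shapes below are arbitrary and not part of the original reference):

    >>> m = torch.nn.LazyConv1d(out_channels=8, kernel_size=3)
    >>> x = torch.randn(4, 16, 50)   # in_channels is inferred as 16 on this first call
    >>> y = m(x)
    >>> y.shape
    torch.Size([4, 8, 48])
    >>> m.weight.shape               # weight has now been materialized
    torch.Size([8, 16, 3])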
The attributes that will be lazily initialized are\n weight and bias.\nCheck the \"torch.nn.modules.lazy.LazyModuleMixin\" for further\n documentation on lazy modules and their limitations.\nParameters:\n * out_channels (int) -- Number of channels produced by the\n convolution\n * **kernel_size** (*int** or **tuple*) -- Size of the convolving\n kernel\n\n * **stride** (*int** or **tuple**, **optional*) -- Stride of the\n convolution. Default: 1\n\n * **padding** (*int** or **tuple**, **optional*) -- Zero-padding\n added to both sides of the input. Default: 0\n\n * **padding_mode** (*str**, **optional*) -- \"'zeros'\",\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConv1d.html", "category": "pytorch docs"} {"text": "\"'reflect'\", \"'replicate'\" or \"'circular'\". Default: \"'zeros'\"\n * **dilation** (*int** or **tuple**, **optional*) -- Spacing\n between kernel elements. Default: 1\n\n * **groups** (*int**, **optional*) -- Number of blocked\n connections from input channels to output channels. Default: 1\n\n * **bias** (*bool**, **optional*) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n\nSee also:\n \"torch.nn.Conv1d\" and \"torch.nn.modules.lazy.LazyModuleMixin\"\n\ncls_to_become\n alias of \"Conv1d\"\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.LazyConv1d.html", "category": "pytorch docs"} {"text": "torch.Tensor.permute\nTensor.permute(*dims) -> Tensor\nSee \"torch.permute()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.permute.html", "category": "pytorch docs"} {"text": "torch.sparse.softmax\ntorch.sparse.softmax(input, dim, *, dtype=None) -> Tensor\nApplies a softmax function.\nSoftmax is defined as:\n\\text{Softmax}(x_{i}) = \\frac{exp(x_i)}{\\sum_j exp(x_j)}\nwhere i, j run over sparse tensor indices and unspecified entries\n are ignores. This is equivalent to defining unspecified entries as\n negative infinity so that exp(x_k) = 0 when the entry with index k\n has not specified.\nIt is applied to all slices along dim, and will re-scale them so\n that the elements lie in the range [0, 1] and sum to 1.\nParameters:\n * input (Tensor) -- input\n * **dim** (*int*) -- A dimension along which softmax will be\n computed.\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is casted\n to \"dtype\" before the operation is performed. This is useful\n for preventing data type overflows. Default: None\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.softmax.html", "category": "pytorch docs"} {"text": "L1Loss\nclass torch.nn.L1Loss(size_average=None, reduce=None, reduction='mean')\nCreates a criterion that measures the mean absolute error (MAE)\n between each element in the input x and target y.\nThe unreduced (i.e. with \"reduction\" set to \"'none'\") loss can be\n described as:\n \\ell(x, y) = L = \\{l_1,\\dots,l_N\\}^\\top, \\quad l_n = \\left| x_n\n - y_n \\right|,\n\nwhere N is the batch size. 
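To make the elementwise definition above concrete, a small illustrative computation (the values are made up, not taken from the reference):

    >>> x = torch.tensor([1.0, -2.0, 3.0])
    >>> y = torch.zeros(3)
    >>> nn.L1Loss(reduction='none')(x, y)   # |x_n - y_n| per element
    tensor([1., 2., 3.])
    >>> nn.L1Loss()(x, y)                   # default 'mean': (1 + 2 + 3) / 3
    tensor(2.)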
If \"reduction\" is not \"'none'\" (default\n \"'mean'\"), then:\n \\ell(x, y) = \\begin{cases} \\operatorname{mean}(L), &\n \\text{if reduction} = \\text{`mean';}\\\\\n \\operatorname{sum}(L), & \\text{if reduction} = \\text{`sum'.}\n \\end{cases}\n\nx and y are tensors of arbitrary shapes with a total of n elements\n each.\nThe sum operation still operates over all the elements, and divides\n by n.\nThe division by n can be avoided if one sets \"reduction = 'sum'\".\nSupports real-valued and complex-valued inputs.\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html", "category": "pytorch docs"} {"text": "Parameters:\n * size_average (bool, optional) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html", "category": "pytorch docs"} {"text": "the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\nShape:\n * Input: (*), where * means any number of dimensions.\n * Target: (*), same shape as the input.\n\n * Output: scalar. If \"reduction\" is \"'none'\", then (*), same\n shape as the input.\n\nExamples:\n >>> loss = nn.L1Loss()\n >>> input = torch.randn(3, 5, requires_grad=True)\n >>> target = torch.randn(3, 5)\n >>> output = loss(input, target)\n >>> output.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.L1Loss.html", "category": "pytorch docs"} {"text": "torch.cuda.mem_get_info\ntorch.cuda.mem_get_info(device=None)\nReturns the global free and total GPU memory occupied for a given\n device using cudaMemGetInfo.\nParameters:\n device (torch.device or int, optional) -- selected\n device. 
Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\nReturn type:\n Tuple[int, int]\nNote:\n See Memory management for more details about GPU memory\n management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.mem_get_info.html", "category": "pytorch docs"} {"text": "torch.Tensor.as_strided\nTensor.as_strided(size, stride, storage_offset=None) -> Tensor\nSee \"torch.as_strided()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.as_strided.html", "category": "pytorch docs"} {"text": "torch.isneginf\ntorch.isneginf(input, *, out=None) -> Tensor\nTests if each element of \"input\" is negative infinity or not.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.tensor([-float('inf'), float('inf'), 1.2])\n >>> torch.isneginf(a)\n tensor([ True, False, False])\n", "source": "https://pytorch.org/docs/stable/generated/torch.isneginf.html", "category": "pytorch docs"} {"text": "torch.divide\ntorch.divide(input, other, *, rounding_mode=None, out=None) -> Tensor\nAlias for \"torch.div()\".", "source": "https://pytorch.org/docs/stable/generated/torch.divide.html", "category": "pytorch docs"} {"text": "MaxPool1d\nclass torch.nn.MaxPool1d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)\nApplies a 1D max pooling over an input signal composed of several\n input planes.\nIn the simplest case, the output value of the layer with input size\n (N, C, L) and output (N, C, L_{out}) can be precisely described as:\n out(N_i, C_j, k) = \\max_{m=0, \\ldots, \\text{kernel\\_size} - 1}\n input(N_i, C_j, stride \\times k + m)\n\nIf \"padding\" is non-zero, then the input is implicitly padded with\n negative infinity on both sides for \"padding\" number of points.\n \"dilation\" is the stride between the elements within the sliding\n window. This link has a nice visualization of the pooling\n parameters.\nNote:\n When ceil_mode=True, sliding windows are allowed to go off-bounds\n if they start within the left padding or the input. Sliding\n windows that would start in the right padded region are ignored.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool1d.html", "category": "pytorch docs"} {"text": "Parameters:\n * kernel_size (Union[int, Tuple[int]]) --\n The size of the sliding window, must be > 0.\n * **stride** (*Union**[**int**, **Tuple**[**int**]**]*) -- The\n stride of the sliding window, must be > 0. Default value is\n \"kernel_size\".\n\n * **padding** (*Union**[**int**, **Tuple**[**int**]**]*) --\n Implicit negative infinity padding to be added on both sides,\n must be >= 0 and <= kernel_size / 2.\n\n * **dilation** (*Union**[**int**, **Tuple**[**int**]**]*) -- The\n stride between elements within a sliding window, must be > 0.\n\n * **return_indices** (*bool*) -- If \"True\", will return the\n argmax along with the max values. Useful for\n \"torch.nn.MaxUnpool1d\" later\n\n * **ceil_mode** (*bool*) -- If \"True\", will use *ceil* instead\n of *floor* to compute the output shape. 
This ensures that\n every element in the input tensor is covered by a sliding\n window.\n\nShape:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool1d.html", "category": "pytorch docs"} {"text": "window.\nShape:\n * Input: (N, C, L_{in}) or (C, L_{in}).\n * Output: (N, C, L_{out}) or (C, L_{out}), where\n\n L_{out} = \\left\\lfloor \\frac{L_{in} + 2 \\times\n \\text{padding} - \\text{dilation} \\times\n (\\text{kernel\\_size} - 1) - 1}{\\text{stride}} +\n 1\\right\\rfloor\n\nExamples:\n >>> # pool of size=3, stride=2\n >>> m = nn.MaxPool1d(3, stride=2)\n >>> input = torch.randn(20, 16, 50)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.MaxPool1d.html", "category": "pytorch docs"} {"text": "torch.atan2\ntorch.atan2(input, other, *, out=None) -> Tensor\nElement-wise arctangent of \\text{input}{i} / \\text{other} with\n consideration of the quadrant. Returns a new tensor with the signed\n angles in radians between vector (\\text{other}{i},\n \\text{input}) and vector (1, 0). (Note that \\text{other}{i},\n the second parameter, is the x-coordinate, while \\text{input},\n the first parameter, is the y-coordinate.)\nThe shapes of \"input\" and \"other\" must be broadcastable.\nParameters:\n * input (Tensor) -- the first input tensor\n * **other** (*Tensor*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.9041, 0.0196, -0.3108, -2.4423])\n >>> torch.atan2(a, torch.randn(4))\n tensor([ 0.9833, 0.0811, -1.9743, -1.4151])\n", "source": "https://pytorch.org/docs/stable/generated/torch.atan2.html", "category": "pytorch docs"} {"text": "torch.nn.functional.multilabel_margin_loss\ntorch.nn.functional.multilabel_margin_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor\nSee \"MultiLabelMarginLoss\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.multilabel_margin_loss.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_inference\nTensor.is_inference() -> bool\nSee \"torch.is_inference()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_inference.html", "category": "pytorch docs"} {"text": "torch.Tensor.sum\nTensor.sum(dim=None, keepdim=False, dtype=None) -> Tensor\nSee \"torch.sum()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.sum.html", "category": "pytorch docs"} {"text": "default_fused_act_fake_quant\ntorch.quantization.fake_quantize.default_fused_act_fake_quant\nalias of functools.partial(, observer=,\n quant_min=0, quant_max=255, dtype=torch.quint8){}", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.fake_quantize.default_fused_act_fake_quant.html", "category": "pytorch docs"} {"text": "torch.autograd.Function.vmap\nstatic Function.vmap(info, in_dims, *args)\nDefines a rule for the behavior of this autograd.Function\n underneath \"torch.vmap()\". For a \"torch.autograd.Function()\" to\n support \"torch.vmap()\", you must either override this staticmethod,\n or set \"generate_vmap_rule\" to \"True\" (you may not do both).\nIf you choose to override this staticmethod: it must accept\n\n\nan \"info\" object as the first argument. \"info.batch_size\"\n specifies the size of the dimension being vmapped over, while\n \"info.randomness\" is the randomness option passed to\n \"torch.vmap()\".\n\n\nan \"in_dims\" tuple as the second argument. 
For each arg in\n \"args\", \"in_dims\" has a corresponding \"Optional[int]\". It is\n \"None\" if the arg is not a Tensor or if the arg is not being\n vmapped over, otherwise, it is an integer specifying what\n dimension of the Tensor is being vmapped over.\n\n\n\"*args\", which is the same as the args to \"forward()\".\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.vmap.html", "category": "pytorch docs"} {"text": "The return of the vmap staticmethod is a tuple of \"(output,\n out_dims)\". Similar to \"in_dims\", \"out_dims\" should be of the same\n structure as \"output\" and contain one \"out_dim\" per output that\n specifies if the output has the vmapped dimension and what index it\n is in.\nPlease see Extending torch.func with autograd.Function for more\n details.", "source": "https://pytorch.org/docs/stable/generated/torch.autograd.Function.vmap.html", "category": "pytorch docs"} {"text": "TripletMarginLoss\nclass torch.nn.TripletMarginLoss(margin=1.0, p=2.0, eps=1e-06, swap=False, size_average=None, reduce=None, reduction='mean')\nCreates a criterion that measures the triplet loss given an input\n tensors x1, x2, x3 and a margin with a value greater than 0. This\n is used for measuring a relative similarity between samples. A\n triplet is composed by a, p and n (i.e., anchor, positive\n examples and negative examples respectively). The shapes of all\n input tensors should be (N, D).\nThe distance swap is described in detail in the paper Learning\n shallow convolutional feature descriptors with triplet losses by V.\n Balntas, E. Riba et al.\nThe loss function for each sample in the mini-batch is:\n L(a, p, n) = \\max \\{d(a_i, p_i) - d(a_i, n_i) + {\\rm margin},\n 0\\}\n\nwhere\n d(x_i, y_i) = \\left\\lVert {\\bf x}_i - {\\bf y}_i \\right\\rVert_p\n\nSee also \"TripletMarginWithDistanceLoss\", which computes the", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginLoss.html", "category": "pytorch docs"} {"text": "triplet margin loss for input tensors using a custom distance\n function.\nParameters:\n * margin (float, optional) -- Default: 1.\n * **p** (*int**, **optional*) -- The norm degree for pairwise\n distance. Default: 2.\n\n * **swap** (*bool**, **optional*) -- The distance swap is\n described in detail in the paper *Learning shallow\n convolutional feature descriptors with triplet losses* by V.\n Balntas, E. Riba et al. Default: \"False\".\n\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n are multiple elements per sample. If the field \"size_average\"\n is set to \"False\", the losses are instead summed for each\n minibatch. Ignored when \"reduce\" is \"False\". Default: \"True\"\n\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginLoss.html", "category": "pytorch docs"} {"text": "\"reduction\"). By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". 
Default:\n \"True\"\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n two args will override \"reduction\". Default: \"'mean'\"\n\nShape:\n * Input: (N, D) or (D) where D is the vector dimension.\n * Output: A Tensor of shape (N) if \"reduction\" is \"'none'\" and\n input shape is (N, D); a scalar otherwise.\n\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginLoss.html", "category": "pytorch docs"} {"text": "Examples:\n >>> triplet_loss = nn.TripletMarginLoss(margin=1.0, p=2)\n >>> anchor = torch.randn(100, 128, requires_grad=True)\n >>> positive = torch.randn(100, 128, requires_grad=True)\n >>> negative = torch.randn(100, 128, requires_grad=True)\n >>> output = triplet_loss(anchor, positive, negative)\n >>> output.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.TripletMarginLoss.html", "category": "pytorch docs"} {"text": "torch.Tensor.index_add\nTensor.index_add(dim, index, source, *, alpha=1) -> Tensor\nOut-of-place version of \"torch.Tensor.index_add_()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.index_add.html", "category": "pytorch docs"} {"text": "torch.broadcast_shapes\ntorch.broadcast_shapes(*shapes) -> Size\nSimilar to \"broadcast_tensors()\" but for shapes.\nThis is equivalent to \"torch.broadcast_tensors(*map(torch.empty,\n shapes))[0].shape\" but avoids the need create to intermediate\n tensors. This is useful for broadcasting tensors of common batch\n shape but different rightmost shape, e.g. to broadcast mean vectors\n with covariance matrices.\nExample:\n >>> torch.broadcast_shapes((2,), (3, 1), (1, 1, 1))\n torch.Size([1, 3, 2])\n\nParameters:\n shapes (torch.Size*) -- Shapes of tensors.\nReturns:\n A shape compatible with all input shapes.\nReturn type:\n shape (torch.Size)\nRaises:\n RuntimeError -- If shapes are incompatible.", "source": "https://pytorch.org/docs/stable/generated/torch.broadcast_shapes.html", "category": "pytorch docs"} {"text": "torch.nn.functional.conv_transpose1d\ntorch.nn.functional.conv_transpose1d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1) -> Tensor\nApplies a 1D transposed convolution operator over an input signal\n composed of several input planes, sometimes also called\n \"deconvolution\".\nThis operator supports TensorFloat32.\nSee \"ConvTranspose1d\" for details and output shape.\nNote:\n In some circumstances when given tensors on a CUDA device and\n using CuDNN, this operator may select a nondeterministic\n algorithm to increase performance. If this is undesirable, you\n can try to make the operation deterministic (potentially at a\n performance cost) by setting \"torch.backends.cudnn.deterministic\n = True\". 
See Reproducibility for more information.\n\nParameters:\n * input -- input tensor of shape (\\text{minibatch} ,\n \\text{in_channels} , iW)", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose1d.html", "category": "pytorch docs"} {"text": "\\text{in_channels} , iW)\n * **weight** -- filters of shape (\\text{in\\_channels} ,\n \\frac{\\text{out\\_channels}}{\\text{groups}} , kW)\n\n * **bias** -- optional bias of shape (\\text{out\\_channels}).\n Default: None\n\n * **stride** -- the stride of the convolving kernel. Can be a\n single number or a tuple \"(sW,)\". Default: 1\n\n * **padding** -- \"dilation * (kernel_size - 1) - padding\" zero-\n padding will be added to both sides of each dimension in the\n input. Can be a single number or a tuple \"(padW,)\". Default: 0\n\n * **output_padding** -- additional size added to one side of\n each dimension in the output shape. Can be a single number or\n a tuple \"(out_padW)\". Default: 0\n\n * **groups** -- split input into groups, \\text{in\\_channels}\n should be divisible by the number of groups. Default: 1\n\n * **dilation** -- the spacing between kernel elements. Can be a\n single number or a tuple \"(dW,)\". Default: 1\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose1d.html", "category": "pytorch docs"} {"text": "Examples:\n >>> inputs = torch.randn(20, 16, 50)\n >>> weights = torch.randn(16, 33, 5)\n >>> F.conv_transpose1d(inputs, weights)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.conv_transpose1d.html", "category": "pytorch docs"} {"text": "Hardshrink\nclass torch.nn.Hardshrink(lambd=0.5)\nApplies the Hard Shrinkage (Hardshrink) function element-wise.\nHardshrink is defined as:\n \\text{HardShrink}(x) = \\begin{cases} x, & \\text{ if } x >\n \\lambda \\\\ x, & \\text{ if } x < -\\lambda \\\\ 0, & \\text{\n otherwise } \\end{cases}\n\nParameters:\n lambd (float) -- the \\lambda value for the Hardshrink\n formulation. Default: 0.5\nShape:\n * Input: (*), where * means any number of dimensions.\n * Output: (*), same shape as the input.\n\n[image]\nExamples:\n >>> m = nn.Hardshrink()\n >>> input = torch.randn(2)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Hardshrink.html", "category": "pytorch docs"} {"text": "torch.dot\ntorch.dot(input, other, *, out=None) -> Tensor\nComputes the dot product of two 1D tensors.\nNote:\n Unlike NumPy's dot, torch.dot intentionally only supports\n computing the dot product of two 1D tensors with the same number\n of elements.\n\nParameters:\n * input (Tensor) -- first tensor in the dot product, must\n be 1D.\n * **other** (*Tensor*) -- second tensor in the dot product, must\n be 1D.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.dot(torch.tensor([2, 3]), torch.tensor([2, 1]))\n tensor(7)\n", "source": "https://pytorch.org/docs/stable/generated/torch.dot.html", "category": "pytorch docs"} {"text": "torch.cuda.current_device\ntorch.cuda.current_device()\nReturns the index of a currently selected device.\nReturn type:\n int", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.current_device.html", "category": "pytorch docs"} {"text": "AdaptiveMaxPool1d\nclass torch.nn.AdaptiveMaxPool1d(output_size, return_indices=False)\nApplies a 1D adaptive max pooling over an input signal composed of\n several input planes.\nThe output size is L_{out}, for any input size. 
The number of\n output features is equal to the number of input planes.\nParameters:\n * output_size (Union[int, Tuple[int]]) --\n the target output size L_{out}.\n * **return_indices** (*bool*) -- if \"True\", will return the\n indices along with the outputs. Useful to pass to\n nn.MaxUnpool1d. Default: \"False\"\n\nShape:\n * Input: (N, C, L_{in}) or (C, L_{in}).\n * Output: (N, C, L_{out}) or (C, L_{out}), where\n L_{out}=\\text{output\\_size}.\n\n-[ Examples ]-\n\n\n\ntarget output size of 5\nm = nn.AdaptiveMaxPool1d(5)\ninput = torch.randn(1, 64, 8)\noutput = m(input)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.AdaptiveMaxPool1d.html", "category": "pytorch docs"} {"text": "torch.Tensor.isnan\nTensor.isnan() -> Tensor\nSee \"torch.isnan()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.isnan.html", "category": "pytorch docs"} {"text": "default_per_channel_qconfig\ntorch.quantization.qconfig.default_per_channel_qconfig\nalias of QConfig(activation=functools.partial(, quant_min=0,\n quant_max=127){}, weight=functools.partial(,\n dtype=torch.qint8, qscheme=torch.per_channel_symmetric){})", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.qconfig.default_per_channel_qconfig.html", "category": "pytorch docs"} {"text": "ScriptFunction\nclass torch.jit.ScriptFunction\nFunctionally equivalent to a \"ScriptModule\", but represents a\n single function and does not have any attributes or Parameters.\nget_debug_state(self: torch._C.ScriptFunction) -> torch._C.GraphExecutorState\nsave(self: torch._C.ScriptFunction, filename: str, _extra_files: Dict[str, str] = {}) -> None\nsave_to_buffer(self: torch._C.ScriptFunction, _extra_files: Dict[str, str] = {}) -> bytes", "source": "https://pytorch.org/docs/stable/generated/torch.jit.ScriptFunction.html", "category": "pytorch docs"} {"text": "torch.nn.utils.parametrize.cached\ntorch.nn.utils.parametrize.cached()\nContext manager that enables the caching system within\n parametrizations registered with \"register_parametrization()\".\nThe value of the parametrized objects is computed and cached the\n first time they are required when this context manager is active.\n The cached values are discarded when leaving the context manager.\nThis is useful when using a parametrized parameter more than once\n in the forward pass. An example of this is when parametrizing the\n recurrent kernel of an RNN or when sharing weights.\nThe simplest way to activate the cache is by wrapping the forward\n pass of the neural network\n import torch.nn.utils.parametrize as P\n ...\n with P.cached():\n output = model(inputs)\n\nin training and evaluation. One may also wrap the parts of the\n modules that use several times the parametrized tensors. 
For", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.cached.html", "category": "pytorch docs"} {"text": "example, the loop of an RNN with a parametrized recurrent kernel:\n with P.cached():\n for x in xs:\n out_rnn = self.rnn_cell(x, out_rnn)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrize.cached.html", "category": "pytorch docs"} {"text": "LeakyReLU\nclass torch.ao.nn.quantized.LeakyReLU(scale, zero_point, negative_slope=0.01, inplace=False, device=None, dtype=None)\nThis is the quantized equivalent of \"LeakyReLU\".\nParameters:\n * scale (float) -- quantization scale of the output tensor\n * **zero_point** (*int*) -- quantization zero point of the\n output tensor\n\n * **negative_slope** (*float*) -- Controls the angle of the\n negative slope. Default: 1e-2\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.LeakyReLU.html", "category": "pytorch docs"} {"text": "torch.reshape\ntorch.reshape(input, shape) -> Tensor\nReturns a tensor with the same data and number of elements as\n \"input\", but with the specified shape. When possible, the returned\n tensor will be a view of \"input\". Otherwise, it will be a copy.\n Contiguous inputs and inputs with compatible strides can be\n reshaped without copying, but you should not depend on the copying\n vs. viewing behavior.\nSee \"torch.Tensor.view()\" on when it is possible to return a view.\nA single dimension may be -1, in which case it's inferred from the\n remaining dimensions and the number of elements in \"input\".\nParameters:\n * input (Tensor) -- the tensor to be reshaped\n * **shape** (*tuple of python:int*) -- the new shape\n\nExample:\n >>> a = torch.arange(4.)\n >>> torch.reshape(a, (2, 2))\n tensor([[ 0., 1.],\n [ 2., 3.]])\n >>> b = torch.tensor([[0, 1], [2, 3]])\n >>> torch.reshape(b, (-1,))\n tensor([ 0, 1, 2, 3])\n", "source": "https://pytorch.org/docs/stable/generated/torch.reshape.html", "category": "pytorch docs"} {"text": "torch.get_num_interop_threads\ntorch.get_num_interop_threads() -> int\nReturns the number of threads used for inter-op parallelism on CPU\n (e.g. 
in JIT interpreter)", "source": "https://pytorch.org/docs/stable/generated/torch.get_num_interop_threads.html", "category": "pytorch docs"} {"text": "torch.nn.functional.mse_loss\ntorch.nn.functional.mse_loss(input, target, size_average=None, reduce=None, reduction='mean') -> Tensor\nMeasures the element-wise mean squared error.\nSee \"MSELoss\" for details.\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.mse_loss.html", "category": "pytorch docs"} {"text": "torch.Tensor.less_equal\nTensor.less_equal(other) -> Tensor\nSee \"torch.less_equal()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.less_equal.html", "category": "pytorch docs"} {"text": "torch.acos\ntorch.acos(input, *, out=None) -> Tensor\nComputes the inverse cosine of each element in \"input\".\n \\text{out}_{i} = \\cos^{-1}(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.3348, -0.5889, 0.2005, -0.1584])\n >>> torch.acos(a)\n tensor([ 1.2294, 2.2004, 1.3690, 1.7298])\n", "source": "https://pytorch.org/docs/stable/generated/torch.acos.html", "category": "pytorch docs"} {"text": "torch.Tensor.diag_embed\nTensor.diag_embed(offset=0, dim1=- 2, dim2=- 1) -> Tensor\nSee \"torch.diag_embed()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.diag_embed.html", "category": "pytorch docs"} {"text": "torch.resolve_conj\ntorch.resolve_conj(input) -> Tensor\nReturns a new tensor with materialized conjugation if \"input\"'s\n conjugate bit is set to True, else returns \"input\". The output\n tensor will always have its conjugate bit set to False.\nParameters:\n input (Tensor) -- the input tensor.\nExample:\n >>> x = torch.tensor([-1 + 1j, -2 + 2j, 3 - 3j])\n >>> y = x.conj()\n >>> y.is_conj()\n True\n >>> z = y.resolve_conj()\n >>> z\n tensor([-1 - 1j, -2 - 2j, 3 + 3j])\n >>> z.is_conj()\n False\n", "source": "https://pytorch.org/docs/stable/generated/torch.resolve_conj.html", "category": "pytorch docs"} {"text": "torch.Tensor.log10_\nTensor.log10_() -> Tensor\nIn-place version of \"log10()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log10_.html", "category": "pytorch docs"} {"text": "Dropout1d\nclass torch.nn.Dropout1d(p=0.5, inplace=False)\nRandomly zero out entire channels (a channel is a 1D feature map,\n e.g., the j-th channel of the i-th sample in the batched input is a\n 1D tensor \\text{input}[i, j]). Each channel will be zeroed out\n independently on every forward call with probability \"p\" using\n samples from a Bernoulli distribution.\nUsually the input comes from \"nn.Conv1d\" modules.\nAs described in the paper Efficient Object Localization Using\n Convolutional Networks , if adjacent pixels within feature maps are\n strongly correlated (as is normally the case in early convolution\n layers) then i.i.d. 
dropout will not regularize the activations and\n will otherwise just result in an effective learning rate decrease.\nIn this case, \"nn.Dropout1d()\" will help promote independence\n between feature maps and should be used instead.\nParameters:\n * p (float, optional) -- probability of an element to\n be zero-ed.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout1d.html", "category": "pytorch docs"} {"text": "be zero-ed.\n * **inplace** (*bool**, **optional*) -- If set to \"True\", will\n do this operation in-place\n\nShape:\n * Input: (N, C, L) or (C, L).\n * Output: (N, C, L) or (C, L) (same shape as input).\n\nExamples:\n >>> m = nn.Dropout1d(p=0.2)\n >>> input = torch.randn(20, 16, 32)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.Dropout1d.html", "category": "pytorch docs"} {"text": "torch.Tensor.asinh_\nTensor.asinh_() -> Tensor\nIn-place version of \"asinh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.asinh_.html", "category": "pytorch docs"} {"text": "torch.Tensor.ormqr\nTensor.ormqr(input2, input3, left=True, transpose=False) -> Tensor\nSee \"torch.ormqr()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.ormqr.html", "category": "pytorch docs"} {"text": "MultiplicativeLR\nclass torch.optim.lr_scheduler.MultiplicativeLR(optimizer, lr_lambda, last_epoch=- 1, verbose=False)\nMultiply the learning rate of each parameter group by the factor\n given in the specified function. When last_epoch=-1, sets initial\n lr as lr.\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **lr_lambda** (*function** or **list*) -- A function which\n computes a multiplicative factor given an integer parameter\n epoch, or a list of such functions, one for each group in\n optimizer.param_groups.\n\n * **last_epoch** (*int*) -- The index of last epoch. Default:\n -1.\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n\n-[ Example ]-\n\n\n\nlmbda = lambda epoch: 0.95\nscheduler = MultiplicativeLR(optimizer, lr_lambda=lmbda)\nfor epoch in range(100):\n train(...)\n validate(...)\n scheduler.step()\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiplicativeLR.html", "category": "pytorch docs"} {"text": "\n\n\nscheduler.step()\n\n\n\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nload_state_dict(state_dict)\n Loads the schedulers state.\n\n Parameters:\n **state_dict** (*dict*) -- scheduler state. Should be an\n object returned from a call to \"state_dict()\".\n\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n\nstate_dict()\n Returns the state of the scheduler as a \"dict\".\n\n It contains an entry for every variable in self.__dict__ which\n is not the optimizer. 
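A minimal save/restore sketch (illustrative only, reusing the scheduler built in the example above):

    >>> state = scheduler.state_dict()
    >>> # ... later, after re-creating the optimizer and scheduler ...
    >>> scheduler.load_state_dict(state)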
The learning rate lambda functions will\n only be saved if they are callable objects and not if they are\n functions or lambdas.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.MultiplicativeLR.html", "category": "pytorch docs"} {"text": "torch.igamma\ntorch.igamma(input, other, *, out=None) -> Tensor\nAlias for \"torch.special.gammainc()\".", "source": "https://pytorch.org/docs/stable/generated/torch.igamma.html", "category": "pytorch docs"} {"text": "torch.Tensor.div\nTensor.div(value, *, rounding_mode=None) -> Tensor\nSee \"torch.div()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.div.html", "category": "pytorch docs"} {"text": "torch.cuda.reset_peak_memory_stats\ntorch.cuda.reset_peak_memory_stats(device=None)\nResets the \"peak\" stats tracked by the CUDA memory allocator.\nSee \"memory_stats()\" for details. Peak stats correspond to the\n \"peak\" key in each individual stat dict.\nParameters:\n device (torch.device or int, optional) -- selected\n device. Returns statistic for the current device, given by\n \"current_device()\", if \"device\" is \"None\" (default).\nNote:\n See Memory management for more details about GPU memory\n management.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.reset_peak_memory_stats.html", "category": "pytorch docs"} {"text": "torch.nn.functional.embedding\ntorch.nn.functional.embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)\nA simple lookup table that looks up embeddings in a fixed\n dictionary and size.\nThis module is often used to retrieve word embeddings using\n indices. The input to the module is a list of indices, and the\n embedding matrix, and the output is the corresponding word\n embeddings.\nSee \"torch.nn.Embedding\" for more details.\nParameters:\n * input (LongTensor) -- Tensor containing indices into the\n embedding matrix\n * **weight** (*Tensor*) -- The embedding matrix with number of\n rows equal to the maximum possible index + 1, and number of\n columns equal to the embedding size\n\n * **padding_idx** (*int**, **optional*) -- If specified, the\n entries at \"padding_idx\" do not contribute to the gradient;\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding.html", "category": "pytorch docs"} {"text": "therefore, the embedding vector at \"padding_idx\" is not\n updated during training, i.e. it remains as a fixed \"pad\".\n * **max_norm** (*float**, **optional*) -- If given, each\n embedding vector with norm larger than \"max_norm\" is\n renormalized to have norm \"max_norm\". Note: this will modify\n \"weight\" in-place.\n\n * **norm_type** (*float**, **optional*) -- The p of the p-norm\n to compute for the \"max_norm\" option. Default \"2\".\n\n * **scale_grad_by_freq** (*bool**, **optional*) -- If given,\n this will scale gradients by the inverse of frequency of the\n words in the mini-batch. Default \"False\".\n\n * **sparse** (*bool**, **optional*) -- If \"True\", gradient\n w.r.t. \"weight\" will be a sparse tensor. 
See Notes under\n \"torch.nn.Embedding\" for more details regarding sparse\n gradients.\n\nReturn type:\n Tensor\nShape:\n * Input: LongTensor of arbitrary shape containing the indices to\n extract", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding.html", "category": "pytorch docs"} {"text": "extract\n * Weight: Embedding matrix of floating point type with shape\n *(V, embedding_dim)*, where V = maximum index + 1 and\n embedding_dim = the embedding size\n\n * Output: *(*, embedding_dim)*, where *** is the input shape\n\nExamples:\n >>> # a batch of 2 samples of 4 indices each\n >>> input = torch.tensor([[1, 2, 4, 5], [4, 3, 2, 9]])\n >>> # an embedding matrix containing 10 tensors of size 3\n >>> embedding_matrix = torch.rand(10, 3)\n >>> F.embedding(input, embedding_matrix)\n tensor([[[ 0.8490, 0.9625, 0.6753],\n [ 0.9666, 0.7761, 0.6108],\n [ 0.6246, 0.9751, 0.3618],\n [ 0.4161, 0.2419, 0.7383]],\n\n [[ 0.6246, 0.9751, 0.3618],\n [ 0.0237, 0.7794, 0.0528],\n [ 0.9666, 0.7761, 0.6108],\n [ 0.3385, 0.8612, 0.1867]]])\n\n >>> # example with padding_idx\n >>> weights = torch.rand(10, 3)\n >>> weights[0, :].zero_()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding.html", "category": "pytorch docs"} {"text": "\n\n\nweights[0, :].zero_()\n >>> embedding_matrix = weights\n >>> input = torch.tensor([[0, 2, 0, 5]])\n >>> F.embedding(input, embedding_matrix, padding_idx=0)\n tensor([[[ 0.0000, 0.0000, 0.0000],\n [ 0.5609, 0.5384, 0.8720],\n [ 0.0000, 0.0000, 0.0000],\n [ 0.6262, 0.2438, 0.7471]]])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.embedding.html", "category": "pytorch docs"} {"text": "torch.fft.hfft\ntorch.fft.hfft(input, n=None, dim=- 1, norm=None, *, out=None) -> Tensor\nComputes the one dimensional discrete Fourier transform of a\n Hermitian symmetric \"input\" signal.\nNote:\n \"hfft()\"/\"ihfft()\" are analogous to \"rfft()\"/\"irfft()\". The real\n FFT expects a real signal in the time-domain and gives a\n Hermitian symmetry in the frequency-domain. The Hermitian FFT is\n the opposite; Hermitian symmetric in the time-domain and real-\n valued in the frequency-domain. For this reason, special care\n needs to be taken with the length argument \"n\", in the same way\n as with \"irfft()\".\n\nNote:\n Because the signal is Hermitian in the time-domain, the result\n will be real in the frequency domain. Note that some input\n frequencies must be real-valued to satisfy the Hermitian\n property. In these cases the imaginary component will be ignored.\n For example, any imaginary component in \"input[0]\" would result\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft.html", "category": "pytorch docs"} {"text": "in one or more complex frequency terms which cannot be\n represented in a real output and so will always be ignored.\nNote:\n The correct interpretation of the Hermitian input depends on the\n length of the original data, as given by \"n\". This is because\n each input shape could correspond to either an odd or even length\n signal. By default, the signal is assumed to be even length and\n odd signals will not round-trip properly. So, it is recommended\n to always pass the signal length \"n\".\n\nNote:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimension. 
With default arguments,\n size of the transformed dimension should be (2^n + 1) as argument\n *n* defaults to even output size = 2 * (transformed_dim_size - 1)\n\nParameters:\n * input (Tensor) -- the input tensor representing a half-\n Hermitian signal", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft.html", "category": "pytorch docs"} {"text": "Hermitian signal\n * **n** (*int**, **optional*) -- Output signal length. This\n determines the length of the real output. If given, the input\n will either be zero-padded or trimmed to this length before\n computing the Hermitian FFT. Defaults to even output:\n \"n=2*(input.size(dim) - 1)\".\n\n * **dim** (*int**, **optional*) -- The dimension along which to\n take the one dimensional Hermitian FFT.\n\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the forward transform (\"hfft()\"),\n these correspond to:\n\n * \"\"forward\"\" - normalize by \"1/n\"\n\n * \"\"backward\"\" - no normalization\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the Hermitian\n FFT orthonormal)\n\n Calling the backward transform (\"ihfft()\") with the same\n normalization mode will apply an overall normalization of\n \"1/n\" between the two transforms. This is required to make\n \"ihfft()\" the exact inverse.\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft.html", "category": "pytorch docs"} {"text": "\"ihfft()\" the exact inverse.\n Default is \"\"backward\"\" (no normalization).\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\nTaking a real-valued frequency signal and bringing it into the time\n domain gives Hermitian symmetric output:\n\n\n\nt = torch.linspace(0, 1, 5)\nt\n tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])\nT = torch.fft.ifft(t)\nT\n tensor([ 0.5000-0.0000j, -0.1250-0.1720j, -0.1250-0.0406j, -0.1250+0.0406j,\n -0.1250+0.1720j])\n\n\n\nNote that \"T[1] == T[-1].conj()\" and \"T[2] == T[-2].conj()\" is\n redundant. 
We can thus compute the forward transform without\n considering negative frequencies:\n\n\n\ntorch.fft.hfft(T[:3], n=5)\n tensor([0.0000, 0.2500, 0.5000, 0.7500, 1.0000])\n\n\n\nLike with \"irfft()\", the output length must be given in order to\n recover an even length output:\n\n\n\ntorch.fft.hfft(T[:3])\n tensor([0.1250, 0.2809, 0.6250, 0.9691])\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.hfft.html", "category": "pytorch docs"} {"text": "UpsamplingNearest2d\nclass torch.nn.UpsamplingNearest2d(size=None, scale_factor=None)\nApplies a 2D nearest neighbor upsampling to an input signal\n composed of several input channels.\nTo specify the scale, it takes either the \"size\" or the\n \"scale_factor\" as it's constructor argument.\nWhen \"size\" is given, it is the output size of the image (h, w).\nParameters:\n * size (int or Tuple[int, int],\n optional) -- output spatial sizes\n * **scale_factor** (*float** or **Tuple**[**float**,\n **float**]**, **optional*) -- multiplier for spatial size.\n\nWarning:\n This class is deprecated in favor of \"interpolate()\".\n\nShape:\n * Input: (N, C, H_{in}, W_{in})\n * Output: (N, C, H_{out}, W_{out}) where\n\n H_{out} = \\left\\lfloor H_{in} \\times \\text{scale\\_factor}\n \\right\\rfloor\n\n W_{out} = \\left\\lfloor W_{in} \\times \\text{scale\\_factor}\n \\right\\rfloor\n\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.UpsamplingNearest2d.html", "category": "pytorch docs"} {"text": "\\right\\rfloor\nExamples:\n >>> input = torch.arange(1, 5, dtype=torch.float32).view(1, 1, 2, 2)\n >>> input\n tensor([[[[1., 2.],\n [3., 4.]]]])\n\n >>> m = nn.UpsamplingNearest2d(scale_factor=2)\n >>> m(input)\n tensor([[[[1., 1., 2., 2.],\n [1., 1., 2., 2.],\n [3., 3., 4., 4.],\n [3., 3., 4., 4.]]]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.UpsamplingNearest2d.html", "category": "pytorch docs"} {"text": "torch.Tensor.abs_\nTensor.abs_() -> Tensor\nIn-place version of \"abs()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.abs_.html", "category": "pytorch docs"} {"text": "torch.Tensor.asinh\nTensor.asinh() -> Tensor\nSee \"torch.asinh()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.asinh.html", "category": "pytorch docs"} {"text": "torch.subtract\ntorch.subtract(input, other, *, alpha=1, out=None) -> Tensor\nAlias for \"torch.sub()\".", "source": "https://pytorch.org/docs/stable/generated/torch.subtract.html", "category": "pytorch docs"} {"text": "quantize_dynamic\nclass torch.quantization.quantize_dynamic(model, qconfig_spec=None, dtype=torch.qint8, mapping=None, inplace=False)\nConverts a float model to dynamic (i.e. weights-only) quantized\n model.\nReplaces specified modules with dynamic weight-only quantized\n versions and output the quantized model.\nFor simplest usage provide dtype argument that can be float16 or\n qint8. Weight-only quantization by default is performed for layers\n with large weights size - i.e. Linear and RNN variants.\nFine grained control is possible with qconfig and mapping that\n act similarly to quantize(). 
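For the simplest dtype-only path, a usage sketch (the model below is made up for illustration):

    >>> float_model = torch.nn.Sequential(
    ...     torch.nn.Linear(32, 64),
    ...     torch.nn.ReLU(),
    ...     torch.nn.Linear(64, 8),
    ... )
    >>> quantized_model = torch.quantization.quantize_dynamic(
    ...     float_model, {torch.nn.Linear}, dtype=torch.qint8
    ... )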
If qconfig is provided, the\n dtype argument is ignored.\nParameters:\n * model -- input model\n * **qconfig_spec** --\n\n Either:\n\n * A dictionary that maps from name or type of submodule to\n quantization configuration, qconfig applies to all\n submodules of a given module unless qconfig for the\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_dynamic.html", "category": "pytorch docs"} {"text": "submodules are specified (when the submodule already has\n qconfig attribute). Entries in the dictionary need to be\n QConfig instances.\n * A set of types and/or submodule names to apply dynamic\n quantization to, in which case the *dtype* argument is used\n to specify the bit-width\n\n * **inplace** -- carry out model transformations in-place, the\n original module is mutated\n\n * **mapping** -- maps type of a submodule to a type of\n corresponding dynamically quantized version with which the\n submodule needs to be replaced\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.quantize_dynamic.html", "category": "pytorch docs"} {"text": "torch.Tensor.istft\nTensor.istft(n_fft, hop_length=None, win_length=None, window=None, center=True, normalized=False, onesided=None, length=None, return_complex=False)\nSee \"torch.istft()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.istft.html", "category": "pytorch docs"} {"text": "torch.concatenate\ntorch.concatenate(tensors, axis=0, out=None) -> Tensor\nAlias of \"torch.cat()\".", "source": "https://pytorch.org/docs/stable/generated/torch.concatenate.html", "category": "pytorch docs"} {"text": "ConvTranspose1d\nclass torch.nn.ConvTranspose1d(in_channels, out_channels, kernel_size, stride=1, padding=0, output_padding=0, groups=1, bias=True, dilation=1, padding_mode='zeros', device=None, dtype=None)\nApplies a 1D transposed convolution operator over an input image\n composed of several input planes.\nThis module can be seen as the gradient of Conv1d with respect to\n its input. It is also known as a fractionally-strided convolution\n or a deconvolution (although it is not an actual deconvolution\n operation as it does not compute a true inverse of convolution).\n For more information, see the visualizations here and the\n Deconvolutional Networks paper.\nThis module supports TensorFloat32.\nOn certain ROCm devices, when using float16 inputs this module will\n use different precision for backward.\n\n\n\"stride\" controls the stride for the cross-correlation.\n\n\n\"padding\" controls the amount of implicit zero padding on both\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html", "category": "pytorch docs"} {"text": "sides for \"dilation * (kernel_size - 1) - padding\" number of\n points. See note below for details.\n\n\n\"output_padding\" controls the additional size added to one side\n of the output shape. See note below for details.\n\n\n\"dilation\" controls the spacing between the kernel points; also\n known as the \u00c3\u00a0 trous algorithm. It is harder to describe, but the\n link here has a nice visualization of what \"dilation\" does.\n\n\n\"groups\" controls the connections between inputs and outputs.\n \"in_channels\" and \"out_channels\" must both be divisible by\n \"groups\". 
For example,\n* At groups=1, all inputs are convolved to all outputs.\n\n* At groups=2, the operation becomes equivalent to having two\n conv layers side by side, each seeing half the input\n channels and producing half the output channels, and both\n subsequently concatenated.\n\n* At groups= \"in_channels\", each input channel is convolved\n with its own set of filters (of size\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html", "category": "pytorch docs"} {"text": "with its own set of filters (of size\n \\frac{\\text{out_channels}}{\\text{in_channels}}).\nNote:\n The \"padding\" argument effectively adds \"dilation * (kernel_size\n - 1) - padding\" amount of zero padding to both sizes of the\n input. This is set so that when a \"Conv1d\" and a\n \"ConvTranspose1d\" are initialized with same parameters, they are\n inverses of each other in regard to the input and output shapes.\n However, when \"stride > 1\", \"Conv1d\" maps multiple input shapes\n to the same output shape. \"output_padding\" is provided to resolve\n this ambiguity by effectively increasing the calculated output\n shape on one side. Note that \"output_padding\" is only used to\n find output shape, but does not actually add zero-padding to\n output.\n\nNote:\n In some circumstances when using the CUDA backend with CuDNN,\n this operator may select a nondeterministic algorithm to increase\n performance. If this is undesirable, you can try to make the\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html", "category": "pytorch docs"} {"text": "operation deterministic (potentially at a performance cost) by\n setting \"torch.backends.cudnn.deterministic = True\". Please see\n the notes on Reproducibility for background.\nParameters:\n * in_channels (int) -- Number of channels in the input\n image\n * **out_channels** (*int*) -- Number of channels produced by the\n convolution\n\n * **kernel_size** (*int** or **tuple*) -- Size of the convolving\n kernel\n\n * **stride** (*int** or **tuple**, **optional*) -- Stride of the\n convolution. Default: 1\n\n * **padding** (*int** or **tuple**, **optional*) -- \"dilation *\n (kernel_size - 1) - padding\" zero-padding will be added to\n both sides of the input. Default: 0\n\n * **output_padding** (*int** or **tuple**, **optional*) --\n Additional size added to one side of the output shape.\n Default: 0\n\n * **groups** (*int**, **optional*) -- Number of blocked\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html", "category": "pytorch docs"} {"text": "connections from input channels to output channels. Default: 1\n * **bias** (*bool**, **optional*) -- If \"True\", adds a learnable\n bias to the output. Default: \"True\"\n\n * **dilation** (*int** or **tuple**, **optional*) -- Spacing\n between kernel elements. Default: 1\n\nShape:\n * Input: (N, C_{in}, L_{in}) or (C_{in}, L_{in})\n * Output: (N, C_{out}, L_{out}) or (C_{out}, L_{out}), where\n\n L_{out} = (L_{in} - 1) \\times \\text{stride} - 2 \\times\n \\text{padding} + \\text{dilation} \\times\n (\\text{kernel\\_size} - 1) + \\text{output\\_padding} + 1\n\nVariables:\n * weight (Tensor) -- the learnable weights of the module\n of shape (\\text{in_channels},\n \\frac{\\text{out_channels}}{\\text{groups}},\n \\text{kernel_size}). 
The values of these weights are sampled\n from \\mathcal{U}(-\\sqrt{k}, \\sqrt{k}) where k =\n \\frac{groups}{C_\\text{out} * \\text{kernel_size}}", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html", "category": "pytorch docs"} {"text": "\nbias (Tensor) -- the learnable bias of the module of\n shape (out_channels). If \"bias\" is \"True\", then the values of\n these weights are sampled from \\mathcal{U}(-\\sqrt{k},\n \\sqrt{k}) where k = \\frac{groups}{C_\\text{out} *\n \\text{kernel_size}}\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.ConvTranspose1d.html", "category": "pytorch docs"} {"text": "torch.hypot\ntorch.hypot(input, other, *, out=None) -> Tensor\nGiven the legs of a right triangle, return its hypotenuse.\n \\text{out}_{i} = \\sqrt{\\text{input}_{i}^{2} +\n \\text{other}_{i}^{2}}\n\nThe shapes of \"input\" and \"other\" must be broadcastable.\nParameters:\n * input (Tensor) -- the first input tensor\n * **other** (*Tensor*) -- the second input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.hypot(torch.tensor([4.0]), torch.tensor([3.0, 4.0, 5.0]))\n tensor([5.0000, 5.6569, 6.4031])\n", "source": "https://pytorch.org/docs/stable/generated/torch.hypot.html", "category": "pytorch docs"} {"text": "torch.Tensor.asin\nTensor.asin() -> Tensor\nSee \"torch.asin()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.asin.html", "category": "pytorch docs"} {"text": "torch.Tensor.floor_divide_\nTensor.floor_divide_(value) -> Tensor\nIn-place version of \"floor_divide()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.floor_divide_.html", "category": "pytorch docs"} {"text": "torch.Tensor.to_sparse_bsc\nTensor.to_sparse_bsc(blocksize, dense_dim) -> Tensor\nConvert a tensor to a block sparse column (BSC) storage format of\n given blocksize. If the \"self\" is strided, then the number of\n dense dimensions could be specified, and a hybrid BSC tensor will\n be created, with dense_dim dense dimensions and self.dim() - 2 -\n dense_dim batch dimension.\nParameters:\n * blocksize (list, tuple, \"torch.Size\", optional) -- Block\n size of the resulting BSC tensor. A block size must be a tuple\n of length two such that its items evenly divide the two sparse\n dimensions.\n * **dense_dim** (*int**, **optional*) -- Number of dense\n dimensions of the resulting BSC tensor. 
This argument should\n be used only if \"self\" is a strided tensor, and must be a\n value between 0 and dimension of \"self\" tensor minus two.\n\nExample:\n >>> dense = torch.randn(10, 10)\n >>> sparse = dense.to_sparse_csr()\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_bsc.html", "category": "pytorch docs"} {"text": "\n\n\nsparse = dense.to_sparse_csr()\n >>> sparse_bsc = sparse.to_sparse_bsc((5, 5))\n >>> sparse_bsc.row_indices()\n tensor([0, 1, 0, 1])\n\n\n\n >>> dense = torch.zeros(4, 3, 1)\n >>> dense[0:2, 0] = dense[0:2, 2] = dense[2:4, 1] = 1\n >>> dense.to_sparse_bsc((2, 1), 1)\n tensor(ccol_indices=tensor([0, 1, 2, 3]),\n row_indices=tensor([0, 1, 0]),\n values=tensor([[[[1.]],\n\n [[1.]]],\n\n\n [[[1.]],\n\n [[1.]]],\n\n\n [[[1.]],\n\n [[1.]]]]), size=(4, 3, 1), nnz=3,\n layout=torch.sparse_bsc)\n", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.to_sparse_bsc.html", "category": "pytorch docs"} {"text": "torch.bartlett_window\ntorch.bartlett_window(window_length, periodic=True, *, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nBartlett window function.\n w[n] = 1 - \\left| \\frac{2n}{N-1} - 1 \\right| = \\begin{cases}\n \\frac{2n}{N - 1} & \\text{if } 0 \\leq n \\leq \\frac{N - 1}{2} \\\\\n 2 - \\frac{2n}{N - 1} & \\text{if } \\frac{N - 1}{2} < n < N \\\\\n \\end{cases},\n\nwhere N is the full window size.\nThe input \"window_length\" is a positive integer controlling the\n returned window size. \"periodic\" flag determines whether the\n returned window trims off the last duplicate value from the\n symmetric window and is ready to be used as a periodic window with\n functions like \"torch.stft()\". Therefore, if \"periodic\" is true,\n the N in above formula is in fact \\text{window_length} + 1. Also,\n we always have \"torch.bartlett_window(L, periodic=True)\" equal to\n \"torch.bartlett_window(L + 1, periodic=False)[:-1])\".\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.bartlett_window.html", "category": "pytorch docs"} {"text": "Note:\n If \"window_length\" =1, the returned window contains a single\n value 1.\n\nParameters:\n * window_length (int) -- the size of returned window\n * **periodic** (*bool**, **optional*) -- If True, returns a\n window to be used as periodic function. If False, return a\n symmetric window.\n\nKeyword Arguments:\n * dtype (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\"). Only floating point\n types are supported.\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned window tensor. Only \"torch.strided\" (dense layout) is\n supported.\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n", "source": "https://pytorch.org/docs/stable/generated/torch.bartlett_window.html", "category": "pytorch docs"} {"text": "for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
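As a brief, illustrative check (not part of the reference text) of the relation between the periodic and symmetric windows stated above, i.e. that torch.bartlett_window(L, periodic=True) equals torch.bartlett_window(L + 1, periodic=False)[:-1]:

    >>> import torch
    >>> periodic = torch.bartlett_window(7, periodic=True)
    >>> symmetric = torch.bartlett_window(8, periodic=False)
    >>> torch.allclose(periodic, symmetric[:-1])   # drop the duplicated endpoint
    True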
Default: \"False\".\n\nReturns:\n A 1-D tensor of size (\\text{window_length},) containing the\n window\nReturn type:\n Tensor", "source": "https://pytorch.org/docs/stable/generated/torch.bartlett_window.html", "category": "pytorch docs"} {"text": "torch.cuda.set_stream\ntorch.cuda.set_stream(stream)\nSets the current stream.This is a wrapper API to set the stream.\n Usage of this function is discouraged in favor of the \"stream\"\n context manager.\nParameters:\n stream (Stream) -- selected stream. This function is a no-\n op if this argument is \"None\".", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.set_stream.html", "category": "pytorch docs"} {"text": "torch.equal\ntorch.equal(input, other) -> bool\n\"True\" if two tensors have the same size and elements, \"False\"\n otherwise.\nExample:\n >>> torch.equal(torch.tensor([1, 2]), torch.tensor([1, 2]))\n True\n", "source": "https://pytorch.org/docs/stable/generated/torch.equal.html", "category": "pytorch docs"} {"text": "torch.i0\ntorch.i0(input, *, out=None) -> Tensor\nAlias for \"torch.special.i0()\".", "source": "https://pytorch.org/docs/stable/generated/torch.i0.html", "category": "pytorch docs"} {"text": "BasePruningMethod\nclass torch.nn.utils.prune.BasePruningMethod\nAbstract base class for creation of new pruning techniques.\nProvides a skeleton for customization requiring the overriding of\n methods such as \"compute_mask()\" and \"apply()\".\nclassmethod apply(module, name, args, importance_scores=None, *kwargs)\n Adds the forward pre-hook that enables pruning on the fly and\n the reparametrization of a tensor in terms of the original\n tensor and the pruning mask.\n\n Parameters:\n * **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n * **name** (*str*) -- parameter name within \"module\" on which\n pruning will act.\n\n * **args** -- arguments passed on to a subclass of\n \"BasePruningMethod\"\n\n * **importance_scores** (*torch.Tensor*) -- tensor of\n importance scores (of same shape as module parameter) used\n to compute mask for pruning. The values in this tensor\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.BasePruningMethod.html", "category": "pytorch docs"} {"text": "indicate the importance of the corresponding elements in\n the parameter being pruned. If unspecified or None, the\n parameter will be used in its place.\n * **kwargs** -- keyword arguments passed on to a subclass of\n a \"BasePruningMethod\"\n\napply_mask(module)\n Simply handles the multiplication between the parameter being\n pruned and the generated mask. Fetches the mask and the original\n tensor from the module and returns the pruned version of the\n tensor.\n\n Parameters:\n **module** (*nn.Module*) -- module containing the tensor to\n prune\n\n Returns:\n pruned version of the input tensor\n\n Return type:\n pruned_tensor (torch.Tensor)\n\nabstract compute_mask(t, default_mask)\n Computes and returns a mask for the input tensor \"t\". 
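A minimal sketch of a custom pruning technique built on this base class (illustrative only; the class and helper names are hypothetical, following the pattern used in the PyTorch pruning tutorial). It overrides "compute_mask()" to zero out every other entry of the mask:

    import torch.nn.utils.prune as prune

    class EveryOtherPruningMethod(prune.BasePruningMethod):
        """Prune every other entry in a tensor."""
        PRUNING_TYPE = 'unstructured'

        def compute_mask(self, t, default_mask):
            mask = default_mask.clone()
            mask.view(-1)[::2] = 0      # zero out every other mask entry
            return mask

    def every_other_unstructured(module, name):
        # hypothetical convenience wrapper around the "apply" classmethod
        EveryOtherPruningMethod.apply(module, name)
        return module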
Starting\n from a base \"default_mask\" (which should be a mask of ones if\n the tensor has not been pruned yet), generate a random mask to\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.BasePruningMethod.html", "category": "pytorch docs"} {"text": "apply on top of the \"default_mask\" according to the specific\n pruning method recipe.\n Parameters:\n * **t** (*torch.Tensor*) -- tensor representing the\n importance scores of the\n\n * **prune.** (*parameter to*) --\n\n * **default_mask** (*torch.Tensor*) -- Base mask from\n previous pruning\n\n * **iterations** --\n\n * **is** (*that need to be respected after the new mask*) --\n\n * **t.** (*applied. Same dims as*) --\n\n Returns:\n mask to apply to \"t\", of same dims as \"t\"\n\n Return type:\n mask (torch.Tensor)\n\nprune(t, default_mask=None, importance_scores=None)\n Computes and returns a pruned version of input tensor \"t\"\n according to the pruning rule specified in \"compute_mask()\".\n\n Parameters:\n * **t** (*torch.Tensor*) -- tensor to prune (of same\n dimensions as \"default_mask\").\n\n * **importance_scores** (*torch.Tensor*) -- tensor of\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.BasePruningMethod.html", "category": "pytorch docs"} {"text": "importance scores (of same shape as \"t\") used to compute\n mask for pruning \"t\". The values in this tensor indicate\n the importance of the corresponding elements in the \"t\"\n that is being pruned. If unspecified or None, the tensor\n \"t\" will be used in its place.\n * **default_mask** (*torch.Tensor**, **optional*) -- mask\n from previous pruning iteration, if any. To be considered\n when determining what portion of the tensor that pruning\n should act on. If None, default to a mask of ones.\n\n Returns:\n pruned version of tensor \"t\".\n\nremove(module)\n Removes the pruning reparameterization from a module. The pruned\n parameter named \"name\" remains permanently pruned, and the\n parameter named \"name+'_orig'\" is removed from the parameter\n list. Similarly, the buffer named \"name+'_mask'\" is removed from\n the buffers.\n\n Note:\n\n Pruning itself is NOT undone or reversed!\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.prune.BasePruningMethod.html", "category": "pytorch docs"} {"text": "torch.Tensor.triangular_solve\nTensor.triangular_solve(A, upper=True, transpose=False, unitriangular=False)\nSee \"torch.triangular_solve()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.triangular_solve.html", "category": "pytorch docs"} {"text": "torch.Tensor.addcdiv_\nTensor.addcdiv_(tensor1, tensor2, *, value=1) -> Tensor\nIn-place version of \"addcdiv()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.addcdiv_.html", "category": "pytorch docs"} {"text": "LinearReLU\nclass torch.ao.nn.intrinsic.LinearReLU(linear, relu)\nThis is a sequential container which calls the Linear and ReLU\n modules. 
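A small, illustrative sketch (not part of the reference text) of how such a fused module is typically produced by eager-mode fusion; the submodule names "0" and "1" are simply the default names inside an nn.Sequential:

    >>> import torch.nn as nn
    >>> from torch.ao.quantization import fuse_modules
    >>> from torch.ao.nn.intrinsic import LinearReLU
    >>> model = nn.Sequential(nn.Linear(10, 20), nn.ReLU()).eval()
    >>> fused = fuse_modules(model, [["0", "1"]])
    >>> isinstance(fused[0], LinearReLU)
    True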
During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.LinearReLU.html", "category": "pytorch docs"} {"text": "torch.logspace\ntorch.logspace(start, end, steps, base=10.0, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False) -> Tensor\nCreates a one-dimensional tensor of size \"steps\" whose values are\n evenly spaced from {{\\text{{base}}}}^{{\\text{{start}}}} to\n {{\\text{{base}}}}^{{\\text{{end}}}}, inclusive, on a logarithmic\n scale with base \"base\". That is, the values are:\n (\\text{base}^{\\text{start}}, \\text{base}^{(\\text{start} +\n \\frac{\\text{end} - \\text{start}}{ \\text{steps} - 1})}, \\ldots,\n \\text{base}^{(\\text{start} + (\\text{steps} - 2) *\n \\frac{\\text{end} - \\text{start}}{ \\text{steps} - 1})},\n \\text{base}^{\\text{end}})\n\nFrom PyTorch 1.11 logspace requires the steps argument. Use\n steps=100 to restore the previous behavior.\nParameters:\n * start (float) -- the starting value for the set of\n points\n * **end** (*float*) -- the ending value for the set of points\n", "source": "https://pytorch.org/docs/stable/generated/torch.logspace.html", "category": "pytorch docs"} {"text": "\n\nsteps (int) -- size of the constructed tensor\n\nbase (float, optional) -- base of the logarithm\n function. Default: \"10.0\".\n\n\n\nKeyword Arguments:\n * out (Tensor, optional) -- the output tensor.\n * **dtype** (*torch.dtype**, **optional*) -- the data type to\n perform the computation in. Default: if None, uses the global\n default dtype (see torch.get_default_dtype()) when both\n \"start\" and \"end\" are real, and corresponding complex dtype\n when either is complex.\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n", "source": "https://pytorch.org/docs/stable/generated/torch.logspace.html", "category": "pytorch docs"} {"text": "tensor types.\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nExample:\n >>> torch.logspace(start=-10, end=10, steps=5)\n tensor([ 1.0000e-10, 1.0000e-05, 1.0000e+00, 1.0000e+05, 1.0000e+10])\n >>> torch.logspace(start=0.1, end=1.0, steps=5)\n tensor([ 1.2589, 2.1135, 3.5481, 5.9566, 10.0000])\n >>> torch.logspace(start=0.1, end=1.0, steps=1)\n tensor([1.2589])\n >>> torch.logspace(start=2, end=2, steps=1, base=2)\n tensor([4.0])\n", "source": "https://pytorch.org/docs/stable/generated/torch.logspace.html", "category": "pytorch docs"} {"text": "max_pool2d\nclass torch.ao.nn.quantized.functional.max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)\nApplies a 2D max pooling over a quantized input signal composed of\n several quantized input planes.\nNote:\n The input quantization parameters are propagated to the output.\n\nSee \"MaxPool2d\" for details.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.functional.max_pool2d.html", "category": "pytorch docs"} {"text": "torch.signal.windows.hamming\ntorch.signal.windows.hamming(M, *, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\nComputes the Hamming window.\nThe Hamming window is defined as follows:\n w_n = \\alpha - \\beta\\ \\cos \\left( \\frac{2 \\pi n}{M - 1} \\right)\n\nThe window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\nParameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\nKeyword Arguments:\n * sym (bool, optional) -- If False, returns a\n periodic window suitable for use in spectral analysis. If\n True, returns a symmetric window suitable for use in filter\n design. Default: True.\n * **alpha** (*float**, **optional*) -- The coefficient \\alpha in\n the equation above.\n\n * **beta** (*float**, **optional*) -- The coefficient \\beta in\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.hamming.html", "category": "pytorch docs"} {"text": "the equation above.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nReturn type:\n Tensor\nExamples:\n >>> # Generates a symmetric Hamming window.\n >>> torch.signal.windows.hamming(10)\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.hamming.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.signal.windows.hamming(10)\n tensor([0.0800, 0.1876, 0.4601, 0.7700, 0.9723, 0.9723, 0.7700, 0.4601, 0.1876, 0.0800])\n\n\n\n >>> # Generates a periodic Hamming window.\n >>> torch.signal.windows.hamming(10, sym=False)\n tensor([0.0800, 0.1679, 0.3979, 0.6821, 0.9121, 1.0000, 0.9121, 0.6821, 0.3979, 0.1679])\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.hamming.html", "category": "pytorch docs"} {"text": "torch.fft.fftn\ntorch.fft.fftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor\nComputes the N dimensional discrete Fourier transform of \"input\".\nNote:\n The Fourier domain representation of any real signal satisfies\n the Hermitian property: \"X[i_1, ..., i_n] = conj(X[-i_1, ...,\n -i_n])\". This function always returns all positive and negative\n frequency terms even though, for real inputs, half of these\n values are redundant. \"rfftn()\" returns the more compact one-\n sided representation where only the positive frequencies of the\n last dimension are returned.\n\nNote:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions.\n\nParameters:\n * input (Tensor) -- the input tensor\n * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftn.html", "category": "pytorch docs"} {"text": "transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the FFT. If a length \"-1\" is specified, no padding\n is done in that dimension. Default: \"s = [input.size(d) for d\n in dim]\"\n * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be\n transformed. Default: all dimensions, or the last \"len(s)\"\n dimensions if \"s\" is given.\n\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the forward transform (\"fftn()\"),\n these correspond to:\n\n * \"\"forward\"\" - normalize by \"1/n\"\n\n * \"\"backward\"\" - no normalization\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the FFT\n orthonormal)\n\n Where \"n = prod(s)\" is the logical FFT size. Calling the\n backward transform (\"ifftn()\") with the same normalization\n mode will apply an overall normalization of \"1/n\" between the\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftn.html", "category": "pytorch docs"} {"text": "two transforms. 
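As a brief, illustrative check (not part of the reference text) that calling "fftn()" and "ifftn()" with the same "norm" round-trips the input:

    >>> import torch
    >>> x = torch.randn(4, 4, dtype=torch.complex64)
    >>> X = torch.fft.fftn(x, norm="ortho")
    >>> torch.allclose(torch.fft.ifftn(X, norm="ortho"), x, atol=1e-6)
    True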
This is required to make \"ifftn()\" the exact\n inverse.\n Default is \"\"backward\"\" (no normalization).\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nx = torch.rand(10, 10, dtype=torch.complex64)\nfftn = torch.fft.fftn(x)\n\n\n\nThe discrete Fourier transform is separable, so \"fftn()\" here is\n equivalent to two one-dimensional \"fft()\" calls:\n\n\n\ntwo_ffts = torch.fft.fft(torch.fft.fft(x, dim=0), dim=1)\ntorch.testing.assert_close(fftn, two_ffts, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.fftn.html", "category": "pytorch docs"} {"text": "torch.atanh\ntorch.atanh(input, *, out=None) -> Tensor\nReturns a new tensor with the inverse hyperbolic tangent of the\n elements of \"input\".\nNote:\n The domain of the inverse hyperbolic tangent is *(-1, 1)* and\n values outside this range will be mapped to \"NaN\", except for the\n values *1* and *-1* for which the output is mapped to *+/-INF*\n respectively.\n\n \\text{out}_{i} = \\tanh^{-1}(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4).uniform_(-1, 1)\n >>> a\n tensor([ -0.9385, 0.2968, -0.8591, -0.1871 ])\n >>> torch.atanh(a)\n tensor([ -1.7253, 0.3060, -1.2899, -0.1893 ])\n", "source": "https://pytorch.org/docs/stable/generated/torch.atanh.html", "category": "pytorch docs"} {"text": "torch.Tensor.mul_\nTensor.mul_(value) -> Tensor\nIn-place version of \"mul()\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.mul_.html", "category": "pytorch docs"} {"text": "torch.Tensor.logical_or\nTensor.logical_or() -> Tensor\nSee \"torch.logical_or()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_or.html", "category": "pytorch docs"} {"text": "MinMaxObserver\nclass torch.quantization.observer.MinMaxObserver(dtype=torch.quint8, qscheme=torch.per_tensor_affine, reduce_range=False, quant_min=None, quant_max=None, factory_kwargs=None, eps=1.1920928955078125e-07)\nObserver module for computing the quantization parameters based on\n the running min and max values.\nThis observer uses the tensor min/max statistics to compute the\n quantization parameters. The module records the running minimum and\n maximum of incoming tensors, and uses this statistic to compute the\n quantization parameters.\nParameters:\n * dtype -- dtype argument to the quantize node needed to\n implement the reference model spec.\n * **qscheme** -- Quantization scheme to be used\n\n * **reduce_range** -- Reduces the range of the quantized data\n type by 1 bit\n\n * **quant_min** -- Minimum quantization value. If unspecified,\n it will follow the 8-bit setup.\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MinMaxObserver.html", "category": "pytorch docs"} {"text": "it will follow the 8-bit setup.\n * **quant_max** -- Maximum quantization value. 
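A minimal, illustrative usage sketch (not part of the reference text): tensors passed through the observer update the running min/max, and "calculate_qparams()" then returns the scale and zero point. The import path below is the "torch.ao" alias of this module:

    >>> import torch
    >>> from torch.ao.quantization.observer import MinMaxObserver
    >>> obs = MinMaxObserver(dtype=torch.quint8, qscheme=torch.per_tensor_affine)
    >>> _ = obs(torch.tensor([-1.0, 0.0, 2.0, 3.0]))   # forward() records min/max
    >>> scale, zero_point = obs.calculate_qparams()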
If unspecified,\n it will follow the 8-bit setup.\n\n * **eps** (*Tensor*) -- Epsilon value for float32, Defaults to\n *torch.finfo(torch.float32).eps*.\n\nGiven running min/max as x_\\text{min} and x_\\text{max}, scale s and\n zero point z are computed as:\nThe running minimum/maximum x_\\text{min/max} is computed as:\n \\begin{array}{ll} x_\\text{min} &= \\begin{cases} \\min(X) &\n \\text{if~}x_\\text{min} = \\text{None} \\\\\n \\min\\left(x_\\text{min}, \\min(X)\\right) & \\text{otherwise}\n \\end{cases}\\\\ x_\\text{max} &= \\begin{cases} \\max(X) &\n \\text{if~}x_\\text{max} = \\text{None} \\\\\n \\max\\left(x_\\text{max}, \\max(X)\\right) & \\text{otherwise}\n \\end{cases}\\\\ \\end{array}\n\nwhere X is the observed tensor.\nThe scale s and zero point z are then computed as:\n \\begin{aligned} \\text{if Symmetric:}&\\\\ &s = 2\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MinMaxObserver.html", "category": "pytorch docs"} {"text": "\\max(|x_\\text{min}|, x_\\text{max}) / \\left( Q_\\text{max}\n - Q_\\text{min} \\right) \\ &z = \\begin{cases} 0 &\n \\text{if dtype is qint8} \\ 128 & \\text{otherwise}\n \\end{cases}\\ \\text{Otherwise:}&\\ &s = \\left(\n x_\\text{max} - x_\\text{min} \\right ) / \\left(\n Q_\\text{max} - Q_\\text{min} \\right ) \\ &z =\n Q_\\text{min} - \\text{round}(x_\\text{min} / s) \\end{aligned}\nwhere Q_\\text{min} and Q_\\text{max} are the minimum and maximum of\n the quantized data type.\nWarning:\n \"dtype\" can only take \"torch.qint8\" or \"torch.quint8\".\n\nNote:\n If the running minimum equals to the running maximum, the scale\n and zero_point are set to 1.0 and 0.\n\ncalculate_qparams()\n Calculates the quantization parameters.\n\nforward(x_orig)\n Records the running minimum and maximum of \"x\".\n\nreset_min_max_vals()\n Resets the min/max values.\n", "source": "https://pytorch.org/docs/stable/generated/torch.quantization.observer.MinMaxObserver.html", "category": "pytorch docs"} {"text": "torch.Tensor.is_floating_point\nTensor.is_floating_point() -> bool\nReturns True if the data type of \"self\" is a floating point data\n type.", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.is_floating_point.html", "category": "pytorch docs"} {"text": "torch.cosh\ntorch.cosh(input, *, out=None) -> Tensor\nReturns a new tensor with the hyperbolic cosine of the elements of\n \"input\".\n \\text{out}_{i} = \\cosh(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.1632, 1.1835, -0.6979, -0.7325])\n >>> torch.cosh(a)\n tensor([ 1.0133, 1.7860, 1.2536, 1.2805])\n\nNote:\n When \"input\" is on the CPU, the implementation of torch.cosh may\n use the Sleef library, which rounds very large results to\n infinity or negative infinity. See here for details.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cosh.html", "category": "pytorch docs"} {"text": "torch.Tensor.log2_\nTensor.log2_() -> Tensor\nIn-place version of \"log2()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.log2_.html", "category": "pytorch docs"} {"text": "torch.msort\ntorch.msort(input, *, out=None) -> Tensor\nSorts the elements of the \"input\" tensor along its first dimension\n in ascending order by value.\nNote:\n *torch.msort(t)* is equivalent to *torch.sort(t, dim=0)[0]*. 
See\n also \"torch.sort()\".\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> t = torch.randn(3, 4)\n >>> t\n tensor([[-0.1321, 0.4370, -1.2631, -1.1289],\n [-2.0527, -1.1250, 0.2275, 0.3077],\n [-0.0881, -0.1259, -0.5495, 1.0284]])\n >>> torch.msort(t)\n tensor([[-2.0527, -1.1250, -1.2631, -1.1289],\n [-0.1321, -0.1259, -0.5495, 0.3077],\n [-0.0881, 0.4370, 0.2275, 1.0284]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.msort.html", "category": "pytorch docs"} {"text": "GroupNorm\nclass torch.ao.nn.quantized.GroupNorm(num_groups, num_channels, weight, bias, scale, zero_point, eps=1e-05, affine=True, device=None, dtype=None)\nThis is the quantized version of \"GroupNorm\".\nAdditional args:\n * scale - quantization scale of the output, type: double.\n * **zero_point** - quantization zero point of the output, type:\n long.\n", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.quantized.GroupNorm.html", "category": "pytorch docs"} {"text": "torch.nn.functional.cosine_similarity\ntorch.nn.functional.cosine_similarity(x1, x2, dim=1, eps=1e-8) -> Tensor\nReturns cosine similarity between \"x1\" and \"x2\", computed along\n dim. \"x1\" and \"x2\" must be broadcastable to a common shape. \"dim\"\n refers to the dimension in this common shape. Dimension \"dim\" of\n the output is squeezed (see \"torch.squeeze()\"), resulting in the\n output tensor having 1 fewer dimension.\n \\text{similarity} = \\dfrac{x_1 \\cdot x_2}{\\max(\\Vert x_1 \\Vert\n _2 \\cdot \\Vert x_2 \\Vert _2, \\epsilon)}\n\nSupports type promotion.\nParameters:\n * x1 (Tensor) -- First input.\n * **x2** (*Tensor*) -- Second input.\n\n * **dim** (*int**, **optional*) -- Dimension along which cosine\n similarity is computed. Default: 1\n\n * **eps** (*float**, **optional*) -- Small value to avoid\n division by zero. 
Default: 1e-8\n\nExample:\n >>> input1 = torch.randn(100, 128)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cosine_similarity.html", "category": "pytorch docs"} {"text": "\n\n\ninput1 = torch.randn(100, 128)\n >>> input2 = torch.randn(100, 128)\n >>> output = F.cosine_similarity(input1, input2)\n >>> print(output)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.cosine_similarity.html", "category": "pytorch docs"} {"text": "torch._foreach_log2\ntorch._foreach_log2(self: List[Tensor]) -> List[Tensor]\nApply \"torch.log2()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log2.html", "category": "pytorch docs"} {"text": "torch.Tensor.copysign_\nTensor.copysign_(other) -> Tensor\nIn-place version of \"copysign()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.copysign_.html", "category": "pytorch docs"} {"text": "torch._foreach_reciprocal\ntorch._foreach_reciprocal(self: List[Tensor]) -> List[Tensor]\nApply \"torch.reciprocal()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_reciprocal.html", "category": "pytorch docs"} {"text": "torch.Tensor.divide\nTensor.divide(value, *, rounding_mode=None) -> Tensor\nSee \"torch.divide()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.divide.html", "category": "pytorch docs"} {"text": "torch.signal.windows.general_cosine\ntorch.signal.windows.general_cosine(M, *, a, sym=True, dtype=None, layout=torch.strided, device=None, requires_grad=False)\nComputes the general cosine window.\nThe general cosine window is defined as follows:\n w_n = \\sum^{M-1}_{i=0} (-1)^i a_i \\cos{ \\left( \\frac{2 \\pi i\n n}{M - 1}\\right)}\n\nThe window is normalized to 1 (maximum value is 1). However, the 1\n doesn't appear if \"M\" is even and \"sym\" is True.\nParameters:\n M (int) -- the length of the window. In other words, the\n number of points of the returned window.\nKeyword Arguments:\n * a (Iterable) -- the coefficients associated to each of\n the cosine functions.\n * **sym** (*bool**, **optional*) -- If *False*, returns a\n periodic window suitable for use in spectral analysis. If\n *True*, returns a symmetric window suitable for use in filter\n design. Default: *True*.\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.general_cosine.html", "category": "pytorch docs"} {"text": "design. Default: True.\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. Default: if \"None\", uses a global default\n (see \"torch.set_default_tensor_type()\").\n\n * **layout** (\"torch.layout\", optional) -- the desired layout of\n returned Tensor. Default: \"torch.strided\".\n\n * **device** (\"torch.device\", optional) -- the desired device of\n returned tensor. Default: if \"None\", uses the current device\n for the default tensor type (see\n \"torch.set_default_tensor_type()\"). \"device\" will be the CPU\n for CPU tensor types and the current CUDA device for CUDA\n tensor types.\n\n * **requires_grad** (*bool**, **optional*) -- If autograd should\n record operations on the returned tensor. 
Default: \"False\".\n\nReturn type:\n Tensor\nExamples:\n >>> # Generates a symmetric general cosine window with 3 coefficients.\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.general_cosine.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.signal.windows.general_cosine(10, a=[0.46, 0.23, 0.31], sym=True)\n tensor([0.5400, 0.3376, 0.1288, 0.4200, 0.9136, 0.9136, 0.4200, 0.1288, 0.3376, 0.5400])\n\n\n\n >>> # Generates a periodic general cosine window wit 2 coefficients.\n >>> torch.signal.windows.general_cosine(10, a=[0.5, 1 - 0.5], sym=False)\n tensor([0.0000, 0.0955, 0.3455, 0.6545, 0.9045, 1.0000, 0.9045, 0.6545, 0.3455, 0.0955])\n", "source": "https://pytorch.org/docs/stable/generated/torch.signal.windows.general_cosine.html", "category": "pytorch docs"} {"text": "ConvReLU3d\nclass torch.ao.nn.intrinsic.ConvReLU3d(conv, relu)\nThis is a sequential container which calls the Conv3d and ReLU\n modules. During quantization this will be replaced with the\n corresponding fused module.", "source": "https://pytorch.org/docs/stable/generated/torch.ao.nn.intrinsic.ConvReLU3d.html", "category": "pytorch docs"} {"text": "torch.block_diag\ntorch.block_diag(*tensors)\nCreate a block diagonal matrix from provided tensors.\nParameters:\n *tensors -- One or more tensors with 0, 1, or 2 dimensions.\nReturns:\n A 2 dimensional tensor with all the input tensors arranged in\n order such that their upper left and lower right corners are\n diagonally adjacent. All other elements are set to 0.\nReturn type:\n Tensor\nExample:\n >>> import torch\n >>> A = torch.tensor([[0, 1], [1, 0]])\n >>> B = torch.tensor([[3, 4, 5], [6, 7, 8]])\n >>> C = torch.tensor(7)\n >>> D = torch.tensor([1, 2, 3])\n >>> E = torch.tensor([[4], [5], [6]])\n >>> torch.block_diag(A, B, C, D, E)\n tensor([[0, 1, 0, 0, 0, 0, 0, 0, 0, 0],\n [1, 0, 0, 0, 0, 0, 0, 0, 0, 0],\n [0, 0, 3, 4, 5, 0, 0, 0, 0, 0],\n [0, 0, 6, 7, 8, 0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 7, 0, 0, 0, 0],\n [0, 0, 0, 0, 0, 0, 1, 2, 3, 0],\n", "source": "https://pytorch.org/docs/stable/generated/torch.block_diag.html", "category": "pytorch docs"} {"text": "[0, 0, 0, 0, 0, 0, 1, 2, 3, 0],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 4],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 5],\n [0, 0, 0, 0, 0, 0, 0, 0, 0, 6]])", "source": "https://pytorch.org/docs/stable/generated/torch.block_diag.html", "category": "pytorch docs"} {"text": "torch.Tensor.unique\nTensor.unique(sorted=True, return_inverse=False, return_counts=False, dim=None)\nReturns the unique elements of the input tensor.\nSee \"torch.unique()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.unique.html", "category": "pytorch docs"} {"text": "torch.nn.functional.max_pool3d\ntorch.nn.functional.max_pool3d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)\nApplies a 3D max pooling over an input signal composed of several\n input planes.\nNote:\n The order of \"ceil_mode\" and \"return_indices\" is different from\n what seen in \"MaxPool3d\", and will change in a future release.\n\nSee \"MaxPool3d\" for details.\nParameters:\n * input -- input tensor (\\text{minibatch} ,\n \\text{in_channels} , iD, iH , iW), minibatch dim optional.\n * **kernel_size** -- size of the pooling region. Can be a single\n number or a tuple *(kT, kH, kW)*\n\n * **stride** -- stride of the pooling operation. Can be a single\n number or a tuple *(sT, sH, sW)*. 
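A short, illustrative shape example for this functional form (not part of the reference text; sizes are arbitrary):

    >>> import torch
    >>> import torch.nn.functional as F
    >>> x = torch.randn(1, 3, 8, 32, 32)              # (minibatch, channels, D, H, W)
    >>> F.max_pool3d(x, kernel_size=2, stride=2).shape
    torch.Size([1, 3, 4, 16, 16])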
Default: \"kernel_size\"\n\n * **padding** -- Implicit negative infinity padding to be added\n on both sides, must be >= 0 and <= kernel_size / 2.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool3d.html", "category": "pytorch docs"} {"text": "\n\ndilation -- The stride between elements within a sliding\n window, must be > 0.\n\n\nceil_mode -- If \"True\", will use ceil instead of floor\n to compute the output shape. This ensures that every element\n in the input tensor is covered by a sliding window.\n\n\nreturn_indices -- If \"True\", will return the argmax along\n with the max values. Useful for\n \"torch.nn.functional.max_unpool3d\" later\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.max_pool3d.html", "category": "pytorch docs"} {"text": "enable_grad\nclass torch.enable_grad\nContext-manager that enables gradient calculation.\nEnables gradient calculation, if it has been disabled via \"no_grad\"\n or \"set_grad_enabled\".\nThis context manager is thread local; it will not affect\n computation in other threads.\nAlso functions as a decorator. (Make sure to instantiate with\n parenthesis.)\nNote:\n enable_grad is one of several mechanisms that can enable or\n disable gradients locally see Locally disabling gradient\n computation for more information on how they compare.\n\nNote:\n This API does not apply to forward-mode AD.\n\nExample::\n >>> x = torch.tensor([1.], requires_grad=True)\n >>> with torch.no_grad():\n ... with torch.enable_grad():\n ... y = x * 2\n >>> y.requires_grad\n True\n >>> y.backward()\n >>> x.grad\n tensor([2.])\n >>> @torch.enable_grad()\n ... def doubler(x):\n ... return x * 2\n >>> with torch.no_grad():", "source": "https://pytorch.org/docs/stable/generated/torch.enable_grad.html", "category": "pytorch docs"} {"text": "\n\n\nwith torch.no_grad():\n ... z = doubler(x)\n >>> z.requires_grad\n True\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.enable_grad.html", "category": "pytorch docs"} {"text": "InstanceNorm2d\nclass torch.nn.InstanceNorm2d(num_features, eps=1e-05, momentum=0.1, affine=False, track_running_stats=False, device=None, dtype=None)\nApplies Instance Normalization over a 4D input (a mini-batch of 2D\n inputs with additional channel dimension) as described in the paper\n Instance Normalization: The Missing Ingredient for Fast\n Stylization.\n y = \\frac{x - \\mathrm{E}[x]}{ \\sqrt{\\mathrm{Var}[x] + \\epsilon}}\n * \\gamma + \\beta\n\nThe mean and standard-deviation are calculated per-dimension\n separately for each object in a mini-batch. \\gamma and \\beta are\n learnable parameter vectors of size C (where C is the input\n size) if \"affine\" is \"True\". The standard-deviation is calculated\n via the biased estimator, equivalent to torch.var(input,\n unbiased=False).\nBy default, this layer uses instance statistics computed from input\n data in both training and evaluation modes.\nIf \"track_running_stats\" is set to \"True\", during training this", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm2d.html", "category": "pytorch docs"} {"text": "layer keeps running estimates of its computed mean and variance,\n which are then used for normalization during evaluation. The\n running estimates are kept with a default \"momentum\" of 0.1.\nNote:\n This \"momentum\" argument is different from one used in optimizer\n classes and the conventional notion of momentum. 
Mathematically,\n the update rule for running statistics here is \\hat{x}_\\text{new}\n = (1 - \\text{momentum}) \\times \\hat{x} + \\text{momentum} \\times\n x_t, where \\hat{x} is the estimated statistic and x_t is the new\n observed value.\n\nNote:\n \"InstanceNorm2d\" and \"LayerNorm\" are very similar, but have some\n subtle differences. \"InstanceNorm2d\" is applied on each channel\n of channeled data like RGB images, but \"LayerNorm\" is usually\n applied on entire sample and often in NLP tasks. Additionally,\n \"LayerNorm\" applies elementwise affine transform, while\n \"InstanceNorm2d\" usually don't apply affine transform.\n\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm2d.html", "category": "pytorch docs"} {"text": "Parameters:\n * num_features (int) -- C from an expected input of size\n (N, C, H, W) or (C, H, W)\n * **eps** (*float*) -- a value added to the denominator for\n numerical stability. Default: 1e-5\n\n * **momentum** (*float*) -- the value used for the running_mean\n and running_var computation. Default: 0.1\n\n * **affine** (*bool*) -- a boolean value that when set to\n \"True\", this module has learnable affine parameters,\n initialized the same way as done for batch normalization.\n Default: \"False\".\n\n * **track_running_stats** (*bool*) -- a boolean value that when\n set to \"True\", this module tracks the running mean and\n variance, and when set to \"False\", this module does not track\n such statistics and always uses batch statistics in both\n training and eval modes. Default: \"False\"\n\nShape:\n * Input: (N, C, H, W) or (C, H, W)\n * Output: (N, C, H, W) or (C, H, W) (same shape as input)\n\nExamples:", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm2d.html", "category": "pytorch docs"} {"text": "Examples:\n >>> # Without Learnable Parameters\n >>> m = nn.InstanceNorm2d(100)\n >>> # With Learnable Parameters\n >>> m = nn.InstanceNorm2d(100, affine=True)\n >>> input = torch.randn(20, 100, 35, 45)\n >>> output = m(input)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.InstanceNorm2d.html", "category": "pytorch docs"} {"text": "torch.foreach_log\ntorch.foreach_log(self: List[Tensor]) -> None\nApply \"torch.log()\" to each Tensor of the input list.", "source": "https://pytorch.org/docs/stable/generated/torch._foreach_log_.html", "category": "pytorch docs"} {"text": "torch.cholesky\ntorch.cholesky(input, upper=False, *, out=None) -> Tensor\nComputes the Cholesky decomposition of a symmetric positive-\n definite matrix A or for batches of symmetric positive-definite\n matrices.\nIf \"upper\" is \"True\", the returned matrix \"U\" is upper-triangular,\n and the decomposition has the form:\n A = U^TU\n\nIf \"upper\" is \"False\", the returned matrix \"L\" is lower-triangular,\n and the decomposition has the form:\n A = LL^T\n\nIf \"upper\" is \"True\", and A is a batch of symmetric positive-\n definite matrices, then the returned tensor will be composed of\n upper-triangular Cholesky factors of each of the individual\n matrices. 
Similarly, when \"upper\" is \"False\", the returned tensor\n will be composed of lower-triangular Cholesky factors of each of\n the individual matrices.\nWarning:\n \"torch.cholesky()\" is deprecated in favor of\n \"torch.linalg.cholesky()\" and will be removed in a future PyTorch\n", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky.html", "category": "pytorch docs"} {"text": "release.\"L = torch.cholesky(A)\" should be replaced with\n L = torch.linalg.cholesky(A)\n\n \"U = torch.cholesky(A, upper=True)\" should be replaced with\n\n U = torch.linalg.cholesky(A).mH\n\n This transform will produce equivalent results for all valid\n (symmetric positive definite) inputs.\n\nParameters:\n * input (Tensor) -- the input tensor A of size (*, n, n)\n where *** is zero or more batch dimensions consisting of\n symmetric positive-definite matrices.\n * **upper** (*bool**, **optional*) -- flag that indicates\n whether to return a upper or lower triangular matrix. Default:\n \"False\"\n\nKeyword Arguments:\n out (Tensor, optional) -- the output matrix\nExample:\n >>> a = torch.randn(3, 3)\n >>> a = a @ a.mT + 1e-3 # make symmetric positive-definite\n >>> l = torch.cholesky(a)\n >>> a\n tensor([[ 2.4112, -0.7486, 1.4551],\n [-0.7486, 1.3544, 0.1294],\n", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky.html", "category": "pytorch docs"} {"text": "[-0.7486, 1.3544, 0.1294],\n [ 1.4551, 0.1294, 1.6724]])\n >>> l\n tensor([[ 1.5528, 0.0000, 0.0000],\n [-0.4821, 1.0592, 0.0000],\n [ 0.9371, 0.5487, 0.7023]])\n >>> l @ l.mT\n tensor([[ 2.4112, -0.7486, 1.4551],\n [-0.7486, 1.3544, 0.1294],\n [ 1.4551, 0.1294, 1.6724]])\n >>> a = torch.randn(3, 2, 2) # Example for batched input\n >>> a = a @ a.mT + 1e-03 # make symmetric positive-definite\n >>> l = torch.cholesky(a)\n >>> z = l @ l.mT\n >>> torch.dist(z, a)\n tensor(2.3842e-07)", "source": "https://pytorch.org/docs/stable/generated/torch.cholesky.html", "category": "pytorch docs"} {"text": "torch.narrow_copy\ntorch.narrow_copy(input, dim, start, length, *, out=None) -> Tensor\nSame as \"Tensor.narrow()\" except this returns a copy rather than\n shared storage. This is primarily for sparse tensors, which do not\n have a shared-storage narrow method.\nParameters:\n * input (Tensor) -- the tensor to narrow\n * **dim** (*int*) -- the dimension along which to narrow\n\n * **start** (*int*) -- index of the element to start the\n narrowed dimension from. 
Can be negative, which means indexing\n from the end of *dim*\n\n * **length** (*int*) -- length of the narrowed dimension, must\n be weakly positive\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> x = torch.tensor([[1, 2, 3], [4, 5, 6], [7, 8, 9]])\n >>> torch.narrow_copy(x, 0, 0, 2)\n tensor([[ 1, 2, 3],\n [ 4, 5, 6]])\n >>> torch.narrow_copy(x, 1, 1, 2)\n tensor([[ 2, 3],\n [ 5, 6],\n", "source": "https://pytorch.org/docs/stable/generated/torch.narrow_copy.html", "category": "pytorch docs"} {"text": "tensor([[ 2, 3],\n [ 5, 6],\n [ 8, 9]])\n >>> s = torch.arange(16).reshape(2, 2, 2, 2).to_sparse(2)\n >>> torch.narrow_copy(s, 0, 0, 1)\n tensor(indices=tensor([[0, 0],\n [0, 1]]),\n values=tensor([[[0, 1],\n [2, 3]],\n [[4, 5],\n [6, 7]]]),\n size=(1, 2, 2, 2), nnz=2, layout=torch.sparse_coo)\n\nSee also: \"torch.narrow()\" for a non copy variant", "source": "https://pytorch.org/docs/stable/generated/torch.narrow_copy.html", "category": "pytorch docs"} {"text": "torch.Tensor.logical_not\nTensor.logical_not() -> Tensor\nSee \"torch.logical_not()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.logical_not.html", "category": "pytorch docs"} {"text": "torch.nn.utils.parametrizations.spectral_norm\ntorch.nn.utils.parametrizations.spectral_norm(module, name='weight', n_power_iterations=1, eps=1e-12, dim=None)\nApplies spectral normalization to a parameter in the given module.\n \\mathbf{W}_{SN} = \\dfrac{\\mathbf{W}}{\\sigma(\\mathbf{W})},\n \\sigma(\\mathbf{W}) = \\max_{\\mathbf{h}: \\mathbf{h} \\ne 0}\n \\dfrac{\\|\\mathbf{W} \\mathbf{h}\\|_2}{\\|\\mathbf{h}\\|_2}\n\nWhen applied on a vector, it simplifies to\n \\mathbf{x}_{SN} = \\dfrac{\\mathbf{x}}{\\|\\mathbf{x}\\|_2}\n\nSpectral normalization stabilizes the training of discriminators\n (critics) in Generative Adversarial Networks (GANs) by reducing the\n Lipschitz constant of the model. \\sigma is approximated performing\n one iteration of the power method every time the weight is\n accessed. If the dimension of the weight tensor is greater than 2,\n it is reshaped to 2D in power iteration method to get spectral\n norm.", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.spectral_norm.html", "category": "pytorch docs"} {"text": "norm.\nSee Spectral Normalization for Generative Adversarial Networks .\nNote:\n This function is implemented using the parametrization\n functionality in \"register_parametrization()\". It is a\n reimplementation of \"torch.nn.utils.spectral_norm()\".\n\nNote:\n When this constraint is registered, the singular vectors\n associated to the largest singular value are estimated rather\n than sampled at random. These are then updated performing\n \"n_power_iterations\" of the power method whenever the tensor is\n accessed with the module on *training* mode.\n\nNote:\n If the *_SpectralNorm* module, i.e.,\n *module.parametrization.weight[idx]*, is in training mode on\n removal, it will perform another power iteration. If you'd like\n to avoid this iteration, set the module to eval mode before its\n removal.\n\nParameters:\n * module (nn.Module) -- containing module\n * **name** (*str**, **optional*) -- name of weight parameter.\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.spectral_norm.html", "category": "pytorch docs"} {"text": "Default: \"\"weight\"\".\n * **n_power_iterations** (*int**, **optional*) -- number of\n power iterations to calculate spectral norm. 
Default: \"1\".\n\n * **eps** (*float**, **optional*) -- epsilon for numerical\n stability in calculating norms. Default: \"1e-12\".\n\n * **dim** (*int**, **optional*) -- dimension corresponding to\n number of outputs. Default: \"0\", except for modules that are\n instances of ConvTranspose{1,2,3}d, when it is \"1\"\n\nReturns:\n The original module with a new parametrization registered to the\n specified weight\nReturn type:\n Module\nExample:\n >>> snm = spectral_norm(nn.Linear(20, 40))\n >>> snm\n ParametrizedLinear(\n in_features=20, out_features=40, bias=True\n (parametrizations): ModuleDict(\n (weight): ParametrizationList(\n (0): _SpectralNorm()\n )\n )\n )\n >>> torch.linalg.matrix_norm(snm.weight, 2)\n tensor(1.0081, grad_fn=)\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.utils.parametrizations.spectral_norm.html", "category": "pytorch docs"} {"text": "torch.sparse.spdiags\ntorch.sparse.spdiags(diagonals, offsets, shape, layout=None) -> Tensor\nCreates a sparse 2D tensor by placing the values from rows of\n \"diagonals\" along specified diagonals of the output\nThe \"offsets\" tensor controls which diagonals are set.\n\n\nIf \"offsets[i]\" = 0, it is the main diagonal\n\n\nIf \"offsets[i]\" < 0, it is below the main diagonal\n\n\nIf \"offsets[i]\" > 0, it is above the main diagonal\n\n\nThe number of rows in \"diagonals\" must match the length of\n \"offsets\", and an offset may not be repeated.\nParameters:\n * diagonals (Tensor) -- Matrix storing diagonals row-wise\n * **offsets** (*Tensor*) -- The diagonals to be set, stored as a\n vector\n\n * **shape** (*2-tuple of ints*) -- The desired shape of the\n result\n\nKeyword Arguments:\n layout (\"torch.layout\", optional) -- The desired layout of\n the returned tensor. \"torch.sparse_coo\", \"torch.sparse_csc\" and", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.spdiags.html", "category": "pytorch docs"} {"text": "\"torch.sparse_csr\" are supported. 
Default: \"torch.sparse_coo\"\nExamples:\nSet the main and first two lower diagonals of a matrix:\n >>> diags = torch.arange(9).reshape(3, 3)\n >>> diags\n tensor([[0, 1, 2],\n [3, 4, 5],\n [6, 7, 8]])\n >>> s = torch.sparse.spdiags(diags, torch.tensor([0, -1, -2]), (3, 3))\n >>> s\n tensor(indices=tensor([[0, 1, 2, 1, 2, 2],\n [0, 1, 2, 0, 1, 0]]),\n values=tensor([0, 1, 2, 3, 4, 6]),\n size=(3, 3), nnz=6, layout=torch.sparse_coo)\n >>> s.to_dense()\n tensor([[0, 0, 0],\n [3, 1, 0],\n [6, 4, 2]])\n\nChange the output layout:\n >>> diags = torch.arange(9).reshape(3, 3)\n >>> diags\n tensor([[0, 1, 2],[3, 4, 5], [6, 7, 8])\n >>> s = torch.sparse.spdiags(diags, torch.tensor([0, -1, -2]), (3, 3), layout=torch.sparse_csr)\n >>> s\n tensor(crow_indices=tensor([0, 1, 3, 6]),\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.spdiags.html", "category": "pytorch docs"} {"text": "tensor(crow_indices=tensor([0, 1, 3, 6]),\n col_indices=tensor([0, 0, 1, 0, 1, 2]),\n values=tensor([0, 3, 1, 6, 4, 2]), size=(3, 3), nnz=6,\n layout=torch.sparse_csr)\n >>> s.to_dense()\n tensor([[0, 0, 0],\n [3, 1, 0],\n [6, 4, 2]])\nSet partial diagonals of a large output:\n >>> diags = torch.tensor([[1, 2], [3, 4]])\n >>> offsets = torch.tensor([0, -1])\n >>> torch.sparse.spdiags(diags, offsets, (5, 5)).to_dense()\n tensor([[1, 0, 0, 0, 0],\n [3, 2, 0, 0, 0],\n [0, 4, 0, 0, 0],\n [0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0]])\n\nNote:\n When setting the values along a given diagonal the index into the\n diagonal and the index into the row of \"diagonals\" is taken as\n the column index in the output. This has the effect that when\n setting a diagonal with a positive offset *k* the first value\n along that diagonal will be the value in position *k* of the row\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.spdiags.html", "category": "pytorch docs"} {"text": "of \"diagonals\"\nSpecifying a positive offset:\n >>> diags = torch.tensor([[1, 2, 3], [1, 2, 3], [1, 2, 3]])\n >>> torch.sparse.spdiags(diags, torch.tensor([0, 1, 2]), (5, 5)).to_dense()\n tensor([[1, 2, 3, 0, 0],\n [0, 2, 3, 0, 0],\n [0, 0, 3, 0, 0],\n [0, 0, 0, 0, 0],\n [0, 0, 0, 0, 0]])\n", "source": "https://pytorch.org/docs/stable/generated/torch.sparse.spdiags.html", "category": "pytorch docs"} {"text": "torch.Tensor.float_power_\nTensor.float_power_(exponent) -> Tensor\nIn-place version of \"float_power()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.float_power_.html", "category": "pytorch docs"} {"text": "torch.Tensor.igamma\nTensor.igamma(other) -> Tensor\nSee \"torch.igamma()\"", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.igamma.html", "category": "pytorch docs"} {"text": "torch.compiled_with_cxx11_abi\ntorch.compiled_with_cxx11_abi()\nReturns whether PyTorch was built with _GLIBCXX_USE_CXX11_ABI=1", "source": "https://pytorch.org/docs/stable/generated/torch.compiled_with_cxx11_abi.html", "category": "pytorch docs"} {"text": "ExternalStream\nclass torch.cuda.ExternalStream(stream_ptr, device=None, **kwargs)\nWrapper around an externally allocated CUDA stream.\nThis class is used to wrap streams allocated in other libraries in\n order to facilitate data exchange and multi-library interactions.\nNote:\n This class doesn't manage the stream life-cycle, it is the user\n responsibility to keep the referenced stream alive while this\n class is being used.\n\nParameters:\n * stream_ptr (int) -- Integer representation of the\n cudaStream_t value. 
allocated externally.\n * **device** (*torch.device** or **int**, **optional*) -- the\n device where the stream was originally allocated. if device is\n specified incorrectly, subsequent launches using this stream\n may fail.\n\nquery()\n Checks if all the work submitted has been completed.\n\n Returns:\n A boolean indicating if all kernels in this stream are\n completed.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.ExternalStream.html", "category": "pytorch docs"} {"text": "completed.\nrecord_event(event=None)\n Records an event.\n\n Parameters:\n **event** (*torch.cuda.Event**, **optional*) -- event to\n record. If not given, a new one will be allocated.\n\n Returns:\n Recorded event.\n\nsynchronize()\n Wait for all the kernels in this stream to complete.\n\n Note:\n\n This is a wrapper around \"cudaStreamSynchronize()\": see CUDA\n Stream documentation for more info.\n\nwait_event(event)\n Makes all future work submitted to the stream wait for an event.\n\n Parameters:\n **event** (*torch.cuda.Event*) -- an event to wait for.\n\n Note:\n\n This is a wrapper around \"cudaStreamWaitEvent()\": see CUDA\n Stream documentation for more info.This function returns\n without waiting for \"event\": only future operations are\n affected.\n\nwait_stream(stream)\n Synchronizes with another stream.\n\n All future work submitted to this stream will wait until all\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.ExternalStream.html", "category": "pytorch docs"} {"text": "kernels submitted to a given stream at the time of call\n complete.\n Parameters:\n **stream** (*Stream*) -- a stream to synchronize.\n\n Note:\n\n This function returns without waiting for currently enqueued\n kernels in \"stream\": only future operations are affected.\n", "source": "https://pytorch.org/docs/stable/generated/torch.cuda.ExternalStream.html", "category": "pytorch docs"} {"text": "torch.tanh\ntorch.tanh(input, *, out=None) -> Tensor\nReturns a new tensor with the hyperbolic tangent of the elements of\n \"input\".\n \\text{out}_{i} = \\tanh(\\text{input}_{i})\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.randn(4)\n >>> a\n tensor([ 0.8986, -0.7279, 1.1745, 0.2611])\n >>> torch.tanh(a)\n tensor([ 0.7156, -0.6218, 0.8257, 0.2553])\n", "source": "https://pytorch.org/docs/stable/generated/torch.tanh.html", "category": "pytorch docs"} {"text": "torch.exp\ntorch.exp(input, *, out=None) -> Tensor\nReturns a new tensor with the exponential of the elements of the\n input tensor \"input\".\n y_{i} = e^{x_{i}}\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.exp(torch.tensor([0, math.log(2.)]))\n tensor([ 1., 2.])\n", "source": "https://pytorch.org/docs/stable/generated/torch.exp.html", "category": "pytorch docs"} {"text": "Rprop\nclass torch.optim.Rprop(params, lr=0.01, etas=(0.5, 1.2), step_sizes=(1e-06, 50), *, foreach=None, maximize=False, differentiable=False)\nImplements the resilient backpropagation algorithm.\n \\begin{aligned} &\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{input} : \\theta_0 \\in \\mathbf{R}^d \\text{\n (params)},f(\\theta) \\text{ (objective)},\n \\\\ &\\hspace{13mm} \\eta_{+/-} \\text{ (etaplus,\n etaminus)}, \\Gamma_{max/min} \\text{ (step sizes)}\n \\\\ &\\textbf{initialize} : g^0_{prev} \\leftarrow 0,\n \\: \\eta_0 \\leftarrow \\text{lr (learning rate)}\n \\\\ 
&\\rule{110mm}{0.4pt}\n \\\\ &\\textbf{for} \\: t=1 \\: \\textbf{to} \\: \\ldots \\:\n \\textbf{do} \\\\ &\\hspace{5mm}g_t\n \\leftarrow \\nabla_{\\theta} f_t (\\theta_{t-1}) \\\\\n &\\hspace{5mm} \\textbf{for} \\text{ } i = 0, 1, \\ldots, d-1 \\:\n \\mathbf{do} \\\\ &\\hspace{10mm} \\textbf{if} \\:\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"} {"text": "g^i_{prev} g^i_t > 0 \\\n &\\hspace{15mm} \\eta^i_t \\leftarrow \\mathrm{min}(\\eta^i_{t-1}\n \\eta_{+}, \\Gamma_{max})\n \\ &\\hspace{10mm} \\textbf{else if} \\: g^i_{prev} g^i_t <\n 0 \\ &\\hspace{15mm} \\eta^i_t\n \\leftarrow \\mathrm{max}(\\eta^i_{t-1} \\eta_{-},\n \\Gamma_{min})\n \\ &\\hspace{15mm} g^i_t \\leftarrow 0\n \\ &\\hspace{10mm} \\textbf{else} \\:\n \\ &\\hspace{15mm} \\eta^i_t \\leftarrow \\eta^i_{t-1}\n \\ &\\hspace{5mm}\\theta_t \\leftarrow \\theta_{t-1}- \\eta_t\n \\mathrm{sign}(g_t) \\ &\\hspace{5mm}g_{prev}\n \\leftarrow g_t\n \\ &\\rule{110mm}{0.4pt}\n \\[-1.ex] &\\bf{return} \\: \\theta_t\n \\[-1.ex] &\\rule{110mm}{0.4pt}\n \\[-1.ex] \\end{aligned}\nFor further details regarding the algorithm we refer to the paper A\n Direct Adaptive Method for Faster Backpropagation Learning: The\n RPROP Algorithm.\nParameters:", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"} {"text": "RPROP Algorithm.\nParameters:\n * params (iterable) -- iterable of parameters to optimize\n or dicts defining parameter groups\n * **lr** (*float**, **optional*) -- learning rate (default:\n 1e-2)\n\n * **etas** (*Tuple**[**float**, **float**]**, **optional*) --\n pair of (etaminus, etaplus), that are multiplicative increase\n and decrease factors (default: (0.5, 1.2))\n\n * **step_sizes** (*Tuple**[**float**, **float**]**, **optional*)\n -- a pair of minimal and maximal allowed step sizes (default:\n (1e-6, 50))\n\n * **foreach** (*bool**, **optional*) -- whether foreach\n implementation of optimizer is used. If unspecified by the\n user (so foreach is None), we will try to use foreach over the\n for-loop implementation on CUDA, since it is usually\n significantly more performant. (default: None)\n\n * **maximize** (*bool**, **optional*) -- maximize the params\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"} {"text": "based on the objective, instead of minimizing (default: False)\n * **differentiable** (*bool**, **optional*) -- whether autograd\n should occur through the optimizer step in training.\n Otherwise, the step() function runs in a torch.no_grad()\n context. Setting to True can impair performance, so leave it\n False if you don't intend to run autograd through this\n instance (default: False)\n\nadd_param_group(param_group)\n Add a param group to the \"Optimizer\" s *param_groups*.\n\n This can be useful when fine tuning a pre-trained network as\n frozen layers can be made trainable and added to the \"Optimizer\"\n as training progresses.\n\n Parameters:\n **param_group** (*dict*) -- Specifies what Tensors should be\n optimized along with group specific optimization options.\n\nload_state_dict(state_dict)\n Loads the optimizer state.\n\n Parameters:\n **state_dict** (*dict*) -- optimizer state. 
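A minimal, illustrative training-step sketch (not part of the reference text) showing this optimizer in use; the model and data are arbitrary placeholders:

    >>> import torch
    >>> model = torch.nn.Linear(4, 2)
    >>> optimizer = torch.optim.Rprop(model.parameters(), lr=0.01,
    ...                               etas=(0.5, 1.2), step_sizes=(1e-6, 50))
    >>> loss = model(torch.randn(8, 4)).pow(2).mean()
    >>> optimizer.zero_grad()
    >>> loss.backward()
    >>> optimizer.step()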
Should be an\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"} {"text": "object returned from a call to \"state_dict()\".\nregister_step_post_hook(hook)\n Register an optimizer step post hook which will be called after\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None\n\n The \"optimizer\" argument is the optimizer instance being used.\n\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nregister_step_pre_hook(hook)\n Register an optimizer step pre hook which will be called before\n optimizer step. It should have the following signature:\n\n hook(optimizer, args, kwargs) -> None or modified args and kwargs\n\n The \"optimizer\" argument is the optimizer instance being used.\n If args and kwargs are modified by the pre-hook, then the\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"} {"text": "transformed values are returned as a tuple containing the\n new_args and new_kwargs.\n Parameters:\n **hook** (*Callable*) -- The user defined hook to be\n registered.\n\n Returns:\n a handle that can be used to remove the added hook by calling\n \"handle.remove()\"\n\n Return type:\n \"torch.utils.hooks.RemoveableHandle\"\n\nstate_dict()\n Returns the state of the optimizer as a \"dict\".\n\n It contains two entries:\n\n * state - a dict holding current optimization state. Its content\n differs between optimizer classes.\n\n * param_groups - a list containing all parameter groups where\n each\n parameter group is a dict\n\nzero_grad(set_to_none=False)\n Sets the gradients of all optimized \"torch.Tensor\" s to zero.\n\n Parameters:\n **set_to_none** (*bool*) -- instead of setting to zero, set\n the grads to None. This will in general have lower memory\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"} {"text": "footprint, and can modestly improve performance. However, it\n changes certain behaviors. For example: 1. When the user\n tries to access a gradient and perform manual ops on it, a\n None attribute or a Tensor full of 0s will behave\n differently. 2. If the user requests\n \"zero_grad(set_to_none=True)\" followed by a backward pass,\n \".grad\"s are guaranteed to be None for params that did not\n receive a gradient. 3. \"torch.optim\" optimizers have a\n different behavior if the gradient is 0 or None (in one case\n it does the step with a gradient of 0 and in the other it\n skips the step altogether).", "source": "https://pytorch.org/docs/stable/generated/torch.optim.Rprop.html", "category": "pytorch docs"} {"text": "torch.fft.irfftn\ntorch.fft.irfftn(input, s=None, dim=None, norm=None, *, out=None) -> Tensor\nComputes the inverse of \"rfftn()\".\n\"input\" is interpreted as a one-sided Hermitian signal in the\n Fourier domain, as produced by \"rfftn()\". By the Hermitian\n property, the output will be real-valued.\nNote:\n Some input frequencies must be real-valued to satisfy the\n Hermitian property. In these cases the imaginary component will\n be ignored. 
For example, any imaginary component in the zero-\n frequency term cannot be represented in a real output and so will\n always be ignored.\n\nNote:\n The correct interpretation of the Hermitian input depends on the\n length of the original data, as given by \"s\". This is because\n each input shape could correspond to either an odd or even length\n signal. By default, the signal is assumed to be even length and\n odd signals will not round-trip properly. So, it is recommended\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfftn.html", "category": "pytorch docs"} {"text": "to always pass the signal shape \"s\".\nNote:\n Supports torch.half and torch.chalf on CUDA with GPU Architecture\n SM53 or greater. However it only supports powers of 2 signal\n length in every transformed dimensions. With default arguments,\n the size of last dimension should be (2^n + 1) as argument *s*\n defaults to even output size = 2 * (last_dim_size - 1)\n\nParameters:\n * input (Tensor) -- the input tensor\n * **s** (*Tuple**[**int**]**, **optional*) -- Signal size in the\n transformed dimensions. If given, each dimension \"dim[i]\" will\n either be zero-padded or trimmed to the length \"s[i]\" before\n computing the real FFT. If a length \"-1\" is specified, no\n padding is done in that dimension. Defaults to even output in\n the last dimension: \"s[-1] = 2*(input.size(dim[-1]) - 1)\".\n\n * **dim** (*Tuple**[**int**]**, **optional*) -- Dimensions to be\n transformed. The last dimension must be the half-Hermitian\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfftn.html", "category": "pytorch docs"} {"text": "compressed dimension. Default: all dimensions, or the last\n \"len(s)\" dimensions if \"s\" is given.\n * **norm** (*str**, **optional*) --\n\n Normalization mode. For the backward transform (\"irfftn()\"),\n these correspond to:\n\n * \"\"forward\"\" - no normalization\n\n * \"\"backward\"\" - normalize by \"1/n\"\n\n * \"\"ortho\"\" - normalize by \"1/sqrt(n)\" (making the real IFFT\n orthonormal)\n\n Where \"n = prod(s)\" is the logical IFFT size. Calling the\n forward transform (\"rfftn()\") with the same normalization mode\n will apply an overall normalization of \"1/n\" between the two\n transforms. This is required to make \"irfftn()\" the exact\n inverse.\n\n Default is \"\"backward\"\" (normalize by \"1/n\").\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\n-[ Example ]-\n\n\n\nt = torch.rand(10, 9)\nT = torch.fft.rfftn(t)\n\n\n\nWithout specifying the output length to \"irfft()\", the output will", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfftn.html", "category": "pytorch docs"} {"text": "not round-trip properly because the input is odd-length in the last\n dimension:\n\n\n\ntorch.fft.irfftn(T).size()\n torch.Size([10, 8])\n\n\n\nSo, it is recommended to always pass the signal shape \"s\".\n\n\n\nroundtrip = torch.fft.irfftn(T, t.size())\nroundtrip.size()\n torch.Size([10, 9])\ntorch.testing.assert_close(roundtrip, t, check_stride=False)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.fft.irfftn.html", "category": "pytorch docs"} {"text": "torch.Tensor.char\nTensor.char(memory_format=torch.preserve_format) -> Tensor\n\"self.char()\" is equivalent to \"self.to(torch.int8)\". See \"to()\".\nParameters:\n memory_format (\"torch.memory_format\", optional) -- the\n desired memory format of returned Tensor. 
Default:\n \"torch.preserve_format\".", "source": "https://pytorch.org/docs/stable/generated/torch.Tensor.char.html", "category": "pytorch docs"} {"text": "torch.linalg.inv_ex\ntorch.linalg.inv_ex(A, *, check_errors=False, out=None)\nComputes the inverse of a square matrix if it is invertible.\nReturns a namedtuple \"(inverse, info)\". \"inverse\" contains the\n result of inverting \"A\" and \"info\" stores the LAPACK error codes.\nIf \"A\" is not an invertible matrix, or if it's a batch of matrices\n and one or more of them is not an invertible matrix, then \"info\"\n stores a positive integer for the corresponding matrix. The\n positive integer indicates the diagonal element of the LU\n decomposition of the input matrix that is exactly zero. \"info\"\n filled with zeros indicates that the inversion was successful. If\n \"check_errors=True\" and \"info\" contains positive integers, then a\n RuntimeError is thrown.\nSupports input of float, double, cfloat and cdouble dtypes. Also\n supports batches of matrices, and if \"A\" is a batch of matrices\n then the output has the same batch dimensions.\nNote:", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.inv_ex.html", "category": "pytorch docs"} {"text": "Note:\n When the inputs are on a CUDA device, this function synchronizes\n only when \"check_errors\"*= True*.\n\nWarning:\n This function is \"experimental\" and it may change in a future\n PyTorch release.\n\nSee also:\n \"torch.linalg.inv()\" is a NumPy compatible variant that always\n checks for errors.\n\nParameters:\n * A (Tensor) -- tensor of shape (, n, n)* where *** is\n zero or more batch dimensions consisting of square matrices.\n * **check_errors** (*bool**, **optional*) -- controls whether to\n check the content of \"info\". Default: *False*.\n\nKeyword Arguments:\n out (tuple, optional) -- tuple of two tensors to write\n the output to. Ignored if None. Default: None.\nExamples:\n >>> A = torch.randn(3, 3)\n >>> Ainv, info = torch.linalg.inv_ex(A)\n >>> torch.dist(torch.linalg.inv(A), Ainv)\n tensor(0.)\n >>> info\n tensor(0, dtype=torch.int32)\n", "source": "https://pytorch.org/docs/stable/generated/torch.linalg.inv_ex.html", "category": "pytorch docs"} {"text": "torch.nn.functional.binary_cross_entropy_with_logits\ntorch.nn.functional.binary_cross_entropy_with_logits(input, target, weight=None, size_average=None, reduce=None, reduction='mean', pos_weight=None)\nFunction that measures Binary Cross Entropy between target and\n input logits.\nSee \"BCEWithLogitsLoss\" for details.\nParameters:\n * input (Tensor) -- Tensor of arbitrary shape as\n unnormalized scores (often referred to as logits).\n * **target** (*Tensor*) -- Tensor of the same shape as input\n with values between 0 and 1\n\n * **weight** (*Tensor**, **optional*) -- a manual rescaling\n weight if provided it's repeated to match input tensor shape\n\n * **size_average** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). By default, the losses are averaged over each\n loss element in the batch. Note that for some losses, there\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy_with_logits.html", "category": "pytorch docs"} {"text": "multiple elements per sample. If the field \"size_average\" is\n set to \"False\", the losses are instead summed for each\n minibatch. Ignored when reduce is \"False\". Default: \"True\"\n * **reduce** (*bool**, **optional*) -- Deprecated (see\n \"reduction\"). 
By default, the losses are averaged or summed\n over observations for each minibatch depending on\n \"size_average\". When \"reduce\" is \"False\", returns a loss per\n batch element instead and ignores \"size_average\". Default:\n \"True\"\n\n * **reduction** (*str**, **optional*) -- Specifies the reduction\n to apply to the output: \"'none'\" | \"'mean'\" | \"'sum'\".\n \"'none'\": no reduction will be applied, \"'mean'\": the sum of\n the output will be divided by the number of elements in the\n output, \"'sum'\": the output will be summed. Note:\n \"size_average\" and \"reduce\" are in the process of being\n deprecated, and in the meantime, specifying either of those\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy_with_logits.html", "category": "pytorch docs"} {"text": "two args will override \"reduction\". Default: \"'mean'\"\n * **pos_weight** (*Tensor**, **optional*) -- a weight of\n positive examples. Must be a vector with length equal to the\n number of classes.\n\nReturn type:\n Tensor\nExamples:\n >>> input = torch.randn(3, requires_grad=True)\n >>> target = torch.empty(3).random_(2)\n >>> loss = F.binary_cross_entropy_with_logits(input, target)\n >>> loss.backward()\n", "source": "https://pytorch.org/docs/stable/generated/torch.nn.functional.binary_cross_entropy_with_logits.html", "category": "pytorch docs"} {"text": "CyclicLR\nclass torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr, max_lr, step_size_up=2000, step_size_down=None, mode='triangular', gamma=1.0, scale_fn=None, scale_mode='cycle', cycle_momentum=True, base_momentum=0.8, max_momentum=0.9, last_epoch=- 1, verbose=False)\nSets the learning rate of each parameter group according to\n cyclical learning rate policy (CLR). The policy cycles the learning\n rate between two boundaries with a constant frequency, as detailed\n in the paper Cyclical Learning Rates for Training Neural Networks.\n The distance between the two boundaries can be scaled on a per-\n iteration or per-cycle basis.\nCyclical learning rate policy changes the learning rate after every\n batch. step should be called after a batch has been used for\n training.\nThis class has three built-in policies, as put forth in the paper:\n\n\n\"triangular\": A basic triangular cycle without amplitude scaling.\n\n\n\"triangular2\": A basic triangular cycle that scales initial\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html", "category": "pytorch docs"} {"text": "amplitude by half each cycle.\n\n\"exp_range\": A cycle that scales initial amplitude by\n \\text{gamma}^{\\text{cycle iterations}} at each cycle iteration.\n\nThis implementation was adapted from the github repo:\n bckenstler/CLR\nParameters:\n * optimizer (Optimizer) -- Wrapped optimizer.\n * **base_lr** (*float** or **list*) -- Initial learning rate\n which is the lower boundary in the cycle for each parameter\n group.\n\n * **max_lr** (*float** or **list*) -- Upper learning rate\n boundaries in the cycle for each parameter group.\n Functionally, it defines the cycle amplitude (max_lr -\n base_lr). The lr at any cycle is the sum of base_lr and some\n scaling of the amplitude; therefore max_lr may not actually be\n reached depending on scaling function.\n\n * **step_size_up** (*int*) -- Number of training iterations in\n the increasing half of a cycle. 
Default: 2000\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html", "category": "pytorch docs"} {"text": "\n\nstep_size_down (int) -- Number of training iterations in\n the decreasing half of a cycle. If step_size_down is None, it\n is set to step_size_up. Default: None\n\n\nmode (str) -- One of {triangular, triangular2,\n exp_range}. Values correspond to policies detailed above. If\n scale_fn is not None, this argument is ignored. Default:\n 'triangular'\n\n\ngamma (float) -- Constant in 'exp_range' scaling\n function: gamma**(cycle iterations) Default: 1.0\n\n\nscale_fn (function) -- Custom scaling policy defined by\n a single argument lambda function, where 0 <= scale_fn(x) <= 1\n for all x >= 0. If specified, then 'mode' is ignored. Default:\n None\n\n\nscale_mode (str) -- {'cycle', 'iterations'}. Defines\n whether scale_fn is evaluated on cycle number or cycle\n iterations (training iterations since start of cycle).\n Default: 'cycle'\n\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html", "category": "pytorch docs"} {"text": "Default: 'cycle'\n * **cycle_momentum** (*bool*) -- If \"True\", momentum is cycled\n inversely to learning rate between 'base_momentum' and\n 'max_momentum'. Default: True\n\n * **base_momentum** (*float** or **list*) -- Lower momentum\n boundaries in the cycle for each parameter group. Note that\n momentum is cycled inversely to learning rate; at the peak of\n a cycle, momentum is 'base_momentum' and learning rate is\n 'max_lr'. Default: 0.8\n\n * **max_momentum** (*float** or **list*) -- Upper momentum\n boundaries in the cycle for each parameter group.\n Functionally, it defines the cycle amplitude (max_momentum -\n base_momentum). The momentum at any cycle is the difference of\n max_momentum and some scaling of the amplitude; therefore\n base_momentum may not actually be reached depending on scaling\n function. Note that momentum is cycled inversely to learning\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html", "category": "pytorch docs"} {"text": "rate; at the start of a cycle, momentum is 'max_momentum' and\n learning rate is 'base_lr' Default: 0.9\n * **last_epoch** (*int*) -- The index of the last batch. This\n parameter is used when resuming a training job. Since *step()*\n should be invoked after each batch instead of after each\n epoch, this number represents the total number of *batches*\n computed, not the total number of epochs computed. When\n last_epoch=-1, the schedule is started from the beginning.\n Default: -1\n\n * **verbose** (*bool*) -- If \"True\", prints a message to stdout\n for each update. Default: \"False\".\n\n-[ Example ]-\n\n\n\noptimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)\nscheduler = torch.optim.lr_scheduler.CyclicLR(optimizer, base_lr=0.01, max_lr=0.1)\ndata_loader = torch.utils.data.DataLoader(...)\nfor epoch in range(10):\n for batch in data_loader:\n train_batch(...)\n\n\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html", "category": "pytorch docs"} {"text": "\n\n\n train_batch(...)\n scheduler.step()\n\n\n\n\nget_last_lr()\n Return last computed learning rate by current scheduler.\n\nget_lr()\n Calculates the learning rate at batch index. 
This function\n treats *self.last_epoch* as the last batch index.\n\n If *self.cycle_momentum* is \"True\", this function has a side\n effect of updating the optimizer's momentum.\n\nprint_lr(is_verbose, group, lr, epoch=None)\n Display the current learning rate.\n", "source": "https://pytorch.org/docs/stable/generated/torch.optim.lr_scheduler.CyclicLR.html", "category": "pytorch docs"} {"text": "torch.onnx diagnostics\n* Overview\n\n\nDiagnostic Rules\n\n\nAPI Reference\n\n\nOverview\nNOTE: This feature is underdevelopment and is subject to change.\nThe goal is to improve the diagnostics to help users debug and improve\ntheir model export to ONNX.\n\n\nThe diagnostics are emitted in machine parsable Static Analysis\n Results Interchange Format (SARIF).\n\n\nA new clearer, structured way to add new and keep track of\n diagnostic rules.\n\n\nServe as foundation for more future improvements consuming the\n diagnostics.\n\n\nDiagnostic Rules\n\n\nPOE0001:node-missing-onnx-shape-inference\n\n\nPOE0002:missing-custom-symbolic-function\n\n\nPOE0003:missing-standard-symbolic-function\n\n\nPOE0004:operator-supported-in-newer-opset-version\n\n\nAPI Reference\nclass torch.onnx._internal.diagnostics.ExportDiagnostic(args, *kwargs)\nBase class for all export diagnostics.\nThis class is used to represent all export diagnostics. It is a", "source": "https://pytorch.org/docs/stable/onnx_diagnostics.html", "category": "pytorch docs"} {"text": "subclass of infra.Diagnostic, and adds additional methods to add\n more information to the diagnostic.\nrecord_cpp_call_stack(frames_to_skip)\n Records the current C++ call stack in the diagnostic.\n\nrecord_python_call_stack(frames_to_skip)\n Records the current Python call stack in the diagnostic.\n\nclass torch.onnx._internal.diagnostics.infra.DiagnosticEngine\nA generic diagnostic engine based on SARIF.\nThis class is the main interface for diagnostics. It manages the\n creation of diagnostic contexts. A DiagnosticContext provides the\n entry point for recording Diagnostics. See infra.DiagnosticContext\n for more details.\n-[ Examples ]-\nStep 1: Create a set of rules. >>> rules =\n infra.RuleCollection.custom_collection_from_list( ...\n \"CustomRuleCollection\", ... [ ... infra.Rule( ...\n id=\"r1\", ... name=\"rule-1\", ...\n message_default_template=\"Mising xxx\", ... ), ... ],\n ... )", "source": "https://pytorch.org/docs/stable/onnx_diagnostics.html", "category": "pytorch docs"} {"text": "... )\nStep 2: Create a diagnostic engine. >>> engine = DiagnosticEngine()\nStep 3: Start a new diagnostic context. >>> with\n engine.create_diagnostic_context(\"torch.onnx.export\",\n version=\"1.0\") as context: ... ...\nStep 4: Add diagnostics in your code. ...\n context.diagnose(rules.rule1, infra.Level.ERROR)\nStep 5: Afterwards, get the SARIF log. 
>>> sarif_log =\n engine.sarif_log()\nclear()\n Clears all diagnostic contexts.\n\ncreate_diagnostic_context(name, version, options=None, diagnostic_type=)\n Creates a new diagnostic context.\n\n Parameters:\n * **name** (*str*) -- The subject name for the diagnostic\n context.\n\n * **version** (*str*) -- The subject version for the\n diagnostic context.\n\n * **options** (*Optional**[**DiagnosticOptions**]*) -- The\n options for the diagnostic context.\n\n Returns:\n A new diagnostic context.\n", "source": "https://pytorch.org/docs/stable/onnx_diagnostics.html", "category": "pytorch docs"} {"text": "Returns:\n A new diagnostic context.\n Return type:\n *DiagnosticContext*\n\npretty_print(verbose=False, level=Level.ERROR)\n Pretty prints all diagnostics in the diagnostic contexts.\n\n Parameters:\n * **verbose** (*bool*) -- Whether to print the diagnostics in\n verbose mode. See Diagnostic.pretty_print.\n\n * **level** (*Level*) -- The minimum level of diagnostics to\n print.\n", "source": "https://pytorch.org/docs/stable/onnx_diagnostics.html", "category": "pytorch docs"} {"text": "Benchmark Utils - torch.utils.benchmark\nclass torch.utils.benchmark.Timer(stmt='pass', setup='pass', global_setup='', timer=, globals=None, label=None, sub_label=None, description=None, env=None, num_threads=1, language=Language.PYTHON)\nHelper class for measuring execution time of PyTorch statements.\nFor a full tutorial on how to use this class, see:\n https://pytorch.org/tutorials/recipes/recipes/benchmark.html\nThe PyTorch Timer is based on timeit.Timer (and in fact uses\n timeit.Timer internally), but with several key differences:\n\n\nRuntime aware:\n Timer will perform warmups (important as some elements of\n PyTorch are lazily initialized), set threadpool size so that\n comparisons are apples-to-apples, and synchronize\n asynchronous CUDA functions when necessary.\n\n\nFocus on replicates:\n When measuring code, and particularly complex kernels /\n\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "models, run-to-run variation is a significant confounding\n factor. It is expected that all measurements should include\n replicates to quantify noise and allow median computation,\n which is more robust than mean. To that effect, this class\n deviates from the timeit API by conceptually merging\n timeit.Timer.repeat and timeit.Timer.autorange. (Exact\n algorithms are discussed in method docstrings.) The timeit\n method is replicated for cases where an adaptive strategy is\n not desired.\n\n\nOptional metadata:\n When defining a Timer, one can optionally specify label,\n sub_label, description, and env. (Defined later) These\n fields are included in the representation of result object\n and by the Compare class to group and display results for\n comparison.\n\n\nInstruction counts\n In addition to wall times, Timer can run a statement under\n\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "Callgrind and report instructions executed.\nDirectly analogous to timeit.Timer constructor arguments:\n *stmt*, *setup*, *timer*, *globals*\n\nPyTorch Timer specific constructor arguments:\n *label*, *sub_label*, *description*, *env*, *num_threads*\n\nParameters:\n * stmt (str) -- Code snippet to be run in a loop and\n timed.\n * **setup** (*str*) -- Optional setup code. 
Used to define\n variables used in *stmt*\n\n * **global_setup** (*str*) -- (C++ only) Code which is placed at\n the top level of the file for things like *#include*\n statements.\n\n * **timer** (*Callable**[**[**]**, **float**]*) -- Callable\n which returns the current time. If PyTorch was built without\n CUDA or there is no GPU present, this defaults to\n *timeit.default_timer*; otherwise it will synchronize CUDA\n before measuring the time.\n\n * **globals** (*Optional**[**Dict**[**str**, **Any**]**]*) -- A\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "dict which defines the global variables when stmt is being\n executed. This is the other method for providing variables\n which stmt needs.\n * **label** (*Optional**[**str**]*) -- String which summarizes\n *stmt*. For instance, if *stmt* is\n \"torch.nn.functional.relu(torch.add(x, 1, out=out))\" one might\n set label to \"ReLU(x + 1)\" to improve readability.\n\n * **sub_label** (*Optional**[**str**]*) --\n\n Provide supplemental information to disambiguate measurements\n with identical stmt or label. For instance, in our example\n above sub_label might be \"float\" or \"int\", so that it is easy\n to differentiate: \"ReLU(x + 1): (float)\"\n\n \"ReLU(x + 1): (int)\" when printing Measurements or summarizing\n using *Compare*.\n\n * **description** (*Optional**[**str**]*) --\n\n String to distinguish measurements with identical label and\n sub_label. The principal use of *description* is to signal to\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "Compare the columns of data. For instance one might set it\n based on the input size to create a table of the form:\n | n=1 | n=4 | ...\n ------------- ...\n ReLU(x + 1): (float) | ... | ... | ...\n ReLU(x + 1): (int) | ... | ... | ...\n\n using *Compare*. It is also included when printing a\n Measurement.\n\n * **env** (*Optional**[**str**]*) -- This tag indicates that\n otherwise identical tasks were run in different environments,\n and are therefore not equivalent, for instance when A/B\n testing a change to a kernel. *Compare* will treat\n Measurements with different *env* specification as distinct\n when merging replicate runs.\n\n * **num_threads** (*int*) -- The size of the PyTorch threadpool\n when executing *stmt*. Single threaded performance is\n important as both a key inference workload and a good\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "indicator of intrinsic algorithmic efficiency, so the default\n is set to one. This is in contrast to the default PyTorch\n threadpool size which tries to utilize all cores.\nblocked_autorange(callback=None, min_run_time=0.2)\n Measure many replicates while keeping timer overhead to a\n minimum.\n\n At a high level, blocked_autorange executes the following\n pseudo-code:\n\n `setup`\n\n total_time = 0\n while total_time < min_run_time\n start = timer()\n for _ in range(block_size):\n `stmt`\n total_time += (timer() - start)\n\n Note the variable *block_size* in the inner loop. The choice of\n block size is important to measurement quality, and must balance\n two competing objectives:\n\n 1. A small block size results in more replicates and\n generally better statistics.\n\n 2. 
A large block size better amortizes the cost of *timer*\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "invocation, and results in a less biased measurement. This\n is important because CUDA synchronization time is non-\n trivial (order single to low double digit microseconds)\n and would otherwise bias the measurement.\n blocked_autorange sets block_size by running a warmup period,\n increasing block size until timer overhead is less than 0.1% of\n the overall computation. This value is then used for the main\n measurement loop.\n\n Returns:\n A *Measurement* object that contains measured runtimes and\n repetition counts, and can be used to compute statistics.\n (mean, median, etc.)\n\n Return type:\n *Measurement*\n\ncollect_callgrind(number: int, , repeats: None, collect_baseline: bool, retain_out_file: bool) -> CallgrindStats\n collect_callgrind(number: int, , repeats: int, collect_baseline: bool, retain_out_file: bool) -> Tuple[CallgrindStats, ...]\n Collect instruction counts using Callgrind.\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "Collect instruction counts using Callgrind.\n Unlike wall times, instruction counts are deterministic (modulo\n non-determinism in the program itself and small amounts of\n jitter from the Python interpreter.) This makes them ideal for\n detailed performance analysis. This method runs *stmt* in a\n separate process so that Valgrind can instrument the program.\n Performance is severely degraded due to the instrumentation,\n however this is ameliorated by the fact that a small number of\n iterations is generally sufficient to obtain good measurements.\n\n In order to to use this method *valgrind*, *callgrind_control*,\n and *callgrind_annotate* must be installed.\n\n Because there is a process boundary between the caller (this\n process) and the *stmt* execution, *globals* cannot contain\n arbitrary in-memory data structures. (Unlike timing methods)\n Instead, globals are restricted to builtins, *nn.Modules*'s, and\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "TorchScripted functions/modules to reduce the surprise factor\n from serialization and subsequent deserialization. The\n GlobalsBridge class provides more detail on this subject. Take\n particular care with nn.Modules: they rely on pickle and you may\n need to add an import to setup for them to transfer properly.\n By default, a profile for an empty statement will be collected\n and cached to indicate how many instructions are from the Python\n loop which drives *stmt*.\n\n Returns:\n A *CallgrindStats* object which provides instruction counts\n and some basic facilities for analyzing and manipulating\n results.\n\ntimeit(number=1000000)\n Mirrors the semantics of timeit.Timer.timeit().\n\n Execute the main statement (*stmt*) *number* times. https://doc\n s.python.org/3/library/timeit.html#timeit.Timer.timeit\n\n Return type:\n *Measurement*\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "Return type:\n Measurement\nclass torch.utils.benchmark.Measurement(number_per_run, raw_times, task_spec, metadata=None)\nThe result of a Timer measurement.\nThis class stores one or more measurements of a given statement. 
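As a rough sketch of how such an object is usually produced (the statement, tensor size, and labels below are arbitrary placeholders, not part of the API):

import torch
from torch.utils.benchmark import Timer

timer = Timer(
    stmt="torch.matmul(x, x)",        # code snippet to benchmark
    setup="x = torch.randn(64, 64)",  # defines the x used by stmt
    label="matmul",
    sub_label="64x64 float32",
    num_threads=1,
)
m = timer.blocked_autorange(min_run_time=0.2)  # m is a Measurement
print(m.median, m.number_per_run)

The m returned above is an instance of this class.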
It\n is serializable and provides several convenience methods (including\n a detailed repr) for downstream consumers.\nstatic merge(measurements)\n Convenience method for merging replicates.\n\n Merge will extrapolate times to *number_per_run=1* and will not\n transfer any metadata. (Since it might differ between\n replicates)\n\n Return type:\n *List*[*Measurement*]\n\nproperty significant_figures: int\n Approximate significant figure estimate.\n\n This property is intended to give a convenient way to estimate\n the precision of a measurement. It only uses the interquartile\n region to estimate statistics to try to mitigate skew from the\n tails, and uses a static z value of 1.645 since it is not\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "expected to be used for small values of n, so z can\n approximate t.\n The significant figure estimation used in conjunction with the\n *trim_sigfig* method to provide a more human interpretable data\n summary. __repr__ does not use this method; it simply displays\n raw values. Significant figure estimation is intended for\n *Compare*.\n\nclass torch.utils.benchmark.CallgrindStats(task_spec, number_per_run, built_with_debug_symbols, baseline_inclusive_stats, baseline_exclusive_stats, stmt_inclusive_stats, stmt_exclusive_stats, stmt_callgrind_out)\nTop level container for Callgrind results collected by Timer.\nManipulation is generally done using the FunctionCounts class,\n which is obtained by calling CallgrindStats.stats(...). Several\n convenience methods are provided as well; the most significant is\n CallgrindStats.as_standardized().\nas_standardized()\n Strip library names and some prefixes from function strings.\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "When comparing two different sets of instruction counts, on\n stumbling block can be path prefixes. Callgrind includes the\n full filepath when reporting a function (as it should). However,\n this can cause issues when diffing profiles. If a key component\n such as Python or PyTorch was built in separate locations in the\n two profiles, which can result in something resembling:\n 23234231 /tmp/first_build_dir/thing.c:foo(...)\n 9823794 /tmp/first_build_dir/thing.c:bar(...)\n ...\n 53453 .../aten/src/Aten/...:function_that_actually_changed(...)\n ...\n -9823794 /tmp/second_build_dir/thing.c:bar(...)\n -23234231 /tmp/second_build_dir/thing.c:foo(...)\n\n Stripping prefixes can ameliorate this issue by regularizing the\n strings and causing better cancellation of equivalent call sites\n when diffing.\n\n Return type:\n *CallgrindStats*\n\ncounts(*, denoise=False)", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "counts(*, denoise=False)\n Returns the total number of instructions executed.\n\n See *FunctionCounts.denoise()* for an explanation of the\n *denoise* arg.\n\n Return type:\n int\n\ndelta(other, inclusive=False)\n Diff two sets of counts.\n\n One common reason to collect instruction counts is to determine\n the the effect that a particular change will have on the number\n of instructions needed to perform some unit of work. If a change\n increases that number, the next logical question is \"why\". This\n generally involves looking at what part if the code increased in\n instruction count. 
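An illustrative sketch only (valgrind, callgrind_control, and callgrind_annotate must be installed; timer_old and timer_new are assumed to be Timer objects measuring the code before and after a change):

stats_old = timer_old.collect_callgrind(number=100).as_standardized()
stats_new = timer_new.collect_callgrind(number=100).as_standardized()
diff = stats_new.delta(stats_old)  # FunctionCounts of per-function differences
print(diff.denoise())              # drop known-noisy CPython lookups before inspecting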
This function automates that process so that\n one can easily diff counts on both an inclusive and exclusive\n basis.\n\n Return type:\n *FunctionCounts*\n\nstats(inclusive=False)\n Returns detailed function counts.\n\n Conceptually, the FunctionCounts returned can be thought of as a\n tuple of (count, path_and_function_name) tuples.\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "inclusive matches the semantics of callgrind. If True, the\n counts include instructions executed by children.\n inclusive=True is useful for identifying hot spots in code;\n inclusive=False is useful for reducing noise when diffing\n counts from two different runs. (See CallgrindStats.delta(...)\n for more details)\n Return type:\n *FunctionCounts*\n\nclass torch.utils.benchmark.FunctionCounts(_data, inclusive, truncate_rows=True, _linewidth=None)\nContainer for manipulating Callgrind results.\nIt supports:\n 1. Addition and subtraction to combine or diff results.\n 2. Tuple-like indexing.\n\n 3. A *denoise* function which strips CPython calls which are\n known to be non-deterministic and quite noisy.\n\n 4. Two higher order methods (*filter* and *transform*) for\n custom manipulation.\n\ndenoise()\n Remove known noisy instructions.\n\n Several instructions in the CPython interpreter are rather\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "noisy. These instructions involve unicode to dictionary lookups\n which Python uses to map variable names. FunctionCounts is\n generally a content agnostic container, however this is\n sufficiently important for obtaining reliable results to warrant\n an exception.\n Return type:\n *FunctionCounts*\n\nfilter(filter_fn)\n Keep only the elements where *filter_fn* applied to function\n name returns True.\n\n Return type:\n *FunctionCounts*\n\ntransform(map_fn)\n Apply *map_fn* to all of the function names.\n\n This can be used to regularize function names (e.g. stripping\n irrelevant parts of the file path), coalesce entries by mapping\n multiple functions to the same name (in which case the counts\n are added together), etc.\n\n Return type:\n *FunctionCounts*\n", "source": "https://pytorch.org/docs/stable/benchmark_utils.html", "category": "pytorch docs"} {"text": "CUDA Stream Sanitizer\nNote:\nThis is a prototype feature, which means it is at an early stage for\n feedback and testing, and its components are subject to change.\nOverview\nThis module introduces CUDA Sanitizer, a tool for detecting\nsynchronization errors between kernels ran on different streams. It\nstores information on accesses to tensors to determine if they are\nsynchronized or not. When enabled in a python program and a possible\ndata race is detected, a detailed warning will be printed and the\nprogram will exit.\nIt can be enabled either by importing this module and calling\n\"enable_cuda_sanitizer()\" or by exporting the \"TORCH_CUDA_SANITIZER\"\nenvironment variable.\nUsage\nHere is an example of a simple synchronization error in PyTorch:\nimport torch\na = torch.rand(4, 2, device=\"cuda\")\nwith torch.cuda.stream(torch.cuda.Stream()):\n torch.mul(a, 5, out=a)\nThe \"a\" tensor is initialized on the default stream and, without any", "source": "https://pytorch.org/docs/stable/cuda._sanitizer.html", "category": "pytorch docs"} {"text": "synchronization methods, modified on a new stream. 
The two kernels\nwill run concurrently on the same tensor, which might cause the second\nkernel to read uninitialized data before the first one was able to\nwrite it, or the first kernel might overwrite part of the result of\nthe second. When this script is run on the commandline with:\nTORCH_CUDA_SANITIZER=1 python example_error.py\nthe following output is printed by CSAN:\n============================\n CSAN detected a possible data race on tensor with data pointer 139719969079296\n Access by stream 94646435460352 during kernel:\n aten::mul.out(Tensor self, Tensor other, *, Tensor(a!) out) -> Tensor(a!)\n writing to argument(s) self, out, and to the output\n With stack trace:\n File \"example_error.py\", line 6, in \n torch.mul(a, 5, out=a)\n ...\n File \"pytorch/torch/cuda/_sanitizer.py\", line 364, in _handle_kernel_launch\n stack_trace = traceback.StackSummary.extract(\nPrevious access by stream 0 during kernel:", "source": "https://pytorch.org/docs/stable/cuda._sanitizer.html", "category": "pytorch docs"} {"text": "Previous access by stream 0 during kernel:\n aten::rand(int[] size, *, int? dtype=None, Device? device=None) -> Tensor\n writing to the output\n With stack trace:\n File \"example_error.py\", line 3, in \n a = torch.rand(10000, device=\"cuda\")\n ...\n File \"pytorch/torch/cuda/_sanitizer.py\", line 364, in _handle_kernel_launch\n stack_trace = traceback.StackSummary.extract(\nTensor was allocated with stack trace:\n File \"example_error.py\", line 3, in \n a = torch.rand(10000, device=\"cuda\")\n ...\n File \"pytorch/torch/cuda/_sanitizer.py\", line 420, in _handle_memory_allocation\n traceback.StackSummary.extract(\nThis gives extensive insight into the origin of the error:\n\n\nA tensor was incorrectly accessed from streams with ids: 0 (default\n stream) and 94646435460352 (new stream)\n\n\nThe tensor was allocated by invoking \"a = torch.rand(10000,\n device=\"cuda\")\"\n\n\nThe faulty accesses were caused by operators\n\n", "source": "https://pytorch.org/docs/stable/cuda._sanitizer.html", "category": "pytorch docs"} {"text": "\n\nThe faulty accesses were caused by operators\n\n\n\"a = torch.rand(10000, device=\"cuda\")\" on stream 0\n\n\n\"torch.mul(a, 5, out=a)\" on stream 94646435460352\n\n\n\n\nThe error message also displays the schemas of the invoked\n operators, along with a note showing which arguments of the\n operators correspond to the affected tensor.\n\n\nIn the example, it can be seen that tensor \"a\" corresponds to\n arguments \"self\", \"out\" and the \"output\" value of the invoked\n operator \"torch.mul\".\n\n\nSee also:\nThe list of supported torch operators and their schemas can be\n viewed here.\nThe bug can be fixed by forcing the new stream to wait for the default\nstream:\nwith torch.cuda.stream(torch.cuda.Stream()):\n torch.cuda.current_stream().wait_stream(torch.cuda.default_stream())\n torch.mul(a, 5, out=a)\nWhen the script is run again, there are no errors reported.\nAPI Reference\ntorch.cuda._sanitizer.enable_cuda_sanitizer()\nEnables CUDA Sanitizer.", "source": "https://pytorch.org/docs/stable/cuda._sanitizer.html", "category": "pytorch docs"} {"text": "Enables CUDA Sanitizer.\nThe sanitizer will begin to analyze low-level CUDA calls invoked by\n torch functions for synchronization errors. All data races found\n will be printed to the standard error output along with stack\n traces of suspected causes. 
For best results, the sanitizer should\n be enabled at the very beginning of the program.", "source": "https://pytorch.org/docs/stable/cuda._sanitizer.html", "category": "pytorch docs"} {"text": "torch::deploy has been moved to pytorch/multipy\n\"torch::deploy\" has been moved to its new home at\nhttps://github.com/pytorch/multipy.", "source": "https://pytorch.org/docs/stable/deploy.html", "category": "pytorch docs"} {"text": "Complex Numbers\nNote:\nWhen using complex numbers, use Pytorch with CUDA 11.6 downloaded\n via pip wheel as described in Get Started and select the CUDA 11.6\n pip package.\nComplex numbers are numbers that can be expressed in the form a + bj,\nwhere a and b are real numbers, and j is called the imaginary unit,\nwhich satisfies the equation j^2 = -1. Complex numbers frequently\noccur in mathematics and engineering, especially in topics like signal\nprocessing. Traditionally many users and libraries (e.g., TorchAudio)\nhave handled complex numbers by representing the data in float tensors\nwith shape (..., 2) where the last dimension contains the real and\nimaginary values.\nTensors of complex dtypes provide a more natural user experience while\nworking with complex numbers. Operations on complex tensors (e.g.,\n\"torch.mv()\", \"torch.matmul()\") are likely to be faster and more\nmemory efficient than operations on float tensors mimicking them.", "source": "https://pytorch.org/docs/stable/complex_numbers.html", "category": "pytorch docs"} {"text": "Operations involving complex numbers in PyTorch are optimized to use\nvectorized assembly instructions and specialized kernels (e.g. LAPACK,\ncuBlas).\nNote:\nSpectral operations in the torch.fft module support native complex\n tensors.\nWarning:\nComplex tensors is a beta feature and subject to change.\nCreating Complex Tensors\nWe support two complex dtypes: torch.cfloat and torch.cdouble\n\n\n\nx = torch.randn(2,2, dtype=torch.cfloat)\nx\n tensor([[-0.4621-0.0303j, -0.2438-0.5874j],\n [ 0.7706+0.1421j, 1.2110+0.1918j]])\n\n\n\nNote:\nThe default dtype for complex tensors is determined by the default\n floating point dtype. If the default floating point dtype is\n torch.float64 then complex numbers are inferred to have a dtype of\n torch.complex128, otherwise they are assumed to have a dtype of\n torch.complex64.\nAll factory functions apart from \"torch.linspace()\",\n\"torch.logspace()\", and \"torch.arange()\" are supported for complex\ntensors.", "source": "https://pytorch.org/docs/stable/complex_numbers.html", "category": "pytorch docs"} {"text": "tensors.\nTransition from the old representation\nUsers who currently worked around the lack of complex tensors with\nreal tensors of shape (..., 2) can easily to switch using the complex\ntensors in their code using \"torch.view_as_complex()\" and\n\"torch.view_as_real()\". 
Note that these functions don\u00e2\u0080\u0099t perform any\ncopy and return a view of the input tensor.\n\n\n\nx = torch.randn(3, 2)\nx\n tensor([[ 0.6125, -0.1681],\n [-0.3773, 1.3487],\n [-0.0861, -0.7981]])\ny = torch.view_as_complex(x)\ny\n tensor([ 0.6125-0.1681j, -0.3773+1.3487j, -0.0861-0.7981j])\ntorch.view_as_real(y)\n tensor([[ 0.6125, -0.1681],\n [-0.3773, 1.3487],\n [-0.0861, -0.7981]])\n\n\n\nAccessing real and imag\nThe real and imaginary values of a complex tensor can be accessed\nusing the \"real\" and \"imag\".\nNote:\nAccessing real and imag attributes doesn't allocate any memory,", "source": "https://pytorch.org/docs/stable/complex_numbers.html", "category": "pytorch docs"} {"text": "and in-place updates on the real and imag tensors will update\n the original complex tensor. Also, the returned real and imag\n tensors are not contiguous.\n\n\n\ny.real\n tensor([ 0.6125, -0.3773, -0.0861])\ny.imag\n tensor([-0.1681, 1.3487, -0.7981])\ny.real.mul_(2)\n tensor([ 1.2250, -0.7546, -0.1722])\ny\n tensor([ 1.2250-0.1681j, -0.7546+1.3487j, -0.1722-0.7981j])\ny.real.stride()\n (2,)\n\n\n\nAngle and abs\nThe angle and absolute values of a complex tensor can be computed\nusing \"torch.angle()\" and \"torch.abs()\".\n\n\n\nx1=torch.tensor([3j, 4+4j])\nx1.abs()\n tensor([3.0000, 5.6569])\nx1.angle()\n tensor([1.5708, 0.7854])\n\n\n\nLinear Algebra\nMany linear algebra operations, like \"torch.matmul()\", \"torch.svd()\",\n\"torch.solve()\" etc., support complex numbers. If you'd like to\nrequest an operation we don't currently support, please search if an\nissue has already been filed and if not, file one.\nSerialization", "source": "https://pytorch.org/docs/stable/complex_numbers.html", "category": "pytorch docs"} {"text": "Serialization\nComplex tensors can be serialized, allowing data to be saved as\ncomplex values.\n\n\n\ntorch.save(y, 'complex_tensor.pt')\ntorch.load('complex_tensor.pt')\n tensor([ 0.6125-0.1681j, -0.3773+1.3487j, -0.0861-0.7981j])\n\n\n\nAutograd\nPyTorch supports autograd for complex tensors. The gradient computed\nis the Conjugate Wirtinger derivative, the negative of which is\nprecisely the direction of steepest descent used in Gradient Descent\nalgorithm. Thus, all the existing optimizers work out of the box with\ncomplex parameters. For more details, check out the note Autograd for\nComplex Numbers.\nWe do not fully support the following subsystems:\n\n\nQuantization\n\n\nJIT\n\n\nSparse Tensors\n\n\nDistributed\n\n\nIf any of these would help your use case, please search if an issue\nhas already been filed and if not, file one.", "source": "https://pytorch.org/docs/stable/complex_numbers.html", "category": "pytorch docs"} {"text": "FullyShardedDataParallel\nclass torch.distributed.fsdp.FullyShardedDataParallel(module, process_group=None, sharding_strategy=None, cpu_offload=None, auto_wrap_policy=None, backward_prefetch=BackwardPrefetch.BACKWARD_PRE, mixed_precision=None, ignored_modules=None, param_init_fn=None, device_id=None, sync_module_states=False, forward_prefetch=False, limit_all_gathers=False, use_orig_params=False, ignored_parameters=None)\nA wrapper for sharding Module parameters across data parallel\n workers. This is inspired by Xu et al. as well as the ZeRO Stage 3\n from DeepSpeed. 
FullyShardedDataParallel is commonly shortened to\n FSDP.\nExample:\n >>> import torch\n >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP\n >>> torch.cuda.set_device(device_id)\n >>> sharded_module = FSDP(my_module)\n >>> optim = torch.optim.Adam(sharded_module.parameters(), lr=0.0001)\n >>> x = sharded_module(x, y=3, z=torch.Tensor([1]))\n >>> loss = x.sum()\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "\n\n\nloss = x.sum()\n >>> loss.backward()\n >>> optim.step()\n\n\n\nWarning:\n The optimizer must be initialized *after* the module has been\n wrapped, since FSDP will shard parameters in-place and this will\n break any previously initialized optimizers.\n\nWarning:\n If the destination CUDA device has ID \"dev_id\", either (1)\n \"module\" should already be placed on that device, (2) the device\n should be set using \"torch.cuda.set_device(dev_id)\", or (3)\n \"dev_id\" should be passed into the \"device_id\" constructor\n argument. This FSDP instance's compute device will be that\n destination device. For (1) and (3), the FSDP initialization\n always occurs on GPU. For (2), the FSDP initialization happens on\n \"module\" 's current device, which may be CPU.\n\nWarning:\n FSDP currently does not support gradient accumulation outside\n \"no_sync()\" when using CPU offloading. Trying to do so yields\n incorrect results since FSDP will use the newly-reduced gradient\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "instead of accumulating with any existing gradient.\nWarning:\n Changing the original parameter variable names after construction\n will lead to undefined behavior.\n\nWarning:\n Passing in *sync_module_states=True* flag requires module to be\n put on GPU, or to use \"device_id\" argument to specify a CUDA\n device that FSDP will move module to. This is because\n \"sync_module_states=True\" requires GPU communication.\n\nWarning:\n As of PyTorch 1.12, FSDP only offers limited support for shared\n parameters (for example, setting one \"Linear\" layer's weight to\n another's). In particular, modules that share parameters must be\n wrapped as part of the same FSDP unit. If enhanced shared\n parameter support is needed for your use case, please ping\n https://github.com/pytorch/pytorch/issues/77724\n\nNote:\n Inputs into FSDP \"forward\" function will be moved to compute\n device (same device FSDP module is on) before running \"forward\",\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "so user does not have to manually move inputs from CPU -> GPU.\nParameters:\n * module (nn.Module) -- This is the module to be wrapped\n with FSDP.\n * **process_group** (*Optional**[**Union**[**ProcessGroup**,\n **Tuple**[**ProcessGroup**, **ProcessGroup**]**]**]*) --\n Optional[Union[ProcessGroup, Tuple[ProcessGroup,\n ProcessGroup]]] This is the process group used for collective\n communications and the one over which the model is sharded.\n For hybrid sharding strategies such as\n \"ShardingStrategy.HYBRID_SHARD\" users can pass in a tuple of\n process groups representing the groups to shard and replicate\n across, respectively.\n\n * **sharding_strategy** (*Optional**[**ShardingStrategy**]*) --\n This configures the sharding strategy used by FSDP, which may\n trade off memory saving and communication overhead. See\n \"ShardingStrategy\" for details. 
(Default: \"FULL_SHARD\")\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "\n\ncpu_offload (Optional[CPUOffload]) -- This\n configures CPU offloading. If this is set to \"None\", then no\n CPU offloading happens. See \"CPUOffload\" for details.\n (Default: \"None\")\n\n\nauto_wrap_policy\n (Optional[Union[Callable[[nn.Module,\n bool, int], bool], _FSDPPolicy]]) --\nThis is either \"None\", an \"_FSDPPolicy\", or a callable of a\nfixed signature. If it is \"None\", then \"module\" is wrapped\nwith only a top-level FSDP instance without any nested\nwrapping. If it is an \"_FSDPPolicy\", then the wrapping follows\nthe given policy. \"ModuleWrapPolicy\" in\n\"torch.distributed.fsdp.wrap.py\" is an example. If it is a\ncallable, then it should take in three arguments \"module:\nnn.Module\", \"recurse: bool\", and \"nonwrapped_numel: int\" and\nshould return a \"bool\" specifying whether the passed-in\n\n\n\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "\"module\" should be wrapped if \"recurse=False\" or if the\n traversal should continue down the subtree if \"recurse=True\".\n Additional custom arguments may be added to the callable. The\n \"size_based_auto_wrap_policy\" in\n \"torch.distributed.fsdp.wrap.py\" gives an example callable\n that wraps a module if the parameters in its subtree exceed\n 100M numel. A good practice is to print the model after\n wrapping and adjust as needed.\n Example:\n\n >>> def custom_auto_wrap_policy(\n >>> module: nn.Module,\n >>> recurse: bool,\n >>> nonwrapped_numel: int,\n >>> # Additional custom arguments\n >>> min_num_params: int = int(1e8),\n >>> ) -> bool:\n >>> return nonwrapped_numel >= min_num_params\n >>> # Configure a custom `min_num_params`\n >>> my_auto_wrap_policy = functools.partial(custom_auto_wrap_policy, min_num_params=int(1e5))\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "\n\nbackward_prefetch (Optional[BackwardPrefetch]) --\n This configures explicit backward prefetching of all-gathers.\n See \"BackwardPrefetch\" for details. (Default: \"BACKWARD_PRE\")\n\n\nmixed_precision (Optional[MixedPrecision]) -- This\n configures native mixed precision for FSDP. If this is set to\n \"None\", then no mixed precision is used. Otherwise, parameter,\n buffer, and gradient reduction dtypes can be set. See\n \"MixedPrecision\" for details. (Default: \"None\")\n\n\nignored_modules\n (Optional[Iterable[torch.nn.Module]]) -- Modules\n whose own parameters and child modules' parameters and buffers\n are ignored by this instance. None of the modules directly in\n \"ignored_modules\" should be \"FullyShardedDataParallel\"\n instances, and any child modules that are already-constructed\n \"FullyShardedDataParallel\" instances will not be ignored if\n\n\n\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "they are nested under this instance. This argument may be used\n to avoid sharding specific parameters at module granularity\n when using an \"auto_wrap_policy\" or if parameters' sharding is\n not managed by FSDP. (Default: \"None\")\n * **param_init_fn**\n (*Optional**[**Callable**[**[**nn.Module**]**, **None**]**]*)\n --\n\n A \"Callable[torch.nn.Module] -> None\" that specifies how\n modules that are currently on the meta device should be\n initialized onto an actual device. 
Note that as of v1.12, we\n detect modules on the meta device via \"is_meta\" check and\n apply a default initialization that calls \"reset_parameters\"\n method on the passed in \"nn.Module\" if \"param_init_fn\" is not\n specified, otherwise we run \"param_init_fn\" to initialize the\n passed in \"nn.Module\". In particular, this means that if\n \"is_meta=True\" for any module parameters for modules that will\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "be wrapped with FSDP and \"param_init_fn\" is not specified, we\n assume your module properly implements a \"reset_parameters()\"\n and will throw errors if not. Note that additionally, we offer\n support for modules initialized with torchdistX's\n (https://github.com/pytorch/torchdistX) \"deferred_init\" API.\n In this case, deferred modules would be initialized by a\n default initialization function that calls torchdistX's\n \"materialize_module\", or the passed in \"param_init_fn\", if it\n is not \"None\". The same \"Callable\" is applied to initialize\n all meta modules. Note that this initialization function is\n applied before doing any FSDP sharding logic.\n Example:\n\n >>> module = MyModule(device=\"meta\")\n >>> def my_init_fn(module):\n >>> # responsible for initializing a module, such as with reset_parameters\n >>> ...\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "\n\n\n...\n >>> fsdp_model = FSDP(module, param_init_fn=my_init_fn, auto_wrap_policy=size_based_auto_wrap_policy)\n >>> print(next(fsdp_model.parameters()).device) # current CUDA device\n >>> # With torchdistX\n >>> module = deferred_init.deferred_init(MyModule, device=\"cuda\")\n >>> # Will initialize via deferred_init.materialize_module().\n >>> fsdp_model = FSDP(module, auto_wrap_policy=size_based_auto_wrap_policy)\n\n\n\n\n * **device_id** (*Optional**[**Union**[**int**,\n **torch.device**]**]*) -- An \"int\" or \"torch.device\"\n describing the CUDA device the FSDP module should be moved to\n determining where initialization such as sharding takes place.\n If this argument is not specified and \"module\" is on CPU, we\n issue a warning mentioning that this argument can be specified\n for faster initialization. If specified, resulting FSDP\n instances will reside on this device, including moving ignored\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "modules' parameters if needed. Note that if \"device_id\" is\n specified but \"module\" is already on a different CUDA device,\n an error will be thrown. (Default: \"None\")\n * **sync_module_states** (*bool*) -- If \"True\", each\n individually wrapped FSDP unit will broadcast module\n parameters from rank 0 to ensure they are the same across all\n ranks after initialization. This helps ensure model parameters\n are the same across ranks before starting training, but adds\n communication overhead to \"__init__\", as at least one\n broadcast is triggered per individually wrapped FSDP unit.\n This can also help load checkpoints taken by \"state_dict\" and\n to be loaded by \"load_state_dict\" in a memory efficient way.\n See documentation for \"FullStateDictConfig\" for an example of\n this. (Default: \"False\")\n\n * **forward_prefetch** (*bool*) -- If \"True\", then FSDP\n *explicitly* prefetches the next upcoming all-gather while\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "executing in the forward pass. 
This may improve communication\n and computation overlap for CPU bound workloads. This should\n only be used for static graph models since the forward order\n is fixed based on the first iteration's execution. (Default:\n \"False\")\n * **limit_all_gathers** (*bool*) -- If \"False\", then FSDP allows\n the CPU thread to schedule all-gathers without any extra\n synchronization. If \"True\", then FSDP explicitly synchronizes\n the CPU thread to prevent too many in-flight all-gathers. This\n \"bool\" only affects the sharded strategies that schedule all-\n gathers. Enabling this can help lower the number of CUDA\n malloc retries.\n\n * **ignored_parameters**\n (*Optional**[**Iterable**[**torch.nn.Parameter**]**]*) --\n Ignored parameters will not be managed by this FSDP instance,\n that means these parameters will not be flattened and sharded\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "by FSDP, their gradients will not be synchronized as well.\n With this newly added argument, \"ignored_modules\" could be\n deprecated soon. For backward compatibility, both\n \"ignored_parameters\" and \"ignored_modules\" are kept for now,\n but FSDP only allows one of them to be specified as not\n \"None\".\napply(fn)\n Applies \"fn\" recursively to every submodule (as returned by\n \".children()\") as well as self. Typical use includes\n initializing the parameters of a model (see also torch.nn.init).\n\n Compared to \"torch.nn.Module.apply\", this version additionally\n gathers the full parameters before applying \"fn\". It should not\n be called from within another \"summon_full_params\" context.\n\n Parameters:\n **fn** (\"Module\" -> None) -- function to be applied to each\n submodule\n\n Returns:\n self\n\n Return type:\n Module\n\nclip_grad_norm_(max_norm, norm_type=2.0)", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "clip_grad_norm_(max_norm, norm_type=2.0)\n Clips the gradient norm of all parameters. The norm is computed\n over all parameters' gradients as viewed as a single vector, and\n the gradients are modified in-place.\n\n Parameters:\n * **max_norm** (*float** or **int*) -- max norm of the\n gradients\n\n * **norm_type** (*float** or **int*) -- type of the used\n p-norm. Can be \"'inf'\" for infinity norm.\n\n Returns:\n Total norm of the parameters (viewed as a single vector).\n\n Return type:\n *Tensor*\n\n Note:\n\n If every FSDP instance uses \"NO_SHARD\", meaning that no\n gradients are sharded across ranks, then you may directly use\n \"torch.nn.utils.clip_grad_norm_()\".\n\n Note:\n\n If at least some FSDP instance uses a sharded strategy (i.e.\n one other than \"NO_SHARD\"), then you should use this method\n instead of \"torch.nn.utils.clip_grad_norm_()\" since this\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "method handles the fact that gradients are sharded across\n ranks.\n Note:\n\n The total norm returned will have the \"largest\" dtype across\n all parameters/gradients as defined by PyTorch's type\n promotion semantics. 
For example, if *all*\n parameters/gradients use a low precision dtype, then the\n returned norm's dtype will be that low precision dtype, but if\n there exists at least one parameter/ gradient using FP32, then\n the returned norm's dtype will be FP32.\n\n Warning:\n\n This needs to be called on all ranks since it uses collective\n communications.\n\nstatic flatten_sharded_optim_state_dict(sharded_optim_state_dict, model, optim)\n The API is similar to \"shard_full_optim_state_dict()\". The only\n difference is that the input \"sharded_optim_state_dict\" should\n be returned from \"sharded_optim_state_dict()\". Therefore, there\n will be all-gather calls on each rank to gather \"ShardedTensor\"\n s.\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "s.\n Parameters:\n * **sharded_optim_state_dict** (*Dict**[**str**, **Any**]*)\n -- Optimizer state dict corresponding to the unflattened\n parameters and holding the sharded optimizer state.\n\n * **model** (*torch.nn.Module*) -- Refer to\n :meth:\"shard_full_optim_state_dict\".\n\n * **optim** (*torch.optim.Optimizer*) -- Optimizer for\n \"model\" 's\n\n * **parameters.** --\n\n Returns:\n Refer to \"shard_full_optim_state_dict()\".\n\n Return type:\n *Dict*[str, *Any*]\n\nforward(args, *kwargs)\n Runs the forward pass for the wrapped module, inserting FSDP-\n specific pre- and post-forward sharding logic.\n\n Return type:\n *Any*\n\nstatic fsdp_modules(module, root_only=False)\n Returns all nested FSDP instances, possibly including \"module\"\n itself and only including FSDP root modules if \"root_only=True\".\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "Parameters:\n * module (torch.nn.Module) -- Root module, which may or\n may not be an \"FSDP\" module.\n * **root_only** (*bool*) -- Whether to return only FSDP root\n modules. (Default: \"False\")\n\n Returns:\n FSDP modules that are nested in the input \"module\".\n\n Return type:\n List[FullyShardedDataParallel]\n\nstatic full_optim_state_dict(model, optim, optim_input=None, rank0_only=True, group=None)\n Consolidates the full optimizer state on rank 0 and returns it\n as a \"dict\" following the convention of\n \"torch.optim.Optimizer.state_dict()\", i.e. with keys \"\"state\"\"\n and \"\"param_groups\"\". The flattened parameters in \"FSDP\" modules\n contained in \"model\" are mapped back to their unflattened\n parameters.\n\n Warning:\n\n This needs to be called on all ranks since it uses collective\n communications. However, if \"rank0_only=True\", then the state\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "dict is only populated on rank 0, and all other ranks return\n an empty \"dict\".\n Warning:\n\n Unlike \"torch.optim.Optimizer.state_dict()\", this method uses\n full parameter names as keys instead of parameter IDs.\n\n Note:\n\n Like in \"torch.optim.Optimizer.state_dict()\", the tensors\n contained in the optimizer state dict are not cloned, so there\n may be aliasing surprises. For best practices, consider saving\n the returned optimizer state dict immediately, e.g. 
using\n \"torch.save()\".\n\n Parameters:\n * **model** (*torch.nn.Module*) -- Root module (which may or\n may not be a \"FullyShardedDataParallel\" instance) whose\n parameters were passed into the optimizer \"optim\".\n\n * **optim** (*torch.optim.Optimizer*) -- Optimizer for\n \"model\" 's parameters.\n\n * **optim_input**\n (*Optional**[**Union**[**List**[**Dict**[**str**,\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "Any]], Iterable[torch.nn.Parameter]]]*)\n -- Input passed into the optimizer \"optim\" representing\n either a \"list\" of parameter groups or an iterable of\n parameters; if \"None\", then this method assumes the input\n was \"model.parameters()\". This argument is deprecated, and\n there is no need to pass it in anymore. (Default: \"None\")\n * **rank0_only** (*bool*) -- If \"True\", saves the populated\n \"dict\" only on rank 0; if \"False\", saves it on all ranks.\n (Default: \"True\")\n\n * **group** (*dist.ProcessGroup*) -- Model's process group or\n \"None\" if using the default process group. (Default:\n \"None\")\n\n Returns:\n A \"dict\" containing the optimizer state for \"model\" 's\n original unflattened parameters and including keys \"state\"\n and \"param_groups\" following the convention of\n \"torch.optim.Optimizer.state_dict()\". If \"rank0_only=True\",\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "then nonzero ranks return an empty \"dict\".\n Return type:\n Dict[str, Any]\n\nproperty module: Module\n Returns the wrapped module (like \"DistributedDataParallel\").\n\nnamed_buffers(args, *kwargs)\n Overrides \"named_buffers()\" to intercept buffer names and remove\n all occurrences of the FSDP-specific flattened buffer prefix\n when inside the \"summon_full_params()\" context manager.\n\n Return type:\n *Iterator*[*Tuple*[str, *Tensor*]]\n\nnamed_parameters(args, *kwargs)\n Overrides \"named_parameters()\" to intercept parameter names and\n remove all occurrences of the FSDP-specific flattened parameter\n prefix when inside the \"summon_full_params()\" context manager.\n\n Return type:\n *Iterator*[*Tuple*[str, *Parameter*]]\n\nno_sync()\n A context manager to disable gradient synchronizations across\n FSDP instances. Within this context, gradients will be\n accumulated in module variables, which will later be\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "synchronized in the first forward-backward pass after exiting\n the context. This should only be used on the root FSDP instance\n and will recursively apply to all children FSDP instances.\n Note:\n\n This likely results in higher memory usage because FSDP will\n accumulate the full model gradients (instead of gradient\n shards) until the eventual sync.\n\n Note:\n\n When used with CPU offloading, the gradients will not be\n offloaded to CPU when inside the context manager. Instead,\n they will only be offloaded right after the eventual sync.\n\n Return type:\n *Generator*\n\nregister_comm_hook(state, hook)\n Registers a communication hook which is an enhancement that\n provides a flexible hook to users where they can specify how\n FSDP aggregates gradients across multiple workers. 
This hook can\n be used to implement several algorithms like GossipGrad and\n gradient compression which involve different communication\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "strategies for parameter syncs while training with\n \"FullyShardedDataParallel\".\n Warning:\n\n FSDP communication hook should be registered before running an\n initial forward pass and only once.\n\n Parameters:\n * **state** (*object*) --\n\n Passed to the hook to maintain any state information during\n the training process. Examples include error feedback in\n gradient compression, peers to communicate with next in\n GossipGrad, etc. It is locally stored by each worker and\n shared by all the gradient tensors on the worker.\n\n * **hook** (*Callable*) -- Callable, which has one of the\n following signatures: 1) \"hook: Callable[torch.Tensor] ->\n None\": This function takes in a Python tensor, which\n represents the full, flattened, unsharded gradient with\n respect to all variables corresponding to the model this\n FSDP unit is wrapping (that are not wrapped by other FSDP\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "sub-units). It then performs all necessary processing and\n returns \"None\"; 2) \"hook: Callable[torch.Tensor,\n torch.Tensor] -> None\": This function takes in two Python\n tensors, the first one represents the full, flattened,\n unsharded gradient with respect to all variables\n corresponding to the model this FSDP unit is wrapping (that\n are not wrapped by other FSDP sub-units). The latter\n represents a pre-sized tensor to store a chunk of a sharded\n gradient after reduction. In both cases, callable performs\n all necessary processing and returns \"None\". Callables with\n signature 1 are expected to handle gradient communication\n for a NO_SHARD case. Callables with signature 2 are\n expected to handle gradient communication for sharded\n cases.\nstatic rekey_optim_state_dict(optim_state_dict, optim_state_key_type, model, optim_input=None, optim=None)", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "Re-keys the optimizer state dict \"optim_state_dict\" to use the\n key type \"optim_state_key_type\". This can be used to achieve\n compatibility between optimizer state dicts from models with\n FSDP instances and ones without.\n To re-key an FSDP full optimizer state dict (i.e. 
from\n \"full_optim_state_dict()\") to use parameter IDs and be loadable\n to a non-wrapped model:\n\n >>> wrapped_model, wrapped_optim = ...\n >>> full_osd = FSDP.full_optim_state_dict(wrapped_model, wrapped_optim)\n >>> nonwrapped_model, nonwrapped_optim = ...\n >>> rekeyed_osd = FSDP.rekey_optim_state_dict(full_osd, OptimStateKeyType.PARAM_ID, nonwrapped_model)\n >>> nonwrapped_optim.load_state_dict(rekeyed_osd)\n\n To re-key a normal optimizer state dict from a non-wrapped model\n to be loadable to a wrapped model:\n\n >>> nonwrapped_model, nonwrapped_optim = ...\n >>> osd = nonwrapped_optim.state_dict()\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "\n\n\nosd = nonwrapped_optim.state_dict()\n >>> rekeyed_osd = FSDP.rekey_optim_state_dict(osd, OptimStateKeyType.PARAM_NAME, nonwrapped_model)\n >>> wrapped_model, wrapped_optim = ...\n >>> sharded_osd = FSDP.shard_full_optim_state_dict(rekeyed_osd, wrapped_model)\n >>> wrapped_optim.load_state_dict(sharded_osd)\n\n\n\n Returns:\n The optimizer state dict re-keyed using the parameter keys\n specified by \"optim_state_key_type\".\n\n Return type:\n Dict[str, Any]\n\nstatic scatter_full_optim_state_dict(full_optim_state_dict, model, optim_input=None, optim=None, group=None)\n Scatters the full optimizer state dict from rank 0 to all other\n ranks, returning the sharded optimizer state dict on each rank.\n The return value is the same as \"shard_full_optim_state_dict()\",\n and on rank 0, the first argument should be the return value of\n \"full_optim_state_dict()\".\n\n Example:\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "\"full_optim_state_dict()\".\n Example:\n\n >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP\n >>> model, optim = ...\n >>> full_osd = FSDP.full_optim_state_dict(model, optim) # only non-empty on rank 0\n >>> # Define new model with possibly different world size\n >>> new_model, new_optim, new_group = ...\n >>> sharded_osd = FSDP.scatter_full_optim_state_dict(full_osd, new_model, group=new_group)\n >>> new_optim.load_state_dict(sharded_osd)\n\n Note:\n\n Both \"shard_full_optim_state_dict()\" and\n \"scatter_full_optim_state_dict()\" may be used to get the\n sharded optimizer state dict to load. Assuming that the full\n optimizer state dict resides in CPU memory, the former\n requires each rank to have the full dict in CPU memory, where\n each rank individually shards the dict without any\n communication, while the latter requires only rank 0 to have\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "the full dict in CPU memory, where rank 0 moves each shard to\n GPU memory (for NCCL) and communicates it to ranks\n appropriately. 
Hence, the former has higher aggregate CPU\n memory cost, while the latter has higher communication cost.\n Parameters:\n * **full_optim_state_dict** (*Optional**[**Dict**[**str**,\n **Any**]**]*) -- Optimizer state dict corresponding to the\n unflattened parameters and holding the full non-sharded\n optimizer state if on rank 0; the argument is ignored on\n nonzero ranks.\n\n * **model** (*torch.nn.Module*) -- Root module (which may or\n may not be a \"FullyShardedDataParallel\" instance) whose\n parameters correspond to the optimizer state in\n \"full_optim_state_dict\".\n\n * **optim_input**\n (*Optional**[**Union**[**List**[**Dict**[**str**,\n **Any**]**]**, **Iterable**[**torch.nn.Parameter**]**]**]*)\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "-- Input passed into the optimizer representing either a\n \"list\" of parameter groups or an iterable of parameters; if\n \"None\", then this method assumes the input was\n \"model.parameters()\". This argument is deprecated, and\n there is no need to pass it in anymore. (Default: \"None\")\n * **optim** (*Optional**[**torch.optim.Optimizer**]*) --\n Optimizer that will load the state dict returned by this\n method. This is the preferred argument to use over\n \"optim_input\". (Default: \"None\")\n\n * **group** (*dist.ProcessGroup*) -- Model's process group or\n \"None\" if using the default process group. (Default:\n \"None\")\n\n Returns:\n The full optimizer state dict now remapped to flattened\n parameters instead of unflattened parameters and restricted\n to only include this rank's part of the optimizer state.\n\n Return type:\n Dict[str, Any]\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "Return type:\n Dict[str, Any]\nstatic set_state_dict_type(module, state_dict_type, state_dict_config=None)\n Set the \"state_dict_type\" and the corresponding (optional)\n configurations of all the descendant FSDP modules of the target\n module. The target module does not have to be a FSDP module. If\n the target module is a FSDP module, its \"state_dict_type\" will\n also be changed.\n\n Note:\n\n This API should be called for only the top-level (root)\n module.\n\n Note:\n\n This API enables users to transparently use the conventional\n \"state_dict\" API to take model checkpoints in cases where the\n root FSDP module is wrapped by another \"nn.Module\". 
For\n example, the following will ensure \"state_dict\" is called on\n all non-FSDP instances, while dispatching into\n *sharded_state_dict* implementation for FSDP:\n\n Example:\n\n >>> model = DDP(FSDP(...))\n >>> FSDP.set_state_dict_type(\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "\n\n\nFSDP.set_state_dict_type(\n >>> model,\n >>> StateDictType.SHARDED_STATE_DICT,\n >>> ShardedStateDictConfig(offload_to_cpu=True),\n >>> )\n >>> checkpoint = model.state_dict()\n\n\n\n Parameters:\n * **module** (*torch.nn.Module*) -- Root module.\n\n * **state_dict_type** (*StateDictType*) -- the desired\n \"state_dict_type\" to set.\n\n * **state_dict_config** (*Optional**[**StateDictConfig**]*)\n -- the configuration for the target \"state_dict_type\".\n\n Return type:\n *Tuple*[*StateDictType*, *StateDictConfig*]\n\nstatic shard_full_optim_state_dict(full_optim_state_dict, model, optim_input=None, optim=None)\n Shards the full optimizer state dict \"full_optim_state_dict\" by\n remapping the state to flattened parameters instead of\n unflattened parameters and restricting to only this rank's part\n of the optimizer state. The first argument should be the return\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "value of \"full_optim_state_dict()\".\n Example:\n\n >>> from torch.distributed.fsdp import FullyShardedDataParallel as FSDP\n >>> model, optim = ...\n >>> full_osd = FSDP.full_optim_state_dict(model, optim)\n >>> torch.save(full_osd, PATH)\n >>> # Define new model with possibly different world size\n >>> new_model, new_optim = ...\n >>> full_osd = torch.load(PATH)\n >>> sharded_osd = FSDP.shard_full_optim_state_dict(full_osd, new_model)\n >>> new_optim.load_state_dict(sharded_osd)\n\n Note:\n\n Both \"shard_full_optim_state_dict()\" and\n \"scatter_full_optim_state_dict()\" may be used to get the\n sharded optimizer state dict to load. Assuming that the full\n optimizer state dict resides in CPU memory, the former\n requires each rank to have the full dict in CPU memory, where\n each rank individually shards the dict without any\n communication, while the latter requires only rank 0 to have\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "the full dict in CPU memory, where rank 0 moves each shard to\n GPU memory (for NCCL) and communicates it to ranks\n appropriately. Hence, the former has higher aggregate CPU\n memory cost, while the latter has higher communication cost.\n Parameters:\n * **full_optim_state_dict** (*Dict**[**str**, **Any**]*) --\n Optimizer state dict corresponding to the unflattened\n parameters and holding the full non-sharded optimizer\n state.\n\n * **model** (*torch.nn.Module*) -- Root module (which may or\n may not be a \"FullyShardedDataParallel\" instance) whose\n parameters correspond to the optimizer state in\n \"full_optim_state_dict\".\n\n * **optim_input**\n (*Optional**[**Union**[**List**[**Dict**[**str**,\n **Any**]**]**, **Iterable**[**torch.nn.Parameter**]**]**]*)\n -- Input passed into the optimizer representing either a\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "\"list\" of parameter groups or an iterable of parameters; if\n \"None\", then this method assumes the input was\n \"model.parameters()\". This argument is deprecated, and\n there is no need to pass it in anymore. 
(Default: \"None\")\n * **optim** (*Optional**[**torch.optim.Optimizer**]*) --\n Optimizer that will load the state dict returned by this\n method. This is the preferred argument to use over\n \"optim_input\". (Default: \"None\")\n\n Returns:\n The full optimizer state dict now remapped to flattened\n parameters instead of unflattened parameters and restricted\n to only include this rank's part of the optimizer state.\n\n Return type:\n Dict[str, Any]\n\nstatic sharded_optim_state_dict(model, optim, group=None)\n The API is similar to \"full_optim_state_dict()\" but this API\n chunks all non-zero-dimension states to \"ShardedTensor\" to save\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "memory. This API should only be used when the model \"state_dict\"\n is derived with the context manager \"with\n state_dict_type(SHARDED_STATE_DICT):\".\n For the detailed usage, refer to \"full_optim_state_dict()\".\n\n Warning:\n\n The returned state dict contains \"ShardedTensor\" and cannot be\n directly used by the regular \"optim.load_state_dict\".\n\n Return type:\n *Dict*[str, *Any*]\n\nstatic state_dict_type(module, state_dict_type, state_dict_config=None)\n A context manager to set the \"state_dict_type\" of all the\n descendant FSDP modules of the target module. This context\n manager has the same functions as \"set_state_dict_type()\". Read\n the document of \"set_state_dict_type()\" for the detail.\n\n Example:\n\n >>> model = DDP(FSDP(...))\n >>> with FSDP.state_dict_type(\n >>> model,\n >>> StateDictType.SHARDED_STATE_DICT,\n >>> ):\n >>> checkpoint = model.state_dict()\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "Parameters:\n * module (torch.nn.Module) -- Root module.\n * **state_dict_type** (*StateDictType*) -- the desired\n \"state_dict_type\" to set.\n\n * **state_dict_config** (*Optional**[**StateDictConfig**]*)\n -- the configuration for the target \"state_dict_type\".\n\n Return type:\n *Generator*\n\nstatic summon_full_params(module, recurse=True, writeback=True, rank0_only=False, offload_to_cpu=False, with_grads=False)\n A context manager to expose full params for FSDP instances. Can\n be useful *after* forward/backward for a model to get the params\n for additional processing or checking. It can take a non-FSDP\n module and will summon full params for all contained FSDP\n modules as well as their children, depending on the \"recurse\"\n argument.\n\n Note:\n\n This can be used on inner FSDPs.\n\n Note:\n\n This can *not* be used within a forward or backward pass. Nor\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "can forward and backward be started from within this context.\n Note:\n\n Parameters will revert to their local shards after the context\n manager exits, storage behavior is the same as forward.\n\n Note:\n\n The full parameters can be modified, but only the portion\n corresponding to the local param shard will persist after the\n context manager exits (unless \"writeback=False\", in which case\n changes will be discarded). In the case where FSDP does not\n shard the parameters, currently only when \"world_size == 1\",\n or \"NO_SHARD\" config, the modification is persisted regardless\n of \"writeback\".\n\n Note:\n\n This method works on modules which are not FSDP themselves but\n may contain multiple independent FSDP units. 
In that case, the\n given arguments will apply to all contained FSDP units.\n\n Warning:\n\n Note that \"rank0_only=True\" in conjunction with\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "\"writeback=True\" is not currently supported and will raise an\n error. This is because model parameter shapes would be\n different across ranks within the context, and writing to them\n can lead to inconsistency across ranks when the context is\n exited.\n Warning:\n\n Note that \"offload_to_cpu\" and \"rank0_only=False\" will result\n in full parameters being redundantly copied to CPU memory for\n GPUs that reside on the same machine, which may incur the risk\n of CPU OOM. It is recommended to use \"offload_to_cpu\" with\n \"rank0_only=True\".\n\n Parameters:\n * **recurse** (*bool**, **Optional*) -- recursively summon\n all params for nested FSDP instances (default: True).\n\n * **writeback** (*bool**, **Optional*) -- if \"False\",\n modifications to params are discarded after the context\n manager exits; disabling this can be slightly more\n efficient (default: True)\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "efficient (default: True)\n * **rank0_only** (*bool**, **Optional*) -- if \"True\", full\n parameters are materialized on only global rank 0. This\n means that within the context, only rank 0 will have full\n parameters and the other ranks will have sharded\n parameters. Note that setting \"rank0_only=True\" with\n \"writeback=True\" is not supported, as model parameter\n shapes will be different across ranks within the context,\n and writing to them can lead to inconsistency across ranks\n when the context is exited.\n\n * **offload_to_cpu** (*bool**, **Optional*) -- If \"True\",\n full parameters are offloaded to CPU. Note that this\n offloading currently only occurs if the parameter is\n sharded (which is only not the case for world_size = 1 or\n \"NO_SHARD\" config). It is recommended to use\n \"offload_to_cpu\" with \"rank0_only=True\" to avoid redundant\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "copies of model parameters being offloaded to the same CPU\n memory.\n * **with_grads** (*bool**, **Optional*) -- If \"True\",\n gradients are also unsharded with the parameters.\n Currently, this is only supported when passing\n \"use_orig_params=True\" to the FSDP constructor and\n \"offload_to_cpu=False\" to this method. (Default: \"False\")\n\n Return type:\n *Generator*\n\nclass torch.distributed.fsdp.BackwardPrefetch(value)\nThis configures explicit backward prefetching, which can improve\n throughput but may slightly increase peak memory usage.\nFor NCCL backend, any collectives, even if issued in different\n streams, contend for the same per-device NCCL stream, which is why\n the relative order in which the collectives are issued matters for\n overlapping. The different backward prefetching settings correspond\n to different orderings.\n\n\"BACKWARD_PRE\": This prefetches the next set of parameters before\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "the current set of parameter's gradient computation. This\n improves backward pass throughput by overlapping communication\n (next all-gather) and computation (current gradient computation).\n\n\"BACKWARD_POST\": This prefetches the next set of parameters after\n the current set of parameter's gradient computation. 
This may\n improve backward pass throughput by overlapping communication\n (current reduce-scatter) and computation (next gradient\n computation). Specifically, the next all-gather is reordered to\n be before the current reduce-scatter.\n\nNote:\n If the increase in peak memory usage from prefetching is an\n issue, you may consider passing \"limit_all_gathers=True\" to the\n FSDP constructor, which may help reduce peak memory usage in some\n cases.\n\nclass torch.distributed.fsdp.ShardingStrategy(value)\nThis specifies the sharding strategy to be used for distributed\n training by \"FullyShardedDataParallel\".", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "training by \"FullyShardedDataParallel\".\n\n\n\"FULL_SHARD\": Parameters, gradients, and optimizer states are\n sharded. For the parameters, this strategy unshards (via all-\n gather) before the forward, reshards after the forward, unshards\n before the backward computation, and reshards after the backward\n computation. For gradients, it synchronizes and shards them (via\n reduce-scatter) after the backward computation. The sharded\n optimizer states are updated locally per rank.\n\n\n\"SHARD_GRAD_OP\": Gradients and optimizer states are sharded\n during computation, and additionally, parameters are sharded\n outside computation. For the parameters, this strategy unshards\n before the forward, does not reshard them after the forward, and\n only reshards them after the backward computation. The sharded\n optimizer states are updated locally per rank. Inside\n \"no_sync()\", the parameters are not resharded after the backward\n computation.\n\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "computation.\n\n\n\"NO_SHARD\": Parameters, gradients, and optimizer states are not\n sharded but instead replicated across ranks similar to PyTorch's\n \"DistributedDataParallel\" API. For gradients, this strategy\n synchronizes them (via all-reduce) after the backward\n computation. The unsharded optimizer states are updated locally\n per rank.\n\n\n\"HYBRID_SHARD\": Apply \"FULL_SHARD\" within a node, and replicate\n parameters across\n nodes. This results in reduced communication volume as\n expensive all-gathers and reduce-scatters are only done within\n a node, which can be more performant for medium -sized models.\n\n\n\"_HYBRID_SHARD_ZERO2\": Apply \"SHARD_GRAD_OP\" within a node, and\n replicate parameters across\n nodes. This is like \"HYBRID_SHARD\", except this may provide\n even higher throughput since the unsharded parameters are not\n freed after the forward pass, saving the all-gathers in the\n pre-backward.\n\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "pre-backward.\nclass torch.distributed.fsdp.MixedPrecision(param_dtype=None, reduce_dtype=None, buffer_dtype=None, keep_low_precision_grads=False, cast_forward_inputs=False, cast_root_forward_inputs=True)\nThis configures FSDP-native mixed precision training.\nVariables:\n * param_dtype (torch.dtype) -- This specifies the dtype\n for model parameters, inputs (when \"cast_forward_inputs\" or\n \"cast_root_forward_inputsis set toTrue\"), and therefore\n the dtype for computation. However, outside the forward and\n backward passes, parameters are in full precision. 
Model\n checkpointing always happens in full precision.\n * **reduce_dtype** (*torch.dtype*) -- This specifies the dtype\n for gradient reduction, which is permitted to differ from\n \"param_dtype\".\n\n * **buffer_dtype** (*torch.dtype*) -- This specifies the dtype\n for buffers. FSDP does not shard buffers, casts them to\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "\"buffer_dtype\" in the first forward pass, and keeps them in\n that dtype thereafter. Model checkpointing always happens in\n full precision.\n * **keep_low_precision_grads** (*bool*) -- This specifies\n whether to upcast gradients back to the full parameter\n precision after the backward pass. This may be set to \"False\"\n to save memory if using custom optimizers that can perform the\n optimizer step in \"reduce_dtype\". (Default: \"False\")\n\n * **cast_forward_inputs** (*bool*) -- Cast floating point\n tensors in the forward arguments and keyword arguments to\n \"param_dtype\". (Default: \"False\")\n\n * **cast_root_forward_inputs** (*bool*) -- Cast floating point\n tensors in the forward arguments and keyword arguments to\n \"param_dtype\" for the root FSDP instance. It takes precedence\n over \"cast_forward_inputs\" for the root FSDP instance.\n (Default: \"True\")\n\nNote:\n This API is experimental and subject to change.\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "Note:\n Only floating point tensors are cast to their specified dtypes.\n\nNote:\n In \"summon_full_params\", parameters are forced to full precision,\n but buffers are not.\n\nNote:\n \"state_dict\" checkpoints parameters and buffers in full\n precision. For buffers, this is only supported for\n \"StateDictType.FULL_STATE_DICT\".\n\nNote:\n Each low precision dtype must be specified explicitly. For\n example, \"MixedPrecision(reduce_dtype=torch.float16)\" only\n specifies the reduction dtype to be low precision, and FSDP will\n not cast parameters or buffers.\n\nNote:\n If a \"reduce_dtype\" is not specified, then gradient reduction\n happens in \"param_dtype\" if specified or the original parameter\n dtype otherwise.\n\nNote:\n If the user passes a model with \"BatchNorm\" modules and an\n \"auto_wrap_policy\" to the FSDP constructor, then FSDP will\n disable mixed precision for \"BatchNorm\" modules by wrapping them\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "separately in their own FSDP instance with mixed precision\n disabled. This is due to some missing low precision \"BatchNorm\"\n kernels. If the user does not use an \"auto_wrap_policy\", then the\n user must take care to not use mixed precision for FSDP instances\n containing \"BatchNorm\" modules.\nNote:\n \"MixedPrecision\" has \"cast_root_forward_inputs=True\" and\n \"cast_forward_inputs=False\" by default. For the root FSDP\n instance, its \"cast_root_forward_inputs\" takes precedence over\n its \"cast_forward_inputs\". For non-root FSDP instances, their\n \"cast_root_forward_inputs\" values are ignored. 
The default\n setting is sufficient for the typical case where each FSDP\n instance has the same \"MixedPrecision\" configuration and only\n needs to cast inputs to the \"param_dtype\" at the beginning of the\n model's forward pass.\n\nNote:\n For nested FSDP instances with different \"MixedPrecision\"\n configurations, we recommend setting individual\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "\"cast_forward_inputs\" values to configure casting inputs or not\n before each instance's forward. In such a case, since the casts\n happen before each FSDP instance's forward, a parent FSDP\n instance should have its non-FSDP submodules run before its FSDP\n submodules to avoid the activation dtype being changed due to a\n different \"MixedPrecision\" configuration.Example:\n >>> model = nn.Sequential(nn.Linear(3, 3), nn.Linear(3, 3))\n >>> model[1] = FSDP(\n >>> model[1],\n >>> mixed_precision=MixedPrecision(param_dtype=torch.float16, cast_forward_inputs=True),\n >>> )\n >>> model = FSDP(\n >>> model,\n >>> mixed_precision=MixedPrecision(param_dtype=torch.bfloat16, cast_forward_inputs=True),\n >>> )\n\n The above shows a working example. On the other hand, if\n \"model[1]\" were replaced with \"model[0]\", meaning that the\n submodule using different \"MixedPrecision\" ran its forward first,\n", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "then \"model[1]\" would incorrectly see \"float16\" activations\n instead of \"bfloat16\" ones.\nclass torch.distributed.fsdp.CPUOffload(offload_params=False)\nThis configures CPU offloading.\nVariables:\n offload_params (bool) -- This specifies whether to offload\n parameters to CPU when not involved in computation. If enabled,\n this implicitly offloads gradients to CPU as well. This is to\n support the optimizer step, which requires parameters and\n gradients to be on the same device.", "source": "https://pytorch.org/docs/stable/fsdp.html", "category": "pytorch docs"} {"text": "torch.utils.cpp_extension\ntorch.utils.cpp_extension.CppExtension(name, sources, args, *kwargs)\nCreates a \"setuptools.Extension\" for C++.\nConvenience method that creates a \"setuptools.Extension\" with the\n bare minimum (but often sufficient) arguments to build a C++\n extension.\nAll arguments are forwarded to the \"setuptools.Extension\"\n constructor.\n-[ Example ]-\n\n\n\nfrom setuptools import setup\nfrom torch.utils.cpp_extension import BuildExtension, CppExtension\nsetup(\n ... name='extension',\n ... ext_modules=[\n ... CppExtension(\n ... name='extension',\n ... sources=['extension.cpp'],\n ... extra_compile_args=['-g']),\n ... ],\n ... cmdclass={\n ... 'build_ext': BuildExtension\n ... })\n\n\n\ntorch.utils.cpp_extension.CUDAExtension(name, sources, args, *kwargs)\nCreates a \"setuptools.Extension\" for CUDA/C++.", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "Creates a \"setuptools.Extension\" for CUDA/C++.\nConvenience method that creates a \"setuptools.Extension\" with the\n bare minimum (but often sufficient) arguments to build a CUDA/C++\n extension. This includes the CUDA include path, library path and\n runtime library.\nAll arguments are forwarded to the \"setuptools.Extension\"\n constructor.\n-[ Example ]-\n\n\n\nfrom setuptools import setup\nfrom torch.utils.cpp_extension import BuildExtension, CUDAExtension\nsetup(\n ... name='cuda_extension',\n ... ext_modules=[\n ... CUDAExtension(\n ... 
name='cuda_extension',\n ... sources=['extension.cpp', 'extension_kernel.cu'],\n ... extra_compile_args={'cxx': ['-g'],\n ... 'nvcc': ['-O2']})\n ... ],\n ... cmdclass={\n ... 'build_ext': BuildExtension\n ... })\n\n\n\nCompute capabilities:\nBy default the extension will be compiled to run on all archs of", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "the cards visible during the building process of the extension,\n plus PTX. If down the road a new card is installed the extension\n may need to be recompiled. If a visible card has a compute\n capability (CC) that's newer than the newest version for which your\n nvcc can build fully-compiled binaries, Pytorch will make nvcc fall\n back to building kernels with the newest version of PTX your nvcc\n does support (see below for details on PTX).\nYou can override the default behavior using TORCH_CUDA_ARCH_LIST\n to explicitly specify which CCs you want the extension to support:\nTORCH_CUDA_ARCH_LIST=\"6.1 8.6\" python build_my_extension.py\n TORCH_CUDA_ARCH_LIST=\"5.2 6.0 6.1 7.0 7.5 8.0 8.6+PTX\" python\n build_my_extension.py\nThe +PTX option causes extension kernel binaries to include PTX\n instructions for the specified CC. PTX is an intermediate\n representation that allows kernels to runtime-compile for any CC >=\n the specified CC (for example, 8.6+PTX generates PTX that can", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "runtime-compile for any GPU with CC >= 8.6). This improves your\n binary's forward compatibility. However, relying on older PTX to\n provide forward compat by runtime-compiling for newer CCs can\n modestly reduce performance on those newer CCs. If you know exact\n CC(s) of the GPUs you want to target, you're always better off\n specifying them individually. For example, if you want your\n extension to run on 8.0 and 8.6, \"8.0+PTX\" would work functionally\n because it includes PTX that can runtime-compile for 8.6, but \"8.0\n 8.6\" would be better.\nNote that while it's possible to include all supported archs, the\n more archs get included the slower the building process will be, as\n it will build a separate kernel image for each arch.\nNote that CUDA-11.5 nvcc will hit internal compiler error while\n parsing torch/extension.h on Windows. To workaround the issue, move\n python binding logic to pure C++ file.\nExample use:\n #include at::Tensor", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "include at::Tensor\n SigmoidAlphaBlendForwardCuda(....)\n\nInstead of:\n #include torch::Tensor\n SigmoidAlphaBlendForwardCuda(...)\nCurrently open issue for nvcc bug:\n https://github.com/pytorch/pytorch/issues/69460 Complete workaround\n code example: https://github.com/facebookresearch/pytorch3d/commit\n /cb170ac024a949f1f9614ffe6af1c38d972f7d48\nRelocatable device code linking:\nIf you want to reference device symbols across compilation units\n (across object files), the object files need to be built with\n relocatable device code (-rdc=true or -dc). An exception to this\n rule is \"dynamic parallelism\" (nested kernel launches) which is\n not used a lot anymore. Relocatable device code is less optimized\n so it needs to be used only on object files that need it. 
Using\n -dlto (Device Link Time Optimization) at the device code\n compilation step and dlink step help reduce the protentional perf", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "degradation of -rdc. Note that it needs to be used at both steps\n to be useful.\nIf you have rdc objects you need to have an extra -dlink\n (device linking) step before the CPU symbol linking step. There is\n also a case where -dlink is used without -rdc: when an\n extension is linked against a static lib containing rdc-compiled\n objects like the NVSHMEM\n library.\nNote: Ninja is required to build a CUDA Extension with RDC linking.\n-[ Example ]-\n\n\n\nCUDAExtension(\n ... name='cuda_extension',\n ... sources=['extension.cpp', 'extension_kernel.cu'],\n ... dlink=True,\n ... dlink_libraries=[\"dlink_lib\"],\n ... extra_compile_args={'cxx': ['-g'],\n ... 'nvcc': ['-O2', '-rdc=true']})\n\n\n\ntorch.utils.cpp_extension.BuildExtension(args, *kwargs)\nA custom \"setuptools\" build extension .\nThis \"setuptools.build_ext\" subclass takes care of passing the", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "minimum required compiler flags (e.g. \"-std=c++17\") as well as\n mixed C++/CUDA compilation (and support for CUDA files in general).\nWhen using \"BuildExtension\", it is allowed to supply a dictionary\n for \"extra_compile_args\" (rather than the usual list) that maps\n from languages (\"cxx\" or \"nvcc\") to a list of additional compiler\n flags to supply to the compiler. This makes it possible to supply\n different flags to the C++ and CUDA compiler during mixed\n compilation.\n\"use_ninja\" (bool): If \"use_ninja\" is \"True\" (default), then we\n attempt to build using the Ninja backend. Ninja greatly speeds up\n compilation compared to the standard \"setuptools.build_ext\".\n Fallbacks to the standard distutils backend if Ninja is not\n available.\nNote:\n By default, the Ninja backend uses #CPUS + 2 workers to build the\n extension. This may use up too many resources on some systems.\n One can control the number of workers by setting the *MAX_JOBS*\n", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "environment variable to a non-negative number.\ntorch.utils.cpp_extension.load(name, sources, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, is_standalone=False, keep_intermediates=True)\nLoads a PyTorch C++ extension just-in-time (JIT).\nTo load an extension, a Ninja build file is emitted, which is used\n to compile the given sources into a dynamic library. This library\n is subsequently loaded into the current Python process as a module\n and returned from this function, ready for use.\nBy default, the directory to which the build file is emitted and\n the resulting library compiled to is\n \"/torch_extensions/\", where \"\" is the temporary\n folder on the current platform and \"\" the name of the\n extension. This location can be overridden in two ways. First, if\n the \"TORCH_EXTENSIONS_DIR\" environment variable is set, it replaces", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "\"/torch_extensions\" and all extensions will be compiled into\n subfolders of this directory. 
Second, if the \"build_directory\"\n argument to this function is supplied, it overrides the entire\n path, i.e. the library will be compiled into that folder directly.\nTo compile the sources, the default system compiler (\"c++\") is\n used, which can be overridden by setting the \"CXX\" environment\n variable. To pass additional arguments to the compilation process,\n \"extra_cflags\" or \"extra_ldflags\" can be provided. For example, to\n compile your extension with optimizations, pass\n \"extra_cflags=['-O3']\". You can also use \"extra_cflags\" to pass\n further include directories.\nCUDA support with mixed compilation is provided. Simply pass CUDA\n source files (\".cu\" or \".cuh\") along with other sources. Such files\n will be detected and compiled with nvcc rather than the C++\n compiler. This includes passing the CUDA lib64 directory as a", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "library directory, and linking \"cudart\". You can pass additional\n flags to nvcc via \"extra_cuda_cflags\", just like with\n \"extra_cflags\" for C++. Various heuristics for finding the CUDA\n install directory are used, which usually work fine. If not,\n setting the \"CUDA_HOME\" environment variable is the safest option.\nParameters:\n * name -- The name of the extension to build. This MUST be\n the same as the name of the pybind11 module!\n * **sources** (*Union**[**str**, **List**[**str**]**]*) -- A\n list of relative or absolute paths to C++ source files.\n\n * **extra_cflags** -- optional list of compiler flags to forward\n to the build.\n\n * **extra_cuda_cflags** -- optional list of compiler flags to\n forward to nvcc when building CUDA sources.\n\n * **extra_ldflags** -- optional list of linker flags to forward\n to the build.\n\n * **extra_include_paths** -- optional list of include\n directories to forward to the build.\n", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "directories to forward to the build.\n * **build_directory** -- optional path to use as build\n workspace.\n\n * **verbose** -- If \"True\", turns on verbose logging of load\n steps.\n\n * **with_cuda** (*Optional**[**bool**]*) -- Determines whether\n CUDA headers and libraries are added to the build. If set to\n \"None\" (default), this value is automatically determined based\n on the existence of \".cu\" or \".cuh\" in \"sources\". Set it to\n *True`* to force CUDA headers and libraries to be included.\n\n * **is_python_module** -- If \"True\" (default), imports the\n produced shared library as a Python module. If \"False\",\n behavior depends on \"is_standalone\".\n\n * **is_standalone** -- If \"False\" (default) loads the\n constructed extension into the process as a plain dynamic\n library. If \"True\", build a standalone executable.\n\nReturns:\n Returns the loaded PyTorch extension as a Python module.", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "If \"is_python_module\" is \"False\" and \"is_standalone\" is \"False\":\n Returns nothing. (The shared library is loaded into the\n process as a side effect.)\n If \"is_standalone\" is \"True\".\n Return the path to the executable. (On Windows,\n TORCH_LIB_PATH is added to the PATH environment variable as a\n side effect.)\n\nReturn type:\n If \"is_python_module\" is \"True\"\n-[ Example ]-\n\n\n\nfrom torch.utils.cpp_extension import load\nmodule = load(\n ... name='extension',\n ... 
sources=['extension.cpp', 'extension_kernel.cu'],\n ... extra_cflags=['-O2'],\n ... verbose=True)\n\n\n\ntorch.utils.cpp_extension.load_inline(name, cpp_sources, cuda_sources=None, functions=None, extra_cflags=None, extra_cuda_cflags=None, extra_ldflags=None, extra_include_paths=None, build_directory=None, verbose=False, with_cuda=None, is_python_module=True, with_pytorch_error_handling=True, keep_intermediates=True)", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "Loads a PyTorch C++ extension just-in-time (JIT) from string\n sources.\nThis function behaves exactly like \"load()\", but takes its sources\n as strings rather than filenames. These strings are stored to files\n in the build directory, after which the behavior of \"load_inline()\"\n is identical to \"load()\".\nSee the tests for good examples of using this function.\nSources may omit two required parts of a typical non-inline C++\n extension: the necessary header includes, as well as the (pybind11)\n binding code. More precisely, strings passed to \"cpp_sources\" are\n first concatenated into a single \".cpp\" file. This file is then\n prepended with \"#include \".\nFurthermore, if the \"functions\" argument is supplied, bindings will\n be automatically generated for each function specified. \"functions\"\n can either be a list of function names, or a dictionary mapping\n from function names to docstrings. If a list is given, the name of", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "each function is used as its docstring.\nThe sources in \"cuda_sources\" are concatenated into a separate\n \".cu\" file and prepended with \"torch/types.h\", \"cuda.h\" and\n \"cuda_runtime.h\" includes. The \".cpp\" and \".cu\" files are compiled\n separately, but ultimately linked into a single library. Note that\n no bindings are generated for functions in \"cuda_sources\" per se.\n To bind to a CUDA kernel, you must create a C++ function that calls\n it, and either declare or define this C++ function in one of the\n \"cpp_sources\" (and include its name in \"functions\").\nSee \"load()\" for a description of arguments omitted below.\nParameters:\n * cpp_sources -- A string, or list of strings, containing\n C++ source code.\n * **cuda_sources** -- A string, or list of strings, containing\n CUDA source code.\n\n * **functions** -- A list of function names for which to\n generate function bindings. If a dictionary is given, it\n", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "should map function names to docstrings (which are otherwise\n just the function names).\n * **with_cuda** -- Determines whether CUDA headers and libraries\n are added to the build. If set to \"None\" (default), this value\n is automatically determined based on whether \"cuda_sources\" is\n provided. Set it to \"True\" to force CUDA headers and libraries\n to be included.\n\n * **with_pytorch_error_handling** -- Determines whether pytorch\n error and warning macros are handled by pytorch instead of\n pybind. To do this, each function \"foo\" is called via an\n intermediary \"_safe_foo\" function. This redirection might\n cause issues in obscure cases of cpp. 
This flag should be set\n to \"False\" when this redirect causes issues.\n\n-[ Example ]-\n\n\n\nfrom torch.utils.cpp_extension import load_inline\nsource = \"\"\"\n at::Tensor sin_add(at::Tensor x, at::Tensor y) {\n return x.sin() + y.sin();\n }\n \"\"\"\n\n\n", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "return x.sin() + y.sin();\n }\n \"\"\"\n\n\n\nmodule = load_inline(name='inline_extension',\n ... cpp_sources=[source],\n ... functions=['sin_add'])\n\n\n\nNote:\n By default, the Ninja backend uses #CPUS + 2 workers to build the\n extension. This may use up too many resources on some systems.\n One can control the number of workers by setting the *MAX_JOBS*\n environment variable to a non-negative number.\n\ntorch.utils.cpp_extension.include_paths(cuda=False)\nGet the include paths required to build a C++ or CUDA extension.\nParameters:\n cuda (bool) -- If True, includes CUDA-specific include\n paths.\nReturns:\n A list of include path strings.\nReturn type:\n List[str]\ntorch.utils.cpp_extension.get_compiler_abi_compatibility_and_version(compiler)\nDetermine if the given compiler is ABI-compatible with PyTorch\n alongside its version.\nParameters:", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "alongside its version.\nParameters:\n compiler (str) -- The compiler executable name to check\n (e.g. \"g++\"). Must be executable in a shell process.\nReturns:\n A tuple that contains a boolean that defines if the compiler is\n (likely) ABI-incompatible with PyTorch, followed by a\n TorchVersion string that contains the compiler version\n separated by dots.\nReturn type:\n Tuple[bool, TorchVersion]\ntorch.utils.cpp_extension.verify_ninja_availability()\nRaises \"RuntimeError\" if ninja build system is not available on the\n system, does nothing otherwise.\ntorch.utils.cpp_extension.is_ninja_available()\nReturns \"True\" if the ninja build system is available on the\n system, \"False\" otherwise.", "source": "https://pytorch.org/docs/stable/cpp_extension.html", "category": "pytorch docs"} {"text": "Installing TorchDynamo\nThis section describes how to install TorchDynamo. TorchDynamo is\nincluded in the nightly binaries of PyTorch. For more information, see\nGetting Started.\nRequirements\nYou must have the following prerequisites to use TorchDynamo:\n\n\nA Linux or macOS environment\n\n\nPython 3.8 (recommended). Python 3.7 through 3.10 are supported and\n tested. Make sure to have a development version of Python installed\n locally as well.\n\n\nGPU/CUDA Requirements\nTo use GPU back ends, and in particular Triton, make sure that the\nCUDA that you have installed locally matches the PyTorch version you\nare running.\nThe following command installs GPU PyTorch + TorchDynamo along with\nGPU TorchDynamo dependencies (for CUDA 11.7):\npip3 install numpy --pre torch[dynamo] --force-reinstall --extra-index-url https://download.pytorch.org/whl/nightly/cu117\nCPU requirements\nThere are no additional requirements for CPU TorchDynamo. CPU", "source": "https://pytorch.org/docs/stable/dynamo/installation.html", "category": "pytorch docs"} {"text": "TorchDynamo is included in the nightly versions of PyTorch. 
To\ninstall, run the following command:\npip3 install --pre torch --extra-index-url https://download.pytorch.org/whl/nightly/cpu\nInstall from Local Source\nAlternatively, you can build PyTorch from source, which has\nTorchDynamo included.\nTo install GPU TorchDynamo dependencies, run \"make triton\" in the\nPyTorch repo root directory.\nVerify Installation\nIf you built PyTorch from source, then you can run the following\ncommands (from the PyTorch repo root directory) to check that\nTorchDynamo is installed correctly:\ncd tools/dynamo\n python verify_dynamo.py\nIf you do not have the PyTorch source locally, you can alternatively\ncopy the script (\"tools/dynamo/verify_dynamo.py\") from the PyTorch\nrepository and run it locally.\nDocker Installation\nWe also provide all the required dependencies in the PyTorch nightly\nbinaries which you can download with the following command:", "source": "https://pytorch.org/docs/stable/dynamo/installation.html", "category": "pytorch docs"} {"text": "docker pull ghcr.io/pytorch/pytorch-nightly\nAnd for ad hoc experiments just make sure that your container has\naccess to all your GPUs:\ndocker run --gpus all -it ghcr.io/pytorch/pytorch-nightly:latest /bin/bash", "source": "https://pytorch.org/docs/stable/dynamo/installation.html", "category": "pytorch docs"} {"text": "TorchDynamo Overview\nTorchDynamo is a Python-level JIT compiler designed to make\nunmodified PyTorch programs faster. TorchDynamo hooks into the frame\nevaluation API in CPython (PEP 523) to dynamically modify Python\nbytecode right before it is executed. It rewrites Python bytecode in\norder to extract sequences of PyTorch operations into an FX Graph\nwhich is then just-in-time compiled with a customizable backend. It\ncreates this FX Graph through bytecode analysis and is designed to mix\nPython execution with compiled backends to get the best of both worlds\n\u00e2\u0080\u0094 usability and performance.\nTorchDynamo makes it easy to experiment with different compiler\nbackends to make PyTorch code faster with a single line decorator\n\"torch._dynamo.optimize()\"\n[image]\nTorchInductor is one of the backends supported by TorchDynamo Graph\ninto Triton for GPUs or C++/OpenMP for CPUs. We have a training\nperformance dashboard that provides performance comparison for", "source": "https://pytorch.org/docs/stable/dynamo/index.html", "category": "pytorch docs"} {"text": "different training backends. You can read more in the TorchInductor\npost on PyTorch dev-discuss.\nSee also:\n\n\nTorchDynamo deep-dive video\n\n\ndev-discuss topics\n\n", "source": "https://pytorch.org/docs/stable/dynamo/index.html", "category": "pytorch docs"} {"text": "Guards Overview\nFrom a UX perspective, TorchDynamo is very easy to use. 
The user\ninvokes \"torchdynamo.optimize\" as an annotation:\n@torchdynamo.optimize(my_compiler)\n def fn_foo(bar):\nWhere a complete example looks like this:\nfrom typing import List\n import torch\n import torchdynamo\n def my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):\n print(\"my_compiler() called with FX graph:\")\n gm.graph.print_tabular()\n return gm.forward # return a python callable\n @torchdynamo.optimize(my_compiler)\n def toy_example(a, b):\n x = a / (torch.abs(a) + 1)\n if b.sum() < 0:\n b = b * -1\n return x * b\n for _ in range(100):\n toy_example(torch.randn(10), torch.randn(10))\nThis allows TorchDynamo to capture the interpreted Python frames, grab\nany and all relevant information, and speed things up wherever it can.\nThe speedup comes from a few places, and can be rather dependent on", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "the backend (my_compiler in the example above) provided, but the one\nspeedup that is important in this section is caching. Caching\nitself is not a direct speedup but a critical enablement that prevents\nrecompilation. We dig a hole with dynamo, and caching allows us to get\nout. It enables us to hold perf neutrality while then enabling\nbackends - the true source of our speedups.\nWith even a pass-through no-op backend provided:\ndef my_compiler(gm: torch.fx.GraphModule, example_inputs: List[torch.Tensor]):\n return gm.forward\nWe can see TorchDynamo speeding up Python execution even on regular\nPython, not just PyTorch.\nCaching and Guards Overview\nTorchDynamo operates through caching transformed (by TorchDynamo) user\nbytecode. When TorchDynamo receives a frame for evaluation, it checks\nif the objects referenced in the frame have changed in certain\nways, and if not, TorchDynamo reads the previously transformed user", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "bytecode to evaluate it. In this section, we will focus on how we can\nidentify whether or not the objects referenced in the frame have\nchanged. This is a critical piece of functionality in TorchDynamo,\nbecause it drives the entire invalidation lifecycle. This\nfunctionality is called guards.\nAt a very high level, the flow can be summarized like this:\n\n\nTorchDynamo receives a Python frame.\n\n\nIt converts the frame (1) passing it through instruction\n translation.\n\n\nFor the objects captured in (2), TorchDynamo creates tracking\n objects that are: * tracked on an output graph, which is an\n internal specialization of a torch.fx.Tracer * guards\n\n\nTorchDynamo processes the guard objects created in (3), turning\n them into a generated Python function, check_fn, associated with\n a piece of code.\n\n\nThe check_fn is evaluated whenever we encounter this code a\n subsequent time - if a check_fn passes and evaluates to True,\n TorchDynamo identifies the code in the cache and the code\n\n", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "encountered here as same, and can be safely used. 
If it fails and\n evaluates to False, TorchDynamo identifies the code in the cache\n as not valid, and can be thrown out in favor of a new entry,\n through recompilation or a graph break.\nPython Frame Evaluation and PEP 523\nThe functionality of TorchDynamo is based on PEP 523.\nTorchDynamo installs a frame evaluation function on Python by using\n_PyInterpreterState_SetEvalFrameFunc. TorchDynamo has a hook where\nPython can hand control back to us during evaluation.\nThe function we have installed is \"convert_frame\" or\n\"convert_frame_assert\" in the \"nopython=True\" case, but glossing over\nthat nuance for now, let\u00e2\u0080\u0099s take a look at \"convert_frame_assert\", as\n\"convert_frame\" proxies to it.\nWe can find it on line 20 of convert_frame.py, with a signature as\nfollows:\ndef convert_frame_assert(compiler_fn: Callable, one_graph=True):\nThis function wraps the entry point of where Python invokes", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "TorchDynamo with a frame:\ndef _convert_frame_assert(frame: types.FrameType, cache_size: int):\nHere is what this function does:\n\n\nChecks if it has seen this \"code\"(see: f_code here) before and\n exits early if it did.\n\n\nChecks if the code is an unsupported case.\n\n\nChecks if the \"cache_size\" (second arg above) crosses the limit\n defined in the config, \"cache_size_limit\". If it has, the function\n drops the frame and logs warnings. This helps to avoid constant\n recompilation of a frame as it generally means that the frame is\n hot in an unexpected way and caching it produces needless overhead,\n as it is likely to get evicted the next time it is encountered.\n\n\nPasses the frame, alongside a function that creates an\n \"InstructionTranslator\" through bytecode transformation, via\n \"transform_code_object\". A few crucial things happen under the hood\n here:\n\n\nNew code is produced through \"transform_code_object\".\n\n\nAn FX tracer named \"output\" is produced through\n\n", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "\"InstructionTranslator\".\n This can be a bit confusing, as \"InstructionTranslator\" is not\n an *fx* tracer, but its stored in a variable named tracer, and\n its output***is***an `fx`tracer.*\n\n\n\nThe function produces guards and stores them on \"output\" above.\n\n\nThe function produces \"output_instructions\" and stores them on\n \"output\" above.\n\n\nThe function maps the newly produced transformed code to the\n initial code it read off the frame. This mapping is worth\n remembering, we will refer to it much later on below where we\n cover guard failures.\n\n\nUsing the transformed code from 4.1 and the guards from 4.3, the\n function produces a GuardedCode.\n\n\nNow that we have learned about frame evaluation, let\u00e2\u0080\u0099s review\n\"InstructionTranslator\", and see how it turns the frame we handed it\nover into TorchDynamo internal types.\nInstructionTranslator\nInstructionTranslator does a lot! We won\u00e2\u0080\u0099t cover the details of", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "everything it does, but most importantly for this document, it\nproduces a mapping of \"symbolic_locals\" which maintains a mapping from\nthe frame\u00e2\u0080\u0099s \"f_locals\" to TorchDynamo internal Variable objects (more\non these in a moment. 
\"symbolic_locals\" is filled via traversing the\nframe\u00e2\u0080\u0099s locals:\nself.symbolic_locals = collections.OrderedDict(\n (k, VariableBuilder(self, LocalSource(k))(f_locals[k]))\n for k in vars\n if k in f_locals\n )\nThe important component here is the invocation of a call into\n\"VariableBuilder\". \"VariableBuilder\"\u00e2\u0080\u0099s call implementation proxies\ninto a function called \"_wrap\", which in turn both constructs\ninstances of \"VariableTracker\" and calls \"make_guards\" on them. More\non that later.\nThis mapping, in turn, is critical as each Variable has associated\nguards, which are then passed to \"self.output\", the instance of\n\"OutputGraph\", an fx tracer, mentioned in 4.2 of the section above. If\nyou recall, this \"OutputGraph\", stored in a variable called \"output\"", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "is where our guards are stored before being passed on to become\n\"GuardedCode\"\nHow does \"InstructionTranslator\" do this? At the heart of it, there is\na loop that is pumped, which drives a function \"step\".\n\"step\" is just that - a single processing step, taking exactly one\ninstruction and doing something with it.\nNote:\nThese are real instructions processed by TorchDynamo\u00e2\u0080\u0099s\n \"transform_code_object\", and it is pretty cool.\nNote:\nThis section purposely skips the details of dis.get_instructions.\nFor the example above, here is a snippet of a what a few\n\"Instruction\"'s may look like:\nInstruction(opcode=124, opname='LOAD_FAST', arg=0, argval='b', offset=32, starts_line=8, is_jump_target=True, target=None)\n Instruction(opcode=100, opname='LOAD_CONST', arg=3, argval=-1, offset=34, starts_line=None, is_jump_target=False, target=None)\n Instruction(opcode=20, opname='BINARY_MULTIPLY', arg=None, argval=None, offset=36, starts_line=None, is_jump_target=False, target=None)", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "This is the core functionality of this function. Take a look at the\n\"opname\", and then take a look at this little snippet from inside\n\"step\";\nif not hasattr(self, inst.opname):\n unimplemented(f\"missing: {inst.opname}\")\n getattr(self, inst.opname)(inst)\nAs we can see, the function checks if the current class, the\n\"InstructionTranslator\" has an attribute set matching the operator\nname (for example, \"LOAD_CONST\"). If it does, the function invokes it,\npassing the whole instruction object in. If it does not, the function\ndrops the frame as unimplemented.\nFor the \"LOAD_CONST\" example, we can see that we do indeed support it,\nwith a relatively straightforward definition:\ndef LOAD_CONST(self, inst):\n self.push(ConstantVariable(value=inst.argval))\nWe can see that this function creates a new instance of the class\n\"ConstantVariable\" , with a value, in our example case, -1, and then\npushes it onto the stack.\nThere are dozens of such methods - see \"symbolic_convert.py\" for all", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "of them. Generally, we implement as many matching methods to Python\nbytecode instructions as possible.\nAcross both the logic downstream of \"step\" and the logic from invoking\n\"VariableBuilder\" - we now have a lot of \"VariableTracker\"s and of\ncourse, we\u00e2\u0080\u0099ve spoken about creating guards quiet a bit. 
Let\u00e2\u0080\u0099s dig into\nwhat Variables are, and get a little closer to understanding guards.\nVariables\nA \"ConstantVariable\" is an instance of\"VariableTracker\".\n\"VariableTracker\" represents a tracked Python local or stack value.\nWhen it comes to representing an object inside TorchDynamo, a\n\"VariableTracker\" does exactly what it says - it tracks a given\nvariable. It is an extremely flexible class, but there are a few\npoints to keep in mind:\n\n\nIt manages the \"guard\" relationship around the underlying object\n through:\n\n\n\"make_guard\"\n\n\n\"replace_guards\"\n\n\n\"add_guard(s)\"\n\n\n\"propagate\" - \"propagate(*vars: List[List[\"VariableTracker\"]])\" -\n\n", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "Perhaps the most important of all, in that it combines guards from\n all the provided \"VariableTracker\" instances passed in. It visits\n the guards and combines the guards from these onto itself.\n\n\nIt acts as a proxy on behalf of the underlying object, implementing\n methods for the rest of TorchDynamo to get information about the\n tracked object:\n\n\n\"call_method\"\n\n\n\"call_function\"\n\n\n\"python_type\"\n\n\n\"as_proxy\"\n\n\n\"is/as_python_proxy\"\n\n\nIt stores the variable \"source\" of type \"Source\", from\n \"torchdynamo/source.py\". This source type is a relatively self\n contained class that helps us organize and bookkeep where the\n original source came from, and helps provide convenience methods for\n things like getting the name, and importantly for us, producing\n guards.\n\n\nAnd this class (\"VariableTracker\") is built around subclassing,\nsomewhere between a full Abstract Base Class and fully fleshed out\nclass - it leaves many methods raising \"NotImplementedError\" - with", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "reliance on subclasses. See \"torchdynamo/variables/\" for all\nsubclasses to fulfill contracts and custom behaviors.\nKnowing what we know now, we can see an example of how an instruction\nfrom \"dis\", \"BUILD_TUPLE\":\n\"BUILD_TUPLE(count)\" Creates a tuple consuming count items from the\n stack, and pushes the resulting tuple onto the stack.\nIn our case, our signature will be a little different due to the way\nwe create \"Instruction\" objects, but the gist of it will be the same.\nInstead of passing in \"count\", we pass in an object with a little\nextra bookkeeping, and of course, we deal with turning regular old\npython objects into TorchDynamo notions:\ndef BUILD_TUPLE(self, inst):\n items = self.popn(inst.argval)\n options = VariableTracker.propagate(items)\n self.push(TupleVariable(items, **options))\nHere is what this code does:\n\nThe function reads \"argval\", which in this case, is analogous to\n \"counts\" in the pydoc for the equivalent instruction.\n", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "\n\nThe function \"popn\" the items, in this case, the signature is \"def\n popn(self, n: int) -> List[TensorVariable]:\" this hints at an\n underlying contract - we are returning \"TensorVariables\". If we\n take a closer look at \"sybmolic_convert.py\" and\n \"InstructionTranslatorBase\"/\"InstructionTranslator\"we see that the\n only thing pushed onto and popped from our stack are\n \"VariableTracker\"s.\n\n\nThe function calls \"VariableTracker.propagate\". 
This takes the\n guards from every single item popped off the stack in 2, and\n recursively traverses it and combines all the guards into\n \"options\": \"py return { \"guards\": guards, }\"\n\n\nThe function then makes a new instance of a \"VariableTracker\",\n \"TupleVariable\"out of the \"items\" and \"options\". This then allows\n us to install all the appropriate guards from the \"items\" that make\n up the new \"TupleVariable\"\n\n\nNote:\nWhere did the first guards come from? Propagation is a good\n technique, but we need something created before it can be", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "propagated. \"VariableBuilder\" calls \"make_guards\" as it creates\n \"VariableTracker\" instances, from \"f_locals\". This in turn calls\n into the \"source\", to have it create guards.\nAfter all this, bytecode translation is done and we are one step\ncloser to producing \"GuardedCode\". We now understand how locals become\n\"VariableTracker\"s, how instructions are handled, and where guards are\ncalled on for creation. Before we can go into seeing how code and\nguards are combined into a GuardedCode object, we need to dig a little\nbit into those \"make_guard\" and \"source.make_guard\" calls above. We\ncan then understand, what was going on when we made guards alongside,\nand on, \"VariableTracker\" instances.\nMaking Guards\nGuards are just Python objects, of the class \"Guard\". Let's look at\nthem in more detail.\nLooking at the definition of the dataclass (and therefore, ctor\nsignature), we see that it has a name, a source, and a create\nfunction.\n@dataclasses.dataclass\n class Guard:\n name: str", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "class Guard:\n name: str\n source: GuardSource\n create_fn: Callable\nThe name should be the name of the variable.\nThe source here is an enum indicating what kind of source the guard\nbelongs to.\nNote:\nNot to be confused with \"Source\" and the other types in \"source.py\",\n as stored on \"VariableTracker\".\n\"create_fn\" provides the main functionality to transition from a\nsimple dataclass to actually producing valid Python code to be invoked\nfor knowing whether or not things have changed in between invocations,\nand whether we can safely read from the code cache or not.\nThe most common code paths for getting an instance of a guard are\nthrough \"make_guards\" on \"VariableTracker\".\n\"make_guards\"->source.make_guard->return Guard(self.name(),\nself.guard_source(), fn)\nOr, in a concrete example:\n...\n elif istype(value, range):\n guards = self.make_guards(GuardBuilder.EQUALS_MATCH)\n return RangeVariable(value=value, guards=guards)", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "Since \"source\" was set at the construction time of this\n\"VariableTracker\", all that was needed here was to provide the \"fn\",\n\"GuardBuilder.EQUALS_MATCH\" to the \"create_fn\" field.\nThis \"create_fn\" must be a method on \"GuardBuilder\". The reason for\nthis becomes apparent in our next step. 
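As a preview of that step, here is a small self-contained mock (it does not import TorchDynamo internals; all class and variable names are illustrative) of how a guard's "create_fn", looked up as a method on a builder object, turns a tracked local into a line of check code:

    import dataclasses
    from typing import Callable, List

    @dataclasses.dataclass
    class ToyGuard:
        name: str            # name of the guarded local, e.g. "y"
        source: str          # stands in for the GuardSource enum
        create_fn: Callable  # a builder method, e.g. EQUALS_MATCH

    class ToyGuardBuilder:
        def __init__(self, f_locals):
            self.f_locals = f_locals
            self.code: List[str] = []

        def EQUALS_MATCH(self, guard):
            # The real GuardBuilder appends Python source such as "y == 2";
            # this mock does the same for a single local.
            self.code.append(f"{guard.name} == {self.f_locals[guard.name]!r}")

    builder = ToyGuardBuilder({"y": 2})
    guard = ToyGuard(name="y", source="LOCAL", create_fn=ToyGuardBuilder.EQUALS_MATCH)
    guard.create_fn(builder, guard)    # mirrors guard.create(...) dispatching to create_fn
    print(" and ".join(builder.code))  # -> y == 2
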
Once we have all the guards\ncreated for a frame, we move on to \"CheckFunctionManager\" and\n\"compile_check_fn\".\nBefore the \"convert_frame\" function can produce a \"GuardedCode\", it\nneeds to run the \"CheckFunctionManager\", with all the guards, to\nproduce a \"check_fn\" which will then, in turn get passed in alongside\nthe code into \"GuardedCode\". This is the same \"check_fn\" that we store\nin our cache entry, and the same one we run to know whether or not to\nretrieve the code stored alongside. For reference, here is that code:\nstatic CacheEntry create_cache_entry(CacheEntry next,\n PyObject guarded_code) {\n CacheEntry e = (CacheEntry *)malloc(sizeof(CacheEntry));", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "DEBUG_NULL_CHECK(e);\n e->check_fn = PyObject_GetAttrString(guarded_code, \"check_fn\");\n NULL_CHECK(e->check_fn);\n e->code = (PyCodeObject *)PyObject_GetAttrString(guarded_code, \"code\");\n NULL_CHECK(e->code);\n e->next = next;\n return e;\n }\nWe now know how a \"check_fn\" function is used, and who makes it, and\nwhat it is composed of, but what we do not yet know is how. How does a\nlist of \"Guard\" objects become a function we can run later on?\nFirst, we iterate these guards:\nfor guard in sorted(guards or [], key=Guard.sort_key):\n if not config.guard_nn_modules and guard.is_nn_module():\n continue\n guard.create(local_builder, global_builder)\nCalling \"guard.create\" runs that \"create_fn\" we set on the \"Guard\"\nclass above (don\u00e2\u0080\u0099t confuse it with the \"check_fn\" we are working on\nproducing, the names are similar, so it can get a little confusing).\nIn our example above, our \"create_fn\" is \"GuardBuilder.EQUALS_MATCH\".", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "So we are now invoking it, passing in the \"self\", the guard itself,\nin.\nThe signature is: \"def EQUALS_MATCH(self, guard: Guard):\"\nAnd internally to that function, we can use the \"name\" on the guard to\nget back our original object, querying it for data and type\ninformation, which in turn gets us to the most important bit:\nappending code.\nAt its simplest, \"EQUALS_MATCH\" appends just one line of code:\n\"self.code.append(f\"{ref} == {val!r}\")\". Where \"ref\" is the name of\nthe variable, and \"val\" is the value. It might produce code like this:\ny == 2\nThis is a basic example. But if we append a few other kinds of\n\"GuardBuilder\" functions and then combine them all with \"and\" in\nbetween each statement (as we do), we might get something like this:\nguardedcode.valid and _check_type_id(y, 94367738391392) and y == 2 and ___check_tensors(x)\nHere is what this code performs:\n\n\nA check for \".valid\"\n\n\nA type ID check\n\n\nA value check\n\n\nA tensor check\n\n", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "\n\nA value check\n\n\nA tensor check\n\n\nThis becomes the heart of the code our \"check_fn\", which in turn is\nevaluated the next time we encounter this code. 
It will then\ncheck:\n\n\nIs this code still valid?\n\n\nIf (1), Does \"y\" still have a type of \"94367738391392\"?\n\n\nIf (2), is \"y\" still 2?\n\n\nIf (3), let\u00e2\u0080\u0099s check on if tensor \"x\" changed in some specific ways.\n\n\nIf all of these are still true, then we can use the code cached\nalongside this \"check_fn\".\nNote:\nFor a deeper dive for how and where this happens you can read\n \"static PyCodeObject lookup(CacheEntry e, PyObject *f_locals) {\"\n of \"_eval_frame.c\".\nIf not, then, we can move on to recompiling the code anew, and storing\nthat in the cache alongside this code, and a whole new \"check_fn\",\nagain to be checked on yet another subsequent frame.\nThere are lots of other such functions on \"GuardBuilder\" which get\ncoalesced into, at times massive, strings which then get evaluated as\nPython code and stored into \"check_fn\". The example above illustrates", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "of a simple case. To understand this functionality better, read the\nother functions on \"GuardBuilder\", or better yet, dump the \"code\"\nvariable in \"compile_check_fn\" to see what is getting produced,\nespecially on larger, real models.\nSummary\nIn this section, we have reviewed:\n\n\nThe role of \".valid\" and invalidation around weak references (and\n potentially soon to be NN Moduleinvalidations).\n\n\nHow the C++ side of guard functions (\"checktype_id\",\n \"_check_tensors\", etc) operate\n\n\nWhat happens when guards fail.\n\n\nWhat happens if we produce invalid guard code.\n\n\nWe covered how user provided code wrapped in a TorchDynamo context\ngoes on to get traced and tracked internally, organized into\n\"VariableTracker\"s \"Source\"s and subsequently \"Guard\"s, and how those\n\"Guards\" in turn guide cache entry selection and invalidation when\nhanding Python code.", "source": "https://pytorch.org/docs/stable/dynamo/guards-overview.html", "category": "pytorch docs"} {"text": "DDP Communication Hooks\nDDP communication hook is a generic interface to control how to\ncommunicate gradients across workers by overriding the vanilla\nallreduce in DistributedDataParallel. A few built-in communication\nhooks are provided, and users can easily apply any of these hooks to\noptimize communication. Besides, the hook interface can also support\nuser-defined communication strategies for more advanced use cases.\nHow to Use a Communication Hook?\nTo use a communication hook, the user just needs to let the DDP model\nregister the hook before the training loop as below.\n\"torch.nn.parallel.DistributedDataParallel.register_comm_hook()\"\nWhat Does a Communication Hook Operate On?\nA communication hook provides a flexible way to allreduce gradients.\nTherefore, it mainly operates on the gradients on each replica before\nallreduce, which are bucketized to increase the overlap between", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "communication and computation. Particularly,\n\"torch.distributed.GradBucket\" represents a bucket of gradient tensors\nto be allreduced.\nclass torch.distributed.GradBucket\nThis class mainly passes a flattened gradient tensor (returned by\n \"buffer()\") to DDP communication hook. 
This tensor can be further\n decomposed into a list of per-parameter tensors within this bucket\n (returned by \"get_per_parameter_tensors()\") to apply layer-wise\n operations.\ntorch.distributed.GradBucket.index(self: torch._C._distributed_c10d.GradBucket) -> int\nWarning:\n Since the buckets are rebuilt after the first iteration, should\n not rely on the indices at the beginning of training.\n\nReturns:\n The index of a bucket that stores gradients of a few contiguous\n layers. All the gradients are bucketized.\ntorch.distributed.GradBucket.buffer(self: torch._C._distributed_c10d.GradBucket) -> torch.Tensor\nReturns:\n A flattened 1D \"torch.Tensor\" buffer, which can be further", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "decomposed into a list of per-parameter tensors within this\n bucket.\ntorch.distributed.GradBucket.gradients(self: torch._C._distributed_c10d.GradBucket) -> List[torch.Tensor]\nReturns:\n A list of \"torch.Tensor\". Each tensor in the list corresponds to\n a gradient.\ntorch.distributed.GradBucket.is_last(self: torch._C._distributed_c10d.GradBucket) -> bool\nReturns:\n Whether this bucket is the last bucket to allreduce in an\n iteration. This also means that this bucket corresponds to the\n first few layers in the forward pass.\ntorch.distributed.GradBucket.set_buffer(self: torch._C._distributed_c10d.GradBucket, buffer: torch.Tensor) -> None\nReplaces the tensor in the bucket with the input tensor buffer.\ntorch.distributed.GradBucket.parameters(self: torch._C._distributed_c10d.GradBucket) -> List[torch.Tensor]\nReturns:\n A list of \"torch.Tensor\". Each tensor in the list corresponds to\n a model parameter.\nDefault Communication Hooks", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "Default Communication Hooks\nDefault communication hooks are simple stateless hooks, so the\ninput state in \"register_comm_hook\" is either a process group or\n\"None\". The input \"bucket\" is a \"torch.distributed.GradBucket\" object.\ntorch.distributed.algorithms.ddp_comm_hooks.default_hooks.allreduce_hook(process_group, bucket)\nThis DDP communication hook just calls \"allreduce\" using\n \"GradBucket\" tensors. Once gradient tensors are aggregated across\n all workers, its \"then\" callback takes the mean and returns the\n result. If user registers this hook, DDP results is expected to be\n same as the case where no hook was registered. Hence, this won't\n change behavior of DDP and user can use this as a reference or\n modify this hook to log useful information or any other purposes\n while unaffecting DDP behavior.\nExample::\n >>> ddp_model.register_comm_hook(process_group, allreduce_hook)\nReturn type:\n Future[Tensor]", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "Return type:\n Future[Tensor]\ntorch.distributed.algorithms.ddp_comm_hooks.default_hooks.fp16_compress_hook(process_group, bucket)\nThis DDP communication hook implements a simple gradient\n compression approach that casts \"GradBucket\" tensor to half-\n precision floating-point format (\"torch.float16\") and then divides\n it by the process group size. It allreduces those \"float16\"\n gradient tensors. 
Once compressed gradient tensors are allreduced,\n the chained callback \"decompress\" casts it back to the input data\n type (such as \"float32\").\nExample::\n >>> ddp_model.register_comm_hook(process_group, fp16_compress_hook)\nReturn type:\n Future[Tensor]\ntorch.distributed.algorithms.ddp_comm_hooks.default_hooks.bf16_compress_hook(process_group, bucket)\nWarning: This API is experimental, and it requires NCCL version\n later than 2.9.6.\nThis DDP communication hook implements a simple gradient\n compression approach that casts \"GradBucket\" tensor to half-", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "precision Brain floating point format (\"torch.bfloat16\") and then\n divides it by the process group size. It allreduces those\n \"bfloat16\" gradient tensors. Once compressed gradient tensors are\n allreduced, the chained callback \"decompress\" casts it back to the\n input data type (such as \"float32\").\nExample::\n >>> ddp_model.register_comm_hook(process_group, bf16_compress_hook)\nReturn type:\n Future[Tensor]\nAdditionally, a communication hook wrapper is provided to support\n\"fp16_compress_hook()\" or \"bf16_compress_hook()\" as a wrapper, which\ncan be combined with other communication hooks.\ntorch.distributed.algorithms.ddp_comm_hooks.default_hooks.fp16_compress_wrapper(hook)\nThis wrapper casts the input gradient tensor of a given DDP\n communication hook to half-precision floating point format\n (\"torch.float16\"), and casts the resulting tensor of the given hook\n back to the input data type, such as \"float32\".\nTherefore, \"fp16_compress_hook\" is equivalent to", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "\"fp16_compress_wrapper(allreduce_hook)\".\nExample::\n >>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1, start_powerSGD_iter=10)\n >>> ddp_model.register_comm_hook(state, fp16_compress_wrapper(powerSGD_hook))\nReturn type:\n Callable[[Any, GradBucket], Future[Tensor]]\ntorch.distributed.algorithms.ddp_comm_hooks.default_hooks.bf16_compress_wrapper(hook)\nWarning: This API is experimental, and it requires NCCL version\n later than 2.9.6.\nThis wrapper casts the input gradient tensor of a given DDP\n communication hook to half-precision Brain floating point format\n https://en.wikipedia.org/wiki/Bfloat16_floating-point_format _\n (``torch.bfloat16), and casts the resulting tensor of the given\n hook back to the input data type, such as \"float32\".\nTherefore, \"bf16_compress_hook\" is equivalent to\n \"bf16_compress_wrapper(allreduce_hook)\".\nExample::", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "Example::\n >>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1, start_powerSGD_iter=10)\n >>> ddp_model.register_comm_hook(state, bf16_compress_wrapper(powerSGD_hook))\nReturn type:\n Callable[[Any, GradBucket], Future[Tensor]]\nPowerSGD Communication Hook\nPowerSGD (Vogels et al., NeurIPS 2019) is a gradient compression\nalgorithm, which can provide very high compression rates and\naccelerate bandwidth-bound distributed training. 
This algorithm needs\nto maintain both some hyperparameters and the internal state.\nTherefore, PowerSGD communication hook is a stateful hook, and the\nuser needs to provide a state object defined as below.\nPowerSGD State", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "PowerSGD State\nclass torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState(process_group, matrix_approximation_rank=1, start_powerSGD_iter=1000, min_compression_rate=2, use_error_feedback=True, warm_start=True, orthogonalization_epsilon=0, random_seed=0, compression_stats_logging_frequency=10000, batch_tensors_with_same_shape=False)\nStores both the algorithm's hyperparameters and the internal state\n for all the gradients during the training. Particularly,\n \"matrix_approximation_rank\" and \"start_powerSGD_iter\" are the main\n hyperparameters that should be tuned by the user. For performance,\n we suggest to keep binary hyperparameters \"use_error_feedback\" and\n \"warm_start\" on.\n\n\"matrix_approximation_rank\" controls the size of compressed low-\n rank tensors, which determines the compression rate. The lower\n the rank, the stronger the compression. 1.1. If \"matrix_approximation_rank\" is too low, the full\n\n\n", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "model quality will need more training steps to reach or will\n never reach and yield loss in accuracy.\n 1.2. The increase of \"matrix_approximation_rank\" can\n substantially increase the computation costs of the\n compression, and the accuracy may not be further improved\n beyond a certain \"matrix_approximation_rank\" threshold.\n\nTo tune \"matrix_approximation_rank\", we suggest to start from 1 and\n increase by factors of 2 (like an exponential grid search, 1, 2, 4,\n ...), until a satisfactory accuracy is reached. Typically only a\n small value 1-4 is used. For some NLP tasks (as shown in Appendix D\n of the original paper), this value has been increased to 32.\n\n\"start_powerSGD_iter\" defers PowerSGD compression until step\n \"start_powerSGD_iter\", and vanilla allreduce runs prior to step\n \"start_powerSGD_iter\". This hybrid scheme of **vanilla allreduce\nPowerSGD** can effectively improve the accuracy, even a\n\n\n", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "relatively small \"matrix_approximation_rank\" is used. This is\n because that, the beginning of training phase is usually very\n sensitive to inaccurate gradients, and compressing gradients too\n early may make the training quickly take a suboptimal\n trajectory, which can result in an irrecoverable impact on the\n accuracy.\nTo tune \"start_powerSGD_iter\", we suggest to start with 10% of\n total training steps, and increase it until a satisfactory accuracy\n is reached. If there is a warm-up stage in the training,\n \"start_powerSGD_iter\" typically should be no less than the number\n of warm-up steps.\n\n\"min_compression_rate\" is the minimum compression rate required\n when a layer is compressed. Due to the computation overheads\n incurred by the compression, a tensor is worth compressing only\n if there can be sufficient saving in bandwidth, where \"(num_rows\nnum_cols) * matrix_approximation_rank * min_compression_rate <\n\n\n", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "num_rows * num_cols\". 
If the specified compression rate\n threshold cannot be satisfied, the tensor will be directly\n allreduced without compression.\nCompression statistics are logged every\n \"compression_stats_logging_frequency\" iterations once PowerSGD\n compression starts.\n\n\n\"orthogonalization_epsilon\" can be a very small value (e.g.,\n 1e-8) added to every normalized matrix column in\n orthogonalization step, to prevent div-by-zero error if any\n column has all 0s. If this can already be prevented (e.g., by\n batch normalization), an epsilon of 0 is recommended for\n accuracy.\n\n\n\"batch_tensors_with_same_shape\" controls whether to compress and\n decompress tensors with same shape in a batched operation to\n achieve higher parallelism. Note that you should also increase\n the bucket size (i.e., \"bucket_cap_mb\" arg in DDP constructor)\n to make more same-shaped tensors appear in the same bucket,\n\n", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "however this may reduce the overlap between computation and\n communication, and increase the memory footprint due to stacking\n the tensors of the same shape. Set to \"True\" if the compression\n / decompression computation is a bottleneck.\nWarning:\n If error feedback or warm-up is enabled, the minimum value of\n \"start_powerSGD_iter\" allowed in DDP is 2. This is because there\n is another internal optimization that rebuilds buckets at\n iteration 1 in DDP, and this can conflict with any tensor\n memorized before the rebuild process.\n\nPowerSGD Hooks\nWarning:\nPowerSGD typically requires extra memory of the same size as the\n model's gradients to enable error feedback, which can compensate for\n biased compressed communication and improve accuracy.\nWarning:\nPowerSGD hooks may conflict with Apex automatic mixed precision\n package. Please use PyTorch native automatic mixed precision package\n instead.", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "instead.\ntorch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.powerSGD_hook(state, bucket)\nThis DDP communication hook implements PowerSGD gradient\n compression algorithm described in the paper. Once gradient tensors\n are aggregated across all workers, this hook applies compression as\n follows:\n\n\nViews the input flattened 1D gradient tensor as a list of per-\n parameter tensors, and divides all the tensors into two groups:\n 1.1 The tensors that should be compressed before allreduce,\n because the compression can give enough saving in bandwidth.\n\n 1.2 Rest of the tensors will be directly allreduced without\n compression, including all the vector tensors (for biases).\n\n\n\nHandles uncompressed tensors:\n 2.1. Allocate contiguous memory for those uncompressed\n tensors, and allreduces all the uncompressed tensors as a\n batch, without compression;\n\n 2.2. Copies the individual uncompressed tensors from the\n\n\n", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "contiguous memory back to the input tensor.\n\nHandles the tensors that should be compressed by PowerSGD\n compression: 3.1. For each tensor M, creates two low-rank tensors P and Q\n for decomposing M, such that M = PQ^T, where Q is initialized\n from a standard normal distribution and orthogonalized;\n\n 3.2. Computes each P in Ps, which is equal to MQ;\n\n 3.3. Allreduces Ps as a batch;\n\n 3.4. Orthogonalizes each P in Ps;\n\n 3.5. 
Computes each Q in Qs, which is approximately equal to\n M^TP;\n\n 3.6. Allreduces Qs as a batch;\n\n 3.7. Computes each M among all the compressed tensors, which\n is approximately equal to PQ^T.\n\n\n\nNote that this communication hook enforces vanilla allreduce for\n the first \"state.start_powerSGD_iter\" iterations. This not only\n gives the user more control over the tradeoff between speedup and\n accuracy, but also helps abstract away some complexity of the", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "internal optimization of DDP for future communication hook\n developers.\nParameters:\n * state (PowerSGDState) -- State information to configure\n the compression rate and support error feedback, warm start,\n etc. To tune the compression configs, mainly need to tune\n \"matrix_approximation_rank\", \"start_powerSGD_iter\" and\n \"min_compression_rate\".\n * **bucket** (*dist.GradBucket*) -- Bucket that stores a 1D\n flattened gradient tensor that batches multiple per-variable\n tensors. Note that since DDP comm hook only supports single\n process single device mode, only exactly one tensor is stored\n in this bucket.\n\nReturns:\n Future handler of the communication, which updates the gradients\n in place.\nReturn type:\n Future[Tensor]\nExample::\n >>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1,\n start_powerSGD_iter=10, min_compression_rate=0.5)", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "\n\n\nddp_model.register_comm_hook(state, powerSGD_hook)\n\n\n\ntorch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.batched_powerSGD_hook(state, bucket)\nThis DDP communication hook implements a simplified PowerSGD\n gradient compression algorithm described in the paper. This variant\n does not compress the gradients layer by layer, but instead\n compresses the flattened input tensor that batches all the\n gradients. Therefore, it is faster than \"powerSGD_hook()\", but\n usually results in a much lower accuracy, unless\n \"matrix_approximation_rank\" is 1.\nWarning:\n Increasing \"matrix_approximation_rank\" here may not necessarily\n increase the accuracy, because batching per-parameter tensors\n without column/row alignment can destroy low-rank structure.\n Therefore, the user should always consider \"powerSGD_hook()\"\n first, and only consider this variant when a satisfactory\n accuracy can be achieved when \"matrix_approximation_rank\" is 1.\n", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "Once gradient tensors are aggregated across all workers, this hook\n applies compression as follows:\n\n\nViews the input flattened 1D gradient tensor as a square-shaped\n tensor M with 0 paddings;\n\n\nCreates two low-rank tensors P and Q for decomposing M, such\n that M = PQ^T, where Q is initialized from a standard normal\n distribution and orthogonalized;\n\n\nComputes P, which is equal to MQ;\n\n\nAllreduces P;\n\n\nOrthogonalizes P;\n\n\nComputes Q, which is approximately equal to M^TP;\n\n\nAllreduces Q;\n\n\nComputes M, which is approximately equal to PQ^T.\n\n\nTruncates the input tensor to the original length.\n\n\nNote that this communication hook enforces vanilla allreduce for\n the first \"state.start_powerSGD_iter\" iterations. 
This not only\n gives the user more control over the tradeoff between speedup and\n accuracy, but also helps abstract away some complexity of the\n internal optimization of DDP for future communication hook\n developers.", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "developers.\nParameters:\n * state (PowerSGDState) -- State information to configure\n the compression rate and support error feedback, warm start,\n etc. To tune the compression configs, mainly need to tune\n \"matrix_approximation_rank\" and \"start_powerSGD_iter\".\n * **bucket** (*dist.GradBucket*) -- Bucket that stores a 1D\n flattened gradient tensor that batches multiple per-variable\n tensors. Note that since DDP comm hook only supports single\n process single device mode, only exactly one tensor is stored\n in this bucket.\n\nReturns:\n Future handler of the communication, which updates the gradients\n in place.\nReturn type:\n Future[Tensor]\nExample::\n >>> state = PowerSGDState(process_group=process_group, matrix_approximation_rank=1)\n >>> ddp_model.register_comm_hook(state, batched_powerSGD_hook)\nDebugging Communication Hooks", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "=============================\nAs the name implies, debugging communication hooks are only used\nfor debugging and performance optimization purpose.\nWarning:\nDebugging communication hooks do not necessarily output the correct\n results.\ntorch.distributed.algorithms.ddp_comm_hooks.debugging_hooks.noop_hook(_, bucket)\nThis DDP communication hook returns a future that wraps the input,\n so it is a noop that does not incur any communication overheads.\nThis hook should only be used for headroom analysis of\n allreduce optimization, instead of the normal gradient\n synchronization. For example, if only less than 10% speedup of\n training time can be observed after this hook is registered, it\n usually implies that allreduce is not a performance bottleneck for\n this case. Such instrumentation can be particularly useful if GPU\n traces cannot be easily retrieved or the trace analysis is\n complicated some factors such as the overlap between allreduce and", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "computation or the desynchronization across ranks.\nExample::\n >>> ddp_model.register_comm_hook(None, noop_hook)\nReturn type:\n Future[Tensor]\nCheckpointing of Communication Hooks\nA stateful communication hook can be saved as a part of model\ncheckpointing to enable trainer restarts. 
To make a hook serializable,\n\"setstate\" and \"getstate\" should be defined.\nWarning:\n\"getstate\" should exclude non-serializable attributes from a\n returned dictionary.\nWarning:\n\"setstate\" should properly initialize non-serializable\n attributes, excluded from a provided \"state\".\n\"PowerSGDState\" has \"setstate\" and \"getstate\" implemented and\ncan be used as a reference.\nclass torch.distributed.algorithms.ddp_comm_hooks.powerSGD_hook.PowerSGDState\ngetstate()\n Returns a \"Dict[str, Any]\" which will be pickled and saved.\n \"process_group\" is not serializable and excluded from a returned\n state.\n\nsetstate(state)", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "state.\nsetstate(state)\n Takes a provided \"state\" and retrieves \"PowerSGDState\".\n \"process_group\" is set to default.\n\nHere is a simple, end-to-end example of saving and reloading PowerSGD\nstate and hook.\nimport os\n import sys\n import tempfile\n import torch\n import torch.distributed as dist\n import torch.nn as nn\n import torch.optim as optim\nfrom torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD\nclass SimpleModel(nn.Module):\n def init(self):\n super(SimpleModel, self).init()\n self.fc1 = nn.Linear(24,24)\n self.relu = nn.ReLU()\n self.fc2 = nn.Linear(24,12)\n def forward(self, x):\n return self.fc2(self.relu(self.fc1(x)))\n\ndef setup(rank, world_size):\n os.environ['MASTER_ADDR'] = 'localhost'\n os.environ['MASTER_PORT'] = '12355'\n # initialize the process group\n dist.init_process_group(\"nccl\", rank=rank, world_size=world_size)\n\ndef cleanup():", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "def cleanup():\n dist.destroy_process_group()\ndef run_demo(demo_fn, world_size):\n mp.spawn(\n demo_fn,\n args=(world_size,),\n nprocs=world_size,\n join=True)\ndef demo_serialization(rank, world_size):\n setup(rank, world_size)\n CHECKPOINT = tempfile.gettempdir() + \"/checkpoint.pt\"\n\n model = SimpleModel().to(rank)\n ddp_model = DistributedDataParallel(model, device_ids=[rank])\n\n powersgd_hook = powerSGD.powerSGD_hook\n powersgd_state = powerSGD.PowerSGDState(process_group=None)\n\n optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)\n ddp_model.register_comm_hook(powersgd_state, powersgd_hook)\n\n state = {\n 'state_dict': ddp_model.state_dict(),\n 'comm_hook': hook,\n 'comm_hook_state': hook_state}\n\n if rank == 0:\n torch.save(state, CHECKPOINT)\n\n dist.barrier()\n map_location = {'cuda:%d' % 0: 'cuda:%d' % rank}\n", "source": "https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "checkpoint = torch.load(CHECKPOINT, map_location=map_location)\n ddp_model.load_state_dict(checkpoint['state_dict'])\n powersgd_hook = checkpoint['comm_hook']\n powersgd_state = checkpoint['comm_hook_state']\n\n ddp_model.register_comm_hook(powersgd_state, powersgd_hook)\n\n if rank == 0:\n os.remove(CHECKPOINT)\n\n cleanup()\n\nif name == \"main\":\n n_gpus = torch.cuda.device_count()\n assert n_gpus >= 2, f\"Requires at least 2 GPUs to run, but got {n_gpus}\"\n world_size = n_gpus\n run_demo(demo_serialization, world_size)\nAcknowledgements\nMany thanks to PowerSGD paper author Thijs Vogels for the code\nreview on PowerSGD communication hook, as well as the comparison\nexperiments, which show that the performance of PowerSGD communication\nhook is on par with the implementation in the original paper.", "source": 
"https://pytorch.org/docs/stable/ddp_comm_hooks.html", "category": "pytorch docs"} {"text": "Pipeline Parallelism\nPipeline parallelism was original introduced in the Gpipe paper and\nis an efficient technique to train large models on multiple GPUs.\nWarning:\nPipeline Parallelism is experimental and subject to change.\nModel Parallelism using multiple GPUs\nTypically for large models which don't fit on a single GPU, model\nparallelism is employed where certain parts of the model are placed on\ndifferent GPUs. Although, if this is done naively for sequential\nmodels, the training process suffers from GPU under utilization since\nonly one GPU is active at one time as shown in the figure below:\n[image]The figure represents a model with 4 layers placed on 4\n different GPUs (vertical axis). The horizontal axis represents\n training this model through time demonstrating that only 1 GPU is\n utilized at a time (image source).\nPipelined Execution\nTo alleviate this problem, pipeline parallelism splits the input", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "minibatch into multiple microbatches and pipelines the execution of\nthese microbatches across multiple GPUs. This is outlined in the\nfigure below:\n[image]The figure represents a model with 4 layers placed on 4\n different GPUs (vertical axis). The horizontal axis represents\n training this model through time demonstrating that the GPUs are\n utilized much more efficiently. However, there still exists a\n bubble (as demonstrated in the figure) where certain GPUs are not\n utilized. (image source).\nPipe APIs in PyTorch\nclass torch.distributed.pipeline.sync.Pipe(module, chunks=1, checkpoint='except_last', deferred_batch_norm=False)\nWraps an arbitrary \"nn.Sequential\" module to train on using\n synchronous pipeline parallelism. If the module requires lots of\n memory and doesn't fit on a single GPU, pipeline parallelism is a\n useful technique to employ for training.\nThe implementation is based on the torchgpipe paper.", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "Pipe combines pipeline parallelism with checkpointing to reduce\n peak memory required to train while minimizing device under-\n utilization.\nYou should place all the modules on the appropriate devices and\n wrap them into an \"nn.Sequential\" module defining the desired order\n of execution. If a module does not contain any parameters/buffers,\n it is assumed this module should be executed on CPU and appropriate\n input tensors to the module are moved to CPU before execution. This\n behavior can be overridden by the \"WithDevice\" wrapper which can be\n used to explicitly specify which device a module should run on.\nParameters:\n * module (\"nn.Sequential\") -- sequential module to be\n parallelized using pipelining. Each module in the sequence has\n to have all of its parameters on a single device. Each module\n in the sequence has to either be an nn.Module or\n \"nn.Sequential\" (to combine multiple sequential modules on a\n single device)", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "single device)\n * **chunks** (*int*) -- number of micro-batches (default: \"1\")\n\n * **checkpoint** (*str*) -- when to enable checkpointing, one of\n \"'always'\", \"'except_last'\", or \"'never'\" (default:\n \"'except_last'\"). 
\"'never'\" disables checkpointing completely,\n \"'except_last'\" enables checkpointing for all micro-batches\n except the last one and \"'always'\" enables checkpointing for\n all micro-batches.\n\n * **deferred_batch_norm** (*bool*) -- whether to use deferred\n \"BatchNorm\" moving statistics (default: \"False\"). If set to\n \"True\", we track statistics across multiple micro-batches to\n update the running statistics per mini-batch.\n\nRaises:\n * TypeError -- the module is not a \"nn.Sequential\".\n * **ValueError** -- invalid arguments\n\nExample::\n Pipeline of two FC layers across GPUs 0 and 1.\n >>> # Need to initialize RPC framework first.\n >>> os.environ['MASTER_ADDR'] = 'localhost'\n", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "\n\n\nos.environ['MASTER_ADDR'] = 'localhost'\n >>> os.environ['MASTER_PORT'] = '29500'\n >>> torch.distributed.rpc.init_rpc('worker', rank=0, world_size=1)\n >>>\n >>> # Build pipe.\n >>> fc1 = nn.Linear(16, 8).cuda(0)\n >>> fc2 = nn.Linear(8, 4).cuda(1)\n >>> model = nn.Sequential(fc1, fc2)\n >>> model = Pipe(model, chunks=8)\n >>> input = torch.rand(16, 16).cuda(0)\n >>> output_rref = model(input)\n\n\n\nNote:\n You can wrap a \"Pipe\" model with\n \"torch.nn.parallel.DistributedDataParallel\" only when the\n checkpoint parameter of \"Pipe\" is \"'never'\".\n\nNote:\n \"Pipe\" only supports intra-node pipelining currently, but will be\n expanded to support inter-node pipelining in the future. The\n forward function returns an \"RRef\" to allow for inter-node\n pipelining in the future, where the output might be on a remote\n host. For intra-node pipelinining you can use \"local_value()\" to\n retrieve the output locally.\n\nWarning:", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "retrieve the output locally.\nWarning:\n \"Pipe\" is experimental and subject to change.\n\nforward(*inputs)\n Processes a single input mini-batch through the pipe and returns\n an \"RRef\" pointing to the output. \"Pipe\" is a fairly transparent\n module wrapper. It doesn't modify the input and output signature\n of the underlying module. But there's type restriction. Input\n and output have to contain at least one tensor. This restriction\n is applied at partition boundaries too.\n\n The sequence of inputs are fed into the first stage of the\n pipeline as \"*inputs\". As a result the positional args for this\n function should match the positional args for the first stage of\n the pipeline. The same condition applies for output of one stage\n of the pipeline which is the input for the next stage.\n\n The input tensor is split into multiple micro-batches based on\n the \"chunks\" parameter used to initialize \"Pipe\". The batch size\n", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "is assumed to be the first dimension of the tensor and if the\n batch size is less than \"chunks\", the number of micro-batches is\n equal to the batch size.\n Only tensors are split into multiple micro-batches, non-Tensor\n inputs are just replicated as-is in each micro-batch. For non-\n Tensor outputs in the last stage of the pipeline, they are\n aggregated as a \"List\" and returned the user. 
For example, if\n you have 2 micro-batches returning the integer 5, the user would\n receive the consolidated output of *[5, 5]*\n\n All the input tensors need to be on the same device as the first\n partition of the pipeline.\n\n If a tensor is wrapped with the \"NoChunk\" wrapper, the tensor is\n not split across micro-batches and is replicated as-is similar\n to non-tensors.\n\n Parameters:\n **inputs** -- input mini-batch\n\n Returns:\n \"RRef\" to the output of the mini-batch\n\n Raises:\n", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "Raises:\n TypeError -- input doesn't contain at least one tensor\n Return type:\n *RRef*\n\nSkip connections\nCertain models like ResNeXt are not completely sequential and have\nskip connections between layers. Naively implementing as part of\npipeline parallelism would imply that we need to copy outputs for\ncertain layers through multiple GPUs till we eventually reach the GPU\nwhere the layer for the skip connection resides. To avoid this copy\noverhead, we provide APIs below to stash and pop Tensors in different\nlayers of the model.\ntorch.distributed.pipeline.sync.skip.skippable.skippable(stash=(), pop=())\nThe decorator to define a \"nn.Module\" with skip connections.\n Decorated modules are called \"skippable\". This functionality works\n perfectly fine even when the module is not wrapped by \"Pipe\".\nEach skip tensor is managed by its name. Before manipulating skip\n tensors, a skippable module must statically declare the names for", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "skip tensors by stash and/or pop parameters. Skip tensors with\n pre-declared name can be stashed by \"yield stash(name, tensor)\" or\n popped by \"tensor = yield pop(name)\".\nHere is an example with three layers. A skip tensor named \"1to3\" is\n stashed and popped at the first and last layer, respectively:\n @skippable(stash=['1to3'])\n class Layer1(nn.Module):\n def forward(self, input):\n yield stash('1to3', input)\n return f1(input)\n\n class Layer2(nn.Module):\n def forward(self, input):\n return f2(input)\n\n @skippable(pop=['1to3'])\n class Layer3(nn.Module):\n def forward(self, input):\n skip_1to3 = yield pop('1to3')\n return f3(input) + skip_1to3\n\n model = nn.Sequential(Layer1(), Layer2(), Layer3())\n\nOne skippable module can stash or pop multiple skip tensors:\n @skippable(stash=['alice', 'bob'], pop=['carol'])\n class StashStashPop(nn.Module):\n", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "class StashStashPop(nn.Module):\n def forward(self, input):\n yield stash('alice', f_alice(input))\n yield stash('bob', f_bob(input))\n carol = yield pop('carol')\n return input + carol\nEvery skip tensor must be associated with exactly one pair of\n stash and pop. \"Pipe\" checks this restriction automatically\n when wrapping a module. 
You can also check the restriction by\n \"verify_skippables()\" without \"Pipe\".\nReturn type:\n Callable[[Type[Module]], Type[Skippable]]\nclass torch.distributed.pipeline.sync.skip.skippable.stash(name, tensor)\nThe command to stash a skip tensor.\n def forward(self, input):\n yield stash('name', input)\n return f(input)\n\nParameters:\n * name (str) -- name of skip tensor\n * **input** (*torch.Tensor** or **None*) -- tensor to pass to\n the skip connection\n\nclass torch.distributed.pipeline.sync.skip.skippable.pop(name)", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "The command to pop a skip tensor.\n def forward(self, input):\n skip = yield pop('name')\n return f(input) + skip\n\nParameters:\n name (str) -- name of skip tensor\nReturns:\n the skip tensor previously stashed by another layer under the\n same name\nReturn type:\n None\ntorch.distributed.pipeline.sync.skip.skippable.verify_skippables(module)\nVerifies if the underlying skippable modules satisfy integrity.\nEvery skip tensor must have only one pair of stash and pop. If\n there are one or more unmatched pairs, it will raise \"TypeError\"\n with the detailed messages.\nHere are a few failure cases. \"verify_skippables()\" will report\n failure for these cases:\n # Layer1 stashes \"1to3\".\n # Layer3 pops \"1to3\".\n\n nn.Sequential(Layer1(), Layer2())\n # \u00e2\u0094\u0094\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080 ?\n\n nn.Sequential(Layer2(), Layer3())\n # ? \u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0098\n\n nn.Sequential(Layer1(), Layer2(), Layer3(), Layer3())\n", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "\u00e2\u0094\u0094\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0098 ^^^^^^\n nn.Sequential(Layer1(), Layer1(), Layer2(), Layer3())\n # ^^^^^^ \u00e2\u0094\u0094\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0080\u00e2\u0094\u0098\n\nTo use the same name for multiple skip tensors, they must be\n isolated by different namespaces. See \"isolate()\".\nRaises:\n TypeError -- one or more pairs of stash and pop are not\n matched.\nTutorials\nThe following tutorials give a good overview of how to use the \"Pipe\"\nAPI to train your models with the rest of the components that PyTorch\nprovides:\n\n\nTraining Transformer models using Pipeline Parallelism\n\n\nTraining Transformer models using Distributed Data Parallel and\n Pipeline Parallelism\n\n\nAcknowledgements\nThe implementation for pipeline parallelism is based on fairscale's\npipe implementation and torchgpipe. 
We would like to thank both teams", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "for their contributions and guidance towards bringing pipeline\nparallelism into PyTorch.", "source": "https://pytorch.org/docs/stable/pipeline.html", "category": "pytorch docs"} {"text": "Distributed Checkpoint\n", "source": "https://pytorch.org/docs/stable/distributed.checkpoint.html", "category": "pytorch docs"} {"text": "torch.backends\ntorch.backends controls the behavior of various backends that\nPyTorch supports.\nThese backends include:\n\n\n\"torch.backends.cuda\"\n\n\n\"torch.backends.cudnn\"\n\n\n\"torch.backends.mps\"\n\n\n\"torch.backends.mkl\"\n\n\n\"torch.backends.mkldnn\"\n\n\n\"torch.backends.openmp\"\n\n\n\"torch.backends.opt_einsum\"\n\n\n\"torch.backends.xeon\"\n\n\ntorch.backends.cuda\ntorch.backends.cuda.is_built()\nReturns whether PyTorch is built with CUDA support. Note that this\n doesn't necessarily mean CUDA is available; just that if this\n PyTorch binary were run a machine with working CUDA drivers and\n devices, we would be able to use it.\ntorch.backends.cuda.matmul.allow_tf32\nA \"bool\" that controls whether TensorFloat-32 tensor cores may be\n used in matrix multiplications on Ampere or newer GPUs. See\n TensorFloat-32(TF32) on Ampere devices.\ntorch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction\nA \"bool\" that controls whether reduced precision reductions (e.g.,", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"} {"text": "with fp16 accumulation type) are allowed with fp16 GEMMs.\ntorch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction\nA \"bool\" that controls whether reduced precision reductions are\n allowed with bf16 GEMMs.\ntorch.backends.cuda.cufft_plan_cache\n\"cufft_plan_cache\" caches the cuFFT plans\nsize\n A readonly \"int\" that shows the number of plans currently in the\n cuFFT plan cache.\n\ntorch.backends.cuda.max_size\n A \"int\" that controls cache capacity of cuFFT plan.\n\ntorch.backends.cuda.clear()\n Clears the cuFFT plan cache.\n\ntorch.backends.cuda.preferred_linalg_library(backend=None)\nWarning:\n This flag is experimental and subject to change.\n\nWhen PyTorch runs a CUDA linear algebra operation it often uses the\n cuSOLVER or MAGMA libraries, and if both are available it decides\n which to use with a heuristic. 
This flag (a \"str\") allows\n overriding those heuristics.\n\nIf \"cusolver\" is set then cuSOLVER will be used wherever\n possible.\n", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"} {"text": "possible.\n\n\nIf \"magma\" is set then MAGMA will be used wherever possible.\n\n\nIf \"default\" (the default) is set then heuristics will be used\n to pick between cuSOLVER and MAGMA if both are available.\n\n\nWhen no input is given, this function returns the currently\n preferred library.\n\n\nNote: When a library is preferred other libraries may still be used\n if the preferred library doesn't implement the operation(s) called.\n This flag may achieve better performance if PyTorch's heuristic\n library selection is incorrect for your application's inputs.\nCurrently supported linalg operators:\n\n\n\"torch.linalg.inv()\"\n\n\n\"torch.linalg.inv_ex()\"\n\n\n\"torch.linalg.cholesky()\"\n\n\n\"torch.linalg.cholesky_ex()\"\n\n\n\"torch.cholesky_solve()\"\n\n\n\"torch.cholesky_inverse()\"\n\n\n\"torch.linalg.lu_factor()\"\n\n\n\"torch.linalg.lu()\"\n\n\n\"torch.linalg.lu_solve()\"\n\n\n\"torch.linalg.qr()\"\n\n\n\"torch.linalg.eigh()\"\n\n\n\"torch.linalg.eighvals()\"\n\n\n\"torch.linalg.svd()\"\n\n", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"} {"text": "\n\n\"torch.linalg.svd()\"\n\n\n\"torch.linalg.svdvals()\"\n\n\nReturn type:\n _LinalgBackend\nclass torch.backends.cuda.SDPBackend(value)\nEnum class for the scaled dot product attention backends.\nWarning:\n This flag is experimental and subject to change.'\n\nThis class needs to stay inline with the enum defined in:\n pytorch/aten/src/ATen/native/transformers/sdp_utils_cpp.h\ntorch.backends.cuda.flash_sdp_enabled()\nWarning:\n This flag is experimental and subject to change.\n\nReturns whether flash sdp is enabled or not.\ntorch.backends.cuda.enable_mem_efficient_sdp(enabled)\nWarning:\n This flag is experimental and subject to change.\n\nEnables or disables memory efficient sdp.\ntorch.backends.cuda.mem_efficient_sdp_enabled()\nWarning:\n This flag is experimental and subject to change.\n\nReturns whether memory efficient sdp is enabled or not.\ntorch.backends.cuda.enable_flash_sdp(enabled)\nWarning:\n This flag is experimental and subject to change.\n", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"} {"text": "Enables or disables flash sdp.\ntorch.backends.cuda.math_sdp_enabled()\nWarning:\n This flag is experimental and subject to change.\n\nReturns whether math sdp is enabled or not.\ntorch.backends.cuda.enable_math_sdp(enabled)\nWarning:\n This flag is experimental and subject to change.\n\nEnables or disables math sdp.\ntorch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=True, enable_mem_efficient=True)\nWarning:\n This flag is experimental and subject to change.\n\nThis context manager can be used to temporarily enable or disable\n flash/memory efficient sdp and math sdp. 
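As an illustrative sketch (assuming a CUDA build in which \"torch.nn.functional.scaled_dot_product_attention\" is available), it is typically used around the attention call itself, for example to force the math implementation only:

    import torch
    import torch.nn.functional as F

    q, k, v = (torch.randn(2, 8, 128, 64, device='cuda') for _ in range(3))
    # Only the math kernel remains enabled inside the block.
    with torch.backends.cuda.sdp_kernel(enable_flash=False, enable_mem_efficient=False, enable_math=True):
        out = F.scaled_dot_product_attention(q, k, v)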
Upon exiting the context\n manager, the previous state of the flags will be restored.\ntorch.backends.cudnn\ntorch.backends.cudnn.version()\nReturns the version of cuDNN\ntorch.backends.cudnn.is_available()\nReturns a bool indicating if CUDNN is currently available.\ntorch.backends.cudnn.enabled\nA \"bool\" that controls whether cuDNN is enabled.", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"} {"text": "torch.backends.cudnn.allow_tf32\nA \"bool\" that controls where TensorFloat-32 tensor cores may be\n used in cuDNN convolutions on Ampere or newer GPUs. See\n TensorFloat-32(TF32) on Ampere devices.\ntorch.backends.cudnn.deterministic\nA \"bool\" that, if True, causes cuDNN to only use deterministic\n convolution algorithms. See also\n \"torch.are_deterministic_algorithms_enabled()\" and\n \"torch.use_deterministic_algorithms()\".\ntorch.backends.cudnn.benchmark\nA \"bool\" that, if True, causes cuDNN to benchmark multiple\n convolution algorithms and select the fastest.\ntorch.backends.cudnn.benchmark_limit\nA \"int\" that specifies the maximum number of cuDNN convolution\n algorithms to try when torch.backends.cudnn.benchmark is True.\n Set benchmark_limit to zero to try every available algorithm.\n Note that this setting only affects convolutions dispatched via the\n cuDNN v8 API.\ntorch.backends.mps\ntorch.backends.mps.is_available()", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"} {"text": "torch.backends.mps.is_available()\nReturns a bool indicating if MPS is currently available.\nReturn type:\n bool\ntorch.backends.mps.is_built()\nReturns whether PyTorch is built with MPS support. Note that this\n doesn't necessarily mean MPS is available; just that if this\n PyTorch binary were run a machine with working MPS drivers and\n devices, we would be able to use it.\nReturn type:\n bool\ntorch.backends.mkl\ntorch.backends.mkl.is_available()\nReturns whether PyTorch is built with MKL support.\nclass torch.backends.mkl.verbose(enable)\nOn-demand oneMKL verbosing functionality To make it easier to debug\n performance issues, oneMKL can dump verbose messages containing\n execution information like duration while executing the kernel. The\n verbosing functionality can be invoked via an environment variable\n named MKL_VERBOSE. However, this methodology dumps messages in\n all steps. Those are a large amount of verbose messages. Moreover,", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"} {"text": "for investigating the performance issues, generally taking verbose\n messages for one single iteration is enough. This on-demand\n verbosing functionality makes it possible to control scope for\n verbose message dumping. 
In the following example, verbose messages\n will be dumped out for the second inference only.\n import torch\n model(data)\n with torch.backends.mkl.verbose(torch.backends.mkl.VERBOSE_ON):\n model(data)\n\nParameters:\n level -- Verbose level - \"VERBOSE_OFF\": Disable verbosing -\n \"VERBOSE_ON\": Enable verbosing\ntorch.backends.mkldnn\ntorch.backends.mkldnn.is_available()\nReturns whether PyTorch is built with MKL-DNN support.\nclass torch.backends.mkldnn.verbose(level)\nOn-demand oneDNN (former MKL-DNN) verbosing functionality To make\n it easier to debug performance issues, oneDNN can dump verbose\n messages containing information like kernel size, input data size", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"} {"text": "and execution duration while executing the kernel. The verbosing\n functionality can be invoked via an environment variable named\n DNNL_VERBOSE. However, this methodology dumps messages in all\n steps. Those are a large amount of verbose messages. Moreover, for\n investigating the performance issues, generally taking verbose\n messages for one single iteration is enough. This on-demand\n verbosing functionality makes it possible to control scope for\n verbose message dumping. In the following example, verbose messages\n will be dumped out for the second inference only.\n import torch\n model(data)\n with torch.backends.mkldnn.verbose(torch.backends.mkldnn.VERBOSE_ON):\n model(data)\n\nParameters:\n level -- Verbose level - \"VERBOSE_OFF\": Disable verbosing -\n \"VERBOSE_ON\": Enable verbosing - \"VERBOSE_ON_CREATION\": Enable\n verbosing, including oneDNN kernel creation\ntorch.backends.openmp\ntorch.backends.openmp.is_available()", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"} {"text": "torch.backends.openmp.is_available()\nReturns whether PyTorch is built with OpenMP support.\ntorch.backends.opt_einsum\ntorch.backends.opt_einsum.is_available()\nReturns a bool indicating if opt_einsum is currently available.\nReturn type:\n bool\ntorch.backends.opt_einsum.get_opt_einsum()\nReturns the opt_einsum package if opt_einsum is currently\n available, else None.\nReturn type:\n Any\ntorch.backends.opt_einsum.enabled\nA :class:\"bool\" that controls whether opt_einsum is enabled (\"True\"\n by default). If so, torch.einsum will use opt_einsum (https\n ://optimized-einsum.readthedocs.io/en/stable/path_finding.html) if\n available to calculate an optimal path of contraction for faster\n performance.\nIf opt_einsum is not available, torch.einsum will fall back to the\n default contraction path of left to right.\ntorch.backends.opt_einsum.strategy\nA :class:\"str\" that specifies which strategies to try when", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"} {"text": "\"torch.backends.opt_einsum.enabled\" is \"True\". By default,\n torch.einsum will try the \"auto\" strategy, but the \"greedy\" and\n \"optimal\" strategies are also supported. Note that the \"optimal\"\n strategy is factorial on the number of inputs as it tries all\n possible paths. 
See more details in opt_einsum's docs (https\n ://optimized-einsum.readthedocs.io/en/stable/path_finding.html).\ntorch.backends.xeon", "source": "https://pytorch.org/docs/stable/backends.html", "category": "pytorch docs"} {"text": "torch.utils.dlpack\ntorch.utils.dlpack.from_dlpack(ext_tensor) -> Tensor\nConverts a tensor from an external library into a \"torch.Tensor\".\nThe returned PyTorch tensor will share the memory with the input\n tensor (which may have come from another library). Note that in-\n place operations will therefore also affect the data of the input\n tensor. This may lead to unexpected issues (e.g., other libraries\n may have read-only flags or immutable data structures), so the user\n should only do this if they know for sure that this is fine.\nParameters:\n ext_tensor (object with \"dlpack\" attribute, or a DLPack\n capsule) --\n The tensor or DLPack capsule to convert.\n\n If \"ext_tensor\" is a tensor (or ndarray) object, it must support\n the \"__dlpack__\" protocol (i.e., have a \"ext_tensor.__dlpack__\"\n method). Otherwise \"ext_tensor\" may be a DLPack capsule, which\n is an opaque \"PyCapsule\" instance, typically produced by a\n", "source": "https://pytorch.org/docs/stable/dlpack.html", "category": "pytorch docs"} {"text": "\"to_dlpack\" function or method.\nReturn type:\n Tensor\nExamples:\n >>> import torch.utils.dlpack\n >>> t = torch.arange(4)\n\n # Convert a tensor directly (supported in PyTorch >= 1.10)\n >>> t2 = torch.from_dlpack(t)\n >>> t2[:2] = -1 # show that memory is shared\n >>> t2\n tensor([-1, -1, 2, 3])\n >>> t\n tensor([-1, -1, 2, 3])\n\n # The old-style DLPack usage, with an intermediate capsule object\n >>> capsule = torch.utils.dlpack.to_dlpack(t)\n >>> capsule\n \n >>> t3 = torch.from_dlpack(capsule)\n >>> t3\n tensor([-1, -1, 2, 3])\n >>> t3[0] = -9 # now we're sharing memory between 3 tensors\n >>> t3\n tensor([-9, -1, 2, 3])\n >>> t2\n tensor([-9, -1, 2, 3])\n >>> t\n tensor([-9, -1, 2, 3])\n\ntorch.utils.dlpack.to_dlpack(tensor) -> PyCapsule\nReturns an opaque object (a \"DLPack capsule\") representing the\n tensor.\nNote:", "source": "https://pytorch.org/docs/stable/dlpack.html", "category": "pytorch docs"} {"text": "tensor.\nNote:\n \"to_dlpack\" is a legacy DLPack interface. The capsule it returns\n cannot be used for anything in Python other than use it as input\n to \"from_dlpack\". The more idiomatic use of DLPack is to call\n \"from_dlpack\" directly on the tensor object - this works when\n that object has a \"__dlpack__\" method, which PyTorch and most\n other libraries indeed have now.\n\nWarning:\n Only call \"from_dlpack\" once per capsule produced with\n \"to_dlpack\". Behavior when a capsule is consumed multiple times\n is undefined.\n\nParameters:\n tensor -- a tensor to be exported\nThe DLPack capsule shares the tensor's memory.", "source": "https://pytorch.org/docs/stable/dlpack.html", "category": "pytorch docs"} {"text": "PyTorch Governance | Build + CI\nHow to Add a New Maintainer\nFor the person to be a maintainer, a person needs to:\n\n\nLand at least six commits to the related part of the PyTorch\n repository\n\n\nAt least one of these commits must be submitted in the last six\n months\n\n\nTo add a qualified person to the maintainers' list, please create a PR\nthat adds a person to the persons of interests page and merge_rules\nfiles. Current maintainers will cast their votes of support. 
Decision\ncriteria for approving the PR: * Not earlier than two business days\npassed before merging (ensure the majority of the contributors have\nseen it) * PR has the correct label (module: ci) * There are no\nobjections from the current maintainers * There are at least three net\nthumbs up from current maintainers (or all maintainers vote thumbs\nup when the module has less than 3 maintainers).", "source": "https://pytorch.org/docs/stable/community/build_ci_governance.html", "category": "pytorch docs"} {"text": "Probability distributions - torch.distributions\nThe \"distributions\" package contains parameterizable probability\ndistributions and sampling functions. This allows the construction of\nstochastic computation graphs and stochastic gradient estimators for\noptimization. This package generally follows the design of the\nTensorFlow Distributions package.\nIt is not possible to directly backpropagate through random samples.\nHowever, there are two main methods for creating surrogate functions\nthat can be backpropagated through. These are the score function\nestimator/likelihood ratio estimator/REINFORCE and the pathwise\nderivative estimator. REINFORCE is commonly seen as the basis for\npolicy gradient methods in reinforcement learning, and the pathwise\nderivative estimator is commonly seen in the reparameterization trick\nin variational autoencoders. Whilst the score function only requires\nthe value of samples f(x), the pathwise derivative requires the", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "derivative f'(x). The next sections discuss these two in a\nreinforcement learning example. For more details see Gradient\nEstimation Using Stochastic Computation Graphs .\nScore function\nWhen the probability density function is differentiable with respect\nto its parameters, we only need \"sample()\" and \"log_prob()\" to\nimplement REINFORCE:\n\\Delta\\theta = \\alpha r \\frac{\\partial\\log\n p(a|\\pi^\\theta(s))}{\\partial\\theta}\nwhere \\theta are the parameters, \\alpha is the learning rate, r is the\nreward and p(a|\\pi^\\theta(s)) is the probability of taking action a in\nstate s given policy \\pi^\\theta.\nIn practice we would sample an action from the output of a network,\napply this action in an environment, and then use \"log_prob\" to\nconstruct an equivalent loss function. Note that we use a negative\nbecause optimizers use gradient descent, whilst the rule above assumes\ngradient ascent. With a categorical policy, the code for implementing\nREINFORCE would be as follows:\nprobs = policy_network(state)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "probs = policy_network(state)\n # Note that this is equivalent to what used to be called multinomial\n m = Categorical(probs)\n action = m.sample()\n next_state, reward = env.step(action)\n loss = -m.log_prob(action) * reward\n loss.backward()\nPathwise derivative\nThe other way to implement these stochastic/policy gradients would be\nto use the reparameterization trick from the \"rsample()\" method, where\nthe parameterized random variable can be constructed via a\nparameterized deterministic function of a parameter-free random\nvariable. 
The reparameterized sample therefore becomes differentiable.\nThe code for implementing the pathwise derivative would be as follows:\nparams = policy_network(state)\n m = Normal(*params)\n # Any distribution with .has_rsample == True could work based on the application\n action = m.rsample()\n next_state, reward = env.step(action) # Assuming that reward is differentiable\n loss = -reward\n loss.backward()\nDistribution", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "loss.backward()\nDistribution\nclass torch.distributions.distribution.Distribution(batch_shape=torch.Size([]), event_shape=torch.Size([]), validate_args=None)\nBases: \"object\"\nDistribution is the abstract base class for probability\n distributions.\nproperty arg_constraints: Dict[str, Constraint]\n Returns a dictionary from argument names to \"Constraint\" objects\n that should be satisfied by each argument of this distribution.\n Args that are not tensors need not appear in this dict.\n\nproperty batch_shape: Size\n Returns the shape over which parameters are batched.\n\ncdf(value)\n Returns the cumulative density/mass function evaluated at\n *value*.\n\n Parameters:\n **value** (*Tensor*) --\n\n Return type:\n *Tensor*\n\nentropy()\n Returns entropy of distribution, batched over batch_shape.\n\n Returns:\n Tensor of shape batch_shape.\n\n Return type:\n *Tensor*\n\nenumerate_support(expand=True)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "enumerate_support(expand=True)\n Returns tensor containing all values supported by a discrete\n distribution. The result will enumerate over dimension 0, so the\n shape of the result will be *(cardinality,) + batch_shape +\n event_shape* (where *event_shape = ()* for univariate\n distributions).\n\n Note that this enumerates over all batched tensors in lock-step\n *[[0, 0], [1, 1], ...]*. With *expand=False*, enumeration\n happens along dim 0, but with the remaining batch dimensions\n being singleton dimensions, *[[0], [1], ..*.\n\n To iterate over the full Cartesian product use\n *itertools.product(m.enumerate_support())*.\n\n Parameters:\n **expand** (*bool*) -- whether to expand the support over the\n batch dims to match the distribution's *batch_shape*.\n\n Returns:\n Tensor iterating over dimension 0.\n\n Return type:\n *Tensor*\n\nproperty event_shape: Size\n Returns the shape of a single sample (without batching).\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "expand(batch_shape, _instance=None)\n Returns a new distribution instance (or populates an existing\n instance provided by a derived class) with batch dimensions\n expanded to *batch_shape*. This method calls \"expand\" on the\n distribution's parameters. As such, this does not allocate new\n memory for the expanded distribution instance. 
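 A small illustrative sketch (not part of the original reference; parameter values chosen arbitrarily) of expanding a scalar Normal distribution into a batched one:

        >>> import torch
        >>> from torch.distributions import Normal
        >>> d = Normal(torch.tensor(0.0), torch.tensor(1.0))
        >>> d.batch_shape
        torch.Size([])
        >>> d.expand(torch.Size([3, 2])).batch_shape
        torch.Size([3, 2])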
Additionally,\n this does not repeat any args checking or parameter broadcasting\n in *__init__.py*, when an instance is first created.\n\n Parameters:\n * **batch_shape** (*torch.Size*) -- the desired expanded\n size.\n\n * **_instance** -- new instance provided by subclasses that\n need to override *.expand*.\n\n Returns:\n New distribution instance with batch dimensions expanded to\n *batch_size*.\n\nicdf(value)\n Returns the inverse cumulative density/mass function evaluated\n at *value*.\n\n Parameters:\n **value** (*Tensor*) --\n\n Return type:\n *Tensor*\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Return type:\n Tensor\nlog_prob(value)\n Returns the log of the probability density/mass function\n evaluated at *value*.\n\n Parameters:\n **value** (*Tensor*) --\n\n Return type:\n *Tensor*\n\nproperty mean: Tensor\n Returns the mean of the distribution.\n\nproperty mode: Tensor\n Returns the mode of the distribution.\n\nperplexity()\n Returns perplexity of distribution, batched over batch_shape.\n\n Returns:\n Tensor of shape batch_shape.\n\n Return type:\n *Tensor*\n\nrsample(sample_shape=torch.Size([]))\n Generates a sample_shape shaped reparameterized sample or\n sample_shape shaped batch of reparameterized samples if the\n distribution parameters are batched.\n\n Return type:\n *Tensor*\n\nsample(sample_shape=torch.Size([]))\n Generates a sample_shape shaped sample or sample_shape shaped\n batch of samples if the distribution parameters are batched.\n\n Return type:\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Return type:\n Tensor\nsample_n(n)\n Generates n samples or n batches of samples if the distribution\n parameters are batched.\n\n Return type:\n *Tensor*\n\nstatic set_default_validate_args(value)\n Sets whether validation is enabled or disabled.\n\n The default behavior mimics Python's \"assert\" statement:\n validation is on by default, but is disabled if Python is run in\n optimized mode (via \"python -O\"). Validation may be expensive,\n so you may want to disable it once a model is working.\n\n Parameters:\n **value** (*bool*) -- Whether to enable validation.\n\nproperty stddev: Tensor\n Returns the standard deviation of the distribution.\n\nproperty support: Optional[Any]\n Returns a \"Constraint\" object representing this distribution's\n support.\n\nproperty variance: Tensor\n Returns the variance of the distribution.\n\nExponentialFamily", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "ExponentialFamily\nclass torch.distributions.exp_family.ExponentialFamily(batch_shape=torch.Size([]), event_shape=torch.Size([]), validate_args=None)\nBases: \"Distribution\"\nExponentialFamily is the abstract base class for probability\n distributions belonging to an exponential family, whose probability\n mass/density function has the form is defined below\n p_{F}(x; \\theta) = \\exp(\\langle t(x), \\theta\\rangle - F(\\theta)\n + k(x))\n\nwhere \\theta denotes the natural parameters, t(x) denotes the\n sufficient statistic, F(\\theta) is the log normalizer function for\n a given family and k(x) is the carrier measure.\nNote:\n This class is an intermediary between the *Distribution* class\n and distributions which belong to an exponential family mainly to\n check the correctness of the *.entropy()* and analytic KL\n divergence methods. 
We use this class to compute the entropy and\n KL divergence using the AD framework and Bregman divergences\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "(courtesy of: Frank Nielsen and Richard Nock, Entropies and\n Cross-entropies of Exponential Families).\nentropy()\n Method to compute the entropy using Bregman divergence of the\n log normalizer.\n\nBernoulli\nclass torch.distributions.bernoulli.Bernoulli(probs=None, logits=None, validate_args=None)\nBases: \"ExponentialFamily\"\nCreates a Bernoulli distribution parameterized by \"probs\" or\n \"logits\" (but not both).\nSamples are binary (0 or 1). They take the value 1 with\n probability p and 0 with probability 1 - p.\nExample:\n >>> m = Bernoulli(torch.tensor([0.3]))\n >>> m.sample() # 30% chance 1; 70% chance 0\n tensor([ 0.])\n\nParameters:\n * probs (Number, Tensor) -- the probability of\n sampling 1\n * **logits** (*Number**, **Tensor*) -- the log-odds of sampling\n *1*\n\narg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}\nentropy()", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "entropy()\nenumerate_support(expand=True)\nexpand(batch_shape, _instance=None)\nhas_enumerate_support = True\nlog_prob(value)\nproperty logits\nproperty mean\nproperty mode\nproperty param_shape\nproperty probs\nsample(sample_shape=torch.Size([]))\nsupport = Boolean()\nproperty variance\nBeta\nclass torch.distributions.beta.Beta(concentration1, concentration0, validate_args=None)\nBases: \"ExponentialFamily\"\nBeta distribution parameterized by \"concentration1\" and\n \"concentration0\".\nExample:\n >>> m = Beta(torch.tensor([0.5]), torch.tensor([0.5]))\n >>> m.sample() # Beta distributed with concentration concentration1 and concentration0\n tensor([ 0.1046])\n\nParameters:\n * concentration1 (float or Tensor) -- 1st\n concentration parameter of the distribution (often referred to\n as alpha)\n * **concentration0** (*float** or **Tensor*) -- 2nd\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "concentration parameter of the distribution (often referred to\n as beta)\narg_constraints = {'concentration0': GreaterThan(lower_bound=0.0), 'concentration1': GreaterThan(lower_bound=0.0)}\nproperty concentration0\nproperty concentration1\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nlog_prob(value)\nproperty mean\nproperty mode\nrsample(sample_shape=())\nsupport = Interval(lower_bound=0.0, upper_bound=1.0)\nproperty variance\nBinomial\nclass torch.distributions.binomial.Binomial(total_count=1, probs=None, logits=None, validate_args=None)\nBases: \"Distribution\"\nCreates a Binomial distribution parameterized by \"total_count\" and\n either \"probs\" or \"logits\" (but not both). 
\"total_count\" must be\n broadcastable with \"probs\"/\"logits\".\nExample:\n >>> m = Binomial(100, torch.tensor([0 , .2, .8, 1]))\n >>> x = m.sample()\n tensor([ 0., 22., 71., 100.])\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "tensor([ 0., 22., 71., 100.])\n >>> m = Binomial(torch.tensor([[5.], [10.]]), torch.tensor([0.5, 0.8]))\n >>> x = m.sample()\n tensor([[ 4., 5.],\n [ 7., 6.]])\n\nParameters:\n * total_count (int or Tensor) -- number of Bernoulli\n trials\n * **probs** (*Tensor*) -- Event probabilities\n\n * **logits** (*Tensor*) -- Event log-odds\n\narg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0), 'total_count': IntegerGreaterThan(lower_bound=0)}\nentropy()\nenumerate_support(expand=True)\nexpand(batch_shape, _instance=None)\nhas_enumerate_support = True\nlog_prob(value)\nproperty logits\nproperty mean\nproperty mode\nproperty param_shape\nproperty probs\nsample(sample_shape=torch.Size([]))\nproperty support\nproperty variance\nCategorical\nclass torch.distributions.categorical.Categorical(probs=None, logits=None, validate_args=None)\nBases: \"Distribution\"", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Bases: \"Distribution\"\nCreates a categorical distribution parameterized by either \"probs\"\n or \"logits\" (but not both).\nNote:\n It is equivalent to the distribution that \"torch.multinomial()\"\n samples from.\n\nSamples are integers from {0, \\ldots, K-1} where K is\n \"probs.size(-1)\".\nIf probs is 1-dimensional with length-K, each element is the\n relative probability of sampling the class at that index.\nIf probs is N-dimensional, the first N-1 dimensions are treated\n as a batch of relative probability vectors.\nNote:\n The *probs* argument must be non-negative, finite and have a non-\n zero sum, and it will be normalized to sum to 1 along the last\n dimension. \"probs\" will return this normalized value. The\n *logits* argument will be interpreted as unnormalized log\n probabilities and can therefore be any real number. It will\n likewise be normalized so that the resulting probabilities sum to\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "1 along the last dimension. \"logits\" will return this normalized\n value.\nSee also: \"torch.multinomial()\"\nExample:\n >>> m = Categorical(torch.tensor([ 0.25, 0.25, 0.25, 0.25 ]))\n >>> m.sample() # equal probability of 0, 1, 2, 3\n tensor(3)\n\nParameters:\n * probs (Tensor) -- event probabilities\n * **logits** (*Tensor*) -- event log probabilities\n (unnormalized)\n\narg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}\nentropy()\nenumerate_support(expand=True)\nexpand(batch_shape, _instance=None)\nhas_enumerate_support = True\nlog_prob(value)\nproperty logits\nproperty mean\nproperty mode\nproperty param_shape\nproperty probs\nsample(sample_shape=torch.Size([]))\nproperty support\nproperty variance\nCauchy\nclass torch.distributions.cauchy.Cauchy(loc, scale, validate_args=None)\nBases: \"Distribution\"\nSamples from a Cauchy (Lorentz) distribution. 
The distribution of", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "the ratio of independent normally distributed random variables with\n means 0 follows a Cauchy distribution.\nExample:\n >>> m = Cauchy(torch.tensor([0.0]), torch.tensor([1.0]))\n >>> m.sample() # sample from a Cauchy distribution with loc=0 and scale=1\n tensor([ 2.3214])\n\nParameters:\n * loc (float or Tensor) -- mode or median of the\n distribution.\n * **scale** (*float** or **Tensor*) -- half width at half\n maximum.\n\narg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}\ncdf(value)\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nicdf(value)\nlog_prob(value)\nproperty mean\nproperty mode\nrsample(sample_shape=torch.Size([]))\nsupport = Real()\nproperty variance\nChi2\nclass torch.distributions.chi2.Chi2(df, validate_args=None)\nBases: \"Gamma\"\nCreates a Chi-squared distribution parameterized by shape parameter", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\"df\". This is exactly equivalent to \"Gamma(alpha=0.5*df, beta=0.5)\"\nExample:\n >>> m = Chi2(torch.tensor([1.0]))\n >>> m.sample() # Chi2 distributed with shape df=1\n tensor([ 0.1046])\n\nParameters:\n df (float or Tensor) -- shape parameter of the\n distribution\narg_constraints = {'df': GreaterThan(lower_bound=0.0)}\nproperty df\nexpand(batch_shape, _instance=None)\nContinuousBernoulli\nclass torch.distributions.continuous_bernoulli.ContinuousBernoulli(probs=None, logits=None, lims=(0.499, 0.501), validate_args=None)\nBases: \"ExponentialFamily\"\nCreates a continuous Bernoulli distribution parameterized by\n \"probs\" or \"logits\" (but not both).\nThe distribution is supported in [0, 1] and parameterized by\n 'probs' (in (0,1)) or 'logits' (real-valued). Note that, unlike the\n Bernoulli, 'probs' does not correspond to a probability and\n 'logits' does not correspond to log-odds, but the same names are", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "used due to the similarity with the Bernoulli. See [1] for more\n details.\nExample:\n >>> m = ContinuousBernoulli(torch.tensor([0.3]))\n >>> m.sample()\n tensor([ 0.2538])\n\nParameters:\n * probs (Number, Tensor) -- (0,1) valued parameters\n * **logits** (*Number**, **Tensor*) -- real valued parameters\n whose sigmoid matches 'probs'\n\n[1] The continuous Bernoulli: fixing a pervasive error in\n variational autoencoders, Loaiza-Ganem G and Cunningham JP, NeurIPS\n 2019. 
https://arxiv.org/abs/1907.06845\narg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}\ncdf(value)\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nicdf(value)\nlog_prob(value)\nproperty logits\nproperty mean\nproperty param_shape\nproperty probs\nrsample(sample_shape=torch.Size([]))\nsample(sample_shape=torch.Size([]))\nproperty stddev\nsupport = Interval(lower_bound=0.0, upper_bound=1.0)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "property variance\nDirichlet\nclass torch.distributions.dirichlet.Dirichlet(concentration, validate_args=None)\nBases: \"ExponentialFamily\"\nCreates a Dirichlet distribution parameterized by concentration\n \"concentration\".\nExample:\n >>> m = Dirichlet(torch.tensor([0.5, 0.5]))\n >>> m.sample() # Dirichlet distributed with concentration [0.5, 0.5]\n tensor([ 0.1046, 0.8954])\n\nParameters:\n concentration (Tensor) -- concentration parameter of the\n distribution (often referred to as alpha)\narg_constraints = {'concentration': IndependentConstraint(GreaterThan(lower_bound=0.0), 1)}\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nlog_prob(value)\nproperty mean\nproperty mode\nrsample(sample_shape=())\nsupport = Simplex()\nproperty variance\nExponential\nclass torch.distributions.exponential.Exponential(rate, validate_args=None)\nBases: \"ExponentialFamily\"", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Bases: \"ExponentialFamily\"\nCreates a Exponential distribution parameterized by \"rate\".\nExample:\n >>> m = Exponential(torch.tensor([1.0]))\n >>> m.sample() # Exponential distributed with rate=1\n tensor([ 0.1046])\n\nParameters:\n rate (float or Tensor) -- rate = 1 / scale of the\n distribution\narg_constraints = {'rate': GreaterThan(lower_bound=0.0)}\ncdf(value)\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nicdf(value)\nlog_prob(value)\nproperty mean\nproperty mode\nrsample(sample_shape=torch.Size([]))\nproperty stddev\nsupport = GreaterThanEq(lower_bound=0.0)\nproperty variance\nFisherSnedecor\nclass torch.distributions.fishersnedecor.FisherSnedecor(df1, df2, validate_args=None)\nBases: \"Distribution\"\nCreates a Fisher-Snedecor distribution parameterized by \"df1\" and\n \"df2\".\nExample:\n >>> m = FisherSnedecor(torch.tensor([1.0]), torch.tensor([2.0]))\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\n\n\nm.sample() # Fisher-Snedecor-distributed with df1=1 and df2=2\n tensor([ 0.2453])\n\n\n\nParameters:\n * df1 (float or Tensor) -- degrees of freedom\n parameter 1\n * **df2** (*float** or **Tensor*) -- degrees of freedom\n parameter 2\n\narg_constraints = {'df1': GreaterThan(lower_bound=0.0), 'df2': GreaterThan(lower_bound=0.0)}\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nlog_prob(value)\nproperty mean\nproperty mode\nrsample(sample_shape=torch.Size([]))\nsupport = GreaterThan(lower_bound=0.0)\nproperty variance\nGamma\nclass torch.distributions.gamma.Gamma(concentration, rate, validate_args=None)\nBases: \"ExponentialFamily\"\nCreates a Gamma distribution parameterized by shape \"concentration\"\n and \"rate\".\nExample:\n >>> m = Gamma(torch.tensor([1.0]), torch.tensor([1.0]))\n >>> m.sample() # Gamma distributed with concentration=1 and rate=1\n tensor([ 0.1046])\n\nParameters:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": 
"pytorch docs"} {"text": "tensor([ 0.1046])\nParameters:\n * concentration (float or Tensor) -- shape parameter\n of the distribution (often referred to as alpha)\n * **rate** (*float** or **Tensor*) -- rate = 1 / scale of the\n distribution (often referred to as beta)\n\narg_constraints = {'concentration': GreaterThan(lower_bound=0.0), 'rate': GreaterThan(lower_bound=0.0)}\ncdf(value)\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nlog_prob(value)\nproperty mean\nproperty mode\nrsample(sample_shape=torch.Size([]))\nsupport = GreaterThanEq(lower_bound=0.0)\nproperty variance\nGeometric\nclass torch.distributions.geometric.Geometric(probs=None, logits=None, validate_args=None)\nBases: \"Distribution\"\nCreates a Geometric distribution parameterized by \"probs\", where\n \"probs\" is the probability of success of Bernoulli trials. It\n represents the probability that in k + 1 Bernoulli trials, the", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "first k trials failed, before seeing a success.\nSamples are non-negative integers [0, \\inf).\nExample:\n >>> m = Geometric(torch.tensor([0.3]))\n >>> m.sample() # underlying Bernoulli has 30% chance 1; 70% chance 0\n tensor([ 2.])\n\nParameters:\n * probs (Number, Tensor) -- the probability of\n sampling 1. Must be in range (0, 1]\n * **logits** (*Number**, **Tensor*) -- the log-odds of sampling\n *1*.\n\narg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}\nentropy()\nexpand(batch_shape, _instance=None)\nlog_prob(value)\nproperty logits\nproperty mean\nproperty mode\nproperty probs\nsample(sample_shape=torch.Size([]))\nsupport = IntegerGreaterThan(lower_bound=0)\nproperty variance\nGumbel\nclass torch.distributions.gumbel.Gumbel(loc, scale, validate_args=None)\nBases: \"TransformedDistribution\"\nSamples from a Gumbel Distribution.\nExamples:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Examples:\n >>> m = Gumbel(torch.tensor([1.0]), torch.tensor([2.0]))\n >>> m.sample() # sample from Gumbel distribution with loc=1, scale=2\n tensor([ 1.0124])\n\nParameters:\n * loc (float or Tensor) -- Location parameter of the\n distribution\n * **scale** (*float** or **Tensor*) -- Scale parameter of the\n distribution\n\narg_constraints: Dict[str, constraints.Constraint] = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}\nentropy()\nexpand(batch_shape, _instance=None)\nlog_prob(value)\nproperty mean\nproperty mode\nproperty stddev\nsupport = Real()\nproperty variance\nHalfCauchy\nclass torch.distributions.half_cauchy.HalfCauchy(scale, validate_args=None)\nBases: \"TransformedDistribution\"\nCreates a half-Cauchy distribution parameterized by scale where:\n X ~ Cauchy(0, scale)\n Y = |X| ~ HalfCauchy(scale)\n\nExample:\n >>> m = HalfCauchy(torch.tensor([1.0]))\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\n\n\nm = HalfCauchy(torch.tensor([1.0]))\n >>> m.sample() # half-cauchy distributed with scale=1\n tensor([ 2.3214])\n\n\n\nParameters:\n scale (float or Tensor) -- scale of the full Cauchy\n distribution\narg_constraints: Dict[str, constraints.Constraint] = {'scale': GreaterThan(lower_bound=0.0)}\ncdf(value)\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nicdf(prob)\nlog_prob(value)\nproperty mean\nproperty mode\nproperty scale\nsupport = GreaterThanEq(lower_bound=0.0)\nproperty 
variance\nHalfNormal\nclass torch.distributions.half_normal.HalfNormal(scale, validate_args=None)\nBases: \"TransformedDistribution\"\nCreates a half-normal distribution parameterized by scale where:\n X ~ Normal(0, scale)\n Y = |X| ~ HalfNormal(scale)\n\nExample:\n >>> m = HalfNormal(torch.tensor([1.0]))\n >>> m.sample() # half-normal distributed with scale=1\n tensor([ 0.1046])\n\nParameters:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "tensor([ 0.1046])\nParameters:\n scale (float or Tensor) -- scale of the full Normal\n distribution\narg_constraints: Dict[str, constraints.Constraint] = {'scale': GreaterThan(lower_bound=0.0)}\ncdf(value)\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nicdf(prob)\nlog_prob(value)\nproperty mean\nproperty mode\nproperty scale\nsupport = GreaterThanEq(lower_bound=0.0)\nproperty variance\nIndependent\nclass torch.distributions.independent.Independent(base_distribution, reinterpreted_batch_ndims, validate_args=None)\nBases: \"Distribution\"\nReinterprets some of the batch dims of a distribution as event\n dims.\nThis is mainly useful for changing the shape of the result of\n \"log_prob()\". For example to create a diagonal Normal distribution\n with the same shape as a Multivariate Normal distribution (so they\n are interchangeable), you can:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "are interchangeable), you can:\n >>> from torch.distributions.multivariate_normal import MultivariateNormal\n >>> from torch.distributions.normal import Normal\n >>> loc = torch.zeros(3)\n >>> scale = torch.ones(3)\n >>> mvn = MultivariateNormal(loc, scale_tril=torch.diag(scale))\n >>> [mvn.batch_shape, mvn.event_shape]\n [torch.Size([]), torch.Size([3])]\n >>> normal = Normal(loc, scale)\n >>> [normal.batch_shape, normal.event_shape]\n [torch.Size([3]), torch.Size([])]\n >>> diagn = Independent(normal, 1)\n >>> [diagn.batch_shape, diagn.event_shape]\n [torch.Size([]), torch.Size([3])]\n\nParameters:\n * base_distribution\n (torch.distributions.distribution.Distribution) -- a base\n distribution\n * **reinterpreted_batch_ndims** (*int*) -- the number of batch\n dims to reinterpret as event dims\n\narg_constraints: Dict[str, Constraint] = {}\nentropy()\nenumerate_support(expand=True)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "entropy()\nenumerate_support(expand=True)\nexpand(batch_shape, _instance=None)\nproperty has_enumerate_support\nproperty has_rsample\nlog_prob(value)\nproperty mean\nproperty mode\nrsample(sample_shape=torch.Size([]))\nsample(sample_shape=torch.Size([]))\nproperty support\nproperty variance\nKumaraswamy\nclass torch.distributions.kumaraswamy.Kumaraswamy(concentration1, concentration0, validate_args=None)\nBases: \"TransformedDistribution\"\nSamples from a Kumaraswamy distribution.\nExample:\n >>> m = Kumaraswamy(torch.tensor([1.0]), torch.tensor([1.0]))\n >>> m.sample() # sample from a Kumaraswamy distribution with concentration alpha=1 and beta=1\n tensor([ 0.1729])\n\nParameters:\n * concentration1 (float or Tensor) -- 1st\n concentration parameter of the distribution (often referred to\n as alpha)\n * **concentration0** (*float** or **Tensor*) -- 2nd\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "concentration parameter of the distribution (often referred to\n as beta)\narg_constraints: Dict[str, 
constraints.Constraint] = {'concentration0': GreaterThan(lower_bound=0.0), 'concentration1': GreaterThan(lower_bound=0.0)}\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nproperty mean\nproperty mode\nsupport = Interval(lower_bound=0.0, upper_bound=1.0)\nproperty variance\nLKJCholesky\nclass torch.distributions.lkj_cholesky.LKJCholesky(dim, concentration=1.0, validate_args=None)\nBases: \"Distribution\"\nLKJ distribution for lower Cholesky factor of correlation matrices.\n The distribution is controlled by \"concentration\" parameter \\eta to\n make the probability of the correlation matrix M generated from a\n Cholesky factor proportional to \\det(M)^{\\eta - 1}. Because of\n that, when \"concentration == 1\", we have a uniform distribution\n over Cholesky factors of correlation matrices:\n L ~ LKJCholesky(dim, concentration)\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "L ~ LKJCholesky(dim, concentration)\n X = L @ L' ~ LKJCorr(dim, concentration)\nNote that this distribution samples the Cholesky factor of\n correlation matrices and not the correlation matrices themselves\n and thereby differs slightly from the derivations in [1] for the\n LKJCorr distribution. For sampling, this uses the Onion method\n from [1] Section 3.\nExample:\n >>> l = LKJCholesky(3, 0.5)\n >>> l.sample() # l @ l.T is a sample of a correlation 3x3 matrix\n tensor([[ 1.0000, 0.0000, 0.0000],\n [ 0.3516, 0.9361, 0.0000],\n [-0.1899, 0.4748, 0.8593]])\n\nParameters:\n * dimension (dim) -- dimension of the matrices\n * **concentration** (*float** or **Tensor*) --\n concentration/shape parameter of the distribution (often\n referred to as eta)\n\nReferences\n[1] Generating random correlation matrices based on vines and\n extended onion method (2009), Daniel Lewandowski, Dorota", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Kurowicka, Harry Joe. Journal of Multivariate Analysis. 
100.\n 10.1016/j.jmva.2009.04.008\narg_constraints = {'concentration': GreaterThan(lower_bound=0.0)}\nexpand(batch_shape, _instance=None)\nlog_prob(value)\nsample(sample_shape=torch.Size([]))\nsupport = CorrCholesky()\nLaplace\nclass torch.distributions.laplace.Laplace(loc, scale, validate_args=None)\nBases: \"Distribution\"\nCreates a Laplace distribution parameterized by \"loc\" and \"scale\".\nExample:\n >>> m = Laplace(torch.tensor([0.0]), torch.tensor([1.0]))\n >>> m.sample() # Laplace distributed with loc=0, scale=1\n tensor([ 0.1046])\n\nParameters:\n * loc (float or Tensor) -- mean of the distribution\n * **scale** (*float** or **Tensor*) -- scale of the distribution\n\narg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}\ncdf(value)\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nicdf(value)\nlog_prob(value)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "icdf(value)\nlog_prob(value)\nproperty mean\nproperty mode\nrsample(sample_shape=torch.Size([]))\nproperty stddev\nsupport = Real()\nproperty variance\nLogNormal\nclass torch.distributions.log_normal.LogNormal(loc, scale, validate_args=None)\nBases: \"TransformedDistribution\"\nCreates a log-normal distribution parameterized by \"loc\" and\n \"scale\" where:\n X ~ Normal(loc, scale)\n Y = exp(X) ~ LogNormal(loc, scale)\n\nExample:\n >>> m = LogNormal(torch.tensor([0.0]), torch.tensor([1.0]))\n >>> m.sample() # log-normal distributed with mean=0 and stddev=1\n tensor([ 0.1046])\n\nParameters:\n * loc (float or Tensor) -- mean of log of distribution\n * **scale** (*float** or **Tensor*) -- standard deviation of log\n of the distribution\n\narg_constraints: Dict[str, constraints.Constraint] = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "has_rsample = True\nproperty loc\nproperty mean\nproperty mode\nproperty scale\nsupport = GreaterThan(lower_bound=0.0)\nproperty variance\nLowRankMultivariateNormal\nclass torch.distributions.lowrank_multivariate_normal.LowRankMultivariateNormal(loc, cov_factor, cov_diag, validate_args=None)\nBases: \"Distribution\"\nCreates a multivariate normal distribution with covariance matrix\n having a low-rank form parameterized by \"cov_factor\" and\n \"cov_diag\":\n covariance_matrix = cov_factor @ cov_factor.T + cov_diag\n\n-[ Example ]-\n\n\n\nm = LowRankMultivariateNormal(torch.zeros(2), torch.tensor([[1.], [0.]]), torch.ones(2))\nm.sample() # normally distributed with mean=[0,0], cov_factor=[[1],[0]], cov_diag=[1,1]\n tensor([-0.2102, -0.5429])\n\n\n\nParameters:\n * loc (Tensor) -- mean of the distribution with shape\n batch_shape + event_shape\n * **cov_factor** (*Tensor*) -- factor part of low-rank form of\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "covariance matrix with shape batch_shape + event_shape +\n (rank,)\n * **cov_diag** (*Tensor*) -- diagonal part of low-rank form of\n covariance matrix with shape *batch_shape + event_shape*\n\nNote:\n The computation for determinant and inverse of covariance matrix\n is avoided when *cov_factor.shape[1] << cov_factor.shape[0]*\n thanks to Woodbury matrix identity and matrix determinant lemma.\n Thanks to these formulas, we just need to compute the determinant\n and inverse of the small size \"capacitance\" matrix:\n\n 
capacitance = I + cov_factor.T @ inv(cov_diag) @ cov_factor\n\narg_constraints = {'cov_diag': IndependentConstraint(GreaterThan(lower_bound=0.0), 1), 'cov_factor': IndependentConstraint(Real(), 2), 'loc': IndependentConstraint(Real(), 1)}\nproperty covariance_matrix\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nlog_prob(value)\nproperty mean\nproperty mode\nproperty precision_matrix", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "property mode\nproperty precision_matrix\nrsample(sample_shape=torch.Size([]))\nproperty scale_tril\nsupport = IndependentConstraint(Real(), 1)\nproperty variance\nMixtureSameFamily\nclass torch.distributions.mixture_same_family.MixtureSameFamily(mixture_distribution, component_distribution, validate_args=None)\nBases: \"Distribution\"\nThe MixtureSameFamily distribution implements a (batch of)\n mixture distribution where all component are from different\n parameterizations of the same distribution type. It is\n parameterized by a Categorical \"selecting distribution\" (over k\n component) and a component distribution, i.e., a Distribution\n with a rightmost batch shape (equal to [k]) which indexes each\n (batch of) component.\nExamples:\n >>> # Construct Gaussian Mixture Model in 1D consisting of 5 equally\n >>> # weighted normal distributions\n >>> mix = D.Categorical(torch.ones(5,))\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\n\n\nmix = D.Categorical(torch.ones(5,))\n >>> comp = D.Normal(torch.randn(5,), torch.rand(5,))\n >>> gmm = MixtureSameFamily(mix, comp)\n\n\n\n >>> # Construct Gaussian Mixture Modle in 2D consisting of 5 equally\n >>> # weighted bivariate normal distributions\n >>> mix = D.Categorical(torch.ones(5,))\n >>> comp = D.Independent(D.Normal(\n ... torch.randn(5,2), torch.rand(5,2)), 1)\n >>> gmm = MixtureSameFamily(mix, comp)\n\n >>> # Construct a batch of 3 Gaussian Mixture Models in 2D each\n >>> # consisting of 5 random weighted bivariate normal distributions\n >>> mix = D.Categorical(torch.rand(3,5))\n >>> comp = D.Independent(D.Normal(\n ... torch.randn(3,5,2), torch.rand(3,5,2)), 1)\n >>> gmm = MixtureSameFamily(mix, comp)\n\nParameters:\n * mixture_distribution --\n torch.distributions.Categorical-like instance. Manages the\n probability of selecting component. The number of categories", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "must match the rightmost batch dimension of the\n component_distribution. Must have either scalar\n batch_shape or batch_shape matching\n component_distribution.batch_shape[:-1]\n * **component_distribution** --\n *torch.distributions.Distribution*-like instance. Right-most\n batch dimension indexes component.\n\narg_constraints: Dict[str, Constraint] = {}\ncdf(x)\nproperty component_distribution\nexpand(batch_shape, _instance=None)\nhas_rsample = False\nlog_prob(x)\nproperty mean\nproperty mixture_distribution\nsample(sample_shape=torch.Size([]))\nproperty support\nproperty variance\nMultinomial\nclass torch.distributions.multinomial.Multinomial(total_count=1, probs=None, logits=None, validate_args=None)\nBases: \"Distribution\"\nCreates a Multinomial distribution parameterized by \"total_count\"\n and either \"probs\" or \"logits\" (but not both). The innermost", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "dimension of \"probs\" indexes over categories. 
All other dimensions\n index over batches.\nNote that \"total_count\" need not be specified if only \"log_prob()\"\n is called (see example below)\nNote:\n The *probs* argument must be non-negative, finite and have a non-\n zero sum, and it will be normalized to sum to 1 along the last\n dimension. \"probs\" will return this normalized value. The\n *logits* argument will be interpreted as unnormalized log\n probabilities and can therefore be any real number. It will\n likewise be normalized so that the resulting probabilities sum to\n 1 along the last dimension. \"logits\" will return this normalized\n value.\n\n\n\n\"sample()\" requires a single shared total_count for all\n parameters and samples.\n\n\n\"log_prob()\" allows different total_count for each parameter\n and sample.\n\n\nExample:\n >>> m = Multinomial(100, torch.tensor([ 1., 1., 1., 1.]))\n >>> x = m.sample() # equal probability of 0, 1, 2, 3\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "tensor([ 21., 24., 30., 25.])\n >>> Multinomial(probs=torch.tensor([1., 1., 1., 1.])).log_prob(x)\n tensor([-4.1338])\n\nParameters:\n * total_count (int) -- number of trials\n * **probs** (*Tensor*) -- event probabilities\n\n * **logits** (*Tensor*) -- event log probabilities\n (unnormalized)\n\narg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}\nentropy()\nexpand(batch_shape, _instance=None)\nlog_prob(value)\nproperty logits\nproperty mean\nproperty param_shape\nproperty probs\nsample(sample_shape=torch.Size([]))\nproperty support\ntotal_count: int\nproperty variance\nMultivariateNormal\nclass torch.distributions.multivariate_normal.MultivariateNormal(loc, covariance_matrix=None, precision_matrix=None, scale_tril=None, validate_args=None)\nBases: \"Distribution\"\nCreates a multivariate normal (also called Gaussian) distribution", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "parameterized by a mean vector and a covariance matrix.\nThe multivariate normal distribution can be parameterized either in\n terms of a positive definite covariance matrix \\mathbf{\\Sigma} or a\n positive definite precision matrix \\mathbf{\\Sigma}^{-1} or a lower-\n triangular matrix \\mathbf{L} with positive-valued diagonal entries,\n such that \\mathbf{\\Sigma} = \\mathbf{L}\\mathbf{L}^\\top. This\n triangular matrix can be obtained via e.g. 
Cholesky decomposition\n of the covariance.\n-[ Example ]-\n\n\n\nm = MultivariateNormal(torch.zeros(2), torch.eye(2))\nm.sample() # normally distributed with mean=[0,0] and covariance_matrix=I\n tensor([-0.2102, -0.5429])\n\n\n\nParameters:\n * loc (Tensor) -- mean of the distribution\n * **covariance_matrix** (*Tensor*) -- positive-definite\n covariance matrix\n\n * **precision_matrix** (*Tensor*) -- positive-definite precision\n matrix\n\n * **scale_tril** (*Tensor*) -- lower-triangular factor of\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "covariance, with positive-valued diagonal\nNote:\n Only one of \"covariance_matrix\" or \"precision_matrix\" or\n \"scale_tril\" can be specified.Using \"scale_tril\" will be more\n efficient: all computations internally are based on \"scale_tril\".\n If \"covariance_matrix\" or \"precision_matrix\" is passed instead,\n it is only used to compute the corresponding lower triangular\n matrices using a Cholesky decomposition.\n\narg_constraints = {'covariance_matrix': PositiveDefinite(), 'loc': IndependentConstraint(Real(), 1), 'precision_matrix': PositiveDefinite(), 'scale_tril': LowerCholesky()}\nproperty covariance_matrix\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nlog_prob(value)\nproperty mean\nproperty mode\nproperty precision_matrix\nrsample(sample_shape=torch.Size([]))\nproperty scale_tril\nsupport = IndependentConstraint(Real(), 1)\nproperty variance\nNegativeBinomial", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "NegativeBinomial\nclass torch.distributions.negative_binomial.NegativeBinomial(total_count, probs=None, logits=None, validate_args=None)\nBases: \"Distribution\"\nCreates a Negative Binomial distribution, i.e. distribution of the\n number of successful independent and identical Bernoulli trials\n before \"total_count\" failures are achieved. 
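For instance, a minimal sampling sketch (not from the reference documentation; parameter values chosen arbitrarily), stopping after 10 failures with success probability 0.5:

    >>> m = NegativeBinomial(torch.tensor([10.0]), probs=torch.tensor([0.5]))
    >>> m.sample()  # count of successes seen before the 10th failure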
The probability of\n success of each Bernoulli trial is \"probs\".\nParameters:\n * total_count (float or Tensor) -- non-negative number\n of negative Bernoulli trials to stop, although the\n distribution is still valid for real valued count\n * **probs** (*Tensor*) -- Event probabilities of success in the\n half open interval [0, 1)\n\n * **logits** (*Tensor*) -- Event log-odds for probabilities of\n success\n\narg_constraints = {'logits': Real(), 'probs': HalfOpenInterval(lower_bound=0.0, upper_bound=1.0), 'total_count': GreaterThanEq(lower_bound=0)}\nexpand(batch_shape, _instance=None)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "expand(batch_shape, _instance=None)\nlog_prob(value)\nproperty logits\nproperty mean\nproperty mode\nproperty param_shape\nproperty probs\nsample(sample_shape=torch.Size([]))\nsupport = IntegerGreaterThan(lower_bound=0)\nproperty variance\nNormal\nclass torch.distributions.normal.Normal(loc, scale, validate_args=None)\nBases: \"ExponentialFamily\"\nCreates a normal (also called Gaussian) distribution parameterized\n by \"loc\" and \"scale\".\nExample:\n >>> m = Normal(torch.tensor([0.0]), torch.tensor([1.0]))\n >>> m.sample() # normally distributed with loc=0 and scale=1\n tensor([ 0.1046])\n\nParameters:\n * loc (float or Tensor) -- mean of the distribution\n (often referred to as mu)\n * **scale** (*float** or **Tensor*) -- standard deviation of the\n distribution (often referred to as sigma)\n\narg_constraints = {'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}\ncdf(value)\nentropy()", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "cdf(value)\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nicdf(value)\nlog_prob(value)\nproperty mean\nproperty mode\nrsample(sample_shape=torch.Size([]))\nsample(sample_shape=torch.Size([]))\nproperty stddev\nsupport = Real()\nproperty variance\nOneHotCategorical\nclass torch.distributions.one_hot_categorical.OneHotCategorical(probs=None, logits=None, validate_args=None)\nBases: \"Distribution\"\nCreates a one-hot categorical distribution parameterized by \"probs\"\n or \"logits\".\nSamples are one-hot coded vectors of size \"probs.size(-1)\".\nNote:\n The *probs* argument must be non-negative, finite and have a non-\n zero sum, and it will be normalized to sum to 1 along the last\n dimension. \"probs\" will return this normalized value. The\n *logits* argument will be interpreted as unnormalized log\n probabilities and can therefore be any real number. It will\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "likewise be normalized so that the resulting probabilities sum to\n 1 along the last dimension. 
\"logits\" will return this normalized\n value.\nSee also: \"torch.distributions.Categorical()\" for specifications of\n \"probs\" and \"logits\".\nExample:\n >>> m = OneHotCategorical(torch.tensor([ 0.25, 0.25, 0.25, 0.25 ]))\n >>> m.sample() # equal probability of 0, 1, 2, 3\n tensor([ 0., 0., 0., 1.])\n\nParameters:\n * probs (Tensor) -- event probabilities\n * **logits** (*Tensor*) -- event log probabilities\n (unnormalized)\n\narg_constraints = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}\nentropy()\nenumerate_support(expand=True)\nexpand(batch_shape, _instance=None)\nhas_enumerate_support = True\nlog_prob(value)\nproperty logits\nproperty mean\nproperty mode\nproperty param_shape\nproperty probs\nsample(sample_shape=torch.Size([]))\nsupport = OneHot()\nproperty variance\nPareto", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "property variance\nPareto\nclass torch.distributions.pareto.Pareto(scale, alpha, validate_args=None)\nBases: \"TransformedDistribution\"\nSamples from a Pareto Type 1 distribution.\nExample:\n >>> m = Pareto(torch.tensor([1.0]), torch.tensor([1.0]))\n >>> m.sample() # sample from a Pareto distribution with scale=1 and alpha=1\n tensor([ 1.5623])\n\nParameters:\n * scale (float or Tensor) -- Scale parameter of the\n distribution\n * **alpha** (*float** or **Tensor*) -- Shape parameter of the\n distribution\n\narg_constraints: Dict[str, constraints.Constraint] = {'alpha': GreaterThan(lower_bound=0.0), 'scale': GreaterThan(lower_bound=0.0)}\nentropy()\nexpand(batch_shape, _instance=None)\nproperty mean\nproperty mode\nproperty support\nproperty variance\nPoisson\nclass torch.distributions.poisson.Poisson(rate, validate_args=None)\nBases: \"ExponentialFamily\"", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Bases: \"ExponentialFamily\"\nCreates a Poisson distribution parameterized by \"rate\", the rate\n parameter.\nSamples are nonnegative integers, with a pmf given by\n \\mathrm{rate}^k \\frac{e^{-\\mathrm{rate}}}{k!}\n\nExample:\n >>> m = Poisson(torch.tensor([4]))\n >>> m.sample()\n tensor([ 3.])\n\nParameters:\n rate (Number, Tensor) -- the rate parameter\narg_constraints = {'rate': GreaterThanEq(lower_bound=0.0)}\nexpand(batch_shape, _instance=None)\nlog_prob(value)\nproperty mean\nproperty mode\nsample(sample_shape=torch.Size([]))\nsupport = IntegerGreaterThan(lower_bound=0)\nproperty variance\nRelaxedBernoulli\nclass torch.distributions.relaxed_bernoulli.RelaxedBernoulli(temperature, probs=None, logits=None, validate_args=None)\nBases: \"TransformedDistribution\"\nCreates a RelaxedBernoulli distribution, parametrized by\n \"temperature\", and either \"probs\" or \"logits\" (but not both). This", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "is a relaxed version of the Bernoulli distribution, so the values\n are in (0, 1), and has reparametrizable samples.\nExample:\n >>> m = RelaxedBernoulli(torch.tensor([2.2]),\n ... 
torch.tensor([0.1, 0.2, 0.3, 0.99]))\n >>> m.sample()\n tensor([ 0.2951, 0.3442, 0.8918, 0.9021])\n\nParameters:\n * temperature (Tensor) -- relaxation temperature\n * **probs** (*Number**, **Tensor*) -- the probability of\n sampling *1*\n\n * **logits** (*Number**, **Tensor*) -- the log-odds of sampling\n *1*\n\narg_constraints: Dict[str, constraints.Constraint] = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nproperty logits\nproperty probs\nsupport = Interval(lower_bound=0.0, upper_bound=1.0)\nproperty temperature\nLogitRelaxedBernoulli", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "LogitRelaxedBernoulli\nclass torch.distributions.relaxed_bernoulli.LogitRelaxedBernoulli(temperature, probs=None, logits=None, validate_args=None)\nBases: \"Distribution\"\nCreates a LogitRelaxedBernoulli distribution parameterized by\n \"probs\" or \"logits\" (but not both), which is the logit of a\n RelaxedBernoulli distribution.\nSamples are logits of values in (0, 1). See [1] for more details.\nParameters:\n * temperature (Tensor) -- relaxation temperature\n * **probs** (*Number**, **Tensor*) -- the probability of\n sampling *1*\n\n * **logits** (*Number**, **Tensor*) -- the log-odds of sampling\n *1*\n\n[1] The Concrete Distribution: A Continuous Relaxation of Discrete\n Random Variables (Maddison et al, 2017)\n[2] Categorical Reparametrization with Gumbel-Softmax (Jang et al,\n 2017)\narg_constraints = {'logits': Real(), 'probs': Interval(lower_bound=0.0, upper_bound=1.0)}\nexpand(batch_shape, _instance=None)\nlog_prob(value)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "log_prob(value)\nproperty logits\nproperty param_shape\nproperty probs\nrsample(sample_shape=torch.Size([]))\nsupport = Real()\nRelaxedOneHotCategorical\nclass torch.distributions.relaxed_categorical.RelaxedOneHotCategorical(temperature, probs=None, logits=None, validate_args=None)\nBases: \"TransformedDistribution\"\nCreates a RelaxedOneHotCategorical distribution parametrized by\n \"temperature\", and either \"probs\" or \"logits\". This is a relaxed\n version of the \"OneHotCategorical\" distribution, so its samples are\n on simplex, and are reparametrizable.\nExample:\n >>> m = RelaxedOneHotCategorical(torch.tensor([2.2]),\n ... 
torch.tensor([0.1, 0.2, 0.3, 0.4]))\n >>> m.sample()\n tensor([ 0.1294, 0.2324, 0.3859, 0.2523])\n\nParameters:\n * temperature (Tensor) -- relaxation temperature\n * **probs** (*Tensor*) -- event probabilities\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\nlogits (Tensor) -- unnormalized log probability for each\n event\n\narg_constraints: Dict[str, constraints.Constraint] = {'logits': IndependentConstraint(Real(), 1), 'probs': Simplex()}\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nproperty logits\nproperty probs\nsupport = Simplex()\nproperty temperature\nStudentT\nclass torch.distributions.studentT.StudentT(df, loc=0.0, scale=1.0, validate_args=None)\nBases: \"Distribution\"\nCreates a Student's t-distribution parameterized by degree of\n freedom \"df\", mean \"loc\" and scale \"scale\".\nExample:\n >>> m = StudentT(torch.tensor([2.0]))\n >>> m.sample() # Student's t-distributed with degrees of freedom=2\n tensor([ 0.1046])\n\nParameters:\n * df (float or Tensor) -- degrees of freedom\n * **loc** (*float** or **Tensor*) -- mean of the distribution\n\n * **scale** (*float** or **Tensor*) -- scale of the distribution\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "arg_constraints = {'df': GreaterThan(lower_bound=0.0), 'loc': Real(), 'scale': GreaterThan(lower_bound=0.0)}\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nlog_prob(value)\nproperty mean\nproperty mode\nrsample(sample_shape=torch.Size([]))\nsupport = Real()\nproperty variance\nTransformedDistribution\nclass torch.distributions.transformed_distribution.TransformedDistribution(base_distribution, transforms, validate_args=None)\nBases: \"Distribution\"\nExtension of the Distribution class, which applies a sequence of\n Transforms to a base distribution. 
Let f be the composition of\n transforms applied:\n X ~ BaseDistribution\n Y = f(X) ~ TransformedDistribution(BaseDistribution, f)\n log p(Y) = log p(X) + log |det (dX/dY)|\n\nNote that the \".event_shape\" of a \"TransformedDistribution\" is the\n maximum shape of its base distribution and its transforms, since\n transforms can introduce correlations among events.", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "An example for the usage of \"TransformedDistribution\" would be:\n # Building a Logistic Distribution\n # X ~ Uniform(0, 1)\n # f = a + b * logit(X)\n # Y ~ f(X) ~ Logistic(a, b)\n base_distribution = Uniform(0, 1)\n transforms = [SigmoidTransform().inv, AffineTransform(loc=a, scale=b)]\n logistic = TransformedDistribution(base_distribution, transforms)\n\nFor more examples, please look at the implementations of \"Gumbel\",\n \"HalfCauchy\", \"HalfNormal\", \"LogNormal\", \"Pareto\", \"Weibull\",\n \"RelaxedBernoulli\" and \"RelaxedOneHotCategorical\"\narg_constraints: Dict[str, Constraint] = {}\ncdf(value)\n Computes the cumulative distribution function by inverting the\n transform(s) and computing the score of the base distribution.\n\nexpand(batch_shape, _instance=None)\nproperty has_rsample\nicdf(value)\n Computes the inverse cumulative distribution function using\n transform(s) and computing the score of the base distribution.\n\nlog_prob(value)", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "log_prob(value)\n Scores the sample by inverting the transform(s) and computing\n the score using the score of the base distribution and the log\n abs det jacobian.\n\nrsample(sample_shape=torch.Size([]))\n Generates a sample_shape shaped reparameterized sample or\n sample_shape shaped batch of reparameterized samples if the\n distribution parameters are batched. Samples first from base\n distribution and applies *transform()* for every transform in\n the list.\n\nsample(sample_shape=torch.Size([]))\n Generates a sample_shape shaped sample or sample_shape shaped\n batch of samples if the distribution parameters are batched.\n Samples first from base distribution and applies *transform()*\n for every transform in the list.\n\nproperty support\nUniform\nclass torch.distributions.uniform.Uniform(low, high, validate_args=None)\nBases: \"Distribution\"\nGenerates uniformly distributed random samples from the half-open\n interval \"[low, high)\".", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "interval \"[low, high)\".\nExample:\n >>> m = Uniform(torch.tensor([0.0]), torch.tensor([5.0]))\n >>> m.sample() # uniformly distributed in the range [0.0, 5.0)\n tensor([ 2.3418])\n\nParameters:\n * low (float or Tensor) -- lower range (inclusive).\n * **high** (*float** or **Tensor*) -- upper range (exclusive).\n\narg_constraints = {'high': Dependent(), 'low': Dependent()}\ncdf(value)\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nicdf(value)\nlog_prob(value)\nproperty mean\nproperty mode\nrsample(sample_shape=torch.Size([]))\nproperty stddev\nproperty support\nproperty variance\nVonMises\nclass torch.distributions.von_mises.VonMises(loc, concentration, validate_args=None)\nBases: \"Distribution\"\nA circular von Mises distribution.\nThis implementation uses polar coordinates. 
The \"loc\" and \"value\"\n args can be any real number (to facilitate unconstrained", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "optimization), but are interpreted as angles modulo 2 pi.\nExample::\n >>> m = VonMises(torch.tensor([1.0]), torch.tensor([1.0]))\n >>> m.sample() # von Mises distributed with loc=1 and concentration=1\n tensor([1.9777])\nParameters:\n * loc (torch.Tensor) -- an angle in radians.\n * **concentration** (*torch.Tensor*) -- concentration parameter\n\narg_constraints = {'concentration': GreaterThan(lower_bound=0.0), 'loc': Real()}\nexpand(batch_shape)\nhas_rsample = False\nlog_prob(value)\nproperty mean\n The provided mean is the circular one.\n\nproperty mode\nsample(sample_shape=torch.Size([]))\n The sampling algorithm for the von Mises distribution is based\n on the following paper: Best, D. J., and Nicholas I. Fisher.\n \"Efficient simulation of the von Mises distribution.\" Applied\n Statistics (1979): 152-157.\n\nsupport = Real()\nproperty variance\n The provided variance is the circular one.\n\nWeibull", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Weibull\nclass torch.distributions.weibull.Weibull(scale, concentration, validate_args=None)\nBases: \"TransformedDistribution\"\nSamples from a two-parameter Weibull distribution.\n-[ Example ]-\n\n\n\nm = Weibull(torch.tensor([1.0]), torch.tensor([1.0]))\nm.sample() # sample from a Weibull distribution with scale=1, concentration=1\n tensor([ 0.4784])\n\n\n\nParameters:\n * scale (float or Tensor) -- Scale parameter of\n distribution (lambda).\n * **concentration** (*float** or **Tensor*) -- Concentration\n parameter of distribution (k/shape).\n\narg_constraints: Dict[str, constraints.Constraint] = {'concentration': GreaterThan(lower_bound=0.0), 'scale': GreaterThan(lower_bound=0.0)}\nentropy()\nexpand(batch_shape, _instance=None)\nproperty mean\nproperty mode\nsupport = GreaterThan(lower_bound=0.0)\nproperty variance\nWishart", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "property variance\nWishart\nclass torch.distributions.wishart.Wishart(df, covariance_matrix=None, precision_matrix=None, scale_tril=None, validate_args=None)\nBases: \"ExponentialFamily\"\nCreates a Wishart distribution parameterized by a symmetric\n positive definite matrix \\Sigma, or its Cholesky decomposition\n \\mathbf{\\Sigma} = \\mathbf{L}\\mathbf{L}^\\top\n-[ Example ]-\n\n\n\nm = Wishart(torch.eye(2), torch.Tensor([2]))\nm.sample() # Wishart distributed with mean=df * I and\n # variance(x_ij)=df for i != j and variance(x_ij)=2 * df for i == j\n\n\n\nParameters:\n * covariance_matrix (Tensor) -- positive-definite\n covariance matrix\n * **precision_matrix** (*Tensor*) -- positive-definite precision\n matrix\n\n * **scale_tril** (*Tensor*) -- lower-triangular factor of\n covariance, with positive-valued diagonal\n\n * **df** (*float** or **Tensor*) -- real-valued parameter larger\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "than the (dimension of Square matrix) - 1\nNote:\n Only one of \"covariance_matrix\" or \"precision_matrix\" or\n \"scale_tril\" can be specified. 
Using \"scale_tril\" will be more\n efficient: all computations internally are based on \"scale_tril\".\n If \"covariance_matrix\" or \"precision_matrix\" is passed instead,\n it is only used to compute the corresponding lower triangular\n matrices using a Cholesky decomposition.\n 'torch.distributions.LKJCholesky' is a restricted Wishart\n distribution.[1]\n\nReferences\n[1] Wang, Z., Wu, Y. and Chu, H., 2018. On equivalence of the LKJ\n distribution and the restricted Wishart distribution. [2] Sawyer,\n S., 2007. Wishart Distributions and Inverse-Wishart Sampling. [3]\n Anderson, T. W., 2003. An Introduction to Multivariate Statistical\n Analysis (3rd ed.). [4] Odell, P. L. & Feiveson, A. H., 1966. A\n Numerical Procedure to Generate a SampleCovariance Matrix. JASA,", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "61(313):199-203. [5] Ku, Y.-C. & Bloomfield, P., 2010. Generating\n Random Wishart Matrices with Fractional Degrees of Freedom in OX.\narg_constraints = {'covariance_matrix': PositiveDefinite(), 'df': GreaterThan(lower_bound=0), 'precision_matrix': PositiveDefinite(), 'scale_tril': LowerCholesky()}\nproperty covariance_matrix\nentropy()\nexpand(batch_shape, _instance=None)\nhas_rsample = True\nlog_prob(value)\nproperty mean\nproperty mode\nproperty precision_matrix\nrsample(sample_shape=torch.Size([]), max_try_correction=None)\n Warning:\n\n In some cases, sampling algorithm based on Bartlett\n decomposition may return singular matrix samples. Several\n tries to correct singular samples are performed by default,\n but it may end up returning singular matrix samples. Singular\n samples may return *-inf* values in *.log_prob()*. In those\n cases, the user should validate the samples and either fix the\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "value of df or adjust max_try_correction value for\n argument in .rsample accordingly.\nproperty scale_tril\nsupport = PositiveDefinite()\nproperty variance\nKL Divergence\ntorch.distributions.kl.kl_divergence(p, q)\nCompute Kullback-Leibler divergence KL(p | q) between two\n distributions.\n KL(p \\| q) = \\int p(x) \\log\\frac {p(x)} {q(x)} \\,dx\n\nParameters:\n * p (Distribution) -- A \"Distribution\" object.\n * **q** (*Distribution*) -- A \"Distribution\" object.\n\nReturns:\n A batch of KL divergences of shape batch_shape.\nReturn type:\n Tensor\nRaises:\n NotImplementedError -- If the distribution types have not\n been registered via \"register_kl()\".\nKL divergence is currently implemented for the following\n distribution pairs:\n * \"Bernoulli\" and \"Bernoulli\"\n * \"Bernoulli\" and \"Poisson\"\n\n * \"Beta\" and \"Beta\"\n\n * \"Beta\" and \"ContinuousBernoulli\"\n\n * \"Beta\" and \"Exponential\"\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\n\n\"Beta\" and \"Exponential\"\n\n\n\"Beta\" and \"Gamma\"\n\n\n\"Beta\" and \"Normal\"\n\n\n\"Beta\" and \"Pareto\"\n\n\n\"Beta\" and \"Uniform\"\n\n\n\"Binomial\" and \"Binomial\"\n\n\n\"Categorical\" and \"Categorical\"\n\n\n\"Cauchy\" and \"Cauchy\"\n\n\n\"ContinuousBernoulli\" and \"ContinuousBernoulli\"\n\n\n\"ContinuousBernoulli\" and \"Exponential\"\n\n\n\"ContinuousBernoulli\" and \"Normal\"\n\n\n\"ContinuousBernoulli\" and \"Pareto\"\n\n\n\"ContinuousBernoulli\" and \"Uniform\"\n\n\n\"Dirichlet\" and \"Dirichlet\"\n\n\n\"Exponential\" and \"Beta\"\n\n\n\"Exponential\" and 
\"ContinuousBernoulli\"\n\n\n\"Exponential\" and \"Exponential\"\n\n\n\"Exponential\" and \"Gamma\"\n\n\n\"Exponential\" and \"Gumbel\"\n\n\n\"Exponential\" and \"Normal\"\n\n\n\"Exponential\" and \"Pareto\"\n\n\n\"Exponential\" and \"Uniform\"\n\n\n\"ExponentialFamily\" and \"ExponentialFamily\"\n\n\n\"Gamma\" and \"Beta\"\n\n\n\"Gamma\" and \"ContinuousBernoulli\"\n\n\n\"Gamma\" and \"Exponential\"\n\n\n\"Gamma\" and \"Gamma\"\n\n\n\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\n\n\"Gamma\" and \"Gamma\"\n\n\n\"Gamma\" and \"Gumbel\"\n\n\n\"Gamma\" and \"Normal\"\n\n\n\"Gamma\" and \"Pareto\"\n\n\n\"Gamma\" and \"Uniform\"\n\n\n\"Geometric\" and \"Geometric\"\n\n\n\"Gumbel\" and \"Beta\"\n\n\n\"Gumbel\" and \"ContinuousBernoulli\"\n\n\n\"Gumbel\" and \"Exponential\"\n\n\n\"Gumbel\" and \"Gamma\"\n\n\n\"Gumbel\" and \"Gumbel\"\n\n\n\"Gumbel\" and \"Normal\"\n\n\n\"Gumbel\" and \"Pareto\"\n\n\n\"Gumbel\" and \"Uniform\"\n\n\n\"HalfNormal\" and \"HalfNormal\"\n\n\n\"Independent\" and \"Independent\"\n\n\n\"Laplace\" and \"Beta\"\n\n\n\"Laplace\" and \"ContinuousBernoulli\"\n\n\n\"Laplace\" and \"Exponential\"\n\n\n\"Laplace\" and \"Gamma\"\n\n\n\"Laplace\" and \"Laplace\"\n\n\n\"Laplace\" and \"Normal\"\n\n\n\"Laplace\" and \"Pareto\"\n\n\n\"Laplace\" and \"Uniform\"\n\n\n\"LowRankMultivariateNormal\" and \"LowRankMultivariateNormal\"\n\n\n\"LowRankMultivariateNormal\" and \"MultivariateNormal\"\n\n\n\"MultivariateNormal\" and \"LowRankMultivariateNormal\"\n\n\n\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\n\n\"MultivariateNormal\" and \"MultivariateNormal\"\n\n\n\"Normal\" and \"Beta\"\n\n\n\"Normal\" and \"ContinuousBernoulli\"\n\n\n\"Normal\" and \"Exponential\"\n\n\n\"Normal\" and \"Gamma\"\n\n\n\"Normal\" and \"Gumbel\"\n\n\n\"Normal\" and \"Laplace\"\n\n\n\"Normal\" and \"Normal\"\n\n\n\"Normal\" and \"Pareto\"\n\n\n\"Normal\" and \"Uniform\"\n\n\n\"OneHotCategorical\" and \"OneHotCategorical\"\n\n\n\"Pareto\" and \"Beta\"\n\n\n\"Pareto\" and \"ContinuousBernoulli\"\n\n\n\"Pareto\" and \"Exponential\"\n\n\n\"Pareto\" and \"Gamma\"\n\n\n\"Pareto\" and \"Normal\"\n\n\n\"Pareto\" and \"Pareto\"\n\n\n\"Pareto\" and \"Uniform\"\n\n\n\"Poisson\" and \"Bernoulli\"\n\n\n\"Poisson\" and \"Binomial\"\n\n\n\"Poisson\" and \"Poisson\"\n\n\n\"TransformedDistribution\" and \"TransformedDistribution\"\n\n\n\"Uniform\" and \"Beta\"\n\n\n\"Uniform\" and \"ContinuousBernoulli\"\n\n\n\"Uniform\" and \"Exponential\"\n\n\n\"Uniform\" and \"Gamma\"\n\n\n\"Uniform\" and \"Gumbel\"\n\n\n\"Uniform\" and \"Normal\"\n\n\n\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\n\n\"Uniform\" and \"Normal\"\n\n\n\"Uniform\" and \"Pareto\"\n\n\n\"Uniform\" and \"Uniform\"\n\n\n\n\ntorch.distributions.kl.register_kl(type_p, type_q)\nDecorator to register a pairwise function with \"kl_divergence()\".\n Usage:\n @register_kl(Normal, Normal)\n def kl_normal_normal(p, q):\n # insert implementation here\n\nLookup returns the most specific (type,type) match ordered by\n subclass. 
If the match is ambiguous, a RuntimeWarning is raised.\n For example to resolve the ambiguous situation:\n @register_kl(BaseP, DerivedQ)\n def kl_version1(p, q): ...\n @register_kl(DerivedP, BaseQ)\n def kl_version2(p, q): ...\n\nyou should register a third most-specific implementation, e.g.:\n register_kl(DerivedP, DerivedQ)(kl_version1) # Break the tie.\n\nParameters:\n * type_p (type) -- A subclass of \"Distribution\".\n * **type_q** (*type*) -- A subclass of \"Distribution\".\n\nTransforms", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Transforms\nclass torch.distributions.transforms.AbsTransform(cache_size=0)\nTransform via the mapping y = |x|.\nclass torch.distributions.transforms.AffineTransform(loc, scale, event_dim=0, cache_size=0)\nTransform via the pointwise affine mapping y = \\text{loc} +\n \\text{scale} \\times x.\nParameters:\n * loc (Tensor or float) -- Location parameter.\n * **scale** (*Tensor** or **float*) -- Scale parameter.\n\n * **event_dim** (*int*) -- Optional size of *event_shape*. This\n should be zero for univariate random variables, 1 for\n distributions over vectors, 2 for distributions over matrices,\n etc.\n\nclass torch.distributions.transforms.CatTransform(tseq, dim=0, lengths=None, cache_size=0)\nTransform functor that applies a sequence of transforms tseq\n component-wise to each submatrix at dim, of length\n lengths[dim], in a way compatible with \"torch.cat()\".\nExample:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Example:\n x0 = torch.cat([torch.range(1, 10), torch.range(1, 10)], dim=0)\n x = torch.cat([x0, x0], dim=0)\n t0 = CatTransform([ExpTransform(), identity_transform], dim=0, lengths=[10, 10])\n t = CatTransform([t0, t0], dim=0, lengths=[20, 20])\n y = t(x)\n\nclass torch.distributions.transforms.ComposeTransform(parts, cache_size=0)\nComposes multiple transforms in a chain. The transforms being\n composed are responsible for caching.\nParameters:\n * parts (list of \"Transform\") -- A list of transforms to\n compose.\n * **cache_size** (*int*) -- Size of cache. If zero, no caching\n is done. If one, the latest single value is cached. Only 0 and\n 1 are supported.\n\nclass torch.distributions.transforms.CorrCholeskyTransform(cache_size=0)\nTransforms an uncontrained real vector x with length D*(D-1)/2 into\n the Cholesky factor of a D-dimension correlation matrix. This\n Cholesky factor is a lower triangular matrix with positive", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "diagonals and unit Euclidean norm for each row. The transform is\n processed as follows:\n 1. First we convert x into a lower triangular matrix in row\n order.\n\n 2. For each row X_i of the lower triangular part, we apply a\n *signed* version of class \"StickBreakingTransform\" to\n transform X_i into a unit Euclidean length vector using the\n following steps: - Scales into the interval (-1, 1) domain:\n r_i = \\tanh(X_i). - Transforms into an unsigned domain: z_i =\n r_i^2. - Applies s_i = StickBreakingTransform(z_i). 
-\n Transforms back into signed domain: y_i = sign(r_i) *\n \\sqrt{s_i}.\n\nclass torch.distributions.transforms.CumulativeDistributionTransform(distribution, cache_size=0)\nTransform via the cumulative distribution function of a probability\n distribution.\nParameters:\n distribution (Distribution) -- Distribution whose\n cumulative distribution function to use for the transformation.\nExample:", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Example:\n # Construct a Gaussian copula from a multivariate normal.\n base_dist = MultivariateNormal(\n loc=torch.zeros(2),\n scale_tril=LKJCholesky(2).sample(),\n )\n transform = CumulativeDistributionTransform(Normal(0, 1))\n copula = TransformedDistribution(base_dist, [transform])\n\nclass torch.distributions.transforms.ExpTransform(cache_size=0)\nTransform via the mapping y = \\exp(x).\nclass torch.distributions.transforms.IndependentTransform(base_transform, reinterpreted_batch_ndims, cache_size=0)\nWrapper around another transform to treat\n \"reinterpreted_batch_ndims\"-many extra of the right most dimensions\n as dependent. This has no effect on the forward or backward\n transforms, but does sum out \"reinterpreted_batch_ndims\"-many of\n the rightmost dimensions in \"log_abs_det_jacobian()\".\nParameters:\n * base_transform (\"Transform\") -- A base transform.\n * **reinterpreted_batch_ndims** (*int*) -- The number of extra\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "rightmost dimensions to treat as dependent.\nclass torch.distributions.transforms.LowerCholeskyTransform(cache_size=0)\nTransform from unconstrained matrices to lower-triangular matrices\n with nonnegative diagonal entries.\nThis is useful for parameterizing positive definite matrices in\n terms of their Cholesky factorization.\nclass torch.distributions.transforms.PositiveDefiniteTransform(cache_size=0)\nTransform from unconstrained matrices to positive-definite\n matrices.\nclass torch.distributions.transforms.PowerTransform(exponent, cache_size=0)\nTransform via the mapping y = x^{\\text{exponent}}.\nclass torch.distributions.transforms.ReshapeTransform(in_shape, out_shape, cache_size=0)\nUnit Jacobian transform to reshape the rightmost part of a tensor.\nNote that \"in_shape\" and \"out_shape\" must have the same number of\n elements, just as for \"torch.Tensor.reshape()\".\nParameters:\n * in_shape (torch.Size) -- The input event shape.", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\nout_shape (torch.Size) -- The output event shape.\n\nclass torch.distributions.transforms.SigmoidTransform(cache_size=0)\nTransform via the mapping y = \\frac{1}{1 + \\exp(-x)} and x =\n \\text{logit}(y).\nclass torch.distributions.transforms.SoftplusTransform(cache_size=0)\nTransform via the mapping \\text{Softplus}(x) = \\log(1 + \\exp(x)).\n The implementation reverts to the linear function when x > 20.\nclass torch.distributions.transforms.TanhTransform(cache_size=0)\nTransform via the mapping y = \\tanh(x).\nIt is equivalent to \"ComposeTransform([AffineTransform(0., 2.),\n SigmoidTransform(), AffineTransform(-1., 2.)])\" However this\n might not be numerically stable, thus it is recommended to use\n TanhTransform instead.\nNote that one should use cache_size=1 when it comes to NaN/Inf\n values.\nclass torch.distributions.transforms.SoftmaxTransform(cache_size=0)\nTransform from unconstrained space to the simplex via 
y = \\exp(x)\n then normalizing.", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "then normalizing.\nThis is not bijective and cannot be used for HMC. However this acts\n mostly coordinate-wise (except for the final normalization), and\n thus is appropriate for coordinate-wise optimization algorithms.\nclass torch.distributions.transforms.StackTransform(tseq, dim=0, cache_size=0)\nTransform functor that applies a sequence of transforms tseq\n component-wise to each submatrix at dim in a way compatible with\n \"torch.stack()\".\nExample:\n x = torch.stack([torch.range(1, 10), torch.range(1, 10)], dim=1)\n t = StackTransform([ExpTransform(), identity_transform], dim=1)\n y = t(x)\n\nclass torch.distributions.transforms.StickBreakingTransform(cache_size=0)\nTransform from unconstrained space to the simplex of one additional\n dimension via a stick-breaking process.\nThis transform arises as an iterated sigmoid transform in a stick-\n breaking construction of the Dirichlet distribution: the first", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "logit is transformed via sigmoid to the first probability and the\n probability of everything else, and then the process recurses.\nThis is bijective and appropriate for use in HMC; however it mixes\n coordinates together and is less appropriate for optimization.\nclass torch.distributions.transforms.Transform(cache_size=0)\nAbstract class for invertable transformations with computable log\n det jacobians. They are primarily used in\n \"torch.distributions.TransformedDistribution\".\nCaching is useful for transforms whose inverses are either\n expensive or numerically unstable. Note that care must be taken\n with memoized values since the autograd graph may be reversed. For\n example while the following works with or without caching:\n y = t(x)\n t.log_abs_det_jacobian(x, y).backward() # x will receive gradients.\n\nHowever the following will error when caching due to dependency\n reversal:\n y = t(x)\n z = t.inv(y)\n grad(z.sum(), [y]) # error because z is x\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "grad(z.sum(), [y]) # error because z is x\nDerived classes should implement one or both of \"_call()\" or\n \"_inverse()\". Derived classes that set bijective=True should also\n implement \"log_abs_det_jacobian()\".\nParameters:\n cache_size (int) -- Size of cache. If zero, no caching is\n done. If one, the latest single value is cached. Only 0 and 1\n are supported.\nVariables:\n * domain (\"Constraint\") -- The constraint representing valid\n inputs to this transform.\n * **codomain** (\"Constraint\") -- The constraint representing\n valid outputs to this transform which are inputs to the\n inverse transform.\n\n * **bijective** (*bool*) -- Whether this transform is bijective.\n A transform \"t\" is bijective iff \"t.inv(t(x)) == x\" and\n \"t(t.inv(y)) == y\" for every \"x\" in the domain and \"y\" in the\n codomain. 
Transforms that are not bijective should at least\n maintain the weaker pseudoinverse properties \"t(t.inv(t(x)) ==\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "t(x)\" and \"t.inv(t(t.inv(y))) == t.inv(y)\".\n * **sign** (*int** or **Tensor*) -- For bijective univariate\n transforms, this should be +1 or -1 depending on whether\n transform is monotone increasing or decreasing.\n\nproperty inv\n Returns the inverse \"Transform\" of this transform. This should\n satisfy \"t.inv.inv is t\".\n\nproperty sign\n Returns the sign of the determinant of the Jacobian, if\n applicable. In general this only makes sense for bijective\n transforms.\n\nlog_abs_det_jacobian(x, y)\n Computes the log det jacobian *log |dy/dx|* given input and\n output.\n\nforward_shape(shape)\n Infers the shape of the forward computation, given the input\n shape. Defaults to preserving shape.\n\ninverse_shape(shape)\n Infers the shapes of the inverse computation, given the output\n shape. Defaults to preserving shape.\n\nConstraints\nThe following constraints are implemented:\n\n\"constraints.boolean\"\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\n\n\"constraints.boolean\"\n\n\n\"constraints.cat\"\n\n\n\"constraints.corr_cholesky\"\n\n\n\"constraints.dependent\"\n\n\n\"constraints.greater_than(lower_bound)\"\n\n\n\"constraints.greater_than_eq(lower_bound)\"\n\n\n\"constraints.independent(constraint, reinterpreted_batch_ndims)\"\n\n\n\"constraints.integer_interval(lower_bound, upper_bound)\"\n\n\n\"constraints.interval(lower_bound, upper_bound)\"\n\n\n\"constraints.less_than(upper_bound)\"\n\n\n\"constraints.lower_cholesky\"\n\n\n\"constraints.lower_triangular\"\n\n\n\"constraints.multinomial\"\n\n\n\"constraints.nonnegative_integer\"\n\n\n\"constraints.one_hot\"\n\n\n\"constraints.positive_integer\"\n\n\n\"constraints.positive\"\n\n\n\"constraints.positive_semidefinite\"\n\n\n\"constraints.positive_definite\"\n\n\n\"constraints.real_vector\"\n\n\n\"constraints.real\"\n\n\n\"constraints.simplex\"\n\n\n\"constraints.symmetric\"\n\n\n\"constraints.stack\"\n\n\n\"constraints.square\"\n\n\n\"constraints.symmetric\"\n\n\n\"constraints.unit_interval\"\n\n\nclass torch.distributions.constraints.Constraint\nAbstract base class for constraints.", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Abstract base class for constraints.\nA constraint object represents a region over which a variable is\n valid, e.g. within which a variable can be optimized.\nVariables:\n * is_discrete (bool) -- Whether constrained space is\n discrete. Defaults to False.\n * **event_dim** (*int*) -- Number of rightmost dimensions that\n together define an event. 
The \"check()\" method will remove\n this many dimensions when computing validity.\n\ncheck(value)\n Returns a byte tensor of \"sample_shape + batch_shape\" indicating\n whether each event in value satisfies this constraint.\n\ntorch.distributions.constraints.cat\nalias of \"_Cat\"\ntorch.distributions.constraints.dependent_property\nalias of \"_DependentProperty\"\ntorch.distributions.constraints.greater_than\nalias of \"_GreaterThan\"\ntorch.distributions.constraints.greater_than_eq\nalias of \"_GreaterThanEq\"\ntorch.distributions.constraints.independent\nalias of \"_IndependentConstraint\"", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "alias of \"_IndependentConstraint\"\ntorch.distributions.constraints.integer_interval\nalias of \"_IntegerInterval\"\ntorch.distributions.constraints.interval\nalias of \"_Interval\"\ntorch.distributions.constraints.half_open_interval\nalias of \"_HalfOpenInterval\"\ntorch.distributions.constraints.less_than\nalias of \"_LessThan\"\ntorch.distributions.constraints.multinomial\nalias of \"_Multinomial\"\ntorch.distributions.constraints.stack\nalias of \"_Stack\"\nConstraint Registry\nPyTorch provides two global \"ConstraintRegistry\" objects that link\n\"Constraint\" objects to \"Transform\" objects. These objects both input\nconstraints and return transforms, but they have different guarantees\non bijectivity.\n\n\"biject_to(constraint)\" looks up a bijective \"Transform\" from\n \"constraints.real\" to the given \"constraint\". The returned\n transform is guaranteed to have \".bijective = True\" and should\n implement \".log_abs_det_jacobian()\".\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "implement \".log_abs_det_jacobian()\".\n\n\"transform_to(constraint)\" looks up a not-necessarily bijective\n \"Transform\" from \"constraints.real\" to the given \"constraint\". The\n returned transform is not guaranteed to implement\n \".log_abs_det_jacobian()\".\n\nThe \"transform_to()\" registry is useful for performing unconstrained\noptimization on constrained parameters of probability distributions,\nwhich are indicated by each distribution's \".arg_constraints\" dict.\nThese transforms often overparameterize a space in order to avoid\nrotation; they are thus more suitable for coordinate-wise optimization\nalgorithms like Adam:\nloc = torch.zeros(100, requires_grad=True)\n unconstrained = torch.zeros(100, requires_grad=True)\n scale = transform_to(Normal.arg_constraints['scale'])(unconstrained)\n loss = -Normal(loc, scale).log_prob(data).sum()\nThe \"biject_to()\" registry is useful for Hamiltonian Monte Carlo,\nwhere samples from a probability distribution with constrained", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\".support\" are propagated in an unconstrained space, and algorithms\nare typically rotation invariant.:\ndist = Exponential(rate)\n unconstrained = torch.zeros(100, requires_grad=True)\n sample = biject_to(dist.support)(unconstrained)\n potential_energy = -dist.log_prob(sample).sum()\nNote:\nAn example where \"transform_to\" and \"biject_to\" differ is\n \"constraints.simplex\": \"transform_to(constraints.simplex)\" returns a\n \"SoftmaxTransform\" that simply exponentiates and normalizes its\n inputs; this is a cheap and mostly coordinate-wise operation\n appropriate for algorithms like SVI. 
In contrast,\n \"biject_to(constraints.simplex)\" returns a \"StickBreakingTransform\"\n that bijects its input down to a one-fewer-dimensional space; this a\n more expensive less numerically stable transform but is needed for\n algorithms like HMC.\nThe \"biject_to\" and \"transform_to\" objects can be extended by user-\ndefined constraints and transforms using their \".register()\" method", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "either as a function on singleton constraints:\ntransform_to.register(my_constraint, my_transform)\nor as a decorator on parameterized constraints:\n@transform_to.register(MyConstraintClass)\n def my_factory(constraint):\n assert isinstance(constraint, MyConstraintClass)\n return MyTransform(constraint.param1, constraint.param2)\nYou can create your own registry by creating a new\n\"ConstraintRegistry\" object.\nclass torch.distributions.constraint_registry.ConstraintRegistry\nRegistry to link constraints to transforms.\nregister(constraint, factory=None)\n Registers a \"Constraint\" subclass in this registry. Usage:\n\n @my_registry.register(MyConstraintClass)\n def construct_transform(constraint):\n assert isinstance(constraint, MyConstraint)\n return MyTransform(constraint.arg_constraints)\n\n Parameters:\n * **constraint** (subclass of \"Constraint\") -- A subclass of\n \"Constraint\", or a singleton object of the desired class.\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "\nfactory (Callable) -- A callable that inputs a\n constraint object and returns a \"Transform\" object.\n", "source": "https://pytorch.org/docs/stable/distributions.html", "category": "pytorch docs"} {"text": "Named Tensors operator coverage\nPlease read Named Tensors first for an introduction to named tensors.\nThis document is a reference for name inference, a process that\ndefines how named tensors:\n\n\nuse names to provide additional automatic runtime correctness\n checks\n\n\npropagate names from input tensors to output tensors\n\n\nBelow is a list of all operations that are supported with named\ntensors and their associated name inference rules.\nIf you don't see an operation listed here, but it would help your use\ncase, please search if an issue has already been filed and if not,\nfile one.\nWarning:\nThe named tensor API is experimental and subject to change.\nSupported Operations\n^^^^^^^^^^^^^^^^^^^^\n+----------------------+----------------------+\n| API | Name inference rule |\n|======================|======================|\n| \"Tensor.abs()\", | Keeps input names |\n| \"torch.abs()\" | |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"torch.abs()\" | |\n+----------------------+----------------------+\n| \"Tensor.abs_()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.acos()\", | Keeps input names |\n| \"torch.acos()\" | |\n+----------------------+----------------------+\n| \"Tensor.acos_()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.add()\", | Unifies names from |\n| \"torch.add()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.add_()\" | Unifies names from |\n| | inputs |\n+----------------------+----------------------+\n| \"Tensor.addmm()\", | Contracts away dims |\n| \"torch.addmm()\" | |\n+----------------------+----------------------+\n| \"Tensor.addmm_()\" | Contracts away dims 
|\n+----------------------+----------------------+\n| \"Tensor.addmv()\", | Contracts away dims |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.addmv()\", | Contracts away dims |\n| \"torch.addmv()\" | |\n+----------------------+----------------------+\n| \"Tensor.addmv_()\" | Contracts away dims |\n+----------------------+----------------------+\n| \"Tensor.align_as()\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.align_to()\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.all()\", | None |\n| \"torch.all()\" | |\n+----------------------+----------------------+\n| \"Tensor.any()\", | None |\n| \"torch.any()\" | |\n+----------------------+----------------------+\n| \"Tensor.asin()\", | Keeps input names |\n| \"torch.asin()\" | |\n+----------------------+----------------------+\n| \"Tensor.asin_()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.atan()\", | Keeps input names |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.atan()\", | Keeps input names |\n| \"torch.atan()\" | |\n+----------------------+----------------------+\n| \"Tensor.atan2()\", | Unifies names from |\n| \"torch.atan2()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.atan2_()\" | Unifies names from |\n| | inputs |\n+----------------------+----------------------+\n| \"Tensor.atan_()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.bernoulli() | Keeps input names |\n| \", | |\n| \"torch.bernoulli()\" | |\n+----------------------+----------------------+\n| \"Tensor.bernoulli_( | None |\n| )\" | |\n+----------------------+----------------------+\n| \"Tensor.bfloat16()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.bitwise_not | Keeps input names |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.bitwise_not | Keeps input names |\n| ()\", \"torch.bitwise | |\n| not()\" | |\n+----------------------+----------------------+\n| \"Tensor.bitwise_not | None |\n| ()\" | |\n+----------------------+----------------------+\n| \"Tensor.bmm()\", | Contracts away dims |\n| \"torch.bmm()\" | |\n+----------------------+----------------------+\n| \"Tensor.bool()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.byte()\" | Keeps input names |\n+----------------------+----------------------+\n| \"torch.cat()\" | Unifies names from |\n| | inputs |\n+----------------------+----------------------+\n| \"Tensor.cauchy_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.ceil()\", | Keeps input names |\n| \"torch.ceil()\" | |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"torch.ceil()\" | |\n+----------------------+----------------------+\n| \"Tensor.ceil_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.char()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.chunk()\", | Keeps input names |\n| \"torch.chunk()\" | |\n+----------------------+----------------------+\n| \"Tensor.clamp()\", | Keeps input names |\n| \"torch.clamp()\" | |\n+----------------------+----------------------+\n| \"Tensor.clamp_()\" | None |\n+----------------------+----------------------+\n| 
\"Tensor.copy_()\" | out function and in- |\n| | place variants |\n+----------------------+----------------------+\n| \"Tensor.cos()\", | Keeps input names |\n| \"torch.cos()\" | |\n+----------------------+----------------------+\n| \"Tensor.cos_()\" | None |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.cos_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.cosh()\", | Keeps input names |\n| \"torch.cosh()\" | |\n+----------------------+----------------------+\n| \"Tensor.cosh_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.acosh()\", | Keeps input names |\n| \"torch.acosh()\" | |\n+----------------------+----------------------+\n| \"Tensor.acosh_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.cpu()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.cuda()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.cumprod()\", | Keeps input names |\n| \"torch.cumprod()\" | |\n+----------------------+----------------------+\n| \"Tensor.cumsum()\", | Keeps input names |\n| \"torch.cumsum()\" | |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"torch.cumsum()\" | |\n+----------------------+----------------------+\n| \"Tensor.data_ptr()\" | None |\n+----------------------+----------------------+\n| \"Tensor.deg2rad()\", | Keeps input names |\n| \"torch.deg2rad()\" | |\n+----------------------+----------------------+\n| \"Tensor.deg2rad_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.detach()\", | Keeps input names |\n| \"torch.detach()\" | |\n+----------------------+----------------------+\n| \"Tensor.detach_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.device\", | None |\n| \"torch.device()\" | |\n+----------------------+----------------------+\n| \"Tensor.digamma()\", | Keeps input names |\n| \"torch.digamma()\" | |\n+----------------------+----------------------+\n| \"Tensor.digamma_()\" | None |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.digamma_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.dim()\" | None |\n+----------------------+----------------------+\n| \"Tensor.div()\", | Unifies names from |\n| \"torch.div()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.div_()\" | Unifies names from |\n| | inputs |\n+----------------------+----------------------+\n| \"Tensor.dot()\", | None |\n| \"torch.dot()\" | |\n+----------------------+----------------------+\n| \"Tensor.double()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.element_siz | None |\n| e()\" | |\n+----------------------+----------------------+\n| \"torch.empty()\" | Factory functions |\n+----------------------+----------------------+\n| \"torch.empty_like()\" | Factory functions |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"torch.empty_like()\" | Factory functions |\n+----------------------+----------------------+\n| \"Tensor.eq()\", | Unifies names from |\n| \"torch.eq()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.erf()\", | Keeps input names |\n| \"torch.erf()\" | |\n+----------------------+----------------------+\n| \"Tensor.erf_()\" | None 
|\n+----------------------+----------------------+\n| \"Tensor.erfc()\", | Keeps input names |\n| \"torch.erfc()\" | |\n+----------------------+----------------------+\n| \"Tensor.erfc_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.erfinv()\", | Keeps input names |\n| \"torch.erfinv()\" | |\n+----------------------+----------------------+\n| \"Tensor.erfinv_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.exp()\", | Keeps input names |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.exp()\", | Keeps input names |\n| \"torch.exp()\" | |\n+----------------------+----------------------+\n| \"Tensor.exp_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.expand()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.expm1()\", | Keeps input names |\n| \"torch.expm1()\" | |\n+----------------------+----------------------+\n| \"Tensor.expm1_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.exponential | None |\n| ()\" | |\n+----------------------+----------------------+\n| \"Tensor.fill()\" | None |\n+----------------------+----------------------+\n| \"Tensor.flatten()\", | See documentation |\n| \"torch.flatten()\" | |\n+----------------------+----------------------+\n| \"Tensor.float()\" | Keeps input names |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.float()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.floor()\", | Keeps input names |\n| \"torch.floor()\" | |\n+----------------------+----------------------+\n| \"Tensor.floor_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.frac()\", | Keeps input names |\n| \"torch.frac()\" | |\n+----------------------+----------------------+\n| \"Tensor.frac_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.ge()\", | Unifies names from |\n| \"torch.ge()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.get_device( | None |\n| )\", | |\n| \"torch.get_device()\" | |\n+----------------------+----------------------+\n| \"Tensor.grad\" | None |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "+----------------------+----------------------+\n| \"Tensor.gt()\", | Unifies names from |\n| \"torch.gt()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.half()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.has_names()\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.index_fill( | Keeps input names |\n| )\", | |\n| \"torch.index_fill()\" | |\n+----------------------+----------------------+\n| \"Tensor.index_fill_ | None |\n| ()\" | |\n+----------------------+----------------------+\n| \"Tensor.int()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.is_contiguo | None |\n| us()\" | |\n+----------------------+----------------------+\n| \"Tensor.is_cuda\" | None |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.is_cuda\" | None |\n+----------------------+----------------------+\n| \"Tensor.is_floating | None |\n| _point()\", \"torch.i | |\n| s_floating_point()\" | 
|\n+----------------------+----------------------+\n| \"Tensor.is_leaf\" | None |\n+----------------------+----------------------+\n| \"Tensor.is_pinned()\" | None |\n+----------------------+----------------------+\n| \"Tensor.is_shared()\" | None |\n+----------------------+----------------------+\n| \"Tensor.is_signed() | None |\n| \", | |\n| \"torch.is_signed()\" | |\n+----------------------+----------------------+\n| \"Tensor.is_sparse\" | None |\n+----------------------+----------------------+\n| \"Tensor.is_sparse_c | None |\n| sr\" | |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "+----------------------+----------------------+\n| \"torch.is_tensor()\" | None |\n+----------------------+----------------------+\n| \"Tensor.item()\" | None |\n+----------------------+----------------------+\n| \"Tensor.kthvalue()\", | Removes dimensions |\n| \"torch.kthvalue()\" | |\n+----------------------+----------------------+\n| \"Tensor.le()\", | Unifies names from |\n| \"torch.le()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.log()\", | Keeps input names |\n| \"torch.log()\" | |\n+----------------------+----------------------+\n| \"Tensor.log10()\", | Keeps input names |\n| \"torch.log10()\" | |\n+----------------------+----------------------+\n| \"Tensor.log10_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.log1p()\", | Keeps input names |\n| \"torch.log1p()\" | |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"torch.log1p()\" | |\n+----------------------+----------------------+\n| \"Tensor.log1p_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.log2()\", | Keeps input names |\n| \"torch.log2()\" | |\n+----------------------+----------------------+\n| \"Tensor.log2_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.log_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.log_normal_ | None |\n| ()\" | |\n+----------------------+----------------------+\n| \"Tensor.logical_not | Keeps input names |\n| ()\", \"torch.logical | |\n| not()\" | |\n+----------------------+----------------------+\n| \"Tensor.logical_not | None |\n| ()\" | |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "+----------------------+----------------------+\n| \"Tensor.logsumexp() | Removes dimensions |\n| \", | |\n| \"torch.logsumexp()\" | |\n+----------------------+----------------------+\n| \"Tensor.long()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.lt()\", | Unifies names from |\n| \"torch.lt()\" | inputs |\n+----------------------+----------------------+\n| \"torch.manual_seed( | None |\n| )\" | |\n+----------------------+----------------------+\n| \"Tensor.masked_fill | Keeps input names |\n| ()\", \"torch.masked_ | |\n| fill()\" | |\n+----------------------+----------------------+\n| \"Tensor.masked_fill | None |\n| _()\" | |\n+----------------------+----------------------+\n| \"Tensor.masked_sele | Aligns mask up to |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.masked_sele | Aligns mask up to |\n| ct()\", \"torch.maske | input and then unif |\n| d_select()\" | ies_names_from_inpu |\n| | t_tensors 
|\n+----------------------+----------------------+\n| \"Tensor.matmul()\", | Contracts away dims |\n| \"torch.matmul()\" | |\n+----------------------+----------------------+\n| \"Tensor.mean()\", | Removes dimensions |\n| \"torch.mean()\" | |\n+----------------------+----------------------+\n| \"Tensor.median()\", | Removes dimensions |\n| \"torch.median()\" | |\n+----------------------+----------------------+\n| \"Tensor.nanmedian() | Removes dimensions |\n| \", | |\n| \"torch.nanmedian()\" | |\n+----------------------+----------------------+\n| \"Tensor.mm()\", | Contracts away dims |\n| \"torch.mm()\" | |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "+----------------------+----------------------+\n| \"Tensor.mode()\", | Removes dimensions |\n| \"torch.mode()\" | |\n+----------------------+----------------------+\n| \"Tensor.mul()\", | Unifies names from |\n| \"torch.mul()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.mul_()\" | Unifies names from |\n| | inputs |\n+----------------------+----------------------+\n| \"Tensor.mv()\", | Contracts away dims |\n| \"torch.mv()\" | |\n+----------------------+----------------------+\n| \"Tensor.names\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.narrow()\", | Keeps input names |\n| \"torch.narrow()\" | |\n+----------------------+----------------------+\n| \"Tensor.ndim\" | None |\n+----------------------+----------------------+\n| \"Tensor.ndimension( | None |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.ndimension( | None |\n| )\" | |\n+----------------------+----------------------+\n| \"Tensor.ne()\", | Unifies names from |\n| \"torch.ne()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.neg()\", | Keeps input names |\n| \"torch.neg()\" | |\n+----------------------+----------------------+\n| \"Tensor.neg_()\" | None |\n+----------------------+----------------------+\n| \"torch.normal()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.normal_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.numel()\", | None |\n| \"torch.numel()\" | |\n+----------------------+----------------------+\n| \"torch.ones()\" | Factory functions |\n+----------------------+----------------------+\n| \"Tensor.pow()\", | Unifies names from |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.pow()\", | Unifies names from |\n| \"torch.pow()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.pow_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.prod()\", | Removes dimensions |\n| \"torch.prod()\" | |\n+----------------------+----------------------+\n| \"Tensor.rad2deg()\", | Keeps input names |\n| \"torch.rad2deg()\" | |\n+----------------------+----------------------+\n| \"Tensor.rad2deg_()\" | None |\n+----------------------+----------------------+\n| \"torch.rand()\" | Factory functions |\n+----------------------+----------------------+\n| \"torch.rand()\" | Factory functions |\n+----------------------+----------------------+\n| \"torch.randn()\" | Factory functions |\n+----------------------+----------------------+\n| \"torch.randn()\" | Factory functions |\n+----------------------+----------------------+", "source": 
"https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "+----------------------+----------------------+\n| \"Tensor.random_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.reciprocal( | Keeps input names |\n| )\", | |\n| \"torch.reciprocal()\" | |\n+----------------------+----------------------+\n| \"Tensor.reciprocal_ | None |\n| ()\" | |\n+----------------------+----------------------+\n| \"Tensor.refine_name | See documentation |\n| s()\" | |\n+----------------------+----------------------+\n| \"Tensor.register_ho | None |\n| ok()\" | |\n+----------------------+----------------------+\n| \"Tensor.rename()\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.rename_()\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.requires_gr | None |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.requires_gr | None |\n| ad\" | |\n+----------------------+----------------------+\n| \"Tensor.requires_gr | None |\n| ad_()\" | |\n+----------------------+----------------------+\n| \"Tensor.resize_()\" | Only allow resizes |\n| | that do not change |\n| | shape |\n+----------------------+----------------------+\n| \"Tensor.resize_as_( | Only allow resizes |\n| )\" | that do not change |\n| | shape |\n+----------------------+----------------------+\n| \"Tensor.round()\", | Keeps input names |\n| \"torch.round()\" | |\n+----------------------+----------------------+\n| \"Tensor.round_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.rsqrt()\", | Keeps input names |\n| \"torch.rsqrt()\" | |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"torch.rsqrt()\" | |\n+----------------------+----------------------+\n| \"Tensor.rsqrt_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.select()\", | Removes dimensions |\n| \"torch.select()\" | |\n+----------------------+----------------------+\n| \"Tensor.short()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.sigmoid()\", | Keeps input names |\n| \"torch.sigmoid()\" | |\n+----------------------+----------------------+\n| \"Tensor.sigmoid_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.sign()\", | Keeps input names |\n| \"torch.sign()\" | |\n+----------------------+----------------------+\n| \"Tensor.sign_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.sgn()\", | Keeps input names |\n| \"torch.sgn()\" | |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"torch.sgn()\" | |\n+----------------------+----------------------+\n| \"Tensor.sgn_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.sin()\", | Keeps input names |\n| \"torch.sin()\" | |\n+----------------------+----------------------+\n| \"Tensor.sin_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.sinh()\", | Keeps input names |\n| \"torch.sinh()\" | |\n+----------------------+----------------------+\n| \"Tensor.sinh_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.asinh()\", | Keeps input names |\n| \"torch.asinh()\" | |\n+----------------------+----------------------+\n| \"Tensor.asinh_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.size()\" | None 
|\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "+----------------------+----------------------+\n| \"Tensor.softmax()\", | Keeps input names |\n| \"torch.softmax()\" | |\n+----------------------+----------------------+\n| \"Tensor.split()\", | Keeps input names |\n| \"torch.split()\" | |\n+----------------------+----------------------+\n| \"Tensor.sqrt()\", | Keeps input names |\n| \"torch.sqrt()\" | |\n+----------------------+----------------------+\n| \"Tensor.sqrt_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.squeeze()\", | Removes dimensions |\n| \"torch.squeeze()\" | |\n+----------------------+----------------------+\n| \"Tensor.std()\", | Removes dimensions |\n| \"torch.std()\" | |\n+----------------------+----------------------+\n| \"torch.std_mean()\" | Removes dimensions |\n+----------------------+----------------------+\n| \"Tensor.stride()\" | None |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.stride()\" | None |\n+----------------------+----------------------+\n| \"Tensor.sub()\", | Unifies names from |\n| \"torch.sub()\" | inputs |\n+----------------------+----------------------+\n| \"Tensor.sub_()\" | Unifies names from |\n| | inputs |\n+----------------------+----------------------+\n| \"Tensor.sum()\", | Removes dimensions |\n| \"torch.sum()\" | |\n+----------------------+----------------------+\n| \"Tensor.tan()\", | Keeps input names |\n| \"torch.tan()\" | |\n+----------------------+----------------------+\n| \"Tensor.tan_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.tanh()\", | Keeps input names |\n| \"torch.tanh()\" | |\n+----------------------+----------------------+\n| \"Tensor.tanh_()\" | None |\n+----------------------+----------------------+", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "+----------------------+----------------------+\n| \"Tensor.atanh()\", | Keeps input names |\n| \"torch.atanh()\" | |\n+----------------------+----------------------+\n| \"Tensor.atanh_()\" | None |\n+----------------------+----------------------+\n| \"torch.tensor()\" | Factory functions |\n+----------------------+----------------------+\n| \"Tensor.to()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.topk()\", | Removes dimensions |\n| \"torch.topk()\" | |\n+----------------------+----------------------+\n| \"Tensor.transpose() | Permutes dimensions |\n| \", | |\n| \"torch.transpose()\" | |\n+----------------------+----------------------+\n| \"Tensor.trunc()\", | Keeps input names |\n| \"torch.trunc()\" | |\n+----------------------+----------------------+\n| \"Tensor.trunc_()\" | None |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"Tensor.trunc_()\" | None |\n+----------------------+----------------------+\n| \"Tensor.type()\" | None |\n+----------------------+----------------------+\n| \"Tensor.type_as()\" | Keeps input names |\n+----------------------+----------------------+\n| \"Tensor.unbind()\", | Removes dimensions |\n| \"torch.unbind()\" | |\n+----------------------+----------------------+\n| \"Tensor.unflatten()\" | See documentation |\n+----------------------+----------------------+\n| \"Tensor.uniform_()\" | None |\n+----------------------+----------------------+\n| 
\"Tensor.var()\", | Removes dimensions |\n| \"torch.var()\" | |\n+----------------------+----------------------+\n| \"torch.var_mean()\" | Removes dimensions |\n+----------------------+----------------------+\n| \"Tensor.zero_()\" | None |\n+----------------------+----------------------+\n| \"torch.zeros()\" | Factory functions |", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "| \"torch.zeros()\" | Factory functions |\n+----------------------+----------------------+\nKeeps input names\nAll pointwise unary functions follow this rule as well as some other\nunary functions.\n\n\nCheck names: None\n\n\nPropagate names: input tensor's names are propagated to the output.\n\n\n\n\n\nx = torch.randn(3, 3, names=('N', 'C'))\nx.abs().names\n ('N', 'C')\n\n\n\nRemoves dimensions\nAll reduction ops like \"sum()\" remove dimensions by reducing over the\ndesired dimensions. Other operations like \"select()\" and \"squeeze()\"\nremove dimensions.\nWherever one can pass an integer dimension index to an operator, one\ncan also pass a dimension name. Functions that take lists of dimension\nindices can also take in a list of dimension names.\n\n\nCheck names: If \"dim\" or \"dims\" is passed in as a list of names,\n check that those names exist in \"self\".\n\n\nPropagate names: If the dimensions of the input tensor specified by\n\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "\"dim\" or \"dims\" are not present in the output tensor, then the\n corresponding names of those dimensions do not appear in\n \"output.names\".\n\n\n\nx = torch.randn(1, 3, 3, 3, names=('N', 'C', 'H', 'W'))\nx.squeeze('N').names\n ('C', 'H', 'W')\nx = torch.randn(3, 3, 3, 3, names=('N', 'C', 'H', 'W'))\nx.sum(['N', 'C']).names\n ('H', 'W')\n\n\n\n# Reduction ops with keepdim=True don't actually remove dimensions.\n\n\n\nx = torch.randn(3, 3, 3, 3, names=('N', 'C', 'H', 'W'))\nx.sum(['N', 'C'], keepdim=True).names\n ('N', 'C', 'H', 'W')\n\n\n\nUnifies names from inputs\nAll binary arithmetic ops follow this rule. Operations that broadcast\nstill broadcast positionally from the right to preserve compatibility\nwith unnamed tensors. To perform explicit broadcasting by names, use\n\"Tensor.align_as()\".\n\nCheck names: All names must match positionally from the right. i.e.,\n in \"tensor + other\", \"match(tensor.names[i], other.names[i])\" must\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "be true for all \"i\" in \"(-min(tensor.dim(), other.dim()) + 1, -1]\".\n\n\nCheck names: Furthermore, all named dimensions must be aligned from\n the right. 
During matching, if we match a named dimension \"A\" with\n an unnamed dimension \"None\", then \"A\" must not appear in the tensor\n with the unnamed dimension.\n\n\nPropagate names: unify pairs of names from the right from both\n tensors to produce output names.\n\n\nFor example,\n# tensor: Tensor[ N, None]\n # other: Tensor[None, C]\n\n\n\ntensor = torch.randn(3, 3, names=('N', None))\nother = torch.randn(3, 3, names=(None, 'C'))\n(tensor + other).names\n ('N', 'C')\n\n\n\nCheck names:\n\n\n\"match(tensor.names[-1], other.names[-1])\" is \"True\"\n\n\n\"match(tensor.names[-2], tensor.names[-2])\" is \"True\"\n\n\nBecause we matched \"None\" in \"tensor\" with \"'C'\", check to make sure\n \"'C'\" doesn't exist in \"tensor\" (it does not).\n\n\nCheck to make sure \"'N'\" doesn't exists in \"other\" (it does not).\n\n\nFinally, the output names are computed with \"[unify('N', None),", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "unify(None, 'C')] = ['N', 'C']\"\nMore examples:\n# Dimensions don't match from the right:\n # tensor: Tensor[N, C]\n # other: Tensor[ N]\n\n\n\ntensor = torch.randn(3, 3, names=('N', 'C'))\nother = torch.randn(3, names=('N',))\n(tensor + other).names\n RuntimeError: Error when attempting to broadcast dims ['N', 'C'] and dims\n ['N']: dim 'C' and dim 'N' are at the same position from the right but do\n not match.\n\n\n\n# Dimensions aren't aligned when matching tensor.names[-1] and other.names[-1]:\n # tensor: Tensor[N, None]\n # other: Tensor[ N]\n\n\n\ntensor = torch.randn(3, 3, names=('N', None))\nother = torch.randn(3, names=('N',))\n(tensor + other).names\n RuntimeError: Misaligned dims when attempting to broadcast dims ['N'] and\n dims ['N', None]: dim 'N' appears in a different position from the right\n across both lists.\n\n\n\nNote:\nIn both of the last examples, it is possible to align the tensors by", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "names and then perform the addition. Use \"Tensor.align_as()\" to\n align tensors by name or \"Tensor.align_to()\" to align tensors to a\n custom dimension ordering.\nPermutes dimensions\nSome operations, like \"Tensor.t()\", permute the order of dimensions.\nDimension names are attached to individual dimensions so they get\npermuted as well.\nIf the operator takes in positional index \"dim\", it is also able to\ntake a dimension name as \"dim\".\n\n\nCheck names: If \"dim\" is passed as a name, check that it exists in\n the tensor.\n\n\nPropagate names: Permute dimension names in the same way as the\n dimensions that are being permuted.\n\n\n\n\n\nx = torch.randn(3, 3, names=('N', 'C'))\nx.transpose('N', 'C').names\n ('C', 'N')\n\n\n\nContracts away dims\nMatrix multiply functions follow some variant of this. Let's go\nthrough \"torch.mm()\" first and then generalize the rule for batch\nmatrix multiplication.\nFor \"torch.mm(tensor, other)\":\n\nCheck names: None\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "\n\nCheck names: None\n\n\nPropagate names: result names are \"(tensor.names[-2],\n other.names[-1])\".\n\n\n\n\n\nx = torch.randn(3, 3, names=('N', 'D'))\ny = torch.randn(3, 3, names=('in', 'out'))\nx.mm(y).names\n ('N', 'out')\n\n\n\nInherently, a matrix multiplication performs a dot product over two\ndimensions, collapsing them. 
When two tensors are matrix-multiplied,\nthe contracted dimensions disappear and do not show up in the output\ntensor.\n\"torch.mv()\", \"torch.dot()\" work in a similar way: name inference does\nnot check input names and removes the dimensions that are involved in\nthe dot product:\n\n\n\nx = torch.randn(3, 3, names=('N', 'D'))\ny = torch.randn(3, names=('something',))\nx.mv(y).names\n ('N',)\n\n\n\nNow, let's take a look at \"torch.matmul(tensor, other)\". Assume that\n\"tensor.dim() >= 2\" and \"other.dim() >= 2\".\n\nCheck names: Check that the batch dimensions of the inputs are\n aligned and broadcastable. See Unifies names from inputs for what it\n means for the inputs to be aligned.\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "means for the inputs to be aligned.\n\nPropagate names: result names are obtained by unifying the batch\n dimensions and removing the contracted dimensions:\n \"unify(tensor.names[:-2], other.names[:-2]) + (tensor.names[-2],\n other.names[-1])\".\n\nExamples:\n# Batch matrix multiply of matrices Tensor['C', 'D'] and Tensor['E', 'F'].\n # 'A', 'B' are batch dimensions.\n\n\n\nx = torch.randn(3, 3, 3, 3, names=('A', 'B', 'C', 'D'))\ny = torch.randn(3, 3, 3, names=('B', 'E', 'F'))\ntorch.matmul(x, y).names\n ('A', 'B', 'C', 'F')\n\n\n\nFinally, there are fused \"add\" versions of many matmul functions.\ni.e., \"addmm()\" and \"addmv()\". These are treated as composing name\ninference for i.e. \"mm()\" and name inference for \"add()\".\nFactory functions\nFactory functions now take a new \"names\" argument that associates a\nname with each dimension.\n\n\n\ntorch.zeros(2, 3, names=('N', 'C'))\n tensor([[0., 0., 0.],\n [0., 0., 0.]], names=('N', 'C'))\n\n\n\nout function and in-place variants", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "out function and in-place variants\nA tensor specified as an \"out=\" tensor has the following behavior:\n\n\nIf it has no named dimensions, then the names computed from the\n operation get propagated to it.\n\n\nIf it has any named dimensions, then the names computed from the\n operation must be exactly equal to the existing names. Otherwise,\n the operation errors.\n\n\nAll in-place methods modify inputs to have names equal to the computed\nnames from name inference. For example:\n\n\n\nx = torch.randn(3, 3)\ny = torch.randn(3, 3, names=('N', 'C'))\nx.names\n (None, None)\nx += y\nx.names\n ('N', 'C')\n\n\n", "source": "https://pytorch.org/docs/stable/name_inference.html", "category": "pytorch docs"} {"text": "Tensor Parallelism\n", "source": "https://pytorch.org/docs/stable/distributed.tensor.parallel.html", "category": "pytorch docs"} {"text": "torch.library\nPython operator registration API provides capabilities for extending\nPyTorch's core library of operators with user defined operators.\nCurrently, this can be done in two ways:\n\n\nCreating new libraries\n\n\nLets you to register new operators and kernels for various\n backends and functionalities by specifying the appropriate\n dispatch keys. For example,\n* Consider registering a new operator \"add\" in your newly\n created namespace \"foo\". You can access this operator using\n the \"torch.ops\" API and calling into by calling\n \"torch.ops.foo.add\". 
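A minimal end-to-end sketch of that flow (the "foo::add" schema and the kernel
below are made up purely for illustration and are not part of the core library):

    >>> import torch
    >>> from torch.library import Library
    >>> my_lib = Library("foo", "DEF")   # create the new "foo" namespace
    >>> my_lib.define("add(Tensor self, Tensor other) -> Tensor")
    >>> def add_cpu(self, other):
    ...     return self + other
    >>> my_lib.impl("add", add_cpu, "CPU")   # kernel for the CPU dispatch key
    >>> torch.ops.foo.add(torch.ones(3), torch.ones(3))   # dispatches to add_cpu
    tensor([2., 2., 2.])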
You can also access specific registered\n overloads by calling \"torch.ops.foo.add.{overload_name}\".\n\n* If you registered a new kernel for the \"CUDA\" dispatch key\n for this operator, then your custom defined function will be\n called for CUDA tensor inputs.\n\n\n\nThis can be done by creating Library class objects of \"\"DEF\"\"\n kind.\n\n", "source": "https://pytorch.org/docs/stable/library.html", "category": "pytorch docs"} {"text": "kind.\n\n\nExtending existing C++ libraries (e.g., aten)\n\n\nLets you register kernels for existing operators\n corresponding to various backends and functionalities by\n specifying the appropriate dispatch keys.\n\n\nThis may come in handy to fill up spotty operator support for a\n feature implemented through a dispatch key. For example.,\n* You can add operator support for Meta Tensors (by\n registering function to the \"Meta\" dispatch key).\n\n\n\nThis can be done by creating Library class objects of \"\"IMPL\"\"\n kind.\n\n\nA tutorial that walks you through some examples on how to use this API\nis available on Google Colab.\nWarning:\nDispatcher is a complicated PyTorch concept and having a sound\n understanding of Dispatcher is crucial to be able to do anything\n advanced with this API. This blog post is a good starting point to\n learn about Dispatcher.\nclass torch.library.Library(ns, kind, dispatch_key='')\nA class to create libraries that can be used to register new", "source": "https://pytorch.org/docs/stable/library.html", "category": "pytorch docs"} {"text": "operators or override operators in existing libraries from Python.\n A user can optionally pass in a dispatch keyname if they only want\n to register kernels corresponding to only one specific dispatch\n key.\nTo create a library to override operators in an existing library\n (with name ns), set the kind to \"IMPL\". To create a new library\n (with name ns) to register new operators, set the kind to \"DEF\".\n :param ns: library name :param kind: \"DEF\", \"IMPL\" (default:\n \"IMPL\") :param dispatch_key: PyTorch dispatch key (default: \"\")\ndefine(schema, alias_analysis='')\n Defines a new operator and its semantics in the ns namespace.\n\n Parameters:\n * **schema** -- function schema to define a new operator.\n\n * **alias_analysis** (*optional*) -- Indicates if the\n aliasing properties of the operator arguments can be\n inferred from the schema (default behavior) or not\n (\"CONSERVATIVE\").\n\n Returns:\n", "source": "https://pytorch.org/docs/stable/library.html", "category": "pytorch docs"} {"text": "(\"CONSERVATIVE\").\n Returns:\n name of the operator as inferred from the schema.\n\n Example::\n >>> my_lib = Library(\"foo\", \"DEF\")\n >>> my_lib.define(\"sum(Tensor self) -> Tensor\")\n\nimpl(op_name, fn, dispatch_key='')\n Registers the function implementation for an operator defined in\n the library.\n\n Parameters:\n * **op_name** -- operator name (along with the overload) or\n OpOverload object.\n\n * **fn** -- function that's the operator implementation for\n the input dispatch key.\n\n * **dispatch_key** -- dispatch key that the input function\n should be registered for. 
By default, it uses the dispatch\n key that the library was created with.\n\n Example::\n >>> my_lib = Library(\"aten\", \"IMPL\")\n >>> def div_cpu(self, other):\n >>> return self * (1 / other)\n >>> my_lib.impl(\"div.Tensor\", \"CPU\")\n\nWe have also added some function decorators to make it convenient to", "source": "https://pytorch.org/docs/stable/library.html", "category": "pytorch docs"} {"text": "register functions for operators:\n\n\n\"torch.library.impl()\"\n\n\n\"torch.library.define()\"\n\n", "source": "https://pytorch.org/docs/stable/library.html", "category": "pytorch docs"} {"text": "Named Tensors\nNamed Tensors allow users to give explicit names to tensor dimensions.\nIn most cases, operations that take dimension parameters will accept\ndimension names, avoiding the need to track dimensions by position. In\naddition, named tensors use names to automatically check that APIs are\nbeing used correctly at runtime, providing extra safety. Names can\nalso be used to rearrange dimensions, for example, to support\n\"broadcasting by name\" rather than \"broadcasting by position\".\nWarning:\nThe named tensor API is a prototype feature and subject to change.\nCreating named tensors\nFactory functions now take a new \"names\" argument that associates a\nname with each dimension.\n\n\n\ntorch.zeros(2, 3, names=('N', 'C'))\n tensor([[0., 0., 0.],\n [0., 0., 0.]], names=('N', 'C'))\n\n\n\nNamed dimensions, like regular Tensor dimensions, are ordered.\n\"tensor.names[i]\" is the name of dimension \"i\" of \"tensor\".\nThe following factory functions support named tensors:", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "\n\n\"torch.empty()\"\n\n\n\"torch.rand()\"\n\n\n\"torch.randn()\"\n\n\n\"torch.ones()\"\n\n\n\"torch.tensor()\"\n\n\n\"torch.zeros()\"\n\n\nNamed dimensions\nSee \"names\" for restrictions on tensor names.\nUse \"names\" to access the dimension names of a tensor and \"rename()\"\nto rename named dimensions.\n\n\n\nimgs = torch.randn(1, 2, 2, 3 , names=('N', 'C', 'H', 'W'))\nimgs.names\n ('N', 'C', 'H', 'W')\nrenamed_imgs = imgs.rename(H='height', W='width')\nrenamed_imgs.names\n ('N', 'C', 'height', 'width)\n\n\n\nNamed tensors can coexist with unnamed tensors; named tensors are\ninstances of \"torch.Tensor\". Unnamed tensors have \"None\"-named\ndimensions. Named tensors do not require all dimensions to be named.\n\n\n\nimgs = torch.randn(1, 2, 2, 3 , names=(None, 'C', 'H', 'W'))\nimgs.names\n (None, 'C', 'H', 'W')\n\n\n\nName propagation semantics\nNamed tensors use names to automatically check that APIs are being\ncalled correctly at runtime. This occurs in a process called *name", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "inference*. More formally, name inference consists of the following\ntwo steps:\n\n\nCheck names: an operator may perform automatic checks at runtime\n that check that certain dimension names must match.\n\n\nPropagate names: name inference propagates names to output\n tensors.\n\n\nAll operations that support named tensors propagate names.\n\n\n\nx = torch.randn(3, 3, names=('N', 'C'))\nx.abs().names\n ('N', 'C')\n\n\n\nmatch semantics\nTwo names match if they are equal (string equality) or if at least\none is \"None\". Nones are essentially a special \"wildcard\" name.\n\"unify(A, B)\" determines which of the names \"A\" and \"B\" to propagate\nto the outputs. It returns the more specific of the two names, if\nthey match. 
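As a rough illustration, the intended behavior of these two helpers can be
sketched in plain Python (hypothetical functions for exposition only, not
PyTorch's internal implementation):

    def match(a, b):
        # Two names match if they are equal or if at least one is the None wildcard.
        return a is None or b is None or a == b

    def unify(a, b):
        # Propagate the more specific of two matching names; error otherwise.
        if not match(a, b):
            raise RuntimeError(f"names {a!r} and {b!r} do not match")
        return a if b is None else b

    assert unify('N', None) == 'N'
    assert unify(None, 'C') == 'C'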
If the names do not match, then it errors.\nNote:\nIn practice, when working with named tensors, one should avoid\n having unnamed dimensions because their handling can be complicated.\n It is recommended to lift all unnamed dimensions to be named", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "dimensions by using \"refine_names()\".\nBasic name inference rules\nLet's see how \"match\" and \"unify\" are used in name inference in the\ncase of adding two one-dim tensors with no broadcasting.\nx = torch.randn(3, names=('X',))\n y = torch.randn(3)\n z = torch.randn(3, names=('Z',))\nCheck names: check that the names of the two tensors match.\nFor the following examples:\n\n\n\nx + y # match('X', None) is True\nx + z # match('X', 'Z') is False\nx + x # match('X', 'X') is True\nx + z\n Error when attempting to broadcast dims ['X'] and dims ['Z']: dim 'X' and dim 'Z' are at the same position from the right but do not match.\n\n\n\nPropagate names: unify the names to select which one to\npropagate. In the case of \"x + y\", \"unify('X', None) = 'X'\" because\n\"'X'\" is more specific than \"None\".\n\n\n\n(x + y).names\n ('X',)\n(x + x).names\n ('X',)\n\n\n\nFor a comprehensive list of name inference rules, see Named Tensors", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "operator coverage. Here are two common operations that may be useful\nto go over:\n\n\nBinary arithmetic ops: Unifies names from inputs\n\n\nMatrix multiplication ops: Contracts away dims\n\n\nExplicit alignment by names\nUse \"align_as()\" or \"align_to()\" to align tensor dimensions by name to\na specified ordering. This is useful for performing \"broadcasting by\nnames\".\n# This function is agnostic to the dimension ordering of input,\n # as long as it has a C dimension somewhere.\n def scale_channels(input, scale):\n scale = scale.refine_names('C')\n return input * scale.align_as(input)\n\n\n\nnum_channels = 3\nscale = torch.randn(num_channels, names=('C',))\nimgs = torch.rand(3, 3, 3, num_channels, names=('N', 'H', 'W', 'C'))\nmore_imgs = torch.rand(3, num_channels, 3, 3, names=('N', 'C', 'H', 'W'))\nvideos = torch.randn(3, num_channels, 3, 3, 3, names=('N', 'C', 'H', 'W', 'D')\nscale_channels(imgs, scale)\n\n\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "\n\n\nscale_channels(imgs, scale)\nscale_channels(more_imgs, scale)\nscale_channels(videos, scale)\n\n\n\nManipulating dimensions\nUse \"align_to()\" to permute large amounts of dimensions without\nmentioning all of them as in required by \"permute()\".\n\n\n\ntensor = torch.randn(2, 2, 2, 2, 2, 2)\nnamed_tensor = tensor.refine_names('A', 'B', 'C', 'D', 'E', 'F')\n\n\n\n# Move the F (dim 5) and E dimension (dim 4) to the front while keeping\n # the rest in the same order\n\n\n\ntensor.permute(5, 4, 0, 1, 2, 3)\nnamed_tensor.align_to('F', 'E', ...)\n\n\n\nUse \"flatten()\" and \"unflatten()\" to flatten and unflatten dimensions,\nrespectively. 
These methods are more verbose than \"view()\" and\n\"reshape()\", but have more semantic meaning to someone reading the\ncode.\n\n\n\nimgs = torch.randn(32, 3, 128, 128)\nnamed_imgs = imgs.refine_names('N', 'C', 'H', 'W')\nflat_imgs = imgs.view(32, -1)\nnamed_flat_imgs = named_imgs.flatten(['C', 'H', 'W'], 'features')\n\n\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "\n\n\nnamed_flat_imgs.names\n ('N', 'features')\nunflattened_imgs = imgs.view(32, 3, 128, 128)\nunflattened_named_imgs = named_flat_imgs.unflatten(\n 'features', [('C', 3), ('H', 128), ('W', 128)])\n\n\n\nAutograd support\nAutograd currently supports named tensors in a limited manner:\nautograd ignores names on all tensors. Gradient computation is still\ncorrect but we lose the safety that names give us.\n\n\n\nx = torch.randn(3, names=('D',))\nweight = torch.randn(3, names=('D',), requires_grad=True)\nloss = (x - weight).abs()\ngrad_loss = torch.randn(3)\nloss.backward(grad_loss)\nweight.grad # Unnamed for now. Will be named in the future\n tensor([-1.8107, -0.6357, 0.0783])\nweight.grad.zero_()\ngrad_loss = grad_loss.refine_names('C')\nloss = (x - weight).abs()\n # Ideally we'd check that the names of loss and grad_loss match but we don't yet.\nloss.backward(grad_loss)\nweight.grad\n\n\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "\n\n\nloss.backward(grad_loss)\nweight.grad\n tensor([-1.8107, -0.6357, 0.0783])\n\n\n\nCurrently supported operations and subsystems\nOperators\nSee Named Tensors operator coverage for a full list of the supported\ntorch and tensor operations. We do not yet support the following that\nis not covered by the link:\n\nindexing, advanced indexing.\n\nFor \"torch.nn.functional\" operators, we support the following:\n\n\n\"torch.nn.functional.relu()\"\n\n\n\"torch.nn.functional.softmax()\"\n\n\n\"torch.nn.functional.log_softmax()\"\n\n\n\"torch.nn.functional.tanh()\"\n\n\n\"torch.nn.functional.sigmoid()\"\n\n\n\"torch.nn.functional.dropout()\"\n\n\nSubsystems\nAutograd is supported, see Autograd support. Because gradients are\ncurrently unnamed, optimizers may work but are untested.\nNN modules are currently unsupported. This can lead to the following\nwhen calling modules with named tensor inputs:\n\nNN module parameters are unnamed, so outputs may be partially named.\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "\nNN module forward passes have code that don't support named tensors\n and will error out appropriately.\n\nWe also do not support the following subsystems, though some may work\nout of the box:\n\n\ndistributions\n\n\nserialization (\"torch.load()\", \"torch.save()\")\n\n\nmultiprocessing\n\n\nJIT\n\n\ndistributed\n\n\nONNX\n\n\nIf any of these would help your use case, please search if an issue\nhas already been filed and if not, file one.\nNamed tensor API reference\nIn this section please find the documentation for named tensor\nspecific APIs. 
For a comprehensive reference for how names are\npropagated through other PyTorch operators, see Named Tensors operator\ncoverage.\nclass torch.Tensor\nnames\n Stores names for each of this tensor's dimensions.\n\n \"names[idx]\" corresponds to the name of tensor dimension \"idx\".\n Names are either a string if the dimension is named or \"None\" if\n the dimension is unnamed.\n\n Dimension names may contain characters or underscore.\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "Furthermore, a dimension name must be a valid Python variable\n name (i.e., does not start with underscore).\n Tensors may not have two named dimensions with the same name.\n\n Warning:\n\n The named tensor API is experimental and subject to change.\n\nrename(names, *rename_map)\n Renames dimension names of \"self\".\n\n There are two main usages:\n\n \"self.rename(**rename_map)\" returns a view on tensor that has\n dims renamed as specified in the mapping \"rename_map\".\n\n \"self.rename(*names)\" returns a view on tensor, renaming all\n dimensions positionally using \"names\". Use \"self.rename(None)\"\n to drop names on a tensor.\n\n One cannot specify both positional args \"names\" and keyword args\n \"rename_map\".\n\n Examples:\n\n >>> imgs = torch.rand(2, 3, 5, 7, names=('N', 'C', 'H', 'W'))\n >>> renamed_imgs = imgs.rename(N='batch', C='channels')\n >>> renamed_imgs.names\n ('batch', 'channels', 'H', 'W')\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "('batch', 'channels', 'H', 'W')\n >>> renamed_imgs = imgs.rename(None)\n >>> renamed_imgs.names\n (None, None, None, None)\n\n >>> renamed_imgs = imgs.rename('batch', 'channel', 'height', 'width')\n >>> renamed_imgs.names\n ('batch', 'channel', 'height', 'width')\n\n Warning:\n\n The named tensor API is experimental and subject to change.\n\nrename_(names, *rename_map)\n In-place version of \"rename()\".\n\nrefine_names(*names)\n Refines the dimension names of \"self\" according to \"names\".\n\n Refining is a special case of renaming that \"lifts\" unnamed\n dimensions. A \"None\" dim can be refined to have any name; a\n named dim can only be refined to have the same name.\n\n Because named tensors can coexist with unnamed tensors, refining\n names gives a nice way to write named-tensor-aware code that\n works with both named and unnamed tensors.\n\n \"names\" may contain up to one Ellipsis (\"...\"). The Ellipsis is\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "expanded greedily; it is expanded in-place to fill \"names\" to\n the same length as \"self.dim()\" using names from the\n corresponding indices of \"self.names\".\n Python 2 does not support Ellipsis but one may use a string\n literal instead (\"'...'\").\n\n Parameters:\n **names** (*iterable of str*) -- The desired names of the\n output tensor. 
May contain up to one Ellipsis.\n\n Examples:\n\n >>> imgs = torch.randn(32, 3, 128, 128)\n >>> named_imgs = imgs.refine_names('N', 'C', 'H', 'W')\n >>> named_imgs.names\n ('N', 'C', 'H', 'W')\n\n >>> tensor = torch.randn(2, 3, 5, 7, 11)\n >>> tensor = tensor.refine_names('A', ..., 'B', 'C')\n >>> tensor.names\n ('A', None, None, 'B', 'C')\n\n Warning:\n\n The named tensor API is experimental and subject to change.\n\nalign_as(other) -> Tensor\n Permutes the dimensions of the \"self\" tensor to match the\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "dimension order in the \"other\" tensor, adding size-one dims for\n any new names.\n This operation is useful for explicit broadcasting by names (see\n examples).\n\n All of the dims of \"self\" must be named in order to use this\n method. The resulting tensor is a view on the original tensor.\n\n All dimension names of \"self\" must be present in \"other.names\".\n \"other\" may contain named dimensions that are not in\n \"self.names\"; the output tensor has a size-one dimension for\n each of those new names.\n\n To align a tensor to a specific order, use \"align_to()\".\n\n Examples:\n\n # Example 1: Applying a mask\n >>> mask = torch.randint(2, [127, 128], dtype=torch.bool).refine_names('W', 'H')\n >>> imgs = torch.randn(32, 128, 127, 3, names=('N', 'H', 'W', 'C'))\n >>> imgs.masked_fill_(mask.align_as(imgs), 0)\n\n\n # Example 2: Applying a per-channel-scale\n >>> def scale_channels(input, scale):\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "\n\n\ndef scale_channels(input, scale):\n >>> scale = scale.refine_names('C')\n >>> return input * scale.align_as(input)\n\n\n\n >>> num_channels = 3\n >>> scale = torch.randn(num_channels, names=('C',))\n >>> imgs = torch.rand(32, 128, 128, num_channels, names=('N', 'H', 'W', 'C'))\n >>> more_imgs = torch.rand(32, num_channels, 128, 128, names=('N', 'C', 'H', 'W'))\n >>> videos = torch.randn(3, num_channels, 128, 128, 128, names=('N', 'C', 'H', 'W', 'D'))\n\n # scale_channels is agnostic to the dimension order of the input\n >>> scale_channels(imgs, scale)\n >>> scale_channels(more_imgs, scale)\n >>> scale_channels(videos, scale)\n\n Warning:\n\n The named tensor API is experimental and subject to change.\n\nalign_to(*names)\n Permutes the dimensions of the \"self\" tensor to match the order\n specified in \"names\", adding size-one dims for any new names.\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "All of the dims of \"self\" must be named in order to use this\n method. The resulting tensor is a view on the original tensor.\n All dimension names of \"self\" must be present in \"names\".\n \"names\" may contain additional names that are not in\n \"self.names\"; the output tensor has a size-one dimension for\n each of those new names.\n\n \"names\" may contain up to one Ellipsis (\"...\"). The Ellipsis is\n expanded to be equal to all dimension names of \"self\" that are\n not mentioned in \"names\", in the order that they appear in\n \"self\".\n\n Python 2 does not support Ellipsis but one may use a string\n literal instead (\"'...'\").\n\n Parameters:\n **names** (*iterable of str*) -- The desired dimension\n ordering of the output tensor. 
May contain up to one Ellipsis\n that is expanded to all unmentioned dim names of \"self\".\n\n Examples:\n\n >>> tensor = torch.randn(2, 2, 2, 2, 2, 2)\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "\n\n\nnamed_tensor = tensor.refine_names('A', 'B', 'C', 'D', 'E', 'F')\n\n\n\n # Move the F and E dims to the front while keeping the rest in order\n >>> named_tensor.align_to('F', 'E', ...)\n\n Warning:\n\n The named tensor API is experimental and subject to change.\n\nflatten(dims, out_dim) -> Tensor\n Flattens \"dims\" into a single dimension with name \"out_dim\".\n\n All of *dims* must be consecutive in order in the \"self\" tensor,\n but not necessary contiguous in memory.\n\n Examples:\n\n >>> imgs = torch.randn(32, 3, 128, 128, names=('N', 'C', 'H', 'W'))\n >>> flat_imgs = imgs.flatten(['C', 'H', 'W'], 'features')\n >>> flat_imgs.names, flat_imgs.shape\n (('N', 'features'), torch.Size([32, 49152]))\n\n Warning:\n\n The named tensor API is experimental and subject to change.\n", "source": "https://pytorch.org/docs/stable/named_tensor.html", "category": "pytorch docs"} {"text": "torch.futures\nThis package provides a \"Future\" type that encapsulates an\nasynchronous execution and a set of utility functions to simplify\noperations on \"Future\" objects. Currently, the \"Future\" type is\nprimarily used by the Distributed RPC Framework.\nclass torch.futures.Future(*, devices=None)\nWrapper around a \"torch._C.Future\" which encapsulates an\n asynchronous execution of a callable, e.g. \"rpc_async()\". It also\n exposes a set of APIs to add callback functions and set results.\nWarning:\n GPU support is a beta feature, subject to changes.\n\nadd_done_callback(callback)\n Append the given callback function to this \"Future\", which will\n be run when the \"Future\" is completed. Multiple callbacks can\n be added to the same \"Future\", but the order in which they will\n be executed cannot be guaranteed. The callback must take one\n argument, which is the reference to this \"Future\". The callback\n", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"} {"text": "function can use the \"value()\" method to get the value. Note\n that if this \"Future\" is already completed, the given callback\n will be run inline.\n We recommend that you use the \"then()\" method as it provides a\n way to synchronize after your callback has completed.\n \"add_done_callback\" can be cheaper if your callback does not\n return anything. But both \"then()\" and \"add_done_callback\" use\n the same callback registration API under the hood.\n\n With respect to GPU tensors, this method behaves in the same way\n as \"then()\".\n\n Parameters:\n **callback** (\"Future\") -- a \"Callable\" that takes in one\n argument, which is the reference to this \"Future\".\n\n Note:\n\n Note that if the callback function throws, either through the\n original future being completed with an exception and calling\n \"fut.wait()\", or through other code in the callback, error\n handling must be carefully taken care of. For example, if this\n", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"} {"text": "callback later completes additional futures, those futures are\n not marked as completed with an error and the user is\n responsible for handling completion/waiting on those futures\n independently.\n Example::\n >>> def callback(fut):\n ... print(\"This will run after the future has finished.\")\n ... 
print(fut.wait())\n >>> fut = torch.futures.Future()\n >>> fut.add_done_callback(callback)\n >>> fut.set_result(5)\n This will run after the future has finished.\n 5\n\ndone()\n Return \"True\" if this \"Future\" is done. A \"Future\" is done if it\n has a result or an exception.\n\n If the value contains tensors that reside on GPUs,\n \"Future.done()\" will return \"True\" even if the asynchronous\n kernels that are populating those tensors haven't yet completed\n running on the device, because at such stage the result is\n already usable, provided one performs the appropriate\n", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"} {"text": "synchronizations (see \"wait()\").\n Return type:\n bool\n\nset_exception(result)\n Set an exception for this \"Future\", which will mark this\n \"Future\" as completed with an error and trigger all attached\n callbacks. Note that when calling wait()/value() on this\n \"Future\", the exception set here will be raised inline.\n\n Parameters:\n **result** (*BaseException*) -- the exception for this\n \"Future\".\n\n Example::\n >>> fut = torch.futures.Future()\n >>> fut.set_exception(ValueError(\"foo\"))\n >>> fut.wait()\n Traceback (most recent call last):\n ...\n ValueError: foo\n\nset_result(result)\n Set the result for this \"Future\", which will mark this \"Future\"\n as completed and trigger all attached callbacks. Note that a\n \"Future\" cannot be marked completed twice.\n\n If the result contains tensors that reside on GPUs, this method\n can be called even if the asynchronous kernels that are\n", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"} {"text": "populating those tensors haven't yet completed running on the\n device, provided that the streams on which those kernels were\n enqueued are set as the current ones when this method is called.\n Put simply, it's safe to call this method immediately after\n launching those kernels, without any additional synchronization,\n as long as one doesn't change streams in between. This method\n will record events on all the relevant current streams and will\n use them to ensure proper scheduling for all the consumers of\n this \"Future\".\n Parameters:\n **result** (*object*) -- the result object of this \"Future\".\n\n Example::\n >>> import threading\n >>> import time\n >>> def slow_set_future(fut, value):\n ... time.sleep(0.5)\n ... fut.set_result(value)\n >>> fut = torch.futures.Future()\n >>> t = threading.Thread(\n ... target=slow_set_future,\n ... args=(fut, torch.ones(2) * 3)\n", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"} {"text": "... args=(fut, torch.ones(2) * 3)\n ... )\n >>> t.start()\n >>> print(fut.wait())\n tensor([3., 3.])\n >>> t.join()\nthen(callback)\n Append the given callback function to this \"Future\", which will\n be run when the \"Future\" is completed. Multiple callbacks can\n be added to the same \"Future\", but the order in which they will\n be executed cannot be guaranteed (to enforce a certain order\n consider chaining: \"fut.then(cb1).then(cb2)\"). The callback must\n take one argument, which is the reference to this \"Future\". 
The\n callback function can use the \"value()\" method to get the value.\n Note that if this \"Future\" is already completed, the given\n callback will be run immediately inline.\n\n If the \"Future\"'s value contains tensors that reside on GPUs,\n the callback might be invoked while the async kernels that are\n populating those tensors haven't yet finished executing on the\n", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"} {"text": "device. However, the callback will be invoked with some\n dedicated streams set as current (fetched from a global pool)\n which will be synchronized with those kernels. Hence any\n operation performed by the callback on these tensors will be\n scheduled on the device after the kernels complete. In other\n words, as long as the callback doesn't switch streams, it can\n safely manipulate the result without any additional\n synchronization. This is similar to the non-blocking behavior of\n \"wait()\".\n Similarly, if the callback returns a value that contains tensors\n that reside on a GPU, it can do so even if the kernels that are\n producing these tensors are still running on the device, as long\n as the callback didn't change streams during its execution. If\n one wants to change streams, one must be careful to re-\n synchronize them with the original streams, that is, those that\n were current when the callback was invoked.\n\n Parameters:\n", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"} {"text": "Parameters:\n callback (\"Callable\") -- a \"Callable\" that takes this\n \"Future\" as the only argument.\n Returns:\n A new \"Future\" object that holds the return value of the\n \"callback\" and will be marked as completed when the given\n \"callback\" finishes.\n\n Return type:\n *Future*[*S*]\n\n Note:\n\n Note that if the callback function throws, either through the\n original future being completed with an exception and calling\n \"fut.wait()\", or through other code in the callback, the\n future returned by \"then\" will be marked appropriately with\n the encountered error. However, if this callback later\n completes additional futures, those futures are not marked as\n completed with an error and the user is responsible for\n handling completion/waiting on those futures independently.\n\n Example::\n >>> def callback(fut):\n ... print(f\"RPC return value is {fut.wait()}.\")\n", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"} {"text": "\n\n\nfut = torch.futures.Future()\n >>> # The inserted callback will print the return value when\n >>> # receiving the response from \"worker1\"\n >>> cb_fut = fut.then(callback)\n >>> chain_cb_fut = cb_fut.then(\n ... lambda x : print(f\"Chained cb done. {x.wait()}\")\n ... )\n >>> fut.set_result(5)\n RPC return value is 5.\n Chained cb done. None\n\n\n\nvalue()\n Obtain the value of an already-completed future.\n\n This method should only be called after a call to \"wait()\" has\n completed, or inside a callback function passed to \"then()\". In\n other cases this \"Future\" may not yet hold a value and calling\n \"value()\" could fail.\n\n If the value contains tensors that reside on GPUs, then this\n method will *not* perform any additional synchronization. 
This\n should be done beforehand, separately, through a call to\n \"wait()\" (except within callbacks, for which it's already being\n", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"} {"text": "taken care of by \"then()\").\n Returns:\n The value held by this \"Future\". If the function (callback or\n RPC) creating the value has thrown an error, this \"value()\"\n method will also throw an error.\n\n Return type:\n *T*\n\nwait()\n Block until the value of this \"Future\" is ready.\n\n If the value contains tensors that reside on GPUs, then an\n additional synchronization is performed with the kernels\n (executing on the device) which may be asynchronously populating\n those tensors. Such sync is non-blocking, which means that\n \"wait()\" will insert the necessary instructions in the current\n streams to ensure that further operations enqueued on those\n streams will be properly scheduled after the async kernels but,\n once that is done, \"wait()\" will return, even if those kernels\n are still running. No further synchronization is required when\n accessing and using the values, as long as one doesn't change\n streams.\n", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"} {"text": "streams.\n Returns:\n The value held by this \"Future\". If the function (callback or\n RPC) creating the value has thrown an error, this \"wait\"\n method will also throw an error.\n\n Return type:\n *T*\n\ntorch.futures.collect_all(futures)\nCollects the provided \"Future\" objects into a single combined\n \"Future\" that is completed when all of the sub-futures are\n completed.\nParameters:\n futures (list) -- a list of \"Future\" objects.\nReturns:\n Returns a \"Future\" object to a list of the passed in Futures.\nReturn type:\n Future[List[Future]]\nExample::\n >>> fut0 = torch.futures.Future()\n >>> fut1 = torch.futures.Future()\n >>> fut = torch.futures.collect_all([fut0, fut1])\n >>> fut0.set_result(0)\n >>> fut1.set_result(1)\n >>> fut_list = fut.wait()\n >>> print(f\"fut0 result = {fut_list[0].wait()}\")\n fut0 result = 0\n >>> print(f\"fut1 result = {fut_list[1].wait()}\")\n fut1 result = 1", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"} {"text": "fut1 result = 1\ntorch.futures.wait_all(futures)\nWaits for all provided futures to be complete, and returns the list\n of completed values. If any of the futures encounters an error, the\n method will exit early and report the error not waiting for other\n futures to complete.\nParameters:\n futures (list) -- a list of \"Future\" object.\nReturns:\n A list of the completed \"Future\" results. This method will throw\n an error if \"wait\" on any \"Future\" throws.\nReturn type:\n List", "source": "https://pytorch.org/docs/stable/futures.html", "category": "pytorch docs"} {"text": "torch.config\ntorch.config.show()\nReturn a human-readable string with descriptions of the\n configuration of PyTorch.\ntorch.config.parallel_info()\nReturns detailed string with parallelization settings", "source": "https://pytorch.org/docs/stable/config_mod.html", "category": "pytorch docs"} {"text": "torch.profiler\nOverview\nPyTorch Profiler is a tool that allows the collection of performance\nmetrics during training and inference. 
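As a quick, minimal sketch (CPU-only; see the full examples further below for
schedules, trace handlers, and GPU activities):

    import torch
    from torch.profiler import profile, ProfilerActivity

    with profile(activities=[ProfilerActivity.CPU], record_shapes=True) as prof:
        torch.mm(torch.randn(256, 256), torch.randn(256, 256))

    print(prof.key_averages().table(sort_by="cpu_time_total", row_limit=5))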
Profiler's context manager API\ncan be used to better understand what model operators are the most\nexpensive, examine their input shapes and stack traces, study device\nkernel activity and visualize the execution trace.\nNote:\nAn earlier version of the API in \"torch.autograd\" module is\n considered legacy and will be deprecated.\nAPI Reference\nclass torch.profiler._KinetoProfile(*, activities=None, record_shapes=False, profile_memory=False, with_stack=False, with_flops=False, with_modules=False, experimental_config=None)\nLow-level profiler wrap the autograd profile\nParameters:\n * activities (iterable) -- list of activity groups (CPU,\n CUDA) to use in profiling, supported values:\n \"torch.profiler.ProfilerActivity.CPU\",\n \"torch.profiler.ProfilerActivity.CUDA\". Default value:", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"} {"text": "ProfilerActivity.CPU and (when available)\n ProfilerActivity.CUDA.\n * **record_shapes** (*bool*) -- save information about\n operator's input shapes.\n\n * **profile_memory** (*bool*) -- track tensor memory\n allocation/deallocation.\n\n * **with_stack** (*bool*) -- record source information (file and\n line number) for the ops.\n\n * **with_flops** (*bool*) -- use formula to estimate the FLOPS\n of specific operators (matrix multiplication and 2D\n convolution).\n\n * **with_modules** (*bool*) -- record module hierarchy\n (including function names) corresponding to the callstack of\n the op. e.g. If module A's forward call's module B's forward\n which contains an aten::add op, then aten::add's module\n hierarchy is A.B Note that this support exist, at the moment,\n only for TorchScript models and not eager mode models.\n\n * **experimental_config** (*_ExperimentalConfig*) -- A set of\n", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"} {"text": "experimental options used by profiler libraries like Kineto.\n Note, backward compatibility is not guaranteed.\nNote:\n This API is experimental and subject to change in the\n future.Enabling shape and stack tracing results in additional\n overhead. 
When record_shapes=True is specified, profiler will\n temporarily hold references to the tensors; that may further\n prevent certain optimizations that depend on the reference count\n and introduce extra tensor copies.\n\nadd_metadata(key, value)\n Adds a user defined metadata with a string key and a string\n value into the trace file\n\nadd_metadata_json(key, value)\n Adds a user defined metadata with a string key and a valid json\n value into the trace file\n\nevents()\n Returns the list of unaggregated profiler events, to be used in\n the trace callback or after the profiling is finished\n\nexport_chrome_trace(path)\n Exports the collected trace in Chrome JSON format.\n", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"} {"text": "export_stacks(path, metric='self_cpu_time_total')\n Save stack traces in a file in a format suitable for\n visualization.\n\n Parameters:\n * **path** (*str*) -- save stacks file to this location;\n\n * **metric** (*str*) -- metric to use: \"self_cpu_time_total\"\n or \"self_cuda_time_total\"\n\n Note:\n\n Example of using FlameGraph tool:\n\n * git clone https://github.com/brendangregg/FlameGraph\n\n * cd FlameGraph\n\n * ./flamegraph.pl --title \"CPU time\" --countname \"us.\"\n profiler.stacks > perf_viz.svg\n\nkey_averages(group_by_input_shape=False, group_by_stack_n=0)\n Averages events, grouping them by operator name and (optionally)\n input shapes and stack.\n\n Note:\n\n To use shape/stack functionality make sure to set\n record_shapes/with_stack when creating profiler context\n manager.\n", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"} {"text": "manager.\nclass torch.profiler.profile(*, activities=None, schedule=None, on_trace_ready=None, record_shapes=False, profile_memory=False, with_stack=False, with_flops=False, with_modules=False, experimental_config=None, use_cuda=None)\nProfiler context manager.\nParameters:\n * activities (iterable) -- list of activity groups (CPU,\n CUDA) to use in profiling, supported values:\n \"torch.profiler.ProfilerActivity.CPU\",\n \"torch.profiler.ProfilerActivity.CUDA\". Default value:\n ProfilerActivity.CPU and (when available)\n ProfilerActivity.CUDA.\n * **schedule** (*Callable*) -- callable that takes step (int) as\n a single parameter and returns \"ProfilerAction\" value that\n specifies the profiler action to perform at each step.\n\n * **on_trace_ready** (*Callable*) -- callable that is called at\n each step when \"schedule\" returns\n \"ProfilerAction.RECORD_AND_SAVE\" during the profiling.\n", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"} {"text": "\n\nrecord_shapes (bool) -- save information about\n operator's input shapes.\n\n\nprofile_memory (bool) -- track tensor memory\n allocation/deallocation.\n\n\nwith_stack (bool) -- record source information (file and\n line number) for the ops.\n\n\nwith_flops (bool) -- use formula to estimate the FLOPs\n (floating point operations) of specific operators (matrix\n multiplication and 2D convolution).\n\n\nwith_modules (bool) -- record module hierarchy\n (including function names) corresponding to the callstack of\n the op. e.g. 
If module A's forward call's module B's forward\n which contains an aten::add op, then aten::add's module\n hierarchy is A.B Note that this support exist, at the moment,\n only for TorchScript models and not eager mode models.\n\n\nexperimental_config (_ExperimentalConfig) -- A set of\n experimental options used for Kineto library features. Note,\n\n\n\n", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"} {"text": "backward compatibility is not guaranteed.\n * **use_cuda** (*bool*) --\n\n Deprecated since version 1.8.1: use \"activities\" instead.\n\nNote:\n Use \"schedule()\" to generate the callable schedule. Non-default\n schedules are useful when profiling long training jobs and allow\n the user to obtain multiple traces at the different iterations of\n the training process. The default schedule simply records all the\n events continuously for the duration of the context manager.\n\nNote:\n Use \"tensorboard_trace_handler()\" to generate result files for T\n ensorBoard:\"on_trace_ready=torch.profiler.tensorboard_trace_hand\n ler(dir_name)\"After profiling, result files can be found in the\n specified directory. Use the command:\"tensorboard --logdir\n dir_name\"to see the results in TensorBoard. For more information,\n see PyTorch Profiler TensorBoard Plugin\n\nNote:\n Enabling shape and stack tracing results in additional overhead.\n", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"} {"text": "When record_shapes=True is specified, profiler will temporarily\n hold references to the tensors; that may further prevent certain\n optimizations that depend on the reference count and introduce\n extra tensor copies.\nExamples:\n with torch.profiler.profile(\n activities=[\n torch.profiler.ProfilerActivity.CPU,\n torch.profiler.ProfilerActivity.CUDA,\n ]\n ) as p:\n code_to_profile()\n print(p.key_averages().table(\n sort_by=\"self_cuda_time_total\", row_limit=-1))\n\nUsing the profiler's \"schedule\", \"on_trace_ready\" and \"step\"\n functions:\n # Non-default profiler schedule allows user to turn profiler on and off\n # on different iterations of the training loop;\n # trace_handler is called every time a new trace becomes available\n def trace_handler(prof):\n print(prof.key_averages().table(\n sort_by=\"self_cuda_time_total\", row_limit=-1))\n", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"} {"text": "prof.export_chrome_trace(\"/tmp/test_trace_\" + str(prof.step_num) + \".json\")\n with torch.profiler.profile(\n activities=[\n torch.profiler.ProfilerActivity.CPU,\n torch.profiler.ProfilerActivity.CUDA,\n ],\n\n # In this example with wait=1, warmup=1, active=2,\n # profiler will skip the first step/iteration,\n # start warming up on the second, record\n # the third and the forth iterations,\n # after which the trace will become available\n # and on_trace_ready (when set) is called;\n # the cycle repeats starting with the next step\n\n schedule=torch.profiler.schedule(\n wait=1,\n warmup=1,\n active=2),\n on_trace_ready=trace_handler\n # on_trace_ready=torch.profiler.tensorboard_trace_handler('./log')\n # used when outputting for tensorboard\n ) as p:\n for iter in range(N):\n", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"} {"text": "for iter in range(N):\n code_iteration_to_profile(iter)\n # send a signal to the profiler that the next iteration has started\n p.step()\nstep()\n Signals the profiler that the next profiling step has 
started.\n\nclass torch.profiler.ProfilerAction(value)\nProfiler actions that can be taken at the specified intervals\nclass torch.profiler.ProfilerActivity\nMembers:\nCPU\nCUDA\nproperty name\ntorch.profiler.schedule(*, wait, warmup, active, repeat=0, skip_first=0)\nReturns a callable that can be used as profiler \"schedule\"\n argument. The profiler will skip the first \"skip_first\" steps, then\n wait for \"wait\" steps, then do the warmup for the next \"warmup\"\n steps, then do the active recording for the next \"active\" steps and\n then repeat the cycle starting with \"wait\" steps. The optional\n number of cycles is specified with the \"repeat\" parameter, the zero", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"} {"text": "value means that the cycles will continue until the profiling is\n finished.\nReturn type:\n Callable\ntorch.profiler.tensorboard_trace_handler(dir_name, worker_name=None, use_gzip=False)\nOutputs tracing files to directory of \"dir_name\", then that\n directory can be directly delivered to tensorboard as logdir.\n \"worker_name\" should be unique for each worker in distributed\n scenario, it will be set to '[hostname]_[pid]' by default.\nIntel Instrumentation and Tracing Technology APIs\ntorch.profiler.itt.is_available()\nCheck if ITT feature is available or not\ntorch.profiler.itt.mark(msg)\nDescribe an instantaneous event that occurred at some point.\nParameters:\n msg (str) -- ASCII message to associate with the event.\ntorch.profiler.itt.range_push(msg)\nPushes a range onto a stack of nested range span. Returns zero-\n based depth of the range that is started.\nParameters:", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"} {"text": "Parameters:\n msg (str) -- ASCII message to associate with range\ntorch.profiler.itt.range_pop()\nPops a range off of a stack of nested range spans. Returns the\n zero-based depth of the range that is ended.", "source": "https://pytorch.org/docs/stable/profiler.html", "category": "pytorch docs"} {"text": "Distributed RPC Framework\nThe distributed RPC framework provides mechanisms for multi-machine\nmodel training through a set of primitives to allow for remote\ncommunication, and a higher-level API to automatically differentiate\nmodels split across several machines.\nWarning:\nAPIs in the RPC package are stable. There are multiple ongoing work\n items to improve performance and error handling, which will ship in\n future releases.\nWarning:\nCUDA support was introduced in PyTorch 1.9 and is still a beta\n feature. Not all features of the RPC package are yet compatible with\n CUDA support and thus their use is discouraged. These unsupported\n features include: RRefs, JIT compatibility, dist autograd and dist\n optimizer, and profiling. These shortcomings will be addressed in\n future releases.\nNote:\nPlease refer to PyTorch Distributed Overview for a brief\n introduction to all features related to distributed training.\nBasics", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "Basics\nThe distributed RPC framework makes it easy to run functions remotely,\nsupports referencing remote objects without copying the real data\naround, and provides autograd and optimizer APIs to transparently run\nbackward and update parameters across RPC boundaries. 
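As a rough preview of how these pieces fit together, consider the following
minimal sketch. It assumes a two-process setup in which "init_rpc()" has
already been called on this process (as "worker0") and a peer named "worker1"
is running; each of the calls used here is documented in the sections below.

    import torch
    import torch.distributed.rpc as rpc
    import torch.distributed.autograd as dist_autograd

    t = torch.ones(2, requires_grad=True)

    # Remote Procedure Call: run torch.add on "worker1" and fetch the result.
    y = rpc.rpc_sync("worker1", torch.add, args=(t, 1))

    # Remote Reference: keep the result on "worker1" and hold a handle to it.
    rref = rpc.remote("worker1", torch.add, args=(t, 1))

    # Distributed autograd: record the forward over RPC, run the backward
    # pass, and read the accumulated gradient for t.
    with dist_autograd.context() as context_id:
        loss = rpc.rpc_sync("worker1", torch.add, args=(t, 1)).sum()
        dist_autograd.backward(context_id, [loss])
        grads = dist_autograd.get_gradients(context_id)
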
These features\ncan be categorized into four sets of APIs.\n\nRemote Procedure Call (RPC) supports running a function on the\n specified destination worker with the given arguments and getting\n the return value back or creating a reference to the return value.\n There are three main RPC APIs: \"rpc_sync()\" (synchronous),\n \"rpc_async()\" (asynchronous), and \"remote()\" (asynchronous and\n returns a reference to the remote return value). Use the\n synchronous API if the user code cannot proceed without the return\n value. Otherwise, use the asynchronous API to get a future, and\n wait on the future when the return value is needed on the caller.\n The \"remote()\" API is useful when the requirement is to create\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "something remotely but never need to fetch it to the caller.\n Imagine the case that a driver process is setting up a parameter\n server and a trainer. The driver can create an embedding table on\n the parameter server and then share the reference to the embedding\n table with the trainer, but itself will never use the embedding\n table locally. In this case, \"rpc_sync()\" and \"rpc_async()\" are no\n longer appropriate, as they always imply that the return value will\n be returned to the caller immediately or in the future.\n\nRemote Reference (RRef) serves as a distributed shared pointer\n to a local or remote object. It can be shared with other workers\n and reference counting will be handled transparently. Each RRef\n only has one owner and the object only lives on that owner. Non-\n owner workers holding RRefs can get copies of the object from the\n owner by explicitly requesting it. This is useful when a worker\n needs to access some data object, but itself is neither the creator\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "(the caller of \"remote()\") or the owner of the object. The\n distributed optimizer, as we will discuss below, is one example of\n such use cases.\n\n\nDistributed Autograd stitches together local autograd engines\n on all the workers involved in the forward pass, and automatically\n reach out to them during the backward pass to compute gradients.\n This is especially helpful if the forward pass needs to span\n multiple machines when conducting, e.g., distributed model parallel\n training, parameter-server training, etc. With this feature, user\n code no longer needs to worry about how to send gradients across\n RPC boundaries and in which order should the local autograd engines\n be launched, which can become quite complicated where there are\n nested and inter-dependent RPC calls in the forward pass.\n\n\nDistributed Optimizer's constructor takes a \"Optimizer()\"\n (e.g., \"SGD()\", \"Adagrad()\", etc.) and a list of parameter RRefs,\n\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "creates an \"Optimizer()\" instance on each distinct RRef owner, and\n updates parameters accordingly when running \"step()\". When you have\n distributed forward and backward passes, parameters and gradients\n will be scattered across multiple workers, and hence it requires an\n optimizer on each of the involved workers. Distributed Optimizer\n wraps all those local optimizers into one, and provides a concise\n constructor and \"step()\" API.\nRPC\nBefore using RPC and distributed autograd primitives, initialization\nmust take place. 
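For example, a minimal two-process initialization sketch (assuming both
processes run on the same host and rendezvous through the "MASTER_ADDR" and
"MASTER_PORT" environment variables; "init_rpc()" itself is described next)
looks like this:

    import os
    import torch.distributed.rpc as rpc

    os.environ["MASTER_ADDR"] = "localhost"
    os.environ["MASTER_PORT"] = "29500"

    # Process 0:
    rpc.init_rpc("worker0", rank=0, world_size=2)
    # ... issue RPCs, create RRefs, run distributed autograd ...
    rpc.shutdown()

    # Process 1 runs the same snippet with:
    # rpc.init_rpc("worker1", rank=1, world_size=2)
    # rpc.shutdown()
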
To initialize the RPC framework we need to use\n\"init_rpc()\" which would initialize the RPC framework, RRef framework\nand distributed autograd.\ntorch.distributed.rpc.init_rpc(name, backend=None, rank=- 1, world_size=None, rpc_backend_options=None)\nInitializes RPC primitives such as the local RPC agent and\n distributed autograd, which immediately makes the current process\n ready to send and receive RPCs.\nParameters:", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "ready to send and receive RPCs.\nParameters:\n * name (str) -- a globally unique name of this node.\n (e.g., \"Trainer3\", \"ParameterServer2\", \"Master\", \"Worker1\")\n Name can only contain number, alphabet, underscore, colon,\n and/or dash, and must be shorter than 128 characters.\n * **backend** (*BackendType**, **optional*) -- The type of RPC\n backend implementation. Supported values is\n \"BackendType.TENSORPIPE\" (the default). See Backends for more\n information.\n\n * **rank** (*int*) -- a globally unique id/rank of this node.\n\n * **world_size** (*int*) -- The number of workers in the group.\n\n * **rpc_backend_options** (*RpcBackendOptions**, **optional*) --\n The options passed to the RpcAgent constructor. It must be an\n agent-specific subclass of \"RpcBackendOptions\" and contains\n agent-specific initialization configurations. By default, for\n all agents, it sets the default timeout to 60 seconds and\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "performs the rendezvous with an underlying process group\n initialized using \"init_method = \"env://\"\", meaning that\n environment variables \"MASTER_ADDR\" and \"MASTER_PORT\" need to\n be set properly. See Backends for more information and find\n which options are available.\nThe following APIs allow users to remotely execute functions as well\nas create references (RRefs) to remote data objects. In these APIs,\nwhen passing a \"Tensor\" as an argument or a return value, the\ndestination worker will try to create a \"Tensor\" with the same meta\n(i.e., shape, stride, etc.). We intentionally disallow transmitting\nCUDA tensors because it might crash if the device lists on source and\ndestination workers do not match. In such cases, applications can\nalways explicitly move the input tensors to CPU on the caller and move\nit to the desired devices on the callee if necessary.\nWarning:\nTorchScript support in RPC is a prototype feature and subject to", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "change. Since v1.5.0, \"torch.distributed.rpc\" supports calling\n TorchScript functions as RPC target functions, and this will help\n improve parallelism on the callee side as executing TorchScript\n functions does not require GIL.\ntorch.distributed.rpc.rpc_sync(to, func, args=None, kwargs=None, timeout=- 1.0)\nMake a blocking RPC call to run function \"func\" on worker \"to\". RPC\n messages are sent and received in parallel to execution of Python\n code. This method is thread-safe.\nParameters:\n * to (str or WorkerInfo or int) --\n name/rank/\"WorkerInfo\" of the destination worker.\n * **func** (*Callable*) -- a callable function, such as Python\n callables, builtin operators (e.g. 
\"add()\") and annotated\n TorchScript functions.\n\n * **args** (*tuple*) -- the argument tuple for the \"func\"\n invocation.\n\n * **kwargs** (*dict*) -- is a dictionary of keyword arguments\n for the \"func\" invocation.\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "for the \"func\" invocation.\n * **timeout** (*float**, **optional*) -- timeout in seconds to\n use for this RPC. If the RPC does not complete in this amount\n of time, an exception indicating it has timed out will be\n raised. A value of 0 indicates an infinite timeout, i.e. a\n timeout error will never be raised. If not provided, the\n default value set during initialization or with\n \"_set_rpc_timeout\" is used.\n\nReturns:\n Returns the result of running \"func\" with \"args\" and \"kwargs\".\nExample::\n Make sure that \"MASTER_ADDR\" and \"MASTER_PORT\" are set properly\n on both workers. Refer to \"init_process_group()\" API for more\n details. For example,\n export MASTER_ADDR=localhost export MASTER_PORT=5678\n\n Then run the following code in two different processes:\n\n >>> # On worker 0:\n >>> import torch\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\n\nret = rpc.rpc_sync(\"worker1\", torch.add, args=(torch.ones(2), 3))\n >>> rpc.shutdown()\n\n\n\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\n\n Below is an example of running a TorchScript function using RPC.\n\n >>> # On both workers:\n >>> @torch.jit.script\n >>> def my_script_add(t1, t2):\n >>> return torch.add(t1, t2)\n\n >>> # On worker 0:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> ret = rpc.rpc_sync(\"worker1\", my_script_add, args=(torch.ones(2), 3))\n >>> rpc.shutdown()\n\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\n\ntorch.distributed.rpc.rpc_async(to, func, args=None, kwargs=None, timeout=- 1.0)\nMake a non-blocking RPC call to run function \"func\" on worker \"to\".", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "RPC messages are sent and received in parallel to execution of\n Python code. This method is thread-safe. This method will\n immediately return a \"Future\" that can be awaited on.\nParameters:\n * to (str or WorkerInfo or int) --\n name/rank/\"WorkerInfo\" of the destination worker.\n * **func** (*Callable*) -- a callable function, such as Python\n callables, builtin operators (e.g. \"add()\") and annotated\n TorchScript functions.\n\n * **args** (*tuple*) -- the argument tuple for the \"func\"\n invocation.\n\n * **kwargs** (*dict*) -- is a dictionary of keyword arguments\n for the \"func\" invocation.\n\n * **timeout** (*float**, **optional*) -- timeout in seconds to\n use for this RPC. If the RPC does not complete in this amount\n of time, an exception indicating it has timed out will be\n raised. A value of 0 indicates an infinite timeout, i.e. a\n timeout error will never be raised. If not provided, the\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "default value set during initialization or with\n \"_set_rpc_timeout\" is used.\nReturns:\n Returns a \"Future\" object that can be waited on. 
When completed,\n the return value of \"func\" on \"args\" and \"kwargs\" can be\n retrieved from the \"Future\" object.\nWarning:\n Using GPU tensors as arguments or return values of \"func\" is not\n supported since we don't support sending GPU tensors over the\n wire. You need to explicitly copy GPU tensors to CPU before using\n them as arguments or return values of \"func\".\n\nWarning:\n The \"rpc_async\" API does not copy storages of argument tensors\n until sending them over the wire, which could be done by a\n different thread depending on the RPC backend type. The caller\n should make sure that the contents of those tensors stay intact\n until the returned \"Future\" completes.\n\nExample::\n Make sure that \"MASTER_ADDR\" and \"MASTER_PORT\" are set properly", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "on both workers. Refer to \"init_process_group()\" API for more\n details. For example,\n export MASTER_ADDR=localhost export MASTER_PORT=5678\n\n Then run the following code in two different processes:\n\n >>> # On worker 0:\n >>> import torch\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> fut1 = rpc.rpc_async(\"worker1\", torch.add, args=(torch.ones(2), 3))\n >>> fut2 = rpc.rpc_async(\"worker1\", min, args=(1, 2))\n >>> result = fut1.wait() + fut2.wait()\n >>> rpc.shutdown()\n\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\n\n Below is an example of running a TorchScript function using RPC.\n\n >>> # On both workers:\n >>> @torch.jit.script\n >>> def my_script_add(t1, t2):\n >>> return torch.add(t1, t2)\n\n >>> # On worker 0:\n >>> import torch.distributed.rpc as rpc\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\n\nimport torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> fut = rpc.rpc_async(\"worker1\", my_script_add, args=(torch.ones(2), 3))\n >>> ret = fut.wait()\n >>> rpc.shutdown()\n\n\n\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\n\ntorch.distributed.rpc.remote(to, func, args=None, kwargs=None, timeout=- 1.0)\nMake a remote call to run \"func\" on worker \"to\" and return an\n \"RRef\" to the result value immediately. Worker \"to\" will be the\n owner of the returned \"RRef\", and the worker calling \"remote\" is a\n user. The owner manages the global reference count of its \"RRef\",\n and the owner \"RRef\" is only destructed when globally there are no\n living references to it.\nParameters:\n * to (str or WorkerInfo or int) --\n name/rank/\"WorkerInfo\" of the destination worker.", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\nfunc (Callable) -- a callable function, such as Python\n callables, builtin operators (e.g. \"add()\") and annotated\n TorchScript functions.\n\n\nargs (tuple) -- the argument tuple for the \"func\"\n invocation.\n\n\nkwargs (dict) -- is a dictionary of keyword arguments\n for the \"func\" invocation.\n\n\ntimeout (float, optional) -- timeout in seconds for\n this remote call. If the creation of this \"RRef\" on worker\n \"to\" is not successfully processed on this worker within this\n timeout, then the next time there is an attempt to use the\n RRef (such as \"to_here()\"), a timeout will be raised\n indicating this failure. 
A value of 0 indicates an infinite\n timeout, i.e. a timeout error will never be raised. If not\n provided, the default value set during initialization or with\n \"_set_rpc_timeout\" is used.\n\n\n\n\nReturns:\n A user \"RRef\" instance to the result value. Use the blocking API", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\"torch.distributed.rpc.RRef.to_here()\" to retrieve the result\n value locally.\nWarning:\n The \"remote\" API does not copy storages of argument tensors until\n sending them over the wire, which could be done by a different\n thread depending on the RPC backend type. The caller should make\n sure that the contents of those tensors stay intact until the\n returned RRef is confirmed by the owner, which can be checked\n using the \"torch.distributed.rpc.RRef.confirmed_by_owner()\" API.\n\nWarning:\n Errors such as timeouts for the \"remote\" API are handled on a\n best-effort basis. This means that when remote calls initiated by\n \"remote\" fail, such as with a timeout error, we take a best-\n effort approach to error handling. This means that errors are\n handled and set on the resulting RRef on an asynchronous basis.\n If the RRef has not been used by the application before this\n handling (such as \"to_here\" or fork call), then future uses of\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "the \"RRef\" will appropriately raise errors. However, it is\n possible that the user application will use the \"RRef\" before the\n errors are handled. In this case, errors may not be raised as\n they have not yet been handled.\nExample:\n Make sure that ``MASTER_ADDR`` and ``MASTER_PORT`` are set properly\n on both workers. Refer to :meth:`~torch.distributed.init_process_group`\n API for more details. For example,\n\n export MASTER_ADDR=localhost\n export MASTER_PORT=5678\n\n Then run the following code in two different processes:\n\n >>> # On worker 0:\n >>> import torch\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> rref1 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 3))\n >>> rref2 = rpc.remote(\"worker1\", torch.add, args=(torch.ones(2), 1))\n >>> x = rref1.to_here() + rref2.to_here()\n >>> rpc.shutdown()\n\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\n\nimport torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\n\n\n\n Below is an example of running a TorchScript function using RPC.\n\n >>> # On both workers:\n >>> @torch.jit.script\n >>> def my_script_add(t1, t2):\n >>> return torch.add(t1, t2)\n\n >>> # On worker 0:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> rref = rpc.remote(\"worker1\", my_script_add, args=(torch.ones(2), 3))\n >>> rref.to_here()\n >>> rpc.shutdown()\n\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\n\ntorch.distributed.rpc.get_worker_info(worker_name=None)\nGet \"WorkerInfo\" of a given worker name. Use this \"WorkerInfo\" to\n avoid passing an expensive string on every invocation.\nParameters:\n worker_name (str) -- the string name of a worker. 
If", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\"None\", return the the id of the current worker. (default\n \"None\")\nReturns:\n \"WorkerInfo\" instance for the given \"worker_name\" or\n \"WorkerInfo\" of the current worker if \"worker_name\" is \"None\".\ntorch.distributed.rpc.shutdown(graceful=True, timeout=0)\nPerform a shutdown of the RPC agent, and then destroy the RPC\n agent. This stops the local agent from accepting outstanding\n requests, and shuts down the RPC framework by terminating all RPC\n threads. If \"graceful=True\", this will block until all local and\n remote RPC processes reach this method and wait for all outstanding\n work to complete. Otherwise, if \"graceful=False\", this is a local\n shutdown, and it does not wait for other RPC processes to reach\n this method.\nWarning:\n For \"Future\" objects returned by \"rpc_async()\", \"future.wait()\"\n should not be called after \"shutdown()\".\n\nParameters:\n graceful (bool) -- Whether to do a graceful shutdown or", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "not. If True, this will 1) wait until there is no pending system\n messages for \"UserRRefs\" and delete them; 2) block until all\n local and remote RPC processes have reached this method and wait\n for all outstanding work to complete.\nExample::\n Make sure that \"MASTER_ADDR\" and \"MASTER_PORT\" are set properly\n on both workers. Refer to \"init_process_group()\" API for more\n details. For example,\n export MASTER_ADDR=localhost export MASTER_PORT=5678\n\n Then run the following code in two different processes:\n\n >>> # On worker 0:\n >>> import torch\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> # do some work\n >>> result = rpc.rpc_sync(\"worker1\", torch.add, args=(torch.ones(1), 1))\n >>> # ready to shutdown\n >>> rpc.shutdown()\n\n >>> # On worker 1:\n >>> import torch.distributed.rpc as rpc\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\n\nwait for worker 0 to finish work, and then shutdown.\n >>> rpc.shutdown()\n\n\n\n\nclass torch.distributed.rpc.WorkerInfo\nA structure that encapsulates information of a worker in the\n system. Contains the name and ID of the worker. This class is not\n meant to be constructed directly, rather, an instance can be\n retrieved through \"get_worker_info()\" and the result can be passed\n in to functions such as \"rpc_sync()\", \"rpc_async()\", \"remote()\" to\n avoid copying a string on every invocation.\nproperty id\n Globally unique id to identify the worker.\n\nproperty name\n The name of the worker.\n\nThe RPC package also provides decorators which allow applications to\nspecify how a given function should be treated on the callee side.\ntorch.distributed.rpc.functions.async_execution(fn)\nA decorator for a function indicating that the return value of the\n function is guaranteed to be a \"Future\" object and this function", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "can run asynchronously on the RPC callee. More specifically, the\n callee extracts the \"Future\" returned by the wrapped function and\n installs subsequent processing steps as a callback to that\n \"Future\". The installed callback will read the value from the\n \"Future\" when completed and send the value back as the RPC\n response. 
That also means the returned \"Future\" only exists on the\n callee side and is never sent through RPC. This decorator is useful\n when the wrapped function's (\"fn\") execution needs to pause and\n resume due to, e.g., containing \"rpc_async()\" or waiting for other\n signals.\nNote:\n To enable asynchronous execution, applications must pass the\n function object returned by this decorator to RPC APIs. If RPC\n detected attributes installed by this decorator, it knows that\n this function returns a \"Future\" object and will handle that\n accordingly. However, this does not mean this decorator has to be\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "outmost one when defining a function. For example, when combined\n with \"@staticmethod\" or \"@classmethod\",\n \"@rpc.functions.async_execution\" needs to be the inner decorator\n to allow the target function be recognized as a static or class\n function. This target function can still execute asynchronously\n because, when accessed, the static or class method preserves\n attributes installed by \"@rpc.functions.async_execution\".\nExample::\n The returned \"Future\" object can come from \"rpc_async()\",\n \"then()\", or \"Future\" constructor. The example below shows\n directly using the \"Future\" returned by \"then()\".\n >>> from torch.distributed import rpc\n >>>\n >>> # omitting setup and shutdown RPC\n >>>\n >>> # On all workers\n >>> @rpc.functions.async_execution\n >>> def async_add_chained(to, x, y, z):\n >>> # This function runs on \"worker1\" and returns immediately when\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\n\n# the callback is installed through the `then(cb)` API. In the\n >>> # mean time, the `rpc_async` to \"worker2\" can run concurrently.\n >>> # When the return value of that `rpc_async` arrives at\n >>> # \"worker1\", \"worker1\" will run the lambda function accordingly\n >>> # and set the value for the previously returned `Future`, which\n >>> # will then trigger RPC to send the result back to \"worker0\".\n >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then(\n >>> lambda fut: fut.wait() + z\n >>> )\n >>>\n >>> # On worker0\n >>> ret = rpc.rpc_sync(\n >>> \"worker1\",\n >>> async_add_chained,\n >>> args=(\"worker2\", torch.ones(2), 1, 1)\n >>> )\n >>> print(ret) # prints tensor([3., 3.])\n\n\n\n\n When combined with TorchScript decorators, this decorator must\n be the outmost one.\n\n >>> from torch import Tensor\n >>> from torch.futures import Future\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\n\nfrom torch.futures import Future\n >>> from torch.distributed import rpc\n >>>\n >>> # omitting setup and shutdown RPC\n >>>\n >>> # On all workers\n >>> @torch.jit.script\n >>> def script_add(x: Tensor, y: Tensor) -> Tensor:\n >>> return x + y\n >>>\n >>> @rpc.functions.async_execution\n >>> @torch.jit.script\n >>> def async_add(to: str, x: Tensor, y: Tensor) -> Future[Tensor]:\n >>> return rpc.rpc_async(to, script_add, (x, y))\n >>>\n >>> # On worker0\n >>> ret = rpc.rpc_sync(\n >>> \"worker1\",\n >>> async_add,\n >>> args=(\"worker2\", torch.ones(2), 1)\n >>> )\n >>> print(ret) # prints tensor([2., 2.])\n\n\n\n When combined with static or class method, this decorator must\n be the inner one.\n\n >>> from torch.distributed import rpc\n >>>\n >>> # omitting setup and shutdown RPC\n >>>\n >>> # On all workers\n >>> class AsyncExecutionClass:\n >>>\n", 
"source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\n\nclass AsyncExecutionClass:\n >>>\n >>> @staticmethod\n >>> @rpc.functions.async_execution\n >>> def static_async_add(to, x, y, z):\n >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then(\n >>> lambda fut: fut.wait() + z\n >>> )\n >>>\n >>> @classmethod\n >>> @rpc.functions.async_execution\n >>> def class_async_add(cls, to, x, y, z):\n >>> ret_fut = torch.futures.Future()\n >>> rpc.rpc_async(to, torch.add, args=(x, y)).then(\n >>> lambda fut: ret_fut.set_result(fut.wait() + z)\n >>> )\n >>> return ret_fut\n >>>\n >>> @rpc.functions.async_execution\n >>> def bound_async_add(self, to, x, y, z):\n >>> return rpc.rpc_async(to, torch.add, args=(x, y)).then(\n >>> lambda fut: fut.wait() + z\n >>> )\n >>>\n >>> # On worker0\n >>> ret = rpc.rpc_sync(\n >>> \"worker1\",\n\n\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\n\n\"worker1\",\n >>> AsyncExecutionClass.static_async_add,\n >>> args=(\"worker2\", torch.ones(2), 1, 2)\n >>> )\n >>> print(ret) # prints tensor([4., 4.])\n >>>\n >>> ret = rpc.rpc_sync(\n >>> \"worker1\",\n >>> AsyncExecutionClass.class_async_add,\n >>> args=(\"worker2\", torch.ones(2), 1, 2)\n >>> )\n >>> print(ret) # prints tensor([4., 4.])\n\n\n\n\n This decorator also works with RRef helpers, i.e., .\n \"torch.distributed.rpc.RRef.rpc_sync()\",\n \"torch.distributed.rpc.RRef.rpc_async()\", and\n \"torch.distributed.rpc.RRef.remote()\".\n\n >>> from torch.distributed import rpc\n >>>\n >>> # reuse the AsyncExecutionClass class above\n >>> rref = rpc.remote(\"worker1\", AsyncExecutionClass)\n >>> ret = rref.rpc_sync().static_async_add(\"worker2\", torch.ones(2), 1, 2)\n >>> print(ret) # prints tensor([4., 4.])\n >>>\n >>> rref = rpc.remote(\"worker1\", AsyncExecutionClass)\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\n\nret = rref.rpc_async().static_async_add(\"worker2\", torch.ones(2), 1, 2).wait()\n >>> print(ret) # prints tensor([4., 4.])\n >>>\n >>> rref = rpc.remote(\"worker1\", AsyncExecutionClass)\n >>> ret = rref.remote().static_async_add(\"worker2\", torch.ones(2), 1, 2).to_here()\n >>> print(ret) # prints tensor([4., 4.])\n\n\n\nBackends\nThe RPC module can leverage different backends to perform the\ncommunication between the nodes. The backend to be used can be\nspecified in the \"init_rpc()\" function, by passing a certain value of\nthe \"BackendType\" enum. Regardless of what backend is used, the rest\nof the RPC API won't change. Each backend also defines its own\nsubclass of the \"RpcBackendOptions\" class, an instance of which can\nalso be passed to \"init_rpc()\" to configure the backend's behavior.\nclass torch.distributed.rpc.BackendType(value)\nAn enum class of available backends.\nPyTorch ships with a builtin \"BackendType.TENSORPIPE\" backend.", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "Additional ones can be registered using the \"register_backend()\"\n function.\nclass torch.distributed.rpc.RpcBackendOptions\nAn abstract structure encapsulating the options passed into the RPC\n backend. An instance of this class can be passed in to \"init_rpc()\"\n in order to initialize RPC with specific configurations, such as\n the RPC timeout and \"init_method\" to be used.\nproperty init_method\n URL specifying how to initialize the process group. 
Default is\n \"env://\"\n\nproperty rpc_timeout\n A float indicating the timeout to use for all RPCs. If an RPC\n does not complete in this timeframe, it will complete with an\n exception indicating that it has timed out.\n\nTensorPipe Backend\n~~~~~~~~~~~~~~~~~~\nThe TensorPipe agent, which is the default, leverages the TensorPipe\nlibrary, which provides a natively point-to-point communication\nprimitive specifically suited for machine learning that fundamentally\naddresses some of the limitations of Gloo. Compared to Gloo, it has", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "the advantage of being asynchronous, which allows a large number of\ntransfers to occur simultaneously, each at their own speed, without\nblocking each other. It will only open pipes between pairs of nodes\nwhen needed, on demand, and when one node fails only its incident\npipes will be closed, while all other ones will keep working as\nnormal. In addition, it is able to support multiple different\ntransports (TCP, of course, but also shared memory, NVLink,\nInfiniBand, ...) and can automatically detect their availability and\nnegotiate the best transport to use for each pipe.\nThe TensorPipe backend has been introduced in PyTorch v1.6 and is\nbeing actively developed. At the moment, it only supports CPU tensors,\nwith GPU support coming soon. It comes with a TCP-based transport,\njust like Gloo. It is also able to automatically chunk and multiplex\nlarge tensors over multiple sockets and threads in order to achieve\nvery high bandwidths. The agent will be able to pick the best", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "transport on its own, with no intervention required.\nExample:\n\n\n\nimport os\nfrom torch.distributed import rpc\nos.environ['MASTER_ADDR'] = 'localhost'\nos.environ['MASTER_PORT'] = '29500'\nrpc.init_rpc(\n \"worker1\",\n rank=0,\n world_size=2,\n rpc_backend_options=rpc.TensorPipeRpcBackendOptions(\n num_worker_threads=8,\n rpc_timeout=20 # 20 second timeout\n )\n)\nomitting init_rpc invocation on worker2\n\n\n\nclass torch.distributed.rpc.TensorPipeRpcBackendOptions(*, num_worker_threads=16, rpc_timeout=60.0, init_method='env://', device_maps=None, devices=None, _transports=None, _channels=None)\nThe backend options for \"TensorPipeAgent\", derived from\n \"RpcBackendOptions\".\nParameters:\n * num_worker_threads (int, optional) -- The number of\n threads in the thread-pool used by \"TensorPipeAgent\" to\n execute requests (default: 16).", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "execute requests (default: 16).\n * **rpc_timeout** (*float**, **optional*) -- The default\n timeout, in seconds, for RPC requests (default: 60 seconds).\n If the RPC has not completed in this timeframe, an exception\n indicating so will be raised. Callers can override this\n timeout for individual RPCs in \"rpc_sync()\" and \"rpc_async()\"\n if necessary.\n\n * **init_method** (*str**, **optional*) -- The URL to initialize\n the distributed store used for rendezvous. It takes any value\n accepted for the same argument of \"init_process_group()\"\n (default: \"env://\").\n\n * **device_maps** (*Dict**[**str**, **Dict**]**, **optional*) --\n Device placement mappings from this worker to the callee. Key\n is the callee worker name and value the dictionary (\"Dict\" of\n \"int\", \"str\", or \"torch.device\") that maps this worker's\n devices to the callee worker's devices. 
(default: \"None\")\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\ndevices (List[int, str, or \"torch.device\"], optional) --\n all local CUDA devices used by RPC agent. By Default, it will\n be initialized to all local devices from its own \"device_maps\"\n and corresponding devices from its peers' \"device_maps\". When\n processing CUDA RPC requests, the agent will properly\n synchronize CUDA streams for all devices in this \"List\".\n\nproperty device_maps\n The device map locations.\n\nproperty devices\n All devices used by the local agent.\n\nproperty init_method\n URL specifying how to initialize the process group. Default is\n \"env://\"\n\nproperty num_worker_threads\n The number of threads in the thread-pool used by\n \"TensorPipeAgent\" to execute requests.\n\nproperty rpc_timeout\n A float indicating the timeout to use for all RPCs. If an RPC\n does not complete in this timeframe, it will complete with an\n exception indicating that it has timed out.\n\nset_device_map(to, device_map)", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "set_device_map(to, device_map)\n Set device mapping between each RPC caller and callee pair. This\n function can be called multiple times to incrementally add\n device placement configurations.\n\n Parameters:\n * **to** (*str*) -- Callee name.\n\n * **device_map** (*Dict of python:int**, **str**, or\n **torch.device*) -- Device placement mappings from this\n worker to the callee. This map must be invertible.\n\n -[ Example ]-\n\n >>> # both workers\n >>> def add(x, y):\n >>> print(x) # tensor([1., 1.], device='cuda:1')\n >>> return x + y, (x + y).to(2)\n >>>\n >>> # on worker 0\n >>> options = TensorPipeRpcBackendOptions(\n >>> num_worker_threads=8,\n >>> device_maps={\"worker1\": {0: 1}}\n >>> # maps worker0's cuda:0 to worker1's cuda:1\n >>> )\n >>> options.set_device_map(\"worker1\", {1: 2})\n >>> # maps worker0's cuda:1 to worker1's cuda:2\n >>>\n >>> rpc.init_rpc(\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\n\n >>> rpc.init_rpc(\n >>> \"worker0\",\n >>> rank=0,\n >>> world_size=2,\n >>> backend=rpc.BackendType.TENSORPIPE,\n >>> rpc_backend_options=options\n >>> )\n >>>\n >>> x = torch.ones(2)\n >>> rets = rpc.rpc_sync(\"worker1\", add, args=(x.to(0), 1))\n >>> # The first argument will be moved to cuda:1 on worker1. When\n >>> # sending the return value back, it will follow the invert of\n >>> # the device map, and hence will be moved back to cuda:0 and\n >>> # cuda:1 on worker0\n >>> print(rets[0]) # tensor([2., 2.], device='cuda:0')\n >>> print(rets[1]) # tensor([2., 2.], device='cuda:1')\n\n\n\n\nset_devices(devices)\n Set local devices used by the TensorPipe RPC agent. When\n processing CUDA RPC requests, the TensorPipe RPC agent will\n properly synchronize CUDA streams for all devices in this\n \"List\".\n\n Parameters:\n **devices** (*List of python:int**, **str**, or\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "*torch.device) -- local devices used by the TensorPipe RPC\n agent.\nNote:\nThe RPC framework does not automatically retry any \"rpc_sync()\",\n \"rpc_async()\" and \"remote()\" calls. The reason being that there is\n no way the RPC framework can determine whether an operation is\n idempotent or not and whether it is safe to retry. As a result, it\n is the application's responsibility to deal with failures and retry\n if necessary. 
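One possible shape for such application-level retry logic is sketched below;
the "retry_rpc" helper is purely illustrative (it is not part of
"torch.distributed.rpc") and is only safe when the target function is
idempotent:

    import time
    import torch.distributed.rpc as rpc

    def retry_rpc(to, func, args=(), retries=3, backoff=1.0):
        # Illustrative helper: retry a synchronous RPC with exponential backoff.
        for attempt in range(retries):
            try:
                return rpc.rpc_sync(to, func, args=args)
            except Exception:
                # Narrow this to the failure types raised by your backend.
                if attempt == retries - 1:
                    raise
                time.sleep(backoff * (2 ** attempt))
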
RPC communication is based on TCP and as a result\n failures could happen due to network failures or intermittent\n network connectivity issues. In such scenarios, the application\n needs to retry appropriately with reasonable backoffs to ensure the\n network isn't overwhelmed by aggressive retries.\nRRef\nWarning:\nRRefs are not currently supported when using CUDA tensors\nAn \"RRef\" (Remote REFerence) is a reference to a value of some type\n\"T\" (e.g. \"Tensor\") on a remote worker. This handle keeps the\nreferenced remote value alive on the owner, but there is no", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "implication that the value will be transferred to the local worker in\nthe future. RRefs can be used in multi-machine training by holding\nreferences to nn.Modules that exist on other workers, and calling the\nappropriate functions to retrieve or modify their parameters during\ntraining. See Remote Reference Protocol for more details.\nclass torch.distributed.rpc.RRef\nMore Information about RRef\n^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\nRemote Reference Protocol\n\n\nBackground\n\n\nAssumptions\n\n\nRRef Lifetime\n\n\nDesign Reasoning\n\n\nImplementation\n\n\n\n\nProtocol Scenarios\n\n\nUser Share RRef with Owner as Return Value\n\n\nUser Share RRef with Owner as Argument\n\n\nOwner Share RRef with User\n\n\nUser Share RRef with User\n\n\n\n\nRemoteModule\nWarning:\nRemoteModule is not currently supported when using CUDA tensors\n\"RemoteModule\" is an easy way to create an nn.Module remotely on a\ndifferent process. The actual module resides on a remote host, but the", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "local host has a handle to this module and invoke this module similar\nto a regular nn.Module. The invocation however incurs RPC calls to the\nremote end and can be performed asynchronously if needed via\nadditional APIs supported by RemoteModule.\nclass torch.distributed.nn.api.remote_module.RemoteModule(args, *kwargs)\n A RemoteModule instance can only be created after RPC\n initialization. It creates a user-specified module on a\n specified remote node. It behaves like a regular \"nn.Module\"\n except that the \"forward\" method is executed on the remote node.\n It takes care of autograd recording to ensure the backward pass\n propagates gradients back to the corresponding remote module.\n\n It generates two methods \"forward_async\" and \"forward\" based on\n the signature of the \"forward\" method of \"module_cls\".\n \"forward_async\" runs asynchronously and returns a Future. The\n arguments of \"forward_async\" and \"forward\" are the same as the\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\"forward\" method of the module returned by the \"module_cls\".\n For example, if \"module_cls\" returns an instance of \"nn.Linear\",\n that has \"forward\" method signature: \"def forward(input: Tensor)\n -> Tensor:\", the generated \"RemoteModule\" will have 2 methods\n with the signatures:\n\n \"def forward(input: Tensor) -> Tensor:\"\n \"def forward_async(input: Tensor) -> Future[Tensor]:\"\n\nParameters:\n * remote_device (str) -- Device on the destination worker\n where we'd like to place this module. The format should be\n \"/\", where the device field can be parsed\n as torch.device type. E.g., \"trainer0/cpu\", \"trainer0\",\n \"ps0/cuda:0\". 
In addition, the device field can be optional\n and the default value is \"cpu\".\n * **module_cls** (*nn.Module*) --\n\n Class for the module to be created remotely. For example,\n\n >>> class MyModule(nn.Module):\n >>> def forward(input):\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\n\ndef forward(input):\n >>> return input + 1\n >>>\n >>> module_cls = MyModule\n\n\n\n\n * **args** (*Sequence**, **optional*) -- args to be passed to\n \"module_cls\".\n\n * **kwargs** (*Dict**, **optional*) -- kwargs to be passed to\n \"module_cls\".\n\nReturns:\n A remote module instance which wraps the \"Module\" created by the\n user-provided \"module_cls\", it has a blocking \"forward\" method\n and an asynchronous \"forward_async\" method that returns a future\n of the \"forward\" call on the user-provided module on the remote\n side.\nExample::\n Run the following code in two different processes:\n >>> # On worker 0:\n >>> import torch\n >>> import torch.distributed.rpc as rpc\n >>> from torch import nn, Tensor\n >>> from torch.distributed.nn.api.remote_module import RemoteModule\n >>>\n >>> rpc.init_rpc(\"worker0\", rank=0, world_size=2)\n >>> remote_linear_module = RemoteModule(\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\n\nremote_linear_module = RemoteModule(\n >>> \"worker1/cpu\", nn.Linear, args=(20, 30),\n >>> )\n >>> input = torch.randn(128, 20)\n >>> ret_fut = remote_linear_module.forward_async(input)\n >>> ret = ret_fut.wait()\n >>> rpc.shutdown()\n\n\n\n >>> # On worker 1:\n >>> import torch\n >>> import torch.distributed.rpc as rpc\n >>>\n >>> rpc.init_rpc(\"worker1\", rank=1, world_size=2)\n >>> rpc.shutdown()\n\n Furthermore, a more practical example that is combined with\n DistributedDataParallel (DDP) can be found in this tutorial.\n\nget_module_rref()\n Returns an \"RRef\" (\"RRef[nn.Module]\") pointing to the remote\n module.\n\n Return type:\n *RRef*[*Module*]\n\nremote_parameters(recurse=True)\n Returns a list of \"RRef\" pointing to the remote module's\n parameters. This can typically be used in conjuction with\n \"DistributedOptimizer\".\n\n Parameters:\n **recurse** (*bool*) -- if True, then returns parameters of\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "the remote module and all submodules of the remote module.\n Otherwise, returns only parameters that are direct members of\n the remote module.\n Returns:\n A list of \"RRef\" (\"List[RRef[nn.Parameter]]\") to remote\n module's parameters.\n\n Return type:\n *List*[*RRef*[*Parameter*]]\n\nDistributed Autograd Framework\nWarning:\nDistributed autograd is not currently supported when using CUDA\n tensors\nThis module provides an RPC-based distributed autograd framework that\ncan be used for applications such as model parallel training. In\nshort, applications may send and receive gradient recording tensors\nover RPC. In the forward pass, we record when gradient recording\ntensors are sent over RPC and during the backward pass we use this\ninformation to perform a distributed backward pass using RPC. 
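As a rough illustration of this pattern, the sketch below reuses the
"RemoteModule" instance from the example above ("remote_linear_module",
wrapping an "nn.Linear(20, 30)" on "worker1") and combines distributed
autograd with a "DistributedOptimizer"; it assumes RPC has already been
initialized on both workers:

    import torch
    import torch.distributed.autograd as dist_autograd
    from torch.distributed.optim import DistributedOptimizer

    opt = DistributedOptimizer(
        torch.optim.SGD,
        remote_linear_module.remote_parameters(),  # RRefs to remote parameters
        lr=0.05,
    )

    with dist_autograd.context() as context_id:
        out = remote_linear_module.forward(torch.randn(128, 20))
        loss = out.sum()
        # Distributed backward pass over RPC, followed by an optimizer step
        # that updates the parameters on their owner worker.
        dist_autograd.backward(context_id, [loss])
        opt.step(context_id)
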
For more\ndetails see Distributed Autograd Design.", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "details see Distributed Autograd Design.\ntorch.distributed.autograd.backward(context_id: int, roots: List[Tensor], retain_graph=False) -> None\nKicks off the distributed backward pass using the provided roots.\n This currently implements the FAST mode algorithm which assumes all\n RPC messages sent in the same distributed autograd context across\n workers would be part of the autograd graph during the backward\n pass.\nWe use the provided roots to discover the autograd graph and\n compute appropriate dependencies. This method blocks until the\n entire autograd computation is done.\nWe accumulate the gradients in the appropriate\n \"torch.distributed.autograd.context\" on each of the nodes. The\n autograd context to be used is looked up given the \"context_id\"\n that is passed in when \"torch.distributed.autograd.backward()\" is\n called. If there is no valid autograd context corresponding to the\n given ID, we throw an error. You can retrieve the accumulated", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "gradients using the \"get_gradients()\" API.\nParameters:\n * context_id (int) -- The autograd context id for which we\n should retrieve the gradients.\n * **roots** (*list*) -- Tensors which represent the roots of the\n autograd computation. All the tensors should be scalars.\n\n * **retain_graph** (*bool**, **optional*) -- If False, the graph\n used to compute the grad will be freed. Note that in nearly\n all cases setting this option to True is not needed and often\n can be worked around in a much more efficient way. Usually,\n you need to set this to True to run backward multiple times.\n\nExample::\n >>> import torch.distributed.autograd as dist_autograd\n >>> with dist_autograd.context() as context_id:\n >>> pred = model.forward()\n >>> loss = loss_func(pred, loss)\n >>> dist_autograd.backward(context_id, loss)\nclass torch.distributed.autograd.context", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "class torch.distributed.autograd.context\nContext object to wrap forward and backward passes when using\n distributed autograd. The \"context_id\" generated in the \"with\"\n statement is required to uniquely identify a distributed backward\n pass on all workers. 
Each worker stores metadata associated with\n this \"context_id\", which is required to correctly execute a\n distributed autograd pass.\nExample::\n >>> import torch.distributed.autograd as dist_autograd\n >>> with dist_autograd.context() as context_id:\n >>> t1 = torch.rand((3, 3), requires_grad=True)\n >>> t2 = torch.rand((3, 3), requires_grad=True)\n >>> loss = rpc.rpc_sync(\"worker1\", torch.add, args=(t1, t2)).sum()\n >>> dist_autograd.backward(context_id, [loss])\ntorch.distributed.autograd.get_gradients(context_id: int) -> Dict[Tensor, Tensor]\nRetrieves a map from Tensor to the appropriate gradient for that\n Tensor accumulated in the provided context corresponding to the", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "given \"context_id\" as part of the distributed autograd backward\n pass.\nParameters:\n context_id (int) -- The autograd context id for which we\n should retrieve the gradients.\nReturns:\n A map where the key is the Tensor and the value is the\n associated gradient for that Tensor.\nExample::\n >>> import torch.distributed.autograd as dist_autograd\n >>> with dist_autograd.context() as context_id:\n >>> t1 = torch.rand((3, 3), requires_grad=True)\n >>> t2 = torch.rand((3, 3), requires_grad=True)\n >>> loss = t1 + t2\n >>> dist_autograd.backward(context_id, [loss.sum()])\n >>> grads = dist_autograd.get_gradients(context_id)\n >>> print(grads[t1])\n >>> print(grads[t2])\nMore Information about RPC Autograd\n^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^\n\n\nDistributed Autograd Design\n\n\nBackground\n\n\nAutograd recording during the forward pass\n\n\nDistributed Autograd Context\n\n\nDistributed Backward Pass\n\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\nDistributed Backward Pass\n\n\nComputing dependencies\n\n\nFAST mode algorithm\n\n\nSMART mode algorithm\n\n\n\n\nDistributed Optimizer\n\n\nSimple end to end example\n\n\nDistributed Optimizer\nSee the torch.distributed.optim page for documentation on distributed\noptimizers.\nDesign Notes\nThe distributed autograd design note covers the design of the RPC-\nbased distributed autograd framework that is useful for applications\nsuch as model parallel training.\n\nDistributed Autograd Design\n\nThe RRef design note covers the design of the RRef (Remote REFerence)\nprotocol used to refer to values on remote workers by the framework.\n\nRemote Reference Protocol\n\nTutorials\nThe RPC tutorials introduce users to the RPC framework, provide\nseveral example applications using torch.distributed.rpc APIs, and\ndemonstrate how to use the profiler to profile RPC-based workloads.\n\nGetting started with Distributed RPC Framework\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "\n\nGetting started with Distributed RPC Framework\n\n\nImplementing a Parameter Server using Distributed RPC Framework\n\n\nCombining Distributed DataParallel with Distributed RPC Framework\n (covers RemoteModule as well)\n\n\nProfiling RPC-based Workloads\n\n\nImplementing batch RPC processing\n\n\nDistributed Pipeline Parallel\n\n", "source": "https://pytorch.org/docs/stable/rpc.html", "category": "pytorch docs"} {"text": "torch.special\nThe torch.special module, modeled after SciPy's special module.\nFunctions\ntorch.special.airy_ai(input, *, out=None) -> Tensor\nAiry function \\text{Ai}\\left(\\text{input}\\right).\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) 
-- the output tensor.\ntorch.special.bessel_j0(input, *, out=None) -> Tensor\nBessel function of the first kind of order 0.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\ntorch.special.bessel_j1(input, *, out=None) -> Tensor\nBessel function of the first kind of order 1.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\ntorch.special.digamma(input, *, out=None) -> Tensor\nComputes the logarithmic derivative of the gamma function on\n input.", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "input.\n \\digamma(x) = \\frac{d}{dx} \\ln\\left(\\Gamma\\left(x\\right)\\right)\n = \\frac{\\Gamma'(x)}{\\Gamma(x)}\n\nParameters:\n input (Tensor) -- the tensor to compute the digamma\n function on\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nNote:\n This function is similar to SciPy's *scipy.special.digamma*.\n\nNote:\n From PyTorch 1.8 onwards, the digamma function returns *-Inf* for\n *0*. Previously it returned *NaN* for *0*.\n\nExample:\n >>> a = torch.tensor([1, 0.5])\n >>> torch.special.digamma(a)\n tensor([-0.5772, -1.9635])\n\ntorch.special.entr(input, *, out=None) -> Tensor\nComputes the entropy on \"input\" (as defined below), elementwise.\n \\begin{align} \\text{entr(x)} = \\begin{cases} -x * \\ln(x) &\n x > 0 \\\\ 0 & x = 0.0 \\\\ -\\infty & x < 0 \\end{cases}\n \\end{align}\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample::\n >>> a = torch.arange(-0.5, 1, 0.5)\n >>> a\n tensor([-0.5000, 0.0000, 0.5000])\n >>> torch.special.entr(a)\n tensor([ -inf, 0.0000, 0.3466])\ntorch.special.erf(input, *, out=None) -> Tensor\nComputes the error function of \"input\". The error function is\n defined as follows:\n \\mathrm{erf}(x) = \\frac{2}{\\sqrt{\\pi}} \\int_{0}^{x} e^{-t^2} dt\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.special.erf(torch.tensor([0, -1., 10.]))\n tensor([ 0.0000, -0.8427, 1.0000])\n\ntorch.special.erfc(input, *, out=None) -> Tensor\nComputes the complementary error function of \"input\". The\n complementary error function is defined as follows:\n \\mathrm{erfc}(x) = 1 - \\frac{2}{\\sqrt{\\pi}} \\int_{0}^{x}\n e^{-t^2} dt\n\nParameters:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "e^{-t^2} dt\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.special.erfc(torch.tensor([0, -1., 10.]))\n tensor([ 1.0000, 1.8427, 0.0000])\n\ntorch.special.erfcx(input, *, out=None) -> Tensor\nComputes the scaled complementary error function for each element\n of \"input\". The scaled complementary error function is defined as\n follows:\n \\mathrm{erfcx}(x) = e^{x^2} \\mathrm{erfc}(x)\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.special.erfcx(torch.tensor([0, -1., 10.]))\n tensor([ 1.0000, 5.0090, 0.0561])\n\ntorch.special.erfinv(input, *, out=None) -> Tensor\nComputes the inverse error function of \"input\". 
The inverse error\n function is defined in the range (-1, 1) as:\n \\mathrm{erfinv}(\\mathrm{erf}(x)) = x\n\nParameters:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "Parameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.special.erfinv(torch.tensor([0, 0.5, -1.]))\n tensor([ 0.0000, 0.4769, -inf])\n\ntorch.special.exp2(input, *, out=None) -> Tensor\nComputes the base two exponential function of \"input\".\n y_{i} = 2^{x_{i}}\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.special.exp2(torch.tensor([0, math.log2(2.), 3, 4]))\n tensor([ 1., 2., 8., 16.])\n\ntorch.special.expit(input, *, out=None) -> Tensor\nComputes the expit (also known as the logistic sigmoid function) of\n the elements of \"input\".\n \\text{out}_{i} = \\frac{1}{1 + e^{-\\text{input}_{i}}}\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "Example:\n >>> t = torch.randn(4)\n >>> t\n tensor([ 0.9213, 1.0887, -0.8858, -1.7683])\n >>> torch.special.expit(t)\n tensor([ 0.7153, 0.7481, 0.2920, 0.1458])\n\ntorch.special.expm1(input, *, out=None) -> Tensor\nComputes the exponential of the elements minus 1 of \"input\".\n y_{i} = e^{x_{i}} - 1\n\nNote:\n This function provides greater precision than exp(x) - 1 for\n small values of x.\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.special.expm1(torch.tensor([0, math.log(2.)]))\n tensor([ 0., 1.])\n\ntorch.special.gammainc(input, other, *, out=None) -> Tensor\nComputes the regularized lower incomplete gamma function:\n \\text{out}_{i} = \\frac{1}{\\Gamma(\\text{input}_i)}\n \\int_0^{\\text{other}_i} t^{\\text{input}_i-1} e^{-t} dt\n\nwhere both \\text{input}_i and \\text{other}_i are weakly positive", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "and at least one is strictly positive. If both are zero or either\n is negative then \\text{out}_i=\\text{nan}. 
\\Gamma(\\cdot) in the\n equation above is the gamma function,\n \\Gamma(\\text{input}_i) = \\int_0^\\infty t^{(\\text{input}_i-1)}\n e^{-t} dt.\n\nSee \"torch.special.gammaincc()\" and \"torch.special.gammaln()\" for\n related functions.\nSupports broadcasting to a common shape and float inputs.\nNote:\n The backward pass with respect to \"input\" is not yet supported.\n Please open an issue on PyTorch's Github to request it.\n\nParameters:\n * input (Tensor) -- the first non-negative input tensor\n * **other** (*Tensor*) -- the second non-negative input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a1 = torch.tensor([4.0])\n >>> a2 = torch.tensor([3.0, 4.0, 5.0])\n >>> a = torch.special.gammaincc(a1, a2)\n tensor([0.3528, 0.5665, 0.7350])\n tensor([0.3528, 0.5665, 0.7350])\n", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "tensor([0.3528, 0.5665, 0.7350])\n >>> b = torch.special.gammainc(a1, a2) + torch.special.gammaincc(a1, a2)\n tensor([1., 1., 1.])\ntorch.special.gammaincc(input, other, *, out=None) -> Tensor\nComputes the regularized upper incomplete gamma function:\n \\text{out}_{i} = \\frac{1}{\\Gamma(\\text{input}_i)}\n \\int_{\\text{other}_i}^{\\infty} t^{\\text{input}_i-1} e^{-t} dt\n\nwhere both \\text{input}_i and \\text{other}_i are weakly positive\n and at least one is strictly positive. If both are zero or either\n is negative then \\text{out}_i=\\text{nan}. \\Gamma(\\cdot) in the\n equation above is the gamma function,\n \\Gamma(\\text{input}_i) = \\int_0^\\infty t^{(\\text{input}_i-1)}\n e^{-t} dt.\n\nSee \"torch.special.gammainc()\" and \"torch.special.gammaln()\" for\n related functions.\nSupports broadcasting to a common shape and float inputs.\nNote:\n The backward pass with respect to \"input\" is not yet supported.\n Please open an issue on PyTorch's Github to request it.\n", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "Parameters:\n * input (Tensor) -- the first non-negative input tensor\n * **other** (*Tensor*) -- the second non-negative input tensor\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a1 = torch.tensor([4.0])\n >>> a2 = torch.tensor([3.0, 4.0, 5.0])\n >>> a = torch.special.gammaincc(a1, a2)\n tensor([0.6472, 0.4335, 0.2650])\n >>> b = torch.special.gammainc(a1, a2) + torch.special.gammaincc(a1, a2)\n tensor([1., 1., 1.])\n\ntorch.special.gammaln(input, *, out=None) -> Tensor\nComputes the natural logarithm of the absolute value of the gamma\n function on \"input\".\n \\text{out}_{i} = \\ln \\Gamma(|\\text{input}_{i}|)\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.arange(0.5, 2, 0.5)\n >>> torch.special.gammaln(a)\n tensor([ 0.5724, 0.0000, -0.1208])\n", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "tensor([ 0.5724, 0.0000, -0.1208])\ntorch.special.i0(input, *, out=None) -> Tensor\nComputes the zeroth order modified Bessel function of the first\n kind for each element of \"input\".\n \\text{out}_{i} = I_0(\\text{input}_{i}) = \\sum_{k=0}^{\\infty}\n \\frac{(\\text{input}_{i}^2/4)^k}{(k!)^2}\n\nParameters:\n input (Tensor) -- the input tensor\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> torch.i0(torch.arange(5, dtype=torch.float32))\n tensor([ 1.0000, 1.2661, 2.2796, 4.8808, 
11.3019])\n\ntorch.special.i0e(input, *, out=None) -> Tensor\nComputes the exponentially scaled zeroth order modified Bessel\n function of the first kind (as defined below) for each element of\n \"input\".\n \\text{out}_{i} = \\exp(-|x|) * i0(x) = \\exp(-|x|) *\n \\sum_{k=0}^{\\infty} \\frac{(\\text{input}_{i}^2/4)^k}{(k!)^2}\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample::\n >>> torch.special.i0e(torch.arange(5, dtype=torch.float32))\n tensor([1.0000, 0.4658, 0.3085, 0.2430, 0.2070])\ntorch.special.i1(input, *, out=None) -> Tensor\nComputes the first order modified Bessel function of the first kind\n (as defined below) for each element of \"input\".\n \\text{out}_{i} = \\frac{(\\text{input}_{i})}{2} *\n \\sum_{k=0}^{\\infty} \\frac{(\\text{input}_{i}^2/4)^k}{(k!) *\n (k+1)!}\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample::\n >>> torch.special.i1(torch.arange(5, dtype=torch.float32))\n tensor([0.0000, 0.5652, 1.5906, 3.9534, 9.7595])\ntorch.special.i1e(input, *, out=None) -> Tensor\nComputes the exponentially scaled first order modified Bessel\n function of the first kind (as defined below) for each element of\n \"input\".", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "\"input\".\n \\text{out}_{i} = \\exp(-|x|) * i1(x) = \\exp(-|x|) *\n \\frac{(\\text{input}_{i})}{2} * \\sum_{k=0}^{\\infty}\n \\frac{(\\text{input}_{i}^2/4)^k}{(k!) * (k+1)!}\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample::\n >>> torch.special.i1e(torch.arange(5, dtype=torch.float32))\n tensor([0.0000, 0.2079, 0.2153, 0.1968, 0.1788])\ntorch.special.log1p(input, *, out=None) -> Tensor\nAlias for \"torch.log1p()\".\ntorch.special.log_ndtr(input, *, out=None) -> Tensor\nComputes the log of the area under the standard Gaussian\n probability density function, integrated from minus infinity to\n \"input\", elementwise.\n \\text{log\\_ndtr}(x) = \\log\\left(\\frac{1}{\\sqrt{2\n \\pi}}\\int_{-\\infty}^{x} e^{-\\frac{1}{2}t^2} dt \\right)\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "Example::\n >>> torch.special.log_ndtr(torch.tensor([-3., -2, -1, 0, 1, 2, 3]))\n tensor([-6.6077 -3.7832 -1.841 -0.6931 -0.1728 -0.023 -0.0014])\ntorch.special.log_softmax(input, dim, *, dtype=None) -> Tensor\nComputes softmax followed by a logarithm.\nWhile mathematically equivalent to log(softmax(x)), doing these two\n operations separately is slower and numerically unstable. This\n function is computed as:\n \\text{log\\_softmax}(x_{i}) = \\log\\left(\\frac{\\exp(x_i) }{ \\sum_j\n \\exp(x_j)} \\right)\n\nParameters:\n * input (Tensor) -- input\n * **dim** (*int*) -- A dimension along which log_softmax will be\n computed.\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is cast to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. 
Default: None.\n\nExample::\n >>> t = torch.ones(2, 2)", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "Example::\n >>> t = torch.ones(2, 2)\n >>> torch.special.log_softmax(t, 0)\n tensor([[-0.6931, -0.6931],\n [-0.6931, -0.6931]])\ntorch.special.logit(input, eps=None, *, out=None) -> Tensor\nReturns a new tensor with the logit of the elements of \"input\".\n \"input\" is clamped to [eps, 1 - eps] when eps is not None. When eps\n is None and \"input\" < 0 or \"input\" > 1, the function will yields\n NaN.\n \\begin{align} y_{i} &= \\ln(\\frac{z_{i}}{1 - z_{i}}) \\\\ z_{i} &=\n \\begin{cases} x_{i} & \\text{if eps is None} \\\\\n \\text{eps} & \\text{if } x_{i} < \\text{eps} \\\\ x_{i} &\n \\text{if } \\text{eps} \\leq x_{i} \\leq 1 - \\text{eps} \\\\ 1 -\n \\text{eps} & \\text{if } x_{i} > 1 - \\text{eps} \\end{cases}\n \\end{align}\n\nParameters:\n * input (Tensor) -- the input tensor.\n * **eps** (*float**, **optional*) -- the epsilon for input clamp\n bound. Default: \"None\"\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.rand(5)\n >>> a\n tensor([0.2796, 0.9331, 0.6486, 0.1523, 0.6516])\n >>> torch.special.logit(a, eps=1e-6)\n tensor([-0.9466, 2.6352, 0.6131, -1.7169, 0.6261])\n\ntorch.special.logsumexp(input, dim, keepdim=False, *, out=None)\nAlias for \"torch.logsumexp()\".\ntorch.special.multigammaln(input, p, *, out=None) -> Tensor\nComputes the multivariate log-gamma function with dimension p\n element-wise, given by\n \\log(\\Gamma_{p}(a)) = C + \\displaystyle \\sum_{i=1}^{p}\n \\log\\left(\\Gamma\\left(a - \\frac{i - 1}{2}\\right)\\right)\n\nwhere C = \\log(\\pi) \\cdot \\frac{p (p - 1)}{4} and \\Gamma(-) is the\n Gamma function.\nAll elements must be greater than \\frac{p - 1}{2}, otherwise the\n behavior is undefiend.\nParameters:\n * input (Tensor) -- the tensor to compute the multivariate\n log-gamma function\n * **p** (*int*) -- the number of dimensions\n", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "\np (int) -- the number of dimensions\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> a = torch.empty(2, 3).uniform_(1, 2)\n >>> a\n tensor([[1.6835, 1.8474, 1.1929],\n [1.0475, 1.7162, 1.4180]])\n >>> torch.special.multigammaln(a, 2)\n tensor([[0.3928, 0.4007, 0.7586],\n [1.0311, 0.3901, 0.5049]])\n\ntorch.special.ndtr(input, *, out=None) -> Tensor\nComputes the area under the standard Gaussian probability density\n function, integrated from minus infinity to \"input\", elementwise.\n \\text{ndtr}(x) = \\frac{1}{\\sqrt{2 \\pi}}\\int_{-\\infty}^{x}\n e^{-\\frac{1}{2}t^2} dt\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample::\n >>> torch.special.ndtr(torch.tensor([-3., -2, -1, 0, 1, 2, 3]))\n tensor([0.0013, 0.0228, 0.1587, 0.5000, 0.8413, 0.9772, 0.9987])", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "torch.special.ndtri(input, *, out=None) -> Tensor\nComputes the argument, x, for which the area under the Gaussian\n probability density function (integrated from minus infinity to x)\n is equal to \"input\", elementwise.\n \\text{ndtri}(p) = \\sqrt{2}\\text{erf}^{-1}(2p - 1)\n\nNote:\n Also known as quantile function for Normal 
Distribution.\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample::\n >>> torch.special.ndtri(torch.tensor([0, 0.25, 0.5, 0.75, 1]))\n tensor([ -inf, -0.6745, 0.0000, 0.6745, inf])\ntorch.special.polygamma(n, input, *, out=None) -> Tensor\nComputes the n^{th} derivative of the digamma function on \"input\".\n n \\geq 0 is called the order of the polygamma function.\n \\psi^{(n)}(x) = \\frac{d^{(n)}}{dx^{(n)}} \\psi(x)\n\nNote:\n This function is implemented only for nonnegative integers n \\geq\n 0.\n\nParameters:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "0.\nParameters:\n * n (int) -- the order of the polygamma function\n * **input** (*Tensor*) -- the input tensor.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample::\n >>> a = torch.tensor([1, 0.5])\n >>> torch.special.polygamma(1, a)\n tensor([1.64493, 4.9348])\n >>> torch.special.polygamma(2, a)\n tensor([ -2.4041, -16.8288])\n >>> torch.special.polygamma(3, a)\n tensor([ 6.4939, 97.4091])\n >>> torch.special.polygamma(4, a)\n tensor([ -24.8863, -771.4742])\ntorch.special.psi(input, *, out=None) -> Tensor\nAlias for \"torch.special.digamma()\".\ntorch.special.round(input, *, out=None) -> Tensor\nAlias for \"torch.round()\".\ntorch.special.scaled_modified_bessel_k0(input, *, out=None) -> Tensor\nScaled modified Bessel function of the second kind of order 0.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\ntorch.special.scaled_modified_bessel_k1(input, *, out=None) -> Tensor\nScaled modified Bessel function of the second kind of order 1.\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\ntorch.special.sinc(input, *, out=None) -> Tensor\nComputes the normalized sinc of \"input.\"\n \\text{out}_{i} = \\begin{cases} 1, & \\text{if}\\\n \\text{input}_{i}=0 \\\\ \\sin(\\pi \\text{input}_{i}) / (\\pi\n \\text{input}_{i}), & \\text{otherwise} \\end{cases}\n\nParameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample::\n >>> t = torch.randn(4)\n >>> t\n tensor([ 0.2252, -0.2948, 1.0267, -1.1566])\n >>> torch.special.sinc(t)\n tensor([ 0.9186, 0.8631, -0.0259, -0.1300])", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "torch.special.softmax(input, dim, *, dtype=None) -> Tensor\nComputes the softmax function.\nSoftmax is defined as:\n\\text{Softmax}(x_{i}) = \\frac{\\exp(x_i)}{\\sum_j \\exp(x_j)}\nIt is applied to all slices along dim, and will re-scale them so\n that the elements lie in the range [0, 1] and sum to 1.\nParameters:\n * input (Tensor) -- input\n * **dim** (*int*) -- A dimension along which softmax will be\n computed.\n\n * **dtype** (\"torch.dtype\", optional) -- the desired data type\n of returned tensor. If specified, the input tensor is cast to\n \"dtype\" before the operation is performed. This is useful for\n preventing data type overflows. 
Default: None.\n\nExamples::\n >>> t = torch.ones(2, 2)\n >>> torch.special.softmax(t, 0)\n tensor([[0.5000, 0.5000],\n [0.5000, 0.5000]])\ntorch.special.spherical_bessel_j0(input, *, out=None) -> Tensor\nSpherical Bessel function of the first kind of order 0.\nParameters:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "Parameters:\n input (Tensor) -- the input tensor.\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\ntorch.special.xlog1py(input, other, *, out=None) -> Tensor\nComputes \"input * log1p(other)\" with the following cases.\n \\text{out}_{i} = \\begin{cases} \\text{NaN} & \\text{if }\n \\text{other}_{i} = \\text{NaN} \\\\ 0 & \\text{if }\n \\text{input}_{i} = 0.0 \\text{ and } \\text{other}_{i} !=\n \\text{NaN} \\\\ \\text{input}_{i} *\n \\text{log1p}(\\text{other}_{i})& \\text{otherwise} \\end{cases}\n\nSimilar to SciPy's scipy.special.xlog1py.\nParameters:\n * input (Number or Tensor) -- Multiplier\n * **other** (*Number** or **Tensor*) -- Argument\n\nNote:\n At least one of \"input\" or \"other\" must be a tensor.\n\nKeyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> x = torch.zeros(5,)\n >>> y = torch.tensor([-1, 0, 1, float('inf'), float('nan')])\n", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "\n\n\ntorch.special.xlog1py(x, y)\n tensor([0., 0., 0., 0., nan])\n >>> x = torch.tensor([1, 2, 3])\n >>> y = torch.tensor([3, 2, 1])\n >>> torch.special.xlog1py(x, y)\n tensor([1.3863, 2.1972, 2.0794])\n >>> torch.special.xlog1py(x, 4)\n tensor([1.6094, 3.2189, 4.8283])\n >>> torch.special.xlog1py(2, y)\n tensor([2.7726, 2.1972, 1.3863])\n\n\n\ntorch.special.xlogy(input, other, *, out=None) -> Tensor\nComputes \"input * log(other)\" with the following cases.\n \\text{out}_{i} = \\begin{cases} \\text{NaN} & \\text{if }\n \\text{other}_{i} = \\text{NaN} \\\\ 0 & \\text{if }\n \\text{input}_{i} = 0.0 \\\\ \\text{input}_{i} *\n \\log{(\\text{other}_{i})} & \\text{otherwise} \\end{cases}\n\nSimilar to SciPy's scipy.special.xlogy.\nParameters:\n * input (Number or Tensor) -- Multiplier\n * **other** (*Number** or **Tensor*) -- Argument\n\nNote:\n At least one of \"input\" or \"other\" must be a tensor.\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample:\n >>> x = torch.zeros(5,)\n >>> y = torch.tensor([-1, 0, 1, float('inf'), float('nan')])\n >>> torch.special.xlogy(x, y)\n tensor([0., 0., 0., 0., nan])\n >>> x = torch.tensor([1, 2, 3])\n >>> y = torch.tensor([3, 2, 1])\n >>> torch.special.xlogy(x, y)\n tensor([1.0986, 1.3863, 0.0000])\n >>> torch.special.xlogy(x, 4)\n tensor([1.3863, 2.7726, 4.1589])\n >>> torch.special.xlogy(2, y)\n tensor([2.1972, 1.3863, 0.0000])\n\ntorch.special.zeta(input, other, *, out=None) -> Tensor\nComputes the Hurwitz zeta function, elementwise.\n \\zeta(x, q) = \\sum_{k=0}^{\\infty} \\frac{1}{(k + q)^x}\n\nParameters:\n * input (Tensor) -- the input tensor corresponding to x.\n * **other** (*Tensor*) -- the input tensor corresponding to *q*.\n\nNote:\n The Riemann zeta function corresponds to the case when *q = 1*\n\nKeyword Arguments:", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "Keyword Arguments:\n out (Tensor, optional) -- the output tensor.\nExample::\n >>> x = torch.tensor([2., 4.])\n >>> torch.special.zeta(x, 
1)\n tensor([1.6449, 1.0823])\n >>> torch.special.zeta(x, torch.tensor([1., 2.]))\n tensor([1.6449, 0.0823])\n >>> torch.special.zeta(2, torch.tensor([1., 2.]))\n tensor([1.6449, 0.6449])", "source": "https://pytorch.org/docs/stable/special.html", "category": "pytorch docs"} {"text": "torch.utils.bottleneck\ntorch.utils.bottleneck is a tool that can be used as an initial step\nfor debugging bottlenecks in your program. It summarizes runs of your\nscript with the Python profiler and PyTorch's autograd profiler.\nRun it on the command line with\npython -m torch.utils.bottleneck /path/to/source/script.py [args]\nwhere [args] are any number of arguments to script.py, or run\n\"python -m torch.utils.bottleneck -h\" for more usage instructions.\nWarning:\nBecause your script will be profiled, please ensure that it exits in\n a finite amount of time.\nWarning:\nDue to the asynchronous nature of CUDA kernels, when running against\n CUDA code, the cProfile output and CPU-mode autograd profilers may\n not show correct timings: the reported CPU time reflects the amount\n of time used to launch the kernels but does not include the time the\n kernel spent executing on a GPU unless the operation does a\n synchronize. Ops that do synchronize appear to be extremely", "source": "https://pytorch.org/docs/stable/bottleneck.html", "category": "pytorch docs"} {"text": "expensive under regular CPU-mode profilers. In these cases where\n timings are incorrect, the CUDA-mode autograd profiler may be\n helpful.\nNote:\nTo decide which (CPU-only-mode or CUDA-mode) autograd profiler\n output to look at, you should first check if your script is CPU-\n bound (\"CPU total time is much greater than CUDA total time\"). If it\n is CPU-bound, looking at the results of the CPU-mode autograd\n profiler will help. If, on the other hand, your script spends most of\n its time executing on the GPU, then it makes sense to start looking\n for responsible CUDA operators in the output of the CUDA-mode\n autograd profiler. Of course the reality is much more complicated and\n your script might not be in one of those two extremes depending on\n the part of the model you're evaluating. If the profiler outputs\n don't help, you could try looking at the result of\n \"torch.autograd.profiler.emit_nvtx()\" with \"nvprof\". However, please\n take into account that the NVTX overhead is very high and often", "source": "https://pytorch.org/docs/stable/bottleneck.html", "category": "pytorch docs"} {"text": "gives a heavily skewed timeline. Similarly, \"Intel\u00ae VTune\u2122 Profiler\"\n helps to analyze performance on Intel platforms further with\n \"torch.autograd.profiler.emit_itt()\".\nWarning:\nIf you are profiling CUDA code, the first profiler that \"bottleneck\"\n runs (cProfile) will include the CUDA startup time (CUDA buffer\n allocation cost) in its time reporting. 
This should not matter if\n your bottlenecks result in code much slower than the CUDA startup\n time.\nFor more complicated uses of the profilers (like in a multi-GPU case),\nplease see https://docs.python.org/3/library/profile.html or\n\"torch.autograd.profiler.profile()\" for more information.", "source": "https://pytorch.org/docs/stable/bottleneck.html", "category": "pytorch docs"} {"text": "Frequently Asked Questions\nMy model reports \"cuda runtime error(2): out of memory\"\nAs the error message suggests, you have run out of memory on your GPU.\nSince we often deal with large amounts of data in PyTorch, small\nmistakes can rapidly cause your program to use up all of your GPU;\nfortunately, the fixes in these cases are often simple. Here are a few\ncommon things to check:\nDon't accumulate history across your training loop. By default,\ncomputations involving variables that require gradients will keep\nhistory. This means that you should avoid using such variables in\ncomputations which will live beyond your training loops, e.g., when\ntracking statistics. Instead, you should detach the variable or access\nits underlying data.\nSometimes, it can be non-obvious when differentiable variables can\noccur. Consider the following training loop (abridged from source):\ntotal_loss = 0\n for i in range(10000):", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"} {"text": "total_loss = 0\n for i in range(10000):\n optimizer.zero_grad()\n output = model(input)\n loss = criterion(output)\n loss.backward()\n optimizer.step()\n total_loss += loss\nHere, \"total_loss\" is accumulating history across your training loop,\nsince \"loss\" is a differentiable variable with autograd history. You\ncan fix this by writing total_loss += float(loss) instead.\nOther instances of this problem: 1.\nDon't hold onto tensors and variables you don't need. If you\nassign a Tensor or Variable to a local, Python will not deallocate\nuntil the local goes out of scope. You can free this reference by\nusing \"del x\". Similarly, if you assign a Tensor or Variable to a\nmember variable of an object, it will not deallocate until the object\ngoes out of scope. You will get the best memory usage if you don't\nhold onto temporaries you don't need.\nThe scopes of locals can be larger than you expect. For example:\nfor i in range(5):\n intermediate = f(input[i])", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"} {"text": "intermediate = f(input[i])\n result += g(intermediate)\n output = h(result)\n return output\nHere, \"intermediate\" remains live even while \"h\" is executing, because\nits scope extrudes past the end of the loop. To free it earlier, you\nshould \"del intermediate\" when you are done with it.\nAvoid running RNNs on sequences that are too large. The amount of\nmemory required to backpropagate through an RNN scales linearly with\nthe length of the RNN input; thus, you will run out of memory if you\ntry to feed an RNN a sequence that is too long.\nThe technical term for this phenomenon is backpropagation through\ntime, and there are plenty of references for how to implement\ntruncated BPTT, including in the word language model example;\ntruncation is handled by the \"repackage\" function as described in this\nforum post.\nDon't use linear layers that are too large. 
A linear layer\n\"nn.Linear(m, n)\" uses O(nm) memory: that is to say, the memory", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"} {"text": "requirements of the weights scale quadratically with the number of\nfeatures. It is very easy to blow through your memory this way (and\nremember that you will need at least twice the size of the weights,\nsince you also need to store the gradients.)\nConsider checkpointing. You can trade off memory for compute by\nusing checkpoint.\nMy GPU memory isn't freed properly\nPyTorch uses a caching memory allocator to speed up memory\nallocations. As a result, the values shown in \"nvidia-smi\" usually\ndon't reflect the true memory usage. See Memory management for more\ndetails about GPU memory management.\nIf your GPU memory isn't freed even after Python quits, it is very\nlikely that some Python subprocesses are still alive. You may find\nthem via \"ps -elf | grep python\" and manually kill them with \"kill -9\n[pid]\".\nMy out of memory exception handler can't allocate memory", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"} {"text": "You may have some code that tries to recover from out of memory\nerrors.\ntry:\n run_model(batch_size)\n except RuntimeError: # Out of memory\n for _ in range(batch_size):\n run_model(1)\nBut you may find that when you do run out of memory, your recovery code can't\nallocate either. That's because the Python exception object holds a\nreference to the stack frame where the error was raised, which\nprevents the original tensor objects from being freed. The solution is\nto move your OOM recovery code outside of the \"except\" clause.\noom = False\n try:\n run_model(batch_size)\n except RuntimeError: # Out of memory\n oom = True\nif oom:\n for _ in range(batch_size):\n run_model(1)\nMy data loader workers return identical random numbers\nYou are likely using other libraries to generate random numbers in the\ndataset and worker subprocesses are started via \"fork\". See", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"} {"text": "\"torch.utils.data.DataLoader\"'s documentation for how to properly set\nup random seeds in workers with its \"worker_init_fn\" option.\nMy recurrent network doesn't work with data parallelism\nThere is a subtlety in using the \"pack sequence -> recurrent network\n-> unpack sequence\" pattern in a \"Module\" with \"DataParallel\" or\n\"data_parallel()\". The input to \"forward()\" on each device will\nonly be part of the entire input. Because the unpack operation\n\"torch.nn.utils.rnn.pad_packed_sequence()\" by default only pads up to\nthe longest input it sees, i.e., the longest on that particular\ndevice, size mismatches will happen when results are gathered\ntogether. Therefore, you can instead take advantage of the\n\"total_length\" argument of \"pad_packed_sequence()\" to make sure that\nthe \"forward()\" calls return sequences of the same length. For example,\nyou can write:\nfrom torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"} {"text": "class MyModule(nn.Module):\n # ... 
init, other methods, etc.\n # padded_input is of shape [B x T x *] (batch_first mode) and contains\n # the sequences sorted by lengths\n # B is the batch size\n # T is max sequence length\n def forward(self, padded_input, input_lengths):\n total_length = padded_input.size(1) # get the max sequence length\n packed_input = pack_padded_sequence(padded_input, input_lengths,\n batch_first=True)\n packed_output, _ = self.my_lstm(packed_input)\n output, _ = pad_packed_sequence(packed_output, batch_first=True,\n total_length=total_length)\n return output\n\nm = MyModule().cuda()\n dp_m = nn.DataParallel(m)\nAdditionally, extra care needs to be taken when batch dimension is dim\n\"1\" (i.e., \"batch_first=False\") with data parallelism. In this case,", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"} {"text": "the first argument of pack_padded_sequence \"padding_input\" will be of\nshape \"[T x B x *]\" and should be scattered along dim \"1\", but the\nsecond argument \"input_lengths\" will be of shape \"[B]\" and should be\nscattered along dim \"0\". Extra code to manipulate the tensor shapes\nwill be needed.", "source": "https://pytorch.org/docs/stable/notes/faq.html", "category": "pytorch docs"} {"text": "MPS backend\n\"mps\" device enables high-performance training on GPU for MacOS\ndevices with Metal programming framework. It introduces a new device\nto map Machine Learning computational graphs and primitives on highly\nefficient Metal Performance Shaders Graph framework and tuned kernels\nprovided by Metal Performance Shaders framework respectively.\nThe new MPS backend extends the PyTorch ecosystem and provides\nexisting scripts capabilities to setup and run operations on GPU.\nTo get started, simply move your Tensor and Module to the \"mps\"\ndevice:\n# Check that MPS is available\n if not torch.backends.mps.is_available():\n if not torch.backends.mps.is_built():\n print(\"MPS not available because the current PyTorch install was not \"\n \"built with MPS enabled.\")\n else:\n print(\"MPS not available because the current MacOS version is not 12.3+ \"\n \"and/or you do not have an MPS-enabled device on this machine.\")\nelse:", "source": "https://pytorch.org/docs/stable/notes/mps.html", "category": "pytorch docs"} {"text": "else:\n mps_device = torch.device(\"mps\")\n # Create a Tensor directly on the mps device\n x = torch.ones(5, device=mps_device)\n # Or\n x = torch.ones(5, device=\"mps\")\n\n # Any operation happens on the GPU\n y = x * 2\n\n # Move your model to mps just like any other device\n model = YourFavoriteNet()\n model.to(mps_device)\n\n # Now every call runs on the GPU\n pred = model(x)\n", "source": "https://pytorch.org/docs/stable/notes/mps.html", "category": "pytorch docs"} {"text": "Distributed Data Parallel\nWarning:\nThe implementation of \"torch.nn.parallel.DistributedDataParallel\"\n evolves over time. This design note is written based on the state as\n of v1.4.\n\"torch.nn.parallel.DistributedDataParallel\" (DDP) transparently\nperforms distributed data parallel training. This page describes how\nit works and reveals implementation details.\nExample\nLet us start with a simple \"torch.nn.parallel.DistributedDataParallel\"\nexample. This example uses a \"torch.nn.Linear\" as the local model,\nwraps it with DDP, and then runs one forward pass, one backward pass,\nand an optimizer step on the DDP model. 
After that, parameters on the\nlocal model will be updated, and all models on different processes\nshould be exactly the same.\nimport torch\n import torch.distributed as dist\n import torch.multiprocessing as mp\n import torch.nn as nn\n import torch.optim as optim\n from torch.nn.parallel import DistributedDataParallel as DDP", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"} {"text": "def example(rank, world_size):\n # create default process group\n dist.init_process_group(\"gloo\", rank=rank, world_size=world_size)\n # create local model\n model = nn.Linear(10, 10).to(rank)\n # construct DDP model\n ddp_model = DDP(model, device_ids=[rank])\n # define loss function and optimizer\n loss_fn = nn.MSELoss()\n optimizer = optim.SGD(ddp_model.parameters(), lr=0.001)\n # forward pass\n outputs = ddp_model(torch.randn(20, 10).to(rank))\n labels = torch.randn(20, 10).to(rank)\n # backward pass\n loss_fn(outputs, labels).backward()\n # update parameters\n optimizer.step()\n\ndef main():\n world_size = 2\n mp.spawn(example,\n args=(world_size,),\n nprocs=world_size,\n join=True)\nif name==\"main\":\n # Environment variables which need to be\n # set when using c10d's default \"env\"\n # initialization mode.\n os.environ[\"MASTER_ADDR\"] = \"localhost\"", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"} {"text": "os.environ[\"MASTER_ADDR\"] = \"localhost\"\n os.environ[\"MASTER_PORT\"] = \"29500\"\n main()\nDDP works with TorchDynamo. When used with TorchDynamo, apply the DDP\nmodel wrapper before compiling the model, such that torchdynamo can\napply \"DDPOptimizer\" (graph-break optimizations) based on DDP bucket\nsizes. (See TorchDynamo DDPOptimizer for more information.)\nTorchDynamo support for DDP currently requires setting\nstatic_graph=False, due to interactions between the graph tracing\nprocess and DDP's mechanism for observing operations happening on its\nmodule, but this should be fixed ultimately.\nddp_model = DDP(model, device_ids=[rank])\n ddp_model = torch.compile(ddp_model)\nInternal Design\nThis section reveals how it works under the hood of\n\"torch.nn.parallel.DistributedDataParallel\" by diving into details of\nevery step in one iteration.\n\nPrerequisite: DDP relies on c10d \"ProcessGroup\" for\n communications. Hence, applications must create \"ProcessGroup\"\n", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"} {"text": "instances before constructing DDP.\n\nConstruction: The DDP constructor takes a reference to the local\n module, and broadcasts \"state_dict()\" from the process with rank 0\n to all other processes in the group to make sure that all model\n replicas start from the exact same state. Then, each DDP process\n creates a local \"Reducer\", which later will take care of the\n gradients synchronization during the backward pass. To improve\n communication efficiency, the \"Reducer\" organizes parameter\n gradients into buckets, and reduces one bucket at a time. Bucket\n size can be configured by setting the bucket_cap_mb argument in\n DDP constructor. The mapping from parameter gradients to buckets is\n determined at the construction time, based on the bucket size limit\n and parameter sizes. Model parameters are allocated into buckets in\n (roughly) the reverse order of \"Model.parameters()\" from the given\n model. 
The reason for using the reverse order is because DDP expects\n", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"} {"text": "gradients to become ready during the backward pass in approximately\n that order. The figure below shows an example. Note that, the\n \"grad0\" and \"grad1\" are in \"bucket1\", and the other two gradients\n are in \"bucket0\". Of course, this assumption might not always be\n true, and when that happens it could hurt DDP backward speed as the\n \"Reducer\" cannot kick off the communication at the earliest possible\n time. Besides bucketing, the \"Reducer\" also registers autograd hooks\n during construction, one hook per parameter. These hooks will be\n triggered during the backward pass when the gradient becomes ready.\n\nForward Pass: The DDP takes the input and passes it to the local\n model, and then analyzes the output from the local model if\n \"find_unused_parameters\" is set to \"True\". This mode allows running\n backward on a subgraph of the model, and DDP finds out which\n parameters are involved in the backward pass by traversing the\n autograd graph from the model output and marking all unused\n", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"} {"text": "parameters as ready for reduction. During the backward pass, the\n \"Reducer\" would only wait for unready parameters, but it would still\n reduce all buckets. Marking a parameter gradient as ready does not\n help DDP skip buckets as for now, but it will prevent DDP from\n waiting for absent gradients forever during the backward pass. Note\n that traversing the autograd graph introduces extra overheads, so\n applications should only set \"find_unused_parameters\" to \"True\" when\n necessary.\n\nBackward Pass: The \"backward()\" function is directly invoked on\n the loss \"Tensor\", which is out of DDP's control, and DDP uses\n autograd hooks registered at construction time to trigger gradients\n synchronizations. When one gradient becomes ready, its corresponding\n DDP hook on that grad accumulator will fire, and DDP will then mark\n that parameter gradient as ready for reduction. When gradients in\n one bucket are all ready, the \"Reducer\" kicks off an asynchronous\n", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"} {"text": "\"allreduce\" on that bucket to calculate mean of gradients across all\n processes. When all buckets are ready, the \"Reducer\" will block\n waiting for all \"allreduce\" operations to finish. When this is done,\n averaged gradients are written to the \"param.grad\" field of all\n parameters. So after the backward pass, the grad field on the same\n corresponding parameter across different DDP processes should be the\n same.\n\nOptimizer Step: From the optimizer's perspective, it is\n optimizing a local model. Model replicas on all DDP processes can\n keep in sync because they all start from the same state and they\n have the same averaged gradients in every iteration.\n\n[image: ddp_grad_sync.png][image]\nNote:\nDDP requires \"Reducer\" instances on all processes to invoke\n \"allreduce\" in exactly the same order, which is done by always\n running \"allreduce\" in the bucket index order instead of actual\n bucket ready order. 
Mismatched \"allreduce\" order across processes", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"} {"text": "can lead to wrong results or DDP backward hang.\nImplementation\nBelow are pointers to the DDP implementation components. The stacked\ngraph shows the structure of the code.\nProcessGroup\n\n\nProcessGroup.hpp: contains the abstract API of all process group\n implementations. The \"c10d\" library provides 3 implementations out\n of the box, namely, ProcessGroupGloo, ProcessGroupNCCL, and\n ProcessGroupMPI. \"DistributedDataParallel\" uses\n \"ProcessGroup::broadcast()\" to send model states from the process\n with rank 0 to others during initialization and\n \"ProcessGroup::allreduce()\" to sum gradients.\n\n\nStore.hpp: assists the rendezvous service for process group\n instances to find each other.\n\n\nDistributedDataParallel\n\ndistributed.py: is the Python entry point for DDP. It implements the\n initialization steps and the \"forward\" function for the\n \"nn.parallel.DistributedDataParallel\" module which call into C++\n", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"} {"text": "libraries. Its \"_sync_param\" function performs intra-process\n parameter synchronization when one DDP process works on multiple\n devices, and it also broadcasts model buffers from the process with\n rank 0 to all other processes. The inter-process parameter\n synchronization happens in \"Reducer.cpp\".\n\n\ncomm.h: implements the coalesced broadcast helper function which is\n invoked to broadcast model states during initialization and\n synchronize model buffers before the forward pass.\n\n\nreducer.h: provides the core implementation for gradient\n synchronization in the backward pass. It has three entry point\n functions:\n\n\n\"Reducer\": The constructor is called in \"distributed.py\" which\n registers \"Reducer::autograd_hook()\" to gradient accumulators.\n\n\n\"autograd_hook()\" function will be invoked by the autograd engine\n when a gradient becomes ready.\n\n\n\"prepare_for_backward()\" is called at the end of DDP forward pass\n in \"distributed.py\". It traverses the autograd graph to find\n\n", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"} {"text": "unused parameters when \"find_unused_parameters\" is set to \"True\"\n in DDP constructor.\n[image: ddp_code.png][image]\nTorchDynamo DDPOptimizer\nDDP's performance advantage comes from overlapping allreduce\ncollectives with computations during backwards. AotAutograd prevents\nthis overlap when used with TorchDynamo for compiling a whole forward\nand whole backward graph, because allreduce ops are launched by\nautograd hooks after the whole optimized backwards computation\nfinishes.\nTorchDynamo's DDPOptimizer helps by breaking the forward graph at the\nlogical boundaries of DDP's allreduce buckets during backwards. Note:\nthe goal is to break the graph during backwards, and the simplest\nimplementation is to break the forward graphs and then call\nAotAutograd and compilation on each section. 
This allows DDP's\nallreduce hooks to fire in-between sections of backwards, and schedule\ncommunications to overlap with compute.", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"} {"text": "communications to overlap with compute.\nSee this blog post for a more in-depth explanation and experimental\nresults, or read the docs and code at\ntorch/_dynamo/optimizations/distributed.py\nTo Debug DDPOptimizer, set torch._dynamo.config.log_level to DEBUG\n(for full graph dumps) or INFO (for basic info about bucket\nboundaries). To disable DDPOptimizer, set\ntorch._dynamo.config.optimize_ddp=False. DDP and TorchDynamo should\nstill work correctly without DDPOptimizer, but with performance\ndegradation.", "source": "https://pytorch.org/docs/stable/notes/ddp.html", "category": "pytorch docs"} {"text": "Features for large-scale deployments\n* Fleet-wide operator profiling\n\n\nAPI usage logging\n\n\nAttaching metadata to saved TorchScript models\n\n\nBuild environment considerations\n\n\nCommon extension points\n\n\nThis note talks about several extension points and tricks that might\nbe useful when running PyTorch within a larger system or operating\nmultiple systems using PyTorch in a larger organization.\nIt doesn't cover topics of deploying models to production. Check\n\"torch.jit\" or one of the corresponding tutorials.\nThe note assumes that you either build PyTorch from source in your\norganization or have an ability to statically link additional code to\nbe loaded when PyTorch is used. Therefore, many of the hooks are\nexposed as C++ APIs that can be triggered once in a centralized place,\ne.g. in static initialization code.\nFleet-wide operator profiling\nPyTorch comes with \"torch.autograd.profiler\" capable of measuring time", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"} {"text": "taken by individual operators on demand. One can use the same\nmechanism to do \"always ON\" measurements for any process running\nPyTorch. It might be useful for gathering information about PyTorch\nworkloads running in a given process or across the entire set of\nmachines.\nNew callbacks for any operator invocation can be added with\n\"torch::addGlobalCallback\". Hooks will be called with\n\"torch::RecordFunction\" struct that describes invocation context (e.g.\nname). If enabled, \"RecordFunction::inputs()\" contains arguments of\nthe function represented as \"torch::IValue\" variant type. Note, that\ninputs logging is relatively expensive and thus has to be enabled\nexplicitly.\nThe operator callbacks also have access to\n\"c10::ThreadLocalDebugInfo::get()\" interface that returns a pointer to\nthe struct holding the debug information. This debug information can\nbe set earlier by using \"at::DebugInfoGuard\" object. Debug information\nis propagated through the forward (including async \"fork\" tasks) and", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"} {"text": "backward passes and can be useful for passing some extra information\nabout execution environment (e.g. model id) from the higher layers of\nthe application down to the operator callbacks.\nInvoking callbacks adds some overhead, so usually it's useful to just\nrandomly sample operator invocations. 
This can be enabled on per-\ncallback basis with an optional sampling rate passed into\n\"torch::addGlobalCallback\".\nNote, that \"addGlobalCallback\" is not thread-safe and can be called\nonly when no PyTorch operator is running. Usually, it's a good idea to\ncall them once during initialization.\nHere's an example:\n// Called somewhere in the program beginning\n void init() {\n // Sample one in a hundred operator runs randomly\n addGlobalCallback(\n RecordFunctionCallback(\n &onFunctionEnter,\n &onFunctionExit)\n .needsInputs(true)\n .samplingProb(0.01)\n );\n // Note, to enable observers in the model calling thread,", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"} {"text": "// call enableRecordFunction() in the thread before running a model\n }\nvoid onFunctionEnter(const RecordFunction& fn) {\n std::cerr << \"Before function \" << fn.name()\n << \" with \" << fn.inputs().size() << \" inputs\" << std::endl;\n }\nvoid onFunctionExit(const RecordFunction& fn) {\n std::cerr << \"After function \" << fn.name();\n }\nAPI usage logging\nWhen running in a broader ecosystem, for example in managed job\nscheduler, it's often useful to track which binaries invoke particular\nPyTorch APIs. There exists simple instrumentation injected at several\nimportant API points that triggers a given callback. Because usually\nPyTorch is invoked in one-off python scripts, the callback fires only\nonce for a given process for each of the APIs.\n\"c10::SetAPIUsageHandler\" can be used to register API usage\ninstrumentation handler. Passed argument is going to be an \"api key\"\nidentifying used point, for example \"python.import\" for PyTorch", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"} {"text": "extension import or \"torch.script.compile\" if TorchScript compilation\nwas triggered.\nSetAPIUsageLogger( {\n std::cerr << \"API was used: \" << event_name << std::endl;\n });\nNote for developers: new API trigger points can be added in code with\n\"C10_LOG_API_USAGE_ONCE(\"my_api\")\" in C++ or\n\"torch._C._log_api_usage_once(\"my.api\")\" in Python.\nAttaching metadata to saved TorchScript models\nTorchScript modules can be saved as an archive file that bundles\nserialized parameters and module code as TorchScript (see\n\"torch.jit.save()\"). It's often convenient to bundle additional\ninformation together with the model, for example, description of model\nproducer or auxiliary artifacts.\nIt can be achieved by passing the \"_extra_files\" argument to\n\"torch.jit.save()\" and \"torch::jit::load\" to store and retrieve\narbitrary binary blobs during saving process. Since TorchScript files", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"} {"text": "are regular ZIP archives, extra information gets stored as regular\nfiles inside archive's \"extra/\" directory.\nThere's also a global hook allowing to attach extra files to any\nTorchScript archive produced in the current process. It might be\nuseful to tag models with producer metadata, akin to JPEG metadata\nproduced by digital cameras. 
Example usage might look like:\nSetExportModuleExtraFilesHook( {\n ExtraFilesMap files;\n files[\"producer_info.json\"] = \"{\\\"user\\\": \\\"\" + getenv(\"USER\") + \"\\\"}\";\n return files;\n });\nBuild environment considerations\nTorchScript's compilation needs to have access to the original python\nfiles as it uses python's \"inspect.getsource\" call. In certain\nproduction environments it might require explicitly deploying \".py\"\nfiles along with precompiled \".pyc\".\nCommon extension points\nPyTorch APIs are generally loosely coupled and it's easy to replace a", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"} {"text": "component with specialized version. Common extension points include:\n\n\nCustom operators implemented in C++ - see tutorial for more details.\n\n\nCustom data reading can be often integrated directly by invoking\n corresponding python library. Existing functionality of\n \"torch.utils.data\" can be utilized by extending \"Dataset\" or\n \"IterableDataset\".\n\n", "source": "https://pytorch.org/docs/stable/notes/large_scale_deployments.html", "category": "pytorch docs"} {"text": "Numerical accuracy\nIn modern computers, floating point numbers are represented using IEEE\n754 standard. For more details on floating point arithmetics and IEEE\n754 standard, please see Floating point arithmetic In particular, note\nthat floating point provides limited accuracy (about 7 decimal digits\nfor single precision floating point numbers, about 16 decimal digits\nfor double precision floating point numbers) and that floating point\naddition and multiplication are not associative, so the order of the\noperations affects the results. Because of this, PyTorch is not\nguaranteed to produce bitwise identical results for floating point\ncomputations that are mathematically identical. Similarly, bitwise\nidentical results are not guaranteed across PyTorch releases,\nindividual commits, or different platforms. In particular, CPU and GPU\nresults can be different even for bitwise-identical inputs and even\nafter controlling for the sources of randomness.\nBatched computations or slice computations", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"} {"text": "Batched computations or slice computations\nMany operations in PyTorch support batched computation, where the same\noperation is performed for the elements of the batches of inputs. An\nexample of this is \"torch.mm()\" and \"torch.bmm()\". It is possible to\nimplement batched computation as a loop over batch elements, and apply\nthe necessary math operations to the individual batch elements, for\nefficiency reasons we are not doing that, and typically perform\ncomputation for the whole batch. The mathematical libraries that we\nare calling, and PyTorch internal implementations of operations can\nproduces slightly different results in this case, compared to non-\nbatched computations. In particular, let \"A\" and \"B\" be 3D tensors\nwith the dimensions suitable for batched matrix multiplication. 
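A minimal sketch of such a pair of operands (the shapes below are hypothetical and not taken from the original note) is:

    import torch

    A = torch.randn(8, 128, 64)   # batch of 8 matrices, each 128 x 64
    B = torch.randn(8, 64, 32)    # batch of 8 matrices, each 64 x 32
    batched = (A @ B)[0]          # first element of the batched result
    single = A[0] @ B[0]          # the same product computed on its own
    # torch.allclose(batched, single) is typically True,
    # but torch.equal(batched, single) may be False.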
Then\n\"(A@B)[0]\" (the first element of the batched result) is not guaranteed\nto be bitwise identical to \"A[0]@B[0]\" (the matrix product of the", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"} {"text": "first elements of the input batches) even though mathematically it's\nan identical computation.\nSimilarly, an operation applied to a tensor slice is not guaranteed to\nproduce results that are identical to the slice of the result of the\nsame operation applied to the full tensor. E.g. let \"A\" be a\n2-dimensional tensor. \"A.sum(-1)[0]\" is not guaranteed to be bitwise\nequal to \"A[:,0].sum()\".\nExtremal values\nWhen inputs contain large values such that intermediate results may\noverflow the range of the used datatype, the end result may overflow\ntoo, even though it is representable in the original datatype. E.g.:\nimport torch\n a=torch.tensor([1e20, 1e20]) # fp32 type by default\n a.norm() # produces tensor(inf)\n a.double().norm() # produces tensor(1.4142e+20, dtype=torch.float64), representable in fp32\nLinear algebra (\"torch.linalg\")\nNon-finite values\nThe external libraries (backends) that \"torch.linalg\" uses provide no", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"} {"text": "guarantees on their behaviour when the inputs have non-finite values\nlike \"inf\" or \"NaN\". As such, neither does PyTorch. The operations may\nreturn a tensor with non-finite values, or raise an exception, or even\nsegfault.\nConsider using \"torch.isfinite()\" before calling these functions to\ndetect this situation.\nExtremal values in linalg\nFunctions within \"torch.linalg\" have more Extremal Values than other\nPyTorch functions.\nSolvers and Inverses assume that the input matrix \"A\" is invertible.\nIf it is close to being non-invertible (for example, if it has a very\nsmall singular value), then these algorithms may silently return\nincorrect results. These matrices are said to be ill-conditioned. If\nprovided with ill-conditioned inputs, the result of these functions\nthey may vary when using the same inputs on different devices or when\nusing different backends via the keyword \"driver\".\nSpectral operations like \"svd\", \"eig\", and \"eigh\" may also return", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"} {"text": "incorrect results (and their gradients may be infinite) when their\ninputs have singular values that are close to each other. This is\nbecause the algorithms used to compute these decompositions struggle\nto converge for these inputs.\nRunning the computation in \"float64\" (as NumPy does by default) often\nhelps, but it does not solve these issues in all cases. Analyzing the\nspectrum of the inputs via \"torch.linalg.svdvals()\" or their condition\nnumber via \"torch.linalg.cond()\" may help to detect these issues.\nTensorFloat-32(TF32) on Nvidia Ampere devices\nOn Ampere Nvidia GPUs, PyTorch can use TensorFloat32 (TF32) to speed\nup mathematically intensive operations, in particular matrix\nmultiplications and convolutions. When an operation is performed using\nTF32 tensor cores, only the first 10 bits of the input mantissa are\nread. 
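One way to observe the effect directly is to run the same multiplication with the TF32 switch on and off and compare the outputs. This is an illustrative sketch, not part of the original note, and it assumes an Ampere-or-newer GPU on which TF32 kernels are actually selected:

    import torch

    a = torch.randn(1024, 1024, device="cuda")
    b = torch.randn(1024, 1024, device="cuda")

    torch.backends.cuda.matmul.allow_tf32 = True   # allow TF32 tensor cores for matmul
    tf32_result = a @ b
    torch.backends.cuda.matmul.allow_tf32 = False  # force full float32 matmul
    fp32_result = a @ b

    # The difference is usually small but nonzero when TF32 kernels are used.
    print((tf32_result - fp32_result).abs().max())

The magnitude of the difference depends on the kernels selected, but the underlying cause is the shortened mantissa.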
This may reduce accuracy and produce surprising results (e.g.,\nmultiplying a matrix by the identity matrix may produce results that", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"} {"text": "are different from the input). By default, TF32 tensor cores are\ndisabled for matrix multiplications and enabled for convolutions,\nalthough most neural network workloads have the same convergence\nbehavior when using TF32 as they have with fp32. We recommend enabling\nTF32 tensor cores for matrix multiplications with\n\"torch.backends.cuda.matmul.allow_tf32 = True\" if your network does\nnot need full float32 precision. If your network needs full float32\nprecision for both matrix multiplications and convolutions, then TF32\ntensor cores can also be disabled for convolutions with\n\"torch.backends.cudnn.allow_tf32 = False\".\nFor more information see TensorFloat32.\nReduced Precision Reduction for FP16 and BF16 GEMMs\nHalf-precision GEMM operations are typically done with intermediate\naccumulations (reduction) in single-precision for numerical accuracy\nand improved resilience to overflow. For performance, certain GPU", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"} {"text": "architectures, especially more recent ones, allow a few truncations of\nthe intermediate accumulation results to the reduced precision (e.g.,\nhalf-precision). This change is often benign from the perspective of\nmodel convergence, though it may lead to unexpected results (e.g.,\n\"inf\" values when the final result should be be representable in half-\nprecision). If reduced-precision reductions are problematic, they can\nbe turned off with\n\"torch.backends.cuda.matmul.allow_fp16_reduced_precision_reduction =\nFalse\"\nA similar flag exists for BF16 GEMM operations and is turned off by\ndefault. If BF16 reduced-precision reductions are problematic, they\ncan be turned off with\n\"torch.backends.cuda.matmul.allow_bf16_reduced_precision_reduction =\nFalse\"\nFor more information see allow_fp16_reduced_precision_reduction and\nallow_bf16_reduced_precision_reduction\nReduced Precision FP16 and BF16 GEMMs and Convolutions on AMD Instinct MI200 devices", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"} {"text": "====================================================================================\nOn AMD Instinct MI200 GPUs, the FP16 and BF16 V_DOT2 and MFMA matrix\ninstructions flush input and output denormal values to zero. FP32 and\nFP64 MFMA matrix instructions do not flush input and output denormal\nvalues to zero. The affected instructions are only used by rocBLAS\n(GEMM) and MIOpen (convolution) kernels; all other PyTorch operations\nwill not encounter this behavior. All other supported AMD GPUs will\nnot encounter this behavior.\nrocBLAS and MIOpen provide alternate implementations for affected FP16\noperations. Alternate implementations for BF16 operations are not\nprovided; BF16 numbers have a larger dynamic range than FP16 numbers\nand are less likely to encounter denormal values. For the FP16\nalternate implementations, FP16 input values are cast to an\nintermediate BF16 value and then cast back to FP16 output after the\naccumulate FP32 operations. 
In this way, the input and output types\nare unchanged.", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"} {"text": "are unchanged.\nWhen training using FP16 precision, some models may fail to converge\nwith FP16 denorms flushed to zero. Denormal values more frequently\noccur in the backward pass of training during gradient calculation.\nPyTorch by default will use the rocBLAS and MIOpen alternate\nimplementations during the backward pass. The default behavior can be\noverridden using environment variables, ROCBLAS_INTERNAL_FP16_ALT_IMPL\nand MIOPEN_DEBUG_CONVOLUTION_ATTRIB_FP16_ALT_IMPL. The behavior of\nthese environment variables is as follows:\n+-----------------+-------------+-------------+\n| | forward | backward |\n|=================|=============|=============|\n| Env unset | original | alternate |\n+-----------------+-------------+-------------+\n| Env set to 1 | alternate | alternate |\n+-----------------+-------------+-------------+\n| Env set to 0 | original | original |\n+-----------------+-------------+-------------+", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"} {"text": "+-----------------+-------------+-------------+\nThe following is the list of operations where rocBLAS may be used:\n\n\ntorch.addbmm\n\n\ntorch.addmm\n\n\ntorch.baddbmm\n\n\ntorch.bmm\n\n\ntorch.mm\n\n\ntorch.nn.GRUCell\n\n\ntorch.nn.LSTMCell\n\n\ntorch.nn.Linear\n\n\ntorch.sparse.addmm\n\n\nthe following torch._C._ConvBackend implementations:\n\n\nslowNd\n\n\nslowNd_transposed\n\n\nslowNd_dilated\n\n\nslowNd_dilated_transposed\n\n\nThe following is the list of operations where MIOpen may be used:\n\n\ntorch.nn.Conv[Transpose]Nd\n\n\nthe following torch._C._ConvBackend implementations:\n\n\nConvBackend::Miopen\n\n\nConvBackend::MiopenDepthwise\n\n\nConvBackend::MiopenTranspose\n\n", "source": "https://pytorch.org/docs/stable/notes/numerical_accuracy.html", "category": "pytorch docs"} {"text": "Broadcasting semantics\nMany PyTorch operations support NumPy's broadcasting semantics. See\nhttps://numpy.org/doc/stable/user/basics.broadcasting.html for\ndetails.\nIn short, if a PyTorch operation supports broadcast, then its Tensor\narguments can be automatically expanded to be of equal sizes (without\nmaking copies of the data).\nGeneral semantics\nTwo tensors are \"broadcastable\" if the following rules hold:\n\n\nEach tensor has at least one dimension.\n\n\nWhen iterating over the dimension sizes, starting at the trailing\n dimension, the dimension sizes must either be equal, one of them is\n 1, or one of them does not exist.\n\n\nFor Example:\n\n\n\nx=torch.empty(5,7,3)\ny=torch.empty(5,7,3)\n # same shapes are always broadcastable (i.e. 
the above rules always hold)\nx=torch.empty((0,))\ny=torch.empty(2,2)\n # x and y are not broadcastable, because x does not have at least 1 dimension\n\n\n\n# can line up trailing dimensions\n\n\n\nx=torch.empty(5,3,4,1)\n\n\n", "source": "https://pytorch.org/docs/stable/notes/broadcasting.html", "category": "pytorch docs"} {"text": "\n\n\nx=torch.empty(5,3,4,1)\ny=torch.empty( 3,1,1)\n # x and y are broadcastable.\n # 1st trailing dimension: both have size 1\n # 2nd trailing dimension: y has size 1\n # 3rd trailing dimension: x size == y size\n # 4th trailing dimension: y dimension doesn't exist\n\n\n\n# but:\n\n\n\nx=torch.empty(5,2,4,1)\ny=torch.empty( 3,1,1)\n # x and y are not broadcastable, because in the 3rd trailing dimension 2 != 3\n\n\n\nIf two tensors \"x\", \"y\" are \"broadcastable\", the resulting tensor size\nis calculated as follows:\n\n\nIf the number of dimensions of \"x\" and \"y\" are not equal, prepend 1\n to the dimensions of the tensor with fewer dimensions to make them\n equal length.\n\n\nThen, for each dimension size, the resulting dimension size is the\n max of the sizes of \"x\" and \"y\" along that dimension.\n\n\nFor Example:\n# can line up trailing dimensions to make reading easier\n\n\n\nx=torch.empty(5,1,4,1)\ny=torch.empty( 3,1,1)\n(x+y).size()\n torch.Size([5, 3, 4, 1])\n\n\n\n# but not necessary:", "source": "https://pytorch.org/docs/stable/notes/broadcasting.html", "category": "pytorch docs"} {"text": "but not necessary:\n\n\n\nx=torch.empty(1)\ny=torch.empty(3,1,7)\n(x+y).size()\n torch.Size([3, 1, 7])\nx=torch.empty(5,2,4,1)\ny=torch.empty(3,1,1)\n(x+y).size()\n RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 1\n\n\n\nIn-place semantics\nOne complication is that in-place operations do not allow the in-place\ntensor to change shape as a result of the broadcast.\nFor Example:\n\n\n\nx=torch.empty(5,3,4,1)\ny=torch.empty(3,1,1)\n(x.add_(y)).size()\n torch.Size([5, 3, 4, 1])\n\n\n\n# but:\n\n\n\nx=torch.empty(1,3,1)\ny=torch.empty(3,1,7)\n(x.add_(y)).size()\n RuntimeError: The expanded size of the tensor (1) must match the existing size (7) at non-singleton dimension 2.\n\n\n\nBackwards compatibility\nPrior versions of PyTorch allowed certain pointwise functions to\nexecute on tensors with different shapes, as long as the number of", "source": "https://pytorch.org/docs/stable/notes/broadcasting.html", "category": "pytorch docs"} {"text": "elements in each tensor was equal. The pointwise operation would then\nbe carried out by viewing each tensor as 1-dimensional. PyTorch now\nsupports broadcasting and the \"1-dimensional\" pointwise behavior is\nconsidered deprecated and will generate a Python warning in cases\nwhere tensors are not broadcastable, but have the same number of\nelements.\nNote that the introduction of broadcasting can cause backwards\nincompatible changes in the case where two tensors do not have the\nsame shape, but are broadcastable and have the same number of\nelements. For Example:\n\n\n\ntorch.add(torch.ones(4,1), torch.randn(4))\n\n\n\nwould previously produce a Tensor with size: torch.Size([4,1]), but\nnow produces a Tensor with size: torch.Size([4,4]). 
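As a quick check of the shape change described above (a minimal sketch; torch.broadcast_shapes applies the same prepend-ones-then-take-the-max rules from the General semantics section, assuming your PyTorch version provides it):
    import torch

    x = torch.ones(4, 1)
    y = torch.randn(4)
    # y's shape (4,) is treated as (1, 4), then each dimension takes the max size
    print(torch.broadcast_shapes(x.shape, y.shape))  # torch.Size([4, 4])
    print(torch.add(x, y).size())                    # torch.Size([4, 4])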
In order to help\nidentify cases in your code where backwards incompatibilities\nintroduced by broadcasting may exist, you may set\ntorch.utils.backcompat.broadcast_warning.enabled to True, which\nwill generate a python warning in such cases.\nFor Example:", "source": "https://pytorch.org/docs/stable/notes/broadcasting.html", "category": "pytorch docs"} {"text": "For Example:\n\n\n\ntorch.utils.backcompat.broadcast_warning.enabled=True\ntorch.add(torch.ones(4,1), torch.ones(4))\n main:1: UserWarning: self and other do not have the same shape, but are broadcastable, and have the same number of elements.\n Changing behavior in a backwards incompatible manner to broadcasting rather than viewing as 1-dimensional.\n\n\n", "source": "https://pytorch.org/docs/stable/notes/broadcasting.html", "category": "pytorch docs"} {"text": "HIP (ROCm) semantics\nROCm\u00e2\u0084\u00a2 is AMD\u00e2\u0080\u0099s open source software platform for GPU-accelerated high\nperformance computing and machine learning. HIP is ROCm's C++ dialect\ndesigned to ease conversion of CUDA applications to portable C++ code.\nHIP is used when converting existing CUDA applications like PyTorch to\nportable C++ and for new projects that require portability between AMD\nand NVIDIA.\nHIP Interfaces Reuse the CUDA Interfaces\nPyTorch for HIP intentionally reuses the existing \"torch.cuda\"\ninterfaces. This helps to accelerate the porting of existing PyTorch\ncode and models because very few code changes are necessary, if any.\nThe example from CUDA semantics will work exactly the same for HIP:\ncuda = torch.device('cuda') # Default HIP device\n cuda0 = torch.device('cuda:0') # 'rocm' or 'hip' are not valid, use 'cuda'\n cuda2 = torch.device('cuda:2') # GPU 2 (these are 0-indexed)\nx = torch.tensor([1., 2.], device=cuda0)", "source": "https://pytorch.org/docs/stable/notes/hip.html", "category": "pytorch docs"} {"text": "x = torch.tensor([1., 2.], device=cuda0)\n # x.device is device(type='cuda', index=0)\n y = torch.tensor([1., 2.]).cuda()\n # y.device is device(type='cuda', index=0)\nwith torch.cuda.device(1):\n # allocates a tensor on GPU 1\n a = torch.tensor([1., 2.], device=cuda)\n # transfers a tensor from CPU to GPU 1\n b = torch.tensor([1., 2.]).cuda()\n # a.device and b.device are device(type='cuda', index=1)\n\n # You can also use ``Tensor.to`` to transfer a tensor:\n b2 = torch.tensor([1., 2.]).to(device=cuda)\n # b.device and b2.device are device(type='cuda', index=1)\n\n c = a + b\n # c.device is device(type='cuda', index=1)\n\n z = x + y\n # z.device is device(type='cuda', index=0)\n\n # even within a context, you can specify the device\n # (or give a GPU index to the .cuda call)\n d = torch.randn(2, device=cuda2)\n e = torch.randn(2).to(cuda2)\n f = torch.randn(2).cuda(cuda2)\n", "source": "https://pytorch.org/docs/stable/notes/hip.html", "category": "pytorch docs"} {"text": "f = torch.randn(2).cuda(cuda2)\n # d.device, e.device, and f.device are all device(type='cuda', index=2)\nChecking for HIP\nWhether you are using PyTorch for CUDA or HIP, the result of calling\n\"is_available()\" will be the same. If you are using a PyTorch that has\nbeen built with GPU support, it will return True. 
If you must check\nwhich version of PyTorch you are using, refer to this example below:\nif torch.cuda.is_available() and torch.version.hip:\n # do something specific for HIP\n elif torch.cuda.is_available() and torch.version.cuda:\n # do something specific for CUDA\nTensorFloat-32(TF32) on ROCm\nTF32 is not supported on ROCm.\nMemory management\nPyTorch uses a caching memory allocator to speed up memory\nallocations. This allows fast memory deallocation without device\nsynchronizations. However, the unused memory managed by the allocator\nwill still show as if used in \"rocm-smi\". You can use", "source": "https://pytorch.org/docs/stable/notes/hip.html", "category": "pytorch docs"} {"text": "\"memory_allocated()\" and \"max_memory_allocated()\" to monitor memory\noccupied by tensors, and use \"memory_reserved()\" and\n\"max_memory_reserved()\" to monitor the total amount of memory managed\nby the caching allocator. Calling \"empty_cache()\" releases all\nunused cached memory from PyTorch so that those can be used by\nother GPU applications. However, the occupied GPU memory by tensors\nwill not be freed so it can not increase the amount of GPU memory\navailable for PyTorch.\nFor more advanced users, we offer more comprehensive memory\nbenchmarking via \"memory_stats()\". We also offer the capability to\ncapture a complete snapshot of the memory allocator state via\n\"memory_snapshot()\", which can help you understand the underlying\nallocation patterns produced by your code.\nTo debug memory errors, set \"PYTORCH_NO_CUDA_MEMORY_CACHING=1\" in your\nenvironment to disable caching.\nhipFFT/rocFFT plan cache\nSetting the size of the cache for hipFFT/rocFFT plans is not\nsupported.", "source": "https://pytorch.org/docs/stable/notes/hip.html", "category": "pytorch docs"} {"text": "supported.\ntorch.distributed backends\nCurrently, only the \"nccl\" and \"gloo\" backends for torch.distributed\nare supported on ROCm.\nCUDA API to HIP API mappings in C++\nPlease refer: https://rocmdocs.amd.com/en/latest/Programming_Guides/H\nIP_API_Guide.html\nNOTE: The CUDA_VERSION macro, cudaRuntimeGetVersion and\ncudaDriverGetVersion APIs do not semantically map to the same values\nas HIP_VERSION macro, hipRuntimeGetVersion and hipDriverGetVersion\nAPIs. Please do not use them interchangeably when doing version\nchecks.\nFor example: Instead of using\n\"#if defined(CUDA_VERSION) && CUDA_VERSION >= 11000\" to implicitly\nexclude ROCm/HIP,\nuse the following to not take the code path for ROCm/HIP:\n\"#if defined(CUDA_VERSION) && CUDA_VERSION >= 11000 &&\n!defined(USE_ROCM)\"\nAlternatively, if it is desired to take the code path for ROCm/HIP:\n\"#if (defined(CUDA_VERSION) && CUDA_VERSION >= 11000) ||\ndefined(USE_ROCM)\"", "source": "https://pytorch.org/docs/stable/notes/hip.html", "category": "pytorch docs"} {"text": "defined(USE_ROCM)\"\nOr if it is desired to take the code path for ROCm/HIP only for\nspecific HIP versions:\n\"#if (defined(CUDA_VERSION) && CUDA_VERSION >= 11000) ||\n(defined(USE_ROCM) && ROCM_VERSION >= 40300)\"\nRefer to CUDA Semantics doc\nFor any sections not listed here, please refer to the CUDA semantics\ndoc: CUDA semantics\nEnabling kernel asserts\nKernel asserts are supported on ROCm, but they are disabled due to\nperformance overhead. 
It can be enabled by recompiling the PyTorch\nfrom source.\nPlease add below line as an argument to cmake command parameters:\n-DROCM_FORCE_ENABLE_GPU_ASSERTS:BOOL=ON", "source": "https://pytorch.org/docs/stable/notes/hip.html", "category": "pytorch docs"} {"text": "Generic Join Context Manager\nThe generic join context manager facilitates distributed training on\nuneven inputs. This page outlines the API of the relevant classes:\n\"Join\", \"Joinable\", and \"JoinHook\". For a tutorial, see Distributed\nTraining with Uneven Inputs Using the Join Context Manager.\nclass torch.distributed.algorithms.Join(joinables, enable=True, throw_on_early_termination=False, **kwargs)\nThis class defines the generic join context manager, which allows\n custom hooks to be called after a process joins. These hooks should\n shadow the collective communications of non-joined processes to\n prevent hanging and erroring and to ensure algorithmic correctness.\n Refer to \"JoinHook\" for details about the hook definition.\nWarning:\n The context manager requires each participating \"Joinable\" to\n call the method \"notify_join_context()\" before its own per-\n iteration collective communications to ensure correctness.\n\nWarning:", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"} {"text": "Warning:\n The context manager requires that all \"process_group\" attributes\n in the \"JoinHook\" objects are the same. If there are multiple\n \"JoinHook\" objects, then the \"device\" of the first is used. The\n process group and device information is used for checking for\n non- joined processes and for notifying processes to throw an\n exception if \"throw_on_early_termination\" is enabled, both of\n which using an all- reduce.\n\nParameters:\n * joinables (List[Joinable]) -- a list of the\n participating \"Joinable\" s; their hooks are iterated over in\n the given order.\n * **enable** (*bool*) -- a flag enabling uneven input detection;\n setting to \"False\" disables the context manager's\n functionality and should only be set when the user knows the\n inputs will not be uneven (default: \"True\").\n\n * **throw_on_early_termination** (*bool*) -- a flag controlling\n whether to throw an exception upon detecting uneven inputs\n", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"} {"text": "(default: \"False\").\nExample:\n >>> import os\n >>> import torch\n >>> import torch.distributed as dist\n >>> import torch.multiprocessing as mp\n >>> import torch.nn.parallel.DistributedDataParallel as DDP\n >>> import torch.distributed.optim.ZeroRedundancyOptimizer as ZeRO\n >>> from torch.distributed.algorithms.join import Join\n >>>\n >>> # On each spawned worker\n >>> def worker(rank):\n >>> dist.init_process_group(\"nccl\", rank=rank, world_size=2)\n >>> model = DDP(torch.nn.Linear(1, 1).to(rank), device_ids=[rank])\n >>> optim = ZeRO(model.parameters(), torch.optim.Adam, lr=0.01)\n >>> # Rank 1 gets one more input than rank 0\n >>> inputs = [torch.tensor([1.]).to(rank) for _ in range(10 + rank)]\n >>> with Join([model, optim]):\n >>> for input in inputs:\n >>> loss = model(input).sum()\n >>> loss.backward()\n >>> optim.step()\n", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"} {"text": "\n\n\n optim.step()\n >>> # All ranks reach here without hanging/erroring\n\n\n\n\nstatic notify_join_context(joinable)\n Notifies the join context manager that the calling 
process has\n not yet joined; then, if \"throw_on_early_termination=True\",\n checks if uneven inputs have been detected (i.e. if one process\n has already joined) and throws an exception if so.\n\n This method should be called from a \"Joinable\" object before its\n per-iteration collective communications. For example, this\n should be called at the beginning of the forward pass in\n \"DistributedDataParallel\".\n\n Only the first \"Joinable\" object passed into the context manager\n performs the collective communications in this method, and for\n the others, this method is vacuous.\n\n Parameters:\n **joinable** (*Joinable*) -- the \"Joinable\" object calling\n this method.\n\n Returns:\n An async work handle for the all-reduce meant to notify the\n", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"} {"text": "context manager that the process has not yet joined if\n \"joinable\" is the first one passed into the context manager;\n \"None\" otherwise.\nclass torch.distributed.algorithms.Joinable\nThis defines an abstract base class for joinable classes. A\n joinable class (inheriting from \"Joinable\") should implement\n \"join_hook()\", which returns a \"JoinHook\" instance, in addition to\n \"join_device()\" and \"join_process_group()\" that return device and\n process group information, respectively.\nabstract property join_device: device\n Returns the device from which to perform collective\n communications needed by the join context manager implementation\n itself.\n\nabstract join_hook(**kwargs)\n Returns a \"JoinHook\" instance for the given \"Joinable\".\n\n Parameters:\n **kwargs** (*dict*) -- a \"dict\" containing any keyword\n arguments to modify the behavior of the join hook at run\n time; all \"Joinable\" instances sharing the same join context\n", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"} {"text": "manager are forwarded the same value for \"kwargs\".\n Return type:\n *JoinHook*\n\nabstract property join_process_group: Any\n Returns the process group for the collective communications\n needed by the join context manager itself.\n\nclass torch.distributed.algorithms.JoinHook\nThis defines a join hook, which provides two entry points in the\n join context manager: a main hook, which is called repeatedly while\n there exists a non-joined process, and a post-hook, which is called\n once all processes have joined.\nTo implement a join hook for the generic join context manager,\n define a class that inherits from \"JoinHook\" and override\n \"main_hook()\" and \"post_hook()\" as appropriate.\nmain_hook()\n This hook is called repeatedly while there exists a non-joined\n process to shadow collective communications in one training\n iteration (i.e. in one forward pass, backward pass, and\n optimizer step).\n\npost_hook(is_last_joiner)", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"} {"text": "post_hook(is_last_joiner)\n This hook is called after all processes have joined. 
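To make these override points concrete, below is a minimal, hypothetical sketch of a custom "Joinable" that performs one all-reduce per iteration and shadows it from its join hook (the names Counter and CounterJoinHook are illustrative only, not part of PyTorch):
    import torch
    import torch.distributed as dist
    from torch.distributed.algorithms.join import Join, Joinable, JoinHook

    class CounterJoinHook(JoinHook):
        # Shadows the counter's per-iteration all-reduce once this rank has joined.
        def __init__(self, counter):
            self.counter = counter

        def main_hook(self):
            # Contribute zero so the all-reduce issued by non-joined ranks completes.
            dist.all_reduce(torch.zeros(1, device=self.counter.device),
                            group=self.counter.process_group)

        def post_hook(self, is_last_joiner):
            pass  # nothing to clean up in this sketch

    class Counter(Joinable):
        # Counts how many inputs were processed across all ranks.
        def __init__(self, device, process_group):
            super().__init__()
            self.device = device
            self.process_group = process_group
            self.total = torch.zeros(1, device=device)

        def __call__(self):
            Join.notify_join_context(self)  # must precede the collective below
            t = torch.ones(1, device=self.device)
            dist.all_reduce(t, group=self.process_group)
            self.total += t

        def join_hook(self, **kwargs):
            return CounterJoinHook(self)

        @property
        def join_device(self):
            return self.device

        @property
        def join_process_group(self):
            return self.process_group
A Counter instance can be passed to "Join" on its own or alongside other joinables, e.g. "with Join([model, counter]):". Note that the post-hook in this sketch is a no-op; in general the post-hook runs exactly once, after every process has joined.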
It is\n passed an additional \"bool\" argument \"is_last_joiner\", which\n indicates if the rank is one of the last to join.\n\n Parameters:\n **is_last_joiner** (*bool*) -- \"True\" if the rank is one of\n the last to join; \"False\" otherwise.\n", "source": "https://pytorch.org/docs/stable/distributed.algorithms.join.html", "category": "pytorch docs"} {"text": "torch.utils.tensorboard\nBefore going further, more details on TensorBoard can be found at\nhttps://www.tensorflow.org/tensorboard/\nOnce you've installed TensorBoard, these utilities let you log PyTorch\nmodels and metrics into a directory for visualization within the\nTensorBoard UI. Scalars, images, histograms, graphs, and embedding\nvisualizations are all supported for PyTorch models and tensors as\nwell as Caffe2 nets and blobs.\nThe SummaryWriter class is your main entry to log data for consumption\nand visualization by TensorBoard. For example:\nimport torch\n import torchvision\n from torch.utils.tensorboard import SummaryWriter\n from torchvision import datasets, transforms\n# Writer will output to ./runs/ directory by default\n writer = SummaryWriter()\ntransform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.5,), (0.5,))])\n trainset = datasets.MNIST('mnist_train', train=True, download=True, transform=transform)", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "trainloader = torch.utils.data.DataLoader(trainset, batch_size=64, shuffle=True)\n model = torchvision.models.resnet50(False)\n # Have ResNet model take in grayscale rather than RGB\n model.conv1 = torch.nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)\n images, labels = next(iter(trainloader))\ngrid = torchvision.utils.make_grid(images)\n writer.add_image('images', grid, 0)\n writer.add_graph(model, images)\n writer.close()\nThis can then be visualized with TensorBoard, which should be\ninstallable and runnable with:\npip install tensorboard\n tensorboard --logdir=runs\nLots of information can be logged for one experiment. To avoid\ncluttering the UI and have better result clustering, we can group\nplots by naming them hierarchically. For example, \"Loss/train\" and\n\"Loss/test\" will be grouped together, while \"Accuracy/train\" and\n\"Accuracy/test\" will be grouped separately in the TensorBoard\ninterface.\nfrom torch.utils.tensorboard import SummaryWriter\n import numpy as np", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "import numpy as np\nwriter = SummaryWriter()\nfor n_iter in range(100):\n writer.add_scalar('Loss/train', np.random.random(), n_iter)\n writer.add_scalar('Loss/test', np.random.random(), n_iter)\n writer.add_scalar('Accuracy/train', np.random.random(), n_iter)\n writer.add_scalar('Accuracy/test', np.random.random(), n_iter)\nExpected result:\n[image]\nclass torch.utils.tensorboard.writer.SummaryWriter(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='')\nWrites entries directly to event files in the log_dir to be\n consumed by TensorBoard.\nThe SummaryWriter class provides a high-level API to create an\n event file in a given directory and add summaries and events to it.\n The class updates the file contents asynchronously. 
This allows a\n training program to call methods to add data to the file directly\n from the training loop, without slowing down training.", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "init(log_dir=None, comment='', purge_step=None, max_queue=10, flush_secs=120, filename_suffix='')\n Creates a *SummaryWriter* that will write out events and\n summaries to the event file.\n\n Parameters:\n * **log_dir** (*str*) -- Save directory location. Default is\n runs/**CURRENT_DATETIME_HOSTNAME**, which changes after\n each run. Use hierarchical folder structure to compare\n between runs easily. e.g. pass in 'runs/exp1', 'runs/exp2',\n etc. for each new experiment to compare across them.\n\n * **comment** (*str*) -- Comment log_dir suffix appended to\n the default \"log_dir\". If \"log_dir\" is assigned, this\n argument has no effect.\n\n * **purge_step** (*int*) -- When logging crashes at step T+X\n and restarts at step T, any events whose global_step larger\n or equal to T will be purged and hidden from TensorBoard.\n Note that crashed and resumed experiments should have the\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "same \"log_dir\".\n * **max_queue** (*int*) -- Size of the queue for pending\n events and summaries before one of the 'add' calls forces a\n flush to disk. Default is ten items.\n\n * **flush_secs** (*int*) -- How often, in seconds, to flush\n the pending events and summaries to disk. Default is every\n two minutes.\n\n * **filename_suffix** (*str*) -- Suffix added to all event\n filenames in the log_dir directory. More details on\n filename construction in tensorboard.summary.writer.event_\n file_writer.EventFileWriter.\n\n Examples:\n\n from torch.utils.tensorboard import SummaryWriter\n\n # create a summary writer with automatically generated folder name.\n writer = SummaryWriter()\n # folder location: runs/May04_22-14-54_s-MacBook-Pro.local/\n\n # create a summary writer using the specified folder name.\n writer = SummaryWriter(\"my_experiment\")\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "writer = SummaryWriter(\"my_experiment\")\n # folder location: my_experiment\n # create a summary writer with comment appended.\n writer = SummaryWriter(comment=\"LR_0.1_BATCH_16\")\n # folder location: runs/May04_22-14-54_s-MacBook-Pro.localLR_0.1_BATCH_16/\n\nadd_scalar(tag, scalar_value, global_step=None, walltime=None, new_style=False, double_precision=False)\n Add scalar data to summary.\n\n Parameters:\n * **tag** (*str*) -- Data identifier\n\n * **scalar_value** (*float** or **string/blobname*) -- Value\n to save\n\n * **global_step** (*int*) -- Global step value to record\n\n * **walltime** (*float*) -- Optional override default\n walltime (time.time()) with seconds after epoch of event\n\n * **new_style** (*boolean*) -- Whether to use new style\n (tensor field) or old style (simple_value field). 
New style\n could lead to faster data loading.\n\n Examples:\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "Examples:\n from torch.utils.tensorboard import SummaryWriter\n writer = SummaryWriter()\n x = range(100)\n for i in x:\n writer.add_scalar('y=2x', i * 2, i)\n writer.close()\n\n Expected result:\n\n [image]\n\nadd_scalars(main_tag, tag_scalar_dict, global_step=None, walltime=None)\n Adds many scalar data to summary.\n\n Parameters:\n * **main_tag** (*str*) -- The parent name for the tags\n\n * **tag_scalar_dict** (*dict*) -- Key-value pair storing the\n tag and corresponding values\n\n * **global_step** (*int*) -- Global step value to record\n\n * **walltime** (*float*) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n\n Examples:\n\n from torch.utils.tensorboard import SummaryWriter\n writer = SummaryWriter()\n r = 5\n for i in range(100):\n writer.add_scalars('run_14h', {'xsinx':i*np.sin(i/r),\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "'xcosx':i*np.cos(i/r),\n 'tanx': np.tan(i/r)}, i)\n writer.close()\n # This call adds three values to the same scalar plot with the tag\n # 'run_14h' in TensorBoard's scalar section.\n Expected result:\n\n [image]\n\nadd_histogram(tag, values, global_step=None, bins='tensorflow', walltime=None, max_bins=None)\n Add histogram to summary.\n\n Parameters:\n * **tag** (*str*) -- Data identifier\n\n * **values** (*torch.Tensor**, **numpy.ndarray**, or\n **string/blobname*) -- Values to build histogram\n\n * **global_step** (*int*) -- Global step value to record\n\n * **bins** (*str*) -- One of {'tensorflow','auto', 'fd',\n ...}. This determines how the bins are made. You can find\n other options in: https://docs.scipy.org/doc/numpy/referen\n ce/generated/numpy.histogram.html\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "ce/generated/numpy.histogram.html\n * **walltime** (*float*) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n\n Examples:\n\n from torch.utils.tensorboard import SummaryWriter\n import numpy as np\n writer = SummaryWriter()\n for i in range(10):\n x = np.random.random(1000)\n writer.add_histogram('distribution centers', x + i, i)\n writer.close()\n\n Expected result:\n\n [image]\n\nadd_image(tag, img_tensor, global_step=None, walltime=None, dataformats='CHW')\n Add image data to summary.\n\n Note that this requires the \"pillow\" package.\n\n Parameters:\n * **tag** (*str*) -- Data identifier\n\n * **img_tensor** (*torch.Tensor**, **numpy.ndarray**, or\n **string/blobname*) -- Image data\n\n * **global_step** (*int*) -- Global step value to record\n\n * **walltime** (*float*) -- Optional override default\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "walltime (time.time()) seconds after epoch of event\n * **dataformats** (*str*) -- Image data format specification\n of the form CHW, HWC, HW, WH, etc.\n\n Shape:\n img_tensor: Default is (3, H, W). 
You can use\n \"torchvision.utils.make_grid()\" to convert a batch of tensor\n into 3xHxW format or call \"add_images\" and let us do the job.\n Tensor with (1, H, W), (H, W), (H, W, 3) is also suitable as\n long as corresponding \"dataformats\" argument is passed, e.g.\n \"CHW\", \"HWC\", \"HW\".\n\n Examples:\n\n from torch.utils.tensorboard import SummaryWriter\n import numpy as np\n img = np.zeros((3, 100, 100))\n img[0] = np.arange(0, 10000).reshape(100, 100) / 10000\n img[1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000\n\n img_HWC = np.zeros((100, 100, 3))\n img_HWC[:, :, 0] = np.arange(0, 10000).reshape(100, 100) / 10000\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "img_HWC[:, :, 1] = 1 - np.arange(0, 10000).reshape(100, 100) / 10000\n writer = SummaryWriter()\n writer.add_image('my_image', img, 0)\n\n # If you have non-default dimension setting, set the dataformats argument.\n writer.add_image('my_image_HWC', img_HWC, 0, dataformats='HWC')\n writer.close()\n\n Expected result:\n\n [image]\n\nadd_images(tag, img_tensor, global_step=None, walltime=None, dataformats='NCHW')\n Add batched image data to summary.\n\n Note that this requires the \"pillow\" package.\n\n Parameters:\n * **tag** (*str*) -- Data identifier\n\n * **img_tensor** (*torch.Tensor**, **numpy.ndarray**, or\n **string/blobname*) -- Image data\n\n * **global_step** (*int*) -- Global step value to record\n\n * **walltime** (*float*) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n\n * **dataformats** (*str*) -- Image data format specification\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "of the form NCHW, NHWC, CHW, HWC, HW, WH, etc.\n Shape:\n img_tensor: Default is (N, 3, H, W). If \"dataformats\" is\n specified, other shape will be accepted. e.g. 
NCHW or NHWC.\n\n Examples:\n\n from torch.utils.tensorboard import SummaryWriter\n import numpy as np\n\n img_batch = np.zeros((16, 3, 100, 100))\n for i in range(16):\n img_batch[i, 0] = np.arange(0, 10000).reshape(100, 100) / 10000 / 16 * i\n img_batch[i, 1] = (1 - np.arange(0, 10000).reshape(100, 100) / 10000) / 16 * i\n\n writer = SummaryWriter()\n writer.add_images('my_image_batch', img_batch, 0)\n writer.close()\n\n Expected result:\n\n [image]\n\nadd_figure(tag, figure, global_step=None, close=True, walltime=None)\n Render matplotlib figure into an image and add it to summary.\n\n Note that this requires the \"matplotlib\" package.\n\n Parameters:\n * **tag** (*str*) -- Data identifier\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "\ntag (str) -- Data identifier * **figure** (*matplotlib.pyplot.figure*) -- Figure or a list\n of figures\n\n * **global_step** (*int*) -- Global step value to record\n\n * **close** (*bool*) -- Flag to automatically close the\n figure\n\n * **walltime** (*float*) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n\n\n\nadd_video(tag, vid_tensor, global_step=None, fps=4, walltime=None)\n Add video data to summary.\n\n Note that this requires the \"moviepy\" package.\n\n Parameters:\n * **tag** (*str*) -- Data identifier\n\n * **vid_tensor** (*torch.Tensor*) -- Video data\n\n * **global_step** (*int*) -- Global step value to record\n\n * **fps** (*float** or **int*) -- Frames per second\n\n * **walltime** (*float*) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n\n Shape:\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "Shape:\n vid_tensor: (N, T, C, H, W). The values should lie in [0,\n 255] for type uint8 or [0, 1] for type float.\nadd_audio(tag, snd_tensor, global_step=None, sample_rate=44100, walltime=None)\n Add audio data to summary.\n\n Parameters:\n * **tag** (*str*) -- Data identifier\n\n * **snd_tensor** (*torch.Tensor*) -- Sound data\n\n * **global_step** (*int*) -- Global step value to record\n\n * **sample_rate** (*int*) -- sample rate in Hz\n\n * **walltime** (*float*) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n\n Shape:\n snd_tensor: (1, L). The values should lie between [-1, 1].\n\nadd_text(tag, text_string, global_step=None, walltime=None)\n Add text data to summary.\n\n Parameters:\n * **tag** (*str*) -- Data identifier\n\n * **text_string** (*str*) -- String to save\n\n * **global_step** (*int*) -- Global step value to record\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "\n\nwalltime (float) -- Optional override default\n walltime (time.time()) seconds after epoch of event\nExamples:\n writer.add_text('lstm', 'This is an lstm', 0)\n writer.add_text('rnn', 'This is an rnn', 10)\n\n\n\nadd_graph(model, input_to_model=None, verbose=False, use_strict_trace=True)\n Add graph data to summary.\n\n Parameters:\n * **model** (*torch.nn.Module*) -- Model to draw.\n\n * **input_to_model** (*torch.Tensor** or **list of\n torch.Tensor*) -- A variable or a tuple of variables to be\n fed.\n\n * **verbose** (*bool*) -- Whether to print graph structure in\n console.\n\n * **use_strict_trace** (*bool*) -- Whether to pass keyword\n argument *strict* to *torch.jit.trace*. 
Pass False when you\n want the tracer to record your mutable container types\n (list, dict)\n\nadd_embedding(mat, metadata=None, label_img=None, global_step=None, tag='default', metadata_header=None)", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "Add embedding projector data to summary.\n Parameters:\n * **mat** (*torch.Tensor** or **numpy.ndarray*) -- A matrix\n which each row is the feature vector of the data point\n\n * **metadata** (*list*) -- A list of labels, each element\n will be convert to string\n\n * **label_img** (*torch.Tensor*) -- Images correspond to each\n data point\n\n * **global_step** (*int*) -- Global step value to record\n\n * **tag** (*str*) -- Name for the embedding\n\n Shape:\n mat: (N, D), where N is number of data and D is feature\n dimension\n\n label_img: (N, C, H, W)\n\n Examples:\n\n import keyword\n import torch\n meta = []\n while len(meta)<100:\n meta = meta+keyword.kwlist # get some strings\n meta = meta[:100]\n\n for i, v in enumerate(meta):\n meta[i] = v+str(i)\n\n label_img = torch.rand(100, 3, 10, 32)\n for i in range(100):\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "for i in range(100):\n label_img[i]*=i/100.0\n writer.add_embedding(torch.randn(100, 5), metadata=meta, label_img=label_img)\n writer.add_embedding(torch.randn(100, 5), label_img=label_img)\n writer.add_embedding(torch.randn(100, 5), metadata=meta)\n\nadd_pr_curve(tag, labels, predictions, global_step=None, num_thresholds=127, weights=None, walltime=None)\n Adds precision recall curve. Plotting a precision-recall curve\n lets you understand your model's performance under different\n threshold settings. With this function, you provide the ground\n truth labeling (T/F) and prediction confidence (usually the\n output of your model) for each target. The TensorBoard UI will\n let you choose the threshold interactively.\n\n Parameters:\n * **tag** (*str*) -- Data identifier\n\n * **labels** (*torch.Tensor**, **numpy.ndarray**, or\n **string/blobname*) -- Ground truth data. Binary label for\n each element.\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "each element.\n * **predictions** (*torch.Tensor**, **numpy.ndarray**, or\n **string/blobname*) -- The probability that an element be\n classified as true. Value should be in [0, 1]\n\n * **global_step** (*int*) -- Global step value to record\n\n * **num_thresholds** (*int*) -- Number of thresholds used to\n draw the curve.\n\n * **walltime** (*float*) -- Optional override default\n walltime (time.time()) seconds after epoch of event\n\n Examples:\n\n from torch.utils.tensorboard import SummaryWriter\n import numpy as np\n labels = np.random.randint(2, size=100) # binary label\n predictions = np.random.rand(100)\n writer = SummaryWriter()\n writer.add_pr_curve('pr_curve', labels, predictions, 0)\n writer.close()\n\nadd_custom_scalars(layout)\n Create special chart by collecting charts tags in 'scalars'.\n Note that this function can only be called once for each\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "SummaryWriter() object. Because it only provides metadata to\n tensorboard, the function can be called before or after the\n training loop.\n Parameters:\n **layout** (*dict*) -- {categoryName: *charts*}, where\n *charts* is also a dictionary {chartName:\n *ListOfProperties*}. 
The first element in *ListOfProperties*\n is the chart's type (one of **Multiline** or **Margin**) and\n the second element should be a list containing the tags you\n have used in add_scalar function, which will be collected\n into the new chart.\n\n Examples:\n\n layout = {'Taiwan':{'twse':['Multiline',['twse/0050', 'twse/2330']]},\n 'USA':{ 'dow':['Margin', ['dow/aaa', 'dow/bbb', 'dow/ccc']],\n 'nasdaq':['Margin', ['nasdaq/aaa', 'nasdaq/bbb', 'nasdaq/ccc']]}}\n\n writer.add_custom_scalars(layout)\n\nadd_mesh(tag, vertices, colors=None, faces=None, config_dict=None, global_step=None, walltime=None)", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "Add meshes or 3D point clouds to TensorBoard. The visualization\n is based on Three.js, so it allows users to interact with the\n rendered object. Besides the basic definitions such as vertices,\n faces, users can further provide camera parameter, lighting\n condition, etc. Please check https://threejs.org/docs/index.htm\n l#manual/en/introduction/Creating-a-scene for advanced usage.\n Parameters:\n * **tag** (*str*) -- Data identifier\n\n * **vertices** (*torch.Tensor*) -- List of the 3D coordinates\n of vertices.\n\n * **colors** (*torch.Tensor*) -- Colors for each vertex\n\n * **faces** (*torch.Tensor*) -- Indices of vertices within\n each triangle. (Optional)\n\n * **config_dict** -- Dictionary with ThreeJS classes names\n and configuration.\n\n * **global_step** (*int*) -- Global step value to record\n\n * **walltime** (*float*) -- Optional override default\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "walltime (time.time()) seconds after epoch of event\n Shape:\n vertices: (B, N, 3). (batch, number_of_vertices, channels)\n\n colors: (B, N, 3). The values should lie in [0, 255] for type\n *uint8* or [0, 1] for type *float*.\n\n faces: (B, N, 3). The values should lie in [0,\n number_of_vertices] for type *uint8*.\n\n Examples:\n\n from torch.utils.tensorboard import SummaryWriter\n vertices_tensor = torch.as_tensor([\n [1, 1, 1],\n [-1, -1, 1],\n [1, -1, -1],\n [-1, 1, -1],\n ], dtype=torch.float).unsqueeze(0)\n colors_tensor = torch.as_tensor([\n [255, 0, 0],\n [0, 255, 0],\n [0, 0, 255],\n [255, 0, 255],\n ], dtype=torch.int).unsqueeze(0)\n faces_tensor = torch.as_tensor([\n [0, 2, 3],\n [0, 3, 1],\n [0, 1, 2],\n [1, 3, 2],\n ], dtype=torch.int).unsqueeze(0)\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "], dtype=torch.int).unsqueeze(0)\n writer = SummaryWriter()\n writer.add_mesh('my_mesh', vertices=vertices_tensor, colors=colors_tensor, faces=faces_tensor)\n\n writer.close()\n\nadd_hparams(hparam_dict, metric_dict, hparam_domain_discrete=None, run_name=None)\n Add a set of hyperparameters to be compared in TensorBoard.\n\n Parameters:\n * **hparam_dict** (*dict*) -- Each key-value pair in the\n dictionary is the name of the hyper parameter and it's\n corresponding value. The type of the value can be one of\n *bool*, *string*, *float*, *int*, or *None*.\n\n * **metric_dict** (*dict*) -- Each key-value pair in the\n dictionary is the name of the metric and it's corresponding\n value. Note that the key used here should be unique in the\n tensorboard record. Otherwise the value you added by\n \"add_scalar\" will be displayed in hparam plugin. 
In most\n cases, this is unwanted.\n", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"} {"text": "cases, this is unwanted.\n * **hparam_domain_discrete** -- (Optional[Dict[str,\n List[Any]]]) A dictionary that contains names of the\n hyperparameters and all discrete values they can hold\n\n * **run_name** (*str*) -- Name of the run, to be included as\n part of the logdir. If unspecified, will use current\n timestamp.\n\n Examples:\n\n from torch.utils.tensorboard import SummaryWriter\n with SummaryWriter() as w:\n for i in range(5):\n w.add_hparams({'lr': 0.1*i, 'bsize': i},\n {'hparam/accuracy': 10*i, 'hparam/loss': 10*i})\n\n Expected result:\n\n [image]\n\nflush()\n Flushes the event file to disk. Call this method to make sure\n that all pending events have been written to disk.\n\nclose()", "source": "https://pytorch.org/docs/stable/tensorboard.html", "category": "pytorch docs"}